ADR 0051: Subprocess Execution — System Commands and Interactive Subprocess Actor

Status

Accepted | Implemented (2026-03-05)

Context

Problem Statement

Beamtalk has no way to execute external OS commands. The existing Port class (stdlib/src/Port.bt) is an opaque wrapper for BEAM port identifiers — it provides asString, =:=, and hash, but cannot create, communicate with, or manage subprocesses.

This blocks two categories of use cases:

  1. Simple commands — run git status, capture output, check the exit code. Every developer expects this from a general-purpose language.
  2. Interactive long-lived subprocesses — launch a daemon process, write to its stdin, read its stdout line-by-line, and manage its lifecycle. This is required for building orchestration services like Symphony (BT-1123) where a coding agent subprocess runs for minutes to hours with bidirectional JSON-RPC communication.

This ADR addresses both use cases.

Current State

Constraints

  1. BEAM port limitations — open_port/2 cannot deliver stdout and stderr as separate streams. stderr_to_stdout merges them; otherwise stderr goes to the VM's terminal unseen. Separate stderr requires a helper process. A Rust helper binary (beamtalk_exec) provides this — the BEAM communicates with the helper via an ETF port protocol (same pattern as beamtalk_compiler_port), and the helper manages the actual subprocess with separate stdio pipes.
  2. Port ownership is process-local — the process that calls open_port/2 becomes the port owner. Only the owner can send commands. Port-backed streams have the same cross-process constraint as file-backed streams (ADR 0021). This means a port-backed Stream cannot be returned from an actor to its caller — the Stream generator would run in the caller's process but the port lives in the actor's process.
  3. Zombie process risk — closing a port sends EOF to the subprocess's stdin. Programs that don't monitor stdin for EOF become orphans. The BEAM has no built-in mechanism to forcefully terminate port children.
  4. Security — shell invocation ({spawn, Command}) enables injection. {spawn_executable, Path} with {args, List} is safe but requires absolute paths resolved via os:find_executable/1.
  5. Actor message model — ADR 0043 establishes sync-by-default messaging (. = gen_server:call). All actor methods must return complete values, not lazy generators that depend on actor-internal resources. This shapes the Tier 2 API: readLine returns a String (or nil), not a Stream.
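
The injection-safe spawn pattern from constraint 4 can be sketched as follows (a minimal sketch; the option list and error shape are illustrative, not the runtime's actual code):

```erlang
%% Resolve the executable explicitly and pass arguments as a list — no shell
%% is ever involved, so metacharacters in Args are inert.
safe_spawn(Cmd, Args) ->
    case os:find_executable(Cmd) of
        false ->
            {error, command_not_found};
        Path ->
            {ok, erlang:open_port({spawn_executable, Path},
                                  [{args, Args}, binary, exit_status])}
    end.
```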

Decision

Tier 1: System Class Methods (One-Shot Commands)

Add three new class methods to System for one-shot command execution: blocking capture, streaming output, and block-scoped streaming with deterministic cleanup. Introduce a CommandResult value class for structured results.

All methods use spawn_executable with explicit argument lists — no shell invocation, no injection risk. This is the same approach recommended by the EEF Security Working Group.

API

// Blocking capture — run to completion, return result value
result := System run: "git" args: #("status")
result output      // => "On branch main\nnothing to commit\n"
result exitCode    // => 0
result isSuccess   // => true

// With custom environment and working directory
result := System run: "make" args: #("test") env: #{
  "CC" => "clang",
  "CFLAGS" => "-O2"
} dir: "/path/to/project"

// Streaming output — lazy Stream of stdout lines (like File lines:)
(System output: "find" args: #(".", "-name", "*.bt")) do: [:line |
  Transcript show: line
]

// Compose with Stream pipeline
(System output: "cat" args: #("server.log"))
  select: [:line | line includesSubstring: "ERROR"]
  take: 10

// Block-scoped streaming with deterministic cleanup (like File open:do:)
System output: "tail" args: #("-f", "server.log") do: [:stream |
  stream take: 100
]
// subprocess terminated and port closed when block exits

Return types: run:args: (and its env:dir: variant) returns a CommandResult; output:args: returns a lazy Stream of stdout lines; output:args:do: returns the value of the block.

CommandResult Value Class

sealed Value subclass: CommandResult

  /// The subprocess stdout as a String.
  output -> String => @primitive "output"

  /// The subprocess stderr as a String.
  stderr -> String => @primitive "stderr"

  /// The process exit code (0 = success by convention).
  exitCode -> Integer => @primitive "exitCode"

  /// True if exitCode is 0.
  isSuccess -> Boolean => self exitCode =:= 0

Tier 2: Interactive Subprocess Actor

Add a Subprocess Actor subclass for bidirectional communication with long-lived OS processes. The actor owns the port, buffers stdout data internally, and exposes sync methods for reading/writing.

Design Resolution: Why Naive Streams Don't Work — and How lines Solves It

The initial design considered returning a Stream directly from the actor (e.g., a port-backed generator). This fails for three reasons, all now resolved:

  1. Cross-process Stream constraint (ADR 0021) — the port lives in the actor's gen_server process, but a returned Stream's generator would execute in the caller's process. Port-backed Streams cannot cross process boundaries.

    Resolution: The lines method returns a Stream whose generator calls gen_server:call(ActorPid, {readLine, []}, infinity) — a message send, not a direct port read. The generator executes in the caller's process (correct for Streams), and each next step sends a sync message to the actor to get the next line. No resource handle crosses the process boundary — only the actor's PID. readLine remains as the lower-level primitive for timeout-based and request-response patterns.

  2. Actor constructor pattern — Actor.bt only has spawn and spawnWith:. A Subprocess open: "cmd" args: #("a") factory method doesn't exist in the current protocol.

    Resolution: spawnWith: with a config dictionary is sufficient. The Erlang-side init/1 callback receives the config map and opens the port. A convenience class method open:args: desugars to spawnWith: as follows:

    class open: command args: args =>
      self spawnWith: #{"command" => command, "args" => args}
    
    class open: command args: args env: env dir: dir =>
      self spawnWith: #{"command" => command, "args" => args, "env" => env, "dir" => dir}
    

    This requires no new Actor protocol machinery — it's a standard class method calling the existing spawnWith:. The config dictionary supports optional "env" (Dictionary of String => String) and "dir" (String working directory) keys — omitted keys inherit from the parent process.

  3. Sync-messaging conflict — ADR 0043 makes the . terminator a gen_server:call. Returning a lazy Stream from a sync call is semantically broken when the Stream's generator needs the actor's port directly.

    Resolution: Already solved by point 1. The lines Stream generator uses message sends (not direct port reads), so each step is a valid sync call returning a simple value. readLine. is also a sync call that returns the next line. No port handle escapes the actor.
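
The generator behind lines can be sketched as follows (a hypothetical shape, not the actual runtime code; done is an assumed end-of-stream sentinel):

```erlang
%% The Stream step closure runs in the CALLER's process and pulls one line
%% per step via a sync message to the actor — only ActorPid crosses the
%% process boundary.
lines_generator(ActorPid) ->
    fun Step() ->
            case gen_server:call(ActorPid, {readLine, []}, infinity) of
                nil  -> done;           %% EOF — the Stream terminates
                Line -> {Line, Step}    %% yield Line; same closure for next step
            end
    end.
```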

API

// Spawn a subprocess — command and args passed via spawnWith:
agent := Subprocess spawnWith: #{
  "command" => "codex",
  "args" => #("--full-auto", "fix the login bug")
}

// Convenience factory method (desugars to spawnWith:)
agent := Subprocess open: "codex" args: #("--full-auto", "fix the login bug")

// With custom environment and working directory
agent := Subprocess open: "make" args: #("test") env: #{
  "CC" => "clang",
  "CFLAGS" => "-O2"
} dir: "/path/to/project"

// Write a line to the subprocess's stdin
agent writeLine: (JSON generate: #{
  "jsonrpc" => "2.0",
  "method" => "initialize",
  "id" => 1,
  "params" => #{"model" => "gpt-4"}
}).

// Read one line from stdout (blocks forever until a line is available or EOF)
line := agent readLine.       // => "{\"jsonrpc\":\"2.0\",\"result\":...}"

// Read with timeout (milliseconds) — returns nil on timeout or EOF
line := agent readLine: 5000. // => String, or nil after 5 seconds

// Read in a loop until EOF
line := agent readLine.
[line notNil] whileTrue: [
  event := JSON parse: line.
  Transcript show: event.
  line := agent readLine.
]

// Stream-based consumption — same composability as Tier 1
agent lines do: [:line |
  Transcript show: line
]

// Compose with Stream pipeline
agent lines
  select: [:line | line includesSubstring: "error"]
  take: 10

// Timeout-based loop — detect hung subprocess
line := agent readLine: 30000.
[line notNil] whileTrue: [
  Transcript show: line.
  line := agent readLine: 30000.
].
line isNil ifTrue: [
  agent isAlive ifTrue: [
    Transcript show: "Subprocess appears hung — killing".
    agent close.
  ]
]

// Check if the subprocess has exited
agent isAlive.                // => true / false (inherited from Actor)

// Get exit code (nil if still running)
agent exitCode.               // => 0, 1, ..., or nil

// Graceful shutdown — closes stdin (sends EOF), waits for exit
agent stop.                   // inherited from Actor

// Force kill — terminates OS process immediately
agent close.                  // port_close + OS kill signal

Subprocess Class

A single Actor subclass handles port lifecycle, buffered reads, and stdin writing. No abstract base class, no separate push-mode class — one class, one Erlang module.

// Note: Subprocess is backed by hand-written beamtalk_subprocess.erl, not codegen.
// State (port, stdout/stderr queues, exitCode, portClosed flag) lives in the
// Erlang gen_server state map — there are no Beamtalk-level state: declarations.
Actor subclass: Subprocess

  /// Convenience factory — desugars to spawnWith:
  class open: command args: args =>
    self spawnWith: #{"command" => command, "args" => args}

  /// Convenience factory with environment and working directory.
  class open: command args: args env: env dir: dir =>
    self spawnWith: #{"command" => command, "args" => args, "env" => env, "dir" => dir}

  /// Write a line to the subprocess's stdin (appends newline).
  writeLine: data -> Nil => @primitive "writeLine:"

  /// Read one line from stdout. Blocks forever until available. Returns nil at EOF.
  readLine -> Object => @primitive "readLine"

  /// Read one line from stdout with timeout (ms). Returns nil on timeout or EOF.
  readLine: timeout -> Object => @primitive "readLine:"

  /// Read one line from stderr. Blocks forever until available. Returns nil at EOF.
  readStderrLine -> Object => @primitive "readStderrLine"

  /// Read one line from stderr with timeout (ms). Returns nil on timeout or EOF.
  readStderrLine: timeout -> Object => @primitive "readStderrLine:"

  /// Return a Stream of stdout lines. Generator calls readLine via gen_server:call.
  lines -> Stream => @primitive "lines"

  /// Return a Stream of stderr lines. Same mechanics as lines.
  stderrLines -> Stream => @primitive "stderrLines"

  /// Get the exit code. Returns nil if the subprocess is still running.
  exitCode -> Object => @primitive "exitCode"

  /// Force-close the subprocess (sends kill to process group).
  close -> Nil => @primitive "close"

Choosing between lines and readLine: use lines for Stream pipeline composition with a single consumer; use readLine / readLine: for request-response protocols and timeout-based hang detection.

Why lines works despite the cross-process constraint: the Stream's generator calls gen_server:call(ActorPid, {readLine, []}, infinity) — a message send, not a direct port read — so it executes correctly in the caller's process and only the actor's PID crosses the process boundary.

lines caveats:

  1. Single-consumer: The Stream's generator pulls from the actor's shared readLine buffer. If two consumers iterate the same lines Stream (e.g., s := agent lines. s do: [...]. s do: [...]), lines are distributed non-deterministically between iterations. Each call to lines should be consumed by exactly one terminal operation.
  2. Caller blocking: Each step of a lines iteration calls gen_server:call with infinity timeout. If the subprocess goes quiet for minutes, the caller's process is frozen. If the caller is itself an actor, its gen_server cannot process other messages — supervised actors may be killed by their supervisor's health check. Use readLine: timeout for callers that must remain responsive, or spawn a dedicated process for the read loop.
  3. Abandoned Stream buffer growth: If a lines Stream is abandoned mid-iteration, the actor's stdout buffer grows without bound as the subprocess continues producing output. Use close. to stop the subprocess when the Stream is no longer needed.

Runtime Implementation

Subprocess is backed by a hand-written Erlang module (beamtalk_subprocess.erl) rather than generated codegen. This follows the same pattern as beamtalk_compiler_port.erl — specialized port management that needs direct handle_info control.

OTP behaviour choice: gen_server, not gen_statem. The subprocess has a simple linear state machine (starting → running → draining → closed) with 4 states — easy to encode as a phase key in the gen_server state map. gen_statem would be cleaner but Subprocess needs to interoperate with Beamtalk's Actor infrastructure (which is gen_server-based). When Beamtalk adds full OTP behaviour support including gen_statem wrappers, Subprocess can be migrated.
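
Under that choice, lifecycle and per-channel buffers live in an ordinary gen_server state map. A sketch using the key names from the handlers below (open_helper_port/1 is a hypothetical helper):

```erlang
%% The linear lifecycle is a phase key in the state map, not gen_statem states.
init(Config) ->
    {ok, #{phase => starting,
           port => open_helper_port(Config),   %% hypothetical helper function
           {stdout, buffer} => queue:new(),
           {stderr, buffer} => queue:new(),
           port_closed => false,
           exit_code => nil}}.
```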

Key runtime mechanics:

%% In beamtalk_subprocess.erl
%%
%% The BEAM port connects to the beamtalk_exec Rust helper, which
%% communicates via ETF over {packet, 4}. The helper sends tagged
%% tuples: {stdout, ChildId, Data}, {stderr, ChildId, Data},
%% {exit, ChildId, Code}.

%% Helper messages arrive as ETF via {packet,4} — decode and dispatch
handle_info({Port, {data, Packet}}, #{port := Port} = State) ->
    case erlang:binary_to_term(Packet) of
        {stdout, _ChildId, Data} ->
            buffer_and_maybe_reply(stdout, Data, State);
        {stderr, _ChildId, Data} ->
            buffer_and_maybe_reply(stderr, Data, State);
        {exit, _ChildId, Code} ->
            NewState = State#{exit_code => Code, port_closed => true},
            %% If readLine callers are waiting and buffers are empty, reply nil (EOF)
            S1 = maybe_reply_eof(stdout, NewState),
            S2 = maybe_reply_eof(stderr, S1),
            {noreply, S2};
        _Other ->
            {noreply, State}
    end;

%% Read timeout fired with no data — reply nil and clear the waiting caller
handle_info({read_timeout, Channel}, State) ->
    WaitKey = {Channel, waiting},
    case maps:get(WaitKey, State, undefined) of
        {From, _TimerRef} ->
            gen_server:reply(From, nil),
            {noreply, State#{WaitKey => undefined}};
        _ ->
            {noreply, State}
    end;

handle_info(Msg, State) ->
    beamtalk_actor:handle_info(Msg, State).

%% Shared helper — buffers data for stdout or stderr, replies to waiting caller
buffer_and_maybe_reply(Channel, Data, State) ->
    BufKey = {Channel, buffer},
    PendKey = {Channel, pending},
    WaitKey = {Channel, waiting},
    Buffer = maps:get(BufKey, State),
    Pending = maps:get(PendKey, State, <<>>),
    Combined = <<Pending/binary, Data/binary>>,
    {Lines, Remainder} = split_lines(Combined),
    NewBuffer = queue:join(Buffer, queue:from_list(Lines)),
    case maps:get(WaitKey, State, undefined) of
        undefined ->
            {noreply, State#{BufKey => NewBuffer, PendKey => Remainder}};
        Waiting ->
            case queue:is_empty(NewBuffer) of
                false ->
                    {{value, Line}, Rest} = queue:out(NewBuffer),
                    reply_waiting(Waiting, Line),
                    {noreply, State#{BufKey => Rest, PendKey => Remainder,
                                     WaitKey => undefined}};
                true ->
                    {noreply, State#{BufKey => NewBuffer, PendKey => Remainder,
                                     WaitKey => Waiting}}
            end
    end.

%% Reply to a waiting reader. The timeout variant stashes {From, TimerRef},
%% so cancel the pending timer before replying.
reply_waiting({From, TimerRef}, Value) ->
    erlang:cancel_timer(TimerRef),
    gen_server:reply(From, Value);
reply_waiting(From, Value) ->
    gen_server:reply(From, Value).
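
split_lines/1 (used above) is not shown in this ADR; a minimal sketch, assuming newline-delimited output, splits off complete lines and keeps the trailing partial line:

```erlang
%% Split a binary into complete lines plus the trailing partial line.
split_lines(Bin) ->
    Parts = binary:split(Bin, <<"\n">>, [global]),
    {Lines, [Remainder]} = lists:split(length(Parts) - 1, Parts),
    {Lines, Remainder}.
```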

%% readLine — blocks forever until a line or EOF
handle_readLine([], From, State) ->
    read_line_for(stdout, From, infinity, State).

%% readLine: — blocks up to Timeout ms, returns nil on timeout
handle_readLine([Timeout], From, State) ->
    read_line_for(stdout, From, Timeout, State).

handle_readStderrLine([], From, State) ->
    read_line_for(stderr, From, infinity, State).

handle_readStderrLine([Timeout], From, State) ->
    read_line_for(stderr, From, Timeout, State).

read_line_for(Channel, From, Timeout, State) ->
    BufKey = {Channel, buffer},
    WaitKey = {Channel, waiting},
    Buffer = maps:get(BufKey, State),
    PortClosed = maps:get(port_closed, State, false),
    case {queue:out(Buffer), PortClosed} of
        {{{value, Line}, Rest}, _} ->
            {reply, Line, State#{BufKey => Rest}};
        {{empty, _}, true} ->
            %% EOF — no more data coming
            {reply, nil, State};
        {{empty, _}, false} when Timeout =:= infinity ->
            %% No data yet — stash From, reply later from handle_info
            {noreply, State#{WaitKey => From}};
        {{empty, _}, false} ->
            %% No data yet — stash From with a timer ref for the timeout
            TimerRef = erlang:send_after(Timeout, self(),
                                         {read_timeout, Channel}),
            {noreply, State#{WaitKey => {From, TimerRef}}}
    end.

%% writeLine — sync call, writes to port stdin
handle_writeLine([Data], _From, #{port := Port} = State) ->
    Line = <<(iolist_to_binary(Data))/binary, $\n>>,
    port_command(Port, Line),
    {reply, nil, State}.

The deferred reply pattern means readLine blocks the caller's gen_server:call until either a line becomes available, the subprocess reaches EOF, or (for readLine:) the internal timeout fires; the latter two reply nil.

This is standard OTP — handle_call returns {noreply, State} to defer the reply, stashing From in the state. When data arrives via handle_info, the actor calls gen_server:reply(From, Value) to unblock the caller. The same pattern is used for both readLine (stdout) and readStderrLine (stderr), each with independent buffers and waiting callers.

Timeout handling: two read variants cover different needs. readLine blocks indefinitely, suited to request-response protocols where a reply is guaranteed; readLine: timeout returns nil after the given milliseconds, suited to callers that must detect a hung subprocess. Both variants exist for stderr too (readStderrLine, readStderrLine:).

ADR 0043 notes that gen_server:call defaults to a 5000ms timeout. For the blocking readLine variant, the runtime must use gen_server:call(AgentPid, {readLine, []}, infinity) — a subprocess may produce no output for minutes (e.g., a coding agent thinking). For readLine:, the gen_server:call also uses infinity — the timeout is managed internally via erlang:send_after, not via OTP's call timeout, so the actor can clean up its waiting state properly.
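
A sketch of the caller side (the message tuples mirror the handler clauses above; the exact dispatch encoding is an assumption):

```erlang
%% Both variants use an infinity OTP call timeout. readLine:'s deadline is
%% enforced inside the actor via erlang:send_after, never by the call itself.
read_line(AgentPid) ->
    gen_server:call(AgentPid, {readLine, []}, infinity).

read_line(AgentPid, TimeoutMs) ->
    gen_server:call(AgentPid, {readLine, [TimeoutMs]}, infinity).
```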

Actor Lifecycle and Cleanup

 Subprocess spawn         stop (graceful)        close (forced)
      │                       │                       │
      ▼                       ▼                       ▼
  open_port ──data──▶  port_close (EOF)         port_close + kill
      │                  │                         │
      ▼                  ▼                         ▼
  handle_info      wait for exit_status       OS process killed
  buffers data       (timeout → kill)           immediately
      │                  │
      ▼                  ▼
  readLine ◄─── gen_server:call ───▶ nil (EOF)
  writeLine ──▶ port_command

Shared Security Model

Both Tier 1 and Tier 2 share the same security model:

  1. No shell invocation — always spawn_executable, never spawn. Arguments are passed as a list directly to execve(3), preventing injection.
  2. PATH resolution — command names without / are resolved via os:find_executable/1. Absolute paths are allowed (unlike File's relative-only policy) because executables legitimately live in system directories.
  3. Argument validation — all arguments must be strings. Non-string values raise #type_error.
  4. No shell metacharacters — since there's no shell, |, >, &&, ;, backticks, and $() are literal characters, not operators. Users who need piping must compose at the Beamtalk level:
// Instead of: System run: "cat file.txt | grep ERROR" (no shell!)
// Do:
(System output: "cat" args: #("file.txt"))
  select: [:line | line includesSubstring: "ERROR"]

Windows caveat: On Windows, os:find_executable/1 searches PATHEXT and may resolve to .bat/.cmd files, which implicitly invoke cmd.exe when passed to spawn_executable. This partially defeats the "no shell" guarantee. Decision: The implementation MUST detect .bat/.cmd resolution on Windows and reject the command with a clear error:

System run: "setup" args: #("install")
// => Error: #command_error "Cannot execute setup.bat — .bat/.cmd files invoke cmd.exe, bypassing shell injection protection. Use the full path to a .exe instead."

This is the conservative choice: it's better to reject a valid-but-risky command than to silently reintroduce shell injection. Users who genuinely need to run batch files can use System run: "cmd.exe" args: #("/c", "setup.bat") — making the shell invocation explicit and auditable.
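
A minimal sketch of the mandated check (function and error names here are illustrative, not the actual runtime's):

```erlang
%% Reject PATHEXT resolutions that would implicitly invoke cmd.exe.
validate_resolved_path(Path) ->
    case string:lowercase(filename:extension(Path)) of
        Ext when Ext =:= ".bat"; Ext =:= ".cmd" ->
            {error, command_error};
        _ ->
            ok
    end.
```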

Error Handling

// Command not found (Tier 1 and Tier 2)
System run: "nonexistent" args: #()
// => Error: #command_not_found "Command not found: nonexistent"
//    Hint: "Check that the command is installed and on $PATH"

// Non-zero exit (NOT an error — exit code is in the result)
result := System run: "grep" args: #("pattern", "file.txt")
result exitCode    // => 1 (no match)
result isSuccess   // => false

// Non-string argument
System run: 42 args: #()
// => Error: #type_error "run:args: expects a String command, got Integer"

// Writing to a closed subprocess (Tier 2)
agent writeLine: "hello".
// => Error: #port_closed "Cannot write to closed subprocess"
//    Hint: "The subprocess has exited with code 1"

Stderr Handling

The Rust helper binary (beamtalk_exec) manages separate stdout and stderr pipes to the subprocess. Both streams are forwarded to the BEAM over the ETF port protocol with tagged messages ({stdout, ChildId, Data} / {stderr, ChildId, Data}).

Tier 1: CommandResult exposes both output (stdout) and stderr as separate strings:

result := System run: "gcc" args: #("-Wall", "main.c")
result output      // => "" (stdout — empty for compile-only)
result stderr      // => "main.c:5: warning: unused variable\n"
result exitCode    // => 0

Tier 2: Subprocess provides readLine (stdout) and readStderrLine (stderr) as separate methods:

agent := Subprocess open: "codex" args: #("--full-auto", "fix the bug")
line := agent readLine.              // reads from stdout
errLine := agent readStderrLine.     // reads from stderr

Both methods use the same deferred-reply buffering pattern, with independent queues for stdout and stderr.

Zombie Process Prevention

The Rust helper binary (beamtalk_exec) handles all subprocess lifecycle management, including clean shutdown and process group kill. This eliminates the PID reuse, grandchild orphaning, and shell-based kill issues that raw open_port would have.

Rust helper cleanup sequence:

  1. The helper spawns each subprocess in its own process group (Unix: setsid(); Windows: Job Object)
  2. On shutdown request from the BEAM: sends SIGTERM to the process group, waits up to 2 seconds, then sends SIGKILL to the process group
  3. Process-group kill ensures grandchild processes are also terminated — no orphaning
  4. On Windows: TerminateJobObject() kills all processes in the Job Object

Tier 1 (System output:) — Streams work here:

Streams are value-side only (ADR 0021 revised). System output:args: is a valid use of Streams because the caller's process opens the port via the Rust helper, creates the Stream generator, and evaluates the pipeline — no process boundary is crossed. This is the same pattern as File lines: — caller-owned resource, same-process evaluation, lazy pipeline composition.

  1. Stream finalizer — System output:args: attaches a Stream finalizer that sends a shutdown command to the Rust helper. The finalizer runs when the stream is fully consumed or when a terminal operation completes.
  2. Block-scoped cleanup — System output:args:do: sends a shutdown command to the helper in an ensure: block when the block exits (normally or via exception). This is the deterministic alternative — use it for long-running commands where relying on the finalizer is insufficient.
  3. Process linking — the BEAM port to the Rust helper is linked to the calling process. If the caller crashes, the port closes, and the helper cleans up the subprocess.

Tier 2 (Subprocess actor) — direct Streams do NOT work, but lines bridges the gap:

A Stream whose generator directly reads from the port cannot be returned from an actor — the generator closure would execute in the caller's process, which cannot read from the actor's port. However, the lines method works around this by returning a Stream whose generator calls gen_server:call(ActorPid, {readLine, []}, infinity) — a message send, not a direct port read. See ADR 0021 "Scope Limitation" and the "Design Resolution: Why Naive Streams Don't Work" section above.

  1. Actor lifecycle — the port to the Rust helper is owned by the actor's gen_server process. When the actor stops (via stop. or crash), terminate/2 sends a shutdown command to the helper, which cleans up the subprocess with the SIGTERM → SIGKILL sequence.
  2. Explicit close — agent close. sends an immediate kill command to the helper.
  3. Supervision — Subprocess actors can be supervised like any other Actor. If the supervisor restarts the actor, a new subprocess is spawned via the helper.
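
The actor-side cleanup can be sketched as a terminate/2 callback (the shutdown message shape and helper protocol details are assumptions, not the actual beamtalk_subprocess.erl code):

```erlang
%% On actor stop or crash: ask the helper to run its SIGTERM then SIGKILL
%% sequence on the child's process group, then close the port to the helper.
terminate(_Reason, #{port := Port}) ->
    catch port_command(Port, erlang:term_to_binary(shutdown)),
    catch erlang:port_close(Port),
    ok.
```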

Rust helper crash safety: If the Rust helper itself crashes or is killed, the subprocess's process group becomes orphaned. The helper is a small, single-purpose, memory-safe binary, so this risk is low. If the BEAM VM exits, the helper detects stdin EOF and cleans up all managed subprocesses before exiting.

Abandoned Stream Cleanup (Tier 1)

An abandoned System output: Stream (assigned to a variable but never fully consumed) will not be cleaned up until the owning process exits. For long-running REPL sessions, this means a tail -f subprocess could leak indefinitely. Mitigation: prefer the block-scoped System output:args:do: form for long-running commands; it terminates the subprocess deterministically when the block exits, whether or not the Stream was fully consumed.

Cross-Process Constraints

Tier 1: Port-backed Streams from System output: have the same cross-process limitation as file-backed Streams (ADR 0021): they must be consumed by the same process that created them. Do not pass a System output: Stream to an actor — materialize it first with asList or take:.

Tier 2: No cross-process constraint for callers. The actor owns the port internally and exposes only simple values (String, Integer, nil) via sync message sends. Any process can call agent readLine. — the actor mediates all port access. This is the key advantage of the actor model for interactive subprocesses.

Prior Art

Erlang/OTP: open_port/2 and os:cmd/1

Elixir: System.cmd/3 and Port.open/2

Pharo Smalltalk: OSSubprocess

Newspeak

erlexec (Erlang library) and Porcelain (Elixir library)

Node.js child_process

User Impact

Newcomer

Smalltalk Developer

Erlang/BEAM Developer

Production Operator

Steelman Analysis

Option A: Start with os:cmd wrapper, upgrade to open_port later

Why rejected: os:cmd/1 invokes /bin/sh -c — shell injection by design. It also discards the exit code, which is critical for scripting (checking $?). Starting with os:cmd would mean either (a) shipping an insecure API that we immediately deprecate, or (b) not exposing exit codes in Phase 0 and adding them as a breaking change in Phase 1.

The API-validation argument is genuine: committing to run:args: is a surface area decision. However, the open_port implementation is only marginally more complex than os:cmd — the real complexity is in the streaming and actor phases, not in run:args:. Phase 1 with open_port already serves as the API validation step. If run:args: turns out to be the wrong shape, we learn that in Phase 1 regardless of whether the implementation uses os:cmd or open_port underneath. The os:cmd indirection would save almost no implementation effort while adding a security liability.

Option B: Use Streams from the Subprocess actor instead of readLine

Why rejected: The cross-process Stream constraint (ADR 0021) makes this impossible without fundamental changes to Stream semantics. A Stream's generator function runs in the caller's process, but the port is owned by the actor's gen_server process. Returning a Stream would require either: (a) violating the cross-process constraint (unsafe), (b) spawning a proxy process per Stream consumer (complex, fragile), or (c) redesigning Streams to support remote generators (large scope via BT-507).

The API coherence concern is real and should not be dismissed as "2 lines of code." However, the lines method already bridges this gap — it returns a Stream whose generator calls readLine via gen_server:call, giving Tier 2 the same composability as Tier 1. The readLine API remains as the lower-level primitive for timeout-based and request-response patterns.

Option C: Use Process as the class name (per ADR 0021 sketch)

Why rejected: While Beamtalk hides BEAM processes as "Actors" in user-facing code, the BEAM ecosystem documentation, tooling (observer, recon), and error messages all use "process." A Beamtalk class named Process would create confusion when users encounter BEAM process IDs, process monitors, and OTP process terminology in error messages and debugging output. Subprocess is unambiguous and widely understood (Python uses subprocess, Pharo uses OSSubprocess). The vocabulary concern is valid but the pragmatic disambiguation outweighs it — especially since Beamtalk targets BEAM developers as a key audience.

Option D: Message-push model instead of readLine polling

Why rejected: Push-based messaging adds significant complexity: subscriber registration, back-pressure handling, message ordering guarantees, and what happens when the subscriber crashes. The sync readLine model is simpler, correct, and sufficient for the motivating JSON-RPC use case (request-response with line-delimited messages).

The single-consumer limitation is real but acceptable: the Symphony orchestrator pattern is one coordinator process per subprocess, not multiple concurrent readers. The operator's observability concern is valid — the implementation should include the subscriber identity in the waiting state field (not just {From, _} but {From, WaitingSince}) for debugging.

A push-based wrapper can be built in ~15 lines of Beamtalk on top of Subprocess (see "Future: Push-Based Callbacks" in Implementation). If the pattern becomes common enough, a stdlib class can be added without changing the Subprocess API.

Tension Points

Alternatives Considered

Alternative: Shell-string API (System shell: "ls -la | grep foo")

Provide a convenience API that invokes the system shell, enabling pipes, redirects, and shell features.

Rejected because: Shell injection is a top security vulnerability in subprocess execution. The EEF Security Working Group explicitly recommends against {spawn, Command} in favor of spawn_executable. Users who need piping can compose Beamtalk streams. The composability of System output: with select: / collect: makes shell piping unnecessary for most cases. (Note: Beamtalk stream composition is sequential, not concurrent like shell pipes — each stage blocks while the previous produces output.)

Alternative: Port class extension

Extend the existing Port.bt to add subprocess creation methods.

Rejected because: Port is a BEAM interop type representing port identifiers from Erlang code (ADR 0028). Adding subprocess creation conflates BEAM interop artifacts with OS process management.

Alternative: Block-scoped only (no bare System output:)

Only provide System output:args:do: (block-scoped), not the bare System output:args: that returns a Stream directly.

Rejected because: The bare Stream form enables pipeline composition (System output: ... select: ... take: 10) which is the idiomatic Beamtalk pattern established by File lines: in ADR 0021. Removing it would make subprocess output less composable than file I/O. Both forms coexist: bare for pipelines, block-scoped for long-running commands where deterministic cleanup matters. This mirrors the File lines: / File open:do: dual pattern.

Alternative: Subprocess as a Value class with mutable handle (Pharo-style)

Create Subprocess as a non-actor class wrapping a port handle, with methods like readLine, writeLine:, close. The caller's process owns the port directly.

Rejected because: This would work for single-threaded use but breaks Beamtalk's concurrency model. If two BEAM processes share a reference to the same Subprocess value, both would try to interact with the port — but only the owner can. The actor model avoids this by routing all port access through the actor's gen_server process, serializing access naturally. Additionally, actors provide supervision, lifecycle management, and standard BEAM observability tools — all free from the OTP framework.

Consequences

Positive

Negative

Neutral

Implementation

Phase 1: Rust subprocess helper binary — beamtalk_exec (S-M)

Build a standalone Rust binary that the BEAM spawns as a port. The binary manages child process spawning, stdio piping, process group lifecycle, and signal handling. The BEAM communicates with it via ETF-encoded messages over the port protocol (same pattern as beamtalk_compiler_port).

New Rust crate: beamtalk-exec (~400-500 lines)

Erlang interface: beamtalk_exec_port.erl

Build integration:

Components: new Rust crate, runtime
Tests: Rust unit tests for protocol encoding/decoding; integration tests via stdlib/test/exec_helper_test.bt — spawn an echo process, verify separate stdout/stderr, process group kill

Phase 2: System run:args: and CommandResult (M)

New class: CommandResult

Extend System:

Components: stdlib, runtime, codegen (builtins registration for CommandResult)
Tests: stdlib/test/system_command_test.bt — spawn/capture, exit codes, separate stdout/stderr, command not found, env/dir overrides

Phase 3: System output:args: streaming (M)

Extend System:

Components: stdlib, runtime
Tests: stdlib/test/system_output_test.bt — streaming, pipeline composition, block-scoped cleanup, abandoned stream behavior

Phase 4: Subprocess actor (L)

New class: Subprocess

Codegen registration:

Components: stdlib, runtime, codegen (builtins registration)
Tests: stdlib/test/subprocess_test.bt — spawn/readLine/writeLine, separate stderr, exit codes, close/stop lifecycle, deferred readLine blocking, timeout variants, lines Stream consumption, EOF handling, error on write after close

Future: Push-Based Callbacks (Deferred)

A push-based SubprocessListener (with onLine:, onStderrLine:, onExit: callbacks) was considered and deferred. The pull-based Subprocess with readLine + lines covers the motivating use cases (Symphony orchestrator, scripting). Push-based callbacks can be built in ~15 lines of Beamtalk on top of Subprocess by spawning a dedicated reader actor:

// User-space push wrapper — no stdlib support needed
Actor subclass: SubprocessReader

  class on: subprocess do: callback =>
    self spawnWith: #{"subprocess" => subprocess, "callback" => callback}

  run =>
    line := self.subprocess readLine.
    [line notNil] whileTrue: [
      self.callback value: line.
      line := self.subprocess readLine.
    ]

If this pattern becomes common enough to warrant stdlib support, a SubprocessListener class can be added in a future phase without changing the Subprocess API.

References