@ljoukov/llm 3.0.6 → 3.0.10

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -481,6 +481,50 @@ console.log(result.text);
 
  Use `customTool()` only when you need freeform/non-JSON tool input grammar.
 
+ ### Mid-Run Steering (Queued Input)
+
+ You can queue user steering while a tool loop is already running. Steering is applied on the next model step (it does not interrupt the current generation/tool execution).
+
+ ```ts
+ import { streamToolLoop, tool } from "@ljoukov/llm";
+ import { z } from "zod";
+
+ const call = streamToolLoop({
+   model: "chatgpt-gpt-5.3-codex",
+   input: "Start implementing the feature.",
+   tools: {
+     echo: tool({
+       inputSchema: z.object({ text: z.string() }),
+       execute: ({ text }) => ({ text }),
+     }),
+   },
+ });
+
+ // Append steering while the run is active.
+ call.append("Focus on tests first, then refactor.");
+
+ const result = await call.result;
+ console.log(result.text);
+ ```
+
+ If you already manage your own run lifecycle, you can create and pass a steering channel directly:
+
+ ```ts
+ import { createToolLoopSteeringChannel, runAgentLoop } from "@ljoukov/llm";
+
+ const steering = createToolLoopSteeringChannel();
+ const run = runAgentLoop({
+   model: "chatgpt-gpt-5.3-codex",
+   input: "Implement the task.",
+   filesystemTool: true,
+   steering,
+ });
+
+ steering.append("Do not interrupt; apply this guidance on the next turn.");
+ const result = await run;
+ ```
+
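As an aside from this diff: the queued-steering behavior the added section describes (messages appended mid-run are buffered, then applied only at the next step boundary) can be sketched independently of the library. Every name below is illustrative, not an `@ljoukov/llm` internal.

```ts
// Sketch of the queued-steering pattern: appended messages are buffered
// and drained only at a step boundary, never mid-step. Illustrative only;
// these are not @ljoukov/llm internals.
type SteeringChannel = {
  append: (message: string) => void;
  drain: () => string[];
};

function createSteeringChannel(): SteeringChannel {
  const queue: string[] = [];
  return {
    append: (message) => {
      queue.push(message);
    },
    // splice empties the queue and returns everything queued so far
    drain: () => queue.splice(0, queue.length),
  };
}

// A step loop that applies queued steering between steps only.
async function runSteps(
  steps: string[],
  steering: SteeringChannel,
  runStep: (input: string, queued: string[]) => Promise<string>,
): Promise<string[]> {
  const outputs: string[] = [];
  for (const step of steps) {
    // Queued messages land here, at the boundary before the next step.
    outputs.push(await runStep(step, steering.drain()));
  }
  return outputs;
}
```

The design point this mirrors is that `append` never touches the step in progress; the in-flight generation or tool execution always completes before queued guidance is seen.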
 
  ### Agentic Loop (`runAgentLoop()`)
 
  `runAgentLoop()` is the high-level agentic API. It supports:
@@ -489,6 +533,22 @@ Use `customTool()` only when you need freeform/non-JSON tool input grammar.
  - built-in subagent orchestration (delegate work across spawned agents),
  - your own custom runtime tools.
 
+ For interactive runs where you want to stream events and inject steering mid-run, use `streamAgentLoop()`:
+
+ ```ts
+ import { streamAgentLoop } from "@ljoukov/llm";
+
+ const call = streamAgentLoop({
+   model: "chatgpt-gpt-5.3-codex",
+   input: "Start implementation.",
+   filesystemTool: true,
+ });
+
+ call.append("Prioritize a minimal diff and update tests.");
+ const result = await call.result;
+ console.log(result.text);
+ ```
+
  #### 1) Filesystem agent loop
 
  For read/search/write tasks in a workspace, enable `filesystemTool`. The library auto-selects a tool profile by model
@@ -537,6 +597,7 @@ Enable `subagentTool` to allow delegation via Codex-style control tools:
 
  - `spawn_agent`, `send_input`, `resume_agent`, `wait`, `close_agent`
  - optional limits: `maxAgents`, `maxDepth`, wait timeouts
+ - `spawn_agent.agent_type` supports built-ins aligned with codex-rs-style roles: `default`, `researcher`, `worker`, `reviewer`
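A hedged sketch of what a `spawn_agent` call carrying a built-in role might look like. Only `agent_type` and the four role names come from the bullet above; every other field name here is an assumption for illustration.

```ts
// Hypothetical spawn_agent payload. Only `agent_type` and its four values
// are documented in the diff above; `input` is an assumed field name.
const AGENT_TYPES = ["default", "researcher", "worker", "reviewer"] as const;
type AgentType = (typeof AGENT_TYPES)[number];

// Type predicate narrowing an arbitrary string to a built-in role.
function isBuiltInAgentType(value: string): value is AgentType {
  return (AGENT_TYPES as readonly string[]).includes(value);
}

const spawnRequest: { agent_type: AgentType; input: string } = {
  agent_type: "researcher",
  input: "Survey prior art in the repo before the worker starts.",
};
```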
 
  ```ts
  import { runAgentLoop } from "@ljoukov/llm";
@@ -658,6 +719,15 @@ npm run bench:agent:estimate
 
  See `benchmarks/agent/README.md` for options and output format.
 
+ ## Examples
+
+ Interactive CLI chat with mid-run steering, thought streaming, filesystem tools rooted at
+ the current directory, subagents enabled, and `Esc` interrupt support:
+
+ ```bash
+ npm run example:cli-chat
+ ```
+
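The example's source is not part of this diff, so as an illustration only: one conventional way to get `Esc` interrupt support in a Node CLI is via `readline.emitKeypressEvents`. The function names below are assumptions, not the example script's actual wiring.

```ts
import * as readline from "node:readline";

// Shape of the key object emitted by Node's keypress events.
type Keypress = { name?: string; ctrl?: boolean };

// Testable core: Esc (without Ctrl) is treated as the interrupt key.
function isEscInterrupt(key: Keypress): boolean {
  return key.name === "escape" && !key.ctrl;
}

// Illustrative wiring: put stdin in raw mode and watch for Esc.
function watchForEsc(onInterrupt: () => void): void {
  readline.emitKeypressEvents(process.stdin);
  if (process.stdin.isTTY) {
    process.stdin.setRawMode(true);
  }
  process.stdin.on("keypress", (_chunk: string, key: Keypress) => {
    if (isEscInterrupt(key)) {
      onInterrupt();
    }
  });
}
```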
  ## License
 
  MIT