@nvrland/pi-agent-core 0.65.2-1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md ADDED
@@ -0,0 +1,456 @@
1
+ # @nvrland/pi-agent-core
2
+
3
+ Stateful agent with tool execution and event streaming. Built on `@nvrland/pi-ai`.
4
+
5
+ ## Installation
6
+
7
+ ```bash
8
+ npm install @nvrland/pi-agent-core
9
+ ```
10
+
11
+ ## Quick Start
12
+
13
+ ```typescript
14
+ import { Agent } from "@nvrland/pi-agent-core";
15
+ import { getModel } from "@nvrland/pi-ai";
16
+
17
+ const agent = new Agent({
18
+ initialState: {
19
+ systemPrompt: "You are a helpful assistant.",
20
+ model: getModel("anthropic", "claude-sonnet-4-20250514"),
21
+ },
22
+ });
23
+
24
+ agent.subscribe((event) => {
25
+ if (event.type === "message_update" && event.assistantMessageEvent.type === "text_delta") {
26
+ // Stream just the new text chunk
27
+ process.stdout.write(event.assistantMessageEvent.delta);
28
+ }
29
+ });
30
+
31
+ await agent.prompt("Hello!");
32
+ ```
33
+
34
+ ## Core Concepts
35
+
36
+ ### AgentMessage vs LLM Message
37
+
38
+ The agent works with `AgentMessage`, a flexible type that can include:
39
+ - Standard LLM messages (`user`, `assistant`, `toolResult`)
40
+ - Custom app-specific message types via declaration merging
41
+
42
+ LLMs only understand `user`, `assistant`, and `toolResult`. The `convertToLlm` function bridges this gap by filtering and transforming messages before each LLM call.
43
+
44
+ ### Message Flow
45
+
46
+ ```
47
+ AgentMessage[] → transformContext() → AgentMessage[] → convertToLlm() → Message[] → LLM
48
+ (optional) (required)
49
+ ```
50
+
51
+ 1. **transformContext**: Prune old messages, inject external context
52
+ 2. **convertToLlm**: Filter out UI-only messages, convert custom types to LLM format
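+
+ A minimal sketch combining both hooks (the 50-message cutoff and the role filter are illustrative, not prescriptive):
+
+ ```typescript
+ const agent = new Agent({
+   initialState: {
+     systemPrompt: "You are a helpful assistant.",
+     model: getModel("anthropic", "claude-sonnet-4-20250514"),
+   },
+   // transformContext: keep only the most recent messages (arbitrary cutoff)
+   transformContext: async (messages) => messages.slice(-50),
+   // convertToLlm: drop anything the LLM cannot understand
+   convertToLlm: (messages) =>
+     messages.filter((m) => ["user", "assistant", "toolResult"].includes(m.role)),
+ });
+ ```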
53
+
54
+ ## Event Flow
55
+
56
+ The agent emits events for UI updates. Understanding the event sequence helps build responsive interfaces.
57
+
58
+ ### prompt() Event Sequence
59
+
60
+ When you call `prompt("Hello")`:
61
+
62
+ ```
63
+ prompt("Hello")
64
+ ├─ agent_start
65
+ ├─ turn_start
66
+ ├─ message_start { message: userMessage } // Your prompt
67
+ ├─ message_end { message: userMessage }
68
+ ├─ message_start { message: assistantMessage } // LLM starts responding
69
+ ├─ message_update { message: partial... } // Streaming chunks
70
+ ├─ message_update { message: partial... }
71
+ ├─ message_end { message: assistantMessage } // Complete response
72
+ ├─ turn_end { message, toolResults: [] }
73
+ └─ agent_end { messages: [...] }
74
+ ```
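+
+ One way to watch this sequence is to log event types from a subscriber (a sketch; `message_update` is skipped to avoid per-chunk noise):
+
+ ```typescript
+ agent.subscribe((event) => {
+   if (event.type === "message_update") return; // streaming chunks are too chatty to log
+   console.log(event.type);
+ });
+
+ await agent.prompt("Hello!");
+ // Expect: agent_start, turn_start, message_start/message_end for the user and
+ // assistant messages, turn_end, agent_end
+ ```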
75
+
76
+ ### With Tool Calls
77
+
78
+ If the assistant calls tools, the loop continues:
79
+
80
+ ```
81
+ prompt("Read config.json")
82
+ ├─ agent_start
83
+ ├─ turn_start
84
+ ├─ message_start/end { userMessage }
85
+ ├─ message_start { assistantMessage with toolCall }
86
+ ├─ message_update...
87
+ ├─ message_end { assistantMessage }
88
+ ├─ tool_execution_start { toolCallId, toolName, args }
89
+ ├─ tool_execution_update { partialResult } // If tool streams
90
+ ├─ tool_execution_end { toolCallId, result }
91
+ ├─ message_start/end { toolResultMessage }
92
+ ├─ turn_end { message, toolResults: [toolResult] }
93
+
94
+ ├─ turn_start // Next turn
95
+ ├─ message_start { assistantMessage } // LLM responds to tool result
96
+ ├─ message_update...
97
+ ├─ message_end
98
+ ├─ turn_end
99
+ └─ agent_end
100
+ ```
101
+
102
+ Tool execution mode is configurable:
103
+
104
+ - `parallel` (default): preflight tool calls sequentially, execute the allowed tools concurrently, and emit the final `tool_execution_end` and `toolResult` messages in the order the assistant issued the tool calls
105
+ - `sequential`: execute tool calls one by one, matching the historical behavior
106
+
107
+ The `beforeToolCall` hook runs after `tool_execution_start` is emitted and the tool arguments have been validated; it can block execution. The `afterToolCall` hook runs after tool execution finishes, before the `tool_execution_end` and final tool result message events are emitted.
108
+
109
+ When you use the `Agent` class, assistant `message_end` processing is treated as a barrier before tool preflight begins. That means `beforeToolCall` sees agent state that already includes the assistant message that requested the tool call.
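+
+ A sketch of what that barrier enables (the logging is illustrative):
+
+ ```typescript
+ agent.beforeToolCall = async ({ toolCall, args }) => {
+   // The assistant message that requested this call is already part of
+   // agent.state.messages, so the hook can reason over the full context.
+   const assistantTurns = agent.state.messages.filter((m) => m.role === "assistant").length;
+   console.log(`turn ${assistantTurns}: ${toolCall.name}`, args);
+   // Return { block: true, reason: "..." } to prevent execution; returning nothing allows it.
+ };
+ ```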
110
+
111
+ ### continue() Event Sequence
112
+
113
+ `continue()` resumes from existing context without adding a new message. Use it for retries after errors.
114
+
115
+ ```typescript
116
+ // After an error, retry from current state
117
+ await agent.continue();
118
+ ```
119
+
120
+ The last message in context must be `user` or `toolResult` (not `assistant`).
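+
+ A sketch of a retry, assuming the failed run surfaced its error via `agent.state.errorMessage` and left the context ending in a `user` or `toolResult` message:
+
+ ```typescript
+ await agent.prompt("Summarize the report.");
+
+ if (agent.state.errorMessage) {
+   // Retry from the existing context without adding another user message.
+   await agent.continue();
+ }
+ ```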
121
+
122
+ ### Event Types
123
+
124
+ | Event | Description |
125
+ |-------|-------------|
126
+ | `agent_start` | Agent begins processing |
127
+ | `agent_end` | Final event for the run. Awaited subscribers for this event still count toward settlement |
128
+ | `turn_start` | New turn begins (one LLM call + tool executions) |
129
+ | `turn_end` | Turn completes with assistant message and tool results |
130
+ | `message_start` | Any message begins (user, assistant, toolResult) |
131
+ | `message_update` | **Assistant only.** Includes `assistantMessageEvent` with delta |
132
+ | `message_end` | Message completes |
133
+ | `tool_execution_start` | Tool begins |
134
+ | `tool_execution_update` | Tool streams progress |
135
+ | `tool_execution_end` | Tool completes |
136
+
137
+ `Agent.subscribe()` listeners are awaited in registration order. `agent_end` means no more loop events will be emitted, but `await agent.waitForIdle()` and `await agent.prompt(...)` only settle after awaited `agent_end` listeners finish.
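+
+ For example, an awaited `agent_end` listener delays settlement of the originating `prompt()` call (`saveTranscript` is a hypothetical persistence step):
+
+ ```typescript
+ agent.subscribe(async (event) => {
+   if (event.type === "agent_end") {
+     await saveTranscript(event.messages); // hypothetical: write the run to storage
+   }
+ });
+
+ await agent.prompt("Hello!");
+ // saveTranscript has completed by the time prompt() settles.
+ ```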
138
+
139
+ ## Agent Options
140
+
141
+ ```typescript
142
+ const agent = new Agent({
143
+ // Initial state
144
+ initialState: {
145
+ systemPrompt: string,
146
+ model: Model<any>,
147
+ thinkingLevel: "off" | "minimal" | "low" | "medium" | "high" | "xhigh",
148
+ tools: AgentTool<any>[],
149
+ messages: AgentMessage[],
150
+ },
151
+
152
+ // Convert AgentMessage[] to LLM Message[] (required for custom message types)
153
+ convertToLlm: (messages) => messages.filter(...),
154
+
155
+ // Transform context before convertToLlm (for pruning, compaction)
156
+ transformContext: async (messages, signal) => pruneOldMessages(messages),
157
+
158
+ // Steering mode: "one-at-a-time" (default) or "all"
159
+ steeringMode: "one-at-a-time",
160
+
161
+ // Follow-up mode: "one-at-a-time" (default) or "all"
162
+ followUpMode: "one-at-a-time",
163
+
164
+ // Custom stream function (for proxy backends)
165
+ streamFn: streamProxy,
166
+
167
+ // Session ID for provider caching
168
+ sessionId: "session-123",
169
+
170
+ // Dynamic API key resolution (for expiring OAuth tokens)
171
+ getApiKey: async (provider) => refreshToken(),
172
+
173
+ // Tool execution mode: "parallel" (default) or "sequential"
174
+ toolExecution: "parallel",
175
+
176
+ // Preflight each tool call after args are validated. Can block execution.
177
+ beforeToolCall: async ({ toolCall, args, context }) => {
178
+ if (toolCall.name === "bash") {
179
+ return { block: true, reason: "bash is disabled" };
180
+ }
181
+ },
182
+
183
+ // Postprocess each tool result before final tool events are emitted.
184
+ afterToolCall: async ({ toolCall, result, isError, context }) => {
185
+ if (!isError) {
186
+ return { details: { ...result.details, audited: true } };
187
+ }
188
+ },
189
+
190
+ // Custom thinking budgets for token-based providers
191
+ thinkingBudgets: {
192
+ minimal: 128,
193
+ low: 512,
194
+ medium: 1024,
195
+ high: 2048,
196
+ },
197
+ });
198
+ ```
199
+
200
+ ## Agent State
201
+
202
+ ```typescript
203
+ interface AgentState {
204
+ systemPrompt: string;
205
+ model: Model<any>;
206
+ thinkingLevel: ThinkingLevel;
207
+ tools: AgentTool<any>[];
208
+ messages: AgentMessage[];
209
+ readonly isStreaming: boolean;
210
+ readonly streamingMessage?: AgentMessage;
211
+ readonly pendingToolCalls: ReadonlySet<string>;
212
+ readonly errorMessage?: string;
213
+ }
214
+ ```
215
+
216
+ Access state via `agent.state`.
217
+
218
+ Assigning `agent.state.tools = [...]` or `agent.state.messages = [...]` copies the top-level array before storing it. Reading either property returns the live array, so mutating that array mutates the current agent state.
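+
+ A sketch of the difference, assuming the semantics above:
+
+ ```typescript
+ const userMessage: AgentMessage = { role: "user", content: "Hi", timestamp: Date.now() };
+
+ const drafted = [...agent.state.messages, userMessage];
+ agent.state.messages = drafted; // the agent stores a copy of `drafted`
+ drafted.pop();                  // does NOT change agent state
+
+ agent.state.messages.push(userMessage); // live array: this DOES change agent state
+ ```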
219
+
220
+ During streaming, `agent.state.streamingMessage` contains the current partial assistant message.
221
+
222
+ `agent.state.isStreaming` remains `true` until the run fully settles, including awaited `agent_end` subscribers.
223
+
224
+ ## Methods
225
+
226
+ ### Prompting
227
+
228
+ ```typescript
229
+ // Text prompt
230
+ await agent.prompt("Hello");
231
+
232
+ // With images
233
+ await agent.prompt("What's in this image?", [
234
+ { type: "image", data: base64Data, mimeType: "image/jpeg" }
235
+ ]);
236
+
237
+ // AgentMessage directly
238
+ await agent.prompt({ role: "user", content: "Hello", timestamp: Date.now() });
239
+
240
+ // Continue from current context (last message must be user or toolResult)
241
+ await agent.continue();
242
+ ```
243
+
244
+ ### State Management
245
+
246
+ ```typescript
247
+ agent.state.systemPrompt = "New prompt";
248
+ agent.state.model = getModel("openai", "gpt-4o");
249
+ agent.state.thinkingLevel = "medium";
250
+ agent.state.tools = [myTool];
251
+ agent.toolExecution = "sequential";
252
+ agent.beforeToolCall = async ({ toolCall }) => undefined;
253
+ agent.afterToolCall = async ({ toolCall, result }) => undefined;
254
+ agent.state.messages = newMessages; // top-level array is copied
255
+ agent.state.messages.push(message);
256
+ agent.reset();
257
+ ```
258
+
259
+ ### Session and Thinking Budgets
260
+
261
+ ```typescript
262
+ agent.sessionId = "session-123";
263
+
264
+ agent.thinkingBudgets = {
265
+ minimal: 128,
266
+ low: 512,
267
+ medium: 1024,
268
+ high: 2048,
269
+ };
270
+ ```
271
+
272
+ ### Control
273
+
274
+ ```typescript
275
+ agent.abort(); // Cancel current operation
276
+ await agent.waitForIdle(); // Wait for completion
277
+ ```
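+
+ A sketch that aborts a run after a deadline (the 30-second limit is arbitrary; the `finally` covers cleanup whether `prompt()` rejects or resolves after `abort()`):
+
+ ```typescript
+ const timer = setTimeout(() => agent.abort(), 30_000);
+ try {
+   await agent.prompt("Run the full analysis.");
+ } finally {
+   clearTimeout(timer);
+ }
+ ```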
278
+
279
+ ### Events
280
+
281
+ ```typescript
282
+ const unsubscribe = agent.subscribe(async (event, signal) => {
283
+ if (event.type === "agent_end") {
284
+ // Final barrier work for the run
285
+ await flushSessionState(signal);
286
+ }
287
+ });
288
+ unsubscribe();
289
+ ```
290
+
291
+ ## Steering and Follow-up
292
+
293
+ Steering messages let you interrupt the agent while tools are running. Follow-up messages let you queue work after the agent would otherwise stop.
294
+
295
+ ```typescript
296
+ agent.steeringMode = "one-at-a-time";
297
+ agent.followUpMode = "one-at-a-time";
298
+
299
+ // While agent is running tools
300
+ agent.steer({
301
+ role: "user",
302
+ content: "Stop! Do this instead.",
303
+ timestamp: Date.now(),
304
+ });
305
+
306
+ // After the agent finishes its current work
307
+ agent.followUp({
308
+ role: "user",
309
+ content: "Also summarize the result.",
310
+ timestamp: Date.now(),
311
+ });
312
+
313
+ const steeringMode = agent.steeringMode;
314
+ const followUpMode = agent.followUpMode;
315
+
316
+ agent.clearSteeringQueue();
317
+ agent.clearFollowUpQueue();
318
+ agent.clearAllQueues();
319
+ ```
320
+
321
+ Use `clearSteeringQueue()`, `clearFollowUpQueue()`, or `clearAllQueues()` to drop queued messages.
322
+
323
+ When steering messages are detected after a turn completes:
324
+ 1. All tool calls from the current assistant message have already finished
325
+ 2. Steering messages are injected
326
+ 3. The LLM responds on the next turn
327
+
328
+ Follow-up messages are checked only when there are no more tool calls and no steering messages. If any are queued, they are injected and another turn runs.
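+
+ A sketch that queues a follow-up while a prompt is still in flight:
+
+ ```typescript
+ const run = agent.prompt("Refactor the module.");
+
+ // Queued immediately, but injected only once the agent would otherwise stop.
+ agent.followUp({
+   role: "user",
+   content: "Also summarize what changed.",
+   timestamp: Date.now(),
+ });
+
+ await run;
+ ```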
329
+
330
+ ## Custom Message Types
331
+
332
+ Extend `AgentMessage` via declaration merging:
333
+
334
+ ```typescript
335
+ declare module "@nvrland/pi-agent-core" {
336
+ interface CustomAgentMessages {
337
+ notification: { role: "notification"; text: string; timestamp: number };
338
+ }
339
+ }
340
+
341
+ // Now valid
342
+ const msg: AgentMessage = { role: "notification", text: "Info", timestamp: Date.now() };
343
+ ```
344
+
345
+ Handle custom types in `convertToLlm`:
346
+
347
+ ```typescript
348
+ const agent = new Agent({
349
+ convertToLlm: (messages) => messages.flatMap(m => {
350
+ if (m.role === "notification") return []; // Filter out
351
+ return [m];
352
+ }),
353
+ });
354
+ ```
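+
+ With both pieces in place, custom messages can live in agent state for the UI while never reaching the LLM:
+
+ ```typescript
+ agent.state.messages.push({
+   role: "notification",
+   text: "Build finished",
+   timestamp: Date.now(),
+ });
+ // Rendered from agent.state.messages, but filtered out by convertToLlm
+ // before the next LLM call.
+ ```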
355
+
356
+ ## Tools
357
+
358
+ Define tools using `AgentTool`:
359
+
360
+ ```typescript
361
+ import { Type } from "@sinclair/typebox";
362
+
363
+ const readFileTool: AgentTool = {
364
+ name: "read_file",
365
+ label: "Read File", // For UI display
366
+ description: "Read a file's contents",
367
+ parameters: Type.Object({
368
+ path: Type.String({ description: "File path" }),
369
+ }),
370
+ execute: async (toolCallId, params, signal, onUpdate) => {
371
+ const content = await fs.readFile(params.path, "utf-8");
372
+
373
+ // Optional: stream progress
374
+ onUpdate?.({ content: [{ type: "text", text: "Reading..." }], details: {} });
375
+
376
+ return {
377
+ content: [{ type: "text", text: content }],
378
+ details: { path: params.path, size: content.length },
379
+ };
380
+ },
381
+ };
382
+
383
+ agent.state.tools = [readFileTool];
384
+ ```
385
+
386
+ ### Error Handling
387
+
388
+ **Throw an error** when a tool fails. Do not return error messages as content.
389
+
390
+ ```typescript
391
+ execute: async (toolCallId, params, signal, onUpdate) => {
392
+ if (!fs.existsSync(params.path)) {
393
+ throw new Error(`File not found: ${params.path}`);
394
+ }
395
+ // Return content only on success
396
+ return { content: [{ type: "text", text: "..." }] };
397
+ }
398
+ ```
399
+
400
+ Thrown errors are caught by the agent and reported to the LLM as tool errors with `isError: true`.
401
+
402
+ ## Proxy Usage
403
+
404
+ For browser apps that proxy through a backend:
405
+
406
+ ```typescript
407
+ import { Agent, streamProxy } from "@nvrland/pi-agent-core";
408
+
409
+ const agent = new Agent({
410
+ streamFn: (model, context, options) =>
411
+ streamProxy(model, context, {
412
+ ...options,
413
+ authToken: "...",
414
+ proxyUrl: "https://your-server.com",
415
+ }),
416
+ });
417
+ ```
418
+
419
+ ## Low-Level API
420
+
421
+ For direct control without the Agent class:
422
+
423
+ ```typescript
424
+ import { agentLoop, agentLoopContinue } from "@nvrland/pi-agent-core";
425
+
426
+ const context: AgentContext = {
427
+ systemPrompt: "You are helpful.",
428
+ messages: [],
429
+ tools: [],
430
+ };
431
+
432
+ const config: AgentLoopConfig = {
433
+ model: getModel("openai", "gpt-4o"),
434
+ convertToLlm: (msgs) => msgs.filter(m => ["user", "assistant", "toolResult"].includes(m.role)),
435
+ toolExecution: "parallel",
436
+ beforeToolCall: async ({ toolCall, args, context }) => undefined,
437
+ afterToolCall: async ({ toolCall, result, isError, context }) => undefined,
438
+ };
439
+
440
+ const userMessage = { role: "user", content: "Hello", timestamp: Date.now() };
441
+
442
+ for await (const event of agentLoop([userMessage], context, config)) {
443
+ console.log(event.type);
444
+ }
445
+
446
+ // Continue from existing context
447
+ for await (const event of agentLoopContinue(context, config)) {
448
+ console.log(event.type);
449
+ }
450
+ ```
451
+
452
+ These low-level streams are observational. They preserve event order, but they do not wait for your async event handling to settle before later producer phases continue. If you need message processing to act as a barrier before tool preflight, use the `Agent` class instead of raw `agentLoop()` or `agentLoopContinue()`.
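+
+ If you need barrier-like behavior at this level, `runAgentLoop()` and `runAgentLoopContinue()` take an `AgentEventSink` (`(event) => Promise<void> | void`) and the loop awaits each sink call before proceeding. A sketch reusing the `context`, `config`, and `userMessage` from above (`persistMessage` is hypothetical):
+
+ ```typescript
+ import { runAgentLoop } from "@nvrland/pi-agent-core";
+
+ const messages = await runAgentLoop([userMessage], context, config, async (event) => {
+   if (event.type === "message_end") {
+     await persistMessage(event.message); // hypothetical: awaited before the loop continues
+   }
+ });
+ ```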
453
+
454
+ ## License
455
+
456
+ MIT
@@ -0,0 +1,24 @@
1
+ /**
2
+ * Agent loop that works with AgentMessage throughout.
3
+ * Transforms to Message[] only at the LLM call boundary.
4
+ */
5
+ import { EventStream } from "@mariozechner/pi-ai";
6
+ import type { AgentContext, AgentEvent, AgentLoopConfig, AgentMessage, StreamFn } from "./types.js";
7
+ export type AgentEventSink = (event: AgentEvent) => Promise<void> | void;
8
+ /**
9
+ * Start an agent loop with a new prompt message.
10
+ * The prompt is added to the context and events are emitted for it.
11
+ */
12
+ export declare function agentLoop(prompts: AgentMessage[], context: AgentContext, config: AgentLoopConfig, signal?: AbortSignal, streamFn?: StreamFn): EventStream<AgentEvent, AgentMessage[]>;
13
+ /**
14
+ * Continue an agent loop from the current context without adding a new message.
15
+ * Used for retries - context already has user message or tool results.
16
+ *
17
+ * **Important:** The last message in context must convert to a `user` or `toolResult` message
18
+ * via `convertToLlm`. If it doesn't, the LLM provider will reject the request.
19
+ * This cannot be validated here since `convertToLlm` is only called once per turn.
20
+ */
21
+ export declare function agentLoopContinue(context: AgentContext, config: AgentLoopConfig, signal?: AbortSignal, streamFn?: StreamFn): EventStream<AgentEvent, AgentMessage[]>;
22
+ export declare function runAgentLoop(prompts: AgentMessage[], context: AgentContext, config: AgentLoopConfig, emit: AgentEventSink, signal?: AbortSignal, streamFn?: StreamFn): Promise<AgentMessage[]>;
23
+ export declare function runAgentLoopContinue(context: AgentContext, config: AgentLoopConfig, emit: AgentEventSink, signal?: AbortSignal, streamFn?: StreamFn): Promise<AgentMessage[]>;
24
+ //# sourceMappingURL=agent-loop.d.ts.map
@@ -0,0 +1 @@
1
+ {"version":3,"file":"agent-loop.d.ts","sourceRoot":"","sources":["../src/agent-loop.ts"],"names":[],"mappings":"AAAA;;;GAGG;AAEH,OAAO,EAGN,WAAW,EAIX,MAAM,qBAAqB,CAAC;AAC7B,OAAO,KAAK,EACX,YAAY,EACZ,UAAU,EACV,eAAe,EACf,YAAY,EAIZ,QAAQ,EACR,MAAM,YAAY,CAAC;AAEpB,MAAM,MAAM,cAAc,GAAG,CAAC,KAAK,EAAE,UAAU,KAAK,OAAO,CAAC,IAAI,CAAC,GAAG,IAAI,CAAC;AAEzE;;;GAGG;AACH,wBAAgB,SAAS,CACxB,OAAO,EAAE,YAAY,EAAE,EACvB,OAAO,EAAE,YAAY,EACrB,MAAM,EAAE,eAAe,EACvB,MAAM,CAAC,EAAE,WAAW,EACpB,QAAQ,CAAC,EAAE,QAAQ,GACjB,WAAW,CAAC,UAAU,EAAE,YAAY,EAAE,CAAC,CAiBzC;AAED;;;;;;;GAOG;AACH,wBAAgB,iBAAiB,CAChC,OAAO,EAAE,YAAY,EACrB,MAAM,EAAE,eAAe,EACvB,MAAM,CAAC,EAAE,WAAW,EACpB,QAAQ,CAAC,EAAE,QAAQ,GACjB,WAAW,CAAC,UAAU,EAAE,YAAY,EAAE,CAAC,CAwBzC;AAED,wBAAsB,YAAY,CACjC,OAAO,EAAE,YAAY,EAAE,EACvB,OAAO,EAAE,YAAY,EACrB,MAAM,EAAE,eAAe,EACvB,IAAI,EAAE,cAAc,EACpB,MAAM,CAAC,EAAE,WAAW,EACpB,QAAQ,CAAC,EAAE,QAAQ,GACjB,OAAO,CAAC,YAAY,EAAE,CAAC,CAgBzB;AAED,wBAAsB,oBAAoB,CACzC,OAAO,EAAE,YAAY,EACrB,MAAM,EAAE,eAAe,EACvB,IAAI,EAAE,cAAc,EACpB,MAAM,CAAC,EAAE,WAAW,EACpB,QAAQ,CAAC,EAAE,QAAQ,GACjB,OAAO,CAAC,YAAY,EAAE,CAAC,CAiBzB","sourcesContent":["/**\n * Agent loop that works with AgentMessage throughout.\n * Transforms to Message[] only at the LLM call boundary.\n */\n\nimport {\n\ttype AssistantMessage,\n\ttype Context,\n\tEventStream,\n\tstreamSimple,\n\ttype ToolResultMessage,\n\tvalidateToolArguments,\n} from \"@mariozechner/pi-ai\";\nimport type {\n\tAgentContext,\n\tAgentEvent,\n\tAgentLoopConfig,\n\tAgentMessage,\n\tAgentTool,\n\tAgentToolCall,\n\tAgentToolResult,\n\tStreamFn,\n} from \"./types.js\";\n\nexport type AgentEventSink = (event: AgentEvent) => Promise<void> | void;\n\n/**\n * Start an agent loop with a new prompt message.\n * The prompt is added to the context and events are emitted for it.\n */\nexport function agentLoop(\n\tprompts: AgentMessage[],\n\tcontext: AgentContext,\n\tconfig: AgentLoopConfig,\n\tsignal?: AbortSignal,\n\tstreamFn?: StreamFn,\n): EventStream<AgentEvent, AgentMessage[]> {\n\tconst stream = createAgentStream();\n\n\tvoid runAgentLoop(\n\t\tprompts,\n\t\tcontext,\n\t\tconfig,\n\t\tasync (event) => {\n\t\t\tstream.push(event);\n\t\t},\n\t\tsignal,\n\t\tstreamFn,\n\t).then((messages) => {\n\t\tstream.end(messages);\n\t});\n\n\treturn stream;\n}\n\n/**\n * Continue an agent loop from the current context without adding a new message.\n * Used for retries - context already has user message or tool results.\n *\n * **Important:** The last message in context must convert to a `user` or `toolResult` message\n * via `convertToLlm`. 
If it doesn't, the LLM provider will reject the request.\n * This cannot be validated here since `convertToLlm` is only called once per turn.\n */\nexport function agentLoopContinue(\n\tcontext: AgentContext,\n\tconfig: AgentLoopConfig,\n\tsignal?: AbortSignal,\n\tstreamFn?: StreamFn,\n): EventStream<AgentEvent, AgentMessage[]> {\n\tif (context.messages.length === 0) {\n\t\tthrow new Error(\"Cannot continue: no messages in context\");\n\t}\n\n\tif (context.messages[context.messages.length - 1].role === \"assistant\") {\n\t\tthrow new Error(\"Cannot continue from message role: assistant\");\n\t}\n\n\tconst stream = createAgentStream();\n\n\tvoid runAgentLoopContinue(\n\t\tcontext,\n\t\tconfig,\n\t\tasync (event) => {\n\t\t\tstream.push(event);\n\t\t},\n\t\tsignal,\n\t\tstreamFn,\n\t).then((messages) => {\n\t\tstream.end(messages);\n\t});\n\n\treturn stream;\n}\n\nexport async function runAgentLoop(\n\tprompts: AgentMessage[],\n\tcontext: AgentContext,\n\tconfig: AgentLoopConfig,\n\temit: AgentEventSink,\n\tsignal?: AbortSignal,\n\tstreamFn?: StreamFn,\n): Promise<AgentMessage[]> {\n\tconst newMessages: AgentMessage[] = [...prompts];\n\tconst currentContext: AgentContext = {\n\t\t...context,\n\t\tmessages: [...context.messages, ...prompts],\n\t};\n\n\tawait emit({ type: \"agent_start\" });\n\tawait emit({ type: \"turn_start\" });\n\tfor (const prompt of prompts) {\n\t\tawait emit({ type: \"message_start\", message: prompt });\n\t\tawait emit({ type: \"message_end\", message: prompt });\n\t}\n\n\tawait runLoop(currentContext, newMessages, config, signal, emit, streamFn);\n\treturn newMessages;\n}\n\nexport async function runAgentLoopContinue(\n\tcontext: AgentContext,\n\tconfig: AgentLoopConfig,\n\temit: AgentEventSink,\n\tsignal?: AbortSignal,\n\tstreamFn?: StreamFn,\n): Promise<AgentMessage[]> {\n\tif (context.messages.length === 0) {\n\t\tthrow new Error(\"Cannot continue: no messages in context\");\n\t}\n\n\tif (context.messages[context.messages.length - 1].role === \"assistant\") {\n\t\tthrow new Error(\"Cannot continue from message role: assistant\");\n\t}\n\n\tconst newMessages: AgentMessage[] = [];\n\tconst currentContext: AgentContext = { ...context };\n\n\tawait emit({ type: \"agent_start\" });\n\tawait emit({ type: \"turn_start\" });\n\n\tawait runLoop(currentContext, newMessages, config, signal, emit, streamFn);\n\treturn newMessages;\n}\n\nfunction createAgentStream(): EventStream<AgentEvent, AgentMessage[]> {\n\treturn new EventStream<AgentEvent, AgentMessage[]>(\n\t\t(event: AgentEvent) => event.type === \"agent_end\",\n\t\t(event: AgentEvent) => (event.type === \"agent_end\" ? 
event.messages : []),\n\t);\n}\n\n/**\n * Main loop logic shared by agentLoop and agentLoopContinue.\n */\nasync function runLoop(\n\tcurrentContext: AgentContext,\n\tnewMessages: AgentMessage[],\n\tconfig: AgentLoopConfig,\n\tsignal: AbortSignal | undefined,\n\temit: AgentEventSink,\n\tstreamFn?: StreamFn,\n): Promise<void> {\n\tlet firstTurn = true;\n\t// Check for steering messages at start (user may have typed while waiting)\n\tlet pendingMessages: AgentMessage[] = (await config.getSteeringMessages?.()) || [];\n\n\t// Outer loop: continues when queued follow-up messages arrive after agent would stop\n\twhile (true) {\n\t\tlet hasMoreToolCalls = true;\n\n\t\t// Inner loop: process tool calls and steering messages\n\t\twhile (hasMoreToolCalls || pendingMessages.length > 0) {\n\t\t\tif (!firstTurn) {\n\t\t\t\tawait emit({ type: \"turn_start\" });\n\t\t\t} else {\n\t\t\t\tfirstTurn = false;\n\t\t\t}\n\n\t\t\t// Process pending messages (inject before next assistant response)\n\t\t\tif (pendingMessages.length > 0) {\n\t\t\t\tfor (const message of pendingMessages) {\n\t\t\t\t\tawait emit({ type: \"message_start\", message });\n\t\t\t\t\tawait emit({ type: \"message_end\", message });\n\t\t\t\t\tcurrentContext.messages.push(message);\n\t\t\t\t\tnewMessages.push(message);\n\t\t\t\t}\n\t\t\t\tpendingMessages = [];\n\t\t\t}\n\n\t\t\t// Stream assistant response\n\t\t\tconst message = await streamAssistantResponse(currentContext, config, signal, emit, streamFn);\n\t\t\tnewMessages.push(message);\n\n\t\t\tif (message.stopReason === \"error\" || message.stopReason === \"aborted\") {\n\t\t\t\tawait emit({ type: \"turn_end\", message, toolResults: [] });\n\t\t\t\tawait emit({ type: \"agent_end\", messages: newMessages });\n\t\t\t\treturn;\n\t\t\t}\n\n\t\t\t// Check for tool calls\n\t\t\tconst toolCalls = message.content.filter((c) => c.type === \"toolCall\");\n\t\t\thasMoreToolCalls = toolCalls.length > 0;\n\n\t\t\tconst toolResults: ToolResultMessage[] = [];\n\t\t\tif (hasMoreToolCalls) {\n\t\t\t\ttoolResults.push(...(await executeToolCalls(currentContext, message, config, signal, emit)));\n\n\t\t\t\tfor (const result of toolResults) {\n\t\t\t\t\tcurrentContext.messages.push(result);\n\t\t\t\t\tnewMessages.push(result);\n\t\t\t\t}\n\t\t\t}\n\n\t\t\tawait emit({ type: \"turn_end\", message, toolResults });\n\n\t\t\tpendingMessages = (await config.getSteeringMessages?.()) || [];\n\t\t}\n\n\t\t// Agent would stop here. 
Check for follow-up messages.\n\t\tconst followUpMessages = (await config.getFollowUpMessages?.()) || [];\n\t\tif (followUpMessages.length > 0) {\n\t\t\t// Set as pending so inner loop processes them\n\t\t\tpendingMessages = followUpMessages;\n\t\t\tcontinue;\n\t\t}\n\n\t\t// No more messages, exit\n\t\tbreak;\n\t}\n\n\tawait emit({ type: \"agent_end\", messages: newMessages });\n}\n\n/**\n * Stream an assistant response from the LLM.\n * This is where AgentMessage[] gets transformed to Message[] for the LLM.\n */\nasync function streamAssistantResponse(\n\tcontext: AgentContext,\n\tconfig: AgentLoopConfig,\n\tsignal: AbortSignal | undefined,\n\temit: AgentEventSink,\n\tstreamFn?: StreamFn,\n): Promise<AssistantMessage> {\n\t// Apply context transform if configured (AgentMessage[] → AgentMessage[])\n\tlet messages = context.messages;\n\tif (config.transformContext) {\n\t\tmessages = await config.transformContext(messages, signal);\n\t}\n\n\t// Convert to LLM-compatible messages (AgentMessage[] → Message[])\n\tconst llmMessages = await config.convertToLlm(messages);\n\n\t// Build LLM context\n\tconst llmContext: Context = {\n\t\tsystemPrompt: context.systemPrompt,\n\t\tmessages: llmMessages,\n\t\ttools: context.tools,\n\t};\n\n\tconst streamFunction = streamFn || streamSimple;\n\n\t// Resolve API key (important for expiring tokens)\n\tconst resolvedApiKey =\n\t\t(config.getApiKey ? await config.getApiKey(config.model.provider) : undefined) || config.apiKey;\n\n\tconst response = await streamFunction(config.model, llmContext, {\n\t\t...config,\n\t\tapiKey: resolvedApiKey,\n\t\tsignal,\n\t});\n\n\tlet partialMessage: AssistantMessage | null = null;\n\tlet addedPartial = false;\n\n\tfor await (const event of response) {\n\t\tswitch (event.type) {\n\t\t\tcase \"start\":\n\t\t\t\tpartialMessage = event.partial;\n\t\t\t\tcontext.messages.push(partialMessage);\n\t\t\t\taddedPartial = true;\n\t\t\t\tawait emit({ type: \"message_start\", message: { ...partialMessage } });\n\t\t\t\tbreak;\n\n\t\t\tcase \"text_start\":\n\t\t\tcase \"text_delta\":\n\t\t\tcase \"text_end\":\n\t\t\tcase \"thinking_start\":\n\t\t\tcase \"thinking_delta\":\n\t\t\tcase \"thinking_end\":\n\t\t\tcase \"toolcall_start\":\n\t\t\tcase \"toolcall_delta\":\n\t\t\tcase \"toolcall_end\":\n\t\t\t\tif (partialMessage) {\n\t\t\t\t\tpartialMessage = event.partial;\n\t\t\t\t\tcontext.messages[context.messages.length - 1] = partialMessage;\n\t\t\t\t\tawait emit({\n\t\t\t\t\t\ttype: \"message_update\",\n\t\t\t\t\t\tassistantMessageEvent: event,\n\t\t\t\t\t\tmessage: { ...partialMessage },\n\t\t\t\t\t});\n\t\t\t\t}\n\t\t\t\tbreak;\n\n\t\t\tcase \"done\":\n\t\t\tcase \"error\": {\n\t\t\t\tconst finalMessage = await response.result();\n\t\t\t\tif (addedPartial) {\n\t\t\t\t\tcontext.messages[context.messages.length - 1] = finalMessage;\n\t\t\t\t} else {\n\t\t\t\t\tcontext.messages.push(finalMessage);\n\t\t\t\t}\n\t\t\t\tif (!addedPartial) {\n\t\t\t\t\tawait emit({ type: \"message_start\", message: { ...finalMessage } });\n\t\t\t\t}\n\t\t\t\tawait emit({ type: \"message_end\", message: finalMessage });\n\t\t\t\treturn finalMessage;\n\t\t\t}\n\t\t}\n\t}\n\n\tconst finalMessage = await response.result();\n\tif (addedPartial) {\n\t\tcontext.messages[context.messages.length - 1] = finalMessage;\n\t} else {\n\t\tcontext.messages.push(finalMessage);\n\t\tawait emit({ type: \"message_start\", message: { ...finalMessage } });\n\t}\n\tawait emit({ type: \"message_end\", message: finalMessage });\n\treturn finalMessage;\n}\n\n/**\n * Execute tool calls 
from an assistant message.\n */\nasync function executeToolCalls(\n\tcurrentContext: AgentContext,\n\tassistantMessage: AssistantMessage,\n\tconfig: AgentLoopConfig,\n\tsignal: AbortSignal | undefined,\n\temit: AgentEventSink,\n): Promise<ToolResultMessage[]> {\n\tconst toolCalls = assistantMessage.content.filter((c) => c.type === \"toolCall\");\n\tif (config.toolExecution === \"sequential\") {\n\t\treturn executeToolCallsSequential(currentContext, assistantMessage, toolCalls, config, signal, emit);\n\t}\n\treturn executeToolCallsParallel(currentContext, assistantMessage, toolCalls, config, signal, emit);\n}\n\nasync function executeToolCallsSequential(\n\tcurrentContext: AgentContext,\n\tassistantMessage: AssistantMessage,\n\ttoolCalls: AgentToolCall[],\n\tconfig: AgentLoopConfig,\n\tsignal: AbortSignal | undefined,\n\temit: AgentEventSink,\n): Promise<ToolResultMessage[]> {\n\tconst results: ToolResultMessage[] = [];\n\n\tfor (const toolCall of toolCalls) {\n\t\tawait emit({\n\t\t\ttype: \"tool_execution_start\",\n\t\t\ttoolCallId: toolCall.id,\n\t\t\ttoolName: toolCall.name,\n\t\t\targs: toolCall.arguments,\n\t\t});\n\n\t\tconst preparation = await prepareToolCall(currentContext, assistantMessage, toolCall, config, signal);\n\t\tif (preparation.kind === \"immediate\") {\n\t\t\tresults.push(await emitToolCallOutcome(toolCall, preparation.result, preparation.isError, emit));\n\t\t} else {\n\t\t\tconst executed = await executePreparedToolCall(preparation, signal, emit);\n\t\t\tresults.push(\n\t\t\t\tawait finalizeExecutedToolCall(\n\t\t\t\t\tcurrentContext,\n\t\t\t\t\tassistantMessage,\n\t\t\t\t\tpreparation,\n\t\t\t\t\texecuted,\n\t\t\t\t\tconfig,\n\t\t\t\t\tsignal,\n\t\t\t\t\temit,\n\t\t\t\t),\n\t\t\t);\n\t\t}\n\t}\n\n\treturn results;\n}\n\nasync function executeToolCallsParallel(\n\tcurrentContext: AgentContext,\n\tassistantMessage: AssistantMessage,\n\ttoolCalls: AgentToolCall[],\n\tconfig: AgentLoopConfig,\n\tsignal: AbortSignal | undefined,\n\temit: AgentEventSink,\n): Promise<ToolResultMessage[]> {\n\tconst results: ToolResultMessage[] = [];\n\tconst runnableCalls: PreparedToolCall[] = [];\n\n\tfor (const toolCall of toolCalls) {\n\t\tawait emit({\n\t\t\ttype: \"tool_execution_start\",\n\t\t\ttoolCallId: toolCall.id,\n\t\t\ttoolName: toolCall.name,\n\t\t\targs: toolCall.arguments,\n\t\t});\n\n\t\tconst preparation = await prepareToolCall(currentContext, assistantMessage, toolCall, config, signal);\n\t\tif (preparation.kind === \"immediate\") {\n\t\t\tresults.push(await emitToolCallOutcome(toolCall, preparation.result, preparation.isError, emit));\n\t\t} else {\n\t\t\trunnableCalls.push(preparation);\n\t\t}\n\t}\n\n\tconst runningCalls = runnableCalls.map((prepared) => ({\n\t\tprepared,\n\t\texecution: executePreparedToolCall(prepared, signal, emit),\n\t}));\n\n\tfor (const running of runningCalls) {\n\t\tconst executed = await running.execution;\n\t\tresults.push(\n\t\t\tawait finalizeExecutedToolCall(\n\t\t\t\tcurrentContext,\n\t\t\t\tassistantMessage,\n\t\t\t\trunning.prepared,\n\t\t\t\texecuted,\n\t\t\t\tconfig,\n\t\t\t\tsignal,\n\t\t\t\temit,\n\t\t\t),\n\t\t);\n\t}\n\n\treturn results;\n}\n\ntype PreparedToolCall = {\n\tkind: \"prepared\";\n\ttoolCall: AgentToolCall;\n\ttool: AgentTool<any>;\n\targs: unknown;\n};\n\ntype ImmediateToolCallOutcome = {\n\tkind: \"immediate\";\n\tresult: AgentToolResult<any>;\n\tisError: boolean;\n};\n\ntype ExecutedToolCallOutcome = {\n\tresult: AgentToolResult<any>;\n\tisError: boolean;\n};\n\nfunction prepareToolCallArguments(tool: 
AgentTool<any>, toolCall: AgentToolCall): AgentToolCall {\n\tif (!tool.prepareArguments) {\n\t\treturn toolCall;\n\t}\n\tconst preparedArguments = tool.prepareArguments(toolCall.arguments);\n\tif (preparedArguments === toolCall.arguments) {\n\t\treturn toolCall;\n\t}\n\treturn {\n\t\t...toolCall,\n\t\targuments: preparedArguments as Record<string, any>,\n\t};\n}\n\nasync function prepareToolCall(\n\tcurrentContext: AgentContext,\n\tassistantMessage: AssistantMessage,\n\ttoolCall: AgentToolCall,\n\tconfig: AgentLoopConfig,\n\tsignal: AbortSignal | undefined,\n): Promise<PreparedToolCall | ImmediateToolCallOutcome> {\n\tconst tool = currentContext.tools?.find((t) => t.name === toolCall.name);\n\tif (!tool) {\n\t\treturn {\n\t\t\tkind: \"immediate\",\n\t\t\tresult: createErrorToolResult(`Tool ${toolCall.name} not found`),\n\t\t\tisError: true,\n\t\t};\n\t}\n\n\ttry {\n\t\tconst preparedToolCall = prepareToolCallArguments(tool, toolCall);\n\t\tconst validatedArgs = validateToolArguments(tool, preparedToolCall);\n\t\tif (config.beforeToolCall) {\n\t\t\tconst beforeResult = await config.beforeToolCall(\n\t\t\t\t{\n\t\t\t\t\tassistantMessage,\n\t\t\t\t\ttoolCall,\n\t\t\t\t\targs: validatedArgs,\n\t\t\t\t\tcontext: currentContext,\n\t\t\t\t},\n\t\t\t\tsignal,\n\t\t\t);\n\t\t\tif (beforeResult?.block) {\n\t\t\t\treturn {\n\t\t\t\t\tkind: \"immediate\",\n\t\t\t\t\tresult: createErrorToolResult(beforeResult.reason || \"Tool execution was blocked\"),\n\t\t\t\t\tisError: true,\n\t\t\t\t};\n\t\t\t}\n\t\t}\n\t\treturn {\n\t\t\tkind: \"prepared\",\n\t\t\ttoolCall,\n\t\t\ttool,\n\t\t\targs: validatedArgs,\n\t\t};\n\t} catch (error) {\n\t\treturn {\n\t\t\tkind: \"immediate\",\n\t\t\tresult: createErrorToolResult(error instanceof Error ? error.message : String(error)),\n\t\t\tisError: true,\n\t\t};\n\t}\n}\n\nasync function executePreparedToolCall(\n\tprepared: PreparedToolCall,\n\tsignal: AbortSignal | undefined,\n\temit: AgentEventSink,\n): Promise<ExecutedToolCallOutcome> {\n\tconst updateEvents: Promise<void>[] = [];\n\n\ttry {\n\t\tconst result = await prepared.tool.execute(\n\t\t\tprepared.toolCall.id,\n\t\t\tprepared.args as never,\n\t\t\tsignal,\n\t\t\t(partialResult) => {\n\t\t\t\tupdateEvents.push(\n\t\t\t\t\tPromise.resolve(\n\t\t\t\t\t\temit({\n\t\t\t\t\t\t\ttype: \"tool_execution_update\",\n\t\t\t\t\t\t\ttoolCallId: prepared.toolCall.id,\n\t\t\t\t\t\t\ttoolName: prepared.toolCall.name,\n\t\t\t\t\t\t\targs: prepared.toolCall.arguments,\n\t\t\t\t\t\t\tpartialResult,\n\t\t\t\t\t\t}),\n\t\t\t\t\t),\n\t\t\t\t);\n\t\t\t},\n\t\t);\n\t\tawait Promise.all(updateEvents);\n\t\treturn { result, isError: false };\n\t} catch (error) {\n\t\tawait Promise.all(updateEvents);\n\t\treturn {\n\t\t\tresult: createErrorToolResult(error instanceof Error ? 
error.message : String(error)),\n\t\t\tisError: true,\n\t\t};\n\t}\n}\n\nasync function finalizeExecutedToolCall(\n\tcurrentContext: AgentContext,\n\tassistantMessage: AssistantMessage,\n\tprepared: PreparedToolCall,\n\texecuted: ExecutedToolCallOutcome,\n\tconfig: AgentLoopConfig,\n\tsignal: AbortSignal | undefined,\n\temit: AgentEventSink,\n): Promise<ToolResultMessage> {\n\tlet result = executed.result;\n\tlet isError = executed.isError;\n\n\tif (config.afterToolCall) {\n\t\tconst afterResult = await config.afterToolCall(\n\t\t\t{\n\t\t\t\tassistantMessage,\n\t\t\t\ttoolCall: prepared.toolCall,\n\t\t\t\targs: prepared.args,\n\t\t\t\tresult,\n\t\t\t\tisError,\n\t\t\t\tcontext: currentContext,\n\t\t\t},\n\t\t\tsignal,\n\t\t);\n\t\tif (afterResult) {\n\t\t\tresult = {\n\t\t\t\tcontent: afterResult.content ?? result.content,\n\t\t\t\tdetails: afterResult.details ?? result.details,\n\t\t\t};\n\t\t\tisError = afterResult.isError ?? isError;\n\t\t}\n\t}\n\n\treturn await emitToolCallOutcome(prepared.toolCall, result, isError, emit);\n}\n\nfunction createErrorToolResult(message: string): AgentToolResult<any> {\n\treturn {\n\t\tcontent: [{ type: \"text\", text: message }],\n\t\tdetails: {},\n\t};\n}\n\nasync function emitToolCallOutcome(\n\ttoolCall: AgentToolCall,\n\tresult: AgentToolResult<any>,\n\tisError: boolean,\n\temit: AgentEventSink,\n): Promise<ToolResultMessage> {\n\tawait emit({\n\t\ttype: \"tool_execution_end\",\n\t\ttoolCallId: toolCall.id,\n\t\ttoolName: toolCall.name,\n\t\tresult,\n\t\tisError,\n\t});\n\n\tconst toolResultMessage: ToolResultMessage = {\n\t\trole: \"toolResult\",\n\t\ttoolCallId: toolCall.id,\n\t\ttoolName: toolCall.name,\n\t\tcontent: result.content,\n\t\tdetails: result.details,\n\t\tisError,\n\t\ttimestamp: Date.now(),\n\t};\n\n\tawait emit({ type: \"message_start\", message: toolResultMessage });\n\tawait emit({ type: \"message_end\", message: toolResultMessage });\n\treturn toolResultMessage;\n}\n"]}