@mastra/mcp-docs-server 0.13.10 → 0.13.11-alpha.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (60)
  1. package/.docs/organized/changelogs/%40internal%2Fstorage-test-utils.md +9 -9
  2. package/.docs/organized/changelogs/%40internal%2Ftypes-builder.md +2 -0
  3. package/.docs/organized/changelogs/%40mastra%2Fclient-js.md +26 -26
  4. package/.docs/organized/changelogs/%40mastra%2Fcore.md +30 -30
  5. package/.docs/organized/changelogs/%40mastra%2Fdeployer-cloudflare.md +20 -20
  6. package/.docs/organized/changelogs/%40mastra%2Fdeployer-netlify.md +20 -20
  7. package/.docs/organized/changelogs/%40mastra%2Fdeployer-vercel.md +20 -20
  8. package/.docs/organized/changelogs/%40mastra%2Fdeployer.md +30 -30
  9. package/.docs/organized/changelogs/%40mastra%2Ffirecrawl.md +13 -13
  10. package/.docs/organized/changelogs/%40mastra%2Flibsql.md +9 -9
  11. package/.docs/organized/changelogs/%40mastra%2Fmcp-docs-server.md +26 -26
  12. package/.docs/organized/changelogs/%40mastra%2Fmemory.md +21 -21
  13. package/.docs/organized/changelogs/%40mastra%2Fpg.md +9 -9
  14. package/.docs/organized/changelogs/%40mastra%2Fplayground-ui.md +21 -21
  15. package/.docs/organized/changelogs/%40mastra%2Frag.md +12 -12
  16. package/.docs/organized/changelogs/%40mastra%2Fschema-compat.md +7 -0
  17. package/.docs/organized/changelogs/%40mastra%2Fserver.md +26 -26
  18. package/.docs/organized/changelogs/create-mastra.md +11 -11
  19. package/.docs/organized/changelogs/mastra.md +31 -31
  20. package/.docs/organized/code-examples/agent-network.md +4 -3
  21. package/.docs/organized/code-examples/agent.md +33 -2
  22. package/.docs/raw/agents/overview.mdx +21 -1
  23. package/.docs/raw/getting-started/mcp-docs-server.mdx +2 -2
  24. package/.docs/raw/rag/chunking-and-embedding.mdx +11 -0
  25. package/.docs/raw/reference/agents/agent.mdx +64 -38
  26. package/.docs/raw/reference/agents/generate.mdx +206 -202
  27. package/.docs/raw/reference/agents/getAgent.mdx +23 -38
  28. package/.docs/raw/reference/agents/getDefaultGenerateOptions.mdx +62 -0
  29. package/.docs/raw/reference/agents/getDefaultStreamOptions.mdx +62 -0
  30. package/.docs/raw/reference/agents/getDefaultVNextStreamOptions.mdx +62 -0
  31. package/.docs/raw/reference/agents/getDescription.mdx +30 -0
  32. package/.docs/raw/reference/agents/getInstructions.mdx +36 -73
  33. package/.docs/raw/reference/agents/getLLM.mdx +69 -0
  34. package/.docs/raw/reference/agents/getMemory.mdx +42 -119
  35. package/.docs/raw/reference/agents/getModel.mdx +36 -75
  36. package/.docs/raw/reference/agents/getScorers.mdx +62 -0
  37. package/.docs/raw/reference/agents/getTools.mdx +36 -128
  38. package/.docs/raw/reference/agents/getVoice.mdx +36 -83
  39. package/.docs/raw/reference/agents/getWorkflows.mdx +37 -74
  40. package/.docs/raw/reference/agents/stream.mdx +263 -226
  41. package/.docs/raw/reference/agents/streamVNext.mdx +208 -402
  42. package/.docs/raw/reference/cli/build.mdx +1 -0
  43. package/.docs/raw/reference/rag/chunk.mdx +51 -2
  44. package/.docs/raw/reference/scorers/answer-relevancy.mdx +6 -6
  45. package/.docs/raw/reference/scorers/bias.mdx +6 -6
  46. package/.docs/raw/reference/scorers/completeness.mdx +2 -2
  47. package/.docs/raw/reference/scorers/content-similarity.mdx +1 -1
  48. package/.docs/raw/reference/scorers/create-scorer.mdx +445 -0
  49. package/.docs/raw/reference/scorers/faithfulness.mdx +6 -6
  50. package/.docs/raw/reference/scorers/hallucination.mdx +6 -6
  51. package/.docs/raw/reference/scorers/keyword-coverage.mdx +2 -2
  52. package/.docs/raw/reference/scorers/mastra-scorer.mdx +116 -158
  53. package/.docs/raw/reference/scorers/toxicity.mdx +2 -2
  54. package/.docs/raw/scorers/custom-scorers.mdx +166 -268
  55. package/.docs/raw/scorers/overview.mdx +21 -13
  56. package/.docs/raw/server-db/local-dev-playground.mdx +3 -3
  57. package/package.json +5 -5
  58. package/.docs/raw/reference/agents/createTool.mdx +0 -241
  59. package/.docs/raw/reference/scorers/custom-code-scorer.mdx +0 -155
  60. package/.docs/raw/reference/scorers/llm-scorer.mdx +0 -210
@@ -1,45 +1,51 @@
  ---
- title: "Reference: Agent.stream() | Streaming | Agents | Mastra Docs"
- description: Documentation for the `.stream()` method in Mastra agents, which enables real-time streaming of responses.
+ title: "Reference: stream() | Agents | Mastra Docs"
+ description: "Documentation for the `.stream()` method in Mastra agents, which enables real-time streaming of responses."
  ---

- # `stream()`
+ # stream()

- The `stream()` method enables real-time streaming of responses from an agent. This method accepts `messages` and an optional `options` object as parameters, similar to `generate()`.
+ The `.stream()` method enables real-time streaming of responses from an agent. This method accepts messages and optional streaming options.

- ## Parameters
-
- ### `messages`
+ ## Usage example

- The `messages` parameter can be:
+ ```typescript showLineNumbers copy
+ const response = await agent.stream("message for agent");
+ ```

- - A single string
- - An array of strings
- - An array of message objects with `role` and `content` properties
- - An array of `UIMessageWithMetadata` objects (for messages with metadata)
+ ## Parameters

- The message object structures:
+ <PropertiesTable
+ content={[
+ {
+ name: "messages",
+ type: "string | string[] | CoreMessage[] | AiMessageType[] | UIMessageWithMetadata[]",
+ description: "The messages to send to the agent. Can be a single string, array of strings, or structured message objects.",
+ },
+ {
+ name: "options",
+ type: "AgentStreamOptions<OUTPUT, EXPERIMENTAL_OUTPUT>",
+ isOptional: true,
+ description: "Optional configuration for the streaming process.",
+ },
+ ]}
+ />

- ```typescript
- interface Message {
- role: "system" | "user" | "assistant";
- content: string;
- }
+ ## Extended usage example

- // For messages with metadata
- interface UIMessageWithMetadata {
- role: "user" | "assistant";
- content: string;
- parts: Array<{ type: string; text?: string; [key: string]: any }>;
- metadata?: Record<string, unknown>; // Optional metadata field
- }
+ ```typescript showLineNumbers copy
+ const response = await agent.stream("message for agent", {
+ temperature: 0.7,
+ maxSteps: 3,
+ memory: {
+ thread: "user-123",
+ resource: "test-app"
+ },
+ toolChoice: "auto"
+ });
  ```

- When using `UIMessageWithMetadata`, the metadata will be preserved throughout the conversation and stored with the messages in memory.
-
- ### `options` (Optional)
-
- An optional object that can include configuration for output structure, memory management, tool usage, telemetry, and more.
+ ### Options parameters

  <PropertiesTable
  content={[
@@ -70,6 +76,13 @@ An optional object that can include configuration for output structure, memory m
  description:
  "Custom instructions that override the agent's default instructions for this specific generation. Useful for dynamically modifying agent behavior without creating a new agent instance.",
  },
+ {
+ name: "output",
+ type: "Zod schema | JsonSchema7",
+ isOptional: true,
+ description:
+ "Defines the expected structure of the output. Can be a JSON Schema object or a Zod schema.",
+ },
  {
  name: "memory",
  type: "object",
@@ -97,7 +110,7 @@ An optional object that can include configuration for output structure, memory m
  name: "options",
  type: "MemoryConfig",
  isOptional: true,
- description: "Configuration for memory behavior, like message history and semantic recall. See `MemoryConfig` below."
+ description: "Configuration for memory behavior, like message history and semantic recall."
  }]
  }
  ]
@@ -107,7 +120,7 @@ An optional object that can include configuration for output structure, memory m
  type: "number",
  isOptional: true,
  defaultValue: "5",
- description: "Maximum number of steps allowed during streaming.",
+ description: "Maximum number of execution steps allowed.",
  },
  {
  name: "maxRetries",
@@ -121,27 +134,55 @@ An optional object that can include configuration for output structure, memory m
  type: "MemoryConfig",
  isOptional: true,
  description:
- "**Deprecated.** Use `memory.options` instead. Configuration options for memory management. See MemoryConfig section below for details.",
+ "**Deprecated.** Use `memory.options` instead. Configuration options for memory management.",
+ properties: [
+ {
+ parameters: [{
+ name: "lastMessages",
+ type: "number | false",
+ isOptional: true,
+ description: "Number of recent messages to include in context, or false to disable."
+ }]
+ },
+ {
+ parameters: [{
+ name: "semanticRecall",
+ type: "boolean | { topK: number; messageRange: number | { before: number; after: number }; scope?: 'thread' | 'resource' }",
+ isOptional: true,
+ description: "Enable semantic recall to find relevant past messages. Can be a boolean or detailed configuration."
+ }]
+ },
+ {
+ parameters: [{
+ name: "workingMemory",
+ type: "WorkingMemory",
+ isOptional: true,
+ description: "Configuration for working memory functionality."
+ }]
+ },
+ {
+ parameters: [{
+ name: "threads",
+ type: "{ generateTitle?: boolean | { model: DynamicArgument<MastraLanguageModel>; instructions?: DynamicArgument<string> } }",
+ isOptional: true,
+ description: "Thread-specific configuration, including automatic title generation."
+ }]
+ }
+ ]
  },
  {
  name: "onFinish",
- type: "StreamTextOnFinishCallback | StreamObjectOnFinishCallback",
- isOptional: true,
- description: "Callback function called when streaming is complete.",
- },
- {
- name: "onStepFinish",
- type: "GenerateTextOnStepFinishCallback<any> | never",
+ type: "StreamTextOnFinishCallback<any> | StreamObjectOnFinishCallback<OUTPUT>",
  isOptional: true,
  description:
- "Callback function called after each step during streaming. Unavailable for structured output",
+ "Callback function called when streaming completes. Receives the final result.",
  },
  {
- name: "output",
- type: "Zod schema | JsonSchema7",
+ name: "onStepFinish",
+ type: "StreamTextOnStepFinishCallback<any> | never",
  isOptional: true,
  description:
- "Defines the expected structure of the output. Can be a JSON Schema object or a Zod schema.",
+ "Callback function called after each execution step. Receives step details as a JSON string. Unavailable for structured output",
  },
  {
  name: "resourceId",
@@ -155,7 +196,41 @@ An optional object that can include configuration for output structure, memory m
  type: "TelemetrySettings",
  isOptional: true,
  description:
- "Settings for telemetry collection during streaming. See TelemetrySettings section below for details.",
+ "Settings for telemetry collection during streaming.",
+ properties: [
+ {
+ parameters: [{
+ name: "isEnabled",
+ type: "boolean",
+ isOptional: true,
+ description: "Enable or disable telemetry. Disabled by default while experimental."
+ }]
+ },
+ {
+ parameters: [{
+ name: "recordInputs",
+ type: "boolean",
+ isOptional: true,
+ description: "Enable or disable input recording. Enabled by default. You might want to disable input recording to avoid recording sensitive information."
+ }]
+ },
+ {
+ parameters: [{
+ name: "recordOutputs",
+ type: "boolean",
+ isOptional: true,
+ description: "Enable or disable output recording. Enabled by default. You might want to disable output recording to avoid recording sensitive information."
+ }]
+ },
+ {
+ parameters: [{
+ name: "functionId",
+ type: "string",
+ isOptional: true,
+ description: "Identifier for this function. Used to group telemetry data by function."
+ }]
+ }
+ ]
  },
  {
  name: "temperature",
@@ -177,13 +252,43 @@ An optional object that can include configuration for output structure, memory m
  isOptional: true,
  defaultValue: "'auto'",
  description: "Controls how the agent uses tools during streaming.",
+ properties: [
+ {
+ parameters: [{
+ name: "'auto'",
+ type: "string",
+ description: "Let the model decide whether to use tools (default)."
+ }]
+ },
+ {
+ parameters: [{
+ name: "'none'",
+ type: "string",
+ description: "Do not use any tools."
+ }]
+ },
+ {
+ parameters: [{
+ name: "'required'",
+ type: "string",
+ description: "Require the model to use at least one tool."
+ }]
+ },
+ {
+ parameters: [{
+ name: "{ type: 'tool'; toolName: string }",
+ type: "object",
+ description: "Require the model to use a specific tool by name."
+ }]
+ }
+ ]
  },
  {
  name: "toolsets",
  type: "ToolsetsInput",
  isOptional: true,
  description:
- "Additional toolsets to make available to the agent during this stream.",
+ "Additional toolsets to make available to the agent during streaming.",
  },
  {
  name: "clientTools",
@@ -196,242 +301,174 @@ An optional object that can include configuration for output structure, memory m
  name: "savePerStep",
  type: "boolean",
  isOptional: true,
- description: "Save messages incrementally after each generation step completes (default: false)",
- }
- ]}
- />
-
- #### MemoryConfig
-
- Configuration options for memory management:
-
- <PropertiesTable
- content={[
- {
- name: "lastMessages",
- type: "number | false",
- isOptional: true,
- description:
- "Number of most recent messages to include in context. Set to false to disable.",
+ description: "Save messages incrementally after each stream step completes (default: false).",
  },
  {
- name: "semanticRecall",
- type: "boolean | object",
+ name: "providerOptions",
+ type: "Record<string, Record<string, JSONValue>>",
  isOptional: true,
- description:
- "Configuration for semantic memory recall. Can be boolean or detailed config.",
+ description: "Additional provider-specific options that are passed through to the underlying LLM provider. The structure is `{ providerName: { optionKey: value } }`. For example: `{ openai: { reasoningEffort: 'high' }, anthropic: { maxTokens: 1000 } }`.",
  properties: [
  {
- type: "number",
- parameters: [
- {
- name: "topK",
- type: "number",
- isOptional: true,
- description:
- "Number of most semantically similar messages to retrieve.",
- },
- ],
+ parameters: [{
+ name: "openai",
+ type: "Record<string, JSONValue>",
+ isOptional: true,
+ description: "OpenAI-specific options. Example: `{ reasoningEffort: 'high' }`"
+ }]
  },
  {
- type: "number | object",
- parameters: [
- {
- name: "messageRange",
- type: "number | { before: number; after: number }",
- isOptional: true,
- description:
- "Range of messages to consider for semantic search. Can be a single number or before/after configuration.",
- },
- ],
+ parameters: [{
+ name: "anthropic",
+ type: "Record<string, JSONValue>",
+ isOptional: true,
+ description: "Anthropic-specific options. Example: `{ maxTokens: 1000 }`"
+ }]
  },
- ],
- },
- {
- name: "workingMemory",
- type: "object",
- isOptional: true,
- description: "Configuration for working memory.",
- properties: [
  {
- type: "boolean",
- parameters: [
- {
- name: "enabled",
- type: "boolean",
- isOptional: true,
- description: "Whether to enable working memory.",
- },
- ],
+ parameters: [{
+ name: "google",
+ type: "Record<string, JSONValue>",
+ isOptional: true,
+ description: "Google-specific options. Example: `{ safetySettings: [...] }`"
+ }]
  },
  {
- type: "string",
- parameters: [
- {
- name: "template",
- type: "string",
- isOptional: true,
- description: "Template to use for working memory.",
- },
- ],
- },
- ],
+ parameters: [{
+ name: "[providerName]",
+ type: "Record<string, JSONValue>",
+ isOptional: true,
+ description: "Other provider-specific options. The key is the provider name and the value is a record of provider-specific options."
+ }]
+ }
+ ]
  },
  {
- name: "threads",
- type: "object",
+ name: "runId",
+ type: "string",
  isOptional: true,
- description: "Thread-specific memory configuration.",
- properties: [
- {
- type: "boolean | object",
- parameters: [
- {
- name: "generateTitle",
- type: "boolean | { model: LanguageModelV1 | ((ctx: RuntimeContext) => LanguageModelV1 | Promise<LanguageModelV1>), instructions: string | ((ctx: RuntimeContext) => string | Promise<string>) }",
- isOptional: true,
- description:
- `Controls automatic thread title generation from the user's first message. Can be a boolean to enable/disable using the agent's model, or an object specifying a custom model and/or custom instructions for title generation (useful for cost optimization or title customization).
- Example: { model: openai('gpt-4.1-nano'), instructions: 'Generate a concise title based on the initial user message.' }`,
- },
- ],
- },
- ],
+ description: "Unique ID for this generation run. Useful for tracking and debugging purposes.",
  },
- ]}
- />
-
- #### TelemetrySettings
-
- Settings for telemetry collection during streaming:
-
- <PropertiesTable
- content={[
  {
- name: "isEnabled",
- type: "boolean",
+ name: "runtimeContext",
+ type: "RuntimeContext",
  isOptional: true,
- defaultValue: "false",
- description:
- "Enable or disable telemetry. Disabled by default while experimental.",
+ description: "Runtime context for dependency injection and contextual information.",
  },
  {
- name: "recordInputs",
- type: "boolean",
+ name: "maxTokens",
+ type: "number",
  isOptional: true,
- defaultValue: "true",
- description:
- "Enable or disable input recording. You might want to disable this to avoid recording sensitive information, reduce data transfers, or increase performance.",
+ description: "Maximum number of tokens to generate.",
  },
  {
- name: "recordOutputs",
- type: "boolean",
+ name: "topP",
+ type: "number",
  isOptional: true,
- defaultValue: "true",
- description:
- "Enable or disable output recording. You might want to disable this to avoid recording sensitive information, reduce data transfers, or increase performance.",
+ description: "Nucleus sampling. This is a number between 0 and 1. It is recommended to set either `temperature` or `topP`, but not both.",
  },
  {
- name: "functionId",
- type: "string",
+ name: "topK",
+ type: "number",
  isOptional: true,
- description:
- "Identifier for this function. Used to group telemetry data by function.",
+ description: "Only sample from the top K options for each subsequent token. Used to remove 'long tail' low probability responses.",
  },
  {
- name: "metadata",
- type: "Record<string, AttributeValue>",
+ name: "presencePenalty",
+ type: "number",
  isOptional: true,
- description:
- "Additional information to include in the telemetry data. AttributeValue can be string, number, boolean, array of these types, or null.",
+ description: "Presence penalty setting. It affects the likelihood of the model to repeat information that is already in the prompt. A number between -1 (increase repetition) and 1 (maximum penalty, decrease repetition).",
  },
  {
- name: "tracer",
- type: "Tracer",
+ name: "frequencyPenalty",
+ type: "number",
  isOptional: true,
- description:
- "A custom OpenTelemetry tracer instance to use for the telemetry data. See OpenTelemetry documentation for details.",
+ description: "Frequency penalty setting. It affects the likelihood of the model to repeatedly use the same words or phrases. A number between -1 (increase repetition) and 1 (maximum penalty, decrease repetition).",
+ },
+ {
+ name: "stopSequences",
+ type: "string[]",
+ isOptional: true,
+ description: "Stop sequences. If set, the model will stop generating text when one of the stop sequences is generated.",
  },
+ {
+ name: "seed",
+ type: "number",
+ isOptional: true,
+ description: "The seed (integer) to use for random sampling. If set and supported by the model, calls will generate deterministic results.",
+ },
+ {
+ name: "headers",
+ type: "Record<string, string | undefined>",
+ isOptional: true,
+ description: "Additional HTTP headers to be sent with the request. Only applicable for HTTP-based providers.",
+ }
  ]}
  />

  ## Returns

- The return value of the `stream()` method depends on the options provided, specifically the `output` option.
-
- ### PropertiesTable for Return Values
-
  <PropertiesTable
  content={[
  {
  name: "textStream",
- type: "AsyncIterable<string>",
+ type: "AsyncGenerator<string>",
+ isOptional: true,
+ description:
+ "Async generator that yields text chunks as they become available.",
+ },
+ {
+ name: "fullStream",
+ type: "Promise<ReadableStream>",
  isOptional: true,
  description:
- "Stream of text chunks. Present when output is 'text' (no schema provided) or when using `experimental_output`.",
+ "Promise that resolves to a ReadableStream for the complete response.",
  },
  {
- name: "objectStream",
- type: "AsyncIterable<object>",
+ name: "text",
+ type: "Promise<string>",
  isOptional: true,
  description:
- "Stream of structured data. Present only when using `output` option with a schema.",
+ "Promise that resolves to the complete text response.",
  },
  {
- name: "partialObjectStream",
- type: "AsyncIterable<object>",
+ name: "usage",
+ type: "Promise<{ totalTokens: number; promptTokens: number; completionTokens: number }>",
  isOptional: true,
  description:
- "Stream of structured data. Present only when using `experimental_output` option.",
+ "Promise that resolves to token usage information.",
  },
  {
- name: "object",
- type: "Promise<object>",
+ name: "finishReason",
+ type: "Promise<string>",
  isOptional: true,
  description:
- "Promise that resolves to the final structured output. Present when using either `output` or `experimental_output` options.",
+ "Promise that resolves to the reason why the stream finished.",
+ },
+ {
+ name: "toolCalls",
+ type: "Promise<Array<ToolCall>>",
+ isOptional: true,
+ description:
+ "Promise that resolves to the tool calls made during the streaming process.",
+ properties: [
+ {
+ parameters: [{
+ name: "toolName",
+ type: "string",
+ required: true,
+ description: "The name of the tool invoked."
+ }]
+ },
+ {
+ parameters: [{
+ name: "args",
+ type: "any",
+ required: true,
+ description: "The arguments passed to the tool."
+ }]
+ }
+ ]
  },
  ]}
  />
-
- ## Examples
-
- ### Basic Text Streaming
-
- ```typescript
- const stream = await myAgent.stream([
- { role: "user", content: "Tell me a story." },
- ]);
-
- for await (const chunk of stream.textStream) {
- process.stdout.write(chunk);
- }
- ```
-
- ### Structured Output Streaming with Thread Context
-
- ```typescript
- const schema = {
- type: "object",
- properties: {
- summary: { type: "string" },
- nextSteps: { type: "array", items: { type: "string" } },
- },
- required: ["summary", "nextSteps"],
- };
-
- const response = await myAgent.stream("What should we do next?", {
- output: schema,
- threadId: "project-123",
- onFinish: (text) => console.log("Finished:", text),
- });
-
- for await (const chunk of response.textStream) {
- console.log(chunk);
- }
-
- const result = await response.object;
- console.log("Final structured result:", result);
- ```
-
- The key difference between Agent's `stream()` and LLM's `stream()` is that Agents maintain conversation context through `threadId`, can access tools, and integrate with the agent's memory system.
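The rewritten `stream.mdx` documents a return shape where `textStream` is an async generator of chunks and `text` is a promise resolving to the complete response. The sketch below shows how consuming that shape works; it is a minimal stand-in, assuming only the documented shape — `StreamResult`, `yieldChunks`, and `fakeStream` are invented for illustration and are not part of Mastra's API.

```typescript
// Hypothetical stand-in mirroring the documented return shape of
// `agent.stream()`: a `textStream` async generator plus a `text` promise.
// The real object comes from a Mastra Agent; `fakeStream` is invented here.

interface StreamResult {
  textStream: AsyncGenerator<string>;
  text: Promise<string>;
}

// Yields pre-split chunks; a real agent yields model tokens as they arrive.
async function* yieldChunks(parts: string[]): AsyncGenerator<string> {
  for (const part of parts) {
    yield part;
  }
}

function fakeStream(parts: string[]): StreamResult {
  return {
    textStream: yieldChunks(parts),
    text: Promise.resolve(parts.join("")),
  };
}

async function main(): Promise<void> {
  const response = fakeStream(["Hello, ", "agent!"]);

  // Consume chunks incrementally with `for await`, as the docs show.
  let assembled = "";
  for await (const chunk of response.textStream) {
    assembled += chunk;
  }

  // `text` resolves to the same complete response.
  console.log(assembled);           // Hello, agent!
  console.log(await response.text); // Hello, agent!
}

main();
```

Because `textStream` is a one-shot generator, iterate it once and keep the assembled string (or use the `text` promise) if the full response is needed afterwards.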