@mastra/mcp-docs-server 0.13.11 → 0.13.12-alpha.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (73)
  1. package/.docs/organized/changelogs/%40internal%2Fstorage-test-utils.md +19 -19
  2. package/.docs/organized/changelogs/%40internal%2Ftypes-builder.md +2 -0
  3. package/.docs/organized/changelogs/%40mastra%2Fchroma.md +14 -14
  4. package/.docs/organized/changelogs/%40mastra%2Fclickhouse.md +14 -14
  5. package/.docs/organized/changelogs/%40mastra%2Fclient-js.md +42 -42
  6. package/.docs/organized/changelogs/%40mastra%2Fcloudflare-d1.md +14 -14
  7. package/.docs/organized/changelogs/%40mastra%2Fcore.md +45 -45
  8. package/.docs/organized/changelogs/%40mastra%2Fdeployer-cloudflare.md +53 -53
  9. package/.docs/organized/changelogs/%40mastra%2Fdeployer-netlify.md +49 -49
  10. package/.docs/organized/changelogs/%40mastra%2Fdeployer-vercel.md +49 -49
  11. package/.docs/organized/changelogs/%40mastra%2Fdeployer.md +54 -54
  12. package/.docs/organized/changelogs/%40mastra%2Fdynamodb.md +20 -20
  13. package/.docs/organized/changelogs/%40mastra%2Fevals.md +14 -14
  14. package/.docs/organized/changelogs/%40mastra%2Ffirecrawl.md +21 -21
  15. package/.docs/organized/changelogs/%40mastra%2Flance.md +22 -22
  16. package/.docs/organized/changelogs/%40mastra%2Flibsql.md +34 -34
  17. package/.docs/organized/changelogs/%40mastra%2Fmcp-docs-server.md +41 -41
  18. package/.docs/organized/changelogs/%40mastra%2Fmcp.md +33 -33
  19. package/.docs/organized/changelogs/%40mastra%2Fmemory.md +23 -23
  20. package/.docs/organized/changelogs/%40mastra%2Fmongodb.md +20 -20
  21. package/.docs/organized/changelogs/%40mastra%2Fpg.md +37 -37
  22. package/.docs/organized/changelogs/%40mastra%2Fplayground-ui.md +49 -49
  23. package/.docs/organized/changelogs/%40mastra%2Fqdrant.md +21 -21
  24. package/.docs/organized/changelogs/%40mastra%2Frag.md +20 -20
  25. package/.docs/organized/changelogs/%40mastra%2Fschema-compat.md +8 -0
  26. package/.docs/organized/changelogs/%40mastra%2Fserver.md +44 -44
  27. package/.docs/organized/changelogs/%40mastra%2Fvoice-google-gemini-live.md +14 -0
  28. package/.docs/organized/changelogs/create-mastra.md +26 -26
  29. package/.docs/organized/changelogs/mastra.md +63 -63
  30. package/.docs/organized/code-examples/agent.md +292 -275
  31. package/.docs/raw/agents/input-processors.mdx +25 -19
  32. package/.docs/raw/agents/output-processors.mdx +376 -0
  33. package/.docs/raw/agents/overview.mdx +165 -188
  34. package/.docs/raw/agents/streaming.mdx +11 -5
  35. package/.docs/raw/community/contributing-templates.mdx +1 -1
  36. package/.docs/raw/deployment/cloud-providers/amazon-ec2.mdx +9 -9
  37. package/.docs/raw/deployment/cloud-providers/aws-lambda.mdx +27 -33
  38. package/.docs/raw/deployment/cloud-providers/azure-app-services.mdx +12 -12
  39. package/.docs/raw/deployment/cloud-providers/digital-ocean.mdx +17 -17
  40. package/.docs/raw/getting-started/templates.mdx +1 -1
  41. package/.docs/raw/rag/vector-databases.mdx +9 -1
  42. package/.docs/raw/reference/agents/agent.mdx +9 -3
  43. package/.docs/raw/reference/agents/generate.mdx +80 -3
  44. package/.docs/raw/reference/agents/getDefaultGenerateOptions.mdx +1 -1
  45. package/.docs/raw/reference/agents/getDefaultStreamOptions.mdx +1 -1
  46. package/.docs/raw/reference/agents/getDefaultVNextStreamOptions.mdx +1 -1
  47. package/.docs/raw/reference/agents/getLLM.mdx +1 -1
  48. package/.docs/raw/reference/agents/streamVNext.mdx +88 -5
  49. package/.docs/raw/reference/cli/scorers.mdx +160 -0
  50. package/.docs/raw/reference/rag/chroma.mdx +158 -17
  51. package/.docs/raw/reference/templates.mdx +3 -3
  52. package/.docs/raw/reference/tools/create-tool.mdx +2 -2
  53. package/.docs/raw/reference/tools/mcp-client.mdx +9 -9
  54. package/.docs/raw/reference/tools/mcp-server.mdx +5 -5
  55. package/.docs/raw/reference/workflows/branch.mdx +1 -1
  56. package/.docs/raw/reference/workflows/create-run.mdx +4 -4
  57. package/.docs/raw/reference/workflows/execute.mdx +2 -2
  58. package/.docs/raw/reference/workflows/foreach.mdx +1 -1
  59. package/.docs/raw/reference/workflows/run-methods/cancel.mdx +58 -0
  60. package/.docs/raw/reference/workflows/{resume.mdx → run-methods/resume.mdx} +7 -5
  61. package/.docs/raw/reference/workflows/{start.mdx → run-methods/start.mdx} +5 -5
  62. package/.docs/raw/reference/workflows/{stream.mdx → run-methods/stream.mdx} +6 -3
  63. package/.docs/raw/reference/workflows/{streamVNext.mdx → run-methods/streamVNext.mdx} +14 -9
  64. package/.docs/raw/reference/workflows/{watch.mdx → run-methods/watch.mdx} +12 -12
  65. package/.docs/raw/reference/workflows/run.mdx +104 -0
  66. package/.docs/raw/reference/workflows/step.mdx +0 -1
  67. package/.docs/raw/reference/workflows/workflow.mdx +3 -2
  68. package/.docs/raw/{reference/workflows → server-db}/snapshots.mdx +2 -2
  69. package/.docs/raw/voice/overview.mdx +81 -2
  70. package/.docs/raw/voice/speech-to-speech.mdx +45 -0
  71. package/.docs/raw/workflows/overview.mdx +11 -4
  72. package/.docs/raw/workflows-legacy/overview.mdx +8 -8
  73. package/package.json +4 -4
@@ -5,47 +5,72 @@ description: Overview of agents in Mastra, detailing their capabilities and how
 
 # Using Agents
 
- **Agents** are one of the core Mastra primitives. Agents use a language model to decide on a sequence of actions. They can call functions (known as _tools_). You can compose them with *workflows* (the other main Mastra primitive), either by giving an agent a workflow as a tool, or by running an agent from within a workflow.
+ Agents let you build intelligent assistants powered by language models that can make decisions and perform actions. Each agent has required instructions and an LLM, with optional tools and memory.
 
- Agents can run autonomously in a loop, run once, or take turns with a user. You can give short-term, long-term, and working memory of their user interactions. They can stream text or return structured output (ie, JSON). They can access third-party APIs, query knowledge bases, and so on.
+ An agent coordinates conversations, calls tools when needed, maintains context through memory, and produces responses tailored to the interaction. Agents can operate on their own or work as part of larger workflows.
 
- Additionally, agents support dynamic configuration, allowing you to change their instructions, model, tools, and memory based on runtime context like user preferences, subscription tiers, or environment settings.
+ ![Agents overview](/image/agents/agents-overview.jpg)
 
- ## 1. Creating an Agent
+ To create an agent:
 
- To create an agent in Mastra, you use the `Agent` class and define its properties:
+ - Define **instructions** with the `Agent` class and set the **LLM** it will use.
+ - Optionally configure **tools** and **memory** to extend functionality.
+ - Run the agent to generate responses, with support for streaming, structured output, and dynamic configuration.
+
+ This approach provides type safety and runtime validation, ensuring reliable behavior across all agent interactions.
+
+ > **📹 Watch**: An introduction to agents, and how they compare to workflows [YouTube (7 minutes)](https://youtu.be/0jg2g3sNvgw)
+
+ ## Getting started
+
+ To use agents, install the required dependencies:
+
+ ```bash
+ npm install @mastra/core @ai-sdk/openai
+ ```
+
+ > Mastra works with all AI SDK providers. See [Model Providers](../getting-started/model-providers.mdx) for more information.
+
+ Import the necessary class from the agents module, and an LLM provider:
 
 ```typescript filename="src/mastra/agents/test-agent.ts" showLineNumbers copy
 import { openai } from "@ai-sdk/openai";
 import { Agent } from "@mastra/core/agent";
-
- const testAgent = new Agent({
-   name: "test-agent",
-   instructions: "You are a helpful assistant.",
-   model: openai("gpt-4o")
- });
 ```
+ ### LLM providers
 
- **Note:** Ensure that you have set the necessary environment variables, such as your OpenAI API key, in your `.env` file:
+ Each LLM provider needs its own API key, named using the provider’s identifier:
 
 ```bash filename=".env" copy
 OPENAI_API_KEY=<your-api-key>
 ```
 
- Also, make sure you have the `@mastra/core` package installed:
+ > See the [AI SDK Providers](https://ai-sdk.dev/providers/ai-sdk-providers) in the Vercel AI SDK docs.
+
+ ### Creating an agent
+
+ To create an agent in Mastra, use the `Agent` class. Every agent must include `instructions` to define its behavior, and a `model` parameter to specify the LLM provider and model:
+
+ ```typescript filename="src/mastra/agents/test-agent.ts" showLineNumbers copy
+ import { openai } from "@ai-sdk/openai";
+ import { Agent } from "@mastra/core/agent";
 
- ```bash npm2yarn copy
- npm install @mastra/core@latest
+ export const testAgent = new Agent({
+   name: "test-agent",
+   instructions: "You are a helpful assistant.",
+   model: openai("gpt-4o-mini")
+ });
 ```
 
- All agent properties (instructions, model, tools, memory) can be configured dynamically using runtime context. See the [Dynamic Agents guide](./dynamic-agents.mdx) for examples of how to adapt agent behavior based on user context, subscription tiers, or other runtime variables.
+ > See [Agent](../../reference/agents/agent.mdx) for more information.
+
 
- ### Registering the Agent
+ ### Registering an agent
 
- Register your agent with Mastra to enable logging and access to configured tools and integrations:
+ Register your agent in the Mastra instance:
 
 ```typescript showLineNumbers filename="src/mastra/index.ts" copy
- import { Mastra } from "@mastra/core";
+ import { Mastra } from "@mastra/core/mastra";
 import { testAgent } from './agents/test-agent';
 
 export const mastra = new Mastra({
@@ -54,145 +79,111 @@ export const mastra = new Mastra({
 });
 ```
 
- ## 2. Generating and streaming text
+ ## Referencing an agent
+
+ You can call agents from workflow steps, tools, the Mastra Client, or the command line. Get a reference by calling `.getAgent()` on your `mastra` or `mastraClient` instance, depending on your setup:
+
+ ```typescript showLineNumbers copy
+ const testAgent = mastra.getAgent("testAgent");
+ ```
+
+ > See [Calling agents](../../examples/agents/calling-agents.mdx) for more information.
+
+ ## Generating responses
+
+ Use `.generate()` to get a response from an agent. Pass a single string for simple prompts, an array of strings when providing multiple pieces of context, or an array of message objects with `role` and `content` for precise control over roles and conversational flows.
+
+ > See [.generate()](../../reference/agents/generate.mdx) for more information.
 
 ### Generating text
 
- Use the `.generate()` method to have your agent produce text responses:
+ Call `.generate()` with an array of message objects that include `role` and `content`:
 
 ```typescript showLineNumbers copy
 const response = await testAgent.generate([
-   { role: "user", content: "Hello, how can you assist me today?" },
+   { role: "user", content: "Help me organize my day" },
+   { role: "user", content: "My day starts at 9am and finishes at 5.30pm" },
+   { role: "user", content: "I take lunch between 12:30 and 13:30" },
+   { role: "user", content: "I have meetings Monday to Friday between 10:30 and 11:30" }
 ]);
 
- console.log("Agent:", response.text);
+ console.log(response.text);
 ```
 
- For more details about the generate method and its options, see the [generate reference documentation](/reference/agents/generate).
+ ## Streaming responses
+
+ Use `.stream()` for real-time responses. Pass a single string for simple prompts, an array of strings when providing multiple pieces of context, or an array of message objects with `role` and `content` for precise control over roles and conversational flows.
+
+ > See [.stream()](../../reference/agents/stream.mdx) for more information.
 
- ### Streaming responses
+ ### Streaming text
 
- For more real-time responses, you can stream the agent's response:
+ Call `.stream()` with an array of message objects that include `role` and `content`:
 
 ```typescript showLineNumbers copy
 const stream = await testAgent.stream([
-   { role: "user", content: "Tell me a story." },
+   { role: "user", content: "Help me organize my day" },
+   { role: "user", content: "My day starts at 9am and finishes at 5.30pm" },
+   { role: "user", content: "I take lunch between 12:30 and 13:30" },
+   { role: "user", content: "I have meetings Monday to Friday between 10:30 and 11:30" }
 ]);
 
- console.log("Agent:");
-
 for await (const chunk of stream.textStream) {
   process.stdout.write(chunk);
 }
 ```
 
- For more details about streaming responses, see the [stream reference documentation](/reference/agents/stream).
-
- ## 3. Describing images
+ ### Completion using `onFinish()`
 
- Agents can analyze and describe images by processing both the visual content and any text within them. To enable image analysis, pass an object with `type: 'image'` and the image URL in the content array. You can combine image content with text prompts to guide the agent's analysis.
+ When streaming responses, the `onFinish()` callback runs after the LLM finishes generating its response and all tool executions are complete.
+ It provides the final `text`, execution `steps`, `finishReason`, token `usage` statistics, and other metadata useful for monitoring or logging.
 
 ```typescript showLineNumbers copy
- await testAgent.generate([
-   {
-     role: 'user',
-     content: [
-       {
-         type: 'image',
-         image: "https://example.com/images/test-image.jpg",
-         mimeType: "image/jpeg"
-       },
-       {
-         type: 'text',
-         text: 'Describe the image in detail, and extract all the text in the image.',
-       },
-     ],
-   },
- ]);
- ```
-
- ## 4. Structured Output
-
- Agents can return structured data by providing a JSON Schema or using a Zod schema.
-
- ### Using JSON Schema
-
- ```typescript showLineNumbers copy
- const schema = {
-   type: "object",
-   properties: {
-     summary: { type: "string" },
-     keywords: { type: "array", items: { type: "string" } },
-   },
-   additionalProperties: false,
-   required: ["summary", "keywords"],
- };
-
- const response = await testAgent.generate(
-   [
-     {
-       role: "user",
-       content:
-         "Please provide a summary and keywords for the following text: ...",
-     },
-   ],
-   {
-     output: schema,
-   },
- );
+ const stream = await testAgent.stream("Help me organize my day", {
+   onFinish: ({ steps, text, finishReason, usage }) => {
+     console.log({ steps, text, finishReason, usage });
+   }
+ });
 
- console.log("Structured Output:", response.object);
+ for await (const chunk of stream.textStream) {
+   process.stdout.write(chunk);
+ }
 ```
 
- ### Using Zod
-
- You can also use Zod schemas for type-safe structured outputs.
+ ## Structured output
 
- First, install Zod:
+ Agents can return structured, type-safe data by defining the expected output with either [Zod](https://zod.dev/) or [JSON Schema](https://json-schema.org/). In both cases, the parsed result is available on `response.object`, allowing you to work directly with validated and typed data.
 
- ```bash npm2yarn copy
- npm install zod
- ```
+ ### Using Zod
 
- Then, define a Zod schema and use it with the agent:
+ Define the `output` shape using [Zod](https://zod.dev/):
 
 ```typescript showLineNumbers copy
 import { z } from "zod";
 
- // Define the Zod schema
- const schema = z.object({
-   summary: z.string(),
-   keywords: z.array(z.string()),
+ const response = await testAgent.generate("Monkey, Ice Cream, Boat", {
+   experimental_output: z.object({
+     summary: z.string(),
+     keywords: z.array(z.string())
+   })
 });
 
- // Use the schema with the agent
- const response = await testAgent.generate(
-   [
-     {
-       role: "user",
-       content:
-         "Please provide a summary and keywords for the following text: ...",
-     },
-   ],
-   {
-     output: schema,
-   },
- );
-
- console.log("Structured Output:", response.object);
+ console.log(response.object);
 ```
 
 ### Using Tools
 
- If you need to generate structured output alongside tool calls, you'll need to use the `experimental_output` property instead of `output`. Here's how:
+ If you need to generate structured output alongside tool calls, you'll need to use the `experimental_output` or `structuredOutput` property instead of `output`. Here's how:
 
 ```typescript showLineNumbers copy
- const schema = z.object({
-   summary: z.string(),
-   keywords: z.array(z.string()),
+ const response = await testAgent.generate("Monkey, Ice Cream, Boat", {
+   experimental_output: z.object({
+     summary: z.string(),
+     keywords: z.array(z.string())
+   })
 });
 
- const response = await testAgent.generate(
+ const responseWithExperimentalOutput = await testAgent.generate(
   [
     {
       role: "user",
@@ -206,112 +197,98 @@ const response = await testAgent.generate(
     },
   );
 
- console.log("Structured Output:", response.object);
- ```
+ console.log("Structured Output:", responseWithExperimentalOutput.object);
 
- <br />
-
- This allows you to have strong typing and validation for the structured data returned by the agent.
-
- ## 5. Multi-step tool use
-
- Agents can be enhanced with tools - functions that extend their capabilities beyond text generation. Tools allow agents to perform calculations, access external systems, and process data. Agents not only decide whether to call tools they're given, they determine the parameters that should be given to that tool.
-
- For a detailed guide to creating and configuring tools, see the [Adding Tools documentation](/docs/agents/using-tools-and-mcp), but below are the important things to know.
-
- ### Using `maxSteps`
-
- The `maxSteps` parameter controls the maximum number of sequential LLM calls an agent can make, particularly important when using tool calls. By default, it is set to 1 to prevent infinite loops in case of misconfigured tools. You can increase this limit based on your use case:
-
- ```typescript showLineNumbers copy
- const response = await testAgent.generate(
+ const responseWithStructuredOutput = await testAgent.generate(
   [
     {
       role: "user",
       content:
-         "If a taxi driver earns $41 per hour and works 12 hours a day, how much do they earn in one day?",
+         "Please analyze this repository and provide a summary and keywords...",
     },
   ],
   {
-     maxSteps: 5, // Allow up to 5 tool usage steps
+     structuredOutput: {
+       schema: z.object({
+         summary: z.string(),
+         keywords: z.array(z.string())
+       }),
+       model: openai("gpt-4o-mini"),
+     }
   },
 );
- ```
 
- ### Using `onStepFinish`
+ console.log("Structured Output:", responseWithStructuredOutput.object);
+ ```
 
- You can monitor the progress of multi-step operations using the `onStepFinish` callback. This is useful for debugging or providing progress updates to users.
+ ## Describing images
 
- `onStepFinish` is only available when streaming or generating text without structured output.
+ Agents can analyze and describe images by processing both the visual content and any text within them. To enable image analysis, pass an object with `type: 'image'` and the image URL in the `content` array. You can combine image content with text prompts to guide the agent's analysis.
 
 ```typescript showLineNumbers copy
- const response = await testAgent.generate(
-   [{ role: "user", content: "Calculate the taxi driver's daily earnings." }],
+ const response = await testAgent.generate([
   {
-     maxSteps: 5,
-     onStepFinish: ({ text, toolCalls, toolResults }) => {
-       console.log("Step completed:", { text, toolCalls, toolResults });
-     },
-   },
- );
+     role: "user",
+     content: [
+       {
+         type: "image",
+         image: "https://placebear.com/cache/395-205.jpg",
+         mimeType: "image/jpeg"
+       },
+       {
+         type: "text",
+         text: "Describe the image in detail, and extract all the text in the image."
+       }
+     ]
+   }
+ ]);
+
+ console.log(response.text);
 ```
 
- ### Streaming steps with `onChunk`
+ ## Multi-step tool use
+
+ Agents can be enhanced with tools, functions that extend their capabilities beyond text generation. Tools allow agents to perform calculations, access external systems, and process data. Agents not only decide whether to call tools they're given, they determine the parameters that should be given to that tool.
 
- You can monitor the progress of multi-step operations using the `onChunk` callback. This is useful for debugging or providing progress updates to users.
+ For a detailed guide to creating and configuring tools, see the [Tools Overview](../tools-mcp/overview.mdx) page.
+
+ ### Using `maxSteps`
+
+ The `maxSteps` parameter controls the maximum number of sequential LLM calls an agent can make, particularly important when using tool calls. By default, it is set to 1 to prevent infinite loops in case of misconfigured tools:
 
 ```typescript showLineNumbers copy
- const stream = await testAgent.stream(
-   [{ role: "user", content: "Calculate the taxi driver's daily earnings." }],
-   {
-     maxSteps: 5,
-     onChunk: ({ chunk }) => {
-       console.log("Chunk", chunk);
-     },
-   },
- );
+ const response = await testAgent.generate("Help me organize my day", {
+   maxSteps: 5
+ });
 
- for await (const chunk of stream.textStream) {
-   console.log(chunk);
- }
+ console.log(response.text);
 ```
 
- ### Detecting completion with `onFinish`
+ ### Using `onStepFinish`
 
- The `onFinish` callback is available when streaming responses and provides detailed information about the completed interaction. It is called after the LLM has finished generating its response and all tool executions have completed.
- This callback receives the final response text, execution steps, token usage statistics, and other metadata that can be useful for monitoring and logging:
+ You can monitor the progress of multi-step operations using the `onStepFinish` callback. This is useful for debugging or providing progress updates to users.
+
+ `onStepFinish` is only available when streaming or generating text without structured output.
 
 ```typescript showLineNumbers copy
- const stream = await testAgent.stream(
-   [{ role: "user", content: "Calculate the taxi driver's daily earnings." }],
-   {
-     maxSteps: 5,
-     onFinish: ({
-       steps,
-       text,
-       finishReason, // 'complete', 'length', 'tool', etc.
-       usage, // token usage statistics
-       reasoningDetails, // additional context about the agent's decisions
-     }) => {
-       console.log("Stream complete:", {
-         totalSteps: steps.length,
-         finishReason,
-         usage,
-       });
-     },
-   },
- );
+ const response = await testAgent.generate("Help me organize my day", {
+   onStepFinish: ({ text, toolCalls, toolResults, finishReason, usage }) => {
+     console.log({ text, toolCalls, toolResults, finishReason, usage });
+   }
+ });
 ```
 
- ## 6. Testing agents locally
+ ## Testing agents locally
 
- Mastra provides a CLI command `mastra dev` to run your agents behind an API. By default, this looks for exported agents in files in the `src/mastra/agents` directory. It generates endpoints for testing your agent (eg `http://localhost:4111/api/agents/myAgent/generate`) and provides a visual playground where you can chat with an agent and view traces.
+ Use the `mastra dev` CLI command to run your agents behind a local API.
+ By default, it loads exported agents from the `src/mastra/agents` directory and creates endpoints for testing (for example, `http://localhost:4111/api/agents/myAgent/generate`).
+ It also launches a visual playground where you can chat with your agent and view execution traces.
 
- For more details, see the [Local Dev Playground](/docs/server-db/local-dev-playground) docs.
+ > For more information, see the [Local Dev Playground](/docs/server-db/local-dev-playground) documentation.
 
 ## Next Steps
 
- - Learn about Agent Memory in the [Agent Memory](./agent-memory.mdx) guide.
- - Learn about Dynamic Agent configuration in the [Dynamic Agents](./dynamic-agents.mdx) guide.
- - Learn about Agent Tools in the [Agent Tools and MCP](./using-tools-and-mcp.mdx) guide.
- - See an example agent in the [Chef Michel](../../guides/guide/chef-michel.mdx) example.
+ - [Agent Memory](./agent-memory.mdx)
+ - [Dynamic Agents](./dynamic-agents.mdx)
+ - [Agent Tools and MCP](./using-tools-and-mcp.mdx)
+ - [Calling Agents](../../examples/agents/calling-agents.mdx)
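
The rewritten overview says `.generate()` and `.stream()` accept a single string, an array of strings, or an array of message objects. As a rough, self-contained sketch of how those three input shapes collapse into one message array (the types here are illustrative stand-ins, not Mastra's actual definitions):

```typescript
// Illustrative only: a simplified message type, not Mastra's real one.
type Message = { role: "user" | "assistant" | "system"; content: string };

// Normalize the three documented input shapes into one message array:
// a bare string becomes a single user message, an array of strings
// becomes several user messages, and message objects pass through.
function toMessages(input: string | (string | Message)[]): Message[] {
  if (typeof input === "string") {
    return [{ role: "user", content: input }];
  }
  return input.map((m) =>
    typeof m === "string" ? { role: "user", content: m } : m,
  );
}

console.log(toMessages("Help me organize my day"));
console.log(toMessages(["My day starts at 9am", "I take lunch at 12:30"]));
console.log(toMessages([{ role: "user", content: "Help me organize my day" }]));
```

All three calls produce the same array shape, which is why the docs can describe the message-object form as the most precise of the three.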
@@ -3,13 +3,19 @@ title: "Using Agent Streaming | Agents | Mastra Docs"
 description: Documentation on how to stream agents
 ---
 
+ import { Callout } from "nextra/components";
+
 # Agent Streaming
 
- Agents in Mastra have access to a powerful streaming protocol! With seamless integration into tools or agents as a tool, you can stream responses directly back to your clients, creating a more interactive and engaging experience.
+ Agents in Mastra support streaming responses for real-time interaction with clients. This enables progressive rendering of responses and a better user experience.
+
+ <Callout type="info">
+ **Experimental API**: The `streamVNext` method shown in this guide is an experimental feature that will replace the current `stream()` method after additional testing and refinement. For production use, consider using the stable [`stream()` method](/docs/agents/overview#2-generating-and-streaming-text) until `streamVNext` is finalized.
+ </Callout>
 
 ## Usage
 
- To use the new protocol, you can use the `streamVNext` method on an agent. This method will return a custom MastraAgentStream. This stream extends a ReadableStream, so all basic stream methods are available.
+ The experimental streaming protocol uses the `streamVNext` method on an agent. This method returns a custom MastraAgentStream that extends ReadableStream with additional utilities.
 
 ```typescript
 const stream = await agent.streamVNext({ role: "user", content: "Tell me a story." });
@@ -30,7 +36,7 @@ Each chunk is a JSON object with the following properties:
 }
 ```
 
- We have a couple of utility functions on the stream to help you with the streaming process.
+ The stream provides several utility functions for working with streaming responses:
 
 - `stream.finishReason` - The reason the agent stopped streaming.
 - `stream.toolCalls` - The tool calls made by the agent.
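
These utilities sit on top of a stream of chunk objects. A minimal, self-contained sketch of how text chunks accumulate into a final result, with a mocked chunk stream (the `type`/`payload` fields here are hypothetical stand-ins, not the exact `streamVNext` chunk schema):

```typescript
// Hypothetical chunk shape for illustration only.
type Chunk = { type: "text-delta" | "finish"; payload: { text?: string } };

// Mocked stream standing in for an agent's chunk stream.
async function* mockChunks(): AsyncGenerator<Chunk> {
  yield { type: "text-delta", payload: { text: "Once upon " } };
  yield { type: "text-delta", payload: { text: "a time." } };
  yield { type: "finish", payload: {} };
}

// Draining the stream and concatenating text deltas mirrors what
// iterating `stream.textStream` does for you.
async function collectText(chunks: AsyncIterable<Chunk>): Promise<string> {
  let text = "";
  for await (const chunk of chunks) {
    if (chunk.type === "text-delta") text += chunk.payload.text ?? "";
  }
  return text;
}

(async () => {
  console.log(await collectText(mockChunks())); // "Once upon a time."
})();
```

Properties like `finishReason` are the same idea applied to the terminal chunk: the stream is consumed once, and summary values are resolved when it completes.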
@@ -86,7 +92,7 @@ export const weatherInfo = createTool({
 });
 ```
 
- If you want to use the stream in an agent, you can use the `streamVNext` method on the agent and pipe it to the agent's input stream.
+ To use streaming within an agent-based tool, call `streamVNext` on the agent and pipe it to the writer:
 
 ```typescript filename="src/mastra/tools/test-tool.ts" showLineNumbers copy
 import { createTool } from "@mastra/core/tools";
@@ -114,5 +120,5 @@ export const weatherInfo = createTool({
 });
 ```
 
- Piping the stream to the agent's input stream will allow us to automatically sum up the usage of the agent so the total usage count can be calculated.
+ Piping the stream to the writer enables automatic aggregation of token usage across nested agent calls.
 
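
The usage aggregation described above can be pictured with a small sketch. The `Usage` shape below is assumed for illustration and may not match Mastra's exact field names:

```typescript
// Hypothetical token-usage shape for illustration only.
type Usage = {
  promptTokens: number;
  completionTokens: number;
  totalTokens: number;
};

// Summing usage reports from nested agent calls into one total is the
// kind of aggregation that piping the stream to the writer automates.
function sumUsage(usages: Usage[]): Usage {
  return usages.reduce(
    (acc, u) => ({
      promptTokens: acc.promptTokens + u.promptTokens,
      completionTokens: acc.completionTokens + u.completionTokens,
      totalTokens: acc.totalTokens + u.totalTokens,
    }),
    { promptTokens: 0, completionTokens: 0, totalTokens: 0 },
  );
}

console.log(
  sumUsage([
    { promptTokens: 120, completionTokens: 80, totalTokens: 200 }, // outer agent
    { promptTokens: 40, completionTokens: 25, totalTokens: 65 }, // nested call
  ]),
); // { promptTokens: 160, completionTokens: 105, totalTokens: 265 }
```

Without the piping, each nested call would report its usage in isolation and the caller would have to do this bookkeeping by hand.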
@@ -69,7 +69,7 @@ Ensure your template meets all requirements outlined in the [Templates Reference
 
 Submit your template using our contribution form:
 
- **[Submit Template Contribution](https://docs.google.com/forms/d/e/1FAIpQLSfiqD4oeOVaqoE3t10KZJw4QM3fxAQPIOcZBqUYkewJIsQKFw/viewform)**
+ **[Submit Template Contribution](https://forms.gle/g1CGuwFxqbrb3Rz57)**
 
 ### Required Information
 
@@ -5,7 +5,7 @@ description: "Deploy your Mastra applications to Amazon EC2."
 
 import { Callout, Steps, Tabs } from "nextra/components";
 
- ## Amazon EC2
+ # Amazon EC2
 
 Deploy your Mastra applications to Amazon EC2 (Elastic Compute Cloud).
 
@@ -16,7 +16,7 @@ Deploy your Mastra applications to Amazon EC2 (Elastic Cloud Compute).
   refer to our [getting started guide](/docs/getting-started/installation)
 </Callout>
 
- ### Prerequisites
+ ## Prerequisites
 
 - An AWS account with [EC2](https://aws.amazon.com/ec2/) access
 - An EC2 instance running Ubuntu 24+ or Amazon Linux
@@ -25,11 +25,11 @@ Deploy your Mastra applications to Amazon EC2 (Elastic Cloud Compute).
 - SSL certificate configured (e.g., using [Let's Encrypt](https://letsencrypt.org/))
 - Node.js 18+ installed on your instance
 
- ### Deployment Steps
+ ## Deployment Steps
 
 <Steps>
 
- #### Clone your Mastra application
+ ### Clone your Mastra application
 
 Connect to your EC2 instance and clone your repository:
 
@@ -57,13 +57,13 @@ Navigate to the repository directory:
 cd "<your-repository>"
 ```
 
- #### Install dependencies
+ ### Install dependencies
 
 ```bash copy
 npm install
 ```
 
- #### Set up environment variables
+ ### Set up environment variables
 
 Create a `.env` file and add your environment variables:
 
@@ -78,13 +78,13 @@ OPENAI_API_KEY=<your-openai-api-key>
 # Add other required environment variables
 ```
 
- #### Build the application
+ ### Build the application
 
 ```bash copy
 npm run build
 ```
 
- #### Run the application
+ ### Run the application
 
 ```bash copy
 node --import=./.mastra/output/instrumentation.mjs --env-file=".env" .mastra/output/index.mjs
@@ -96,7 +96,7 @@ Your Mastra application will run on port 4111 by default. Ensure your reverse pr
 
 </Steps>
 
- ### Connect to your Mastra server
+ ## Connect to your Mastra server
 
 You can now connect to your Mastra server from your client application using a `MastraClient` from the `@mastra/client-js` package.
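
The client in the snippet above ultimately talks to per-agent HTTP endpoints with the same path pattern as the `mastra dev` example in the agents overview (`http://localhost:4111/api/agents/myAgent/generate`). A tiny sketch of building such a URL for a deployed server; the path pattern is taken from that example and should be treated as illustrative:

```typescript
// Builds the generate endpoint for a named agent, following the
// /api/agents/<agentId>/generate pattern shown in the agents overview.
// Illustrative only; MastraClient constructs these URLs for you.
function generateEndpoint(baseUrl: string, agentId: string): string {
  // Strip a trailing slash so the joined path has exactly one separator.
  return `${baseUrl.replace(/\/$/, "")}/api/agents/${agentId}/generate`;
}

console.log(generateEndpoint("http://localhost:4111", "myAgent"));
// http://localhost:4111/api/agents/myAgent/generate
console.log(generateEndpoint("https://my-ec2-host.example.com/", "testAgent"));
```

In practice you would point `MastraClient` at your server's base URL and let it handle the routing.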