@mastra/mcp-docs-server 0.13.11 → 0.13.12-alpha.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.docs/organized/changelogs/%40internal%2Fstorage-test-utils.md +19 -19
- package/.docs/organized/changelogs/%40internal%2Ftypes-builder.md +2 -0
- package/.docs/organized/changelogs/%40mastra%2Fchroma.md +14 -14
- package/.docs/organized/changelogs/%40mastra%2Fclickhouse.md +14 -14
- package/.docs/organized/changelogs/%40mastra%2Fclient-js.md +42 -42
- package/.docs/organized/changelogs/%40mastra%2Fcloudflare-d1.md +14 -14
- package/.docs/organized/changelogs/%40mastra%2Fcore.md +45 -45
- package/.docs/organized/changelogs/%40mastra%2Fdeployer-cloudflare.md +53 -53
- package/.docs/organized/changelogs/%40mastra%2Fdeployer-netlify.md +49 -49
- package/.docs/organized/changelogs/%40mastra%2Fdeployer-vercel.md +49 -49
- package/.docs/organized/changelogs/%40mastra%2Fdeployer.md +54 -54
- package/.docs/organized/changelogs/%40mastra%2Fdynamodb.md +20 -20
- package/.docs/organized/changelogs/%40mastra%2Fevals.md +14 -14
- package/.docs/organized/changelogs/%40mastra%2Ffirecrawl.md +21 -21
- package/.docs/organized/changelogs/%40mastra%2Flance.md +22 -22
- package/.docs/organized/changelogs/%40mastra%2Flibsql.md +34 -34
- package/.docs/organized/changelogs/%40mastra%2Fmcp-docs-server.md +41 -41
- package/.docs/organized/changelogs/%40mastra%2Fmcp.md +33 -33
- package/.docs/organized/changelogs/%40mastra%2Fmemory.md +23 -23
- package/.docs/organized/changelogs/%40mastra%2Fmongodb.md +20 -20
- package/.docs/organized/changelogs/%40mastra%2Fpg.md +37 -37
- package/.docs/organized/changelogs/%40mastra%2Fplayground-ui.md +49 -49
- package/.docs/organized/changelogs/%40mastra%2Fqdrant.md +21 -21
- package/.docs/organized/changelogs/%40mastra%2Frag.md +20 -20
- package/.docs/organized/changelogs/%40mastra%2Fschema-compat.md +8 -0
- package/.docs/organized/changelogs/%40mastra%2Fserver.md +44 -44
- package/.docs/organized/changelogs/%40mastra%2Fvoice-google-gemini-live.md +14 -0
- package/.docs/organized/changelogs/create-mastra.md +26 -26
- package/.docs/organized/changelogs/mastra.md +63 -63
- package/.docs/organized/code-examples/agent.md +292 -275
- package/.docs/raw/agents/input-processors.mdx +25 -19
- package/.docs/raw/agents/output-processors.mdx +376 -0
- package/.docs/raw/agents/overview.mdx +165 -188
- package/.docs/raw/agents/streaming.mdx +11 -5
- package/.docs/raw/community/contributing-templates.mdx +1 -1
- package/.docs/raw/deployment/cloud-providers/amazon-ec2.mdx +9 -9
- package/.docs/raw/deployment/cloud-providers/aws-lambda.mdx +27 -33
- package/.docs/raw/deployment/cloud-providers/azure-app-services.mdx +12 -12
- package/.docs/raw/deployment/cloud-providers/digital-ocean.mdx +17 -17
- package/.docs/raw/getting-started/templates.mdx +1 -1
- package/.docs/raw/rag/vector-databases.mdx +9 -1
- package/.docs/raw/reference/agents/agent.mdx +9 -3
- package/.docs/raw/reference/agents/generate.mdx +80 -3
- package/.docs/raw/reference/agents/getDefaultGenerateOptions.mdx +1 -1
- package/.docs/raw/reference/agents/getDefaultStreamOptions.mdx +1 -1
- package/.docs/raw/reference/agents/getDefaultVNextStreamOptions.mdx +1 -1
- package/.docs/raw/reference/agents/getLLM.mdx +1 -1
- package/.docs/raw/reference/agents/streamVNext.mdx +88 -5
- package/.docs/raw/reference/cli/scorers.mdx +160 -0
- package/.docs/raw/reference/rag/chroma.mdx +158 -17
- package/.docs/raw/reference/templates.mdx +3 -3
- package/.docs/raw/reference/tools/create-tool.mdx +2 -2
- package/.docs/raw/reference/tools/mcp-client.mdx +9 -9
- package/.docs/raw/reference/tools/mcp-server.mdx +5 -5
- package/.docs/raw/reference/workflows/branch.mdx +1 -1
- package/.docs/raw/reference/workflows/create-run.mdx +4 -4
- package/.docs/raw/reference/workflows/execute.mdx +2 -2
- package/.docs/raw/reference/workflows/foreach.mdx +1 -1
- package/.docs/raw/reference/workflows/run-methods/cancel.mdx +58 -0
- package/.docs/raw/reference/workflows/{resume.mdx → run-methods/resume.mdx} +7 -5
- package/.docs/raw/reference/workflows/{start.mdx → run-methods/start.mdx} +5 -5
- package/.docs/raw/reference/workflows/{stream.mdx → run-methods/stream.mdx} +6 -3
- package/.docs/raw/reference/workflows/{streamVNext.mdx → run-methods/streamVNext.mdx} +14 -9
- package/.docs/raw/reference/workflows/{watch.mdx → run-methods/watch.mdx} +12 -12
- package/.docs/raw/reference/workflows/run.mdx +104 -0
- package/.docs/raw/reference/workflows/step.mdx +0 -1
- package/.docs/raw/reference/workflows/workflow.mdx +3 -2
- package/.docs/raw/{reference/workflows → server-db}/snapshots.mdx +2 -2
- package/.docs/raw/voice/overview.mdx +81 -2
- package/.docs/raw/voice/speech-to-speech.mdx +45 -0
- package/.docs/raw/workflows/overview.mdx +11 -4
- package/.docs/raw/workflows-legacy/overview.mdx +8 -8
- package/package.json +4 -4
package/.docs/raw/agents/overview.mdx

@@ -5,47 +5,72 @@ description: Overview of agents in Mastra, detailing their capabilities and how

 # Using Agents

-
+Agents let you build intelligent assistants powered by language models that can make decisions and perform actions. Each agent has required instructions and an LLM, with optional tools and memory.

-
+An agent coordinates conversations, calls tools when needed, maintains context through memory, and produces responses tailored to the interaction. Agents can operate on their own or work as part of larger workflows.

-
+

-
+To create an agent:

-
+- Define **instructions** with the `Agent` class and set the **LLM** it will use.
+- Optionally configure **tools** and **memory** to extend functionality.
+- Run the agent to generate responses, with support for streaming, structured output, and dynamic configuration.
+
+This approach provides type safety and runtime validation, ensuring reliable behavior across all agent interactions.
+
+> **📹 Watch**: → An introduction to agents, and how they compare to workflows [YouTube (7 minutes)](https://youtu.be/0jg2g3sNvgw)
+
+## Getting started
+
+To use agents, install the required dependencies:
+
+```bash
+npm install @mastra/core @ai-sdk/openai
+```
+
+> Mastra works with all AI SDK provider. See [Model Providers](../getting-started/model-providers.mdx) for more information.
+
+Import the necessary class from the agents module, and an LLM provider:

 ```typescript filename="src/mastra/agents/test-agent.ts" showLineNumbers copy
 import { openai } from "@ai-sdk/openai";
 import { Agent } from "@mastra/core/agent";
-
-const testAgent = new Agent({
-  name: "test-agent",
-  instructions: "You are a helpful assistant.",
-  model: openai("gpt-4o")
-});
 ```
+### LLM providers

-
+Each LLM provider needs its own API key, named using the provider’s identifier:

 ```bash filename=".env" copy
 OPENAI_API_KEY=<your-api-key>
 ```

-
+> See the [AI SDK Providers](https://ai-sdk.dev/providers/ai-sdk-providers) in the Vercel AI SDK docs.
+
+### Creating an agent
+
+To create an agent in Mastra, use the `Agent` class. Every agent must include `instructions` to define its behavior, and a `model` parameter to specify the LLM provider and model:
+
+```typescript filename="src/mastra/agents/test-agent.ts" showLineNumbers copy
+import { openai } from "@ai-sdk/openai";
+import { Agent } from "@mastra/core/agent";

-
-
+export const testAgent = new Agent({
+  name: "test-agent",
+  instructions: "You are a helpful assistant.",
+  model: openai("gpt-4o-mini")
+});
 ```

-
+> See [Agent](../../reference/agents/agent.mdx) for more information.
+

-### Registering
+### Registering an agent

-Register your agent
+Register your agent in the Mastra instance:

 ```typescript showLineNumbers filename="src/mastra/index.ts" copy
-import { Mastra } from "@mastra/core";
+import { Mastra } from "@mastra/core/mastra";
 import { testAgent } from './agents/test-agent';

 export const mastra = new Mastra({
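The registration snippet above is split across hunks, so the middle of `src/mastra/index.ts` is not shown in this diff. A minimal sketch of what the full file presumably looks like, assuming the agent is passed through the standard `agents` map (the map itself is an assumption, not part of the diff):

```typescript
import { Mastra } from "@mastra/core/mastra";
import { testAgent } from "./agents/test-agent";

// Assumed registration shape; the lines carrying the `agents` map fall outside the hunk shown above.
export const mastra = new Mastra({
  agents: { testAgent },
});
```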
@@ -54,145 +79,111 @@ export const mastra = new Mastra({
 });
 ```

-##
+## Referencing an agent
+
+You can call agents from workflow steps, tools, the Mastra Client, or the command line. Get a reference by calling `.getAgent()` on your `mastra` or `mastraClient` instance, depending on your setup:
+
+```typescript showLineNumbers copy
+const testAgent = mastra.getAgent("testAgent");
+```
+
+> See [Calling agents](../../examples/agents/calling-agents.mdx) for more information.
+
+## Generating responses
+
+Use `.generate()` to get a response from an agent. Pass a single string for simple prompts, an array of strings when providing multiple pieces of context, or an array of message objects with `role` and `content` for precise control over roles and conversational flows.
+
+> See [.generate()](../../reference/agents/generate.mdx) for more information.

 ### Generating text

-
+Call `.generate()` with an array of message objects that include `role` and `content`:

 ```typescript showLineNumbers copy
 const response = await testAgent.generate([
-  { role: "user", content: "
+  { role: "user", content: "Help me organize my day" },
+  { role: "user", content: "My day starts at 9am and finishes at 5.30pm" },
+  { role: "user", content: "I take lunch between 12:30 and 13:30" },
+  { role: "user", content: "I have meetings Monday to Friday between 10:30 and 11:30" }
 ]);

-console.log(
+console.log(response.text);
 ```

-
+## Streaming responses
+
+Use `.stream()` for real-time responses. Pass a single string for simple prompts, an array of strings when providing multiple pieces of context, or an array of message objects with `role` and `content` for precise control over roles and conversational flows.
+
+> See [.stream()](../../reference/agents/stream.mdx) for more information.

-### Streaming
+### Streaming text

-
+Call `.stream()` with an array of message objects that include `role` and `content`:

 ```typescript showLineNumbers copy
 const stream = await testAgent.stream([
-  { role: "user", content: "
+  { role: "user", content: "Help me organize my day" },
+  { role: "user", content: "My day starts at 9am and finishes at 5.30pm" },
+  { role: "user", content: "I take lunch between 12:30 and 13:30" },
+  { role: "user", content: "I have meetings Monday to Friday between 10:30 and 11:30" }
 ]);

-console.log("Agent:");
-
 for await (const chunk of stream.textStream) {
   process.stdout.write(chunk);
 }
 ```

-
-
-## 3. Describing images
+### Completion using `onFinish()`

-
+When streaming responses, the `onFinish()` callback runs after the LLM finishes generating its response and all tool executions are complete.
+It provides the final `text`, execution `steps`, `finishReason`, token `usage` statistics, and other metadata useful for monitoring or logging.

 ```typescript showLineNumbers copy
-await testAgent.
-  {
-
-
-
-        type: 'image',
-        image: "https://example.com/images/test-image.jpg",
-        mimeType: "image/jpeg"
-      },
-      {
-        type: 'text',
-        text: 'Describe the image in detail, and extract all the text in the image.',
-      },
-    ],
-  },
-]);
-```
-
-## 4. Structured Output
-
-Agents can return structured data by providing a JSON Schema or using a Zod schema.
-
-### Using JSON Schema
-
-```typescript showLineNumbers copy
-const schema = {
-  type: "object",
-  properties: {
-    summary: { type: "string" },
-    keywords: { type: "array", items: { type: "string" } },
-  },
-  additionalProperties: false,
-  required: ["summary", "keywords"],
-};
-
-const response = await testAgent.generate(
-  [
-    {
-      role: "user",
-      content:
-        "Please provide a summary and keywords for the following text: ...",
-    },
-  ],
-  {
-    output: schema,
-  },
-);
+const stream = await testAgent.stream("Help me organize my day", {
+  onFinish: ({ steps, text, finishReason, usage }) => {
+    console.log({ steps, text, finishReason, usage });
+  }
+});

-
+for await (const chunk of stream.textStream) {
+  process.stdout.write(chunk);
+}
 ```

-
-
-You can also use Zod schemas for type-safe structured outputs.
+## Structured output

-
+Agents can return structured, type-safe data by defining the expected output with either [Zod](https://zod.dev/) or [JSON Schema](https://json-schema.org/). In both cases, the parsed result is available on `response.object`, allowing you to work directly with validated and typed data.

-
-npm install zod
-```
+### Using Zod

-
+Define the `output` shape using [Zod](https://zod.dev/):

 ```typescript showLineNumbers copy
 import { z } from "zod";

-
-
-
-
+const response = await testAgent.generate("Monkey, Ice Cream, Boat", {
+  experimental_output: z.object({
+    summary: z.string(),
+    keywords: z.array(z.string())
+  })
 });

-
-const response = await testAgent.generate(
-  [
-    {
-      role: "user",
-      content:
-        "Please provide a summary and keywords for the following text: ...",
-    },
-  ],
-  {
-    output: schema,
-  },
-);
-
-console.log("Structured Output:", response.object);
+console.log(response.object);
 ```

 ### Using Tools

-If you need to generate structured output alongside tool calls, you'll need to use the `experimental_output` property instead of `output`. Here's how:
+If you need to generate structured output alongside tool calls, you'll need to use the `experimental_output` or `structuredOutput` property instead of `output`. Here's how:

 ```typescript showLineNumbers copy
-const
-
-
+const response = await testAgent.generate("Monkey, Ice Cream, Boat", {
+  experimental_output: z.object({
+    summary: z.string(),
+    keywords: z.array(z.string())
+  })
 });

-const
+const responseWithExperimentalOutput = await testAgent.generate(
   [
     {
       role: "user",
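The new "Generating responses" text above states that `.generate()` also accepts a single string or an array of strings; a short sketch of those two call shapes, with illustrative prompts that are not taken from the diff:

```typescript
// Single string for a simple prompt.
const simple = await testAgent.generate("Help me organize my day");

// Array of strings when several pieces of context belong to the same request.
const withContext = await testAgent.generate([
  "Help me organize my day",
  "My day starts at 9am and finishes at 5.30pm",
]);

console.log(simple.text);
console.log(withContext.text);
```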
@@ -206,112 +197,98 @@ const response = await testAgent.generate(
   },
 );

-console.log("Structured Output:",
-```
+console.log("Structured Output:", responseWithExperimentalOutput.object);

-
-
-This allows you to have strong typing and validation for the structured data returned by the agent.
-
-## 5. Multi-step tool use
-
-Agents can be enhanced with tools - functions that extend their capabilities beyond text generation. Tools allow agents to perform calculations, access external systems, and process data. Agents not only decide whether to call tools they're given, they determine the parameters that should be given to that tool.
-
-For a detailed guide to creating and configuring tools, see the [Adding Tools documentation](/docs/agents/using-tools-and-mcp), but below are the important things to know.
-
-### Using `maxSteps`
-
-The `maxSteps` parameter controls the maximum number of sequential LLM calls an agent can make, particularly important when using tool calls. By default, it is set to 1 to prevent infinite loops in case of misconfigured tools. You can increase this limit based on your use case:
-
-```typescript showLineNumbers copy
-const response = await testAgent.generate(
+const responseWithStructuredOutput = await testAgent.generate(
   [
     {
       role: "user",
       content:
-        "
+        "Please analyze this repository and provide a summary and keywords...",
     },
   ],
   {
-
+    structuredOutput: {
+      schema: z.object({
+        summary: z.string(),
+        keywords: z.array(z.string())
+      }),
+      model: openai("gpt-4o-mini"),
+    }
   },
 );
-```

-
+console.log("Structured Output:", responseWithStructuredOutput.object);
+```

-
+## Describing images

-`
+Agents can analyze and describe images by processing both the visual content and any text within them. To enable image analysis, pass an object with `type: 'image'` and the image URL in the `content` array. You can combine image content with text prompts to guide the agent's analysis.

 ```typescript showLineNumbers copy
-const response = await testAgent.generate(
-  [{ role: "user", content: "Calculate the taxi driver's daily earnings." }],
+const response = await testAgent.generate([
   {
-
-
-
-
-
-
+    role: "user",
+    content: [
+      {
+        type: "image",
+        image: "https://placebear.com/cache/395-205.jpg",
+        mimeType: "image/jpeg"
+      },
+      {
+        type: "text",
+        text: "Describe the image in detail, and extract all the text in the image."
+      }
+    ]
+  }
+]);
+
+console.log(response.text);
 ```

-
+## Multi-step tool use
+
+Agents can be enhanced with tools, functions that extend their capabilities beyond text generation. Tools allow agents to perform calculations, access external systems, and process data. Agents not only decide whether to call tools they're given, they determine the parameters that should be given to that tool.

-
+For a detailed guide to creating and configuring tools, see the [Tools Overview](../tools-mcp/overview.mdx) page.
+
+### Using `maxSteps`
+
+The `maxSteps` parameter controls the maximum number of sequential LLM calls an agent can make, particularly important when using tool calls. By default, it is set to 1 to prevent infinite loops in case of misconfigured tools:

 ```typescript showLineNumbers copy
-const
-
-
-  maxSteps: 5,
-  onChunk: ({ chunk }) => {
-    console.log("Chunk", chunk);
-  },
-  },
-);
+const response = await testAgent.generate("Help me organize my day", {
+  maxSteps: 5
+});

-
-  console.log(chunk);
-}
+console.log(response.text);
 ```

-###
+### Using `onStepFinish`

-
-
+You can monitor the progress of multi-step operations using the `onStepFinish` callback. This is useful for debugging or providing progress updates to users.
+
+`onStepFinish` is only available when streaming or generating text without structured output.

 ```typescript showLineNumbers copy
-const
-
-
-
-
-    steps,
-    text,
-    finishReason, // 'complete', 'length', 'tool', etc.
-    usage, // token usage statistics
-    reasoningDetails, // additional context about the agent's decisions
-  }) => {
-    console.log("Stream complete:", {
-      totalSteps: steps.length,
-      finishReason,
-      usage,
-    });
-  },
-  },
-);
+const response = await testAgent.generate("Help me organize my day", {
+  onStepFinish: ({ text, toolCalls, toolResults, finishReason, usage }) => {
+    console.log({ text, toolCalls, toolResults, finishReason, usage });
+  }
+});
 ```

-##
+## Testing agents locally

-
+Use the `mastra dev` CLI command to run your agents behind a local API.
+By default, it loads exported agents from the `src/mastra/agents` directory and creates endpoints for testing (for example, `http://localhost:4111/api/agents/myAgent/generate`).
+It also launches a visual playground where you can chat with your agent and view execution traces.

-For more
+> For more information, see the [Local Dev Playground](/docs/server-db/local-dev-playground) documentation.

 ## Next Steps

--
--
--
--
+- [Agent Memory](./agent-memory.mdx)
+- [Dynamic Agents](./dynamic-agents.mdx)
+- [Agent Tools and MCP](./using-tools-and-mcp.mdx)
+- [Calling Agents](../../examples/agents/calling-agents.mdx)
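The reworked overview introduces both `.getAgent()` lookups and the `structuredOutput` option. A small sketch combining the two, assuming the `mastra` instance and `testAgent` registration shown earlier; the import path and prompt are illustrative, not part of the diff:

```typescript
import { z } from "zod";
import { openai } from "@ai-sdk/openai";
import { mastra } from "./mastra";

// Look up the registered agent, then request typed output alongside any tool calls.
const agent = mastra.getAgent("testAgent");

const response = await agent.generate("Summarize my schedule for today", {
  structuredOutput: {
    schema: z.object({
      summary: z.string(),
      keywords: z.array(z.string()),
    }),
    model: openai("gpt-4o-mini"),
  },
});

console.log(response.object);
```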
package/.docs/raw/agents/streaming.mdx

@@ -3,13 +3,19 @@ title: "Using Agent Streaming | Agents | Mastra Docs"
 description: Documentation on how to stream agents
 ---

+import { Callout } from "nextra/components";
+
 # Agent Streaming

-Agents in Mastra
+Agents in Mastra support streaming responses for real-time interaction with clients. This enables progressive rendering of responses and better user experience.
+
+<Callout type="info">
+**Experimental API**: The `streamVNext` method shown in this guide is an experimental feature that will replace the current `stream()` method after additional testing and refinement. For production use, consider using the stable [`stream()` method](/docs/agents/overview#2-generating-and-streaming-text) until `streamVNext` is finalized.
+</Callout>

 ## Usage

-
+The experimental streaming protocol uses the `streamVNext` method on an agent. This method returns a custom MastraAgentStream that extends ReadableStream with additional utilities.

 ```typescript
 const stream = await agent.streamVNext({ role: "user", content: "Tell me a story." });
@@ -30,7 +36,7 @@ Each chunk is a JSON object with the following properties:
 }
 ```

-
+The stream provides several utility functions for working with streaming responses:

 - `stream.finishReason` - The reason the agent stopped streaming.
 - `stream.toolCalls` - The tool calls made by the agent.
@@ -86,7 +92,7 @@ export const weatherInfo = createTool({
 });
 ```

-
+To use streaming within an agent-based tool, call `streamVNext` on the agent and pipe it to the writer:

 ```typescript filename="src/mastra/tools/test-tool.ts" showLineNumbers copy
 import { createTool } from "@mastra/core/tools";
@@ -114,5 +120,5 @@ export const weatherInfo = createTool({
 });
 ```

-Piping the stream to the
+Piping the stream to the writer enables automatic aggregation of token usage across nested agent calls.

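The streaming changes describe `streamVNext` returning a stream that exposes `textStream` plus utility accessors such as `finishReason` and `toolCalls`. A hedged sketch of consuming it, reusing the agent from the overview; whether those accessors are promises is not shown in the diff, so they are awaited defensively here:

```typescript
// Experimental API per the callout above; shapes may change before streamVNext replaces stream().
const stream = await testAgent.streamVNext({ role: "user", content: "Tell me a story." });

for await (const chunk of stream.textStream) {
  process.stdout.write(chunk);
}

// `await` is a no-op if these turn out to be plain values rather than promises.
console.log(await stream.finishReason);
console.log(await stream.toolCalls);
```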
package/.docs/raw/community/contributing-templates.mdx

@@ -69,7 +69,7 @@ Ensure your template meets all requirements outlined in the [Templates Reference

 Submit your template using our contribution form:

-**[Submit Template Contribution](https://
+**[Submit Template Contribution](https://forms.gle/g1CGuwFxqbrb3Rz57)**

 ### Required Information

package/.docs/raw/deployment/cloud-providers/amazon-ec2.mdx

@@ -5,7 +5,7 @@ description: "Deploy your Mastra applications to Amazon EC2."

 import { Callout, Steps, Tabs } from "nextra/components";

-
+# Amazon EC2

 Deploy your Mastra applications to Amazon EC2 (Elastic Cloud Compute).

@@ -16,7 +16,7 @@ Deploy your Mastra applications to Amazon EC2 (Elastic Cloud Compute).
   refer to our [getting started guide](/docs/getting-started/installation)
 </Callout>

-
+## Prerequisites

 - An AWS account with [EC2](https://aws.amazon.com/ec2/) access
 - An EC2 instance running Ubuntu 24+ or Amazon Linux
@@ -25,11 +25,11 @@ Deploy your Mastra applications to Amazon EC2 (Elastic Cloud Compute).
 - SSL certificate configured (e.g., using [Let's Encrypt](https://letsencrypt.org/))
 - Node.js 18+ installed on your instance

-
+## Deployment Steps

 <Steps>

-
+### Clone your Mastra application

 Connect to your EC2 instance and clone your repository:

@@ -57,13 +57,13 @@ Navigate to the repository directory:
 cd "<your-repository>"
 ```

-
+### Install dependencies

 ```bash copy
 npm install
 ```

-
+### Set up environment variables

 Create a `.env` file and add your environment variables:

@@ -78,13 +78,13 @@ OPENAI_API_KEY=<your-openai-api-key>
 # Add other required environment variables
 ```

-
+### Build the application

 ```bash copy
 npm run build
 ```

-
+### Run the application

 ```bash copy
 node --import=./.mastra/output/instrumentation.mjs --env-file=".env" .mastra/output/index.mjs
@@ -96,7 +96,7 @@ Your Mastra application will run on port 4111 by default. Ensure your reverse pr

 </Steps>

-
+## Connect to your Mastra server

 You can now connect to your Mastra server from your client application using a `MastraClient` from the `@mastra/client-js` package.

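The EC2 guide ends by pointing to `MastraClient` from `@mastra/client-js`. A minimal connection sketch, assuming the server from the steps above sits behind your reverse proxy; the URL and agent id are placeholders:

```typescript
import { MastraClient } from "@mastra/client-js";

// Placeholder URL: point this at the reverse proxy in front of port 4111.
const client = new MastraClient({
  baseUrl: "https://your-domain.com",
});

const agent = client.getAgent("testAgent");

const response = await agent.generate({
  messages: [{ role: "user", content: "Hello from EC2" }],
});

console.log(response.text);
```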