@mastra/mcp-docs-server 1.0.0-beta.11 → 1.0.0-beta.13

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (159)
  1. package/.docs/organized/changelogs/%40mastra%2Fagent-builder.md +12 -12
  2. package/.docs/organized/changelogs/%40mastra%2Fai-sdk.md +24 -24
  3. package/.docs/organized/changelogs/%40mastra%2Fclickhouse.md +45 -45
  4. package/.docs/organized/changelogs/%40mastra%2Fclient-js.md +83 -83
  5. package/.docs/organized/changelogs/%40mastra%2Fcloudflare-d1.md +39 -39
  6. package/.docs/organized/changelogs/%40mastra%2Fcloudflare.md +39 -39
  7. package/.docs/organized/changelogs/%40mastra%2Fconvex.md +38 -0
  8. package/.docs/organized/changelogs/%40mastra%2Fcore.md +174 -174
  9. package/.docs/organized/changelogs/%40mastra%2Fdeployer-cloud.md +17 -17
  10. package/.docs/organized/changelogs/%40mastra%2Fdeployer.md +27 -27
  11. package/.docs/organized/changelogs/%40mastra%2Fdynamodb.md +39 -39
  12. package/.docs/organized/changelogs/%40mastra%2Ffastembed.md +6 -0
  13. package/.docs/organized/changelogs/%40mastra%2Flance.md +39 -39
  14. package/.docs/organized/changelogs/%40mastra%2Flibsql.md +45 -45
  15. package/.docs/organized/changelogs/%40mastra%2Fmcp-docs-server.md +15 -15
  16. package/.docs/organized/changelogs/%40mastra%2Fmemory.md +13 -13
  17. package/.docs/organized/changelogs/%40mastra%2Fmongodb.md +39 -39
  18. package/.docs/organized/changelogs/%40mastra%2Fmssql.md +39 -39
  19. package/.docs/organized/changelogs/%40mastra%2Fpg.md +45 -45
  20. package/.docs/organized/changelogs/%40mastra%2Fplayground-ui.md +75 -75
  21. package/.docs/organized/changelogs/%40mastra%2Freact.md +40 -0
  22. package/.docs/organized/changelogs/%40mastra%2Fschema-compat.md +6 -0
  23. package/.docs/organized/changelogs/%40mastra%2Fserver.md +29 -29
  24. package/.docs/organized/changelogs/%40mastra%2Fupstash.md +39 -39
  25. package/.docs/organized/changelogs/create-mastra.md +29 -29
  26. package/.docs/organized/changelogs/mastra.md +35 -35
  27. package/.docs/organized/code-examples/quick-start.md +0 -4
  28. package/.docs/organized/code-examples/stock-price-tool.md +21 -2
  29. package/.docs/raw/agents/agent-approval.mdx +136 -2
  30. package/.docs/raw/agents/agent-memory.mdx +4 -4
  31. package/.docs/raw/agents/guardrails.mdx +1 -1
  32. package/.docs/raw/agents/networks.mdx +1 -1
  33. package/.docs/raw/agents/overview.mdx +2 -2
  34. package/.docs/raw/agents/using-tools.mdx +1 -1
  35. package/.docs/raw/course/01-first-agent/07-creating-your-agent.md +1 -2
  36. package/.docs/raw/course/01-first-agent/12-connecting-tool-to-agent.md +1 -1
  37. package/.docs/raw/course/01-first-agent/16-adding-memory-to-agent.md +1 -2
  38. package/.docs/raw/course/02-agent-tools-mcp/05-updating-your-agent.md +1 -1
  39. package/.docs/raw/course/02-agent-tools-mcp/10-updating-agent-instructions-zapier.md +1 -1
  40. package/.docs/raw/course/02-agent-tools-mcp/16-updating-agent-instructions-github.md +1 -1
  41. package/.docs/raw/course/02-agent-tools-mcp/21-updating-agent-instructions-hackernews.md +1 -1
  42. package/.docs/raw/course/02-agent-tools-mcp/27-updating-agent-instructions-filesystem.md +1 -1
  43. package/.docs/raw/course/02-agent-tools-mcp/31-enhancing-memory-configuration.md +2 -2
  44. package/.docs/raw/course/03-agent-memory/04-creating-basic-memory-agent.md +1 -2
  45. package/.docs/raw/course/03-agent-memory/08-configuring-conversation-history.md +1 -2
  46. package/.docs/raw/course/03-agent-memory/16-configuring-semantic-recall.md +3 -4
  47. package/.docs/raw/course/03-agent-memory/21-configuring-working-memory.md +2 -3
  48. package/.docs/raw/course/03-agent-memory/22-custom-working-memory-templates.md +2 -3
  49. package/.docs/raw/course/03-agent-memory/25-combining-memory-features.md +1 -2
  50. package/.docs/raw/course/03-agent-memory/27-creating-learning-assistant.md +2 -3
  51. package/.docs/raw/course/04-workflows/11-creating-an-ai-agent.md +2 -3
  52. package/.docs/raw/deployment/cloud-providers.mdx +20 -0
  53. package/.docs/raw/deployment/{building-mastra.mdx → mastra-server.mdx} +2 -2
  54. package/.docs/raw/deployment/monorepo.mdx +23 -44
  55. package/.docs/raw/deployment/overview.mdx +28 -53
  56. package/.docs/raw/deployment/web-framework.mdx +12 -14
  57. package/.docs/raw/getting-started/start.mdx +10 -1
  58. package/.docs/raw/getting-started/studio.mdx +1 -1
  59. package/.docs/raw/guides/build-your-ui/ai-sdk-ui.mdx +1021 -67
  60. package/.docs/raw/{deployment/cloud-providers → guides/deployment}/aws-lambda.mdx +3 -6
  61. package/.docs/raw/{deployment/cloud-providers → guides/deployment}/azure-app-services.mdx +4 -6
  62. package/.docs/raw/{deployment/cloud-providers → guides/deployment}/cloudflare-deployer.mdx +4 -0
  63. package/.docs/raw/{deployment/cloud-providers → guides/deployment}/digital-ocean.mdx +3 -6
  64. package/.docs/raw/guides/deployment/index.mdx +32 -0
  65. package/.docs/raw/{deployment/cloud-providers → guides/deployment}/netlify-deployer.mdx +4 -0
  66. package/.docs/raw/{deployment/cloud-providers → guides/deployment}/vercel-deployer.mdx +4 -0
  67. package/.docs/raw/guides/getting-started/express.mdx +71 -152
  68. package/.docs/raw/guides/getting-started/hono.mdx +227 -0
  69. package/.docs/raw/guides/getting-started/next-js.mdx +173 -63
  70. package/.docs/raw/guides/getting-started/vite-react.mdx +307 -137
  71. package/.docs/raw/guides/guide/research-assistant.mdx +4 -4
  72. package/.docs/raw/guides/migrations/upgrade-to-v1/client.mdx +17 -0
  73. package/.docs/raw/guides/migrations/upgrade-to-v1/overview.mdx +6 -0
  74. package/.docs/raw/index.mdx +1 -1
  75. package/.docs/raw/{deployment/mastra-cloud → mastra-cloud}/dashboard.mdx +2 -6
  76. package/.docs/raw/{deployment/mastra-cloud → mastra-cloud}/observability.mdx +1 -5
  77. package/.docs/raw/{deployment/mastra-cloud → mastra-cloud}/overview.mdx +2 -6
  78. package/.docs/raw/{deployment/mastra-cloud → mastra-cloud}/setting-up.mdx +3 -6
  79. package/.docs/raw/memory/overview.mdx +1 -1
  80. package/.docs/raw/memory/storage/memory-with-libsql.mdx +1 -1
  81. package/.docs/raw/memory/storage/memory-with-mongodb.mdx +1 -1
  82. package/.docs/raw/memory/storage/memory-with-pg.mdx +1 -1
  83. package/.docs/raw/memory/storage/memory-with-upstash.mdx +1 -1
  84. package/.docs/raw/{server-db/storage.mdx → memory/storage/overview.mdx} +2 -2
  85. package/.docs/raw/observability/logging.mdx +1 -1
  86. package/.docs/raw/observability/tracing/exporters/cloud.mdx +1 -1
  87. package/.docs/raw/observability/tracing/exporters/default.mdx +1 -1
  88. package/.docs/raw/rag/chunking-and-embedding.mdx +12 -25
  89. package/.docs/raw/rag/graph-rag.mdx +220 -0
  90. package/.docs/raw/rag/overview.mdx +1 -2
  91. package/.docs/raw/rag/retrieval.mdx +13 -29
  92. package/.docs/raw/rag/vector-databases.mdx +7 -3
  93. package/.docs/raw/reference/agents/getDefaultGenerateOptions.mdx +1 -1
  94. package/.docs/raw/reference/agents/getDefaultOptions.mdx +1 -1
  95. package/.docs/raw/reference/agents/getDefaultStreamOptions.mdx +1 -1
  96. package/.docs/raw/reference/agents/getInstructions.mdx +1 -1
  97. package/.docs/raw/reference/agents/getLLM.mdx +1 -1
  98. package/.docs/raw/reference/agents/getMemory.mdx +1 -1
  99. package/.docs/raw/reference/agents/getModel.mdx +1 -1
  100. package/.docs/raw/reference/agents/listScorers.mdx +1 -1
  101. package/.docs/raw/reference/ai-sdk/chat-route.mdx +1 -1
  102. package/.docs/raw/reference/ai-sdk/handle-chat-stream.mdx +1 -1
  103. package/.docs/raw/reference/ai-sdk/handle-network-stream.mdx +1 -1
  104. package/.docs/raw/reference/ai-sdk/handle-workflow-stream.mdx +1 -1
  105. package/.docs/raw/reference/ai-sdk/network-route.mdx +1 -1
  106. package/.docs/raw/reference/ai-sdk/to-ai-sdk-v4-messages.mdx +127 -0
  107. package/.docs/raw/reference/ai-sdk/to-ai-sdk-v5-messages.mdx +107 -0
  108. package/.docs/raw/reference/ai-sdk/workflow-route.mdx +1 -1
  109. package/.docs/raw/reference/auth/auth0.mdx +1 -1
  110. package/.docs/raw/reference/auth/clerk.mdx +1 -1
  111. package/.docs/raw/reference/auth/firebase.mdx +1 -1
  112. package/.docs/raw/reference/auth/jwt.mdx +1 -1
  113. package/.docs/raw/reference/auth/supabase.mdx +1 -1
  114. package/.docs/raw/reference/auth/workos.mdx +1 -1
  115. package/.docs/raw/reference/cli/mastra.mdx +1 -1
  116. package/.docs/raw/reference/client-js/mastra-client.mdx +1 -1
  117. package/.docs/raw/reference/client-js/workflows.mdx +20 -0
  118. package/.docs/raw/reference/core/getServer.mdx +2 -2
  119. package/.docs/raw/reference/core/getStorage.mdx +1 -1
  120. package/.docs/raw/reference/core/getStoredAgentById.mdx +1 -1
  121. package/.docs/raw/reference/core/listStoredAgents.mdx +1 -1
  122. package/.docs/raw/reference/core/setStorage.mdx +1 -1
  123. package/.docs/raw/reference/logging/pino-logger.mdx +1 -1
  124. package/.docs/raw/reference/rag/database-config.mdx +1 -1
  125. package/.docs/raw/reference/server/create-route.mdx +1 -1
  126. package/.docs/raw/reference/server/express-adapter.mdx +4 -4
  127. package/.docs/raw/reference/server/hono-adapter.mdx +4 -4
  128. package/.docs/raw/reference/server/mastra-server.mdx +2 -2
  129. package/.docs/raw/reference/server/routes.mdx +28 -1
  130. package/.docs/raw/reference/streaming/agents/stream.mdx +22 -0
  131. package/.docs/raw/reference/streaming/workflows/stream.mdx +33 -20
  132. package/.docs/raw/reference/tools/create-tool.mdx +23 -1
  133. package/.docs/raw/reference/tools/graph-rag-tool.mdx +3 -3
  134. package/.docs/raw/reference/tools/vector-query-tool.mdx +3 -3
  135. package/.docs/raw/reference/workflows/run-methods/startAsync.mdx +143 -0
  136. package/.docs/raw/reference/workflows/workflow-methods/create-run.mdx +35 -0
  137. package/.docs/raw/reference/workflows/workflow.mdx +14 -0
  138. package/.docs/raw/{auth → server/auth}/auth0.mdx +1 -1
  139. package/.docs/raw/{auth → server/auth}/clerk.mdx +1 -1
  140. package/.docs/raw/{auth → server/auth}/firebase.mdx +1 -1
  141. package/.docs/raw/{auth → server/auth}/index.mdx +6 -6
  142. package/.docs/raw/{auth → server/auth}/jwt.mdx +1 -1
  143. package/.docs/raw/{auth → server/auth}/supabase.mdx +1 -1
  144. package/.docs/raw/{auth → server/auth}/workos.mdx +1 -1
  145. package/.docs/raw/{server-db → server}/custom-adapters.mdx +3 -3
  146. package/.docs/raw/{server-db → server}/custom-api-routes.mdx +1 -1
  147. package/.docs/raw/{server-db → server}/mastra-client.mdx +2 -2
  148. package/.docs/raw/{server-db → server}/mastra-server.mdx +5 -5
  149. package/.docs/raw/{server-db → server}/middleware.mdx +2 -2
  150. package/.docs/raw/{server-db → server}/request-context.mdx +3 -3
  151. package/.docs/raw/{server-db → server}/server-adapters.mdx +6 -6
  152. package/.docs/raw/tools-mcp/overview.mdx +2 -2
  153. package/.docs/raw/workflows/error-handling.mdx +162 -1
  154. package/.docs/raw/workflows/overview.mdx +2 -2
  155. package/CHANGELOG.md +14 -0
  156. package/package.json +3 -3
  157. package/.docs/organized/changelogs/%40internal%2Fai-sdk-v4.md +0 -1
  158. package/.docs/raw/deployment/cloud-providers/index.mdx +0 -55
  159. package/.docs/raw/{deployment/cloud-providers → guides/deployment}/amazon-ec2.mdx +0 -0
package/.docs/raw/agents/agent-approval.mdx

@@ -1,6 +1,6 @@
  ---
  title: "Agent Approval | Agents"
- description: Learn how to require approvals and suspend tool execution while keeping humans in control of agent workflows.
+ description: Learn how to require approvals, suspend tool execution, and automatically resume suspended tools while keeping humans in control of agent workflows.
  ---

  # Agent Approval
@@ -180,10 +180,144 @@ const handleResume = async () => {

  ```

+ ## Automatic tool resumption
+
+ When using tools that call `suspend()`, you can enable automatic resumption so the agent resumes suspended tools based on the user's next message. This creates a conversational flow where users provide the required information naturally, without your application needing to call `resumeStream()` explicitly.
+
+ ### Enabling auto-resume
+
+ Set `autoResumeSuspendedTools` to `true` in the agent's default options or when calling `stream()`:
+
+ ```typescript
+ import { Agent } from "@mastra/core/agent";
+ import { Memory } from "@mastra/memory";
+
+ // Option 1: In agent configuration
+ const agent = new Agent({
+   id: "my-agent",
+   name: "My Agent",
+   instructions: "You are a helpful assistant",
+   model: "openai/gpt-4o-mini",
+   tools: { weatherTool },
+   memory: new Memory(),
+   defaultOptions: {
+     autoResumeSuspendedTools: true,
+   },
+ });
+
+ // Option 2: Per-request
+ const stream = await agent.stream("What's the weather?", {
+   autoResumeSuspendedTools: true,
+ });
+ ```
+
+ ### How it works
+
+ When `autoResumeSuspendedTools` is enabled:
+
+ 1. A tool suspends execution by calling `suspend()` with a payload (e.g., requesting more information)
+ 2. The suspension is persisted to memory along with the conversation
+ 3. When the user sends their next message on the same thread, the agent:
+    - Detects the suspended tool from conversation history
+    - Extracts `resumeData` from the user's message based on the tool's `resumeSchema`
+    - Automatically resumes the tool with the extracted data
+
+ ### Example
+
+ ```typescript
+ import { createTool } from "@mastra/core/tools";
+ import { z } from "zod";
+
+ export const weatherTool = createTool({
+   id: "weather-info",
+   description: "Fetches weather information for a city",
+   suspendSchema: z.object({
+     message: z.string(),
+   }),
+   resumeSchema: z.object({
+     city: z.string(),
+   }),
+   execute: async (_inputData, context) => {
+     // Check if this is a resume with data
+     if (!context?.agent?.resumeData) {
+       // First call - suspend and ask for the city
+       return context?.agent?.suspend({
+         message: "What city do you want to know the weather for?",
+       });
+     }
+
+     // Resume call - city was extracted from user's message
+     const { city } = context.agent.resumeData;
+     const response = await fetch(`https://wttr.in/${city}?format=3`);
+     const weather = await response.text();
+
+     return { city, weather };
+   },
+ });
+
+ const agent = new Agent({
+   id: "my-agent",
+   name: "My Agent",
+   instructions: "You are a helpful assistant",
+   model: "openai/gpt-4o-mini",
+   tools: { weatherTool },
+   memory: new Memory(),
+   defaultOptions: {
+     autoResumeSuspendedTools: true,
+   },
+ });
+
+ const stream = await agent.stream("What's the weather like?");
+
+ for await (const chunk of stream.fullStream) {
+   if (chunk.type === "tool-call-suspended") {
+     console.log(chunk.payload.suspendPayload);
+   }
+ }
+
+ const handleResume = async () => {
+   const resumedStream = await agent.stream("San Francisco");
+
+   for await (const chunk of resumedStream.textStream) {
+     process.stdout.write(chunk);
+   }
+   process.stdout.write("\n");
+ };
+ ```
+
+ **Conversation flow:**
+
+ ```
+ User: "What's the weather like?"
+ Agent: "What city do you want to know the weather for?"
+
+ User: "San Francisco"
+ Agent: "The weather in San Francisco is: San Francisco: ☀️ +72°F"
+ ```
+
+ The second message automatically resumes the suspended tool - the agent extracts `{ city: "San Francisco" }` from the user's message and passes it as `resumeData`.
+
+ ### Requirements
+
+ For automatic tool resumption to work:
+
+ - **Memory configured**: The agent needs memory to track suspended tools across messages
+ - **Same thread**: The follow-up message must use the same memory thread and resource identifiers
+ - **`resumeSchema` defined**: The tool must define a `resumeSchema` so the agent knows what data structure to extract from the user's message
+
+ ### Manual vs automatic resumption
+
+ | Approach | Use case |
+ |----------|----------|
+ | Manual (`resumeStream()`) | Programmatic control, webhooks, button clicks, external triggers |
+ | Automatic (`autoResumeSuspendedTools`) | Conversational flows where users provide resume data in natural language |
+
+ Both approaches work with the same tool definitions. Automatic resumption triggers only when suspended tools exist in the conversation history and the user sends a new message on the same thread.
+
  ## Related

  - [Using Tools](./using-tools)
  - [Agent Overview](./overview)
  - [Tools Overview](../mcp/overview)
  - [Agent Memory](./agent-memory)
- - [Request Context](/docs/v1/server-db/request-context)
+ - [Request Context](/docs/v1/server/request-context)
package/.docs/raw/agents/agent-memory.mdx

@@ -26,7 +26,7 @@ npm install @mastra/memory@beta @mastra/libsql@beta

  ## Storage providers

- Memory requires a storage provider to persist conversation history, including user messages and agent responses. For more details on available providers and how storage works in Mastra, see the [Storage](/docs/v1/server-db/storage) documentation.
+ Memory requires a storage provider to persist conversation history, including user messages and agent responses. For more details on available providers and how storage works in Mastra, see the [Storage](/docs/v1/memory/storage/overview) documentation.

  ## Configuring memory

@@ -141,7 +141,7 @@ To learn more about memory see the [Memory](../memory/overview) documentation.

  ## Using `RequestContext`

- Use [RequestContext](/docs/v1/server-db/request-context) to access request-specific values. This lets you conditionally select different memory or storage configurations based on the context of the request.
+ Use [RequestContext](/docs/v1/server/request-context) to access request-specific values. This lets you conditionally select different memory or storage configurations based on the context of the request.

  ```typescript title="src/mastra/agents/memory-agent.ts" showLineNumbers
  export type UserTier = {
@@ -170,7 +170,7 @@ export const memoryAgent = new Agent({

  :::note

- See [Request Context](/docs/v1/server-db/request-context) docs for more information.
+ See [Request Context](/docs/v1/server/request-context) docs for more information.

  :::

@@ -179,4 +179,4 @@ See [Request Context](/docs/v1/server-db/request-context) docs for more informat
  - [Working Memory](../memory/working-memory)
  - [Semantic Recall](../memory/semantic-recall)
  - [Threads and Resources](../memory/threads-and-resources)
- - [Request Context](/docs/v1/server-db/request-context)
+ - [Request Context](/docs/v1/server/request-context)
package/.docs/raw/agents/guardrails.mdx

@@ -195,7 +195,7 @@ const scrubbedAgent = new Agent({
  > See [SystemPromptScrubber](/reference/v1/processors/system-prompt-scrubber) for a full list of configuration options.

  :::note
- When streaming responses over HTTP, Mastra redacts sensitive request data (system prompts, tool definitions, API keys) from stream chunks at the server level by default. See [Stream data redaction](/docs/v1/server-db/mastra-server#stream-data-redaction) for details.
+ When streaming responses over HTTP, Mastra redacts sensitive request data (system prompts, tool definitions, API keys) from stream chunks at the server level by default. See [Stream data redaction](/docs/v1/server/mastra-server#stream-data-redaction) for details.
  :::

  ## Hybrid processors
package/.docs/raw/agents/networks.mdx

@@ -239,5 +239,5 @@ network-execution-event-step-finish

  - [Agent Memory](./agent-memory)
  - [Workflows Overview](../workflows/overview)
- - [Request Context](/docs/v1/server-db/request-context)
+ - [Request Context](/docs/v1/server/request-context)
  - [Supervisor example](https://github.com/mastra-ai/mastra/tree/main/examples/supervisor-agent)
package/.docs/raw/agents/overview.mdx

@@ -306,7 +306,7 @@ export const testAgent = new Agent({
  });
  ```

- > See [Request Context](/docs/v1/server-db/request-context) for more information.
+ > See [Request Context](/docs/v1/server/request-context) for more information.

  ## Testing with Studio

@@ -316,4 +316,4 @@ Use [Studio](/docs/v1/getting-started/studio) to test agents with different mess

  - [Using Tools](/docs/v1/agents/using-tools)
  - [Agent Memory](/docs/v1/agents/agent-memory)
- - [Request Context](/docs/v1/server-db/request-context)
+ - [Request Context](/docs/v1/server/request-context)
package/.docs/raw/agents/using-tools.mdx

@@ -94,4 +94,4 @@ export const weatherAgent = new Agent({

  - [MCP Overview](/docs/v1/mcp/overview)
  - [Agent Memory](/docs/v1/agents/agent-memory)
- - [Request Context](/docs/v1/server-db/request-context)
+ - [Request Context](/docs/v1/server/request-context)
package/.docs/raw/course/01-first-agent/07-creating-your-agent.md

@@ -8,7 +8,6 @@ Now add the necessary imports at the top of your file:

  ```typescript
  import { Agent } from "@mastra/core/agent";
- import { openai } from "@ai-sdk/openai";
  // We'll import our tool in a later step
  ```

@@ -43,7 +42,7 @@ SUCCESS CRITERIA
  - Deliver accurate and helpful analysis of transaction data.
  - Achieve high user satisfaction through clear and helpful responses.
  - Maintain user trust by ensuring data privacy and security.`,
-   model: openai("gpt-4o"), // You can use "gpt-3.5-turbo" if you prefer
+   model: "openai/gpt-4.1-mini",
    tools: {}, // We'll add tools in a later step
  });
  ```
package/.docs/raw/course/01-first-agent/12-connecting-tool-to-agent.md

@@ -19,7 +19,7 @@ export const financialAgent = new Agent({
  TOOLS
  - Use the getTransactions tool to fetch financial transaction data.
  - Analyze the transaction data to answer user questions about their spending.`,
-   model: openai("gpt-4o"),
+   model: "openai/gpt-4.1-mini",
    tools: { getTransactionsTool }, // Add our tool here
  });
  ```
package/.docs/raw/course/01-first-agent/16-adding-memory-to-agent.md

@@ -6,7 +6,6 @@ Now, let's update our agent to include memory. Open your `agents/index.ts` file

  ```typescript
  import { Agent } from "@mastra/core/agent";
- import { openai } from "@ai-sdk/openai";
  import { Memory } from "@mastra/memory";
  import { LibSQLStore } from "@mastra/libsql";
  import { getTransactionsTool } from "../tools";
@@ -20,7 +19,7 @@ export const financialAgent = new Agent({
    instructions: `ROLE DEFINITION
  // ... existing instructions ...
  `,
-   model: openai("gpt-4o"),
+   model: "openai/gpt-4.1-mini",
    tools: { getTransactionsTool },
    memory: new Memory({
      storage: new LibSQLStore({
package/.docs/raw/course/02-agent-tools-mcp/05-updating-your-agent.md

@@ -10,7 +10,7 @@ export const personalAssistantAgent = new Agent({

  Keep your responses concise and friendly.
  `,
-   model: openai("gpt-4o"),
+   model: "openai/gpt-4.1-mini",
    tools: { ...mcpTools }, // Add MCP tools to your agent
  });
  ```
package/.docs/raw/course/02-agent-tools-mcp/10-updating-agent-instructions-zapier.md

@@ -18,7 +18,7 @@ export const personalAssistantAgent = new Agent({

  Keep your responses concise and friendly.
  `,
-   model: openai("gpt-4o"),
+   model: "openai/gpt-4.1-mini",
    tools: { ...mcpTools },
    memory,
  });
package/.docs/raw/course/02-agent-tools-mcp/16-updating-agent-instructions-github.md

@@ -22,7 +22,7 @@ export const personalAssistantAgent = new Agent({

  Keep your responses concise and friendly.
  `,
-   model: openai("gpt-4o"),
+   model: "openai/gpt-4.1-mini",
    tools: { ...mcpTools },
    memory,
  });
package/.docs/raw/course/02-agent-tools-mcp/21-updating-agent-instructions-hackernews.md

@@ -27,7 +27,7 @@ export const personalAssistantAgent = new Agent({

  Keep your responses concise and friendly.
  `,
-   model: openai("gpt-4o"),
+   model: "openai/gpt-4.1-mini",
    tools: { ...mcpTools },
    memory,
  });
package/.docs/raw/course/02-agent-tools-mcp/27-updating-agent-instructions-filesystem.md

@@ -34,7 +34,7 @@ export const personalAssistantAgent = new Agent({

  Keep your responses concise and friendly.
  `,
-   model: openai("gpt-4o"),
+   model: "openai/gpt-4.1-mini",
    tools: { ...mcpTools },
    memory,
  });
package/.docs/raw/course/02-agent-tools-mcp/31-enhancing-memory-configuration.md

@@ -14,7 +14,7 @@ const memory = new Memory({
      id: "learning-memory-vector",
      connectionUrl: "file:../../memory.db",
    }),
-   embedder: openai.embedding("text-embedding-3-small"),
+   embedder: "openai/text-embedding-3-small",
    options: {
      // Keep last 20 messages in context
      lastMessages: 20,
@@ -61,7 +61,7 @@ export const personalAssistantAgent = new Agent({
  Always maintain a helpful and professional tone.
  Use the stored information to provide more personalized responses.
  `,
-   model: openai("gpt-4o"),
+   model: "openai/gpt-4.1-mini",
    tools: { ...mcpTools },
    memory,
  });
package/.docs/raw/course/03-agent-memory/04-creating-basic-memory-agent.md

@@ -8,7 +8,6 @@ Create or update your `src/mastra/agents/index.ts` file:
  import { Agent } from "@mastra/core/agent";
  import { Memory } from "@mastra/memory";
  import { LibSQLStore } from "@mastra/libsql";
- import { openai } from "@ai-sdk/openai";

  // Create a basic memory instance
  const memory = new Memory({
@@ -27,7 +26,7 @@ export const memoryAgent = new Agent({
  When a user shares information about themselves, acknowledge it and remember it for future reference.
  If asked about something mentioned earlier in the conversation, recall it accurately.
  `,
-   model: openai("gpt-4o"), // You can use "gpt-3.5-turbo" if you prefer
+   model: "openai/gpt-4.1-mini",
    memory: memory,
  });
  ```
package/.docs/raw/course/03-agent-memory/08-configuring-conversation-history.md

@@ -5,7 +5,6 @@ By default, the `Memory` instance includes the last 10 messages from the current
  ```typescript
  import { Agent } from "@mastra/core/agent";
  import { Memory } from "@mastra/memory";
- import { openai } from "@ai-sdk/openai";
  import { LibSQLStore } from "@mastra/libsql";

  // Create a memory instance with custom conversation history settings
@@ -27,7 +26,7 @@ export const memoryAgent = new Agent({
  When a user shares information about themselves, acknowledge it and remember it for future reference.
  If asked about something mentioned earlier in the conversation, recall it accurately.
  `,
-   model: openai("gpt-4o"),
+   model: "openai/gpt-4.1-mini",
    memory: memory,
  });
  ```
package/.docs/raw/course/03-agent-memory/16-configuring-semantic-recall.md

@@ -5,7 +5,6 @@ Let's update our agent with custom semantic recall settings:
  ```typescript
  import { Agent } from "@mastra/core/agent";
  import { Memory } from "@mastra/memory";
- import { openai } from "@ai-sdk/openai";
  import { LibSQLStore, LibSQLVector } from "@mastra/libsql";

  // Create a memory instance with semantic recall configuration
@@ -18,7 +17,7 @@ const memory = new Memory({
      id: "learning-memory-vector",
      connectionUrl: "file:../../vector.db", // relative path from the `.mastra/output` directory
    }), // Vector database for semantic search
-   embedder: openai.embedding("text-embedding-3-small"), // Embedder for message embeddings
+   embedder: "openai/text-embedding-3-small", // Embedder for message embeddings
    options: {
      lastMessages: 20, // Include the last 20 messages in the context
      semanticRecall: true, // Enable semantic recall with default settings
@@ -35,9 +34,9 @@ export const memoryAgent = new Agent({
  If asked about something mentioned earlier in the conversation, recall it accurately.
  You can also recall relevant information from older conversations when appropriate.
  `,
-   model: openai("gpt-4o"),
+   model: "openai/gpt-4.1-mini",
    memory: memory,
  });
  ```

- For semantic recall to work, you need to have a **vector store** configured. You also need to have an **embedder** configured. You may use any `@ai-sdk`-compatible embedding model for this. In this example, we're using OpenAI's `text-embedding-3-small` model.
+ For semantic recall to work, you need to have a **vector store** configured. You also need to have an **embedder** configured. You may use any compatible embedding model for this. In this example, we're using OpenAI's `openai/text-embedding-3-small` model.
package/.docs/raw/course/03-agent-memory/21-configuring-working-memory.md

@@ -5,7 +5,6 @@ Let's update our agent with working memory capabilities:
  ```typescript
  import { Agent } from "@mastra/core/agent";
  import { Memory } from "@mastra/memory";
- import { openai } from "@ai-sdk/openai";
  import { LibSQLStore, LibSQLVector } from "@mastra/libsql";

  // Create a memory instance with working memory configuration
@@ -18,7 +17,7 @@ const memory = new Memory({
      id: "learning-memory-vector",
      connectionUrl: "file:../../vector.db", // relative path from the `.mastra/output` directory
    }), // Vector database for semantic search
-   embedder: openai.embedding("text-embedding-3-small"), // Embedder for message embeddings
+   embedder: "openai/text-embedding-3-small", // Embedder for message embeddings
    options: {
      semanticRecall: {
        topK: 3,
@@ -52,7 +51,7 @@ export const memoryAgent = new Agent({
  Always refer to your working memory before asking for information the user has already provided.
  Use the information in your working memory to provide personalized responses.
  `,
-   model: openai("gpt-4o"),
+   model: "openai/gpt-4.1-mini",
    memory: memory,
  });
  ```
package/.docs/raw/course/03-agent-memory/22-custom-working-memory-templates.md

@@ -7,7 +7,6 @@ Let's update our agent with a custom working memory template:
  ```typescript
  import { Agent } from "@mastra/core/agent";
  import { Memory } from "@mastra/memory";
- import { openai } from "@ai-sdk/openai";

  // Create a memory instance with a custom working memory template
  const memory = new Memory({
@@ -18,7 +17,7 @@ const memory = new Memory({
    vector: new LibSQLVector({
      connectionUrl: "file:../../vector.db", // relative path from the `.mastra/output` directory
    }), // Vector database for semantic search
-   embedder: openai.embedding("text-embedding-3-small"), // Embedder for message embeddings
+   embedder: "openai/text-embedding-3-small", // Embedder for message embeddings
    options: {
      semanticRecall: {
        topK: 3,
@@ -74,7 +73,7 @@ export const memoryAgent = new Agent({
  When the user shares personal information such as their name, location, or preferences,
  acknowledge it and update your working memory accordingly.
  `,
-   model: openai("gpt-4o"),
+   model: "openai/gpt-4.1-mini",
    memory: memory,
  });
  ```
package/.docs/raw/course/03-agent-memory/25-combining-memory-features.md

@@ -10,7 +10,6 @@ Let's create a comprehensive agent that utilizes conversation history, semantic
  // src/mastra/agents/memory-agent.ts
  import { Agent } from "@mastra/core/agent";
  import { Memory } from "@mastra/memory";
- import { openai } from "@ai-sdk/openai";
  import { LibSQLStore, LibSQLVector } from "@mastra/libsql";

  // Create a comprehensive memory configuration
@@ -22,7 +21,7 @@ const memory = new Memory({
    vector: new LibSQLVector({
      connectionUrl: "file:../../vector.db", // relative path from the `.mastra/output` directory
    }),
-   embedder: openai.embedding("text-embedding-3-small"),
+   embedder: "openai/text-embedding-3-small",
    options: {
      // Conversation history configuration
      lastMessages: 20, // Include the last 20 messages in the context
@@ -6,7 +6,6 @@ Let's create a practical example of a memory-enhanced agent: a Personal Learning
  // src/mastra/agents/learning-assistant.ts
  import { Agent } from "@mastra/core/agent";
  import { Memory } from "@mastra/memory";
- import { openai } from "@ai-sdk/openai";
  import { LibSQLStore, LibSQLVector } from "@mastra/libsql";

  // Create a specialized memory configuration for the learning assistant
@@ -18,7 +17,7 @@ const learningMemory = new Memory({
  vector: new LibSQLVector({
  connectionUrl: "file:../../vector.db", // relative path from the `.mastra/output` directory
  }),
- embedder: openai.embedding("text-embedding-3-small"),
+ embedder: "openai/text-embedding-3-small",
  options: {
  lastMessages: 20,
  semanticRecall: {
@@ -88,7 +87,7 @@ export const learningAssistantAgent = new Agent({
  Always be encouraging and supportive. Focus on building the user's confidence
  and celebrating their progress.
  `,
- model: openai("gpt-4o"),
+ model: "openai/gpt-4.1-mini",
  memory: learningMemory,
  });

@@ -8,7 +8,6 @@ Create a new file for your agent in the `src/mastra/agents` directory. Use `cont

  ```typescript
  // src/mastra/agents/content-agent.ts
- import { openai } from "@ai-sdk/openai";
  import { Agent } from "@mastra/core/agent";

  export const contentAgent = new Agent({
@@ -23,7 +22,7 @@ export const contentAgent = new Agent({

  Always provide constructive, actionable feedback.
  `,
- model: openai("gpt-4o-mini"),
+ model: "openai/gpt-4.1-mini",
  });
  ```

@@ -32,7 +31,7 @@ export const contentAgent = new Agent({
  - **Name**: Unique identifier for the agent
  - **Description**: What the agent does
  - **Instructions**: Detailed prompts that guide the AI's behavior
- - **Model**: Which AI model to use (GPT-4o-mini is fast and cost-effective)
+ - **Model**: Which AI model to use (GPT-4.1-mini is fast and cost-effective)

  ## Registering and Testing Your Agent

@@ -0,0 +1,20 @@
+ ---
+ title: "Deploy to Cloud Providers | Deployment"
+ description: Deploy your Mastra applications to cloud providers
+ ---
+
+ # Deploy to Cloud Providers
+
+ Mastra applications can be deployed to cloud providers and serverless platforms. Mastra includes optional built-in deployers for Vercel, Netlify, and Cloudflare to automate the deployment process.
+
+ ## Supported Cloud Providers
+
+ The following guides show how to deploy Mastra to specific cloud providers:
+
+ - [Amazon EC2](/guides/v1/deployment/amazon-ec2)
+ - [AWS Lambda](/guides/v1/deployment/aws-lambda)
+ - [Azure App Services](/guides/v1/deployment/azure-app-services)
+ - [Cloudflare](/guides/v1/deployment/cloudflare-deployer)
+ - [Digital Ocean](/guides/v1/deployment/digital-ocean)
+ - [Netlify](/guides/v1/deployment/netlify-deployer)
+ - [Vercel](/guides/v1/deployment/vercel-deployer)
@@ -1,9 +1,9 @@
  ---
- title: "Building Mastra | Deployment"
+ title: "Deploy a Mastra Server | Deployment"
  description: "Learn how to build a Mastra server with build settings and deployment options."
  ---

- # Building Mastra
+ # Deploy a Mastra Server

  Mastra runs as a standard Node.js server and can be deployed across a wide range of environments.

@@ -1,11 +1,25 @@
  ---
- title: "Monorepo Deployment | Deployment"
+ title: "Deploy in a Monorepo | Deployment"
  description: Learn how to deploy Mastra applications that are part of a monorepo setup
  ---

- # Monorepo Deployment
+ # Deploy in a Monorepo

- Deploying Mastra in a monorepo follows the same approach as deploying a standalone application. While some [Cloud](./cloud-providers/) or [Serverless Platform](./cloud-providers/) providers may introduce extra requirements, the core setup is the same.
+ Deploying Mastra in a monorepo follows the same approach as deploying a standalone application. While some [Cloud](/guides/v1/deployment) or [Serverless Platform](/guides/v1/deployment) providers may introduce extra requirements, the core setup is the same.
+
+ ## Supported monorepos
+
+ Mastra works with:
+
+ - npm workspaces
+ - pnpm workspaces
+ - Yarn workspaces
+ - Turborepo
+
+ Known limitations:
+
+ - Bun workspaces - partial support; known issues
+ - Nx - You can use Nx's [supported dependency strategies](https://nx.dev/concepts/decisions/dependency-management) but you need to have `package.json` files inside your workspace packages

  ## Example monorepo

@@ -44,10 +58,15 @@ api/

  ## Deployment configuration

- The image below shows how to select `apps/api` as the project root when deploying to [Mastra Cloud](./mastra-cloud/overview). While the interface may differ between providers, the configuration remains the same.
+ The image below shows how to select `apps/api` as the project root when deploying to [Mastra Cloud](/docs/v1/mastra-cloud/overview). While the interface may differ between providers, the configuration remains the same.

  ![Deployment configuration](/img/monorepo/monorepo-mastra-cloud.jpg)

+
+ :::info
+ Make sure the correct package (e.g. `apps/api`) is selected as the deploy target. Selecting the wrong project root is a common deployment issue in monorepos.
+ :::
+
  ## Dependency management

  In a monorepo, keep dependencies consistent to avoid version conflicts and build errors.
@@ -55,43 +74,3 @@ In a monorepo, keep dependencies consistent to avoid version conflicts and build
  - Use a **single lockfile** at the project root so all packages resolve the same versions.
  - Align versions of **shared libraries** (like Mastra or frameworks) to prevent duplicates.

- ## Deployment pitfalls
-
- Common issues to watch for when deploying Mastra in a monorepo:
-
- - **Wrong project root**: make sure the correct package (e.g. `apps/api`) is selected as the deploy target.
-
- ## Bundler options
-
- Use `transpilePackages` to compile TypeScript workspace packages or libraries. List package names exactly as they appear in each `package.json`. Use `externals` to exclude dependencies resolved at runtime, and `sourcemap` to emit readable stack traces.
-
- ```typescript title="src/mastra/index.ts" showLineNumbers copy
- import { Mastra } from "@mastra/core";
-
- export const mastra = new Mastra({
- // ...
- bundler: {
- transpilePackages: ["utils"],
- externals: ["ui"],
- sourcemap: true,
- },
- });
- ```
-
- > See [Mastra Class](/reference/v1/core/mastra-class) for more configuration options.
-
- ## Supported monorepos
-
- Mastra works with:
-
- - npm workspaces
- - pnpm workspaces
- - Yarn workspaces
- - Turborepo
-
- Known limitations:
-
- - Bun workspaces — partial support; known issues
- - Nx — You can use Nx's [supported dependency strategies](https://nx.dev/concepts/decisions/dependency-management) but you need to have `package.json` files inside your workspace packages
-
- > If you are experiencing issues with monorepos see our: [Monorepos Support mega issue](https://github.com/mastra-ai/mastra/issues/6852).