@mastra/mcp-docs-server 1.0.0-beta.10 → 1.0.0-beta.13

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (168)
  1. package/.docs/organized/changelogs/%40mastra%2Fagent-builder.md +12 -12
  2. package/.docs/organized/changelogs/%40mastra%2Fai-sdk.md +50 -50
  3. package/.docs/organized/changelogs/%40mastra%2Fchroma.md +10 -10
  4. package/.docs/organized/changelogs/%40mastra%2Fclickhouse.md +45 -45
  5. package/.docs/organized/changelogs/%40mastra%2Fclient-js.md +109 -109
  6. package/.docs/organized/changelogs/%40mastra%2Fcloudflare-d1.md +39 -39
  7. package/.docs/organized/changelogs/%40mastra%2Fcloudflare.md +39 -39
  8. package/.docs/organized/changelogs/%40mastra%2Fconvex.md +38 -0
  9. package/.docs/organized/changelogs/%40mastra%2Fcore.md +264 -264
  10. package/.docs/organized/changelogs/%40mastra%2Fdeployer-cloud.md +25 -25
  11. package/.docs/organized/changelogs/%40mastra%2Fdeployer.md +37 -37
  12. package/.docs/organized/changelogs/%40mastra%2Fdynamodb.md +39 -39
  13. package/.docs/organized/changelogs/%40mastra%2Ffastembed.md +6 -0
  14. package/.docs/organized/changelogs/%40mastra%2Flance.md +39 -39
  15. package/.docs/organized/changelogs/%40mastra%2Flibsql.md +45 -45
  16. package/.docs/organized/changelogs/%40mastra%2Fmcp-docs-server.md +22 -22
  17. package/.docs/organized/changelogs/%40mastra%2Fmemory.md +13 -13
  18. package/.docs/organized/changelogs/%40mastra%2Fmongodb.md +39 -39
  19. package/.docs/organized/changelogs/%40mastra%2Fmssql.md +39 -39
  20. package/.docs/organized/changelogs/%40mastra%2Fpg.md +45 -45
  21. package/.docs/organized/changelogs/%40mastra%2Fplayground-ui.md +104 -104
  22. package/.docs/organized/changelogs/%40mastra%2Freact.md +66 -0
  23. package/.docs/organized/changelogs/%40mastra%2Fschema-compat.md +6 -0
  24. package/.docs/organized/changelogs/%40mastra%2Fserver.md +59 -59
  25. package/.docs/organized/changelogs/%40mastra%2Fupstash.md +39 -39
  26. package/.docs/organized/changelogs/create-mastra.md +31 -31
  27. package/.docs/organized/changelogs/mastra.md +49 -49
  28. package/.docs/organized/code-examples/quick-start.md +0 -4
  29. package/.docs/organized/code-examples/stock-price-tool.md +21 -2
  30. package/.docs/raw/agents/agent-approval.mdx +136 -2
  31. package/.docs/raw/agents/agent-memory.mdx +4 -4
  32. package/.docs/raw/agents/guardrails.mdx +44 -7
  33. package/.docs/raw/agents/networks.mdx +1 -1
  34. package/.docs/raw/agents/overview.mdx +2 -2
  35. package/.docs/raw/agents/processors.mdx +151 -0
  36. package/.docs/raw/agents/using-tools.mdx +1 -1
  37. package/.docs/raw/course/01-first-agent/07-creating-your-agent.md +1 -2
  38. package/.docs/raw/course/01-first-agent/12-connecting-tool-to-agent.md +1 -1
  39. package/.docs/raw/course/01-first-agent/16-adding-memory-to-agent.md +1 -2
  40. package/.docs/raw/course/02-agent-tools-mcp/05-updating-your-agent.md +1 -1
  41. package/.docs/raw/course/02-agent-tools-mcp/10-updating-agent-instructions-zapier.md +1 -1
  42. package/.docs/raw/course/02-agent-tools-mcp/16-updating-agent-instructions-github.md +1 -1
  43. package/.docs/raw/course/02-agent-tools-mcp/21-updating-agent-instructions-hackernews.md +1 -1
  44. package/.docs/raw/course/02-agent-tools-mcp/27-updating-agent-instructions-filesystem.md +1 -1
  45. package/.docs/raw/course/02-agent-tools-mcp/31-enhancing-memory-configuration.md +2 -2
  46. package/.docs/raw/course/03-agent-memory/04-creating-basic-memory-agent.md +1 -2
  47. package/.docs/raw/course/03-agent-memory/08-configuring-conversation-history.md +1 -2
  48. package/.docs/raw/course/03-agent-memory/16-configuring-semantic-recall.md +3 -4
  49. package/.docs/raw/course/03-agent-memory/21-configuring-working-memory.md +2 -3
  50. package/.docs/raw/course/03-agent-memory/22-custom-working-memory-templates.md +2 -3
  51. package/.docs/raw/course/03-agent-memory/25-combining-memory-features.md +1 -2
  52. package/.docs/raw/course/03-agent-memory/27-creating-learning-assistant.md +2 -3
  53. package/.docs/raw/course/04-workflows/11-creating-an-ai-agent.md +2 -3
  54. package/.docs/raw/deployment/cloud-providers.mdx +20 -0
  55. package/.docs/raw/deployment/{building-mastra.mdx → mastra-server.mdx} +2 -2
  56. package/.docs/raw/deployment/monorepo.mdx +23 -44
  57. package/.docs/raw/deployment/overview.mdx +28 -53
  58. package/.docs/raw/deployment/web-framework.mdx +12 -14
  59. package/.docs/raw/getting-started/mcp-docs-server.mdx +57 -0
  60. package/.docs/raw/getting-started/start.mdx +10 -1
  61. package/.docs/raw/getting-started/studio.mdx +25 -2
  62. package/.docs/raw/guides/build-your-ui/ai-sdk-ui.mdx +1021 -67
  63. package/.docs/raw/{deployment/cloud-providers → guides/deployment}/aws-lambda.mdx +3 -6
  64. package/.docs/raw/{deployment/cloud-providers → guides/deployment}/azure-app-services.mdx +4 -6
  65. package/.docs/raw/{deployment/cloud-providers → guides/deployment}/cloudflare-deployer.mdx +4 -0
  66. package/.docs/raw/{deployment/cloud-providers → guides/deployment}/digital-ocean.mdx +3 -6
  67. package/.docs/raw/guides/deployment/index.mdx +32 -0
  68. package/.docs/raw/{deployment/cloud-providers → guides/deployment}/netlify-deployer.mdx +4 -0
  69. package/.docs/raw/{deployment/cloud-providers → guides/deployment}/vercel-deployer.mdx +4 -0
  70. package/.docs/raw/guides/getting-started/express.mdx +71 -152
  71. package/.docs/raw/guides/getting-started/hono.mdx +227 -0
  72. package/.docs/raw/guides/getting-started/next-js.mdx +173 -63
  73. package/.docs/raw/guides/getting-started/vite-react.mdx +307 -137
  74. package/.docs/raw/guides/guide/research-assistant.mdx +4 -4
  75. package/.docs/raw/guides/migrations/upgrade-to-v1/agent.mdx +70 -0
  76. package/.docs/raw/guides/migrations/upgrade-to-v1/client.mdx +17 -0
  77. package/.docs/raw/guides/migrations/upgrade-to-v1/overview.mdx +6 -0
  78. package/.docs/raw/index.mdx +1 -1
  79. package/.docs/raw/{deployment/mastra-cloud → mastra-cloud}/dashboard.mdx +2 -6
  80. package/.docs/raw/{deployment/mastra-cloud → mastra-cloud}/observability.mdx +1 -5
  81. package/.docs/raw/{deployment/mastra-cloud → mastra-cloud}/overview.mdx +2 -6
  82. package/.docs/raw/{deployment/mastra-cloud → mastra-cloud}/setting-up.mdx +3 -6
  83. package/.docs/raw/memory/overview.mdx +1 -1
  84. package/.docs/raw/memory/storage/memory-with-libsql.mdx +1 -1
  85. package/.docs/raw/memory/storage/memory-with-mongodb.mdx +1 -1
  86. package/.docs/raw/memory/storage/memory-with-pg.mdx +1 -1
  87. package/.docs/raw/memory/storage/memory-with-upstash.mdx +1 -1
  88. package/.docs/raw/{server-db/storage.mdx → memory/storage/overview.mdx} +2 -2
  89. package/.docs/raw/observability/logging.mdx +1 -1
  90. package/.docs/raw/observability/tracing/exporters/cloud.mdx +1 -1
  91. package/.docs/raw/observability/tracing/exporters/default.mdx +1 -1
  92. package/.docs/raw/rag/chunking-and-embedding.mdx +12 -25
  93. package/.docs/raw/rag/graph-rag.mdx +220 -0
  94. package/.docs/raw/rag/overview.mdx +1 -2
  95. package/.docs/raw/rag/retrieval.mdx +13 -29
  96. package/.docs/raw/rag/vector-databases.mdx +7 -3
  97. package/.docs/raw/reference/agents/agent.mdx +11 -4
  98. package/.docs/raw/reference/agents/getDefaultGenerateOptions.mdx +1 -1
  99. package/.docs/raw/reference/agents/getDefaultOptions.mdx +1 -1
  100. package/.docs/raw/reference/agents/getDefaultStreamOptions.mdx +1 -1
  101. package/.docs/raw/reference/agents/getInstructions.mdx +1 -1
  102. package/.docs/raw/reference/agents/getLLM.mdx +1 -1
  103. package/.docs/raw/reference/agents/getMemory.mdx +1 -1
  104. package/.docs/raw/reference/agents/getModel.mdx +1 -1
  105. package/.docs/raw/reference/agents/listScorers.mdx +1 -1
  106. package/.docs/raw/reference/ai-sdk/chat-route.mdx +1 -1
  107. package/.docs/raw/reference/ai-sdk/handle-chat-stream.mdx +1 -1
  108. package/.docs/raw/reference/ai-sdk/handle-network-stream.mdx +1 -1
  109. package/.docs/raw/reference/ai-sdk/handle-workflow-stream.mdx +1 -1
  110. package/.docs/raw/reference/ai-sdk/network-route.mdx +1 -1
  111. package/.docs/raw/reference/ai-sdk/to-ai-sdk-v4-messages.mdx +127 -0
  112. package/.docs/raw/reference/ai-sdk/to-ai-sdk-v5-messages.mdx +107 -0
  113. package/.docs/raw/reference/ai-sdk/workflow-route.mdx +1 -1
  114. package/.docs/raw/reference/auth/auth0.mdx +1 -1
  115. package/.docs/raw/reference/auth/clerk.mdx +1 -1
  116. package/.docs/raw/reference/auth/firebase.mdx +1 -1
  117. package/.docs/raw/reference/auth/jwt.mdx +1 -1
  118. package/.docs/raw/reference/auth/supabase.mdx +1 -1
  119. package/.docs/raw/reference/auth/workos.mdx +1 -1
  120. package/.docs/raw/reference/cli/mastra.mdx +1 -1
  121. package/.docs/raw/reference/client-js/mastra-client.mdx +1 -1
  122. package/.docs/raw/reference/client-js/workflows.mdx +20 -0
  123. package/.docs/raw/reference/core/getServer.mdx +3 -3
  124. package/.docs/raw/reference/core/getStorage.mdx +1 -1
  125. package/.docs/raw/reference/core/getStoredAgentById.mdx +1 -1
  126. package/.docs/raw/reference/core/listStoredAgents.mdx +1 -1
  127. package/.docs/raw/reference/core/setStorage.mdx +1 -1
  128. package/.docs/raw/reference/logging/pino-logger.mdx +1 -1
  129. package/.docs/raw/reference/processors/processor-interface.mdx +314 -13
  130. package/.docs/raw/reference/rag/database-config.mdx +1 -1
  131. package/.docs/raw/reference/server/create-route.mdx +1 -1
  132. package/.docs/raw/reference/server/express-adapter.mdx +4 -4
  133. package/.docs/raw/reference/server/hono-adapter.mdx +4 -4
  134. package/.docs/raw/reference/server/mastra-server.mdx +2 -2
  135. package/.docs/raw/reference/server/routes.mdx +28 -1
  136. package/.docs/raw/reference/streaming/ChunkType.mdx +23 -2
  137. package/.docs/raw/reference/streaming/agents/stream.mdx +38 -29
  138. package/.docs/raw/reference/streaming/workflows/stream.mdx +33 -20
  139. package/.docs/raw/reference/tools/create-tool.mdx +23 -1
  140. package/.docs/raw/reference/tools/graph-rag-tool.mdx +3 -3
  141. package/.docs/raw/reference/tools/vector-query-tool.mdx +3 -3
  142. package/.docs/raw/reference/workflows/run-methods/startAsync.mdx +143 -0
  143. package/.docs/raw/reference/workflows/workflow-methods/create-run.mdx +35 -0
  144. package/.docs/raw/reference/workflows/workflow-methods/foreach.mdx +68 -3
  145. package/.docs/raw/reference/workflows/workflow.mdx +37 -0
  146. package/.docs/raw/{auth → server/auth}/auth0.mdx +1 -1
  147. package/.docs/raw/{auth → server/auth}/clerk.mdx +1 -1
  148. package/.docs/raw/{auth → server/auth}/firebase.mdx +1 -1
  149. package/.docs/raw/{auth → server/auth}/index.mdx +6 -6
  150. package/.docs/raw/{auth → server/auth}/jwt.mdx +1 -1
  151. package/.docs/raw/{auth → server/auth}/supabase.mdx +1 -1
  152. package/.docs/raw/{auth → server/auth}/workos.mdx +1 -1
  153. package/.docs/raw/{server-db → server}/custom-adapters.mdx +3 -3
  154. package/.docs/raw/{server-db → server}/custom-api-routes.mdx +1 -1
  155. package/.docs/raw/{server-db → server}/mastra-client.mdx +2 -2
  156. package/.docs/raw/{server-db → server}/mastra-server.mdx +12 -10
  157. package/.docs/raw/{server-db → server}/middleware.mdx +2 -2
  158. package/.docs/raw/{server-db → server}/request-context.mdx +3 -3
  159. package/.docs/raw/{server-db → server}/server-adapters.mdx +6 -6
  160. package/.docs/raw/tools-mcp/overview.mdx +2 -2
  161. package/.docs/raw/workflows/control-flow.mdx +348 -2
  162. package/.docs/raw/workflows/error-handling.mdx +162 -1
  163. package/.docs/raw/workflows/overview.mdx +2 -2
  164. package/CHANGELOG.md +21 -0
  165. package/package.json +5 -5
  166. package/.docs/organized/changelogs/%40internal%2Fai-sdk-v4.md +0 -1
  167. package/.docs/raw/deployment/cloud-providers/index.mdx +0 -55
  168. package/.docs/raw/{deployment/cloud-providers → guides/deployment}/amazon-ec2.mdx +0 -0
@@ -12,6 +12,8 @@ Processors are configured as:
  - **`inputProcessors`**: Run before messages reach the language model.
  - **`outputProcessors`**: Run after the language model generates a response, but before it's returned to users.

+ You can use individual `Processor` objects or compose them into workflows using Mastra's workflow primitives. Workflows give you advanced control over processor execution order, parallel processing, and conditional logic.
+
  Some processors implement both input and output logic and can be used in either array depending on where the transformation should occur.

  ## When to use processors
@@ -168,6 +170,81 @@ This is useful for:
  - Filtering or modifying semantic recall content to prevent "prompt too long" errors
  - Dynamically adjusting system instructions based on the conversation

+ ### Per-step processing with processInputStep
+
+ While `processInput` runs once at the start of agent execution, `processInputStep` runs at **each step** of the agentic loop (including tool call continuations). This enables per-step configuration changes like dynamic model switching or tool choice modifications.
+
+ ```typescript title="src/mastra/processors/step-processor.ts" showLineNumbers copy
+ import type { Processor, ProcessInputStepArgs, ProcessInputStepResult } from "@mastra/core";
+
+ export class DynamicModelProcessor implements Processor {
+   id = "dynamic-model";
+
+   async processInputStep({
+     stepNumber,
+     model,
+     toolChoice,
+     messageList,
+   }: ProcessInputStepArgs): Promise<ProcessInputStepResult> {
+     // Use a fast model for initial response
+     if (stepNumber === 0) {
+       return { model: "openai/gpt-4o-mini" };
+     }
+
+     // Disable tools after 5 steps to force completion
+     if (stepNumber > 5) {
+       return { toolChoice: "none" };
+     }
+
+     // No changes for other steps
+     return {};
+   }
+ }
+ ```
+
+ The `processInputStep` method receives:
+ - `stepNumber`: Current step in the agentic loop (0-indexed)
+ - `steps`: Results from previous steps
+ - `messages`: Current messages snapshot (read-only)
+ - `systemMessages`: Current system messages (read-only)
+ - `messageList`: The full MessageList instance for mutations
+ - `model`: Current model being used
+ - `tools`: Current tools available for this step
+ - `toolChoice`: Current tool choice setting
+ - `activeTools`: Currently active tools
+ - `providerOptions`: Provider-specific options
+ - `modelSettings`: Model settings like temperature
+ - `structuredOutput`: Structured output configuration
+
+ The method can return any combination of:
+ - `model`: Change the model for this step
+ - `tools`: Replace or add tools (use spread to merge: `{ tools: { ...tools, newTool } }`)
+ - `toolChoice`: Change tool selection behavior
+ - `activeTools`: Filter which tools are available
+ - `messages`: Replace messages (applied to messageList)
+ - `systemMessages`: Replace all system messages
+ - `providerOptions`: Modify provider options
+ - `modelSettings`: Modify model settings
+ - `structuredOutput`: Modify structured output configuration
+
+ #### Using prepareStep callback
+
+ For simpler per-step logic, you can use the `prepareStep` callback on `generate()` or `stream()` instead of creating a full processor:
+
+ ```typescript
+ await agent.generate({
+   prompt: "Complex task",
+   prepareStep: async ({ stepNumber, model }) => {
+     if (stepNumber === 0) {
+       return { model: "openai/gpt-4o-mini" };
+     }
+     if (stepNumber > 5) {
+       return { toolChoice: "none" };
+     }
+   },
+ });
+ ```
+
  ### Custom output processor

  ```typescript title="src/mastra/processors/custom-output.ts" showLineNumbers copy
@@ -273,7 +350,81 @@ const agent = new Agent({

  > **Note:** The example above filters tool calls and limits tokens for the LLM, but these filtered messages will still be saved to memory. To also filter messages before they're saved to memory, manually add memory processors before utility processors. See [Memory Processors](/docs/v1/memory/memory-processors#manual-control-and-deduplication) for details.

+ ## Using workflows as processors
+
+ You can use Mastra workflows as processors to create complex processing pipelines with parallel execution, conditional branching, and error handling:
+
+ ```typescript title="src/mastra/processors/moderation-workflow.ts" showLineNumbers copy
+ import { createWorkflow, createStep } from "@mastra/core/workflows";
+ import { ProcessorStepSchema } from "@mastra/core/processors";
+ import { Agent } from "@mastra/core/agent";
+
+ // Create a workflow that runs multiple checks in parallel
+ const moderationWorkflow = createWorkflow({
+   id: "moderation-pipeline",
+   inputSchema: ProcessorStepSchema,
+   outputSchema: ProcessorStepSchema,
+ })
+   .then(createStep(new LengthValidator({ maxLength: 10000 })))
+   .parallel([
+     createStep(new PIIDetector({ strategy: "redact" })),
+     createStep(new ToxicityChecker({ threshold: 0.8 })),
+   ])
+   .commit();
+
+ // Use the workflow as an input processor
+ const agent = new Agent({
+   id: "moderated-agent",
+   name: "Moderated Agent",
+   model: "openai/gpt-4o",
+   inputProcessors: [moderationWorkflow],
+ });
+ ```
+
+ When an agent is registered with Mastra, processor workflows are automatically registered as workflows, allowing you to view and debug them in the playground.
+
+ ## Retry mechanism
+
+ Processors can request that the LLM retry its response with feedback. This is useful for implementing quality checks, output validation, or iterative refinement:
+
+ ```typescript title="src/mastra/processors/quality-checker.ts" showLineNumbers copy
+ import type { Processor } from "@mastra/core";
+
+ export class QualityChecker implements Processor {
+   id = "quality-checker";
+
+   async processOutputStep({ text, abort, retryCount }) {
+     const qualityScore = await evaluateQuality(text);
+
+     if (qualityScore < 0.7 && retryCount < 3) {
+       // Request a retry with feedback for the LLM
+       abort("Response quality score too low. Please provide a more detailed answer.", {
+         retry: true,
+         metadata: { score: qualityScore },
+       });
+     }
+
+     return [];
+   }
+ }
+
+ const agent = new Agent({
+   id: "quality-agent",
+   name: "Quality Agent",
+   model: "openai/gpt-4o",
+   outputProcessors: [new QualityChecker()],
+   maxProcessorRetries: 3, // Maximum retry attempts (default: 3)
+ });
+ ```
+
+ The retry mechanism:
+ - Only works in `processOutputStep` and `processInputStep` methods
+ - Replays the step with the abort reason added as context for the LLM
+ - Tracks retry count via the `retryCount` parameter
+ - Respects `maxProcessorRetries` limit on the agent
+
  ## Related documentation

  - [Guardrails](/docs/v1/agents/guardrails) - Security and validation processors
  - [Memory Processors](/docs/v1/memory/memory-processors) - Memory-specific processors and automatic integration
+ - [Processor Interface](/reference/v1/processors/processor-interface) - Full API reference for processors
@@ -94,4 +94,4 @@ export const weatherAgent = new Agent({

  - [MCP Overview](/docs/v1/mcp/overview)
  - [Agent Memory](/docs/v1/agents/agent-memory)
- - [Request Context](/docs/v1/server-db/request-context)
+ - [Request Context](/docs/v1/server/request-context)
@@ -8,7 +8,6 @@ Now add the necessary imports at the top of your file:

  ```typescript
  import { Agent } from "@mastra/core/agent";
- import { openai } from "@ai-sdk/openai";
  // We'll import our tool in a later step
  ```

@@ -43,7 +42,7 @@ SUCCESS CRITERIA
  - Deliver accurate and helpful analysis of transaction data.
  - Achieve high user satisfaction through clear and helpful responses.
  - Maintain user trust by ensuring data privacy and security.`,
- model: openai("gpt-4o"), // You can use "gpt-3.5-turbo" if you prefer
+ model: "openai/gpt-4.1-mini",
  tools: {}, // We'll add tools in a later step
  });
  ```
@@ -19,7 +19,7 @@ export const financialAgent = new Agent({
  TOOLS
  - Use the getTransactions tool to fetch financial transaction data.
  - Analyze the transaction data to answer user questions about their spending.`,
- model: openai("gpt-4o"),
+ model: "openai/gpt-4.1-mini",
  tools: { getTransactionsTool }, // Add our tool here
  });
  ```
@@ -6,7 +6,6 @@ Now, let's update our agent to include memory. Open your `agents/index.ts` file

  ```typescript
  import { Agent } from "@mastra/core/agent";
- import { openai } from "@ai-sdk/openai";
  import { Memory } from "@mastra/memory";
  import { LibSQLStore } from "@mastra/libsql";
  import { getTransactionsTool } from "../tools";
@@ -20,7 +19,7 @@ export const financialAgent = new Agent({
  instructions: `ROLE DEFINITION
  // ... existing instructions ...
  `,
- model: openai("gpt-4o"),
+ model: "openai/gpt-4.1-mini",
  tools: { getTransactionsTool },
  memory: new Memory({
  storage: new LibSQLStore({
@@ -10,7 +10,7 @@ export const personalAssistantAgent = new Agent({

  Keep your responses concise and friendly.
  `,
- model: openai("gpt-4o"),
+ model: "openai/gpt-4.1-mini",
  tools: { ...mcpTools }, // Add MCP tools to your agent
  });
  ```
@@ -18,7 +18,7 @@ export const personalAssistantAgent = new Agent({

  Keep your responses concise and friendly.
  `,
- model: openai("gpt-4o"),
+ model: "openai/gpt-4.1-mini",
  tools: { ...mcpTools },
  memory,
  });
@@ -22,7 +22,7 @@ export const personalAssistantAgent = new Agent({

  Keep your responses concise and friendly.
  `,
- model: openai("gpt-4o"),
+ model: "openai/gpt-4.1-mini",
  tools: { ...mcpTools },
  memory,
  });
@@ -27,7 +27,7 @@ export const personalAssistantAgent = new Agent({

  Keep your responses concise and friendly.
  `,
- model: openai("gpt-4o"),
+ model: "openai/gpt-4.1-mini",
  tools: { ...mcpTools },
  memory,
  });
@@ -34,7 +34,7 @@ export const personalAssistantAgent = new Agent({

  Keep your responses concise and friendly.
  `,
- model: openai("gpt-4o"),
+ model: "openai/gpt-4.1-mini",
  tools: { ...mcpTools },
  memory,
  });
@@ -14,7 +14,7 @@ const memory = new Memory({
  id: "learning-memory-vector",
  connectionUrl: "file:../../memory.db",
  }),
- embedder: openai.embedding("text-embedding-3-small"),
+ embedder: "openai/text-embedding-3-small",
  options: {
  // Keep last 20 messages in context
  lastMessages: 20,
@@ -61,7 +61,7 @@ export const personalAssistantAgent = new Agent({
  Always maintain a helpful and professional tone.
  Use the stored information to provide more personalized responses.
  `,
- model: openai("gpt-4o"),
+ model: "openai/gpt-4.1-mini",
  tools: { ...mcpTools },
  memory,
  });
@@ -8,7 +8,6 @@ Create or update your `src/mastra/agents/index.ts` file:
  import { Agent } from "@mastra/core/agent";
  import { Memory } from "@mastra/memory";
  import { LibSQLStore } from "@mastra/libsql";
- import { openai } from "@ai-sdk/openai";

  // Create a basic memory instance
  const memory = new Memory({
@@ -27,7 +26,7 @@ export const memoryAgent = new Agent({
  When a user shares information about themselves, acknowledge it and remember it for future reference.
  If asked about something mentioned earlier in the conversation, recall it accurately.
  `,
- model: openai("gpt-4o"), // You can use "gpt-3.5-turbo" if you prefer
+ model: "openai/gpt-4.1-mini",
  memory: memory,
  });
  ```
@@ -5,7 +5,6 @@ By default, the `Memory` instance includes the last 10 messages from the current

  ```typescript
  import { Agent } from "@mastra/core/agent";
  import { Memory } from "@mastra/memory";
- import { openai } from "@ai-sdk/openai";
  import { LibSQLStore } from "@mastra/libsql";

  // Create a memory instance with custom conversation history settings
@@ -27,7 +26,7 @@ export const memoryAgent = new Agent({
  When a user shares information about themselves, acknowledge it and remember it for future reference.
  If asked about something mentioned earlier in the conversation, recall it accurately.
  `,
- model: openai("gpt-4o"),
+ model: "openai/gpt-4.1-mini",
  memory: memory,
  });
  ```
@@ -5,7 +5,6 @@ Let's update our agent with custom semantic recall settings:

  ```typescript
  import { Agent } from "@mastra/core/agent";
  import { Memory } from "@mastra/memory";
- import { openai } from "@ai-sdk/openai";
  import { LibSQLStore, LibSQLVector } from "@mastra/libsql";

  // Create a memory instance with semantic recall configuration
@@ -18,7 +17,7 @@ const memory = new Memory({
  id: "learning-memory-vector",
  connectionUrl: "file:../../vector.db", // relative path from the `.mastra/output` directory
  }), // Vector database for semantic search
- embedder: openai.embedding("text-embedding-3-small"), // Embedder for message embeddings
+ embedder: "openai/text-embedding-3-small", // Embedder for message embeddings
  options: {
  lastMessages: 20, // Include the last 20 messages in the context
  semanticRecall: true, // Enable semantic recall with default settings
@@ -35,9 +34,9 @@ export const memoryAgent = new Agent({
  If asked about something mentioned earlier in the conversation, recall it accurately.
  You can also recall relevant information from older conversations when appropriate.
  `,
- model: openai("gpt-4o"),
+ model: "openai/gpt-4.1-mini",
  memory: memory,
  });
  ```

- For semantic recall to work, you need to have a **vector store** configured. You also need to have an **embedder** configured. You may use any `@ai-sdk`-compatible embedding model for this. In this example, we're using OpenAI's `text-embedding-3-small` model.
+ For semantic recall to work, you need to have a **vector store** configured. You also need to have an **embedder** configured. You may use any compatible embedding model for this. In this example, we're using OpenAI's `openai/text-embedding-3-small` model.
@@ -5,7 +5,6 @@ Let's update our agent with working memory capabilities:

  ```typescript
  import { Agent } from "@mastra/core/agent";
  import { Memory } from "@mastra/memory";
- import { openai } from "@ai-sdk/openai";
  import { LibSQLStore, LibSQLVector } from "@mastra/libsql";

  // Create a memory instance with working memory configuration
@@ -18,7 +17,7 @@ const memory = new Memory({
  id: "learning-memory-vector",
  connectionUrl: "file:../../vector.db", // relative path from the `.mastra/output` directory
  }), // Vector database for semantic search
- embedder: openai.embedding("text-embedding-3-small"), // Embedder for message embeddings
+ embedder: "openai/text-embedding-3-small", // Embedder for message embeddings
  options: {
  semanticRecall: {
  topK: 3,
@@ -52,7 +51,7 @@ export const memoryAgent = new Agent({
  Always refer to your working memory before asking for information the user has already provided.
  Use the information in your working memory to provide personalized responses.
  `,
- model: openai("gpt-4o"),
+ model: "openai/gpt-4.1-mini",
  memory: memory,
  });
  ```
@@ -7,7 +7,6 @@ Let's update our agent with a custom working memory template:

  ```typescript
  import { Agent } from "@mastra/core/agent";
  import { Memory } from "@mastra/memory";
- import { openai } from "@ai-sdk/openai";

  // Create a memory instance with a custom working memory template
  const memory = new Memory({
@@ -18,7 +17,7 @@ const memory = new Memory({
  vector: new LibSQLVector({
  connectionUrl: "file:../../vector.db", // relative path from the `.mastra/output` directory
  }), // Vector database for semantic search
- embedder: openai.embedding("text-embedding-3-small"), // Embedder for message embeddings
+ embedder: "openai/text-embedding-3-small", // Embedder for message embeddings
  options: {
  semanticRecall: {
  topK: 3,
@@ -74,7 +73,7 @@ export const memoryAgent = new Agent({
  When the user shares personal information such as their name, location, or preferences,
  acknowledge it and update your working memory accordingly.
  `,
- model: openai("gpt-4o"),
+ model: "openai/gpt-4.1-mini",
  memory: memory,
  });
  ```
@@ -10,7 +10,6 @@ Let's create a comprehensive agent that utilizes conversation history, semantic
  // src/mastra/agents/memory-agent.ts
  import { Agent } from "@mastra/core/agent";
  import { Memory } from "@mastra/memory";
- import { openai } from "@ai-sdk/openai";
  import { LibSQLStore, LibSQLVector } from "@mastra/libsql";

  // Create a comprehensive memory configuration
@@ -22,7 +21,7 @@ const memory = new Memory({
  vector: new LibSQLVector({
  connectionUrl: "file:../../vector.db", // relative path from the `.mastra/output` directory
  }),
- embedder: openai.embedding("text-embedding-3-small"),
+ embedder: "openai/text-embedding-3-small",
  options: {
  // Conversation history configuration
  lastMessages: 20, // Include the last 20 messages in the context
@@ -6,7 +6,6 @@ Let's create a practical example of a memory-enhanced agent: a Personal Learning
  // src/mastra/agents/learning-assistant.ts
  import { Agent } from "@mastra/core/agent";
  import { Memory } from "@mastra/memory";
- import { openai } from "@ai-sdk/openai";
  import { LibSQLStore, LibSQLVector } from "@mastra/libsql";

  // Create a specialized memory configuration for the learning assistant
@@ -18,7 +17,7 @@ const learningMemory = new Memory({
  vector: new LibSQLVector({
  connectionUrl: "file:../../vector.db", // relative path from the `.mastra/output` directory
  }),
- embedder: openai.embedding("text-embedding-3-small"),
+ embedder: "openai/text-embedding-3-small",
  options: {
  lastMessages: 20,
  semanticRecall: {
@@ -88,7 +87,7 @@ export const learningAssistantAgent = new Agent({
  Always be encouraging and supportive. Focus on building the user's confidence
  and celebrating their progress.
  `,
- model: openai("gpt-4o"),
+ model: "openai/gpt-4.1-mini",
  memory: learningMemory,
  });

@@ -8,7 +8,6 @@ Create a new file for your agent in the `src/mastra/agents` directory. Use `cont

  ```typescript
  // src/mastra/agents/content-agent.ts
- import { openai } from "@ai-sdk/openai";
  import { Agent } from "@mastra/core/agent";

  export const contentAgent = new Agent({
@@ -23,7 +22,7 @@ export const contentAgent = new Agent({
23
22
 
24
23
  Always provide constructive, actionable feedback.
25
24
  `,
26
- model: openai("gpt-4o-mini"),
25
+ model: "openai/gpt-4.1-mini",
27
26
  });
28
27
  ```
29
28
 
@@ -32,7 +31,7 @@ export const contentAgent = new Agent({
32
31
  - **Name**: Unique identifier for the agent
33
32
  - **Description**: What the agent does
34
33
  - **Instructions**: Detailed prompts that guide the AI's behavior
35
- - **Model**: Which AI model to use (GPT-4o-mini is fast and cost-effective)
34
+ - **Model**: Which AI model to use (GPT-4.1-mini is fast and cost-effective)
36
35
 
37
36
  ## Registering and Testing Your Agent
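As a hedged sketch of what registration involves, an agent is typically passed to the `Mastra` constructor in the project's entry file; the file path and layout here are assumptions based on the conventions shown above:

```typescript
// Sketch only: file layout is assumed from the conventions shown above.
// src/mastra/index.ts
import { Mastra } from "@mastra/core";
import { contentAgent } from "./agents/content-agent";

export const mastra = new Mastra({
  agents: { contentAgent }, // makes the agent available to the dev playground and API
});
```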
 
@@ -0,0 +1,20 @@
+ ---
+ title: "Deploy to Cloud Providers | Deployment"
+ description: Deploy your Mastra applications to cloud providers
+ ---
+
+ # Deploy to Cloud Providers
+
+ Mastra applications can be deployed to cloud providers and serverless platforms. Mastra includes optional built-in deployers for Vercel, Netlify, and Cloudflare to automate the deployment process.
+
+ ## Supported Cloud Providers
+
+ The following guides show how to deploy Mastra to specific cloud providers:
+
+ - [Amazon EC2](/guides/v1/deployment/amazon-ec2)
+ - [AWS Lambda](/guides/v1/deployment/aws-lambda)
+ - [Azure App Services](/guides/v1/deployment/azure-app-services)
+ - [Cloudflare](/guides/v1/deployment/cloudflare-deployer)
+ - [Digital Ocean](/guides/v1/deployment/digital-ocean)
+ - [Netlify](/guides/v1/deployment/netlify-deployer)
+ - [Vercel](/guides/v1/deployment/vercel-deployer)
@@ -1,9 +1,9 @@
  ---
- title: "Building Mastra | Deployment"
+ title: "Deploy a Mastra Server | Deployment"
  description: "Learn how to build a Mastra server with build settings and deployment options."
  ---
 
- # Building Mastra
+ # Deploy a Mastra Server
 
  Mastra runs as a standard Node.js server and can be deployed across a wide range of environments.
 
@@ -1,11 +1,25 @@
  ---
- title: "Monorepo Deployment | Deployment"
+ title: "Deploy in a Monorepo | Deployment"
  description: Learn how to deploy Mastra applications that are part of a monorepo setup
  ---
 
- # Monorepo Deployment
+ # Deploy in a Monorepo
 
- Deploying Mastra in a monorepo follows the same approach as deploying a standalone application. While some [Cloud](./cloud-providers/) or [Serverless Platform](./cloud-providers/) providers may introduce extra requirements, the core setup is the same.
+ Deploying Mastra in a monorepo follows the same approach as deploying a standalone application. While some [Cloud](/guides/v1/deployment) or [Serverless Platform](/guides/v1/deployment) providers may introduce extra requirements, the core setup is the same.
+
+ ## Supported monorepos
+
+ Mastra works with:
+
+ - npm workspaces
+ - pnpm workspaces
+ - Yarn workspaces
+ - Turborepo
+
+ Known limitations:
+
+ - Bun workspaces - partial support; known issues
+ - Nx - You can use Nx's [supported dependency strategies](https://nx.dev/concepts/decisions/dependency-management) but you need to have `package.json` files inside your workspace packages
 
  ## Example monorepo
 
@@ -44,10 +58,15 @@ api/
 
  ## Deployment configuration
 
- The image below shows how to select `apps/api` as the project root when deploying to [Mastra Cloud](./mastra-cloud/overview). While the interface may differ between providers, the configuration remains the same.
+ The image below shows how to select `apps/api` as the project root when deploying to [Mastra Cloud](/docs/v1/mastra-cloud/overview). While the interface may differ between providers, the configuration remains the same.
 
  ![Deployment configuration](/img/monorepo/monorepo-mastra-cloud.jpg)
 
+
+ :::info
+ Make sure the correct package (e.g. `apps/api`) is selected as the deploy target. Selecting the wrong project root is a common deployment issue in monorepos.
+ :::
+
  ## Dependency management
 
  In a monorepo, keep dependencies consistent to avoid version conflicts and build errors.
@@ -55,43 +74,3 @@ In a monorepo, keep dependencies consistent to avoid version conflicts and build
  - Use a **single lockfile** at the project root so all packages resolve the same versions.
  - Align versions of **shared libraries** (like Mastra or frameworks) to prevent duplicates.
 
- ## Deployment pitfalls
-
- Common issues to watch for when deploying Mastra in a monorepo:
-
- - **Wrong project root**: make sure the correct package (e.g. `apps/api`) is selected as the deploy target.
-
- ## Bundler options
-
- Use `transpilePackages` to compile TypeScript workspace packages or libraries. List package names exactly as they appear in each `package.json`. Use `externals` to exclude dependencies resolved at runtime, and `sourcemap` to emit readable stack traces.
-
- ```typescript title="src/mastra/index.ts" showLineNumbers copy
- import { Mastra } from "@mastra/core";
-
- export const mastra = new Mastra({
- // ...
- bundler: {
- transpilePackages: ["utils"],
- externals: ["ui"],
- sourcemap: true,
- },
- });
- ```
-
- > See [Mastra Class](/reference/v1/core/mastra-class) for more configuration options.
-
- ## Supported monorepos
-
- Mastra works with:
-
- - npm workspaces
- - pnpm workspaces
- - Yarn workspaces
- - Turborepo
-
- Known limitations:
-
- - Bun workspaces — partial support; known issues
- - Nx — You can use Nx's [supported dependency strategies](https://nx.dev/concepts/decisions/dependency-management) but you need to have `package.json` files inside your workspace packages
-
- > If you are experiencing issues with monorepos see our: [Monorepos Support mega issue](https://github.com/mastra-ai/mastra/issues/6852).