@mastra/mcp-docs-server 0.13.2 → 0.13.3-alpha.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (114)
  1. package/.docs/organized/changelogs/%40internal%2Fstorage-test-utils.md +33 -0
  2. package/.docs/organized/changelogs/%40mastra%2Fagui.md +48 -0
  3. package/.docs/organized/changelogs/%40mastra%2Fastra.md +30 -30
  4. package/.docs/organized/changelogs/%40mastra%2Fchroma.md +30 -30
  5. package/.docs/organized/changelogs/%40mastra%2Fclickhouse.md +33 -33
  6. package/.docs/organized/changelogs/%40mastra%2Fclient-js.md +85 -85
  7. package/.docs/organized/changelogs/%40mastra%2Fcloudflare-d1.md +34 -34
  8. package/.docs/organized/changelogs/%40mastra%2Fcloudflare.md +35 -35
  9. package/.docs/organized/changelogs/%40mastra%2Fcore.md +70 -70
  10. package/.docs/organized/changelogs/%40mastra%2Fcouchbase.md +29 -29
  11. package/.docs/organized/changelogs/%40mastra%2Fdeployer-cloudflare.md +92 -92
  12. package/.docs/organized/changelogs/%40mastra%2Fdeployer-netlify.md +94 -94
  13. package/.docs/organized/changelogs/%40mastra%2Fdeployer-vercel.md +92 -92
  14. package/.docs/organized/changelogs/%40mastra%2Fdeployer.md +104 -104
  15. package/.docs/organized/changelogs/%40mastra%2Fdynamodb.md +35 -35
  16. package/.docs/organized/changelogs/%40mastra%2Fevals.md +26 -26
  17. package/.docs/organized/changelogs/%40mastra%2Ffirecrawl.md +29 -29
  18. package/.docs/organized/changelogs/%40mastra%2Fgithub.md +26 -26
  19. package/.docs/organized/changelogs/%40mastra%2Flance.md +32 -0
  20. package/.docs/organized/changelogs/%40mastra%2Flibsql.md +35 -35
  21. package/.docs/organized/changelogs/%40mastra%2Fmcp-docs-server.md +59 -59
  22. package/.docs/organized/changelogs/%40mastra%2Fmcp-registry-registry.md +29 -29
  23. package/.docs/organized/changelogs/%40mastra%2Fmcp.md +31 -31
  24. package/.docs/organized/changelogs/%40mastra%2Fmem0.md +26 -26
  25. package/.docs/organized/changelogs/%40mastra%2Fmemory.md +56 -56
  26. package/.docs/organized/changelogs/%40mastra%2Fmongodb.md +37 -37
  27. package/.docs/organized/changelogs/%40mastra%2Fopensearch.md +30 -17
  28. package/.docs/organized/changelogs/%40mastra%2Fpg.md +57 -57
  29. package/.docs/organized/changelogs/%40mastra%2Fpinecone.md +29 -29
  30. package/.docs/organized/changelogs/%40mastra%2Fplayground-ui.md +108 -108
  31. package/.docs/organized/changelogs/%40mastra%2Fqdrant.md +29 -29
  32. package/.docs/organized/changelogs/%40mastra%2Frag.md +27 -27
  33. package/.docs/organized/changelogs/%40mastra%2Fragie.md +26 -26
  34. package/.docs/organized/changelogs/%40mastra%2Fschema-compat.md +7 -0
  35. package/.docs/organized/changelogs/%40mastra%2Fserver.md +82 -82
  36. package/.docs/organized/changelogs/%40mastra%2Fturbopuffer.md +29 -29
  37. package/.docs/organized/changelogs/%40mastra%2Fupstash.md +33 -33
  38. package/.docs/organized/changelogs/%40mastra%2Fvectorize.md +31 -31
  39. package/.docs/organized/changelogs/%40mastra%2Fvoice-cloudflare.md +28 -28
  40. package/.docs/organized/changelogs/%40mastra%2Fvoice-deepgram.md +26 -26
  41. package/.docs/organized/changelogs/%40mastra%2Fvoice-elevenlabs.md +26 -26
  42. package/.docs/organized/changelogs/%40mastra%2Fvoice-gladia.md +25 -0
  43. package/.docs/organized/changelogs/%40mastra%2Fvoice-google.md +26 -26
  44. package/.docs/organized/changelogs/%40mastra%2Fvoice-murf.md +26 -26
  45. package/.docs/organized/changelogs/%40mastra%2Fvoice-openai-realtime.md +23 -23
  46. package/.docs/organized/changelogs/%40mastra%2Fvoice-openai.md +26 -26
  47. package/.docs/organized/changelogs/%40mastra%2Fvoice-playai.md +26 -26
  48. package/.docs/organized/changelogs/%40mastra%2Fvoice-sarvam.md +26 -26
  49. package/.docs/organized/changelogs/%40mastra%2Fvoice-speechify.md +26 -26
  50. package/.docs/organized/changelogs/create-mastra.md +37 -37
  51. package/.docs/organized/changelogs/mastra.md +115 -115
  52. package/.docs/organized/code-examples/a2a.md +1 -30
  53. package/.docs/organized/code-examples/agent-network.md +26 -115
  54. package/.docs/organized/code-examples/agent.md +1 -29
  55. package/.docs/organized/code-examples/agui.md +0 -22
  56. package/.docs/organized/code-examples/ai-sdk-useChat.md +1 -16
  57. package/.docs/organized/code-examples/assistant-ui.md +1 -16
  58. package/.docs/organized/code-examples/bird-checker-with-express.md +1 -19
  59. package/.docs/organized/code-examples/bird-checker-with-nextjs-and-eval.md +1 -20
  60. package/.docs/organized/code-examples/bird-checker-with-nextjs.md +1 -18
  61. package/.docs/organized/code-examples/client-side-tools.md +1 -18
  62. package/.docs/organized/code-examples/crypto-chatbot.md +4 -25
  63. package/.docs/organized/code-examples/experimental-auth-weather-agent.md +1 -26
  64. package/.docs/organized/code-examples/fireworks-r1.md +1 -21
  65. package/.docs/organized/code-examples/mcp-configuration.md +1 -24
  66. package/.docs/organized/code-examples/mcp-registry-registry.md +1 -22
  67. package/.docs/organized/code-examples/memory-per-resource-example.md +0 -14
  68. package/.docs/organized/code-examples/memory-todo-agent.md +1 -20
  69. package/.docs/organized/code-examples/memory-with-context.md +1 -20
  70. package/.docs/organized/code-examples/memory-with-libsql.md +1 -21
  71. package/.docs/organized/code-examples/memory-with-mem0.md +1 -21
  72. package/.docs/organized/code-examples/memory-with-pg.md +1 -22
  73. package/.docs/organized/code-examples/memory-with-processors.md +1 -17
  74. package/.docs/organized/code-examples/memory-with-upstash.md +1 -24
  75. package/.docs/organized/code-examples/openapi-spec-writer.md +1 -21
  76. package/.docs/organized/code-examples/quick-start.md +1 -21
  77. package/.docs/organized/code-examples/stock-price-tool.md +1 -21
  78. package/.docs/organized/code-examples/weather-agent.md +1 -20
  79. package/.docs/organized/code-examples/workflow-ai-recruiter.md +1 -22
  80. package/.docs/organized/code-examples/workflow-with-inline-steps.md +1 -20
  81. package/.docs/organized/code-examples/workflow-with-memory.md +1 -21
  82. package/.docs/organized/code-examples/workflow-with-separate-steps.md +1 -22
  83. package/.docs/raw/course/01-first-agent/11-creating-transactions-tool.md +1 -1
  84. package/.docs/raw/course/03-agent-memory/24-working-memory-in-practice.md +0 -3
  85. package/.docs/raw/course/03-agent-memory/29-memory-best-practices.md +0 -6
  86. package/.docs/raw/deployment/cloud-providers/amazon-ec2.mdx +81 -0
  87. package/.docs/raw/deployment/cloud-providers/azure-app-services.mdx +136 -0
  88. package/.docs/raw/deployment/cloud-providers/index.mdx +2 -0
  89. package/.docs/raw/deployment/serverless-platforms/cloudflare-deployer.mdx +111 -0
  90. package/.docs/raw/deployment/{deployment.mdx → serverless-platforms/index.mdx} +5 -10
  91. package/.docs/raw/deployment/serverless-platforms/netlify-deployer.mdx +94 -0
  92. package/.docs/raw/deployment/serverless-platforms/vercel-deployer.mdx +91 -0
  93. package/.docs/raw/frameworks/ai-sdk-v5.mdx +91 -0
  94. package/.docs/raw/frameworks/web-frameworks/next-js.mdx +56 -18
  95. package/.docs/raw/frameworks/web-frameworks/sveltekit.mdx +456 -0
  96. package/.docs/raw/frameworks/web-frameworks/vite-react.mdx +28 -9
  97. package/.docs/raw/getting-started/model-providers.mdx +118 -127
  98. package/.docs/raw/memory/overview.mdx +30 -0
  99. package/.docs/raw/reference/agents/generate.mdx +3 -3
  100. package/.docs/raw/reference/agents/getModel.mdx +1 -1
  101. package/.docs/raw/reference/agents/stream.mdx +3 -3
  102. package/.docs/raw/reference/deployer/cloudflare.mdx +4 -119
  103. package/.docs/raw/reference/deployer/netlify.mdx +4 -83
  104. package/.docs/raw/reference/deployer/vercel.mdx +4 -51
  105. package/.docs/raw/reference/memory/Memory.mdx +71 -2
  106. package/.docs/raw/reference/observability/logger.mdx +70 -0
  107. package/.docs/raw/reference/rag/pg.mdx +15 -0
  108. package/.docs/raw/reference/storage/postgresql.mdx +17 -0
  109. package/.docs/raw/reference/workflows/step.mdx +8 -1
  110. package/.docs/raw/workflows/overview.mdx +1 -1
  111. package/dist/{chunk-P5AHYMUI.js → chunk-7QXT2IEP.js} +48 -11
  112. package/dist/prepare-docs/prepare.js +1 -1
  113. package/dist/stdio.js +1 -1
  114. package/package.json +5 -5
@@ -3,7 +3,7 @@ title: "Model Providers | Getting Started | Mastra Docs"
  description: "Learn how to configure and use different model providers with Mastra."
  ---

- import { Callout, Tabs } from 'nextra/components'
+ import { Callout } from 'nextra/components'

  # Model Providers

@@ -26,11 +26,11 @@ const result = await agent.generate("What is the weather like?");

  Model providers from the AI SDK can be grouped into three main categories:

- - Official providers maintained by the AI SDK team
- - OpenAI-compatible providers
- - Community providers
+ - [Official providers maintained by the AI SDK team](/docs/getting-started/model-providers#official-providers)
+ - [OpenAI-compatible providers](/docs/getting-started/model-providers#openai-compatible-providers)
+ - [Community providers](/docs/getting-started/model-providers#community-providers)

- You can find a list of all available model providers in the [AI SDK documentation](https://ai-sdk.dev/providers/ai-sdk-providers).
+ > You can find a list of all available model providers in the [AI SDK documentation](https://ai-sdk.dev/providers/ai-sdk-providers).

  <Callout>
  AI SDK model providers are packages that need to be installed in your Mastra project.
@@ -41,125 +41,116 @@ If you want to use a different model provider, you need to install it in your pr

  Here are some examples of how Mastra agents can be configured to use the different types of model providers:

- <Tabs items={["Official providers", "OpenAI compatible providers", "Community providers"]}>
- <Tabs.Tab>
- ### Official providers
-
- Official model providers are maintained by the AI SDK team.
- Their packages are usually prefixed with `@ai-sdk/`, e.g. `@ai-sdk/anthropic`, `@ai-sdk/openai`, etc.
-
- ```typescript showLineNumbers copy {1,7} filename="src/mastra/agents/weather-agent.ts"
- import { openai } from "@ai-sdk/openai";
- import { Agent } from "@mastra/core/agent";
-
- const agent = new Agent({
-   name: "WeatherAgent",
-   instructions: "Instructions for the agent...",
-   model: openai("gpt-4-turbo"),
- });
- ```
-
- Additional configuration may be done by importing a helper function from the AI SDK provider.
- Here's an example using the OpenAI provider:
-
- ```typescript showLineNumbers copy filename="src/mastra/agents/weather-agent.ts" {1,4-8,13}
- import { createOpenAI } from "@ai-sdk/openai";
- import { Agent } from "@mastra/core/agent"
-
- const openai = createOpenAI({
-   baseUrl: "<your-custom-base-url>",
-   apiKey: "<your-custom-api-key>",
-   ...otherOptions
- });
-
- const agent = new Agent({
-   name: "WeatherAgent",
-   instructions: "Instructions for the agent...",
-   model: openai("<model-name>"),
- });
- ```
-
- Different AI providers may have different options for configuration. Please refer to the [AI SDK documentation](https://ai-sdk.dev/providers/ai-sdk-providers) for more information.
- </Tabs.Tab>
-
- <Tabs.Tab>
- ### OpenAI-compatible providers
-
- Some language model providers implement the OpenAI API. For these providers, you can use the [`@ai-sdk/openai-compatible`](https://www.npmjs.com/package/@ai-sdk/openai-compatible) provider.
-
- Here's the general setup and provider instance creation:
-
- ```typescript showLineNumbers copy filename="src/mastra/agents/weather-agent.ts" {1,4-14,19}
- import { createOpenAICompatible } from "@ai-sdk/openai-compatible";
- import { Agent } from "@mastra/core/agent";
-
- const openaiCompatible = createOpenAICompatible({
-   name: "<model-name>",
-   baseUrl: "<base-url>",
-   apiKey: "<api-key>",
-   headers: {},
-   queryParams: {},
-   fetch: async (url, options) => {
-     // custom fetch logic
-     return fetch(url, options);
-   }
- });
-
- const agent = new Agent({
-   name: "WeatherAgent",
-   instructions: "Instructions for the agent...",
-   model: openaiCompatible("<model-name>"),
- });
- ```
-
- For more information on the OpenAI-compatible provider, please refer to the [AI SDK documentation](https://ai-sdk.dev/providers/openai-compatible-providers).
-
- </Tabs.Tab>
-
- <Tabs.Tab>
- ### Community providers
-
- The AI SDK provides a [Language Model Specification](https://github.com/vercel/ai/tree/main/packages/provider/src/language-model/v1).
- Following this specification, you can create your own model provider compatible with the AI SDK.
-
- Some community providers have implemented this specification and are compatible with the AI SDK.
- We will look at one such provider, the Ollama provider available in the [`ollama-ai-provider`](https://github.com/sgomez/ollama-ai-provider) package.
-
- Here's an example:
-
- ```typescript showLineNumbers copy filename="src/mastra/agents/weather-agent.ts" {1,7}
- import { ollama } from "ollama-ai-provider";
- import { Agent } from "@mastra/core/agent";
-
- const agent = new Agent({
-   name: "WeatherAgent",
-   instructions: "Instructions for the agent...",
-   model: ollama("llama3.2:latest"),
- });
- ```
-
- You can also configure the Ollama provider like so:
-
- ```typescript showLineNumbers copy filename="src/mastra/agents/weather-agent.ts" {1,4-7,12}
- import { createOllama } from "ollama-ai-provider";
- import { Agent } from "@mastra/core/agent";
-
- const ollama = createOllama({
-   baseUrl: "<your-custom-base-url>",
-   ...otherOptions,
- });
-
- const agent = new Agent({
-   name: "WeatherAgent",
-   instructions: "Instructions for the agent...",
-   model: ollama("llama3.2:latest"),
- });
- ```
-
- For more information on the Ollama provider and other available community providers, please refer to the [AI SDK documentation](https://ai-sdk.dev/providers/community-providers).
-
- <Callout>
- While this example shows how to use the Ollama provider, other providers like `openrouter`, `azure`, etc. may also be used.
- </Callout>
- </Tabs.Tab>
- </Tabs>
+ ### Official providers
+
+ Official model providers are maintained by the AI SDK team.
+ Their packages are usually prefixed with `@ai-sdk/`, e.g. `@ai-sdk/anthropic`, `@ai-sdk/openai`, etc.
+
+ ```typescript showLineNumbers copy {1,7} filename="src/mastra/agents/weather-agent.ts"
+ import { openai } from "@ai-sdk/openai";
+ import { Agent } from "@mastra/core/agent";
+
+ const agent = new Agent({
+   name: "WeatherAgent",
+   instructions: "Instructions for the agent...",
+   model: openai("gpt-4-turbo"),
+ });
+ ```
+
+ Additional configuration may be done by importing a helper function from the AI SDK provider.
+ Here's an example using the OpenAI provider:
+
+ ```typescript showLineNumbers copy filename="src/mastra/agents/weather-agent.ts" {1,4-8,13}
+ import { createOpenAI } from "@ai-sdk/openai";
+ import { Agent } from "@mastra/core/agent"
+
+ const openai = createOpenAI({
+   baseUrl: "<your-custom-base-url>",
+   apiKey: "<your-custom-api-key>",
+   ...otherOptions
+ });
+
+ const agent = new Agent({
+   name: "WeatherAgent",
+   instructions: "Instructions for the agent...",
+   model: openai("<model-name>"),
+ });
+ ```
+
+ ### OpenAI-compatible providers
+
+ Some language model providers implement the OpenAI API. For these providers, you can use the [`@ai-sdk/openai-compatible`](https://www.npmjs.com/package/@ai-sdk/openai-compatible) provider.
+
+ Here's the general setup and provider instance creation:
+
+ ```typescript showLineNumbers copy filename="src/mastra/agents/weather-agent.ts" {1,4-14,19}
+ import { createOpenAICompatible } from "@ai-sdk/openai-compatible";
+ import { Agent } from "@mastra/core/agent";
+
+ const openaiCompatible = createOpenAICompatible({
+   name: "<model-name>",
+   baseUrl: "<base-url>",
+   apiKey: "<api-key>",
+   headers: {},
+   queryParams: {},
+   fetch: async (url, options) => {
+     // custom fetch logic
+     return fetch(url, options);
+   }
+ });
+
+ const agent = new Agent({
+   name: "WeatherAgent",
+   instructions: "Instructions for the agent...",
+   model: openaiCompatible("<model-name>"),
+ });
+ ```
+
+ For more information on the OpenAI-compatible provider, please refer to the [AI SDK documentation](https://ai-sdk.dev/providers/openai-compatible-providers).
+
+ ### Community providers
+
+ The AI SDK provides a [Language Model Specification](https://github.com/vercel/ai/tree/main/packages/provider/src/language-model/v1).
+ Following this specification, you can create your own model provider compatible with the AI SDK.
+
+ Some community providers have implemented this specification and are compatible with the AI SDK.
+ We will look at one such provider, the Ollama provider available in the [`ollama-ai-provider`](https://github.com/sgomez/ollama-ai-provider) package.
+
+ Here's an example:
+
+ ```typescript showLineNumbers copy filename="src/mastra/agents/weather-agent.ts" {1,7}
+ import { ollama } from "ollama-ai-provider";
+ import { Agent } from "@mastra/core/agent";
+
+ const agent = new Agent({
+   name: "WeatherAgent",
+   instructions: "Instructions for the agent...",
+   model: ollama("llama3.2:latest"),
+ });
+ ```
+
+ You can also configure the Ollama provider like so:
+
+ ```typescript showLineNumbers copy filename="src/mastra/agents/weather-agent.ts" {1,4-7,12}
+ import { createOllama } from "ollama-ai-provider";
+ import { Agent } from "@mastra/core/agent";
+
+ const ollama = createOllama({
+   baseUrl: "<your-custom-base-url>",
+   ...otherOptions,
+ });
+
+ const agent = new Agent({
+   name: "WeatherAgent",
+   instructions: "Instructions for the agent...",
+   model: ollama("llama3.2:latest"),
+ });
+ ```
+
+ For more information on the Ollama provider and other available community providers, please refer to the [AI SDK documentation](https://ai-sdk.dev/providers/community-providers).
+
+ <Callout>
+ While this example shows how to use the Ollama provider, other providers like `openrouter`, `azure`, etc. may also be used.
+ </Callout>
+
+ Different AI providers may have different options for configuration. Please refer to the [AI SDK documentation](https://ai-sdk.dev/providers/ai-sdk-providers) for more information.
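The `createOpenAICompatible` options in the hunk above include a custom `fetch` override with only a placeholder comment. Below is a minimal, standalone sketch of what such an override might do, assuming a wrapper that injects an Authorization header; `FetchLike`, `withAuthHeader`, and the stub fetch are illustrative names, not part of the AI SDK.

```typescript
// Minimal fetch shape for the sketch (the real provider passes Request info).
type FetchLike = (
  url: string,
  options?: { headers?: Record<string, string> },
) => Promise<{ status: number }>;

// Wrap a base fetch so every request carries an Authorization header,
// standing in for the "custom fetch logic" placeholder in the docs.
function withAuthHeader(baseFetch: FetchLike, apiKey: string): FetchLike {
  return async (url, options = {}) => {
    const headers = {
      ...(options.headers ?? {}),
      Authorization: `Bearer ${apiKey}`,
    };
    return baseFetch(url, { ...options, headers });
  };
}

// Stub base fetch that records calls so the wrapper's effect is observable.
const calls: Array<{ url: string; headers?: Record<string, string> }> = [];
const stubFetch: FetchLike = async (url, options) => {
  calls.push({ url, headers: options?.headers });
  return { status: 200 };
};

const wrapped = withAuthHeader(stubFetch, "test-key");
const response = await wrapped("https://example.com/v1/chat/completions");
```

A wrapper like this could be passed as the `fetch` option, keeping credential handling in one place while the provider supplies the request URL and options.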
@@ -82,6 +82,36 @@ const response = await myMemoryAgent.stream("Hello, my name is Alice.", {

  **Important:** without these ID's your agent will not use memory, even if memory is properly configured. The playground handles this for you, but you need to add ID's yourself when using memory in your application.

+ ### Thread Title Generation
+
+ Mastra can automatically generate meaningful titles for conversation threads based on the user's first message. This helps organize and identify conversations in your application UI.
+
+ ```typescript {3-7}
+ const memory = new Memory({
+   options: {
+     threads: {
+       generateTitle: true, // Enable automatic title generation
+     },
+   },
+ });
+ ```
+
+ By default, title generation uses the same model as your agent. For cost optimization, you can specify a cheaper model specifically for title generation:
+
+ ```typescript {5-7}
+ const memory = new Memory({
+   options: {
+     threads: {
+       generateTitle: {
+         model: openai("gpt-4.1-nano"), // Use cheaper model for titles
+       },
+     },
+   },
+ });
+ ```
+
+ Title generation happens asynchronously after the agent responds, so it doesn't impact response time. See the [full configuration reference](../../reference/memory/Memory.mdx#thread-title-generation) for more details and examples.
+
  ## Conversation History

  By default, the `Memory` instance includes the [last 10 messages](../../reference/memory/Memory.mdx) from the current Memory thread in each new request. This provides the agent with immediate conversational context.
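The `generateTitle` option added above accepts either a boolean or an object carrying a model override. The following sketch shows how that union could be resolved; `TitleModel` and `resolveTitleModel` are hypothetical names for illustration, not Mastra internals.

```typescript
// Stand-in for the AI SDK's LanguageModelV1 in this sketch.
type TitleModel = { modelId: string };

// Mirrors the documented union: boolean | { model }.
type GenerateTitleOption = boolean | { model: TitleModel };

// `true` reuses the agent's own model, an object overrides it with a
// (typically cheaper) model, and `false`/undefined disables titles.
function resolveTitleModel(
  option: GenerateTitleOption | undefined,
  agentModel: TitleModel,
): TitleModel | null {
  if (option === true) return agentModel;
  if (typeof option === "object" && option !== null) return option.model;
  return null;
}

const agentModel: TitleModel = { modelId: "gpt-4o" };
const withDefault = resolveTitleModel(true, agentModel);
const withOverride = resolveTitleModel(
  { model: { modelId: "gpt-4.1-nano" } },
  agentModel,
);
const disabled = resolveTitleModel(false, agentModel);
```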
@@ -261,14 +261,14 @@ Configuration options for memory management:
  description: "Thread-specific memory configuration.",
  properties: [
    {
-     type: "boolean",
+     type: "boolean | object",
      parameters: [
        {
          name: "generateTitle",
-         type: "boolean",
+         type: "boolean | { model: LanguageModelV1 | ((ctx: RuntimeContext) => LanguageModelV1 | Promise<LanguageModelV1>) }",
          isOptional: true,
          description:
-           "Whether to automatically generate titles for new threads.",
+           "Controls automatic thread title generation from the user's first message. Can be a boolean to enable/disable using the agent's model, or an object with a custom model for title generation (useful for cost optimization). Example: { model: openai('gpt-4.1-nano') }",
        },
      ],
    },
@@ -86,7 +86,7 @@ const agent = new Agent({
  }

  // Default to OpenAI
- return highQuality ? openai("gpt-4o") : openai("gpt-3.5-turbo");
+ return highQuality ? openai("gpt-4o") : openai("gpt-4.1-nano");
  },
  });

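The hunk above swaps the cheap branch of a dynamic model selector from `gpt-3.5-turbo` to `gpt-4.1-nano`. A self-contained sketch of that selection pattern, with `openai` replaced by a local stub since the real `@ai-sdk/openai` provider isn't available here:

```typescript
// Local stub standing in for @ai-sdk/openai's provider function.
const openai = (modelId: string) => ({ provider: "openai", modelId });

// Pick a stronger model when high quality is requested, otherwise the
// cheaper default, mirroring the gpt-4o / gpt-4.1-nano split in the docs.
function selectModel(highQuality: boolean) {
  return highQuality ? openai("gpt-4o") : openai("gpt-4.1-nano");
}

const strong = selectModel(true);
const cheap = selectModel(false);
```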
@@ -267,14 +267,14 @@ Configuration options for memory management:
  description: "Thread-specific memory configuration.",
  properties: [
    {
-     type: "boolean",
+     type: "boolean | object",
      parameters: [
        {
          name: "generateTitle",
-         type: "boolean",
+         type: "boolean | { model: LanguageModelV1 | ((ctx: RuntimeContext) => LanguageModelV1 | Promise<LanguageModelV1>) }",
          isOptional: true,
          description:
-           "Whether to automatically generate titles for new threads.",
+           "Controls automatic thread title generation from the user's first message. Can be a boolean to enable/disable using the agent's model, or an object with a custom model for title generation (useful for cost optimization). Example: { model: openai('gpt-4.1-nano') }",
        },
      ],
    },
@@ -5,21 +5,15 @@ description: "Documentation for the CloudflareDeployer class, which deploys Mast

  # CloudflareDeployer

- The CloudflareDeployer deploys standalone Mastra applications to Cloudflare Workers, handling configuration, environment variables, and route management. It extends the abstract Deployer class to provide Cloudflare-specific deployment functionality.
+ The `CloudflareDeployer` class handles deployment of standalone Mastra applications to Cloudflare Workers. It manages configuration, deployment, and extends the base [Deployer](/reference/deployer/deployer) class with Cloudflare specific functionality.

- ## Installation
-
- ```bash copy
- npm install @mastra/deployer-cloudflare@latest
- ```
-
- ## Usage Example
+ ## Usage example

  ```typescript filename="src/mastra/index.ts" showLineNumbers copy
- import { Mastra } from "@mastra/core";
+ import { Mastra } from "@mastra/core/mastra";
  import { CloudflareDeployer } from "@mastra/deployer-cloudflare";

- const mastra = new Mastra({
+ export const mastra = new Mastra({
    // ...
    deployer: new CloudflareDeployer({
      scope: "your-account-id",
@@ -206,112 +200,3 @@ const mastra = new Mastra({
  },
  ]}
  />
-
- ### Environment Variables
-
- The CloudflareDeployer handles environment variables from multiple sources:
-
- 1. **Environment Files**: Variables from `.env.production` and `.env` files.
- 2. **Configuration**: Variables passed through the `env` parameter.
-
- ## Lint Mastra Project
-
- Lint your Mastra project to make sure it's fine to build
-
- ```bash
- npx mastra lint
- ```
-
- ## Build Mastra Project
-
- To build your Mastra project for cloudflare deployment:
-
- ```bash
- npx mastra build
- ```
-
- The build process generates the following output structure in the `.mastra/output` directory:
-
- ```
- .mastra/output/
- ├── index.mjs       # Main worker entry point
- ├── wrangler.json   # Cloudflare Worker configuration
- └── assets/         # Static assets and dependencies
- ```
-
- ### Wrangler Configuration
-
- The CloudflareDeployer automatically generates a `wrangler.json` configuration file with the following settings:
-
- ```json
- {
-   "name": "your-project-name",
-   "main": "./output/index.mjs",
-   "compatibility_date": "2024-12-02",
-   "compatibility_flags": [
-     "nodejs_compat",
-     "nodejs_compat_populate_process_env"
-   ],
-   "observability": {
-     "logs": {
-       "enabled": true
-     }
-   },
-   "vars": {
-     // Environment variables from .env files and configuration
-   },
-   "routes": [
-     // Route configurations if specified
-   ],
-   "d1_databases": [
-     // D1 database bindings if specified
-   ],
-   "kv_namespaces": [
-     // KV namespace bindings if specified
-   ]
- }
- ```
-
- Compatibility flags:
-
- - `nodejs_compat`: Enables Node.js compatibility in Workers.
- - `nodejs_compat_populate_process_env`: Populates `process.env` with variables from `vars`.
-
- ### Route Configuration
-
- Routes can be configured to direct traffic to your worker based on URL patterns and domains:
-
- ```typescript
- const routes = [
-   {
-     pattern: "api.example.com/*",
-     zone_name: "example.com",
-     custom_domain: true,
-   },
-   {
-     pattern: "example.com/api/*",
-     zone_name: "example.com",
-   },
- ];
- ```
-
- ## Deployment Options
-
- After building, you can deploy your Mastra application `.mastra/output` to Cloudflare Workers using any of these methods:
-
- 1. **Wrangler CLI**: Deploy directly using Cloudflare's official CLI tool
-
-    - Install the CLI: `npm install -g wrangler`
-    - Navigate to the output directory: `cd .mastra/output`
-    - Login to your Cloudflare account: `wrangler login`
-    - Deploy to preview environment: `wrangler deploy`
-    - For production deployment: `wrangler deploy --env production`
-
- 2. **Cloudflare Dashboard**: Upload the build output manually through the Cloudflare dashboard
-
- > You can also run `wrangler dev` in your output directory `.mastra/output` to test your Mastra application locally.
-
- ## Platform Documentation
-
- - [Cloudflare Workers](https://developers.cloudflare.com/workers/)
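The removed section above showed route patterns like `api.example.com/*` directing traffic to a Worker. As a rough illustration of how such a wildcard pattern relates to incoming requests, here is a simplified matcher; it is an approximation for clarity, not the Workers runtime's actual routing logic.

```typescript
// Match a Cloudflare-style route pattern against a request's host + path.
// "*" is the only wildcard handled; other characters match literally.
function matchesRoute(pattern: string, hostAndPath: string): boolean {
  // Escape regex metacharacters except "*", then turn "*" into ".*".
  const escaped = pattern.replace(/[.+?^${}()|[\]\\]/g, "\\$&");
  const regex = new RegExp("^" + escaped.replace(/\*/g, ".*") + "$");
  return regex.test(hostAndPath);
}

const hitsApi = matchesRoute("api.example.com/*", "api.example.com/agents/weather");
const missesApi = matchesRoute("example.com/api/*", "example.com/health");
```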
@@ -5,95 +5,16 @@ description: "Documentation for the NetlifyDeployer class, which deploys Mastra

  # NetlifyDeployer

- The NetlifyDeployer deploys standalone Mastra applications to Netlify Functions, handling site creation, configuration, and deployment processes. It extends the abstract Deployer class to provide Netlify-specific deployment functionality.
+ The `NetlifyDeployer` class handles deployment of standalone Mastra applications to Netlify. It manages configuration, deployment, and extends the base [Deployer](/reference/deployer/deployer) class with Netlify specific functionality.

- ## Installation
-
- ```bash copy
- npm install @mastra/deployer-netlify@latest
- ```
-
- ## Usage Example
+ ## Usage example

  ```typescript filename="src/mastra/index.ts" showLineNumbers copy
- import { Mastra } from "@mastra/core";
+ import { Mastra } from "@mastra/core/mastra";
  import { NetlifyDeployer } from "@mastra/deployer-netlify";

- const mastra = new Mastra({
+ export const mastra = new Mastra({
    // ...
    deployer: new NetlifyDeployer()
  });
  ```
-
- ## Lint Mastra Project
-
- Lint your Mastra project to make sure it's fine to build
-
- ```bash
- npx mastra lint
- ```
-
- ## Build Mastra Project
-
- To build your Mastra project for Netlify deployment:
-
- ```bash
- npx mastra build
- ```
-
- The build process generates the following output structure in the `.mastra/output` directory:
-
- ```
- .netlify/
- └── v1/
-     ├── functions/
-     │   └── api/
-     │       └── index.mjs   # Application entry point
-     └── config.json         # Netlify configuration
- ```
-
- ### Netlify Configuration
-
- The NetlifyDeployer automatically generates a `config.json` configuration file in `.netlify/v1` with the following settings:
-
- ```json
- {
-   "redirects": [
-     {
-       "force": true,
-       "from": "/*",
-       "to": "/.netlify/functions/api/:splat",
-       "status": 200
-     }
-   ]
- }
- ```
-
- ## Deployment Options
-
- After building, you can deploy your Mastra application `.mastra/output` to Netlify using any of these methods:
-
- 1. **Netlify CLI**: Deploy directly using Netlify's official CLI tool
-
-    - Install the CLI: `npm install -g netlify-cli`
-    - Deploy with functions directory specified: `netlify deploy`
-    - For production deployment add `--prod` flag: `netlify deploy --prod`
-
- 2. **Netlify Dashboard**: Connect your Git repository or drag-and-drop the build output through the Netlify dashboard
-
-    When connecting a Mastra project Git repository to Netlify, use these recommended build settings since Netlify resolves paths relative to the project root:
-
-    ```bash
-    # Build command
-    npm run build
-    ```
-
- 3. **Netlify Dev**: Run your Mastra application locally with Netlify's development environment
-
- > You can also run `netlify dev` in your project root to test your Mastra application locally.
-
- ## Platform Documentation
-
- - [Netlify](https://docs.netlify.com/)
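The removed Netlify section above included a generated redirect that forwards every path to `/.netlify/functions/api/:splat`. As a small sketch of what that rule does, here is a simplified substitution of the `:splat` placeholder; this is an illustration, not Netlify's actual redirect engine.

```typescript
// The generated rule from the docs: everything matched by "/*" is
// forwarded to the function endpoint, with ":splat" holding the capture.
const rule = {
  force: true,
  from: "/*",
  to: "/.netlify/functions/api/:splat",
  status: 200,
};

function applyRedirect(path: string): string {
  // "/*" captures everything after the leading slash.
  const splat = path.startsWith("/") ? path.slice(1) : path;
  return rule.to.replace(":splat", splat);
}

const redirected = applyRedirect("/chat");
```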
@@ -5,63 +5,16 @@ description: "Documentation for the VercelDeployer class, which deploys Mastra a

  # VercelDeployer

- The VercelDeployer deploys standalone Mastra applications to Vercel, handling configuration, environment variable synchronization, and deployment processes. It extends the abstract Deployer class to provide Vercel-specific deployment functionality.
+ The `VercelDeployer` class handles deployment of standalone Mastra applications to Vercel. It manages configuration, deployment, and extends the base [Deployer](/reference/deployer/deployer) class with Vercel specific functionality.

- ## Installation
-
- ```bash copy
- npm install @mastra/deployer-vercel@latest
- ```
-
- ## Usage Example
+ ## Usage example

  ```typescript filename="src/mastra/index.ts" showLineNumbers copy
- import { Mastra } from "@mastra/core";
+ import { Mastra } from "@mastra/core/mastra";
  import { VercelDeployer } from "@mastra/deployer-vercel";

- const mastra = new Mastra({
+ export const mastra = new Mastra({
    // ...
    deployer: new VercelDeployer()
  });
  ```
-
- ## Lint Mastra Project
-
- Lint your Mastra project to make sure it's fine to build
-
- ```bash
- npx mastra lint
- ```
-
- ## Build Mastra Project
-
- To build your Mastra project for Vercel deployment:
-
- ```bash
- npx mastra build
- ```
-
- The build process generates the following output structure in the `.vercel/output` directory:
-
- ```
- .vercel/output/functions/index.func
- └── index.mjs   # Application entry point
- ```
-
- ## Deployment Options
-
- After building, you can deploy your Mastra application to Vercel using any of these methods:
-
- 1. **Vercel CLI**: Deploy directly using Vercel's official CLI tool
-
-    - Install the CLI: `npm install -g vercel`
-    - Deploy to preview environment: `vercel --prebuilt`
-    - For production deployment: `vercel --prod --prebuilt`
-
- 2. **Vercel Dashboard**: Connect your Git repository or drag-and-drop the build output through the Vercel dashboard
-
- > You can also run `vercel dev` in your project directory to test your Mastra application locally. (Make sure you configured your dev command to `mastra dev`)
-
- ## Platform Documentation
-
- - [Vercel](https://vercel.com/docs)