@mastra/mcp-docs-server 1.1.26-alpha.2 → 1.1.26-alpha.20

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (55)
  1. package/.docs/docs/agents/supervisor-agents.md +18 -0
  2. package/.docs/docs/editor/overview.md +69 -0
  3. package/.docs/docs/mastra-platform/overview.md +3 -1
  4. package/.docs/docs/memory/observational-memory.md +27 -7
  5. package/.docs/docs/observability/tracing/exporters/cloud.md +34 -41
  6. package/.docs/docs/observability/tracing/exporters/langfuse.md +31 -0
  7. package/.docs/guides/build-your-ui/ai-sdk-ui.md +19 -6
  8. package/.docs/guides/deployment/netlify.md +16 -1
  9. package/.docs/guides/migrations/mastra-cloud.md +128 -3
  10. package/.docs/models/gateways/netlify.md +2 -2
  11. package/.docs/models/gateways/openrouter.md +2 -1
  12. package/.docs/models/gateways/vercel.md +4 -1
  13. package/.docs/models/index.md +36 -1
  14. package/.docs/models/providers/302ai.md +32 -1
  15. package/.docs/models/providers/alibaba-cn.md +2 -1
  16. package/.docs/models/providers/anthropic.md +2 -1
  17. package/.docs/models/providers/berget.md +9 -12
  18. package/.docs/models/providers/cloudflare-workers-ai.md +2 -1
  19. package/.docs/models/providers/cortecs.md +4 -1
  20. package/.docs/models/providers/digitalocean.md +116 -0
  21. package/.docs/models/providers/firmware.md +2 -3
  22. package/.docs/models/providers/helicone.md +1 -2
  23. package/.docs/models/providers/hpc-ai.md +73 -0
  24. package/.docs/models/providers/huggingface.md +2 -1
  25. package/.docs/models/providers/kimi-for-coding.md +2 -1
  26. package/.docs/models/providers/llmgateway.md +59 -77
  27. package/.docs/models/providers/nvidia.md +3 -2
  28. package/.docs/models/providers/openai.md +1 -2
  29. package/.docs/models/providers/opencode-go.md +2 -1
  30. package/.docs/models/providers/opencode.md +2 -1
  31. package/.docs/models/providers/ovhcloud.md +4 -7
  32. package/.docs/models/providers/poe.md +2 -1
  33. package/.docs/models/providers/tencent-token-plan.md +71 -0
  34. package/.docs/models/providers/tencent-tokenhub.md +71 -0
  35. package/.docs/models/providers/wafer.ai.md +72 -0
  36. package/.docs/models/providers/zenmux.md +2 -1
  37. package/.docs/models/providers.md +5 -0
  38. package/.docs/reference/agents/generate.md +8 -0
  39. package/.docs/reference/client-js/mastra-client.md +23 -0
  40. package/.docs/reference/client-js/workflows.md +12 -0
  41. package/.docs/reference/core/mastra-class.md +9 -1
  42. package/.docs/reference/deployer/netlify.md +50 -2
  43. package/.docs/reference/harness/harness-class.md +72 -49
  44. package/.docs/reference/index.md +1 -0
  45. package/.docs/reference/memory/observational-memory.md +2 -0
  46. package/.docs/reference/observability/tracing/exporters/cloud-exporter.md +4 -2
  47. package/.docs/reference/observability/tracing/exporters/langfuse.md +2 -0
  48. package/.docs/reference/processors/prefill-error-handler.md +5 -5
  49. package/.docs/reference/streaming/agents/stream.md +8 -0
  50. package/.docs/reference/streaming/workflows/resumeStream.md +2 -0
  51. package/.docs/reference/workflows/run-methods/resume.md +24 -0
  52. package/.docs/reference/workflows/workflow-methods/foreach.md +14 -1
  53. package/.docs/reference/workspace/docker-sandbox.md +196 -0
  54. package/CHANGELOG.md +65 -0
  55. package/package.json +4 -4
@@ -300,8 +300,26 @@ Success criteria:
  })
  ```

+ ## Sub-agent versioning
+
+ When using the [editor](https://mastra.ai/docs/editor/overview), you can control which stored version of each sub-agent the supervisor uses at runtime. Set version overrides on the Mastra instance or per invocation:
+
+ ```typescript
+ const result = await supervisor.generate('Research and write about AI safety', {
+   versions: {
+     agents: {
+       'research-agent': { status: 'published' },
+       'writing-agent': { versionId: 'draft-456' },
+     },
+   },
+ })
+ ```
+
+ Version overrides propagate automatically through delegation. See [Sub-agent versioning](https://mastra.ai/docs/editor/overview) for details on resolution order and server API usage.
+
  ## Related

+ - [Sub-agent versioning](https://mastra.ai/docs/editor/overview)
  - [Guide: Research coordinator](https://mastra.ai/guides/guide/research-coordinator)
  - [Agent.stream() reference](https://mastra.ai/reference/streaming/agents/stream)
  - [Agent.generate() reference](https://mastra.ai/reference/agents/generate)
@@ -214,6 +214,75 @@ curl http://localhost:4111/agents/support-agent?versionId=abc-123

  See the [Client SDK agents reference](https://mastra.ai/reference/client-js/agents) for API methods.

+ ### Sub-agent versioning
+
+ When a [supervisor agent](https://mastra.ai/docs/agents/supervisor-agents) delegates to sub-agents, version overrides determine which stored version of each sub-agent to use instead of the code-defined default. This lets you iterate on sub-agent prompts and tools through the editor without redeploying the supervisor.
+
+ Set version overrides at any of three levels:
+
+ 1. **Mastra instance config** — global defaults that apply to every `generate()` and `stream()` call.
+ 2. **Per-invocation options** — overrides passed directly to `generate()` or `stream()`.
+ 3. **Server request body** — overrides sent in the `versions` field of an API request.
+
+ Resolution order: **per-invocation > request body > Mastra instance defaults > code-defined agent**.
+
+ #### Mastra instance config
+
+ Set global defaults when creating the `Mastra` instance. Every supervisor call inherits these overrides:
+
+ ```typescript
+ import { Mastra } from '@mastra/core'
+ import { MastraEditor } from '@mastra/editor'
+
+ export const mastra = new Mastra({
+   agents: { supervisor, researchAgent, writerAgent },
+   editor: new MastraEditor(),
+   versions: {
+     agents: {
+       'research-agent': { status: 'published' },
+       'writer-agent': { versionId: 'abc-123' },
+     },
+   },
+ })
+ ```
+
+ #### Per-invocation overrides
+
+ Override versions for a single call to `generate()` or `stream()`. These take priority over Mastra instance defaults:
+
+ ```typescript
+ const result = await supervisor.generate('Research and write an article about AI safety', {
+   versions: {
+     agents: {
+       'research-agent': { versionId: 'draft-456' },
+     },
+   },
+ })
+ ```
+
+ #### Server request body
+
+ When calling agents through the Mastra server, pass version overrides in the request body:
+
+ ```bash
+ curl -X POST http://localhost:4111/agents/supervisor/generate \
+   -H "Content-Type: application/json" \
+   -d '{
+     "messages": [{ "role": "user", "content": "Research AI safety" }],
+     "versions": {
+       "agents": {
+         "research-agent": { "versionId": "draft-456" }
+       }
+     }
+   }'
+ ```
+
+ #### How propagation works
+
+ Version overrides propagate automatically through sub-agent delegation via `requestContext`. When a supervisor delegates to a sub-agent, the framework checks whether a version override exists for that sub-agent's ID. If one is found, it resolves the stored version from the editor and uses it instead of the code-defined default.
+
+ If version resolution fails (for example, when the editor is not configured or the version ID doesn't exist), the framework logs a warning and falls back to the code-defined agent.
+
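+ As a rough sketch, the precedence above behaves like the following (hypothetical helper, not Mastra's actual resolver):
+
+ ```typescript
+ type VersionOverride = { versionId?: string; status?: 'published' }
+ type OverrideMap = Record<string, VersionOverride>
+
+ // Per-invocation beats request body, which beats instance defaults;
+ // an undefined result falls back to the code-defined agent.
+ function resolveVersionOverride(
+   agentId: string,
+   perInvocation?: OverrideMap,
+   requestBody?: OverrideMap,
+   instanceDefaults?: OverrideMap,
+ ): VersionOverride | undefined {
+   return perInvocation?.[agentId] ?? requestBody?.[agentId] ?? instanceDefaults?.[agentId]
+ }
+ ```
+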
  ## Next steps

  - Set up [prompts](https://mastra.ai/docs/editor/prompts) to build reusable instruction templates.
@@ -73,4 +73,6 @@ Develop your project locally with [`mastra dev`](https://mastra.ai/reference/cli

  Once you're ready to deploy your application to production, use [`mastra studio deploy`](https://mastra.ai/reference/cli/mastra) and [`mastra server deploy`](https://mastra.ai/reference/cli/mastra) to push your application to the cloud.

- Follow the [Studio deployment guide](https://mastra.ai/docs/studio/deployment) and [Server deployment guide](https://mastra.ai/guides/deployment/mastra-platform) for step-by-step instructions.
+ Follow the [Studio deployment guide](https://mastra.ai/docs/studio/deployment) and [Server deployment guide](https://mastra.ai/guides/deployment/mastra-platform) for step-by-step instructions.
+
+ If you host your Mastra application on your own infrastructure, you can still send observability data to Studio using the [CloudExporter](https://mastra.ai/docs/observability/tracing/exporters/cloud).
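+
+ For example, a minimal config following the [CloudExporter quickstart](https://mastra.ai/docs/observability/tracing/exporters/cloud), assuming `MASTRA_CLOUD_ACCESS_TOKEN` and `MASTRA_PROJECT_ID` are set in your environment:
+
+ ```typescript
+ import { Mastra } from '@mastra/core'
+ import { Observability, CloudExporter } from '@mastra/observability'
+
+ export const mastra = new Mastra({
+   observability: new Observability({
+     configs: {
+       production: {
+         serviceName: 'my-service',
+         // With the env vars set, CloudExporter needs no constructor options
+         exporters: [new CloudExporter()],
+       },
+     },
+   }),
+ })
+ ```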
@@ -333,13 +333,33 @@ Reflection works similarly — the Reflector runs in the background when observa

  ### Settings

- | Setting | Default | What it controls |
- | --- | --- | --- |
- | `observation.bufferTokens` | `0.2` | How often to buffer. `0.2` means every 20% of `messageTokens` — with the default 30k threshold, that's roughly every 6k tokens. Can also be an absolute token count (e.g. `5000`). |
- | `observation.bufferActivation` | `0.8` | How aggressively to clear the message window on activation. `0.8` means remove enough messages to keep only 20% of `messageTokens` remaining. Lower values keep more message history. |
- | `observation.blockAfter` | `1.2` | Safety threshold as a multiplier of `messageTokens`. At `1.2`, synchronous observation is forced at 36k tokens (1.2 × 30k). Only matters if buffering can't keep up. |
- | `reflection.bufferActivation` | `0.5` | When to start background reflection. `0.5` means reflection begins when observations reach 50% of the `observationTokens` threshold. |
- | `reflection.blockAfter` | `1.2` | Safety threshold for reflection, same logic as observation. |
+ | Setting | Default | What it controls |
+ | --- | --- | --- |
+ | `observation.bufferTokens` | `0.2` | How often to buffer. `0.2` means every 20% of `messageTokens` — with the default 30k threshold, that's roughly every 6k tokens. Can also be an absolute token count (e.g. `5000`). |
+ | `observation.bufferActivation` | `0.8` | How aggressively to clear the message window on activation. `0.8` means remove enough messages to keep only 20% of `messageTokens` remaining. Lower values keep more message history. |
+ | `observation.blockAfter` | `1.2` | Safety threshold as a multiplier of `messageTokens`. At `1.2`, synchronous observation is forced at 36k tokens (1.2 × 30k). Only matters if buffering can't keep up. |
+ | `activateAfterIdle` | none | Forces buffered observations and buffered reflections to activate after a period of inactivity, even if their token thresholds have not been reached yet. Accepts milliseconds (e.g. `300_000`) or duration strings like `"5m"` or `"1hr"`. Set this to your prompt cache TTL if you want activation to happen before the next cold prompt. |
+ | `activateOnProviderChange` | `false` | Forces buffered observations and reflections to activate when the next step uses a different `provider/model` than the one that produced the latest assistant step. Use this when switching providers or models would invalidate prompt cache reuse. |
+ | `reflection.bufferActivation` | `0.5` | When to start background reflection. `0.5` means reflection begins when observations reach 50% of the `observationTokens` threshold. |
+ | `reflection.blockAfter` | `1.2` | Safety threshold for reflection, same logic as observation. |
+
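+ For example, to buffer on an absolute token count instead of a fraction (a sketch assuming the nested `observation` config shape implied by the dotted setting names above):
+
+ ```typescript
+ const memory = new Memory({
+   options: {
+     observationalMemory: {
+       model: 'google/gemini-2.5-flash',
+       observation: {
+         // Assumed shape: buffer roughly every 5k tokens instead of every 20%
+         bufferTokens: 5000,
+       },
+     },
+   },
+ })
+ ```
+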
+ If you're relying on prompt caching, set `activateAfterIdle` to match your cache TTL. That way, once a thread has been idle long enough for the cache to expire, the next request can activate buffered observations or reflections first and send a smaller compressed context window.
+
+ ```typescript
+ const memory = new Memory({
+   options: {
+     observationalMemory: {
+       model: 'google/gemini-2.5-flash',
+       activateAfterIdle: '5m',
+       activateOnProviderChange: true,
+     },
+   },
+ })
+ ```
+
+ With a 5-minute prompt cache TTL, this activates buffered context after 5 minutes of inactivity so the next uncached prompt uses observations and reflections instead of a larger raw message window. If you prefer, the millisecond form `300_000` works the same way.
+
+ Changing models or providers mid-thread invalidates the prompt cache. If your agent can switch between providers or models mid-thread, `activateOnProviderChange: true` forces buffered context to activate before the new provider runs. That avoids sending a large raw window to a provider that cannot reuse the previous prompt cache.

  ### Disabling

@@ -2,6 +2,18 @@

  The `CloudExporter` sends traces, logs, metrics, scores, and feedback to the Mastra platform. Use it to route observability data from any Mastra app to a hosted project in the Mastra platform.

+ > **Self-hosted or standalone apps:** If you host your Mastra application on your own infrastructure (not on the Mastra platform), you still need a deployed Studio project to view traces, logs, and metrics. `CloudExporter` sends data to a Studio project, so one must exist before you can use it.
+ >
+ > 1. [Create a Mastra project](https://mastra.ai/guides/getting-started/quickstart) if you don't have one yet.
+ > 2. [Deploy Studio](https://mastra.ai/docs/studio/deployment) to the Mastra platform with `mastra studio deploy`.
+ > 3. Follow the [quickstart steps below](#quickstart) to create an access token and find your project ID.
+
+ ## Version compatibility
+
+ - Use `CloudExporter` with `@mastra/observability@1.8.0` or later.
+ - If you use `@mastra/observability@1.8.0` through `1.9.1`, set `MASTRA_CLOUD_TRACES_ENDPOINT=https://observability.mastra.ai` in addition to `MASTRA_CLOUD_ACCESS_TOKEN` and `MASTRA_PROJECT_ID`, as shown below.
+ - Starting in `@mastra/observability@1.9.2`, `CloudExporter` defaults to `https://observability.mastra.ai`, so `MASTRA_CLOUD_TRACES_ENDPOINT` is only required when you want to send telemetry to a different collector.
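+
+ For example, an environment for `1.8.0` through `1.9.1` (the token and project ID are placeholders):
+
+ ```bash
+ MASTRA_CLOUD_ACCESS_TOKEN=<your-cloud-access-token>
+ MASTRA_PROJECT_ID=<your-project-id>
+ MASTRA_CLOUD_TRACES_ENDPOINT=https://observability.mastra.ai
+ ```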
+
  ## Quickstart

  To connect `CloudExporter`, create an access token, find the destination `projectId`, and add the exporter to your observability config.
@@ -57,52 +69,30 @@ MASTRA_PROJECT_ID=<your-project-id>

  ### 3. Set your environment variables

- Tokens created with `mastra auth tokens create` are organization-scoped, so `MASTRA_PROJECT_ID` is required.
-
- If you use a project-scoped token from the Mastra platform instead, `CloudExporter` can route without `projectId`, but setting it explicitly is still supported.
-
- Add both values to your environment:
+ Set both values in your environment so `CloudExporter` can authenticate and route telemetry to the correct project:

  ```bash
  MASTRA_CLOUD_ACCESS_TOKEN=<your-cloud-access-token>
  MASTRA_PROJECT_ID=<your-project-id>
  ```

- For example:
+ If you use `@mastra/observability@1.8.0` through `1.9.1`, also set the Mastra platform collector explicitly:

  ```bash
- MASTRA_CLOUD_ACCESS_TOKEN=<your-cloud-access-token>
- MASTRA_PROJECT_ID=<your-project-id>
+ MASTRA_CLOUD_TRACES_ENDPOINT=https://observability.mastra.ai
  ```

- ### 4. Enable `CloudExporter`
-
- The following example demonstrates how to add `CloudExporter` to your observability config:
-
- ```ts
- import { Mastra } from '@mastra/core'
- import { Observability, CloudExporter } from '@mastra/observability'
+ If you want to send telemetry somewhere other than the Mastra platform, set `MASTRA_CLOUD_TRACES_ENDPOINT` as well. Pass either a base origin or a full traces publish URL ending in `/spans/publish`.

- export const mastra = new Mastra({
-   observability: new Observability({
-     configs: {
-       production: {
-         serviceName: 'my-service',
-         exporters: [
-           new CloudExporter({
-             accessToken: process.env.MASTRA_CLOUD_ACCESS_TOKEN,
-             projectId: process.env.MASTRA_PROJECT_ID,
-           }),
-         ],
-       },
-     },
-   }),
- })
+ ```bash
+ MASTRA_CLOUD_TRACES_ENDPOINT=https://collector.example.com
  ```

- Set `serviceName` in code on the observability config, not on `CloudExporter` itself. In the example above, the value is `configs.production.serviceName`.
+ When you pass a base origin, `CloudExporter` derives the matching publish URLs for traces, logs, metrics, scores, and feedback automatically.
+
+ ### 4. Enable `CloudExporter`

- For example:
+ The following example demonstrates how to add `CloudExporter` to your observability config:

  ```ts
  import { Mastra } from '@mastra/core'
@@ -120,6 +110,8 @@ export const mastra = new Mastra({
  })
  ```

+ Set `serviceName` on the observability config, not on `CloudExporter` itself.
+
  Use a stable `serviceName` value. In Studio, traces can be filtered by **Deployments → Service Name**, so a consistent name makes traces easier to find.

  Visit [Observability configuration reference](https://mastra.ai/reference/observability/tracing/configuration) for the full observability config shape.
@@ -130,7 +122,7 @@ If you prefer, rely entirely on environment variables:
  new CloudExporter()
  ```

- With `MASTRA_CLOUD_ACCESS_TOKEN` and `MASTRA_PROJECT_ID` set, `CloudExporter` automatically sends data to the correct Mastra platform project.
+ With `MASTRA_CLOUD_ACCESS_TOKEN` and `MASTRA_PROJECT_ID` set, `CloudExporter` sends data to the Mastra platform project you configured. If you also set `MASTRA_CLOUD_TRACES_ENDPOINT`, it sends data to that collector instead.

  > **Note:** Visit [CloudExporter reference](https://mastra.ai/reference/observability/tracing/exporters/cloud-exporter) for the full list of configuration options.

@@ -162,13 +154,17 @@ export const mastra = new Mastra({

  ## Complete configuration

- The following example demonstrates how to configure custom endpoints and batching behavior:
+ `CloudExporter` defaults to the Mastra platform. If you want to send telemetry to a different collector, set `MASTRA_CLOUD_TRACES_ENDPOINT` in your environment or pass `endpoint` in code.
+
+ ```bash
+ MASTRA_CLOUD_TRACES_ENDPOINT=https://collector.example.com
+ ```
+
+ The following example demonstrates how to override the collector endpoint and batching behavior in code:

  ```ts
  new CloudExporter({
-   accessToken: process.env.MASTRA_CLOUD_ACCESS_TOKEN,
-   projectId: process.env.MASTRA_PROJECT_ID,
-   endpoint: 'https://cloud.your-domain.com',
+   endpoint: 'https://collector.example.com',
    maxBatchSize: 1000,
    maxBatchWaitMs: 5000,
    logLevel: 'info',
@@ -179,13 +175,10 @@ new CloudExporter({

  After you enable `CloudExporter`, open your project in [Mastra Studio](https://projects.mastra.ai) to inspect the exported data.

- - Open your project and select **Open Studio**.
+ - Open the project that `MASTRA_PROJECT_ID` points to and select **Open Studio**.
  - In Studio, go to **Traces** to inspect agent and workflow traces.
  - Open the filter menu and use **Deployments → Service Name** to isolate traces from a specific app or deployment.
  - Use the **Logs** page in the project dashboard to inspect exported logs.
- - Use the project named by `MASTRA_PROJECT_ID` when you work with organization-scoped tokens.
-
- > **Note:** If you use a project-scoped token, open the project that issued the token. If you use an organization-scoped token, open the project named by `MASTRA_PROJECT_ID`.

  When you deploy with Mastra Studio, set **Deployment → Service Name** to a stable value and keep it aligned with the `serviceName` in your observability config. This makes traces easier to filter in Studio through **Deployments → Service Name** when multiple services or deployments send data to the same project.

@@ -122,6 +122,35 @@ new LangfuseExporter({
  })
  ```

+ #### Batch Tuning for High-Volume Traces
+
+ For self-hosted Langfuse deployments or streamed runs that produce many spans per second, you can tune the OTEL batch size and flush interval to reduce request pressure on the Langfuse ingestion endpoint:
+
+ ```typescript
+ new LangfuseExporter({
+   publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
+   secretKey: process.env.LANGFUSE_SECRET_KEY!,
+   flushAt: 500, // Maximum spans per OTEL export batch
+   flushInterval: 20, // Maximum seconds between flushes
+ })
+ ```
+
+ To suppress high-volume span types entirely (for example `MODEL_CHUNK` spans from streamed responses), use the observability-level [`excludeSpanTypes` option](https://mastra.ai/reference/observability/tracing/span-filtering) rather than configuring the exporter:
+
+ ```typescript
+ import { SpanType } from '@mastra/core/observability'
+
+ new Observability({
+   configs: {
+     langfuse: {
+       serviceName: 'my-service',
+       exporters: [new LangfuseExporter()],
+       excludeSpanTypes: [SpanType.MODEL_CHUNK],
+     },
+   },
+ })
+ ```
+
  ### Complete Configuration

  ```typescript
@@ -133,6 +162,8 @@ new LangfuseExporter({
  // Optional settings
  baseUrl: process.env.LANGFUSE_BASE_URL, // Default: https://cloud.langfuse.com
  realtime: process.env.NODE_ENV === 'development', // Dynamic mode selection
+ flushAt: 500, // Maximum spans per OTEL export batch
+ flushInterval: 20, // Maximum seconds between flushes
  logLevel: 'info', // Diagnostic logging: debug | info | warn | error

  // Langfuse-specific settings
@@ -242,7 +242,7 @@ Use [`prepareSendMessagesRequest`](https://ai-sdk.dev/docs/reference/ai-sdk-ui/u

  When your agent has [memory](https://mastra.ai/docs/memory/overview) configured, Mastra loads conversation history from storage on the server. Send only the new message from the client instead of the full conversation history.

- Sending the full history is redundant and can cause message ordering bugs because client-side timestamps can conflict with the timestamps stored in your database.
+ Sending the full history is redundant and can cause message-ordering bugs because client-side timestamps can conflict with the timestamps stored in your database.

  ```typescript
  import { useChat } from '@ai-sdk/react'
@@ -392,7 +392,8 @@ Mastra streams data to the frontend as "parts" within messages. Each part has a
  | Data Part Type | Source | Description |
  | --- | --- | --- |
  | `tool-{toolKey}` | AI SDK built-in | Tool invocation with states: `input-available`, `output-available`, `output-error` |
- | `data-workflow` | `workflowRoute()` | Workflow execution with step inputs, outputs, and status |
+ | `data-workflow` | `workflowRoute()` | Workflow execution state snapshots with step status and final outputs |
+ | `data-workflow-step` | `workflowRoute()` | Workflow step delta with the full payload for the step that just changed |
  | `data-network` | `networkRoute()` | Agent network execution with ordered steps and outputs |
  | `data-tool-agent` | Nested agent in tool | Agent output streamed from within a tool's `execute()` |
  | `data-tool-workflow` | Nested workflow in tool | Workflow output streamed from within a tool's `execute()` |
@@ -505,11 +506,11 @@ export function Chat() {

  ### Rendering workflow data

- When using `workflowRoute()` or `handleWorkflowStream()`, Mastra emits `data-workflow` parts that contain the workflow's execution state, including step statuses and outputs.
+ When using `workflowRoute()` or `handleWorkflowStream()`, Mastra emits `data-workflow` parts for workflow state snapshots and `data-workflow-step` parts for the full payload of the step that just changed. This keeps long-running workflows from repeating every completed step output in every intermediate snapshot.

  **Backend**:

- Define a workflow with multiple steps that will emit `data-workflow` parts as it executes.
+ Define a workflow with multiple steps that will emit `data-workflow` and `data-workflow-step` parts as it executes.

  ```typescript
  import { createStep, createWorkflow } from '@mastra/core/workflows'
@@ -584,14 +585,15 @@ export const mastra = new Mastra({

  **Frontend**:

- Check for `data-workflow` parts and render each step's status and output using the `WorkflowDataPart` type for type safety.
+ Check for `data-workflow` parts to render workflow status snapshots. If you need the full payload for the step that just changed, also read `data-workflow-step` parts.

  ```typescript
  import { useChat } from '@ai-sdk/react'
  import { DefaultChatTransport } from 'ai'
- import type { WorkflowDataPart } from '@mastra/ai-sdk'
+ import type { WorkflowDataPart, WorkflowStepDataPart } from '@mastra/ai-sdk'

  type WorkflowData = WorkflowDataPart['data']
+ type WorkflowStepData = WorkflowStepDataPart['data']
  type StepStatus = 'running' | 'success' | 'failed' | 'suspended' | 'waiting'

  function StepIndicator({
@@ -652,6 +654,17 @@ export function WorkflowChat() {
          </div>
        )
      }
+     if (part.type === 'data-workflow-step') {
+       const stepData = part.data as WorkflowStepData
+       return (
+         <StepIndicator
+           key={index}
+           name={stepData.step.name}
+           status={stepData.step.status}
+           output={stepData.step.output}
+         />
+       )
+     }
      return null
    })}
  </div>
@@ -1,6 +1,6 @@
  # Deploy Mastra to Netlify

- Use `@mastra/deployer-netlify` to deploy your Mastra server as serverless functions on Netlify. The deployer bundles your code and generates a `.netlify` directory conforming to Netlify's [frameworks API](https://docs.netlify.com/build/frameworks/frameworks-api/#netlifyv1functions), ready to deploy.
+ Use `@mastra/deployer-netlify` to deploy your Mastra server on Netlify. The deployer bundles your code and generates a `.netlify` directory conforming to Netlify's [Frameworks API](https://docs.netlify.com/build/frameworks/frameworks-api/), ready to deploy. You can deploy as serverless functions (default) or as [edge functions](https://docs.netlify.com/build/edge-functions/overview/) for lower latency and longer execution times.

  > **Info:** This guide covers deploying the [Mastra server](https://mastra.ai/docs/server/mastra-server). If you're using a [server adapter](https://mastra.ai/docs/server/server-adapters) or [web framework](https://mastra.ai/docs/deployment/web-framework), deploy the way you normally would for that framework.

@@ -49,6 +49,21 @@ export const mastra = new Mastra({
  })
  ```

+ To deploy as edge functions instead, pass `{ target: 'edge' }`:
+
+ ```typescript
+ import { Mastra } from '@mastra/core'
+ import { NetlifyDeployer } from '@mastra/deployer-netlify'
+
+ export const mastra = new Mastra({
+   deployer: new NetlifyDeployer({
+     target: 'edge',
+   }),
+ })
+ ```
+
+ Edge functions run on Deno at the edge closest to your users. They have no hard execution timeout (only a CPU time limit), making them a better fit for longer-running AI workflows. See the [constructor options](https://mastra.ai/reference/deployer/netlify) for details.
+
  Create a `netlify.toml` file with the following contents in your project root:

  ```toml
@@ -56,11 +56,136 @@ The Mastra platform replaces Mastra Cloud with separate Studio and Server produc

  ## Replace Mastra Cloud Store with a hosted database

- Mastra Cloud provided a managed LibSQL database. The Mastra platform does not host a database for you, so you need to point your storage at an externally hosted instance.
+ Mastra Cloud provided a managed libSQL database, backed by [Turso](https://turso.tech). The Mastra platform does not host a database for you, so you need to point your storage at an externally hosted instance.

- If you were already using a hosted database ("bring your own"), no changes are needed. Make sure the connection string is set as an environment variable in the dashboard rather than hardcoded.
+ If you were already using a hosted database ("bring your own"), no changes are needed. Ensure the connection string is set as an environment variable in the dashboard rather than hardcoded.

- If you were using `file:./mastra.db` with Cloud Store, please [contact support](mailto:support@mastra.ai) for a migration path to export and import your data.
+ If you were using Cloud Store, follow the steps below to export your data and load it into a new libSQL database that you control.
+
+ ### Export your Cloud Store data
+
+ There are two ways to export your Cloud Store data: a one-click download from the dashboard, or a manual dump via the Turso CLI.
+
+ #### Option A — Export from the dashboard (recommended)
+
+ Open your project in the [Mastra dashboard](https://projects.mastra.ai) and navigate to **Runtime → Settings → Storage**. Click the **Export Database** button. The dashboard generates a full `.sql` dump of your Cloud Store and downloads it directly to your Downloads folder.
+
+ Once the download completes, convert the dump into a SQLite database file:
+
+ ```bash
+ sqlite3 mydb.db < ~/Downloads/mastra-cloud-dump.sql
+ ```
+
+ You now have a portable `mydb.db` file you can inspect locally, back up, or use as the source for the new database in the steps that follow.
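+
+ For example, to sanity-check the import before going further (assuming the `sqlite3` CLI is installed):
+
+ ```bash
+ # List the tables restored from the dump
+ sqlite3 mydb.db "SELECT name FROM sqlite_master WHERE type='table';"
+ ```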
+
+ #### Option B — Export via the Turso CLI
+
+ If you prefer to work from the command line, or need to script the export, you can dump the database directly using the [Turso CLI](https://docs.turso.tech/cli). This approach requires the database URL and an auth token, both surfaced in the dashboard.
+
+ 1. Retrieve your Cloud Store credentials from the dashboard.
+
+    Open your project in the [Mastra dashboard](https://projects.mastra.ai) and navigate to **Runtime → Settings → Env Variables**. For Cloud Store–backed projects, two variables are injected alongside your own:
+
+    - `MASTRA_STORAGE_URL`: A libSQL connection string (e.g. `libsql://<db-name>-<org>.turso.io`).
+    - `MASTRA_STORAGE_AUTH_TOKEN`: A read-capable auth token scoped to that database.
+
+    Each row supports the standard environment variable actions — show/hide via the eye toggle, Edit, Delete, and Copy Value. Use **Copy Value** to grab both values for the dump command below.
+
+    > **Note:** These variables only appear for projects that were provisioned with Cloud Store. If you brought your own database to Mastra Cloud, you already have these credentials and can skip ahead to [Point your Mastra app at the new database](#point-your-mastra-app-at-the-new-database).
+
+    > **Info:** If the variables are missing, the values do not decrypt, or the Turso CLI rejects the token, email <support@mastra.ai> from the address associated with your Mastra Cloud account and ask for the libSQL URL and auth token for the project you want to export. Include the project name/ID. Support can also run the dump on your behalf if CLI access is blocked on your network.
+
+ 2. Install the Turso CLI.
+
+    With Homebrew:
+
+    ```bash
+    brew install tursodatabase/tap/turso
+    ```
+
+    Or with the install script:
+
+    ```bash
+    curl -sSfL https://get.tur.so/install.sh | bash
+    ```
+
+    See the [Turso CLI introduction](https://docs.turso.tech/cli/introduction) for Windows and headless-install options.
+
+ 3. Export the database to a SQL dump.
+
+    Set the credentials provided by support (or use the dashboard values if you already copied them earlier) as environment variables, then dump the database to a local file. If you copied the URL from the dashboard, swap the `libsql://` scheme for `https://` — the Turso CLI expects the HTTPS form when passing the URL with an auth token.
+
+    ```bash
+    export MASTRA_STORAGE_URL="https://<db-name>-<org>.turso.io"
+    export MASTRA_STORAGE_AUTH_TOKEN="<token-from-dashboard-or-support>"
+
+    turso db shell "$MASTRA_STORAGE_URL?authToken=$MASTRA_STORAGE_AUTH_TOKEN" ".dump" > mastra-cloud-dump.sql
+    ```
+
+    > **Warning:** Embedding the auth token in the connection string is less secure than Turso's recommended pattern — the full URL (with token) can end up in shell history, process listings, and terminal logs. Turso officially recommends running `turso auth login` and then dumping by database name only: `turso db shell <database-name> ".dump" > mastra-cloud-dump.sql`. That flow requires the database to live in a Turso account you own, which is not the case for Cloud Store, so the env-var example above is provided as an alternative for this one-time export. If you prefer to avoid token interpolation entirely, ask support to run the dump on your behalf and send you the resulting SQL file.
+
+    The resulting `mastra-cloud-dump.sql` contains the full schema and data: thread and message history, workflow snapshots, traces, and eval scores. Store it somewhere safe before continuing.
+
+ ### Load the dump into a new libSQL database
+
+ The dump is a standard SQL file and can be loaded into any libSQL-compatible database. The example below uses a new Turso-hosted database, which keeps the migration like-for-like and avoids schema translation.
+
+ 1. Authenticate the Turso CLI against your own Turso account.
+
+    ```bash
+    turso auth login
+    ```
+
+    If you do not have a Turso account, the CLI will prompt you to create one. See [Turso pricing](https://turso.tech/pricing) for plan details.
+
+ 2. Create a new database and load the dump in one step.
+
+    ```bash
+    turso db create mastra-migrated --from-dump ./mastra-cloud-dump.sql
+    ```
+
+    `--from-dump` restores a local SQLite/libSQL dump at create time, which is faster and safer than piping statements through `turso db shell` after the fact. Pick a region close to where your Mastra Server runs to minimize latency — list available regions with `turso db locations` and pass `--group <group-name>` if you manage multiple groups.
+
+    For multi-gigabyte dumps, add `--wait` so the CLI blocks until the database is fully available, as in the combined example below.
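+
+    A combined invocation might look like this (`my-group` is a placeholder for your own group name):
+
+    ```bash
+    turso db create mastra-migrated --from-dump ./mastra-cloud-dump.sql --group my-group --wait
+    ```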
+
+ 3. Generate connection credentials for the new database.
+
+    ```bash
+    turso db show mastra-migrated --url
+    turso db tokens create mastra-migrated
+    ```
+
+    The first command prints the libSQL URL. The second prints an auth token. Both are needed by `LibSQLStore`.
+
+ ### Point your Mastra app at the new database
+
+ Set the new credentials as environment variables, either locally in `.env` or in the Mastra platform dashboard:
+
+ ```bash
+ TURSO_DATABASE_URL="libsql://mastra-migrated-<org>.turso.io"
+ TURSO_AUTH_TOKEN="<token-from-turso-db-tokens-create>"
+ ```
+
+ Configure `LibSQLStore` to read from those variables:
+
+ ```ts
+ import { Mastra } from '@mastra/core/mastra'
+ import { LibSQLStore } from '@mastra/libsql'
+
+ export const mastra = new Mastra({
+   storage: new LibSQLStore({
+     id: 'libsql-storage',
+     url: process.env.TURSO_DATABASE_URL!,
+     authToken: process.env.TURSO_AUTH_TOKEN,
+   }),
+ })
+ ```
+
+ See the [libSQL storage reference](https://mastra.ai/reference/storage/libsql) for the full set of options.
+
+ ### Verify the migration
+
+ Before decommissioning your Cloud project, confirm the new database serves the data your app expects.
+
+ - Run `turso db shell mastra-migrated "SELECT name FROM sqlite_master WHERE type='table';"` to list tables. The output should include the Mastra-managed tables (e.g. `mastra_threads`, `mastra_messages`, `mastra_workflow_snapshot`, `mastra_traces`).
+ - Run a row count against a known-populated table, for example `turso db shell mastra-migrated "SELECT COUNT(*) FROM mastra_messages;"`, and compare it to the same query against the Cloud Store URL.
+ - Start your Mastra app against the new credentials and confirm that an existing thread or workflow run loads as expected in [Studio](https://mastra.ai/docs/studio/observability).

  ## Update observability configuration

@@ -13,7 +13,7 @@ const agent = new Agent({
    id: "my-agent",
    name: "My Agent",
    instructions: "You are a helpful assistant",
-   model: "netlify/anthropic/claude-3-haiku-20240307"
+   model: "netlify/anthropic/claude-haiku-4-5"
  });
  ```

@@ -35,7 +35,6 @@ ANTHROPIC_API_KEY=ant-...

  | Model |
  | --- |
- | `anthropic/claude-3-haiku-20240307` |
  | `anthropic/claude-haiku-4-5` |
  | `anthropic/claude-haiku-4-5-20251001` |
  | `anthropic/claude-opus-4-1-20250805` |
@@ -43,6 +42,7 @@ ANTHROPIC_API_KEY=ant-...
  | `anthropic/claude-opus-4-5` |
  | `anthropic/claude-opus-4-5-20251101` |
  | `anthropic/claude-opus-4-6` |
+ | `anthropic/claude-opus-4-7` |
  | `anthropic/claude-sonnet-4-0` |
  | `anthropic/claude-sonnet-4-20250514` |
  | `anthropic/claude-sonnet-4-5` |
@@ -1,6 +1,6 @@
  # ![OpenRouter logo](https://models.dev/logos/openrouter.svg)OpenRouter

- OpenRouter aggregates models from multiple providers with enhanced features like rate limiting and failover. Access 170 models through Mastra's model router.
+ OpenRouter aggregates models from multiple providers with enhanced features like rate limiting and failover. Access 171 models through Mastra's model router.

  Learn more in the [OpenRouter documentation](https://openrouter.ai/models).

@@ -41,6 +41,7 @@ ANTHROPIC_API_KEY=ant-...
  | `anthropic/claude-opus-4.1` |
  | `anthropic/claude-opus-4.5` |
  | `anthropic/claude-opus-4.6` |
+ | `anthropic/claude-opus-4.7` |
  | `anthropic/claude-sonnet-4` |
  | `anthropic/claude-sonnet-4.5` |
  | `anthropic/claude-sonnet-4.6` |