@mastra/mcp-docs-server 1.1.25 → 1.1.26-alpha.10
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.docs/docs/agents/overview.md +4 -0
- package/.docs/docs/mastra-platform/overview.md +3 -1
- package/.docs/docs/memory/observational-memory.md +27 -7
- package/.docs/docs/observability/tracing/exporters/cloud.md +34 -41
- package/.docs/guides/build-your-ui/ai-sdk-ui.md +19 -6
- package/.docs/guides/migrations/mastra-cloud.md +128 -3
- package/.docs/models/gateways/netlify.md +2 -1
- package/.docs/models/gateways/openrouter.md +2 -1
- package/.docs/models/gateways/vercel.md +4 -1
- package/.docs/models/index.md +36 -1
- package/.docs/models/providers/alibaba-cn.md +2 -1
- package/.docs/models/providers/anthropic.md +2 -1
- package/.docs/models/providers/cortecs.md +3 -1
- package/.docs/models/providers/firmware.md +2 -3
- package/.docs/models/providers/hpc-ai.md +73 -0
- package/.docs/models/providers/nvidia.md +1 -1
- package/.docs/models/providers/opencode.md +2 -1
- package/.docs/models/providers/poe.md +2 -1
- package/.docs/models/providers/zenmux.md +2 -1
- package/.docs/models/providers.md +1 -0
- package/.docs/reference/agents/generate.md +77 -3
- package/.docs/reference/client-js/mastra-client.md +23 -0
- package/.docs/reference/index.md +1 -0
- package/.docs/reference/memory/observational-memory.md +2 -0
- package/.docs/reference/observability/tracing/exporters/cloud-exporter.md +4 -2
- package/.docs/reference/workspace/docker-sandbox.md +196 -0
- package/CHANGELOG.md +45 -0
- package/package.json +5 -5
@@ -52,6 +52,8 @@ When referencing an agent from your Mastra instance, use `mastra.getAgentById()`
 
 Returns the full response after all tool calls and steps complete. The result includes `text`, `toolCalls`, `toolResults`, `steps`, and token `usage` statistics.
 
+See the [`Agent.generate()` reference](https://mastra.ai/reference/agents/generate) for the response shape, including tool call and tool result payloads.
+
 ```ts
 const agent = mastra.getAgentById('test-agent')
 const response = await agent.generate('Help me organize my day')
@@ -62,6 +64,8 @@ console.log(response.text)
 
 Returns a stream you can consume as tokens arrive. The result exposes `textStream` for incremental output and promises for `toolCalls`, `toolResults`, `steps`, and token `usage` that resolve when the stream finishes.
 
+See the [`MastraModelOutput` reference](https://mastra.ai/reference/streaming/agents/MastraModelOutput) for the stream shape, including tool call and tool result payloads.
+
 ```ts
 const agent = mastra.getAgentById('test-agent')
 const stream = await agent.stream('Help me organize my day')
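The `textStream` described in the hunk above is consumed with `for await`. The snippet below is a sketch of that consumption pattern only: it uses a stand-in async generator in place of a real agent stream, so `fakeTextStream` and `collectText` are illustrative names, not Mastra APIs.

```typescript
// Stand-in for a stream result's textStream: any AsyncIterable<string> works here.
async function* fakeTextStream(): AsyncGenerator<string> {
  yield 'Hello, '
  yield 'world'
}

// Accumulate tokens as they arrive, as you would with stream.textStream.
async function collectText(textStream: AsyncIterable<string>): Promise<string> {
  let text = ''
  for await (const chunk of textStream) {
    text += chunk
  }
  return text
}
```

With a real agent you would pass `stream.textStream` to the same loop and await the `usage` or `toolCalls` promises afterwards.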
@@ -73,4 +73,6 @@ Develop your project locally with [`mastra dev`](https://mastra.ai/reference/cli
 
 Once you're ready to deploy your application to production, use [`mastra studio deploy`](https://mastra.ai/reference/cli/mastra) and [`mastra server deploy`](https://mastra.ai/reference/cli/mastra) to push your application to the cloud.
 
-Follow the [Studio deployment guide](https://mastra.ai/docs/studio/deployment) and [Server deployment guide](https://mastra.ai/guides/deployment/mastra-platform) for step-by-step instructions.
+Follow the [Studio deployment guide](https://mastra.ai/docs/studio/deployment) and [Server deployment guide](https://mastra.ai/guides/deployment/mastra-platform) for step-by-step instructions.
+
+If you host your Mastra application on your own infrastructure, you can still send observability data to Studio using the [CloudExporter](https://mastra.ai/docs/observability/tracing/exporters/cloud).
@@ -333,13 +333,33 @@ Reflection works similarly — the Reflector runs in the background when observa
 
 ### Settings
 
-| Setting | Default | What it controls
-| ------------------------------ | ------- |
-| `observation.bufferTokens` | `0.2` | How often to buffer. `0.2` means every 20% of `messageTokens` — with the default 30k threshold, that's roughly every 6k tokens. Can also be an absolute token count (e.g. `5000`).
-| `observation.bufferActivation` | `0.8` | How aggressively to clear the message window on activation. `0.8` means remove enough messages to keep only 20% of `messageTokens` remaining. Lower values keep more message history.
-| `observation.blockAfter` | `1.2` | Safety threshold as a multiplier of `messageTokens`. At `1.2`, synchronous observation is forced at 36k tokens (1.2 × 30k). Only matters if buffering can't keep up.
-| 
-| 
+| Setting | Default | What it controls |
+| ------------------------------ | ------- | ---------------- |
+| `observation.bufferTokens` | `0.2` | How often to buffer. `0.2` means every 20% of `messageTokens` — with the default 30k threshold, that's roughly every 6k tokens. Can also be an absolute token count (e.g. `5000`). |
+| `observation.bufferActivation` | `0.8` | How aggressively to clear the message window on activation. `0.8` means remove enough messages to keep only 20% of `messageTokens` remaining. Lower values keep more message history. |
+| `observation.blockAfter` | `1.2` | Safety threshold as a multiplier of `messageTokens`. At `1.2`, synchronous observation is forced at 36k tokens (1.2 × 30k). Only matters if buffering can't keep up. |
+| `activateAfterIdle` | none | Forces buffered observations and buffered reflections to activate after a period of inactivity, even if their token thresholds have not been reached yet. Accepts milliseconds or duration strings like `300_000`, `"5m"`, or `"1hr"`. Set this to your prompt cache TTL if you want activation to happen before the next cold prompt. |
+| `activateOnProviderChange` | `false` | Forces buffered observations and reflections to activate when the next step uses a different `provider/model` than the one that produced the latest assistant step. Use this when switching providers or models would invalidate prompt cache reuse. |
+| `reflection.bufferActivation` | `0.5` | When to start background reflection. `0.5` means reflection begins when observations reach 50% of the `observationTokens` threshold. |
+| `reflection.blockAfter` | `1.2` | Safety threshold for reflection, same logic as observation. |
+
+If you're relying on prompt caching, set `activateAfterIdle` to match your cache TTL. That way, once a thread has been idle long enough for the cache to expire, the next request can activate buffered observations or reflections first and send a smaller compressed context window.
+
+```typescript
+const memory = new Memory({
+  options: {
+    observationalMemory: {
+      model: 'google/gemini-2.5-flash',
+      activateAfterIdle: '5m',
+      activateOnProviderChange: true,
+    },
+  },
+})
+```
+
+With a 5-minute prompt cache TTL, this activates buffered context after 5 minutes of inactivity so the next uncached prompt uses observations and reflections instead of a larger raw message window. If you prefer, `300_000` works the same way.
+
+Changing models or providers mid-thread will invalidate the prompt cache. If your agent can switch between providers or models mid-thread, `activateOnProviderChange: true` forces buffered context to activate before the new provider runs. That avoids sending a large raw window to a provider that cannot reuse the previous prompt cache.
 
 ### Disabling
 
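The thresholds in the settings table above compose by simple arithmetic. The helpers below are purely illustrative (they are not Mastra's internals, and the function names are hypothetical); they assume the default `messageTokens` threshold of 30k mentioned in the table.

```typescript
// bufferTokens: a fraction of messageTokens, or an absolute token count.
// This sketch treats values >= 1 as absolute counts.
function bufferInterval(messageTokens: number, bufferTokens: number): number {
  return bufferTokens < 1 ? messageTokens * bufferTokens : bufferTokens
}

// blockAfter: the multiplier at which synchronous observation is forced.
function blockThreshold(messageTokens: number, blockAfter: number): number {
  return messageTokens * blockAfter
}

// bufferActivation: the fraction of the window removed on activation;
// what remains is messageTokens * (1 - bufferActivation).
function remainingAfterActivation(messageTokens: number, bufferActivation: number): number {
  return messageTokens * (1 - bufferActivation)
}

// With the defaults: buffering roughly every 6k tokens, forced sync
// near 36k tokens, and roughly 6k tokens kept after activation.
```

This is why the table describes `0.2` as "roughly every 6k tokens" and `1.2` as forcing observation at 36k: both are straight multiples of the 30k `messageTokens` threshold.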
@@ -2,6 +2,18 @@
 
 The `CloudExporter` sends traces, logs, metrics, scores, and feedback to the Mastra platform. Use it to route observability data from any Mastra app to a hosted project in the Mastra platform.
 
+> **Self-hosted or standalone apps:** If you host your Mastra application on your own infrastructure (not on Mastra Platform), you still need a deployed Studio project to view traces, logs, and metrics. `CloudExporter` sends data to a Studio project, so one must exist before you can use it.
+>
+> 1. [Create a Mastra project](https://mastra.ai/guides/getting-started/quickstart) if you don't have one yet.
+> 2. [Deploy Studio](https://mastra.ai/docs/studio/deployment) to the Mastra platform with `mastra studio deploy`.
+> 3. Follow the [quickstart steps below](#quickstart) to create an access token and find your project ID.
+
+## Version compatibility
+
+- Use `CloudExporter` with `@mastra/observability@1.8.0` or later.
+- If you use `@mastra/observability@1.8.0` through `1.9.1`, set `MASTRA_CLOUD_TRACES_ENDPOINT=https://observability.mastra.ai` in addition to `MASTRA_CLOUD_ACCESS_TOKEN` and `MASTRA_PROJECT_ID`.
+- Starting in `@mastra/observability@1.9.2`, `CloudExporter` defaults to `https://observability.mastra.ai`, so `MASTRA_CLOUD_TRACES_ENDPOINT` is only required when you want to send telemetry to a different collector.
+
 ## Quickstart
 
 To connect `CloudExporter`, create an access token, find the destination `projectId`, and add the exporter to your observability config.
@@ -57,52 +69,30 @@ MASTRA_PROJECT_ID=<your-project-id>
 
 ### 3. Set your environment variables
 
-
-
-If you use a project-scoped token from the Mastra platform instead, `CloudExporter` can route without `projectId`, but setting it explicitly is still supported.
-
-Add both values to your environment:
+Set both values in your environment so `CloudExporter` can authenticate and route telemetry to the correct project:
 
 ```bash
 MASTRA_CLOUD_ACCESS_TOKEN=<your-cloud-access-token>
 MASTRA_PROJECT_ID=<your-project-id>
 ```
 
-
+If you use `@mastra/observability@1.8.0` through `1.9.1`, also set the Mastra platform collector explicitly:
 
 ```bash
-
-MASTRA_PROJECT_ID=<your-project-id>
+MASTRA_CLOUD_TRACES_ENDPOINT=https://observability.mastra.ai
 ```
 
-
-
-The following example demonstrates how to add `CloudExporter` to your observability config:
-
-```ts
-import { Mastra } from '@mastra/core'
-import { Observability, CloudExporter } from '@mastra/observability'
+If you want to send telemetry somewhere other than Mastra platform, set `MASTRA_CLOUD_TRACES_ENDPOINT` as well. Pass either a base origin or a full traces publish URL ending in `/spans/publish`.
 
-
-
-  configs: {
-    production: {
-      serviceName: 'my-service',
-      exporters: [
-        new CloudExporter({
-          accessToken: process.env.MASTRA_CLOUD_ACCESS_TOKEN,
-          projectId: process.env.MASTRA_PROJECT_ID,
-        }),
-      ],
-    },
-  },
-}),
-})
+```bash
+MASTRA_CLOUD_TRACES_ENDPOINT=https://collector.example.com
 ```
 
-
+When you pass a base origin, `CloudExporter` derives the matching publish URLs for traces, logs, metrics, scores, and feedback automatically.
+
+### 4. Enable `CloudExporter`
 
-
+The following example demonstrates how to add `CloudExporter` to your observability config:
 
 ```ts
 import { Mastra } from '@mastra/core'
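The endpoint normalization described in the hunk above (base origin vs. full publish URL) can be sketched in a few lines. The `/spans/publish` path comes from the docs; the function name and the pass-through behavior for already-complete URLs are assumptions of this sketch, not `CloudExporter`'s actual code.

```typescript
// Hypothetical sketch: resolve the traces publish URL from either a base
// origin (e.g. https://collector.example.com) or a full publish URL that
// already ends in /spans/publish.
function resolveTracesPublishUrl(endpoint: string): string {
  const trimmed = endpoint.replace(/\/+$/, '') // drop trailing slashes
  // A complete publish URL is used as-is; a base origin gets the path appended.
  return trimmed.endsWith('/spans/publish') ? trimmed : `${trimmed}/spans/publish`
}

// resolveTracesPublishUrl('https://collector.example.com')
// → 'https://collector.example.com/spans/publish'
```

Logs, metrics, scores, and feedback presumably get analogous derived URLs, but those paths are not documented here, so they are omitted.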
@@ -120,6 +110,8 @@ export const mastra = new Mastra({
 })
 ```
 
+Set `serviceName` on the observability config, not on `CloudExporter` itself.
+
 Use a stable `serviceName` value. In Studio, traces can be filtered by **Deployments → Service Name**, so a consistent name makes traces easier to find.
 
 Visit [Observability configuration reference](https://mastra.ai/reference/observability/tracing/configuration) for the full observability config shape.
@@ -130,7 +122,7 @@ If you prefer, rely entirely on environment variables:
 new CloudExporter()
 ```
 
-With `MASTRA_CLOUD_ACCESS_TOKEN` and `MASTRA_PROJECT_ID` set, `CloudExporter`
+With `MASTRA_CLOUD_ACCESS_TOKEN` and `MASTRA_PROJECT_ID` set, `CloudExporter` sends data to the Mastra platform project you configured. If you also set `MASTRA_CLOUD_TRACES_ENDPOINT`, it sends data to that collector instead.
 
 > **Note:** Visit [CloudExporter reference](https://mastra.ai/reference/observability/tracing/exporters/cloud-exporter) for the full list of configuration options.
 
@@ -162,13 +154,17 @@ export const mastra = new Mastra({
 
 ## Complete configuration
 
-
+`CloudExporter` defaults to Mastra platform. If you want to send telemetry to a different collector, set `MASTRA_CLOUD_TRACES_ENDPOINT` in your environment or pass `endpoint` in code.
+
+```bash
+MASTRA_CLOUD_TRACES_ENDPOINT=https://collector.example.com
+```
+
+The following example demonstrates how to override the collector endpoint and batching behavior in code:
 
 ```ts
 new CloudExporter({
-
-  projectId: process.env.MASTRA_PROJECT_ID,
-  endpoint: 'https://cloud.your-domain.com',
+  endpoint: 'https://collector.example.com',
   maxBatchSize: 1000,
   maxBatchWaitMs: 5000,
   logLevel: 'info',
@@ -179,13 +175,10 @@ new CloudExporter({
 
 After you enable `CloudExporter`, open your project in [Mastra Studio](https://projects.mastra.ai) to inspect the exported data.
 
-- Open
+- Open the project you set `MASTRA_PROJECT_ID` to and select **Open Studio**.
 - In Studio, go to **Traces** to inspect agent and workflow traces.
 - Open the filter menu and use **Deployments → Service Name** to isolate traces from a specific app or deployment.
 - Use the **Logs** page in the project dashboard to inspect exported logs.
-- Use the project named by `MASTRA_PROJECT_ID` when you work with organization-scoped tokens.
-
-> **Note:** If you use a project-scoped token, open the project that issued the token. If you use an organization-scoped token, open the project named by `MASTRA_PROJECT_ID`.
 
 When you deploy with Mastra Studio, set **Deployment → Service Name** to a stable value and keep it aligned with the `serviceName` in your observability config. This makes traces easier to filter in Studio through **Deployments → Service Name** when multiple services or deployments send data to the same project.
 
@@ -242,7 +242,7 @@ Use [`prepareSendMessagesRequest`](https://ai-sdk.dev/docs/reference/ai-sdk-ui/u
 
 When your agent has [memory](https://mastra.ai/docs/memory/overview) configured, Mastra loads conversation history from storage on the server. Send only the new message from the client instead of the full conversation history.
 
-Sending the full history is redundant and can cause message
+Sending the full history is redundant and can cause message-ordering bugs because client-side timestamps can conflict with the timestamps stored in your database.
 
 ```typescript
 import { useChat } from '@ai-sdk/react'
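The "send only the new message" pattern from the hunk above boils down to trimming the outgoing request body to the newest entry. The sketch below uses a simplified stand-in message type and a hypothetical `prepareBody` helper, not the AI SDK's full `prepareSendMessagesRequest` signature:

```typescript
// Simplified stand-in for a chat message; the real AI SDK type has more fields.
interface ChatMessage {
  id: string
  role: 'user' | 'assistant'
  content: string
}

// Keep only the newest message in the outgoing body; the server rebuilds the
// rest of the conversation from memory/storage, so history is not resent.
function prepareBody(messages: ChatMessage[]): { message: ChatMessage | undefined } {
  return { message: messages[messages.length - 1] }
}
```

In a real `useChat` setup you would return this shape from the transport's `prepareSendMessagesRequest` callback so only the latest message crosses the wire.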
@@ -392,7 +392,8 @@ Mastra streams data to the frontend as "parts" within messages. Each part has a
 | Data Part Type | Source | Description |
 | -------------------- | ----------------------- | ---------------------------------------------------------------------------------- |
 | `tool-{toolKey}` | AI SDK built-in | Tool invocation with states: `input-available`, `output-available`, `output-error` |
-| `data-workflow` | `workflowRoute()` | Workflow execution with step
+| `data-workflow` | `workflowRoute()` | Workflow execution state snapshots with step status and final outputs |
+| `data-workflow-step` | `workflowRoute()` | Workflow step delta with the full payload for the step that just changed |
 | `data-network` | `networkRoute()` | Agent network execution with ordered steps and outputs |
 | `data-tool-agent` | Nested agent in tool | Agent output streamed from within a tool's `execute()` |
 | `data-tool-workflow` | Nested workflow in tool | Workflow output streamed from within a tool's `execute()` |
@@ -505,11 +506,11 @@ export function Chat() {
 
 ### Rendering workflow data
 
-When using `workflowRoute()` or `handleWorkflowStream()`, Mastra emits `data-workflow` parts
+When using `workflowRoute()` or `handleWorkflowStream()`, Mastra emits `data-workflow` parts for workflow state snapshots and `data-workflow-step` parts for the full payload of the step that just changed. This keeps long-running workflows from repeating every completed step output in every intermediate snapshot.
 
 **Backend**:
 
-Define a workflow with multiple steps that will emit `data-workflow` parts as it executes.
+Define a workflow with multiple steps that will emit `data-workflow` and `data-workflow-step` parts as it executes.
 
 ```typescript
 import { createStep, createWorkflow } from '@mastra/core/workflows'
@@ -584,14 +585,15 @@ export const mastra = new Mastra({
 
 **Frontend**:
 
-Check for `data-workflow` parts
+Check for `data-workflow` parts to render workflow status snapshots. If you need the full payload for the step that just changed, also read `data-workflow-step` parts.
 
 ```typescript
 import { useChat } from '@ai-sdk/react'
 import { DefaultChatTransport } from 'ai'
-import type { WorkflowDataPart } from '@mastra/ai-sdk'
+import type { WorkflowDataPart, WorkflowStepDataPart } from '@mastra/ai-sdk'
 
 type WorkflowData = WorkflowDataPart['data']
+type WorkflowStepData = WorkflowStepDataPart['data']
 type StepStatus = 'running' | 'success' | 'failed' | 'suspended' | 'waiting'
 
 function StepIndicator({
@@ -652,6 +654,17 @@ export function WorkflowChat() {
       </div>
     )
   }
+  if (part.type === 'data-workflow-step') {
+    const stepData = part.data as WorkflowStepData
+    return (
+      <StepIndicator
+        key={index}
+        name={stepData.step.name}
+        status={stepData.step.status}
+        output={stepData.step.output}
+      />
+    )
+  }
   return null
 })}
 </div>
@@ -56,11 +56,136 @@ The Mastra platform replaces Mastra Cloud with separate Studio and Server produc
 
 ## Replace Mastra Cloud Store with a hosted database
 
-Mastra Cloud provided a managed
+Mastra Cloud provided a managed libSQL database, backed by [Turso](https://turso.tech). The Mastra platform does not host a database for you, so you need to point your storage at an externally hosted instance.
 
-If you were already using a hosted database ("bring your own"), no changes are needed.
+If you were already using a hosted database ("bring your own"), no changes are needed. Ensure the connection string is set as an environment variable in the dashboard rather than hardcoded.
 
-If you were using
+If you were using Cloud Store, follow the steps below to export your data and load it into a new libSQL database that you control.
+
+### Export your Cloud Store data
+
+There are two ways to export your Cloud Store data: a one-click download from the dashboard, or a manual dump via the Turso CLI.
+
+#### Option A — Export from the dashboard (recommended)
+
+Open your project in the [Mastra dashboard](https://projects.mastra.ai) and navigate to **Runtime → Settings → Storage**. Click the **Export Database** button. The dashboard generates a full `.sql` dump of your Cloud Store and downloads it directly to your Downloads folder.
+
+Once the download completes, convert the dump into a SQLite database file:
+
+```bash
+sqlite3 mydb.db < ~/Downloads/mastra-cloud-dump.sql
+```
+
+You now have a portable `mydb.db` file you can inspect locally, back up, or use as the source for the new database in the steps that follow.
+
+#### Option B — Export via the Turso CLI
+
+If you prefer to work from the command line, or need to script the export, you can dump the database directly using the [Turso CLI](https://docs.turso.tech/cli). This approach requires the database URL and an auth token, both surfaced in the dashboard.
+
+1. Retrieve your Cloud Store credentials from the dashboard.
+
+   Open your project in the [Mastra dashboard](https://projects.mastra.ai) and navigate to **Runtime → Settings → Env Variables**. For Cloud Store–backed projects, two variables are injected alongside your own:
+
+   - `MASTRA_STORAGE_URL`: A libSQL connection string (e.g. `libsql://<db-name>-<org>.turso.io`).
+   - `MASTRA_STORAGE_AUTH_TOKEN`: A read-capable auth token scoped to that database.
+
+   Each row supports the standard environment variable actions — show/hide via the eye toggle, Edit, Delete, and Copy Value. Use **Copy Value** to grab both values for the dump command below.
+
+   > **Note:** These variables only appear for projects that were provisioned with Cloud Store. If you brought your own database to Mastra Cloud, you already have these credentials and can skip ahead to [Point your Mastra app at the new database](#point-your-mastra-app-at-the-new-database).
+
+   > **Info:** If the variables are missing, the values do not decrypt, or the Turso CLI rejects the token, email <support@mastra.ai> from the address associated with your Mastra Cloud account and ask for the libSQL URL and auth token for the project you want to export. Include the project name/ID. Support can also run the dump on your behalf if CLI access is blocked on your network.
+
+2. Install the Turso CLI.
+
+   ```bash
+   brew install tursodatabase/tap/turso
+   ```
+
+   ```bash
+   curl -sSfL https://get.tur.so/install.sh | bash
+   ```
+
+   See the [Turso CLI introduction](https://docs.turso.tech/cli/introduction) for Windows and headless-install options.
+
+3. Export the database to a SQL dump.
+
+   Set the credentials provided by support (or use the dashboard values if you already copied them earlier) as environment variables, then dump the database to a local file:
+
+   ```bash
+   export MASTRA_STORAGE_URL="libsql://<db-name>-<org>.turso.io"
+   export MASTRA_STORAGE_AUTH_TOKEN="<token-from-dashboard-or-support>"
+
+   turso db shell "$MASTRA_STORAGE_URL?authToken=$MASTRA_STORAGE_AUTH_TOKEN" ".dump" > mastra-cloud-dump.sql
+   ```
+
+   > **Warning:** Embedding the auth token in the connection string is less secure than Turso's recommended pattern — the full URL (with token) can end up in shell history, process listings, and terminal logs. Turso officially recommends running `turso auth login` and then dumping by database name only: `turso db shell <database-name> ".dump" > mastra-cloud-dump.sql`. That flow requires the database to live in a Turso account you own, which is not the case for Cloud Store, so the env-var example above is provided as an alternative for this one-time export. If you prefer to avoid token interpolation entirely, ask support to run the dump on your behalf and send you the resulting SQL file.
+
+   The resulting `mastra-cloud-dump.sql` contains the full schema and data: thread and message history, workflow snapshots, traces, and eval scores. Store it somewhere safe before continuing.
+
+### Load the dump into a new libSQL database
+
+The dump is a standard SQL file and can be loaded into any libSQL-compatible database. The example below uses a new Turso-hosted database, which keeps the migration like-for-like and avoids schema translation.
+
+1. Authenticate the Turso CLI against your own Turso account.
+
+   ```bash
+   turso auth login
+   ```
+
+   If you do not have a Turso account, the CLI will prompt you to create one. See [Turso pricing](https://turso.tech/pricing) for plan details.
+
+2. Create a new database and load the dump in one step.
+
+   ```bash
+   turso db create mastra-migrated --from-dump ./mastra-cloud-dump.sql
+   ```
+
+   `--from-dump` restores a local SQLite/libSQL dump at create time, which is faster and safer than piping statements through `turso db shell` after the fact. Pick a region close to where your Mastra Server runs to minimize latency — list available regions with `turso db locations` and pass `--group <group-name>` if you manage multiple groups.
+
+   For multi-gigabyte dumps, add `--wait` so the CLI blocks until the database is fully available.
+
+3. Generate connection credentials for the new database.
+
+   ```bash
+   turso db show mastra-migrated --url
+   turso db tokens create mastra-migrated
+   ```
+
+   The first command prints the libSQL URL. The second prints an auth token. Both are needed by `LibSQLStore`.
+
+### Point your Mastra app at the new database
+
+Set the new credentials as environment variables, either locally in `.env` or in the Mastra platform dashboard:
+
+```bash
+TURSO_DATABASE_URL="libsql://mastra-migrated-<org>.turso.io"
+TURSO_AUTH_TOKEN="<token-from-turso-db-tokens-create>"
+```
+
+Configure `LibSQLStore` to read from those variables:
+
+```ts
+import { Mastra } from '@mastra/core/mastra'
+import { LibSQLStore } from '@mastra/libsql'
+
+export const mastra = new Mastra({
+  storage: new LibSQLStore({
+    id: 'libsql-storage',
+    url: process.env.TURSO_DATABASE_URL!,
+    authToken: process.env.TURSO_AUTH_TOKEN,
+  }),
+})
+```
+
+See the [libSQL storage reference](https://mastra.ai/reference/storage/libsql) for the full set of options.
+
+### Verify the migration
+
+Before decommissioning your Cloud project, confirm the new database serves the data your app expects.
+
+- Run `turso db shell mastra-migrated "SELECT name FROM sqlite_master WHERE type='table';"` to list tables. The output should include the Mastra-managed tables (e.g. `mastra_threads`, `mastra_messages`, `mastra_workflow_snapshot`, `mastra_traces`).
+- Run a row count against a known-populated table, for example `turso db shell mastra-migrated "SELECT COUNT(*) FROM mastra_messages;"`, and compare it to the same query against the Cloud Store URL.
+- Start your Mastra app against the new credentials and confirm that an existing thread or workflow run loads as expected in [Studio](https://mastra.ai/docs/studio/observability).
 
 ## Update observability configuration
 
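The row-count comparison in the verification step above is easy to script once counts have been collected from both databases. This is a hedged sketch: the table names follow the doc's examples, the function name is hypothetical, and the counts are assumed to have been gathered from `turso db shell` output separately.

```typescript
// Compare per-table row counts from the old (Cloud Store) and new databases
// and return the tables whose counts differ or that are missing on the new side.
function diffRowCounts(
  oldCounts: Record<string, number>,
  newCounts: Record<string, number>,
): string[] {
  const mismatched: string[] = []
  for (const [table, count] of Object.entries(oldCounts)) {
    if (newCounts[table] !== count) mismatched.push(table)
  }
  return mismatched
}
```

An empty result means every table you checked migrated with the same row count; any listed table deserves a closer look before you decommission the Cloud project.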
@@ -1,6 +1,6 @@
 # Netlify
 
-Netlify AI Gateway provides unified access to multiple providers with built-in caching and observability. Access
+Netlify AI Gateway provides unified access to multiple providers with built-in caching and observability. Access 63 models through Mastra's model router.
 
 Learn more in the [Netlify documentation](https://docs.netlify.com/build/ai-gateway/overview/).
 
@@ -43,6 +43,7 @@ ANTHROPIC_API_KEY=ant-...
 | `anthropic/claude-opus-4-5` |
 | `anthropic/claude-opus-4-5-20251101` |
 | `anthropic/claude-opus-4-6` |
+| `anthropic/claude-opus-4-7` |
 | `anthropic/claude-sonnet-4-0` |
 | `anthropic/claude-sonnet-4-20250514` |
 | `anthropic/claude-sonnet-4-5` |
@@ -1,6 +1,6 @@
 # OpenRouter
 
-OpenRouter aggregates models from multiple providers with enhanced features like rate limiting and failover. Access
+OpenRouter aggregates models from multiple providers with enhanced features like rate limiting and failover. Access 171 models through Mastra's model router.
 
 Learn more in the [OpenRouter documentation](https://openrouter.ai/models).
 
@@ -41,6 +41,7 @@ ANTHROPIC_API_KEY=ant-...
 | `anthropic/claude-opus-4.1` |
 | `anthropic/claude-opus-4.5` |
 | `anthropic/claude-opus-4.6` |
+| `anthropic/claude-opus-4.7` |
 | `anthropic/claude-sonnet-4` |
 | `anthropic/claude-sonnet-4.5` |
 | `anthropic/claude-sonnet-4.6` |
@@ -1,6 +1,6 @@
 # Vercel
 
-Vercel aggregates models from multiple providers with enhanced features like rate limiting and failover. Access
+Vercel aggregates models from multiple providers with enhanced features like rate limiting and failover. Access 234 models through Mastra's model router.
 
 Learn more in the [Vercel documentation](https://ai-sdk.dev/providers/ai-sdk-providers).
 
@@ -72,6 +72,7 @@ ANTHROPIC_API_KEY=ant-...
 | `anthropic/claude-opus-4.1` |
 | `anthropic/claude-opus-4.5` |
 | `anthropic/claude-opus-4.6` |
+| `anthropic/claude-opus-4.7` |
 | `anthropic/claude-sonnet-4` |
 | `anthropic/claude-sonnet-4.5` |
 | `anthropic/claude-sonnet-4.6` |
@@ -119,6 +120,7 @@ ANTHROPIC_API_KEY=ant-...
 | `google/text-embedding-005` |
 | `google/text-multilingual-embedding-002` |
 | `inception/mercury-2` |
+| `inception/mercury-coder-small` |
 | `inception/mercury-edit-2` |
 | `kwaipilot/kat-coder-pro-v1` |
 | `kwaipilot/kat-coder-pro-v2` |
@@ -264,4 +266,5 @@ ANTHROPIC_API_KEY=ant-...
 | `zai/glm-4.7-flashx` |
 | `zai/glm-5` |
 | `zai/glm-5-turbo` |
+| `zai/glm-5.1` |
 | `zai/glm-5v-turbo` |
package/.docs/models/index.md
CHANGED
@@ -1,6 +1,6 @@
 # Model Providers

-Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to
+Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to 3610 models from 100 providers through a single API.

 ## Features

@@ -228,6 +228,41 @@ Mastra tries your primary model first. If it encounters a 500 error, rate limit,

 Your users never experience the disruption - the response comes back with the same format, just from a different model. The error context is preserved as the system moves through your fallback chain, ensuring clean error propagation while maintaining streaming compatibility.

+### Per-model settings
+
+Each fallback entry can carry its own `modelSettings`, `providerOptions`, and `headers` — useful when models in the chain need different temperatures or provider-specific knobs to produce comparable output.
+
+```typescript
+import { Agent } from '@mastra/core/agent';
+
+const agent = new Agent({
+  id: 'tuned-resilient',
+  name: 'Tuned Resilient Agent',
+  instructions: 'You are a helpful assistant.',
+  model: [
+    {
+      model: 'google/gemini-2.5-flash',
+      maxRetries: 2,
+      modelSettings: { temperature: 0.3 },
+      providerOptions: { google: { thinkingConfig: { thinkingBudget: 0 } } },
+    },
+    {
+      model: 'openai/gpt-5-mini',
+      maxRetries: 2,
+      modelSettings: { temperature: 0.7 },
+      providerOptions: { openai: { reasoningEffort: 'low' } },
+    },
+  ],
+});
+```
+
+**Precedence:**
+
+- `modelSettings` and `providerOptions`: per-fallback entry overrides call-time options, which override agent `defaultOptions`. `modelSettings` shallow-merges by key. `providerOptions` deep-merges recursively, so nested provider config (e.g. `google.thinkingConfig`) preserves sibling keys across layers.
+- `headers`: call-time `modelSettings.headers` overrides per-fallback `headers`, which overrides headers extracted from model-router models. Runtime headers (tracing, auth, tenancy) intentionally take precedence over model-level headers.
+
+Each field also accepts a function of `requestContext`, matching how dynamic models are resolved.
+
 ## Use local models with Mastra

 Mastra also supports local models like `gpt-oss`, `Qwen3`, `DeepSeek` and many more that you run on your own hardware. The application running your local model needs to provide an OpenAI-compatible API server for Mastra to connect to. We recommend using [LMStudio](https://lmstudio.ai/) (see [Running the LMStudio server](https://lmstudio.ai/docs/developer/core/server)).
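The precedence rules above can be sketched in code. This is an illustrative sketch of the documented merge behavior, not Mastra's actual implementation: `providerOptions` layers are merged recursively so nested keys from lower-precedence layers survive.

```typescript
type Dict = Record<string, unknown>;

function isPlainObject(v: unknown): v is Dict {
  return typeof v === "object" && v !== null && !Array.isArray(v);
}

// Recursive merge: keys in `override` win, but nested objects are merged
// so sibling keys from `base` (e.g. under google.*) are preserved.
function deepMerge(base: Dict, override: Dict): Dict {
  const out: Dict = { ...base };
  for (const [key, value] of Object.entries(override)) {
    out[key] =
      isPlainObject(out[key]) && isPlainObject(value)
        ? deepMerge(out[key] as Dict, value)
        : value;
  }
  return out;
}

// Layers ordered lowest to highest precedence: agent defaultOptions,
// call-time options, then the per-fallback entry.
function resolveProviderOptions(layers: Dict[]): Dict {
  return layers.reduce((acc, layer) => deepMerge(acc, layer), {} as Dict);
}

const resolved = resolveProviderOptions([
  { google: { thinkingConfig: { thinkingBudget: 1024 }, safety: "strict" } }, // hypothetical agent defaults
  { google: { thinkingConfig: { thinkingBudget: 0 } } },                      // per-fallback entry
]);

// The nested thinkingBudget is overridden while the sibling `safety`
// key from the lower layer survives the merge.
console.log(JSON.stringify(resolved));
```

A shallow merge (as documented for `modelSettings`) would instead replace the whole `google` object, dropping `safety`.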
@@ -1,6 +1,6 @@
 # Alibaba (China)

-Access
+Access 76 Alibaba (China) models through Mastra's model router. Authentication is handled automatically using the `DASHSCOPE_API_KEY` environment variable.

 Learn more in the [Alibaba (China) documentation](https://www.alibabacloud.com/help/en/model-studio/models).

@@ -46,6 +46,7 @@ for await (const chunk of stream) {
 | `alibaba-cn/deepseek-v3-1` | 131K | | | | | | $0.57 | $2 |
 | `alibaba-cn/deepseek-v3-2-exp` | 131K | | | | | | $0.29 | $0.43 |
 | `alibaba-cn/glm-5` | 203K | | | | | | $0.86 | $3 |
+| `alibaba-cn/glm-5.1` | 203K | | | | | | $0.87 | $3 |
 | `alibaba-cn/kimi-k2-thinking` | 262K | | | | | | $0.57 | $2 |
 | `alibaba-cn/kimi-k2.5` | 262K | | | | | | $0.57 | $2 |
 | `alibaba-cn/kimi/kimi-k2.5` | 262K | | | | | | $0.60 | $3 |
@@ -1,6 +1,6 @@
 # Anthropic

-Access
+Access 23 Anthropic models through Mastra's model router. Authentication is handled automatically using the `ANTHROPIC_API_KEY` environment variable.

 Learn more in the [Anthropic documentation](https://docs.anthropic.com/en/docs/about-claude/models).

@@ -49,6 +49,7 @@ for await (const chunk of stream) {
 | `anthropic/claude-opus-4-5` | 200K | | | | | | $5 | $25 |
 | `anthropic/claude-opus-4-5-20251101` | 200K | | | | | | $5 | $25 |
 | `anthropic/claude-opus-4-6` | 1.0M | | | | | | $5 | $25 |
+| `anthropic/claude-opus-4-7` | 1.0M | | | | | | $5 | $25 |
 | `anthropic/claude-sonnet-4-0` | 200K | | | | | | $3 | $15 |
 | `anthropic/claude-sonnet-4-20250514` | 200K | | | | | | $3 | $15 |
 | `anthropic/claude-sonnet-4-5` | 200K | | | | | | $3 | $15 |
@@ -1,6 +1,6 @@
 # Cortecs

-Access
+Access 32 Cortecs models through Mastra's model router. Authentication is handled automatically using the `CORTECS_API_KEY` environment variable.

 Learn more in the [Cortecs documentation](https://cortecs.ai).

@@ -49,6 +49,7 @@ for await (const chunk of stream) {
 | `cortecs/glm-4.7` | 198K | | | | | | $0.45 | $2 |
 | `cortecs/glm-4.7-flash` | 203K | | | | | | $0.09 | $0.53 |
 | `cortecs/glm-5` | 203K | | | | | | $1 | $3 |
+| `cortecs/glm-5.1` | 205K | | | | | | $1 | $4 |
 | `cortecs/gpt-4.1` | 1.0M | | | | | | $2 | $9 |
 | `cortecs/gpt-oss-120b` | 128K | | | | | | — | — |
 | `cortecs/intellect-3` | 128K | | | | | | $0.22 | $1 |
@@ -59,6 +60,7 @@ for await (const chunk of stream) {
 | `cortecs/minimax-m2` | 400K | | | | | | $0.39 | $2 |
 | `cortecs/minimax-m2.1` | 196K | | | | | | $0.34 | $1 |
 | `cortecs/minimax-m2.5` | 197K | | | | | | $0.32 | $1 |
+| `cortecs/minimax-M2.7` | 203K | | | | | | $0.47 | $1 |
 | `cortecs/nova-pro-v1` | 300K | | | | | | $1 | $4 |
 | `cortecs/qwen3-32b` | 16K | | | | | | $0.10 | $0.33 |
 | `cortecs/qwen3-coder-480b-a35b-instruct` | 262K | | | | | | $0.44 | $2 |
@@ -1,6 +1,6 @@
 # Firmware

-Access
+Access 24 Firmware models through Mastra's model router. Authentication is handled automatically using the `FIRMWARE_API_KEY` environment variable.

 Learn more in the [Firmware documentation](https://docs.frogbot.ai).

@@ -35,9 +35,8 @@ for await (const chunk of stream) {
 | Model | Context | Tools | Reasoning | Image | Audio | Video | Input $/1M | Output $/1M |
 | -------------------------------------- | ------- | ----- | --------- | ----- | ----- | ----- | ---------- | ----------- |
 | `firmware/claude-haiku-4-5` | 200K | | | | | | $1 | $5 |
-| `firmware/claude-opus-4-5` | 200K | | | | | | $5 | $25 |
 | `firmware/claude-opus-4-6` | 200K | | | | | | $5 | $25 |
-| `firmware/claude-
+| `firmware/claude-opus-4-7` | 200K | | | | | | $5 | $25 |
 | `firmware/claude-sonnet-4-6` | 200K | | | | | | $3 | $15 |
 | `firmware/deepseek-v3-2` | 128K | | | | | | $0.58 | $2 |
 | `firmware/gemini-2.5-flash` | 1.0M | | | | | | $0.30 | $3 |
@@ -0,0 +1,73 @@
+# HPC-AI
+
+Access 3 HPC-AI models through Mastra's model router. Authentication is handled automatically using the `HPC_AI_API_KEY` environment variable.
+
+Learn more in the [HPC-AI documentation](https://www.hpc-ai.com/doc/docs/quickstart/).
+
+```bash
+HPC_AI_API_KEY=your-api-key
+```
+
+```typescript
+import { Agent } from "@mastra/core/agent";
+
+const agent = new Agent({
+  id: "my-agent",
+  name: "My Agent",
+  instructions: "You are a helpful assistant",
+  model: "hpc-ai/minimax/minimax-m2.5"
+});
+
+// Generate a response
+const response = await agent.generate("Hello!");
+
+// Stream a response
+const stream = await agent.stream("Tell me a story");
+for await (const chunk of stream) {
+  console.log(chunk);
+}
+```
+
+> **Info:** Mastra uses the OpenAI-compatible `/chat/completions` endpoint. Some provider-specific features may not be available. Check the [HPC-AI documentation](https://www.hpc-ai.com/doc/docs/quickstart/) for details.
+
+## Models
+
+| Model | Context | Tools | Reasoning | Image | Audio | Video | Input $/1M | Output $/1M |
+| ----------------------------- | ------- | ----- | --------- | ----- | ----- | ----- | ---------- | ----------- |
+| `hpc-ai/minimax/minimax-m2.5` | 1.0M | | | | | | $0.14 | $0.56 |
+| `hpc-ai/moonshotai/kimi-k2.5` | 262K | | | | | | $0.21 | $1 |
+| `hpc-ai/zai-org/glm-5.1` | 202K | | | | | | $0.66 | $2 |
+
+## Advanced configuration
+
+### Custom headers
+
+```typescript
+const agent = new Agent({
+  id: "custom-agent",
+  name: "custom-agent",
+  model: {
+    url: "https://api.hpc-ai.com/inference/v1",
+    id: "hpc-ai/minimax/minimax-m2.5",
+    apiKey: process.env.HPC_AI_API_KEY,
+    headers: {
+      "X-Custom-Header": "value"
+    }
+  }
+});
+```
+
+### Dynamic model selection
+
+```typescript
+const agent = new Agent({
+  id: "dynamic-agent",
+  name: "Dynamic Agent",
+  model: ({ requestContext }) => {
+    const useAdvanced = requestContext.task === "complex";
+    return useAdvanced
+      ? "hpc-ai/zai-org/glm-5.1"
+      : "hpc-ai/minimax/minimax-m2.5";
+  }
+});
+```
@@ -71,7 +71,7 @@ for await (const chunk of stream) {
 | `nvidia/microsoft/phi-4-mini-instruct` | 131K | | | | | | — | — |
 | `nvidia/minimaxai/minimax-m2.1` | 205K | | | | | | — | — |
 | `nvidia/minimaxai/minimax-m2.5` | 205K | | | | | | — | — |
-| `nvidia/minimaxai/minimax-m2.7` | 205K | | | | | |
+| `nvidia/minimaxai/minimax-m2.7` | 205K | | | | | | — | — |
 | `nvidia/mistralai/codestral-22b-instruct-v0.1` | 128K | | | | | | — | — |
 | `nvidia/mistralai/devstral-2-123b-instruct-2512` | 262K | | | | | | — | — |
 | `nvidia/mistralai/mamba-codestral-7b-v0.1` | 128K | | | | | | — | — |
@@ -1,6 +1,6 @@
 # OpenCode Zen

-Access
+Access 35 OpenCode Zen models through Mastra's model router. Authentication is handled automatically using the `OPENCODE_API_KEY` environment variable.

 Learn more in the [OpenCode Zen documentation](https://opencode.ai/docs/zen).

@@ -40,6 +40,7 @@ for await (const chunk of stream) {
 | `opencode/claude-opus-4-1` | 200K | | | | | | $15 | $75 |
 | `opencode/claude-opus-4-5` | 200K | | | | | | $5 | $25 |
 | `opencode/claude-opus-4-6` | 1.0M | | | | | | $5 | $25 |
+| `opencode/claude-opus-4-7` | 1.0M | | | | | | $5 | $25 |
 | `opencode/claude-sonnet-4` | 1.0M | | | | | | $3 | $15 |
 | `opencode/claude-sonnet-4-5` | 1.0M | | | | | | $3 | $15 |
 | `opencode/claude-sonnet-4-6` | 1.0M | | | | | | $3 | $15 |
@@ -1,6 +1,6 @@
 # Poe

-Access
+Access 118 Poe models through Mastra's model router. Authentication is handled automatically using the `POE_API_KEY` environment variable.

 Learn more in the [Poe documentation](https://creator.poe.com/docs/external-applications/openai-compatible-api).

@@ -41,6 +41,7 @@ for await (const chunk of stream) {
 | `poe/anthropic/claude-opus-4.1` | 197K | | | | | | $13 | $64 |
 | `poe/anthropic/claude-opus-4.5` | 197K | | | | | | $4 | $21 |
 | `poe/anthropic/claude-opus-4.6` | 983K | | | | | | $4 | $21 |
+| `poe/anthropic/claude-opus-4.7` | 1.0M | | | | | | $4 | $21 |
 | `poe/anthropic/claude-sonnet-3.7` | 197K | | | | | | $3 | $13 |
 | `poe/anthropic/claude-sonnet-4` | 983K | | | | | | $3 | $13 |
 | `poe/anthropic/claude-sonnet-4.5` | 983K | | | | | | $3 | $13 |
@@ -1,6 +1,6 @@
 # ZenMux

-Access
+Access 88 ZenMux models through Mastra's model router. Authentication is handled automatically using the `ZENMUX_API_KEY` environment variable.

 Learn more in the [ZenMux documentation](https://docs.zenmux.ai).

@@ -41,6 +41,7 @@ for await (const chunk of stream) {
 | `zenmux/anthropic/claude-opus-4.1` | 200K | | | | | | $15 | $75 |
 | `zenmux/anthropic/claude-opus-4.5` | 200K | | | | | | $5 | $25 |
 | `zenmux/anthropic/claude-opus-4.6` | 1.0M | | | | | | $5 | $25 |
+| `zenmux/anthropic/claude-opus-4.7` | 1.0M | | | | | | $5 | $25 |
 | `zenmux/anthropic/claude-sonnet-4` | 1.0M | | | | | | $3 | $15 |
 | `zenmux/anthropic/claude-sonnet-4.5` | 1.0M | | | | | | $3 | $15 |
 | `zenmux/anthropic/claude-sonnet-4.6` | 1.0M | | | | | | $3 | $15 |
@@ -34,6 +34,7 @@ Direct access to individual AI model providers. Each provider offers unique mode
 - [Friendli](https://mastra.ai/models/providers/friendli)
 - [GitHub Models](https://mastra.ai/models/providers/github-models)
 - [Helicone](https://mastra.ai/models/providers/helicone)
+- [HPC-AI](https://mastra.ai/models/providers/hpc-ai)
 - [Hugging Face](https://mastra.ai/models/providers/huggingface)
 - [iFlow](https://mastra.ai/models/providers/iflowcn)
 - [Inception](https://mastra.ai/models/providers/inception)
@@ -300,6 +300,34 @@ const response = await agent.generate('Help me organize my day', {

 **options.includeRawChunks** (`boolean`): Whether to include raw chunks in the stream output. Not available on all model providers.

+## Response structure
+
+`Agent.generate()` returns the final data collected during execution. `steps` is an array of step objects. The tool arrays in the result, including top-level `toolCalls` and `toolResults` and the nested `step.toolCalls` and `step.toolResults` arrays, use Mastra's chunk format.
+
+That means tool data is wrapped in `payload`:
+
+```ts
+const response = await agent.generate('Check the weather in Lagos')
+
+for (const toolCall of response.toolCalls) {
+  console.log(toolCall.type) // 'tool-call'
+  console.log(toolCall.runId)
+  console.log(toolCall.from)
+  console.log(toolCall.payload.toolName)
+  console.log(toolCall.payload.args)
+}
+
+for (const step of response.steps) {
+  for (const toolResult of step.toolResults) {
+    console.log(toolResult.type) // 'tool-result'
+    console.log(toolResult.payload.toolName)
+    console.log(toolResult.payload.result)
+  }
+}
+```
+
+For the streaming version of the same chunk shape, see the [ChunkType reference](https://mastra.ai/reference/streaming/ChunkType).
+
 ## Returns

 **result** (`Awaited<ReturnType<MastraModelOutput<Output>['getFullOutput']>>`): Returns the full output of the generation process including text, object (if structured output), tool calls, tool results, usage statistics, and step information.
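Since calls and results share a `toolCallId`, pairing them is a simple join over the chunk arrays. A sketch assuming the chunk shape documented above (the interface names here are local stand-ins, not Mastra exports):

```typescript
// Minimal local stand-ins for the documented chunk shape.
interface ToolCallChunkLike {
  type: "tool-call";
  payload: { toolCallId: string; toolName: string; args: Record<string, unknown> };
}
interface ToolResultChunkLike {
  type: "tool-result";
  payload: { toolCallId: string; toolName: string; result: unknown; isError?: boolean };
}

// Join calls to results on payload.toolCallId.
function pairToolActivity(calls: ToolCallChunkLike[], results: ToolResultChunkLike[]) {
  const byId = new Map(
    results.map(r => [r.payload.toolCallId, r] as [string, ToolResultChunkLike]),
  );
  return calls.map(call => ({
    toolName: call.payload.toolName,
    args: call.payload.args,
    result: byId.get(call.payload.toolCallId)?.payload.result ?? null,
  }));
}

const paired = pairToolActivity(
  [{ type: "tool-call", payload: { toolCallId: "1", toolName: "weather", args: { city: "Lagos" } } }],
  [{ type: "tool-result", payload: { toolCallId: "1", toolName: "weather", result: { tempC: 31 } } }],
);
console.log(paired)
```

The same join works per step, since `step.toolCalls` and `step.toolResults` use the identical shape.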
@@ -308,13 +336,59 @@ const response = await agent.generate('Help me organize my day', {

 **object** (`Output | undefined`): The structured output object if structuredOutput was provided, validated against the schema.

-**toolCalls** (`
+**toolCalls** (`ToolCallChunk[]`): Array of tool call chunks made during generation.
+
+**toolCalls.type** (`'tool-call'`): Chunk type identifier.
+
+**toolCalls.runId** (`string`): Execution run identifier.
+
+**toolCalls.from** (`ChunkFrom`): Source of the chunk, such as AGENT or WORKFLOW.
+
+**toolCalls.payload** (`ToolCallPayload`): Tool call data.
+
+**toolCalls.payload.toolCallId** (`string`): Unique identifier for the tool call.
+
+**toolCalls.payload.toolName** (`string`): Name of the tool that was called.
+
+**toolCalls.payload.args** (`Record<string, unknown>`): Arguments passed to the tool.
+
+**toolCalls.payload.providerExecuted** (`boolean`): Whether the model provider executed the tool directly.

-**toolResults** (`
+**toolResults** (`ToolResultChunk[]`): Array of tool result chunks from tool executions.
+
+**toolResults.type** (`'tool-result'`): Chunk type identifier.
+
+**toolResults.runId** (`string`): Execution run identifier.
+
+**toolResults.from** (`ChunkFrom`): Source of the chunk, such as AGENT or WORKFLOW.
+
+**toolResults.payload** (`ToolResultPayload`): Tool result data.
+
+**toolResults.payload.toolCallId** (`string`): Unique identifier for the tool call.
+
+**toolResults.payload.toolName** (`string`): Name of the tool that produced the result.
+
+**toolResults.payload.result** (`unknown`): Value returned by the tool.
+
+**toolResults.payload.isError** (`boolean`): Whether the tool execution failed.

 **usage** (`TokenUsage`): Token usage statistics for the generation.

-**steps** (`
+**steps** (`object[]`): Array of execution steps, useful for debugging multi-step generations.
+
+**steps.text** (`string`): Text generated in this step.
+
+**steps.toolCalls** (`ToolCallChunk[]`): Tool calls emitted during this step.
+
+**steps.toolResults** (`ToolResultChunk[]`): Tool results emitted during this step.
+
+**steps.finishReason** (`string`): Why this step finished.
+
+**steps.usage** (`LanguageModelUsage`): Token usage for this step.
+
+**steps.request** (`{ body?: unknown }`): Request metadata for this step.
+
+**steps.response** (`object`): Response metadata for this step.

 **finishReason** (`string`): The reason generation finished. Values include 'stop' (normal completion), 'tool-calls' (ended with tool calls), 'suspended' (waiting for tool approval), or 'error' (error occurred).

@@ -12,6 +12,29 @@ export const mastraClient = new MastraClient({
 })
 ```

+## `RequestContext`
+
+When you use `RequestContext` with the client SDK, import it from `@mastra/client-js`.
+
+```typescript
+import { MastraClient, RequestContext } from '@mastra/client-js'
+
+const client = new MastraClient({
+  baseUrl: 'http://localhost:4111/',
+})
+
+const requestContext = new RequestContext()
+requestContext.set('userId', 'user-123')
+
+const agent = client.getAgent('support-agent')
+
+const response = await agent.generate('Summarize this ticket', {
+  requestContext,
+})
+```
+
+You can also pass `requestContext` as a `Record<string, any>`.
+
 ## Parameters

 **baseUrl** (`string`): The base URL for the Mastra API. All requests will be sent relative to this URL.
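Conceptually, a `RequestContext` and a plain `Record<string, any>` carry the same key-value data. The sketch below is illustrative only, a minimal map-backed stand-in to show the equivalence, not `@mastra/client-js`'s actual class:

```typescript
// Hypothetical stand-in: a map-backed context with set/get and a
// plain-record view. Not the real RequestContext implementation.
class SimpleRequestContext {
  private store = new Map<string, unknown>();

  set(key: string, value: unknown): this {
    this.store.set(key, value);
    return this;
  }

  get<T = unknown>(key: string): T | undefined {
    return this.store.get(key) as T | undefined;
  }

  // Either this record or the context object itself could accompany a request.
  toRecord(): Record<string, unknown> {
    return Object.fromEntries(this.store);
  }
}

const ctx = new SimpleRequestContext();
ctx.set("userId", "user-123").set("tenant", "acme");

const asRecord = ctx.toRecord();
console.log(asRecord.userId, asRecord.tenant);
```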
package/.docs/reference/index.md
CHANGED
@@ -284,6 +284,7 @@ The Reference section provides documentation of Mastra's API, including paramete
 - [AgentFSFilesystem](https://mastra.ai/reference/workspace/agentfs-filesystem)
 - [BlaxelSandbox](https://mastra.ai/reference/workspace/blaxel-sandbox)
 - [DaytonaSandbox](https://mastra.ai/reference/workspace/daytona-sandbox)
+- [DockerSandbox](https://mastra.ai/reference/workspace/docker-sandbox)
 - [E2BSandbox](https://mastra.ai/reference/workspace/e2b-sandbox)
 - [GCSFilesystem](https://mastra.ai/reference/workspace/gcs-filesystem)
 - [LocalFilesystem](https://mastra.ai/reference/workspace/local-filesystem)
@@ -36,6 +36,8 @@ OM performs thresholding with fast local token estimation. Text uses `tokenx`, a

 **scope** (`'resource' | 'thread'`): Memory scope for observations. `'thread'` keeps observations per-thread. `'resource'` (experimental) shares observations across all threads for a resource, enabling cross-conversation memory. (Default: `'thread'`)

+**activateAfterIdle** (`number | string`): Time before buffered observations or buffered reflections are forced to activate after inactivity, even if their token thresholds have not been reached yet. Accepts milliseconds or duration strings like `300_000`, `"5m"`, or `"1hr"`. When the gap between the current time and the last assistant message part timestamp exceeds this value, buffered observational memory activates before the next prompt. Useful for aligning with prompt cache TTLs.
+
 **shareTokenBudget** (`boolean`): Share the token budget between messages and observations. When enabled, the total budget is `observation.messageTokens + reflection.observationTokens`. Messages can use more space when observations are small, and vice versa. This maximizes context usage through flexible allocation. `shareTokenBudget` is not yet compatible with async buffering. You must set `observation: { bufferTokens: false }` when using this option (this is a temporary limitation). (Default: `false`)

 **retrieval** (`boolean | { vector?: boolean; scope?: 'thread' | 'resource' }`): **Experimental.** Enable retrieval-mode observation groups as durable pointers to raw message history. `true` enables cross-thread browsing by default. `{ vector: true }` also enables semantic search using Memory's vector store and embedder. `{ scope: 'thread' }` restricts the recall tool to the current thread only. Default scope is `'resource'`. (Default: `false`)
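The `activateAfterIdle` idle check described above can be sketched as follows. This is an illustrative sketch of accepting `number | string` durations and comparing the idle gap, with a hypothetical parser; it is not Mastra's internal implementation:

```typescript
// Hypothetical duration parser for values like 300_000, "5m", or "1hr".
function toMillis(value: number | string): number {
  if (typeof value === "number") return value;
  const match = /^(\d+(?:\.\d+)?)\s*(ms|s|m|hr|h)$/.exec(value.trim());
  if (!match) throw new Error(`Unrecognized duration: ${value}`);
  const units: Record<string, number> = {
    ms: 1,
    s: 1_000,
    m: 60_000,
    h: 3_600_000,
    hr: 3_600_000,
  };
  return Number(match[1]) * units[match[2]];
}

// Activate buffered observational memory when the gap since the last
// assistant message part exceeds the configured idle window.
function shouldActivate(
  lastPartAt: number,
  now: number,
  activateAfterIdle: number | string,
): boolean {
  return now - lastPartAt > toMillis(activateAfterIdle);
}

console.log(toMillis("5m")); // 300000
console.log(shouldActivate(0, 400_000, "5m")); // true
```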
@@ -1,5 +1,7 @@
 # CloudExporter

+**Added in:** `@mastra/observability@1.8.0`
+
 Sends tracing spans, logs, metrics, scores, and feedback to the Mastra platform for online visualization and monitoring.

 ## Constructor
@@ -56,9 +58,9 @@ Extends `BaseExporterConfig`, which includes:

 The exporter reads these environment variables if not provided in config:

-- `MASTRA_CLOUD_ACCESS_TOKEN` - Authentication token
+- `MASTRA_CLOUD_ACCESS_TOKEN` - Authentication token for `CloudExporter` requests
 - `MASTRA_PROJECT_ID` - Project ID to use when deriving project-scoped collector routes such as `/projects/:projectId/ai/spans/publish`
-- `MASTRA_CLOUD_TRACES_ENDPOINT` - Traces endpoint override. Pass either a base origin or a full traces publish URL. Defaults to `https://
+- `MASTRA_CLOUD_TRACES_ENDPOINT` - Traces endpoint override. Pass either a base origin or a full traces publish URL. Defaults to `https://observability.mastra.ai` in `@mastra/observability@1.9.2` and later

 ## Properties

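The endpoint derivation the variables above describe can be sketched as follows. This is an illustrative sketch under the documented rules (base origin plus project-scoped route, or a full publish URL used as-is), not the exporter's actual resolution code:

```typescript
// Hypothetical resolver mirroring the documented env-var behavior.
function resolveTracesUrl(env: Record<string, string | undefined>): string {
  const override = env.MASTRA_CLOUD_TRACES_ENDPOINT;
  const base = override ?? "https://observability.mastra.ai";

  // A full traces publish URL is used as-is.
  if (base.includes("/ai/spans/publish")) return base;

  // A base origin gets the project-scoped collector route appended.
  const projectId = env.MASTRA_PROJECT_ID;
  if (!projectId) throw new Error("MASTRA_PROJECT_ID is required to derive the route");
  return `${base.replace(/\/$/, "")}/projects/${projectId}/ai/spans/publish`;
}

console.log(resolveTracesUrl({ MASTRA_PROJECT_ID: "proj_123" }));
```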
@@ -0,0 +1,196 @@
|
|
|
1
|
+
# DockerSandbox
|
|
2
|
+
|
|
3
|
+
Executes commands inside Docker containers on the local machine. Uses long-lived containers with `docker exec` for command execution. Targets local development, CI/CD, air-gapped deployments, and cost-sensitive scenarios where cloud sandboxes are unnecessary.
|
|
4
|
+
|
|
5
|
+
> **Info:** For interface details, see [WorkspaceSandbox interface](https://mastra.ai/reference/workspace/sandbox).
|
|
6
|
+
|
|
7
|
+
## Installation
|
|
8
|
+
|
|
9
|
+
**npm**:
|
|
10
|
+
|
|
11
|
+
```bash
|
|
12
|
+
npm install @mastra/docker
|
|
13
|
+
```
|
|
14
|
+
|
|
15
|
+
**pnpm**:
|
|
16
|
+
|
|
17
|
+
```bash
|
|
18
|
+
pnpm add @mastra/docker
|
|
19
|
+
```
|
|
20
|
+
|
|
21
|
+
**Yarn**:
|
|
22
|
+
|
|
23
|
+
```bash
|
|
24
|
+
yarn add @mastra/docker
|
|
25
|
+
```
|
|
26
|
+
|
|
27
|
+
**Bun**:
|
|
28
|
+
|
|
29
|
+
```bash
|
|
30
|
+
bun add @mastra/docker
|
|
31
|
+
```
|
|
32
|
+
|
|
33
|
+
Requires [Docker Engine](https://docs.docker.com/engine/install/) running on the host machine.
|
|
34
|
+
|
|
35
|
+
## Usage
|
|
36
|
+
|
|
37
|
+
Add a `DockerSandbox` to a workspace and assign it to an agent:
|
|
38
|
+
|
|
39
|
+
```typescript
|
|
40
|
+
import { Agent } from '@mastra/core/agent'
|
|
41
|
+
import { Workspace } from '@mastra/core/workspace'
|
|
42
|
+
import { DockerSandbox } from '@mastra/docker'
|
|
43
|
+
|
|
44
|
+
const workspace = new Workspace({
|
|
45
|
+
sandbox: new DockerSandbox({
|
|
46
|
+
image: 'node:22-slim',
|
|
47
|
+
}),
|
|
48
|
+
})
|
|
49
|
+
|
|
50
|
+
const agent = new Agent({
|
|
51
|
+
name: 'dev-agent',
|
|
52
|
+
model: 'anthropic/claude-opus-4-6',
|
|
53
|
+
workspace,
|
|
54
|
+
})
|
|
55
|
+
```
|
|
56
|
+
|
|
57
|
+
## Constructor parameters
|
|
58
|
+
|
|
59
|
+
**id** (`string`): Unique identifier for this sandbox instance. Used for container naming and reconnection via labels. (Default: `Auto-generated`)
|
|
60
|
+
|
|
61
|
+
**image** (`string`): Docker image to use for the container. (Default: `'node:22-slim'`)
|
|
62
|
+
|
|
63
|
+
**command** (`string[]`): Container entrypoint command. Must keep the container alive for exec-based command execution. (Default: `['sleep', 'infinity']`)
|
|
64
|
+
|
|
65
|
+
**env** (`Record<string, string>`): Environment variables to set in the container.
|
|
66
|
+
|
|
67
|
+
**volumes** (`Record<string, string>`): Host-to-container bind mounts. Keys are host paths, values are container paths.
|
|
68
|
+
|
|
69
|
+
**network** (`string`): Docker network to join.
|
|
70
|
+
|
|
71
|
+
**privileged** (`boolean`): Run in privileged mode. (Default: `false`)
|
|
72
|
+
|
|
73
|
+
**workingDir** (`string`): Working directory inside the container. (Default: `'/workspace'`)
|
|
74
|
+
|
|
75
|
+
**labels** (`Record<string, string>`): Additional container labels. Mastra labels (mastra.sandbox, mastra.sandbox.id) are always included.
|
|
76
|
+
|
|
77
|
+
**timeout** (`number`): Default command timeout in milliseconds. (Default: `300000 (5 minutes)`)
|
|
78
|
+
|
|
79
|
+
**dockerOptions** (`Docker.DockerOptions`): Pass-through dockerode connection options for custom socket paths, remote hosts, or TLS certificates.
|
|
80
|
+
|
|
81
|
+
**instructions** (`string | function`): Custom instructions that override the default instructions returned by getInstructions(). Pass an empty string to suppress instructions.
|
|
82
|
+
|
|
83
|
+
## Properties
|
|
84
|
+
|
|
85
|
+
**id** (`string`): Sandbox instance identifier.
|
|
86
|
+
|
|
87
|
+
**name** (`string`): Provider name ('DockerSandbox').
|
|
88
|
+
|
|
89
|
+
**provider** (`string`): Provider identifier ('docker').
|
|
90
|
+
|
|
91
|
+
**status** (`ProviderStatus`): 'pending' | 'starting' | 'running' | 'stopping' | 'stopped' | 'destroying' | 'destroyed' | 'error'
|
|
92
|
+
|
|
93
|
+
**container** (`Container`): The underlying dockerode Container instance. Throws SandboxNotReadyError if the sandbox has not been started.
|
|
94
|
+
|
|
95
|
+
**processes** (`DockerProcessManager`): Background process manager. See \[SandboxProcessManager reference]\(/reference/workspace/process-manager).
|
|
96
|
+
|
|
97
|
+
## Background processes
|
|
98
|
+
|
|
99
|
+
`DockerSandbox` includes a built-in process manager for spawning and managing background processes. Processes run inside the container using `docker exec`.
|
|
100
|
+
|
|
101
|
+
```typescript
|
|
102
|
+
const sandbox = new DockerSandbox({ id: 'dev-sandbox' })
|
|
103
|
+
await sandbox._start()
|
|
104
|
+
|
|
105
|
+
// Spawn a background process
|
|
106
|
+
const handle = await sandbox.processes.spawn('node server.js', {
|
|
107
|
+
env: { PORT: '3000' },
|
|
108
|
+
onStdout: data => console.log(data),
|
|
109
|
+
})
|
|
110
|
+
|
|
111
|
+
// Interact with the process
|
|
112
|
+
console.log(handle.stdout)
|
|
113
|
+
await handle.sendStdin('input\n')
|
|
114
|
+
await handle.kill()
|
|
115
|
+
```
|
|
116
|
+
|
|
117
|
+
See [`SandboxProcessManager` reference](https://mastra.ai/reference/workspace/process-manager) for the full API.
|
|
118
|
+
|
|
119
|
+
## Environment variables
|
|
120
|
+
|
|
121
|
+
Set environment variables at the container level with `env`. Per-command environment variables can also be passed when spawning processes:
|
|
122
|
+
|
|
123
|
+
```typescript
|
|
124
|
+
const sandbox = new DockerSandbox({
|
|
125
|
+
image: 'node:22-slim',
|
|
126
|
+
env: {
|
|
127
|
+
NODE_ENV: 'production',
|
|
128
|
+
DATABASE_URL: 'postgres://localhost:5432/mydb',
|
|
129
|
+
},
|
|
130
|
+
})
|
|
131
|
+
```
|
|
132
|
+
|
|
133
|
+
## Bind mounts
|
|
134
|
+
|
|
135
|
+
Mount host directories into the container using the `volumes` option:
|
|
136
|
+
|
|
137
|
+
```typescript
|
|
138
|
+
const sandbox = new DockerSandbox({
|
|
139
|
+
image: 'node:22-slim',
|
|
140
|
+
volumes: {
|
|
141
|
+
'/my/project': '/workspace/project',
|
|
142
|
+
'/shared/data': '/data',
|
|
143
|
+
},
|
|
144
|
+
})
|
|
145
|
+
```
|
|
146
|
+
|
|
147
|
+
Bind mounts are applied at container creation time. The host paths must exist before the sandbox starts.
|
|
148
|
+
|
|
149
|
+
## Reconnection
|
|
150
|
+
|
|
151
|
+
`DockerSandbox` can reconnect to existing containers by matching labels. When `start()` is called, it checks for a container with the `mastra.sandbox.id` label matching the sandbox ID. If found:
|
|
152
|
+
|
|
153
|
+
- A running container is reused directly.
|
|
154
|
+
- A stopped container is restarted.
|
|
155
|
+
|
|
156
|
+
```typescript
|
|
157
|
+
// First run — creates a new container
|
|
158
|
+
const sandbox = new DockerSandbox({ id: 'persistent-sandbox' })
|
|
159
|
+
await sandbox._start()
|
|
160
|
+
|
|
161
|
+
// Later — reconnects to the existing container
|
|
162
|
+
const sandbox2 = new DockerSandbox({ id: 'persistent-sandbox' })
|
|
163
|
+
await sandbox2._start()
|
|
164
|
+
```
|
|
165
|
+
|
|
166
|
+
## Docker connection options

Connect to remote Docker hosts or use custom socket paths via `dockerOptions`:

```typescript
import fs from 'node:fs'

// Remote Docker host
const sandbox = new DockerSandbox({
  dockerOptions: {
    host: '192.168.1.100',
    port: 2376,
    ca: fs.readFileSync('ca.pem'),
    cert: fs.readFileSync('cert.pem'),
    key: fs.readFileSync('key.pem'),
  },
})

// Custom socket path
const sandbox2 = new DockerSandbox({
  dockerOptions: {
    socketPath: '/var/run/docker.sock',
  },
})
```

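When the Docker endpoint varies by environment, one common convention is to derive these options from the `DOCKER_HOST` variable (`tcp://host:port` or `unix:///path`). The helper below is a hypothetical sketch of that mapping, not part of the `DockerSandbox` API.

```typescript
// Hypothetical helper: map a DOCKER_HOST-style string onto dockerOptions.
type DockerConnectionOptions = { socketPath?: string; host?: string; port?: number }

function dockerOptionsFromEnv(dockerHost: string | undefined): DockerConnectionOptions {
  // Default to Docker's standard Unix socket when DOCKER_HOST is unset.
  if (!dockerHost) return { socketPath: '/var/run/docker.sock' }
  const url = new URL(dockerHost)
  // unix:///path becomes a socketPath; tcp://host:port becomes a host/port pair.
  if (url.protocol === 'unix:') return { socketPath: url.pathname }
  return { host: url.hostname, port: Number(url.port) || 2375 }
}
```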
## Related

- [SandboxProcessManager reference](https://mastra.ai/reference/workspace/process-manager)
- [WorkspaceSandbox interface](https://mastra.ai/reference/workspace/sandbox)
- [LocalSandbox reference](https://mastra.ai/reference/workspace/local-sandbox)
- [E2BSandbox reference](https://mastra.ai/reference/workspace/e2b-sandbox)
- [Workspace overview](https://mastra.ai/docs/workspace/overview)

package/CHANGELOG.md
CHANGED
# @mastra/mcp-docs-server

## 1.1.26-alpha.9

### Patch Changes

- Updated dependencies [[`92dcf02`](https://github.com/mastra-ai/mastra/commit/92dcf029294210ac91b090900c1a0555a425c57a)]:
  - @mastra/core@1.26.0-alpha.5

## 1.1.26-alpha.7

### Patch Changes

- Updated dependencies [[`0474c2b`](https://github.com/mastra-ai/mastra/commit/0474c2b2e7c7e1ad8691dca031284841391ff1ef), [`f607106`](https://github.com/mastra-ai/mastra/commit/f607106854c6416c4a07d4082604b9f66d047221), [`62919a6`](https://github.com/mastra-ai/mastra/commit/62919a6ee0fbf3779ad21a97b1ec6696515d5104), [`0fd90a2`](https://github.com/mastra-ai/mastra/commit/0fd90a215caf5fca8099c15a67ca03e4427747a3)]:
  - @mastra/core@1.26.0-alpha.4

## 1.1.26-alpha.5

### Patch Changes

- Updated dependencies [[`fdd54cf`](https://github.com/mastra-ai/mastra/commit/fdd54cf612a9af876e9fdd85e534454f6e7dd518), [`30456b6`](https://github.com/mastra-ai/mastra/commit/30456b6b08c8fd17e109dd093b73d93b65e83bc5), [`9d11a8c`](https://github.com/mastra-ai/mastra/commit/9d11a8c1c8924eb975a245a5884d40ca1b7e0491), [`d246696`](https://github.com/mastra-ai/mastra/commit/d246696139a3144a5b21b042d41c532688e957e1), [`354f9ce`](https://github.com/mastra-ai/mastra/commit/354f9ce1ca6af2074b6a196a23f8ec30012dccca), [`e9837b5`](https://github.com/mastra-ai/mastra/commit/e9837b53699e18711b09e0ca010a4106376f2653)]:
  - @mastra/core@1.26.0-alpha.3
  - @mastra/mcp@1.5.1-alpha.1

## 1.1.26-alpha.3

### Patch Changes

- Updated dependencies [[`3d83d06`](https://github.com/mastra-ai/mastra/commit/3d83d06f776f00fb5f4163dddd32a030c5c20844), [`7e0e63e`](https://github.com/mastra-ai/mastra/commit/7e0e63e2e485e84442351f4c7a79a424c83539dc), [`9467ea8`](https://github.com/mastra-ai/mastra/commit/9467ea87695749a53dfc041576410ebf9ee7bb67), [`7338d94`](https://github.com/mastra-ai/mastra/commit/7338d949380cf68b095342e8e42610dc51d557c1), [`c65aec3`](https://github.com/mastra-ai/mastra/commit/c65aec356cc037ee7c4b30ccea946807d4c4f443)]:
  - @mastra/core@1.26.0-alpha.2
  - @mastra/mcp@1.5.1-alpha.1

## 1.1.26-alpha.2

### Patch Changes

- Updated dependencies [[`7020c06`](https://github.com/mastra-ai/mastra/commit/7020c0690b199d9da337f0e805f16948e557922e), [`7020c06`](https://github.com/mastra-ai/mastra/commit/7020c0690b199d9da337f0e805f16948e557922e)]:
  - @mastra/mcp@1.5.1-alpha.0
  - @mastra/core@1.25.1-alpha.1

## 1.1.26-alpha.0

### Patch Changes

- Updated dependencies [[`d63ffdb`](https://github.com/mastra-ai/mastra/commit/d63ffdbb2c11e76fe5ea45faab44bc15460f010c)]:
  - @mastra/core@1.25.1-alpha.0

## 1.1.25

### Patch Changes
package/package.json
CHANGED

```diff
@@ -1,6 +1,6 @@
 {
   "name": "@mastra/mcp-docs-server",
-  "version": "1.1.
+  "version": "1.1.26-alpha.10",
   "description": "MCP server for accessing Mastra.ai documentation, changelogs, and news.",
   "type": "module",
   "main": "dist/index.js",
@@ -29,8 +29,8 @@
     "jsdom": "^26.1.0",
     "local-pkg": "^1.1.2",
     "zod": "^4.3.6",
-    "@mastra/
-    "@mastra/
+    "@mastra/mcp": "^1.5.1-alpha.1",
+    "@mastra/core": "1.26.0-alpha.5"
   },
   "devDependencies": {
     "@hono/node-server": "^1.19.11",
@@ -46,9 +46,9 @@
     "tsx": "^4.21.0",
     "typescript": "^5.9.3",
     "vitest": "4.0.18",
-    "@mastra/core": "1.25.0",
     "@internal/lint": "0.0.83",
-    "@internal/types-builder": "0.0.58"
+    "@internal/types-builder": "0.0.58",
+    "@mastra/core": "1.26.0-alpha.5"
   },
   "homepage": "https://mastra.ai",
   "repository": {
```