@mastra/mcp-docs-server 1.1.26-alpha.4 → 1.1.26-alpha.8

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -73,4 +73,6 @@ Develop your project locally with [`mastra dev`](https://mastra.ai/reference/cli
73
73
 
74
74
  Once you're ready to deploy your application to production, use [`mastra studio deploy`](https://mastra.ai/reference/cli/mastra) and [`mastra server deploy`](https://mastra.ai/reference/cli/mastra) to push your application to the cloud.
75
75
 
76
- Follow the [Studio deployment guide](https://mastra.ai/docs/studio/deployment) and [Server deployment guide](https://mastra.ai/guides/deployment/mastra-platform) for step-by-step instructions.
76
+ Follow the [Studio deployment guide](https://mastra.ai/docs/studio/deployment) and [Server deployment guide](https://mastra.ai/guides/deployment/mastra-platform) for step-by-step instructions.
77
+
78
+ If you host your Mastra application on your own infrastructure, you can still send observability data to Studio using the [CloudExporter](https://mastra.ai/docs/observability/tracing/exporters/cloud).
@@ -339,6 +339,7 @@ Reflection works similarly — the Reflector runs in the background when observa
339
339
  | `observation.bufferActivation` | `0.8` | How aggressively to clear the message window on activation. `0.8` means remove enough messages to keep only 20% of `messageTokens` remaining. Lower values keep more message history. |
340
340
  | `observation.blockAfter` | `1.2` | Safety threshold as a multiplier of `messageTokens`. At `1.2`, synchronous observation is forced at 36k tokens (1.2 × 30k). Only matters if buffering can't keep up. |
341
341
 | `activateAfterIdle` | none | Forces buffered observations and buffered reflections to activate after a period of inactivity, even if their token thresholds have not been reached yet. Accepts milliseconds (e.g. `300_000`) or duration strings (e.g. `"5m"`, `"1hr"`). Set this to your prompt cache TTL if you want activation to happen before the next cold prompt. |
342
+ | `activateOnProviderChange` | `false` | Forces buffered observations and reflections to activate when the next step uses a different `provider/model` than the one that produced the latest assistant step. Use this when switching providers or models would invalidate prompt cache reuse. |
342
343
  | `reflection.bufferActivation` | `0.5` | When to start background reflection. `0.5` means reflection begins when observations reach 50% of the `observationTokens` threshold. |
343
344
  | `reflection.blockAfter` | `1.2` | Safety threshold for reflection, same logic as observation. |
344
345
 
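The duration values in the table above normalize to milliseconds: `activateAfterIdle` accepts a raw millisecond count like `300_000` or a string like `"5m"` or `"1hr"`. Mastra's actual parser is not shown in these docs; the sketch below is only an illustrative converter for those formats.

```typescript
// Illustrative sketch, not Mastra's internal parser.
// Normalizes a duration ("5m", "1hr", or raw milliseconds) to milliseconds.
function toMillis(value: number | string): number {
  if (typeof value === "number") return value; // e.g. 300_000
  const match = /^(\d+)\s*(ms|s|m|hr?)$/.exec(value.trim());
  if (!match) throw new Error(`Unrecognized duration: ${value}`);
  const amount = Number(match[1]);
  const unit = match[2];
  const factor =
    unit === "ms" ? 1 : unit === "s" ? 1_000 : unit === "m" ? 60_000 : 3_600_000;
  return amount * factor;
}

console.log(toMillis("5m")); // 300000
console.log(toMillis("1hr")); // 3600000
```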
@@ -350,6 +351,7 @@ const memory = new Memory({
350
351
  observationalMemory: {
351
352
  model: 'google/gemini-2.5-flash',
352
353
  activateAfterIdle: '5m',
354
+ activateOnProviderChange: true,
353
355
  },
354
356
  },
355
357
  })
@@ -357,6 +359,8 @@ const memory = new Memory({
357
359
 
358
360
 With a 5-minute prompt cache TTL, this activates buffered context after 5 minutes of inactivity so the next uncached prompt uses observations and reflections instead of a larger raw message window. The numeric form `300_000` (milliseconds) works the same way.
359
361
 
362
+ Switching models or providers mid-thread invalidates the prompt cache. If your agent can switch mid-thread, `activateOnProviderChange: true` forces buffered context to activate before the new provider runs, so you avoid sending a large raw window to a provider that cannot reuse the previous prompt cache.
363
+
360
364
  ### Disabling
361
365
 
362
366
  To disable async buffering and use synchronous observation/reflection instead:
@@ -2,6 +2,12 @@
2
2
 
3
3
  The `CloudExporter` sends traces, logs, metrics, scores, and feedback to the Mastra platform. Use it to route observability data from any Mastra app to a hosted project in the Mastra platform.
4
4
 
5
+ > **Self-hosted or standalone apps:** If you host your Mastra application on your own infrastructure (not on Mastra Platform), you still need a deployed Studio project to view traces, logs, and metrics. `CloudExporter` sends data to a Studio project, so one must exist before you can use it.
6
+ >
7
+ > 1. [Create a Mastra project](https://mastra.ai/guides/getting-started/quickstart) if you don't have one yet.
8
+ > 2. [Deploy Studio](https://mastra.ai/docs/studio/deployment) to the Mastra platform with `mastra studio deploy`.
9
+ > 3. Follow the [quickstart steps below](#quickstart) to create an access token and find your project ID.
10
+
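Once the Studio project exists, wiring the exporter into a self-hosted app looks roughly like the sketch below. The constructor option shown (`accessToken`) and the `observability` shape are assumptions for illustration only; confirm the exact field names against this page's quickstart and the `@mastra/observability` package.

```typescript
// Sketch only: option names here are assumptions, not confirmed by this page.
import { Mastra } from '@mastra/core/mastra'
import { CloudExporter } from '@mastra/observability'

export const mastra = new Mastra({
  observability: {
    exporters: [
      // The access token and project ID come from the quickstart steps.
      new CloudExporter({
        accessToken: process.env.MASTRA_CLOUD_ACCESS_TOKEN,
      }),
    ],
  },
})
```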
5
11
  ## Version compatibility
6
12
 
7
13
  - Use `CloudExporter` with `@mastra/observability@1.8.0` or later.
@@ -56,11 +56,136 @@ The Mastra platform replaces Mastra Cloud with separate Studio and Server produc
56
56
 
57
57
  ## Replace Mastra Cloud Store with a hosted database
58
58
 
59
- Mastra Cloud provided a managed LibSQL database. The Mastra platform does not host a database for you, so you need to point your storage at an externally hosted instance.
59
+ Mastra Cloud provided a managed libSQL database, backed by [Turso](https://turso.tech). The Mastra platform does not host a database for you, so you need to point your storage at an externally hosted instance.
60
60
 
61
- If you were already using a hosted database ("bring your own"), no changes are needed. Make sure the connection string is set as an environment variable in the dashboard rather than hardcoded.
61
+ If you were already using a hosted database ("bring your own"), no changes are needed. Ensure the connection string is set as an environment variable in the dashboard rather than hardcoded.
62
62
 
63
- If you were using `file:./mastra.db` with Cloud Store, please [contact support](mailto:support@mastra.ai) for a migration path to export and import your data.
63
+ If you were using Cloud Store, follow the steps below to export your data and load it into a new libSQL database that you control.
64
+
65
+ ### Export your Cloud Store data
66
+
67
+ There are two ways to export your Cloud Store data: a one-click download from the dashboard, or a manual dump via the Turso CLI.
68
+
69
+ #### Option A — Export from the dashboard (recommended)
70
+
71
+ Open your project in the [Mastra dashboard](https://projects.mastra.ai) and navigate to **Runtime → Settings → Storage**. Click the **Export Database** button. The dashboard generates a full `.sql` dump of your Cloud Store and downloads it directly to your Downloads folder.
72
+
73
+ Once the download completes, convert the dump into a SQLite database file:
74
+
75
+ ```bash
76
+ sqlite3 mydb.db < ~/Downloads/mastra-cloud-dump.sql
77
+ ```
78
+
79
+ You now have a portable `mydb.db` file you can inspect locally, back up, or use as the source for the new database in the steps that follow.
80
+
81
+ #### Option B — Export via the Turso CLI
82
+
83
+ If you prefer to work from the command line, or need to script the export, you can dump the database directly using the [Turso CLI](https://docs.turso.tech/cli). This approach requires the database URL and an auth token, both surfaced in the dashboard.
84
+
85
+ 1. Retrieve your Cloud Store credentials from the dashboard.
86
+
87
+ Open your project in the [Mastra dashboard](https://projects.mastra.ai) and navigate to **Runtime → Settings → Env Variables**. For Cloud Store–backed projects, two variables are injected alongside your own:
88
+
89
+ - `MASTRA_STORAGE_URL`: A libSQL connection string (e.g. `libsql://<db-name>-<org>.turso.io`).
90
+ - `MASTRA_STORAGE_AUTH_TOKEN`: A read-capable auth token scoped to that database.
91
+
92
+ Each row supports the standard environment variable actions — show/hide via the eye toggle, Edit, Delete, and Copy Value. Use **Copy Value** to grab both values for the dump command below.
93
+
94
+ > **Note:** These variables only appear for projects that were provisioned with Cloud Store. If you brought your own database to Mastra Cloud, you already have these credentials and can skip ahead to [Point your Mastra app at the new database](#point-your-mastra-app-at-the-new-database).
95
+
96
+ > **Info:** If the variables are missing, the values cannot be decrypted, or the Turso CLI rejects the token, email <support@mastra.ai> from the address associated with your Mastra Cloud account and ask for the libSQL URL and auth token for the project you want to export. Include the project name/ID. Support can also run the dump on your behalf if CLI access is blocked on your network.
97
+
98
+ 2. Install the Turso CLI.
99
+
100
+ **macOS (Homebrew):**
+
+ ```bash
101
+ brew install tursodatabase/tap/turso
102
+ ```
103
+
104
+ **Linux or macOS (install script):**
+
+ ```bash
105
+ curl -sSfL https://get.tur.so/install.sh | bash
106
+ ```
107
+
108
+ See the [Turso CLI introduction](https://docs.turso.tech/cli/introduction) for Windows and headless-install options.
109
+
110
+ 3. Export the database to a SQL dump.
111
+
112
+ Set the credentials provided by support (or use the dashboard values if you already copied them earlier) as environment variables, then dump the database to a local file:
113
+
114
+ ```bash
115
+ export MASTRA_STORAGE_URL="libsql://<db-name>-<org>.turso.io"
116
+ export MASTRA_STORAGE_AUTH_TOKEN="<token-from-dashboard-or-support>"
117
+
118
+ turso db shell "$MASTRA_STORAGE_URL?authToken=$MASTRA_STORAGE_AUTH_TOKEN" ".dump" > mastra-cloud-dump.sql
119
+ ```
120
+
121
+ > **Warning:** Embedding the auth token in the connection string is less secure than Turso's recommended pattern — the full URL (with token) can end up in shell history, process listings, and terminal logs. Turso officially recommends running `turso auth login` and then dumping by database name only: `turso db shell <database-name> ".dump" > mastra-cloud-dump.sql`. That flow requires the database to live in a Turso account you own, which is not the case for Cloud Store, so the env-var example above is provided as an alternative for this one-time export. If you prefer to avoid token interpolation entirely, ask support to run the dump on your behalf and send you the resulting SQL file.
122
+
123
+ The resulting `mastra-cloud-dump.sql` contains the full schema and data: thread and message history, workflow snapshots, traces, and eval scores. Store it somewhere safe before continuing.
124
+
125
+ ### Load the dump into a new libSQL database
126
+
127
+ The dump is a standard SQL file and can be loaded into any libSQL-compatible database. The example below uses a new Turso-hosted database, which keeps the migration like-for-like and avoids schema translation.
128
+
129
+ 1. Authenticate the Turso CLI against your own Turso account.
130
+
131
+ ```bash
132
+ turso auth login
133
+ ```
134
+
135
+ If you do not have a Turso account, the CLI will prompt you to create one. See [Turso pricing](https://turso.tech/pricing) for plan details.
136
+
137
+ 2. Create a new database and load the dump in one step.
138
+
139
+ ```bash
140
+ turso db create mastra-migrated --from-dump ./mastra-cloud-dump.sql
141
+ ```
142
+
143
+ `--from-dump` restores a local SQLite/libSQL dump at create time, which is faster and safer than piping statements through `turso db shell` after the fact. Pick a region close to where your Mastra Server runs to minimize latency — list available regions with `turso db locations` and pass `--group <group-name>` if you manage multiple groups.
144
+
145
+ For multi-gigabyte dumps, add `--wait` so the CLI blocks until the database is fully available.
146
+
147
+ 3. Generate connection credentials for the new database.
148
+
149
+ ```bash
150
+ turso db show mastra-migrated --url
151
+ turso db tokens create mastra-migrated
152
+ ```
153
+
154
+ The first command prints the libSQL URL. The second prints an auth token. Both are needed by `LibSQLStore`.
155
+
156
+ ### Point your Mastra app at the new database
157
+
158
+ Set the new credentials as environment variables, either locally in `.env` or in the Mastra platform dashboard:
159
+
160
+ ```bash
161
+ TURSO_DATABASE_URL="libsql://mastra-migrated-<org>.turso.io"
162
+ TURSO_AUTH_TOKEN="<token-from-turso-db-tokens-create>"
163
+ ```
164
+
165
+ Configure `LibSQLStore` to read from those variables:
166
+
167
+ ```ts
168
+ import { Mastra } from '@mastra/core/mastra'
169
+ import { LibSQLStore } from '@mastra/libsql'
170
+
171
+ export const mastra = new Mastra({
172
+ storage: new LibSQLStore({
173
+ id: 'libsql-storage',
174
+ url: process.env.TURSO_DATABASE_URL!,
175
+ authToken: process.env.TURSO_AUTH_TOKEN,
176
+ }),
177
+ })
178
+ ```
179
+
180
+ See the [libSQL storage reference](https://mastra.ai/reference/storage/libsql) for the full set of options.
181
+
182
+ ### Verify the migration
183
+
184
+ Before decommissioning your Cloud project, confirm the new database serves the data your app expects.
185
+
186
+ - Run `turso db shell mastra-migrated "SELECT name FROM sqlite_master WHERE type='table';"` to list tables. The output should include the Mastra-managed tables (e.g. `mastra_threads`, `mastra_messages`, `mastra_workflow_snapshot`, `mastra_traces`).
187
+ - Run a row count against a known-populated table, for example `turso db shell mastra-migrated "SELECT COUNT(*) FROM mastra_messages;"`, and compare it to the same query against the Cloud Store URL.
188
+ - Start your Mastra app against the new credentials and confirm that an existing thread or workflow run loads as expected in [Studio](https://mastra.ai/docs/studio/observability).
64
189
 
65
190
  ## Update observability configuration
66
191
 
@@ -1,6 +1,6 @@
1
1
  # Netlify
2
2
 
3
- Netlify AI Gateway provides unified access to multiple providers with built-in caching and observability. Access 62 models through Mastra's model router.
3
+ Netlify AI Gateway provides unified access to multiple providers with built-in caching and observability. Access 63 models through Mastra's model router.
4
4
 
5
5
  Learn more in the [Netlify documentation](https://docs.netlify.com/build/ai-gateway/overview/).
6
6
 
@@ -43,6 +43,7 @@ ANTHROPIC_API_KEY=ant-...
43
43
  | `anthropic/claude-opus-4-5` |
44
44
  | `anthropic/claude-opus-4-5-20251101` |
45
45
  | `anthropic/claude-opus-4-6` |
46
+ | `anthropic/claude-opus-4-7` |
46
47
  | `anthropic/claude-sonnet-4-0` |
47
48
  | `anthropic/claude-sonnet-4-20250514` |
48
49
  | `anthropic/claude-sonnet-4-5` |
@@ -1,6 +1,6 @@
1
1
  # ![Vercel logo](https://models.dev/logos/vercel.svg)Vercel
2
2
 
3
- Vercel aggregates models from multiple providers with enhanced features like rate limiting and failover. Access 231 models through Mastra's model router.
3
+ Vercel aggregates models from multiple providers with enhanced features like rate limiting and failover. Access 234 models through Mastra's model router.
4
4
 
5
5
  Learn more in the [Vercel documentation](https://ai-sdk.dev/providers/ai-sdk-providers).
6
6
 
@@ -72,6 +72,7 @@ ANTHROPIC_API_KEY=ant-...
72
72
  | `anthropic/claude-opus-4.1` |
73
73
  | `anthropic/claude-opus-4.5` |
74
74
  | `anthropic/claude-opus-4.6` |
75
+ | `anthropic/claude-opus-4.7` |
75
76
  | `anthropic/claude-sonnet-4` |
76
77
  | `anthropic/claude-sonnet-4.5` |
77
78
  | `anthropic/claude-sonnet-4.6` |
@@ -119,6 +120,7 @@ ANTHROPIC_API_KEY=ant-...
119
120
  | `google/text-embedding-005` |
120
121
  | `google/text-multilingual-embedding-002` |
121
122
  | `inception/mercury-2` |
123
+ | `inception/mercury-coder-small` |
122
124
  | `inception/mercury-edit-2` |
123
125
  | `kwaipilot/kat-coder-pro-v1` |
124
126
  | `kwaipilot/kat-coder-pro-v2` |
@@ -264,4 +266,5 @@ ANTHROPIC_API_KEY=ant-...
264
266
  | `zai/glm-4.7-flashx` |
265
267
  | `zai/glm-5` |
266
268
  | `zai/glm-5-turbo` |
269
+ | `zai/glm-5.1` |
267
270
  | `zai/glm-5v-turbo` |
@@ -1,6 +1,6 @@
1
1
  # Model Providers
2
2
 
3
- Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to 3592 models from 99 providers through a single API.
3
+ Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to 3608 models from 100 providers through a single API.
4
4
 
5
5
  ## Features
6
6
 
@@ -228,6 +228,41 @@ Mastra tries your primary model first. If it encounters a 500 error, rate limit,
228
228
 
229
229
 Your users never experience the disruption: the response comes back in the same format, just from a different model. The error context is preserved as the system moves through your fallback chain, ensuring clean error propagation while maintaining streaming compatibility.
230
230
 
231
+ ### Per-model settings
232
+
233
+ Each fallback entry can carry its own `modelSettings`, `providerOptions`, and `headers` — useful when models in the chain need different temperatures or provider-specific knobs to produce comparable output.
234
+
235
+ ```typescript
236
+ import { Agent } from '@mastra/core/agent';
237
+
238
+ const agent = new Agent({
239
+ id: 'tuned-resilient',
240
+ name: 'Tuned Resilient Agent',
241
+ instructions: 'You are a helpful assistant.',
242
+ model: [
243
+ {
244
+ model: 'google/gemini-2.5-flash',
245
+ maxRetries: 2,
246
+ modelSettings: { temperature: 0.3 },
247
+ providerOptions: { google: { thinkingConfig: { thinkingBudget: 0 } } },
248
+ },
249
+ {
250
+ model: 'openai/gpt-5-mini',
251
+ maxRetries: 2,
252
+ modelSettings: { temperature: 0.7 },
253
+ providerOptions: { openai: { reasoningEffort: 'low' } },
254
+ },
255
+ ],
256
+ });
257
+ ```
258
+
259
+ **Precedence:**
260
+
261
+ - `modelSettings` and `providerOptions`: per-fallback entry overrides call-time options, which override agent `defaultOptions`. `modelSettings` shallow-merges by key. `providerOptions` deep-merges recursively, so nested provider config (e.g. `google.thinkingConfig`) preserves sibling keys across layers.
262
+ - `headers`: call-time `modelSettings.headers` overrides per-fallback `headers`, which overrides headers extracted from model-router models. Runtime headers (tracing, auth, tenancy) intentionally take precedence over model-level headers.
263
+
264
+ Each field also accepts a function of `requestContext`, matching how dynamic models are resolved.
265
+
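The two merge behaviors above can be pictured with a small sketch (illustrative, not Mastra's internals): shallow merging replaces whole top-level keys, while deep merging overrides leaves but preserves nested siblings. The `safety` key below is hypothetical, used only to show sibling preservation next to `thinkingConfig`.

```typescript
type Dict = Record<string, any>;

// Shallow merge: later layers replace entire top-level keys (modelSettings).
function shallowMerge(base: Dict, override: Dict): Dict {
  return { ...base, ...override };
}

// Deep merge: later layers override leaves, keeping nested siblings (providerOptions).
function deepMerge(base: Dict, override: Dict): Dict {
  const out: Dict = { ...base };
  for (const [key, value] of Object.entries(override)) {
    const prev = out[key];
    out[key] =
      value !== null && typeof value === "object" && !Array.isArray(value) &&
      prev !== null && typeof prev === "object" && !Array.isArray(prev)
        ? deepMerge(prev, value)
        : value;
  }
  return out;
}

const defaults = { google: { thinkingConfig: { thinkingBudget: 0 }, safety: "strict" } };
const perFallback = { google: { thinkingConfig: { includeThoughts: true } } };

// Deep merge keeps thinkingBudget and safety alongside the override.
console.log(deepMerge(defaults, perFallback));
// Shallow merge would drop them by replacing the whole `google` object.
console.log(shallowMerge(defaults, perFallback));
```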
231
266
  ## Use local models with Mastra
232
267
 
233
268
  Mastra also supports local models like `gpt-oss`, `Qwen3`, `DeepSeek` and many more that you run on your own hardware. The application running your local model needs to provide an OpenAI-compatible API server for Mastra to connect to. We recommend using [LMStudio](https://lmstudio.ai/) (see [Running the LMStudio server](https://lmstudio.ai/docs/developer/core/server)).
@@ -1,6 +1,6 @@
1
1
  # ![Alibaba (China) logo](https://models.dev/logos/alibaba-cn.svg)Alibaba (China)
2
2
 
3
- Access 75 Alibaba (China) models through Mastra's model router. Authentication is handled automatically using the `DASHSCOPE_API_KEY` environment variable.
3
+ Access 76 Alibaba (China) models through Mastra's model router. Authentication is handled automatically using the `DASHSCOPE_API_KEY` environment variable.
4
4
 
5
5
  Learn more in the [Alibaba (China) documentation](https://www.alibabacloud.com/help/en/model-studio/models).
6
6
 
@@ -46,6 +46,7 @@ for await (const chunk of stream) {
46
46
  | `alibaba-cn/deepseek-v3-1` | 131K | | | | | | $0.57 | $2 |
47
47
  | `alibaba-cn/deepseek-v3-2-exp` | 131K | | | | | | $0.29 | $0.43 |
48
48
  | `alibaba-cn/glm-5` | 203K | | | | | | $0.86 | $3 |
49
+ | `alibaba-cn/glm-5.1` | 203K | | | | | | $0.87 | $3 |
49
50
  | `alibaba-cn/kimi-k2-thinking` | 262K | | | | | | $0.57 | $2 |
50
51
  | `alibaba-cn/kimi-k2.5` | 262K | | | | | | $0.57 | $2 |
51
52
  | `alibaba-cn/kimi/kimi-k2.5` | 262K | | | | | | $0.60 | $3 |
@@ -1,6 +1,6 @@
1
1
  # ![Anthropic logo](https://models.dev/logos/anthropic.svg)Anthropic
2
2
 
3
- Access 22 Anthropic models through Mastra's model router. Authentication is handled automatically using the `ANTHROPIC_API_KEY` environment variable.
3
+ Access 23 Anthropic models through Mastra's model router. Authentication is handled automatically using the `ANTHROPIC_API_KEY` environment variable.
4
4
 
5
5
  Learn more in the [Anthropic documentation](https://docs.anthropic.com/en/docs/about-claude/models).
6
6
 
@@ -49,6 +49,7 @@ for await (const chunk of stream) {
49
49
  | `anthropic/claude-opus-4-5` | 200K | | | | | | $5 | $25 |
50
50
  | `anthropic/claude-opus-4-5-20251101` | 200K | | | | | | $5 | $25 |
51
51
  | `anthropic/claude-opus-4-6` | 1.0M | | | | | | $5 | $25 |
52
+ | `anthropic/claude-opus-4-7` | 1.0M | | | | | | $5 | $25 |
52
53
  | `anthropic/claude-sonnet-4-0` | 200K | | | | | | $3 | $15 |
53
54
  | `anthropic/claude-sonnet-4-20250514` | 200K | | | | | | $3 | $15 |
54
55
  | `anthropic/claude-sonnet-4-5` | 200K | | | | | | $3 | $15 |
@@ -1,6 +1,6 @@
1
1
  # ![Cortecs logo](https://models.dev/logos/cortecs.svg)Cortecs
2
2
 
3
- Access 30 Cortecs models through Mastra's model router. Authentication is handled automatically using the `CORTECS_API_KEY` environment variable.
3
+ Access 32 Cortecs models through Mastra's model router. Authentication is handled automatically using the `CORTECS_API_KEY` environment variable.
4
4
 
5
5
  Learn more in the [Cortecs documentation](https://cortecs.ai).
6
6
 
@@ -49,6 +49,7 @@ for await (const chunk of stream) {
49
49
  | `cortecs/glm-4.7` | 198K | | | | | | $0.45 | $2 |
50
50
  | `cortecs/glm-4.7-flash` | 203K | | | | | | $0.09 | $0.53 |
51
51
  | `cortecs/glm-5` | 203K | | | | | | $1 | $3 |
52
+ | `cortecs/glm-5.1` | 205K | | | | | | $1 | $4 |
52
53
  | `cortecs/gpt-4.1` | 1.0M | | | | | | $2 | $9 |
53
54
  | `cortecs/gpt-oss-120b` | 128K | | | | | | — | — |
54
55
  | `cortecs/intellect-3` | 128K | | | | | | $0.22 | $1 |
@@ -59,6 +60,7 @@ for await (const chunk of stream) {
59
60
  | `cortecs/minimax-m2` | 400K | | | | | | $0.39 | $2 |
60
61
  | `cortecs/minimax-m2.1` | 196K | | | | | | $0.34 | $1 |
61
62
  | `cortecs/minimax-m2.5` | 197K | | | | | | $0.32 | $1 |
63
+ | `cortecs/minimax-M2.7` | 203K | | | | | | $0.47 | $1 |
62
64
  | `cortecs/nova-pro-v1` | 300K | | | | | | $1 | $4 |
63
65
  | `cortecs/qwen3-32b` | 16K | | | | | | $0.10 | $0.33 |
64
66
  | `cortecs/qwen3-coder-480b-a35b-instruct` | 262K | | | | | | $0.44 | $2 |
@@ -1,6 +1,6 @@
1
1
  # ![Firmware logo](https://models.dev/logos/firmware.svg)Firmware
2
2
 
3
- Access 25 Firmware models through Mastra's model router. Authentication is handled automatically using the `FIRMWARE_API_KEY` environment variable.
3
+ Access 24 Firmware models through Mastra's model router. Authentication is handled automatically using the `FIRMWARE_API_KEY` environment variable.
4
4
 
5
5
  Learn more in the [Firmware documentation](https://docs.frogbot.ai).
6
6
 
@@ -35,9 +35,8 @@ for await (const chunk of stream) {
35
35
  | Model | Context | Tools | Reasoning | Image | Audio | Video | Input $/1M | Output $/1M |
36
36
  | -------------------------------------- | ------- | ----- | --------- | ----- | ----- | ----- | ---------- | ----------- |
37
37
  | `firmware/claude-haiku-4-5` | 200K | | | | | | $1 | $5 |
38
- | `firmware/claude-opus-4-5` | 200K | | | | | | $5 | $25 |
39
38
  | `firmware/claude-opus-4-6` | 200K | | | | | | $5 | $25 |
40
- | `firmware/claude-sonnet-4-5` | 200K | | | | | | $3 | $15 |
39
+ | `firmware/claude-opus-4-7` | 200K | | | | | | $5 | $25 |
41
40
  | `firmware/claude-sonnet-4-6` | 200K | | | | | | $3 | $15 |
42
41
  | `firmware/deepseek-v3-2` | 128K | | | | | | $0.58 | $2 |
43
42
  | `firmware/gemini-2.5-flash` | 1.0M | | | | | | $0.30 | $3 |
@@ -0,0 +1,73 @@
1
+ # ![HPC-AI logo](https://models.dev/logos/hpc-ai.svg)HPC-AI
2
+
3
+ Access 3 HPC-AI models through Mastra's model router. Authentication is handled automatically using the `HPC_AI_API_KEY` environment variable.
4
+
5
+ Learn more in the [HPC-AI documentation](https://www.hpc-ai.com/doc/docs/quickstart/).
6
+
7
+ ```bash
8
+ HPC_AI_API_KEY=your-api-key
9
+ ```
10
+
11
+ ```typescript
12
+ import { Agent } from "@mastra/core/agent";
13
+
14
+ const agent = new Agent({
15
+ id: "my-agent",
16
+ name: "My Agent",
17
+ instructions: "You are a helpful assistant",
18
+ model: "hpc-ai/minimax/minimax-m2.5"
19
+ });
20
+
21
+ // Generate a response
22
+ const response = await agent.generate("Hello!");
23
+
24
+ // Stream a response
25
+ const stream = await agent.stream("Tell me a story");
26
+ for await (const chunk of stream) {
27
+ console.log(chunk);
28
+ }
29
+ ```
30
+
31
+ > **Info:** Mastra uses the OpenAI-compatible `/chat/completions` endpoint. Some provider-specific features may not be available. Check the [HPC-AI documentation](https://www.hpc-ai.com/doc/docs/quickstart/) for details.
32
+
33
+ ## Models
34
+
35
+ | Model | Context | Tools | Reasoning | Image | Audio | Video | Input $/1M | Output $/1M |
36
+ | ----------------------------- | ------- | ----- | --------- | ----- | ----- | ----- | ---------- | ----------- |
37
+ | `hpc-ai/minimax/minimax-m2.5` | 1.0M | | | | | | $0.14 | $0.56 |
38
+ | `hpc-ai/moonshotai/kimi-k2.5` | 262K | | | | | | $0.21 | $1 |
39
+ | `hpc-ai/zai-org/glm-5.1` | 202K | | | | | | $0.66 | $2 |
40
+
41
+ ## Advanced configuration
42
+
43
+ ### Custom headers
44
+
45
+ ```typescript
46
+ const agent = new Agent({
47
+ id: "custom-agent",
48
+ name: "custom-agent",
49
+ model: {
50
+ url: "https://api.hpc-ai.com/inference/v1",
51
+ id: "hpc-ai/minimax/minimax-m2.5",
52
+ apiKey: process.env.HPC_AI_API_KEY,
53
+ headers: {
54
+ "X-Custom-Header": "value"
55
+ }
56
+ }
57
+ });
58
+ ```
59
+
60
+ ### Dynamic model selection
61
+
62
+ ```typescript
63
+ const agent = new Agent({
64
+ id: "dynamic-agent",
65
+ name: "Dynamic Agent",
66
+ model: ({ requestContext }) => {
67
+ const useAdvanced = requestContext.task === "complex";
68
+ return useAdvanced
69
+ ? "hpc-ai/zai-org/glm-5.1"
70
+ : "hpc-ai/minimax/minimax-m2.5";
71
+ }
72
+ });
73
+ ```
@@ -71,7 +71,7 @@ for await (const chunk of stream) {
71
71
  | `nvidia/microsoft/phi-4-mini-instruct` | 131K | | | | | | — | — |
72
72
  | `nvidia/minimaxai/minimax-m2.1` | 205K | | | | | | — | — |
73
73
  | `nvidia/minimaxai/minimax-m2.5` | 205K | | | | | | — | — |
74
- | `nvidia/minimaxai/minimax-m2.7` | 205K | | | | | | $0.30 | $1 |
74
+ | `nvidia/minimaxai/minimax-m2.7` | 205K | | | | | | — | — |
75
75
  | `nvidia/mistralai/codestral-22b-instruct-v0.1` | 128K | | | | | | — | — |
76
76
  | `nvidia/mistralai/devstral-2-123b-instruct-2512` | 262K | | | | | | — | — |
77
77
  | `nvidia/mistralai/mamba-codestral-7b-v0.1` | 128K | | | | | | — | — |
@@ -1,6 +1,6 @@
1
1
  # ![OpenCode Go logo](https://models.dev/logos/opencode-go.svg)OpenCode Go
2
2
 
3
- Access 7 OpenCode Go models through Mastra's model router. Authentication is handled automatically using the `OPENCODE_API_KEY` environment variable.
3
+ Access 9 OpenCode Go models through Mastra's model router. Authentication is handled automatically using the `OPENCODE_API_KEY` environment variable.
4
4
 
5
5
  Learn more in the [OpenCode Go documentation](https://opencode.ai/docs/zen).
6
6
 
@@ -41,6 +41,8 @@ for await (const chunk of stream) {
41
41
  | `opencode-go/mimo-v2-pro` | 1.0M | | | | | | $1 | $3 |
42
42
  | `opencode-go/minimax-m2.5` | 205K | | | | | | $0.30 | $1 |
43
43
  | `opencode-go/minimax-m2.7` | 205K | | | | | | $0.30 | $1 |
44
+ | `opencode-go/qwen3.5-plus` | 262K | | | | | | $0.20 | $1 |
45
+ | `opencode-go/qwen3.6-plus` | 262K | | | | | | $0.50 | $3 |
44
46
 
45
47
  ## Advanced configuration
46
48
 
@@ -70,7 +72,7 @@ const agent = new Agent({
70
72
  model: ({ requestContext }) => {
71
73
  const useAdvanced = requestContext.task === "complex";
72
74
  return useAdvanced
73
- ? "opencode-go/minimax-m2.7"
75
+ ? "opencode-go/qwen3.6-plus"
74
76
  : "opencode-go/glm-5";
75
77
  }
76
78
  });
@@ -1,6 +1,6 @@
1
1
  # ![OpenCode Zen logo](https://models.dev/logos/opencode.svg)OpenCode Zen
2
2
 
3
- Access 32 OpenCode Zen models through Mastra's model router. Authentication is handled automatically using the `OPENCODE_API_KEY` environment variable.
3
+ Access 35 OpenCode Zen models through Mastra's model router. Authentication is handled automatically using the `OPENCODE_API_KEY` environment variable.
4
4
 
5
5
  Learn more in the [OpenCode Zen documentation](https://opencode.ai/docs/zen).
6
6
 
@@ -40,6 +40,7 @@ for await (const chunk of stream) {
40
40
  | `opencode/claude-opus-4-1` | 200K | | | | | | $15 | $75 |
41
41
  | `opencode/claude-opus-4-5` | 200K | | | | | | $5 | $25 |
42
42
  | `opencode/claude-opus-4-6` | 1.0M | | | | | | $5 | $25 |
43
+ | `opencode/claude-opus-4-7` | 1.0M | | | | | | $5 | $25 |
43
44
  | `opencode/claude-sonnet-4` | 1.0M | | | | | | $3 | $15 |
44
45
  | `opencode/claude-sonnet-4-5` | 1.0M | | | | | | $3 | $15 |
45
46
  | `opencode/claude-sonnet-4-6` | 1.0M | | | | | | $3 | $15 |
@@ -66,6 +67,8 @@ for await (const chunk of stream) {
66
67
  | `opencode/minimax-m2.5` | 205K | | | | | | $0.30 | $1 |
67
68
  | `opencode/minimax-m2.5-free` | 205K | | | | | | — | — |
68
69
  | `opencode/nemotron-3-super-free` | 205K | | | | | | — | — |
70
+ | `opencode/qwen3.5-plus` | 262K | | | | | | $0.20 | $1 |
71
+ | `opencode/qwen3.6-plus` | 262K | | | | | | $0.50 | $3 |
69
72
 
70
73
  ## Advanced configuration
71
74
 
@@ -95,7 +98,7 @@ const agent = new Agent({
95
98
  model: ({ requestContext }) => {
96
99
  const useAdvanced = requestContext.task === "complex";
97
100
  return useAdvanced
98
- ? "opencode/nemotron-3-super-free"
101
+ ? "opencode/qwen3.6-plus"
99
102
  : "opencode/big-pickle";
100
103
  }
101
104
  });
@@ -1,6 +1,6 @@
1
1
  # ![ZenMux logo](https://models.dev/logos/zenmux.svg)ZenMux
2
2
 
3
- Access 87 ZenMux models through Mastra's model router. Authentication is handled automatically using the `ZENMUX_API_KEY` environment variable.
3
+ Access 88 ZenMux models through Mastra's model router. Authentication is handled automatically using the `ZENMUX_API_KEY` environment variable.
4
4
 
5
5
  Learn more in the [ZenMux documentation](https://docs.zenmux.ai).
6
6
 
@@ -41,6 +41,7 @@ for await (const chunk of stream) {
41
41
  | `zenmux/anthropic/claude-opus-4.1` | 200K | | | | | | $15 | $75 |
42
42
  | `zenmux/anthropic/claude-opus-4.5` | 200K | | | | | | $5 | $25 |
43
43
  | `zenmux/anthropic/claude-opus-4.6` | 1.0M | | | | | | $5 | $25 |
44
+ | `zenmux/anthropic/claude-opus-4.7` | 1.0M | | | | | | $5 | $25 |
44
45
  | `zenmux/anthropic/claude-sonnet-4` | 1.0M | | | | | | $3 | $15 |
45
46
  | `zenmux/anthropic/claude-sonnet-4.5` | 1.0M | | | | | | $3 | $15 |
46
47
  | `zenmux/anthropic/claude-sonnet-4.6` | 1.0M | | | | | | $3 | $15 |
@@ -34,6 +34,7 @@ Direct access to individual AI model providers. Each provider offers unique mode
34
34
  - [Friendli](https://mastra.ai/models/providers/friendli)
35
35
  - [GitHub Models](https://mastra.ai/models/providers/github-models)
36
36
  - [Helicone](https://mastra.ai/models/providers/helicone)
37
+ - [HPC-AI](https://mastra.ai/models/providers/hpc-ai)
37
38
  - [Hugging Face](https://mastra.ai/models/providers/huggingface)
38
39
  - [iFlow](https://mastra.ai/models/providers/iflowcn)
39
40
  - [Inception](https://mastra.ai/models/providers/inception)
@@ -12,6 +12,29 @@ export const mastraClient = new MastraClient({
  })
  ```

+ ## `RequestContext`
+
+ When you use `RequestContext` with the client SDK, import it from `@mastra/client-js`.
+
+ ```typescript
+ import { MastraClient, RequestContext } from '@mastra/client-js'
+
+ const client = new MastraClient({
+   baseUrl: 'http://localhost:4111/',
+ })
+
+ const requestContext = new RequestContext()
+ requestContext.set('userId', 'user-123')
+
+ const agent = client.getAgent('support-agent')
+
+ const response = await agent.generate('Summarize this ticket', {
+   requestContext,
+ })
+ ```
+
+ You can also pass `requestContext` as a `Record<string, any>`.
+
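The plain-object form can be sketched as follows. The keys and values here are illustrative (there is no fixed schema), and the `agent.generate` call is shown in a comment since it requires a running Mastra server:

```typescript
// Hypothetical plain-object requestContext; the keys and values are
// illustrative, not a fixed schema.
const requestContext: Record<string, any> = {
  userId: 'user-123',
  locale: 'en-US',
}

// Passed exactly like the RequestContext instance shown above, e.g.:
// await agent.generate('Summarize this ticket', { requestContext })
console.log(Object.keys(requestContext).length) // 2
```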
  ## Parameters

  **baseUrl** (`string`): The base URL for the Mastra API. All requests will be sent relative to this URL.
@@ -284,6 +284,7 @@ The Reference section provides documentation of Mastra's API, including paramete
  - [AgentFSFilesystem](https://mastra.ai/reference/workspace/agentfs-filesystem)
  - [BlaxelSandbox](https://mastra.ai/reference/workspace/blaxel-sandbox)
  - [DaytonaSandbox](https://mastra.ai/reference/workspace/daytona-sandbox)
+ - [DockerSandbox](https://mastra.ai/reference/workspace/docker-sandbox)
  - [E2BSandbox](https://mastra.ai/reference/workspace/e2b-sandbox)
  - [GCSFilesystem](https://mastra.ai/reference/workspace/gcs-filesystem)
  - [LocalFilesystem](https://mastra.ai/reference/workspace/local-filesystem)
@@ -0,0 +1,196 @@
+ # DockerSandbox
+
+ Executes commands inside Docker containers on the local machine. Uses long-lived containers with `docker exec` for command execution. Targets local development, CI/CD, air-gapped deployments, and cost-sensitive scenarios where cloud sandboxes are unnecessary.
+
+ > **Info:** For interface details, see [WorkspaceSandbox interface](https://mastra.ai/reference/workspace/sandbox).
+
+ ## Installation
+
+ **npm**:
+
+ ```bash
+ npm install @mastra/docker
+ ```
+
+ **pnpm**:
+
+ ```bash
+ pnpm add @mastra/docker
+ ```
+
+ **Yarn**:
+
+ ```bash
+ yarn add @mastra/docker
+ ```
+
+ **Bun**:
+
+ ```bash
+ bun add @mastra/docker
+ ```
+
+ Requires [Docker Engine](https://docs.docker.com/engine/install/) running on the host machine.
+
+ ## Usage
+
+ Add a `DockerSandbox` to a workspace and assign it to an agent:
+
+ ```typescript
+ import { Agent } from '@mastra/core/agent'
+ import { Workspace } from '@mastra/core/workspace'
+ import { DockerSandbox } from '@mastra/docker'
+
+ const workspace = new Workspace({
+   sandbox: new DockerSandbox({
+     image: 'node:22-slim',
+   }),
+ })
+
+ const agent = new Agent({
+   name: 'dev-agent',
+   model: 'anthropic/claude-opus-4-6',
+   workspace,
+ })
+ ```
+
+ ## Constructor parameters
+
+ **id** (`string`): Unique identifier for this sandbox instance. Used for container naming and reconnection via labels. (Default: auto-generated)
+
+ **image** (`string`): Docker image to use for the container. (Default: `'node:22-slim'`)
+
+ **command** (`string[]`): Container entrypoint command. Must keep the container alive for exec-based command execution. (Default: `['sleep', 'infinity']`)
+
+ **env** (`Record<string, string>`): Environment variables to set in the container.
+
+ **volumes** (`Record<string, string>`): Host-to-container bind mounts. Keys are host paths, values are container paths.
+
+ **network** (`string`): Docker network to join.
+
+ **privileged** (`boolean`): Run in privileged mode. (Default: `false`)
+
+ **workingDir** (`string`): Working directory inside the container. (Default: `'/workspace'`)
+
+ **labels** (`Record<string, string>`): Additional container labels. The Mastra labels (`mastra.sandbox`, `mastra.sandbox.id`) are always included.
+
+ **timeout** (`number`): Default command timeout in milliseconds. (Default: `300000`, i.e. 5 minutes)
+
+ **dockerOptions** (`Docker.DockerOptions`): Pass-through dockerode connection options for custom socket paths, remote hosts, or TLS certificates.
+
+ **instructions** (`string | function`): Custom instructions that override the default instructions returned by `getInstructions()`. Pass an empty string to suppress instructions.
+
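Several of these options are easiest to see together. A minimal sketch of a fuller configuration using the parameter names above; every concrete value (image, labels, timeout) is made up for illustration:

```typescript
// Illustrative options object matching the constructor parameters above;
// the concrete values are invented for the example.
const options = {
  id: 'ci-sandbox',
  image: 'node:22-slim',
  command: ['sleep', 'infinity'], // default: keeps the container alive for docker exec
  workingDir: '/workspace',
  env: { CI: 'true' },
  labels: { team: 'platform' }, // merged with the built-in mastra.* labels
  timeout: 60_000, // 1 minute instead of the 5-minute default
}

// Passed to the constructor as: new DockerSandbox(options)
console.log(options.timeout)
```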
+ ## Properties
+
+ **id** (`string`): Sandbox instance identifier.
+
+ **name** (`string`): Provider name (`'DockerSandbox'`).
+
+ **provider** (`string`): Provider identifier (`'docker'`).
+
+ **status** (`ProviderStatus`): `'pending' | 'starting' | 'running' | 'stopping' | 'stopped' | 'destroying' | 'destroyed' | 'error'`
+
+ **container** (`Container`): The underlying dockerode `Container` instance. Throws `SandboxNotReadyError` if the sandbox has not been started.
+
+ **processes** (`DockerProcessManager`): Background process manager. See the [SandboxProcessManager reference](https://mastra.ai/reference/workspace/process-manager).
+
+ ## Background processes
+
+ `DockerSandbox` includes a built-in process manager for spawning and managing background processes. Processes run inside the container using `docker exec`.
+
+ ```typescript
+ const sandbox = new DockerSandbox({ id: 'dev-sandbox' })
+ await sandbox.start()
+
+ // Spawn a background process
+ const handle = await sandbox.processes.spawn('node server.js', {
+   env: { PORT: '3000' },
+   onStdout: data => console.log(data),
+ })
+
+ // Interact with the process
+ console.log(handle.stdout)
+ await handle.sendStdin('input\n')
+ await handle.kill()
+ ```
+
+ See the [`SandboxProcessManager` reference](https://mastra.ai/reference/workspace/process-manager) for the full API.
+
+ ## Environment variables
+
+ Set environment variables at the container level with `env`. Per-command environment variables can also be passed when spawning processes:
+
+ ```typescript
+ const sandbox = new DockerSandbox({
+   image: 'node:22-slim',
+   env: {
+     NODE_ENV: 'production',
+     DATABASE_URL: 'postgres://localhost:5432/mydb',
+   },
+ })
+ ```
+
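How the two scopes combine can be pictured as a plain merge. This sketch assumes per-command variables are layered over the container-level ones, with the per-command value winning on conflict (standard `docker exec` behavior); it is not Mastra API code:

```typescript
// Sketch of env resolution, assuming per-command variables override
// container-level ones on conflict (standard docker exec semantics).
const containerEnv = { NODE_ENV: 'production', PORT: '3000' }
const commandEnv = { PORT: '8080' } // e.g. passed when spawning a process

const effectiveEnv = { ...containerEnv, ...commandEnv }
console.log(effectiveEnv.PORT) // '8080', the per-command value wins
```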
+ ## Bind mounts
+
+ Mount host directories into the container using the `volumes` option:
+
+ ```typescript
+ const sandbox = new DockerSandbox({
+   image: 'node:22-slim',
+   volumes: {
+     '/my/project': '/workspace/project',
+     '/shared/data': '/data',
+   },
+ })
+ ```
+
+ Bind mounts are applied at container creation time. The host paths must exist before the sandbox starts.
+
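Because the host paths must exist before the sandbox starts, a pre-flight check can fail fast with a clearer error. This is a sketch, not part of `@mastra/docker`; `tmpdir()` stands in for a real host directory:

```typescript
import { existsSync } from 'node:fs'
import { tmpdir } from 'node:os'

// Hypothetical pre-flight check: verify every host path in the volumes
// map exists before constructing the sandbox.
const volumes: Record<string, string> = {
  [tmpdir()]: '/data', // tmpdir() stands in for a real host directory
}

const missing = Object.keys(volumes).filter(hostPath => !existsSync(hostPath))
if (missing.length > 0) {
  throw new Error(`Host paths do not exist: ${missing.join(', ')}`)
}
console.log('all host paths exist')
```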
+ ## Reconnection
+
+ `DockerSandbox` can reconnect to existing containers by matching labels. When `start()` is called, it checks for a container with the `mastra.sandbox.id` label matching the sandbox ID. If found:
+
+ - A running container is reused directly.
+ - A stopped container is restarted.
+
+ ```typescript
+ // First run — creates a new container
+ const sandbox = new DockerSandbox({ id: 'persistent-sandbox' })
+ await sandbox.start()
+
+ // Later — reconnects to the existing container
+ const sandbox2 = new DockerSandbox({ id: 'persistent-sandbox' })
+ await sandbox2.start()
+ ```
+
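The label lookup itself amounts to filtering container metadata on `mastra.sandbox.id`. A self-contained sketch over plain data (the real implementation goes through dockerode, and the container records here are invented):

```typescript
interface ContainerInfo {
  name: string
  state: 'running' | 'exited'
  labels: Record<string, string>
}

// Invented container records standing in for what Docker would report.
const containers: ContainerInfo[] = [
  { name: 'other', state: 'running', labels: { 'mastra.sandbox.id': 'another-sandbox' } },
  { name: 'mine', state: 'exited', labels: { 'mastra.sandbox.id': 'persistent-sandbox' } },
]

// Sketch of the reconnection lookup: match on the mastra.sandbox.id label.
function findSandboxContainer(id: string): ContainerInfo | undefined {
  return containers.find(c => c.labels['mastra.sandbox.id'] === id)
}

const match = findSandboxContainer('persistent-sandbox')
console.log(match?.name, match?.state) // an exited match would be restarted
```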
+ ## Docker connection options
+
+ Connect to remote Docker hosts or use custom socket paths via `dockerOptions`:
+
+ ```typescript
+ import fs from 'node:fs'
+
+ // Remote Docker host over TLS
+ const remoteSandbox = new DockerSandbox({
+   dockerOptions: {
+     host: '192.168.1.100',
+     port: 2376,
+     ca: fs.readFileSync('ca.pem'),
+     cert: fs.readFileSync('cert.pem'),
+     key: fs.readFileSync('key.pem'),
+   },
+ })
+
+ // Custom socket path
+ const localSandbox = new DockerSandbox({
+   dockerOptions: {
+     socketPath: '/var/run/docker.sock',
+   },
+ })
+ ```
+
189
+
190
+ ## Related
191
+
192
+ - [SandboxProcessManager reference](https://mastra.ai/reference/workspace/process-manager)
193
+ - [WorkspaceSandbox interface](https://mastra.ai/reference/workspace/sandbox)
194
+ - [LocalSandbox reference](https://mastra.ai/reference/workspace/local-sandbox)
195
+ - [E2BSandbox reference](https://mastra.ai/reference/workspace/e2b-sandbox)
196
+ - [Workspace overview](https://mastra.ai/docs/workspace/overview)
package/CHANGELOG.md CHANGED
@@ -1,5 +1,20 @@
  # @mastra/mcp-docs-server

+ ## 1.1.26-alpha.7
+
+ ### Patch Changes
+
+ - Updated dependencies [[`0474c2b`](https://github.com/mastra-ai/mastra/commit/0474c2b2e7c7e1ad8691dca031284841391ff1ef), [`f607106`](https://github.com/mastra-ai/mastra/commit/f607106854c6416c4a07d4082604b9f66d047221), [`62919a6`](https://github.com/mastra-ai/mastra/commit/62919a6ee0fbf3779ad21a97b1ec6696515d5104), [`0fd90a2`](https://github.com/mastra-ai/mastra/commit/0fd90a215caf5fca8099c15a67ca03e4427747a3)]:
+   - @mastra/core@1.26.0-alpha.4
+
+ ## 1.1.26-alpha.5
+
+ ### Patch Changes
+
+ - Updated dependencies [[`fdd54cf`](https://github.com/mastra-ai/mastra/commit/fdd54cf612a9af876e9fdd85e534454f6e7dd518), [`30456b6`](https://github.com/mastra-ai/mastra/commit/30456b6b08c8fd17e109dd093b73d93b65e83bc5), [`9d11a8c`](https://github.com/mastra-ai/mastra/commit/9d11a8c1c8924eb975a245a5884d40ca1b7e0491), [`d246696`](https://github.com/mastra-ai/mastra/commit/d246696139a3144a5b21b042d41c532688e957e1), [`354f9ce`](https://github.com/mastra-ai/mastra/commit/354f9ce1ca6af2074b6a196a23f8ec30012dccca), [`e9837b5`](https://github.com/mastra-ai/mastra/commit/e9837b53699e18711b09e0ca010a4106376f2653)]:
+   - @mastra/core@1.26.0-alpha.3
+   - @mastra/mcp@1.5.1-alpha.1
+
  ## 1.1.26-alpha.3

  ### Patch Changes
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@mastra/mcp-docs-server",
-   "version": "1.1.26-alpha.4",
+   "version": "1.1.26-alpha.8",
    "description": "MCP server for accessing Mastra.ai documentation, changelogs, and news.",
    "type": "module",
    "main": "dist/index.js",
@@ -29,7 +29,7 @@
    "jsdom": "^26.1.0",
    "local-pkg": "^1.1.2",
    "zod": "^4.3.6",
-   "@mastra/core": "1.26.0-alpha.2",
+   "@mastra/core": "1.26.0-alpha.4",
    "@mastra/mcp": "^1.5.1-alpha.1"
  },
  "devDependencies": {
@@ -48,7 +48,7 @@
    "vitest": "4.0.18",
    "@internal/lint": "0.0.83",
    "@internal/types-builder": "0.0.58",
-   "@mastra/core": "1.26.0-alpha.2"
+   "@mastra/core": "1.26.0-alpha.4"
  },
  "homepage": "https://mastra.ai",
  "repository": {