@mastra/mcp-docs-server 1.1.26-alpha.18 → 1.1.26-alpha.20

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -300,8 +300,26 @@ Success criteria:
  })
  ```
 
+ ## Sub-agent versioning
+
+ When using the [editor](https://mastra.ai/docs/editor/overview), you can control which stored version of each sub-agent the supervisor uses at runtime. Set version overrides on the Mastra instance or per invocation:
+
+ ```typescript
+ const result = await supervisor.generate('Research and write about AI safety', {
+   versions: {
+     agents: {
+       'research-agent': { status: 'published' },
+       'writing-agent': { versionId: 'draft-456' },
+     },
+   },
+ })
+ ```
+
+ Version overrides propagate automatically through delegation. See [Sub-agent versioning](https://mastra.ai/docs/editor/overview) for details on resolution order and server API usage.
+
 
  ## Related
 
+ - [Sub-agent versioning](https://mastra.ai/docs/editor/overview)
  - [Guide: Research coordinator](https://mastra.ai/guides/guide/research-coordinator)
  - [Agent.stream() reference](https://mastra.ai/reference/streaming/agents/stream)
  - [Agent.generate() reference](https://mastra.ai/reference/agents/generate)
@@ -214,6 +214,75 @@ curl http://localhost:4111/agents/support-agent?versionId=abc-123
 
  See the [Client SDK agents reference](https://mastra.ai/reference/client-js/agents) for API methods.
 
+ ### Sub-agent versioning
+
+ When a [supervisor agent](https://mastra.ai/docs/agents/supervisor-agents) delegates to sub-agents, version overrides determine which stored version of each sub-agent to use instead of the code-defined default. This lets you iterate on sub-agent prompts and tools through the editor without redeploying the supervisor.
+
+ Set version overrides at three levels:
+
+ 1. **Mastra instance config** — global defaults that apply to every `generate()` and `stream()` call.
+ 2. **Per-invocation options** — overrides passed directly to `generate()` or `stream()`.
+ 3. **Server request body** — overrides sent in the `versions` field of an API request.
+
+ Resolution order: **per-invocation > request body > Mastra instance defaults > code-defined agent**.
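The precedence above can be sketched as a shallow merge where later sources win. This is an illustrative sketch only — the type and function names here are stand-ins, not Mastra internals, and it assumes each agent's selector is replaced wholesale rather than deep-merged:

```typescript
// Illustrative sketch of the resolution order: per-invocation beats the
// request body, which beats Mastra instance defaults. Names are hypothetical.
type VersionSelector = { versionId?: string; status?: 'draft' | 'published' }
type VersionOverrides = { agents: Record<string, VersionSelector> }

function mergeOverrides(
  instanceDefaults: VersionOverrides,
  requestBody: VersionOverrides,
  perInvocation: VersionOverrides,
): VersionOverrides {
  return {
    agents: {
      ...instanceDefaults.agents, // lowest priority
      ...requestBody.agents,
      ...perInvocation.agents, // highest priority
    },
  }
}

const merged = mergeOverrides(
  { agents: { 'research-agent': { status: 'published' } } },
  { agents: { 'writer-agent': { versionId: 'abc-123' } } },
  { agents: { 'research-agent': { versionId: 'draft-456' } } },
)
// 'research-agent' resolves to the per-invocation selector;
// 'writer-agent' keeps the request-body selector.
```

Any agent without an override at any level falls through to the code-defined agent.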
+
+ #### Mastra instance config
+
+ Set global defaults when creating the `Mastra` instance. Every supervisor call inherits these overrides:
+
+ ```typescript
+ import { Mastra } from '@mastra/core'
+ import { MastraEditor } from '@mastra/editor'
+
+ export const mastra = new Mastra({
+   agents: { supervisor, researchAgent, writerAgent },
+   editor: new MastraEditor(),
+   versions: {
+     agents: {
+       'research-agent': { status: 'published' },
+       'writer-agent': { versionId: 'abc-123' },
+     },
+   },
+ })
+ ```
+
+ #### Per-invocation overrides
+
+ Override versions for a single call to `generate()` or `stream()`. These take priority over Mastra instance defaults:
+
+ ```typescript
+ const result = await supervisor.generate('Research and write an article about AI safety', {
+   versions: {
+     agents: {
+       'research-agent': { versionId: 'draft-456' },
+     },
+   },
+ })
+ ```
+
+ #### Server request body
+
+ When calling agents through the Mastra server, pass version overrides in the request body:
+
+ ```bash
+ curl -X POST http://localhost:4111/agents/supervisor/generate \
+   -H "Content-Type: application/json" \
+   -d '{
+     "messages": [{ "role": "user", "content": "Research AI safety" }],
+     "versions": {
+       "agents": {
+         "research-agent": { "versionId": "draft-456" }
+       }
+     }
+   }'
+ ```
+
+ #### How propagation works
+
+ Version overrides propagate automatically through sub-agent delegation via `requestContext`. When a supervisor delegates to a sub-agent, the framework checks whether a version override exists for that sub-agent's ID. If one is found, it resolves the stored version from the editor and uses it instead of the code-defined default.
+
+ If version resolution fails (for example, when the editor is not configured or the version ID doesn't exist), the framework logs a warning and falls back to the code-defined agent.
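The resolve-or-fall-back behavior just described can be sketched as follows. This is a hypothetical illustration — `Editor`, `getVersion`, and the string-valued "agents" are stand-ins for the real framework internals:

```typescript
// Illustrative sketch of override resolution with fallback. The editor API
// shown here is hypothetical; agents are modeled as plain strings.
type VersionSelector = { versionId?: string; status?: 'draft' | 'published' }
type VersionOverrides = { agents?: Record<string, VersionSelector> }

interface Editor {
  getVersion(agentId: string, selector: VersionSelector): Promise<string | undefined>
}

async function resolveSubAgent(
  agentId: string,
  codeDefined: string,
  overrides: VersionOverrides,
  editor?: Editor,
): Promise<string> {
  const selector = overrides.agents?.[agentId]
  // No override for this agent, or no editor configured: use the code-defined agent.
  if (!selector || !editor) return codeDefined
  try {
    const stored = await editor.getVersion(agentId, selector)
    return stored ?? codeDefined
  } catch {
    // Unknown version ID, storage error, etc.: warn and fall back.
    console.warn(`version resolution failed for '${agentId}', using code-defined agent`)
    return codeDefined
  }
}
```

The key design point is that a bad override never fails the supervisor's call — delegation always degrades to the code-defined agent.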
+
 
  ## Next steps
 
  - Set up [prompts](https://mastra.ai/docs/editor/prompts) to build reusable instruction templates.
@@ -1,6 +1,6 @@
  # Netlify
 
- Netlify AI Gateway provides unified access to multiple providers with built-in caching and observability. Access 63 models through Mastra's model router.
+ Netlify AI Gateway provides unified access to multiple providers with built-in caching and observability. Access 62 models through Mastra's model router.
 
  Learn more in the [Netlify documentation](https://docs.netlify.com/build/ai-gateway/overview/).
 
@@ -13,7 +13,7 @@ const agent = new Agent({
    id: "my-agent",
    name: "My Agent",
    instructions: "You are a helpful assistant",
-   model: "netlify/anthropic/claude-3-haiku-20240307"
+   model: "netlify/anthropic/claude-haiku-4-5"
  });
  ```
 
@@ -35,7 +35,6 @@ ANTHROPIC_API_KEY=ant-...
 
  | Model |
  | ------------------------------------------- |
- | `anthropic/claude-3-haiku-20240307` |
  | `anthropic/claude-haiku-4-5` |
  | `anthropic/claude-haiku-4-5-20251001` |
  | `anthropic/claude-opus-4-1-20250805` |
@@ -1,6 +1,6 @@
  # Model Providers
 
- Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to 3667 models from 104 providers through a single API.
+ Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to 3670 models from 104 providers through a single API.
 
  ## Features
 
@@ -1,6 +1,6 @@
  # ![Cloudflare Workers AI logo](https://models.dev/logos/cloudflare-workers-ai.svg)Cloudflare Workers AI
 
- Access 7 Cloudflare Workers AI models through Mastra's model router. Authentication is handled automatically using the `CLOUDFLARE_API_KEY` environment variable. Configure `CLOUDFLARE_ACCOUNT_ID` as well.
+ Access 8 Cloudflare Workers AI models through Mastra's model router. Authentication is handled automatically using the `CLOUDFLARE_API_KEY` environment variable. Configure `CLOUDFLARE_ACCOUNT_ID` as well.
 
  Learn more in the [Cloudflare Workers AI documentation](https://developers.cloudflare.com/workers-ai/models/).
 
@@ -38,6 +38,7 @@ for await (const chunk of stream) {
  | `cloudflare-workers-ai/@cf/google/gemma-4-26b-a4b-it` | 256K | | | | | | $0.10 | $0.30 |
  | `cloudflare-workers-ai/@cf/meta/llama-4-scout-17b-16e-instruct` | 128K | | | | | | $0.27 | $0.85 |
  | `cloudflare-workers-ai/@cf/moonshotai/kimi-k2.5` | 256K | | | | | | $0.60 | $3 |
+ | `cloudflare-workers-ai/@cf/moonshotai/kimi-k2.6` | 256K | | | | | | $0.95 | $4 |
  | `cloudflare-workers-ai/@cf/nvidia/nemotron-3-120b-a12b` | 256K | | | | | | $0.50 | $2 |
  | `cloudflare-workers-ai/@cf/openai/gpt-oss-120b` | 128K | | | | | | $0.35 | $0.75 |
  | `cloudflare-workers-ai/@cf/openai/gpt-oss-20b` | 128K | | | | | | $0.20 | $0.30 |
@@ -1,6 +1,6 @@
  # ![Hugging Face logo](https://models.dev/logos/huggingface.svg)Hugging Face
 
- Access 22 Hugging Face models through Mastra's model router. Authentication is handled automatically using the `HF_TOKEN` environment variable.
+ Access 23 Hugging Face models through Mastra's model router. Authentication is handled automatically using the `HF_TOKEN` environment variable.
 
  Learn more in the [Hugging Face documentation](https://huggingface.co).
 
@@ -43,6 +43,7 @@ for await (const chunk of stream) {
  | `huggingface/moonshotai/Kimi-K2-Instruct-0905` | 262K | | | | | | $1 | $3 |
  | `huggingface/moonshotai/Kimi-K2-Thinking` | 262K | | | | | | $0.60 | $3 |
  | `huggingface/moonshotai/Kimi-K2.5` | 262K | | | | | | $0.60 | $3 |
+ | `huggingface/moonshotai/Kimi-K2.6` | 262K | | | | | | $0.95 | $4 |
  | `huggingface/Qwen/Qwen3-235B-A22B-Thinking-2507` | 262K | | | | | | $0.30 | $3 |
  | `huggingface/Qwen/Qwen3-Coder-480B-A35B-Instruct` | 262K | | | | | | $2 | $2 |
  | `huggingface/Qwen/Qwen3-Coder-Next` | 262K | | | | | | $0.20 | $2 |
@@ -1,6 +1,6 @@
  # ![Kimi For Coding logo](https://models.dev/logos/kimi-for-coding.svg)Kimi For Coding
 
- Access 2 Kimi For Coding models through Mastra's model router. Authentication is handled automatically using the `KIMI_API_KEY` environment variable.
+ Access 3 Kimi For Coding models through Mastra's model router. Authentication is handled automatically using the `KIMI_API_KEY` environment variable.
 
  Learn more in the [Kimi For Coding documentation](https://www.kimi.com/coding/docs/en/third-party-agents.html).
 
@@ -35,6 +35,7 @@ for await (const chunk of stream) {
  | Model | Context | Tools | Reasoning | Image | Audio | Video | Input $/1M | Output $/1M |
  | ---------------------------------- | ------- | ----- | --------- | ----- | ----- | ----- | ---------- | ----------- |
  | `kimi-for-coding/k2p5` | 262K | | | | | | — | — |
+ | `kimi-for-coding/k2p6` | 262K | | | | | | — | — |
  | `kimi-for-coding/kimi-k2-thinking` | 262K | | | | | | — | — |
 
  ## Advanced configuration
@@ -1,6 +1,6 @@
  # ![OpenCode Go logo](https://models.dev/logos/opencode-go.svg)OpenCode Go
 
- Access 9 OpenCode Go models through Mastra's model router. Authentication is handled automatically using the `OPENCODE_API_KEY` environment variable.
+ Access 10 OpenCode Go models through Mastra's model router. Authentication is handled automatically using the `OPENCODE_API_KEY` environment variable.
 
  Learn more in the [OpenCode Go documentation](https://opencode.ai/docs/zen).
 
@@ -37,6 +37,7 @@ for await (const chunk of stream) {
  | `opencode-go/glm-5` | 205K | | | | | | $1 | $3 |
  | `opencode-go/glm-5.1` | 205K | | | | | | $1 | $4 |
  | `opencode-go/kimi-k2.5` | 262K | | | | | | $0.60 | $3 |
+ | `opencode-go/kimi-k2.6` | 262K | | | | | | $0.95 | $4 |
  | `opencode-go/mimo-v2-omni` | 262K | | | | | | $0.40 | $2 |
  | `opencode-go/mimo-v2-pro` | 1.0M | | | | | | $1 | $3 |
  | `opencode-go/minimax-m2.5` | 205K | | | | | | $0.30 | $1 |
@@ -298,6 +298,14 @@ const response = await agent.generate('Help me organize my day', {
 
  **options.tracingOptions.tags** (`string[]`): Tags to apply to this trace. String labels for categorizing and filtering traces.
 
+ **options.versions** (`VersionOverrides`): Per-invocation version overrides for sub-agent delegation. Merged on top of Mastra instance-level versions and propagated automatically through sub-agent calls via `requestContext`. Requires the editor package. See [Sub-agent versioning](/docs/editor/overview#sub-agent-versioning).
+
+ **options.versions.agents** (`Record<string, VersionSelector>`): A map of agent IDs to their version selectors.
+
+ **options.versions.agents.versionId** (`string`): Target a specific version by ID.
+
+ **options.versions.agents.status** (`'draft' | 'published'`): Target the latest version with this publication status.
+
  **options.includeRawChunks** (`boolean`): Whether to include raw chunks in the stream output. Not available on all model providers.
 
  ## Response structure
@@ -140,6 +140,18 @@ await run.resume({
  })
  ```
 
+ When a [`.foreach()`](https://mastra.ai/reference/workflows/workflow-methods/foreach) step suspends across multiple iterations, pass `forEachIndex` (zero-based; `0` targets the first iteration) to resume one iteration at a time. Iterations you don't target remain suspended.
+
+ ```typescript
+ await run.resume({
+   step: 'approve',
+   resumeData: { ok: true },
+   forEachIndex: 1, // resumes the second iteration
+ })
+ ```
+
+ `forEachIndex` is also supported by `resumeAsync()` and `resumeStream()`.
+
 
  ### `cancel()`
 
  Cancel a running workflow:
@@ -63,4 +63,12 @@ Visit the [Configuration reference](https://mastra.ai/reference/configuration) f
 
  **gateways** (`Record<string, MastraModelGateway>`): Custom model gateways to register for accessing AI models through alternative providers or private deployments. Structured as a key-value pair, with keys being the registry key (used for getGateway()) and values being gateway instances. (Default: `{}`)
 
- **memory** (`Record<string, MastraMemory>`): Memory instances to register. These can be referenced by stored agents and resolved at runtime. Structured as a key-value pair, with keys being the registry key and values being memory instances. (Default: `{}`)
+ **memory** (`Record<string, MastraMemory>`): Memory instances to register. These can be referenced by stored agents and resolved at runtime. Structured as a key-value pair, with keys being the registry key and values being memory instances. (Default: `{}`)
+
+ **versions** (`VersionOverrides`): Global version overrides for sub-agent delegation. When a supervisor agent delegates to a sub-agent, these overrides determine which stored version of that sub-agent to use instead of the code-defined default. Requires the editor package to be configured. See [Sub-agent versioning](/docs/editor/overview#sub-agent-versioning) for details.
+
+ **versions.agents** (`Record<string, VersionSelector>`): A map of agent IDs to their version selectors. Each selector can target a specific version by ID or by publication status.
+
+ **versions.agents.versionId** (`string`): The ID of a specific version to use.
+
+ **versions.agents.status** (`'draft' | 'published'`): Select the latest version with this publication status.
@@ -1,14 +1,14 @@
  # PrefillErrorHandler
 
- The `PrefillErrorHandler` is an **error processor** that handles the Anthropic "assistant message prefill" error. This error occurs when a conversation ends with an assistant message and the model doesn't support prefilling assistant responses.
+ The `PrefillErrorHandler` is an **error processor** that handles assistant-response prefill errors. This error occurs when a conversation ends with an assistant message and the model rejects the request because it treats the trailing assistant message as a prefilled response.
 
  When the error is detected, the processor appends a hidden `<system-reminder>continue</system-reminder>` user message to the conversation and signals a retry. The reminder is persisted with `metadata.systemReminder = { type: 'anthropic-prefill-processor-retry' }`, which keeps it available for retry reconstruction and raw history while standard UI-facing message conversions hide it.
 
- Add this processor to `errorProcessors` when you want Mastra to recover from Anthropic assistant message prefill errors.
+ Add this processor to `errorProcessors` when you want Mastra to recover from assistant prefill rejections (for example Anthropic's "assistant message prefill" and Qwen/llama.cpp's "assistant response prefill is incompatible with `enable_thinking`" errors).
 
  ## How it works
 
- 1. The LLM API call fails with a message containing "assistant message prefill"
+ 1. The LLM API call fails with a known assistant-prefill rejection message
  2. `PrefillErrorHandler` checks that this is the first retry attempt
  3. It appends a hidden `<system-reminder>continue</system-reminder>` user message to the `messageList`
  4. It returns `{ retry: true }` to signal the LLM call should be retried with the modified messages
@@ -17,7 +17,7 @@ The processor now reacts to the API rejection itself instead of re-checking whet
 
  ## Usage example
 
- Add `PrefillErrorHandler` to `errorProcessors` for any agent that should retry Anthropic prefill failures:
+ Add `PrefillErrorHandler` to `errorProcessors` for any agent that should retry assistant-prefill failures:
 
  ```typescript
  import { Agent } from '@mastra/core/agent'
@@ -62,7 +62,7 @@ The `PrefillErrorHandler` takes no constructor parameters.
 
  **name** (`'Prefill Error Handler'`): Processor display name.
 
- **processAPIError** (`(args: ProcessAPIErrorArgs) => ProcessAPIErrorResult | void`): Handles the Anthropic prefill error by appending a hidden system reminder continue message and signaling retry. Only triggers on the first retry attempt.
+ **processAPIError** (`(args: ProcessAPIErrorArgs) => ProcessAPIErrorResult | void`): Handles known assistant-prefill errors by appending a hidden system reminder continue message and signaling retry. Only triggers on the first retry attempt.
 
  ## Related
 
@@ -206,6 +206,14 @@ const stream = await agent.stream('message for agent')
 
  **options.tracingOptions.tags** (`string[]`): Tags to apply to this trace. String labels for categorizing and filtering traces.
 
+ **options.versions** (`VersionOverrides`): Per-invocation version overrides for sub-agent delegation. Merged on top of Mastra instance-level versions and propagated automatically through sub-agent calls via `requestContext`. Requires the editor package. See [Sub-agent versioning](/docs/editor/overview#sub-agent-versioning).
+
+ **options.versions.agents** (`Record<string, VersionSelector>`): A map of agent IDs to their version selectors.
+
+ **options.versions.agents.versionId** (`string`): Target a specific version by ID.
+
+ **options.versions.agents.status** (`'draft' | 'published'`): Target the latest version with this publication status.
+
  ## Returns
 
  **stream** (`MastraModelOutput<Output>`): Returns a MastraModelOutput instance that provides access to the streaming output.
@@ -32,6 +32,8 @@ if (result!.status === 'suspended') {
 
  **step** (`Step<string, any, any, any, any, TEngineType>`): The step to resume execution from
 
+ **forEachIndex** (`number`): Target a specific iteration of a suspended `.foreach()` step. Pass the zero-based index of the iteration to resume; other iterations remain suspended. Omit to resume all suspended iterations of the step with the same `resumeData`.
+
  **tracingOptions** (`TracingOptions`): Options for Tracing configuration.
 
  **tracingOptions.metadata** (`Record<string, any>`): Metadata to add to the root trace span. Useful for adding custom attributes like user IDs, session IDs, or feature flags.
@@ -26,6 +26,8 @@ if (result.status === 'suspended') {
 
  **retryCount** (`number`): Optional retry count for nested workflow execution
 
+ **forEachIndex** (`number`): Target a specific iteration of a suspended `.foreach()` step. Pass the zero-based index of the iteration you want to resume; other iterations remain suspended. Omit this field to resume all suspended iterations of the step with the same `resumeData`.
+
  **tracingContext** (`TracingContext`): Tracing context for creating child spans and adding metadata. Automatically injected when using Mastra's tracing system.
 
  **tracingContext.currentSpan** (`Span`): Current span for creating child spans and adding metadata. Use this to create custom child spans or update span attributes during execution.
@@ -67,6 +69,28 @@ if (result.status === 'suspended') {
 
  > **Note:** When exactly one step is suspended, you can omit the `step` parameter and the workflow will automatically resume that step. For workflows with multiple suspended steps, you must explicitly specify which step to resume.
 
+ ### Resuming a single [`.foreach()`](https://mastra.ai/reference/workflows/workflow-methods/foreach) iteration
+
+ When a [`.foreach()`](https://mastra.ai/reference/workflows/workflow-methods/foreach) step suspends across multiple iterations, use `forEachIndex` (zero-based) to resume one iteration at a time with different `resumeData`. Iterations not targeted by `forEachIndex` remain suspended until resumed.
+
+ ```typescript
+ // Resume only the second iteration (index 1) with its own data
+ await run.resume({
+   step: 'approve',
+   resumeData: { ok: true },
+   forEachIndex: 1,
+ })
+
+ // Later, resume the first iteration with different data
+ await run.resume({
+   step: 'approve',
+   resumeData: { ok: false },
+   forEachIndex: 0,
+ })
+ ```
+
+ If `forEachIndex` is omitted, every suspended iteration of the step is resumed with the same `resumeData`.
+
 
  ## Related
 
  - [Workflows overview](https://mastra.ai/docs/workflows/overview)
@@ -113,6 +113,19 @@ Each progress event payload contains:
 
  **iterationOutput** (`Record<string, any>`): Output of the iteration (present when iterationStatus is 'success')
 
+ ### Resuming a single iteration
+
+ When a step inside `.foreach()` suspends, each iteration suspends independently. Pass `forEachIndex` to [`run.resume()`](https://mastra.ai/reference/workflows/run-methods/resume) to resume one iteration at a time with its own `resumeData`. Omitting `forEachIndex` resumes every suspended iteration with the same data.
+
+ ```typescript
+ await run.resume({
+   step: 'approve',
+   resumeData: { ok: true },
+   forEachIndex: 1,
+ })
+ ```
+
 
  ## Related
 
- - [Looping with foreach](https://mastra.ai/docs/workflows/control-flow)
+ - [Looping with foreach](https://mastra.ai/docs/workflows/control-flow)
+ - [run.resume()](https://mastra.ai/reference/workflows/run-methods/resume)
package/CHANGELOG.md CHANGED
@@ -1,5 +1,12 @@
  # @mastra/mcp-docs-server
 
+ ## 1.1.26-alpha.19
+
+ ### Patch Changes
+
+ - Updated dependencies [[`aba393e`](https://github.com/mastra-ai/mastra/commit/aba393e2da7390c69b80e516a4f153cda6f09376), [`0a5fa1d`](https://github.com/mastra-ai/mastra/commit/0a5fa1d3cb0583889d06687155f26fd7d2edc76c), [`ea43e64`](https://github.com/mastra-ai/mastra/commit/ea43e646dd95d507694b6112b0bf1df22ad552b2), [`00d1b16`](https://github.com/mastra-ai/mastra/commit/00d1b16b401199cb294fa23f43336547db4dca9b), [`af8a57e`](https://github.com/mastra-ai/mastra/commit/af8a57ed9ba9685ad8601d5b71ae3706da6222f9)]:
+   - @mastra/core@1.26.0-alpha.10
+
  ## 1.1.26-alpha.17
 
  ### Patch Changes
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@mastra/mcp-docs-server",
-   "version": "1.1.26-alpha.18",
+   "version": "1.1.26-alpha.20",
    "description": "MCP server for accessing Mastra.ai documentation, changelogs, and news.",
    "type": "module",
    "main": "dist/index.js",
@@ -29,7 +29,7 @@
      "jsdom": "^26.1.0",
      "local-pkg": "^1.1.2",
      "zod": "^4.3.6",
-     "@mastra/core": "1.26.0-alpha.9",
+     "@mastra/core": "1.26.0-alpha.10",
      "@mastra/mcp": "^1.5.1-alpha.1"
    },
    "devDependencies": {
@@ -48,7 +48,7 @@
      "vitest": "4.0.18",
      "@internal/lint": "0.0.83",
      "@internal/types-builder": "0.0.58",
-     "@mastra/core": "1.26.0-alpha.9"
+     "@mastra/core": "1.26.0-alpha.10"
    },
    "homepage": "https://mastra.ai",
    "repository": {