@mastra/mcp-docs-server 1.1.29-alpha.4 → 1.1.29-alpha.8

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -270,6 +270,66 @@ const glutenCheckerScorer = createScorer({...})
  })
  ```
 
+ ## Input filtering
+
+ Agent conversations can contain hundreds of messages with tool calls, data parts, and system metadata. Most scorers only need a subset of this data. The `prepareRun` option transforms the run data before the scorer pipeline executes, reducing noise and keeping scorers focused.
+
+ ### Declarative filtering with `filterRun()`
+
+ The [`filterRun()`](https://mastra.ai/reference/evals/filter-run) utility creates a `prepareRun` function from declarative options:
+
+ ```typescript
+ import { createScorer, filterRun } from '@mastra/core/evals'
+
+ const toolScorer = createScorer({
+   id: 'tool-quality',
+   description: 'Evaluates tool usage quality',
+   type: 'agent',
+   prepareRun: filterRun({
+     partTypes: ['tool-invocation', 'text'],
+     maxRememberedMessages: 20,
+   }),
+ }).generateScore(({ run }) => {
+   // run.input.rememberedMessages has only tool and text messages, max 20
+   return 1
+ })
+ ```
+
+ Common options include:
+
+ - `partTypes`: Keep only messages with matching part types (e.g., `'tool-invocation'`, `'text'`, `'reasoning'`)
+ - `toolNames`: Keep only messages involving specific tools (e.g., `['write_file', 'execute_command']`)
+ - `maxRememberedMessages`: Limit context window size
+ - `dropRequestContext`, `dropGroundTruth`, `dropExpectedTrajectory`: Remove unused fields
+
+ See the [`filterRun()` reference](https://mastra.ai/reference/evals/filter-run) for the full list of options.
+
+ ### Custom `prepareRun` functions
+
+ For logic that `filterRun()` doesn't cover, write a `prepareRun` function directly:
+
+ ```typescript
+ import { createScorer } from '@mastra/core/evals'
+
+ const customScorer = createScorer({
+   id: 'recent-output',
+   description: 'Scores only the last response',
+   type: 'agent',
+   prepareRun: (run) => ({
+     ...run,
+     output: run.output.slice(-1), // Keep only the last message
+     requestContext: undefined,
+   }),
+ })
+   .generateScore(({ run }) => {
+     return run.output.length > 0 ? 1 : 0
+   })
+ ```
+
+ The `prepareRun` function can also be async.
+
+ > **System messages are always preserved:** `filterRun()` never filters `systemMessages` or `taggedSystemMessages`. These contain agent instructions and are critical context for scoring.
+
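The filtering this section describes can be pictured as a plain function over the run shape. The sketch below uses simplified stand-in types, not the actual `@mastra/core` implementation, to show the two core moves: keep messages whose parts match the allowed types, then cap the count from the most recent end:

```typescript
// Simplified stand-ins for the real scorer run types (illustration only).
interface Message {
  parts: { type: string; toolName?: string }[]
}
interface Run {
  rememberedMessages: Message[]
  output: Message[]
}

// Keep messages whose parts prefix-match an allowed type, then take the
// most recent `max` remembered messages, mirroring `partTypes` +
// `maxRememberedMessages` above.
function prepareRunSketch(run: Run, partTypes: string[], max: number): Run {
  const keep = (m: Message) =>
    m.parts.some((p) => partTypes.some((t) => p.type.startsWith(t)))
  return {
    rememberedMessages: run.rememberedMessages.filter(keep).slice(-max),
    output: run.output.filter(keep),
  }
}
```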
  ## Example: Create a custom scorer
 
  A custom scorer in Mastra uses `createScorer` with four core components:
@@ -21,9 +21,10 @@ Available providers:
  - [`LocalFilesystem`](https://mastra.ai/reference/workspace/local-filesystem): Stores files in a directory on disk
  - [`S3Filesystem`](https://mastra.ai/reference/workspace/s3-filesystem): Stores files in Amazon S3 or S3-compatible storage (R2, MinIO)
  - [`GCSFilesystem`](https://mastra.ai/reference/workspace/gcs-filesystem): Stores files in Google Cloud Storage
+ - [`AzureBlobFilesystem`](https://mastra.ai/reference/workspace/azure-blob-filesystem): Stores files in Azure Blob Storage
  - [`AgentFSFilesystem`](https://mastra.ai/reference/workspace/agentfs-filesystem): Stores files in a Turso/SQLite database via AgentFS
 
- > **Tip:** `LocalFilesystem` is the simplest way to get started as it requires no external services. For cloud storage, use `S3Filesystem` or `GCSFilesystem`. For database-backed storage without external services, use `AgentFSFilesystem`.
+ > **Tip:** `LocalFilesystem` is the simplest way to get started as it requires no external services. For cloud storage, use `S3Filesystem`, `GCSFilesystem`, or `AzureBlobFilesystem`. For database-backed storage without external services, use `AgentFSFilesystem`.
 
  ## Basic usage
 
@@ -219,6 +220,7 @@ When you configure a filesystem on a workspace, agents receive tools for reading
  - [LocalFilesystem reference](https://mastra.ai/reference/workspace/local-filesystem)
  - [S3Filesystem reference](https://mastra.ai/reference/workspace/s3-filesystem)
  - [GCSFilesystem reference](https://mastra.ai/reference/workspace/gcs-filesystem)
+ - [AzureBlobFilesystem reference](https://mastra.ai/reference/workspace/azure-blob-filesystem)
  - [AgentFSFilesystem reference](https://mastra.ai/reference/workspace/agentfs-filesystem)
  - [Workspace overview](https://mastra.ai/docs/workspace/overview)
  - [Sandbox](https://mastra.ai/docs/workspace/sandbox)
@@ -255,8 +255,10 @@ const { messages, sendMessage } = useChat({
  return {
    body: {
      messages: [messages[messages.length - 1]],
-     threadId: 'user-thread-123',
-     resourceId: 'user-123',
+     memory: {
+       thread: 'user-thread-123',
+       resource: 'user-123',
+     },
    },
  }
  },
@@ -264,7 +266,7 @@ const { messages, sendMessage } = useChat({
  })
  ```
 
- Set `threadId` and `resourceId` from your app's own state, such as URL params, auth context, or your database.
+ Set `memory.thread` and `memory.resource` from your app's own state, such as URL params, auth context, or your database.
 
  See [Message history](https://mastra.ai/docs/memory/message-history) for more on how Mastra memory loads and stores messages.
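The shape of the request body after this change can be sketched as a small standalone builder. The field names come from the diff above; the surrounding `useChat` wiring is assumed rather than shown:

```typescript
// Request body shape after migrating from threadId/resourceId to memory.{thread,resource}.
interface ChatRequestBody {
  messages: unknown[]
  memory: { thread: string; resource: string }
}

// Build the body from app state (e.g., URL params and auth context).
function buildBody(lastMessage: unknown, threadId: string, userId: string): ChatRequestBody {
  return {
    messages: [lastMessage], // send only the newest message; memory supplies history
    memory: { thread: threadId, resource: userId },
  }
}
```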
 
@@ -4,6 +4,30 @@ Azure OpenAI provides enterprise-grade access to OpenAI models through dedicated
 
  Unlike other providers that have fixed model names, Azure uses **deployment names** that you configure in the Azure Portal.
 
+ ## Usage
+
+ ```typescript
+ import { Agent } from "@mastra/core/agent";
+
+ const agent = new Agent({
+   id: "my-agent",
+   name: "My Agent",
+   instructions: "You are a helpful assistant",
+   model: "azure-openai/my-gpt4-deployment" // Use your Azure deployment name (autocompleted in dev mode)
+ });
+
+ // Generate a response
+ const response = await agent.generate("Hello!");
+
+ // Stream a response
+ const stream = await agent.stream("Tell me a story");
+ for await (const chunk of stream) {
+   console.log(chunk);
+ }
+ ```
+
+ Check [Azure OpenAI model availability](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models) for region-specific options.
+
  ## How Azure Deployments Work
 
  Azure model IDs follow this pattern: `azure-openai/your-deployment-name`
@@ -101,28 +125,4 @@ export const mastra = new Mastra({
  | `management.subscriptionId` | `string` | Yes\* | Azure subscription ID |
  | `management.resourceGroup` | `string` | Yes\* | Resource group name |
 
- \* Required if `management` is provided
-
- ## Usage
-
- ```typescript
- import { Agent } from "@mastra/core/agent";
-
- const agent = new Agent({
-   id: "my-agent",
-   name: "My Agent",
-   instructions: "You are a helpful assistant",
-   model: "azure-openai/my-gpt4-deployment" // Use your Azure deployment name (autocompleted in dev mode)
- });
-
- // Generate a response
- const response = await agent.generate("Hello!");
-
- // Stream a response
- const stream = await agent.stream("Tell me a story");
- for await (const chunk of stream) {
-   console.log(chunk);
- }
- ```
-
- Check [Azure OpenAI model availability](https://learn.microsoft.com/en-us/azure/ai-services/openai/concepts/models) for region-specific options.
+ \* Required if `management` is provided
@@ -0,0 +1,64 @@
+ # ![Mastra logo](https://mastra.ai/brand/logo.svg)Mastra
+
+ The Mastra Memory Gateway is an OpenAI-compatible API proxy with built-in [Observational Memory](https://gateway.mastra.ai/docs/features#observational-memory). Point any HTTP client, SDK, or framework at the gateway and every conversation is automatically remembered without any memory management code.
+
+ Learn more in the [Memory Gateway documentation](https://gateway.mastra.ai/docs).
+
+ ## Get an API key
+
+ Go to [gateway.mastra.ai](https://gateway.mastra.ai) and sign up for a Mastra account. During onboarding you'll get your personal API key to authenticate requests.
+
+ ## Usage
+
+ Define your API key as an environment variable:
+
+ ```bash
+ MASTRA_GATEWAY_API_KEY=your-gateway-key
+ ```
+
+ Set your gateway model ID:
+
+ ```typescript
+ import { Agent } from "@mastra/core/agent";
+
+ const agent = new Agent({
+   id: "my-agent",
+   name: "My Agent",
+   instructions: "You are a helpful assistant",
+   model: "mastra/openai/gpt-5-mini"
+ });
+ ```
+
+ Pass `memory.thread` and `memory.resource` when you generate or stream responses to enable Observational Memory:
+
+ ```typescript
+ import { weatherAgent } from "./agents/weather-agent";
+
+ const memory = {
+   thread: "assistant-thread-1",
+   resource: "user-42",
+ };
+
+ const result = await weatherAgent.stream("My name is Alex and I prefer concise answers.", {
+   memory,
+ });
+
+ for await (const chunk of result.textStream) {
+   process.stdout.write(chunk);
+ }
+ ```
+
+ ## Configuration
+
+ ```bash
+ # Use gateway API key
+ MASTRA_GATEWAY_API_KEY=your-gateway-key
+ ```
+
+ ## Learn more
+
+ - [Features](https://gateway.mastra.ai/docs/features)
+ - [Models](https://gateway.mastra.ai/docs/models)
+ - [Limits](https://gateway.mastra.ai/docs/limits)
+ - [API Reference](https://gateway.mastra.ai/docs/api/overview)
+ - [Examples](https://gateway.mastra.ai/docs/examples/)
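Because the gateway is OpenAI-compatible, any HTTP client can call it directly without the Mastra SDK. The sketch below builds such a request; the exact base URL path and auth header scheme are assumptions here, so confirm them against the gateway API reference before use:

```typescript
// Hypothetical request builder for an OpenAI-compatible chat completion call.
// The endpoint path and Bearer auth scheme are assumed, not confirmed.
function buildChatRequest(apiKey: string, model: string, content: string) {
  return {
    url: "https://gateway.mastra.ai/v1/chat/completions", // assumed endpoint
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`, // assumed auth header
      },
      body: JSON.stringify({ model, messages: [{ role: "user", content }] }),
    },
  }
}
```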
@@ -1,6 +1,6 @@
  # ![OpenRouter logo](https://models.dev/logos/openrouter.svg)OpenRouter
 
- OpenRouter aggregates models from multiple providers with enhanced features like rate limiting and failover. Access 177 models through Mastra's model router.
+ OpenRouter aggregates models from multiple providers with enhanced features like rate limiting and failover. Access 179 models through Mastra's model router.
 
  Learn more in the [OpenRouter documentation](https://openrouter.ai/models).
 
@@ -73,6 +73,7 @@ ANTHROPIC_API_KEY=ant-...
  | `google/gemini-2.5-pro-preview-06-05` |
  | `google/gemini-3-flash-preview` |
  | `google/gemini-3-pro-preview` |
+ | `google/gemini-3.1-flash-image-preview` |
  | `google/gemini-3.1-flash-lite-preview` |
  | `google/gemini-3.1-pro-preview` |
  | `google/gemini-3.1-pro-preview-customtools` |
@@ -163,6 +164,7 @@ ANTHROPIC_API_KEY=ant-...
  | `openai/o4-mini` |
  | `openrouter/elephant-alpha` |
  | `openrouter/free` |
+ | `openrouter/pareto-code` |
  | `prime-intellect/intellect-3` |
  | `qwen/qwen-2.5-coder-32b-instruct` |
  | `qwen/qwen2.5-vl-72b-instruct` |
@@ -9,6 +9,7 @@ Create custom gateways for private LLM deployments or specialized provider integ
  ## Built-in gateways
 
  - [Azure OpenAI](https://mastra.ai/models/gateways/azure-openai)
+ - [Mastra](https://mastra.ai/models/gateways/mastra)
  - [Netlify](https://mastra.ai/models/gateways/netlify)
  - [OpenRouter](https://mastra.ai/models/gateways/openrouter)
  - [Vercel](https://mastra.ai/models/gateways/vercel)
@@ -1,6 +1,6 @@
  # Model Providers
 
- Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to 3739 models from 104 providers through a single API.
+ Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to 3757 models from 106 providers through a single API.
 
  ## Features
 
@@ -27,7 +27,7 @@ const agent = new Agent({
    id: "my-agent",
    name: "My Agent",
    instructions: "You are a helpful assistant",
-   model: "openai/gpt-5"
+   model: "openai/gpt-5.5"
  })
  ```
 
@@ -40,7 +40,7 @@ const agent = new Agent({
    id: "my-agent",
    name: "My Agent",
    instructions: "You are a helpful assistant",
-   model: "anthropic/claude-4-5-sonnet"
+   model: "anthropic/claude-sonnet-4-6"
  })
  ```
 
@@ -79,7 +79,7 @@ const agent = new Agent({
    id: "my-agent",
    name: "My Agent",
    instructions: "You are a helpful assistant",
-   model: "openrouter/anthropic/claude-haiku-4-5"
+   model: "openrouter/anthropic/claude-haiku-4.5"
  })
  ```
 
@@ -0,0 +1,71 @@
+ # ![abliteration.ai logo](https://models.dev/logos/abliteration-ai.svg)abliteration.ai
+
+ Access 1 abliteration.ai model through Mastra's model router. Authentication is handled automatically using the `ABLIT_KEY` environment variable.
+
+ Learn more in the [abliteration.ai documentation](https://docs.abliteration.ai/models).
+
+ ```bash
+ ABLIT_KEY=your-api-key
+ ```
+
+ ```typescript
+ import { Agent } from "@mastra/core/agent";
+
+ const agent = new Agent({
+   id: "my-agent",
+   name: "My Agent",
+   instructions: "You are a helpful assistant",
+   model: "abliteration-ai/abliterated-model"
+ });
+
+ // Generate a response
+ const response = await agent.generate("Hello!");
+
+ // Stream a response
+ const stream = await agent.stream("Tell me a story");
+ for await (const chunk of stream) {
+   console.log(chunk);
+ }
+ ```
+
+ > **Info:** Mastra uses the OpenAI-compatible `/chat/completions` endpoint. Some provider-specific features may not be available. Check the [abliteration.ai documentation](https://docs.abliteration.ai/models) for details.
+
+ ## Models
+
+ | Model | Context | Tools | Reasoning | Image | Audio | Video | Input $/1M | Output $/1M |
+ | ----------------------------------- | ------- | ----- | --------- | ----- | ----- | ----- | ---------- | ----------- |
+ | `abliteration-ai/abliterated-model` | 150K | | | | | | $3 | $3 |
+
+ ## Advanced configuration
+
+ ### Custom headers
+
+ ```typescript
+ const agent = new Agent({
+   id: "custom-agent",
+   name: "custom-agent",
+   model: {
+     url: "https://api.abliteration.ai/v1",
+     id: "abliteration-ai/abliterated-model",
+     apiKey: process.env.ABLIT_KEY,
+     headers: {
+       "X-Custom-Header": "value"
+     }
+   }
+ });
+ ```
+
+ ### Dynamic model selection
+
+ ```typescript
+ const agent = new Agent({
+   id: "dynamic-agent",
+   name: "Dynamic Agent",
+   model: ({ requestContext }) => {
+     const useAdvanced = requestContext.task === "complex";
+     return useAdvanced
+       ? "abliteration-ai/abliterated-model"
+       : "abliteration-ai/abliterated-model";
+   }
+ });
+ ```
@@ -1,6 +1,6 @@
  # ![Alibaba (China) logo](https://models.dev/logos/alibaba-cn.svg)Alibaba (China)
 
- Access 78 Alibaba (China) models through Mastra's model router. Authentication is handled automatically using the `DASHSCOPE_API_KEY` environment variable.
+ Access 80 Alibaba (China) models through Mastra's model router. Authentication is handled automatically using the `DASHSCOPE_API_KEY` environment variable.
 
  Learn more in the [Alibaba (China) documentation](https://www.alibabacloud.com/help/en/model-studio/models).
 
@@ -45,6 +45,8 @@ for await (const chunk of stream) {
  | `alibaba-cn/deepseek-v3` | 66K | | | | | | $0.29 | $1 |
  | `alibaba-cn/deepseek-v3-1` | 131K | | | | | | $0.57 | $2 |
  | `alibaba-cn/deepseek-v3-2-exp` | 131K | | | | | | $0.29 | $0.43 |
+ | `alibaba-cn/deepseek-v4-flash` | 1.0M | | | | | | $0.14 | $0.28 |
+ | `alibaba-cn/deepseek-v4-pro` | 1.0M | | | | | | $2 | $3 |
  | `alibaba-cn/glm-5` | 203K | | | | | | $0.86 | $3 |
  | `alibaba-cn/glm-5.1` | 203K | | | | | | $0.87 | $3 |
  | `alibaba-cn/kimi-k2-thinking` | 262K | | | | | | $0.57 | $2 |
@@ -1,6 +1,6 @@
  # ![Alibaba logo](https://models.dev/logos/alibaba.svg)Alibaba
 
- Access 42 Alibaba models through Mastra's model router. Authentication is handled automatically using the `DASHSCOPE_API_KEY` environment variable.
+ Access 47 Alibaba models through Mastra's model router. Authentication is handled automatically using the `DASHSCOPE_API_KEY` environment variable.
 
  Learn more in the [Alibaba documentation](https://www.alibabacloud.com/help/en/model-studio/models).
 
@@ -72,8 +72,13 @@ for await (const chunk of stream) {
  | `alibaba/qwen3-vl-235b-a22b` | 131K | | | | | | $0.70 | $3 |
  | `alibaba/qwen3-vl-30b-a3b` | 131K | | | | | | $0.20 | $0.80 |
  | `alibaba/qwen3-vl-plus` | 262K | | | | | | $0.20 | $2 |
+ | `alibaba/qwen3.5-122b-a10b` | 262K | | | | | | $0.40 | $3 |
+ | `alibaba/qwen3.5-27b` | 262K | | | | | | $0.30 | $2 |
+ | `alibaba/qwen3.5-35b-a3b` | 262K | | | | | | $0.25 | $2 |
  | `alibaba/qwen3.5-397b-a17b` | 262K | | | | | | $0.60 | $4 |
  | `alibaba/qwen3.5-plus` | 1.0M | | | | | | $0.40 | $2 |
+ | `alibaba/qwen3.6-27b` | 262K | | | | | | $0.60 | $4 |
+ | `alibaba/qwen3.6-35b-a3b` | 262K | | | | | | $0.25 | $1 |
  | `alibaba/qwen3.6-plus` | 1.0M | | | | | | $0.28 | $2 |
  | `alibaba/qwq-plus` | 131K | | | | | | $0.80 | $2 |
 
@@ -1,6 +1,6 @@
  # ![Hugging Face logo](https://models.dev/logos/huggingface.svg)Hugging Face
 
- Access 23 Hugging Face models through Mastra's model router. Authentication is handled automatically using the `HF_TOKEN` environment variable.
+ Access 24 Hugging Face models through Mastra's model router. Authentication is handled automatically using the `HF_TOKEN` environment variable.
 
  Learn more in the [Hugging Face documentation](https://huggingface.co).
 
@@ -36,6 +36,7 @@ for await (const chunk of stream) {
  | ------------------------------------------------- | ------- | ----- | --------- | ----- | ----- | ----- | ---------- | ----------- |
  | `huggingface/deepseek-ai/DeepSeek-R1-0528` | 164K | | | | | | $3 | $5 |
  | `huggingface/deepseek-ai/DeepSeek-V3.2` | 164K | | | | | | $0.28 | $0.40 |
+ | `huggingface/deepseek-ai/DeepSeek-V4-Pro` | 1.0M | | | | | | $2 | $3 |
  | `huggingface/MiniMaxAI/MiniMax-M2.1` | 205K | | | | | | $0.30 | $1 |
  | `huggingface/MiniMaxAI/MiniMax-M2.5` | 205K | | | | | | $0.30 | $1 |
  | `huggingface/MiniMaxAI/MiniMax-M2.7` | 205K | | | | | | $0.30 | $1 |
@@ -1,6 +1,6 @@
  # ![NovitaAI logo](https://models.dev/logos/novita-ai.svg)NovitaAI
 
- Access 96 NovitaAI models through Mastra's model router. Authentication is handled automatically using the `NOVITA_API_KEY` environment variable.
+ Access 99 NovitaAI models through Mastra's model router. Authentication is handled automatically using the `NOVITA_API_KEY` environment variable.
 
  Learn more in the [NovitaAI documentation](https://novita.ai/docs/guides/introduction).
 
@@ -56,6 +56,8 @@ for await (const chunk of stream) {
  | `novita-ai/deepseek/deepseek-v3.1-terminus` | 131K | | | | | | $0.27 | $1 |
  | `novita-ai/deepseek/deepseek-v3.2` | 164K | | | | | | $0.27 | $0.40 |
  | `novita-ai/deepseek/deepseek-v3.2-exp` | 164K | | | | | | $0.27 | $0.41 |
+ | `novita-ai/deepseek/deepseek-v4-flash` | 1.0M | | | | | | $0.14 | $0.28 |
+ | `novita-ai/deepseek/deepseek-v4-pro` | 1.0M | | | | | | $2 | $3 |
  | `novita-ai/google/gemma-3-12b-it` | 131K | | | | | | $0.05 | $0.10 |
  | `novita-ai/google/gemma-3-27b-it` | 98K | | | | | | $0.12 | $0.20 |
  | `novita-ai/google/gemma-4-26b-a4b-it` | 262K | | | | | | $0.13 | $0.40 |
@@ -115,6 +117,7 @@ for await (const chunk of stream) {
  | `novita-ai/qwen/qwen3.5-27b` | 262K | | | | | | $0.30 | $2 |
  | `novita-ai/qwen/qwen3.5-35b-a3b` | 262K | | | | | | $0.25 | $2 |
  | `novita-ai/qwen/qwen3.5-397b-a17b` | 262K | | | | | | $0.60 | $4 |
+ | `novita-ai/qwen/qwen3.6-27b` | 262K | | | | | | $0.60 | $4 |
  | `novita-ai/sao10k/l3-70b-euryale-v2.1` | 8K | | | | | | $1 | $1 |
  | `novita-ai/sao10k/l3-8b-lunaris` | 8K | | | | | | $0.05 | $0.05 |
  | `novita-ai/sao10k/L3-8B-Stheno-v3.2` | 8K | | | | | | $0.05 | $0.05 |
@@ -1,6 +1,6 @@
  # ![Ollama Cloud logo](https://models.dev/logos/ollama-cloud.svg)Ollama Cloud
 
- Access 37 Ollama Cloud models through Mastra's model router. Authentication is handled automatically using the `OLLAMA_API_KEY` environment variable.
+ Access 39 Ollama Cloud models through Mastra's model router. Authentication is handled automatically using the `OLLAMA_API_KEY` environment variable.
 
  Learn more in the [Ollama Cloud documentation](https://docs.ollama.com/cloud).
 
@@ -37,6 +37,8 @@ for await (const chunk of stream) {
  | `ollama-cloud/cogito-2.1:671b` | 164K | | | | | | — | — |
  | `ollama-cloud/deepseek-v3.1:671b` | 164K | | | | | | — | — |
  | `ollama-cloud/deepseek-v3.2` | 164K | | | | | | — | — |
+ | `ollama-cloud/deepseek-v4-flash` | 1.0M | | | | | | — | — |
+ | `ollama-cloud/deepseek-v4-pro` | 1.0M | | | | | | — | — |
  | `ollama-cloud/devstral-2:123b` | 262K | | | | | | — | — |
  | `ollama-cloud/devstral-small-2:24b` | 262K | | | | | | — | — |
  | `ollama-cloud/gemini-3-flash-preview` | 1.0M | | | | | | — | — |
@@ -36,8 +36,8 @@ for await (const chunk of stream) {
  | ------------------------------- | ------- | ----- | --------- | ----- | ----- | ----- | ---------- | ----------- |
  | `opencode-go/deepseek-v4-flash` | 1.0M | | | | | | $0.14 | $0.28 |
  | `opencode-go/deepseek-v4-pro` | 1.0M | | | | | | $2 | $3 |
- | `opencode-go/glm-5` | 205K | | | | | | $1 | $3 |
- | `opencode-go/glm-5.1` | 205K | | | | | | $1 | $4 |
+ | `opencode-go/glm-5` | 203K | | | | | | $1 | $3 |
+ | `opencode-go/glm-5.1` | 203K | | | | | | $1 | $4 |
  | `opencode-go/kimi-k2.5` | 262K | | | | | | $0.60 | $3 |
  | `opencode-go/kimi-k2.6` | 262K | | | | | | $0.32 | $1 |
  | `opencode-go/mimo-v2-omni` | 262K | | | | | | $0.40 | $2 |
@@ -1,6 +1,6 @@
  # ![Poe logo](https://models.dev/logos/poe.svg)Poe
 
- Access 119 Poe models through Mastra's model router. Authentication is handled automatically using the `POE_API_KEY` environment variable.
+ Access 121 Poe models through Mastra's model router. Authentication is handled automatically using the `POE_API_KEY` environment variable.
 
  Learn more in the [Poe documentation](https://creator.poe.com/docs/external-applications/openai-compatible-api).
 
@@ -123,6 +123,8 @@ for await (const chunk of stream) {
  | `poe/openai/gpt-5.4-mini` | 400K | | | | | | $0.68 | $4 |
  | `poe/openai/gpt-5.4-nano` | 400K | | | | | | $0.18 | $1 |
  | `poe/openai/gpt-5.4-pro` | 1.1M | | | | | | $27 | $160 |
+ | `poe/openai/gpt-5.5` | 400K | | | | | | $5 | $27 |
+ | `poe/openai/gpt-5.5-pro` | 400K | | | | | | $27 | $164 |
  | `poe/openai/gpt-image-1` | 128K | | | | | | — | — |
  | `poe/openai/gpt-image-1-mini` | — | | | | | | — | — |
  | `poe/openai/gpt-image-1.5` | 128K | | | | | | — | — |
@@ -11,6 +11,7 @@ Direct access to individual AI model providers. Each provider offers unique mode
  - [xAI](https://mastra.ai/models/providers/xai)
  - [302.AI](https://mastra.ai/models/providers/302ai)
  - [Abacus](https://mastra.ai/models/providers/abacus)
+ - [abliteration.ai](https://mastra.ai/models/providers/abliteration-ai)
  - [Alibaba](https://mastra.ai/models/providers/alibaba)
  - [Alibaba (China)](https://mastra.ai/models/providers/alibaba-cn)
  - [Alibaba Coding Plan](https://mastra.ai/models/providers/alibaba-coding-plan)
@@ -51,6 +51,8 @@ const scorer = createScorer({
 
  **type** (`string`): Type specification for input/output. Use 'agent' for automatic agent types. For custom types, use the generic approach instead.
 
+ **prepareRun** (`(run: ScorerRun) => ScorerRun | Promise<ScorerRun>`): Transform the scorer run data before the pipeline executes. Use this to filter messages, limit context size, or drop fields the scorer doesn't need. The [`filterRun()`](https://mastra.ai/reference/evals/filter-run) utility creates this function from declarative options. Can be async.
+
  This function returns a scorer builder that you can chain step methods onto. See the [MastraScorer reference](https://mastra.ai/reference/evals/mastra-scorer) for details on the `.run()` method and its input/output.
 
  The judge only runs for steps defined as **prompt objects** (`preprocess`, `analyze`, `generateScore`, `generateReason` in prompt mode). If you use function steps only, the judge is never called and there is no LLM output to inspect. In that case, any score/reason must be produced by your functions.
@@ -0,0 +1,117 @@
+ # filterRun()
+
+ Creates a `prepareRun` function from declarative options. Pass the result to `createScorer()` to filter messages, limit context size, and drop unnecessary fields before the scorer pipeline runs.
+
+ Use [`filterRun()`](#usage-example) for declarative filtering. Write a custom `prepareRun` function directly when you need imperative logic that `filterRun()` doesn't cover. See [Custom scorers: input filtering](https://mastra.ai/docs/evals/custom-scorers) for more.
+
+ ## Usage example
+
+ The following example creates a scorer that only sees tool invocations and text messages, limited to the 20 most recent context messages:
+
+ ```typescript
+ import { createScorer, filterRun } from '@mastra/core/evals'
+
+ const toolScorer = createScorer({
+   id: 'tool-usage',
+   description: 'Evaluates tool usage patterns',
+   type: 'agent',
+   prepareRun: filterRun({
+     partTypes: ['tool-invocation', 'text'],
+     maxRememberedMessages: 20,
+   }),
+ }).generateScore(({ run }) => {
+   // run.input.rememberedMessages contains only tool and text messages
+   // run.output contains only tool and text messages
+   return 1
+ })
+ ```
+
+ ### Filter by tool name
+
+ Keep only messages involving specific tools:
+
+ ```typescript
+ import { createScorer, filterRun } from '@mastra/core/evals'
+
+ const fileEditScorer = createScorer({
+   id: 'file-edit-quality',
+   description: 'Evaluates file editing patterns',
+   type: 'agent',
+   prepareRun: filterRun({
+     toolNames: ['write_file', 'string_replace_lsp', 'view'],
+   }),
+ }).generateScore(({ run }) => {
+   // Only messages with these tool calls remain
+   return 1
+ })
+ ```
+
+ ### Drop fields
+
+ Remove fields the scorer doesn't need:
+
+ ```typescript
+ import { createScorer, filterRun } from '@mastra/core/evals'
+
+ const simpleScorer = createScorer({
+   id: 'response-length',
+   description: 'Checks response length',
+   type: 'agent',
+   prepareRun: filterRun({
+     dropRequestContext: true,
+     dropExpectedTrajectory: true,
+     dropGroundTruth: true,
+     maxOutputMessages: 5,
+   }),
+ }).generateScore(({ run }) => {
+   return run.output.length > 0 ? 1 : 0
+ })
+ ```
+
+ ## Parameters
+
+ **options** (`FilterRunOptions`): Configuration object that controls what data the scorer receives.
+
+ **options.partTypes** (`MastraPartType[]`): Keep only messages whose parts match these types. Each entry is prefix-matched against the message part's type. Plain text messages (no tool invocations) are always kept unless explicitly excluded. System messages and tagged system messages are never filtered.
+
+ **options.toolNames** (`string[]`): Keep only tool-invocation messages for these specific tools. Each entry is prefix-matched against the tool name. Non-tool messages (text, data) are unaffected.
+
+ **options.maxRememberedMessages** (`number`): Maximum number of messages to keep in remembered messages (context). Takes from the end (most recent). Applied after type and tool filtering.
+
+ **options.maxOutputMessages** (`number`): Maximum number of messages to keep in the output. Takes from the end. Applied after type and tool filtering.
+
+ **options.dropRequestContext** (`boolean`): Remove request context from the run entirely.
+
+ **options.dropExpectedTrajectory** (`boolean`): Remove expected trajectory from the run.
+
+ **options.dropGroundTruth** (`boolean`): Remove ground truth from the run.
+
+ **Returns:** `(run: ScorerRun) => ScorerRun` — A function suitable for the `prepareRun` option on [`createScorer()`](https://mastra.ai/reference/evals/create-scorer).
+
+ ## Part types
+
+ The `partTypes` option accepts `MastraPartType` values. Each value is prefix-matched, so `'data-'` matches all data part types.
+
+ | Type | Description |
+ | ------------------- | -------------------------------------------- |
+ | `'text'` | Text content parts |
+ | `'tool-invocation'` | Tool call and result parts |
+ | `'reasoning'` | Chain-of-thought reasoning parts |
+ | `'step-start'` | Step marker parts |
+ | `'image'` | Image parts |
+ | `'file'` | File parts |
+ | `'source'` | Source reference parts |
+ | `'source-document'` | Source document parts |
+ | `'data-'` | All data parts (matches any `data-*` prefix) |
+ | `'data-om-'` | Observational memory data parts |
+ | `'data-workspace-'` | Workspace data parts |
+ | `'data-sandbox-'` | Sandbox data parts |
+ | `'data-tool-'` | Tool-related data parts |
+
+ ## Filtering behavior
+
+ - **Part type filtering** applies to both `input.rememberedMessages` and `output` when they contain agent message arrays.
+ - **Tool name filtering** only affects messages that contain tool invocations. Text-only messages pass through.
+ - **System messages** (`systemMessages`, `taggedSystemMessages`) are never filtered, regardless of `partTypes` or `toolNames`.
+ - **Message limits** (`maxRememberedMessages`, `maxOutputMessages`) apply after type and tool filtering, taking the most recent messages.
+ - When both `partTypes` and `toolNames` are set, a message must satisfy both filters to be kept.
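The combined semantics described in that list (prefix matching, text-only messages passing the tool filter, AND of both filters) can be sketched as a single predicate. The shapes below are illustrative stand-ins, not the library's actual types:

```typescript
// Illustrative shapes only; the real MastraPartType and message types live in @mastra/core.
interface Part {
  type: string
  toolName?: string
}
interface Msg {
  parts: Part[]
}

// A message survives when it matches partTypes (prefix match on part type)
// AND, if it contains tool invocations, at least one tool name prefix-matches toolNames.
function keepMessage(msg: Msg, partTypes?: string[], toolNames?: string[]): boolean {
  const typeOk =
    !partTypes || msg.parts.some((p) => partTypes.some((t) => p.type.startsWith(t)))
  const toolParts = msg.parts.filter((p) => p.type.startsWith('tool-invocation'))
  const toolOk =
    !toolNames ||
    toolParts.length === 0 || // text-only messages pass through the tool filter
    toolParts.some((p) => toolNames.some((n) => (p.toolName ?? '').startsWith(n)))
  return typeOk && toolOk
}
```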
@@ -96,6 +96,7 @@ The Reference section provides documentation of Mastra's API, including paramete
  - [MastraEditor Class](https://mastra.ai/reference/editor/mastra-editor)
  - [ToolProvider](https://mastra.ai/reference/editor/tool-provider)
  - [createScorer()](https://mastra.ai/reference/evals/create-scorer)
+ - [filterRun()](https://mastra.ai/reference/evals/filter-run)
  - [MastraScorer](https://mastra.ai/reference/evals/mastra-scorer)
  - [runEvals()](https://mastra.ai/reference/evals/run-evals)
  - [Scorer Utils](https://mastra.ai/reference/evals/scorer-utils)
@@ -285,6 +286,7 @@ The Reference section provides documentation of Mastra's API, including paramete
  - [.startAsync()](https://mastra.ai/reference/workflows/run-methods/startAsync)
  - [.timeTravel()](https://mastra.ai/reference/workflows/run-methods/timeTravel)
  - [AgentFSFilesystem](https://mastra.ai/reference/workspace/agentfs-filesystem)
+ - [AzureBlobFilesystem](https://mastra.ai/reference/workspace/azure-blob-filesystem)
  - [BlaxelSandbox](https://mastra.ai/reference/workspace/blaxel-sandbox)
  - [DaytonaSandbox](https://mastra.ai/reference/workspace/daytona-sandbox)
  - [DockerSandbox](https://mastra.ai/reference/workspace/docker-sandbox)
@@ -153,8 +153,10 @@ async function manageClones() {
153
153
 
154
154
  // Have a conversation...
155
155
  await agent.generate("Hello! Let's discuss project options.", {
156
- threadId: originalThread.id,
157
- resourceId: 'user-123',
156
+ memory: {
157
+ thread: originalThread.id,
158
+ resource: 'user-123',
159
+ },
158
160
  })
159
161
 
160
162
  // Create multiple branches (clones) to explore different paths
@@ -93,8 +93,10 @@ const { thread: dateFilteredClone } = await memory.cloneThread({
93
93
 
94
94
  // Continue conversation on the cloned thread
95
95
  const response = await agent.generate("Let's try a different approach", {
96
- threadId: fullClone.id,
97
- resourceId: fullClone.resourceId,
96
+ memory: {
97
+ thread: fullClone.id,
98
+ resource: fullClone.resourceId,
99
+ },
98
100
  })
99
101
  ```
100
102
 
@@ -0,0 +1,219 @@
1
+ # AzureBlobFilesystem
2
+
3
+ Stores workspace files in an Azure Blob Storage container.
4
+
5
+ > **Info:** For interface details, see [WorkspaceFilesystem Interface](https://mastra.ai/reference/workspace/filesystem).
6
+
7
+ ## Installation
8
+
9
+ **npm**:
10
+
11
+ ```bash
12
+ npm install @mastra/azure
13
+ ```
14
+
15
+ **pnpm**:
16
+
17
+ ```bash
18
+ pnpm add @mastra/azure
19
+ ```
20
+
21
+ **Yarn**:
22
+
23
+ ```bash
24
+ yarn add @mastra/azure
25
+ ```
26
+
27
+ **Bun**:
28
+
29
+ ```bash
30
+ bun add @mastra/azure
31
+ ```
32
+
33
+ ## Usage
34
+
35
+ Add an `AzureBlobFilesystem` to a workspace and assign it to an agent:
36
+
37
+ ```typescript
38
+ import { Agent } from '@mastra/core/agent'
39
+ import { Workspace } from '@mastra/core/workspace'
40
+ import { AzureBlobFilesystem } from '@mastra/azure/blob'
41
+
42
+ const workspace = new Workspace({
43
+ filesystem: new AzureBlobFilesystem({
44
+ container: 'my-container',
45
+ connectionString: process.env.AZURE_STORAGE_CONNECTION_STRING,
46
+ }),
47
+ })
48
+
49
+ const agent = new Agent({
50
+ name: 'file-agent',
51
+ model: 'anthropic/claude-opus-4-6',
52
+ workspace,
53
+ })
54
+ ```
55
+
56
+ ### Account key
57
+
58
+ Use an account name and account key when you do not want to pass a full connection string:
59
+
60
+ ```typescript
61
+ import { AzureBlobFilesystem } from '@mastra/azure/blob'
62
+
63
+ const filesystem = new AzureBlobFilesystem({
64
+ container: 'my-container',
65
+ accountName: process.env.AZURE_STORAGE_ACCOUNT_NAME,
66
+ accountKey: process.env.AZURE_STORAGE_ACCOUNT_KEY,
67
+ })
68
+ ```
69
+
70
+ ### Shared access signature token
71
+
72
+ Use a shared access signature (SAS) token with either `accountName` or `endpoint`:
73
+
74
+ ```typescript
75
+ import { AzureBlobFilesystem } from '@mastra/azure/blob'
76
+
77
+ const filesystem = new AzureBlobFilesystem({
78
+ container: 'my-container',
79
+ accountName: process.env.AZURE_STORAGE_ACCOUNT_NAME,
80
+ sasToken: process.env.AZURE_STORAGE_SAS_TOKEN,
81
+ })
82
+ ```
83
+
84
+ ### DefaultAzureCredential
85
+
86
+ Use `DefaultAzureCredential` when your environment provides Azure identity credentials:
87
+
88
+ ```typescript
89
+ import { AzureBlobFilesystem } from '@mastra/azure/blob'
90
+
91
+ const filesystem = new AzureBlobFilesystem({
92
+ container: 'my-container',
93
+ accountName: process.env.AZURE_STORAGE_ACCOUNT_NAME,
94
+ useDefaultCredential: true,
95
+ })
96
+ ```
97
+
98
+ Install `@azure/identity` when you use `DefaultAzureCredential`:
99
+
100
+ **npm**:
101
+
102
+ ```bash
103
+ npm install @azure/identity
104
+ ```
105
+
106
+ **pnpm**:
107
+
108
+ ```bash
109
+ pnpm add @azure/identity
110
+ ```
111
+
112
+ **Yarn**:
113
+
114
+ ```bash
115
+ yarn add @azure/identity
116
+ ```
117
+
118
+ **Bun**:
119
+
120
+ ```bash
121
+ bun add @azure/identity
122
+ ```
123
+
124
+ ## Constructor parameters
125
+
126
+ **container** (`string`): Azure Blob Storage container name
127
+
128
+ **connectionString** (`string`): Full Azure Storage connection string. Takes priority over other authentication options.
129
+
130
+ **accountName** (`string`): Azure Storage account name. Required unless you use `connectionString` or `endpoint`.
131
+
132
+ **accountKey** (`string`): Azure Storage account key.
133
+
134
+ **sasToken** (`string`): Shared access signature token. Requires `accountName` or `endpoint`.
135
+
136
+ **useDefaultCredential** (`boolean`): Use `DefaultAzureCredential` from `@azure/identity`. (Default: `false`)
137
+
138
+ **endpoint** (`string`): Custom Blob service endpoint URL. Used for local development with Azurite.
139
+
140
+ **prefix** (`string`): Optional prefix for all keys (acts like a subdirectory).
141
+
142
+ **id** (`string`): Unique identifier for this filesystem instance. (Default: auto-generated)
143
+
144
+ **displayName** (`string`): Human-friendly display name for the UI.
145
+
146
+ **icon** (`FilesystemIcon`): Icon identifier for the UI.
147
+
148
+ **description** (`string`): Short description of this filesystem for the UI.
149
+
150
+ **readOnly** (`boolean`): When true, all write operations are blocked. (Default: `false`)
151
+
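The `endpoint` parameter is described above but has no usage example. A hedged sketch for local development against the Azurite emulator, assuming the option combination works as the parameter list implies (endpoint plus account-key auth) and using placeholder values you would adjust for your environment:

```typescript
import { AzureBlobFilesystem } from '@mastra/azure/blob'

// Assumed Azurite setup: endpoint and account name are the emulator's
// conventional defaults; the key comes from your local environment.
const filesystem = new AzureBlobFilesystem({
  container: 'dev-container',
  endpoint: 'http://127.0.0.1:10000/devstoreaccount1',
  accountName: 'devstoreaccount1',
  accountKey: process.env.AZURITE_ACCOUNT_KEY,
})
```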
152
+ ## Properties
153
+
154
+ **id** (`string`): Filesystem instance identifier.
155
+
156
+ **name** (`string`): Provider name ('AzureBlobFilesystem').
157
+
158
+ **provider** (`string`): Provider identifier ('azure-blob').
159
+
160
+ **readOnly** (`boolean | undefined`): Whether the filesystem is in read-only mode.
161
+
162
+ ## Methods
163
+
164
+ AzureBlobFilesystem implements the [WorkspaceFilesystem interface](https://mastra.ai/reference/workspace/filesystem), providing all standard filesystem methods:
165
+
166
+ - `readFile(path, options?)` - Read file contents
167
+ - `writeFile(path, content, options?)` - Write content to a file
168
+ - `appendFile(path, content)` - Append content to a file
169
+ - `deleteFile(path, options?)` - Delete a file
170
+ - `copyFile(src, dest, options?)` - Copy a file
171
+ - `moveFile(src, dest, options?)` - Move or rename a file
172
+ - `mkdir(path, options?)` - Create a directory
173
+ - `rmdir(path, options?)` - Remove a directory
174
+ - `readdir(path, options?)` - List directory contents
175
+ - `exists(path)` - Check if a path exists
176
+ - `stat(path)` - Get file or directory metadata
177
+
178
+ ### `init()`
179
+
180
+ Initialize the filesystem. Verifies container access and credentials.
181
+
182
+ ```typescript
183
+ await filesystem.init()
184
+ ```
185
+
186
+ ### `getInfo()`
187
+
188
+ Returns metadata about this filesystem instance.
189
+
190
+ ```typescript
191
+ const info = filesystem.getInfo()
192
+ // { id: '...', name: 'AzureBlobFilesystem', provider: 'azure-blob', status: 'ready' }
193
+ ```
194
+
195
+ ### `getContainer()`
196
+
197
+ Returns the underlying Azure Blob Storage `ContainerClient`.
198
+
199
+ ```typescript
200
+ const container = await filesystem.getContainer()
201
+ ```
202
+
203
+ ### `getMountConfig()`
204
+
205
+ Returns the mount configuration for sandbox providers that support Azure Blob filesystem mounts.
206
+
207
+ ```typescript
208
+ const config = filesystem.getMountConfig()
209
+ // { type: 'azure-blob', container: 'my-container', ... }
210
+ ```
211
+
212
+ > **Note:** Azure Blob sandbox mounting depends on sandbox support for `azure-blob` mount configs. Use `filesystem` for direct workspace file operations when your sandbox provider does not support Azure Blob mounts.
213
+
214
+ ## Related
215
+
216
+ - [WorkspaceFilesystem interface](https://mastra.ai/reference/workspace/filesystem)
217
+ - [S3Filesystem reference](https://mastra.ai/reference/workspace/s3-filesystem)
218
+ - [GCSFilesystem reference](https://mastra.ai/reference/workspace/gcs-filesystem)
219
+ - [Workspace overview](https://mastra.ai/docs/workspace/overview)
@@ -190,5 +190,6 @@ See [E2BSandbox reference](https://mastra.ai/reference/workspace/e2b-sandbox) fo
190
190
 
191
191
  - [WorkspaceFilesystem interface](https://mastra.ai/reference/workspace/filesystem)
192
192
  - [S3Filesystem reference](https://mastra.ai/reference/workspace/s3-filesystem)
193
+ - [AzureBlobFilesystem reference](https://mastra.ai/reference/workspace/azure-blob-filesystem)
193
194
  - [E2BSandbox reference](https://mastra.ai/reference/workspace/e2b-sandbox)
194
195
  - [Workspace overview](https://mastra.ai/docs/workspace/overview)
@@ -193,5 +193,6 @@ See [E2BSandbox reference](https://mastra.ai/reference/workspace/e2b-sandbox) fo
193
193
 
194
194
  - [WorkspaceFilesystem interface](https://mastra.ai/reference/workspace/filesystem)
195
195
  - [GCSFilesystem reference](https://mastra.ai/reference/workspace/gcs-filesystem)
196
+ - [AzureBlobFilesystem reference](https://mastra.ai/reference/workspace/azure-blob-filesystem)
196
197
  - [E2BSandbox reference](https://mastra.ai/reference/workspace/e2b-sandbox)
197
198
  - [Workspace overview](https://mastra.ai/docs/workspace/overview)
package/CHANGELOG.md CHANGED
@@ -1,5 +1,19 @@
1
1
  # @mastra/mcp-docs-server
2
2
 
3
+ ## 1.1.29-alpha.7
4
+
5
+ ### Patch Changes
6
+
7
+ - Updated dependencies [[`c04417b`](https://github.com/mastra-ai/mastra/commit/c04417ba0a2e4ded66da4352331ef29cd4bd1d79), [`cf25a03`](https://github.com/mastra-ai/mastra/commit/cf25a03132164b9dc1e5dccf7394824e33007c51), [`ba6b0c5`](https://github.com/mastra-ai/mastra/commit/ba6b0c51bfce358554fd33c7f2bcd5593633f2ff)]:
8
+ - @mastra/core@1.29.0-alpha.3
9
+
10
+ ## 1.1.29-alpha.5
11
+
12
+ ### Patch Changes
13
+
14
+ - Updated dependencies [[`9e973b0`](https://github.com/mastra-ai/mastra/commit/9e973b010dacfa15ac82b0072897319f5234b90a), [`dd934a0`](https://github.com/mastra-ai/mastra/commit/dd934a0982ce0f78712fbd559e4f2410bf594b39), [`73f2809`](https://github.com/mastra-ai/mastra/commit/73f2809721db24e98cdf122539652a455211b450), [`aedeea4`](https://github.com/mastra-ai/mastra/commit/aedeea48a94f728323f040478775076b9574be50), [`8126d86`](https://github.com/mastra-ai/mastra/commit/8126d8638411eacfafdc29036ac998e8757ea66f), [`ae97520`](https://github.com/mastra-ai/mastra/commit/ae975206fdb0f6ef03c4d5bf94f7dc7c3f706c02), [`441670a`](https://github.com/mastra-ai/mastra/commit/441670a02c9dc7731c52674f55481e7848a84523)]:
15
+ - @mastra/core@1.29.0-alpha.2
16
+
3
17
  ## 1.1.29-alpha.2
4
18
 
5
19
  ### Patch Changes
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "@mastra/mcp-docs-server",
3
- "version": "1.1.29-alpha.4",
3
+ "version": "1.1.29-alpha.8",
4
4
  "description": "MCP server for accessing Mastra.ai documentation, changelogs, and news.",
5
5
  "type": "module",
6
6
  "main": "dist/index.js",
@@ -29,8 +29,8 @@
29
29
  "jsdom": "^26.1.0",
30
30
  "local-pkg": "^1.1.2",
31
31
  "zod": "^4.3.6",
32
- "@mastra/mcp": "^1.5.2",
33
- "@mastra/core": "1.29.0-alpha.1"
32
+ "@mastra/core": "1.29.0-alpha.3",
33
+ "@mastra/mcp": "^1.5.2"
34
34
  },
35
35
  "devDependencies": {
36
36
  "@hono/node-server": "^1.19.11",
@@ -46,8 +46,8 @@
46
46
  "tsx": "^4.21.0",
47
47
  "typescript": "^5.9.3",
48
48
  "vitest": "4.1.5",
49
- "@mastra/core": "1.29.0-alpha.1",
50
49
  "@internal/types-builder": "0.0.61",
50
+ "@mastra/core": "1.29.0-alpha.3",
51
51
  "@internal/lint": "0.0.86"
52
52
  },
53
53
  "homepage": "https://mastra.ai",