@mastra/mcp-docs-server 1.1.10 → 1.1.11
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.docs/docs/agents/overview.md +1 -1
- package/.docs/docs/deployment/mastra-server.md +5 -5
- package/.docs/docs/getting-started/studio.md +1 -1
- package/.docs/docs/memory/observational-memory.md +23 -0
- package/.docs/docs/memory/overview.md +3 -3
- package/.docs/docs/server/custom-api-routes.md +1 -1
- package/.docs/docs/workspace/filesystem.md +3 -1
- package/.docs/guides/migrations/upgrade-to-v1/deployment.md +12 -1
- package/.docs/models/index.md +1 -1
- package/.docs/models/providers/baseten.md +2 -1
- package/.docs/models/providers/nano-gpt.md +3 -1
- package/.docs/models/providers/nebius.md +4 -3
- package/.docs/models/providers/ollama-cloud.md +2 -1
- package/.docs/models/providers/requesty.md +3 -1
- package/.docs/models/providers/wandb.md +26 -19
- package/.docs/models/providers/xai.md +27 -27
- package/.docs/reference/configuration.md +5 -5
- package/.docs/reference/core/mastra-class.md +1 -1
- package/.docs/reference/index.md +1 -0
- package/.docs/reference/memory/observational-memory.md +2 -0
- package/.docs/reference/workspace/agentfs-filesystem.md +110 -0
- package/CHANGELOG.md +16 -0
- package/package.json +6 -6
@@ -230,7 +230,7 @@ console.log(response.text)
 
 ## Using `maxSteps`
 
-The `maxSteps` parameter controls the maximum number of sequential LLM calls an agent can make. Each step includes generating a response, executing any tool calls, and processing the result. Limiting steps helps prevent infinite loops, reduce latency, and control token usage for agents that use tools. The default is
+The `maxSteps` parameter controls the maximum number of sequential LLM calls an agent can make. Each step includes generating a response, executing any tool calls, and processing the result. Limiting steps helps prevent infinite loops, reduce latency, and control token usage for agents that use tools. The default is 5, but can be increased:
 
 ```typescript
 const response = await testAgent.generate('Help me organize my day', {
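The step loop that `maxSteps` caps can be modeled with a small self-contained sketch (a hypothetical simplification, not Mastra's implementation; `runAgent`, `Model`, and `StepResult` are illustrative names): each step generates a response, runs any requested tool calls, and the loop ends when the model produces a final answer or the cap is reached.

```typescript
// Hypothetical model of an agent step loop capped by maxSteps (not Mastra's real code).
type StepResult = { text: string; toolCalls: string[] }
type Model = (history: string[]) => StepResult

function runAgent(model: Model, prompt: string, maxSteps = 5): { text: string; steps: number } {
  const history = [prompt]
  let steps = 0
  let last: StepResult = { text: '', toolCalls: [] }
  while (steps < maxSteps) {
    last = model(history)
    steps++
    if (last.toolCalls.length === 0) break // final answer, no more tools requested
    // Execute tool calls and feed results back into the history (stubbed here).
    history.push(...last.toolCalls.map((t) => `result of ${t}`))
  }
  return { text: last.text, steps }
}
```

A model that keeps requesting tools is cut off after exactly `maxSteps` iterations, while a model that answers immediately finishes in one step.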
@@ -94,11 +94,11 @@ The build follows these steps:
 
 The built server exposes endpoints for health checks, agents, workflows, and more:
 
-| Endpoint
-|
-| `GET /health`
-| `GET /openapi.json` | OpenAPI specification (if `server.build.openAPIDocs` is enabled)
-| `GET /swagger-ui`
+| Endpoint                | Description                                                            |
+| ----------------------- | ---------------------------------------------------------------------- |
+| `GET /health`           | Health check endpoint, returns `200 OK`                                |
+| `GET /api/openapi.json` | OpenAPI specification (if `server.build.openAPIDocs` is enabled).      |
+| `GET /swagger-ui`       | Interactive API documentation (if `server.build.swaggerUI` is enabled) |
 
 This list isn't exhaustive. To view all endpoints, run `mastra dev` and visit `http://localhost:4111/swagger-ui`.
 
@@ -83,7 +83,7 @@ The Scorers tab displays the results of your agent's scorers as they run. When m
 
 The local development server exposes a complete set of REST API routes, allowing you to programmatically interact with your agents, workflows, and tools during development. This is particularly helpful if you plan to deploy the Mastra server, since the local development server uses the exact same API routes as the [Mastra Server](https://mastra.ai/docs/server/mastra-server), allowing you to develop and test against it with full parity.
 
-You can explore all available endpoints in the OpenAPI specification at <http://localhost:4111/openapi.json>, which details every endpoint and its request and response schemas.
+You can explore all available endpoints in the OpenAPI specification at <http://localhost:4111/api/openapi.json>, which details every endpoint and its request and response schemas.
 
 To explore the API interactively, visit the Swagger UI at <http://localhost:4111/swagger-ui>. Here, you can discover endpoints and test them directly from your browser.
 
@@ -230,6 +230,29 @@ Setting `bufferTokens: false` disables both observation and reflection async buf
 
 > **Note:** Async buffering isn't supported with `scope: 'resource'`. It's automatically disabled in resource scope.
 
+## Observer Context Optimization
+
+By default, the Observer receives the full observation history as context when processing new messages. The Observer also receives prior `current-task` and `suggested-response` metadata (when available), so it can stay oriented even when observation context is truncated. For long-running conversations where observations grow large, you can opt into context optimization to reduce Observer input costs.
+
+Set `observation.previousObserverTokens` to limit how many tokens of previous observations are sent to the Observer. Observations are tail-truncated, keeping the most recent entries. When a buffered reflection is pending, the already-reflected lines are automatically replaced with the reflection summary before truncation is applied.
+
+```typescript
+const memory = new Memory({
+  options: {
+    observationalMemory: {
+      model: 'google/gemini-2.5-flash',
+      observation: {
+        previousObserverTokens: 10_000, // keep only ~10k tokens of recent observations
+      },
+    },
+  },
+})
+```
+
+- `previousObserverTokens: 2000` → default; keeps \~2k tokens of recent observations.
+- `previousObserverTokens: 0` → omit previous observations completely.
+- `previousObserverTokens: false` → disable truncation and keep full previous observations.
+
 ## Migrating existing threads
 
 No manual migration needed. OM reads existing messages and observes them lazily when thresholds are exceeded.
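The three `previousObserverTokens` settings described in this hunk can be illustrated with a simplified tail-truncation sketch (hypothetical; Mastra's actual implementation estimates tokens with `tokenx` and preserves highlighted items, whereas this sketch just approximates tokens as `length / 4`):

```typescript
// Hypothetical sketch of tail-truncating observations to a token budget.
// Token counts are approximated as characters / 4; the real estimator differs.
const estimateTokens = (s: string) => Math.ceil(s.length / 4)

function truncateObservations(observations: string[], budget: number | false): string[] {
  if (budget === false) return observations // truncation disabled: keep everything
  if (budget === 0) return [] // omit previous observations entirely
  const kept: string[] = []
  let used = 0
  // Walk from the newest observation backwards, keeping entries that fit the budget.
  for (let i = observations.length - 1; i >= 0; i--) {
    const cost = estimateTokens(observations[i])
    if (used + cost > budget) break
    kept.unshift(observations[i])
    used += cost
  }
  return kept
}
```

With a budget of 2 "tokens" and three 4-character observations, only the newest two survive; a budget of `0` drops all of them, and `false` keeps all three.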
@@ -5,9 +5,9 @@ Memory enables your agent to remember user messages, agent replies, and tool res
 Mastra supports four complementary memory types:
 
 - [**Message history**](https://mastra.ai/docs/memory/message-history) - keeps recent messages from the current conversation so they can be rendered in the UI and used to maintain short-term continuity within the exchange.
+- [**Observational memory**](https://mastra.ai/docs/memory/observational-memory) - uses background Observer and Reflector agents to maintain a dense observation log that replaces raw message history as it grows, keeping the context window small while preserving long-term memory across conversations.
 - [**Working memory**](https://mastra.ai/docs/memory/working-memory) - stores persistent, structured user data such as names, preferences, and goals.
 - [**Semantic recall**](https://mastra.ai/docs/memory/semantic-recall) - retrieves relevant messages from older conversations based on semantic meaning rather than exact keywords, mirroring how humans recall information by association. Requires a [vector database](https://mastra.ai/docs/memory/semantic-recall) and an [embedding model](https://mastra.ai/docs/memory/semantic-recall).
-- [**Observational memory**](https://mastra.ai/docs/memory/observational-memory) - uses background Observer and Reflector agents to maintain a dense observation log that replaces raw message history as it grows, keeping the context window small while preserving long-term memory across conversations.
 
 If the combined memory exceeds the model's context limit, [memory processors](https://mastra.ai/docs/memory/memory-processors) can filter, trim, or prioritize content so the most relevant information is preserved.
 
@@ -16,9 +16,9 @@ If the combined memory exceeds the model's context limit, [memory processors](ht
 Choose a memory option to get started:
 
 - [Message history](https://mastra.ai/docs/memory/message-history)
+- [Observational memory](https://mastra.ai/docs/memory/observational-memory)
 - [Working memory](https://mastra.ai/docs/memory/working-memory)
 - [Semantic recall](https://mastra.ai/docs/memory/semantic-recall)
-- [Observational memory](https://mastra.ai/docs/memory/observational-memory)
 
 ## Storage
 
@@ -41,5 +41,5 @@ This visibility helps you understand why an agent made specific decisions and ve
 ## Next steps
 
 - Learn more about [Storage](https://mastra.ai/docs/memory/storage) providers and configuration options
-- Add [Message history](https://mastra.ai/docs/memory/message-history), [
+- Add [Message history](https://mastra.ai/docs/memory/message-history), [Observational memory](https://mastra.ai/docs/memory/observational-memory), [Working memory](https://mastra.ai/docs/memory/working-memory), or [Semantic recall](https://mastra.ai/docs/memory/semantic-recall)
 - Visit [Memory configuration reference](https://mastra.ai/reference/memory/memory-class) for all available options
@@ -63,7 +63,7 @@ export const mastra = new Mastra({
 
 ## OpenAPI documentation
 
-Custom routes can include OpenAPI metadata to appear in the Swagger UI alongside Mastra server routes. Pass an `openapi` option with standard OpenAPI operation fields.
+Custom routes can include OpenAPI metadata to appear in the Swagger UI alongside Mastra server routes. You can access the OpenAPI spec at `/api/openapi.json`, where both custom routes and built-in routes are listed. Pass an `openapi` option with standard OpenAPI operation fields.
 
 ```typescript
 import { Mastra } from '@mastra/core'
@@ -21,8 +21,9 @@ Available providers:
 - [`LocalFilesystem`](https://mastra.ai/reference/workspace/local-filesystem) - Stores files in a directory on disk
 - [`S3Filesystem`](https://mastra.ai/reference/workspace/s3-filesystem) - Stores files in Amazon S3 or S3-compatible storage (R2, MinIO)
 - [`GCSFilesystem`](https://mastra.ai/reference/workspace/gcs-filesystem) - Stores files in Google Cloud Storage
+- [`AgentFSFilesystem`](https://mastra.ai/reference/workspace/agentfs-filesystem) - Stores files in a Turso/SQLite database via AgentFS
 
-> **Tip:** `LocalFilesystem` is the simplest way to get started as it requires no external services. For cloud storage, use `S3Filesystem` or `GCSFilesystem`.
+> **Tip:** `LocalFilesystem` is the simplest way to get started as it requires no external services. For cloud storage, use `S3Filesystem` or `GCSFilesystem`. For database-backed storage without external services, use `AgentFSFilesystem`.
 
 ## Basic usage
 
@@ -164,5 +165,6 @@ When you configure a filesystem on a workspace, agents receive tools for reading
 - [LocalFilesystem reference](https://mastra.ai/reference/workspace/local-filesystem)
 - [S3Filesystem reference](https://mastra.ai/reference/workspace/s3-filesystem)
 - [GCSFilesystem reference](https://mastra.ai/reference/workspace/gcs-filesystem)
+- [AgentFSFilesystem reference](https://mastra.ai/reference/workspace/agentfs-filesystem)
 - [Workspace overview](https://mastra.ai/docs/workspace/overview)
 - [Sandbox](https://mastra.ai/docs/workspace/sandbox)
@@ -1,9 +1,20 @@
 # Deployment
 
-The `CloudflareDeployer` configuration has been updated to use standard `wrangler.json` property names.
+The OpenAPI spec endpoint has moved to `/api/openapi.json`, and the `CloudflareDeployer` configuration has been updated to use standard `wrangler.json` property names.
 
 ## Changed
 
+### OpenAPI spec endpoint moved to `/api/openapi.json`
+
+The OpenAPI specification endpoint has moved from `/openapi.json` to `/api/openapi.json` to align with the `/api` prefix used by all built-in Mastra routes.
+
+If you have scripts or tools that consume the OpenAPI spec, update the URL:
+
+```diff
+- GET /openapi.json
++ GET /api/openapi.json
+```
+
 ### `CloudflareDeployer` uses standard `wrangler.json` property names
 
 The `CloudflareDeployer` constructor now accepts standard `wrangler.json` property names instead of custom camelCase variants. This change aligns the deployer with Cloudflare's official configuration format and provides access to all wrangler configuration options.
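For scripts that hard-code the old spec path, a small rewrite helper along these lines could automate the migration (`migrateOpenApiUrl` is a hypothetical convenience, not part of Mastra):

```typescript
// Hypothetical helper: move the old OpenAPI spec path under the new /api prefix.
function migrateOpenApiUrl(url: string): string {
  const u = new URL(url)
  if (u.pathname === '/openapi.json') {
    u.pathname = '/api/openapi.json' // new location aligned with built-in /api routes
  }
  return u.toString() // already-migrated URLs pass through unchanged
}
```

Running it over `http://localhost:4111/openapi.json` yields the new `/api/openapi.json` location, and it is idempotent on URLs that already use the new path.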
package/.docs/models/index.md
CHANGED
@@ -1,6 +1,6 @@
 # Model Providers
 
-Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to
+Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to 3259 models from 92 providers through a single API.
 
 ## Features
 
@@ -1,6 +1,6 @@
 # Baseten
 
-Access
+Access 10 Baseten models through Mastra's model router. Authentication is handled automatically using the `BASETEN_API_KEY` environment variable.
 
 Learn more in the [Baseten documentation](https://docs.baseten.co/development/model-apis/overview).
 
@@ -39,6 +39,7 @@ for await (const chunk of stream) {
 | `baseten/moonshotai/Kimi-K2-Instruct-0905` | 262K | | | | | | $0.60 | $3 |
 | `baseten/moonshotai/Kimi-K2-Thinking` | 262K | | | | | | $0.60 | $3 |
 | `baseten/moonshotai/Kimi-K2.5` | 262K | | | | | | $0.60 | $3 |
+| `baseten/nvidia/Nemotron-3-Super` | 262K | | | | | | $0.30 | $0.75 |
 | `baseten/Qwen/Qwen3-Coder-480B-A35B-Instruct` | 262K | | | | | | $0.38 | $2 |
 | `baseten/zai-org/GLM-4.6` | 200K | | | | | | $0.60 | $2 |
 | `baseten/zai-org/GLM-4.7` | 205K | | | | | | $0.60 | $2 |
@@ -1,6 +1,6 @@
 # NanoGPT
 
-Access
+Access 516 NanoGPT models through Mastra's model router. Authentication is handled automatically using the `NANO_GPT_API_KEY` environment variable.
 
 Learn more in the [NanoGPT documentation](https://docs.nano-gpt.com).
 
@@ -546,6 +546,8 @@ for await (const chunk of stream) {
 | `nano-gpt/z-ai/glm-4.6` | 200K | | | | | | $0.40 | $2 |
 | `nano-gpt/z-ai/glm-4.6:thinking` | 200K | | | | | | $0.40 | $2 |
 | `nano-gpt/z-image-turbo` | — | | | | | | — | — |
+| `nano-gpt/zai-org/glm-4.7` | 200K | | | | | | $0.15 | $0.80 |
+| `nano-gpt/zai-org/glm-4.7-flash` | 200K | | | | | | $0.07 | $0.40 |
 | `nano-gpt/zai-org/glm-5` | 200K | | | | | | $0.30 | $3 |
 | `nano-gpt/zai-org/glm-5:thinking` | 200K | | | | | | $0.30 | $3 |
 
@@ -1,6 +1,6 @@
 # Nebius Token Factory
 
-Access
+Access 48 Nebius Token Factory models through Mastra's model router. Authentication is handled automatically using the `NEBIUS_API_KEY` environment variable.
 
 Learn more in the [Nebius Token Factory documentation](https://docs.tokenfactory.nebius.com/).
 
@@ -56,10 +56,11 @@ for await (const chunk of stream) {
 | `nebius/MiniMaxAI/MiniMax-M2.1` | 128K | | | | | | $0.30 | $1 |
 | `nebius/moonshotai/Kimi-K2-Instruct` | 200K | | | | | | $0.50 | $2 |
 | `nebius/moonshotai/Kimi-K2-Thinking` | 128K | | | | | | $0.60 | $3 |
-| `nebius/moonshotai/Kimi-K2.5` |
+| `nebius/moonshotai/Kimi-K2.5` | 256K | | | | | | $0.50 | $3 |
 | `nebius/NousResearch/Hermes-4-405B` | 128K | | | | | | $1 | $3 |
 | `nebius/NousResearch/Hermes-4-70B` | 128K | | | | | | $0.13 | $0.40 |
 | `nebius/nvidia/Llama-3_1-Nemotron-Ultra-253B-v1` | 128K | | | | | | $0.60 | $2 |
+| `nebius/nvidia/Nemotron-3-Super-120B-A12B` | 256K | | | | | | $0.30 | $0.90 |
 | `nebius/nvidia/Nemotron-Nano-V2-12b` | 32K | | | | | | $0.07 | $0.20 |
 | `nebius/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B` | 32K | | | | | | $0.06 | $0.24 |
 | `nebius/openai/gpt-oss-120b` | 128K | | | | | | $0.15 | $0.60 |
@@ -80,7 +81,7 @@ for await (const chunk of stream) {
 | `nebius/zai-org/GLM-4.5` | 128K | | | | | | $0.60 | $2 |
 | `nebius/zai-org/GLM-4.5-Air` | 128K | | | | | | $0.20 | $1 |
 | `nebius/zai-org/GLM-4.7-FP8` | 128K | | | | | | $0.40 | $2 |
-| `nebius/zai-org/GLM-5` |
+| `nebius/zai-org/GLM-5` | 200K | | | | | | $1 | $3 |
 
 ## Advanced configuration
 
@@ -1,6 +1,6 @@
 # Ollama Cloud
 
-Access
+Access 33 Ollama Cloud models through Mastra's model router. Authentication is handled automatically using the `OLLAMA_API_KEY` environment variable.
 
 Learn more in the [Ollama Cloud documentation](https://docs.ollama.com/cloud).
 
@@ -59,6 +59,7 @@ for await (const chunk of stream) {
 | `ollama-cloud/ministral-3:8b` | 262K | | | | | | — | — |
 | `ollama-cloud/mistral-large-3:675b` | 262K | | | | | | — | — |
 | `ollama-cloud/nemotron-3-nano:30b` | 1.0M | | | | | | — | — |
+| `ollama-cloud/nemotron-3-super` | 262K | | | | | | — | — |
 | `ollama-cloud/qwen3-coder-next` | 262K | | | | | | — | — |
 | `ollama-cloud/qwen3-coder:480b` | 262K | | | | | | — | — |
 | `ollama-cloud/qwen3-next:80b` | 262K | | | | | | — | — |
@@ -1,6 +1,6 @@
 # Requesty
 
-Access
+Access 38 Requesty models through Mastra's model router. Authentication is handled automatically using the `REQUESTY_API_KEY` environment variable.
 
 Learn more in the [Requesty documentation](https://requesty.ai/solution/llm-routing/models).
 
@@ -39,8 +39,10 @@ for await (const chunk of stream) {
 | `requesty/anthropic/claude-opus-4` | 200K | | | | | | $15 | $75 |
 | `requesty/anthropic/claude-opus-4-1` | 200K | | | | | | $15 | $75 |
 | `requesty/anthropic/claude-opus-4-5` | 200K | | | | | | $5 | $25 |
+| `requesty/anthropic/claude-opus-4-6` | 1.0M | | | | | | $5 | $25 |
 | `requesty/anthropic/claude-sonnet-4` | 200K | | | | | | $3 | $15 |
 | `requesty/anthropic/claude-sonnet-4-5` | 1.0M | | | | | | $3 | $15 |
+| `requesty/anthropic/claude-sonnet-4-6` | 1.0M | | | | | | $3 | $15 |
 | `requesty/google/gemini-2.5-flash` | 1.0M | | | | | | $0.30 | $3 |
 | `requesty/google/gemini-2.5-pro` | 1.0M | | | | | | $1 | $10 |
 | `requesty/google/gemini-3-flash-preview` | 1.0M | | | | | | $0.50 | $3 |
@@ -1,8 +1,8 @@
 # Weights & Biases
 
-Access
+Access 17 Weights & Biases models through Mastra's model router. Authentication is handled automatically using the `WANDB_API_KEY` environment variable.
 
-Learn more in the [Weights & Biases documentation](https://
+Learn more in the [Weights & Biases documentation](https://docs.wandb.ai).
 
 ```bash
 WANDB_API_KEY=your-api-key
@@ -15,7 +15,7 @@ const agent = new Agent({
   id: "my-agent",
   name: "My Agent",
   instructions: "You are a helpful assistant",
-  model: "wandb/
+  model: "wandb/MiniMaxAI/MiniMax-M2.5"
 });
 
 // Generate a response
@@ -28,22 +28,29 @@ for await (const chunk of stream) {
 }
 ```
 
-> **Info:** Mastra uses the OpenAI-compatible `/chat/completions` endpoint. Some provider-specific features may not be available. Check the [Weights & Biases documentation](https://
+> **Info:** Mastra uses the OpenAI-compatible `/chat/completions` endpoint. Some provider-specific features may not be available. Check the [Weights & Biases documentation](https://docs.wandb.ai) for details.
 
 ## Models
 
-| Model
-|
-| `wandb/deepseek-ai/DeepSeek-
-| `wandb/
-| `wandb/meta-llama/Llama-3.1-8B-Instruct`
-| `wandb/meta-llama/Llama-3.3-70B-Instruct`
-| `wandb/meta-llama/Llama-4-Scout-17B-16E-Instruct`
-| `wandb/microsoft/Phi-4-mini-instruct`
-| `wandb/
-| `wandb/
-| `wandb/
-| `wandb/
+| Model | Context | Tools | Reasoning | Image | Audio | Video | Input $/1M | Output $/1M |
+| ---------------------------------------------------- | ------- | ----- | --------- | ----- | ----- | ----- | ---------- | ----------- |
+| `wandb/deepseek-ai/DeepSeek-V3.1` | 161K | | | | | | $0.55 | $2 |
+| `wandb/meta-llama/Llama-3.1-70B-Instruct` | 128K | | | | | | $0.80 | $0.80 |
+| `wandb/meta-llama/Llama-3.1-8B-Instruct` | 128K | | | | | | $0.22 | $0.22 |
+| `wandb/meta-llama/Llama-3.3-70B-Instruct` | 128K | | | | | | $0.71 | $0.71 |
+| `wandb/meta-llama/Llama-4-Scout-17B-16E-Instruct` | 64K | | | | | | $0.17 | $0.66 |
+| `wandb/microsoft/Phi-4-mini-instruct` | 128K | | | | | | $0.08 | $0.35 |
+| `wandb/MiniMaxAI/MiniMax-M2.5` | 197K | | | | | | $0.30 | $1 |
+| `wandb/moonshotai/Kimi-K2.5` | 262K | | | | | | $0.50 | $3 |
+| `wandb/nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-FP8` | 262K | | | | | | $0.20 | $0.80 |
+| `wandb/openai/gpt-oss-120b` | 131K | | | | | | $0.15 | $0.60 |
+| `wandb/openai/gpt-oss-20b` | 131K | | | | | | $0.05 | $0.20 |
+| `wandb/OpenPipe/Qwen3-14B-Instruct` | 33K | | | | | | $0.05 | $0.22 |
+| `wandb/Qwen/Qwen3-235B-A22B-Instruct-2507` | 262K | | | | | | $0.10 | $0.10 |
+| `wandb/Qwen/Qwen3-235B-A22B-Thinking-2507` | 262K | | | | | | $0.10 | $0.10 |
+| `wandb/Qwen/Qwen3-30B-A3B-Instruct-2507` | 262K | | | | | | $0.10 | $0.30 |
+| `wandb/Qwen/Qwen3-Coder-480B-A35B-Instruct` | 262K | | | | | | $1 | $2 |
+| `wandb/zai-org/GLM-5-FP8` | 200K | | | | | | $1 | $3 |
 
 ## Advanced configuration
 
@@ -55,7 +62,7 @@ const agent = new Agent({
   name: "custom-agent",
   model: {
     url: "https://api.inference.wandb.ai/v1",
-    id: "wandb/
+    id: "wandb/MiniMaxAI/MiniMax-M2.5",
     apiKey: process.env.WANDB_API_KEY,
     headers: {
       "X-Custom-Header": "value"
@@ -73,8 +80,8 @@ const agent = new Agent({
   model: ({ requestContext }) => {
     const useAdvanced = requestContext.task === "complex";
     return useAdvanced
-      ? "wandb/
-      : "wandb/
+      ? "wandb/zai-org/GLM-5-FP8"
+      : "wandb/MiniMaxAI/MiniMax-M2.5";
   }
 });
 ```
@@ -30,33 +30,33 @@ for await (const chunk of stream) {
 
 ## Models
 
-| Model
-|
-| `xai/grok-2`
-| `xai/grok-2-1212`
-| `xai/grok-2-latest`
-| `xai/grok-2-vision`
-| `xai/grok-2-vision-1212`
-| `xai/grok-2-vision-latest`
-| `xai/grok-3`
-| `xai/grok-3-fast`
-| `xai/grok-3-fast-latest`
-| `xai/grok-3-latest`
-| `xai/grok-3-mini`
-| `xai/grok-3-mini-fast`
-| `xai/grok-3-mini-fast-latest`
-| `xai/grok-3-mini-latest`
-| `xai/grok-4`
-| `xai/grok-4-1-fast`
-| `xai/grok-4-1-fast-non-reasoning`
-| `xai/grok-4-fast`
-| `xai/grok-4-fast-non-reasoning`
-| `xai/grok-4.20-
-| `xai/grok-4.20-
-| `xai/grok-4.20-multi-agent-
-| `xai/grok-beta`
-| `xai/grok-code-fast-1`
-| `xai/grok-vision-beta`
+| Model | Context | Tools | Reasoning | Image | Audio | Video | Input $/1M | Output $/1M |
+| ----------------------------------------- | ------- | ----- | --------- | ----- | ----- | ----- | ---------- | ----------- |
+| `xai/grok-2` | 131K | | | | | | $2 | $10 |
+| `xai/grok-2-1212` | 131K | | | | | | $2 | $10 |
+| `xai/grok-2-latest` | 131K | | | | | | $2 | $10 |
+| `xai/grok-2-vision` | 8K | | | | | | $2 | $10 |
+| `xai/grok-2-vision-1212` | 8K | | | | | | $2 | $10 |
+| `xai/grok-2-vision-latest` | 8K | | | | | | $2 | $10 |
+| `xai/grok-3` | 131K | | | | | | $3 | $15 |
+| `xai/grok-3-fast` | 131K | | | | | | $5 | $25 |
+| `xai/grok-3-fast-latest` | 131K | | | | | | $5 | $25 |
+| `xai/grok-3-latest` | 131K | | | | | | $3 | $15 |
+| `xai/grok-3-mini` | 131K | | | | | | $0.30 | $0.50 |
+| `xai/grok-3-mini-fast` | 131K | | | | | | $0.60 | $4 |
+| `xai/grok-3-mini-fast-latest` | 131K | | | | | | $0.60 | $4 |
+| `xai/grok-3-mini-latest` | 131K | | | | | | $0.30 | $0.50 |
+| `xai/grok-4` | 256K | | | | | | $3 | $15 |
+| `xai/grok-4-1-fast` | 2.0M | | | | | | $0.20 | $0.50 |
+| `xai/grok-4-1-fast-non-reasoning` | 2.0M | | | | | | $0.20 | $0.50 |
+| `xai/grok-4-fast` | 2.0M | | | | | | $0.20 | $0.50 |
+| `xai/grok-4-fast-non-reasoning` | 2.0M | | | | | | $0.20 | $0.50 |
+| `xai/grok-4.20-beta-latest-non-reasoning` | 2.0M | | | | | | $2 | $6 |
+| `xai/grok-4.20-beta-latest-reasoning` | 2.0M | | | | | | $2 | $6 |
+| `xai/grok-4.20-multi-agent-beta-latest` | 2.0M | | | | | | $2 | $6 |
+| `xai/grok-beta` | 131K | | | | | | $5 | $15 |
+| `xai/grok-code-fast-1` | 256K | | | | | | $0.20 | $2 |
+| `xai/grok-vision-beta` | 8K | | | | | | $5 | $15 |
 
 ## Advanced configuration
 
@@ -568,11 +568,11 @@ export const mastra = new Mastra({
 
 Build-time configuration for server features. These options control development tools like Swagger UI and request logging, which are enabled during local development but disabled in production by default.
 
-| Property | Type | Default | Description
-| ------------- | --------- | ------- |
-| `swaggerUI` | `boolean` | `false` | Enable Swagger UI at `/swagger-ui` for interactive API exploration (requires `openAPIDocs` to be `true`)
-| `apiReqLogs` | `boolean` | `false` | Enable API request logging to the console
-| `openAPIDocs` | `boolean` | `false` | Enable OpenAPI specification at `/openapi.json`
+| Property | Type | Default | Description |
+| ------------- | --------- | ------- | ----------- |
+| `swaggerUI` | `boolean` | `false` | Enable Swagger UI at `/swagger-ui` for interactive API exploration (requires `openAPIDocs` to be `true`) |
+| `apiReqLogs` | `boolean` | `false` | Enable API request logging to the console |
+| `openAPIDocs` | `boolean` | `false` | Enable OpenAPI specification at `/api/openapi.json`. Built-in Mastra routes use `servers: [{url: "/api"}]` while custom routes get a per-path `servers: [{url: "/"}]` override. |
 
 ```typescript
 import { Mastra } from '@mastra/core'
@@ -55,7 +55,7 @@ Visit the [Configuration reference](https://mastra.ai/reference/configuration) f
 
 **mcpServers** (`Record<string, MCPServerBase>`): An object where keys are registry keys (used for getMCPServer()) and values are instances of MCPServer or classes extending MCPServerBase. Each MCPServer must have an id property. Servers can be retrieved by registry key using getMCPServer() or by their intrinsic id using getMCPServerById().
 
-**bundler** (`BundlerConfig`): Configuration for the asset bundler with options for externals, sourcemap, and transpilePackages
+**bundler** (`BundlerConfig`): Configuration for the asset bundler with options for externals, sourcemap, transpilePackages, and dynamicPackages. (Default: `{ externals: [], sourcemap: false, transpilePackages: [], dynamicPackages: [] }`)
 
 **scorers** (`Record<string, Scorer>`): Scorers for evaluating agent responses and workflow outputs (Default: `{}`)
 
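Spelled out as a config fragment, the documented bundler default would look roughly like this (a sketch based on the default listed in the diff; the inline comments paraphrase the option names and are assumptions, since the diff does not define them):

```typescript
import { Mastra } from '@mastra/core'

export const mastra = new Mastra({
  bundler: {
    externals: [],         // packages left out of the server bundle (assumed meaning)
    sourcemap: false,      // emit source maps for the built server (assumed meaning)
    transpilePackages: [], // packages run through the transpiler (assumed meaning)
    dynamicPackages: [],   // packages resolved dynamically rather than bundled (assumed meaning)
  },
})
```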
package/.docs/reference/index.md
CHANGED
@@ -266,6 +266,7 @@ The Reference section provides documentation of Mastra's API, including paramete
 - [.start()](https://mastra.ai/reference/workflows/run-methods/start)
 - [.startAsync()](https://mastra.ai/reference/workflows/run-methods/startAsync)
 - [.timeTravel()](https://mastra.ai/reference/workflows/run-methods/timeTravel)
+- [AgentFSFilesystem](https://mastra.ai/reference/workspace/agentfs-filesystem)
 - [BlaxelSandbox](https://mastra.ai/reference/workspace/blaxel-sandbox)
 - [DaytonaSandbox](https://mastra.ai/reference/workspace/daytona-sandbox)
 - [E2BSandbox](https://mastra.ai/reference/workspace/e2b-sandbox)
@@ -60,6 +60,8 @@ OM performs thresholding with fast local token estimation. Text uses `tokenx`, a

**observation.blockAfter** (`number`): Token threshold above which synchronous (blocking) observation is forced. Between `messageTokens` and `blockAfter`, only async buffering/activation is used. Above `blockAfter`, a synchronous observation runs as a last resort, while buffered activation still preserves a minimum remaining context (min(1000, retention floor)). Accepts a multiplier (1 < value < 2, multiplied by `messageTokens`) or an absolute token count (≥ 2, must be greater than `messageTokens`). Only relevant when `bufferTokens` is set. Defaults to `1.2` when async buffering is enabled.

**observation.previousObserverTokens** (`number | false`): Optional token budget for the observer's previous-observations context. When set to a number, the observations passed to the Observer agent are tail-truncated to fit within this budget while keeping the newest observations and preserving highlighted 🔴 items when possible. When a buffered reflection is pending, the already-reflected observation lines are automatically replaced with the reflection summary before truncation. Set to `0` to omit previous observations entirely, or `false` to disable truncation explicitly.

**reflection** (`ObservationalMemoryReflectionConfig`): Configuration for the reflection step. Controls when the Reflector agent runs and how it behaves.

**reflection.model** (`string | LanguageModel | DynamicModel | ModelWithRetries[]`): Model for the Reflector agent. Cannot be set if a top-level `model` is also provided. If neither this nor the top-level `model` is set, falls back to `observation.model`.
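The multiplier-versus-absolute interpretation of `blockAfter` can be sketched as a small resolver. This helper is purely illustrative: `resolveBlockAfter` is not part of Mastra's API, it only encodes the rules documented above (values strictly between 1 and 2 multiply `messageTokens`; values of 2 or more are absolute counts that must exceed `messageTokens`).

```typescript
// Hypothetical helper illustrating the documented `blockAfter` semantics.
// Not part of @mastra/core; shown only to make the two value ranges concrete.
function resolveBlockAfter(blockAfter: number, messageTokens: number): number {
  if (blockAfter > 1 && blockAfter < 2) {
    // Multiplier form: scale the messageTokens threshold.
    return Math.round(blockAfter * messageTokens)
  }
  if (blockAfter >= 2) {
    // Absolute form: must sit above the messageTokens threshold.
    if (blockAfter <= messageTokens) {
      throw new Error('absolute blockAfter must be greater than messageTokens')
    }
    return blockAfter
  }
  throw new Error('blockAfter must be a multiplier (1..2) or an absolute count (>= 2)')
}

// The default of 1.2 on a 30,000-token message budget:
console.log(resolveBlockAfter(1.2, 30_000)) // 36000
```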
@@ -0,0 +1,110 @@
# AgentFSFilesystem

Stores files in a Turso/SQLite database via the [AgentFS](https://github.com/nichochar/agentfs) SDK. Files are persisted across sessions in a local SQLite database, giving agents durable storage without external cloud services.

> **Info:** For interface details, see [WorkspaceFilesystem Interface](https://mastra.ai/reference/workspace/filesystem).

## Installation

**npm**:

```bash
npm install @mastra/agentfs
```

**pnpm**:

```bash
pnpm add @mastra/agentfs
```

**Yarn**:

```bash
yarn add @mastra/agentfs
```

**Bun**:

```bash
bun add @mastra/agentfs
```

## Usage

Add an `AgentFSFilesystem` to a workspace and assign it to an agent. You must provide at least one of `agentId`, `path`, or `agent` when instantiating the `AgentFSFilesystem` class.

```typescript
import { Agent } from '@mastra/core/agent'
import { Workspace } from '@mastra/core/workspace'
import { AgentFSFilesystem } from '@mastra/agentfs'

const workspace = new Workspace({
  filesystem: new AgentFSFilesystem({
    agentId: 'my-agent',
  }),
})

const agent = new Agent({
  name: 'file-agent',
  model: 'openai/gpt-5.4',
  workspace,
})
```

### Using an explicit database path

By default, databases are stored inside the `.agentfs` directory with the `agentId` as the filename. You can specify a custom database path:

```typescript
import { AgentFSFilesystem } from '@mastra/agentfs'

const filesystem = new AgentFSFilesystem({
  path: '/data/my-agent.db',
})
```

### Using a pre-opened AgentFS instance

If you need to manage the AgentFS lifecycle yourself:

```typescript
import { AgentFS } from 'agentfs-sdk'
import { AgentFSFilesystem } from '@mastra/agentfs'

const agent = await AgentFS.open({ id: 'my-agent' })

const filesystem = new AgentFSFilesystem({
  agent, // caller manages open/close
})
```

## Constructor parameters

You must provide at least one of `agentId`, `path`, or `agent`.

**agentId** (`string`): Agent ID — creates the database at `.agentfs/<agentId>.db`

**path** (`string`): Explicit database file path (alternative to `agentId`)

**agent** (`AgentFS`): Pre-opened AgentFS instance. When provided, the caller manages the lifecycle (open/close).

**id** (`string`): Unique identifier for this filesystem instance (Default: auto-generated)

**displayName** (`string`): Human-friendly display name for the UI (Default: `'AgentFS'`)

**icon** (`FilesystemIcon`): Icon identifier for the UI (Default: `'database'`)

**description** (`string`): Short description of this filesystem for the UI

**readOnly** (`boolean`): When true, all write operations are blocked (Default: `false`)

## Properties

**id** (`string`): Filesystem instance identifier

**name** (`string`): Provider name (`'AgentFSFilesystem'`)

**provider** (`string`): Provider identifier (`'agentfs'`)

**readOnly** (`boolean | undefined`): Whether the filesystem is in read-only mode
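The constructor's three mutually optional inputs can be summarized as a tiny resolver. This sketch is hypothetical: `resolveDatabaseSource` is not exported by `@mastra/agentfs`, and the precedence of `agent` over `path` over `agentId` is an assumption; the sketch only makes the documented default location `.agentfs/<agentId>.db` concrete.

```typescript
// Hypothetical sketch of the constructor inputs and their assumed precedence.
// Only the `.agentfs/<agentId>.db` default is taken from the docs above.
type AgentFSFilesystemOptions = {
  agentId?: string
  path?: string
  agent?: unknown // a pre-opened AgentFS instance in the real API
}

function resolveDatabaseSource(options: AgentFSFilesystemOptions): string {
  if (options.agent) {
    // Caller manages the lifecycle; no path is derived.
    return '<pre-opened instance>'
  }
  if (options.path) return options.path
  if (options.agentId) return `.agentfs/${options.agentId}.db`
  throw new Error('Provide at least one of agentId, path, or agent')
}

console.log(resolveDatabaseSource({ agentId: 'my-agent' })) // .agentfs/my-agent.db
```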
package/CHANGELOG.md

@@ -1,5 +1,21 @@
# @mastra/mcp-docs-server

## 1.1.11

### Patch Changes

- Updated dependencies [[`ea86967`](https://github.com/mastra-ai/mastra/commit/ea86967449426e0a3673253bd1c2c052a99d970d), [`db21c21`](https://github.com/mastra-ai/mastra/commit/db21c21a6ae5f33539262cc535342fa8757eb359), [`11f5dbe`](https://github.com/mastra-ai/mastra/commit/11f5dbe9a1e7ad8ef3b1ea34fb4a9fa3631d1587), [`11f5dbe`](https://github.com/mastra-ai/mastra/commit/11f5dbe9a1e7ad8ef3b1ea34fb4a9fa3631d1587), [`6751354`](https://github.com/mastra-ai/mastra/commit/67513544d1a64be891d9de7624d40aadc895d56e), [`c958cd3`](https://github.com/mastra-ai/mastra/commit/c958cd36627c1eea122ec241b2b15492977a263a), [`86f2426`](https://github.com/mastra-ai/mastra/commit/86f242631d252a172d2f9f9a2ea0feb8647a76b0), [`950eb07`](https://github.com/mastra-ai/mastra/commit/950eb07b7e7354629630e218d49550fdd299c452)]:
  - @mastra/core@1.13.0
  - @mastra/mcp@1.2.1

## 1.1.11-alpha.0

### Patch Changes

- Updated dependencies [[`ea86967`](https://github.com/mastra-ai/mastra/commit/ea86967449426e0a3673253bd1c2c052a99d970d), [`db21c21`](https://github.com/mastra-ai/mastra/commit/db21c21a6ae5f33539262cc535342fa8757eb359), [`11f5dbe`](https://github.com/mastra-ai/mastra/commit/11f5dbe9a1e7ad8ef3b1ea34fb4a9fa3631d1587), [`11f5dbe`](https://github.com/mastra-ai/mastra/commit/11f5dbe9a1e7ad8ef3b1ea34fb4a9fa3631d1587), [`6751354`](https://github.com/mastra-ai/mastra/commit/67513544d1a64be891d9de7624d40aadc895d56e), [`c958cd3`](https://github.com/mastra-ai/mastra/commit/c958cd36627c1eea122ec241b2b15492977a263a), [`86f2426`](https://github.com/mastra-ai/mastra/commit/86f242631d252a172d2f9f9a2ea0feb8647a76b0), [`950eb07`](https://github.com/mastra-ai/mastra/commit/950eb07b7e7354629630e218d49550fdd299c452)]:
  - @mastra/core@1.13.0-alpha.0
  - @mastra/mcp@1.2.1-alpha.0

## 1.1.10

### Patch Changes
package/package.json

```diff
@@ -1,6 +1,6 @@
 {
   "name": "@mastra/mcp-docs-server",
-  "version": "1.1.
+  "version": "1.1.11",
   "description": "MCP server for accessing Mastra.ai documentation, changelogs, and news.",
   "type": "module",
   "main": "dist/index.js",
@@ -29,8 +29,8 @@
     "jsdom": "^26.1.0",
     "local-pkg": "^1.1.2",
     "zod": "^4.3.6",
-    "@mastra/core": "1.
-    "@mastra/mcp": "^1.2.
+    "@mastra/core": "1.13.0",
+    "@mastra/mcp": "^1.2.1"
   },
   "devDependencies": {
     "@hono/node-server": "^1.19.9",
@@ -46,9 +46,9 @@
     "tsx": "^4.21.0",
     "typescript": "^5.9.3",
     "vitest": "4.0.18",
-    "@internal/
-    "@internal/
-    "@mastra/core": "1.
+    "@internal/types-builder": "0.0.44",
+    "@internal/lint": "0.0.69",
+    "@mastra/core": "1.13.0"
   },
   "homepage": "https://mastra.ai",
   "repository": {
```