@mastra/mcp-docs-server 1.1.9 → 1.1.10-alpha.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -61,7 +61,7 @@ OM solves both problems by compressing old context into dense observations.
 
 When message history tokens exceed a threshold (default: 30,000), the Observer creates observations — concise notes about what happened:
 
- Image parts contribute to this threshold with model-aware estimates, so multimodal conversations trigger observation at the right time. The same applies to image-like `file` parts when a transport normalizes an uploaded image as a file instead of an image part. For example, OpenAI image detail settings can materially change when OM decides to observe.
+ OM uses fast local token estimation for this thresholding work. Text is estimated with `tokenx`, while image parts use provider-aware heuristics so multimodal conversations still trigger observation at the right time. The same applies to image-like `file` parts when a transport normalizes an uploaded image as a file instead of an image part. For example, OpenAI image detail settings can materially change when OM decides to observe.
 
 The Observer can also see attachments in the history it reviews. OM keeps readable placeholders like `[Image #1: reference-board.png]` or `[File #1: floorplan.pdf]` in the transcript for readability, and forwards the actual attachment parts alongside the text. Image-like `file` parts are upgraded to image inputs for the Observer when possible, while non-image attachments are forwarded as file parts with normalized token counting. This applies to both normal thread observation and batched resource-scope observation.
 
@@ -176,7 +176,7 @@ const memory = new Memory({
 
 ### Token counting cache
 
- OM caches tiktoken part estimates in message metadata to reduce repeat counting work during threshold checks and buffering decisions.
+ OM caches token estimates in message metadata to reduce repeat counting work during threshold checks and buffering decisions.
 
 - Per-part estimates are stored on `part.providerMetadata.mastra` and reused on subsequent passes when the cache version/tokenizer source matches.
 - For string-only message content (without parts), OM uses a message-level metadata fallback cache.
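The cache reuse described above can be sketched in a few lines — a minimal illustration, not Mastra's implementation. Only the `part.providerMetadata.mastra` location comes from the docs; the field names (`tokenEstimate`, `cacheVersion`) and the 4-chars-per-token estimator are hypothetical stand-ins:

```typescript
// Sketch of a per-part token-estimate cache. The `providerMetadata.mastra`
// location is from the docs above; field names and the estimator are hypothetical.
type Part = {
  text: string
  providerMetadata?: { mastra?: { tokenEstimate?: number; cacheVersion?: string } }
}

const CACHE_VERSION = 'tokenx-v1' // hypothetical tokenizer-source tag

function estimateTokens(text: string): number {
  // Crude stand-in for a real estimator like tokenx: ~4 chars per token.
  return Math.ceil(text.length / 4)
}

function getTokenEstimate(part: Part): number {
  const cached = part.providerMetadata?.mastra
  // Reuse the cached estimate only when the cache version/tokenizer source matches.
  if (cached?.tokenEstimate !== undefined && cached.cacheVersion === CACHE_VERSION) {
    return cached.tokenEstimate
  }
  const estimate = estimateTokens(part.text)
  part.providerMetadata = {
    ...part.providerMetadata,
    mastra: { tokenEstimate: estimate, cacheVersion: CACHE_VERSION },
  }
  return estimate
}
```

On a second pass the stored estimate is returned without re-counting, which is the point of the metadata cache: threshold checks run often, estimation runs once.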
@@ -1,6 +1,6 @@
 # Storage
 
- For agents to remember previous interactions, Mastra needs a database. Use a storage adapter for one of the [supported databases](#supported-providers) and pass it to your Mastra instance.
+ For agents to remember previous interactions, Mastra needs a storage adapter. Use one of the [supported providers](#supported-providers) and pass it to your Mastra instance.
 
 ```typescript
 import { Mastra } from '@mastra/core'
@@ -24,7 +24,7 @@ export const mastra = new Mastra({
 
 This configures instance-level storage, which all agents share by default. You can also configure [agent-level storage](#agent-level-storage) for isolated data boundaries.
 
- Mastra automatically creates the necessary tables on first interaction. See the [core schema](https://mastra.ai/reference/storage/overview) for details on what gets created, including tables for messages, threads, resources, workflows, traces, and evaluation datasets.
+ Mastra automatically initializes the necessary storage structures on first interaction. See [Storage Overview](https://mastra.ai/reference/storage/overview) for domain coverage and the schema used by the built-in database-backed domains.
 
 ## Supported providers
 
@@ -35,7 +35,7 @@ Each provider page includes installation instructions, configuration parameters,
 - [MongoDB](https://mastra.ai/reference/storage/mongodb)
 - [Upstash](https://mastra.ai/reference/storage/upstash)
 - [Cloudflare D1](https://mastra.ai/reference/storage/cloudflare-d1)
- - [Cloudflare Durable Objects](https://mastra.ai/reference/storage/cloudflare)
+ - [Cloudflare KV & Durable Objects](https://mastra.ai/reference/storage/cloudflare)
 - [Convex](https://mastra.ai/reference/storage/convex)
 - [DynamoDB](https://mastra.ai/reference/storage/dynamodb)
 - [LanceDB](https://mastra.ai/reference/storage/lance)
@@ -49,7 +49,7 @@ Storage can be configured at the instance level (shared by all agents) or at the
 
 ### Instance-level storage
 
- Add storage to your Mastra instance so all agents, workflows, observability traces and scores share the same memory provider:
+ Add storage to your Mastra instance so all agents, workflows, observability traces, and scores share the same storage backend:
 
 ```typescript
 import { Mastra } from '@mastra/core'
@@ -71,7 +71,7 @@ This is useful when all primitives share the same storage backend and have simil
 
 #### Composite storage
 
- [Composite storage](https://mastra.ai/reference/storage/composite) is an alternative way to configure instance-level storage. Use `MastraCompositeStore` to set the `memory` domain (and any other [domains](https://mastra.ai/reference/storage/composite) you need) to different storage providers.
+ [Composite storage](https://mastra.ai/reference/storage/composite) is an alternative way to configure instance-level storage. Use `MastraCompositeStore` to route `memory` and any other [supported domains](https://mastra.ai/reference/storage/composite) to different storage providers.
 
 ```typescript
 import { Mastra } from '@mastra/core'
@@ -53,15 +53,19 @@ const response = await agent.generate('List all files in the workspace')
 
 By default, `LocalFilesystem` runs in **contained mode** — all file operations are restricted to stay within `basePath`. This prevents path traversal attacks and symlink escapes.
 
- In contained mode, absolute paths that fall within `basePath` are used as-is, while other absolute paths are treated as virtual paths relative to `basePath` (e.g. `/file.txt` resolves to `basePath/file.txt`). Any resolved path that escapes `basePath` throws a `PermissionError`.
+ In contained mode:
 
- If your agent needs to access specific paths outside `basePath`, use `allowedPaths` to grant access without disabling containment entirely:
+ - **Relative paths** (e.g. `src/index.ts`) resolve against `basePath`
+ - **Absolute paths** (e.g. `/home/user/.config/file.txt`) are treated as real filesystem paths — if they fall outside `basePath` and any `allowedPaths`, a `PermissionError` is thrown
+ - **Tilde paths** (e.g. `~/Documents`) expand to the home directory and follow the same containment rules
+
+ If your agent needs to access specific paths outside `basePath`, use `allowedPaths` to grant access without disabling containment entirely. Relative paths are resolved against `basePath`, and absolute paths are used as-is:
 
 ```typescript
 const workspace = new Workspace({
   filesystem: new LocalFilesystem({
     basePath: './workspace',
-     allowedPaths: ['/home/user/.config', '/home/user/documents'],
+     allowedPaths: ['~/.claude/skills', '../shared-data'],
   }),
 })
 ```
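The resolution rules above can be sketched as a small containment check — an illustrative approximation rather than `LocalFilesystem`'s actual code (real containment must also resolve symlinks before comparing paths):

```typescript
import * as path from 'node:path'
import * as os from 'node:os'

// Illustrative sketch of contained-mode path resolution; not Mastra's implementation.
function expandTilde(p: string): string {
  return p.startsWith('~') ? path.join(os.homedir(), p.slice(1)) : p
}

function resolveContained(input: string, basePath: string, allowedPaths: string[] = []): string {
  const expanded = expandTilde(input)
  // Relative paths resolve against basePath; absolute paths are treated as real paths.
  const resolved = path.isAbsolute(expanded)
    ? path.normalize(expanded)
    : path.resolve(basePath, expanded)
  // allowedPaths: relative entries resolve against basePath, absolute ones are used as-is.
  const roots = [path.resolve(basePath), ...allowedPaths.map(a => path.resolve(basePath, expandTilde(a)))]
  const inside = roots.some(root => resolved === root || resolved.startsWith(root + path.sep))
  if (!inside) throw new Error(`PermissionError: ${input} escapes the workspace`)
  return resolved
}
```

A path either lands under `basePath` or one of the allowed roots, or the call fails loudly — there is no silent remapping of outside paths in this model.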
@@ -1,6 +1,6 @@
 # ![OpenRouter logo](https://models.dev/logos/openrouter.svg)OpenRouter
 
- OpenRouter aggregates models from multiple providers with enhanced features like rate limiting and failover. Access 193 models through Mastra's model router.
+ OpenRouter aggregates models from multiple providers with enhanced features like rate limiting and failover. Access 195 models through Mastra's model router.
 
 Learn more in the [OpenRouter documentation](https://openrouter.ai/models).
 
@@ -168,6 +168,8 @@ ANTHROPIC_API_KEY=ant-...
 | `openai/o4-mini` |
 | `openrouter/aurora-alpha` |
 | `openrouter/free` |
+ | `openrouter/healer-alpha` |
+ | `openrouter/hunter-alpha` |
 | `openrouter/sherlock-dash-alpha` |
 | `openrouter/sherlock-think-alpha` |
 | `prime-intellect/intellect-3` |
@@ -1,6 +1,6 @@
 # Model Providers
 
- Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to 3239 models from 92 providers through a single API.
+ Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to 3245 models from 92 providers through a single API.
 
 ## Features
 
@@ -37,7 +37,7 @@ for await (const chunk of stream) {
 | `alibaba-coding-plan/glm-4.7` | 203K | | | | | | — | — |
 | `alibaba-coding-plan/glm-5` | 203K | | | | | | — | — |
 | `alibaba-coding-plan/kimi-k2.5` | 262K | | | | | | — | — |
- | `alibaba-coding-plan/MiniMax-M2.5` | 17K | | | | | | — | — |
+ | `alibaba-coding-plan/MiniMax-M2.5` | 197K | | | | | | — | — |
 | `alibaba-coding-plan/qwen3-coder-next` | 262K | | | | | | — | — |
 | `alibaba-coding-plan/qwen3-coder-plus` | 1.0M | | | | | | — | — |
 | `alibaba-coding-plan/qwen3-max-2026-01-23` | 262K | | | | | | — | — |
@@ -508,11 +508,11 @@ for await (const chunk of stream) {
 | `nano-gpt/TheDrummer 2/Rocinante-12B-v1.1` | 16K | | | | | | $0.41 | $0.59 |
 | `nano-gpt/TheDrummer 2/skyfall-36b-v2` | 64K | | | | | | $0.49 | $0.49 |
 | `nano-gpt/TheDrummer 2/UnslopNemo-12B-v4.1` | 33K | | | | | | $0.49 | $0.49 |
- | `nano-gpt/THUDM 2/GLM-4-32B-0414` | 128K | | | | | | $0.20 | $0.20 |
- | `nano-gpt/THUDM 2/GLM-4-9B-0414` | 32K | | | | | | $0.20 | $0.20 |
- | `nano-gpt/THUDM 2/GLM-Z1-32B-0414` | 128K | | | | | | $0.20 | $0.20 |
- | `nano-gpt/THUDM 2/GLM-Z1-9B-0414` | 32K | | | | | | $0.20 | $0.20 |
- | `nano-gpt/THUDM 2/GLM-Z1-Rumination-32B-0414` | 32K | | | | | | $0.20 | $0.20 |
+ | `nano-gpt/THUDM/GLM-4-32B-0414` | 128K | | | | | | $0.20 | $0.20 |
+ | `nano-gpt/THUDM/GLM-4-9B-0414` | 32K | | | | | | $0.20 | $0.20 |
+ | `nano-gpt/THUDM/GLM-Z1-32B-0414` | 128K | | | | | | $0.20 | $0.20 |
+ | `nano-gpt/THUDM/GLM-Z1-9B-0414` | 32K | | | | | | $0.20 | $0.20 |
+ | `nano-gpt/THUDM/GLM-Z1-Rumination-32B-0414` | 32K | | | | | | $0.20 | $0.20 |
 | `nano-gpt/tngtech/DeepSeek-TNG-R1T2-Chimera` | 128K | | | | | | $0.31 | $0.31 |
 | `nano-gpt/tngtech/tng-r1t-chimera` | 128K | | | | | | $0.30 | $1 |
 | `nano-gpt/Tongyi-Zhiwen 2/QwenLong-L1-32B` | 128K | | | | | | $0.14 | $0.60 |
@@ -1,6 +1,6 @@
 # ![Nebius Token Factory logo](https://models.dev/logos/nebius.svg)Nebius Token Factory
 
- Access 46 Nebius Token Factory models through Mastra's model router. Authentication is handled automatically using the `NEBIUS_API_KEY` environment variable.
+ Access 47 Nebius Token Factory models through Mastra's model router. Authentication is handled automatically using the `NEBIUS_API_KEY` environment variable.
 
 Learn more in the [Nebius Token Factory documentation](https://docs.tokenfactory.nebius.com/).
 
@@ -42,11 +42,11 @@ for await (const chunk of stream) {
 | `nebius/deepseek-ai/DeepSeek-R1-0528-fast` | 131K | | | | | | $2 | $6 |
 | `nebius/deepseek-ai/DeepSeek-V3-0324` | 128K | | | | | | $0.50 | $2 |
 | `nebius/deepseek-ai/DeepSeek-V3-0324-fast` | 128K | | | | | | $0.75 | $2 |
- | `nebius/deepseek-ai/DeepSeek-V3.2` | 128K | | | | | | $0.30 | $0.45 |
+ | `nebius/deepseek-ai/DeepSeek-V3.2` | 163K | | | | | | $0.30 | $0.45 |
 | `nebius/google/gemma-2-2b-it` | 8K | | | | | | $0.02 | $0.06 |
 | `nebius/google/gemma-2-9b-it-fast` | 8K | | | | | | $0.03 | $0.09 |
- | `nebius/google/gemma-3-27b-it` | 128K | | | | | | $0.10 | $0.30 |
- | `nebius/google/gemma-3-27b-it-fast` | 128K | | | | | | $0.20 | $0.60 |
+ | `nebius/google/gemma-3-27b-it` | 110K | | | | | | $0.10 | $0.30 |
+ | `nebius/google/gemma-3-27b-it-fast` | 110K | | | | | | $0.20 | $0.60 |
 | `nebius/intfloat/e5-mistral-7b-instruct` | 33K | | | | | | $0.01 | — |
 | `nebius/meta-llama/Llama-3.3-70B-Instruct` | 128K | | | | | | $0.13 | $0.40 |
 | `nebius/meta-llama/Llama-3.3-70B-Instruct-fast` | 128K | | | | | | $0.25 | $0.75 |
@@ -80,6 +80,7 @@ for await (const chunk of stream) {
 | `nebius/zai-org/GLM-4.5` | 128K | | | | | | $0.60 | $2 |
 | `nebius/zai-org/GLM-4.5-Air` | 128K | | | | | | $0.20 | $1 |
 | `nebius/zai-org/GLM-4.7-FP8` | 128K | | | | | | $0.40 | $2 |
+ | `nebius/zai-org/GLM-5` | 203K | | | | | | $1 | $3 |
 
 ## Advanced configuration
 
@@ -109,7 +110,7 @@ const agent = new Agent({
   model: ({ requestContext }) => {
     const useAdvanced = requestContext.task === "complex";
     return useAdvanced
-       ? "nebius/zai-org/GLM-4.7-FP8"
+       ? "nebius/zai-org/GLM-5"
       : "nebius/BAAI/bge-en-icl";
   }
 });
@@ -104,7 +104,7 @@ for await (const chunk of stream) {
 | `nvidia/qwen/qwen3-next-80b-a3b-thinking` | 262K | | | | | | — | — |
 | `nvidia/qwen/qwen3.5-397b-a17b` | 262K | | | | | | — | — |
 | `nvidia/qwen/qwq-32b` | 128K | | | | | | — | — |
- | `nvidia/stepfun-ai/step-3-5-flash` | 256K | | | | | | — | — |
+ | `nvidia/stepfun-ai/step-3.5-flash` | 256K | | | | | | — | — |
 | `nvidia/z-ai/glm4.7` | 205K | | | | | | — | — |
 | `nvidia/z-ai/glm5` | 203K | | | | | | — | — |
@@ -1,6 +1,6 @@
 # ![OpenCode Zen logo](https://models.dev/logos/opencode.svg)OpenCode Zen
 
- Access 33 OpenCode Zen models through Mastra's model router. Authentication is handled automatically using the `OPENCODE_API_KEY` environment variable.
+ Access 34 OpenCode Zen models through Mastra's model router. Authentication is handled automatically using the `OPENCODE_API_KEY` environment variable.
 
 Learn more in the [OpenCode Zen documentation](https://opencode.ai/docs/zen).
 
@@ -32,41 +32,42 @@ for await (const chunk of stream) {
 
 ## Models
 
- | Model | Context | Tools | Reasoning | Image | Audio | Video | Input $/1M | Output $/1M |
- | ------------------------------ | ------- | ----- | --------- | ----- | ----- | ----- | ---------- | ----------- |
- | `opencode/big-pickle` | 200K | | | | | | — | — |
- | `opencode/claude-3-5-haiku` | 200K | | | | | | $0.80 | $4 |
- | `opencode/claude-haiku-4-5` | 200K | | | | | | $1 | $5 |
- | `opencode/claude-opus-4-1` | 200K | | | | | | $15 | $75 |
- | `opencode/claude-opus-4-5` | 200K | | | | | | $5 | $25 |
- | `opencode/claude-opus-4-6` | 1.0M | | | | | | $5 | $25 |
- | `opencode/claude-sonnet-4` | 1.0M | | | | | | $3 | $15 |
- | `opencode/claude-sonnet-4-5` | 1.0M | | | | | | $3 | $15 |
- | `opencode/claude-sonnet-4-6` | 1.0M | | | | | | $3 | $15 |
- | `opencode/gemini-3-flash` | 1.0M | | | | | | $0.50 | $3 |
- | `opencode/gemini-3-pro` | 1.0M | | | | | | $2 | $12 |
- | `opencode/gemini-3.1-pro` | 1.0M | | | | | | $2 | $12 |
- | `opencode/glm-4.6` | 205K | | | | | | $0.60 | $2 |
- | `opencode/glm-4.7` | 205K | | | | | | $0.60 | $2 |
- | `opencode/glm-5` | 205K | | | | | | $1 | $3 |
- | `opencode/gpt-5` | 400K | | | | | | $1 | $9 |
- | `opencode/gpt-5-codex` | 400K | | | | | | $1 | $9 |
- | `opencode/gpt-5-nano` | 400K | | | | | | — | — |
- | `opencode/gpt-5.1` | 400K | | | | | | $1 | $9 |
- | `opencode/gpt-5.1-codex` | 400K | | | | | | $1 | $9 |
- | `opencode/gpt-5.1-codex-max` | 400K | | | | | | $1 | $10 |
- | `opencode/gpt-5.1-codex-mini` | 400K | | | | | | $0.25 | $2 |
- | `opencode/gpt-5.2` | 400K | | | | | | $2 | $14 |
- | `opencode/gpt-5.2-codex` | 400K | | | | | | $2 | $14 |
- | `opencode/gpt-5.3-codex` | 400K | | | | | | $2 | $14 |
- | `opencode/gpt-5.3-codex-spark` | 128K | | | | | | $2 | $14 |
- | `opencode/gpt-5.4` | 1.1M | | | | | | $3 | $15 |
- | `opencode/gpt-5.4-pro` | 1.1M | | | | | | $30 | $180 |
- | `opencode/kimi-k2.5` | 262K | | | | | | $0.60 | $3 |
- | `opencode/mimo-v2-flash-free` | 262K | | | | | | — | — |
- | `opencode/minimax-m2.1` | 205K | | | | | | $0.30 | $1 |
- | `opencode/minimax-m2.5` | 205K | | | | | | $0.30 | $1 |
- | `opencode/minimax-m2.5-free` | 205K | | | | | | — | — |
+ | Model | Context | Tools | Reasoning | Image | Audio | Video | Input $/1M | Output $/1M |
+ | -------------------------------- | ------- | ----- | --------- | ----- | ----- | ----- | ---------- | ----------- |
+ | `opencode/big-pickle` | 200K | | | | | | — | — |
+ | `opencode/claude-3-5-haiku` | 200K | | | | | | $0.80 | $4 |
+ | `opencode/claude-haiku-4-5` | 200K | | | | | | $1 | $5 |
+ | `opencode/claude-opus-4-1` | 200K | | | | | | $15 | $75 |
+ | `opencode/claude-opus-4-5` | 200K | | | | | | $5 | $25 |
+ | `opencode/claude-opus-4-6` | 1.0M | | | | | | $5 | $25 |
+ | `opencode/claude-sonnet-4` | 1.0M | | | | | | $3 | $15 |
+ | `opencode/claude-sonnet-4-5` | 1.0M | | | | | | $3 | $15 |
+ | `opencode/claude-sonnet-4-6` | 1.0M | | | | | | $3 | $15 |
+ | `opencode/gemini-3-flash` | 1.0M | | | | | | $0.50 | $3 |
+ | `opencode/gemini-3-pro` | 1.0M | | | | | | $2 | $12 |
+ | `opencode/gemini-3.1-pro` | 1.0M | | | | | | $2 | $12 |
+ | `opencode/glm-4.6` | 205K | | | | | | $0.60 | $2 |
+ | `opencode/glm-4.7` | 205K | | | | | | $0.60 | $2 |
+ | `opencode/glm-5` | 205K | | | | | | $1 | $3 |
+ | `opencode/gpt-5` | 400K | | | | | | $1 | $9 |
+ | `opencode/gpt-5-codex` | 400K | | | | | | $1 | $9 |
+ | `opencode/gpt-5-nano` | 400K | | | | | | — | — |
+ | `opencode/gpt-5.1` | 400K | | | | | | $1 | $9 |
+ | `opencode/gpt-5.1-codex` | 400K | | | | | | $1 | $9 |
+ | `opencode/gpt-5.1-codex-max` | 400K | | | | | | $1 | $10 |
+ | `opencode/gpt-5.1-codex-mini` | 400K | | | | | | $0.25 | $2 |
+ | `opencode/gpt-5.2` | 400K | | | | | | $2 | $14 |
+ | `opencode/gpt-5.2-codex` | 400K | | | | | | $2 | $14 |
+ | `opencode/gpt-5.3-codex` | 400K | | | | | | $2 | $14 |
+ | `opencode/gpt-5.3-codex-spark` | 128K | | | | | | $2 | $14 |
+ | `opencode/gpt-5.4` | 1.1M | | | | | | $3 | $15 |
+ | `opencode/gpt-5.4-pro` | 1.1M | | | | | | $30 | $180 |
+ | `opencode/kimi-k2.5` | 262K | | | | | | $0.60 | $3 |
+ | `opencode/mimo-v2-flash-free` | 262K | | | | | | — | — |
+ | `opencode/minimax-m2.1` | 205K | | | | | | $0.30 | $1 |
+ | `opencode/minimax-m2.5` | 205K | | | | | | $0.30 | $1 |
+ | `opencode/minimax-m2.5-free` | 205K | | | | | | — | — |
+ | `opencode/nemotron-3-super-free` | 1.0M | | | | | | — | — |
 
 ## Advanced configuration
 
@@ -96,7 +97,7 @@ const agent = new Agent({
   model: ({ requestContext }) => {
     const useAdvanced = requestContext.task === "complex";
     return useAdvanced
-       ? "opencode/minimax-m2.5-free"
+       ? "opencode/nemotron-3-super-free"
       : "opencode/big-pickle";
   }
 });
@@ -1,6 +1,6 @@
 # ![Synthetic logo](https://models.dev/logos/synthetic.svg)Synthetic
 
- Access 26 Synthetic models through Mastra's model router. Authentication is handled automatically using the `SYNTHETIC_API_KEY` environment variable.
+ Access 28 Synthetic models through Mastra's model router. Authentication is handled automatically using the `SYNTHETIC_API_KEY` environment variable.
 
 Learn more in the [Synthetic documentation](https://synthetic.new/pricing).
 
@@ -49,6 +49,7 @@ for await (const chunk of stream) {
 | `synthetic/hf:meta-llama/Llama-4-Scout-17B-16E-Instruct` | 328K | | | | | | $0.15 | $0.60 |
 | `synthetic/hf:MiniMaxAI/MiniMax-M2` | 197K | | | | | | $0.55 | $2 |
 | `synthetic/hf:MiniMaxAI/MiniMax-M2.1` | 205K | | | | | | $0.55 | $2 |
+ | `synthetic/hf:MiniMaxAI/MiniMax-M2.5` | 191K | | | | | | $0.60 | $3 |
 | `synthetic/hf:moonshotai/Kimi-K2-Instruct-0905` | 262K | | | | | | $1 | $1 |
 | `synthetic/hf:moonshotai/Kimi-K2-Thinking` | 262K | | | | | | $0.55 | $2 |
 | `synthetic/hf:moonshotai/Kimi-K2.5` | 262K | | | | | | $0.55 | $2 |
@@ -60,6 +61,7 @@ for await (const chunk of stream) {
 | `synthetic/hf:Qwen/Qwen3-Coder-480B-A35B-Instruct` | 256K | | | | | | $2 | $2 |
 | `synthetic/hf:zai-org/GLM-4.6` | 200K | | | | | | $0.55 | $2 |
 | `synthetic/hf:zai-org/GLM-4.7` | 200K | | | | | | $0.55 | $2 |
+ | `synthetic/hf:zai-org/GLM-4.7-Flash` | 197K | | | | | | $0.06 | $0.40 |
 
 ## Advanced configuration
 
@@ -89,7 +91,7 @@ const agent = new Agent({
   model: ({ requestContext }) => {
     const useAdvanced = requestContext.task === "complex";
     return useAdvanced
-       ? "synthetic/hf:zai-org/GLM-4.7"
+       ? "synthetic/hf:zai-org/GLM-4.7-Flash"
       : "synthetic/hf:MiniMaxAI/MiniMax-M2";
   }
 });
@@ -28,6 +28,8 @@ The `observationalMemory` option accepts `true`, a configuration object, or `fal
 
 Observer input is multimodal-aware. OM keeps text placeholders like `[Image #1: screenshot.png]` in the transcript it builds for the Observer, and also sends the underlying image parts when possible. This applies to both single-thread observation and batched multi-thread observation. Non-image files appear as placeholders only.
 
+ OM performs thresholding with fast local token estimation. Text uses `tokenx`, and image-like inputs use provider-aware heuristics plus deterministic fallbacks when metadata is incomplete.
+
 **enabled** (`boolean`): Enable or disable Observational Memory. When omitted from a config object, defaults to `true`. Only `enabled: false` explicitly disables it. (Default: `true`)
 
 **model** (`string | LanguageModel | DynamicModel | ModelWithRetries[]`): Model for both the Observer and Reflector agents. Sets the model for both at once. Cannot be used together with `observation.model` or `reflection.model` — an error will be thrown if both are set. When using `observationalMemory: true`, defaults to `google/gemini-2.5-flash`. When passing a config object, this or `observation.model`/`reflection.model` must be set. Use `"default"` to explicitly use the default model (`google/gemini-2.5-flash`). (Default: `'google/gemini-2.5-flash' (when using observationalMemory: true)`)
@@ -42,7 +44,7 @@ Observer input is multimodal-aware. OM keeps text placeholders like `[Image #1:
 
 **observation.instruction** (`string`): Custom instruction appended to the Observer's system prompt. Use this to customize what the Observer focuses on, such as domain-specific preferences or priorities.
 
- **observation.messageTokens** (`number`): Token count of unobserved messages that triggers observation. When unobserved message tokens exceed this threshold, the Observer agent is called. Image parts are included with model-aware estimates when possible, with deterministic fallbacks when image metadata is incomplete. Image-like `file` parts are counted the same way when uploads are normalized as files.
+ **observation.messageTokens** (`number`): Token count of unobserved messages that triggers observation. When unobserved message tokens exceed this threshold, the Observer agent is called. Text is estimated locally with `tokenx`. Image parts are included with model-aware heuristics when possible, with deterministic fallbacks when image metadata is incomplete. Image-like `file` parts are counted the same way when uploads are normalized as files.
 
 **observation.maxTokensPerBatch** (`number`): Maximum tokens per batch when observing multiple threads in resource scope. Threads are chunked into batches of this size and processed in parallel. Lower values mean more parallelism but more API calls.
 
@@ -78,7 +80,7 @@ Observer input is multimodal-aware. OM keeps text placeholders like `[Image #1:
 
 ### Token estimate metadata cache
 
- OM persists token payload estimates so repeated counting can reuse prior tiktoken work.
+ OM persists token payload estimates so repeated counting can reuse prior token estimation work.
 
 - Part-level cache: `part.providerMetadata.mastra`.
 - String-content fallback cache: message-level metadata when no parts exist.
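Tying the options above together, a configuration sketch. The option names (`observationalMemory`, `model`, `observation.messageTokens`, `observation.instruction`) come from this reference; the nesting under `options` and the import path are assumptions, so check the Memory docs for the exact shape:

```typescript
import { Memory } from '@mastra/memory'

// Illustrative only — option names are from this reference; the exact
// nesting under `options` and the import path are assumptions.
const memory = new Memory({
  options: {
    observationalMemory: {
      model: 'google/gemini-2.5-flash', // the default Observer/Reflector model
      observation: {
        messageTokens: 30000, // unobserved-token threshold that triggers the Observer
        instruction: 'Prioritize user decisions and stated preferences.',
      },
    },
  },
})
```

Passing `observationalMemory: true` instead of an object opts into all defaults, per the description above.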
@@ -1,8 +1,9 @@
 # TokenLimiterProcessor
 
- The `TokenLimiterProcessor` limits the number of tokens in messages. It can be used as both an input and output processor:
+ The `TokenLimiterProcessor` limits the number of tokens in messages. It can be used as an input, per-step input, and output processor:
 
- - **Input processor**: Filters historical messages to fit within the context window, prioritizing recent messages
+ - **Input processor** (`processInput`): Filters historical messages to fit within the context window before the agentic loop starts, prioritizing recent messages
+ - **Per-step input processor** (`processInputStep`): Prunes messages at each step of a multi-step agent workflow, preventing unbounded token growth when tools trigger additional LLM calls
 - **Output processor**: Limits generated response tokens via streaming or non-streaming with configurable strategies for handling exceeded limits
 
 ## Usage example
@@ -35,7 +36,9 @@ const processor = new TokenLimiterProcessor({
 
 **name** (`string`): Optional processor display name
 
- **processInput** (`(args: { messages: MastraDBMessage[]; abort: (reason?: string) => never }) => Promise<MastraDBMessage[]>`): Filters input messages to fit within token limit, prioritizing recent messages while preserving system messages
+ **processInput** (`(args: { messages: MastraDBMessage[]; abort: (reason?: string) => never }) => Promise<MastraDBMessage[]>`): Filters input messages to fit within token limit before the agentic loop starts, prioritizing recent messages while preserving system messages
+
+ **processInputStep** (`(args: ProcessInputStepArgs) => Promise<void>`): Prunes messages at each step of the agentic loop (including tool call continuations) to keep the conversation within the token limit. Mutates the messageList directly by removing oldest messages first while preserving system messages.
 
 **processOutputStream** (`(args: { part: ChunkType; streamParts: ChunkType[]; state: Record<string, any>; abort: (reason?: string) => never }) => Promise<ChunkType | null>`): Processes streaming output parts to limit token count during streaming
 
@@ -45,7 +48,7 @@ const processor = new TokenLimiterProcessor({
 
 ## Error behavior
 
- When used as an input processor, `TokenLimiterProcessor` throws a `TripWire` error in the following cases:
+ When used as an input processor (both `processInput` and `processInputStep`), `TokenLimiterProcessor` throws a `TripWire` error in the following cases:
 
 - **Empty messages**: If there are no messages to process, a TripWire is thrown because you can't send an LLM request with no messages.
 - **System messages exceed limit**: If system messages alone exceed the token limit, a TripWire is thrown because you can't send an LLM request with only system messages and no user/assistant messages.
@@ -86,6 +89,29 @@ export const agent = new Agent({
 })
 ```
 
+ ### As a per-step input processor (limit multi-step token growth)
+
+ When an agent uses tools across multiple steps (e.g. `maxSteps > 1`), each step accumulates conversation history from all previous steps. Use `inputProcessors` to also limit tokens at each step of the agentic loop — the `TokenLimiterProcessor` automatically applies to both the initial input and every subsequent step:
+
+ ```typescript
+ import { Agent } from '@mastra/core/agent'
+ import { TokenLimiterProcessor } from '@mastra/core/processors'
+
+ export const agent = new Agent({
+   name: 'multi-step-agent',
+   instructions: 'You are a helpful research assistant with access to tools',
+   model: 'openai/gpt-4o',
+   inputProcessors: [
+     new TokenLimiterProcessor({ limit: 8000 }), // Applied at every step
+   ],
+ })
+
+ // Each tool call step will be limited to ~8000 input tokens
+ const result = await agent.generate('Research this topic using your tools', {
+   maxSteps: 10,
+ })
+ ```
+
 ### As an output processor (limit response length)
 
 Use `outputProcessors` to limit the length of generated responses:
@@ -1,8 +1,11 @@
 # Cloudflare storage
 
- The Cloudflare KV storage implementation provides a globally distributed, serverless key-value store solution using Cloudflare Workers KV.
+ Mastra provides two Cloudflare storage implementations:
 
- > **Observability Not Supported:** Cloudflare KV storage **doesn't support the observability domain**. Traces from the `DefaultExporter` can't be persisted to KV, and Mastra Studio's observability features won't work with Cloudflare KV as your only storage provider. To enable observability, use [composite storage](https://mastra.ai/reference/storage/composite) to route observability data to a supported provider like ClickHouse or PostgreSQL.
+ - **Cloudflare KV** (`CloudflareKVStorage`) - A globally distributed, eventually consistent key-value store
+ - **Cloudflare Durable Objects** (`CloudflareDOStorage`) - A strongly consistent, SQLite-based storage using Durable Objects
+
+ > **Observability Not Supported:** Cloudflare storage **does not support the observability domain**. Traces from the `DefaultExporter` cannot be persisted, and Mastra Studio's observability features won't work with Cloudflare as your only storage provider. To enable observability, use [composite storage](https://mastra.ai/reference/storage/composite) to route observability data to a supported provider like ClickHouse or PostgreSQL.
 
 ## Installation
 
@@ -30,13 +33,17 @@ yarn add @mastra/cloudflare@latest
 bun add @mastra/cloudflare@latest
 ```
 
- ## Usage
+ ## Cloudflare KV Storage
+
+ The KV storage implementation provides a globally distributed, serverless key-value store solution using Cloudflare Workers KV.
+
+ ### Usage
 
 ```typescript
- import { CloudflareStore } from '@mastra/cloudflare'
+ import { CloudflareKVStorage } from '@mastra/cloudflare/kv'
 
 // --- Example 1: Using Workers Binding ---
- const storageWorkers = new CloudflareStore({
+ const storageWorkers = new CloudflareKVStorage({
   id: 'cloudflare-workers-storage',
   bindings: {
     threads: THREADS_KV, // KVNamespace binding for threads table
@@ -47,7 +54,7 @@ const storageWorkers = new CloudflareStore({
 })
 
 // --- Example 2: Using REST API ---
- const storageRest = new CloudflareStore({
+ const storageRest = new CloudflareKVStorage({
   id: 'cloudflare-rest-storage',
   accountId: process.env.CLOUDFLARE_ACCOUNT_ID!, // Cloudflare Account ID
   apiToken: process.env.CLOUDFLARE_API_TOKEN!, // Cloudflare API Token
@@ -55,7 +62,7 @@ const storageRest = new CloudflareStore({
 })
 ```
 
- ## Parameters
+ ### Parameters
 
 **id** (`string`): Unique identifier for this storage instance.
 
@@ -85,4 +92,68 @@ Cloudflare KV is an eventually consistent store, meaning that data may not be im
 
 ### Key Structure & Namespacing
 
- Keys in Cloudflare KV are structured as a combination of a configurable prefix and a table-specific format (e.g., `threads:threadId`). For Workers deployments, `keyPrefix` is used to isolate data within a namespace; for REST API deployments, `namespacePrefix` is used to isolate entire namespaces between environments or applications.
+ Keys in Cloudflare KV are structured as a combination of a configurable prefix and a table-specific format (e.g., `threads:threadId`). For Workers deployments, `keyPrefix` is used to isolate data within a namespace; for REST API deployments, `namespacePrefix` is used to isolate entire namespaces between environments or applications.
+
97
+ ## Cloudflare Durable Objects Storage
98
+
99
+ The Durable Objects storage implementation provides strongly consistent, SQLite-based storage using Cloudflare Durable Objects. This is ideal for applications that require transactional consistency and SQL query capabilities.
100
+
101
+ ### Usage
102
+
103
+ ```typescript
104
+ import { DurableObject } from 'cloudflare:workers'
105
+ import { CloudflareDOStorage } from '@mastra/cloudflare/do'
106
+
107
+ class AgentDurableObject extends DurableObject<Env> {
108
+ private storage: CloudflareDOStorage
109
+
110
+ constructor(ctx: DurableObjectState, env: Env) {
111
+ super(ctx, env)
112
+ this.storage = new CloudflareDOStorage({
113
+ sql: ctx.storage.sql,
114
+ tablePrefix: 'mastra_', // Optional: prefix for table names
115
+ })
116
+ }
117
+
118
+ async run() {
119
+ const memory = await this.storage.getStore('memory')
120
+ await memory?.saveThread({
121
+ thread: { id: 'thread-1', resourceId: 'user-1', title: 'Chat', metadata: {} },
122
+ })
123
+ }
124
+ }
125
+ ```
126
+
127
+ ### Parameters
128
+
129
+ **sql** (`SqlStorage`): SqlStorage instance from Durable Objects ctx.storage.sql
130
+
131
+ **tablePrefix** (`string`): Optional prefix for table names (only letters, numbers, and underscores allowed)
132
+
133
+ **disableInit** (`boolean`): When true, automatic table creation/migrations are disabled. Useful for CI/CD pipelines where migrations run separately.
134
+
135
+ ### Strong Consistency
136
+
137
+ Unlike KV, Durable Objects provide strong consistency guarantees. All reads and writes within a Durable Object are serialized, making it very suitable for fast, long-running agents.
138
+
139
+ ### SQL Capabilities
140
+
141
+ Durable Objects storage uses SQLite under the hood, enabling efficient queries, filtering, and pagination that aren't possible with key-value storage.
142
+
143
+ ## Schema Management
144
+
145
+ Both storage implementations handle schema creation and updates automatically. They create the following tables:
146
+
147
+ - `threads`: Stores conversation threads
148
+ - `messages`: Stores individual messages
149
+ - `workflow_snapshot`: Stores workflow run state
150
+
151
+ ## Deprecated Aliases
152
+
153
+ For backwards compatibility, the following aliases are available:
154
+
155
+ ```typescript
156
+ // These are deprecated - use CloudflareKVStorage and CloudflareDOStorage instead
157
+ import { CloudflareStore } from '@mastra/cloudflare/kv' // alias for CloudflareKVStorage
158
+ import { DOStore } from '@mastra/cloudflare/do' // alias for CloudflareDOStorage
159
+ ```
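To make the documented key layout concrete, here is a minimal sketch of how a prefixed KV key might be composed from a configurable prefix plus the table-specific format (`threads:threadId`). The `buildKey` helper is a hypothetical illustration, not part of the package API:

```typescript
// Hypothetical helper illustrating the documented key layout:
// `${prefix}${table}:${id}`, e.g. `threads:threadId` under a keyPrefix.
function buildKey(keyPrefix: string, table: string, id: string): string {
  return `${keyPrefix}${table}:${id}`;
}

// With a keyPrefix of 'dev_', a thread record would live under:
const key = buildKey('dev_', 'threads', 'thread-1');
console.log(key); // dev_threads:thread-1
```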
@@ -58,7 +58,7 @@ bun add @mastra/pg@latest @mastra/libsql@latest
 
  ## Storage domains
 
- Mastra organizes storage into five specialized domains, each handling a specific type of data. Each domain can be backed by a different storage adapter, and domain classes are exported from each storage package.
+ Mastra organizes storage into domains, each handling a specific type of data. Each domain can be backed by a different storage adapter, and domain classes are exported from each storage package.
 
  | Domain | Description |
  | --------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------- |
@@ -67,6 +67,10 @@ Mastra organizes storage into five specialized domains, each handling a specific
  | `scores` | Evaluation results from Mastra's evals system. Scores and metrics are persisted here for analysis and comparison over time. |
  | `observability` | Telemetry data including traces and spans. Agent interactions, tool calls, and LLM requests generate spans collected into traces for debugging and performance analysis. |
  | `agents` | Agent configurations for stored agents. Enables agents to be defined and updated at runtime without code deployments. |
+ | `datasets` | Evaluation datasets used for experiment runs. Stores dataset definitions, schemas, and versioned items. |
+ | `experiments` | Experiment runs and per-item experiment results linked to datasets and targets. |
+
+ > **Note:** `MastraCompositeStore` accepts all of the domain keys above, but storage adapter support varies by package. You can mix adapters per domain, but only for domains implemented and exported by those adapters. For example, `memory: new MemoryLibSQL(...)` and `workflows: new WorkflowsPG(...)` is valid because both packages export those domain classes.
 
  ## Usage
 
@@ -124,7 +128,9 @@ export const mastra = new Mastra({
 
  **default** (`MastraCompositeStore`): Default storage adapter. Domains not explicitly specified in `domains` will use this storage's domains as fallbacks.
 
- **domains** (`object`): Individual domain overrides. Each domain can come from a different storage adapter. These take precedence over the default storage.
+ **disableInit** (`boolean`): When true, automatic initialization is disabled. You must call `init()` explicitly.
+
+ **domains** (`object`): Individual domain overrides. Each domain can come from a different storage adapter. These take precedence over both `editor` and `default` storage.
 
  **domains.memory** (`MemoryStorage`): Storage for threads, messages, and resources.
 
@@ -136,7 +142,9 @@ export const mastra = new Mastra({
 
  **domains.agents** (`AgentsStorage`): Storage for stored agent configurations.
 
- **disableInit** (`boolean`): When true, automatic initialization is disabled. You must call `init()` explicitly.
+ **domains.datasets** (`DatasetsStorage`): Storage for dataset metadata, dataset items, and dataset versions.
+
+ **domains.experiments** (`ExperimentsStorage`): Storage for experiment runs and per-item experiment results.
 
  ## Initialization
 
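The per-domain override pattern documented above can be sketched as a config fragment. Here `defaultStore` is a placeholder for a full store instance, and the constructor options are elided; the adapter class names `MemoryLibSQL` and `WorkflowsPG` come from the note in this hunk, while everything else is an assumption rather than copy-paste code:

```typescript
// Config fragment only: `defaultStore` and the constructor options are
// placeholders; adapter class names are taken from the note above.
const storage = new MastraCompositeStore({
  // Fallback: any domain not listed under `domains` resolves here.
  default: defaultStore,
  domains: {
    // Per-domain overrides take precedence over the default storage.
    memory: new MemoryLibSQL(/* libsql options */),
    workflows: new WorkflowsPG(/* postgres options */),
  },
  // Skip automatic initialization; you must then call init() yourself.
  disableInit: true,
});

await storage.init();
```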
@@ -1,6 +1,23 @@
  # Storage overview
 
- Mastra requires the following tables to be present in the database.
+ Mastra storage is organized into domains. Each domain owns a set of tables or collections. Depending on your adapter and configuration, you may use all domains or only a subset.
+
+ ## Storage domains
+
+ `MastraCompositeStore` can route the domain keys listed below. Not every storage adapter implements every domain; composite storage lets you mix adapters per domain when the adapter packages export the corresponding domain classes.
+
+ | Domain | Description |
+ | --------------- | -------------------------------------------------------------------------------------- |
+ | `memory` | Conversation persistence: messages, threads, and resources (including working memory). |
+ | `workflows` | Workflow run snapshots used for suspend and resume. |
+ | `scores` | Evaluation score records from eval runs. |
+ | `observability` | Traces and spans used by observability exporters and Studio. |
+ | `datasets` | Dataset records, versioned items, and dataset versions used by experiments. |
+ | `experiments` | Experiment runs and per-item experiment results. |
+
+ The schema definitions below cover the built-in database-backed tables documented for `memory`, `workflows`, `scores`, and `observability`. Other domains, and non-database adapters, use implementation-specific storage structures.
 
  ## Core schema
 
@@ -38,7 +38,7 @@ const response = await agent.generate('List all files in the workspace')
 
  **contained** (`boolean`): When true, all file operations are restricted to stay within basePath. Prevents path traversal attacks and symlink escapes. See [containment](/docs/workspace/filesystem#containment). (Default: `true`)
 
- **allowedPaths** (`string[]`): Additional absolute paths that are allowed beyond basePath. Useful with `contained: true` to grant access to specific directories without disabling containment entirely. Paths are resolved to absolute paths. (Default: `[]`)
+ **allowedPaths** (`string[]`): Additional directories the agent can access outside of `basePath`. (Default: `[]`)
 
  **instructions** (`string | ((opts: { defaultInstructions: string; requestContext?: RequestContext }) => string)`): Custom instructions that override the default instructions returned by getInstructions(). Pass a string to fully replace them, or a function to extend them with access to the current requestContext for per-request customization.
 
@@ -56,7 +56,7 @@ const response = await agent.generate('List all files in the workspace')
 
  **readOnly** (`boolean | undefined`): Whether the filesystem is in read-only mode.
 
- **allowedPaths** (`readonly string[]`): Current set of additional allowed paths (absolute, resolved). These paths are permitted beyond basePath when containment is enabled.
+ **allowedPaths** (`readonly string[]`): Current set of resolved allowed paths. These paths are permitted beyond basePath when containment is enabled.
 
  ## Methods
 
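The containment behavior described in this hunk (restrict operations to `basePath`, with `allowedPaths` as extra permitted roots) can be sketched as a plain path check. This is an illustration of the concept, not the package's actual implementation:

```typescript
import * as path from 'node:path';

// Sketch of a containment check: a target is allowed when it resolves
// inside basePath, or inside one of the extra allowed roots.
function isAllowed(basePath: string, allowedPaths: string[], target: string): boolean {
  const resolved = path.resolve(basePath, target);
  const roots = [path.resolve(basePath), ...allowedPaths.map((p) => path.resolve(p))];
  // Compare against each root with a trailing separator so that
  // `/workspace-evil` does not match the root `/workspace`.
  return roots.some((root) => resolved === root || resolved.startsWith(root + path.sep));
}

isAllowed('/workspace', [], 'notes/todo.md');      // inside basePath
isAllowed('/workspace', [], '../etc/passwd');      // traversal escapes basePath
isAllowed('/workspace', ['/shared'], '/shared/a'); // extra allowed root
```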
package/CHANGELOG.md CHANGED
@@ -1,5 +1,21 @@
  # @mastra/mcp-docs-server
 
+ ## 1.1.10-alpha.1
+
+ ### Patch Changes
+
+ - Updated dependencies [[`9cede11`](https://github.com/mastra-ai/mastra/commit/9cede110abac9d93072e0521bb3c8bcafb9fdadf), [`a59f126`](https://github.com/mastra-ai/mastra/commit/a59f1269104f54726699c5cdb98c72c93606d2df), [`c510833`](https://github.com/mastra-ai/mastra/commit/c5108333e8cbc19dafee5f8bfefbcb5ee935335c), [`7296fcc`](https://github.com/mastra-ai/mastra/commit/7296fcc599c876a68699a71c7054a16d5aaf2337), [`00c27f9`](https://github.com/mastra-ai/mastra/commit/00c27f9080731433230a61be69c44e39a7a7b4c7), [`ee19c9b`](https://github.com/mastra-ai/mastra/commit/ee19c9ba3ec3ed91feb214ad539bdc766c53bb01)]:
+   - @mastra/core@1.12.0-alpha.1
+   - @mastra/mcp@1.2.0-alpha.0
+
+ ## 1.1.10-alpha.0
+
+ ### Patch Changes
+
+ - Updated dependencies [[`cddf895`](https://github.com/mastra-ai/mastra/commit/cddf895532b8ee7f9fa814136ec672f53d37a9ba), [`aede3cc`](https://github.com/mastra-ai/mastra/commit/aede3cc2a83b54bbd9e9a54c8aedcd1708b2ef87), [`c4c7dad`](https://github.com/mastra-ai/mastra/commit/c4c7dadfe2e4584f079f6c24bfabdb8c4981827f), [`b9a77b9`](https://github.com/mastra-ai/mastra/commit/b9a77b951fa6422077080b492cce74460d2f8fdd), [`45c3112`](https://github.com/mastra-ai/mastra/commit/45c31122666a0cc56b94727099fcb1871ed1b3f6), [`45c3112`](https://github.com/mastra-ai/mastra/commit/45c31122666a0cc56b94727099fcb1871ed1b3f6), [`5e7c287`](https://github.com/mastra-ai/mastra/commit/5e7c28701f2bce795dd5c811e4c3060bf2ea2242), [`7e17d3f`](https://github.com/mastra-ai/mastra/commit/7e17d3f656fdda2aad47c4beb8c491636d70820c)]:
+   - @mastra/core@1.12.0-alpha.0
+   - @mastra/mcp@1.2.0-alpha.0
+
  ## 1.1.9
 
  ### Patch Changes
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@mastra/mcp-docs-server",
-   "version": "1.1.9",
+   "version": "1.1.10-alpha.1",
    "description": "MCP server for accessing Mastra.ai documentation, changelogs, and news.",
    "type": "module",
    "main": "dist/index.js",
@@ -29,8 +29,8 @@
    "jsdom": "^26.1.0",
    "local-pkg": "^1.1.2",
    "zod": "^4.3.6",
-   "@mastra/core": "1.11.0",
-   "@mastra/mcp": "^1.1.0"
+   "@mastra/core": "1.12.0-alpha.1",
+   "@mastra/mcp": "^1.2.0-alpha.0"
  },
  "devDependencies": {
    "@hono/node-server": "^1.19.9",
@@ -48,7 +48,7 @@
    "vitest": "4.0.18",
    "@internal/lint": "0.0.67",
    "@internal/types-builder": "0.0.42",
-   "@mastra/core": "1.11.0"
+   "@mastra/core": "1.12.0-alpha.1"
  },
  "homepage": "https://mastra.ai",
  "repository": {