@mastra/mcp-docs-server 1.1.1 → 1.1.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (47)
  1. package/.docs/docs/deployment/cloud-providers.md +1 -1
  2. package/.docs/docs/deployment/overview.md +1 -1
  3. package/.docs/docs/deployment/studio.md +234 -0
  4. package/.docs/docs/memory/observational-memory.md +86 -11
  5. package/.docs/docs/streaming/events.md +23 -0
  6. package/.docs/docs/workspace/filesystem.md +72 -1
  7. package/.docs/docs/workspace/overview.md +95 -12
  8. package/.docs/docs/workspace/sandbox.md +2 -0
  9. package/.docs/guides/agent-frameworks/ai-sdk.md +6 -2
  10. package/.docs/guides/deployment/cloudflare.md +99 -0
  11. package/.docs/models/gateways/openrouter.md +6 -3
  12. package/.docs/models/index.md +1 -1
  13. package/.docs/models/providers/baseten.md +2 -1
  14. package/.docs/models/providers/cerebras.md +2 -1
  15. package/.docs/models/providers/fireworks-ai.md +2 -1
  16. package/.docs/models/providers/friendli.md +3 -2
  17. package/.docs/models/providers/huggingface.md +3 -2
  18. package/.docs/models/providers/jiekou.md +4 -2
  19. package/.docs/models/providers/minimax-cn-coding-plan.md +3 -2
  20. package/.docs/models/providers/minimax-cn.md +3 -2
  21. package/.docs/models/providers/minimax-coding-plan.md +3 -2
  22. package/.docs/models/providers/minimax.md +3 -2
  23. package/.docs/models/providers/nano-gpt.md +12 -4
  24. package/.docs/models/providers/novita-ai.md +4 -2
  25. package/.docs/models/providers/ollama-cloud.md +3 -1
  26. package/.docs/models/providers/openai.md +15 -14
  27. package/.docs/models/providers/opencode.md +31 -32
  28. package/.docs/models/providers/stackit.md +78 -0
  29. package/.docs/models/providers/synthetic.md +1 -1
  30. package/.docs/models/providers/zai-coding-plan.md +3 -2
  31. package/.docs/models/providers/zai.md +3 -2
  32. package/.docs/models/providers/zhipuai-coding-plan.md +3 -2
  33. package/.docs/models/providers/zhipuai.md +3 -2
  34. package/.docs/models/providers.md +1 -0
  35. package/.docs/reference/ai-sdk/with-mastra.md +1 -1
  36. package/.docs/reference/cli/mastra.md +1 -1
  37. package/.docs/reference/deployer/cloudflare.md +35 -12
  38. package/.docs/reference/index.md +3 -0
  39. package/.docs/reference/memory/observational-memory.md +318 -9
  40. package/.docs/reference/streaming/workflows/stream.md +1 -0
  41. package/.docs/reference/workflows/workflow-methods/foreach.md +30 -0
  42. package/.docs/reference/workspace/e2b-sandbox.md +299 -0
  43. package/.docs/reference/workspace/gcs-filesystem.md +170 -0
  44. package/.docs/reference/workspace/s3-filesystem.md +169 -0
  45. package/CHANGELOG.md +14 -0
  46. package/package.json +6 -6
  47. package/.docs/guides/deployment/cloudflare-deployer.md +0 -102
@@ -1,6 +1,6 @@
  # ![OpenCode Zen logo](https://models.dev/logos/opencode.svg)OpenCode Zen

- Access 28 OpenCode Zen models through Mastra's model router. Authentication is handled automatically using the `OPENCODE_API_KEY` environment variable.
+ Access 27 OpenCode Zen models through Mastra's model router. Authentication is handled automatically using the `OPENCODE_API_KEY` environment variable.

  Learn more in the [OpenCode Zen documentation](https://opencode.ai/docs/zen).

@@ -32,36 +32,35 @@ for await (const chunk of stream) {

  ## Models

- | Model                                 | Context | Tools | Reasoning | Image | Audio | Video | Input $/1M | Output $/1M |
- | ------------------------------------- | ------- | ----- | --------- | ----- | ----- | ----- | ---------- | ----------- |
- | `opencode/big-pickle`                 | 200K    |       |           |       |       |       | —          | —           |
- | `opencode/claude-3-5-haiku`           | 200K    |       |           |       |       |       | $0.80      | $4          |
- | `opencode/claude-haiku-4-5`           | 200K    |       |           |       |       |       | $1         | $5          |
- | `opencode/claude-opus-4-1`            | 200K    |       |           |       |       |       | $15        | $75         |
- | `opencode/claude-opus-4-5`            | 200K    |       |           |       |       |       | $5         | $25         |
- | `opencode/claude-opus-4-6`            | 1.0M    |       |           |       |       |       | $5         | $25         |
- | `opencode/claude-sonnet-4`            | 1.0M    |       |           |       |       |       | $3         | $15         |
- | `opencode/claude-sonnet-4-5`          | 1.0M    |       |           |       |       |       | $3         | $15         |
- | `opencode/gemini-3-flash`             | 1.0M    |       |           |       |       |       | $0.50      | $3          |
- | `opencode/gemini-3-pro`               | 1.0M    |       |           |       |       |       | $2         | $12         |
- | `opencode/glm-4.6`                    | 205K    |       |           |       |       |       | $0.60      | $2          |
- | `opencode/glm-4.7`                    | 205K    |       |           |       |       |       | $0.60      | $2          |
- | `opencode/gpt-5`                      | 272K    |       |           |       |       |       | $1         | $9          |
- | `opencode/gpt-5-codex`                | 272K    |       |           |       |       |       | $1         | $9          |
- | `opencode/gpt-5-nano`                 | 272K    |       |           |       |       |       | —          | —           |
- | `opencode/gpt-5.1`                    | 272K    |       |           |       |       |       | $1         | $9          |
- | `opencode/gpt-5.1-codex`              | 272K    |       |           |       |       |       | $1         | $9          |
- | `opencode/gpt-5.1-codex-max`          | 272K    |       |           |       |       |       | $1         | $10         |
- | `opencode/gpt-5.1-codex-mini`         | 272K    |       |           |       |       |       | $0.25      | $2          |
- | `opencode/gpt-5.2`                    | 272K    |       |           |       |       |       | $2         | $14         |
- | `opencode/gpt-5.2-codex`              | 272K    |       |           |       |       |       | $2         | $14         |
- | `opencode/kimi-k2`                    | 262K    |       |           |       |       |       | $0.40      | $3          |
- | `opencode/kimi-k2-thinking`           | 262K    |       |           |       |       |       | $0.40      | $3          |
- | `opencode/kimi-k2.5`                  | 262K    |       |           |       |       |       | $0.60      | $3          |
- | `opencode/kimi-k2.5-free`             | 262K    |       |           |       |       |       | —          | —           |
- | `opencode/minimax-m2.1`               | 205K    |       |           |       |       |       | $0.30      | $1          |
- | `opencode/minimax-m2.1-free`          | 205K    |       |           |       |       |       | —          | —           |
- | `opencode/trinity-large-preview-free` | 131K    |       |           |       |       |       | —          | —           |
+ | Model                         | Context | Tools | Reasoning | Image | Audio | Video | Input $/1M | Output $/1M |
+ | ----------------------------- | ------- | ----- | --------- | ----- | ----- | ----- | ---------- | ----------- |
+ | `opencode/big-pickle`         | 200K    |       |           |       |       |       | —          | —           |
+ | `opencode/claude-3-5-haiku`   | 200K    |       |           |       |       |       | $0.80      | $4          |
+ | `opencode/claude-haiku-4-5`   | 200K    |       |           |       |       |       | $1         | $5          |
+ | `opencode/claude-opus-4-1`    | 200K    |       |           |       |       |       | $15        | $75         |
+ | `opencode/claude-opus-4-5`    | 200K    |       |           |       |       |       | $5         | $25         |
+ | `opencode/claude-opus-4-6`    | 1.0M    |       |           |       |       |       | $5         | $25         |
+ | `opencode/claude-sonnet-4`    | 1.0M    |       |           |       |       |       | $3         | $15         |
+ | `opencode/claude-sonnet-4-5`  | 1.0M    |       |           |       |       |       | $3         | $15         |
+ | `opencode/gemini-3-flash`     | 1.0M    |       |           |       |       |       | $0.50      | $3          |
+ | `opencode/gemini-3-pro`       | 1.0M    |       |           |       |       |       | $2         | $12         |
+ | `opencode/glm-4.6`            | 205K    |       |           |       |       |       | $0.60      | $2          |
+ | `opencode/glm-4.7`            | 205K    |       |           |       |       |       | $0.60      | $2          |
+ | `opencode/gpt-5`              | 400K    |       |           |       |       |       | $1         | $9          |
+ | `opencode/gpt-5-codex`        | 400K    |       |           |       |       |       | $1         | $9          |
+ | `opencode/gpt-5-nano`         | 400K    |       |           |       |       |       | —          | —           |
+ | `opencode/gpt-5.1`            | 400K    |       |           |       |       |       | $1         | $9          |
+ | `opencode/gpt-5.1-codex`      | 400K    |       |           |       |       |       | $1         | $9          |
+ | `opencode/gpt-5.1-codex-max`  | 400K    |       |           |       |       |       | $1         | $10         |
+ | `opencode/gpt-5.1-codex-mini` | 400K    |       |           |       |       |       | $0.25      | $2          |
+ | `opencode/gpt-5.2`            | 400K    |       |           |       |       |       | $2         | $14         |
+ | `opencode/gpt-5.2-codex`      | 400K    |       |           |       |       |       | $2         | $14         |
+ | `opencode/kimi-k2`            | 262K    |       |           |       |       |       | $0.40      | $3          |
+ | `opencode/kimi-k2-thinking`   | 262K    |       |           |       |       |       | $0.40      | $3          |
+ | `opencode/kimi-k2.5`          | 262K    |       |           |       |       |       | $0.60      | $3          |
+ | `opencode/kimi-k2.5-free`     | 262K    |       |           |       |       |       | —          | —           |
+ | `opencode/minimax-m2.1`       | 205K    |       |           |       |       |       | $0.30      | $1          |
+ | `opencode/minimax-m2.5-free`  | 205K    |       |           |       |       |       | —          | —           |

  ## Advanced Configuration

@@ -91,7 +90,7 @@ const agent = new Agent({
    model: ({ requestContext }) => {
      const useAdvanced = requestContext.task === "complex";
      return useAdvanced
-       ? "opencode/trinity-large-preview-free"
+       ? "opencode/minimax-m2.5-free"
        : "opencode/big-pickle";
    }
  });
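The "Input $/1M" and "Output $/1M" columns in the tables above are USD per one million tokens. Independent of Mastra's own APIs, a request's cost under those rates can be estimated with a small helper (the function name and the sample token counts below are illustrative, not from the package):

```typescript
// Estimate request cost in USD from per-million-token rates,
// as listed in the "Input $/1M" / "Output $/1M" table columns.
function estimateCost(
  inputTokens: number,
  outputTokens: number,
  inputRatePerM: number,
  outputRatePerM: number,
): number {
  return (
    (inputTokens / 1_000_000) * inputRatePerM +
    (outputTokens / 1_000_000) * outputRatePerM
  );
}

// Example: `opencode/kimi-k2` at $0.40 in / $3 out,
// for a 200K-token input and a 10K-token output (≈ $0.11 total).
const cost = estimateCost(200_000, 10_000, 0.4, 3);
console.log(cost.toFixed(2));
```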
@@ -0,0 +1,78 @@
+ # ![STACKIT logo](https://models.dev/logos/stackit.svg)STACKIT
+
+ Access 8 STACKIT models through Mastra's model router. Authentication is handled automatically using the `STACKIT_API_KEY` environment variable.
+
+ Learn more in the [STACKIT documentation](https://docs.stackit.cloud/products/data-and-ai/ai-model-serving/basics/available-shared-models).
+
+ ```bash
+ STACKIT_API_KEY=your-api-key
+ ```
+
+ ```typescript
+ import { Agent } from "@mastra/core/agent";
+
+ const agent = new Agent({
+   id: "my-agent",
+   name: "My Agent",
+   instructions: "You are a helpful assistant",
+   model: "stackit/Qwen/Qwen3-VL-235B-A22B-Instruct-FP8"
+ });
+
+ // Generate a response
+ const response = await agent.generate("Hello!");
+
+ // Stream a response
+ const stream = await agent.stream("Tell me a story");
+ for await (const chunk of stream) {
+   console.log(chunk);
+ }
+ ```
+
+ > **Info:** Mastra uses the OpenAI-compatible `/chat/completions` endpoint. Some provider-specific features may not be available. Check the [STACKIT documentation](https://docs.stackit.cloud/products/data-and-ai/ai-model-serving/basics/available-shared-models) for details.
+
+ ## Models
+
+ | Model                                                | Context | Tools | Reasoning | Image | Audio | Video | Input $/1M | Output $/1M |
+ | ---------------------------------------------------- | ------- | ----- | --------- | ----- | ----- | ----- | ---------- | ----------- |
+ | `stackit/cortecs/Llama-3.3-70B-Instruct-FP8-Dynamic` | 128K    |       |           |       |       |       | $0.49      | $0.71       |
+ | `stackit/google/gemma-3-27b-it`                      | 37K     |       |           |       |       |       | $0.49      | $0.71       |
+ | `stackit/intfloat/e5-mistral-7b-instruct`            | 4K      |       |           |       |       |       | $0.02      | $0.02       |
+ | `stackit/neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8` | 128K    |       |           |       |       |       | $0.16      | $0.27       |
+ | `stackit/neuralmagic/Mistral-Nemo-Instruct-2407-FP8` | 128K    |       |           |       |       |       | $0.49      | $0.71       |
+ | `stackit/openai/gpt-oss-120b`                        | 131K    |       |           |       |       |       | $0.49      | $0.71       |
+ | `stackit/Qwen/Qwen3-VL-235B-A22B-Instruct-FP8`       | 218K    |       |           |       |       |       | $2         | $2          |
+ | `stackit/Qwen/Qwen3-VL-Embedding-8B`                 | 32K     |       |           |       |       |       | $0.09      | $0.09       |
+
+ ## Advanced Configuration
+
+ ### Custom Headers
+
+ ```typescript
+ const agent = new Agent({
+   id: "custom-agent",
+   name: "custom-agent",
+   model: {
+     url: "https://api.openai-compat.model-serving.eu01.onstackit.cloud/v1",
+     id: "stackit/Qwen/Qwen3-VL-235B-A22B-Instruct-FP8",
+     apiKey: process.env.STACKIT_API_KEY,
+     headers: {
+       "X-Custom-Header": "value"
+     }
+   }
+ });
+ ```
+
+ ### Dynamic Model Selection
+
+ ```typescript
+ const agent = new Agent({
+   id: "dynamic-agent",
+   name: "Dynamic Agent",
+   model: ({ requestContext }) => {
+     const useAdvanced = requestContext.task === "complex";
+     return useAdvanced
+       ? "stackit/openai/gpt-oss-120b"
+       : "stackit/Qwen/Qwen3-VL-235B-A22B-Instruct-FP8";
+   }
+ });
+ ```
@@ -52,12 +52,12 @@ for await (const chunk of stream) {
  | `synthetic/hf:moonshotai/Kimi-K2-Instruct-0905`    | 262K    |       |           |       |       |       | $1         | $1          |
  | `synthetic/hf:moonshotai/Kimi-K2-Thinking`         | 262K    |       |           |       |       |       | $0.55      | $2          |
  | `synthetic/hf:moonshotai/Kimi-K2.5`                | 262K    |       |           |       |       |       | $0.55      | $2          |
+ | `synthetic/hf:nvidia/Kimi-K2.5-NVFP4`              | 262K    |       |           |       |       |       | $0.55      | $2          |
  | `synthetic/hf:openai/gpt-oss-120b`                 | 128K    |       |           |       |       |       | $0.10      | $0.10       |
  | `synthetic/hf:Qwen/Qwen2.5-Coder-32B-Instruct`     | 33K     |       |           |       |       |       | $0.80      | $0.80       |
  | `synthetic/hf:Qwen/Qwen3-235B-A22B-Instruct-2507`  | 256K    |       |           |       |       |       | $0.20      | $0.60       |
  | `synthetic/hf:Qwen/Qwen3-235B-A22B-Thinking-2507`  | 256K    |       |           |       |       |       | $0.65      | $3          |
  | `synthetic/hf:Qwen/Qwen3-Coder-480B-A35B-Instruct` | 256K    |       |           |       |       |       | $2         | $2          |
- | `synthetic/hf:zai-org/GLM-4.5`                     | 128K    |       |           |       |       |       | $0.55      | $2          |
  | `synthetic/hf:zai-org/GLM-4.6`                     | 200K    |       |           |       |       |       | $0.55      | $2          |
  | `synthetic/hf:zai-org/GLM-4.7`                     | 200K    |       |           |       |       |       | $0.55      | $2          |

@@ -1,6 +1,6 @@
  # ![Z.AI Coding Plan logo](https://models.dev/logos/zai-coding-plan.svg)Z.AI Coding Plan

- Access 8 Z.AI Coding Plan models through Mastra's model router. Authentication is handled automatically using the `ZHIPU_API_KEY` environment variable.
+ Access 9 Z.AI Coding Plan models through Mastra's model router. Authentication is handled automatically using the `ZHIPU_API_KEY` environment variable.

  Learn more in the [Z.AI Coding Plan documentation](https://docs.z.ai/devpack/overview).

@@ -42,6 +42,7 @@ for await (const chunk of stream) {
  | `zai-coding-plan/glm-4.6v`      | 128K    |       |           |       |       |       | —          | —           |
  | `zai-coding-plan/glm-4.7`       | 205K    |       |           |       |       |       | —          | —           |
  | `zai-coding-plan/glm-4.7-flash` | 200K    |       |           |       |       |       | —          | —           |
+ | `zai-coding-plan/glm-5`         | 205K    |       |           |       |       |       | —          | —           |

  ## Advanced Configuration

@@ -71,7 +72,7 @@ const agent = new Agent({
    model: ({ requestContext }) => {
      const useAdvanced = requestContext.task === "complex";
      return useAdvanced
-       ? "zai-coding-plan/glm-4.7-flash"
+       ? "zai-coding-plan/glm-5"
        : "zai-coding-plan/glm-4.5";
    }
  });
@@ -1,6 +1,6 @@
  # ![Z.AI logo](https://models.dev/logos/zai.svg)Z.AI

- Access 8 Z.AI models through Mastra's model router. Authentication is handled automatically using the `ZHIPU_API_KEY` environment variable.
+ Access 9 Z.AI models through Mastra's model router. Authentication is handled automatically using the `ZHIPU_API_KEY` environment variable.

  Learn more in the [Z.AI documentation](https://docs.z.ai/guides/overview/pricing).

@@ -42,6 +42,7 @@ for await (const chunk of stream) {
  | `zai/glm-4.6v`      | 128K    |       |           |       |       |       | $0.30      | $0.90       |
  | `zai/glm-4.7`       | 205K    |       |           |       |       |       | $0.60      | $2          |
  | `zai/glm-4.7-flash` | 200K    |       |           |       |       |       | —          | —           |
+ | `zai/glm-5`         | 205K    |       |           |       |       |       | $1         | $3          |

  ## Advanced Configuration

@@ -71,7 +72,7 @@ const agent = new Agent({
    model: ({ requestContext }) => {
      const useAdvanced = requestContext.task === "complex";
      return useAdvanced
-       ? "zai/glm-4.7-flash"
+       ? "zai/glm-5"
        : "zai/glm-4.5";
    }
  });
@@ -1,6 +1,6 @@
  # ![Zhipu AI Coding Plan logo](https://models.dev/logos/zhipuai-coding-plan.svg)Zhipu AI Coding Plan

- Access 8 Zhipu AI Coding Plan models through Mastra's model router. Authentication is handled automatically using the `ZHIPU_API_KEY` environment variable.
+ Access 9 Zhipu AI Coding Plan models through Mastra's model router. Authentication is handled automatically using the `ZHIPU_API_KEY` environment variable.

  Learn more in the [Zhipu AI Coding Plan documentation](https://docs.bigmodel.cn/cn/coding-plan/overview).

@@ -42,6 +42,7 @@ for await (const chunk of stream) {
  | `zhipuai-coding-plan/glm-4.6v`       | 128K    |       |           |       |       |       | —          | —           |
  | `zhipuai-coding-plan/glm-4.6v-flash` | 128K    |       |           |       |       |       | —          | —           |
  | `zhipuai-coding-plan/glm-4.7`        | 205K    |       |           |       |       |       | —          | —           |
+ | `zhipuai-coding-plan/glm-5`          | 205K    |       |           |       |       |       | —          | —           |

  ## Advanced Configuration

@@ -71,7 +72,7 @@ const agent = new Agent({
    model: ({ requestContext }) => {
      const useAdvanced = requestContext.task === "complex";
      return useAdvanced
-       ? "zhipuai-coding-plan/glm-4.7"
+       ? "zhipuai-coding-plan/glm-5"
        : "zhipuai-coding-plan/glm-4.5";
    }
  });
@@ -1,6 +1,6 @@
  # ![Zhipu AI logo](https://models.dev/logos/zhipuai.svg)Zhipu AI

- Access 8 Zhipu AI models through Mastra's model router. Authentication is handled automatically using the `ZHIPU_API_KEY` environment variable.
+ Access 9 Zhipu AI models through Mastra's model router. Authentication is handled automatically using the `ZHIPU_API_KEY` environment variable.

  Learn more in the [Zhipu AI documentation](https://docs.z.ai/guides/overview/pricing).

@@ -42,6 +42,7 @@ for await (const chunk of stream) {
  | `zhipuai/glm-4.6v`      | 128K    |       |           |       |       |       | $0.30      | $0.90       |
  | `zhipuai/glm-4.7`       | 205K    |       |           |       |       |       | $0.60      | $2          |
  | `zhipuai/glm-4.7-flash` | 200K    |       |           |       |       |       | —          | —           |
+ | `zhipuai/glm-5`         | 205K    |       |           |       |       |       | $1         | $3          |

  ## Advanced Configuration

@@ -71,7 +72,7 @@ const agent = new Agent({
    model: ({ requestContext }) => {
      const useAdvanced = requestContext.task === "complex";
      return useAdvanced
-       ? "zhipuai/glm-4.7-flash"
+       ? "zhipuai/glm-5"
        : "zhipuai/glm-4.5";
    }
  });
@@ -63,6 +63,7 @@ Direct access to individual AI model providers. Each provider offers unique mode
  - [Scaleway](https://mastra.ai/models/providers/scaleway)
  - [SiliconFlow](https://mastra.ai/models/providers/siliconflow)
  - [SiliconFlow (China)](https://mastra.ai/models/providers/siliconflow-cn)
+ - [STACKIT](https://mastra.ai/models/providers/stackit)
  - [submodel](https://mastra.ai/models/providers/submodel)
  - [Synthetic](https://mastra.ai/models/providers/synthetic)
  - [Together AI](https://mastra.ai/models/providers/togetherai)
@@ -40,7 +40,7 @@ const { text } = await generateText({

  **options.memory?:** (`WithMastraMemoryOptions`): Memory configuration - enables automatic message history persistence.

- **options.memory.storage:** (`MemoryStorage`): Storage adapter for message persistence (e.g., LibSQLStore, PostgresStore).
+ **options.memory.storage:** (`MemoryStorage`): Memory storage domain for message persistence. Get it from a composite store using `await storage.getStore('memory')`.

  **options.memory.threadId:** (`string`): Thread ID for conversation persistence.

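The changed `options.memory.storage` description refers to fetching the memory domain from a composite store via `await storage.getStore('memory')`. The shape of that pattern can be mimicked with a minimal, self-contained sketch; every type and method name here except `getStore` is a hypothetical stand-in, not Mastra's real classes:

```typescript
// Hypothetical stand-in for a per-domain memory store.
interface MemoryStore {
  saveMessage(threadId: string, message: string): Promise<void>;
  getMessages(threadId: string): Promise<string[]>;
}

// In-memory implementation keyed by thread ID.
class InMemoryMemoryStore implements MemoryStore {
  private threads = new Map<string, string[]>();
  async saveMessage(threadId: string, message: string): Promise<void> {
    const list = this.threads.get(threadId) ?? [];
    list.push(message);
    this.threads.set(threadId, list);
  }
  async getMessages(threadId: string): Promise<string[]> {
    return this.threads.get(threadId) ?? [];
  }
}

// Composite store exposing per-domain sub-stores, mirroring the
// `await storage.getStore('memory')` call from the changed line.
class CompositeStore {
  private domains = new Map<string, MemoryStore>([
    ["memory", new InMemoryMemoryStore()],
  ]);
  async getStore(domain: "memory"): Promise<MemoryStore> {
    return this.domains.get(domain) as MemoryStore;
  }
}

// Usage: fetch the memory domain, then persist a message.
const storage = new CompositeStore();
const memory = await storage.getStore("memory");
await memory.saveMessage("thread-1", "Hello!");
```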
@@ -157,7 +157,7 @@ Comma-separated list of custom arguments to pass to the Node.js process, e.g. `-

  ## `mastra studio`

- Starts [Mastra Studio](https://mastra.ai/docs/getting-started/studio) as a static server. After starting, you can enter your Mastra instance URL (e.g. `http://localhost:4111`) to connect Studio to your Mastra backend.
+ Starts [Mastra Studio](https://mastra.ai/docs/getting-started/studio) as a static server. After starting, you can enter your Mastra instance URL (e.g. `http://localhost:4111`) to connect Studio to your Mastra backend. Looks for `.env` and `.env.production` files in the current working directory for configuration.

  ### Flags

@@ -1,16 +1,46 @@
  # CloudflareDeployer

- The `CloudflareDeployer` class handles deployment of standalone Mastra applications to Cloudflare Workers. It manages configuration, deployment, and extends the base [Deployer](https://mastra.ai/reference/deployer) class with Cloudflare specific functionality.
+ The `CloudflareDeployer` bundles your Mastra server and generates a `wrangler.jsonc` file conforming to Cloudflare's [wrangler configuration](https://developers.cloudflare.com/workers/wrangler/configuration/). Cloudflare deploys this as a Cloudflare Worker.
+
+ ## Installation
+
+ To use `CloudflareDeployer`, install the `@mastra/deployer-cloudflare` package:
+
+ **npm**:
+
+ ```bash
+ npm install @mastra/deployer-cloudflare@latest
+ ```
+
+ **pnpm**:
+
+ ```bash
+ pnpm add @mastra/deployer-cloudflare@latest
+ ```
+
+ **Yarn**:
+
+ ```bash
+ yarn add @mastra/deployer-cloudflare@latest
+ ```
+
+ **Bun**:
+
+ ```bash
+ bun add @mastra/deployer-cloudflare@latest
+ ```

  ## Usage example

+ Import `CloudflareDeployer` and set it as the deployer in your Mastra configuration:
+
  ```typescript
  import { Mastra } from "@mastra/core";
  import { CloudflareDeployer } from "@mastra/deployer-cloudflare";

  export const mastra = new Mastra({
    deployer: new CloudflareDeployer({
-     name: "hello-mastra",
+     name: "your-project-name",
      routes: [
        {
          pattern: "example.com/*",
@@ -42,15 +72,8 @@ export const mastra = new Mastra({

  ## Constructor options

- The `CloudflareDeployer` constructor accepts the same configuration options as `wrangler.json`. See the [Wrangler configuration documentation](https://developers.cloudflare.com/workers/wrangler/configuration/) for all available options.
-
- ### Migrating from earlier versions
+ The `CloudflareDeployer` constructor accepts the same configuration options as `wrangler.jsonc`. See the [Wrangler configuration documentation](https://developers.cloudflare.com/workers/wrangler/configuration/) for all available options.

- The following fields are deprecated and should be replaced with their standard `wrangler.json` equivalents:
+ ## Build output

- | Deprecated        | Use instead                 |
- | ----------------- | --------------------------- |
- | `projectName`     | `name`                      |
- | `d1Databases`     | `d1_databases`              |
- | `kvNamespaces`    | `kv_namespaces`             |
- | `workerNamespace` | _(removed, no longer used)_ |
+ After running `mastra build`, the deployer generates a `wrangler.jsonc` file conforming to Cloudflare's [wrangler configuration](https://developers.cloudflare.com/workers/wrangler/configuration/). It points to files inside `.mastra/output`, so you need to run `mastra build` before deploying with Wrangler.
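The new "Build output" text says the deployer emits a `wrangler.jsonc` that points into `.mastra/output`. A plausible shape for such a generated file, where every field value is hypothetical rather than taken from the package, would be:

```jsonc
{
  // Illustrative sketch of a deployer-generated wrangler.jsonc;
  // the entry-point path and dates are assumptions, not from the diff.
  "name": "your-project-name",
  "main": ".mastra/output/index.mjs",
  "compatibility_date": "2025-01-01",
  "routes": [{ "pattern": "example.com/*", "custom_domain": true }]
}
```

With a file of this shape in place, `wrangler deploy` would publish the bundled worker, which is why `mastra build` must run first.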
@@ -164,7 +164,10 @@ The Reference section provides documentation of Mastra's API, including paramete
  - [Upstash Storage](https://mastra.ai/reference/storage/upstash)
  - [Workspace Class](https://mastra.ai/reference/workspace/workspace-class)
  - [LocalFilesystem](https://mastra.ai/reference/workspace/local-filesystem)
+ - [S3Filesystem](https://mastra.ai/reference/workspace/s3-filesystem)
+ - [GCSFilesystem](https://mastra.ai/reference/workspace/gcs-filesystem)
  - [LocalSandbox](https://mastra.ai/reference/workspace/local-sandbox)
+ - [E2BSandbox](https://mastra.ai/reference/workspace/e2b-sandbox)
  - [WorkspaceFilesystem](https://mastra.ai/reference/workspace/filesystem)
  - [WorkspaceSandbox](https://mastra.ai/reference/workspace/sandbox)
  - [.stream()](https://mastra.ai/reference/streaming/agents/stream)