@mastra/mcp-docs-server 1.1.1 → 1.1.2
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.docs/docs/deployment/cloud-providers.md +1 -1
- package/.docs/docs/deployment/overview.md +1 -1
- package/.docs/docs/deployment/studio.md +234 -0
- package/.docs/docs/memory/observational-memory.md +86 -11
- package/.docs/docs/streaming/events.md +23 -0
- package/.docs/docs/workspace/filesystem.md +72 -1
- package/.docs/docs/workspace/overview.md +95 -12
- package/.docs/docs/workspace/sandbox.md +2 -0
- package/.docs/guides/agent-frameworks/ai-sdk.md +6 -2
- package/.docs/guides/deployment/cloudflare.md +99 -0
- package/.docs/models/gateways/openrouter.md +6 -3
- package/.docs/models/index.md +1 -1
- package/.docs/models/providers/baseten.md +2 -1
- package/.docs/models/providers/cerebras.md +2 -1
- package/.docs/models/providers/fireworks-ai.md +2 -1
- package/.docs/models/providers/friendli.md +3 -2
- package/.docs/models/providers/huggingface.md +3 -2
- package/.docs/models/providers/jiekou.md +4 -2
- package/.docs/models/providers/minimax-cn-coding-plan.md +3 -2
- package/.docs/models/providers/minimax-cn.md +3 -2
- package/.docs/models/providers/minimax-coding-plan.md +3 -2
- package/.docs/models/providers/minimax.md +3 -2
- package/.docs/models/providers/nano-gpt.md +12 -4
- package/.docs/models/providers/novita-ai.md +4 -2
- package/.docs/models/providers/ollama-cloud.md +3 -1
- package/.docs/models/providers/openai.md +15 -14
- package/.docs/models/providers/opencode.md +31 -32
- package/.docs/models/providers/stackit.md +78 -0
- package/.docs/models/providers/synthetic.md +1 -1
- package/.docs/models/providers/zai-coding-plan.md +3 -2
- package/.docs/models/providers/zai.md +3 -2
- package/.docs/models/providers/zhipuai-coding-plan.md +3 -2
- package/.docs/models/providers/zhipuai.md +3 -2
- package/.docs/models/providers.md +1 -0
- package/.docs/reference/ai-sdk/with-mastra.md +1 -1
- package/.docs/reference/cli/mastra.md +1 -1
- package/.docs/reference/deployer/cloudflare.md +35 -12
- package/.docs/reference/index.md +3 -0
- package/.docs/reference/memory/observational-memory.md +318 -9
- package/.docs/reference/streaming/workflows/stream.md +1 -0
- package/.docs/reference/workflows/workflow-methods/foreach.md +30 -0
- package/.docs/reference/workspace/e2b-sandbox.md +299 -0
- package/.docs/reference/workspace/gcs-filesystem.md +170 -0
- package/.docs/reference/workspace/s3-filesystem.md +169 -0
- package/CHANGELOG.md +14 -0
- package/package.json +6 -6
- package/.docs/guides/deployment/cloudflare-deployer.md +0 -102
package/.docs/models/providers/opencode.md
CHANGED
@@ -1,6 +1,6 @@
 # OpenCode Zen
 
-Access …
+Access 27 OpenCode Zen models through Mastra's model router. Authentication is handled automatically using the `OPENCODE_API_KEY` environment variable.
 
 Learn more in the [OpenCode Zen documentation](https://opencode.ai/docs/zen).
 
@@ -32,36 +32,35 @@ for await (const chunk of stream) {
 
 ## Models
 
-| Model …
-| …
-| `opencode/big-pickle` …
-| `opencode/claude-3-5-haiku` …
-| `opencode/claude-haiku-4-5` …
-| `opencode/claude-opus-4-1` …
-| `opencode/claude-opus-4-5` …
-| `opencode/claude-opus-4-6` …
-| `opencode/claude-sonnet-4` …
-| `opencode/claude-sonnet-4-5` …
-| `opencode/gemini-3-flash` …
-| `opencode/gemini-3-pro` …
-| `opencode/glm-4.6` …
-| `opencode/glm-4.7` …
-| `opencode/gpt-5` …
-| `opencode/gpt-5-codex` …
-| `opencode/gpt-5-nano` …
-| `opencode/gpt-5.1` …
-| `opencode/gpt-5.1-codex` …
-| `opencode/gpt-5.1-codex-max` …
-| `opencode/gpt-5.1-codex-mini` …
-| `opencode/gpt-5.2` …
-| `opencode/gpt-5.2-codex` …
-| `opencode/kimi-k2` …
-| `opencode/kimi-k2-thinking` …
-| `opencode/kimi-k2.5` …
-| `opencode/kimi-k2.5-free` …
-| `opencode/minimax-m2.1` …
-| `opencode/minimax-m2.…
-| `opencode/trinity-large-preview-free` | 131K | | | | | | — | — |
+| Model | Context | Tools | Reasoning | Image | Audio | Video | Input $/1M | Output $/1M |
+| ----------------------------- | ------- | ----- | --------- | ----- | ----- | ----- | ---------- | ----------- |
+| `opencode/big-pickle` | 200K | | | | | | — | — |
+| `opencode/claude-3-5-haiku` | 200K | | | | | | $0.80 | $4 |
+| `opencode/claude-haiku-4-5` | 200K | | | | | | $1 | $5 |
+| `opencode/claude-opus-4-1` | 200K | | | | | | $15 | $75 |
+| `opencode/claude-opus-4-5` | 200K | | | | | | $5 | $25 |
+| `opencode/claude-opus-4-6` | 1.0M | | | | | | $5 | $25 |
+| `opencode/claude-sonnet-4` | 1.0M | | | | | | $3 | $15 |
+| `opencode/claude-sonnet-4-5` | 1.0M | | | | | | $3 | $15 |
+| `opencode/gemini-3-flash` | 1.0M | | | | | | $0.50 | $3 |
+| `opencode/gemini-3-pro` | 1.0M | | | | | | $2 | $12 |
+| `opencode/glm-4.6` | 205K | | | | | | $0.60 | $2 |
+| `opencode/glm-4.7` | 205K | | | | | | $0.60 | $2 |
+| `opencode/gpt-5` | 400K | | | | | | $1 | $9 |
+| `opencode/gpt-5-codex` | 400K | | | | | | $1 | $9 |
+| `opencode/gpt-5-nano` | 400K | | | | | | — | — |
+| `opencode/gpt-5.1` | 400K | | | | | | $1 | $9 |
+| `opencode/gpt-5.1-codex` | 400K | | | | | | $1 | $9 |
+| `opencode/gpt-5.1-codex-max` | 400K | | | | | | $1 | $10 |
+| `opencode/gpt-5.1-codex-mini` | 400K | | | | | | $0.25 | $2 |
+| `opencode/gpt-5.2` | 400K | | | | | | $2 | $14 |
+| `opencode/gpt-5.2-codex` | 400K | | | | | | $2 | $14 |
+| `opencode/kimi-k2` | 262K | | | | | | $0.40 | $3 |
+| `opencode/kimi-k2-thinking` | 262K | | | | | | $0.40 | $3 |
+| `opencode/kimi-k2.5` | 262K | | | | | | $0.60 | $3 |
+| `opencode/kimi-k2.5-free` | 262K | | | | | | — | — |
+| `opencode/minimax-m2.1` | 205K | | | | | | $0.30 | $1 |
+| `opencode/minimax-m2.5-free` | 205K | | | | | | — | — |
 
 ## Advanced Configuration
 
@@ -91,7 +90,7 @@ const agent = new Agent({
   model: ({ requestContext }) => {
     const useAdvanced = requestContext.task === "complex";
     return useAdvanced
-      ? "opencode/…
+      ? "opencode/minimax-m2.5-free"
       : "opencode/big-pickle";
   }
 });
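The `Input $/1M` and `Output $/1M` columns in these model tables are per-million-token prices, so a request's cost is each token count divided by one million times the matching price. A minimal sketch of that arithmetic (the `estimateCost` helper and the token counts are hypothetical; the $1/$9 figures are the `opencode/gpt-5` prices from the table above):

```typescript
// Estimate the dollar cost of one request from the per-million-token
// prices listed in the provider model tables.
function estimateCost(
  inputTokens: number,
  outputTokens: number,
  inputPricePerM: number,
  outputPricePerM: number,
): number {
  return (
    (inputTokens / 1_000_000) * inputPricePerM +
    (outputTokens / 1_000_000) * outputPricePerM
  );
}

// Example: `opencode/gpt-5` at $1 input / $9 output per 1M tokens.
// 100K input tokens and 10K output tokens: $0.10 + $0.09 ≈ $0.19.
console.log(estimateCost(100_000, 10_000, 1, 9));
```

Models priced `—`/`—` in the tables (e.g. the `-free` variants) have no published per-token price.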
package/.docs/models/providers/stackit.md
ADDED
@@ -0,0 +1,78 @@
+# STACKIT
+
+Access 8 STACKIT models through Mastra's model router. Authentication is handled automatically using the `STACKIT_API_KEY` environment variable.
+
+Learn more in the [STACKIT documentation](https://docs.stackit.cloud/products/data-and-ai/ai-model-serving/basics/available-shared-models).
+
+```bash
+STACKIT_API_KEY=your-api-key
+```
+
+```typescript
+import { Agent } from "@mastra/core/agent";
+
+const agent = new Agent({
+  id: "my-agent",
+  name: "My Agent",
+  instructions: "You are a helpful assistant",
+  model: "stackit/Qwen/Qwen3-VL-235B-A22B-Instruct-FP8"
+});
+
+// Generate a response
+const response = await agent.generate("Hello!");
+
+// Stream a response
+const stream = await agent.stream("Tell me a story");
+for await (const chunk of stream) {
+  console.log(chunk);
+}
+```
+
+> **Info:** Mastra uses the OpenAI-compatible `/chat/completions` endpoint. Some provider-specific features may not be available. Check the [STACKIT documentation](https://docs.stackit.cloud/products/data-and-ai/ai-model-serving/basics/available-shared-models) for details.
+
+## Models
+
+| Model | Context | Tools | Reasoning | Image | Audio | Video | Input $/1M | Output $/1M |
+| ---------------------------------------------------- | ------- | ----- | --------- | ----- | ----- | ----- | ---------- | ----------- |
+| `stackit/cortecs/Llama-3.3-70B-Instruct-FP8-Dynamic` | 128K | | | | | | $0.49 | $0.71 |
+| `stackit/google/gemma-3-27b-it` | 37K | | | | | | $0.49 | $0.71 |
+| `stackit/intfloat/e5-mistral-7b-instruct` | 4K | | | | | | $0.02 | $0.02 |
+| `stackit/neuralmagic/Meta-Llama-3.1-8B-Instruct-FP8` | 128K | | | | | | $0.16 | $0.27 |
+| `stackit/neuralmagic/Mistral-Nemo-Instruct-2407-FP8` | 128K | | | | | | $0.49 | $0.71 |
+| `stackit/openai/gpt-oss-120b` | 131K | | | | | | $0.49 | $0.71 |
+| `stackit/Qwen/Qwen3-VL-235B-A22B-Instruct-FP8` | 218K | | | | | | $2 | $2 |
+| `stackit/Qwen/Qwen3-VL-Embedding-8B` | 32K | | | | | | $0.09 | $0.09 |
+
+## Advanced Configuration
+
+### Custom Headers
+
+```typescript
+const agent = new Agent({
+  id: "custom-agent",
+  name: "custom-agent",
+  model: {
+    url: "https://api.openai-compat.model-serving.eu01.onstackit.cloud/v1",
+    id: "stackit/Qwen/Qwen3-VL-235B-A22B-Instruct-FP8",
+    apiKey: process.env.STACKIT_API_KEY,
+    headers: {
+      "X-Custom-Header": "value"
+    }
+  }
+});
+```
+
+### Dynamic Model Selection
+
+```typescript
+const agent = new Agent({
+  id: "dynamic-agent",
+  name: "Dynamic Agent",
+  model: ({ requestContext }) => {
+    const useAdvanced = requestContext.task === "complex";
+    return useAdvanced
+      ? "stackit/openai/gpt-oss-120b"
+      : "stackit/Qwen/Qwen3-VL-235B-A22B-Instruct-FP8";
+  }
+});
+```
package/.docs/models/providers/synthetic.md
CHANGED
@@ -52,12 +52,12 @@ for await (const chunk of stream) {
 | `synthetic/hf:moonshotai/Kimi-K2-Instruct-0905` | 262K | | | | | | $1 | $1 |
 | `synthetic/hf:moonshotai/Kimi-K2-Thinking` | 262K | | | | | | $0.55 | $2 |
 | `synthetic/hf:moonshotai/Kimi-K2.5` | 262K | | | | | | $0.55 | $2 |
+| `synthetic/hf:nvidia/Kimi-K2.5-NVFP4` | 262K | | | | | | $0.55 | $2 |
 | `synthetic/hf:openai/gpt-oss-120b` | 128K | | | | | | $0.10 | $0.10 |
 | `synthetic/hf:Qwen/Qwen2.5-Coder-32B-Instruct` | 33K | | | | | | $0.80 | $0.80 |
 | `synthetic/hf:Qwen/Qwen3-235B-A22B-Instruct-2507` | 256K | | | | | | $0.20 | $0.60 |
 | `synthetic/hf:Qwen/Qwen3-235B-A22B-Thinking-2507` | 256K | | | | | | $0.65 | $3 |
 | `synthetic/hf:Qwen/Qwen3-Coder-480B-A35B-Instruct` | 256K | | | | | | $2 | $2 |
-| `synthetic/hf:zai-org/GLM-4.5` | 128K | | | | | | $0.55 | $2 |
 | `synthetic/hf:zai-org/GLM-4.6` | 200K | | | | | | $0.55 | $2 |
 | `synthetic/hf:zai-org/GLM-4.7` | 200K | | | | | | $0.55 | $2 |
 
package/.docs/models/providers/zai-coding-plan.md
CHANGED
@@ -1,6 +1,6 @@
 # Z.AI Coding Plan
 
-Access …
+Access 9 Z.AI Coding Plan models through Mastra's model router. Authentication is handled automatically using the `ZHIPU_API_KEY` environment variable.
 
 Learn more in the [Z.AI Coding Plan documentation](https://docs.z.ai/devpack/overview).
 
@@ -42,6 +42,7 @@ for await (const chunk of stream) {
 | `zai-coding-plan/glm-4.6v` | 128K | | | | | | — | — |
 | `zai-coding-plan/glm-4.7` | 205K | | | | | | — | — |
 | `zai-coding-plan/glm-4.7-flash` | 200K | | | | | | — | — |
+| `zai-coding-plan/glm-5` | 205K | | | | | | — | — |
 
 ## Advanced Configuration
 
@@ -71,7 +72,7 @@ const agent = new Agent({
   model: ({ requestContext }) => {
     const useAdvanced = requestContext.task === "complex";
     return useAdvanced
-      ? "zai-coding-plan/glm-…
+      ? "zai-coding-plan/glm-5"
      : "zai-coding-plan/glm-4.5";
   }
 });
package/.docs/models/providers/zai.md
CHANGED
@@ -1,6 +1,6 @@
 # Z.AI
 
-Access …
+Access 9 Z.AI models through Mastra's model router. Authentication is handled automatically using the `ZHIPU_API_KEY` environment variable.
 
 Learn more in the [Z.AI documentation](https://docs.z.ai/guides/overview/pricing).
 
@@ -42,6 +42,7 @@ for await (const chunk of stream) {
 | `zai/glm-4.6v` | 128K | | | | | | $0.30 | $0.90 |
 | `zai/glm-4.7` | 205K | | | | | | $0.60 | $2 |
 | `zai/glm-4.7-flash` | 200K | | | | | | — | — |
+| `zai/glm-5` | 205K | | | | | | $1 | $3 |
 
 ## Advanced Configuration
 
@@ -71,7 +72,7 @@ const agent = new Agent({
   model: ({ requestContext }) => {
     const useAdvanced = requestContext.task === "complex";
     return useAdvanced
-      ? "zai/glm-…
+      ? "zai/glm-5"
       : "zai/glm-4.5";
   }
 });
package/.docs/models/providers/zhipuai-coding-plan.md
CHANGED
@@ -1,6 +1,6 @@
 # Zhipu AI Coding Plan
 
-Access …
+Access 9 Zhipu AI Coding Plan models through Mastra's model router. Authentication is handled automatically using the `ZHIPU_API_KEY` environment variable.
 
 Learn more in the [Zhipu AI Coding Plan documentation](https://docs.bigmodel.cn/cn/coding-plan/overview).
 
@@ -42,6 +42,7 @@ for await (const chunk of stream) {
 | `zhipuai-coding-plan/glm-4.6v` | 128K | | | | | | — | — |
 | `zhipuai-coding-plan/glm-4.6v-flash` | 128K | | | | | | — | — |
 | `zhipuai-coding-plan/glm-4.7` | 205K | | | | | | — | — |
+| `zhipuai-coding-plan/glm-5` | 205K | | | | | | — | — |
 
 ## Advanced Configuration
 
@@ -71,7 +72,7 @@ const agent = new Agent({
   model: ({ requestContext }) => {
     const useAdvanced = requestContext.task === "complex";
     return useAdvanced
-      ? "zhipuai-coding-plan/glm-…
+      ? "zhipuai-coding-plan/glm-5"
       : "zhipuai-coding-plan/glm-4.5";
   }
 });
package/.docs/models/providers/zhipuai.md
CHANGED
@@ -1,6 +1,6 @@
 # Zhipu AI
 
-Access …
+Access 9 Zhipu AI models through Mastra's model router. Authentication is handled automatically using the `ZHIPU_API_KEY` environment variable.
 
 Learn more in the [Zhipu AI documentation](https://docs.z.ai/guides/overview/pricing).
 
@@ -42,6 +42,7 @@ for await (const chunk of stream) {
 | `zhipuai/glm-4.6v` | 128K | | | | | | $0.30 | $0.90 |
 | `zhipuai/glm-4.7` | 205K | | | | | | $0.60 | $2 |
 | `zhipuai/glm-4.7-flash` | 200K | | | | | | — | — |
+| `zhipuai/glm-5` | 205K | | | | | | $1 | $3 |
 
 ## Advanced Configuration
 
@@ -71,7 +72,7 @@ const agent = new Agent({
   model: ({ requestContext }) => {
     const useAdvanced = requestContext.task === "complex";
     return useAdvanced
-      ? "zhipuai/glm-…
+      ? "zhipuai/glm-5"
      : "zhipuai/glm-4.5";
   }
 });
package/.docs/models/providers.md
CHANGED
@@ -63,6 +63,7 @@ Direct access to individual AI model providers. Each provider offers unique mode…
 - [Scaleway](https://mastra.ai/models/providers/scaleway)
 - [SiliconFlow](https://mastra.ai/models/providers/siliconflow)
 - [SiliconFlow (China)](https://mastra.ai/models/providers/siliconflow-cn)
+- [STACKIT](https://mastra.ai/models/providers/stackit)
 - [submodel](https://mastra.ai/models/providers/submodel)
 - [Synthetic](https://mastra.ai/models/providers/synthetic)
 - [Together AI](https://mastra.ai/models/providers/togetherai)
package/.docs/reference/ai-sdk/with-mastra.md
CHANGED
@@ -40,7 +40,7 @@ const { text } = await generateText({
 
 **options.memory?:** (`WithMastraMemoryOptions`): Memory configuration - enables automatic message history persistence.
 
-**options.memory.storage:** (`MemoryStorage`):
+**options.memory.storage:** (`MemoryStorage`): Memory storage domain for message persistence. Get it from a composite store using `await storage.getStore('memory')`.
 
 **options.memory.threadId:** (`string`): Thread ID for conversation persistence.
 
package/.docs/reference/cli/mastra.md
CHANGED
@@ -157,7 +157,7 @@ Comma-separated list of custom arguments to pass to the Node.js process, e.g. `-…
 
 ## `mastra studio`
 
-Starts [Mastra Studio](https://mastra.ai/docs/getting-started/studio) as a static server. After starting, you can enter your Mastra instance URL (e.g. `http://localhost:4111`) to connect Studio to your Mastra backend.
+Starts [Mastra Studio](https://mastra.ai/docs/getting-started/studio) as a static server. After starting, you can enter your Mastra instance URL (e.g. `http://localhost:4111`) to connect Studio to your Mastra backend. Looks for `.env` and `.env.production` files in the current working directory for configuration.
 
 ### Flags
 
package/.docs/reference/deployer/cloudflare.md
CHANGED
@@ -1,16 +1,46 @@
 # CloudflareDeployer
 
-The `CloudflareDeployer` …
+The `CloudflareDeployer` bundles your Mastra server and generates a `wrangler.jsonc` file conforming to Cloudflare's [wrangler configuration](https://developers.cloudflare.com/workers/wrangler/configuration/). Cloudflare deploys this as a Cloudflare Worker.
+
+## Installation
+
+To use `CloudflareDeployer`, install the `@mastra/deployer-cloudflare` package:
+
+**npm**:
+
+```bash
+npm install @mastra/deployer-cloudflare@latest
+```
+
+**pnpm**:
+
+```bash
+pnpm add @mastra/deployer-cloudflare@latest
+```
+
+**Yarn**:
+
+```bash
+yarn add @mastra/deployer-cloudflare@latest
+```
+
+**Bun**:
+
+```bash
+bun add @mastra/deployer-cloudflare@latest
+```
 
 ## Usage example
 
+Import `CloudflareDeployer` and set it as the deployer in your Mastra configuration:
+
 ```typescript
 import { Mastra } from "@mastra/core";
 import { CloudflareDeployer } from "@mastra/deployer-cloudflare";
 
 export const mastra = new Mastra({
   deployer: new CloudflareDeployer({
-    name: "…
+    name: "your-project-name",
     routes: [
       {
         pattern: "example.com/*",
@@ -42,15 +72,8 @@ export const mastra = new Mastra({
 
 ## Constructor options
 
-The `CloudflareDeployer` constructor accepts the same configuration options as `wrangler.…
-
-### Migrating from earlier versions
+The `CloudflareDeployer` constructor accepts the same configuration options as `wrangler.jsonc`. See the [Wrangler configuration documentation](https://developers.cloudflare.com/workers/wrangler/configuration/) for all available options.
 
-
+## Build output
 
-
-| ----------------- | --------------------------- |
-| `projectName` | `name` |
-| `d1Databases` | `d1_databases` |
-| `kvNamespaces` | `kv_namespaces` |
-| `workerNamespace` | _(removed, no longer used)_ |
+After running `mastra build`, the deployer generates a `wrangler.jsonc` file conforming to Cloudflare's [wrangler configuration](https://developers.cloudflare.com/workers/wrangler/configuration/). It points to files inside `.mastra/output`, so you need to run `mastra build` before deploying with Wrangler.
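For reference, the generated `wrangler.jsonc` is a standard Wrangler config file. A hypothetical sketch of the shape such a file might take, under the assumption that the `name` and `routes` options are carried over from the deployer config (the `main` path and `compatibility_date` here are illustrative, not what the deployer actually emits):

```jsonc
{
  // Worker name, from the `name` option passed to CloudflareDeployer
  "name": "your-project-name",
  // Entry point inside the `.mastra/output` build directory (illustrative path)
  "main": "./index.mjs",
  "compatibility_date": "2025-01-01",
  // Routes, from the `routes` option
  "routes": [
    { "pattern": "example.com/*", "zone_name": "example.com", "custom_domain": false }
  ]
}
```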
package/.docs/reference/index.md
CHANGED
@@ -164,7 +164,10 @@ The Reference section provides documentation of Mastra's API, including paramete…
 - [Upstash Storage](https://mastra.ai/reference/storage/upstash)
 - [Workspace Class](https://mastra.ai/reference/workspace/workspace-class)
 - [LocalFilesystem](https://mastra.ai/reference/workspace/local-filesystem)
+- [S3Filesystem](https://mastra.ai/reference/workspace/s3-filesystem)
+- [GCSFilesystem](https://mastra.ai/reference/workspace/gcs-filesystem)
 - [LocalSandbox](https://mastra.ai/reference/workspace/local-sandbox)
+- [E2BSandbox](https://mastra.ai/reference/workspace/e2b-sandbox)
 - [WorkspaceFilesystem](https://mastra.ai/reference/workspace/filesystem)
 - [WorkspaceSandbox](https://mastra.ai/reference/workspace/sandbox)
 - [.stream()](https://mastra.ai/reference/streaming/agents/stream)