@mastra/mcp-docs-server 1.1.26-alpha.10 → 1.1.26-alpha.13

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -122,6 +122,35 @@ new LangfuseExporter({
  })
  ```

+ #### Batch Tuning for High-Volume Traces
+
+ For self-hosted Langfuse deployments or streamed runs that produce many spans per second, you can tune the OTEL batch size and flush interval to reduce request pressure on the Langfuse ingestion endpoint:
+
+ ```typescript
+ new LangfuseExporter({
+   publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
+   secretKey: process.env.LANGFUSE_SECRET_KEY!,
+   flushAt: 500, // Maximum spans per OTEL export batch
+   flushInterval: 20, // Maximum seconds between flushes
+ })
+ ```
+
+ To suppress high-volume span types entirely (for example `MODEL_CHUNK` spans from streamed responses), use the observability-level [`excludeSpanTypes` option](https://mastra.ai/reference/observability/tracing/span-filtering) rather than configuring the exporter:
+
+ ```typescript
+ import { SpanType } from '@mastra/core/observability'
+
+ new Observability({
+   configs: {
+     langfuse: {
+       serviceName: 'my-service',
+       exporters: [new LangfuseExporter()],
+       excludeSpanTypes: [SpanType.MODEL_CHUNK],
+     },
+   },
+ })
+ ```
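+ Batch tuning and span filtering compose: batching options are passed to the exporter, while `excludeSpanTypes` is set on the `Observability` config. A sketch combining the two snippets above (all option names are taken from those snippets):
+
+ ```typescript
+ import { SpanType } from '@mastra/core/observability'
+
+ new Observability({
+   configs: {
+     langfuse: {
+       serviceName: 'my-service',
+       exporters: [
+         new LangfuseExporter({
+           publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
+           secretKey: process.env.LANGFUSE_SECRET_KEY!,
+           flushAt: 500, // larger batches mean fewer ingestion requests
+           flushInterval: 20, // flush at least this often
+         }),
+       ],
+       excludeSpanTypes: [SpanType.MODEL_CHUNK], // drop per-chunk spans from streamed responses
+     },
+   },
+ })
+ ```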
+
  ### Complete Configuration

  ```typescript
@@ -133,6 +162,8 @@ new LangfuseExporter({
    // Optional settings
    baseUrl: process.env.LANGFUSE_BASE_URL, // Default: https://cloud.langfuse.com
    realtime: process.env.NODE_ENV === 'development', // Dynamic mode selection
+   flushAt: 500, // Maximum spans per OTEL export batch
+   flushInterval: 20, // Maximum seconds between flushes
    logLevel: 'info', // Diagnostic logging: debug | info | warn | error

    // Langfuse-specific settings
@@ -1,6 +1,6 @@
  # Deploy Mastra to Netlify

- Use `@mastra/deployer-netlify` to deploy your Mastra server as serverless functions on Netlify. The deployer bundles your code and generates a `.netlify` directory conforming to Netlify's [frameworks API](https://docs.netlify.com/build/frameworks/frameworks-api/#netlifyv1functions), ready to deploy.
+ Use `@mastra/deployer-netlify` to deploy your Mastra server on Netlify. The deployer bundles your code and generates a `.netlify` directory conforming to Netlify's [Frameworks API](https://docs.netlify.com/build/frameworks/frameworks-api/), ready to deploy. You can deploy as serverless functions (default) or as [edge functions](https://docs.netlify.com/build/edge-functions/overview/) for lower latency and longer execution times.

  > **Info:** This guide covers deploying the [Mastra server](https://mastra.ai/docs/server/mastra-server). If you're using a [server adapter](https://mastra.ai/docs/server/server-adapters) or [web framework](https://mastra.ai/docs/deployment/web-framework), deploy the way you normally would for that framework.

@@ -49,6 +49,21 @@ export const mastra = new Mastra({
  })
  ```

+ To deploy as edge functions instead, pass `{ target: 'edge' }`:
+
+ ```typescript
+ import { Mastra } from '@mastra/core'
+ import { NetlifyDeployer } from '@mastra/deployer-netlify'
+
+ export const mastra = new Mastra({
+   deployer: new NetlifyDeployer({
+     target: 'edge',
+   }),
+ })
+ ```
+
+ Edge functions run on Deno at the edge closest to your users. They have no hard execution timeout (only a CPU time limit), making them a better fit for longer-running AI workflows. See the [constructor options](https://mastra.ai/reference/deployer/netlify) for details.
+
  Create a `netlify.toml` file with the following contents in your project root:

  ```toml
@@ -1,6 +1,6 @@
  # Model Providers

- Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to 3610 models from 100 providers through a single API.
+ Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to 3653 models from 103 providers through a single API.

  ## Features

@@ -0,0 +1,116 @@
+ # ![DigitalOcean logo](https://models.dev/logos/digitalocean.svg)DigitalOcean
+
+ Access 46 DigitalOcean models through Mastra's model router. Authentication is handled automatically using the `DIGITALOCEAN_ACCESS_TOKEN` environment variable.
+
+ Learn more in the [DigitalOcean documentation](https://docs.digitalocean.com/products/gradient-ai-platform/details/models/).
+
+ ```bash
+ DIGITALOCEAN_ACCESS_TOKEN=your-api-token
+ ```
+
+ ```typescript
+ import { Agent } from "@mastra/core/agent";
+
+ const agent = new Agent({
+   id: "my-agent",
+   name: "My Agent",
+   instructions: "You are a helpful assistant",
+   model: "digitalocean/alibaba-qwen3-32b"
+ });
+
+ // Generate a response
+ const response = await agent.generate("Hello!");
+
+ // Stream a response
+ const stream = await agent.stream("Tell me a story");
+ for await (const chunk of stream) {
+   console.log(chunk);
+ }
+ ```
+
+ > **Info:** Mastra uses the OpenAI-compatible `/chat/completions` endpoint. Some provider-specific features may not be available. Check the [DigitalOcean documentation](https://docs.digitalocean.com/products/gradient-ai-platform/details/models/) for details.
+
+ ## Models
+
+ | Model | Context | Tools | Reasoning | Image | Audio | Video | Input $/1M | Output $/1M |
+ | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+ | `digitalocean/alibaba-qwen3-32b` | 131K | | | | | | $0.25 | $0.55 |
+ | `digitalocean/all-mini-lm-l6-v2` | 256 | | | | | | $0.01 | — |
+ | `digitalocean/anthropic-claude-4.1-opus` | 200K | | | | | | $15 | $75 |
+ | `digitalocean/anthropic-claude-4.5-sonnet` | 1.0M | | | | | | $3 | $15 |
+ | `digitalocean/anthropic-claude-4.6-sonnet` | 1.0M | | | | | | $3 | $15 |
+ | `digitalocean/anthropic-claude-haiku-4.5` | 200K | | | | | | $1 | $5 |
+ | `digitalocean/anthropic-claude-opus-4` | 200K | | | | | | $15 | $75 |
+ | `digitalocean/anthropic-claude-opus-4.5` | 200K | | | | | | $5 | $25 |
+ | `digitalocean/anthropic-claude-opus-4.6` | 1.0M | | | | | | $5 | $25 |
+ | `digitalocean/anthropic-claude-opus-4.7` | 1.0M | | | | | | $5 | $25 |
+ | `digitalocean/anthropic-claude-sonnet-4` | 1.0M | | | | | | $3 | $15 |
+ | `digitalocean/arcee-trinity-large-thinking` | 256K | | | | | | $0.25 | $0.90 |
+ | `digitalocean/deepseek-r1-distill-llama-70b` | 131K | | | | | | $0.99 | $0.99 |
+ | `digitalocean/fal-ai/elevenlabs/tts/multilingual-v2` | — | | | | | | — | — |
+ | `digitalocean/fal-ai/fast-sdxl` | — | | | | | | — | — |
+ | `digitalocean/fal-ai/flux/schnell` | — | | | | | | — | — |
+ | `digitalocean/fal-ai/stable-audio-25/text-to-audio` | — | | | | | | — | — |
+ | `digitalocean/glm-5` | 203K | | | | | | $1 | $3 |
+ | `digitalocean/gte-large-en-v1.5` | 8K | | | | | | $0.09 | — |
+ | `digitalocean/kimi-k2.5` | 262K | | | | | | $0.50 | $3 |
+ | `digitalocean/llama3.3-70b-instruct` | 128K | | | | | | $0.65 | $0.65 |
+ | `digitalocean/minimax-m2.5` | 205K | | | | | | $0.30 | $1 |
+ | `digitalocean/multi-qa-mpnet-base-dot-v1` | 512 | | | | | | $0.01 | — |
+ | `digitalocean/nvidia-nemotron-3-super-120b` | 256K | | | | | | $0.30 | $0.65 |
+ | `digitalocean/openai-gpt-4.1` | 1.0M | | | | | | $2 | $8 |
+ | `digitalocean/openai-gpt-4o` | 128K | | | | | | $3 | $10 |
+ | `digitalocean/openai-gpt-4o-mini` | 128K | | | | | | $0.15 | $0.60 |
+ | `digitalocean/openai-gpt-5` | 400K | | | | | | $1 | $10 |
+ | `digitalocean/openai-gpt-5-2-pro` | 400K | | | | | | $21 | $168 |
+ | `digitalocean/openai-gpt-5-mini` | 400K | | | | | | $0.25 | $2 |
+ | `digitalocean/openai-gpt-5-nano` | 400K | | | | | | $0.05 | $0.40 |
+ | `digitalocean/openai-gpt-5.1-codex-max` | 400K | | | | | | $1 | $10 |
+ | `digitalocean/openai-gpt-5.2` | 400K | | | | | | $2 | $14 |
+ | `digitalocean/openai-gpt-5.3-codex` | 400K | | | | | | $2 | $14 |
+ | `digitalocean/openai-gpt-5.4` | 1.0M | | | | | | $3 | $15 |
+ | `digitalocean/openai-gpt-5.4-mini` | 400K | | | | | | $0.75 | $5 |
+ | `digitalocean/openai-gpt-5.4-nano` | 400K | | | | | | $0.20 | $1 |
+ | `digitalocean/openai-gpt-5.4-pro` | 400K | | | | | | $30 | $180 |
+ | `digitalocean/openai-gpt-image-1` | — | | | | | | $5 | $40 |
+ | `digitalocean/openai-gpt-image-1.5` | — | | | | | | $5 | $10 |
+ | `digitalocean/openai-gpt-oss-120b` | 131K | | | | | | $0.10 | $0.70 |
+ | `digitalocean/openai-gpt-oss-20b` | 131K | | | | | | $0.05 | $0.45 |
+ | `digitalocean/openai-o1` | 200K | | | | | | $15 | $60 |
+ | `digitalocean/openai-o3` | 200K | | | | | | $2 | $8 |
+ | `digitalocean/openai-o3-mini` | 200K | | | | | | $1 | $4 |
+ | `digitalocean/qwen3-embedding-0.6b` | 8K | | | | | | $0.04 | — |
+
+ ## Advanced configuration
+
+ ### Custom headers
+
+ ```typescript
+ const agent = new Agent({
+   id: "custom-agent",
+   name: "custom-agent",
+   model: {
+     url: "https://inference.do-ai.run/v1",
+     id: "digitalocean/alibaba-qwen3-32b",
+     apiKey: process.env.DIGITALOCEAN_ACCESS_TOKEN,
+     headers: {
+       "X-Custom-Header": "value"
+     }
+   }
+ });
+ ```
+
+ ### Dynamic model selection
+
+ ```typescript
+ const agent = new Agent({
+   id: "dynamic-agent",
+   name: "Dynamic Agent",
+   model: ({ requestContext }) => {
+     const useAdvanced = requestContext.task === "complex";
+     return useAdvanced
+       ? "digitalocean/qwen3-embedding-0.6b"
+       : "digitalocean/alibaba-qwen3-32b";
+   }
+ });
+ ```
@@ -1,6 +1,6 @@
  # ![Helicone logo](https://models.dev/logos/helicone.svg)Helicone

- Access 91 Helicone models through Mastra's model router. Authentication is handled automatically using the `HELICONE_API_KEY` environment variable.
+ Access 90 Helicone models through Mastra's model router. Authentication is handled automatically using the `HELICONE_API_KEY` environment variable.

  Learn more in the [Helicone documentation](https://helicone.ai/models).

@@ -48,7 +48,6 @@ for await (const chunk of stream) {
  | `helicone/claude-opus-4-1-20250805` | 200K | | | | | | $15 | $75 |
  | `helicone/claude-sonnet-4` | 200K | | | | | | $3 | $15 |
  | `helicone/claude-sonnet-4-5-20250929` | 200K | | | | | | $3 | $15 |
- | `helicone/codex-mini-latest` | 200K | | | | | | $2 | $6 |
  | `helicone/deepseek-r1-distill-llama-70b` | 128K | | | | | | $0.03 | $0.13 |
  | `helicone/deepseek-reasoner` | 128K | | | | | | $0.56 | $2 |
  | `helicone/deepseek-tng-r1t2-chimera` | 130K | | | | | | $0.30 | $1 |
@@ -1,6 +1,6 @@
  # ![OpenAI logo](https://models.dev/logos/openai.svg)OpenAI

- Access 51 OpenAI models through Mastra's model router. Authentication is handled automatically using the `OPENAI_API_KEY` environment variable.
+ Access 50 OpenAI models through Mastra's model router. Authentication is handled automatically using the `OPENAI_API_KEY` environment variable.

  Learn more in the [OpenAI documentation](https://platform.openai.com/docs/models).

@@ -33,7 +33,6 @@ for await (const chunk of stream) {
  | Model | Context | Tools | Reasoning | Image | Audio | Video | Input $/1M | Output $/1M |
  | --- | --- | --- | --- | --- | --- | --- | --- | --- |
  | `openai/chatgpt-image-latest` | — | | | | | | — | — |
- | `openai/codex-mini-latest` | 200K | | | | | | $2 | $6 |
  | `openai/gpt-3.5-turbo` | 16K | | | | | | $0.50 | $2 |
  | `openai/gpt-4` | 8K | | | | | | $30 | $60 |
  | `openai/gpt-4-turbo` | 128K | | | | | | $10 | $30 |
@@ -1,6 +1,6 @@
  # ![OVHcloud AI Endpoints logo](https://models.dev/logos/ovhcloud.svg)OVHcloud AI Endpoints

- Access 13 OVHcloud AI Endpoints models through Mastra's model router. Authentication is handled automatically using the `OVHCLOUD_API_KEY` environment variable.
+ Access 10 OVHcloud AI Endpoints models through Mastra's model router. Authentication is handled automatically using the `OVHCLOUD_API_KEY` environment variable.

  Learn more in the [OVHcloud AI Endpoints documentation](https://www.ovhcloud.com/en/public-cloud/ai-endpoints/catalog//).

@@ -15,7 +15,7 @@ const agent = new Agent({
    id: "my-agent",
    name: "My Agent",
    instructions: "You are a helpful assistant",
-   model: "ovhcloud/deepseek-r1-distill-llama-70b"
+   model: "ovhcloud/gpt-oss-120b"
  });

  // Generate a response
@@ -34,7 +34,6 @@ for await (const chunk of stream) {

  | Model | Context | Tools | Reasoning | Image | Audio | Video | Input $/1M | Output $/1M |
  | --- | --- | --- | --- | --- | --- | --- | --- | --- |
- | `ovhcloud/deepseek-r1-distill-llama-70b` | 131K | | | | | | $0.74 | $0.74 |
  | `ovhcloud/gpt-oss-120b` | 131K | | | | | | $0.09 | $0.47 |
  | `ovhcloud/gpt-oss-20b` | 131K | | | | | | $0.05 | $0.18 |
  | `ovhcloud/llama-3.1-8b-instruct` | 131K | | | | | | $0.11 | $0.11 |
@@ -42,8 +41,6 @@ for await (const chunk of stream) {
  | `ovhcloud/mistral-7b-instruct-v0.3` | 66K | | | | | | $0.11 | $0.11 |
  | `ovhcloud/mistral-nemo-instruct-2407` | 66K | | | | | | $0.14 | $0.14 |
  | `ovhcloud/mistral-small-3.2-24b-instruct-2506` | 131K | | | | | | $0.10 | $0.31 |
- | `ovhcloud/mixtral-8x7b-instruct-v0.1` | 33K | | | | | | $0.70 | $0.70 |
- | `ovhcloud/qwen2.5-coder-32b-instruct` | 33K | | | | | | $0.96 | $0.96 |
  | `ovhcloud/qwen2.5-vl-72b-instruct` | 33K | | | | | | $1 | $1 |
  | `ovhcloud/qwen3-32b` | 33K | | | | | | $0.09 | $0.25 |
  | `ovhcloud/qwen3-coder-30b-a3b-instruct` | 262K | | | | | | $0.07 | $0.26 |
@@ -58,7 +55,7 @@ const agent = new Agent({
    name: "custom-agent",
    model: {
      url: "https://oai.endpoints.kepler.ai.cloud.ovh.net/v1",
-     id: "ovhcloud/deepseek-r1-distill-llama-70b",
+     id: "ovhcloud/gpt-oss-120b",
      apiKey: process.env.OVHCLOUD_API_KEY,
      headers: {
        "X-Custom-Header": "value"
@@ -77,7 +74,7 @@ const agent = new Agent({
      const useAdvanced = requestContext.task === "complex";
      return useAdvanced
        ? "ovhcloud/qwen3-coder-30b-a3b-instruct"
-       : "ovhcloud/deepseek-r1-distill-llama-70b";
+       : "ovhcloud/gpt-oss-120b";
    }
  });
  ```
@@ -0,0 +1,71 @@
+ # ![Tencent Token Plan logo](https://models.dev/logos/tencent-token-plan.svg)Tencent Token Plan
+
+ Access 1 Tencent Token Plan model through Mastra's model router. Authentication is handled automatically using the `TENCENT_TOKEN_PLAN_API_KEY` environment variable.
+
+ Learn more in the [Tencent Token Plan documentation](https://cloud.tencent.com/document/product/1823/130060).
+
+ ```bash
+ TENCENT_TOKEN_PLAN_API_KEY=your-api-key
+ ```
+
+ ```typescript
+ import { Agent } from "@mastra/core/agent";
+
+ const agent = new Agent({
+   id: "my-agent",
+   name: "My Agent",
+   instructions: "You are a helpful assistant",
+   model: "tencent-token-plan/hy3-preview"
+ });
+
+ // Generate a response
+ const response = await agent.generate("Hello!");
+
+ // Stream a response
+ const stream = await agent.stream("Tell me a story");
+ for await (const chunk of stream) {
+   console.log(chunk);
+ }
+ ```
+
+ > **Info:** Mastra uses the OpenAI-compatible `/chat/completions` endpoint. Some provider-specific features may not be available. Check the [Tencent Token Plan documentation](https://cloud.tencent.com/document/product/1823/130060) for details.
+
+ ## Models
+
+ | Model | Context | Tools | Reasoning | Image | Audio | Video | Input $/1M | Output $/1M |
+ | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+ | `tencent-token-plan/hy3-preview` | 256K | | | | | | — | — |
+
+ ## Advanced configuration
+
+ ### Custom headers
+
+ ```typescript
+ const agent = new Agent({
+   id: "custom-agent",
+   name: "custom-agent",
+   model: {
+     url: "https://api.lkeap.cloud.tencent.com/plan/v3",
+     id: "tencent-token-plan/hy3-preview",
+     apiKey: process.env.TENCENT_TOKEN_PLAN_API_KEY,
+     headers: {
+       "X-Custom-Header": "value"
+     }
+   }
+ });
+ ```
+
+ ### Dynamic model selection
+
+ ```typescript
+ const agent = new Agent({
+   id: "dynamic-agent",
+   name: "Dynamic Agent",
+   model: ({ requestContext }) => {
+     const useAdvanced = requestContext.task === "complex";
+     return useAdvanced
+       ? "tencent-token-plan/hy3-preview"
+       : "tencent-token-plan/hy3-preview";
+   }
+ });
+ ```
@@ -0,0 +1,71 @@
+ # ![Tencent TokenHub logo](https://models.dev/logos/tencent-tokenhub.svg)Tencent TokenHub
+
+ Access 1 Tencent TokenHub model through Mastra's model router. Authentication is handled automatically using the `TENCENT_TOKENHUB_API_KEY` environment variable.
+
+ Learn more in the [Tencent TokenHub documentation](https://cloud.tencent.com/document/product/1823/130050).
+
+ ```bash
+ TENCENT_TOKENHUB_API_KEY=your-api-key
+ ```
+
+ ```typescript
+ import { Agent } from "@mastra/core/agent";
+
+ const agent = new Agent({
+   id: "my-agent",
+   name: "My Agent",
+   instructions: "You are a helpful assistant",
+   model: "tencent-tokenhub/hy3-preview"
+ });
+
+ // Generate a response
+ const response = await agent.generate("Hello!");
+
+ // Stream a response
+ const stream = await agent.stream("Tell me a story");
+ for await (const chunk of stream) {
+   console.log(chunk);
+ }
+ ```
+
+ > **Info:** Mastra uses the OpenAI-compatible `/chat/completions` endpoint. Some provider-specific features may not be available. Check the [Tencent TokenHub documentation](https://cloud.tencent.com/document/product/1823/130050) for details.
+
+ ## Models
+
+ | Model | Context | Tools | Reasoning | Image | Audio | Video | Input $/1M | Output $/1M |
+ | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+ | `tencent-tokenhub/hy3-preview` | 256K | | | | | | — | — |
+
+ ## Advanced configuration
+
+ ### Custom headers
+
+ ```typescript
+ const agent = new Agent({
+   id: "custom-agent",
+   name: "custom-agent",
+   model: {
+     url: "https://tokenhub.tencentmaas.com/v1",
+     id: "tencent-tokenhub/hy3-preview",
+     apiKey: process.env.TENCENT_TOKENHUB_API_KEY,
+     headers: {
+       "X-Custom-Header": "value"
+     }
+   }
+ });
+ ```
+
+ ### Dynamic model selection
+
+ ```typescript
+ const agent = new Agent({
+   id: "dynamic-agent",
+   name: "Dynamic Agent",
+   model: ({ requestContext }) => {
+     const useAdvanced = requestContext.task === "complex";
+     return useAdvanced
+       ? "tencent-tokenhub/hy3-preview"
+       : "tencent-tokenhub/hy3-preview";
+   }
+ });
+ ```
@@ -26,6 +26,7 @@ Direct access to individual AI model providers. Each provider offers unique mode
  - [Cortecs](https://mastra.ai/models/providers/cortecs)
  - [D.Run (China)](https://mastra.ai/models/providers/drun)
  - [Deep Infra](https://mastra.ai/models/providers/deepinfra)
+ - [DigitalOcean](https://mastra.ai/models/providers/digitalocean)
  - [DInference](https://mastra.ai/models/providers/dinference)
  - [evroc](https://mastra.ai/models/providers/evroc)
  - [FastRouter](https://mastra.ai/models/providers/fastrouter)
@@ -83,6 +84,8 @@ Direct access to individual AI model providers. Each provider offers unique mode
  - [submodel](https://mastra.ai/models/providers/submodel)
  - [Synthetic](https://mastra.ai/models/providers/synthetic)
  - [Tencent Coding Plan (China)](https://mastra.ai/models/providers/tencent-coding-plan)
+ - [Tencent Token Plan](https://mastra.ai/models/providers/tencent-token-plan)
+ - [Tencent TokenHub](https://mastra.ai/models/providers/tencent-tokenhub)
  - [The Grid AI](https://mastra.ai/models/providers/the-grid-ai)
  - [Together AI](https://mastra.ai/models/providers/togetherai)
  - [Upstage](https://mastra.ai/models/providers/upstage)
@@ -1,6 +1,6 @@
  # NetlifyDeployer

- The `NetlifyDeployer` class handles packaging, configuration, and deployment by adapting Mastra's output to create an optimized version of your server. It extends the base [`Deployer`](https://mastra.ai/reference/deployer) class with Netlify specific functionality. It enables you to run Mastra within Netlify functions.
+ The `NetlifyDeployer` class handles packaging, configuration, and deployment by adapting Mastra's output to create an optimized version of your server. It extends the base [`Deployer`](https://mastra.ai/reference/deployer) class with Netlify-specific functionality. It enables you to run Mastra within Netlify serverless functions or edge functions.

  ## Installation

@@ -43,9 +43,31 @@ export const mastra = new Mastra({
  })
  ```

+ ## Constructor options
+
+ - `target?: 'serverless' | 'edge'` — Deploy target for Netlify. Defaults to `'serverless'`.
+
+   - `'serverless'` — Standard [Netlify Functions](https://docs.netlify.com/functions/overview/) (Node.js runtime, 60s default timeout).
+   - `'edge'` — [Netlify Edge Functions](https://docs.netlify.com/build/edge-functions/overview/) (Deno-based runtime, runs at the edge closest to users, no hard timeout).
+
+ ### Edge functions example
+
+ ```typescript
+ import { Mastra } from '@mastra/core'
+ import { NetlifyDeployer } from '@mastra/deployer-netlify'
+
+ export const mastra = new Mastra({
+   deployer: new NetlifyDeployer({
+     target: 'edge',
+   }),
+ })
+ ```
+
  ## Output

- After running `mastra build`, the deployer generates a `.netlify` folder. The build output includes all agents, tools, and workflows of your project, alongside a special `config.json` file. The `config.json` file configures the behavior of Netlify functions.
+ After running `mastra build`, the deployer generates a `.netlify` folder. The build output includes all agents, tools, and workflows of your project, alongside a `config.json` file that configures the [Netlify Frameworks API](https://docs.netlify.com/build/frameworks/frameworks-api/).
+
+ ### Serverless output (default)

  ```bash
  your-project/
@@ -77,4 +99,30 @@ The `config.json` file contains:
      }
    ]
  }
+ ```
+
+ ### Edge output
+
+ ```bash
+ your-project/
+ └── .netlify/
+     └── v1/
+         ├── config.json
+         └── edge-functions/
+             ├── index.mjs
+             ├── package.json
+             └── node_modules/
+ ```
+
+ The `config.json` file contains:
+
+ ```json
+ {
+   "edge_functions": [
+     {
+       "function": "index",
+       "path": "/*"
+     }
+   ]
+ }
  ```
@@ -16,6 +16,8 @@ interface LangfuseExporterConfig extends BaseExporterConfig {
    secretKey?: string
    baseUrl?: string
    realtime?: boolean
+   flushAt?: number
+   flushInterval?: number
    environment?: string
    release?: string
  }
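A minimal usage sketch of these config fields follows; the `@mastra/langfuse` import path is an assumption, so use whichever package exports `LangfuseExporter` in your project:

```typescript
// Sketch only: exporter construction using the fields listed above.
// The import path below is assumed, not confirmed by this diff.
import { LangfuseExporter } from '@mastra/langfuse'

const exporter = new LangfuseExporter({
  publicKey: process.env.LANGFUSE_PUBLIC_KEY!,
  secretKey: process.env.LANGFUSE_SECRET_KEY!,
  realtime: false, // batch mode rather than per-span flushing
  flushAt: 200, // spans per export batch
  flushInterval: 10, // maximum interval between flushes
  environment: 'production',
})
```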
package/CHANGELOG.md CHANGED
@@ -1,5 +1,12 @@
  # @mastra/mcp-docs-server

+ ## 1.1.26-alpha.12
+
+ ### Patch Changes
+
+ - Updated dependencies [[`6315317`](https://github.com/mastra-ai/mastra/commit/63153175fe9a7b224e5be7c209bbebc01dd9b0d5), [`9d3b24b`](https://github.com/mastra-ai/mastra/commit/9d3b24b19407ae9c09586cf7766d38dc4dff4a69)]:
+   - @mastra/core@1.26.0-alpha.6
+
  ## 1.1.26-alpha.9

  ### Patch Changes
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@mastra/mcp-docs-server",
-   "version": "1.1.26-alpha.10",
+   "version": "1.1.26-alpha.13",
    "description": "MCP server for accessing Mastra.ai documentation, changelogs, and news.",
    "type": "module",
    "main": "dist/index.js",
@@ -30,7 +30,7 @@
      "local-pkg": "^1.1.2",
      "zod": "^4.3.6",
      "@mastra/mcp": "^1.5.1-alpha.1",
-     "@mastra/core": "1.26.0-alpha.5"
+     "@mastra/core": "1.26.0-alpha.6"
    },
    "devDependencies": {
      "@hono/node-server": "^1.19.11",
@@ -48,7 +48,7 @@
      "vitest": "4.0.18",
      "@internal/lint": "0.0.83",
      "@internal/types-builder": "0.0.58",
-     "@mastra/core": "1.26.0-alpha.5"
+     "@mastra/core": "1.26.0-alpha.6"
    },
    "homepage": "https://mastra.ai",
    "repository": {