@mastra/mcp-docs-server 1.1.20-alpha.1 → 1.1.20-alpha.4
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.docs/docs/observability/overview.md +53 -27
- package/.docs/docs/studio/observability.md +9 -1
- package/.docs/docs/studio/overview.md +1 -1
- package/.docs/models/index.md +1 -1
- package/.docs/models/providers/openai.md +8 -4
- package/.docs/models/providers/the-grid-ai.md +73 -0
- package/.docs/models/providers/zai-coding-plan.md +3 -2
- package/.docs/models/providers/zai.md +3 -2
- package/.docs/models/providers/zhipuai-coding-plan.md +3 -2
- package/.docs/models/providers/zhipuai.md +3 -2
- package/.docs/models/providers.md +1 -0
- package/.docs/reference/client-js/responses.md +1 -1
- package/.docs/reference/processors/token-limiter-processor.md +2 -0
- package/CHANGELOG.md +14 -0
- package/package.json +3 -3
package/.docs/docs/observability/overview.md
CHANGED

````diff
@@ -1,35 +1,61 @@
 # Observability overview
 
-Mastra
+Mastra's observability system gives you visibility into every agent run, workflow step, tool call, and model interaction. It captures three complementary signals that work together to help you understand what your application is doing and why.
 
-
+- [**Tracing**](https://mastra.ai/docs/observability/tracing/overview): Records every operation as a hierarchical timeline of spans, capturing inputs, outputs, token usage, and timing.
+- [**Logging**](https://mastra.ai/docs/observability/logging): Forwards structured log entries from your application and Mastra internals to observability storage, correlated to traces automatically.
+- [**Metrics**](https://mastra.ai/docs/observability/metrics/overview): Extracts duration, token usage, and cost data from traces automatically, with no additional instrumentation required.
 
-
+## When to use observability
 
-
+- Debug unexpected agent behavior by inspecting the full decision path, tool calls, and model responses.
+- Monitor latency across agents, workflows, and tools to identify bottlenecks.
+- Track token consumption and estimated cost over time to control spending.
+- Diagnose workflow failures by tracing execution through each step.
+- Compare agent performance before and after prompt or model changes.
 
-
-- **Agent execution**: Decision paths, tool calls, and memory operations
-- **Workflow steps**: Branching logic, parallel execution, and step outputs
-- **Automatic instrumentation**: Tracing with decorators
+## How the pieces fit together
 
-
+Tracing is the foundation. When observability is configured, every agent run, workflow execution, tool call, and model interaction produces a [span](https://opentelemetry.io/docs/concepts/signals/traces/#spans). Spans are organized into traces that show the full request lifecycle as a hierarchical timeline.
 
-
+Metrics are derived from traces automatically. When a span ends, Mastra extracts duration, token counts, and cost estimates without any extra code. These metrics power the dashboards in [Studio](https://mastra.ai/docs/studio/observability).
 
-
+Logs are correlated to traces automatically. Every `logger.info()`, `logger.warn()`, or `logger.error()` call within a traced context is tagged with the current trace and span IDs. You can navigate from a log entry directly to the trace that produced it.
 
-
+All three signals share correlation IDs (trace ID, span ID, entity type, entity name), so you can jump between a metric spike, the traces behind it, and the logs within those traces.
 
-
+## Get started
 
-
+Install `@mastra/observability` and a storage backend:
 
-
+**npm**:
+
+```bash
+npm install @mastra/observability @mastra/libsql @mastra/duckdb
+```
+
+**pnpm**:
+
+```bash
+pnpm add @mastra/observability @mastra/libsql @mastra/duckdb
+```
+
+**Yarn**:
+
+```bash
+yarn add @mastra/observability @mastra/libsql @mastra/duckdb
+```
+
+**Bun**:
+
+```bash
+bun add @mastra/observability @mastra/libsql @mastra/duckdb
+```
+
+Then configure observability in your Mastra instance. The following example uses composite storage to route observability data to DuckDB (which supports metrics aggregation) while keeping everything else in LibSQL:
 
 ```ts
 import { Mastra } from '@mastra/core/mastra'
-import { PinoLogger } from '@mastra/loggers'
 import { LibSQLStore } from '@mastra/libsql'
 import { DuckDBStore } from '@mastra/duckdb'
 import { MastraCompositeStore } from '@mastra/core/storage'
@@ -41,7 +67,6 @@ import {
 } from '@mastra/observability'
 
 export const mastra = new Mastra({
-  logger: new PinoLogger(),
   storage: new MastraCompositeStore({
     id: 'composite-storage',
     default: new LibSQLStore({
@@ -60,9 +85,6 @@ export const mastra = new Mastra({
       new DefaultExporter(), // Persists traces to storage for Mastra Studio
       new CloudExporter(), // Sends traces to Mastra Cloud (if MASTRA_CLOUD_ACCESS_TOKEN is set)
     ],
-    logging: {
-      level: 'info', // Minimum log level forwarded to storage (default: 'debug')
-    },
     spanOutputProcessors: [
       new SensitiveDataFilter(), // Redacts sensitive data like passwords, tokens, keys
     ],
@@ -72,14 +94,18 @@ export const mastra = new Mastra({
 })
 ```
 
-
+This enables tracing, log forwarding, and metrics. Mastra also supports external tracing providers like Langfuse, Datadog, and any OpenTelemetry-compatible platform. See [Tracing](https://mastra.ai/docs/observability/tracing/overview) for configuration details.
+
+## Storage
 
-
+Not all storage backends support every signal. Traces and logs work with most backends, but metrics require an OLAP-capable store like DuckDB (development) or ClickHouse (production). For the full compatibility list, see [storage provider support](https://mastra.ai/docs/observability/tracing/exporters/default).
 
-
+For production environments with high traffic, use composite storage to route the observability domain to a dedicated backend. See [production recommendations](https://mastra.ai/docs/observability/tracing/exporters/default) for details.
 
-##
+## Next steps
 
--
--
--
+- [Tracing](https://mastra.ai/docs/observability/tracing/overview)
+- [Logging](https://mastra.ai/docs/observability/logging)
+- [Metrics](https://mastra.ai/docs/observability/metrics/overview)
+- [Mastra Studio](https://mastra.ai/docs/studio/observability)
+- [Automatic metrics reference](https://mastra.ai/reference/observability/metrics/automatic-metrics)
````
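The shared correlation IDs described in the diff above can be sketched with plain data. The record shapes below are hypothetical illustrations of the idea (trace ID plus span ID as a join key between signals), not Mastra's actual storage schema:

```typescript
// Hypothetical record shapes: spans and logs both carry the same
// trace/span IDs, which is what lets you pivot between signals.
interface SpanRecord {
  traceId: string;
  spanId: string;
  entityType: string; // e.g. "agent" | "workflow"
  entityName: string;
  durationMs: number;
}

interface LogRecord {
  traceId: string;
  spanId: string;
  level: "info" | "warn" | "error";
  message: string;
}

// Jump from a slow span to the log entries emitted inside it.
function logsForSpan(span: SpanRecord, logs: LogRecord[]): LogRecord[] {
  return logs.filter(
    (l) => l.traceId === span.traceId && l.spanId === span.spanId,
  );
}

const span: SpanRecord = {
  traceId: "t1",
  spanId: "s1",
  entityType: "agent",
  entityName: "my-agent",
  durationMs: 1240,
};

const logs: LogRecord[] = [
  { traceId: "t1", spanId: "s1", level: "warn", message: "tool retry" },
  { traceId: "t2", spanId: "s9", level: "info", message: "unrelated" },
];
```

The same join works in the other direction: a log entry's `traceId` is enough to pull up the full span timeline it belongs to.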
package/.docs/docs/studio/observability.md
CHANGED

````diff
@@ -4,6 +4,7 @@ Studio includes these observability views:
 
 - **Metrics** for aggregate performance data
 - **Traces** for individual request inspection
+- **Logs** for browsing internal and application logs
 
 All require an [observability storage backend](#quickstart) to be configured.
 
@@ -21,6 +22,12 @@ When you run an agent or workflow, the Observability tab displays traces that hi
 
 Tracing filters out low-level framework details so your traces stay focused and readable. Visit the [tracing overview](https://mastra.ai/docs/observability/tracing/overview) for more details.
 
+## Logs
+
+Browse internal Mastra logs forwarded to your observability storage. Logs provide full-text search (across message content, entity names, and trace IDs), date presets (last 24 hours to 30 days), and multi-select filters for level, entity type, and entity name. Selecting a log opens a detail panel showing the full message, structured data, and metadata. If the log is correlated with a trace, you can navigate directly to the trace and span timeline.
+
+Log forwarding is enabled by default when you configure observability. See [logging](https://mastra.ai/docs/observability/logging) for level configuration, query examples, and customization details.
+
 ## Quickstart
 
 For detailed instructions, follow the [observability instructions](https://mastra.ai/docs/observability/overview). To get up and running quickly, add the `@mastra/observability` package to your project and configure it with [LibSQL](https://mastra.ai/reference/storage/libsql) and [DuckDB](https://mastra.ai/reference/vectors/duckdb) for a local development setup that supports both traces and metrics.
@@ -95,4 +102,5 @@ export const mastra = new Mastra({
 
 - [Observability overview](https://mastra.ai/docs/observability/overview)
 - [Metrics overview](https://mastra.ai/docs/observability/metrics/overview)
-- [Tracing overview](https://mastra.ai/docs/observability/tracing/overview)
+- [Tracing overview](https://mastra.ai/docs/observability/tracing/overview)
+- [Logging](https://mastra.ai/docs/observability/logging)
````
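The full-text search behavior described for the Logs view can be sketched as a simple predicate. The `StudioLog` shape and `matchesQuery` helper are hypothetical illustrations of matching across message content, entity names, and trace IDs, not Studio's actual server-side query:

```typescript
interface StudioLog {
  level: "debug" | "info" | "warn" | "error";
  message: string;
  entityName: string;
  traceId: string;
}

// Case-insensitive match against the three search fields the Logs view exposes:
// message content, entity name, and trace ID.
function matchesQuery(log: StudioLog, query: string): boolean {
  const q = query.toLowerCase();
  return (
    log.message.toLowerCase().includes(q) ||
    log.entityName.toLowerCase().includes(q) ||
    log.traceId.toLowerCase().includes(q)
  );
}

const sample: StudioLog[] = [
  { level: "info", message: "run started", entityName: "weather-agent", traceId: "abc123" },
  { level: "error", message: "tool call failed", entityName: "billing-agent", traceId: "def456" },
];

const hits = sample.filter((l) => matchesQuery(l, "weather"));
```

Level, entity-type, and date filters compose with this the same way: each is another predicate ANDed into the filter chain.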
package/.docs/docs/studio/overview.md
CHANGED

````diff
@@ -124,4 +124,4 @@ Mastra also supports HTTPS development through the [`--https`](https://mastra.ai
 
 - Learn how to [deploy Studio](https://mastra.ai/docs/studio/deployment) for production use.
 - Add [authentication](https://mastra.ai/docs/studio/auth) to control access to your deployed Studio.
-- Explore [Studio observability](https://mastra.ai/docs/studio/observability) to monitor agent performance
+- Explore [Studio observability](https://mastra.ai/docs/studio/observability) to monitor agent performance through metrics, traces, and logs.
````
package/.docs/models/index.md
CHANGED

````diff
@@ -1,6 +1,6 @@
 # Model Providers
 
-Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to
+Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to 3578 models from 95 providers through a single API.
 
 ## Features
 
````
package/.docs/models/providers/openai.md
CHANGED

````diff
@@ -1,6 +1,6 @@
 # OpenAI
 
-Access
+Access 51 OpenAI models through Mastra's model router. Authentication is handled automatically using the `OPENAI_API_KEY` environment variable.
 
 Learn more in the [OpenAI documentation](https://platform.openai.com/docs/models).
 
@@ -15,7 +15,7 @@ const agent = new Agent({
   id: "my-agent",
   name: "My Agent",
   instructions: "You are a helpful assistant",
-  model: "openai/
+  model: "openai/chatgpt-image-latest"
 });
 
 // Generate a response
@@ -32,6 +32,7 @@ for await (const chunk of stream) {
 
 | Model | Context | Tools | Reasoning | Image | Audio | Video | Input $/1M | Output $/1M |
 | ------------------------------- | ------- | ----- | --------- | ----- | ----- | ----- | ---------- | ----------- |
+| `openai/chatgpt-image-latest` | — | | | | | | — | — |
 | `openai/codex-mini-latest` | 200K | | | | | | $2 | $6 |
 | `openai/gpt-3.5-turbo` | 16K | | | | | | $0.50 | $2 |
 | `openai/gpt-4` | 8K | | | | | | $30 | $60 |
@@ -66,6 +67,9 @@ for await (const chunk of stream) {
 | `openai/gpt-5.4-mini` | 400K | | | | | | $0.75 | $5 |
 | `openai/gpt-5.4-nano` | 400K | | | | | | $0.20 | $1 |
 | `openai/gpt-5.4-pro` | 1.1M | | | | | | $30 | $180 |
+| `openai/gpt-image-1` | — | | | | | | — | — |
+| `openai/gpt-image-1-mini` | — | | | | | | — | — |
+| `openai/gpt-image-1.5` | — | | | | | | — | — |
 | `openai/o1` | 200K | | | | | | $15 | $60 |
 | `openai/o1-mini` | 128K | | | | | | $1 | $4 |
 | `openai/o1-preview` | 128K | | | | | | $15 | $60 |
@@ -89,7 +93,7 @@ const agent = new Agent({
   id: "custom-agent",
   name: "custom-agent",
   model: {
-    id: "openai/
+    id: "openai/chatgpt-image-latest",
     apiKey: process.env.OPENAI_API_KEY,
     headers: {
       "X-Custom-Header": "value"
@@ -108,7 +112,7 @@ const agent = new Agent({
     const useAdvanced = requestContext.task === "complex";
     return useAdvanced
       ? "openai/text-embedding-ada-002"
-      : "openai/
+      : "openai/chatgpt-image-latest";
   }
 });
 ```
````
package/.docs/models/providers/the-grid-ai.md
ADDED

````diff
@@ -0,0 +1,73 @@
+# The Grid AI
+
+Access 3 The Grid AI models through Mastra's model router. Authentication is handled automatically using the `THEGRIDAI_API_KEY` environment variable.
+
+Learn more in the [The Grid AI documentation](https://thegrid.ai/docs).
+
+```bash
+THEGRIDAI_API_KEY=your-api-key
+```
+
+```typescript
+import { Agent } from "@mastra/core/agent";
+
+const agent = new Agent({
+  id: "my-agent",
+  name: "My Agent",
+  instructions: "You are a helpful assistant",
+  model: "the-grid-ai/text-max"
+});
+
+// Generate a response
+const response = await agent.generate("Hello!");
+
+// Stream a response
+const stream = await agent.stream("Tell me a story");
+for await (const chunk of stream) {
+  console.log(chunk);
+}
+```
+
+> **Info:** Mastra uses the OpenAI-compatible `/chat/completions` endpoint. Some provider-specific features may not be available. Check the [The Grid AI documentation](https://thegrid.ai/docs) for details.
+
+## Models
+
+| Model | Context | Tools | Reasoning | Image | Audio | Video | Input $/1M | Output $/1M |
+| --------------------------- | ------- | ----- | --------- | ----- | ----- | ----- | ---------- | ----------- |
+| `the-grid-ai/text-max` | 1.0M | | | | | | — | — |
+| `the-grid-ai/text-prime` | 128K | | | | | | — | — |
+| `the-grid-ai/text-standard` | 128K | | | | | | — | — |
+
+## Advanced configuration
+
+### Custom headers
+
+```typescript
+const agent = new Agent({
+  id: "custom-agent",
+  name: "custom-agent",
+  model: {
+    url: "https://api.thegrid.ai/v1",
+    id: "the-grid-ai/text-max",
+    apiKey: process.env.THEGRIDAI_API_KEY,
+    headers: {
+      "X-Custom-Header": "value"
+    }
+  }
+});
+```
+
+### Dynamic model selection
+
+```typescript
+const agent = new Agent({
+  id: "dynamic-agent",
+  name: "Dynamic Agent",
+  model: ({ requestContext }) => {
+    const useAdvanced = requestContext.task === "complex";
+    return useAdvanced
+      ? "the-grid-ai/text-standard"
+      : "the-grid-ai/text-max";
+  }
+});
+```
````
package/.docs/models/providers/zai-coding-plan.md
CHANGED

````diff
@@ -1,6 +1,6 @@
 # Z.AI Coding Plan
 
-Access
+Access 13 Z.AI Coding Plan models through Mastra's model router. Authentication is handled automatically using the `ZHIPU_API_KEY` environment variable.
 
 Learn more in the [Z.AI Coding Plan documentation](https://docs.z.ai/devpack/overview).
 
@@ -46,6 +46,7 @@ for await (const chunk of stream) {
 | `zai-coding-plan/glm-5` | 205K | | | | | | — | — |
 | `zai-coding-plan/glm-5-turbo` | 200K | | | | | | — | — |
 | `zai-coding-plan/glm-5.1` | 200K | | | | | | — | — |
+| `zai-coding-plan/glm-5v-turbo` | 200K | | | | | | — | — |
 
 ## Advanced configuration
 
@@ -75,7 +76,7 @@ const agent = new Agent({
   model: ({ requestContext }) => {
     const useAdvanced = requestContext.task === "complex";
     return useAdvanced
-      ? "zai-coding-plan/glm-
+      ? "zai-coding-plan/glm-5v-turbo"
       : "zai-coding-plan/glm-4.5";
   }
 });
````
package/.docs/models/providers/zai.md
CHANGED

````diff
@@ -1,6 +1,6 @@
 # Z.AI
 
-Access
+Access 12 Z.AI models through Mastra's model router. Authentication is handled automatically using the `ZHIPU_API_KEY` environment variable.
 
 Learn more in the [Z.AI documentation](https://docs.z.ai/guides/overview/pricing).
 
@@ -45,6 +45,7 @@ for await (const chunk of stream) {
 | `zai/glm-4.7-flashx` | 200K | | | | | | $0.07 | $0.40 |
 | `zai/glm-5` | 205K | | | | | | $1 | $3 |
 | `zai/glm-5-turbo` | 200K | | | | | | $1 | $4 |
+| `zai/glm-5v-turbo` | 200K | | | | | | $1 | $4 |
 
 ## Advanced configuration
 
@@ -74,7 +75,7 @@ const agent = new Agent({
   model: ({ requestContext }) => {
     const useAdvanced = requestContext.task === "complex";
     return useAdvanced
-      ? "zai/glm-
+      ? "zai/glm-5v-turbo"
      : "zai/glm-4.5";
   }
 });
````
package/.docs/models/providers/zhipuai-coding-plan.md
CHANGED

````diff
@@ -1,6 +1,6 @@
 # Zhipu AI Coding Plan
 
-Access
+Access 14 Zhipu AI Coding Plan models through Mastra's model router. Authentication is handled automatically using the `ZHIPU_API_KEY` environment variable.
 
 Learn more in the [Zhipu AI Coding Plan documentation](https://docs.bigmodel.cn/cn/coding-plan/overview).
 
@@ -47,6 +47,7 @@ for await (const chunk of stream) {
 | `zhipuai-coding-plan/glm-5` | 205K | | | | | | — | — |
 | `zhipuai-coding-plan/glm-5-turbo` | 200K | | | | | | — | — |
 | `zhipuai-coding-plan/glm-5.1` | 200K | | | | | | — | — |
+| `zhipuai-coding-plan/glm-5v-turbo` | 200K | | | | | | — | — |
 
 ## Advanced configuration
 
@@ -76,7 +77,7 @@ const agent = new Agent({
   model: ({ requestContext }) => {
     const useAdvanced = requestContext.task === "complex";
     return useAdvanced
-      ? "zhipuai-coding-plan/glm-
+      ? "zhipuai-coding-plan/glm-5v-turbo"
       : "zhipuai-coding-plan/glm-4.5";
   }
 });
````
package/.docs/models/providers/zhipuai.md
CHANGED

````diff
@@ -1,6 +1,6 @@
 # Zhipu AI
 
-Access
+Access 11 Zhipu AI models through Mastra's model router. Authentication is handled automatically using the `ZHIPU_API_KEY` environment variable.
 
 Learn more in the [Zhipu AI documentation](https://docs.z.ai/guides/overview/pricing).
 
@@ -44,6 +44,7 @@ for await (const chunk of stream) {
 | `zhipuai/glm-4.7-flash` | 200K | | | | | | — | — |
 | `zhipuai/glm-4.7-flashx` | 200K | | | | | | $0.07 | $0.40 |
 | `zhipuai/glm-5` | 205K | | | | | | $1 | $3 |
+| `zhipuai/glm-5v-turbo` | 200K | | | | | | $5 | $22 |
 
 ## Advanced configuration
 
@@ -73,7 +74,7 @@ const agent = new Agent({
   model: ({ requestContext }) => {
     const useAdvanced = requestContext.task === "complex";
     return useAdvanced
-      ? "zhipuai/glm-
+      ? "zhipuai/glm-5v-turbo"
       : "zhipuai/glm-4.5";
   }
 });
````
@@ -81,6 +81,7 @@ Direct access to individual AI model providers. Each provider offers unique mode
|
|
|
81
81
|
- [submodel](https://mastra.ai/models/providers/submodel)
|
|
82
82
|
- [Synthetic](https://mastra.ai/models/providers/synthetic)
|
|
83
83
|
- [Tencent Coding Plan (China)](https://mastra.ai/models/providers/tencent-coding-plan)
|
|
84
|
+
- [The Grid AI](https://mastra.ai/models/providers/the-grid-ai)
|
|
84
85
|
- [Together AI](https://mastra.ai/models/providers/togetherai)
|
|
85
86
|
- [Upstage](https://mastra.ai/models/providers/upstage)
|
|
86
87
|
- [Vivgrid](https://mastra.ai/models/providers/vivgrid)
|
|
package/.docs/reference/client-js/responses.md
CHANGED

````diff
@@ -137,7 +137,7 @@ Use `text.format` when you want JSON output.
 - `json_object` enables JSON mode.
 - `json_schema` enables schema-constrained structured output.
 
-
+Both formats return JSON in the assistant message content. Use `json_schema` when you need strict schema enforcement. Use `json_object` when you only need valid JSON output.
 
 ```typescript
 const response = await client.responses.create({
````
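To make the choice concrete, here is a sketch of the two `text.format` payloads as plain objects. The field names (`name`, `schema`) follow the OpenAI-style Responses API conventions; treat the exact shape as an assumption and check the client-js reference for the definitive fields:

```typescript
// JSON mode: the model returns any valid JSON object.
const jsonMode = {
  text: { format: { type: "json_object" } },
};

// Schema-constrained mode: the output must validate against the schema.
// "weather_report" and its fields are hypothetical examples.
const schemaMode = {
  text: {
    format: {
      type: "json_schema",
      name: "weather_report",
      schema: {
        type: "object",
        properties: { city: { type: "string" }, tempC: { type: "number" } },
        required: ["city", "tempC"],
      },
    },
  },
};
```

In practice, reach for `schemaMode` when downstream code parses specific fields, and `jsonMode` when any well-formed JSON will do.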
package/.docs/reference/processors/token-limiter-processor.md
CHANGED

````diff
@@ -30,6 +30,8 @@ const processor = new TokenLimiterProcessor({
 
 **options.countMode** (`'cumulative' | 'part'`): Whether to count tokens from the beginning of the stream or just the current part: 'cumulative' counts all tokens from start, 'part' only counts tokens in current part
 
+**options.trimMode** (`'best-fit' | 'contiguous'`): Controls how messages are trimmed when exceeding the token limit: 'best-fit' keeps as many messages as possible (may create gaps), 'contiguous' stops at the first message that does not fit, ensuring a continuous suffix of conversation history
+
 ## Returns
 
 **id** (`string`): Processor identifier set to 'token-limiter'
````
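The difference between the two trim modes can be sketched with plain token counts. The `Msg` shape and `trim` helper below are a hypothetical illustration of the described behavior, not the processor's internal implementation; both modes scan the history newest-to-oldest:

```typescript
interface Msg {
  id: number;
  tokens: number;
}

// Walk newest-to-oldest, keeping messages under the token limit.
// 'best-fit' skips a message that doesn't fit and keeps scanning older
// ones (which may leave gaps); 'contiguous' stops at the first message
// that doesn't fit, so the result is a continuous suffix of the history.
function trim(history: Msg[], limit: number, mode: "best-fit" | "contiguous"): Msg[] {
  const kept: Msg[] = [];
  let used = 0;
  for (let i = history.length - 1; i >= 0; i--) {
    const m = history[i];
    if (used + m.tokens <= limit) {
      kept.unshift(m);
      used += m.tokens;
    } else if (mode === "contiguous") {
      break;
    }
    // 'best-fit': skip this message and keep scanning older ones
  }
  return kept;
}

const history: Msg[] = [
  { id: 1, tokens: 10 },
  { id: 2, tokens: 80 }, // too large once newer messages are kept
  { id: 3, tokens: 20 },
  { id: 4, tokens: 30 },
];

const bestFit = trim(history, 60, "best-fit"); // keeps 1, 3, 4 (gap at 2)
const contiguous = trim(history, 60, "contiguous"); // keeps only 3, 4
```

The trade-off: 'best-fit' maximizes retained context at the cost of a coherent conversation flow; 'contiguous' guarantees an unbroken recent window.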
package/CHANGELOG.md
CHANGED

````diff
@@ -1,5 +1,19 @@
 # @mastra/mcp-docs-server
 
+## 1.1.20-alpha.4
+
+### Patch Changes
+
+- Updated dependencies [[`ec5c319`](https://github.com/mastra-ai/mastra/commit/ec5c3197a50d034cb8e9cc494eebfddc684b5d81), [`6517789`](https://github.com/mastra-ai/mastra/commit/65177895b74b5471fe2245c7292f0176d9b3385d), [`9ad6aa6`](https://github.com/mastra-ai/mastra/commit/9ad6aa6dfe858afc6955d1df5f3f78c40bb96b9c), [`2862127`](https://github.com/mastra-ai/mastra/commit/2862127d0a7cbd28523120ad64fea067a95838e6), [`3d16814`](https://github.com/mastra-ai/mastra/commit/3d16814c395931373543728994ff45ac98093074), [`7f498d0`](https://github.com/mastra-ai/mastra/commit/7f498d099eacef64fd43ee412e3bd6f87965a8a6), [`8cf8a67`](https://github.com/mastra-ai/mastra/commit/8cf8a67b061b737cb06d501fb8c1967a98bbf3cb), [`d7827e3`](https://github.com/mastra-ai/mastra/commit/d7827e393937c6cb0c7a744dde4d31538cb542b7)]:
+  - @mastra/core@1.21.0-alpha.2
+
+## 1.1.20-alpha.2
+
+### Patch Changes
+
+- Updated dependencies [[`13f4327`](https://github.com/mastra-ai/mastra/commit/13f4327f052faebe199cefbe906d33bf90238767)]:
+  - @mastra/core@1.21.0-alpha.1
+
 ## 1.1.20-alpha.1
 
 ### Patch Changes
````
package/package.json
CHANGED

````diff
@@ -1,6 +1,6 @@
 {
   "name": "@mastra/mcp-docs-server",
-  "version": "1.1.20-alpha.
+  "version": "1.1.20-alpha.4",
   "description": "MCP server for accessing Mastra.ai documentation, changelogs, and news.",
   "type": "module",
   "main": "dist/index.js",
@@ -30,7 +30,7 @@
     "local-pkg": "^1.1.2",
     "zod": "^4.3.6",
     "@mastra/mcp": "^1.4.1",
-    "@mastra/core": "1.21.0-alpha.
+    "@mastra/core": "1.21.0-alpha.2"
   },
   "devDependencies": {
     "@hono/node-server": "^1.19.11",
@@ -48,7 +48,7 @@
     "vitest": "4.0.18",
     "@internal/lint": "0.0.77",
     "@internal/types-builder": "0.0.52",
-    "@mastra/core": "1.21.0-alpha.
+    "@mastra/core": "1.21.0-alpha.2"
   },
   "homepage": "https://mastra.ai",
   "repository": {
````