@mastra/mcp-docs-server 1.1.26-alpha.1 → 1.1.26-alpha.4
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.docs/docs/memory/observational-memory.md +23 -7
- package/.docs/docs/observability/tracing/exporters/cloud.md +28 -41
- package/.docs/guides/build-your-ui/ai-sdk-ui.md +19 -6
- package/.docs/models/index.md +1 -1
- package/.docs/models/providers/opencode-go.md +2 -4
- package/.docs/models/providers/opencode.md +2 -4
- package/.docs/reference/memory/observational-memory.md +2 -0
- package/.docs/reference/observability/tracing/exporters/cloud-exporter.md +4 -2
- package/CHANGELOG.md +16 -0
- package/package.json +4 -4
package/.docs/docs/memory/observational-memory.md
CHANGED

@@ -333,13 +333,29 @@ Reflection works similarly — the Reflector runs in the background when observa
 
 ### Settings
 
-| Setting | Default | What it controls
-| ------------------------------ | ------- |
-| `observation.bufferTokens` | `0.2` | How often to buffer. `0.2` means every 20% of `messageTokens` — with the default 30k threshold, that's roughly every 6k tokens. Can also be an absolute token count (e.g. `5000`).
-| `observation.bufferActivation` | `0.8` | How aggressively to clear the message window on activation. `0.8` means remove enough messages to keep only 20% of `messageTokens` remaining. Lower values keep more message history.
-| `observation.blockAfter` | `1.2` | Safety threshold as a multiplier of `messageTokens`. At `1.2`, synchronous observation is forced at 36k tokens (1.2 × 30k). Only matters if buffering can't keep up.
-| `
-| `reflection.
+| Setting                        | Default | What it controls |
+| ------------------------------ | ------- | ---------------- |
+| `observation.bufferTokens` | `0.2` | How often to buffer. `0.2` means every 20% of `messageTokens` — with the default 30k threshold, that's roughly every 6k tokens. Can also be an absolute token count (e.g. `5000`). |
+| `observation.bufferActivation` | `0.8` | How aggressively to clear the message window on activation. `0.8` means remove enough messages to keep only 20% of `messageTokens` remaining. Lower values keep more message history. |
+| `observation.blockAfter` | `1.2` | Safety threshold as a multiplier of `messageTokens`. At `1.2`, synchronous observation is forced at 36k tokens (1.2 × 30k). Only matters if buffering can't keep up. |
+| `activateAfterIdle` | none | Forces buffered observations and buffered reflections to activate after a period of inactivity, even if their token thresholds have not been reached yet. Accepts milliseconds or duration strings like `300_000`, `"5m"`, or `"1hr"`. Set this to your prompt cache TTL if you want activation to happen before the next cold prompt. |
+| `reflection.bufferActivation` | `0.5` | When to start background reflection. `0.5` means reflection begins when observations reach 50% of the `observationTokens` threshold. |
+| `reflection.blockAfter` | `1.2` | Safety threshold for reflection, same logic as observation. |
+
+If you're relying on prompt caching, set `activateAfterIdle` to match your cache TTL. That way, once a thread has been idle long enough for the cache to expire, the next request can activate buffered observations or reflections first and send a smaller compressed context window.
+
+```typescript
+const memory = new Memory({
+  options: {
+    observationalMemory: {
+      model: 'google/gemini-2.5-flash',
+      activateAfterIdle: '5m',
+    },
+  },
+})
+```
+
+With a 5-minute prompt cache TTL, this activates buffered context after 5 minutes of inactivity so the next uncached prompt uses observations and reflections instead of a larger raw message window. If you prefer, `300_000` works the same way.
 
 ### Disabling
 
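The multiplier arithmetic in the new settings table above can be sanity-checked with a short sketch. This is not code from the package, only plain math over the documented defaults (30k `messageTokens`, `bufferTokens: 0.2`, `bufferActivation: 0.8`, `blockAfter: 1.2`):

```typescript
const messageTokens = 30_000; // default observation threshold
const bufferTokens = 0.2;     // buffer every 20% of messageTokens
const bufferActivation = 0.8; // clear down to 20% of messageTokens remaining
const blockAfter = 1.2;       // force synchronous observation past this multiple

// A fractional bufferTokens is relative to messageTokens; an absolute
// count (e.g. 5000) would be used as-is.
const bufferEvery =
  bufferTokens < 1 ? Math.round(bufferTokens * messageTokens) : bufferTokens;
const keptAfterActivation = Math.round((1 - bufferActivation) * messageTokens);
const blockThreshold = Math.round(blockAfter * messageTokens);

console.log(bufferEvery);         // tokens between buffered observations
console.log(keptAfterActivation); // tokens of raw messages kept after activation
console.log(blockThreshold);      // token count that forces synchronous observation
```

These values line up with the table's "roughly every 6k tokens" and "forced at 36k tokens (1.2 × 30k)" figures.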
package/.docs/docs/observability/tracing/exporters/cloud.md
CHANGED

@@ -2,6 +2,12 @@
 
 The `CloudExporter` sends traces, logs, metrics, scores, and feedback to the Mastra platform. Use it to route observability data from any Mastra app to a hosted project in the Mastra platform.
 
+## Version compatibility
+
+- Use `CloudExporter` with `@mastra/observability@1.8.0` or later.
+- If you use `@mastra/observability@1.8.0` through `1.9.1`, set `MASTRA_CLOUD_TRACES_ENDPOINT=https://observability.mastra.ai` in addition to `MASTRA_CLOUD_ACCESS_TOKEN` and `MASTRA_PROJECT_ID`.
+- Starting in `@mastra/observability@1.9.2`, `CloudExporter` defaults to `https://observability.mastra.ai`, so `MASTRA_CLOUD_TRACES_ENDPOINT` is only required when you want to send telemetry to a different collector.
+
 ## Quickstart
 
 To connect `CloudExporter`, create an access token, find the destination `projectId`, and add the exporter to your observability config.
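The version rules added above can be restated as a tiny sketch. This helper is hypothetical, not part of `@mastra/observability`, and assumes plain `x.y.z` version strings:

```typescript
// Decide whether MASTRA_CLOUD_TRACES_ENDPOINT must be set explicitly,
// per the version compatibility notes: required for 1.8.0 through 1.9.1,
// defaulted from 1.9.2 onward.
function needsExplicitEndpoint(version: string): boolean {
  const [major, minor, patch] = version.split(".").map(Number);
  if (major !== 1) return false;    // the notes only cover the 1.x line
  if (minor < 8) return false;      // CloudExporter requires >= 1.8.0 anyway
  if (minor === 8) return true;     // all of 1.8.x needs the endpoint set
  return minor === 9 && patch <= 1; // 1.9.0 and 1.9.1 need it; 1.9.2+ does not
}

console.log(needsExplicitEndpoint("1.8.0")); // true
console.log(needsExplicitEndpoint("1.9.1")); // true
console.log(needsExplicitEndpoint("1.9.2")); // false
```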
@@ -57,52 +63,30 @@ MASTRA_PROJECT_ID=<your-project-id>
 
 ### 3. Set your environment variables
 
-
-
-If you use a project-scoped token from the Mastra platform instead, `CloudExporter` can route without `projectId`, but setting it explicitly is still supported.
-
-Add both values to your environment:
+Set both values in your environment so `CloudExporter` can authenticate and route telemetry to the correct project:
 
 ```bash
 MASTRA_CLOUD_ACCESS_TOKEN=<your-cloud-access-token>
 MASTRA_PROJECT_ID=<your-project-id>
 ```
 
-
+If you use `@mastra/observability@1.8.0` through `1.9.1`, also set the Mastra platform collector explicitly:
 
 ```bash
-
-MASTRA_PROJECT_ID=<your-project-id>
+MASTRA_CLOUD_TRACES_ENDPOINT=https://observability.mastra.ai
 ```
 
-
-
-The following example demonstrates how to add `CloudExporter` to your observability config:
-
-```ts
-import { Mastra } from '@mastra/core'
-import { Observability, CloudExporter } from '@mastra/observability'
+If you want to send telemetry somewhere other than Mastra platform, set `MASTRA_CLOUD_TRACES_ENDPOINT` as well. Pass either a base origin or a full traces publish URL ending in `/spans/publish`.
 
-
-
-  configs: {
-    production: {
-      serviceName: 'my-service',
-      exporters: [
-        new CloudExporter({
-          accessToken: process.env.MASTRA_CLOUD_ACCESS_TOKEN,
-          projectId: process.env.MASTRA_PROJECT_ID,
-        }),
-      ],
-    },
-  },
-}),
-})
+```bash
+MASTRA_CLOUD_TRACES_ENDPOINT=https://collector.example.com
 ```
 
-
+When you pass a base origin, `CloudExporter` derives the matching publish URLs for traces, logs, metrics, scores, and feedback automatically.
 
-
+### 4. Enable `CloudExporter`
+
+The following example demonstrates how to add `CloudExporter` to your observability config:
 
 ```ts
 import { Mastra } from '@mastra/core'
@@ -120,6 +104,8 @@ export const mastra = new Mastra({
 })
 ```
 
+Set `serviceName` on the observability config, not on `CloudExporter` itself.
+
 Use a stable `serviceName` value. In Studio, traces can be filtered by **Deployments → Service Name**, so a consistent name makes traces easier to find.
 
 Visit [Observability configuration reference](https://mastra.ai/reference/observability/tracing/configuration) for the full observability config shape.
@@ -130,7 +116,7 @@ If you prefer, rely entirely on environment variables:
 new CloudExporter()
 ```
 
-With `MASTRA_CLOUD_ACCESS_TOKEN` and `MASTRA_PROJECT_ID` set, `CloudExporter`
+With `MASTRA_CLOUD_ACCESS_TOKEN` and `MASTRA_PROJECT_ID` set, `CloudExporter` sends data to the Mastra platform project you configured. If you also set `MASTRA_CLOUD_TRACES_ENDPOINT`, it sends data to that collector instead.
 
 > **Note:** Visit [CloudExporter reference](https://mastra.ai/reference/observability/tracing/exporters/cloud-exporter) for the full list of configuration options.
 
@@ -162,13 +148,17 @@ export const mastra = new Mastra({
 
 ## Complete configuration
 
-
+`CloudExporter` defaults to Mastra platform. If you want to send telemetry to a different collector, set `MASTRA_CLOUD_TRACES_ENDPOINT` in your environment or pass `endpoint` in code.
+
+```bash
+MASTRA_CLOUD_TRACES_ENDPOINT=https://collector.example.com
+```
+
+The following example demonstrates how to override the collector endpoint and batching behavior in code:
 
 ```ts
 new CloudExporter({
-
-  projectId: process.env.MASTRA_PROJECT_ID,
-  endpoint: 'https://cloud.your-domain.com',
+  endpoint: 'https://collector.example.com',
   maxBatchSize: 1000,
   maxBatchWaitMs: 5000,
   logLevel: 'info',
@@ -179,13 +169,10 @@ new CloudExporter({
 
 After you enable `CloudExporter`, open your project in [Mastra Studio](https://projects.mastra.ai) to inspect the exported data.
 
-- Open
+- Open the project you set `MASTRA_PROJECT_ID` to and select **Open Studio**.
 - In Studio, go to **Traces** to inspect agent and workflow traces.
 - Open the filter menu and use **Deployments → Service Name** to isolate traces from a specific app or deployment.
 - Use the **Logs** page in the project dashboard to inspect exported logs.
-- Use the project named by `MASTRA_PROJECT_ID` when you work with organization-scoped tokens.
-
-> **Note:** If you use a project-scoped token, open the project that issued the token. If you use an organization-scoped token, open the project named by `MASTRA_PROJECT_ID`.
 
 When you deploy with Mastra Studio, set **Deployment → Service Name** to a stable value and keep it aligned with the `serviceName` in your observability config. This makes traces easier to filter in Studio through **Deployments → Service Name** when multiple services or deployments send data to the same project.
 
package/.docs/guides/build-your-ui/ai-sdk-ui.md
CHANGED

@@ -242,7 +242,7 @@ Use [`prepareSendMessagesRequest`](https://ai-sdk.dev/docs/reference/ai-sdk-ui/u
 
 When your agent has [memory](https://mastra.ai/docs/memory/overview) configured, Mastra loads conversation history from storage on the server. Send only the new message from the client instead of the full conversation history.
 
-Sending the full history is redundant and can cause message
+Sending the full history is redundant and can cause message-ordering bugs because client-side timestamps can conflict with the timestamps stored in your database.
 
 ```typescript
 import { useChat } from '@ai-sdk/react'
@@ -392,7 +392,8 @@ Mastra streams data to the frontend as "parts" within messages. Each part has a
 | Data Part Type | Source | Description |
 | -------------------- | ----------------------- | ---------------------------------------------------------------------------------- |
 | `tool-{toolKey}` | AI SDK built-in | Tool invocation with states: `input-available`, `output-available`, `output-error` |
-| `data-workflow` | `workflowRoute()` | Workflow execution with step
+| `data-workflow` | `workflowRoute()` | Workflow execution state snapshots with step status and final outputs |
+| `data-workflow-step` | `workflowRoute()` | Workflow step delta with the full payload for the step that just changed |
 | `data-network` | `networkRoute()` | Agent network execution with ordered steps and outputs |
 | `data-tool-agent` | Nested agent in tool | Agent output streamed from within a tool's `execute()` |
 | `data-tool-workflow` | Nested workflow in tool | Workflow output streamed from within a tool's `execute()` |
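The part types in the table above split along a simple naming convention: `tool-*` parts come from the AI SDK itself, while `data-*` parts come from Mastra routes. A minimal sketch of dispatching on that convention (the part shape here is simplified for illustration, not the full AI SDK type):

```typescript
// Classify a streamed message part by its type prefix.
type Part = { type: string };

function partKind(part: Part): "tool" | "mastra-data" | "other" {
  if (part.type.startsWith("tool-")) return "tool";       // e.g. tool-{toolKey}
  if (part.type.startsWith("data-")) return "mastra-data"; // e.g. data-workflow-step
  return "other";                                          // e.g. plain text parts
}

console.log(partKind({ type: "tool-weather" }));       // "tool"
console.log(partKind({ type: "data-workflow-step" })); // "mastra-data"
console.log(partKind({ type: "text" }));               // "other"
```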
@@ -505,11 +506,11 @@ export function Chat() {
 
 ### Rendering workflow data
 
-When using `workflowRoute()` or `handleWorkflowStream()`, Mastra emits `data-workflow` parts
+When using `workflowRoute()` or `handleWorkflowStream()`, Mastra emits `data-workflow` parts for workflow state snapshots and `data-workflow-step` parts for the full payload of the step that just changed. This keeps long-running workflows from repeating every completed step output in every intermediate snapshot.
 
 **Backend**:
 
-Define a workflow with multiple steps that will emit `data-workflow` parts as it executes.
+Define a workflow with multiple steps that will emit `data-workflow` and `data-workflow-step` parts as it executes.
 
 ```typescript
 import { createStep, createWorkflow } from '@mastra/core/workflows'
@@ -584,14 +585,15 @@ export const mastra = new Mastra({
 
 **Frontend**:
 
-Check for `data-workflow` parts
+Check for `data-workflow` parts to render workflow status snapshots. If you need the full payload for the step that just changed, also read `data-workflow-step` parts.
 
 ```typescript
 import { useChat } from '@ai-sdk/react'
 import { DefaultChatTransport } from 'ai'
-import type { WorkflowDataPart } from '@mastra/ai-sdk'
+import type { WorkflowDataPart, WorkflowStepDataPart } from '@mastra/ai-sdk'
 
 type WorkflowData = WorkflowDataPart['data']
+type WorkflowStepData = WorkflowStepDataPart['data']
 type StepStatus = 'running' | 'success' | 'failed' | 'suspended' | 'waiting'
 
 function StepIndicator({
@@ -652,6 +654,17 @@ export function WorkflowChat() {
           </div>
         )
       }
+      if (part.type === 'data-workflow-step') {
+        const stepData = part.data as WorkflowStepData
+        return (
+          <StepIndicator
+            key={index}
+            name={stepData.step.name}
+            status={stepData.step.status}
+            output={stepData.step.output}
+          />
+        )
+      }
       return null
     })}
   </div>
package/.docs/models/index.md
CHANGED

@@ -1,6 +1,6 @@
 # Model Providers
 
-Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to
+Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to 3592 models from 99 providers through a single API.
 
 ## Features
 
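Model router IDs throughout these docs follow a `provider/model-id` shape (e.g. `google/gemini-2.5-flash`, `opencode/big-pickle`). A minimal parse of that shape, assuming only that the first `/` separates provider from model (this helper is illustrative, not part of Mastra):

```typescript
// Split a model router ID into its provider prefix and model name.
function parseModelId(id: string): { provider: string; model: string } {
  const slash = id.indexOf("/");
  if (slash === -1) throw new Error(`expected provider/model, got "${id}"`);
  return { provider: id.slice(0, slash), model: id.slice(slash + 1) };
}

console.log(parseModelId("google/gemini-2.5-flash"));
// { provider: "google", model: "gemini-2.5-flash" }
```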
package/.docs/models/providers/opencode-go.md
CHANGED

@@ -1,6 +1,6 @@
 # OpenCode Go
 
-Access
+Access 7 OpenCode Go models through Mastra's model router. Authentication is handled automatically using the `OPENCODE_API_KEY` environment variable.
 
 Learn more in the [OpenCode Go documentation](https://opencode.ai/docs/zen).
 
@@ -41,8 +41,6 @@ for await (const chunk of stream) {
 | `opencode-go/mimo-v2-pro` | 1.0M | | | | | | $1 | $3 |
 | `opencode-go/minimax-m2.5` | 205K | | | | | | $0.30 | $1 |
 | `opencode-go/minimax-m2.7` | 205K | | | | | | $0.30 | $1 |
-| `opencode-go/qwen3.5-plus` | 262K | | | | | | $0.20 | $1 |
-| `opencode-go/qwen3.6-plus` | 262K | | | | | | $0.50 | $3 |
 
 ## Advanced configuration
 
@@ -72,7 +70,7 @@ const agent = new Agent({
   model: ({ requestContext }) => {
     const useAdvanced = requestContext.task === "complex";
     return useAdvanced
-      ? "opencode-go/
+      ? "opencode-go/minimax-m2.7"
       : "opencode-go/glm-5";
   }
 });
package/.docs/models/providers/opencode.md
CHANGED

@@ -1,6 +1,6 @@
 # OpenCode Zen
 
-Access
+Access 32 OpenCode Zen models through Mastra's model router. Authentication is handled automatically using the `OPENCODE_API_KEY` environment variable.
 
 Learn more in the [OpenCode Zen documentation](https://opencode.ai/docs/zen).
 
@@ -66,8 +66,6 @@ for await (const chunk of stream) {
 | `opencode/minimax-m2.5` | 205K | | | | | | $0.30 | $1 |
 | `opencode/minimax-m2.5-free` | 205K | | | | | | — | — |
 | `opencode/nemotron-3-super-free` | 205K | | | | | | — | — |
-| `opencode/qwen3.5-plus` | 262K | | | | | | $0.20 | $1 |
-| `opencode/qwen3.6-plus` | 262K | | | | | | $0.50 | $3 |
 
 ## Advanced configuration
 
@@ -97,7 +95,7 @@ const agent = new Agent({
   model: ({ requestContext }) => {
     const useAdvanced = requestContext.task === "complex";
     return useAdvanced
-      ? "opencode/
+      ? "opencode/nemotron-3-super-free"
      : "opencode/big-pickle";
   }
 });
package/.docs/reference/memory/observational-memory.md
CHANGED

@@ -36,6 +36,8 @@ OM performs thresholding with fast local token estimation. Text uses `tokenx`, a
 
 **scope** (`'resource' | 'thread'`): Memory scope for observations. \`'thread'\` keeps observations per-thread. \`'resource'\` (experimental) shares observations across all threads for a resource, enabling cross-conversation memory. (Default: `'thread'`)
 
+**activateAfterIdle** (`number | string`): Time before buffered observations or buffered reflections are forced to activate after inactivity, even if their token thresholds have not been reached yet. Accepts milliseconds or duration strings like \`300\_000\`, \`"5m"\`, or \`"1hr"\`. When the gap between the current time and the last assistant message part timestamp exceeds this value, buffered observational memory activates before the next prompt. Useful for aligning with prompt cache TTLs.
+
 **shareTokenBudget** (`boolean`): Share the token budget between messages and observations. When enabled, the total budget is \`observation.messageTokens + reflection.observationTokens\`. Messages can use more space when observations are small, and vice versa. This maximizes context usage through flexible allocation. \`shareTokenBudget\` is not yet compatible with async buffering. You must set \`observation: { bufferTokens: false }\` when using this option (this is a temporary limitation). (Default: `false`)
 
 **retrieval** (`boolean | { vector?: boolean; scope?: 'thread' | 'resource' }`): \*\*Experimental.\*\* Enable retrieval-mode observation groups as durable pointers to raw message history. \`true\` enables cross-thread browsing by default. \`{ vector: true }\` also enables semantic search using Memory's vector store and embedder. \`{ scope: 'thread' }\` restricts the recall tool to the current thread only. Default scope is \`'resource'\`. (Default: `false`)
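The `activateAfterIdle` value added above accepts two forms: a raw millisecond count or a duration string. A hypothetical normalizer (not the package's actual parser) showing how the documented examples map to milliseconds:

```typescript
// Normalize an activateAfterIdle-style value into milliseconds.
// Supports the documented forms: a number (already ms) or strings
// like "5m" / "1hr"; unit coverage beyond those is an assumption.
function toMs(value: number | string): number {
  if (typeof value === "number") return value; // already milliseconds
  const match = value.match(/^(\d+(?:\.\d+)?)\s*(ms|s|m|hr|h)$/);
  if (!match) throw new Error(`unsupported duration: ${value}`);
  const unit: Record<string, number> = {
    ms: 1,
    s: 1_000,
    m: 60_000,
    h: 3_600_000,
    hr: 3_600_000,
  };
  return Number(match[1]) * unit[match[2]];
}

console.log(toMs(300_000)); // 300000
console.log(toMs("5m"));    // 300000
console.log(toMs("1hr"));   // 3600000
```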
package/.docs/reference/observability/tracing/exporters/cloud-exporter.md
CHANGED

@@ -1,5 +1,7 @@
 # CloudExporter
 
+**Added in:** `@mastra/observability@1.8.0`
+
 Sends tracing spans, logs, metrics, scores, and feedback to the Mastra platform for online visualization and monitoring.
 
 ## Constructor
 
@@ -56,9 +58,9 @@ Extends `BaseExporterConfig`, which includes:
 
 The exporter reads these environment variables if not provided in config:
 
-- `MASTRA_CLOUD_ACCESS_TOKEN` - Authentication token
+- `MASTRA_CLOUD_ACCESS_TOKEN` - Authentication token for `CloudExporter` requests
 - `MASTRA_PROJECT_ID` - Project ID to use when deriving project-scoped collector routes such as `/projects/:projectId/ai/spans/publish`
-- `MASTRA_CLOUD_TRACES_ENDPOINT` - Traces endpoint override. Pass either a base origin or a full traces publish URL. Defaults to `https://
+- `MASTRA_CLOUD_TRACES_ENDPOINT` - Traces endpoint override. Pass either a base origin or a full traces publish URL. Defaults to `https://observability.mastra.ai` in `@mastra/observability@1.9.2` and later
 
 ## Properties
 
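The environment variables above combine into the project-scoped spans route `/projects/:projectId/ai/spans/publish`. A sketch of that derivation; the route string comes from the reference text above, but the exact handling inside `CloudExporter` may differ, so treat this as illustrative only:

```typescript
// Derive the spans publish URL from an endpoint override and project ID.
function spansPublishUrl(endpoint: string, projectId: string): string {
  // A full traces publish URL is used as-is.
  if (endpoint.endsWith("/spans/publish")) return endpoint;
  // Otherwise treat the value as a base origin and append the documented route.
  const base = endpoint.replace(/\/+$/, ""); // trim trailing slashes
  return `${base}/projects/${projectId}/ai/spans/publish`;
}

console.log(spansPublishUrl("https://observability.mastra.ai", "proj_123"));
// https://observability.mastra.ai/projects/proj_123/ai/spans/publish
```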
package/CHANGELOG.md
CHANGED

@@ -1,5 +1,21 @@
 # @mastra/mcp-docs-server
 
+## 1.1.26-alpha.3
+
+### Patch Changes
+
+- Updated dependencies [[`3d83d06`](https://github.com/mastra-ai/mastra/commit/3d83d06f776f00fb5f4163dddd32a030c5c20844), [`7e0e63e`](https://github.com/mastra-ai/mastra/commit/7e0e63e2e485e84442351f4c7a79a424c83539dc), [`9467ea8`](https://github.com/mastra-ai/mastra/commit/9467ea87695749a53dfc041576410ebf9ee7bb67), [`7338d94`](https://github.com/mastra-ai/mastra/commit/7338d949380cf68b095342e8e42610dc51d557c1), [`c65aec3`](https://github.com/mastra-ai/mastra/commit/c65aec356cc037ee7c4b30ccea946807d4c4f443)]:
+  - @mastra/core@1.26.0-alpha.2
+  - @mastra/mcp@1.5.1-alpha.1
+
+## 1.1.26-alpha.2
+
+### Patch Changes
+
+- Updated dependencies [[`7020c06`](https://github.com/mastra-ai/mastra/commit/7020c0690b199d9da337f0e805f16948e557922e), [`7020c06`](https://github.com/mastra-ai/mastra/commit/7020c0690b199d9da337f0e805f16948e557922e)]:
+  - @mastra/mcp@1.5.1-alpha.0
+  - @mastra/core@1.25.1-alpha.1
+
 ## 1.1.26-alpha.0
 
 ### Patch Changes
package/package.json
CHANGED

@@ -1,6 +1,6 @@
 {
   "name": "@mastra/mcp-docs-server",
-  "version": "1.1.26-alpha.1",
+  "version": "1.1.26-alpha.4",
   "description": "MCP server for accessing Mastra.ai documentation, changelogs, and news.",
   "type": "module",
   "main": "dist/index.js",

@@ -29,8 +29,8 @@
     "jsdom": "^26.1.0",
     "local-pkg": "^1.1.2",
     "zod": "^4.3.6",
-    "@mastra/
-    "@mastra/
+    "@mastra/core": "1.26.0-alpha.2",
+    "@mastra/mcp": "^1.5.1-alpha.1"
   },
   "devDependencies": {
     "@hono/node-server": "^1.19.11",

@@ -48,7 +48,7 @@
     "vitest": "4.0.18",
     "@internal/lint": "0.0.83",
     "@internal/types-builder": "0.0.58",
-    "@mastra/core": "1.
+    "@mastra/core": "1.26.0-alpha.2"
   },
   "homepage": "https://mastra.ai",
   "repository": {