@mastra/mcp-docs-server 1.1.26-alpha.2 → 1.1.26-alpha.6

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -73,4 +73,6 @@ Develop your project locally with [`mastra dev`](https://mastra.ai/reference/cli
  
  Once you're ready to deploy your application to production, use [`mastra studio deploy`](https://mastra.ai/reference/cli/mastra) and [`mastra server deploy`](https://mastra.ai/reference/cli/mastra) to push your application to the cloud.
  
- Follow the [Studio deployment guide](https://mastra.ai/docs/studio/deployment) and [Server deployment guide](https://mastra.ai/guides/deployment/mastra-platform) for step-by-step instructions.
+ Follow the [Studio deployment guide](https://mastra.ai/docs/studio/deployment) and [Server deployment guide](https://mastra.ai/guides/deployment/mastra-platform) for step-by-step instructions.
+
+ If you host your Mastra application on your own infrastructure, you can still send observability data to Studio using the [CloudExporter](https://mastra.ai/docs/observability/tracing/exporters/cloud).
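For orientation, a minimal sketch of that self-hosted setup, assuming the `Observability` config shape shown in the CloudExporter quickstart later in this diff and that credentials come from the environment:

```typescript
import { Mastra } from '@mastra/core'
import { Observability, CloudExporter } from '@mastra/observability'

export const mastra = new Mastra({
  observability: new Observability({
    configs: {
      production: {
        serviceName: 'my-service',
        exporters: [
          // With no constructor args, CloudExporter reads
          // MASTRA_CLOUD_ACCESS_TOKEN and MASTRA_PROJECT_ID from the environment.
          new CloudExporter(),
        ],
      },
    },
  }),
})
```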
@@ -333,13 +333,29 @@ Reflection works similarly — the Reflector runs in the background when observa
  
  ### Settings
  
- | Setting | Default | What it controls |
- | --- | --- | --- |
- | `observation.bufferTokens` | `0.2` | How often to buffer. `0.2` means every 20% of `messageTokens` — with the default 30k threshold, that's roughly every 6k tokens. Can also be an absolute token count (e.g. `5000`). |
- | `observation.bufferActivation` | `0.8` | How aggressively to clear the message window on activation. `0.8` means remove enough messages to keep only 20% of `messageTokens` remaining. Lower values keep more message history. |
- | `observation.blockAfter` | `1.2` | Safety threshold as a multiplier of `messageTokens`. At `1.2`, synchronous observation is forced at 36k tokens (1.2 × 30k). Only matters if buffering can't keep up. |
- | `reflection.bufferActivation` | `0.5` | When to start background reflection. `0.5` means reflection begins when observations reach 50% of the `observationTokens` threshold. |
- | `reflection.blockAfter` | `1.2` | Safety threshold for reflection, same logic as observation. |
+ | Setting | Default | What it controls |
+ | --- | --- | --- |
+ | `observation.bufferTokens` | `0.2` | How often to buffer. `0.2` means every 20% of `messageTokens` — with the default 30k threshold, that's roughly every 6k tokens. Can also be an absolute token count (e.g. `5000`). |
+ | `observation.bufferActivation` | `0.8` | How aggressively to clear the message window on activation. `0.8` means remove enough messages to keep only 20% of `messageTokens` remaining. Lower values keep more message history. |
+ | `observation.blockAfter` | `1.2` | Safety threshold as a multiplier of `messageTokens`. At `1.2`, synchronous observation is forced at 36k tokens (1.2 × 30k). Only matters if buffering can't keep up. |
+ | `activateAfterIdle` | none | Forces buffered observations and buffered reflections to activate after a period of inactivity, even if their token thresholds have not been reached yet. Accepts milliseconds or duration strings like `300_000`, `"5m"`, or `"1hr"`. Set this to your prompt cache TTL if you want activation to happen before the next cold prompt. |
+ | `reflection.bufferActivation` | `0.5` | When to start background reflection. `0.5` means reflection begins when observations reach 50% of the `observationTokens` threshold. |
+ | `reflection.blockAfter` | `1.2` | Safety threshold for reflection, same logic as observation. |
+
+ If you're relying on prompt caching, set `activateAfterIdle` to match your cache TTL. That way, once a thread has been idle long enough for the cache to expire, the next request can activate buffered observations or reflections first and send a smaller compressed context window.
+
+ ```typescript
+ const memory = new Memory({
+   options: {
+     observationalMemory: {
+       model: 'google/gemini-2.5-flash',
+       activateAfterIdle: '5m',
+     },
+   },
+ })
+ ```
+
+ With a 5-minute prompt cache TTL, this activates buffered context after 5 minutes of inactivity so the next uncached prompt uses observations and reflections instead of a larger raw message window. Passing the equivalent `300_000` milliseconds works the same way.
  
  ### Disabling
  
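To make the settings table concrete, here is a hedged sketch of tuning the buffering thresholds. The nesting of `observation` and `reflection` option groups follows the dotted setting names (the reference section later in this diff uses `observation: { bufferTokens: ... }` in the same shape); the `@mastra/memory` import path and the exact values are illustrative.

```typescript
import { Memory } from '@mastra/memory'

// Sketch only: settings nested under `observation` and `reflection`
// as the dotted names in the table above suggest.
const memory = new Memory({
  options: {
    observationalMemory: {
      model: 'google/gemini-2.5-flash',
      observation: {
        bufferTokens: 5000,    // absolute token count instead of the 0.2 ratio
        bufferActivation: 0.8, // keep only 20% of messageTokens after activation
        blockAfter: 1.2,       // force synchronous observation at 1.2 × messageTokens
      },
      reflection: {
        bufferActivation: 0.5, // start background reflection at 50% of observationTokens
        blockAfter: 1.2,
      },
    },
  },
})
```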
@@ -2,6 +2,18 @@
  
  The `CloudExporter` sends traces, logs, metrics, scores, and feedback to the Mastra platform. Use it to route observability data from any Mastra app to a hosted project in the Mastra platform.
  
+ > **Self-hosted or standalone apps:** If you host your Mastra application on your own infrastructure (not on Mastra Platform), you still need a deployed Studio project to view traces, logs, and metrics. `CloudExporter` sends data to a Studio project, so one must exist before you can use it.
+ >
+ > 1. [Create a Mastra project](https://mastra.ai/guides/getting-started/quickstart) if you don't have one yet.
+ > 2. [Deploy Studio](https://mastra.ai/docs/studio/deployment) to the Mastra platform with `mastra studio deploy`.
+ > 3. Follow the [quickstart steps below](#quickstart) to create an access token and find your project ID.
+
+ ## Version compatibility
+
+ - Use `CloudExporter` with `@mastra/observability@1.8.0` or later.
+ - If you use `@mastra/observability@1.8.0` through `1.9.1`, set `MASTRA_CLOUD_TRACES_ENDPOINT=https://observability.mastra.ai` in addition to `MASTRA_CLOUD_ACCESS_TOKEN` and `MASTRA_PROJECT_ID`.
+ - Starting in `@mastra/observability@1.9.2`, `CloudExporter` defaults to `https://observability.mastra.ai`, so `MASTRA_CLOUD_TRACES_ENDPOINT` is only required when you want to send telemetry to a different collector.
+
  ## Quickstart
  
  To connect `CloudExporter`, create an access token, find the destination `projectId`, and add the exporter to your observability config.
@@ -57,52 +69,30 @@ MASTRA_PROJECT_ID=<your-project-id>
  
  ### 3. Set your environment variables
  
- Tokens created with `mastra auth tokens create` are organization-scoped, so `MASTRA_PROJECT_ID` is required.
-
- If you use a project-scoped token from the Mastra platform instead, `CloudExporter` can route without `projectId`, but setting it explicitly is still supported.
-
- Add both values to your environment:
+ Set both values in your environment so `CloudExporter` can authenticate and route telemetry to the correct project:
  
  ```bash
  MASTRA_CLOUD_ACCESS_TOKEN=<your-cloud-access-token>
  MASTRA_PROJECT_ID=<your-project-id>
  ```
  
- For example:
+ If you use `@mastra/observability@1.8.0` through `1.9.1`, also set the Mastra platform collector explicitly:
  
  ```bash
- MASTRA_CLOUD_ACCESS_TOKEN=<your-cloud-access-token>
- MASTRA_PROJECT_ID=<your-project-id>
+ MASTRA_CLOUD_TRACES_ENDPOINT=https://observability.mastra.ai
  ```
  
- ### 4. Enable `CloudExporter`
-
- The following example demonstrates how to add `CloudExporter` to your observability config:
-
- ```ts
- import { Mastra } from '@mastra/core'
- import { Observability, CloudExporter } from '@mastra/observability'
+ If you want to send telemetry somewhere other than the Mastra platform, set `MASTRA_CLOUD_TRACES_ENDPOINT` as well. Pass either a base origin or a full traces publish URL ending in `/spans/publish`.
  
- export const mastra = new Mastra({
-   observability: new Observability({
-     configs: {
-       production: {
-         serviceName: 'my-service',
-         exporters: [
-           new CloudExporter({
-             accessToken: process.env.MASTRA_CLOUD_ACCESS_TOKEN,
-             projectId: process.env.MASTRA_PROJECT_ID,
-           }),
-         ],
-       },
-     },
-   }),
- })
+ ```bash
+ MASTRA_CLOUD_TRACES_ENDPOINT=https://collector.example.com
  ```
  
- Set `serviceName` in code on the observability config, not on `CloudExporter` itself. In the example above, the value is `configs.production.serviceName`.
+ When you pass a base origin, `CloudExporter` derives the matching publish URLs for traces, logs, metrics, scores, and feedback automatically.
+
+ ### 4. Enable `CloudExporter`
  
- For example:
+ The following example demonstrates how to add `CloudExporter` to your observability config:
  
  ```ts
  import { Mastra } from '@mastra/core'
@@ -120,6 +110,8 @@ export const mastra = new Mastra({
  })
  ```
  
+ Set `serviceName` on the observability config, not on `CloudExporter` itself.
+
  Use a stable `serviceName` value. In Studio, traces can be filtered by **Deployments → Service Name**, so a consistent name makes traces easier to find.
  
  Visit [Observability configuration reference](https://mastra.ai/reference/observability/tracing/configuration) for the full observability config shape.
@@ -130,7 +122,7 @@ If you prefer, rely entirely on environment variables:
  new CloudExporter()
  ```
  
- With `MASTRA_CLOUD_ACCESS_TOKEN` and `MASTRA_PROJECT_ID` set, `CloudExporter` automatically sends data to the correct Mastra platform project.
+ With `MASTRA_CLOUD_ACCESS_TOKEN` and `MASTRA_PROJECT_ID` set, `CloudExporter` sends data to the Mastra platform project you configured. If you also set `MASTRA_CLOUD_TRACES_ENDPOINT`, it sends data to that collector instead.
  
  > **Note:** Visit [CloudExporter reference](https://mastra.ai/reference/observability/tracing/exporters/cloud-exporter) for the full list of configuration options.
  
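The base-origin derivation described in the quickstart can be pictured with a small sketch. This illustrates the idea only, not the library's implementation: the spans route shape `/projects/:projectId/ai/spans/publish` comes from the reference section later in this diff, while the signal names for the other routes are assumptions.

```typescript
// Hypothetical sketch: expand a base origin into per-signal publish URLs.
// Only the spans route is documented; the others are assumed analogous.
const signals = ['spans', 'logs', 'metrics', 'scores', 'feedback'] as const

function derivePublishUrls(baseOrigin: string, projectId: string) {
  return Object.fromEntries(
    signals.map((signal) => [
      signal,
      new URL(`/projects/${projectId}/ai/${signal}/publish`, baseOrigin).toString(),
    ]),
  )
}

// derivePublishUrls('https://collector.example.com', 'proj_123').spans
// → 'https://collector.example.com/projects/proj_123/ai/spans/publish'
```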
@@ -162,13 +154,17 @@ export const mastra = new Mastra({
  
  ## Complete configuration
  
- The following example demonstrates how to configure custom endpoints and batching behavior:
+ `CloudExporter` defaults to the Mastra platform. If you want to send telemetry to a different collector, set `MASTRA_CLOUD_TRACES_ENDPOINT` in your environment or pass `endpoint` in code.
+
+ ```bash
+ MASTRA_CLOUD_TRACES_ENDPOINT=https://collector.example.com
+ ```
+
+ The following example demonstrates how to override the collector endpoint and batching behavior in code:
  
  ```ts
  new CloudExporter({
-   accessToken: process.env.MASTRA_CLOUD_ACCESS_TOKEN,
-   projectId: process.env.MASTRA_PROJECT_ID,
-   endpoint: 'https://cloud.your-domain.com',
+   endpoint: 'https://collector.example.com',
    maxBatchSize: 1000,
    maxBatchWaitMs: 5000,
    logLevel: 'info',
@@ -179,13 +175,10 @@ new CloudExporter({
  
  After you enable `CloudExporter`, open your project in [Mastra Studio](https://projects.mastra.ai) to inspect the exported data.
  
- - Open your project and select **Open Studio**.
+ - Open the project that `MASTRA_PROJECT_ID` points to and select **Open Studio**.
  - In Studio, go to **Traces** to inspect agent and workflow traces.
  - Open the filter menu and use **Deployments → Service Name** to isolate traces from a specific app or deployment.
  - Use the **Logs** page in the project dashboard to inspect exported logs.
- - Use the project named by `MASTRA_PROJECT_ID` when you work with organization-scoped tokens.
-
- > **Note:** If you use a project-scoped token, open the project that issued the token. If you use an organization-scoped token, open the project named by `MASTRA_PROJECT_ID`.
  
  When you deploy with Mastra Studio, set **Deployment → Service Name** to a stable value and keep it aligned with the `serviceName` in your observability config. This makes traces easier to filter in Studio through **Deployments → Service Name** when multiple services or deployments send data to the same project.
  
@@ -242,7 +242,7 @@ Use [`prepareSendMessagesRequest`](https://ai-sdk.dev/docs/reference/ai-sdk-ui/u
  
  When your agent has [memory](https://mastra.ai/docs/memory/overview) configured, Mastra loads conversation history from storage on the server. Send only the new message from the client instead of the full conversation history.
  
- Sending the full history is redundant and can cause message ordering bugs because client-side timestamps can conflict with the timestamps stored in your database.
+ Sending the full history is redundant and can cause message-ordering bugs because client-side timestamps can conflict with the timestamps stored in your database.
  
  ```typescript
  import { useChat } from '@ai-sdk/react'
@@ -392,7 +392,8 @@ Mastra streams data to the frontend as "parts" within messages. Each part has a
  | Data Part Type | Source | Description |
  | --- | --- | --- |
  | `tool-{toolKey}` | AI SDK built-in | Tool invocation with states: `input-available`, `output-available`, `output-error` |
- | `data-workflow` | `workflowRoute()` | Workflow execution with step inputs, outputs, and status |
+ | `data-workflow` | `workflowRoute()` | Workflow execution state snapshots with step status and final outputs |
+ | `data-workflow-step` | `workflowRoute()` | Workflow step delta with the full payload for the step that just changed |
  | `data-network` | `networkRoute()` | Agent network execution with ordered steps and outputs |
  | `data-tool-agent` | Nested agent in tool | Agent output streamed from within a tool's `execute()` |
  | `data-tool-workflow` | Nested workflow in tool | Workflow output streamed from within a tool's `execute()` |
@@ -505,11 +506,11 @@ export function Chat() {
  
  ### Rendering workflow data
  
- When using `workflowRoute()` or `handleWorkflowStream()`, Mastra emits `data-workflow` parts that contain the workflow's execution state, including step statuses and outputs.
+ When using `workflowRoute()` or `handleWorkflowStream()`, Mastra emits `data-workflow` parts for workflow state snapshots and `data-workflow-step` parts for the full payload of the step that just changed. This keeps long-running workflows from repeating every completed step output in every intermediate snapshot.
  
  **Backend**:
  
- Define a workflow with multiple steps that will emit `data-workflow` parts as it executes.
+ Define a workflow with multiple steps that will emit `data-workflow` and `data-workflow-step` parts as it executes.
  
  ```typescript
  import { createStep, createWorkflow } from '@mastra/core/workflows'
@@ -584,14 +585,15 @@ export const mastra = new Mastra({
  
  **Frontend**:
  
- Check for `data-workflow` parts and render each step's status and output using the `WorkflowDataPart` type for type safety.
+ Check for `data-workflow` parts to render workflow status snapshots. If you need the full payload for the step that just changed, also read `data-workflow-step` parts.
  
  ```typescript
  import { useChat } from '@ai-sdk/react'
  import { DefaultChatTransport } from 'ai'
- import type { WorkflowDataPart } from '@mastra/ai-sdk'
+ import type { WorkflowDataPart, WorkflowStepDataPart } from '@mastra/ai-sdk'
  
  type WorkflowData = WorkflowDataPart['data']
+ type WorkflowStepData = WorkflowStepDataPart['data']
  type StepStatus = 'running' | 'success' | 'failed' | 'suspended' | 'waiting'
  
  function StepIndicator({
@@ -652,6 +654,17 @@ export function WorkflowChat() {
  </div>
  )
  }
+ if (part.type === 'data-workflow-step') {
+   const stepData = part.data as WorkflowStepData
+   return (
+     <StepIndicator
+       key={index}
+       name={stepData.step.name}
+       status={stepData.step.status}
+       output={stepData.step.output}
+     />
+   )
+ }
  return null
  })}
  </div>
@@ -1,6 +1,6 @@
  # Model Providers
  
- Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to 3596 models from 99 providers through a single API.
+ Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to 3599 models from 100 providers through a single API.
  
  ## Features
  
@@ -0,0 +1,73 @@
+ # ![HPC-AI logo](https://models.dev/logos/hpc-ai.svg)HPC-AI
+
+ Access 3 HPC-AI models through Mastra's model router. Authentication is handled automatically using the `HPC_AI_API_KEY` environment variable.
+
+ Learn more in the [HPC-AI documentation](https://www.hpc-ai.com/doc/docs/quickstart/).
+
+ ```bash
+ HPC_AI_API_KEY=your-api-key
+ ```
+
+ ```typescript
+ import { Agent } from "@mastra/core/agent";
+
+ const agent = new Agent({
+   id: "my-agent",
+   name: "My Agent",
+   instructions: "You are a helpful assistant",
+   model: "hpc-ai/minimax/minimax-m2.5"
+ });
+
+ // Generate a response
+ const response = await agent.generate("Hello!");
+
+ // Stream a response
+ const stream = await agent.stream("Tell me a story");
+ for await (const chunk of stream) {
+   console.log(chunk);
+ }
+ ```
+
+ > **Info:** Mastra uses the OpenAI-compatible `/chat/completions` endpoint. Some provider-specific features may not be available. Check the [HPC-AI documentation](https://www.hpc-ai.com/doc/docs/quickstart/) for details.
+
+ ## Models
+
+ | Model | Context | Tools | Reasoning | Image | Audio | Video | Input $/1M | Output $/1M |
+ | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+ | `hpc-ai/minimax/minimax-m2.5` | 1.0M | | | | | | $0.14 | $0.56 |
+ | `hpc-ai/moonshotai/kimi-k2.5` | 262K | | | | | | $0.21 | $1 |
+ | `hpc-ai/zai-org/glm-5.1` | 202K | | | | | | $0.66 | $2 |
+
+ ## Advanced configuration
+
+ ### Custom headers
+
+ ```typescript
+ const agent = new Agent({
+   id: "custom-agent",
+   name: "custom-agent",
+   model: {
+     url: "https://api.hpc-ai.com/inference/v1",
+     id: "hpc-ai/minimax/minimax-m2.5",
+     apiKey: process.env.HPC_AI_API_KEY,
+     headers: {
+       "X-Custom-Header": "value"
+     }
+   }
+ });
+ ```
+
+ ### Dynamic model selection
+
+ ```typescript
+ const agent = new Agent({
+   id: "dynamic-agent",
+   name: "Dynamic Agent",
+   model: ({ requestContext }) => {
+     const useAdvanced = requestContext.task === "complex";
+     return useAdvanced
+       ? "hpc-ai/zai-org/glm-5.1"
+       : "hpc-ai/minimax/minimax-m2.5";
+   }
+ });
+ ```
@@ -71,7 +71,7 @@ for await (const chunk of stream) {
  | `nvidia/microsoft/phi-4-mini-instruct` | 131K | | | | | | — | — |
  | `nvidia/minimaxai/minimax-m2.1` | 205K | | | | | | — | — |
  | `nvidia/minimaxai/minimax-m2.5` | 205K | | | | | | — | — |
- | `nvidia/minimaxai/minimax-m2.7` | 205K | | | | | | $0.30 | $1 |
+ | `nvidia/minimaxai/minimax-m2.7` | 205K | | | | | | — | — |
  | `nvidia/mistralai/codestral-22b-instruct-v0.1` | 128K | | | | | | — | — |
  | `nvidia/mistralai/devstral-2-123b-instruct-2512` | 262K | | | | | | — | — |
  | `nvidia/mistralai/mamba-codestral-7b-v0.1` | 128K | | | | | | — | — |
@@ -34,6 +34,7 @@ Direct access to individual AI model providers. Each provider offers unique mode
  - [Friendli](https://mastra.ai/models/providers/friendli)
  - [GitHub Models](https://mastra.ai/models/providers/github-models)
  - [Helicone](https://mastra.ai/models/providers/helicone)
+ - [HPC-AI](https://mastra.ai/models/providers/hpc-ai)
  - [Hugging Face](https://mastra.ai/models/providers/huggingface)
  - [iFlow](https://mastra.ai/models/providers/iflowcn)
  - [Inception](https://mastra.ai/models/providers/inception)
@@ -12,6 +12,29 @@ export const mastraClient = new MastraClient({
  })
  ```
  
+ ## `RequestContext`
+
+ When you use `RequestContext` with the client SDK, import it from `@mastra/client-js`.
+
+ ```typescript
+ import { MastraClient, RequestContext } from '@mastra/client-js'
+
+ const client = new MastraClient({
+   baseUrl: 'http://localhost:4111/',
+ })
+
+ const requestContext = new RequestContext()
+ requestContext.set('userId', 'user-123')
+
+ const agent = client.getAgent('support-agent')
+
+ const response = await agent.generate('Summarize this ticket', {
+   requestContext,
+ })
+ ```
+
+ You can also pass `requestContext` as a `Record<string, any>`.
+
  ## Parameters
  
  **baseUrl** (`string`): The base URL for the Mastra API. All requests will be sent relative to this URL.
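The plain-record form mentioned in the `RequestContext` section above would look roughly like this, as a sketch continuing the same snippet and assuming `generate` accepts either form interchangeably:

```typescript
// Equivalent to the RequestContext instance above, passed as a plain record.
const response = await agent.generate('Summarize this ticket', {
  requestContext: { userId: 'user-123' },
})
```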
@@ -36,6 +36,8 @@ OM performs thresholding with fast local token estimation. Text uses `tokenx`, a
  
  **scope** (`'resource' | 'thread'`): Memory scope for observations. \`'thread'\` keeps observations per-thread. \`'resource'\` (experimental) shares observations across all threads for a resource, enabling cross-conversation memory. (Default: `'thread'`)
  
+ **activateAfterIdle** (`number | string`): Time before buffered observations or buffered reflections are forced to activate after inactivity, even if their token thresholds have not been reached yet. Accepts milliseconds or duration strings like \`300\_000\`, \`"5m"\`, or \`"1hr"\`. When the gap between the current time and the last assistant message part timestamp exceeds this value, buffered observational memory activates before the next prompt. Useful for aligning with prompt cache TTLs.
+
  **shareTokenBudget** (`boolean`): Share the token budget between messages and observations. When enabled, the total budget is \`observation.messageTokens + reflection.observationTokens\`. Messages can use more space when observations are small, and vice versa. This maximizes context usage through flexible allocation. \`shareTokenBudget\` is not yet compatible with async buffering. You must set \`observation: { bufferTokens: false }\` when using this option (this is a temporary limitation). (Default: `false`)
  
  **retrieval** (`boolean | { vector?: boolean; scope?: 'thread' | 'resource' }`): \*\*Experimental.\*\* Enable retrieval-mode observation groups as durable pointers to raw message history. \`true\` enables cross-thread browsing by default. \`{ vector: true }\` also enables semantic search using Memory's vector store and embedder. \`{ scope: 'thread' }\` restricts the recall tool to the current thread only. Default scope is \`'resource'\`. (Default: `false`)
@@ -1,5 +1,7 @@
  # CloudExporter
  
+ **Added in:** `@mastra/observability@1.8.0`
+
  Sends tracing spans, logs, metrics, scores, and feedback to the Mastra platform for online visualization and monitoring.
  
  ## Constructor
@@ -56,9 +58,9 @@ Extends `BaseExporterConfig`, which includes:
  
  The exporter reads these environment variables if not provided in config:
  
- - `MASTRA_CLOUD_ACCESS_TOKEN` - Authentication token. Project-scoped tokens work with the default `/ai/{signal}/publish` routes. Organization API keys require `projectId` or `MASTRA_PROJECT_ID`.
+ - `MASTRA_CLOUD_ACCESS_TOKEN` - Authentication token for `CloudExporter` requests
  - `MASTRA_PROJECT_ID` - Project ID to use when deriving project-scoped collector routes such as `/projects/:projectId/ai/spans/publish`
- - `MASTRA_CLOUD_TRACES_ENDPOINT` - Traces endpoint override. Pass either a base origin or a full traces publish URL. Defaults to `https://api.mastra.ai`
+ - `MASTRA_CLOUD_TRACES_ENDPOINT` - Traces endpoint override. Pass either a base origin or a full traces publish URL. Defaults to `https://observability.mastra.ai` in `@mastra/observability@1.9.2` and later
  
  ## Properties
  
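Since the reference says these environment variables are read only "if not provided in config", the same values can presumably be passed to the constructor directly. A sketch of that, assuming the `accessToken` and `projectId` options shown in the config removed earlier in this diff remain supported:

```typescript
// Sketch: config options take precedence; env vars fill in anything omitted.
new CloudExporter({
  accessToken: process.env.MASTRA_CLOUD_ACCESS_TOKEN,
  projectId: process.env.MASTRA_PROJECT_ID,
  endpoint: 'https://collector.example.com', // optional collector override
})
```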
package/CHANGELOG.md CHANGED
@@ -1,5 +1,21 @@
  # @mastra/mcp-docs-server
  
+ ## 1.1.26-alpha.5
+
+ ### Patch Changes
+
+ - Updated dependencies [[`fdd54cf`](https://github.com/mastra-ai/mastra/commit/fdd54cf612a9af876e9fdd85e534454f6e7dd518), [`30456b6`](https://github.com/mastra-ai/mastra/commit/30456b6b08c8fd17e109dd093b73d93b65e83bc5), [`9d11a8c`](https://github.com/mastra-ai/mastra/commit/9d11a8c1c8924eb975a245a5884d40ca1b7e0491), [`d246696`](https://github.com/mastra-ai/mastra/commit/d246696139a3144a5b21b042d41c532688e957e1), [`354f9ce`](https://github.com/mastra-ai/mastra/commit/354f9ce1ca6af2074b6a196a23f8ec30012dccca), [`e9837b5`](https://github.com/mastra-ai/mastra/commit/e9837b53699e18711b09e0ca010a4106376f2653)]:
+   - @mastra/core@1.26.0-alpha.3
+   - @mastra/mcp@1.5.1-alpha.1
+
+ ## 1.1.26-alpha.3
+
+ ### Patch Changes
+
+ - Updated dependencies [[`3d83d06`](https://github.com/mastra-ai/mastra/commit/3d83d06f776f00fb5f4163dddd32a030c5c20844), [`7e0e63e`](https://github.com/mastra-ai/mastra/commit/7e0e63e2e485e84442351f4c7a79a424c83539dc), [`9467ea8`](https://github.com/mastra-ai/mastra/commit/9467ea87695749a53dfc041576410ebf9ee7bb67), [`7338d94`](https://github.com/mastra-ai/mastra/commit/7338d949380cf68b095342e8e42610dc51d557c1), [`c65aec3`](https://github.com/mastra-ai/mastra/commit/c65aec356cc037ee7c4b30ccea946807d4c4f443)]:
+   - @mastra/core@1.26.0-alpha.2
+   - @mastra/mcp@1.5.1-alpha.1
+
  ## 1.1.26-alpha.2
  
  ### Patch Changes
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@mastra/mcp-docs-server",
-   "version": "1.1.26-alpha.2",
+   "version": "1.1.26-alpha.6",
    "description": "MCP server for accessing Mastra.ai documentation, changelogs, and news.",
    "type": "module",
    "main": "dist/index.js",
@@ -29,8 +29,8 @@
    "jsdom": "^26.1.0",
    "local-pkg": "^1.1.2",
    "zod": "^4.3.6",
-   "@mastra/core": "1.25.1-alpha.1",
-   "@mastra/mcp": "^1.5.1-alpha.0"
+   "@mastra/core": "1.26.0-alpha.3",
+   "@mastra/mcp": "^1.5.1-alpha.1"
  },
  "devDependencies": {
    "@hono/node-server": "^1.19.11",
@@ -47,8 +47,8 @@
    "typescript": "^5.9.3",
    "vitest": "4.0.18",
    "@internal/lint": "0.0.83",
-   "@internal/types-builder": "0.0.58",
-   "@mastra/core": "1.25.1-alpha.1"
+   "@mastra/core": "1.26.0-alpha.3",
+   "@internal/types-builder": "0.0.58"
  },
  "homepage": "https://mastra.ai",
  "repository": {