@mastra/mcp-docs-server 1.1.20-alpha.0 → 1.1.20-alpha.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,35 +1,61 @@
  # Observability overview
 
- Mastra provides observability features for AI applications. Monitor LLM operations, trace agent decisions, and debug complex workflows with tools that understand AI-specific patterns.
+ Mastra's observability system gives you visibility into every agent run, workflow step, tool call, and model interaction. It captures three complementary signals that work together to help you understand what your application is doing and why.
 
- ## Key features
+ - [**Tracing**](https://mastra.ai/docs/observability/tracing/overview): Records every operation as a hierarchical timeline of spans, capturing inputs, outputs, token usage, and timing.
+ - [**Logging**](https://mastra.ai/docs/observability/logging): Forwards structured log entries from your application and Mastra internals to observability storage, correlated to traces automatically.
+ - [**Metrics**](https://mastra.ai/docs/observability/metrics/overview): Extracts duration, token usage, and cost data from traces automatically, with no additional instrumentation required.
 
- ### Tracing
+ ## When to use observability
 
- Specialized tracing for AI operations that captures:
+ - Debug unexpected agent behavior by inspecting the full decision path, tool calls, and model responses.
+ - Monitor latency across agents, workflows, and tools to identify bottlenecks.
+ - Track token consumption and estimated cost over time to control spending.
+ - Diagnose workflow failures by tracing execution through each step.
+ - Compare agent performance before and after prompt or model changes.
 
- - **Model interactions**: Token usage, latency, prompts, and completions
- - **Agent execution**: Decision paths, tool calls, and memory operations
- - **Workflow steps**: Branching logic, parallel execution, and step outputs
- - **Automatic instrumentation**: Tracing with decorators
+ ## How the pieces fit together
 
- ### Logging
+ Tracing is the foundation. When observability is configured, every agent run, workflow execution, tool call, and model interaction produces a [span](https://opentelemetry.io/docs/concepts/signals/traces/#spans). Spans are organized into traces that show the full request lifecycle as a hierarchical timeline.
 
- All logger calls from your application and Mastra's internal components are automatically forwarded to observability storage when observability is configured. You can [control the log level and disable forwarding](https://mastra.ai/docs/observability/logging) independently from your console logger.
+ Metrics are derived from traces automatically. When a span ends, Mastra extracts duration, token counts, and cost estimates without any extra code. These metrics power the dashboards in [Studio](https://mastra.ai/docs/studio/observability).
 
- ## Storage requirements
+ Logs are correlated to traces automatically. Every `logger.info()`, `logger.warn()`, or `logger.error()` call within a traced context is tagged with the current trace and span IDs. You can navigate from a log entry directly to the trace that produced it.
 
- The `DefaultExporter` persists traces to your configured storage backend. Not all storage providers support observability—for the full list, see [Storage Provider Support](https://mastra.ai/docs/observability/tracing/exporters/default).
+ All three signals share correlation IDs (trace ID, span ID, entity type, entity name), so you can jump between a metric spike, the traces behind it, and the logs within those traces.
 
- For production environments with high traffic, we recommend using **ClickHouse** for the observability domain via [composite storage](https://mastra.ai/reference/storage/composite). See [Production Recommendations](https://mastra.ai/docs/observability/tracing/exporters/default) for details.
+ ## Get started
 
- ## Quickstart
+ Install `@mastra/observability` and a storage backend:
 
- Configure Observability in your Mastra instance:
+ **npm**:
+
+ ```bash
+ npm install @mastra/observability @mastra/libsql @mastra/duckdb
+ ```
+
+ **pnpm**:
+
+ ```bash
+ pnpm add @mastra/observability @mastra/libsql @mastra/duckdb
+ ```
+
+ **Yarn**:
+
+ ```bash
+ yarn add @mastra/observability @mastra/libsql @mastra/duckdb
+ ```
+
+ **Bun**:
+
+ ```bash
+ bun add @mastra/observability @mastra/libsql @mastra/duckdb
+ ```
+
+ Then configure observability in your Mastra instance. The following example uses composite storage to route observability data to DuckDB (which supports metrics aggregation) while keeping everything else in LibSQL:
 
  ```ts
  import { Mastra } from '@mastra/core/mastra'
- import { PinoLogger } from '@mastra/loggers'
  import { LibSQLStore } from '@mastra/libsql'
  import { DuckDBStore } from '@mastra/duckdb'
  import { MastraCompositeStore } from '@mastra/core/storage'
@@ -41,7 +67,6 @@ import {
  } from '@mastra/observability'
 
  export const mastra = new Mastra({
-   logger: new PinoLogger(),
    storage: new MastraCompositeStore({
      id: 'composite-storage',
      default: new LibSQLStore({
@@ -60,9 +85,6 @@ export const mastra = new Mastra({
        new DefaultExporter(), // Persists traces to storage for Mastra Studio
        new CloudExporter(), // Sends traces to Mastra Cloud (if MASTRA_CLOUD_ACCESS_TOKEN is set)
      ],
-     logging: {
-       level: 'info', // Minimum log level forwarded to storage (default: 'debug')
-     },
      spanOutputProcessors: [
        new SensitiveDataFilter(), // Redacts sensitive data like passwords, tokens, keys
      ],
@@ -72,14 +94,18 @@ export const mastra = new Mastra({
  })
  ```
 
- > **Serverless environments:** The `file:./mastra.db` storage URL uses the local filesystem, which doesn't work in serverless environments like Vercel, AWS Lambda, or Cloudflare Workers. For serverless deployments, use external storage. See the [Vercel deployment guide](https://mastra.ai/guides/deployment/vercel) for a complete example.
+ This enables tracing, log forwarding, and metrics. Mastra also supports external tracing providers like Langfuse, Datadog, and any OpenTelemetry-compatible platform. See [Tracing](https://mastra.ai/docs/observability/tracing/overview) for configuration details.
+
+ ## Storage
 
- With this basic setup, you will see Traces and Logs in both Studio and in Mastra Cloud.
+ Not all storage backends support every signal. Traces and logs work with most backends, but metrics require an OLAP-capable store like DuckDB (development) or ClickHouse (production). For the full compatibility list, see [storage provider support](https://mastra.ai/docs/observability/tracing/exporters/default).
 
- We also support various external tracing providers like MLflow, Langfuse, Braintrust, and any OpenTelemetry-compatible platform (Datadog, New Relic, SigNoz, etc.). See more about this in the [Tracing](https://mastra.ai/docs/observability/tracing/overview) documentation.
+ For production environments with high traffic, use composite storage to route the observability domain to a dedicated backend. See [production recommendations](https://mastra.ai/docs/observability/tracing/exporters/default) for details.
 
- ## What's next?
+ ## Next steps
 
- - **[Set up Tracing](https://mastra.ai/docs/observability/tracing/overview)**: Configure tracing for your application
- - **[Configure Logging](https://mastra.ai/docs/observability/logging)**: Add structured logging
- - **[API Reference](https://mastra.ai/reference/observability/tracing/instances)**: Detailed configuration options
+ - [Tracing](https://mastra.ai/docs/observability/tracing/overview)
+ - [Logging](https://mastra.ai/docs/observability/logging)
+ - [Metrics](https://mastra.ai/docs/observability/metrics/overview)
+ - [Mastra Studio](https://mastra.ai/docs/studio/observability)
+ - [Automatic metrics reference](https://mastra.ai/reference/observability/metrics/automatic-metrics)
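
The correlation model described in the observability overview above (shared trace and span IDs across logs, traces, and metrics) can be pictured with a small self-contained sketch. The record shapes below are hypothetical illustrations, not Mastra's actual storage schema:

```typescript
// Hypothetical record shapes for illustration only; Mastra's actual
// storage schema may differ.
interface LogEntry {
  message: string
  traceId: string
  spanId: string
}

interface Span {
  traceId: string
  spanId: string
  name: string
}

// Join log entries to the span that produced them via the shared
// correlation IDs (trace ID + span ID).
function logsForSpan(logs: LogEntry[], span: Span): LogEntry[] {
  return logs.filter(log => log.traceId === span.traceId && log.spanId === span.spanId)
}

const span: Span = { traceId: 't-1', spanId: 's-1', name: 'agent.run' }
const logs: LogEntry[] = [
  { message: 'tool call started', traceId: 't-1', spanId: 's-1' },
  { message: 'unrelated log', traceId: 't-2', spanId: 's-9' },
]

console.log(logsForSpan(logs, span).length) // 1
```

This is the same join that lets you jump from a metric spike to its traces and from a trace to its logs.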
@@ -4,7 +4,7 @@ The Mastra Client SDK provides a concise and type-safe interface for interacting
 
  ## Prerequisites
 
- To ensure smooth local development, make sure you have:
+ Before you start local development, make sure you have:
 
  - Node.js `v22.13.0` or later
  - TypeScript `v4.7` or higher (if using TypeScript)
@@ -54,15 +54,17 @@ export const mastraClient = new MastraClient({
 
  ## Core APIs
 
- The Mastra Client SDK exposes all resources served by the Mastra Server
+ The Mastra Client SDK exposes all resources served by the Mastra Server.
 
  - **[Agents](https://mastra.ai/reference/client-js/agents)**: Generate responses and stream conversations.
  - **[Memory](https://mastra.ai/reference/client-js/memory)**: Manage conversation threads and message history.
  - **[Tools](https://mastra.ai/reference/client-js/tools)**: Execute and manage tools.
  - **[Workflows](https://mastra.ai/reference/client-js/workflows)**: Trigger workflows and track their execution.
  - **[Vectors](https://mastra.ai/reference/client-js/vectors)**: Use vector embeddings for semantic search.
+ - **[Responses](https://mastra.ai/reference/client-js/responses)**: Call the OpenAI Responses API through Mastra agents. This API is currently experimental.
+ - **[Conversations](https://mastra.ai/reference/client-js/conversations)**: Work with OpenAI Responses API conversations and their stored item history. This API is currently experimental.
  - **[Logs](https://mastra.ai/reference/client-js/logs)**: View logs and debug system behavior.
- - **[Telemetry](https://mastra.ai/reference/client-js/telemetry)**: Monitor app performance and trace activity.
+ - **[Telemetry](https://mastra.ai/reference/client-js/telemetry)**: View app performance and trace activity.
 
  ## Generating responses
 
@@ -133,7 +135,7 @@ export const mastraClient = new MastraClient({
 
  ## Credentials and session cookies
 
- **Authenticate Mastra API calls with session cookies** when your UI and Mastra API are not on the same origin—different host, subdomain, or port (for example Mastra Studio on one port and a custom server on another). Add **`credentials: 'include'`** to `MastraClient` so each request carries the cookies the user already has after sign-in. Skip this and you will often get **`401`** responses from Mastra even though login succeeded in the browser.
+ **Authenticate Mastra API calls with session cookies** when your UI and Mastra API aren't on the same origin—different host, subdomain, or port (for example Mastra Studio on one port and a custom server on another). Add **`credentials: 'include'`** to `MastraClient` so each request carries the cookies the user already has after sign-in. Skip this and you will often get **`401`** responses from Mastra even though login succeeded in the browser.
 
  ```typescript
  import { MastraClient } from '@mastra/client-js'
@@ -146,7 +148,7 @@ export const mastraClient = new MastraClient({
 
  **Allow credentialed cross-origin requests on your server**—see [CORS: requests with credentials](https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/CORS#requests_with_credentials). You need a concrete `Access-Control-Allow-Origin` (not `*`) and `Access-Control-Allow-Credentials: true`, or the browser will block the call before it reaches Mastra.
 
- **Using `@mastra/react`?** Wrap your app with `MastraReactProvider`, set `baseUrl` and `apiPrefix` to match your server, and rely on the default `credentials: 'include'`. Change `credentials` only when you deliberately want `same-origin` or `omit` behavior.
+ **Using `@mastra/react`?** Wrap your app with `MastraReactProvider`, set `baseUrl` and `apiPrefix` to match your server, and rely on the default `credentials: 'include'`. Change `credentials` only when you want `same-origin` or `omit` behavior.
 
  ## Adding request cancelling
 
@@ -219,7 +221,7 @@ const handleClientTool = async () => {
  }
  ```
 
- ### Client tool's agent
+ ### Client tool agent
 
  This is a standard Mastra [agent](https://mastra.ai/docs/agents/overview) configured to return hex color codes, intended to work with the browser-based client tool defined above.
 
@@ -236,9 +238,9 @@ export const colorAgent = new Agent({
  })
  ```
 
- ## Server-side environments
+ ## Use MastraClient on the server
 
- You can also use `MastraClient` in server-side environments such as API routes, serverless functions or actions. The usage will broadly remain the same but you may need to recreate the response to your client:
+ You can also use `MastraClient` in server-side environments such as API routes, serverless functions, or actions. The usage remains the same, but you may need to recreate the response for your client:
 
  ```typescript
  export async function action() {
@@ -252,7 +254,7 @@ export async function action() {
 
  ## Best practices
 
- 1. **Error Handling**: Implement proper [error handling](https://mastra.ai/reference/client-js/error-handling) for development scenarios.
+ 1. **Error Handling**: Use [error handling](https://mastra.ai/reference/client-js/error-handling) for development scenarios.
 2. **Environment Variables**: Use environment variables for configuration.
 3. **Debugging**: Enable detailed [logging](https://mastra.ai/reference/client-js/logs) when needed.
- 4. **Performance**: Monitor application performance, [telemetry](https://mastra.ai/reference/client-js/telemetry) and traces.
+ 4. **Performance**: Track application performance, [telemetry](https://mastra.ai/reference/client-js/telemetry), and traces.
@@ -12,7 +12,7 @@ The server provides:
 
  - API endpoints for all registered agents and workflows
  - Custom API routes and middleware
- - Authentication with multiple providers
+ - Authentication across providers
  - Request context for dynamic configuration
  - Stream data redaction for secure responses
 
@@ -51,6 +51,18 @@ To explore the API interactively, visit the Swagger UI at <http://localhost:4111
 
  > **Note:** The OpenAPI and Swagger endpoints are disabled in production by default. To enable them, set [`server.build.openAPIDocs`](https://mastra.ai/reference/configuration) and [`server.build.swaggerUI`](https://mastra.ai/reference/configuration) to `true` respectively.
 
+ ## OpenAI Responses API
+
+ Mastra exposes OpenAI-compatible Responses and Conversations routes. These routes are agent-backed adapters over Mastra agents, memory, and storage, so requests run through the selected Mastra agent instead of acting as a raw provider proxy.
+
+ These APIs are currently experimental.
+
+ Use `agent_id` to select the Mastra agent that should handle the request. Initial requests target an agent directly, and stored follow-up turns can continue with `previous_response_id`. You can also pass `model` to override the agent's configured model for a single request. If you omit `model`, Mastra uses the model already configured on the agent.
+
+ The Responses routes support streaming, function calling (tools), stored continuations with `previous_response_id`, conversation threads through `conversation_id`, provider-specific passthrough with `providerOptions`, and JSON output through `text.format`.
+
+ For the full request and response contract, see the [Responses API reference](https://mastra.ai/reference/client-js/responses) and [Conversations API reference](https://mastra.ai/reference/client-js/conversations). For the complete list of HTTP routes, see [server routes](https://mastra.ai/reference/server/routes).
+
  ## Stream data redaction
 
  When streaming agent responses, the HTTP layer redacts system prompts, tool definitions, API keys, and similar data from each chunk before sending it to clients. This is enabled by default.
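
The redaction behavior described above can be pictured with a minimal sketch. This is illustrative only: the chunk shape and the key names are assumptions, not Mastra's actual implementation:

```typescript
// Illustrative sketch of per-chunk redaction; field names here are
// assumptions for the example, not Mastra's real chunk format.
type Chunk = Record<string, unknown>

const SENSITIVE_KEYS = new Set(['systemPrompt', 'apiKey', 'toolDefinitions'])

// Replace sensitive fields with a placeholder before a chunk is sent
// to the client; all other fields pass through unchanged.
function redactChunk(chunk: Chunk): Chunk {
  const out: Chunk = {}
  for (const [key, value] of Object.entries(chunk)) {
    out[key] = SENSITIVE_KEYS.has(key) ? '[REDACTED]' : value
  }
  return out
}

console.log(redactChunk({ text: 'hello', apiKey: 'sk-123' }))
// { text: 'hello', apiKey: '[REDACTED]' }
```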
@@ -4,6 +4,7 @@ Studio includes these observability views:
 
  - **Metrics** for aggregate performance data
  - **Traces** for individual request inspection
+ - **Logs** for browsing internal and application logs
 
  All require an [observability storage backend](#quickstart) to be configured.
 
@@ -21,6 +22,12 @@ When you run an agent or workflow, the Observability tab displays traces that hi
 
  Tracing filters out low-level framework details so your traces stay focused and readable. Visit the [tracing overview](https://mastra.ai/docs/observability/tracing/overview) for more details.
 
+ ## Logs
+
+ Browse internal Mastra logs forwarded to your observability storage. Logs provide full-text search (across message content, entity names, and trace IDs), date presets (last 24 hours to 30 days), and multi-select filters for level, entity type, and entity name. Selecting a log opens a detail panel showing the full message, structured data, and metadata. If the log is correlated with a trace, you can navigate directly to the trace and span timeline.
+
+ Log forwarding is enabled by default when you configure observability. See [logging](https://mastra.ai/docs/observability/logging) for level configuration, query examples, and customization details.
+
  ## Quickstart
 
  For detailed instructions, follow the [observability instructions](https://mastra.ai/docs/observability/overview). To get up and running quickly, add the `@mastra/observability` package to your project and configure it with [LibSQL](https://mastra.ai/reference/storage/libsql) and [DuckDB](https://mastra.ai/reference/vectors/duckdb) for a local development setup that supports both traces and metrics.
@@ -95,4 +102,5 @@ export const mastra = new Mastra({
 
  - [Observability overview](https://mastra.ai/docs/observability/overview)
  - [Metrics overview](https://mastra.ai/docs/observability/metrics/overview)
- - [Tracing overview](https://mastra.ai/docs/observability/tracing/overview)
+ - [Tracing overview](https://mastra.ai/docs/observability/tracing/overview)
+ - [Logging](https://mastra.ai/docs/observability/logging)
@@ -124,4 +124,4 @@ Mastra also supports HTTPS development through the [`--https`](https://mastra.ai
 
  - Learn how to [deploy Studio](https://mastra.ai/docs/studio/deployment) for production use.
  - Add [authentication](https://mastra.ai/docs/studio/auth) to control access to your deployed Studio.
- - Explore [Studio observability](https://mastra.ai/docs/studio/observability) to monitor agent performance and gain insights through metrics, logs, and traces.
+ - Explore [Studio observability](https://mastra.ai/docs/studio/observability) to monitor agent performance through metrics, traces, and logs.
@@ -0,0 +1,152 @@
+ # Web scraping with Firecrawl
+
+ Firecrawl is a web data API that turns websites into clean markdown or structured JSON. In this guide, you will wire Firecrawl into Mastra tools so your agents and workflows can search and scrape live web data on demand.
+
+ ## Prerequisites
+
+ - Node.js `v22.13.0` or later installed
+ - A Firecrawl API key (get one at <https://firecrawl.dev>)
+ - An API key from a supported [Model Provider](https://mastra.ai/models)
+ - An existing Mastra project (follow the [installation guide](https://mastra.ai/guides/getting-started/quickstart) to set up a new project)
+
+ ## Installation
+
+ Install the Firecrawl SDK:
+
+ **npm**:
+
+ ```bash
+ npm install @mendable/firecrawl-js
+ ```
+
+ **pnpm**:
+
+ ```bash
+ pnpm add @mendable/firecrawl-js
+ ```
+
+ **Yarn**:
+
+ ```bash
+ yarn add @mendable/firecrawl-js
+ ```
+
+ **Bun**:
+
+ ```bash
+ bun add @mendable/firecrawl-js
+ ```
+
+ ## Configure environment variables
+
+ Create a `.env` file in your project root:
+
+ ```bash
+ FIRECRAWL_API_KEY=fc-your-api-key
+ # Optional: FIRECRAWL_API_URL=http://localhost:3002
+ ```
+
+ ## Build the Firecrawl tools
+
+ Create a tool file that exposes Firecrawl search and scrape to Mastra.
+
+ 1. Create `src/mastra/tools/firecrawl.ts` and set up Firecrawl:
+
+ ```ts
+ import Firecrawl from '@mendable/firecrawl-js'
+ import { createTool } from '@mastra/core/tools'
+ import { z } from 'zod'
+
+ const firecrawl = new Firecrawl({ apiKey: process.env.FIRECRAWL_API_KEY! })
+
+ export const firecrawlSearch = createTool({
+   id: 'firecrawl-search',
+   description: 'Search the web and return top results.',
+   inputSchema: z.object({ query: z.string().min(1) }),
+   outputSchema: z.object({
+     results: z.array(
+       z.object({
+         title: z.string().nullable(),
+         url: z.string(),
+       }),
+     ),
+   }),
+   execute: async ({ query }) => {
+     const results = await firecrawl.search(query, { limit: 3 })
+     return {
+       results: (results.web ?? []).map(item => ({
+         title: item.title ?? null,
+         url: item.url,
+       })),
+     }
+   },
+ })
+
+ export const firecrawlScrape = createTool({
+   id: 'firecrawl-scrape',
+   description: 'Scrape a URL and return markdown content.',
+   inputSchema: z.object({ url: z.string().url() }),
+   outputSchema: z.object({ markdown: z.string() }),
+   execute: async ({ url }) => {
+     const result = await firecrawl.scrape(url, {
+       formats: ['markdown'],
+       onlyMainContent: true,
+     })
+     return { markdown: result.markdown ?? '' }
+   },
+ })
+ ```
+
+ 2. Create a new agent at `src/mastra/agents/web-agent.ts`:
+
+ ```ts
+ import { Agent } from '@mastra/core/agent'
+ import { firecrawlSearch, firecrawlScrape } from '../tools/firecrawl'
+
+ export const webAgent = new Agent({
+   id: 'web-agent',
+   name: 'Web Agent',
+   instructions: 'Use Firecrawl tools to search and scrape web pages, then summarize the results.',
+   model: 'openai/gpt-5.4',
+   tools: { firecrawlSearch, firecrawlScrape },
+ })
+ ```
+
+ 3. Register the newly created agent in `src/mastra/index.ts` on your Mastra instance:
+
+ ```ts
+ import { Mastra } from '@mastra/core'
+ import { webAgent } from './agents/web-agent'
+
+ export const mastra = new Mastra({
+   agents: { webAgent },
+ })
+ ```
+
+ ## Test in Studio
+
+ Run the dev server and open [Studio](https://mastra.ai/docs/studio/overview):
+
+ ```bash
+ mastra dev
+ ```
+
+ In Studio, open the **Web Agent** and try:
+
+ - "Find the latest Mastra changelog and summarize the last release."
+ - "Search for Firecrawl pricing and extract the plan tiers."
+
+ ## Self-hosted Firecrawl
+
+ If you run Firecrawl locally, set `FIRECRAWL_API_URL` or pass `apiUrl` in the client:
+
+ ```ts
+ const firecrawl = new Firecrawl({
+   apiKey: process.env.FIRECRAWL_API_KEY!,
+   apiUrl: process.env.FIRECRAWL_API_URL,
+ })
+ ```
+
+ ## Related
+
+ - [Firecrawl documentation](https://docs.firecrawl.dev)
@@ -1,6 +1,6 @@
  # ![OpenRouter logo](https://models.dev/logos/openrouter.svg)OpenRouter
 
- OpenRouter aggregates models from multiple providers with enhanced features like rate limiting and failover. Access 166 models through Mastra's model router.
+ OpenRouter aggregates models from multiple providers with enhanced features like rate limiting and failover. Access 167 models through Mastra's model router.
 
  Learn more in the [OpenRouter documentation](https://openrouter.ai/models).
 
@@ -199,4 +199,5 @@ ANTHROPIC_API_KEY=ant-...
  | `z-ai/glm-4.6:exacto` |
  | `z-ai/glm-4.7` |
  | `z-ai/glm-4.7-flash` |
- | `z-ai/glm-5` |
+ | `z-ai/glm-5` |
+ | `z-ai/glm-5-turbo` |
@@ -1,6 +1,6 @@
  # Model Providers
 
- Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to 3566 models from 94 providers through a single API.
+ Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to 3567 models from 94 providers through a single API.
 
  ## Features
 
@@ -1,6 +1,6 @@
  # Agent.getLLM()
 
- The `.getLLM()` method retrieves the language model instance configured for an agent, resolving it if it's a function. This method provides access to the underlying LLM that powers the agent's capabilities.
+ The `.getLLM()` method retrieves the language model instance configured for an agent, resolving it if it's a function. You can also pass a request-scoped `model` override without mutating the agent's configured model.
 
  ## Usage example
 
@@ -8,13 +8,19 @@ The `.getLLM()` method retrieves the language model instance configured for an a
  await agent.getLLM()
  ```
 
+ ```typescript
+ await agent.getLLM({
+   model: 'openai/gpt-5.4',
+ })
+ ```
+
  ## Parameters
 
- **options** (`{ requestContext?: RequestContext; model?: MastraLanguageModel | DynamicArgument<MastraLanguageModel> }`): Optional configuration object containing request context and optional model override. (Default: `{}`)
+ **options** (`{ requestContext?: RequestContext; model?: MastraModelConfig | DynamicArgument<MastraModelConfig> }`): Optional configuration object containing request context and an optional request-scoped model override. (Default: `{}`)
 
  **options.requestContext** (`RequestContext`): Request Context for dependency injection and contextual information.
 
- **options.model** (`MastraLanguageModel | DynamicArgument<MastraLanguageModel>`): Optional model override. If provided, this model will be used instead of the agent's configured model.
+ **options.model** (`MastraModelConfig | DynamicArgument<MastraModelConfig>`): Optional request-scoped model override. The agent's configured model is not mutated.
 
  ## Returns
 
@@ -0,0 +1,135 @@
+ # OpenAI Responses API Conversations
+
+ The OpenAI Responses API Conversations surface provides methods to create, retrieve, delete, and inspect thread-backed conversations in Mastra.
+
+ This API builds on the [OpenAI Responses API](https://mastra.ai/reference/client-js/responses). Stored Responses calls return `conversation_id`, and in Mastra that value is the raw memory `threadId`. Use `client.conversations` when you want to work with that thread directly.
+
+ This API is currently experimental.
+
+ ## Relationship to OpenAI Responses
+
+ Use the OpenAI Responses API for generation and continuation:
+
+ ```typescript
+ const response = await client.responses.create({
+   agent_id: 'support-agent',
+   input: 'Start a support thread',
+   store: true,
+ })
+
+ console.log(response.conversation_id)
+ ```
+
+ Use the Conversations API when you want to inspect or manage that stored thread:
+
+ ```typescript
+ const conversation = await client.conversations.retrieve(response.conversation_id!)
+ const items = await client.conversations.items.list(response.conversation_id!)
+ ```
+
+ ## Usage example
+
+ ```typescript
+ import { MastraClient } from '@mastra/client-js'
+
+ const client = new MastraClient({
+   baseUrl: 'http://localhost:4111',
+ })
+
+ const conversation = await client.conversations.create({
+   agent_id: 'support-agent',
+ })
+
+ console.log(conversation.id)
+ ```
+
+ ## Methods
+
+ ### Lifecycle
+
+ #### `create(params)`
+
+ Creates a new conversation thread for the selected agent.
+
+ ```typescript
+ const conversation = await client.conversations.create({
+   agent_id: 'support-agent',
+   title: 'Billing support',
+ })
+ ```
+
+ **Returns:** `Promise<Conversation>`.
+
+ #### `retrieve(conversationId, requestContext?)`
+
+ Retrieves a conversation by its thread ID.
+
+ ```typescript
+ const conversation = await client.conversations.retrieve('thread_123')
+
+ console.log(conversation.thread)
+ ```
+
+ **Returns:** `Promise<Conversation>`.
+
+ #### `delete(conversationId, requestContext?)`
+
+ Deletes a conversation by its thread ID.
+
+ ```typescript
+ const deleted = await client.conversations.delete('thread_123')
+
+ console.log(deleted.deleted)
+ ```
+
+ **Returns:** `Promise<ConversationDeleted>`.
+
+ ### Items
+
+ #### `items.list(conversationId, requestContext?)`
+
+ Lists the stored items for a conversation.
+
+ ```typescript
+ const items = await client.conversations.items.list('thread_123')
+
+ console.log(items.data)
+ ```
+
+ **Returns:** `Promise<ConversationItemsPage>`.
+
+ ## Response shape
+
+ `create()` and `retrieve()` return a conversation object with:
+
+ - `id`: The raw thread ID
+ - `object`: Always `'conversation'`
+ - `thread`: The stored thread record
+
+ `delete()` returns:
+
+ - `id`: The raw thread ID
+ - `object`: Always `'conversation.deleted'`
+ - `deleted`: Always `true`
+
+ `items.list()` returns:
+
+ - `object`: Always `'list'`
+ - `data`: Conversation items such as `message`, `function_call`, and `function_call_output`
+ - `first_id`: The first item ID in the page
+ - `last_id`: The last item ID in the page
+ - `has_more`: Whether more items exist beyond the current page
+
+ ## Parameters
+
+ **agent\_id** (`string`): Required. The registered Mastra agent that owns the conversation memory.
+
+ **conversation\_id** (`string`): Optional conversation ID to use as the raw thread ID.
+
+ **resource\_id** (`string`): Optional resource ID to associate with the conversation thread.
+
+ **title** (`string`): Optional thread title stored with the conversation.
+
+ **metadata** (`Record<string, unknown>`): Optional thread metadata stored with the conversation.
+
+ **requestContext** (`RequestContext | Record<string, any>`): Optional request context forwarded to the Mastra server.
@@ -50,6 +50,10 @@ export const mastraClient = new MastraClient({
 
  **getWorkflow(workflowId)** (`Workflow`): Retrieves a specific workflow instance by ID.
 
+ **responses** (`Responses`): Provides OpenAI-style Responses API helpers with `create()`, `retrieve()`, `stream()`, and `delete()`.
+
+ **conversations** (`Conversations`): Provides conversation helpers with `create()`, `retrieve()`, `delete()`, and `items.list()`.
+
  **getVector(vectorName)** (`MastraVector`): Returns a vector store instance by name.
 
  **listLogs(params)** (`Promise<LogEntry[]>`): Fetches system logs matching the provided filters.
@@ -0,0 +1,213 @@
1
+ # OpenAI Responses API
2
+
3
+ The OpenAI Responses API provides methods to create, retrieve, stream, and delete OpenAI-compatible responses through Mastra agents.
4
+
5
+ These routes are agent-backed adapters over Mastra agents, memory, and storage. Use `agent_id` to select the Mastra agent that should handle the request. You can pass `model` to override the agent's configured model for a single request; omit it to use the agent's default.
6
+
7
+ Stored responses also return `conversation_id`. In Mastra, this is the raw memory `threadId`.
8
+
9
+ This API is currently experimental.
10
+
11
+ ## Usage example
12
+
13
+ ```typescript
14
+ import { MastraClient } from '@mastra/client-js'
15
+
16
+ const client = new MastraClient({
17
+ baseUrl: 'http://localhost:4111',
18
+ })
19
+
20
+ const response = await client.responses.create({
21
+ agent_id: 'support-agent',
22
+ input: 'Summarize this ticket',
23
+ store: true,
24
+ })
25
+
26
+ console.log(response.output_text)
27
+ ```
28
+
29
+ ## Methods
30
+
31
+ ### Lifecycle
32
+
33
+ #### `create(params)`
34
+
35
+ Creates a response.
36
+
37
+ ```typescript
38
+ const response = await client.responses.create({
39
+ agent_id: 'support-agent',
40
+ input: 'Summarize this ticket',
41
+ })
42
+ ```
43
+
44
+ **Returns:** `Promise<ResponsesResponse>` when `stream` is omitted or `false`.
45
+
46
+ When `stream: true`, `create()` returns an async iterable of SSE-style event payloads:
47
+
48
+ ```typescript
49
+ const stream = await client.responses.create({
50
+ agent_id: 'support-agent',
51
+ input: 'Summarize this ticket',
52
+ stream: true,
53
+ })
54
+
55
+ for await (const event of stream) {
56
+ if (event.type === 'response.output_text.delta') {
57
+ process.stdout.write(event.delta)
58
+ }
59
+ }
60
+ ```
61
+
62
+ **Returns:** `Promise<ResponsesStream>` when `stream: true`.
63
+
64
+ #### `retrieve(responseId, requestContext?)`
65
+
66
+ Retrieves a stored response.
67
+
68
+ ```typescript
69
+ const response = await client.responses.retrieve('msg_123')
70
+ ```
71
+
72
+ **Returns:** `Promise<ResponsesResponse>`.
73
+
74
+ #### `delete(responseId, requestContext?)`
75
+
76
+ Deletes a stored response.
77
+
78
+ ```typescript
79
+ const deleted = await client.responses.delete('msg_123')
80
+ ```
81
+
82
+ **Returns:** `Promise<{ id: string; object: "response"; deleted: true }>`.
83
+
84
+ #### `stream(params)`
85
+
86
+ Creates a streaming response.
87
+
88
+ ```typescript
89
+ const stream = await client.responses.stream({
90
+ agent_id: 'support-agent',
91
+ input: 'Say hello',
92
+ })
93
+
94
+ for await (const event of stream) {
95
+ console.log(event.type)
96
+ }
97
+ ```
98
+
99
+ **Returns:** `Promise<ResponsesStream>`.
100
+
101
+ ## Stored responses and conversations
102
+
103
+ Stored responses include both `response.id` and `conversation_id`.
104
+
105
+ - `response.id` is the response ID. For stored agent-backed responses, this is the persisted assistant message ID.
106
+ - `conversation_id` is the raw Mastra thread ID.
107
+
108
+ Use `previous_response_id` when you want to continue from a previous stored response. Use `conversation_id` when you want to target a known thread directly.
109
+
110
+ ```typescript
111
+ const first = await client.responses.create({
112
+ agent_id: 'support-agent',
113
+ input: 'Start a support thread',
114
+ store: true,
115
+ })
116
+
117
+ const second = await client.responses.create({
118
+ agent_id: 'support-agent',
119
+ conversation_id: first.conversation_id!,
120
+ input: 'Add a follow-up to the same thread',
121
+ store: true,
122
+ })
123
+ ```
124
+
125
+ Use [`client.conversations`](https://mastra.ai/reference/client-js/conversations) when you want to create, retrieve, delete, or inspect the underlying OpenAI Responses API conversation directly.
126
+
127
+ ## Function calling (tools)
128
+
129
+ `response.tools` contains the configured function definitions available for the request.
130
+
131
+ If the model calls a function, that activity appears in `response.output` as `function_call` and `function_call_output` items alongside the final assistant `message`.
132
+
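As a sketch, tool activity can be extracted from `response.output` by filtering on item type and pairing calls with their outputs via `call_id`. The item shapes below follow OpenAI Responses API conventions; field names beyond `type` are assumptions about Mastra's output items:

```typescript
// Illustrative output item shapes; field names beyond `type` are assumed.
type OutputItem =
  | { type: 'message'; role: 'assistant'; content: Array<{ type: string; text?: string }> }
  | { type: 'function_call'; name: string; arguments: string; call_id: string }
  | { type: 'function_call_output'; call_id: string; output: string }

function toolCalls(output: OutputItem[]) {
  // Index each function_call_output by its call_id.
  const results = new Map<string, string>()
  for (const item of output) {
    if (item.type === 'function_call_output') results.set(item.call_id, item.output)
  }
  // Pair each function_call with its result, if one was produced.
  return output
    .filter(
      (item): item is Extract<OutputItem, { type: 'function_call' }> =>
        item.type === 'function_call',
    )
    .map((call) => ({
      name: call.name,
      arguments: call.arguments,
      result: results.get(call.call_id),
    }))
}
```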
133
+ ## Structured output
134
+
135
+ Use `text.format` when you want JSON output.
136
+
137
+ - `json_object` enables JSON mode.
138
+ - `json_schema` enables schema-constrained structured output.
139
+
140
+ Both formats return JSON in the assistant message content. Use `json_schema` when you need strict schema enforcement. Use `json_object` when you only need valid JSON output.
141
+
142
+ ```typescript
143
+ const response = await client.responses.create({
144
+ agent_id: 'support-agent',
145
+ input: 'Return a structured support ticket summary.',
146
+ text: {
147
+ format: {
148
+ type: 'json_schema',
149
+ name: 'ticket_summary',
150
+ schema: {
151
+ type: 'object',
152
+ properties: {
153
+ summary: { type: 'string' },
154
+ priority: { type: 'string' },
155
+ },
156
+ required: ['summary', 'priority'],
157
+ additionalProperties: false,
158
+ },
159
+ },
160
+ },
161
+ })
162
+ ```
163
+
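Because both formats return the JSON as text in the assistant message, the result still needs parsing. A minimal sketch for the `ticket_summary` schema above (the shape check is illustrative, not a full schema validator; a real application might use zod instead):

```typescript
// Target shape matching the ticket_summary schema in the example above.
interface TicketSummary {
  summary: string
  priority: string
}

// Parse the JSON text and apply a minimal runtime shape check.
function parseTicketSummary(outputText: string): TicketSummary {
  const parsed = JSON.parse(outputText) as Partial<TicketSummary>
  if (typeof parsed.summary !== 'string' || typeof parsed.priority !== 'string') {
    throw new Error('Response did not match the ticket_summary schema')
  }
  return parsed as TicketSummary
}
```

Call it on the convenience getter, for example `const ticket = parseTicketSummary(response.output_text)`.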
164
+ ## Provider-backed requests
165
+
166
+ Use `providerOptions` when you need provider-specific options that Mastra does not normalize at the Responses layer.
167
+
168
+ ```typescript
169
+ const response = await client.responses.create({
170
+ agent_id: 'support-agent',
171
+ input: 'Continue this exchange',
172
+ providerOptions: {
173
+ openai: {
174
+ previousResponseId: 'resp_123',
175
+ },
176
+ },
177
+ })
178
+ ```
179
+
180
+ ## Response shape
181
+
182
+ The returned response object includes:
183
+
184
+ - `id`: The response ID
185
+ - `output`: Output items such as the assistant `message`, `function_call`, and `function_call_output`
186
+ - `output_text`: Convenience getter that joins assistant text output
187
+ - `tools`: Configured tool definitions for the request
188
+ - `conversation_id`: The raw thread ID for stored responses
189
+ - `text`: The requested text output format, when provided
190
+
191
+ ## Parameters
192
+
193
+ **agent\_id** (`string`): Required on initial requests. Selects the Mastra agent that executes the request. Stored follow-up turns can omit it when continuing with \`previous\_response\_id\`.
194
+
195
+ **model** (`string`): Optional model override for this request, such as \`openai/gpt-5\`. If omitted, Mastra uses the model configured on the selected agent.
196
+
197
+ **input** (`string | Array<{ role: 'system' | 'developer' | 'user' | 'assistant'; content: string | Array<{ type: 'input_text' | 'text' | 'output_text'; text: string }> }>`): Required. Input text or message array for the response.
198
+
199
+ **instructions** (`string`): Optional instruction override for this request.
200
+
201
+ **text** (`{ format: { type: 'json_object' } | { type: 'json_schema'; name: string; schema: Record<string, unknown>; description?: string; strict?: boolean } }`): Optional text output format. Use \`json\_object\` for JSON mode or \`json\_schema\` for schema-constrained structured output.
202
+
203
+ **providerOptions** (`Record<string, Record<string, unknown> | undefined>`): Optional provider-specific options passed through to the underlying model call.
204
+
205
+ **stream** (`boolean`): When true, returns an async iterable of Responses API events.
206
+
207
+ **store** (`boolean`): When true, persists the response through the selected agent memory.
208
+
209
+ **conversation\_id** (`string`): Optional conversation identifier. In Mastra, this is the raw memory thread ID.
210
+
211
+ **previous\_response\_id** (`string`): Continues a stored response chain from a previous stored response.
212
+
213
+ **requestContext** (`RequestContext | Record<string, any>`): Optional request context forwarded to the Mastra server.
@@ -41,11 +41,13 @@ The Reference section provides documentation of Mastra's API, including paramete
41
41
  - [create-mastra](https://mastra.ai/reference/cli/create-mastra)
42
42
  - [mastra](https://mastra.ai/reference/cli/mastra)
43
43
  - [Agents API](https://mastra.ai/reference/client-js/agents)
44
+ - [Conversations API](https://mastra.ai/reference/client-js/conversations)
44
45
  - [Error Handling](https://mastra.ai/reference/client-js/error-handling)
45
46
  - [Logs API](https://mastra.ai/reference/client-js/logs)
46
47
  - [Mastra Client SDK](https://mastra.ai/reference/client-js/mastra-client)
47
48
  - [Memory API](https://mastra.ai/reference/client-js/memory)
48
49
  - [Observability API](https://mastra.ai/reference/client-js/observability)
50
+ - [Responses API](https://mastra.ai/reference/client-js/responses)
49
51
  - [Telemetry API](https://mastra.ai/reference/client-js/telemetry)
50
52
  - [Tools API](https://mastra.ai/reference/client-js/tools)
51
53
  - [Vectors API](https://mastra.ai/reference/client-js/vectors)
@@ -90,7 +90,7 @@ GET /api/agents/my-agent?versionId=abc123
90
90
  }
91
91
  ```
92
92
 
93
- ### Start-async request body
93
+ ### Request body for `/start-async`
94
94
 
95
95
  ```typescript
96
96
  {
@@ -248,6 +248,27 @@ GET /api/agents/my-agent?versionId=abc123
248
248
  | `POST` | `/api/mcp/:serverId` | MCP HTTP transport |
249
249
  | `GET` | `/api/mcp/:serverId/sse` | MCP SSE transport |
250
250
 
251
+ ## Responses API
252
+
253
+ | Method | Path | Description |
254
+ | -------- | ------------------------------- | ------------------------------------------------------------------- |
255
+ | `POST` | `/api/v1/responses` | Create a response through the OpenAI-compatible Responses API route |
256
+ | `GET` | `/api/v1/responses/:responseId` | Retrieve a stored response |
257
+ | `DELETE` | `/api/v1/responses/:responseId` | Delete a stored response |
258
+
259
+ For the full request and response contract, see the [Responses API reference](https://mastra.ai/reference/client-js/responses).
260
+
261
+ ## Conversations API
262
+
263
+ | Method | Path | Description |
264
+ | -------- | --------------------------------------------- | ------------------------------------ |
265
+ | `POST` | `/api/v1/conversations` | Create a conversation |
266
+ | `GET` | `/api/v1/conversations/:conversationId` | Retrieve a conversation |
267
+ | `DELETE` | `/api/v1/conversations/:conversationId` | Delete a conversation |
268
+ | `GET` | `/api/v1/conversations/:conversationId/items` | List stored items for a conversation |
269
+
270
+ For the full request and response contract, see the [Conversations API reference](https://mastra.ai/reference/client-js/conversations).
271
+
251
272
  ## Logs
252
273
 
253
274
  | Method | Path | Description |
package/CHANGELOG.md CHANGED
@@ -1,5 +1,19 @@
1
1
  # @mastra/mcp-docs-server
2
2
 
3
+ ## 1.1.20-alpha.2
4
+
5
+ ### Patch Changes
6
+
7
+ - Updated dependencies [[`13f4327`](https://github.com/mastra-ai/mastra/commit/13f4327f052faebe199cefbe906d33bf90238767)]:
8
+ - @mastra/core@1.21.0-alpha.1
9
+
10
+ ## 1.1.20-alpha.1
11
+
12
+ ### Patch Changes
13
+
14
+ - Updated dependencies [[`9a43b47`](https://github.com/mastra-ai/mastra/commit/9a43b476465e86c9aca381c2831066b5c33c999a)]:
15
+ - @mastra/core@1.21.0-alpha.0
16
+
3
17
  ## 1.1.19
4
18
 
5
19
  ### Patch Changes
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "@mastra/mcp-docs-server",
3
- "version": "1.1.20-alpha.0",
3
+ "version": "1.1.20-alpha.3",
4
4
  "description": "MCP server for accessing Mastra.ai documentation, changelogs, and news.",
5
5
  "type": "module",
6
6
  "main": "dist/index.js",
@@ -29,7 +29,7 @@
29
29
  "jsdom": "^26.1.0",
30
30
  "local-pkg": "^1.1.2",
31
31
  "zod": "^4.3.6",
32
- "@mastra/core": "1.20.0",
32
+ "@mastra/core": "1.21.0-alpha.1",
33
33
  "@mastra/mcp": "^1.4.1"
34
34
  },
35
35
  "devDependencies": {
@@ -46,9 +46,9 @@
46
46
  "tsx": "^4.21.0",
47
47
  "typescript": "^5.9.3",
48
48
  "vitest": "4.0.18",
49
- "@internal/types-builder": "0.0.52",
50
49
  "@internal/lint": "0.0.77",
51
- "@mastra/core": "1.20.0"
50
+ "@internal/types-builder": "0.0.52",
51
+ "@mastra/core": "1.21.0-alpha.1"
52
52
  },
53
53
  "homepage": "https://mastra.ai",
54
54
  "repository": {