@mastra/mcp-docs-server 1.1.20-alpha.0 → 1.1.20-alpha.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.docs/docs/server/mastra-client.md +12 -10
- package/.docs/docs/server/mastra-server.md +13 -1
- package/.docs/guides/guide/firecrawl.md +152 -0
- package/.docs/models/gateways/openrouter.md +3 -2
- package/.docs/models/index.md +1 -1
- package/.docs/reference/agents/getLLM.md +9 -3
- package/.docs/reference/client-js/conversations.md +135 -0
- package/.docs/reference/client-js/mastra-client.md +4 -0
- package/.docs/reference/client-js/responses.md +213 -0
- package/.docs/reference/index.md +2 -0
- package/.docs/reference/server/routes.md +22 -1
- package/CHANGELOG.md +7 -0
- package/package.json +5 -5
package/.docs/docs/server/mastra-client.md
CHANGED

@@ -4,7 +4,7 @@ The Mastra Client SDK provides a concise and type-safe interface for interacting
 
 ## Prerequisites
 
-
+Before you start local development, have:
 
 - Node.js `v22.13.0` or later
 - TypeScript `v4.7` or higher (if using TypeScript)
@@ -54,15 +54,17 @@ export const mastraClient = new MastraClient({
 
 ## Core APIs
 
-The Mastra Client SDK exposes all resources served by the Mastra Server
+The Mastra Client SDK exposes all resources served by the Mastra Server.
 
 - **[Agents](https://mastra.ai/reference/client-js/agents)**: Generate responses and stream conversations.
 - **[Memory](https://mastra.ai/reference/client-js/memory)**: Manage conversation threads and message history.
 - **[Tools](https://mastra.ai/reference/client-js/tools)**: Execute and manage tools.
 - **[Workflows](https://mastra.ai/reference/client-js/workflows)**: Trigger workflows and track their execution.
 - **[Vectors](https://mastra.ai/reference/client-js/vectors)**: Use vector embeddings for semantic search.
+- **[Responses](https://mastra.ai/reference/client-js/responses)**: Call the OpenAI Responses API through Mastra agents. This API is currently experimental.
+- **[Conversations](https://mastra.ai/reference/client-js/conversations)**: Work with OpenAI Responses API conversations and their stored item history. This API is currently experimental.
 - **[Logs](https://mastra.ai/reference/client-js/logs)**: View logs and debug system behavior.
-- **[Telemetry](https://mastra.ai/reference/client-js/telemetry)**:
+- **[Telemetry](https://mastra.ai/reference/client-js/telemetry)**: View app performance and trace activity.
 
 ## Generating responses
 
@@ -133,7 +135,7 @@ export const mastraClient = new MastraClient({
 
 ## Credentials and session cookies
 
-**Authenticate Mastra API calls with session cookies** when your UI and Mastra API
+**Authenticate Mastra API calls with session cookies** when your UI and Mastra API aren't on the same origin—different host, subdomain, or port (for example Mastra Studio on one port and a custom server on another). Add **`credentials: 'include'`** to `MastraClient` so each request carries the cookies the user already has after sign-in. Skip this and you will often get **`401`** responses from Mastra even though login succeeded in the browser.
 
 ```typescript
 import { MastraClient } from '@mastra/client-js'
@@ -146,7 +148,7 @@ export const mastraClient = new MastraClient({
 
 **Allow credentialed cross-origin requests on your server**—see [CORS: requests with credentials](https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/CORS#requests_with_credentials). You need a concrete `Access-Control-Allow-Origin` (not `*`) and `Access-Control-Allow-Credentials: true`, or the browser will block the call before it reaches Mastra.
 
-**Using `@mastra/react`?** Wrap your app with `MastraReactProvider`, set `baseUrl` and `apiPrefix` to match your server, and rely on the default `credentials: 'include'`. Change `credentials` only when you
+**Using `@mastra/react`?** Wrap your app with `MastraReactProvider`, set `baseUrl` and `apiPrefix` to match your server, and rely on the default `credentials: 'include'`. Change `credentials` only when you want `same-origin` or `omit` behavior.
 
 ## Adding request cancelling
 
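The credentialed-CORS rule in this hunk can be sketched as pure header logic. This is an illustration only (the allowlist, function name, and origins are assumptions); in practice the headers come from your server framework's CORS configuration:

```typescript
// With credentials, browsers reject a wildcard Access-Control-Allow-Origin,
// so the server must echo back a concrete, allowlisted origin.
const allowedOrigins = new Set(['http://localhost:3000', 'https://app.example.com'])

function corsHeadersFor(requestOrigin: string): Record<string, string> | null {
  // Unknown origin: return no CORS headers and let the browser block the call.
  if (!allowedOrigins.has(requestOrigin)) return null
  return {
    'Access-Control-Allow-Origin': requestOrigin, // concrete origin, never '*'
    'Access-Control-Allow-Credentials': 'true', // required for cookie-carrying requests
    'Vary': 'Origin', // caches must not reuse one origin's response for another
  }
}
```

Pairing headers like these with `credentials: 'include'` on the client is what lets the session cookie reach Mastra.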
@@ -219,7 +221,7 @@ const handleClientTool = async () => {
 }
 ```
 
-### Client tool
+### Client tool agent
 
 This is a standard Mastra [agent](https://mastra.ai/docs/agents/overview) configured to return hex color codes, intended to work with the browser-based client tool defined above.
 
@@ -236,9 +238,9 @@ export const colorAgent = new Agent({
 })
 ```
 
-##
+## Use MastraClient on the server
 
-You can also use `MastraClient` in server-side environments such as API routes, serverless functions or actions. The usage
+You can also use `MastraClient` in server-side environments such as API routes, serverless functions, or actions. The usage remains the same, but you may need to recreate the response for your client:
 
 ```typescript
 export async function action() {
@@ -252,7 +254,7 @@ export async function action() {
 
 ## Best practices
 
-1. **Error Handling**:
+1. **Error Handling**: Use [error handling](https://mastra.ai/reference/client-js/error-handling) for development scenarios.
 2. **Environment Variables**: Use environment variables for configuration.
 3. **Debugging**: Enable detailed [logging](https://mastra.ai/reference/client-js/logs) when needed.
-4. **Performance**:
+4. **Performance**: Track application performance, [telemetry](https://mastra.ai/reference/client-js/telemetry), and traces.
package/.docs/docs/server/mastra-server.md
CHANGED

@@ -12,7 +12,7 @@ The server provides:
 
 - API endpoints for all registered agents and workflows
 - Custom API routes and middleware
-- Authentication
+- Authentication across providers
 - Request context for dynamic configuration
 - Stream data redaction for secure responses
 
@@ -51,6 +51,18 @@ To explore the API interactively, visit the Swagger UI at <http://localhost:4111
 
 > **Note:** The OpenAPI and Swagger endpoints are disabled in production by default. To enable them, set [`server.build.openAPIDocs`](https://mastra.ai/reference/configuration) and [`server.build.swaggerUI`](https://mastra.ai/reference/configuration) to `true` respectively.
 
+## OpenAI Responses API
+
+Mastra exposes OpenAI-compatible Responses and Conversations routes. These routes are agent-backed adapters over Mastra agents, memory, and storage, so requests run through the selected Mastra agent instead of acting as a raw provider proxy.
+
+These APIs are currently experimental.
+
+Use `agent_id` to select the Mastra agent that should handle the request. Initial requests target an agent directly, and stored follow-up turns can continue with `previous_response_id`. You can also pass `model` to override the agent's configured model for a single request. If you omit `model`, Mastra uses the model already configured on the agent.
+
+The Responses routes support streaming, function calling (tools), stored continuations with `previous_response_id`, conversation threads through `conversation_id`, provider-specific passthrough with `providerOptions`, and JSON output through `text.format`.
+
+For the full request and response contract, see the [Responses API reference](https://mastra.ai/reference/client-js/responses) and [Conversations API reference](https://mastra.ai/reference/client-js/conversations). For the complete list of HTTP routes, see [server routes](https://mastra.ai/reference/server/routes).
+
 ## Stream data redaction
 
 When streaming agent responses, the HTTP layer redacts system prompts, tool definitions, API keys, and similar data from each chunk before sending it to clients. This is enabled by default.
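Because the Responses routes stream via server-sent events, a client that cannot use the SDK can decode the frames itself. A minimal sketch, assuming the common `data: <json>` framing with a `data: [DONE]` terminator (the event names follow the Responses event types; the exact wire details are an assumption here, not taken from this diff):

```typescript
// Each SSE frame carries one JSON event; '[DONE]' marks the end of the stream.
type ResponsesEvent = { type: string; delta?: string }

function parseSseFrames(raw: string): ResponsesEvent[] {
  const events: ResponsesEvent[] = []
  for (const line of raw.split('\n')) {
    if (!line.startsWith('data: ')) continue // skip blank lines and comments
    const payload = line.slice('data: '.length)
    if (payload === '[DONE]') break // terminator frame, not a JSON event
    events.push(JSON.parse(payload) as ResponsesEvent)
  }
  return events
}
```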
package/.docs/guides/guide/firecrawl.md
CHANGED

@@ -0,0 +1,152 @@
+# Web scraping with Firecrawl
+
+Firecrawl is a web data API that turns websites into clean markdown or structured JSON. In this guide, you will wire Firecrawl into Mastra tools so your agents and workflows can search and scrape live web data on demand.
+
+## Prerequisites
+
+- Node.js `v22.13.0` or later installed
+- A Firecrawl API key (get one at <https://firecrawl.dev>)
+- An API key from a supported [Model Provider](https://mastra.ai/models)
+- An existing Mastra project (follow the [installation guide](https://mastra.ai/guides/getting-started/quickstart) to set up a new project)
+
+## Installation
+
+Install the Firecrawl SDK:
+
+**npm**:
+
+```bash
+npm install @mendable/firecrawl-js
+```
+
+**pnpm**:
+
+```bash
+pnpm add @mendable/firecrawl-js
+```
+
+**Yarn**:
+
+```bash
+yarn add @mendable/firecrawl-js
+```
+
+**Bun**:
+
+```bash
+bun add @mendable/firecrawl-js
+```
+
+## Configure environment variables
+
+Create a `.env` file in your project root:
+
+```bash
+FIRECRAWL_API_KEY=fc-your-api-key
+# Optional: FIRECRAWL_API_URL=http://localhost:3002
+```
+
+## Build the Firecrawl tools
+
+Create a tool file that exposes Firecrawl search and scrape to Mastra.
+
+1. Create `src/mastra/tools/firecrawl.ts` and set up Firecrawl:
+
+```ts
+import Firecrawl from '@mendable/firecrawl-js'
+import { createTool } from '@mastra/core/tools'
+import { z } from 'zod'
+
+const firecrawl = new Firecrawl({ apiKey: process.env.FIRECRAWL_API_KEY! })
+
+export const firecrawlSearch = createTool({
+  id: 'firecrawl-search',
+  description: 'Search the web and return top results.',
+  inputSchema: z.object({ query: z.string().min(1) }),
+  outputSchema: z.object({
+    results: z.array(
+      z.object({
+        title: z.string().nullable(),
+        url: z.string(),
+      }),
+    ),
+  }),
+  execute: async ({ query }) => {
+    const results = await firecrawl.search(query, { limit: 3 })
+    return {
+      results: (results.web ?? []).map(item => ({
+        title: item.title ?? null,
+        url: item.url,
+      })),
+    }
+  },
+})
+
+export const firecrawlScrape = createTool({
+  id: 'firecrawl-scrape',
+  description: 'Scrape a URL and return markdown content.',
+  inputSchema: z.object({ url: z.string().url() }),
+  outputSchema: z.object({ markdown: z.string() }),
+  execute: async ({ url }) => {
+    const result = await firecrawl.scrape(url, {
+      formats: ['markdown'],
+      onlyMainContent: true,
+    })
+    return { markdown: result.markdown ?? '' }
+  },
+})
+```
+
+2. Create a new agent at `src/mastra/agents/web-agent.ts`:
+
+```ts
+import { Agent } from '@mastra/core/agent'
+import { firecrawlSearch, firecrawlScrape } from '../tools/firecrawl'
+
+export const webAgent = new Agent({
+  id: 'web-agent',
+  name: 'Web Agent',
+  instructions: 'Use Firecrawl tools to search and scrape web pages, then summarize the results.',
+  model: 'openai/gpt-5.4',
+  tools: { firecrawlSearch, firecrawlScrape },
+})
+```
+
+3. Register the newly created agent in `src/mastra/index.ts` on your Mastra instance:
+
+```ts
+import { Mastra } from '@mastra/core'
+import { webAgent } from './agents/web-agent'
+
+export const mastra = new Mastra({
+  agents: { webAgent },
+})
+```
+
+## Test in Studio
+
+Run the dev server and open [Studio](https://mastra.ai/docs/studio/overview):
+
+```bash
+mastra dev
+```
+
+In Studio, open the **Web Agent** and try:
+
+- "Find the latest Mastra changelog and summarize the last release."
+- "Search for Firecrawl pricing and extract the plan tiers."
+
+## Self-hosted Firecrawl
+
+If you run Firecrawl locally, set `FIRECRAWL_API_URL` or pass `apiUrl` in the client:
+
+```ts
+const firecrawl = new Firecrawl({
+  apiKey: process.env.FIRECRAWL_API_KEY!,
+  apiUrl: process.env.FIRECRAWL_API_URL,
+})
+```
+
+## Related
+
+- [Firecrawl documentation](https://docs.firecrawl.dev)
package/.docs/models/gateways/openrouter.md
CHANGED

@@ -1,6 +1,6 @@
 # OpenRouter
 
-OpenRouter aggregates models from multiple providers with enhanced features like rate limiting and failover. Access
+OpenRouter aggregates models from multiple providers with enhanced features like rate limiting and failover. Access 167 models through Mastra's model router.
 
 Learn more in the [OpenRouter documentation](https://openrouter.ai/models).
 

@@ -199,4 +199,5 @@ ANTHROPIC_API_KEY=ant-...
 | `z-ai/glm-4.6:exacto` |
 | `z-ai/glm-4.7` |
 | `z-ai/glm-4.7-flash` |
-| `z-ai/glm-5` |
+| `z-ai/glm-5` |
+| `z-ai/glm-5-turbo` |
package/.docs/models/index.md
CHANGED

@@ -1,6 +1,6 @@
 # Model Providers
 
-Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to
+Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to 3567 models from 94 providers through a single API.
 
 ## Features
 
package/.docs/reference/agents/getLLM.md
CHANGED

@@ -1,6 +1,6 @@
 # Agent.getLLM()
 
-The `.getLLM()` method retrieves the language model instance configured for an agent, resolving it if it's a function.
+The `.getLLM()` method retrieves the language model instance configured for an agent, resolving it if it's a function. You can also pass a request-scoped `model` override without mutating the agent's configured model.
 
 ## Usage example
 

@@ -8,13 +8,19 @@ The `.getLLM()` method retrieves the language model instance configured for an a
 await agent.getLLM()
 ```
 
+```typescript
+await agent.getLLM({
+  model: 'openai/gpt-5.4',
+})
+```
+
 ## Parameters
 
-**options** (`{ requestContext?: RequestContext; model?:
+**options** (`{ requestContext?: RequestContext; model?: MastraModelConfig | DynamicArgument<MastraModelConfig> }`): Optional configuration object containing request context and an optional request-scoped model override. (Default: `{}`)
 
 **options.requestContext** (`RequestContext`): Request Context for dependency injection and contextual information.
 
-**options.model** (`
+**options.model** (`MastraModelConfig | DynamicArgument<MastraModelConfig>`): Optional request-scoped model override. The agent's configured model is not mutated.
 
 ## Returns
 
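The override semantics in this hunk reduce to a small resolution rule: a request-scoped `model` wins when present, and a dynamic argument may be a plain value or a function producing one. A sketch with simplified types (the helper name and the string-only `ModelConfig` are assumptions; `getLLM()` itself does the real resolution):

```typescript
type ModelConfig = string
type DynamicArgument<T> = T | (() => T)

function resolveModel(
  configured: DynamicArgument<ModelConfig>,
  override?: DynamicArgument<ModelConfig>,
): ModelConfig {
  // The request-scoped override wins; the agent's configured model is untouched.
  const chosen = override ?? configured
  return typeof chosen === 'function' ? chosen() : chosen
}
```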
package/.docs/reference/client-js/conversations.md
CHANGED

@@ -0,0 +1,135 @@
+# OpenAI Responses API Conversations
+
+The OpenAI Responses API Conversations surface provides methods to create, retrieve, delete, and inspect thread-backed conversations in Mastra.
+
+This API follows up on the [OpenAI Responses API](https://mastra.ai/reference/client-js/responses). Stored Responses calls return `conversation_id`, and in Mastra that value is the raw memory `threadId`. Use `client.conversations` when you want to work with that thread directly.
+
+This API is currently experimental.
+
+## Relationship to OpenAI Responses
+
+Use the OpenAI Responses API for generation and continuation:
+
+```typescript
+const response = await client.responses.create({
+  agent_id: 'support-agent',
+  input: 'Start a support thread',
+  store: true,
+})
+
+console.log(response.conversation_id)
+```
+
+Use the Conversations API when you want to inspect or manage that stored thread:
+
+```typescript
+const conversation = await client.conversations.retrieve(response.conversation_id!)
+const items = await client.conversations.items.list(response.conversation_id!)
+```
+
+## Usage example
+
+```typescript
+import { MastraClient } from '@mastra/client-js'
+
+const client = new MastraClient({
+  baseUrl: 'http://localhost:4111',
+})
+
+const conversation = await client.conversations.create({
+  agent_id: 'support-agent',
+})
+
+console.log(conversation.id)
+```
+
+## Methods
+
+### Lifecycle
+
+#### `create(params)`
+
+Creates a new conversation thread for the selected agent.
+
+```typescript
+const conversation = await client.conversations.create({
+  agent_id: 'support-agent',
+  title: 'Billing support',
+})
+```
+
+**Returns:** `Promise<Conversation>`.
+
+#### `retrieve(conversationId, requestContext?)`
+
+Retrieves a conversation by its thread ID.
+
+```typescript
+const conversation = await client.conversations.retrieve('thread_123')
+
+console.log(conversation.thread)
+```
+
+**Returns:** `Promise<Conversation>`.
+
+#### `delete(conversationId, requestContext?)`
+
+Deletes a conversation by its thread ID.
+
+```typescript
+const deleted = await client.conversations.delete('thread_123')
+
+console.log(deleted.deleted)
+```
+
+**Returns:** `Promise<ConversationDeleted>`.
+
+### Items
+
+#### `items.list(conversationId, requestContext?)`
+
+Lists the stored items for a conversation.
+
+```typescript
+const items = await client.conversations.items.list('thread_123')
+
+console.log(items.data)
+```
+
+**Returns:** `Promise<ConversationItemsPage>`.
+
+## Response shape
+
+`create()` and `retrieve()` return a conversation object with:
+
+- `id`: The raw thread ID
+- `object`: Always `'conversation'`
+- `thread`: The stored thread record
+
+`delete()` returns:
+
+- `id`: The raw thread ID
+- `object`: Always `'conversation.deleted'`
+- `deleted`: Always `true`
+
+`items.list()` returns:
+
+- `object`: Always `'list'`
+- `data`: Conversation items such as `message`, `function_call`, and `function_call_output`
+- `first_id`: The first item ID in the page
+- `last_id`: The last item ID in the page
+- `has_more`: Whether more items exist beyond the current page
+
+## Parameters
+
+**agent_id** (`string`): Required. The registered Mastra agent that owns the conversation memory.
+
+**conversation_id** (`string`): Optional conversation ID to use as the raw thread ID.
+
+**resource_id** (`string`): Optional resource ID to associate with the conversation thread.
+
+**title** (`string`): Optional thread title stored with the conversation.
+
+**metadata** (`Record<string, unknown>`): Optional thread metadata stored with the conversation.
+
+**requestContext** (`RequestContext | Record<string, any>`): Optional request context forwarded to the Mastra server.
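The page fields documented above (`data`, `last_id`, `has_more`) are enough to walk a long conversation. A sketch with a stand-in fetcher (the `after` cursor parameter and the helper are illustrative assumptions, not part of the documented client surface):

```typescript
type ConversationItem = { id: string; type: string }
type ConversationItemsPage = {
  object: 'list'
  data: ConversationItem[]
  first_id: string
  last_id: string
  has_more: boolean
}

// listPage stands in for client.conversations.items.list(); `after` is a
// hypothetical cursor meaning "items after this ID".
async function collectAllItems(
  listPage: (after?: string) => Promise<ConversationItemsPage>,
): Promise<ConversationItem[]> {
  const all: ConversationItem[] = []
  let cursor: string | undefined
  for (;;) {
    const page = await listPage(cursor)
    all.push(...page.data)
    if (!page.has_more) return all
    cursor = page.last_id // resume after the last item of this page
  }
}
```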
package/.docs/reference/client-js/mastra-client.md
CHANGED

@@ -50,6 +50,10 @@ export const mastraClient = new MastraClient({
 
 **getWorkflow(workflowId)** (`Workflow`): Retrieves a specific workflow instance by ID.
 
+**responses** (`Responses`): Provides OpenAI-style Responses API helpers with `create()`, `retrieve()`, `stream()`, and `delete()`.
+
+**conversations** (`Conversations`): Provides conversation helpers with `create()`, `retrieve()`, `delete()`, and `items.list()`.
+
 **getVector(vectorName)** (`MastraVector`): Returns a vector store instance by name.
 
 **listLogs(params)** (`Promise<LogEntry[]>`): Fetches system logs matching the provided filters.
package/.docs/reference/client-js/responses.md
CHANGED

@@ -0,0 +1,213 @@
+# OpenAI Responses API
+
+The OpenAI Responses API provides methods to create, retrieve, stream, and delete OpenAI-compatible responses through Mastra agents.
+
+These routes are agent-backed adapters over Mastra agents, memory, and storage. Use `agent_id` to select the Mastra agent that should handle the request. You can pass `model` to override the agent's configured model for a single request, or omit it to use the model already configured on the agent.
+
+Stored responses also return `conversation_id`. In Mastra, this is the raw memory `threadId`.
+
+This API is currently experimental.
+
+## Usage example
+
+```typescript
+import { MastraClient } from '@mastra/client-js'
+
+const client = new MastraClient({
+  baseUrl: 'http://localhost:4111',
+})
+
+const response = await client.responses.create({
+  agent_id: 'support-agent',
+  input: 'Summarize this ticket',
+  store: true,
+})
+
+console.log(response.output_text)
+```
+
+## Methods
+
+### Lifecycle
+
+#### `create(params)`
+
+Creates a response.
+
+```typescript
+const response = await client.responses.create({
+  agent_id: 'support-agent',
+  input: 'Summarize this ticket',
+})
+```
+
+**Returns:** `Promise<ResponsesResponse>` when `stream` is omitted or `false`.
+
+When `stream: true`, `create()` returns an async iterable of SSE-style event payloads:
+
+```typescript
+const stream = await client.responses.create({
+  agent_id: 'support-agent',
+  input: 'Summarize this ticket',
+  stream: true,
+})
+
+for await (const event of stream) {
+  if (event.type === 'response.output_text.delta') {
+    process.stdout.write(event.delta)
+  }
+}
+```
+
+**Returns:** `Promise<ResponsesStream>`.
+
+#### `retrieve(responseId, requestContext?)`
+
+Retrieves a stored response.
+
+```typescript
+const response = await client.responses.retrieve('msg_123')
+```
+
+**Returns:** `Promise<ResponsesResponse>`.
+
+#### `delete(responseId, requestContext?)`
+
+Deletes a stored response.
+
+```typescript
+const deleted = await client.responses.delete('msg_123')
+```
+
+**Returns:** `Promise<{ id: string; object: "response"; deleted: true }>`
+
+#### `stream(params)`
+
+Creates a streaming response.
+
+```typescript
+const stream = await client.responses.stream({
+  agent_id: 'support-agent',
+  input: 'Say hello',
+})
+
+for await (const event of stream) {
+  console.log(event.type)
+}
+```
+
+**Returns:** `Promise<ResponsesStream>`.
+
+## Stored responses and conversations
+
+Stored responses include both `response.id` and `conversation_id`.
+
+- `response.id` is the response ID. For stored agent-backed responses, this is the persisted assistant message ID.
+- `conversation_id` is the raw Mastra thread ID.
+
+Use `previous_response_id` when you want to continue from a previous stored response. Use `conversation_id` when you want to target a known thread directly.
+
+```typescript
+const first = await client.responses.create({
+  agent_id: 'support-agent',
+  input: 'Start a support thread',
+  store: true,
+})
+
+const second = await client.responses.create({
+  agent_id: 'support-agent',
+  conversation_id: first.conversation_id!,
+  input: 'Add a follow-up to the same thread',
+  store: true,
+})
+```
+
+Use [`client.conversations`](https://mastra.ai/reference/client-js/conversations) when you want to create, retrieve, delete, or inspect the underlying OpenAI Responses API conversation directly.
+
+## Function calling (tools)
+
+`response.tools` contains the configured function definitions available for the request.
+
+If the model calls a function, that activity appears in `response.output` as `function_call` and `function_call_output` items alongside the final assistant `message`.
+
+## Structured output
+
+Use `text.format` when you want JSON output.
+
+- `json_object` enables JSON mode.
+- `json_schema` enables schema-constrained structured output.
+
+Mastra routes both through the agent's structured output path and still returns the JSON in the normal assistant message text output.
+
+```typescript
+const response = await client.responses.create({
+  agent_id: 'support-agent',
+  input: 'Return a structured support ticket summary.',
+  text: {
+    format: {
+      type: 'json_schema',
+      name: 'ticket_summary',
+      schema: {
+        type: 'object',
+        properties: {
+          summary: { type: 'string' },
+          priority: { type: 'string' },
+        },
+        required: ['summary', 'priority'],
+        additionalProperties: false,
+      },
+    },
+  },
+})
+```
+
+## Provider-backed requests
+
+Use `providerOptions` when you need provider-specific options that Mastra does not normalize at the Responses layer.
+
+```typescript
+const response = await client.responses.create({
+  agent_id: 'support-agent',
+  input: 'Continue this exchange',
+  providerOptions: {
+    openai: {
+      previousResponseId: 'resp_123',
+    },
+  },
+})
+```
+
+## Response shape
+
+The returned response object includes:
+
+- `id`: The response ID
+- `output`: Output items such as the assistant `message`, `function_call`, and `function_call_output`
+- `output_text`: Convenience getter that joins assistant text output
+- `tools`: Configured tool definitions for the request
+- `conversation_id`: The raw thread ID for stored responses
+- `text`: The requested text output format, when provided
+
+## Parameters
+
+**agent_id** (`string`): Required on initial requests. Selects the Mastra agent that executes the request. Stored follow-up turns can omit it when continuing with `previous_response_id`.
+
+**model** (`string`): Optional model override for this request, such as `openai/gpt-5`. If omitted, Mastra uses the model configured on the selected agent.
+
+**input** (`string | Array<{ role: 'system' | 'developer' | 'user' | 'assistant'; content: string | Array<{ type: 'input_text' | 'text' | 'output_text'; text: string }> }>`): Required. Input text or message array for the response.
+
+**instructions** (`string`): Optional instruction override for this request.
+
+**text** (`{ format: { type: 'json_object' } | { type: 'json_schema'; name: string; schema: Record<string, unknown>; description?: string; strict?: boolean } }`): Optional text output format. Use `json_object` for JSON mode or `json_schema` for schema-constrained structured output.
+
+**providerOptions** (`Record<string, Record<string, unknown> | undefined>`): Optional provider-specific options passed through to the underlying model call.
+
+**stream** (`boolean`): When true, returns an async iterable of Responses API events.
+
+**store** (`boolean`): When true, persists the response through the selected agent memory.
+
+**conversation_id** (`string`): Optional conversation identifier. In Mastra, this is the raw memory thread ID.
+
+**previous_response_id** (`string`): Continues a stored response chain from a previous stored response.
+
+**requestContext** (`RequestContext | Record<string, any>`): Optional request context forwarded to the Mastra server.
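The `output_text` convenience getter described above amounts to joining the text parts of assistant `message` items in `output`. A sketch with simplified item shapes (the shapes here are trimmed for illustration and are not the full documented types):

```typescript
type OutputItem =
  | { type: 'message'; role: 'assistant'; content: Array<{ type: 'output_text'; text: string }> }
  | { type: 'function_call'; name: string; arguments: string }
  | { type: 'function_call_output'; output: string }

function outputText(output: OutputItem[]): string {
  // Only assistant message items contribute; tool activity is skipped.
  return output
    .filter((item): item is Extract<OutputItem, { type: 'message' }> => item.type === 'message')
    .flatMap(item => item.content.filter(part => part.type === 'output_text').map(part => part.text))
    .join('')
}
```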
package/.docs/reference/index.md
CHANGED

@@ -41,11 +41,13 @@ The Reference section provides documentation of Mastra's API, including paramete
 - [create-mastra](https://mastra.ai/reference/cli/create-mastra)
 - [mastra](https://mastra.ai/reference/cli/mastra)
 - [Agents API](https://mastra.ai/reference/client-js/agents)
+- [Conversations API](https://mastra.ai/reference/client-js/conversations)
 - [Error Handling](https://mastra.ai/reference/client-js/error-handling)
 - [Logs API](https://mastra.ai/reference/client-js/logs)
 - [Mastra Client SDK](https://mastra.ai/reference/client-js/mastra-client)
 - [Memory API](https://mastra.ai/reference/client-js/memory)
 - [Observability API](https://mastra.ai/reference/client-js/observability)
+- [Responses API](https://mastra.ai/reference/client-js/responses)
 - [Telemetry API](https://mastra.ai/reference/client-js/telemetry)
 - [Tools API](https://mastra.ai/reference/client-js/tools)
 - [Vectors API](https://mastra.ai/reference/client-js/vectors)
package/.docs/reference/server/routes.md CHANGED

@@ -90,7 +90,7 @@ GET /api/agents/my-agent?versionId=abc123
 }
 ```
 
-###
+### Request body for `/start-async`
 
 ```typescript
 {
@@ -248,6 +248,27 @@ GET /api/agents/my-agent?versionId=abc123
 | `POST` | `/api/mcp/:serverId` | MCP HTTP transport |
 | `GET` | `/api/mcp/:serverId/sse` | MCP SSE transport |
 
+## Responses API
+
+| Method   | Path                            | Description                                                         |
+| -------- | ------------------------------- | ------------------------------------------------------------------- |
+| `POST`   | `/api/v1/responses`             | Create a response through the OpenAI-compatible Responses API route |
+| `GET`    | `/api/v1/responses/:responseId` | Retrieve a stored response                                          |
+| `DELETE` | `/api/v1/responses/:responseId` | Delete a stored response                                            |
+
+For the full request and response contract, see the [Responses API reference](https://mastra.ai/reference/client-js/responses).
+
+## Conversations API
+
+| Method   | Path                                          | Description                          |
+| -------- | --------------------------------------------- | ------------------------------------ |
+| `POST`   | `/api/v1/conversations`                       | Create a conversation                |
+| `GET`    | `/api/v1/conversations/:conversationId`       | Retrieve a conversation              |
+| `DELETE` | `/api/v1/conversations/:conversationId`       | Delete a conversation                |
+| `GET`    | `/api/v1/conversations/:conversationId/items` | List stored items for a conversation |
+
+For the full request and response contract, see the [Conversations API reference](https://mastra.ai/reference/client-js/conversations).
+
 ## Logs
 
 | Method | Path | Description |
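The new v1 routes above can be exercised with plain `fetch`. This is a minimal sketch, assuming a local Mastra server at the default dev address; only the paths come from the tables above.

```typescript
// Hypothetical client calls for the new v1 routes; the base URL
// (a local Mastra dev server) is an assumption.
const BASE = "http://localhost:4111";

const responseUrl = (id: string) => `${BASE}/api/v1/responses/${id}`;
const conversationItemsUrl = (id: string) =>
  `${BASE}/api/v1/conversations/${id}/items`;

// Create a response tied to a conversation, then list the items
// stored on that conversation.
async function createAndInspect(conversationId: string) {
  const created = await fetch(`${BASE}/api/v1/responses`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      input: "Hello",
      store: true,
      conversation_id: conversationId,
    }),
  }).then((r) => r.json());

  const items = await fetch(conversationItemsUrl(conversationId)).then((r) =>
    r.json(),
  );
  return { created, items };
}
```

The same `responseUrl` helper covers the `GET` and `DELETE` routes, since both take the stored response ID as the final path segment.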
package/CHANGELOG.md CHANGED

@@ -1,5 +1,12 @@
 # @mastra/mcp-docs-server
 
+## 1.1.20-alpha.1
+
+### Patch Changes
+
+- Updated dependencies [[`9a43b47`](https://github.com/mastra-ai/mastra/commit/9a43b476465e86c9aca381c2831066b5c33c999a)]:
+  - @mastra/core@1.21.0-alpha.0
+
 ## 1.1.19
 
 ### Patch Changes
package/package.json CHANGED

@@ -1,6 +1,6 @@
 {
   "name": "@mastra/mcp-docs-server",
-  "version": "1.1.20-alpha.0",
+  "version": "1.1.20-alpha.1",
   "description": "MCP server for accessing Mastra.ai documentation, changelogs, and news.",
   "type": "module",
   "main": "dist/index.js",

@@ -29,8 +29,8 @@
     "jsdom": "^26.1.0",
     "local-pkg": "^1.1.2",
     "zod": "^4.3.6",
-    "@mastra/
-    "@mastra/
+    "@mastra/mcp": "^1.4.1",
+    "@mastra/core": "1.21.0-alpha.0"
   },
   "devDependencies": {
     "@hono/node-server": "^1.19.11",

@@ -46,9 +46,9 @@
     "tsx": "^4.21.0",
     "typescript": "^5.9.3",
     "vitest": "4.0.18",
-    "@internal/types-builder": "0.0.52",
     "@internal/lint": "0.0.77",
-    "@
+    "@internal/types-builder": "0.0.52",
+    "@mastra/core": "1.21.0-alpha.0"
   },
   "homepage": "https://mastra.ai",
   "repository": {