@mastra/mcp-docs-server 1.1.17-alpha.1 → 1.1.17-alpha.13
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.docs/docs/evals/built-in-scorers.md +1 -0
- package/.docs/docs/memory/observational-memory.md +56 -9
- package/.docs/docs/observability/tracing/bridges/otel.md +3 -3
- package/.docs/docs/observability/tracing/exporters/sentry.md +1 -1
- package/.docs/docs/server/auth/okta.md +225 -0
- package/.docs/docs/server/auth.md +1 -0
- package/.docs/docs/server/mastra-client.md +17 -0
- package/.docs/docs/workspace/lsp.md +116 -0
- package/.docs/docs/workspace/overview.md +15 -1
- package/.docs/guides/agent-frameworks/ai-sdk.md +3 -3
- package/.docs/models/gateways/openrouter.md +2 -1
- package/.docs/models/index.md +1 -1
- package/.docs/models/providers/groq.md +24 -16
- package/.docs/models/providers/llmgateway.md +269 -0
- package/.docs/models/providers/poe.md +3 -1
- package/.docs/models/providers/zai-coding-plan.md +3 -2
- package/.docs/models/providers/zai.md +14 -13
- package/.docs/models/providers/zhipuai-coding-plan.md +5 -2
- package/.docs/models/providers/zhipuai.md +13 -12
- package/.docs/models/providers.md +1 -0
- package/.docs/reference/ai-sdk/handle-chat-stream.md +2 -0
- package/.docs/reference/ai-sdk/with-mastra.md +2 -2
- package/.docs/reference/auth/okta.md +162 -0
- package/.docs/reference/client-js/agents.md +13 -8
- package/.docs/reference/client-js/mastra-client.md +1 -1
- package/.docs/reference/client-js/memory.md +1 -1
- package/.docs/reference/deployer/cloudflare.md +31 -1
- package/.docs/reference/evals/noise-sensitivity.md +3 -3
- package/.docs/reference/evals/run-evals.md +78 -3
- package/.docs/reference/evals/scorer-utils.md +188 -0
- package/.docs/reference/evals/trajectory-accuracy.md +627 -0
- package/.docs/reference/harness/harness-class.md +2 -0
- package/.docs/reference/index.md +3 -2
- package/.docs/reference/logging/pino-logger.md +58 -0
- package/.docs/reference/memory/observational-memory.md +34 -8
- package/.docs/reference/observability/tracing/interfaces.md +1 -1
- package/.docs/reference/processors/message-history-processor.md +1 -1
- package/.docs/reference/processors/processor-interface.md +3 -3
- package/.docs/reference/processors/semantic-recall-processor.md +1 -1
- package/.docs/reference/processors/skill-search-processor.md +93 -0
- package/.docs/reference/processors/tool-call-filter.md +2 -2
- package/.docs/reference/processors/working-memory-processor.md +1 -1
- package/.docs/reference/streaming/agents/stream.md +1 -1
- package/.docs/reference/tools/mcp-client.md +1 -1
- package/CHANGELOG.md +42 -0
- package/package.json +4 -4
- package/.docs/reference/core/getStoredAgentById.md +0 -87
- package/.docs/reference/core/listStoredAgents.md +0 -91
@@ -18,6 +18,7 @@ These scorers evaluate how correct, truthful, and complete your agent's answers
 - [`content-similarity`](https://mastra.ai/reference/evals/content-similarity): Measures textual similarity using character-level matching (`0-1`, higher is better)
 - [`textual-difference`](https://mastra.ai/reference/evals/textual-difference): Measures textual differences between strings (`0-1`, higher means more similar)
 - [`tool-call-accuracy`](https://mastra.ai/reference/evals/tool-call-accuracy): Evaluates whether the LLM selects the correct tool from available options (`0-1`, higher is better)
+- [`trajectory-accuracy`](https://mastra.ai/reference/evals/trajectory-accuracy): Evaluates whether an agent follows the expected sequence of actions (tool calls, model generations, workflow steps, and other span types) (`0-1`, higher is better)
 - [`prompt-alignment`](https://mastra.ai/reference/evals/prompt-alignment): Measures how well agent responses align with user prompt intent, requirements, completeness, and format (`0-1`, higher is better)
 
 ### Context quality
@@ -95,27 +95,72 @@ The result is a three-tier system:
 
 Normal OM compresses messages into observations, which is great for staying on task — but the original wording is gone. Retrieval mode fixes this by keeping each observation group linked to the raw messages that produced it. When the agent needs exact wording, tool output, or chronology that the summary compressed away, it can call a `recall` tool to page through the source messages.
 
+#### Browsing only
+
+Set `retrieval: true` to enable the recall tool for browsing raw messages. No vector store needed. By default, the recall tool can browse across all threads for the current resource.
+
 ```typescript
 const memory = new Memory({
   options: {
     observationalMemory: {
       model: 'google/gemini-2.5-flash',
-      scope: 'thread',
       retrieval: true,
     },
   },
 })
 ```
 
+#### With semantic search
+
+Set `retrieval: { vector: true }` to also enable semantic search. This reuses the vector store and embedder already configured on your Memory instance:
+
+```typescript
+const memory = new Memory({
+  storage,
+  vector: myVectorStore,
+  embedder: myEmbedder,
+  options: {
+    observationalMemory: {
+      model: 'google/gemini-2.5-flash',
+      retrieval: { vector: true },
+    },
+  },
+})
+```
+
+When vector search is configured, new observation groups are automatically indexed at buffer time and during synchronous observation (fire-and-forget, non-blocking). Semantic search returns observation-group matches with their raw source message ID ranges, so the recall tool can show the summarized memory alongside where it came from.
+
+#### Restricting to the current thread
+
+By default, the recall tool scope is `'resource'` — the agent can list threads, browse other threads, and search across all conversations. Set `scope: 'thread'` to restrict the agent to only the current thread:
+
+```typescript
+const memory = new Memory({
+  options: {
+    observationalMemory: {
+      model: 'google/gemini-2.5-flash',
+      retrieval: { vector: true, scope: 'thread' },
+    },
+  },
+})
+```
+
+#### What retrieval enables
+
 With retrieval mode enabled, OM:
 
 - Stores a `range` (e.g. `startId:endId`) on each observation group pointing to the messages it was derived from
+
 - Keeps range metadata visible in the agent's context so the agent knows which observations map to which messages
-- Registers a `recall` tool the agent can call to page through the raw messages behind any range
 
-
+- Registers a `recall` tool the agent can call to:
+
+  - Page through the raw messages behind any observation group range
+  - Search by semantic similarity (`mode: "search"` with a `query` string) — requires `vector: true`
+  - List all threads (`mode: "threads"`), browse other threads (`threadId`), and search across all threads (default `scope: 'resource'`)
+  - When `scope: 'thread'`: restrict browsing and search to the current thread only
 
-See the [recall tool reference](https://mastra.ai/reference/memory/observational-memory) for the full API (detail levels, part indexing, pagination, and token limiting).
+See the [recall tool reference](https://mastra.ai/reference/memory/observational-memory) for the full API (detail levels, part indexing, pagination, cross-thread browsing, and token limiting).
 
 ## Models
 
@@ -150,17 +195,19 @@ const memory = new Memory({
     observation: {
       model: new ModelByInputTokens({
         upTo: {
-
-
-
+          // Faster, cheaper models for smaller inputs; stronger models for larger contexts
+          5_000: 'openrouter/mistralai/ministral-8b-2512',
+          20_000: 'openrouter/mistralai/mistral-small-2603',
+          40_000: 'openai/gpt-5.4-mini',
+          1_000_000: 'google/gemini-3.1-flash-lite-preview',
         },
       }),
     },
     reflection: {
       model: new ModelByInputTokens({
         upTo: {
-          20_000: '
-
+          20_000: 'openai/gpt-5.4-mini',
+          100_000: 'google/gemini-2.5-flash',
         },
       }),
     },
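The `upTo` map in the hunk above ties input-token thresholds to model IDs. As a rough illustration of that tiering, here is a hypothetical standalone helper — `pickModel` and its behavior are an assumption for illustration only, not Mastra's actual `ModelByInputTokens` implementation:

```typescript
// Hypothetical sketch (not Mastra's implementation): pick the model whose
// threshold is the smallest one at or above the input token count,
// mirroring the `upTo` semantics shown in the diff above.
type UpToMap = Record<number, string>

function pickModel(upTo: UpToMap, inputTokens: number): string | undefined {
  const thresholds = Object.keys(upTo)
    .map(Number)
    .sort((a, b) => a - b)
  // First threshold large enough to accommodate the input wins.
  const match = thresholds.find(t => inputTokens <= t)
  return match !== undefined ? upTo[match] : undefined
}

// Using the observation tiers from the diff above:
const tiers: UpToMap = {
  5_000: 'openrouter/mistralai/ministral-8b-2512',
  20_000: 'openrouter/mistralai/mistral-small-2603',
  40_000: 'openai/gpt-5.4-mini',
  1_000_000: 'google/gemini-3.1-flash-lite-preview',
}

console.log(pickModel(tiers, 3_000)) // 'openrouter/mistralai/ministral-8b-2512'
```

Inputs above the largest threshold fall through to `undefined` in this sketch; how Mastra handles that case is not shown in the diff.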
@@ -151,10 +151,10 @@ With the OtelBridge, your traces maintain proper hierarchy across OTEL and Mastra
 ```text
 HTTP POST /api/chat (from Hono middleware)
 └── agent.assistant (from Mastra via OtelBridge)
-    ├── chat gpt-
+    ├── chat gpt-5.4 (LLM call)
     ├── tool.execute search (tool execution)
     │   └── HTTP GET api.example.com (from OTEL auto-instrumentation)
-    └── chat gpt-
+    └── chat gpt-5.4 (follow-up LLM call)
 ```
 
 ## Multi-service distributed tracing
@@ -167,7 +167,7 @@ Service A: HTTP POST /api/process
 
 Service B: HTTP POST /api/analyze (incoming call - same trace!)
 └── agent.analyzer (Mastra agent inherits trace context)
-    └── chat gpt-
+    └── chat gpt-5.4
 ```
 
 Both services must have:
@@ -162,7 +162,7 @@ The exporter uses standard GenAI semantic conventions with Sentry-specific attributes
 **For MODEL\_GENERATION spans:**
 
 - `gen_ai.system`: Model provider (e.g., `openai`, `anthropic`)
-- `gen_ai.request.model`: Model identifier (e.g., `gpt-4`)
+- `gen_ai.request.model`: Model identifier (e.g., `gpt-5.4`)
 - `gen_ai.response.model`: Response model
 - `gen_ai.response.text`: Output text response
 - `gen_ai.response.tool_calls`: Tool calls made during generation (JSON array)
@@ -0,0 +1,225 @@
+# Okta
+
+The `@mastra/auth-okta` package provides authentication and role-based access control for Mastra using Okta. It supports an OAuth 2.0 / OIDC login flow with encrypted session cookies and maps Okta groups to Mastra permissions.
+
+## Prerequisites
+
+This guide uses Okta authentication. Make sure to:
+
+1. Create an Okta account at [okta.com](https://www.okta.com/)
+2. Set up an OAuth application in the Okta Admin Console (Web app, Authorization Code grant)
+3. Add your redirect URI to the application's sign-in redirect URIs
+4. Create an API token (required for RBAC)
+
+Make sure your environment variables are set.
+
+```env
+OKTA_DOMAIN=dev-123456.okta.com
+OKTA_CLIENT_ID=your-client-id
+OKTA_CLIENT_SECRET=your-client-secret
+OKTA_REDIRECT_URI=http://localhost:4111/api/auth/callback
+OKTA_COOKIE_PASSWORD=a-random-string-at-least-32-characters-long
+OKTA_API_TOKEN=your-api-token
+```
+
+> **Note:** `OKTA_COOKIE_PASSWORD` encrypts session cookies. If omitted, an auto-generated value is used that does not survive server restarts. Set it explicitly for production.
+>
+> `OKTA_API_TOKEN` is only required when using `MastraRBACOkta` to map Okta groups to permissions.
+
+## Installation
+
+**npm**:
+
+```bash
+npm install @mastra/auth-okta
+```
+
+**pnpm**:
+
+```bash
+pnpm add @mastra/auth-okta
+```
+
+**Yarn**:
+
+```bash
+yarn add @mastra/auth-okta
+```
+
+**Bun**:
+
+```bash
+bun add @mastra/auth-okta
+```
+
+## Usage examples
+
+### Basic usage with environment variables
+
+With the environment variables above set, all constructor parameters are optional:
+
+```typescript
+import { Mastra } from '@mastra/core'
+import { MastraAuthOkta } from '@mastra/auth-okta'
+
+export const mastra = new Mastra({
+  server: {
+    auth: new MastraAuthOkta(),
+  },
+})
+```
+
+### Auth with RBAC
+
+Add `MastraRBACOkta` to map Okta groups to Mastra permissions:
+
+```typescript
+import { Mastra } from '@mastra/core'
+import { MastraAuthOkta, MastraRBACOkta } from '@mastra/auth-okta'
+
+export const mastra = new Mastra({
+  server: {
+    auth: new MastraAuthOkta(),
+    rbac: new MastraRBACOkta({
+      roleMapping: {
+        Admin: ['*'],
+        Engineering: ['agents:*', 'workflows:*', 'tools:*'],
+        Viewer: ['agents:read', 'workflows:read'],
+        _default: [], // users with unmapped groups get no permissions
+      },
+    }),
+  },
+})
+```
+
+### Cross-provider usage
+
+Use a different auth provider (Auth0, Clerk, etc.) for login and Okta for RBAC. Pass a `getUserId` function to resolve the Okta user ID from the other provider's user object:
+
+```typescript
+import { Mastra } from '@mastra/core'
+import { MastraAuthAuth0 } from '@mastra/auth-auth0'
+import { MastraRBACOkta } from '@mastra/auth-okta'
+
+export const mastra = new Mastra({
+  server: {
+    auth: new MastraAuthAuth0(),
+    rbac: new MastraRBACOkta({
+      getUserId: user => user.metadata?.oktaUserId || user.email,
+      roleMapping: {
+        Engineering: ['agents:*', 'workflows:*'],
+        Admin: ['*'],
+        _default: [],
+      },
+    }),
+  },
+})
+```
+
+> **Note:** To link users between providers, store the Okta user ID in the other provider's user metadata. Mastra uses this ID to fetch groups from Okta.
+
+> **Info:** Visit [MastraAuthOkta](https://mastra.ai/reference/auth/okta) for all available configuration options.
+
+## Role mapping
+
+The `roleMapping` option maps Okta group names to arrays of Mastra permission strings. Permissions follow a `resource:action` pattern and support wildcards:
+
+```typescript
+const rbac = new MastraRBACOkta({
+  roleMapping: {
+    // full access to everything
+    Admin: ['*'],
+
+    // full access to agents and workflows
+    Engineering: ['agents:*', 'workflows:*'],
+
+    // read-only access
+    Viewer: ['agents:read', 'workflows:read'],
+
+    // users whose groups don't match any key above
+    _default: [],
+  },
+})
+```
+
+The `_default` key assigns permissions to users whose Okta groups do not match any other key.
+
+## Client-side setup
+
+When auth is enabled, requests to Mastra routes require authentication. `MastraAuthOkta` uses SSO, so users authenticate through Okta's hosted login page. After login, an encrypted session cookie is set automatically.
+
+### Cookie session (recommended)
+
+For cross-origin requests (e.g. a frontend on `:3000` calling Mastra on `:4111`), enable CORS credentials on the Mastra server:
+
+```typescript
+export const mastra = new Mastra({
+  server: {
+    auth: new MastraAuthOkta(),
+    cors: {
+      origin: 'http://localhost:3000',
+      credentials: true,
+    },
+  },
+})
+```
+
+Configure the client to include credentials:
+
+```typescript
+import { MastraClient } from '@mastra/client-js'
+
+export const mastraClient = new MastraClient({
+  baseUrl: 'http://localhost:4111',
+  credentials: 'include',
+})
+```
+
+### Bearer token
+
+You can also pass an Okta access token as a Bearer token. The token is verified against Okta's JWKS endpoint:
+
+```typescript
+import { MastraClient } from '@mastra/client-js'
+
+export const createMastraClient = (accessToken: string) => {
+  return new MastraClient({
+    baseUrl: 'http://localhost:4111',
+    headers: {
+      Authorization: `Bearer ${accessToken}`,
+    },
+  })
+}
+```
+
+> **Info:** Visit [Mastra Client SDK](https://mastra.ai/docs/server/mastra-client) for more configuration options.
+
+### Making authenticated requests
+
+**MastraClient**:
+
+```typescript
+import { mastraClient } from '../lib/mastra-client'
+
+const agent = mastraClient.getAgent('weatherAgent')
+const response = await agent.generate('Weather in London')
+console.log(response)
+```
+
+**cURL**:
+
+```bash
+curl -X POST http://localhost:4111/api/agents/weatherAgent/generate \
+  -H "Content-Type: application/json" \
+  -H "Authorization: Bearer <your-okta-access-token>" \
+  -d '{
+    "messages": "Weather in London"
+  }'
+```
+
+## Troubleshooting
+
+- **401 on every request**: Verify your Okta domain, client ID, and client secret are correct. Check that the redirect URI in your Okta application matches `OKTA_REDIRECT_URI`.
+- **Cookies not sent cross-origin**: Set `credentials: "include"` in `MastraClient` and configure `server.cors` with your frontend origin and `credentials: true`.
+- **Session lost on restart**: Set `OKTA_COOKIE_PASSWORD` to a stable value (at least 32 characters). Without it, an auto-generated key is used that changes on each restart.
+- **RBAC returns empty permissions**: Verify `OKTA_API_TOKEN` is set and the token has permission to list user groups. Check that group names in `roleMapping` match your Okta group names exactly.
@@ -30,6 +30,7 @@ See [Custom API Routes](https://mastra.ai/docs/server/custom-api-routes) for con
 - [Better Auth](https://mastra.ai/docs/server/auth/better-auth)
 - [Clerk](https://mastra.ai/docs/server/auth/clerk)
 - [Firebase](https://mastra.ai/docs/server/auth/firebase)
+- [Okta](https://mastra.ai/docs/server/auth/okta)
 - [Supabase](https://mastra.ai/docs/server/auth/supabase)
 - [WorkOS](https://mastra.ai/docs/server/auth/workos)
 
@@ -133,6 +133,23 @@ export const mastraClient = new MastraClient({
 
 > **Info:** Visit [MastraClient](https://mastra.ai/reference/client-js/mastra-client) for more configuration options.
 
+## Credentials and session cookies
+
+**Authenticate Mastra API calls with session cookies** when your UI and Mastra API are not on the same origin—different host, subdomain, or port (for example Mastra Studio on one port and a custom server on another). Add **`credentials: 'include'`** to `MastraClient` so each request carries the cookies the user already has after sign-in. Skip this and you will often get **`401`** responses from Mastra even though login succeeded in the browser.
+
+```typescript
+import { MastraClient } from '@mastra/client-js'
+
+export const mastraClient = new MastraClient({
+  baseUrl: process.env.MASTRA_API_URL || 'http://localhost:4111',
+  credentials: 'include',
+})
+```
+
+**Allow credentialed cross-origin requests on your server**—see [CORS: requests with credentials](https://developer.mozilla.org/en-US/docs/Web/HTTP/Guides/CORS#requests_with_credentials). You need a concrete `Access-Control-Allow-Origin` (not `*`) and `Access-Control-Allow-Credentials: true`, or the browser will block the call before it reaches Mastra.
+
+**Using `@mastra/react`?** Wrap your app with `MastraReactProvider`, set `baseUrl` and `apiPrefix` to match your server, and rely on the default `credentials: 'include'`. Change `credentials` only when you deliberately want `same-origin` or `omit` behavior.
+
 ## Adding request cancelling
 
 `MastraClient` supports request cancellation using the standard Node.js `AbortSignal` API. Useful for canceling in-flight requests, such as when users abort an operation or to clean up stale network calls.
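The credentialed-CORS rule described above (concrete allowed origin, no `*`, credentials flag set) can be expressed as a small check. This helper is a hypothetical sketch of the browser's requirement, not part of any Mastra API:

```typescript
// Hypothetical sketch: check whether a set of response headers satisfies the
// browser's rules for credentialed cross-origin requests. A wildcard origin
// is rejected because browsers disallow `*` together with credentials.
function allowsCredentialedOrigin(
  headers: Record<string, string>,
  origin: string,
): boolean {
  const allowOrigin = headers['access-control-allow-origin']
  const allowCreds = headers['access-control-allow-credentials']
  return allowOrigin === origin && allowOrigin !== '*' && allowCreds === 'true'
}

console.log(
  allowsCredentialedOrigin(
    {
      'access-control-allow-origin': 'http://localhost:3000',
      'access-control-allow-credentials': 'true',
    },
    'http://localhost:3000',
  ),
) // true
```

A response with `Access-Control-Allow-Origin: *` fails this check even when the credentials header is present, which matches the behavior the docs warn about.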
@@ -0,0 +1,116 @@
+# LSP inspection
+
+**Added in:** `@mastra/core@1.1.0`
+
+LSP inspection gives workspace-backed agents semantic code intelligence. When you enable LSP on a workspace, agents can inspect symbols in supported files to retrieve hover information, jump to definitions, and find implementations.
+
+## When to use LSP inspection
+
+Use LSP inspection when your agent needs semantic code understanding instead of plain-text search alone:
+
+- Inspect TypeScript or JavaScript symbols and their inferred types
+- Find where a symbol is declared before editing related code
+- Explore implementations across a codebase without manually tracing every file
+- Combine semantic inspection with `view` and `search_content` for faster navigation
+
+## Basic usage
+
+Enable LSP on a workspace by setting `lsp: true`:
+
+```typescript
+import { Workspace, LocalFilesystem, LocalSandbox } from '@mastra/core/workspace'
+
+const workspace = new Workspace({
+  filesystem: new LocalFilesystem({ basePath: './workspace' }),
+  sandbox: new LocalSandbox({ workingDirectory: './workspace' }),
+  lsp: true,
+})
+```
+
+With this configuration, the workspace registers the default LSP inspection tool alongside the configured filesystem and sandbox tools.
+
+## Agent tool
+
+When LSP is enabled, the workspace exposes `mastra_workspace_lsp_inspect` by default.
+
+```json
+{
+  "path": "/absolute/path/to/file.ts",
+  "line": 10,
+  "match": "const foo = <<<bar()"
+}
+```
+
+The `match` field must include exactly one `<<<` cursor marker. The marker identifies the symbol position on the specified line.
+
+The tool returns up to three result groups:
+
+| Result           | Description                                                      |
+| ---------------- | ---------------------------------------------------------------- |
+| `hover`          | Type information or documentation for the symbol at the cursor   |
+| `diagnostics`    | Line-scoped LSP diagnostics for the inspected line, when present |
+| `definition`     | Declaration locations with a one-line preview                    |
+| `implementation` | Implementation or usage locations                                |
+
+## Tool name remapping
+
+Rename the tool if your agent expects a shorter name:
+
+```typescript
+import { Workspace, LocalFilesystem, WORKSPACE_TOOLS } from '@mastra/core/workspace'
+
+const workspace = new Workspace({
+  filesystem: new LocalFilesystem({ basePath: './workspace' }),
+  lsp: true,
+  tools: {
+    [WORKSPACE_TOOLS.LSP.LSP_INSPECT]: {
+      name: 'lsp_inspect',
+    },
+  },
+})
+```
+
+This changes the exposed tool name only. The configuration key stays `WORKSPACE_TOOLS.LSP.LSP_INSPECT`.
+
+## LSP configuration
+
+Set `lsp` to `true` for default behavior, or provide an object to customize server startup and diagnostics:
+
+```typescript
+import { Workspace, LocalFilesystem } from '@mastra/core/workspace'
+
+const workspace = new Workspace({
+  filesystem: new LocalFilesystem({ basePath: './workspace' }),
+  lsp: {
+    diagnosticTimeout: 4000,
+    initTimeout: 8000,
+    disableServers: ['pyright'],
+    binaryOverrides: {
+      typescript: '/custom/path/to/typescript-language-server',
+    },
+    searchPaths: ['/opt/homebrew/bin'],
+  },
+})
+```
+
+Use custom configuration when you need to:
+
+- Increase timeouts for large repositories
+- Disable specific language servers
+- Point Mastra at custom language server binaries
+- Add extra binary search paths in constrained environments
+
+## Requirements and limitations
+
+- LSP inspection only works for file types with a matching language server
+- The `path` you inspect must resolve inside the workspace filesystem or allowed paths
+- External package inspection may resolve to declaration files such as `.d.ts` instead of runtime source files
+- `lsp_inspect` complements `view` and `search_content`, but does not replace reading implementation code when you need full context
+
+## Related
+
+- [Workspace overview](https://mastra.ai/docs/workspace/overview)
+- [Filesystem](https://mastra.ai/docs/workspace/filesystem)
+- [Sandbox](https://mastra.ai/docs/workspace/sandbox)
+- [Search and indexing](https://mastra.ai/docs/workspace/search)
+- [Workspace class reference](https://mastra.ai/reference/workspace/workspace-class)
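The `<<<` marker in the `match` field above encodes a cursor position within the matched line. As a hypothetical sketch of how a client might derive the cursor column from such a string — this is not the parser used by `mastra_workspace_lsp_inspect`, just an illustration of the input contract:

```typescript
// Hypothetical sketch: derive the cursor column from a `match` string that
// must contain exactly one `<<<` marker, per the lsp_inspect input shape
// shown in the diff above. Not the tool's actual parser.
function parseCursorMatch(match: string): { text: string; column: number } {
  const marker = '<<<'
  const first = match.indexOf(marker)
  if (first === -1 || match.indexOf(marker, first + 1) !== -1) {
    throw new Error('match must contain exactly one <<< marker')
  }
  return {
    // The line text with the marker removed, as it appears in the file.
    text: match.slice(0, first) + match.slice(first + marker.length),
    // Zero-based column of the symbol the marker points at.
    column: first,
  }
}

const { text, column } = parseCursorMatch('const foo = <<<bar()')
console.log(text)   // 'const foo = bar()'
console.log(column) // 12
```

Rejecting zero or multiple markers mirrors the documented "exactly one `<<<` cursor marker" requirement.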
@@ -8,9 +8,14 @@ A workspace supports the following features:
 
 - **[Filesystem](https://mastra.ai/docs/workspace/filesystem)**: File storage (read, write, list, delete, copy, move, grep)
 - **[Sandbox](https://mastra.ai/docs/workspace/sandbox)**: Command execution (shell commands) and background processes
+- **[LSP inspection](https://mastra.ai/docs/workspace/lsp)**: Hover, definition, and implementation queries through language servers
 - **[Search](https://mastra.ai/docs/workspace/search)**: BM25, vector, or hybrid search over indexed content
 - **[Skills](https://mastra.ai/docs/workspace/skills)**: Reusable instructions for agents
 
+## When to use workspaces
+
+Use a workspace when your agent needs access to the local filesystem, shell commands, semantic code inspection, indexed search, or reusable skill instructions.
+
 ## How it works
 
 When you assign a workspace to an agent, Mastra includes the corresponding tools in the agent's toolset. The agent can then use these tools to interact with files and execute commands.
@@ -208,16 +213,24 @@ import { Workspace, LocalFilesystem, LocalSandbox, WORKSPACE_TOOLS } from '@mastra/core/workspace'
 const workspace = new Workspace({
   filesystem: new LocalFilesystem({ basePath: './workspace' }),
   sandbox: new LocalSandbox({ workingDirectory: './workspace' }),
+  lsp: true,
   tools: {
     [WORKSPACE_TOOLS.FILESYSTEM.READ_FILE]: { name: 'view' },
     [WORKSPACE_TOOLS.FILESYSTEM.GREP]: { name: 'search_content' },
     [WORKSPACE_TOOLS.FILESYSTEM.LIST_FILES]: { name: 'find_files' },
     [WORKSPACE_TOOLS.SANDBOX.EXECUTE_COMMAND]: { name: 'execute_command' },
+    [WORKSPACE_TOOLS.LSP.LSP_INSPECT]: { name: 'lsp_inspect' },
   },
 })
 ```
 
-The agent sees `view`, `search_content`, `find_files`, and `
+The agent sees `view`, `search_content`, `find_files`, `execute_command`, and `lsp_inspect` instead of the default `mastra_workspace_*` names. Tool names must be unique — duplicate names or conflicts with other default names throw an error.
+
+## LSP inspection
+
+Enable `lsp` on a workspace to add semantic code inspection through language servers. This adds the `mastra_workspace_lsp_inspect` tool by default, which can return hover information, definition locations, and implementations for a symbol at a specific cursor position.
+
+See [LSP inspection](https://mastra.ai/docs/workspace/lsp) for configuration, examples, and tool name remapping.
 
 ### Output truncation
 
@@ -294,6 +307,7 @@ External providers may perform additional setup like establishing connections or
 
 - [Filesystem](https://mastra.ai/docs/workspace/filesystem)
 - [Sandbox](https://mastra.ai/docs/workspace/sandbox)
+- [LSP inspection](https://mastra.ai/docs/workspace/lsp)
 - [Skills](https://mastra.ai/docs/workspace/skills)
 - [Search and indexing](https://mastra.ai/docs/workspace/search)
 - [Workspace class reference](https://mastra.ai/reference/workspace/workspace-class)
@@ -56,7 +56,7 @@ const loggingProcessor: Processor<'logger'> = {
   },
 }
 
-const model = withMastra(openai('gpt-
+const model = withMastra(openai('gpt-5.4'), {
   inputProcessors: [loggingProcessor],
   outputProcessors: [loggingProcessor],
 })
@@ -85,7 +85,7 @@ await storage.init()
 
 const memoryStorage = await storage.getStore('memory')
 
-const model = withMastra(openai('gpt-
+const model = withMastra(openai('gpt-5.4'), {
   memory: {
     storage: memoryStorage!,
     threadId: 'user-thread-123',
@@ -115,7 +115,7 @@ await storage.init()
 
 const memoryStorage = await storage.getStore('memory')
 
-const model = withMastra(openai('gpt-
+const model = withMastra(openai('gpt-5.4'), {
   inputProcessors: [myGuardProcessor],
   outputProcessors: [myLoggingProcessor],
   memory: {
@@ -1,6 +1,6 @@
 # OpenRouter
 
-OpenRouter aggregates models from multiple providers with enhanced features like rate limiting and failover. Access
+OpenRouter aggregates models from multiple providers with enhanced features like rate limiting and failover. Access 165 models through Mastra's model router.
 
 Learn more in the [OpenRouter documentation](https://openrouter.ai/models).
 
@@ -103,6 +103,7 @@ ANTHROPIC_API_KEY=ant-...
 | `mistralai/devstral-small-2507` |
 | `mistralai/mistral-medium-3` |
 | `mistralai/mistral-medium-3.1` |
+| `mistralai/mistral-small-2603` |
 | `mistralai/mistral-small-3.1-24b-instruct` |
 | `mistralai/mistral-small-3.2-24b-instruct` |
 | `moonshotai/kimi-k2` |
package/.docs/models/index.md CHANGED

@@ -1,6 +1,6 @@
 # Model Providers
 
-Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to
+Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to 3607 models from 95 providers through a single API.
 
 ## Features
 