@mastra/mcp-docs-server 1.0.0-beta.10 → 1.0.0-beta.11
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.docs/organized/changelogs/%40mastra%2Fai-sdk.md +27 -27
- package/.docs/organized/changelogs/%40mastra%2Fchroma.md +10 -10
- package/.docs/organized/changelogs/%40mastra%2Fclient-js.md +27 -27
- package/.docs/organized/changelogs/%40mastra%2Fcore.md +91 -91
- package/.docs/organized/changelogs/%40mastra%2Fdeployer-cloud.md +9 -9
- package/.docs/organized/changelogs/%40mastra%2Fdeployer.md +11 -11
- package/.docs/organized/changelogs/%40mastra%2Fmcp-docs-server.md +8 -8
- package/.docs/organized/changelogs/%40mastra%2Fplayground-ui.md +30 -30
- package/.docs/organized/changelogs/%40mastra%2Freact.md +26 -0
- package/.docs/organized/changelogs/%40mastra%2Fserver.md +31 -31
- package/.docs/organized/changelogs/create-mastra.md +3 -3
- package/.docs/organized/changelogs/mastra.md +15 -15
- package/.docs/raw/agents/guardrails.mdx +43 -6
- package/.docs/raw/agents/processors.mdx +151 -0
- package/.docs/raw/getting-started/mcp-docs-server.mdx +57 -0
- package/.docs/raw/getting-started/studio.mdx +24 -1
- package/.docs/raw/guides/migrations/upgrade-to-v1/agent.mdx +70 -0
- package/.docs/raw/reference/agents/agent.mdx +11 -4
- package/.docs/raw/reference/core/getServer.mdx +1 -1
- package/.docs/raw/reference/processors/processor-interface.mdx +314 -13
- package/.docs/raw/reference/streaming/ChunkType.mdx +23 -2
- package/.docs/raw/reference/streaming/agents/stream.mdx +16 -29
- package/.docs/raw/reference/workflows/workflow-methods/foreach.mdx +68 -3
- package/.docs/raw/reference/workflows/workflow.mdx +23 -0
- package/.docs/raw/server-db/mastra-server.mdx +7 -5
- package/.docs/raw/workflows/control-flow.mdx +348 -2
- package/CHANGELOG.md +7 -0
- package/package.json +5 -5
@@ -1,5 +1,19 @@
 # mastra
 
+## 1.0.0-beta.9
+
+### Patch Changes
+
+- Allow to run mastra studio from anywhere in the file system, and not necessarily inside a mastra project ([#11067](https://github.com/mastra-ai/mastra/pull/11067))
+
+- Make sure to verify that a mastra instance is running on server.port OR 4111 by default ([#11066](https://github.com/mastra-ai/mastra/pull/11066))
+
+- Internal changes to enable a custom base path for Mastra Studio ([#10441](https://github.com/mastra-ai/mastra/pull/10441))
+
+- Updated dependencies [[`38380b6`](https://github.com/mastra-ai/mastra/commit/38380b60fca905824bdf6b43df307a58efb1aa15), [`798d0c7`](https://github.com/mastra-ai/mastra/commit/798d0c740232653b1d754870e6b43a55c364ffe2), [`ffe84d5`](https://github.com/mastra-ai/mastra/commit/ffe84d54f3b0f85167fe977efd027dba027eb998), [`2c212e7`](https://github.com/mastra-ai/mastra/commit/2c212e704c90e2db83d4109e62c03f0f6ebd2667), [`4ca4306`](https://github.com/mastra-ai/mastra/commit/4ca430614daa5fa04730205a302a43bf4accfe9f), [`2c212e7`](https://github.com/mastra-ai/mastra/commit/2c212e704c90e2db83d4109e62c03f0f6ebd2667), [`3bf6c5f`](https://github.com/mastra-ai/mastra/commit/3bf6c5f104c25226cd84e0c77f9dec15f2cac2db)]:
+  - @mastra/core@1.0.0-beta.11
+  - @mastra/deployer@1.0.0-beta.11
+
 ## 1.0.0-beta.8
 
 ### Minor Changes
@@ -484,19 +498,5 @@
 
 - Improve the overall flow of the `create-mastra` CLI by first asking all questions and then creating the project structure. If you skip entering an API key during the wizard, the `your-api-key` placeholder will now be added to an `.env.example` file instead of `.env`. ([#8603](https://github.com/mastra-ai/mastra/pull/8603))
 
-- Updated dependencies [[`0d71771`](https://github.com/mastra-ai/mastra/commit/0d71771f5711164c79f8e80919bc84d6bffeb6bc), [`0d6e55e`](https://github.com/mastra-ai/mastra/commit/0d6e55ecc5a2e689cd4fc9c86525e0eb54d82372)]:
-  - @mastra/core@0.20.2-alpha.0
-  - @mastra/deployer@0.20.2-alpha.0
-
-## 0.15.0
-
-### Minor Changes
-
-- Update peer dependencies to match core package version bump (0.20.1) ([#8589](https://github.com/mastra-ai/mastra/pull/8589))
-
-### Patch Changes
-
-- workflow run thread more visible ([#8539](https://github.com/mastra-ai/mastra/pull/8539))
-
 
-...
+... 6370 more lines hidden. See full changelog in package directory.
@@ -327,9 +327,9 @@ export const privateAgent = new Agent({
 
 ### Handling blocked requests
 
-When a processor blocks a request, the agent will still return successfully without throwing an error. To handle blocked requests, check for `tripwire`
+When a processor blocks a request, the agent will still return successfully without throwing an error. To handle blocked requests, check for `tripwire` in the response.
 
-For example, if an agent uses the `PIIDetector` with `strategy: "block"` and the request includes a credit card number, it will be blocked and the response will include
+For example, if an agent uses the `PIIDetector` with `strategy: "block"` and the request includes a credit card number, it will be blocked and the response will include tripwire information.
 
 #### `.generate()` example
 
@@ -338,8 +338,14 @@ const result = await agent.generate(
   "Is this credit card number valid?: 4543 1374 5089 4332",
 );
 
-
-console.error(result.
+if (result.tripwire) {
+  console.error("Blocked:", result.tripwire.reason);
+  console.error("Processor:", result.tripwire.processorId);
+  // Optional: check if retry was requested
+  console.error("Retry requested:", result.tripwire.retry);
+  // Optional: access additional metadata
+  console.error("Metadata:", result.tripwire.metadata);
+}
 ```
 
 #### `.stream()` example
@@ -351,17 +357,48 @@ const stream = await agent.stream(
 
 for await (const chunk of stream.fullStream) {
   if (chunk.type === "tripwire") {
-    console.error(chunk.payload.
+    console.error("Blocked:", chunk.payload.reason);
+    console.error("Processor:", chunk.payload.processorId);
   }
 }
 ```
 
-In this case, the `
+In this case, the `reason` indicates that a credit card number was detected:
 
 ```text
 PII detected. Types: credit-card
 ```
 
+### Requesting retries
+
+Processors can request that the LLM retry its response with feedback. This is useful for implementing quality checks:
+
+```typescript showLineNumbers
+export class QualityChecker implements Processor {
+  id = "quality-checker";
+
+  async processOutputStep({ text, abort, retryCount }) {
+    const score = await evaluateQuality(text);
+
+    if (score < 0.7 && retryCount < 3) {
+      // Request retry with feedback for the LLM
+      abort("Response quality too low. Please be more specific.", {
+        retry: true,
+        metadata: { score },
+      });
+    }
+
+    return [];
+  }
+}
+```
+
+The `abort()` function accepts an optional second parameter with:
+- `retry: true` - Request the LLM retry the step
+- `metadata: unknown` - Attach additional data for debugging/logging
+
+Use `retryCount` to track retry attempts and prevent infinite loops.
+
 ## Custom processors
 
 If the built-in processors don’t cover your needs, you can create your own by extending the `Processor` class.
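A minimal consumer-side sketch (not taken from the packaged docs above): it combines the streamed `tripwire` chunk from this hunk with the extra payload fields (`retry`, `metadata`) documented in the migration notes later in this diff, and assumes `agent` is configured with a blocking processor such as `PIIDetector`.

```typescript
// Sketch: handling a streamed tripwire chunk with the consolidated payload shape.
const stream = await agent.stream(
  "Is this credit card number valid?: 4543 1374 5089 4332",
);

for await (const chunk of stream.fullStream) {
  if (chunk.type === "tripwire") {
    console.error("Blocked:", chunk.payload.reason);        // e.g. "PII detected. Types: credit-card"
    console.error("Processor:", chunk.payload.processorId); // which processor aborted
    console.error("Retry requested:", chunk.payload.retry); // whether a retry was requested
    console.error("Metadata:", chunk.payload.metadata);     // extra data passed to abort()
  }
}
```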
@@ -12,6 +12,8 @@ Processors are configured as:
 - **`inputProcessors`**: Run before messages reach the language model.
 - **`outputProcessors`**: Run after the language model generates a response, but before it's returned to users.
 
+You can use individual `Processor` objects or compose them into workflows using Mastra's workflow primitives. Workflows give you advanced control over processor execution order, parallel processing, and conditional logic.
+
 Some processors implement both input and output logic and can be used in either array depending on where the transformation should occur.
 
 ## When to use processors
@@ -168,6 +170,81 @@ This is useful for:
 - Filtering or modifying semantic recall content to prevent "prompt too long" errors
 - Dynamically adjusting system instructions based on the conversation
 
+### Per-step processing with processInputStep
+
+While `processInput` runs once at the start of agent execution, `processInputStep` runs at **each step** of the agentic loop (including tool call continuations). This enables per-step configuration changes like dynamic model switching or tool choice modifications.
+
+```typescript title="src/mastra/processors/step-processor.ts" showLineNumbers copy
+import type { Processor, ProcessInputStepArgs, ProcessInputStepResult } from "@mastra/core";
+
+export class DynamicModelProcessor implements Processor {
+  id = "dynamic-model";
+
+  async processInputStep({
+    stepNumber,
+    model,
+    toolChoice,
+    messageList,
+  }: ProcessInputStepArgs): Promise<ProcessInputStepResult> {
+    // Use a fast model for initial response
+    if (stepNumber === 0) {
+      return { model: "openai/gpt-4o-mini" };
+    }
+
+    // Disable tools after 5 steps to force completion
+    if (stepNumber > 5) {
+      return { toolChoice: "none" };
+    }
+
+    // No changes for other steps
+    return {};
+  }
+}
+```
+
+The `processInputStep` method receives:
+- `stepNumber`: Current step in the agentic loop (0-indexed)
+- `steps`: Results from previous steps
+- `messages`: Current messages snapshot (read-only)
+- `systemMessages`: Current system messages (read-only)
+- `messageList`: The full MessageList instance for mutations
+- `model`: Current model being used
+- `tools`: Current tools available for this step
+- `toolChoice`: Current tool choice setting
+- `activeTools`: Currently active tools
+- `providerOptions`: Provider-specific options
+- `modelSettings`: Model settings like temperature
+- `structuredOutput`: Structured output configuration
+
+The method can return any combination of:
+- `model`: Change the model for this step
+- `tools`: Replace or add tools (use spread to merge: `{ tools: { ...tools, newTool } }`)
+- `toolChoice`: Change tool selection behavior
+- `activeTools`: Filter which tools are available
+- `messages`: Replace messages (applied to messageList)
+- `systemMessages`: Replace all system messages
+- `providerOptions`: Modify provider options
+- `modelSettings`: Modify model settings
+- `structuredOutput`: Modify structured output configuration
+
+#### Using prepareStep callback
+
+For simpler per-step logic, you can use the `prepareStep` callback on `generate()` or `stream()` instead of creating a full processor:
+
+```typescript
+await agent.generate({
+  prompt: "Complex task",
+  prepareStep: async ({ stepNumber, model }) => {
+    if (stepNumber === 0) {
+      return { model: "openai/gpt-4o-mini" };
+    }
+    if (stepNumber > 5) {
+      return { toolChoice: "none" };
+    }
+  },
+});
+```
+
 ### Custom output processor
 
 ```typescript title="src/mastra/processors/custom-output.ts" showLineNumbers copy
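A hedged sketch of attaching the `DynamicModelProcessor` from the hunk above to an agent; the constructor fields mirror other examples in this diff, while the agent id/name and the processor's file path are illustrative assumptions.

```typescript
// Sketch: registering the per-step processor as an input processor.
import { Agent } from "@mastra/core/agent";
import { DynamicModelProcessor } from "./processors/step-processor"; // hypothetical local path

export const stepAwareAgent = new Agent({
  id: "step-aware-agent",
  name: "Step Aware Agent",
  model: "openai/gpt-4o",
  inputProcessors: [new DynamicModelProcessor()], // processInputStep runs at each step of the loop
});
```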
@@ -273,7 +350,81 @@ const agent = new Agent({
 
 > **Note:** The example above filters tool calls and limits tokens for the LLM, but these filtered messages will still be saved to memory. To also filter messages before they're saved to memory, manually add memory processors before utility processors. See [Memory Processors](/docs/v1/memory/memory-processors#manual-control-and-deduplication) for details.
 
+## Using workflows as processors
+
+You can use Mastra workflows as processors to create complex processing pipelines with parallel execution, conditional branching, and error handling:
+
+```typescript title="src/mastra/processors/moderation-workflow.ts" showLineNumbers copy
+import { createWorkflow, createStep } from "@mastra/core/workflows";
+import { ProcessorStepSchema } from "@mastra/core/processors";
+import { Agent } from "@mastra/core/agent";
+
+// Create a workflow that runs multiple checks in parallel
+const moderationWorkflow = createWorkflow({
+  id: "moderation-pipeline",
+  inputSchema: ProcessorStepSchema,
+  outputSchema: ProcessorStepSchema,
+})
+  .then(createStep(new LengthValidator({ maxLength: 10000 })))
+  .parallel([
+    createStep(new PIIDetector({ strategy: "redact" })),
+    createStep(new ToxicityChecker({ threshold: 0.8 })),
+  ])
+  .commit();
+
+// Use the workflow as an input processor
+const agent = new Agent({
+  id: "moderated-agent",
+  name: "Moderated Agent",
+  model: "openai/gpt-4o",
+  inputProcessors: [moderationWorkflow],
+});
+```
+
+When an agent is registered with Mastra, processor workflows are automatically registered as workflows, allowing you to view and debug them in the playground.
+
+## Retry mechanism
+
+Processors can request that the LLM retry its response with feedback. This is useful for implementing quality checks, output validation, or iterative refinement:
+
+```typescript title="src/mastra/processors/quality-checker.ts" showLineNumbers copy
+import type { Processor } from "@mastra/core";
+
+export class QualityChecker implements Processor {
+  id = "quality-checker";
+
+  async processOutputStep({ text, abort, retryCount }) {
+    const qualityScore = await evaluateQuality(text);
+
+    if (qualityScore < 0.7 && retryCount < 3) {
+      // Request a retry with feedback for the LLM
+      abort("Response quality score too low. Please provide a more detailed answer.", {
+        retry: true,
+        metadata: { score: qualityScore },
+      });
+    }
+
+    return [];
+  }
+}
+
+const agent = new Agent({
+  id: "quality-agent",
+  name: "Quality Agent",
+  model: "openai/gpt-4o",
+  outputProcessors: [new QualityChecker()],
+  maxProcessorRetries: 3, // Maximum retry attempts (default: 3)
+});
+```
+
+The retry mechanism:
+- Only works in `processOutputStep` and `processInputStep` methods
+- Replays the step with the abort reason added as context for the LLM
+- Tracks retry count via the `retryCount` parameter
+- Respects `maxProcessorRetries` limit on the agent
+
 ## Related documentation
 
 - [Guardrails](/docs/v1/agents/guardrails) - Security and validation processors
 - [Memory Processors](/docs/v1/memory/memory-processors) - Memory-specific processors and automatic integration
+- [Processor Interface](/reference/v1/processors/processor-interface) - Full API reference for processors
@@ -75,6 +75,63 @@ If you followed the automatic installation, you'll see a popup when you open cur
 
 [More info on using MCP servers with Cursor](https://cursor.com/de/docs/context/mcp)
 
+### Antigravity
+
+Google Antigravity is an agent-first development platform that supports MCP servers for accessing external documentation, APIs, and project context.
+
+1. Open your Antigravity MCP configuration file:
+   - Click on **Agent session** and select the **“…” dropdown** at the top of the editor’s side panel, then select **MCP Servers** to access the **MCP Store**.
+   - You can access it through the MCP Store interface in Antigravity
+
+<img
+  src="/img/antigravity_mcp_server.png"
+  alt="Antigravity interface showing configured Mastra MCP server"
+  width={800}
+  className="rounded-lg"
+/>
+
+2. To add a custom MCP server, select **Manage MCP Servers** at the top of the MCP Store and click **View raw config** in the main tab.
+<img
+  src="/img/antigravity_managed_mcp.png"
+  alt="Antigravity interface showing configured Mastra MCP server"
+  width={800}
+  className="rounded-lg"
+/>
+
+3. Add the Mastra MCP server configuration:
+
+```json copy
+{
+  "mcpServers": {
+    "mastra-docs": {
+      "command": "npx",
+      "args": [
+        "-y",
+        "@mastra/mcp-docs-server"
+      ]
+    }
+  }
+}
+```
+
+4. Save the configuration and restart Antigravity
+<img
+  src="/img/antigravity_final_interface_mcp.png"
+  alt="Antigravity interface showing configured Mastra MCP server"
+  width={800}
+  className="rounded-lg"
+/>
+
+Once configured, the Mastra MCP server exposes the following to Antigravity agents:
+- Indexed documentation and API schemas for Mastra, enabling programmatic retrieval of relevant context during code generation
+- Access to example code snippets and usage patterns stored in Mastra Docs
+- Structured data for error handling and debugging references in the editor
+- Metadata about current Mastra project patterns for code suggestion and completion
+
+The MCP server will appear in Antigravity's MCP Store, where you can manage its connection status and authentication if needed.
+
+[More info on using MCP servers with Antigravity](https://antigravity.google)
+
 ### Visual Studio Code
 
 1. Create a `.vscode/mcp.json` file in your workspace
@@ -113,7 +113,7 @@ The OpenAPI and Swagger endpoints are disabled in production by default. To enab
 
 ## Configuration
 
-### Port
+### Port and Host
 
 By default, the development server runs at http://localhost:4111. You can change the `host` and `port` in the Mastra server configuration:
 
@@ -128,6 +128,29 @@ export const mastra = new Mastra({
 });
 ```
 
+### Sub-path Hosting
+
+You can host the Mastra Studio on a sub-path of your existing application using the `studioBase` configuration:
+
+```typescript
+import { Mastra } from "@mastra/core";
+
+export const mastra = new Mastra({
+  server: {
+    studioBase: "/my-mastra-studio",
+  },
+});
+```
+
+This is particularly useful when:
+- Integrating with existing applications
+- Using authentication tools like Cloudflare Zero Trust that benefit from shared domains
+- Managing multiple services under a single domain
+
+**Example URLs:**
+- Default: `http://localhost:4111/` (studio at root)
+- With `studioBase`: `http://localhost:4111/my-mastra-studio/` (studio at sub-path)
+
 ### Local HTTPS
 
 Mastra supports local HTTPS development through the [`--https`](/reference/v1/cli/mastra#--https) flag, which automatically creates and manages certificates for your project. When you run `mastra dev --https`, a private key and certificate are generated for localhost (or your configured host). For custom certificate management, you can provide your own key and certificate files through the server configuration:
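A hedged configuration sketch combining the documented `host`/`port` settings with the new `studioBase` option; the specific values are illustrative, and `port: 4111` simply restates the documented default.

```typescript
// Sketch: one server block covering "Port and Host" plus "Sub-path Hosting".
import { Mastra } from "@mastra/core";

export const mastra = new Mastra({
  server: {
    host: "0.0.0.0",                 // bind address (see "Port and Host")
    port: 4111,                      // default port the CLI checks for a running instance
    studioBase: "/my-mastra-studio", // serve Mastra Studio under a sub-path
  },
});
```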
@@ -313,3 +313,73 @@ To migrate, remove the `TMetrics` generic parameter and configure scorers using
   // ...
 });
 ```
+
+### Tripwire response format changed
+
+The tripwire response format has changed from separate `tripwire` and `tripwireReason` fields to a single `tripwire` object containing all related data.
+
+To migrate, update your code to access tripwire data from the new object structure.
+
+```diff
+const result = await agent.generate('Hello');
+
+- if (result.tripwire) {
+-   console.log(result.tripwireReason);
+- }
++ if (result.tripwire) {
++   console.log(result.tripwire.reason);
++   // New fields available:
++   // result.tripwire.retry - whether this step should be retried
++   // result.tripwire.metadata - additional metadata from the processor
++   // result.tripwire.processorId - which processor triggered the tripwire
++ }
+```
+
+For streaming responses:
+
+```diff
+for await (const chunk of stream.fullStream) {
+  if (chunk.type === 'tripwire') {
+-    console.log(chunk.payload.tripwireReason);
++    console.log(chunk.payload.reason);
++    // New fields available:
++    // chunk.payload.retry
++    // chunk.payload.metadata
++    // chunk.payload.processorId
+  }
+}
+```
+
+The step results now also include tripwire information:
+
+```diff
+const result = await agent.generate('Hello');
+
+for (const step of result.steps) {
+-  // No tripwire info on steps
++  if (step.tripwire) {
++    console.log('Step was blocked:', step.tripwire.reason);
++  }
+}
+```
+
+### `prepareStep` messages format
+
+The `prepareStep` callback now receives messages in `MastraDBMessage` format instead of AI SDK v5 model message format. This change unifies `prepareStep` with the new `processInputStep` processor method, which runs at each step of the agentic loop.
+
+If you need the old AI SDK v5 format, use `messageList.get.all.aiV5.model()`:
+
+```diff
+agent.generate({
+  prompt: 'Hello',
+  prepareStep: async ({ messages, messageList }) => {
+-    // messages was AI SDK v5 ModelMessage format
+-    console.log(messages[0].content);
++    // messages is now MastraDBMessage format
++    // Use messageList to get AI SDK v5 format if needed:
++    const aiSdkMessages = messageList.get.all.aiV5.model();
+
+    return { toolChoice: 'auto' };
+  },
+});
+```
@@ -212,17 +212,24 @@ export const agent = new Agent({
     },
     {
       name: "inputProcessors",
-      type: "Processor[] | ({ requestContext: RequestContext }) => Processor[] | Promise<Processor[]>",
+      type: "(Processor | ProcessorWorkflow)[] | ({ requestContext: RequestContext }) => (Processor | ProcessorWorkflow)[] | Promise<(Processor | ProcessorWorkflow)[]>",
       isOptional: true,
       description:
-        "Input processors that can modify or validate messages before they are processed by the agent.
+        "Input processors that can modify or validate messages before they are processed by the agent. Can be individual Processor objects or workflows created with `createWorkflow()` using ProcessorStepSchema.",
     },
     {
       name: "outputProcessors",
-      type: "Processor[] | ({ requestContext: RequestContext }) => Processor[] | Promise<Processor[]>",
+      type: "(Processor | ProcessorWorkflow)[] | ({ requestContext: RequestContext }) => (Processor | ProcessorWorkflow)[] | Promise<(Processor | ProcessorWorkflow)[]>",
       isOptional: true,
       description:
-        "Output processors that can modify or validate messages from the agent
+        "Output processors that can modify or validate messages from the agent before they are sent to the client. Can be individual Processor objects or workflows.",
+    },
+    {
+      name: "maxProcessorRetries",
+      type: "number",
+      isOptional: true,
+      description:
+        "Maximum number of times a processor can request retrying the LLM step.",
     },
   ]}
 />
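A hedged sketch showing the three reference options from the hunk above on a single agent; `moderationWorkflow` and `QualityChecker` refer to the examples defined earlier in this diff, and the agent id/name are illustrative.

```typescript
// Sketch: inputProcessors, outputProcessors, and maxProcessorRetries together.
import { Agent } from "@mastra/core/agent";

export const reviewedAgent = new Agent({
  id: "reviewed-agent",
  name: "Reviewed Agent",
  model: "openai/gpt-4o",
  inputProcessors: [moderationWorkflow],    // Processor or ProcessorWorkflow entries
  outputProcessors: [new QualityChecker()], // output processors may request retries
  maxProcessorRetries: 3,                   // cap on processor-requested retries
});
```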
@@ -25,7 +25,7 @@ This method does not accept any parameters.
       name: "server",
       type: "ServerConfig | undefined",
       description:
-        "The configured server configuration including port, timeout, API routes, middleware, CORS settings, and build options, or undefined if no server has been configured.",
+        "The configured server configuration including port, host, studioBase, timeout, API routes, middleware, CORS settings, and build options, or undefined if no server has been configured.",
     },
   ]}
 />