@mastra/mcp-docs-server 1.1.29-alpha.9 → 1.1.29
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.docs/docs/agents/background-tasks.md +242 -0
- package/.docs/docs/agents/supervisor-agents.md +35 -4
- package/.docs/docs/agents/using-tools.md +1 -0
- package/.docs/docs/streaming/background-task-streaming.md +80 -0
- package/.docs/docs/streaming/overview.md +3 -0
- package/.docs/docs/workspace/filesystem.md +1 -1
- package/.docs/models/index.md +1 -1
- package/.docs/models/providers/deepinfra.md +2 -1
- package/.docs/models/providers/fireworks-ai.md +2 -1
- package/.docs/models/providers/kilo.md +3 -1
- package/.docs/reference/client-js/agents.md +24 -0
- package/.docs/reference/configuration.md +63 -0
- package/.docs/reference/harness/harness-class.md +53 -10
- package/.docs/reference/index.md +2 -0
- package/.docs/reference/observability/tracing/interfaces.md +17 -0
- package/.docs/reference/processors/stream-error-retry-processor.md +54 -0
- package/.docs/reference/streaming/ChunkType.md +140 -0
- package/.docs/reference/streaming/agents/streamUntilIdle.md +94 -0
- package/.docs/reference/workspace/s3-filesystem.md +79 -5
- package/CHANGELOG.md +23 -0
- package/package.json +6 -6
package/.docs/docs/agents/background-tasks.md
ADDED

@@ -0,0 +1,242 @@
+# Background tasks
+
+**Added in:** `@mastra/core@1.28.0`
+
+Background tasks let an agent dispatch a long-running tool call without blocking the agentic loop. The tool returns an immediate acknowledgement, the LLM continues responding, and the task runs to completion in the background. When it finishes, its result is written to memory, and if you use [`streamUntilIdle()`](https://mastra.ai/reference/streaming/agents/streamUntilIdle) the agent is re-invoked automatically so the result is processed in the same call.
+
+## When to use background tasks
+
+Use background tasks when a tool call may take long enough that the user shouldn't wait for it before seeing a response. Common cases:
+
+- Subagent delegations that themselves run multi-step research or writing.
+- Tool calls that hit slow external services, queues, or large data jobs.
+- Workflows triggered from a tool call that may take minutes to complete.
+
+For tool calls that return quickly, foreground execution using `agent.stream()` and `agent.generate()` is simpler.
+
+> **Note:** Background tasks require a configured [storage](https://mastra.ai/docs/memory/storage) backend on the Mastra instance. Tasks are persisted so they survive process restarts.
+
+## Quickstart
+
+Background tasks are off by default. Enable them by setting `backgroundTasks.enabled` on the Mastra instance:
+
+```typescript
+import { Mastra } from '@mastra/core'
+import { LibSQLStore } from '@mastra/libsql'
+
+export const mastra = new Mastra({
+  storage: new LibSQLStore({ id: 'storage', url: 'file:mastra.db' }),
+  backgroundTasks: {
+    enabled: true,
+    globalConcurrency: 10,
+    perAgentConcurrency: 5,
+    backpressure: 'queue',
+    defaultTimeoutMs: 300_000,
+  },
+})
+```
+
+The full set of options is listed in the [backgroundTasks configuration reference](https://mastra.ai/reference/configuration).
+
+## Run a tool in the background
+
+Enabling the manager doesn't run anything in the background by itself; every tool defaults to foreground execution. You can run a tool in the background at one of three layers, in priority order:
+
+1. **LLM per-call override**: the model decides the call should run in the background and includes a `_background` field in the tool arguments.
+2. **Agent-level config**: the agent declares which of its tools are background-eligible.
+3. **Tool-level config**: the tool declares itself background-eligible.
+
+### Tool-level
+
+Set `backgroundTasks.enabled: true` on the tool definition. Tools opted in at this layer run in the background whenever they are called by an agent that has the manager enabled.
+
+```typescript
+import { createTool } from '@mastra/core/tools'
+import { z } from 'zod'
+
+export const researchTool = createTool({
+  id: 'research',
+  description: 'Run a long research job',
+  inputSchema: z.object({ topic: z.string() }),
+  backgroundTasks: {
+    enabled: true,
+    timeoutMs: 600_000,
+    maxRetries: 1,
+  },
+  execute: async ({ context }) => {
+    // ...
+  },
+})
+```
+
+### Agent-level
+
+Use `backgroundTasks.tools` on the agent to opt in specific tools, override timeouts for individual tools, or run all background-eligible tools in the background. Use `disabled: true` to short-circuit background dispatch for the agent entirely.
+
+```typescript
+import { Agent } from '@mastra/core/agent'
+
+export const researcher = new Agent({
+  id: 'researcher',
+  instructions: 'You research topics and answer questions.',
+  model: 'openai/gpt-5.4',
+  tools: { researchTool, summarizeTool },
+  backgroundTasks: {
+    tools: {
+      researchTool: { enabled: true, timeoutMs: 600_000 },
+      summarizeTool: false,
+    },
+  },
+})
+```
+
+Set `tools: 'all'` to opt in every tool the agent has.
+
+### LLM per-call override
+
+When a tool is registered on an agent that has background tasks enabled, the model can include a `_background` field in the tool arguments to override the resolved configuration for that specific call. The model only includes what it wants to override; all fields in `_background` are optional. The override is stripped from the arguments before the tool runs.
+
+```json
+{
+  "topic": "solana",
+  "_background": { "enabled": true, "timeoutMs": 900000 }
+}
+```
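The stripping step can be sketched as a small helper that splits `_background` out of the raw arguments before the tool's `execute` sees them. The shapes below are illustrative only; the real dispatch logic lives inside `@mastra/core`:

```typescript
// Hypothetical shape of the per-call override a model may attach.
interface BackgroundOverride {
  enabled?: boolean
  timeoutMs?: number
  maxRetries?: number
}

// Separate the `_background` override from the arguments the tool will receive.
function extractBackgroundOverride(rawArgs: Record<string, unknown>): {
  args: Record<string, unknown>
  override: BackgroundOverride | undefined
} {
  const { _background, ...args } = rawArgs
  return { args, override: _background as BackgroundOverride | undefined }
}

const { args, override } = extractBackgroundOverride({
  topic: 'solana',
  _background: { enabled: true, timeoutMs: 900000 },
})
// `args` now contains only the tool's own fields; the override is handled separately.
```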
+
+### Resolution order
+
+When a tool call is dispatched, the resolved background config is computed in this priority order:
+
+1. LLM `_background` override (if present in the call's arguments).
+2. Agent-level `backgroundTasks.tools` entry for the tool.
+3. Tool-level `backgroundTasks` config.
+4. Manager defaults (`defaultTimeoutMs`, `defaultRetries`).
+
+If the agent has `backgroundTasks.disabled: true`, every tool call runs synchronously regardless of the layers above.
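First-defined-wins resolution across the four layers can be modeled like this. The config shape and defaults here are a sketch for illustration, not Mastra's actual resolver:

```typescript
interface BackgroundConfig {
  enabled?: boolean
  timeoutMs?: number
  maxRetries?: number
}

// Resolve each field from the highest-priority layer that defines it.
// Layers are ordered: LLM override, agent entry, tool config, manager defaults.
function resolveBackgroundConfig(layers: Array<BackgroundConfig | undefined>): BackgroundConfig {
  const pick = <K extends keyof BackgroundConfig>(key: K): BackgroundConfig[K] =>
    layers.find(layer => layer?.[key] !== undefined)?.[key]
  return {
    enabled: pick('enabled') ?? false,
    timeoutMs: pick('timeoutMs') ?? 300_000,
    maxRetries: pick('maxRetries') ?? 0,
  }
}

const resolved = resolveBackgroundConfig([
  { timeoutMs: 900_000 },                // 1. LLM `_background` override
  { enabled: true },                     // 2. agent-level entry
  { timeoutMs: 600_000, maxRetries: 1 }, // 3. tool-level config
  { timeoutMs: 300_000, maxRetries: 0 }, // 4. manager defaults
])
// timeoutMs comes from layer 1, enabled from layer 2, maxRetries from layer 3.
```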
+
+## Background task stream chunks
+
+When a tool call dispatches as a background task, two streams may surface lifecycle events for it: the agent's own stream and the [`backgroundTaskManager.stream()`](https://mastra.ai/docs/streaming/background-task-streaming) SSE stream. Each stream covers a different set of chunk types:
+
+| Chunk type                  | When it fires                                                                          | Emitted by     |
+| --------------------------- | -------------------------------------------------------------------------------------- | -------------- |
+| `background-task-started`   | The task has been enqueued and assigned a `taskId`.                                    | Agent stream   |
+| `background-task-running`   | A worker picked up the task and it started executing.                                  | Manager stream |
+| `background-task-progress`  | Reports the number of currently running background tasks.                              | Agent stream   |
+| `background-task-output`    | A streamed output chunk from the task's `execute`.                                     | Manager stream |
+| `background-task-completed` | The task finished successfully. The `payload.result` matches the eventual tool result. | Manager stream |
+| `background-task-failed`    | The task threw or timed out.                                                           | Manager stream |
+| `background-task-cancelled` | The task was cancelled before completing.                                              | Manager stream |
+
+On its own, `agent.stream().fullStream` emits only the agent-loop chunks (`background-task-started`, `background-task-progress`). `agent.streamUntilIdle()` emits the same two chunks and additionally subscribes to the manager pubsub for the run's memory scope, piping the five manager chunks (`background-task-running`, `background-task-output`, `background-task-completed`, `background-task-failed`, `background-task-cancelled`) into the same `fullStream`. Consumers of `streamUntilIdle().fullStream` therefore see all seven types.
+
+`backgroundTaskManager.stream()` emits only the five manager chunks.
+
+The full payload shapes are documented in the [background task chunks reference](https://mastra.ai/reference/streaming/ChunkType).
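The routing rule can be captured as a small lookup: two chunk types originate in the agent loop, five in the manager, and `streamUntilIdle()` exposes the union. This is just a restatement of the table above for illustration:

```typescript
const AGENT_LOOP_CHUNKS = new Set(['background-task-started', 'background-task-progress'])

const MANAGER_CHUNKS = new Set([
  'background-task-running',
  'background-task-output',
  'background-task-completed',
  'background-task-failed',
  'background-task-cancelled',
])

// Which consumers see a given background-task chunk type.
function visibleTo(chunkType: string) {
  return {
    // agent.stream().fullStream: agent-loop chunks only
    stream: AGENT_LOOP_CHUNKS.has(chunkType),
    // streamUntilIdle().fullStream: all seven types
    streamUntilIdle: AGENT_LOOP_CHUNKS.has(chunkType) || MANAGER_CHUNKS.has(chunkType),
    // backgroundTaskManager.stream(): manager chunks only
    managerStream: MANAGER_CHUNKS.has(chunkType),
  }
}
```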
+
+## Keep the agent stream open with `streamUntilIdle()`
+
+`agent.stream()` returns once the LLM emits a final response, even if a background task is still running. Use `agent.streamUntilIdle()` when you want the stream to stay open until every dispatched background task has completed and the LLM has had a chance to respond to the result:
+
+```typescript
+const stream = await agent.streamUntilIdle('Research solana for me', {
+  memory: { thread: 't1', resource: 'u1' },
+  maxIdleMs: 5 * 60_000,
+})
+
+for await (const chunk of stream.fullStream) {
+  // chunks from the initial turn AND any continuation turns triggered by
+  // background task completions flow through here
+}
+```
+
+When a background task completes, its result is injected into the agent's memory and `streamUntilIdle()` re-enters the agentic loop so the LLM can react to it. The stream closes when no tasks are running and no completions are queued.
+
+`maxIdleMs` caps how long the stream waits between turns. The timer only runs while the wrapper is between turns, so a slow first token won't close the stream. The default is 5 minutes.
+
+> **Note:** Visit [`Agent.streamUntilIdle()`](https://mastra.ai/reference/streaming/agents/streamUntilIdle) for the full API.
+
+### Aggregate properties
+
+`streamUntilIdle()` returns a `MastraModelOutput` that looks like the one from `stream()`, but only `fullStream` spans the initial turn **and** any auto-continuations. Aggregate properties (`text`, `toolCalls`, `toolResults`, `finishReason`, `messageList`, `getFullOutput()`) still resolve against the **first turn's** internal buffer. If you need an aggregate view across continuations, consume `fullStream` yourself and accumulate.
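For example, a fold over collected `fullStream` chunks can rebuild an all-turns text aggregate. The chunk shapes below are simplified stand-ins (a `text-delta` carrying `payload.text` is an assumption here; check the ChunkType reference for the real payloads):

```typescript
interface Chunk {
  type: string
  payload?: Record<string, unknown>
}

// Accumulate text and completed task ids across every turn of the stream.
function accumulate(chunks: Chunk[]): { text: string; completedTaskIds: string[] } {
  let text = ''
  const completedTaskIds: string[] = []
  for (const chunk of chunks) {
    if (chunk.type === 'text-delta') text += String(chunk.payload?.text ?? '')
    if (chunk.type === 'background-task-completed') completedTaskIds.push(String(chunk.payload?.taskId))
  }
  return { text, completedTaskIds }
}

const out = accumulate([
  { type: 'text-delta', payload: { text: 'Kicking off research. ' } },
  { type: 'background-task-completed', payload: { taskId: 't-1' } },
  { type: 'text-delta', payload: { text: 'Here is what I found.' } },
])
```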
+
+## Subagents in the background
+
+Subagent invocations are dispatched as tool calls under the hood, so the same background configuration applies. The recommended pattern is to opt each subagent in on the supervisor: it's clearer and lets you tune `timeoutMs` per subagent in one place.
+
+```typescript
+import { Agent } from '@mastra/core/agent'
+
+const supervisor = new Agent({
+  id: 'supervisor',
+  instructions: 'Coordinate research and writing using the available agents.',
+  model: 'openai/gpt-5.4',
+  agents: { researchAgent, writingAgent },
+  backgroundTasks: {
+    tools: {
+      researchAgent: { enabled: true, timeoutMs: 900_000 },
+      writingAgent: { enabled: true, timeoutMs: 900_000 },
+    },
+  },
+})
+
+const stream = await supervisor.streamUntilIdle('Research AI in education and write an article', {
+  memory: { thread: 't1', resource: 'u1' },
+})
+```
+
+### Inheriting from the subagent
+
+If a subagent isn't listed under the supervisor's `backgroundTasks.tools` but has background-eligible tools of its own (either via tool-level `backgroundTasks.enabled: true` or its own `backgroundTasks.tools` entry), the framework still dispatches the entire subagent invocation as a background task. The supervisor inherits the subagent's intent: the subagent itself becomes the background task, and its inner tools run in the foreground inside the subagent's loop.
+
+The background config used for the inherited dispatch (for example `waitTimeoutMs`) is derived from the subagent's own `backgroundTasks` config.
+
+```typescript
+const researchAgent = new Agent({
+  id: 'research-agent',
+  description: 'Gathers factual information.',
+  model: 'openai/gpt-5-mini',
+  tools: { deepResearchTool },
+  backgroundTasks: {
+    tools: {
+      deepResearchTool: { enabled: true, timeoutMs: 600_000 },
+    },
+    waitTimeoutMs: 900_000,
+  },
+})
+```
+
+When this `researchAgent` is delegated to from a supervisor that has no `backgroundTasks` configuration for it, the supervisor still dispatches the whole `researchAgent` invocation as a background task, and `deepResearchTool` runs in the foreground inside that invocation instead of dispatching its own nested background task.
+
+Use this pattern when you want a subagent to behave consistently in the background regardless of which supervisor invokes it. Use the supervisor-side opt-in (above) when you want to tune background behavior centrally per supervisor.
+
+## Lifecycle callbacks
+
+Each layer can register terminal-state callbacks. Callbacks at different layers don't replace one another, and success/failure hooks fire for their respective outcomes:
+
+- Tool-level `backgroundTasks.onComplete` / `onFailed`: scoped to one tool.
+- Agent-level `backgroundTasks.onTaskComplete` / `onTaskFailed`: scoped to all tasks dispatched by this agent.
+- Manager-level `onTaskComplete` / `onTaskFailed`: scoped globally.
+
+```typescript
+export const mastra = new Mastra({
+  storage,
+  backgroundTasks: {
+    enabled: true,
+    onTaskComplete: task => {
+      logger.info('Background task complete', { taskId: task.id, toolName: task.toolName })
+    },
+    onTaskFailed: task => {
+      logger.error('Background task failed', { taskId: task.id, error: task.error })
+    },
+  },
+})
+```
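Since hooks at all three layers fire, a completion conceptually fans out to every defined callback. This is an illustrative sketch of that fan-out, not the manager's internal dispatcher:

```typescript
type TaskCallback = (task: { id: string; toolName: string }) => void

// Collect whichever hooks are defined at each layer and invoke them all.
function fireOnComplete(
  task: { id: string; toolName: string },
  hooks: {
    tool?: TaskCallback // tool-level onComplete
    agent?: TaskCallback // agent-level onTaskComplete
    manager?: TaskCallback // manager-level onTaskComplete
  },
): number {
  const fired = [hooks.tool, hooks.agent, hooks.manager].filter(
    (h): h is TaskCallback => typeof h === 'function',
  )
  for (const h of fired) h(task)
  return fired.length
}

const calls: string[] = []
const count = fireOnComplete(
  { id: 'task-1', toolName: 'research' },
  {
    tool: t => calls.push(`tool:${t.id}`),
    manager: t => calls.push(`manager:${t.id}`),
  },
)
// count === 2; the undefined agent-level hook is simply skipped
```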
+
+## Related
+
+- [`Agent.streamUntilIdle()` reference](https://mastra.ai/reference/streaming/agents/streamUntilIdle)
+- [backgroundTasks configuration reference](https://mastra.ai/reference/configuration)
+- [Supervisor agents](https://mastra.ai/docs/agents/supervisor-agents)
+- [Stream chunk types](https://mastra.ai/reference/streaming/ChunkType)
+- [Storage](https://mastra.ai/docs/memory/storage)
package/.docs/docs/agents/supervisor-agents.md
CHANGED

@@ -300,9 +300,38 @@ Success criteria:
 })
 ```

-##
+## Running subagents in the background

-
+Subagent invocations are dispatched as tool calls, so they can run as [background tasks](https://mastra.ai/docs/agents/background-tasks). This is useful when one or more delegations are long-running and you don't want them to block the supervisor's response.
+
+Enable the [backgroundTasks manager](https://mastra.ai/reference/configuration) on the Mastra instance, then opt subagents in on the supervisor:
+
+```typescript
+const supervisor = new Agent({
+  id: 'supervisor',
+  instructions: 'Coordinate research and writing using the available agents.',
+  model: 'openai/gpt-5.4',
+  agents: { researchAgent, writingAgent },
+  backgroundTasks: {
+    tools: {
+      researchAgent: { enabled: true, timeoutMs: 900_000 },
+      writingAgent: { enabled: true, timeoutMs: 900_000 },
+    },
+  },
+})
+
+const stream = await supervisor.streamUntilIdle('Research AI in education and write an article', {
+  memory: { thread: 't1', resource: 'u1' },
+})
+```
+
+Use [`streamUntilIdle()`](https://mastra.ai/reference/streaming/agents/streamUntilIdle) instead of `stream()` so the stream stays open until the subagents complete and the supervisor has had a chance to respond to their results.
+
+If a subagent isn't listed on the supervisor but has its own background-eligible tools, the supervisor still dispatches the subagent as a background task and inherits its config. See [Inheriting from the subagent](https://mastra.ai/docs/agents/background-tasks) for details.
+
+## Subagent versioning
+
+When using the [editor](https://mastra.ai/docs/editor/overview), you can control which stored version of each subagent the supervisor uses at runtime. Set version overrides on the Mastra instance or per invocation:

 ```typescript
 const result = await supervisor.generate('Research and write about AI safety', {
@@ -315,13 +344,15 @@ const result = await supervisor.generate('Research and write about AI safety', {
 })
 ```

-Version overrides propagate automatically through delegation. See [
+Version overrides propagate automatically through delegation. See [Subagent versioning](https://mastra.ai/docs/editor/overview) for details on resolution order and server API usage.

 ## Related

-- [
+- [Background tasks](https://mastra.ai/docs/agents/background-tasks)
+- [Subagent versioning](https://mastra.ai/docs/editor/overview)
 - [Guide: Research coordinator](https://mastra.ai/guides/guide/research-coordinator)
 - [Agent.stream() reference](https://mastra.ai/reference/streaming/agents/stream)
+- [Agent.streamUntilIdle() reference](https://mastra.ai/reference/streaming/agents/streamUntilIdle)
 - [Agent.generate() reference](https://mastra.ai/reference/agents/generate)
 - [Agent approval](https://mastra.ai/docs/agents/agent-approval)
 - [Memory in multi-agent systems](https://mastra.ai/docs/memory/overview)
package/.docs/docs/agents/using-tools.md
CHANGED

@@ -290,6 +290,7 @@ Note that for subagents, you'll see two different identifiers in stream response

 - [`createTool` reference](https://mastra.ai/reference/tools/create-tool)
 - [`Agent.generate()` reference](https://mastra.ai/reference/agents/generate): Runtime options for tool selection, steps, and callbacks
+- [Background tasks](https://mastra.ai/docs/agents/background-tasks): Run long-running tools without blocking the agent loop
 - [MCP overview](https://mastra.ai/docs/mcp/overview)
 - [Dynamic tool search](https://mastra.ai/reference/processors/tool-search-processor): Load tools on demand for agents with large tool libraries
 - [Tools with structured output](https://mastra.ai/docs/agents/structured-output): Model compatibility when combining tools and structured output
package/.docs/docs/streaming/background-task-streaming.md
ADDED

@@ -0,0 +1,80 @@
+# Background task streaming
+
+**Added in:** `@mastra/core@1.28.0`
+
+`mastra.backgroundTaskManager.stream()` returns a `ReadableStream` of [background task](https://mastra.ai/docs/agents/background-tasks) lifecycle events. Use it to monitor running tasks across the system, for example to drive a status dashboard, surface progress in your own UI, or pipe events into an SSE response.
+
+The stream emits the same chunk types that appear inside `Agent.streamUntilIdle()` (`background-task-running`, `background-task-output`, `background-task-completed`, `background-task-failed`, `background-task-cancelled`). See [background task chunks](https://mastra.ai/reference/streaming/ChunkType) for the full payload shapes.
+
+> **Note:** Background tasks must be [enabled on the Mastra instance](https://mastra.ai/reference/configuration) before `backgroundTaskManager` is available. When disabled, `mastra.backgroundTaskManager` is `undefined`.
+
+## Subscribe to all task events
+
+Calling `stream()` with no filter returns a stream of every task event in the system. On connection, the stream emits a snapshot of all currently running tasks, then forwards live events as they happen.
+
+```typescript
+const bgManager = mastra.backgroundTaskManager
+if (!bgManager) throw new Error('Background tasks are not enabled')
+
+const controller = new AbortController()
+const stream = bgManager.stream({ abortSignal: controller.signal })
+
+for await (const chunk of stream) {
+  switch (chunk.type) {
+    case 'background-task-running':
+      console.log('started', chunk.payload.taskId, chunk.payload.toolName)
+      break
+    case 'background-task-completed':
+      console.log('done', chunk.payload.taskId, chunk.payload.result)
+      break
+    case 'background-task-failed':
+      console.error('failed', chunk.payload.taskId, chunk.payload.error)
+      break
+  }
+}
+```
+
+The stream stays open until the caller's `AbortSignal` fires. Always pass an `abortSignal` so you can disconnect cleanly.
+
+## Filter the stream
+
+Pass any combination of filter options to narrow the events you receive. Filters apply to both the initial snapshot and the live event subscription.
+
+```typescript
+const stream = bgManager.stream({
+  agentId: 'researcher',
+  threadId: 't1',
+  resourceId: 'u1',
+  abortSignal: controller.signal,
+})
+```
+
+| Filter        | Description                                         |
+| ------------- | --------------------------------------------------- |
+| `agentId`     | Only events from tasks dispatched by this agent     |
+| `runId`       | Only events from this specific agent run            |
+| `threadId`    | Only events from tasks scoped to this memory thread |
+| `resourceId`  | Only events from tasks scoped to this resource      |
+| `taskId`      | Only events for a single task                       |
+| `abortSignal` | Closes the stream when the signal aborts            |
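Conceptually, an event passes the filter only when every specified field matches. A sketch with hypothetical event shapes (the real filtering happens server-side):

```typescript
interface TaskEvent {
  taskId: string
  agentId?: string
  runId?: string
  threadId?: string
  resourceId?: string
}

type TaskFilter = Partial<TaskEvent>

// An event passes when every field the caller specified matches the event.
function matchesFilter(event: TaskEvent, filter: TaskFilter): boolean {
  return (Object.keys(filter) as Array<keyof TaskFilter>).every(
    key => filter[key] === undefined || filter[key] === event[key],
  )
}

const event: TaskEvent = { taskId: 'task-1', agentId: 'researcher', threadId: 't1' }
```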
+
+## Look up task state directly
+
+For one-off lookups instead of a live stream, use `getTask` and `listTasks`:
+
+```typescript
+const task = await mastra.backgroundTaskManager?.getTask(taskId)
+const { tasks, total } = await mastra.backgroundTaskManager?.listTasks({
+  status: 'running',
+  agentId: 'researcher',
+})
+```
+
+These read from storage rather than the pubsub stream, so they're suitable for paginated lists and detail views.
+
+## Related
+
+- [Background tasks](https://mastra.ai/docs/agents/background-tasks)
+- [`Agent.streamUntilIdle()` reference](https://mastra.ai/reference/streaming/agents/streamUntilIdle)
+- [Background task chunks](https://mastra.ai/reference/streaming/ChunkType)
+- [backgroundTasks configuration reference](https://mastra.ai/reference/configuration)
package/.docs/docs/streaming/overview.md
CHANGED

@@ -29,6 +29,8 @@ for await (const chunk of stream.textStream) {

 > **Info:** Visit [Agent.stream()](https://mastra.ai/reference/streaming/agents/stream) for more information.

+> **Tip:** For agents that dispatch [background tasks](https://mastra.ai/docs/agents/background-tasks), use [`Agent.streamUntilIdle()`](https://mastra.ai/reference/streaming/agents/streamUntilIdle) to keep the stream open until those tasks complete and the agent has had a chance to respond to their results.
+
 ### Output from `Agent.stream()`

 The output streams the generated response from the agent.
@@ -129,5 +131,6 @@ A workflow stream provides access to various response properties:
 ## Related

 - [Streaming events](https://mastra.ai/docs/streaming/events)
+- [Background tasks](https://mastra.ai/docs/agents/background-tasks)
 - [Using Agents](https://mastra.ai/docs/agents/overview)
 - [Workflows overview](https://mastra.ai/docs/workflows/overview)
package/.docs/docs/workspace/filesystem.md
CHANGED

@@ -19,7 +19,7 @@ A filesystem provider handles all file operations for a workspace:
 Available providers:

 - [`LocalFilesystem`](https://mastra.ai/reference/workspace/local-filesystem): Stores files in a directory on disk
-- [`S3Filesystem`](https://mastra.ai/reference/workspace/s3-filesystem): Stores files in Amazon S3 or S3-compatible storage (R2, MinIO)
+- [`S3Filesystem`](https://mastra.ai/reference/workspace/s3-filesystem): Stores files in Amazon S3 or S3-compatible storage (R2, MinIO, Tigris)
 - [`GCSFilesystem`](https://mastra.ai/reference/workspace/gcs-filesystem): Stores files in Google Cloud Storage
 - [`AzureBlobFilesystem`](https://mastra.ai/reference/workspace/azure-blob-filesystem): Stores files in Azure Blob Storage
 - [`AgentFSFilesystem`](https://mastra.ai/reference/workspace/agentfs-filesystem): Stores files in a Turso/SQLite database via AgentFS
package/.docs/models/index.md
CHANGED

@@ -1,6 +1,6 @@
 # Model Providers

-Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to
+Mastra provides a unified interface for working with LLMs across multiple providers, giving you access to 3761 models from 106 providers through a single API.

 ## Features
package/.docs/models/providers/deepinfra.md
CHANGED

@@ -1,6 +1,6 @@
 # Deep Infra

-Access
+Access 33 Deep Infra models through Mastra's model router. Authentication is handled automatically using the `DEEPINFRA_API_KEY` environment variable.

 Learn more in the [Deep Infra documentation](https://deepinfra.com/models).

@@ -36,6 +36,7 @@ for await (const chunk of stream) {
 | `deepinfra/anthropic/claude-4-opus` | 200K | | | | | | $17 | $83 |
 | `deepinfra/deepseek-ai/DeepSeek-R1-0528` | 164K | | | | | | $0.50 | $2 |
 | `deepinfra/deepseek-ai/DeepSeek-V3.2` | 164K | | | | | | $0.26 | $0.38 |
+| `deepinfra/deepseek-ai/DeepSeek-V4-Pro` | 66K | | | | | | $2 | $3 |
 | `deepinfra/meta-llama/Llama-3.1-70B-Instruct` | 131K | | | | | | $0.40 | $0.40 |
 | `deepinfra/meta-llama/Llama-3.1-70B-Instruct-Turbo` | 131K | | | | | | $0.40 | $0.40 |
 | `deepinfra/meta-llama/Llama-3.1-8B-Instruct` | 131K | | | | | | $0.02 | $0.05 |
package/.docs/models/providers/fireworks-ai.md
CHANGED

@@ -1,6 +1,6 @@
 # Fireworks AI

-Access
+Access 19 Fireworks AI models through Mastra's model router. Authentication is handled automatically using the `FIREWORKS_API_KEY` environment variable.

 Learn more in the [Fireworks AI documentation](https://fireworks.ai/docs/).

@@ -36,6 +36,7 @@ for await (const chunk of stream) {
 | --------------------------------------------------------- | ------- | ----- | --------- | ----- | ----- | ----- | ---------- | ----------- |
 | `fireworks-ai/accounts/fireworks/models/deepseek-v3p1` | 164K | | | | | | $0.56 | $2 |
 | `fireworks-ai/accounts/fireworks/models/deepseek-v3p2` | 160K | | | | | | $0.56 | $2 |
+| `fireworks-ai/accounts/fireworks/models/deepseek-v4-pro` | 1.0M | | | | | | $2 | $3 |
 | `fireworks-ai/accounts/fireworks/models/glm-4p5` | 131K | | | | | | $0.55 | $2 |
 | `fireworks-ai/accounts/fireworks/models/glm-4p5-air` | 131K | | | | | | $0.22 | $0.88 |
 | `fireworks-ai/accounts/fireworks/models/glm-4p7` | 198K | | | | | | $0.60 | $2 |
package/.docs/models/providers/kilo.md
CHANGED

@@ -1,6 +1,6 @@
 # Kilo Gateway

-Access
+Access 337 Kilo Gateway models through Mastra's model router. Authentication is handled automatically using the `KILO_API_KEY` environment variable.

 Learn more in the [Kilo Gateway documentation](https://kilo.ai).

@@ -357,6 +357,8 @@ for await (const chunk of stream) {
 | `kilo/xiaomi/mimo-v2-flash` | 262K | | | | | | $0.09 | $0.29 |
 | `kilo/xiaomi/mimo-v2-omni` | 262K | | | | | | $0.40 | $2 |
 | `kilo/xiaomi/mimo-v2-pro` | 1.0M | | | | | | $1 | $3 |
+| `kilo/xiaomi/mimo-v2.5` | 1.0M | | | | | | $0.40 | $2 |
+| `kilo/xiaomi/mimo-v2.5-pro` | 1.0M | | | | | | $1 | $3 |
 | `kilo/z-ai/glm-4-32b` | 128K | | | | | | $0.10 | $0.10 |
 | `kilo/z-ai/glm-4.5` | 131K | | | | | | $0.60 | $2 |
 | `kilo/z-ai/glm-4.5-air` | 131K | | | | | | $0.13 | $0.85 |
package/.docs/reference/client-js/agents.md
CHANGED

@@ -151,6 +151,30 @@ for await (const part of uiMessageStream) {
 }
 ```

+### `streamUntilIdle()`
+
+Stream a response and keep the stream open until every [background task](https://mastra.ai/docs/agents/background-tasks) dispatched during the run completes. The server re-enters the agentic loop on each task completion so the LLM can react to results in the same call. Requires background tasks to be [enabled on the Mastra instance](https://mastra.ai/reference/configuration) and a memory thread; otherwise the call falls through to a plain `stream()`.
+
+```typescript
+const response = await agent.streamUntilIdle('Research solana for me', {
+  memory: {
+    thread: 'thread-1',
+    resource: 'resource-1',
+  },
+  maxIdleMs: 5 * 60_000,
+})
+
+response.processDataStream({
+  onChunk: async chunk => {
+    if (chunk.type === 'background-task-completed') {
+      console.log('task complete:', chunk.payload.taskId)
+    }
+  },
+})
+```
+
+The stream emits the same chunk types as `stream()`, plus `background-task-*` chunks for task lifecycle events. Visit [`Agent.streamUntilIdle()`](https://mastra.ai/reference/streaming/agents/streamUntilIdle) for the full server-side API and [background task chunks](https://mastra.ai/reference/streaming/ChunkType) for the payload shapes.
+
 ### `getTool()`

 Retrieve information about a specific tool available to the agent:
@@ -36,6 +36,69 @@ export const mastra = new Mastra({
})
```

### backgroundTasks

**Type:** `BackgroundTaskManagerConfig`

Enables and configures the background task manager. When enabled, agents can dispatch long-running tool calls (including subagent invocations) to run asynchronously while the agentic loop continues. Tasks are persisted, so a configured `storage` backend is required.

Visit the [Background tasks documentation](https://mastra.ai/docs/agents/background-tasks) to learn more.

```typescript
import { Mastra } from '@mastra/core'
import { LibSQLStore } from '@mastra/libsql'

export const mastra = new Mastra({
  storage: new LibSQLStore({
    id: 'mastra-storage',
    url: 'file:./mastra.db',
  }),
  backgroundTasks: {
    enabled: true,
    globalConcurrency: 10,
    perAgentConcurrency: 5,
    backpressure: 'queue',
    defaultTimeoutMs: 300_000,
  },
})
```

**enabled** (`boolean`): Whether background tasks are enabled. The manager only initializes when this is `true` and a storage backend is configured. (Default: `false`)

**globalConcurrency** (`number`): Maximum number of background tasks running concurrently across all agents. (Default: `10`)

**perAgentConcurrency** (`number`): Maximum number of background tasks running concurrently for a single agent. (Default: `5`)

**backpressure** (`'queue' | 'reject' | 'fallback-sync'`): Behavior when a concurrency limit is reached. `'queue'` waits for a slot, `'reject'` throws on enqueue, and `'fallback-sync'` runs the tool synchronously in the agentic loop instead. (Default: `'queue'`)

**defaultTimeoutMs** (`number`): Default per-task timeout in milliseconds. Can be overridden per-tool or per-call. (Default: `300000`)

**defaultRetries** (`RetryConfig`): Default retry policy applied to tasks that fail.

**defaultRetries.maxRetries** (`number`): Maximum retry attempts before the task is marked failed.

**defaultRetries.retryDelayMs** (`number`): Delay between retries in milliseconds.

**defaultRetries.backoffMultiplier** (`number`): Multiplier applied to `retryDelayMs` on each subsequent attempt.

**defaultRetries.maxRetryDelayMs** (`number`): Upper bound on the retry delay regardless of backoff.

**defaultRetries.retryableErrors** (`(error: Error) => boolean`): Predicate that decides whether a given error should be retried. (Default: retry all errors)
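The interaction of `retryDelayMs`, `backoffMultiplier`, and `maxRetryDelayMs` can be sketched as follows. This is a hypothetical helper for illustration only, not part of the Mastra API:

```typescript
// Hypothetical sketch of how a retry delay could be derived from the
// defaultRetries settings above; not the actual Mastra implementation.
interface RetrySettings {
  retryDelayMs: number
  backoffMultiplier: number
  maxRetryDelayMs: number
}

// attempt is 1-based: the first retry uses retryDelayMs, each subsequent
// retry multiplies by backoffMultiplier, capped at maxRetryDelayMs.
function retryDelay(settings: RetrySettings, attempt: number): number {
  const raw = settings.retryDelayMs * Math.pow(settings.backoffMultiplier, attempt - 1)
  return Math.min(raw, settings.maxRetryDelayMs)
}

const settings = { retryDelayMs: 1_000, backoffMultiplier: 2, maxRetryDelayMs: 5_000 }
// With these settings, the first four retries wait 1000, 2000, 4000, 5000 ms.
const delays = [1, 2, 3, 4].map(attempt => retryDelay(settings, attempt))
```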
**cleanup** (`CleanupConfig`): Controls how long task records are kept and how often the cleanup process runs.

**cleanup.completedTtlMs** (`number`): How long to keep completed task records, in milliseconds. (Default: 1 hour)

**cleanup.failedTtlMs** (`number`): How long to keep failed task records, in milliseconds. (Default: 24 hours)

**cleanup.cleanupIntervalMs** (`number`): How often the cleanup process runs, in milliseconds. (Default: 1 minute)

**waitTimeoutMs** (`number`): How long the agentic loop waits for a background task to complete before moving on. If a task has not finished within this time, the loop proceeds without setting `isContinued`. Can be overridden per-agent or per-tool. (Default: `undefined`, do not wait)

**onTaskComplete** (`(task: BackgroundTask) => void | Promise<void>`): Global callback invoked when any background task completes successfully. Fires in addition to per-tool and per-agent callbacks.

**onTaskFailed** (`(task: BackgroundTask) => void | Promise<void>`): Global callback invoked when any background task fails. Fires in addition to per-tool and per-agent callbacks.

### deployer

**Type:** `MastraDeployer`
@@ -90,6 +90,8 @@ await harness.sendMessage({ content: 'Hello!' })

**subagents.stopWhen** (`LoopOptions['stopWhen']`): Optional stop condition for the spawned subagent.

**subagents.forked** (`boolean`): When `true`, calls to this subagent default to forked mode: the subagent runs on a clone of the parent thread, reusing the parent agent's instructions, tools, and model so the prompt-cache prefix stays intact. Requires `memory` to be configured. The subagent definition's own `instructions`, `tools`, `allowedHarnessTools`, `allowedWorkspaceTools`, `defaultModelId`, `maxSteps`, and `stopWhen` are ignored in forked mode. Callers can still override per-invocation via `forked: false` in the `subagent` tool input. See the [Forked subagents](#forked-subagents) section below for full semantics.

**resolveModel** (`(modelId: string) => MastraLanguageModel`): Converts a model ID string (e.g., `"anthropic/claude-sonnet-4"`) to a language model instance. Used by subagents and observational memory model resolution.

**omConfig** (`HarnessOMConfig`): Default configuration for observational memory (observer/reflector model IDs and thresholds).
@@ -286,16 +288,21 @@ await harness.switchThread({ threadId: 'thread-abc123' })

#### `listThreads(options?)`

List threads from storage. By default, only threads for the current resource are returned, and transient [forked subagent](#forked-subagents) threads are hidden so they don't appear in user-facing thread pickers or startup flows.

```typescript
// List threads for current resource (forks hidden)
const threads = await harness.listThreads()

// List all threads across resources (forks still hidden)
const allThreads = await harness.listThreads({ allResources: true })

// Include forked subagent threads (debug / admin tooling only)
const everything = await harness.listThreads({ includeForkedSubagents: true })
```

Fork threads are tagged with `metadata.forkedSubagent === true` (and `metadata.parentThreadId`) by the harness. Set `includeForkedSubagents: true` to opt back into seeing them, e.g. for a debug panel.

#### `renameThread({ title })`

Update the title of the current thread.
@@ -677,6 +684,42 @@ await harness.setSubagentModelId({ modelId: 'anthropic/claude-sonnet-4-6' })
await harness.setSubagentModelId({ modelId: 'anthropic/claude-haiku-3.5', agentType: 'explore' })
```

### Forked subagents

By default, a subagent runs with a fresh context — it doesn't see the parent conversation. **Forked subagents** opt into a different model: the subagent runs on a clone of the parent thread and reuses the parent agent's full configuration. This is useful when the subagent needs the full context of the conversation so far (e.g., recalling earlier user-supplied facts), and when prompt-cache hit rates matter.

#### Enabling forked mode

Set `forked: true` either on the [`HarnessSubagent` definition](#configuration) (per-type default) or on each `subagent` tool call (per-invocation override):

```typescript
// Per-type default — every call to this subagent forks unless overridden.
const subagents: HarnessSubagent[] = [
  {
    id: 'collaborator',
    name: 'Collaborator',
    description: 'Continues the conversation in a fork to try a different angle.',
    instructions: '...',
    forked: true,
  },
]
```

The model can also pass `forked: true` (or `forked: false`) per-invocation in the `subagent` tool input; the per-invocation value wins.

#### Semantics and constraints

- **Memory required.** Forked mode calls `memory.cloneThread` to create the fork, so the harness must have `memory` configured and an active parent thread. Calls without those return a structured error rather than throwing.
- **Parent agent reused.** The fork runs through the parent agent's `stream(...)` call. The parent's instructions, tools, model, `maxSteps`, and `stopWhen` apply. The subagent definition's `instructions`, `tools`, `allowedHarnessTools`, `allowedWorkspaceTools`, `defaultModelId`, `maxSteps`, and `stopWhen` are ignored in forked mode — this is what preserves the prompt-cache prefix.
- **Toolsets inherited, recursive forks blocked at runtime.** Forks inherit the parent's toolsets verbatim (`ask_user`, `submit_plan`, user-configured harness tools, _including the `subagent` tool itself_) so the LLM request prefix — system prompt + tool list + tool schemas + tool descriptions — stays byte-identical to the parent's. This is what preserves the prompt cache. The `subagent` entry is kept on the model side, but its `execute` is replaced inside the fork with a stub that returns a non-error "tool unavailable inside a forked subagent" message: nested forks are blocked at the runtime layer without perturbing the cached prefix.
- **Fork threads are tagged.** Each fork thread is created with `metadata.forkedSubagent === true` and `metadata.parentThreadId === <parent>`. By default, [`listThreads`](#listthreadsoptions) hides these so they don't show up in user-facing thread pickers or startup flows. Pass `includeForkedSubagents: true` to see them in admin / debug tooling.
- **Save queue flushed before clone.** The agent stream batches message saves through a debounced `SaveQueueManager`, so the parent's latest user / assistant turn may not be on disk yet when the subagent tool call fires. The fork tool flushes pending saves first via the `flushMessages` callback on `AgentToolExecutionContext` before cloning, so the fork actually carries the latest turn. Flush failures are non-fatal — the clone still runs.
- **Parent thread untouched.** All subagent activity (messages, OM writes) lands on the fork. The parent thread is never appended to during a forked subagent run.

#### When to prefer non-forked mode

Forked mode trades isolation for context inheritance. If the subagent should run with a strictly smaller toolset, a different system prompt, or a cheaper model, use the default (non-forked) mode and pass any required context explicitly in the `task` description.

### Events

#### `subscribe(listener)`
@@ -753,13 +796,13 @@ The harness emits events through registered listeners. The following table lists

The harness provides built-in tools to agents in every mode:

| Tool          | Description |
| ------------- | ----------- |
| `ask_user`    | Ask the user a question and wait for their response. Supports free text, single-select choices, and multi-select choices. |
| `submit_plan` | Submit a plan for user review and approval. |
| `task_write`  | Create or update a structured task list for tracking progress. |
| `task_check`  | Check the completion status of the current task list. |
| `subagent`    | Spawn a focused subagent with constrained tools (only available when `subagents` is configured). Pass `forked: true` to inherit the parent conversation — see [Forked subagents](#forked-subagents). |

### `ask_user` selections
package/.docs/reference/index.md
CHANGED

@@ -169,6 +169,7 @@ The Reference section provides documentation of Mastra's API, including paramete
- [PromptInjectionDetector](https://mastra.ai/reference/processors/prompt-injection-detector)
- [SemanticRecall](https://mastra.ai/reference/processors/semantic-recall-processor)
- [SkillSearchProcessor](https://mastra.ai/reference/processors/skill-search-processor)
- [StreamErrorRetryProcessor](https://mastra.ai/reference/processors/stream-error-retry-processor)
- [SystemPromptScrubber](https://mastra.ai/reference/processors/system-prompt-scrubber)
- [TokenLimiterProcessor](https://mastra.ai/reference/processors/token-limiter-processor)
- [ToolCallFilter](https://mastra.ai/reference/processors/tool-call-filter)

@@ -209,6 +210,7 @@ The Reference section provides documentation of Mastra's API, including paramete
- [MastraModelOutput](https://mastra.ai/reference/streaming/agents/MastraModelOutput)
- [.stream()](https://mastra.ai/reference/streaming/agents/stream)
- [.streamLegacy()](https://mastra.ai/reference/streaming/agents/streamLegacy)
- [.streamUntilIdle()](https://mastra.ai/reference/streaming/agents/streamUntilIdle)
- [.observeStream()](https://mastra.ai/reference/streaming/workflows/observeStream)
- [.resumeStream()](https://mastra.ai/reference/streaming/workflows/resumeStream)
- [.stream()](https://mastra.ai/reference/streaming/workflows/stream)
@@ -126,6 +126,21 @@ interface ObservabilityExporter {
  /** Initialize exporter with tracing configuration and/or access to Mastra */
  init?(options: InitExporterOptions): void

  /** Handle tracing events */
  onTracingEvent?(event: TracingEvent): void | Promise<void>

  /** Handle log events */
  onLogEvent?(event: LogEvent): void | Promise<void>

  /** Handle metric events */
  onMetricEvent?(event: MetricEvent): void | Promise<void>

  /** Handle score events */
  onScoreEvent?(event: ScoreEvent): void | Promise<void>

  /** Handle feedback events */
  onFeedbackEvent?(event: FeedbackEvent): void | Promise<void>

  /** Export tracing events */
  exportTracingEvent(event: TracingEvent): Promise<void>

@@ -154,6 +169,8 @@ interface ObservabilityExporter {
}
```

Event callback payloads use observability event bus envelopes: `TracingEvent` carries span lifecycle events with `exportedSpan`, `LogEvent` wraps `ExportedLog` in `log`, `MetricEvent` wraps `ExportedMetric` in `metric`, `ScoreEvent` wraps `ExportedScore` in `score`, and `FeedbackEvent` wraps `ExportedFeedback` in `feedback`. For Cloud exporter behavior for these callbacks, see [CloudExporter](https://mastra.ai/reference/observability/tracing/exporters/cloud-exporter).
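A minimal sketch of consuming these envelopes is shown below. The envelope interfaces here are simplified stand-ins based on the wrapping described above, not the exported Mastra types:

```typescript
// Simplified envelope shapes for illustration; the real Mastra event
// types carry additional fields beyond the wrapped payloads shown here.
interface LogEvent { log: { message: string } }
interface MetricEvent { metric: { name: string; value: number } }

// A toy exporter that records each event as a formatted line.
class LineBufferExporter {
  lines: string[] = []

  onLogEvent(event: LogEvent): void {
    this.lines.push(`log: ${event.log.message}`)
  }

  onMetricEvent(event: MetricEvent): void {
    this.lines.push(`metric: ${event.metric.name}=${event.metric.value}`)
  }
}
```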
### `SpanOutputProcessor`

Interface for span output processors.
@@ -0,0 +1,54 @@
# StreamErrorRetryProcessor

`StreamErrorRetryProcessor` is an **error processor** that retries transient errors emitted after an LLM stream starts. It includes built-in matching for OpenAI Responses stream errors and supports additional matchers for other provider-specific stream error shapes.

The processor isn't enabled by default in core. Add it to `errorProcessors` for agents that need stream-error retry handling.

## Usage example

Add `StreamErrorRetryProcessor` to `errorProcessors`:

```typescript
import { Agent } from '@mastra/core/agent'
import { StreamErrorRetryProcessor } from '@mastra/core/processors'

export const agent = new Agent({
  name: 'openai-agent',
  instructions: 'You are a helpful assistant.',
  model: 'openai/gpt-5',
  errorProcessors: [new StreamErrorRetryProcessor()],
})
```

## How it works

The processor checks the error and its cause chain for:

- Provider retry metadata: `isRetryable === true`
- Built-in OpenAI Responses stream error matching
- Matcher results: any configured matcher that returns `true`

When the error is retryable, the processor returns `{ retry: true }`. It doesn't mutate messages.
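The cause-chain check above can be sketched as follows. This is a hypothetical helper for illustration, not the exported implementation:

```typescript
// Hypothetical sketch of walking an error's cause chain for retry
// metadata; not the exported Mastra implementation.
function isRetryableInChain(error: unknown): boolean {
  let current: unknown = error
  while (current instanceof Error) {
    // Provider retry metadata: some SDK errors carry an isRetryable flag.
    if ((current as Error & { isRetryable?: boolean }).isRetryable === true) {
      return true
    }
    // Follow the nested cause, if any.
    current = (current as Error & { cause?: unknown }).cause
  }
  return false
}
```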
## Default OpenAI Responses matcher

`isRetryableOpenAIResponsesStreamError` matches OpenAI Responses stream error chunks with `type: 'error'` or `type: 'response.failed'`. It retries known transient OpenAI error codes and, as a fallback, errors with explicit retry guidance such as `You can retry your request`.

`StreamErrorRetryProcessor` includes this matcher by default. You can also import it and reuse it in custom retry logic.

## Constructor parameters

**options** (`StreamErrorRetryProcessorOptions`): Configuration for retry handling.

## Properties

**id** (`'stream-error-retry-processor'`): Processor identifier.

**name** (`'Stream Error Retry Processor'`): Processor display name.

**processAPIError** (`(args: ProcessAPIErrorArgs) => ProcessAPIErrorResult | void`): Retries stream errors up to the configured retry limit.

## Related

- [Processor interface](https://mastra.ai/reference/processors/processor-interface)
- [Processors](https://mastra.ai/docs/agents/processors)
@@ -398,6 +398,146 @@ Contains output from workflow step execution, used primarily for usage tracking

**payload.output** (`ChunkType`): Nested chunk data from step execution, typically containing finish events or other step results

## Background task chunks

Emitted when a tool call is dispatched as a [background task](https://mastra.ai/docs/agents/background-tasks) and `streamUntilIdle()` is used.

### background-task-started

Emitted when a tool call is enqueued as a background task and assigned a `taskId`.

**type** (`"background-task-started"`): Chunk type identifier

**payload** (`BackgroundTaskStartedPayload`): Identifies the newly enqueued task

**payload.taskId** (`string`): Unique identifier for the background task

**payload.toolName** (`string`): Name of the tool being executed

**payload.toolCallId** (`string`): Tool-call ID from the originating LLM tool call

### background-task-running

Emitted when a worker picks up the task and execution begins.

**type** (`"background-task-running"`): Chunk type identifier

**payload** (`BackgroundTaskRunningPayload`): Details about the running task

**payload.taskId** (`string`): Unique identifier for the background task

**payload.toolName** (`string`): Name of the tool being executed

**payload.toolCallId** (`string`): Tool-call ID from the originating LLM tool call

**payload.runId** (`string`): Run ID of the agent that dispatched the task

**payload.agentId** (`string`): ID of the agent that dispatched the task

**payload.startedAt** (`Date`): Timestamp at which execution started

**payload.args** (`Record<string, unknown>`): Arguments passed to the tool's execute function

### background-task-progress

Periodic snapshot of how many background tasks are currently running across the agent.

**type** (`"background-task-progress"`): Chunk type identifier

**payload** (`BackgroundTaskProgressPayload`): Aggregate progress for all running tasks

**payload.taskIds** (`string[]`): IDs of all currently running background tasks

**payload.runningCount** (`number`): Number of background tasks currently running

**payload.elapsedMs** (`number`): Milliseconds elapsed since the agent run started

### background-task-output

A streamed output chunk emitted by the task's `execute` function. Wraps an inner [`tool-output`](#tool-output) chunk.

**type** (`"background-task-output"`): Chunk type identifier

**payload** (`BackgroundTaskOutputPayload`): Streamed output from the running task

**payload.taskId** (`string`): Unique identifier for the background task

**payload.toolName** (`string`): Name of the tool being executed

**payload.toolCallId** (`string`): Tool-call ID from the originating LLM tool call

**payload.runId** (`string`): Run ID of the agent that dispatched the task

**payload.agentId** (`string`): ID of the agent that dispatched the task

**payload.payload** (`ToolOutputChunk`): Inner tool-output chunk produced by the task

### background-task-completed

Emitted when the task finishes successfully. Triggers a continuation turn when consumed by [`Agent.streamUntilIdle()`](https://mastra.ai/reference/streaming/agents/streamUntilIdle).

**type** (`"background-task-completed"`): Chunk type identifier

**payload** (`BackgroundTaskResultPayload`): The completed task's result

**payload.taskId** (`string`): Unique identifier for the background task

**payload.toolName** (`string`): Name of the tool that was executed

**payload.toolCallId** (`string`): Tool-call ID from the originating LLM tool call

**payload.agentId** (`string`): ID of the agent that dispatched the task

**payload.runId** (`string`): Run ID of the agent that dispatched the task

**payload.result** (`unknown`): The tool's resolved return value

**payload.completedAt** (`Date`): Timestamp at which the task completed

**payload.isError** (`boolean`): True when the tool returned an error result rather than throwing

### background-task-failed

Emitted when the task throws or times out. Triggers a continuation turn when consumed by [`Agent.streamUntilIdle()`](https://mastra.ai/reference/streaming/agents/streamUntilIdle).

**type** (`"background-task-failed"`): Chunk type identifier

**payload** (`BackgroundTaskFailedPayload`): Failure details for the task

**payload.taskId** (`string`): Unique identifier for the background task

**payload.toolName** (`string`): Name of the tool that was executed

**payload.toolCallId** (`string`): Tool-call ID from the originating LLM tool call

**payload.runId** (`string`): Run ID of the agent that dispatched the task

**payload.agentId** (`string`): ID of the agent that dispatched the task

**payload.error** (`{ message: string }`): Error details thrown by the task

**payload.completedAt** (`Date`): Timestamp at which the task failed

### background-task-cancelled

Emitted when the task is cancelled before completing. Triggers a continuation turn when consumed by [`Agent.streamUntilIdle()`](https://mastra.ai/reference/streaming/agents/streamUntilIdle).

**type** (`"background-task-cancelled"`): Chunk type identifier

**payload** (`BackgroundTaskCancelledPayload`): Cancellation details for the task

**payload.taskId** (`string`): Unique identifier for the background task

**payload.toolName** (`string`): Name of the tool that was executed

**payload.toolCallId** (`string`): Tool-call ID from the originating LLM tool call

**payload.runId** (`string`): Run ID of the agent that dispatched the task

**payload.agentId** (`string`): ID of the agent that dispatched the task

**payload.completedAt** (`Date`): Timestamp at which the task was cancelled
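A consumer can narrow on `chunk.type` to handle these terminal events. The payload types below are simplified stand-ins built from the fields listed above, not the exact exported types:

```typescript
// Simplified stand-ins for the background-task chunk shapes documented
// above; the real exported types carry more payload fields.
type BackgroundTaskChunk =
  | { type: 'background-task-completed'; payload: { taskId: string; result: unknown; isError: boolean } }
  | { type: 'background-task-failed'; payload: { taskId: string; error: { message: string } } }
  | { type: 'background-task-cancelled'; payload: { taskId: string } }

// Exhaustive switch on the discriminant; TypeScript narrows the payload
// type in each branch.
function describe(chunk: BackgroundTaskChunk): string {
  switch (chunk.type) {
    case 'background-task-completed':
      return `task ${chunk.payload.taskId} ${chunk.payload.isError ? 'returned an error result' : 'completed'}`
    case 'background-task-failed':
      return `task ${chunk.payload.taskId} failed: ${chunk.payload.error.message}`
    case 'background-task-cancelled':
      return `task ${chunk.payload.taskId} was cancelled`
  }
}
```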
## Metadata and special chunks

### response-metadata
@@ -0,0 +1,94 @@
# Agent.streamUntilIdle()

**Added in:** `@mastra/core@1.28.0`

`streamUntilIdle()` streams an agent's response and keeps the stream open until every background task dispatched during the run completes. When a task finishes, its result is written to memory and the agentic loop re-enters automatically so the LLM can react to it. The stream closes once no tasks are running and no completions are queued.

Use it when the agent dispatches background tasks (typically long-running tools or subagents) and you want a single stream that spans the initial response **plus** every continuation triggered by a task completion. For foreground-only runs, or if you prefer to manage continuations manually (prompting the agent yourself to process each result), use [`Agent.stream()`](https://mastra.ai/reference/streaming/agents/stream).

## Usage example

```ts
const stream = await agent.streamUntilIdle('Research solana for me', {
  memory: { thread: 't1', resource: 'u1' },
})

for await (const chunk of stream.fullStream) {
  // chunks from the initial turn AND any continuation turns triggered by
  // background task completions flow through here
}
```

> **Info:** `streamUntilIdle()` requires both a [`BackgroundTaskManager`](https://mastra.ai/reference/configuration) and a [memory](https://mastra.ai/docs/memory/overview) backend. Without either, it falls through to a plain `agent.stream()` call.

## Parameters

**messages** (`string | string[] | CoreMessage[] | AiMessageType[] | UIMessageWithMetadata[]`): The messages to send to the agent. Can be a single string, an array of strings, or structured message objects.

**options** (`AgentExecutionOptions<Output> & { maxIdleMs?: number }`): Accepts every option that `Agent.stream()` accepts, plus `maxIdleMs`. See the `Agent.stream()` reference for the full list.

**options.maxIdleMs** (`number`): Closes the outer stream after this many milliseconds of idleness between turns. The timer only runs while the wrapper is between turns, so a slow first token does not close the stream. (Default: 5 minutes)

**options.memory** (`{ thread?: string | { id: string }; resource?: string }`): Memory thread and resource for the run. Required for continuations to write background task results back into the conversation.

**options.structuredOutput** (`PublicStructuredOutputOptions<Output>`): Schema-based structured output. Same shape as `Agent.stream()`. Note that aggregate properties resolve against the first turn only.

For every other option (`maxSteps`, `modelSettings`, `toolChoice`, `outputProcessors`, `onFinish`, `onChunk`, etc.), see the [`Agent.stream()` parameters](https://mastra.ai/reference/streaming/agents/stream). `streamUntilIdle()` forwards them to the initial turn.

## Returns

**stream** (`MastraModelOutput<Output>`): A `MastraModelOutput` whose `fullStream` spans the initial turn plus every auto-continuation. Aggregate properties (`text`, `toolCalls`, `toolResults`, `finishReason`, `messageList`, `getFullOutput()`) resolve against the first turn only.

### Aggregate properties caveat

`streamUntilIdle()` returns a proxy over the first turn's `MastraModelOutput`. Only `fullStream` is replaced with a combined stream that spans every continuation. Every other property — `text`, `toolCalls`, `toolResults`, `finishReason`, `messageList`, `getFullOutput()` — resolves against the **first turn's** internal buffer.

If you need an aggregate view across all continuations, consume `fullStream` yourself and accumulate.

## Continuation behavior

Internally, `streamUntilIdle()`:

1. Runs the initial turn via `agent.stream(...)` and pipes its `fullStream` into the outer stream.
2. Subscribes to background-task completion events for the resolved memory scope.
3. Queues each terminal event (`background-task-completed`, `background-task-failed`, `background-task-cancelled`) and, when the outer wrapper is idle between turns, re-invokes `agent.stream([], ...)` with a directive listing the just-completed `toolCallId`s. The continuation turn flows into the same outer stream.
4. Closes the outer stream once no tasks are running and no completions are queued.
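The steps above can be sketched as a simplified, synchronous loop. The names here are hypothetical illustrations; the real implementation is asynchronous and streams each turn:

```typescript
// Simplified, synchronous sketch of the continue-until-idle loop; the
// real Mastra implementation is async and event-driven.
type Turn = (directive: string[]) => void

function runUntilIdle(runTurn: Turn, pendingCompletions: string[][]): number {
  let turns = 0
  runTurn([]) // initial turn
  turns++
  // While terminal events are queued, re-enter the loop with a directive
  // listing the just-completed toolCallIds.
  while (pendingCompletions.length > 0) {
    runTurn(pendingCompletions.shift()!)
    turns++
  }
  return turns // the outer stream closes once nothing is running or queued
}
```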
## Extended usage examples

### Cap idle time between turns

```ts
const stream = await agent.streamUntilIdle('Kick off the long jobs', {
  memory: { thread: 't1', resource: 'u1' },
  maxIdleMs: 60_000, // close the stream after 1 minute of idleness between turns
})

for await (const chunk of stream.fullStream) {
  if (chunk.type === 'background-task-completed') {
    console.log('Task complete:', chunk.payload.taskId)
  }
}
```

### Aggregate text across continuations

```ts
const stream = await agent.streamUntilIdle('Research and summarize', {
  memory: { thread: 't1', resource: 'u1' },
})

let fullText = ''
for await (const chunk of stream.fullStream) {
  if (chunk.type === 'text-delta') {
    fullText += chunk.payload.text
  }
}
```

## Related

- [Background tasks](https://mastra.ai/docs/agents/background-tasks)
- [`Agent.stream()` reference](https://mastra.ai/reference/streaming/agents/stream)
- [backgroundTasks configuration reference](https://mastra.ai/reference/configuration)
- [Stream chunk types](https://mastra.ai/reference/streaming/ChunkType)
@@ -1,6 +1,6 @@

# S3Filesystem

Stores files in Amazon S3 or S3-compatible storage services like Cloudflare R2, MinIO, DigitalOcean Spaces, and Tigris.

> **Info:** For interface details, see [WorkspaceFilesystem Interface](https://mastra.ai/reference/workspace/filesystem).
@@ -55,6 +55,61 @@ const agent = new Agent({

})
```

### AWS credential provider chain

When no credentials are provided, `S3Filesystem` uses the AWS SDK default credential provider chain. This discovers credentials automatically from environment variables, `~/.aws` config files, ECS container credentials, EC2 instance profiles, and other standard sources.

```typescript
import { S3Filesystem } from '@mastra/s3'

// SDK discovers credentials from the environment automatically
const filesystem = new S3Filesystem({
  bucket: 'my-bucket',
  region: 'us-east-1',
})
```

Pass a credential provider function for auto-refreshing credentials. This is useful for deployments on ECS, Lambda, or when using SSO/AssumeRole where temporary credentials expire and need to be refreshed.

Install the AWS SDK credential providers package when calling `fromNodeProviderChain()` directly:

**npm**:

```bash
npm install @aws-sdk/credential-providers
```

**pnpm**:

```bash
pnpm add @aws-sdk/credential-providers
```

**Yarn**:

```bash
yarn add @aws-sdk/credential-providers
```

**Bun**:

```bash
bun add @aws-sdk/credential-providers
```

```typescript
import { S3Filesystem } from '@mastra/s3'
import { fromNodeProviderChain } from '@aws-sdk/credential-providers'

const filesystem = new S3Filesystem({
  bucket: 'my-bucket',
  region: 'us-east-1',
  credentials: fromNodeProviderChain(),
})
```

Provider functions only apply to `S3Filesystem` API calls. When mounting the filesystem into an E2B sandbox, mount configuration only supports static `accessKeyId`, `secretAccessKey`, and `sessionToken` values, so credential refresh must be handled outside the mount.
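
One way to bridge that gap is to resolve the provider once at mount time and pass the resulting snapshot as static values. A minimal sketch, with the AWS SDK credential types redeclared inline so the shape is explicit (the types mirror `@aws-sdk/types`; the remount strategy is an assumption, not a documented Mastra API):

```typescript
// Minimal shapes matching the AWS SDK's credential types.
type AwsCredentialIdentity = {
  accessKeyId: string
  secretAccessKey: string
  sessionToken?: string
  expiration?: Date
}
type AwsCredentialIdentityProvider = () => Promise<AwsCredentialIdentity>

// Resolve an auto-refreshing provider once to obtain the static
// accessKeyId / secretAccessKey / sessionToken values a mount expects.
// The snapshot can expire, so remount before `expiration` when it is set.
async function snapshotCredentials(
  provider: AwsCredentialIdentityProvider,
): Promise<AwsCredentialIdentity> {
  return provider()
}
```

For example, `await snapshotCredentials(fromNodeProviderChain())` yields plain key fields that can be copied into a static mount configuration.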

### Cloudflare R2

```typescript

@@ -83,19 +138,38 @@ const filesystem = new S3Filesystem({

})
```

### Tigris

```typescript
import { S3Filesystem } from '@mastra/s3'

const filesystem = new S3Filesystem({
  bucket: 'my-bucket',
  region: 'auto',
  endpoint: 'https://t3.storage.dev',
  accessKeyId: process.env.TIGRIS_ACCESS_KEY_ID,
  secretAccessKey: process.env.TIGRIS_SECRET_ACCESS_KEY,
  forcePathStyle: false,
})
```

Tigris uses virtual-hosted-style addressing, so `forcePathStyle` must be set to `false` (the default is `true` when a custom endpoint is provided). Create credentials from the [Tigris Dashboard](https://console.tigris.dev/); access keys are prefixed `tid_` and secrets `tsec_`.
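
The difference between the two addressing modes can be shown with a small URL builder. This helper is purely illustrative (it is not part of `@mastra/s3`); it only demonstrates what `forcePathStyle` toggles:

```typescript
// Virtual-hosted-style puts the bucket in the hostname; path-style
// puts it in the URL path. Illustrative helper, not a Mastra API.
function objectUrl(endpoint: string, bucket: string, key: string, forcePathStyle: boolean): string {
  const url = new URL(endpoint)
  if (forcePathStyle) {
    // Path-style: https://endpoint/bucket/key (MinIO, many self-hosted services)
    url.pathname = `/${bucket}/${key}`
  } else {
    // Virtual-hosted-style: https://bucket.endpoint/key (Tigris, AWS default)
    url.hostname = `${bucket}.${url.hostname}`
    url.pathname = `/${key}`
  }
  return url.toString()
}

objectUrl('https://t3.storage.dev', 'my-bucket', 'a.txt', false)
// → 'https://my-bucket.t3.storage.dev/a.txt'
objectUrl('https://t3.storage.dev', 'my-bucket', 'a.txt', true)
// → 'https://t3.storage.dev/my-bucket/a.txt'
```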

## Constructor parameters

**bucket** (`string`): S3 bucket name

**region** (`string`): AWS region (use 'auto' for R2)

**credentials** (`AwsCredentialIdentity | AwsCredentialIdentityProvider`): AWS credentials or credential provider function. Accepts static credentials or a provider that auto-refreshes (e.g. `fromNodeProviderChain()` from `@aws-sdk/credential-providers`). Takes precedence over `accessKeyId`/`secretAccessKey`/`sessionToken`. When all credential options are omitted, the SDK default credential provider chain is used.

**accessKeyId** (`string`): AWS access key ID. When omitted along with `secretAccessKey` and `credentials`, the SDK default credential provider chain is used.

**secretAccessKey** (`string`): AWS secret access key. When omitted along with `accessKeyId` and `credentials`, the SDK default credential provider chain is used.

**sessionToken** (`string`): AWS session token for static temporary credentials. Use with `accessKeyId`/`secretAccessKey` only when passing a complete temporary credential set manually. For auto-refreshing SSO, AssumeRole, or container credentials, use the `credentials` provider parameter or the SDK default credential provider chain.

**endpoint** (`string`): Custom endpoint URL for S3-compatible storage (R2, MinIO, Tigris, etc.)

**forcePathStyle** (`boolean`): Force path-style URLs instead of virtual-hosted-style. Required for some S3-compatible services like MinIO. Defaults to `true` when a custom endpoint is provided.
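
The credential precedence described in the parameters above can be summarized in a short sketch. This is a simplified model of the documented rules, not the library's internal code:

```typescript
// Sketch of the documented credential precedence (simplified; not the
// actual @mastra/s3 internals).
type StaticCreds = { accessKeyId: string; secretAccessKey: string; sessionToken?: string }
type Provider = () => Promise<StaticCreds>

function resolveCredentials(opts: {
  credentials?: StaticCreds | Provider
  accessKeyId?: string
  secretAccessKey?: string
  sessionToken?: string
}): StaticCreds | Provider | 'default-chain' {
  // 1. An explicit `credentials` value or provider wins outright.
  if (opts.credentials) return opts.credentials
  // 2. A complete static key pair is used as-is.
  if (opts.accessKeyId && opts.secretAccessKey) {
    return {
      accessKeyId: opts.accessKeyId,
      secretAccessKey: opts.secretAccessKey,
      sessionToken: opts.sessionToken,
    }
  }
  // 3. Otherwise fall back to the SDK default credential provider chain.
  return 'default-chain'
}
```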
package/CHANGELOG.md
CHANGED

@@ -1,5 +1,28 @@

# @mastra/mcp-docs-server

## 1.1.29

### Patch Changes

- Updated dependencies [[`28caa5b`](https://github.com/mastra-ai/mastra/commit/28caa5b032358545af2589ed90636eccb4dd9d2f), [`c1ae974`](https://github.com/mastra-ai/mastra/commit/c1ae97491f6e57378ce880c3a397778c42adcdf1), [`b510d36`](https://github.com/mastra-ai/mastra/commit/b510d368f73dab6be2e2c2bc99035aaef1fb7d7a), [`10e1c9a`](https://github.com/mastra-ai/mastra/commit/10e1c9a6a99c14eb055d0f409b603e07af827e68), [`13b4d7c`](https://github.com/mastra-ai/mastra/commit/13b4d7c16de34dff9095d1cd80f22f544b6cfe75), [`7a7b313`](https://github.com/mastra-ai/mastra/commit/7a7b3138fb3bcf0b0c740eaea07971e43d330ef3), [`c04417b`](https://github.com/mastra-ai/mastra/commit/c04417ba0a2e4ded66da4352331ef29cd4bd1d79), [`cf25a03`](https://github.com/mastra-ai/mastra/commit/cf25a03132164b9dc1e5dccf7394824e33007c51), [`8a71261`](https://github.com/mastra-ai/mastra/commit/8a71261e3954ae617c6f8e25767b951f99438ab2), [`9e973b0`](https://github.com/mastra-ai/mastra/commit/9e973b010dacfa15ac82b0072897319f5234b90a), [`dd934a0`](https://github.com/mastra-ai/mastra/commit/dd934a0982ce0f78712fbd559e4f2410bf594b39), [`ba6b0c5`](https://github.com/mastra-ai/mastra/commit/ba6b0c51bfce358554fd33c7f2bcd5593633f2ff), [`a6dac0a`](https://github.com/mastra-ai/mastra/commit/a6dac0a40c7181161b1add4e8534f962bcbc9aa7), [`5a4b1ee`](https://github.com/mastra-ai/mastra/commit/5a4b1ee80212969621228104995589c0fa59e575), [`5a4b1ee`](https://github.com/mastra-ai/mastra/commit/5a4b1ee80212969621228104995589c0fa59e575), [`5a4b1ee`](https://github.com/mastra-ai/mastra/commit/5a4b1ee80212969621228104995589c0fa59e575), [`6c8c6c7`](https://github.com/mastra-ai/mastra/commit/6c8c6c71518394321a4692614aa4b11f3bb0a343), [`5a4b1ee`](https://github.com/mastra-ai/mastra/commit/5a4b1ee80212969621228104995589c0fa59e575), [`7d056b6`](https://github.com/mastra-ai/mastra/commit/7d056b6ecf603cacaa0f663ff1df025ed885b6c1), [`9cef83b`](https://github.com/mastra-ai/mastra/commit/9cef83b8a642b8098747772921e3523b492bafbc), [`d30e215`](https://github.com/mastra-ai/mastra/commit/d30e2156c746bc9fd791745cec1cc24377b66789), [`021a60f`](https://github.com/mastra-ai/mastra/commit/021a60f1f3e0135a70ef23c58be7a9b3aaffe6b4), [`73f2809`](https://github.com/mastra-ai/mastra/commit/73f2809721db24e98cdf122539652a455211b450), [`aedeea4`](https://github.com/mastra-ai/mastra/commit/aedeea48a94f728323f040478775076b9574be50), [`26f1f94`](https://github.com/mastra-ai/mastra/commit/26f1f9490574b864ba1ecedf2c9632e0767a23bd), [`8126d86`](https://github.com/mastra-ai/mastra/commit/8126d8638411eacfafdc29036ac998e8757ea66f), [`73b45fa`](https://github.com/mastra-ai/mastra/commit/73b45facdef4fbcb8af710c50f0646f18619dbaa), [`ae97520`](https://github.com/mastra-ai/mastra/commit/ae975206fdb0f6ef03c4d5bf94f7dc7c3f706c02), [`7a7b313`](https://github.com/mastra-ai/mastra/commit/7a7b3138fb3bcf0b0c740eaea07971e43d330ef3), [`441670a`](https://github.com/mastra-ai/mastra/commit/441670a02c9dc7731c52674f55481e7848a84523)]:
  - @mastra/core@1.29.0
  - @mastra/mcp@1.6.0

## 1.1.29-alpha.12

### Patch Changes

- Updated dependencies [[`c1ae974`](https://github.com/mastra-ai/mastra/commit/c1ae97491f6e57378ce880c3a397778c42adcdf1), [`10e1c9a`](https://github.com/mastra-ai/mastra/commit/10e1c9a6a99c14eb055d0f409b603e07af827e68), [`13b4d7c`](https://github.com/mastra-ai/mastra/commit/13b4d7c16de34dff9095d1cd80f22f544b6cfe75), [`5a4b1ee`](https://github.com/mastra-ai/mastra/commit/5a4b1ee80212969621228104995589c0fa59e575), [`5a4b1ee`](https://github.com/mastra-ai/mastra/commit/5a4b1ee80212969621228104995589c0fa59e575), [`5a4b1ee`](https://github.com/mastra-ai/mastra/commit/5a4b1ee80212969621228104995589c0fa59e575), [`6c8c6c7`](https://github.com/mastra-ai/mastra/commit/6c8c6c71518394321a4692614aa4b11f3bb0a343), [`5a4b1ee`](https://github.com/mastra-ai/mastra/commit/5a4b1ee80212969621228104995589c0fa59e575), [`ec4cb26`](https://github.com/mastra-ai/mastra/commit/ec4cb26919972eb2031fea510f8f013e1d5b7ee2)]:
  - @mastra/core@1.29.0-alpha.6
  - @mastra/mcp@1.6.0-alpha.0

## 1.1.29-alpha.10

### Patch Changes

- Updated dependencies [[`28caa5b`](https://github.com/mastra-ai/mastra/commit/28caa5b032358545af2589ed90636eccb4dd9d2f), [`7d056b6`](https://github.com/mastra-ai/mastra/commit/7d056b6ecf603cacaa0f663ff1df025ed885b6c1), [`26f1f94`](https://github.com/mastra-ai/mastra/commit/26f1f9490574b864ba1ecedf2c9632e0767a23bd)]:
  - @mastra/core@1.29.0-alpha.5

## 1.1.29-alpha.9

### Patch Changes
package/package.json
CHANGED

@@ -1,6 +1,6 @@

{
  "name": "@mastra/mcp-docs-server",
  "version": "1.1.29",
  "description": "MCP server for accessing Mastra.ai documentation, changelogs, and news.",
  "type": "module",
  "main": "dist/index.js",

@@ -29,8 +29,8 @@

    "jsdom": "^26.1.0",
    "local-pkg": "^1.1.2",
    "zod": "^4.3.6",
    "@mastra/core": "1.29.0",
    "@mastra/mcp": "^1.6.0"
  },
  "devDependencies": {
    "@hono/node-server": "^1.19.11",

@@ -46,9 +46,9 @@

    "tsx": "^4.21.0",
    "typescript": "^5.9.3",
    "vitest": "4.1.5",
    "@internal/lint": "0.0.87",
    "@mastra/core": "1.29.0",
    "@internal/types-builder": "0.0.62"
  },
  "homepage": "https://mastra.ai",
  "repository": {