@mastra/mcp-docs-server 1.1.29-alpha.9 → 1.1.30-alpha.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (36)
  1. package/.docs/docs/agents/background-tasks.md +242 -0
  2. package/.docs/docs/agents/supervisor-agents.md +35 -4
  3. package/.docs/docs/agents/using-tools.md +1 -0
  4. package/.docs/docs/observability/metrics/overview.md +2 -2
  5. package/.docs/docs/observability/tracing/exporters/default.md +3 -3
  6. package/.docs/docs/observability/tracing/exporters/laminar.md +22 -14
  7. package/.docs/docs/streaming/background-task-streaming.md +80 -0
  8. package/.docs/docs/streaming/overview.md +3 -0
  9. package/.docs/docs/workspace/filesystem.md +1 -1
  10. package/.docs/guides/build-your-ui/assistant-ui.md +1 -1
  11. package/.docs/models/gateways/openrouter.md +3 -1
  12. package/.docs/models/index.md +1 -1
  13. package/.docs/models/providers/anthropic.md +4 -2
  14. package/.docs/models/providers/baseten.md +2 -1
  15. package/.docs/models/providers/deepinfra.md +2 -1
  16. package/.docs/models/providers/fireworks-ai.md +2 -1
  17. package/.docs/models/providers/kilo.md +3 -1
  18. package/.docs/models/providers/nvidia.md +2 -1
  19. package/.docs/models/providers/openai.md +2 -1
  20. package/.docs/models/providers/wandb.md +3 -2
  21. package/.docs/models/providers/zai-coding-plan.md +9 -8
  22. package/.docs/models/providers/zenmux.md +8 -1
  23. package/.docs/reference/client-js/agents.md +24 -0
  24. package/.docs/reference/configuration.md +63 -0
  25. package/.docs/reference/harness/harness-class.md +53 -10
  26. package/.docs/reference/index.md +3 -0
  27. package/.docs/reference/observability/metrics/automatic-metrics.md +2 -2
  28. package/.docs/reference/observability/tracing/interfaces.md +17 -0
  29. package/.docs/reference/processors/stream-error-retry-processor.md +54 -0
  30. package/.docs/reference/storage/clickhouse.md +274 -0
  31. package/.docs/reference/storage/composite.md +5 -3
  32. package/.docs/reference/streaming/ChunkType.md +140 -0
  33. package/.docs/reference/streaming/agents/streamUntilIdle.md +94 -0
  34. package/.docs/reference/workspace/s3-filesystem.md +79 -5
  35. package/CHANGELOG.md +37 -0
  36. package/package.json +6 -6
@@ -0,0 +1,274 @@
+ # ClickHouse storage
+
+ [ClickHouse](https://clickhouse.com/) is a columnar database designed for analytical workloads. The `@mastra/clickhouse` package provides storage adapters for several Mastra storage domains and is the recommended backend for production observability.
+
+ ClickHouse is most commonly used as the dedicated observability backend in a [composite storage](https://mastra.ai/reference/storage/composite) setup, with another database serving the remaining domains. It can also back the supported domains on its own through `ClickhouseStore`.
+
+ ## When to use ClickHouse
+
+ - Production observability for traces, logs, metrics, scores, and feedback.
+ - Append-heavy workloads where columnar storage and compression help keep costs down.
+ - Deployment platforms with ephemeral filesystems (such as [Railway](#deploying-with-railway-and-similar-platforms), Fly.io, Render, Heroku, or container schedulers) where embedded backends like DuckDB cannot persist data.
+
+ For local development, [LibSQL](https://mastra.ai/reference/storage/libsql) or `@mastra/duckdb` is usually a better fit because neither needs an external service.
+
+ ## Installation
+
+ **npm**:
+
+ ```bash
+ npm install @mastra/clickhouse@latest
+ ```
+
+ **pnpm**:
+
+ ```bash
+ pnpm add @mastra/clickhouse@latest
+ ```
+
+ **Yarn**:
+
+ ```bash
+ yarn add @mastra/clickhouse@latest
+ ```
+
+ **Bun**:
+
+ ```bash
+ bun add @mastra/clickhouse@latest
+ ```
+
+ You will also need a running ClickHouse server. See [Hosting options](#hosting-options) for managed and self-hosted choices.
+
+ ## Usage
+
+ ### Observability with vNext (recommended)
+
+ `ObservabilityStorageClickhouseVNext` is the current observability domain implementation. It uses an insert-only schema backed by `ReplacingMergeTree` and is optimized for the volume produced by traces, logs, metrics, scores, and feedback.
+
+ Compose it with another storage adapter so observability writes do not contend with your application data:
+
+ ```typescript
+ import { Mastra } from '@mastra/core'
+ import { MastraCompositeStore } from '@mastra/core/storage'
+ import { LibSQLStore } from '@mastra/libsql'
+ import { ObservabilityStorageClickhouseVNext } from '@mastra/clickhouse'
+ import { Observability, DefaultExporter } from '@mastra/observability'
+
+ const observabilityStore = new ObservabilityStorageClickhouseVNext({
+   url: process.env.CLICKHOUSE_URL!,
+   username: process.env.CLICKHOUSE_USERNAME!,
+   password: process.env.CLICKHOUSE_PASSWORD!,
+ })
+
+ export const mastra = new Mastra({
+   storage: new MastraCompositeStore({
+     id: 'composite-storage',
+     default: new LibSQLStore({
+       id: 'mastra-storage',
+       url: 'file:./mastra.db',
+     }),
+     domains: {
+       observability: observabilityStore,
+     },
+   }),
+   observability: new Observability({
+     configs: {
+       default: {
+         serviceName: 'mastra',
+         exporters: [new DefaultExporter()],
+       },
+     },
+   }),
+ })
+ ```
+
+ `DefaultExporter` automatically selects the `insert-only` strategy when ClickHouse is the observability backend, which gives the highest write throughput. See [tracing strategies](https://mastra.ai/docs/observability/tracing/exporters/default) for details.
+
+ ### Observability with the legacy domain
+
+ `ObservabilityStorageClickhouse` is the original observability adapter and remains supported for projects that have not migrated to the vNext schema. The configuration shape is the same as for the vNext class.
+
+ ```typescript
+ import { ObservabilityStorageClickhouse } from '@mastra/clickhouse'
+
+ const observabilityStore = new ObservabilityStorageClickhouse({
+   url: process.env.CLICKHOUSE_URL!,
+   username: process.env.CLICKHOUSE_USERNAME!,
+   password: process.env.CLICKHOUSE_PASSWORD!,
+ })
+ ```
+
+ New projects should use `ObservabilityStorageClickhouseVNext` instead.
+
+ ### Full storage with `ClickhouseStore`
+
+ `ClickhouseStore` implements the `memory`, `workflows`, `scores`, and `observability` domains. Use it when you want a single backend for the supported domains. For most observability deployments, the composite setup above is preferable because it isolates observability writes from primary application data.
+
+ ```typescript
+ import { Mastra } from '@mastra/core'
+ import { ClickhouseStore } from '@mastra/clickhouse'
+
+ export const mastra = new Mastra({
+   storage: new ClickhouseStore({
+     id: 'clickhouse-storage',
+     url: process.env.CLICKHOUSE_URL!,
+     username: process.env.CLICKHOUSE_USERNAME!,
+     password: process.env.CLICKHOUSE_PASSWORD!,
+   }),
+ })
+ ```
+
+ ### Bring your own ClickHouse client
+
+ Pass a pre-configured client when you need custom connection settings such as request timeouts, compression, or interceptors:
+
+ ```typescript
+ import { createClient } from '@clickhouse/client'
+ import { ClickhouseStore } from '@mastra/clickhouse'
+
+ const client = createClient({
+   url: process.env.CLICKHOUSE_URL!,
+   username: process.env.CLICKHOUSE_USERNAME!,
+   password: process.env.CLICKHOUSE_PASSWORD!,
+   request_timeout: 60_000,
+   compression: { request: true, response: true },
+ })
+
+ const storage = new ClickhouseStore({ id: 'clickhouse-storage', client })
+ ```
+
+ The same `client` form is accepted by `ObservabilityStorageClickhouse` and `ObservabilityStorageClickhouseVNext`.
+
+ ## Configuration
+
+ ### `ClickhouseStore` options
+
+ **id** (`string`): Unique identifier for this storage instance.
+
+ **url** (`string`): ClickHouse server URL (for example, `https://your-instance.clickhouse.cloud:8443` or `http://localhost:8123`). Required when not passing a pre-configured `client`.
+
+ **username** (`string`): ClickHouse username. Required when not passing a pre-configured `client`.
+
+ **password** (`string`): ClickHouse password. Required when not passing a pre-configured `client`. Can be an empty string for the default user on a local instance.
+
+ **client** (`ClickHouseClient`): Pre-configured ClickHouse client from `@clickhouse/client`. Use this when you need custom request settings. Mutually exclusive with the credential fields above.
+
+ **ttl** (`object`): Per-table TTL configuration applied at table creation time. Accepts row-level and column-level TTLs in interval units from `NANOSECOND` through `YEAR`.
+
+ **disableInit** (`boolean`): When `true`, the store does not run table creation or migrations on first use. Call `storage.init()` explicitly from your deployment scripts. (Default: `false`)
+
+ `ClickhouseStore` also accepts every option from `ClickHouseClientConfigOptions` (such as `database`, `request_timeout`, `compression`, `keep_alive`, and `max_open_connections`).
+
+ ### Observability domain options
+
+ `ObservabilityStorageClickhouse` and `ObservabilityStorageClickhouseVNext` accept the same connection options as `ClickhouseStore` (`url`, `username`, `password`, or a pre-configured `client`).
+
+ ## Hosting options
+
+ ClickHouse runs anywhere you can reach it over HTTP. Two common choices:
+
+ - **[ClickHouse Cloud](https://clickhouse.com/cloud)**: Managed service with a free trial tier. Provides connection details directly compatible with `url`, `username`, and `password`.
+ - **Self-hosted**: Run the official [`clickhouse/clickhouse-server`](https://hub.docker.com/r/clickhouse/clickhouse-server) container or install from the [official packages](https://clickhouse.com/docs/en/install). Suitable for VPS, dedicated hardware, or Kubernetes.
+
+ For local development:
+
+ ```bash
+ docker run -d --name mastra-clickhouse \
+   -p 8123:8123 -p 9000:9000 \
+   -e CLICKHOUSE_USER=default \
+   -e CLICKHOUSE_PASSWORD=password \
+   clickhouse/clickhouse-server
+ ```
+
+ ```typescript
+ new ObservabilityStorageClickhouseVNext({
+   url: 'http://localhost:8123',
+   username: 'default',
+   password: 'password',
+ })
+ ```
+
+ ## Deploying with Railway and similar platforms
+
+ Platforms like [Railway](https://railway.com), [Fly.io](https://fly.io), [Render](https://render.com), and Heroku run application containers on ephemeral filesystems. Embedded observability backends such as DuckDB require a writable, persistent local file, so they either lose data on restart or fail to deploy entirely on these platforms.
+
+ Use ClickHouse instead. Because ClickHouse is reached over HTTP, the same connection works from any host:
+
+ ```typescript
+ import { Mastra } from '@mastra/core'
+ import { MastraCompositeStore } from '@mastra/core/storage'
+ import { PostgresStore } from '@mastra/pg'
+ import { ObservabilityStorageClickhouseVNext } from '@mastra/clickhouse'
+ import { Observability, DefaultExporter } from '@mastra/observability'
+
+ export const mastra = new Mastra({
+   storage: new MastraCompositeStore({
+     id: 'composite-storage',
+     default: new PostgresStore({
+       id: 'pg',
+       connectionString: process.env.DATABASE_URL!,
+     }),
+     domains: {
+       observability: new ObservabilityStorageClickhouseVNext({
+         url: process.env.CLICKHOUSE_URL!,
+         username: process.env.CLICKHOUSE_USERNAME!,
+         password: process.env.CLICKHOUSE_PASSWORD!,
+       }),
+     },
+   }),
+   observability: new Observability({
+     configs: {
+       default: {
+         serviceName: 'mastra',
+         exporters: [new DefaultExporter()],
+       },
+     },
+   }),
+ })
+ ```
+
+ Two ways to provision the database:
+
+ - **Managed**: Use ClickHouse Cloud. Set `CLICKHOUSE_URL`, `CLICKHOUSE_USERNAME`, and `CLICKHOUSE_PASSWORD` as environment variables in your hosting platform.
+ - **Self-hosted on Railway**: Add a ClickHouse service to your Railway project from the official Docker image, then reference it in the application service through Railway's private networking.
+
+ The same approach applies to other hosts with ephemeral filesystems. For application data that should also live off-host, pair this setup with a managed PostgreSQL or LibSQL/Turso instance for the `default` storage.
+
+ > **Warning:** Do not point an embedded backend like DuckDB at a path inside an ephemeral container filesystem. Data written there is lost when the container restarts, and on some platforms the path is read-only.
+
+ ## Initialization
+
+ When passed to the `Mastra` class, `ClickhouseStore` calls `init()` automatically to create the schema and run any pending migrations. The same applies to `ObservabilityStorageClickhouseVNext` when used through `MastraCompositeStore`.
+
+ If you manage storage outside of `Mastra`, call `init()` explicitly:
+
+ ```typescript
+ import { ObservabilityStorageClickhouseVNext } from '@mastra/clickhouse'
+
+ const observability = new ObservabilityStorageClickhouseVNext({
+   url: process.env.CLICKHOUSE_URL!,
+   username: process.env.CLICKHOUSE_USERNAME!,
+   password: process.env.CLICKHOUSE_PASSWORD!,
+ })
+
+ await observability.init()
+ ```
+
+ In CI/CD pipelines, set `disableInit: true` on `ClickhouseStore` and run `init()` from a deployment step that uses elevated credentials. Runtime application credentials can then be limited to read and insert.
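A minimal sketch of that split, using only the `disableInit` and `init()` options documented above. The `CLICKHOUSE_ADMIN_*` environment variable names are placeholders for wherever you keep elevated credentials, not names Mastra defines:

```typescript
import { ClickhouseStore } from '@mastra/clickhouse'

// deploy step: runs once per deployment with elevated (DDL-capable) credentials
const migrator = new ClickhouseStore({
  id: 'clickhouse-storage',
  url: process.env.CLICKHOUSE_URL!,
  username: process.env.CLICKHOUSE_ADMIN_USERNAME!, // placeholder variable name
  password: process.env.CLICKHOUSE_ADMIN_PASSWORD!, // placeholder variable name
})
await migrator.init()

// runtime configuration: least-privilege credentials, no automatic schema setup
export const storage = new ClickhouseStore({
  id: 'clickhouse-storage',
  url: process.env.CLICKHOUSE_URL!,
  username: process.env.CLICKHOUSE_USERNAME!,
  password: process.env.CLICKHOUSE_PASSWORD!,
  disableInit: true,
})
```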
+
+ ## Observability
+
+ ClickHouse is the recommended backend for production observability:
+
+ - **Insert-only strategy**: `DefaultExporter` writes completed spans in batches without per-span updates, which is the highest-throughput strategy available.
+ - **Columnar compression**: Span attributes and log payloads compress well compared to the same data in row-oriented databases.
+
+ For the full strategy matrix and production guidance, see the [`DefaultExporter` reference](https://mastra.ai/docs/observability/tracing/exporters/default).
+
+ ## Related
+
+ - [Storage overview](https://mastra.ai/reference/storage/overview)
+ - [Composite storage](https://mastra.ai/reference/storage/composite)
+ - [`DefaultExporter`](https://mastra.ai/docs/observability/tracing/exporters/default)
+ - [Observability overview](https://mastra.ai/docs/observability/overview)
@@ -218,12 +218,12 @@ const storage = new MastraCompositeStore({
 
  Observability data can quickly overwhelm general-purpose databases in production. A single agent interaction can generate hundreds of spans, and high-traffic applications can produce thousands of traces per day.
 
- **ClickHouse** is recommended for production observability because it's optimized for high-volume, write-heavy analytics workloads. Use composite storage to route observability to ClickHouse while keeping other data in your primary database:
+ **[ClickHouse](https://mastra.ai/reference/storage/clickhouse)** is recommended for production observability because it's optimized for high-volume, write-heavy analytics workloads. Use composite storage to route observability to ClickHouse while keeping other data in your primary database:
 
  ```typescript
  import { MastraCompositeStore } from '@mastra/core/storage'
  import { MemoryPG, WorkflowsPG, ScoresPG } from '@mastra/pg'
- import { ObservabilityStorageClickhouse } from '@mastra/clickhouse'
+ import { ObservabilityStorageClickhouseVNext } from '@mastra/clickhouse'
 
  const storage = new MastraCompositeStore({
    id: 'composite',
@@ -231,7 +231,7 @@ const storage = new MastraCompositeStore({
      memory: new MemoryPG({ connectionString: process.env.DATABASE_URL }),
      workflows: new WorkflowsPG({ connectionString: process.env.DATABASE_URL }),
      scores: new ScoresPG({ connectionString: process.env.DATABASE_URL }),
-     observability: new ObservabilityStorageClickhouse({
+     observability: new ObservabilityStorageClickhouseVNext({
        url: process.env.CLICKHOUSE_URL,
        username: process.env.CLICKHOUSE_USERNAME,
        password: process.env.CLICKHOUSE_PASSWORD,
@@ -240,4 +240,6 @@ const storage = new MastraCompositeStore({
  })
  ```
 
+ > **Note:** `ObservabilityStorageClickhouseVNext` is the current observability domain implementation. The legacy `ObservabilityStorageClickhouse` class is also exported and remains supported for projects that have not migrated. See the [ClickHouse storage reference](https://mastra.ai/reference/storage/clickhouse) for details.
+
  > **Info:** This approach is also required when using storage providers that don't support observability (like Convex, DynamoDB, or Cloudflare). See the [DefaultExporter documentation](https://mastra.ai/docs/observability/tracing/exporters/default) for the full list of supported providers.
@@ -398,6 +398,146 @@ Contains output from workflow step execution, used primarily for usage tracking
 
  **payload.output** (`ChunkType`): Nested chunk data from step execution, typically containing finish events or other step results
 
+ ## Background task chunks
+
+ These chunks are emitted when a tool call is dispatched as a [background task](https://mastra.ai/docs/agents/background-tasks) and `streamUntilIdle()` is used.
+
+ ### background-task-started
+
+ Emitted when a tool call is enqueued as a background task and assigned a `taskId`.
+
+ **type** (`"background-task-started"`): Chunk type identifier
+
+ **payload** (`BackgroundTaskStartedPayload`): Identifies the newly enqueued task
+
+ **payload.taskId** (`string`): Unique identifier for the background task
+
+ **payload.toolName** (`string`): Name of the tool being executed
+
+ **payload.toolCallId** (`string`): Tool-call ID from the originating LLM tool call
+
+ ### background-task-running
+
+ Emitted when a worker picks up the task and execution begins.
+
+ **type** (`"background-task-running"`): Chunk type identifier
+
+ **payload** (`BackgroundTaskRunningPayload`): Details about the running task
+
+ **payload.taskId** (`string`): Unique identifier for the background task
+
+ **payload.toolName** (`string`): Name of the tool being executed
+
+ **payload.toolCallId** (`string`): Tool-call ID from the originating LLM tool call
+
+ **payload.runId** (`string`): Run ID of the agent that dispatched the task
+
+ **payload.agentId** (`string`): ID of the agent that dispatched the task
+
+ **payload.startedAt** (`Date`): Timestamp at which execution started
+
+ **payload.args** (`Record<string, unknown>`): Arguments passed to the tool's execute function
+
+ ### background-task-progress
+
+ Periodic snapshot of how many background tasks are currently running across the agent run.
+
+ **type** (`"background-task-progress"`): Chunk type identifier
+
+ **payload** (`BackgroundTaskProgressPayload`): Aggregate progress for all running tasks
+
+ **payload.taskIds** (`string[]`): IDs of all currently running background tasks
+
+ **payload.runningCount** (`number`): Number of background tasks currently running
+
+ **payload.elapsedMs** (`number`): Milliseconds elapsed since the agent run started
+
+ ### background-task-output
+
+ A streamed output chunk emitted by the task's `execute` function. Wraps an inner [`tool-output`](#tool-output) chunk.
+
+ **type** (`"background-task-output"`): Chunk type identifier
+
+ **payload** (`BackgroundTaskOutputPayload`): Streamed output from the running task
+
+ **payload.taskId** (`string`): Unique identifier for the background task
+
+ **payload.toolName** (`string`): Name of the tool being executed
+
+ **payload.toolCallId** (`string`): Tool-call ID from the originating LLM tool call
+
+ **payload.runId** (`string`): Run ID of the agent that dispatched the task
+
+ **payload.agentId** (`string`): ID of the agent that dispatched the task
+
+ **payload.payload** (`ToolOutputChunk`): Inner tool-output chunk produced by the task
+
+ ### background-task-completed
+
+ Emitted when the task finishes successfully. Triggers a continuation turn when consumed by [`Agent.streamUntilIdle()`](https://mastra.ai/reference/streaming/agents/streamUntilIdle).
+
+ **type** (`"background-task-completed"`): Chunk type identifier
+
+ **payload** (`BackgroundTaskResultPayload`): The completed task's result
+
+ **payload.taskId** (`string`): Unique identifier for the background task
+
+ **payload.toolName** (`string`): Name of the tool that was executed
+
+ **payload.toolCallId** (`string`): Tool-call ID from the originating LLM tool call
+
+ **payload.agentId** (`string`): ID of the agent that dispatched the task
+
+ **payload.runId** (`string`): Run ID of the agent that dispatched the task
+
+ **payload.result** (`unknown`): The tool's resolved return value
+
+ **payload.completedAt** (`Date`): Timestamp at which the task completed
+
+ **payload.isError** (`boolean`): True when the tool returned an error result rather than throwing
+
+ ### background-task-failed
+
+ Emitted when the task throws or times out. Triggers a continuation turn when consumed by [`Agent.streamUntilIdle()`](https://mastra.ai/reference/streaming/agents/streamUntilIdle).
+
+ **type** (`"background-task-failed"`): Chunk type identifier
+
+ **payload** (`BackgroundTaskFailedPayload`): Failure details for the task
+
+ **payload.taskId** (`string`): Unique identifier for the background task
+
+ **payload.toolName** (`string`): Name of the tool that was executed
+
+ **payload.toolCallId** (`string`): Tool-call ID from the originating LLM tool call
+
+ **payload.runId** (`string`): Run ID of the agent that dispatched the task
+
+ **payload.agentId** (`string`): ID of the agent that dispatched the task
+
+ **payload.error** (`{ message: string }`): Error details thrown by the task
+
+ **payload.completedAt** (`Date`): Timestamp at which the task failed
+
+ ### background-task-cancelled
+
+ Emitted when the task is cancelled before completing. Triggers a continuation turn when consumed by [`Agent.streamUntilIdle()`](https://mastra.ai/reference/streaming/agents/streamUntilIdle).
+
+ **type** (`"background-task-cancelled"`): Chunk type identifier
+
+ **payload** (`BackgroundTaskCancelledPayload`): Cancellation details for the task
+
+ **payload.taskId** (`string`): Unique identifier for the background task
+
+ **payload.toolName** (`string`): Name of the tool that was executed
+
+ **payload.toolCallId** (`string`): Tool-call ID from the originating LLM tool call
+
+ **payload.runId** (`string`): Run ID of the agent that dispatched the task
+
+ **payload.agentId** (`string`): ID of the agent that dispatched the task
+
+ **payload.completedAt** (`Date`): Timestamp at which the task was cancelled
+
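The three terminal chunk types share the `type` discriminator, so ordinary TypeScript narrowing is enough to route them. A minimal sketch; the type below is a local stand-in built from the payload fields listed above, not an import from Mastra:

```typescript
// Local stand-in for the terminal background-task chunk shapes documented above.
type TerminalBackgroundTaskChunk =
  | { type: 'background-task-completed'; payload: { taskId: string; toolName: string; result: unknown; isError: boolean } }
  | { type: 'background-task-failed'; payload: { taskId: string; toolName: string; error: { message: string } } }
  | { type: 'background-task-cancelled'; payload: { taskId: string; toolName: string } }

// Narrowing on the `type` field gives fully typed access to each payload.
function summarize(chunk: TerminalBackgroundTaskChunk): string {
  switch (chunk.type) {
    case 'background-task-completed':
      return `${chunk.payload.toolName} (${chunk.payload.taskId}) completed`
    case 'background-task-failed':
      return `${chunk.payload.toolName} (${chunk.payload.taskId}) failed: ${chunk.payload.error.message}`
    case 'background-task-cancelled':
      return `${chunk.payload.toolName} (${chunk.payload.taskId}) cancelled`
  }
}
```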
  ## Metadata and special chunks
 
  ### response-metadata
@@ -0,0 +1,94 @@
+ # Agent.streamUntilIdle()
+
+ **Added in:** `@mastra/core@1.29.0`
+
+ `streamUntilIdle()` streams an agent's response and keeps the stream open until every background task dispatched during the run completes. When a task finishes, its result is written to memory and the agentic loop re-enters automatically so the LLM can react to it. The stream closes once no tasks are running and no completions are queued.
+
+ Use it when the agent dispatches background tasks (typically long-running tools or subagents) and you want a single stream that spans the initial response **plus** every continuation triggered by a task completion. For foreground-only runs, or if you prefer to manage the continuation yourself (manually prompting the agent to process the result), use [`Agent.stream()`](https://mastra.ai/reference/streaming/agents/stream).
+
+ ## Usage example
+
+ ```ts
+ const stream = await agent.streamUntilIdle('Research solana for me', {
+   memory: { thread: 't1', resource: 'u1' },
+ })
+
+ for await (const chunk of stream.fullStream) {
+   // chunks from the initial turn AND any continuation turns triggered by
+   // background task completions flow through here
+ }
+ ```
+
+ > **Info:** `streamUntilIdle()` requires both a [`BackgroundTaskManager`](https://mastra.ai/reference/configuration) and a [memory](https://mastra.ai/docs/memory/overview) backend. Without either, it falls through to a plain `agent.stream()` call.
+
+ ## Parameters
+
+ **messages** (`string | string[] | CoreMessage[] | AiMessageType[] | UIMessageWithMetadata[]`): The messages to send to the agent. Can be a single string, an array of strings, or structured message objects.
+
+ **options** (`AgentExecutionOptions<Output> & { maxIdleMs?: number }`): Accepts every option that `Agent.stream()` accepts, plus `maxIdleMs`. See the `Agent.stream()` reference for the full list.
+
+ **options.maxIdleMs** (`number`): Closes the outer stream after this many milliseconds of idleness between turns. The timer only runs while the wrapper is between turns, so a slow first token does not close the stream. Default: 5 minutes.
+
+ **options.memory** (`{ thread?: string | { id: string }; resource?: string }`): Memory thread and resource for the run. Required for continuations to write background task results back into the conversation.
+
+ **options.structuredOutput** (`PublicStructuredOutputOptions<Output>`): Schema-based structured output. Same shape as in `Agent.stream()`. Note that aggregate properties resolve against the first turn only.
+
+ For every other option (`maxSteps`, `modelSettings`, `toolChoice`, `outputProcessors`, `onFinish`, `onChunk`, etc.), see the [`Agent.stream()` parameters](https://mastra.ai/reference/streaming/agents/stream). `streamUntilIdle()` forwards them to the initial turn.
+
+ ## Returns
+
+ **stream** (`MastraModelOutput<Output>`): A `MastraModelOutput` whose `fullStream` spans the initial turn plus every auto-continuation. Aggregate properties (`text`, `toolCalls`, `toolResults`, `finishReason`, `messageList`, `getFullOutput()`) resolve against the first turn only.
+
+ ### Aggregate properties caveat
+
+ `streamUntilIdle()` returns a proxy over the first turn's `MastraModelOutput`. Only `fullStream` is replaced with a combined stream that spans every continuation. Every other property (`text`, `toolCalls`, `toolResults`, `finishReason`, `messageList`, `getFullOutput()`) resolves against the **first turn's** internal buffer.
+
+ If you need an aggregate view across all continuations, consume `fullStream` yourself and accumulate.
+
+ ## Continuation behavior
+
+ Internally, `streamUntilIdle()`:
+
+ 1. Runs the initial turn via `agent.stream(...)` and pipes its `fullStream` into the outer stream.
+ 2. Subscribes to background-task completion events for the resolved memory scope.
+ 3. Queues each terminal event (`background-task-completed`, `background-task-failed`, `background-task-cancelled`) and, when the outer wrapper is idle between turns, re-invokes `agent.stream([], ...)` with a directive listing the just-completed `toolCallId`s. The continuation turn flows into the same outer stream.
+ 4. Closes the outer stream once no tasks are running and no completions are queued.
+
+ ## Extended usage examples
+
+ ### Cap idle time between turns
+
+ ```ts
+ const stream = await agent.streamUntilIdle('Kick off the long jobs', {
+   memory: { thread: 't1', resource: 'u1' },
+   maxIdleMs: 60_000, // close the stream after 1 minute of idleness between turns
+ })
+
+ for await (const chunk of stream.fullStream) {
+   if (chunk.type === 'background-task-completed') {
+     console.log('Task complete:', chunk.payload.taskId)
+   }
+ }
+ ```
+
+ ### Aggregate text across continuations
+
+ ```ts
+ const stream = await agent.streamUntilIdle('Research and summarize', {
+   memory: { thread: 't1', resource: 'u1' },
+ })
+
+ let fullText = ''
+ for await (const chunk of stream.fullStream) {
+   if (chunk.type === 'text-delta') {
+     fullText += chunk.payload.text
+   }
+ }
+ ```
+
+ ## Related
+
+ - [Background tasks](https://mastra.ai/docs/agents/background-tasks)
+ - [`Agent.stream()` reference](https://mastra.ai/reference/streaming/agents/stream)
+ - [backgroundTasks configuration reference](https://mastra.ai/reference/configuration)
+ - [Stream chunk types](https://mastra.ai/reference/streaming/ChunkType)
@@ -1,6 +1,6 @@
  # S3Filesystem
 
- Stores files in Amazon S3 or S3-compatible storage services like Cloudflare R2, MinIO, and DigitalOcean Spaces.
+ Stores files in Amazon S3 or S3-compatible storage services like Cloudflare R2, MinIO, DigitalOcean Spaces, and Tigris.
 
  > **Info:** For interface details, see [WorkspaceFilesystem Interface](https://mastra.ai/reference/workspace/filesystem).
 
@@ -55,6 +55,61 @@ const agent = new Agent({
  })
  ```
 
+ ### AWS credential provider chain
+
+ When no credentials are provided, `S3Filesystem` uses the AWS SDK default credential provider chain. This discovers credentials automatically from environment variables, `~/.aws` config files, ECS container credentials, EC2 instance profiles, and other standard sources.
+
+ ```typescript
+ import { S3Filesystem } from '@mastra/s3'
+
+ // SDK discovers credentials from the environment automatically
+ const filesystem = new S3Filesystem({
+   bucket: 'my-bucket',
+   region: 'us-east-1',
+ })
+ ```
+
+ Pass a credential provider function for auto-refreshing credentials. This is useful for deployments on ECS or Lambda, or when using SSO or AssumeRole, where temporary credentials expire and must be refreshed.
+
+ Install the AWS SDK credential providers package when calling `fromNodeProviderChain()` directly:
+
+ **npm**:
+
+ ```bash
+ npm install @aws-sdk/credential-providers
+ ```
+
+ **pnpm**:
+
+ ```bash
+ pnpm add @aws-sdk/credential-providers
+ ```
+
+ **Yarn**:
+
+ ```bash
+ yarn add @aws-sdk/credential-providers
+ ```
+
+ **Bun**:
+
+ ```bash
+ bun add @aws-sdk/credential-providers
+ ```
+
+ ```typescript
+ import { S3Filesystem } from '@mastra/s3'
+ import { fromNodeProviderChain } from '@aws-sdk/credential-providers'
+
+ const filesystem = new S3Filesystem({
+   bucket: 'my-bucket',
+   region: 'us-east-1',
+   credentials: fromNodeProviderChain(),
+ })
+ ```
+
+ Provider functions only apply to `S3Filesystem` API calls. When mounting the filesystem into an E2B sandbox, the mount configuration only supports static `accessKeyId`, `secretAccessKey`, and `sessionToken` values, so credential refresh must be handled outside the mount.
+
  ### Cloudflare R2
 
  ```typescript
@@ -83,19 +138,38 @@ const filesystem = new S3Filesystem({
  })
  ```
 
+ ### Tigris
+
+ ```typescript
+ import { S3Filesystem } from '@mastra/s3'
+
+ const filesystem = new S3Filesystem({
+   bucket: 'my-bucket',
+   region: 'auto',
+   endpoint: 'https://t3.storage.dev',
+   accessKeyId: process.env.TIGRIS_ACCESS_KEY_ID,
+   secretAccessKey: process.env.TIGRIS_SECRET_ACCESS_KEY,
+   forcePathStyle: false,
+ })
+ ```
+
+ Tigris uses virtual-hosted-style addressing, so `forcePathStyle` must be set to `false` (the default is `true` when a custom endpoint is provided). Create credentials from the [Tigris Dashboard](https://console.tigris.dev/); access keys are prefixed `tid_` and secrets `tsec_`.
+
 
  ## Constructor parameters
 
  **bucket** (`string`): S3 bucket name
 
  **region** (`string`): AWS region (use `'auto'` for R2)
 
- **accessKeyId** (`string`): AWS access key ID. Optional for public buckets (read-only access).
+ **credentials** (`AwsCredentialIdentity | AwsCredentialIdentityProvider`): AWS credentials or credential provider function. Accepts static credentials or a provider that auto-refreshes (e.g. `fromNodeProviderChain()` from `@aws-sdk/credential-providers`). Takes precedence over `accessKeyId`/`secretAccessKey`/`sessionToken`. When all credential options are omitted, the SDK default credential provider chain is used.
+
+ **accessKeyId** (`string`): AWS access key ID. When omitted along with `secretAccessKey` and `credentials`, the SDK default credential provider chain is used.
 
- **secretAccessKey** (`string`): AWS secret access key. Optional for public buckets (read-only access).
+ **secretAccessKey** (`string`): AWS secret access key. When omitted along with `accessKeyId` and `credentials`, the SDK default credential provider chain is used.
 
- **sessionToken** (`string`): AWS session token for temporary credentials. Required when using SSO, AssumeRole, container credentials, or any other temporary credential provider.
+ **sessionToken** (`string`): AWS session token for static temporary credentials. Use with `accessKeyId`/`secretAccessKey` only when passing a complete temporary credential set manually. For auto-refreshing SSO, AssumeRole, or container credentials, use the `credentials` provider parameter or the SDK default credential provider chain.
 
- **endpoint** (`string`): Custom endpoint URL for S3-compatible storage (R2, MinIO, etc.)
+ **endpoint** (`string`): Custom endpoint URL for S3-compatible storage (R2, MinIO, Tigris, etc.)
 
  **forcePathStyle** (`boolean`): Force path-style URLs instead of virtual-hosted-style. Required for some S3-compatible services like MinIO. (Default: `true` when a custom `endpoint` is provided)