@mastra/mcp-docs-server 1.1.27 → 1.1.28-alpha.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -71,18 +71,40 @@ The six processor methods run at different points in the agent execution lifecyc
  ## Interface definition
 
  ```typescript
- interface Processor<TId extends string = string> {
+ interface Processor<TId extends string = string, TTripwireMetadata = unknown> {
    readonly id: TId
    readonly name?: string
+   readonly description?: string
+   /** Index of this processor in the workflow (set at runtime when combining processors). */
+   processorIndex?: number
+   /** When true, processOutputStream also receives `data-*` chunks. Default: false. */
+   processDataParts?: boolean
+
+   processInput?(
+     args: ProcessInputArgs<TTripwireMetadata>,
+   ): Promise<ProcessInputResult> | ProcessInputResult
+
+   processInputStep?(
+     args: ProcessInputStepArgs<TTripwireMetadata>,
+   ):
+     | Promise<ProcessInputStepResult | MessageList | MastraDBMessage[] | undefined | void>
+     | ProcessInputStepResult
+     | MessageList
+     | MastraDBMessage[]
+     | void
+     | undefined
 
-   processInput?(args: ProcessInputArgs): Promise<ProcessInputResult> | ProcessInputResult
-   processInputStep?(args: ProcessInputStepArgs): ProcessorMessageResult
    processAPIError?(
-     args: ProcessAPIErrorArgs,
+     args: ProcessAPIErrorArgs<TTripwireMetadata>,
    ): Promise<ProcessAPIErrorResult | void> | ProcessAPIErrorResult | void
-   processOutputStream?(args: ProcessOutputStreamArgs): Promise<ChunkType | null | undefined>
-   processOutputStep?(args: ProcessOutputStepArgs): ProcessorMessageResult
-   processOutputResult?(args: ProcessOutputResultArgs): ProcessorMessageResult
+
+   processOutputStream?(
+     args: ProcessOutputStreamArgs<TTripwireMetadata>,
+   ): Promise<ChunkType | null | undefined>
+
+   processOutputStep?(args: ProcessOutputStepArgs<TTripwireMetadata>): ProcessorMessageResult
+
+   processOutputResult?(args: ProcessOutputResultArgs<TTripwireMetadata>): ProcessorMessageResult
  }
  ```
 
@@ -92,8 +114,71 @@ interface Processor<TId extends string = string> {
 
  **name** (`string`): Optional display name for the processor. Falls back to id if not provided.
 
+ **description** (`string`): Optional human-readable description shown in tracing and Studio.
+
+ **processorIndex** (`number`): Position of the processor in the combined processor list. Set at runtime by Mastra when processors are merged with memory, workspace, and per-call overrides. You do not set this yourself.
+
  **processDataParts** (`boolean`): When true, the processOutputStream method also receives `data-*` chunks emitted by tools via writer.custom(). Defaults to false.
 
+ ## Message arguments
+
+ Most processor methods receive both `messages` and `messageList`. They point to the same underlying conversation but expose it differently.
+
+ ### `messages` vs `messageList`
+
+ - `messages`: A plain array of `MastraDBMessage` objects, scoped to the current stage. For `processInput` and `processInputStep` this excludes system messages. For `processOutputResult` and `processOutputStep` this includes the latest LLM response. The array is backed by `messageList`, so editing a message's `content.parts` in place is visible to downstream processors and to persistence.
+ - `messageList`: The live `MessageList` instance backing the run. It exposes filtered views (input, response, remembered, all), multiple output formats (db, ui, core), and methods for mutating the conversation.
+
+ Use `messages` when you only need to read, map over, or lightly edit fields on the current stage's messages. Use `messageList` when you need to:
+
+ - Read messages from another stage, for example input messages while processing output.
+ - Add, remove, or replace whole messages.
+ - Convert to another format such as UI or core messages for a third-party API.
+
+ `messages` is always derived from `messageList`, so mutating `messageList` is the canonical way to add, remove, or reorder messages. For in-place edits to a message's content (for example, rewriting `content.parts`), mutating `messages` directly is equivalent. If you return a new array instead of mutating `messages`, Mastra reconciles it against `messageList` for the current stage.
+
+ ### Persistence
+
+ When memory is enabled, only what ends up in `messageList` after all processors finish is persisted to storage. The two return styles are equivalent for persistence:
+
+ - Mutating `messageList` directly (or returning the same `MessageList` instance) — recorded mutations are applied in place, so the saved conversation reflects your changes.
+ - Returning a `MastraDBMessage[]` or `{ messages, systemMessages }` — Mastra reconciles the returned array against `messageList` for the current stage, removing missing messages and replacing system messages.
+
+ Returning a different `MessageList` instance is an error; always mutate the one passed to your processor.
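To make the two equivalent styles concrete, here is a minimal sketch. The message shape is simplified to the fields this section uses; a real processor would receive the full `MastraDBMessage` type from `@mastra/core`:

```typescript
// Simplified stand-in for MastraDBMessage — illustration only, not the real type.
type Part = { type: 'text'; text: string }
type Message = { id: string; role: 'user' | 'assistant'; content: { parts: Part[] } }

// Style 1: edit content in place. Because `messages` is backed by `messageList`,
// downstream processors and persistence see the rewritten parts.
function redactInPlace(messages: Message[]): void {
  for (const message of messages) {
    for (const part of message.content.parts) {
      if (part.type === 'text') {
        part.text = part.text.split('secret').join('[redacted]')
      }
    }
  }
}

// Style 2: return a new array. Mastra reconciles it against messageList,
// removing messages that are missing from the returned array.
function dropEmpty(messages: Message[]): Message[] {
  return messages.filter(m =>
    m.content.parts.some(p => p.type === 'text' && p.text.trim() !== ''),
  )
}
```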
+
+ ### Reading text from a message
+
+ `MastraDBMessage.content` is a structured object, not a string. The canonical way to read user or assistant text is `content.parts`:
+
+ ```typescript
+ import type { MastraDBMessage } from '@mastra/core/memory'
+
+ function getText(message: MastraDBMessage): string {
+   let text = ''
+
+   if (message.content.parts) {
+     for (const part of message.content.parts) {
+       if (part.type === 'text' && typeof part.text === 'string') {
+         text += part.text
+       }
+     }
+   }
+
+   // Fallback for legacy messages that only have the flattened `content` string
+   if (!text && typeof message.content.content === 'string') {
+     text = message.content.content
+   }
+
+   return text
+ }
+ ```
+
+ Key points:
+
+ - `message.content.parts` is the primary source. A single message can contain multiple parts, including non-text parts such as tool calls, tool results, and file parts. Filter by `part.type === 'text'` before reading `part.text`.
+ - `message.content.content` is a flattened string kept for backward compatibility. Use it only as a fallback when `parts` is empty or missing.
+ - `message.content` itself is never a plain string on `MastraDBMessage`. Legacy `CoreMessage` shapes may be strings, but processors always receive `MastraDBMessage`.
+
  ## Methods
 
  ### `processInput`
@@ -114,7 +199,7 @@ processInput?(args: ProcessInputArgs): Promise<ProcessInputResult> | ProcessInpu
 
  **abort** (`(reason?: string, options?: { retry?: boolean; metadata?: unknown }) => never`): Function to abort processing. Throws a TripWire error that stops execution. Pass `retry: true` to request the LLM retry the step with feedback.
 
- **retryCount** (`number`): Number of times processors have triggered retry for this generation. Use this to limit retry attempts.
+ **retryCount** (`number`): Number of times processors have triggered retry for this generation. Use this to limit retry attempts. Always passed by Mastra; starts at 0.
 
  **tracingContext** (`TracingContext`): Tracing context for observability.
 
@@ -137,7 +222,15 @@ The method can return one of three types:
  Processes input messages at each step of the agentic loop, before they're sent to the LLM. Unlike `processInput` which runs once at the start, this runs at every step including tool call continuations.
 
  ```typescript
- processInputStep?(args: ProcessInputStepArgs): ProcessorMessageResult;
+ processInputStep?<TTripwireMetadata = unknown>(
+   args: ProcessInputStepArgs<TTripwireMetadata>,
+ ):
+   | Promise<ProcessInputStepResult | MessageList | MastraDBMessage[] | void | undefined>
+   | ProcessInputStepResult
+   | MessageList
+   | MastraDBMessage[]
+   | void
+   | undefined;
  ```
 
  #### Execution order in the agentic loop
@@ -175,7 +268,9 @@ processInputStep?(args: ProcessInputStepArgs): ProcessorMessageResult;
 
  **structuredOutput** (`StructuredOutputOptions`): Structured output configuration (schema, output mode). Return in result to modify.
 
- **abort** (`(reason?: string) => never`): Function to abort processing.
+ **abort** (`(reason?: string, options?: { retry?: boolean; metadata?: unknown }) => never`): Function to abort processing. Throws a TripWire error that stops execution. Pass `retry: true` to request the LLM retry the step with feedback.
+
+ **retryCount** (`number`): Current retry attempt count from `ProcessorContext`. Starts at `0`; use to cap processor-triggered retries.
 
  **tracingContext** (`TracingContext`): Tracing context for observability.
 
@@ -183,7 +278,14 @@ processInputStep?(args: ProcessInputStepArgs): ProcessorMessageResult;
 
  #### `ProcessInputStepResult`
 
- The method can return any combination of these properties:
+ `processInputStep` can return several shapes:
+
+ - **`ProcessInputStepResult` object** — override any combination of the properties described below for this step.
+ - **`MessageList`** — return the same `messageList` instance to signal you mutated messages in place.
+ - **`MastraDBMessage[]`** — return a transformed messages array; replaces the step's messages.
+ - **`void` or `undefined`** — return nothing to leave the step unchanged.
+
+ The object form can return any combination of these properties:
 
  **model** (`LanguageModelV2 | string`): Change the model for this step. Can be a model instance or router ID like 'openai/gpt-5.4'.
 
@@ -324,9 +426,11 @@ processOutputStream?(args: ProcessOutputStreamArgs): Promise<ChunkType | null |
 
  **streamParts** (`ChunkType[]`): All chunks seen so far in the stream.
 
- **state** (`Record<string, unknown>`): Mutable state object that persists across chunks within a single stream.
+ **state** (`Record<string, unknown>`): Mutable per-processor state that persists across every chunk and every method call within a single request. A fresh state object is created for each new generate or stream call.
+
+ **abort** (`(reason?: string, options?: { retry?: boolean; metadata?: unknown }) => never`): Function to abort the stream. Throws a TripWire error that ends the stream and emits a `tripwire` chunk. Pass `retry: true` to request another LLM attempt instead of ending.
 
- **abort** (`(reason?: string) => never`): Function to abort the stream.
+ **retryCount** (`number`): Current retry attempt count from `ProcessorContext`. Starts at `0`; use to cap processor-triggered retries.
 
  **messageList** (`MessageList`): MessageList instance for accessing conversation history.
 
@@ -338,8 +442,13 @@ processOutputStream?(args: ProcessOutputStreamArgs): Promise<ChunkType | null |
 
  #### Return value
 
- - Return the `ChunkType` to emit it (possibly modified)
- - Return `null` or `undefined` to skip emitting the chunk
+ `processOutputStream` returns `Promise<ChunkType | null | undefined>`.
+
+ - Return the `ChunkType` to emit the chunk. Return the original `part` to emit it unchanged, or a new `ChunkType` to emit a modified chunk.
+ - Return `null` to drop the chunk. Nothing is sent to the next processor or the client.
+ - Return `undefined` (including implicit `undefined` from a `return;` statement or a method that falls off the end) to drop the chunk. `null` and `undefined` behave the same way.
+
+ Dropping a chunk only affects that single chunk. The stream continues and the next chunk is still processed. To stop the stream entirely, call `abort()`.
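The drop semantics can be sketched with a hypothetical helper that runs a chain of stream processors over chunks (simplified chunk shape; not the real Mastra pipeline):

```typescript
// Simplified chunk and processor shapes — illustration only.
type Chunk = { type: string; text?: string }
type StreamFn = (chunk: Chunk) => Chunk | null | undefined

// Runs each chunk through the chain; a null/undefined return drops that
// single chunk, and the stream continues with the next one.
function runStream(chunks: Chunk[], processors: StreamFn[]): Chunk[] {
  const emitted: Chunk[] = []
  outer: for (const chunk of chunks) {
    let current: Chunk = chunk
    for (const processor of processors) {
      const result = processor(current)
      if (result === null || result === undefined) continue outer // drop this chunk only
      current = result
    }
    emitted.push(current)
  }
  return emitted
}
```

Dropping the second chunk does not prevent the third from being processed; only `abort()` ends the stream.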
 
  ***
 
@@ -361,7 +470,9 @@ processOutputResult?(args: ProcessOutputResultArgs): ProcessorMessageResult;
 
  **result** (`OutputResult`): Resolved generation result containing `text` (accumulated text), `usage` (token usage with inputTokens, outputTokens, totalTokens), `finishReason` (why generation ended), and `steps` (all LLM step results, each with toolCalls, toolResults, reasoning, sources, files, etc.).
 
- **abort** (`(reason?: string) => never`): Function to abort processing.
+ **abort** (`(reason?: string, options?: { retry?: boolean; metadata?: unknown }) => never`): Function to abort processing. Throws a TripWire error that stops execution and emits a `tripwire` chunk.
+
+ **retryCount** (`number`): Current retry attempt count from `ProcessorContext`. Starts at `0`; use to cap processor-triggered retries.
 
  **tracingContext** (`TracingContext`): Tracing context for observability.
 
@@ -393,11 +504,17 @@ processOutputStep?(args: ProcessOutputStepArgs): ProcessorMessageResult;
 
  **text** (`string`): Generated text from this step.
 
+ **usage** (`LanguageModelUsage`): Token usage for the current step (`inputTokens`, `outputTokens`, `totalTokens`).
+
  **systemMessages** (`CoreMessage[]`): All system messages for read/modify access.
 
+ **steps** (`StepResult[]`): All completed steps so far, including the current step.
+
+ **state** (`Record<string, unknown>`): Per-processor state that persists across all method calls within this request. Shared with processOutputStream and processOutputResult.
+
  **abort** (`(reason?: string, options?: { retry?: boolean; metadata?: unknown }) => never`): Function to abort processing. Pass `retry: true` to request the LLM retry the step.
 
- **retryCount** (`number`): Number of times processors have triggered retry. Use this to limit retry attempts.
+ **retryCount** (`number`): Number of times processors have triggered retry. Use this to limit retry attempts. Always passed by Mastra; starts at 0.
 
  **tracingContext** (`TracingContext`): Tracing context for observability.
 
@@ -413,7 +530,7 @@ processOutputStep?(args: ProcessOutputStepArgs): ProcessorMessageResult;
  #### Example: Quality guardrail with retry
 
  ```typescript
- import type { Processor } from '@mastra/core'
+ import type { Processor } from '@mastra/core/processors'
 
  export class QualityGuardrail implements Processor {
    id = 'quality-guardrail'
@@ -473,7 +590,8 @@ const agent = new Agent({
  ### Basic input processor
 
  ```typescript
- import type { Processor, MastraDBMessage } from '@mastra/core'
+ import type { Processor } from '@mastra/core/processors'
+ import type { MastraDBMessage } from '@mastra/core/memory'
 
  export class LowercaseProcessor implements Processor {
    id = 'lowercase'
@@ -495,7 +613,11 @@ export class LowercaseProcessor implements Processor {
  ### Per-step processor with `processInputStep`
 
  ```typescript
- import type { Processor, ProcessInputStepArgs, ProcessInputStepResult } from '@mastra/core'
+ import type {
+   Processor,
+   ProcessInputStepArgs,
+   ProcessInputStepResult,
+ } from '@mastra/core/processors'
 
  export class DynamicModelProcessor implements Processor {
    id = 'dynamic-model'
@@ -528,7 +650,8 @@ export class DynamicModelProcessor implements Processor {
  ### Message transformer with `processInputStep`
 
  ```typescript
- import type { Processor, MastraDBMessage } from '@mastra/core'
+ import type { Processor } from '@mastra/core/processors'
+ import type { MastraDBMessage } from '@mastra/core/memory'
 
  export class ReasoningTransformer implements Processor {
    id = 'reasoning-transformer'
@@ -553,7 +676,9 @@ export class ReasoningTransformer implements Processor {
  ### Hybrid processor (input and output)
 
  ```typescript
- import type { Processor, MastraDBMessage, ChunkType } from '@mastra/core'
+ import type { Processor } from '@mastra/core/processors'
+ import type { MastraDBMessage } from '@mastra/core/memory'
+ import type { ChunkType } from '@mastra/core/stream'
 
  export class ContentFilter implements Processor {
    id = 'content-filter'
@@ -579,7 +704,7 @@ export class ContentFilter implements Processor {
 
    async processOutputStream({ part, abort }): Promise<ChunkType | null> {
      if (part.type === 'text-delta') {
-       if (this.blockedWords.some(word => part.textDelta.includes(word))) {
+       if (this.blockedWords.some(word => part.payload.text.includes(word))) {
          abort('Blocked content detected in output')
        }
      }
@@ -591,7 +716,8 @@ export class ContentFilter implements Processor {
  ### Stream accumulator with state
 
  ```typescript
- import type { Processor, ChunkType } from '@mastra/core'
+ import type { Processor } from '@mastra/core/processors'
+ import type { ChunkType } from '@mastra/core/stream'
 
  export class WordCounter implements Processor {
    id = 'word-counter'
@@ -604,7 +730,7 @@ export class WordCounter implements Processor {
 
      // Count words in text chunks
      if (part.type === 'text-delta') {
-       const words = part.textDelta.split(/\s+/).filter(Boolean)
+       const words = part.payload.text.split(/\s+/).filter(Boolean)
        state.wordCount += words.length
      }
 
@@ -618,6 +744,142 @@ export class WordCounter implements Processor {
  }
  ```
 
+ ## State lifecycle
+
+ Every processor receives a `state` object in `processOutputStream`, `processOutputStep`, `processOutputResult`, and `processAPIError`. State has three important properties:
+
+ - **Per-processor**: Each processor gets its own `state` object, keyed by the processor's `id`. Two processors with different ids cannot read or overwrite each other's state.
+ - **Per-request**: A fresh state object is created at the start of every `agent.generate()` or `agent.stream()` call. State does not leak between requests or between users.
+ - **Shared across methods**: Within one request, the same `state` object is passed to `processOutputStream` (for every chunk), `processOutputStep` (after every LLM step), `processOutputResult` (once at the end), and `processAPIError` (when an LLM call fails). Accumulate data in `processOutputStream` and read it in `processOutputResult` or `processAPIError`.
+
+ Initialize fields defensively on first access, because `state` starts as an empty object:
+
+ ```typescript
+ import type { Processor } from '@mastra/core/processors'
+
+ export class WordCounter implements Processor {
+   id = 'word-counter'
+
+   async processOutputStream({ part, state }) {
+     state.wordCount ??= 0
+     if (part.type === 'text-delta') {
+       state.wordCount += part.payload.text.split(/\s+/).filter(Boolean).length
+     }
+     return part
+   }
+ }
+ ```
+
+ ## Aborting and tripwire chunks
+
+ The `abort` function on each method throws a `TripWire` error that stops processing and emits a `tripwire` chunk on the output stream. Clients can detect the chunk to distinguish a blocked response from a normal finish.
+
+ ```typescript
+ abort('Blocked content detected', { retry: false, metadata: { category: 'pii' } })
+ ```
+
+ - `reason`: A human-readable explanation. Appears as `tripwire.payload.reason`.
+ - `retry`: When `true`, the agent retries the same step with `reason` fed back as feedback. Retries only run when `maxProcessorRetries` is set on the agent or call; otherwise the request aborts. When `errorProcessors` are configured, `maxProcessorRetries` defaults to `10` for that call.
+ - `metadata`: Optional structured data attached to the `tripwire` chunk for downstream consumers.
+
+ The emitted `tripwire` chunk has the shape:
+
+ ```typescript
+ type TripwireChunk = {
+   type: 'tripwire'
+   runId: string
+   from: 'AGENT'
+   payload: {
+     reason: string
+     retry?: boolean
+     metadata?: unknown
+     processorId: string
+   }
+ }
+ ```
+
+ In non-streaming calls (`agent.generate()`), the result exposes the same information as `result.tripwire` and `result.finishReason === 'other'`.
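Given the chunk shape above, a client-side check might look like this (the type guard is a hypothetical helper, not part of the Mastra API):

```typescript
// Mirrors the documented tripwire chunk shape.
type TripwireChunk = {
  type: 'tripwire'
  runId: string
  from: 'AGENT'
  payload: { reason: string; retry?: boolean; metadata?: unknown; processorId: string }
}

// Hypothetical type guard for distinguishing a blocked response from a normal finish.
function isTripwireChunk(chunk: { type: string }): chunk is TripwireChunk {
  return chunk.type === 'tripwire'
}

function describeBlock(chunk: { type: string }): string | undefined {
  if (isTripwireChunk(chunk)) {
    return `Blocked by ${chunk.payload.processorId}: ${chunk.payload.reason}`
  }
  return undefined
}
```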
+
+ ## Emitting custom data chunks
+
+ Processors with access to `writer` can stream custom `data-*` chunks to the client by calling `writer.custom(chunk)`. Tools can do the same through their own writer. This is the only way for a processor to emit content outside of normal text and tool chunks.
+
+ ```typescript
+ await writer.custom({
+   type: 'data-moderation',
+   runId,
+   from: 'AGENT',
+   data: { level: 'warn', reason: 'Possibly unsafe' },
+ })
+ ```
+
+ By default, processors do **not** see `data-*` chunks in `processOutputStream` so they don't accidentally process tool telemetry or their own output. Opt in by setting `processDataParts: true` on the processor:
+
+ ```typescript
+ class ModerationCollector implements Processor {
+   id = 'moderation-collector'
+   processDataParts = true
+
+   async processOutputStream({ part, state }) {
+     if (part.type === 'data-moderation') {
+       state.warnings ??= []
+       state.warnings.push(part.data)
+     }
+     return part
+   }
+ }
+ ```
+
+ The chunk `type` must start with `data-` to be treated as a custom data chunk. Returning `null` or `undefined` from `processOutputStream` still drops the chunk, so a processor can inspect, modify, or filter custom data the same way it filters text chunks.
+
+ ## Configuring processors on an agent
+
+ Processors are attached to an agent through three arrays:
+
+ ```typescript
+ import { Agent } from '@mastra/core/agent'
+ import { PrefillErrorHandler } from '@mastra/core/processors'
+
+ const agent = new Agent({
+   name: 'support-agent',
+   model: 'openai/gpt-5',
+   instructions: '...',
+   inputProcessors: [new ContentFilter(['secret'])],
+   outputProcessors: [new WordCounter()],
+   errorProcessors: [new PrefillErrorHandler()],
+   maxProcessorRetries: 3,
+ })
+ ```
+
+ - `inputProcessors`: Run before the LLM. Receives input messages.
+ - `outputProcessors`: Run during or after the LLM response. Receives output chunks or messages.
+ - `errorProcessors`: Run when the LLM API call throws. Receives the raw error.
+
+ Each array also accepts a function so processors can be built per-request from `RequestContext`:
+
+ ```typescript
+ new Agent({
+   // ...
+   inputProcessors: ({ requestContext }) => {
+     const blockedWords = requestContext.get('blockedWords') ?? []
+     return [new ContentFilter(blockedWords)]
+   },
+ })
+ ```
+
+ ### Per-call overrides
+
+ `agent.generate()` and `agent.stream()` accept `inputProcessors`, `outputProcessors`, `errorProcessors`, and `maxProcessorRetries`. When any processor array is set on the call, it **replaces** the matching array configured on the agent for that request. Memory, workspace, skill, channel, and browser processors that Mastra adds automatically are always preserved and run around your array.
+
+ ```typescript
+ await agent.stream('Summarize this', {
+   outputProcessors: [new StreamFilter()],
+   maxProcessorRetries: 5,
+ })
+ ```
+
+ `maxProcessorRetries` passed on the call overrides the agent default. If neither is set, processor-requested retries are treated as aborts.
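Because each retry costs another LLM call, processors that request them usually cap attempts with `retryCount`. A minimal sketch (simplified argument shape, not the real `ProcessOutputStepArgs`):

```typescript
// Simplified stand-in for a processor step's arguments — illustration only.
type StepArgs = {
  text: string
  retryCount: number
  abort: (reason?: string, options?: { retry?: boolean }) => never
}

const MAX_RETRIES = 2

// Requests a retry for low-quality output, but only while under the cap.
function checkQuality({ text, retryCount, abort }: StepArgs): void {
  const tooShort = text.trim().length < 20
  if (tooShort && retryCount < MAX_RETRIES) {
    // Ask the LLM to try again, feeding the reason back as guidance.
    abort('Response too short; please elaborate.', { retry: true })
  }
  // Past MAX_RETRIES attempts, let the response through rather than loop forever.
}
```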
+
  ## Related
 
  - [Processors overview](https://mastra.ai/docs/agents/processors): Conceptual guide to processors
@@ -120,7 +120,7 @@ const stream = await agent.stream('message for agent')
 
  **options.memory.resource** (`string`): Identifier for the user or resource associated with the thread.
 
- **options.memory.options** (`MemoryConfig`): Configuration for memory behavior including lastMessages, readOnly, semanticRecall, and workingMemory.
+ **options.memory.options** (`MemoryConfig`): Configuration for memory behavior including lastMessages, readOnly, semanticRecall, workingMemory, and filterIncompleteToolCalls.
 
  **options.onFinish** (`StreamTextOnFinishCallback<any> | StreamObjectOnFinishCallback<OUTPUT>`): Callback function called when streaming completes. Receives the final result.
 
@@ -0,0 +1,168 @@
+ # ModalSandbox
+
+ Executes commands in isolated [Modal](https://modal.com) cloud sandboxes. Provides secure, ephemeral environments backed by Modal's infrastructure.
+
+ > **Info:** For interface details, see [WorkspaceSandbox interface](https://mastra.ai/reference/workspace/sandbox).
+
+ ## Installation
+
+ **npm**:
+
+ ```bash
+ npm install @mastra/modal
+ ```
+
+ **pnpm**:
+
+ ```bash
+ pnpm add @mastra/modal
+ ```
+
+ **Yarn**:
+
+ ```bash
+ yarn add @mastra/modal
+ ```
+
+ **Bun**:
+
+ ```bash
+ bun add @mastra/modal
+ ```
+
+ ## Usage
+
+ Add a `ModalSandbox` to a workspace and assign it to an agent:
+
+ ```typescript
+ import { Agent } from '@mastra/core/agent'
+ import { Workspace } from '@mastra/core/workspace'
+ import { ModalSandbox } from '@mastra/modal'
+
+ const workspace = new Workspace({
+   sandbox: new ModalSandbox({
+     id: 'dev-sandbox',
+     baseImage: 'ubuntu:22.04',
+     timeoutMs: 60_000,
+   }),
+ })
+
+ const agent = new Agent({
+   id: 'dev-agent',
+   model: 'anthropic/claude-opus-4-6',
+   workspace,
+ })
+ ```
+
+ ## Authentication
+
+ Set Modal credentials via environment variables or constructor options:
+
+ ```bash
+ MODAL_TOKEN_ID=ak-...
+ MODAL_TOKEN_SECRET=as-...
+ ```
+
+ Or pass them directly:
+
+ ```typescript
+ const sandbox = new ModalSandbox({
+   tokenId: process.env.MODAL_TOKEN_ID,
+   tokenSecret: process.env.MODAL_TOKEN_SECRET,
+ })
+ ```
+
+ Get your credentials from the [Modal dashboard](https://modal.com/settings/tokens).
+
+ ## Constructor parameters
+
+ **id** (`string`): Unique identifier / name for this sandbox. Used as the Modal sandbox name so the sandbox can be reconnected on subsequent start() calls. (Default: `Auto-generated`)
+
+ **appName** (`string`): Modal App name to associate sandboxes with. (Default: `'mastra'`)
+
+ **baseImage** (`string`): Docker image to use for the sandbox. (Default: `'ubuntu:22.04'`)
+
+ **timeoutMs** (`number`): Wall-clock max lifetime in milliseconds. The sandbox is terminated when this expires, regardless of activity. Modal's maximum is 24 hours (86_400_000). (Default: `300000 (5 minutes)`)
+
+ **env** (`Record<string, string>`): Environment variables baked into the sandbox at create time.
+
+ **workdir** (`string`): Default working directory inside the sandbox.
+
+ **tokenId** (`string`): Modal token ID. Falls back to the `MODAL_TOKEN_ID` environment variable.
+
+ **tokenSecret** (`string`): Modal token secret. Falls back to the `MODAL_TOKEN_SECRET` environment variable.
+
+ **instructions** (`string | function`): Custom instructions returned by getInstructions(). Pass a string to fully replace the default, or a function to extend it.
+
+ **onStart** (`function`): Lifecycle hook called after the sandbox reaches running status.
+
+ **onStop** (`function`): Lifecycle hook called before the sandbox stops.
+
+ **onDestroy** (`function`): Lifecycle hook called before the sandbox is destroyed.
+
+ ## Properties
+
+ **id** (`string`): Sandbox instance identifier.
+
+ **name** (`string`): Provider name (`'ModalSandbox'`).
+
+ **provider** (`string`): Provider identifier (`'modal'`).
+
+ **status** (`ProviderStatus`): `'pending' | 'starting' | 'running' | 'stopping' | 'stopped' | 'destroying' | 'destroyed' | 'error'`
+
+ **modal** (`Sandbox`): The underlying Modal Sandbox instance. Throws SandboxNotReadyError if the sandbox has not been started.
+
+ **processes** (`ModalProcessManager`): Background process manager. See the [SandboxProcessManager reference](https://mastra.ai/reference/workspace/process-manager).
+
+ ## Sandbox lifecycle
+
+ - **`_start()`**: Attempts to reconnect to an existing running sandbox. If none is found, creates a sandbox from the latest snapshot (if available from a previous `_stop()`), or from the baseImage.
+ - **`_stop()`**: Snapshots the filesystem, then terminates the sandbox. The snapshot is kept in memory on the same instance for future starts.
+ - **`_destroy()`**: Terminates the sandbox and discards any snapshot.
+
+ ```typescript
+ const sandbox = new ModalSandbox({
+   id: 'dev-sandbox',
+   baseImage: 'ubuntu:22.04',
+   timeoutMs: 300_000,
+ })
+
+ await sandbox._start()
+ await sandbox.processes.spawn('npm install')
+ await sandbox._stop()
+ await sandbox._start()
+ ```
+
+ ## Background processes
+
+ `ModalSandbox` includes a built-in process manager for spawning and managing background processes. Each `spawn()` call creates a new `ContainerProcess` via the Modal SDK's `Sandbox.exec()` API.
+
+ ```typescript
+ const sandbox = new ModalSandbox({ id: 'dev-sandbox' })
+ await sandbox._start()
+
+ // Spawn a background process
+ const handle = await sandbox.processes.spawn('node script.js', {
+   env: { PORT: '3000' },
+   onStdout: data => console.log(data),
+ })
+
+ // Wait for the process to complete
+ const result = await handle.wait()
+ console.log(result.exitCode)
+
+ // Kill the process
+ await handle.kill()
+ ```
+
+ > **Note:** `sendStdin()` is not supported — the Modal JS SDK does not expose stdin on `Sandbox.exec()`.
+
+ See the [`SandboxProcessManager` reference](https://mastra.ai/reference/workspace/process-manager) for the full API.
+
+ ## Related
+
+ - [SandboxProcessManager reference](https://mastra.ai/reference/workspace/process-manager)
+ - [WorkspaceSandbox interface](https://mastra.ai/reference/workspace/sandbox)
+ - [E2BSandbox reference](https://mastra.ai/reference/workspace/e2b-sandbox)
+ - [DaytonaSandbox reference](https://mastra.ai/reference/workspace/daytona-sandbox)
+ - [Workspace overview](https://mastra.ai/docs/workspace/overview)
@@ -4,7 +4,7 @@
 
  Abstract base class for managing background processes in sandboxes. Provides methods to spawn processes, list them, get handles by PID, and kill them.
 
- [`BlaxelSandbox`](https://mastra.ai/reference/workspace/blaxel-sandbox), [`DaytonaSandbox`](https://mastra.ai/reference/workspace/daytona-sandbox), [`E2BSandbox`](https://mastra.ai/reference/workspace/e2b-sandbox), and [`LocalSandbox`](https://mastra.ai/reference/workspace/local-sandbox) all include built-in process managers. You don't need to instantiate this class directly unless you're building a custom sandbox provider.
+ [`BlaxelSandbox`](https://mastra.ai/reference/workspace/blaxel-sandbox), [`DaytonaSandbox`](https://mastra.ai/reference/workspace/daytona-sandbox), [`E2BSandbox`](https://mastra.ai/reference/workspace/e2b-sandbox), [`ModalSandbox`](https://mastra.ai/reference/workspace/modal-sandbox), and [`LocalSandbox`](https://mastra.ai/reference/workspace/local-sandbox) all include built-in process managers. You don't need to instantiate this class directly unless you're building a custom sandbox provider.
 
  ## Usage example
 
package/CHANGELOG.md CHANGED
@@ -1,5 +1,12 @@
  # @mastra/mcp-docs-server
 
+ ## 1.1.28-alpha.1
+
+ ### Patch Changes
+
+ - Updated dependencies [[`733bf53`](https://github.com/mastra-ai/mastra/commit/733bf53d9352aedd3ef38c3d501edb275b65b43c), [`5405b3b`](https://github.com/mastra-ai/mastra/commit/5405b3b35325c5b8fb34fc7ac109bd2feb7bb6fe), [`c321127`](https://github.com/mastra-ai/mastra/commit/c3211275fc195de9ad1ead2746b354beb8eae6e8), [`a07bcef`](https://github.com/mastra-ai/mastra/commit/a07bcefea77c03d6d322caad973dca49b4b15fa1), [`b084a80`](https://github.com/mastra-ai/mastra/commit/b084a800db0f82d62e1fc3d6e3e3480da1ba5a53), [`82b7a96`](https://github.com/mastra-ai/mastra/commit/82b7a964169636c1d1e0c694fc892a213b0179d5), [`df97812`](https://github.com/mastra-ai/mastra/commit/df97812bd949dcafeb074b80ecab501724b49c3b), [`8bbe360`](https://github.com/mastra-ai/mastra/commit/8bbe36042af7fc4be0244dffd8913f6795179421), [`f6b8ba8`](https://github.com/mastra-ai/mastra/commit/f6b8ba8dbf533b7a8db90c72b6805ddc804a3a72), [`a07bcef`](https://github.com/mastra-ai/mastra/commit/a07bcefea77c03d6d322caad973dca49b4b15fa1)]:
+   - @mastra/core@1.28.0-alpha.0
+
  ## 1.1.27
 
  ### Patch Changes