@mastra/libsql 1.6.1-alpha.0 → 1.6.2-alpha.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (33)
  1. package/CHANGELOG.md +22 -0
  2. package/dist/docs/SKILL.md +50 -0
  3. package/dist/docs/assets/SOURCE_MAP.json +6 -0
  4. package/dist/docs/references/docs-agents-agent-approval.md +558 -0
  5. package/dist/docs/references/docs-agents-agent-memory.md +209 -0
  6. package/dist/docs/references/docs-agents-network-approval.md +275 -0
  7. package/dist/docs/references/docs-agents-networks.md +299 -0
  8. package/dist/docs/references/docs-memory-memory-processors.md +314 -0
  9. package/dist/docs/references/docs-memory-message-history.md +260 -0
  10. package/dist/docs/references/docs-memory-overview.md +45 -0
  11. package/dist/docs/references/docs-memory-semantic-recall.md +272 -0
  12. package/dist/docs/references/docs-memory-storage.md +261 -0
  13. package/dist/docs/references/docs-memory-working-memory.md +400 -0
  14. package/dist/docs/references/docs-observability-overview.md +70 -0
  15. package/dist/docs/references/docs-observability-tracing-exporters-default.md +209 -0
  16. package/dist/docs/references/docs-rag-retrieval.md +515 -0
  17. package/dist/docs/references/docs-workflows-snapshots.md +238 -0
  18. package/dist/docs/references/guides-agent-frameworks-ai-sdk.md +140 -0
  19. package/dist/docs/references/reference-core-getMemory.md +50 -0
  20. package/dist/docs/references/reference-core-listMemory.md +56 -0
  21. package/dist/docs/references/reference-core-mastra-class.md +66 -0
  22. package/dist/docs/references/reference-memory-memory-class.md +147 -0
  23. package/dist/docs/references/reference-storage-composite.md +235 -0
  24. package/dist/docs/references/reference-storage-dynamodb.md +282 -0
  25. package/dist/docs/references/reference-storage-libsql.md +135 -0
  26. package/dist/docs/references/reference-vectors-libsql.md +305 -0
  27. package/dist/index.cjs +14 -3
  28. package/dist/index.cjs.map +1 -1
  29. package/dist/index.js +14 -3
  30. package/dist/index.js.map +1 -1
  31. package/dist/storage/domains/memory/index.d.ts.map +1 -1
  32. package/dist/vector/index.d.ts.map +1 -1
  33. package/package.json +5 -5
package/CHANGELOG.md CHANGED
@@ -1,5 +1,27 @@
  # @mastra/libsql

+ ## 1.6.2-alpha.0
+
+ ### Patch Changes
+
+ - Fixed observation activation to always preserve a minimum amount of context. Previously, swapping buffered observation chunks could unexpectedly drop the context window to near-zero tokens. ([#13476](https://github.com/mastra-ai/mastra/pull/13476))
+
+ - Add a clear runtime error when `queryVector` is omitted for vector stores that require a vector for queries. Previously, omitting `queryVector` would produce confusing SDK-level errors; now each store throws a structured `MastraError` with `ErrorCategory.USER` explaining that metadata-only queries are not supported by that backend. ([#13286](https://github.com/mastra-ai/mastra/pull/13286))
+
+ - Updated dependencies [[`df170fd`](https://github.com/mastra-ai/mastra/commit/df170fd139b55f845bfd2de8488b16435bd3d0da), [`ae55343`](https://github.com/mastra-ai/mastra/commit/ae5534397fc006fd6eef3e4f80c235bcdc9289ef), [`c290cec`](https://github.com/mastra-ai/mastra/commit/c290cec5bf9107225de42942b56b487107aa9dce), [`f03e794`](https://github.com/mastra-ai/mastra/commit/f03e794630f812b56e95aad54f7b1993dc003add), [`aa4a5ae`](https://github.com/mastra-ai/mastra/commit/aa4a5aedb80d8d6837bab8cbb2e301215d1ba3e9), [`de3f584`](https://github.com/mastra-ai/mastra/commit/de3f58408752a8d80a295275c7f23fc306cf7f4f), [`d3fb010`](https://github.com/mastra-ai/mastra/commit/d3fb010c98f575f1c0614452667396e2653815f6), [`702ee1c`](https://github.com/mastra-ai/mastra/commit/702ee1c41be67cc532b4dbe89bcb62143508f6f0), [`f495051`](https://github.com/mastra-ai/mastra/commit/f495051eb6496a720f637fc85b6d69941c12554c), [`e622f1d`](https://github.com/mastra-ai/mastra/commit/e622f1d3ab346a8e6aca6d1fe2eac99bd961e50b), [`861f111`](https://github.com/mastra-ai/mastra/commit/861f11189211b20ddb70d8df81a6b901fc78d11e), [`00f43e8`](https://github.com/mastra-ai/mastra/commit/00f43e8e97a80c82b27d5bd30494f10a715a1df9), [`1b6f651`](https://github.com/mastra-ai/mastra/commit/1b6f65127d4a0d6c38d0a1055cb84527db529d6b), [`96a1702`](https://github.com/mastra-ai/mastra/commit/96a1702ce362c50dda20c8b4a228b4ad1a36a17a), [`cb9f921`](https://github.com/mastra-ai/mastra/commit/cb9f921320913975657abb1404855d8c510f7ac5), [`114e7c1`](https://github.com/mastra-ai/mastra/commit/114e7c146ac682925f0fb37376c1be70e5d6e6e5), [`1b6f651`](https://github.com/mastra-ai/mastra/commit/1b6f65127d4a0d6c38d0a1055cb84527db529d6b), [`72df4a8`](https://github.com/mastra-ai/mastra/commit/72df4a8f9bf1a20cfd3d9006a4fdb597ad56d10a)]:
+   - @mastra/core@1.8.0-alpha.0
+
+ ## 1.6.1
+
+ ### Patch Changes
+
+ - Fixed non-deterministic query ordering by adding secondary sort on id for dataset and dataset item queries. ([#13399](https://github.com/mastra-ai/mastra/pull/13399))
+
+ - Prompt blocks can now define their own variables schema (`requestContextSchema`), allowing you to create reusable prompt blocks with typed variable placeholders. The server now correctly computes and returns draft/published status for prompt blocks. Existing databases are automatically migrated when upgrading. ([#13351](https://github.com/mastra-ai/mastra/pull/13351))
+
+ - Updated dependencies [[`24284ff`](https://github.com/mastra-ai/mastra/commit/24284ffae306ddf0ab83273e13f033520839ef40), [`f5097cc`](https://github.com/mastra-ai/mastra/commit/f5097cc8a813c82c3378882c31178320cadeb655), [`71e237f`](https://github.com/mastra-ai/mastra/commit/71e237fa852a3ad9a50a3ddb3b5f3b20b9a8181c), [`13a291e`](https://github.com/mastra-ai/mastra/commit/13a291ebb9f9bca80befa0d9166b916bb348e8e9), [`397af5a`](https://github.com/mastra-ai/mastra/commit/397af5a69f34d4157f51a7c8da3f1ded1e1d611c), [`d4701f7`](https://github.com/mastra-ai/mastra/commit/d4701f7e24822b081b70f9c806c39411b1a712e7), [`2b40831`](https://github.com/mastra-ai/mastra/commit/2b40831dcca2275c9570ddf09b7f25ba3e8dc7fc), [`6184727`](https://github.com/mastra-ai/mastra/commit/6184727e812bf7a65cee209bacec3a2f5a16e923), [`0c338b8`](https://github.com/mastra-ai/mastra/commit/0c338b87362dcd95ff8191ca00df645b6953f534), [`6f6385b`](https://github.com/mastra-ai/mastra/commit/6f6385be5b33687cd21e71fc27e972e6928bb34c), [`14aba61`](https://github.com/mastra-ai/mastra/commit/14aba61b9cff76d72bc7ef6f3a83ae2c5d059193), [`dd9dd1c`](https://github.com/mastra-ai/mastra/commit/dd9dd1c9ae32ae79093f8c4adde1732ac6357233)]:
+   - @mastra/core@1.7.0
+
  ## 1.6.1-alpha.0

  ### Patch Changes
package/dist/docs/SKILL.md ADDED
@@ -0,0 +1,50 @@
+ ---
+ name: mastra-libsql
+ description: Documentation for @mastra/libsql. Use when working with @mastra/libsql APIs, configuration, or implementation.
+ metadata:
+   package: "@mastra/libsql"
+   version: "1.6.2-alpha.0"
+ ---
+
+ ## When to use
+
+ Use this skill whenever you are working with @mastra/libsql to obtain domain-specific knowledge.
+
+ ## How to use
+
+ Read the individual reference documents for detailed explanations and code examples.
+
+ ### Docs
+
+ - [Agent Approval](references/docs-agents-agent-approval.md) - Learn how to require approvals, suspend tool execution, and automatically resume suspended tools while keeping humans in control of agent workflows.
+ - [Agent Memory](references/docs-agents-agent-memory.md) - Learn how to add memory to agents to store message history and maintain context across interactions.
+ - [Network Approval](references/docs-agents-network-approval.md) - Learn how to require approvals, suspend execution, and resume suspended networks while keeping humans in control of agent network workflows.
+ - [Agent Networks](references/docs-agents-networks.md) - Learn how to coordinate multiple agents, workflows, and tools using agent networks for complex, non-deterministic task execution.
+ - [Memory Processors](references/docs-memory-memory-processors.md) - Learn how to use memory processors in Mastra to filter, trim, and transform messages before they're sent to the language model to manage context window limits.
+ - [Message History](references/docs-memory-message-history.md) - Learn how to configure message history in Mastra to store recent messages from the current conversation.
+ - [Memory overview](references/docs-memory-overview.md) - Learn how Mastra's memory system works with working memory, message history, semantic recall, and observational memory.
+ - [Semantic Recall](references/docs-memory-semantic-recall.md) - Learn how to use semantic recall in Mastra to retrieve relevant messages from past conversations using vector search and embeddings.
+ - [Storage](references/docs-memory-storage.md) - Configure storage for Mastra's memory system to persist conversations, workflows, and traces.
+ - [Working Memory](references/docs-memory-working-memory.md) - Learn how to configure working memory in Mastra to store persistent user data and preferences.
+ - [Observability Overview](references/docs-observability-overview.md) - Monitor and debug applications with Mastra's Observability features.
+ - [Default Exporter](references/docs-observability-tracing-exporters-default.md) - Store traces locally for development and debugging
+ - [Retrieval, Semantic Search, Reranking](references/docs-rag-retrieval.md) - Guide on retrieval processes in Mastra's RAG systems, including semantic search, filtering, and re-ranking.
+ - [Snapshots](references/docs-workflows-snapshots.md) - Learn how to save and resume workflow execution state with snapshots in Mastra
+
+ ### Guides
+
+ - [AI SDK](references/guides-agent-frameworks-ai-sdk.md) - Use Mastra processors and memory with the Vercel AI SDK
+
+ ### Reference
+
+ - [Reference: Mastra.getMemory()](references/reference-core-getMemory.md) - Documentation for the `Mastra.getMemory()` method in Mastra, which retrieves a registered memory instance by its registry key.
+ - [Reference: Mastra.listMemory()](references/reference-core-listMemory.md) - Documentation for the `Mastra.listMemory()` method in Mastra, which returns all registered memory instances.
+ - [Reference: Mastra Class](references/reference-core-mastra-class.md) - Documentation for the `Mastra` class in Mastra, the core entry point for managing agents, workflows, MCP servers, and server endpoints.
+ - [Reference: Memory Class](references/reference-memory-memory-class.md) - Documentation for the `Memory` class in Mastra, which provides a robust system for managing conversation history and thread-based message storage.
+ - [Reference: Composite Storage](references/reference-storage-composite.md) - Documentation for combining multiple storage backends in Mastra.
+ - [Reference: DynamoDB Storage](references/reference-storage-dynamodb.md) - Documentation for the DynamoDB storage implementation in Mastra, using a single-table design with ElectroDB.
+ - [Reference: libSQL Storage](references/reference-storage-libsql.md) - Documentation for the libSQL storage implementation in Mastra.
+ - [Reference: libSQL Vector Store](references/reference-vectors-libsql.md) - Documentation for the LibSQLVector class in Mastra, which provides vector search using libSQL with vector extensions.
+
+
+ Read [assets/SOURCE_MAP.json](assets/SOURCE_MAP.json) for source code references.
package/dist/docs/assets/SOURCE_MAP.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "version": "1.6.2-alpha.0",
+   "package": "@mastra/libsql",
+   "exports": {},
+   "modules": {}
+ }
package/dist/docs/references/docs-agents-agent-approval.md ADDED
@@ -0,0 +1,558 @@
+ # Agent Approval
+
+ Agents sometimes require the same [human-in-the-loop](https://mastra.ai/docs/workflows/human-in-the-loop) oversight used in workflows when calling tools that handle sensitive operations, like deleting resources or running long processes. With agent approval you can suspend a tool call and provide feedback to the user, or approve or decline a tool call based on application-specific conditions.
+
+ ## Tool call approval
+
+ Tool call approval can be enabled at the agent level, applying to every tool the agent uses, or at the tool level, which provides more granular control over individual tool calls.
+
+ ### Storage
+
+ Agent approval uses a snapshot to capture the state of the request. Ensure you've enabled a storage provider in your main Mastra instance. If storage isn't enabled, you'll see an error indicating that the snapshot was not found.
+
+ ```typescript
+ import { Mastra } from '@mastra/core/mastra'
+ import { LibSQLStore } from '@mastra/libsql'
+
+ export const mastra = new Mastra({
+   storage: new LibSQLStore({
+     id: 'mastra-storage',
+     url: ':memory:',
+   }),
+ })
+ ```
+
+ ## Agent-level approval
+
+ When calling an agent using `.stream()`, set `requireToolApproval` to `true`. This prevents the agent from calling any of the tools defined in its configuration until you approve.
+
+ ```typescript
+ const stream = await agent.stream("What's the weather in London?", {
+   requireToolApproval: true,
+ })
+ ```
+
+ ### Approving tool calls
+
+ To approve a tool call, call `approveToolCall` on the `agent`, passing in the `runId` of the stream. This lets the agent know it's now OK to call its tools.
+
+ ```typescript
+ const handleApproval = async () => {
+   const approvedStream = await agent.approveToolCall({ runId: stream.runId })
+
+   for await (const chunk of approvedStream.textStream) {
+     process.stdout.write(chunk)
+   }
+   process.stdout.write('\n')
+ }
+ ```
+
+ ### Declining tool calls
+
+ To decline a tool call, call `declineToolCall` on the `agent`. You will still see the streamed response from the agent, but it won't call its tools.
+
+ ```typescript
+ const handleDecline = async () => {
+   const declinedStream = await agent.declineToolCall({ runId: stream.runId })
+
+   for await (const chunk of declinedStream.textStream) {
+     process.stdout.write(chunk)
+   }
+   process.stdout.write('\n')
+ }
+ ```
+
+ ## Tool approval with generate()
+
+ Tool approval also works with the `generate()` method for non-streaming use cases. When using `generate()` with `requireToolApproval: true`, the method returns immediately when a tool requires approval instead of executing it.
+
+ ### How it works
+
+ When a tool requires approval during a `generate()` call, the response includes:
+
+ - `finishReason: 'suspended'` - indicates the agent is waiting for approval
+ - `suspendPayload` - contains tool call details (`toolCallId`, `toolName`, `args`)
+ - `runId` - needed to approve or decline the tool call
+
+ ### Approving tool calls
+
+ To approve a tool call with `generate()`, use the `approveToolCallGenerate` method:
+
+ ```typescript
+ const output = await agent.generate('Find user John', {
+   requireToolApproval: true,
+ })
+
+ if (output.finishReason === 'suspended') {
+   console.log('Tool requires approval:', output.suspendPayload.toolName)
+   console.log('Arguments:', output.suspendPayload.args)
+
+   // Approve the tool call and get the final result
+   const result = await agent.approveToolCallGenerate({
+     runId: output.runId,
+     toolCallId: output.suspendPayload.toolCallId,
+   })
+
+   console.log('Final result:', result.text)
+ }
+ ```
+
+ ### Declining tool calls
+
+ To decline a tool call, use the `declineToolCallGenerate` method:
+
+ ```typescript
+ if (output.finishReason === 'suspended') {
+   const result = await agent.declineToolCallGenerate({
+     runId: output.runId,
+     toolCallId: output.suspendPayload.toolCallId,
+   })
+
+   // Agent will respond acknowledging the declined tool
+   console.log(result.text)
+ }
+ ```
+
+ ### Stream vs Generate comparison
+
+ | Aspect             | `stream()`                   | `generate()`                                     |
+ | ------------------ | ---------------------------- | ------------------------------------------------ |
+ | Response type      | Streaming chunks             | Complete response                                |
+ | Approval detection | `tool-call-approval` chunk   | `finishReason: 'suspended'`                      |
+ | Approve method     | `approveToolCall({ runId })` | `approveToolCallGenerate({ runId, toolCallId })` |
+ | Decline method     | `declineToolCall({ runId })` | `declineToolCallGenerate({ runId, toolCallId })` |
+ | Result             | Stream to iterate            | Full output object                               |
+
+ ## Tool-level approval
+
+ There are two types of tool-level approval. The first uses `requireApproval`, a property on the tool definition (as opposed to `requireToolApproval`, which is a parameter passed to `agent.stream()`). The second uses `suspend`, letting the tool provide context or confirmation prompts so the user can decide whether the tool call should continue.
+
+ ### Tool approval using `requireApproval`
+
+ In this approach, `requireApproval` is configured on the tool definition (shown below) rather than on the agent.
+
+ ```typescript
+ export const testTool = createTool({
+   id: 'test-tool',
+   description: 'Fetches weather for a location',
+   inputSchema: z.object({
+     location: z.string(),
+   }),
+   outputSchema: z.object({
+     weather: z.string(),
+   }),
+   resumeSchema: z.object({
+     approved: z.boolean(),
+   }),
+   execute: async inputData => {
+     const response = await fetch(`https://wttr.in/${inputData.location}?format=3`)
+     const weather = await response.text()
+
+     return { weather }
+   },
+   requireApproval: true,
+ })
+ ```
+
+ When `requireApproval` is true for a tool, the stream will include chunks of type `tool-call-approval` to indicate that the call is paused. To continue the call, invoke `resumeStream` with data matching the tool's `resumeSchema` and the `runId`.
+
+ ```typescript
+ const stream = await agent.stream("What's the weather in London?")
+
+ for await (const chunk of stream.fullStream) {
+   if (chunk.type === 'tool-call-approval') {
+     console.log('Approval required.')
+   }
+ }
+
+ const handleResume = async () => {
+   const resumedStream = await agent.resumeStream({ approved: true }, { runId: stream.runId })
+
+   for await (const chunk of resumedStream.textStream) {
+     process.stdout.write(chunk)
+   }
+   process.stdout.write('\n')
+ }
+ ```
+
+ ### Tool approval using `suspend`
+
+ With this approach, neither the agent nor the tool uses `requireApproval`. Instead, the tool implementation calls `suspend` to pause execution and return context or confirmation prompts to the user.
+
+ ```typescript
+ export const testToolB = createTool({
+   id: 'test-tool-b',
+   description: 'Fetches weather for a location',
+   inputSchema: z.object({
+     location: z.string(),
+   }),
+   outputSchema: z.object({
+     weather: z.string(),
+   }),
+   resumeSchema: z.object({
+     approved: z.boolean(),
+   }),
+   suspendSchema: z.object({
+     reason: z.string(),
+   }),
+   execute: async (inputData, context) => {
+     const { resumeData: { approved } = {}, suspend } = context?.agent ?? {}
+
+     if (!approved) {
+       return suspend?.({ reason: 'Approval required.' })
+     }
+
+     const response = await fetch(`https://wttr.in/${inputData.location}?format=3`)
+     const weather = await response.text()
+
+     return { weather }
+   },
+ })
+ ```
+
+ With this approach the stream will include a `tool-call-suspended` chunk, and the `suspendPayload` will contain the `reason` defined by the tool's `suspendSchema`. To continue the call, invoke `resumeStream` with data matching the tool's `resumeSchema` and the `runId`.
+
+ ```typescript
+ const stream = await agent.stream("What's the weather in London?")
+
+ for await (const chunk of stream.fullStream) {
+   if (chunk.type === 'tool-call-suspended') {
+     console.log(chunk.payload.suspendPayload)
+   }
+ }
+
+ const handleResume = async () => {
+   const resumedStream = await agent.resumeStream({ approved: true }, { runId: stream.runId })
+
+   for await (const chunk of resumedStream.textStream) {
+     process.stdout.write(chunk)
+   }
+   process.stdout.write('\n')
+ }
+ ```
+
+ ## Automatic tool resumption
+
+ When using tools that call `suspend()`, you can enable automatic resumption so the agent resumes suspended tools based on the user's next message. This creates a conversational flow where users provide the required information naturally, without your application needing to call `resumeStream()` explicitly.
+
+ ### Enabling auto-resume
+
+ Set `autoResumeSuspendedTools` to `true` in the agent's default options or when calling `stream()`:
+
+ ```typescript
+ import { Agent } from '@mastra/core/agent'
+ import { Memory } from '@mastra/memory'
+
+ // Option 1: In agent configuration
+ const agent = new Agent({
+   id: 'my-agent',
+   name: 'My Agent',
+   instructions: 'You are a helpful assistant',
+   model: 'openai/gpt-4o-mini',
+   tools: { weatherTool },
+   memory: new Memory(),
+   defaultOptions: {
+     autoResumeSuspendedTools: true,
+   },
+ })
+
+ // Option 2: Per-request
+ const stream = await agent.stream("What's the weather?", {
+   autoResumeSuspendedTools: true,
+ })
+ ```
+
+ ### How it works
+
+ When `autoResumeSuspendedTools` is enabled:
+
+ 1. A tool suspends execution by calling `suspend()` with a payload (e.g., requesting more information)
+
+ 2. The suspension is persisted to memory along with the conversation
+
+ 3. When the user sends their next message on the same thread, the agent:
+
+    - Detects the suspended tool from message history
+    - Extracts `resumeData` from the user's message based on the tool's `resumeSchema`
+    - Automatically resumes the tool with the extracted data
+
+ ### Example
+
+ ```typescript
+ import { createTool } from '@mastra/core/tools'
+ import { z } from 'zod'
+
+ export const weatherTool = createTool({
+   id: 'weather-info',
+   description: 'Fetches weather information for a city',
+   suspendSchema: z.object({
+     message: z.string(),
+   }),
+   resumeSchema: z.object({
+     city: z.string(),
+   }),
+   execute: async (_inputData, context) => {
+     // Check if this is a resume with data
+     if (!context?.agent?.resumeData) {
+       // First call - suspend and ask for the city
+       return context?.agent?.suspend({
+         message: 'What city do you want to know the weather for?',
+       })
+     }
+
+     // Resume call - city was extracted from user's message
+     const { city } = context.agent.resumeData
+     const response = await fetch(`https://wttr.in/${city}?format=3`)
+     const weather = await response.text()
+
+     return { city, weather }
+   },
+ })
+
+ const agent = new Agent({
+   id: 'my-agent',
+   name: 'My Agent',
+   instructions: 'You are a helpful assistant',
+   model: 'openai/gpt-4o-mini',
+   tools: { weatherTool },
+   memory: new Memory(),
+   defaultOptions: {
+     autoResumeSuspendedTools: true,
+   },
+ })
+
+ const stream = await agent.stream("What's the weather like?")
+
+ for await (const chunk of stream.fullStream) {
+   if (chunk.type === 'tool-call-suspended') {
+     console.log(chunk.payload.suspendPayload)
+   }
+ }
+
+ const handleResume = async () => {
+   const resumedStream = await agent.stream('San Francisco')
+
+   for await (const chunk of resumedStream.textStream) {
+     process.stdout.write(chunk)
+   }
+   process.stdout.write('\n')
+ }
+ ```
+
+ **Conversation flow:**
+
+ ```text
+ User: "What's the weather like?"
+ Agent: "What city do you want to know the weather for?"
+
+ User: "San Francisco"
+ Agent: "The weather in San Francisco is: San Francisco: ☀️ +72°F"
+ ```
+
+ The second message automatically resumes the suspended tool: the agent extracts `{ city: "San Francisco" }` from the user's message and passes it as `resumeData`.
+
+ ### Requirements
+
+ For automatic tool resumption to work:
+
+ - **Memory configured**: The agent needs memory to track suspended tools across messages
+ - **Same thread**: The follow-up message must use the same memory thread and resource identifiers
+ - **`resumeSchema` defined**: The tool must define a `resumeSchema` so the agent knows what data structure to extract from the user's message
+
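The requirements above can be sketched as a pair of calls that share the same identifiers. This is an illustrative sketch, not part of the published docs: it assumes the `memory: { thread, resource }` option shape on `stream()`, and the `thread-123` / `user-456` identifiers are hypothetical placeholders.

```typescript
// Hypothetical identifiers -- reuse the SAME values on every call so the
// agent can find the suspended tool in this thread's message history.
const memoryIds = { thread: 'thread-123', resource: 'user-456' }

// First call: the weather tool suspends and asks for a city.
const first = await agent.stream("What's the weather like?", {
  memory: memoryIds,
  autoResumeSuspendedTools: true,
})

// Follow-up on the same thread: the agent extracts resumeData
// (e.g. { city: 'San Francisco' }) from this message and resumes the tool.
const followUp = await agent.stream('San Francisco', {
  memory: memoryIds,
  autoResumeSuspendedTools: true,
})
```

If the follow-up used a different thread or resource, the suspended tool would not be found in history and no auto-resume would occur.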
+ ### Manual vs automatic resumption
+
+ | Approach                               | Use case                                                                 |
+ | -------------------------------------- | ------------------------------------------------------------------------ |
+ | Manual (`resumeStream()`)              | Programmatic control, webhooks, button clicks, external triggers         |
+ | Automatic (`autoResumeSuspendedTools`) | Conversational flows where users provide resume data in natural language |
+
+ Both approaches work with the same tool definitions. Automatic resumption triggers only when suspended tools exist in the message history and the user sends a new message on the same thread.
+
+ ## Tool approval: Supervisor pattern
+
+ The [supervisor pattern](https://mastra.ai/docs/agents/networks) lets a supervisor agent coordinate multiple subagents using `.stream()` or `.generate()`. The supervisor delegates tasks to subagents, which may use tools that require approval. When this happens, tool approvals properly propagate through the delegation chain -- the approval request surfaces at the supervisor level where you can handle it, regardless of which subagent triggered it.
+
+ ### How it works
+
+ 1. The supervisor agent delegates a task to a subagent.
+ 2. The subagent calls a tool that has `requireApproval: true` or uses `suspend()`.
+ 3. The approval request bubbles up through the delegation chain to the supervisor.
+ 4. You handle the approval or decline at the supervisor level.
+ 5. The decision propagates back down to the subagent, which continues or terminates accordingly.
+
+ ### Example
+
+ The following example creates a subagent with a database lookup tool that requires approval. The supervisor delegates to this subagent, and when the tool triggers an approval request, it surfaces in the supervisor's stream as a `tool-call-approval` chunk. You then approve the tool call using `approveToolCall` with the stream's `runId`.
+
+ ```typescript
+ import { Agent } from '@mastra/core/agent'
+ import { createTool } from '@mastra/core/tools'
+ import { Memory } from '@mastra/memory'
+ import { z } from 'zod'
+
+ // subagent with approval-required tool
+ const findUserTool = createTool({
+   id: 'find-user',
+   description: 'Finds user by ID in the database',
+   inputSchema: z.object({
+     userId: z.string(),
+   }),
+   outputSchema: z.object({
+     user: z.object({
+       id: z.string(),
+       name: z.string(),
+       email: z.string(),
+     }),
+   }),
+   requireApproval: true, // Requires approval before execution
+   execute: async input => {
+     const user = await database.findUser(input.userId)
+     return { user }
+   },
+ })
+
+ const dataAgent = new Agent({
+   id: 'data-agent',
+   name: 'Data Agent',
+   description: 'Handles database queries and user data retrieval',
+   model: 'openai/gpt-4o-mini',
+   tools: { findUserTool },
+ })
+
+ const supervisorAgent = new Agent({
+   id: 'supervisor',
+   name: 'Supervisor Agent',
+   instructions: `You coordinate data retrieval tasks.
+     Delegate to data-agent for user lookups.`,
+   model: 'openai/gpt-5.1',
+   agents: { dataAgent },
+   memory: new Memory(),
+ })
+
+ // When supervisor delegates to dataAgent and tool requires approval
+ const stream = await supervisorAgent.stream('Find user with ID 12345')
+
+ for await (const chunk of stream.fullStream) {
+   if (chunk.type === 'tool-call-approval') {
+     console.log('Tool requires approval:', chunk.payload.toolName)
+     console.log('Arguments:', chunk.payload.args)
+
+     // Approve the tool call
+     const resumeStream = await supervisorAgent.approveToolCall({
+       runId: stream.runId,
+       toolCallId: chunk.payload.toolCallId,
+     })
+
+     // Process resumed stream
+     for await (const resumeChunk of resumeStream.textStream) {
+       process.stdout.write(resumeChunk)
+     }
+   }
+ }
+ ```
+
+ ### Declining tool calls in supervisor pattern
+
+ You can also decline tool calls at the supervisor level by calling `declineToolCall`. The supervisor will respond acknowledging the declined tool without executing it:
+
+ ```typescript
+ for await (const chunk of stream.fullStream) {
+   if (chunk.type === 'tool-call-approval') {
+     console.log('Declining tool call:', chunk.payload.toolName)
+
+     // Decline the tool call
+     const declineStream = await supervisorAgent.declineToolCall({
+       runId: stream.runId,
+       toolCallId: chunk.payload.toolCallId,
+     })
+
+     // The supervisor will respond acknowledging the declined tool
+     for await (const declineChunk of declineStream.textStream) {
+       process.stdout.write(declineChunk)
+     }
+   }
+ }
+ ```
+
+ ### Using suspend() in supervisor pattern
+
+ Tools can also use [`suspend()`](#tool-approval-using-suspend) to pause execution and return context to the user. This approach works through the supervisor delegation chain the same way `requireApproval` does -- the suspension surfaces at the supervisor level:
+
+ ```typescript
+ const conditionalTool = createTool({
+   id: 'conditional-operation',
+   description: 'Performs an operation that may require confirmation',
+   inputSchema: z.object({
+     operation: z.string(),
+   }),
+   suspendSchema: z.object({
+     message: z.string(),
+   }),
+   resumeSchema: z.object({
+     confirmed: z.boolean(),
+   }),
+   execute: async (input, context) => {
+     const { resumeData } = context?.agent ?? {}
+
+     if (!resumeData?.confirmed) {
+       return context?.agent?.suspend({
+         message: `Confirm: ${input.operation}?`,
+       })
+     }
+
+     // Proceed with operation
+     return await performOperation(input.operation)
+   },
+ })
+
+ // When using this tool through a subagent in supervisor pattern
+ for await (const chunk of stream.fullStream) {
+   if (chunk.type === 'tool-call-suspended') {
+     console.log('Tool suspended:', chunk.payload.suspendPayload.message)
+
+     // Resume with confirmation
+     const resumeStream = await supervisorAgent.resumeStream(
+       { confirmed: true },
+       { runId: stream.runId },
+     )
+
+     for await (const resumeChunk of resumeStream.textStream) {
+       process.stdout.write(resumeChunk)
+     }
+   }
+ }
+ ```
+
+ ### Tool approval with generate()
+
+ Tool approval propagation also works with `generate()` in supervisor pattern:
+
+ ```typescript
+ const output = await supervisorAgent.generate('Find user with ID 12345', {
+   maxSteps: 10,
+ })
+
+ if (output.finishReason === 'suspended') {
+   console.log('Tool requires approval:', output.suspendPayload.toolName)
+
+   // Approve
+   const result = await supervisorAgent.approveToolCallGenerate({
+     runId: output.runId,
+     toolCallId: output.suspendPayload.toolCallId,
+   })
+
+   console.log('Final result:', result.text)
+ }
+ ```
+
+ ### Multi-level delegation
+
+ Tool approvals propagate through multiple levels of delegation. For example, if a supervisor delegates to subagent A, which in turn delegates to subagent B that has a tool with `requireApproval: true`, the approval request still surfaces at the top-level supervisor. You handle the approval or decline there, and the result flows back down through the entire delegation chain to the tool that requested it.
+
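The multi-level case can be sketched by nesting one subagent inside another. This is an illustrative sketch rather than part of the published docs: the agent names are hypothetical, and it reuses the `findUserTool` (with `requireApproval: true`) defined earlier on this page.

```typescript
// Level 2: subagent that owns the approval-required tool
const lookupAgent = new Agent({
  id: 'lookup-agent',
  name: 'Lookup Agent',
  description: 'Runs database lookups',
  model: 'openai/gpt-4o-mini',
  tools: { findUserTool },
})

// Level 1: intermediate agent that delegates to lookupAgent
const coordinatorAgent = new Agent({
  id: 'coordinator-agent',
  name: 'Coordinator Agent',
  description: 'Coordinates lookup tasks',
  model: 'openai/gpt-4o-mini',
  agents: { lookupAgent },
})

// Top level: the supervisor only knows about coordinatorAgent
const topSupervisor = new Agent({
  id: 'top-supervisor',
  name: 'Top Supervisor',
  instructions: 'Delegate user lookups to coordinator-agent.',
  model: 'openai/gpt-4o-mini',
  agents: { coordinatorAgent },
  memory: new Memory(),
})

const stream = await topSupervisor.stream('Find user with ID 12345')

for await (const chunk of stream.fullStream) {
  if (chunk.type === 'tool-call-approval') {
    // The approval request from two levels down is handled here, at the top.
    const resumed = await topSupervisor.approveToolCall({ runId: stream.runId })
    for await (const text of resumed.textStream) process.stdout.write(text)
  }
}
```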
+ ## Related
+
+ - [Using Tools](https://mastra.ai/docs/agents/using-tools)
+ - [Agent Overview](https://mastra.ai/docs/agents/overview)
+ - [Tools Overview](https://mastra.ai/docs/mcp/overview)
+ - [Agent Memory](https://mastra.ai/docs/agents/agent-memory)
+ - [Request Context](https://mastra.ai/docs/server/request-context)