@mastra/libsql 1.6.1-alpha.0 → 1.6.2-alpha.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (33)
  1. package/CHANGELOG.md +22 -0
  2. package/dist/docs/SKILL.md +50 -0
  3. package/dist/docs/assets/SOURCE_MAP.json +6 -0
  4. package/dist/docs/references/docs-agents-agent-approval.md +558 -0
  5. package/dist/docs/references/docs-agents-agent-memory.md +209 -0
  6. package/dist/docs/references/docs-agents-network-approval.md +275 -0
  7. package/dist/docs/references/docs-agents-networks.md +299 -0
  8. package/dist/docs/references/docs-memory-memory-processors.md +314 -0
  9. package/dist/docs/references/docs-memory-message-history.md +260 -0
  10. package/dist/docs/references/docs-memory-overview.md +45 -0
  11. package/dist/docs/references/docs-memory-semantic-recall.md +272 -0
  12. package/dist/docs/references/docs-memory-storage.md +261 -0
  13. package/dist/docs/references/docs-memory-working-memory.md +400 -0
  14. package/dist/docs/references/docs-observability-overview.md +70 -0
  15. package/dist/docs/references/docs-observability-tracing-exporters-default.md +209 -0
  16. package/dist/docs/references/docs-rag-retrieval.md +515 -0
  17. package/dist/docs/references/docs-workflows-snapshots.md +238 -0
  18. package/dist/docs/references/guides-agent-frameworks-ai-sdk.md +140 -0
  19. package/dist/docs/references/reference-core-getMemory.md +50 -0
  20. package/dist/docs/references/reference-core-listMemory.md +56 -0
  21. package/dist/docs/references/reference-core-mastra-class.md +66 -0
  22. package/dist/docs/references/reference-memory-memory-class.md +147 -0
  23. package/dist/docs/references/reference-storage-composite.md +235 -0
  24. package/dist/docs/references/reference-storage-dynamodb.md +282 -0
  25. package/dist/docs/references/reference-storage-libsql.md +135 -0
  26. package/dist/docs/references/reference-vectors-libsql.md +305 -0
  27. package/dist/index.cjs +14 -3
  28. package/dist/index.cjs.map +1 -1
  29. package/dist/index.js +14 -3
  30. package/dist/index.js.map +1 -1
  31. package/dist/storage/domains/memory/index.d.ts.map +1 -1
  32. package/dist/vector/index.d.ts.map +1 -1
  33. package/package.json +5 -5
@@ -0,0 +1,209 @@
# Agent memory

Agents use memory to maintain context across interactions. LLMs are stateless and don't retain information between calls, so agents need memory to track message history and recall relevant information.

Mastra agents can be configured to store message history, with optional [working memory](https://mastra.ai/docs/memory/working-memory) to maintain recent context, [semantic recall](https://mastra.ai/docs/memory/semantic-recall) to retrieve past messages based on meaning, or [observational memory](https://mastra.ai/docs/memory/observational-memory) for automatic long-term memory that compresses conversations as they grow.

## When to use memory

Use memory when your agent needs to maintain multi-turn conversations that reference prior exchanges, recall user preferences or facts from earlier in a session, or build context over time within a conversation thread. Skip memory for single-turn requests where each interaction is independent.

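The optional features above are turned on through `Memory` options. As a rough sketch only (the exact option shapes are assumptions based on the linked working-memory and semantic-recall pages, not confirmed by this changelog):

```typescript
import { Memory } from '@mastra/memory'

// Sketch: enables the optional features mentioned above.
// `workingMemory` and `semanticRecall` option names are assumed from the
// linked docs pages; see those pages for the authoritative configuration.
export const memory = new Memory({
  options: {
    lastMessages: 20,
    workingMemory: { enabled: true },
    semanticRecall: { topK: 3, messageRange: 2 },
  },
})
```

Semantic recall additionally requires a vector store and an embedder; the linked pages cover that setup.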
## Setting up memory

To enable memory in Mastra, install the `@mastra/memory` package along with a storage provider.

**npm**:

```bash
npm install @mastra/memory@latest @mastra/libsql@latest
```

**pnpm**:

```bash
pnpm add @mastra/memory@latest @mastra/libsql@latest
```

**Yarn**:

```bash
yarn add @mastra/memory@latest @mastra/libsql@latest
```

**Bun**:

```bash
bun add @mastra/memory@latest @mastra/libsql@latest
```

## Storage providers

Memory requires a storage provider to persist message history, including user messages and agent responses. For more details on available providers and how storage works in Mastra, see the [Storage](https://mastra.ai/docs/memory/storage) documentation.

## Configuring memory

1. Enable memory by creating a `Memory` instance and passing it to the agent's `memory` option.

```typescript
import { Agent } from '@mastra/core/agent'
import { Memory } from '@mastra/memory'

export const memoryAgent = new Agent({
  id: 'memory-agent',
  name: 'Memory Agent',
  memory: new Memory({
    options: {
      lastMessages: 20,
    },
  }),
})
```

> **Info:** Visit [Memory Class](https://mastra.ai/reference/memory/memory-class) for a full list of configuration options.

2. Add a storage provider to your main Mastra instance to enable memory across all configured agents.

```typescript
import { Mastra } from '@mastra/core'
import { LibSQLStore } from '@mastra/libsql'

export const mastra = new Mastra({
  storage: new LibSQLStore({
    id: 'mastra-storage',
    url: ':memory:',
  }),
})
```

> **Info:** Visit [libSQL Storage](https://mastra.ai/reference/storage/libsql) for a full list of configuration options.

Alternatively, add storage directly to an agent's memory to keep data separate or use different providers per agent.

```typescript
import { Agent } from '@mastra/core/agent'
import { Memory } from '@mastra/memory'
import { LibSQLStore } from '@mastra/libsql'

export const memoryAgent = new Agent({
  id: 'memory-agent',
  name: 'Memory Agent',
  memory: new Memory({
    storage: new LibSQLStore({
      id: 'mastra-storage',
      url: ':memory:',
    }),
  }),
})
```

> **Mastra Cloud Store limitation:** Agent-level storage is not supported when using [Mastra Cloud Store](https://mastra.ai/docs/mastra-cloud/deployment). If you use Mastra Cloud Store, configure storage on the Mastra instance instead. This limitation does not apply if you bring your own database.

## Message history

Include a `memory` object with both `resource` and `thread` to track message history during agent calls.

- `resource`: A stable identifier for the user or entity.
- `thread`: An ID that isolates a specific conversation or session.

These fields tell the agent where to store and retrieve context, enabling persistent, thread-aware memory across a conversation.

```typescript
const response = await memoryAgent.generate('Remember my favorite color is blue.', {
  memory: {
    resource: 'user-123',
    thread: 'conversation-123',
  },
})
```

To recall information stored in memory, call the agent with the same `resource` and `thread` values used in the original conversation.

```typescript
const response = await memoryAgent.generate("What's my favorite color?", {
  memory: {
    resource: 'user-123',
    thread: 'conversation-123',
  },
})
```

> **Warning:** Each thread has an owner (`resourceId`) that cannot be changed after creation. Avoid reusing the same thread ID for threads with different owners, as this will cause errors when querying.

To learn more about memory, see the [Memory](https://mastra.ai/docs/memory/overview) documentation.

## Observational Memory

For long-running conversations, raw message history grows until it fills the context window, degrading agent performance. [Observational Memory](https://mastra.ai/docs/memory/observational-memory) solves this by running background agents that compress old messages into dense observations, keeping the context window small while preserving long-term memory.

```typescript
import { Agent } from '@mastra/core/agent'
import { Memory } from '@mastra/memory'

export const memoryAgent = new Agent({
  id: 'memory-agent',
  name: 'Memory Agent',
  memory: new Memory({
    options: {
      observationalMemory: true,
    },
  }),
})
```

Setting `observationalMemory: true` uses `google/gemini-2.5-flash` as the default model for the Observer and Reflector. To use a different model or customize thresholds, pass a config object:

```typescript
import { Agent } from '@mastra/core/agent'
import { Memory } from '@mastra/memory'

export const memoryAgent = new Agent({
  id: 'memory-agent',
  name: 'Memory Agent',
  memory: new Memory({
    options: {
      observationalMemory: {
        model: 'deepseek/deepseek-reasoner',
        observation: {
          messageTokens: 20_000,
        },
      },
    },
  }),
})
```

> **Info:** See [Observational Memory](https://mastra.ai/docs/memory/observational-memory) for details on how observations and reflections work, and [the reference](https://mastra.ai/reference/memory/observational-memory) for all configuration options.

## Using `RequestContext`

Use [RequestContext](https://mastra.ai/docs/server/request-context) to access request-specific values. This lets you conditionally select different memory or storage configurations based on the context of the request.

```typescript
import { Agent } from '@mastra/core/agent'
import { Memory } from '@mastra/memory'

export type UserTier = {
  'user-tier': 'enterprise' | 'pro'
}

const premiumMemory = new Memory()

const standardMemory = new Memory()

export const memoryAgent = new Agent({
  id: 'memory-agent',
  name: 'Memory Agent',
  memory: ({ requestContext }) => {
    const userTier = requestContext.get('user-tier') as UserTier['user-tier']

    return userTier === 'enterprise' ? premiumMemory : standardMemory
  },
})
```

> **Info:** Visit [Request Context](https://mastra.ai/docs/server/request-context) for more information.

## Related

- [Observational Memory](https://mastra.ai/docs/memory/observational-memory)
- [Working Memory](https://mastra.ai/docs/memory/working-memory)
- [Semantic Recall](https://mastra.ai/docs/memory/semantic-recall)
- [Storage](https://mastra.ai/docs/memory/storage)
- [Request Context](https://mastra.ai/docs/server/request-context)
@@ -0,0 +1,275 @@
# Network Approval

Agent networks can require the same [human-in-the-loop](https://mastra.ai/docs/workflows/human-in-the-loop) oversight used in individual agents and workflows. When a tool, subagent, or workflow within a network requires approval or suspends execution, the network pauses and emits events that allow your application to collect user input before resuming.

## Storage

Network approval uses snapshots to capture execution state. Ensure you've enabled a storage provider in your Mastra instance; without one, resuming fails with a "snapshot not found" error.

```typescript
import { Mastra } from '@mastra/core/mastra'
import { LibSQLStore } from '@mastra/libsql'

export const mastra = new Mastra({
  storage: new LibSQLStore({
    id: 'mastra-storage',
    url: ':memory:',
  }),
})
```

## Approving network tool calls

When a tool within a network has `requireApproval: true`, the network stream emits an `agent-execution-approval` chunk and pauses. To allow the tool to execute, call `approveNetworkToolCall` with the `runId`.

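For reference, a tool opted into approval might be defined like this. This is a sketch, not the package's own example: the `requireApproval` flag comes from the description above, and the tool id and fields are illustrative.

```typescript
import { createTool } from '@mastra/core/tools'
import { z } from 'zod'

// Sketch: a destructive tool flagged for human approval.
// `requireApproval: true` causes the network to pause and emit an
// approval chunk before execute() runs.
const deleteRecordsTool = createTool({
  id: 'delete-records-tool',
  description: 'Deletes records, so it requires approval',
  requireApproval: true,
  inputSchema: z.object({
    table: z.string(),
  }),
  outputSchema: z.object({
    deleted: z.boolean(),
  }),
  execute: async (inputData) => {
    // Runs only after approveNetworkToolCall is called for the pending run
    return { deleted: true }
  },
})
```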
```typescript
const stream = await routingAgent.network('Process this query', {
  memory: {
    thread: 'user-123',
    resource: 'my-app',
  },
})

for await (const chunk of stream) {
  // Emitted when requireApproval is set on a tool inside a subagent,
  // or the subagent has requireToolApproval set to true
  if (chunk.type === 'agent-execution-approval') {
    console.log('Tool requires approval:', chunk.payload)
  }

  // Emitted when requireApproval is set on a tool defined directly on the network agent
  if (chunk.type === 'tool-execution-approval') {
    console.log('Tool requires approval:', chunk.payload)
  }
}

const runId = stream.runId

// Approve and resume execution
const approvedStream = await routingAgent.approveNetworkToolCall({
  runId,
  memory: {
    thread: 'user-123',
    resource: 'my-app',
  },
})

for await (const chunk of approvedStream) {
  if (chunk.type === 'network-execution-event-step-finish') {
    console.log(chunk.payload.result)
  }
}
```

## Declining network tool calls

To decline a pending tool call and prevent execution, call `declineNetworkToolCall`. The network continues without executing the tool.

```typescript
const declinedStream = await routingAgent.declineNetworkToolCall({
  runId,
  memory: {
    thread: 'user-123',
    resource: 'my-app',
  },
})

for await (const chunk of declinedStream) {
  if (chunk.type === 'network-execution-event-step-finish') {
    console.log(chunk.payload.result)
  }
}
```

## Resuming suspended networks

When a primitive in the network calls `suspend()`, the stream emits an `agent-execution-suspended`, `tool-execution-suspended`, or `workflow-execution-suspended` chunk with a `suspendPayload` containing context from the primitive. Use `resumeNetwork` to provide the data requested by the primitive and continue execution.

```typescript
import { createTool } from '@mastra/core/tools'
import { z } from 'zod'

const confirmationTool = createTool({
  id: 'confirmation-tool',
  description: 'Requests user confirmation before proceeding',
  inputSchema: z.object({
    action: z.string(),
  }),
  outputSchema: z.object({
    confirmed: z.boolean(),
    action: z.string(),
  }),
  suspendSchema: z.object({
    message: z.string(),
    action: z.string(),
  }),
  resumeSchema: z.object({
    confirmed: z.boolean(),
  }),
  execute: async (inputData, context) => {
    const { resumeData, suspend } = context?.agent ?? {}

    if (!resumeData?.confirmed) {
      return suspend?.({
        message: `Please confirm: ${inputData.action}`,
        action: inputData.action,
      })
    }

    return { confirmed: true, action: inputData.action }
  },
})
```

Handle the suspension and resume with user-provided data:

```typescript
const stream = await routingAgent.network('Delete the old records', {
  memory: {
    thread: 'user-123',
    resource: 'my-app',
  },
})

for await (const chunk of stream) {
  if (chunk.type === 'workflow-execution-suspended') {
    console.log(chunk.payload.suspendPayload)
    // { message: "Please confirm: delete old records", action: "delete old records" }
  }
}

// Resume with user confirmation
const resumedStream = await routingAgent.resumeNetwork(
  { confirmed: true },
  {
    runId: stream.runId,
    memory: {
      thread: 'user-123',
      resource: 'my-app',
    },
  },
)

for await (const chunk of resumedStream) {
  if (chunk.type === 'network-execution-event-step-finish') {
    console.log(chunk.payload.result)
  }
}
```

## Automatic primitive resumption

When using primitives that call `suspend()`, you can enable automatic resumption so the network resumes suspended primitives based on the user's next message. This creates a conversational flow where users provide the required information naturally.

### Enabling auto-resume

Set `autoResumeSuspendedTools` to `true` in the agent's `defaultNetworkOptions` or when calling `network()`:

```typescript
import { Agent } from '@mastra/core/agent'
import { Memory } from '@mastra/memory'

// Option 1: In agent configuration
const routingAgent = new Agent({
  id: 'routing-agent',
  name: 'Routing Agent',
  instructions: 'You coordinate tasks across multiple agents',
  model: 'openai/gpt-4o-mini',
  tools: { confirmationTool },
  memory: new Memory(),
  defaultNetworkOptions: {
    autoResumeSuspendedTools: true,
  },
})

// Option 2: Per-request
const stream = await routingAgent.network('Process this request', {
  autoResumeSuspendedTools: true,
  memory: {
    thread: 'user-123',
    resource: 'my-app',
  },
})
```

### How it works

When `autoResumeSuspendedTools` is enabled:

1. A primitive suspends execution by calling `suspend()` with a payload
2. The suspension is persisted to memory along with the conversation
3. When the user sends their next message on the same thread, the network:
   - Detects the suspended primitive from message history
   - Extracts `resumeData` from the user's message based on the tool's `resumeSchema`
   - Automatically resumes the primitive with the extracted data

### Example

```typescript
const stream = await routingAgent.network('Delete the old records', {
  autoResumeSuspendedTools: true,
  memory: {
    thread: 'user-123',
    resource: 'my-app',
  },
})

for await (const chunk of stream) {
  if (chunk.type === 'workflow-execution-suspended') {
    console.log(chunk.payload.suspendPayload)
    // { message: "Please confirm: delete old records", action: "delete old records" }
  }
}

// User provides confirmation in their next message
const resumedStream = await routingAgent.network('Yes, confirmed', {
  autoResumeSuspendedTools: true,
  memory: {
    thread: 'user-123',
    resource: 'my-app',
  },
})

for await (const chunk of resumedStream) {
  if (chunk.type === 'network-execution-event-step-finish') {
    console.log(chunk.payload.result)
  }
}
```

**Conversation flow:**

```text
User: "Delete the old records"
Agent: "Please confirm: delete old records"

User: "Yes, confirmed"
Agent: "Records deleted successfully"
```

### Requirements

For automatic tool resumption to work:

- **Memory configured**: The agent needs memory to track suspended tools across messages
- **Same thread**: The follow-up message must use the same memory thread and resource identifiers
- **`resumeSchema` defined**: The tool (whether defined directly on the network agent or in a subagent) or the suspended workflow step must define a `resumeSchema` so the agent knows what data to extract from the user's message

### Manual vs automatic resumption

| Approach                               | Use case                                                                 |
| -------------------------------------- | ------------------------------------------------------------------------ |
| Manual (`resumeNetwork()`)             | Programmatic control, webhooks, button clicks, external triggers         |
| Automatic (`autoResumeSuspendedTools`) | Conversational flows where users provide resume data in natural language |

Both approaches work with the same tool definitions. Automatic resumption triggers only when suspended tools exist in the message history and the user sends a new message on the same thread.

## Related

- [Agent Networks](https://mastra.ai/docs/agents/networks)
- [Agent Approval](https://mastra.ai/docs/agents/agent-approval)
- [Human-in-the-Loop](https://mastra.ai/docs/workflows/human-in-the-loop)
- [Agent Memory](https://mastra.ai/docs/agents/agent-memory)