@mastra/libsql 1.7.1 → 1.7.2-alpha.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +13 -0
- package/dist/docs/SKILL.md +2 -4
- package/dist/docs/assets/SOURCE_MAP.json +1 -1
- package/dist/docs/references/docs-agents-agent-approval.md +114 -193
- package/dist/docs/references/docs-agents-networks.md +88 -205
- package/dist/docs/references/docs-memory-overview.md +219 -24
- package/dist/docs/references/docs-memory-semantic-recall.md +1 -1
- package/dist/docs/references/docs-memory-storage.md +4 -4
- package/dist/docs/references/docs-memory-working-memory.md +1 -1
- package/dist/docs/references/docs-observability-overview.md +1 -1
- package/dist/docs/references/reference-core-getMemory.md +1 -2
- package/dist/docs/references/reference-core-listMemory.md +1 -2
- package/dist/index.cjs +112 -5
- package/dist/index.cjs.map +1 -1
- package/dist/index.js +112 -5
- package/dist/index.js.map +1 -1
- package/dist/storage/domains/datasets/index.d.ts.map +1 -1
- package/dist/storage/domains/experiments/index.d.ts +2 -1
- package/dist/storage/domains/experiments/index.d.ts.map +1 -1
- package/package.json +5 -5
- package/dist/docs/references/docs-agents-agent-memory.md +0 -209
- package/dist/docs/references/docs-agents-network-approval.md +0 -278
package/dist/docs/references/docs-agents-networks.md

@@ -1,32 +1,16 @@
 # Agent networks
 
-> **
+> **Deprecated — Use supervisor agents:** Agent networks are deprecated and will be removed in a future major release. [Supervisor agents](https://mastra.ai/docs/agents/supervisor-agents) using `agent.stream()` or `agent.generate()` are now the recommended approach. It provides the same multi-agent coordination with better control, a simpler API, and easier debugging.
 >
->
-> - **Simpler API**: Uses familiar `stream()` and `generate()` methods instead of a separate `.network()` API
-> - **More flexible**: Stop execution early, modify delegations, filter context, and provide feedback to guide the agent
-> - **Type-safe**: Full TypeScript support for all hooks and callbacks
-> - **Easier debugging**: Monitor progress with `onIterationComplete`, track delegations with `onDelegationStart`/`onDelegationComplete`
->
-> See the [migration guide](https://mastra.ai/guides/migrations/network-to-supervisor) to upgrade from `.network()`.
-
-Agent networks in Mastra coordinate multiple agents, workflows, and tools to handle tasks that aren't clearly defined upfront but can be inferred from the user's message or context. A top-level **routing agent** (a Mastra agent with other agents, workflows, and tools configured) uses an LLM to interpret the request and decide which primitives (subagents, workflows, or tools) to call, in what order, and with what data.
-
-## When to use networks
+> See the [migration guide](https://mastra.ai/guides/migrations/network-to-supervisor) to upgrade.
 
-
+A **routing agent** uses an LLM to interpret a request and decide which primitives (subagents, workflows, or tools) to call, in what order, and with what data.
 
-##
+## Create an agent network
 
-
+Configure a routing agent with `agents`, `workflows`, and `tools`. Memory is required as `.network()` uses it to store task history and determine when a task is complete.
 
-
-- Primitives are selected based on their descriptions. Clear, specific descriptions improve routing. For workflows and tools, the input schema helps determine the right inputs at runtime.
-- If multiple primitives have overlapping functionality, the agent favors the more specific one, using a combination of schema and descriptions to decide which to run.
-
-## Creating an agent network
-
-An agent network is built around a top-level routing agent that delegates tasks to subagents, workflows, and tools defined in its configuration. Memory is configured on the routing agent using the `memory` option, and `instructions` define the agent's routing behavior.
+Each primitive needs a clear `description` so the routing agent can decide which to use. For workflows and tools, `inputSchema` and `outputSchema` also help the router determine the right inputs.
 
 ```typescript
 import { Agent } from '@mastra/core/agent'
@@ -35,7 +19,6 @@ import { LibSQLStore } from '@mastra/libsql'
 
 import { researchAgent } from './research-agent'
 import { writingAgent } from './writing-agent'
-
 import { cityWorkflow } from '../workflows/city-workflow'
 import { weatherTool } from '../tools/weather-tool'
 
@@ -43,11 +26,7 @@ export const routingAgent = new Agent({
   id: 'routing-agent',
   name: 'Routing Agent',
   instructions: `
-
-The user will ask you to research a topic.
-Always respond with a complete report—no bullet points.
-Write in full paragraphs, like a blog post.
-Do not answer with incomplete or uncertain information.`,
+You are a network of writers and researchers. The user will ask you to research a topic. Always respond with a complete report—no bullet points. Write in full paragraphs, like a blog post. Do not answer with incomplete or uncertain information.`,
   model: 'openai/gpt-5.4',
   agents: {
     researchAgent,
@@ -68,81 +47,11 @@ export const routingAgent = new Agent({
 })
 ```
 
-
-
-When configuring a Mastra agent network, each primitive (agent, workflow, or tool) needs a clear description to help the routing agent decide which to use. The routing agent uses each primitive's description and schema to determine what it does and how to use it. Clear descriptions and well-defined input and output schemas improve routing accuracy.
-
-#### Agent descriptions
-
-Each subagent in a network should include a clear `description` that explains what the agent does.
-
-```typescript
-export const researchAgent = new Agent({
-  id: 'research-agent',
-  name: 'Research Agent',
-  description: `This agent gathers concise research insights in bullet-point form.
-It's designed to extract key facts without generating full
-responses or narrative content.`,
-})
-```
-
-```typescript
-export const writingAgent = new Agent({
-  id: 'writing-agent',
-  name: 'Writing Agent',
-  description: `This agent turns researched material into well-structured
-written content. It produces full-paragraph reports with no bullet points,
-suitable for use in articles, summaries, or blog posts.`,
-})
-```
-
-#### Workflow descriptions
-
-Workflows in a network should include a `description` to explain their purpose, along with `inputSchema` and `outputSchema` to describe the expected data.
-
-```typescript
-export const cityWorkflow = createWorkflow({
-  id: 'city-workflow',
-  description: `This workflow handles city-specific research tasks.
-It first gathers factual information about the city, then synthesizes
-that research into a full written report. Use it when the user input
-includes a city to be researched.`,
-  inputSchema: z.object({
-    city: z.string(),
-  }),
-  outputSchema: z.object({
-    text: z.string(),
-  }),
-})
-```
-
-#### Tool descriptions
-
-Tools in a network should include a `description` to explain their purpose, along with `inputSchema` and `outputSchema` to describe the expected data.
-
-```typescript
-export const weatherTool = createTool({
-  id: 'weather-tool',
-  description: ` Retrieves current weather information using the wttr.in API.
-Accepts a city or location name as input and returns a short weather summary.
-Use this tool whenever up-to-date weather data is requested.
-`,
-  inputSchema: z.object({
-    location: z.string(),
-  }),
-  outputSchema: z.object({
-    weather: z.string(),
-  }),
-})
-```
-
-## Calling agent networks
-
-Call a Mastra agent network using `.network()` with a user message. The method returns a stream of events that you can iterate over to track execution progress and retrieve the final result.
+> **Note:** Subagents need a `description` on the `Agent` instance. Workflows and tools need a `description` plus `inputSchema` and `outputSchema` on `createWorkflow()` or `createTool()`.
 
-
+## Call the network
 
-
+Call `.network()` with a user message. The method returns a stream of events you can iterate over.
 
 ```typescript
 const result = await routingAgent.network('Tell me three cool ways to use Mastra')
@@ -155,145 +64,119 @@ for await (const chunk of result) {
 }
 ```
 
-
-
-The following `chunk.type` events are emitted during this request:
-
-```text
-routing-agent-start
-routing-agent-end
-agent-execution-start
-agent-execution-event-start
-agent-execution-event-step-start
-agent-execution-event-text-start
-agent-execution-event-text-delta
-agent-execution-event-text-end
-agent-execution-event-step-finish
-agent-execution-event-finish
-agent-execution-end
-network-execution-event-step-finish
-```
-
-## Workflow example
+## Structured output
 
-
+Pass `structuredOutput` to get typed, validated results. Use `objectStream` for partial objects as they generate.
 
 ```typescript
-
+import { z } from 'zod'
 
-
-
-
-
-
-}
-```
+const resultSchema = z.object({
+  summary: z.string().describe('A brief summary of the findings'),
+  recommendations: z.array(z.string()).describe('List of recommendations'),
+  confidence: z.number().min(0).max(1).describe('Confidence score'),
+})
 
-
+const stream = await routingAgent.network('Research AI trends', {
+  structuredOutput: { schema: resultSchema },
+})
 
-
+for await (const partial of stream.objectStream) {
+  console.log('Building result:', partial)
+}
 
-
-
-workflow-execution-start
-workflow-execution-event-workflow-start
-workflow-execution-event-workflow-step-start
-workflow-execution-event-workflow-step-result
-workflow-execution-event-workflow-finish
-workflow-execution-end
-routing-agent-start
-network-execution-event-step-finish
+const final = await stream.object
+console.log(final?.summary)
 ```
 
-
+## Approve and decline tool calls
+
+When a primitive requires approval, the stream emits an `agent-execution-approval` or `tool-execution-approval` chunk. Use `approveNetworkToolCall()` or `declineNetworkToolCall()` to respond.
 
-
+Network approval uses snapshots to capture execution state. Ensure a [storage provider](https://mastra.ai/docs/memory/storage) is enabled in your Mastra instance.
 
 ```typescript
-const
+const stream = await routingAgent.network('Perform some sensitive action', {
+  memory: {
+    thread: 'user-123',
+    resource: 'my-app',
+  },
+})
 
-for await (const chunk of
-
-
-
+for await (const chunk of stream) {
+  if (chunk.type === 'agent-execution-approval' || chunk.type === 'tool-execution-approval') {
+    // Approve
+    const approvedStream = await routingAgent.approveNetworkToolCall(chunk.payload.toolCallId, {
+      runId: stream.runId,
+      memory: { thread: 'user-123', resource: 'my-app' },
+    })
+
+    for await (const c of approvedStream) {
+      if (c.type === 'network-execution-event-step-finish') {
+        console.log(c.payload.result)
+      }
+    }
   }
 }
 ```
 
-
+To decline instead, call `declineNetworkToolCall()` with the same arguments.
 
-
+## Suspend and resume
 
-
-routing-agent-start
-routing-agent-end
-tool-execution-start
-tool-execution-end
-network-execution-event-step-finish
-```
-
-## Structured output
-
-When you need typed, validated results from a network, use the `structuredOutput` option. After the network completes its task, it generates a structured response matching your schema.
+When a primitive calls `suspend()`, the stream emits a suspension chunk (e.g., `tool-execution-suspended`). Use `resumeNetwork()` to provide the requested data and continue execution.
 
 ```typescript
-
-
-const resultSchema = z.object({
-  summary: z.string().describe('A brief summary of the findings'),
-  recommendations: z.array(z.string()).describe('List of recommendations'),
-  confidence: z.number().min(0).max(1).describe('Confidence score'),
-})
-
-const stream = await routingAgent.network('Research AI trends', {
-  structuredOutput: {
-    schema: resultSchema,
-  },
+const stream = await routingAgent.network('Delete the old records', {
+  memory: { thread: 'user-123', resource: 'my-app' },
 })
 
-// Consume the stream
 for await (const chunk of stream) {
-  if (chunk.type === '
-
-    console.log('Partial:', chunk.payload.object)
-  }
-  if (chunk.type === 'network-object-result') {
-    // Final structured object
-    console.log('Final:', chunk.payload.object)
+  if (chunk.type === 'workflow-execution-suspended') {
+    console.log(chunk.payload.suspendPayload)
   }
 }
 
-//
-const
-
-
-
+// Resume with user confirmation
+const resumedStream = await routingAgent.resumeNetwork(
+  { confirmed: true },
+  {
+    runId: stream.runId,
+    memory: { thread: 'user-123', resource: 'my-app' },
+  },
+)
+
+for await (const chunk of resumedStream) {
+  if (chunk.type === 'network-execution-event-step-finish') {
+    console.log(chunk.payload.result)
+  }
+}
 ```
 
-###
+### Automatic resumption
 
-
+Set `autoResumeSuspendedTools` to `true` so the network resumes suspended primitives based on the user's next message. This creates a conversational flow where users provide the required information naturally.
 
 ```typescript
-const stream = await routingAgent.network('
-
+const stream = await routingAgent.network('Delete the old records', {
+  autoResumeSuspendedTools: true,
+  memory: { thread: 'user-123', resource: 'my-app' },
 })
+```
 
-
-for await (const partial of stream.objectStream) {
-  console.log('Building result:', partial)
-}
+Requirements for automatic resumption:
 
-
-
-
+- **Memory configured**: The agent needs memory to track suspended tools across messages.
+- **Same thread**: The follow-up message must use the same `thread` and `resource` identifiers.
+- **`resumeSchema` defined**: The tool must define a `resumeSchema` so the network can extract data from the user's message.
+
+|          | Manual (`resumeNetwork`)                       | Automatic (`autoResumeSuspendedTools`)     |
+| -------- | ---------------------------------------------- | ------------------------------------------ |
+| Best for | Custom UIs with approval buttons               | Chat-style interfaces                      |
+| Control  | Full control over resume timing and data       | Network extracts data from user's message  |
+| Setup    | Handle suspension chunks, call `resumeNetwork` | Set flag, define `resumeSchema` on tools   |
 
 ## Related
 
-- [Supervisor
-- [Migration:
-- [Guide: Research Coordinator](https://mastra.ai/guides/guide/research-coordinator)
-- [Agent Memory](https://mastra.ai/docs/agents/agent-memory)
-- [Agent Approval](https://mastra.ai/docs/agents/agent-approval)
-- [Workflows Overview](https://mastra.ai/docs/workflows/overview)
-- [Request Context](https://mastra.ai/docs/server/request-context)
+- [Supervisor agents](https://mastra.ai/docs/agents/supervisor-agents)
+- [Migration: `.network()` to supervisor agents](https://mastra.ai/guides/migrations/network-to-supervisor)
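The networks doc above consumes `.network()` as an async stream of typed chunks and branches on `chunk.type`. The underlying pattern is plain async iteration over tagged events; here is a minimal, Mastra-free sketch of it (all names below are illustrative stand-ins, not Mastra APIs):

```typescript
// Hypothetical chunk shape, loosely modeled on the `chunk.type` events in the
// diff above. The real Mastra stream has richer, typed payloads.
type Chunk = { type: string; payload?: unknown }

// Collect the payloads of every chunk matching one event type.
async function collectByType(stream: AsyncIterable<Chunk>, type: string): Promise<unknown[]> {
  const out: unknown[] = []
  for await (const chunk of stream) {
    if (chunk.type === type) out.push(chunk.payload)
  }
  return out
}

// Tiny stand-in stream so the sketch is runnable without Mastra.
async function* demoStream(): AsyncGenerator<Chunk> {
  yield { type: 'routing-agent-start' }
  yield { type: 'network-execution-event-step-finish', payload: 'done' }
}
```

The same shape works for the approval and suspension chunks: match on the event type, then act on the payload.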
package/dist/docs/references/docs-memory-overview.md

@@ -2,44 +2,239 @@
 
 Memory enables your agent to remember user messages, agent replies, and tool results across interactions, giving it the context it needs to stay consistent, maintain conversation flow, and produce better answers over time.
 
-Mastra
+Mastra agents can be configured to store [message history](https://mastra.ai/docs/memory/message-history). Additionally, you can enable:
 
-- [
-- [
-- [
-- [**Semantic recall**](https://mastra.ai/docs/memory/semantic-recall) - retrieves relevant messages from older conversations based on semantic meaning rather than exact keywords, mirroring how humans recall information by association. Requires a [vector database](https://mastra.ai/docs/memory/semantic-recall) and an [embedding model](https://mastra.ai/docs/memory/semantic-recall).
+- [Observational Memory](https://mastra.ai/docs/memory/observational-memory) (Recommended): Uses background agents to maintain a dense observation log that replaces raw message history as it grows. This keeps the context window small while preserving long-term memory.
+- [Working memory](https://mastra.ai/docs/memory/working-memory): Stores persistent, structured user data such as names, preferences, and goals.
+- [Semantic recall](https://mastra.ai/docs/memory/semantic-recall): Retrieves relevant past messages based on semantic meaning rather than exact keywords.
 
 If the combined memory exceeds the model's context limit, [memory processors](https://mastra.ai/docs/memory/memory-processors) can filter, trim, or prioritize content so the most relevant information is preserved.
 
-
+Memory results will be stored in one or more of your configured [storage providers](https://mastra.ai/docs/memory/storage).
 
-
+## When to use memory
 
--
-- [Observational memory](https://mastra.ai/docs/memory/observational-memory)
-- [Working memory](https://mastra.ai/docs/memory/working-memory)
-- [Semantic recall](https://mastra.ai/docs/memory/semantic-recall)
+Use memory when your agent needs to maintain multi-turn conversations that reference prior exchanges, recall user preferences or facts from earlier in a session, or build context over time within a conversation thread. Skip memory for single-turn requests where each interaction is independent.
 
-##
+## Quickstart
 
-
+1. Install the `@mastra/memory` package.
 
-
+   **npm**:
 
-
+   ```bash
+   npm install @mastra/memory@latest
+   ```
 
-
+   **pnpm**:
 
-
+   ```bash
+   pnpm add @mastra/memory@latest
+   ```
 
-
+   **Yarn**:
 
-
+   ```bash
+   yarn add @mastra/memory@latest
+   ```
 
-
+   **Bun**:
 
-
+   ```bash
+   bun add @mastra/memory@latest
+   ```
 
-
-
-
+2. Memory **requires** a storage provider to persist message history, including user messages and agent responses.
+
+   For the purposes of this quickstart, use `@mastra/libsql`.
+
+   **npm**:
+
+   ```bash
+   npm install @mastra/libsql@latest
+   ```
+
+   **pnpm**:
+
+   ```bash
+   pnpm add @mastra/libsql@latest
+   ```
+
+   **Yarn**:
+
+   ```bash
+   yarn add @mastra/libsql@latest
+   ```
+
+   **Bun**:
+
+   ```bash
+   bun add @mastra/libsql@latest
+   ```
+
+   > **Note:** For more details on available providers and how storage works in Mastra, visit the [storage](https://mastra.ai/docs/memory/storage) documentation.
+
+3. Add the storage provider to your main Mastra instance to enable memory across all configured agents.
+
+   ```typescript
+   import { Mastra } from '@mastra/core'
+   import { LibSQLStore } from '@mastra/libsql'
+
+   export const mastra = new Mastra({
+     storage: new LibSQLStore({
+       id: 'mastra-storage',
+       url: ':memory:',
+     }),
+   })
+   ```
+
+4. Create a `Memory` instance and pass it to the agent's `memory` option.
+
+   ```typescript
+   import { Agent } from '@mastra/core/agent'
+   import { Memory } from '@mastra/memory'
+
+   export const memoryAgent = new Agent({
+     id: 'memory-agent',
+     name: 'Memory Agent',
+     memory: new Memory({
+       options: {
+         lastMessages: 20,
+       },
+     }),
+   })
+   ```
+
+   > **Note:** Visit [Memory Class](https://mastra.ai/reference/memory/memory-class) for a full list of configuration options.
+
+5. Call your agent, for example in [Mastra Studio](https://mastra.ai/docs/getting-started/studio). Inside Studio, start a new chat with your agent and take a look at the right sidebar. It'll now display various memory-related information.
+
+## Message history
+
+Pass a `memory` object with `resource` and `thread` to track message history.
+
+- `resource`: A stable identifier for the user or entity.
+- `thread`: An ID that isolates a specific conversation or session.
+
+```typescript
+const response = await memoryAgent.generate('Remember my favorite color is blue.', {
+  memory: {
+    resource: 'user-123',
+    thread: 'conversation-123',
+  },
+})
+```
+
+To recall information stored in memory, call the agent with the same `resource` and `thread` values used in the original conversation.
+
+```typescript
+const response = await memoryAgent.generate("What's my favorite color?", {
+  memory: {
+    resource: 'user-123',
+    thread: 'conversation-123',
+  },
+})
+
+// Response: "Your favorite color is blue."
+```
+
+> **Warning:** Each thread has an owner (`resourceId`) that can't be changed after creation. Avoid reusing the same thread ID for threads with different owners, as this will cause errors when querying.
+
+To list all threads for a resource, or retrieve a specific thread, [use the memory API directly](https://mastra.ai/docs/memory/message-history).
+
+## Observational Memory
+
+For long-running conversations, raw message history grows until it fills the context window, degrading agent performance. [Observational Memory](https://mastra.ai/docs/memory/observational-memory) solves this by running background agents that compress old messages into dense observations, keeping the context window small while preserving long-term memory.
+
+```typescript
+import { Agent } from '@mastra/core/agent'
+import { Memory } from '@mastra/memory'
+
+export const memoryAgent = new Agent({
+  id: 'memory-agent',
+  name: 'Memory Agent',
+  memory: new Memory({
+    options: {
+      observationalMemory: true,
+    },
+  }),
+})
+```
+
+> **Note:** See [Observational Memory](https://mastra.ai/docs/memory/observational-memory) for details on how observations and reflections work, and [the reference](https://mastra.ai/reference/memory/observational-memory) for all configuration options.
+
+## Memory in multi-agent systems
+
+When a [supervisor agent](https://mastra.ai/docs/agents/supervisor-agents) delegates to a subagent, Mastra isolates subagent memory automatically. There is no flag to enable this as it happens on every delegation. Understanding how this scoping works lets you decide what stays private and what to share intentionally.
+
+### How delegation scopes memory
+
+Each delegation creates a fresh `threadId` and a deterministic `resourceId` for the subagent:
+
+- **Thread ID**: Unique per delegation. The subagent starts with a clean message history every time it's called.
+- **Resource ID**: Derived as `{parentResourceId}-{agentName}`. Because the resource ID is stable across delegations, resource-scoped memory persists between calls. A subagent remembers facts from previous delegations by the same user.
+- **Memory instance**: If a subagent has no memory configured, it inherits the supervisor's `Memory` instance. If the subagent defines its own, that takes precedence.
+
+The supervisor forwards its conversation context to the subagent so it has enough background to complete the task. Only the delegation prompt and the subagent's response are saved — the full parent conversation is not stored. You can control which messages reach the subagent with the [`messageFilter`](https://mastra.ai/docs/agents/supervisor-agents) callback.
+
+> **Note:** Subagent resource IDs are always suffixed with the agent name (`{parentResourceId}-{agentName}`). Two different subagents under the same supervisor never share a resource ID through delegation.
+
+To go beyond this default isolation, you can share memory between agents by passing matching identifiers when you call them directly.
+
+### Share memory between agents
+
+When you call agents directly (outside the delegation flow), memory sharing is controlled by two identifiers: `resourceId` and `threadId`. Agents that use the same values read and write to the same data. This is useful when agents collaborate on a shared context — for example, a researcher that saves notes and a writer that reads them.
+
+**Resource-scoped sharing** is the most common pattern. [Working memory](https://mastra.ai/docs/memory/working-memory) and [semantic recall](https://mastra.ai/docs/memory/semantic-recall) default to `scope: 'resource'`. If two agents share a `resourceId`, they share observations, working memory, and embeddings — even across different threads:
+
+```typescript
+// Both agents share the same resource-scoped memory
+await researcher.generate('Find information about quantum computing.', {
+  memory: { resource: 'project-42', thread: 'research-session' },
+})
+
+await writer.generate('Write a summary from the research notes.', {
+  memory: { resource: 'project-42', thread: 'writing-session' },
+})
+```
+
+Because both calls use `resource: 'project-42'`, the writer can access the researcher's observations, working memory, and semantic embeddings. Each agent still has its own thread, so message histories stay separate.
+
+**Thread-scoped sharing** gives tighter coupling. [Observational Memory](https://mastra.ai/docs/memory/observational-memory) uses `scope: 'thread'` by default. If two agents use the same `resource` _and_ `thread`, they share the full message history. Each agent sees every message the other has written. This is useful when agents need to build on each other's exact outputs.
+
+## Observability
+
+Enable [Tracing](https://mastra.ai/docs/observability/tracing/overview) to monitor and debug memory in action. Traces show you exactly which messages and observations the agent included in its context for each request, helping you understand agent behavior and verify that memory retrieval is working as expected.
+
+Open [Mastra Studio](https://mastra.ai/docs/getting-started/studio) and select the **Observability** tab in the sidebar. Open the trace of a recent agent request, then look for spans of LLMs calls.
+
+## Switch memory per request
+
+Use [`RequestContext`](https://mastra.ai/docs/server/request-context) to access request-specific values. This lets you conditionally select different memory or storage configurations based on the context of the request.
+
+```typescript
+export type UserTier = {
+  'user-tier': 'enterprise' | 'pro'
+}
+
+const premiumMemory = new Memory()
+const standardMemory = new Memory()
+
+export const memoryAgent = new Agent({
+  id: 'memory-agent',
+  name: 'Memory Agent',
+  memory: ({ requestContext }) => {
+    const userTier = requestContext.get('user-tier') as UserTier['user-tier']
+
+    return userTier === 'enterprise' ? premiumMemory : standardMemory
+  },
+})
+```
+
+> **Note:** Visit [Request Context](https://mastra.ai/docs/server/request-context) for more information.
+
+## Related
+
+- [`Memory` reference](https://mastra.ai/reference/memory/memory-class)
+- [Tracing](https://mastra.ai/docs/observability/tracing/overview)
+- [Request Context](https://mastra.ai/docs/server/request-context)
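The memory-overview diff above describes subagent resource IDs as derived deterministically from the parent resource ID and the agent name (`{parentResourceId}-{agentName}`). A one-line sketch of that derivation, illustrative only and not Mastra's internal implementation:

```typescript
// Sketch of the documented `{parentResourceId}-{agentName}` scheme.
// Because the result is deterministic, resource-scoped memory persists
// across repeated delegations to the same subagent.
function subagentResourceId(parentResourceId: string, agentName: string): string {
  return `${parentResourceId}-${agentName}`
}

console.log(subagentResourceId('user-123', 'research-agent'))
// → user-123-research-agent
```

This is also why two different subagents under one supervisor never collide: the agent-name suffix keeps their derived IDs distinct.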
package/dist/docs/references/docs-memory-semantic-recall.md

@@ -16,7 +16,7 @@ When it's enabled, new messages are used to query a vector DB for semantically s
 
 After getting a response from the LLM, all new messages (user, assistant, and tool calls/results) are inserted into the vector DB to be recalled in later interactions.
 
-##
+## Quickstart
 
 Semantic recall is enabled by default, so if you give your agent memory it will be included:
 
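The semantic-recall hunk above depends on ranking stored messages by vector similarity rather than keyword match. As a rough sketch of the underlying math (cosine similarity over hand-built vectors; a real deployment uses an embedding model and a vector DB instead):

```typescript
// Cosine similarity: 1 for identical direction, 0 for orthogonal vectors.
// Semantic recall ranks candidate messages by a score like this one,
// computed over their embedding vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0
  let normA = 0
  let normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}
```

Messages whose embeddings score highest against the new message's embedding are the ones recalled into context.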