@mastra/libsql 1.6.1-alpha.0 → 1.6.2-alpha.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +22 -0
- package/dist/docs/SKILL.md +50 -0
- package/dist/docs/assets/SOURCE_MAP.json +6 -0
- package/dist/docs/references/docs-agents-agent-approval.md +558 -0
- package/dist/docs/references/docs-agents-agent-memory.md +209 -0
- package/dist/docs/references/docs-agents-network-approval.md +275 -0
- package/dist/docs/references/docs-agents-networks.md +299 -0
- package/dist/docs/references/docs-memory-memory-processors.md +314 -0
- package/dist/docs/references/docs-memory-message-history.md +260 -0
- package/dist/docs/references/docs-memory-overview.md +45 -0
- package/dist/docs/references/docs-memory-semantic-recall.md +272 -0
- package/dist/docs/references/docs-memory-storage.md +261 -0
- package/dist/docs/references/docs-memory-working-memory.md +400 -0
- package/dist/docs/references/docs-observability-overview.md +70 -0
- package/dist/docs/references/docs-observability-tracing-exporters-default.md +209 -0
- package/dist/docs/references/docs-rag-retrieval.md +515 -0
- package/dist/docs/references/docs-workflows-snapshots.md +238 -0
- package/dist/docs/references/guides-agent-frameworks-ai-sdk.md +140 -0
- package/dist/docs/references/reference-core-getMemory.md +50 -0
- package/dist/docs/references/reference-core-listMemory.md +56 -0
- package/dist/docs/references/reference-core-mastra-class.md +66 -0
- package/dist/docs/references/reference-memory-memory-class.md +147 -0
- package/dist/docs/references/reference-storage-composite.md +235 -0
- package/dist/docs/references/reference-storage-dynamodb.md +282 -0
- package/dist/docs/references/reference-storage-libsql.md +135 -0
- package/dist/docs/references/reference-vectors-libsql.md +305 -0
- package/dist/index.cjs +14 -3
- package/dist/index.cjs.map +1 -1
- package/dist/index.js +14 -3
- package/dist/index.js.map +1 -1
- package/dist/storage/domains/memory/index.d.ts.map +1 -1
- package/dist/vector/index.d.ts.map +1 -1
- package/package.json +5 -5
@@ -0,0 +1,299 @@
# Agent Networks

> **Supervisor Pattern Recommended:** The [supervisor pattern](https://mastra.ai/docs/agents/supervisor-agents) using `agent.stream()` or `agent.generate()` is now the recommended approach for coordinating multiple agents. It provides the same multi-agent coordination capabilities as `.network()` with significant improvements:
>
> - **Better control**: Iteration hooks, delegation hooks, and task completion scoring give you fine-grained control over execution
> - **Simpler API**: Uses familiar `stream()` and `generate()` methods instead of a separate `.network()` API
> - **More flexible**: Stop execution early, modify delegations, filter context, and provide feedback to guide the agent
> - **Type-safe**: Full TypeScript support for all hooks and callbacks
> - **Easier debugging**: Monitor progress with `onIterationComplete`, track delegations with `onDelegationStart`/`onDelegationComplete`
>
> See the [migration guide](https://mastra.ai/guides/migrations/network-to-supervisor) to upgrade from `.network()`.

Agent networks in Mastra coordinate multiple agents, workflows, and tools to handle tasks that aren't clearly defined upfront but can be inferred from the user's message or context. A top-level **routing agent** (a Mastra agent with other agents, workflows, and tools configured) uses an LLM to interpret the request and decide which primitives (subagents, workflows, or tools) to call, in what order, and with what data.

## When to use networks

Use networks for complex tasks that require coordination across multiple primitives. Unlike workflows, which follow a predefined sequence, networks rely on LLM reasoning to interpret the request and decide what to run.

## Core principles

Mastra agent networks operate using these principles:

- Memory is required when using `.network()` and is used to store task history and determine when a task is complete.
- Primitives are selected based on their descriptions. Clear, specific descriptions improve routing. For workflows and tools, the input schema helps determine the right inputs at runtime.
- If multiple primitives have overlapping functionality, the agent favors the more specific one, using a combination of schema and descriptions to decide which to run.
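To build intuition for description-based selection, the sketch below scores primitives by word overlap between the request and each description. This is plain TypeScript for illustration only; the actual routing agent uses an LLM to reason over descriptions and schemas, not keyword matching, and the `pickPrimitive` helper is hypothetical.

```typescript
type Primitive = { id: string; description: string }

// Toy scorer: count words the request shares with each description.
// The real routing agent reasons with an LLM; this only builds intuition.
function pickPrimitive(request: string, primitives: Primitive[]): string {
  const words = new Set(request.toLowerCase().split(/\W+/))
  let best = primitives[0].id
  let bestScore = -1
  for (const p of primitives) {
    const score = p.description
      .toLowerCase()
      .split(/\W+/)
      .filter(w => words.has(w)).length
    if (score > bestScore) {
      best = p.id
      bestScore = score
    }
  }
  return best
}

const primitives: Primitive[] = [
  { id: 'weather-tool', description: 'Retrieves current weather for a location' },
  { id: 'research-agent', description: 'Gathers research insights on a topic' },
]

console.log(pickPrimitive("What's the weather in London?", primitives))
// → 'weather-tool'
```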
## Creating an agent network

An agent network is built around a top-level routing agent that delegates tasks to subagents, workflows, and tools defined in its configuration. Memory is configured on the routing agent using the `memory` option, and `instructions` define the agent's routing behavior.

```typescript
import { Agent } from '@mastra/core/agent'
import { Memory } from '@mastra/memory'
import { LibSQLStore } from '@mastra/libsql'

import { researchAgent } from './research-agent'
import { writingAgent } from './writing-agent'

import { cityWorkflow } from '../workflows/city-workflow'
import { weatherTool } from '../tools/weather-tool'

export const routingAgent = new Agent({
  id: 'routing-agent',
  name: 'Routing Agent',
  instructions: `
    You are a network of writers and researchers.
    The user will ask you to research a topic.
    Always respond with a complete report—no bullet points.
    Write in full paragraphs, like a blog post.
    Do not answer with incomplete or uncertain information.`,
  model: 'openai/gpt-5.1',
  agents: {
    researchAgent,
    writingAgent,
  },
  workflows: {
    cityWorkflow,
  },
  tools: {
    weatherTool,
  },
  memory: new Memory({
    storage: new LibSQLStore({
      id: 'mastra-storage',
      url: 'file:../mastra.db',
    }),
  }),
})
```
### Writing descriptions for network primitives

When configuring a Mastra agent network, each primitive (agent, workflow, or tool) needs a clear description to help the routing agent decide which to use. The routing agent uses each primitive's description and schema to determine what it does and how to use it. Clear descriptions and well-defined input and output schemas improve routing accuracy.

#### Agent descriptions

Each subagent in a network should include a clear `description` that explains what the agent does.

```typescript
import { Agent } from '@mastra/core/agent'

export const researchAgent = new Agent({
  id: 'research-agent',
  name: 'Research Agent',
  description: `This agent gathers concise research insights in bullet-point form.
    It's designed to extract key facts without generating full
    responses or narrative content.`,
})
```

```typescript
import { Agent } from '@mastra/core/agent'

export const writingAgent = new Agent({
  id: 'writing-agent',
  name: 'Writing Agent',
  description: `This agent turns researched material into well-structured
    written content. It produces full-paragraph reports with no bullet points,
    suitable for use in articles, summaries, or blog posts.`,
})
```

#### Workflow descriptions

Workflows in a network should include a `description` to explain their purpose, along with `inputSchema` and `outputSchema` to describe the expected data.

```typescript
import { createWorkflow } from '@mastra/core/workflows'
import { z } from 'zod'

export const cityWorkflow = createWorkflow({
  id: 'city-workflow',
  description: `This workflow handles city-specific research tasks.
    It first gathers factual information about the city, then synthesizes
    that research into a full written report. Use it when the user input
    includes a city to be researched.`,
  inputSchema: z.object({
    city: z.string(),
  }),
  outputSchema: z.object({
    text: z.string(),
  }),
})
```

#### Tool descriptions

Tools in a network should include a `description` to explain their purpose, along with `inputSchema` and `outputSchema` to describe the expected data.

```typescript
import { createTool } from '@mastra/core/tools'
import { z } from 'zod'

export const weatherTool = createTool({
  id: 'weather-tool',
  description: `Retrieves current weather information using the wttr.in API.
    Accepts a city or location name as input and returns a short weather summary.
    Use this tool whenever up-to-date weather data is requested.`,
  inputSchema: z.object({
    location: z.string(),
  }),
  outputSchema: z.object({
    weather: z.string(),
  }),
})
```
## Calling agent networks

Call a Mastra agent network using `.network()` with a user message. The method returns a stream of events that you can iterate over to track execution progress and retrieve the final result.

### Agent example

In this example, the network interprets the message and routes the request to both the `researchAgent` and `writingAgent` to generate a complete response.

```typescript
const result = await routingAgent.network('Tell me three cool ways to use Mastra')

for await (const chunk of result) {
  console.log(chunk.type)
  if (chunk.type === 'network-execution-event-step-finish') {
    console.log(chunk.payload.result)
  }
}
```

#### Agent output

The following `chunk.type` events are emitted during this request:

```text
routing-agent-start
routing-agent-end
agent-execution-start
agent-execution-event-start
agent-execution-event-step-start
agent-execution-event-text-start
agent-execution-event-text-delta
agent-execution-event-text-end
agent-execution-event-step-finish
agent-execution-event-finish
agent-execution-end
network-execution-event-step-finish
```
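Event streams like the one above can be post-processed by filtering on `chunk.type`. The sketch below is plain TypeScript using a simplified, hypothetical chunk shape (real chunks carry richer payloads); it concatenates the text deltas into the streamed response:

```typescript
// Hypothetical minimal chunk shape for illustration only.
type NetworkChunk = { type: string; payload?: { text?: string } }

// Collect the streamed text by filtering on the delta event type.
function collectText(chunks: NetworkChunk[]): string {
  return chunks
    .filter(c => c.type === 'agent-execution-event-text-delta')
    .map(c => c.payload?.text ?? '')
    .join('')
}

const demo: NetworkChunk[] = [
  { type: 'agent-execution-event-text-start' },
  { type: 'agent-execution-event-text-delta', payload: { text: 'Hello, ' } },
  { type: 'agent-execution-event-text-delta', payload: { text: 'world' } },
  { type: 'agent-execution-event-text-end' },
]

console.log(collectText(demo)) // → 'Hello, world'
```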
### Workflow example

In this example, the routing agent recognizes the city name in the message and runs the `cityWorkflow`. The workflow defines steps that call the `researchAgent` to gather facts, then the `writingAgent` to generate the final text.

```typescript
const result = await routingAgent.network('Tell me some historical facts about London')

for await (const chunk of result) {
  console.log(chunk.type)
  if (chunk.type === 'network-execution-event-step-finish') {
    console.log(chunk.payload.result)
  }
}
```

#### Workflow output

The following `chunk.type` events are emitted during this request:

```text
routing-agent-end
workflow-execution-start
workflow-execution-event-workflow-start
workflow-execution-event-workflow-step-start
workflow-execution-event-workflow-step-result
workflow-execution-event-workflow-finish
workflow-execution-end
routing-agent-start
network-execution-event-step-finish
```
### Tool example

In this example, the routing agent skips the `researchAgent`, `writingAgent`, and `cityWorkflow`, and calls the `weatherTool` directly to complete the task.

```typescript
const result = await routingAgent.network("What's the weather in London?")

for await (const chunk of result) {
  console.log(chunk.type)
  if (chunk.type === 'network-execution-event-step-finish') {
    console.log(chunk.payload.result)
  }
}
```

#### Tool output

The following `chunk.type` events are emitted during this request:

```text
routing-agent-start
routing-agent-end
tool-execution-start
tool-execution-end
network-execution-event-step-finish
```
## Structured output

When you need typed, validated results from a network, use the `structuredOutput` option. After the network completes its task, it generates a structured response matching your schema.

```typescript
import { z } from 'zod'

const resultSchema = z.object({
  summary: z.string().describe('A brief summary of the findings'),
  recommendations: z.array(z.string()).describe('List of recommendations'),
  confidence: z.number().min(0).max(1).describe('Confidence score'),
})

const stream = await routingAgent.network('Research AI trends', {
  structuredOutput: {
    schema: resultSchema,
  },
})

// Consume the stream
for await (const chunk of stream) {
  if (chunk.type === 'network-object') {
    // Partial object during generation
    console.log('Partial:', chunk.payload.object)
  }
  if (chunk.type === 'network-object-result') {
    // Final structured object
    console.log('Final:', chunk.payload.object)
  }
}

// Get the typed result
const result = await stream.object
console.log(result?.summary)
console.log(result?.recommendations)
console.log(result?.confidence)
```

### Streaming partial objects

For real-time updates during structured output generation, use `objectStream`:

```typescript
const stream = await routingAgent.network('Analyze market data', {
  structuredOutput: { schema: resultSchema },
})

// Stream partial objects as they're generated
for await (const partial of stream.objectStream) {
  console.log('Building result:', partial)
}

// Get the final typed result
const final = await stream.object
```
## Related

- [Supervisor Agents](https://mastra.ai/docs/agents/supervisor-agents)
- [Migration: .network() to Supervisor Pattern](https://mastra.ai/guides/migrations/network-to-supervisor)
- [Guide: Research Coordinator](https://mastra.ai/guides/guide/research-coordinator)
- [Agent Memory](https://mastra.ai/docs/agents/agent-memory)
- [Agent Approval](https://mastra.ai/docs/agents/agent-approval)
- [Workflows Overview](https://mastra.ai/docs/workflows/overview)
- [Request Context](https://mastra.ai/docs/server/request-context)
@@ -0,0 +1,314 @@
# Memory Processors

Memory processors transform and filter messages as they pass through an agent with memory enabled. They manage context window limits, remove unnecessary content, and optimize the information sent to the language model.

When memory is enabled on an agent, Mastra adds memory processors to the agent's processor pipeline. These processors retrieve message history, working memory, and semantically relevant messages, then persist new messages after the model responds.

Memory processors are [processors](https://mastra.ai/docs/agents/processors) that operate specifically on memory-related messages and state.

## Built-in Memory Processors

Mastra automatically adds these processors when memory is enabled:

### MessageHistory

Retrieves message history and persists new messages.

**When you configure:**

```typescript
memory: new Memory({
  lastMessages: 10,
})
```

**Mastra internally:**

1. Creates a `MessageHistory` processor with `limit: 10`
2. Adds it to the agent's input processors (runs before the LLM)
3. Adds it to the agent's output processors (runs after the LLM)

**What it does:**

- **Input**: Fetches the last 10 messages from storage and prepends them to the conversation
- **Output**: Persists new messages to storage after the model responds

**Example:**

```typescript
import { Agent } from '@mastra/core/agent'
import { Memory } from '@mastra/memory'
import { LibSQLStore } from '@mastra/libsql'

const agent = new Agent({
  id: 'test-agent',
  name: 'Test Agent',
  instructions: 'You are a helpful assistant',
  model: 'openai/gpt-4o',
  memory: new Memory({
    storage: new LibSQLStore({
      id: 'memory-store',
      url: 'file:memory.db',
    }),
    lastMessages: 10, // MessageHistory processor automatically added
  }),
})
```
### SemanticRecall

Retrieves semantically relevant messages based on the current input and creates embeddings for new messages.

**When you configure:**

```typescript
memory: new Memory({
  semanticRecall: { enabled: true },
  vector: myVectorStore,
  embedder: myEmbedder,
})
```

**Mastra internally:**

1. Creates a `SemanticRecall` processor
2. Adds it to the agent's input processors (runs before the LLM)
3. Adds it to the agent's output processors (runs after the LLM)
4. Requires both a vector store and embedder to be configured

**What it does:**

- **Input**: Performs vector similarity search to find relevant past messages and prepends them to the conversation
- **Output**: Creates embeddings for new messages and stores them in the vector store for future retrieval
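The similarity search behind semantic recall can be illustrated with a toy cosine-similarity ranking over precomputed vectors. This is a plain TypeScript sketch for intuition only, not Mastra's implementation; real vectors come from the configured embedder and live in the vector store:

```typescript
// Toy embeddings: in practice these come from the configured embedder.
type Embedded = { text: string; vector: number[] }

function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    na += a[i] * a[i]
    nb += b[i] * b[i]
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb))
}

// Rank stored messages by similarity to the query vector.
function recall(query: number[], stored: Embedded[], topK: number): string[] {
  return [...stored]
    .sort((x, y) => cosine(query, y.vector) - cosine(query, x.vector))
    .slice(0, topK)
    .map(m => m.text)
}

const stored: Embedded[] = [
  { text: 'My favorite city is London', vector: [1, 0, 0] },
  { text: 'I prefer tea over coffee', vector: [0, 1, 0] },
  { text: 'London has great museums', vector: [0.9, 0.1, 0] },
]

console.log(recall([1, 0, 0], stored, 2))
// → ['My favorite city is London', 'London has great museums']
```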
**Example:**

```typescript
import { Agent } from '@mastra/core/agent'
import { Memory } from '@mastra/memory'
import { LibSQLStore } from '@mastra/libsql'
import { PineconeVector } from '@mastra/pinecone'
import { OpenAIEmbedder } from '@mastra/openai'

const agent = new Agent({
  name: 'semantic-agent',
  instructions: 'You are a helpful assistant with semantic memory',
  model: 'openai/gpt-4o',
  memory: new Memory({
    storage: new LibSQLStore({
      id: 'memory-store',
      url: 'file:memory.db',
    }),
    vector: new PineconeVector({
      id: 'memory-vector',
      apiKey: process.env.PINECONE_API_KEY!,
    }),
    embedder: new OpenAIEmbedder({
      model: 'text-embedding-3-small',
      apiKey: process.env.OPENAI_API_KEY!,
    }),
    semanticRecall: { enabled: true }, // SemanticRecall processor automatically added
  }),
})
```
### WorkingMemory

Manages working memory state across conversations.

**When you configure:**

```typescript
memory: new Memory({
  workingMemory: { enabled: true },
})
```

**Mastra internally:**

1. Creates a `WorkingMemory` processor
2. Adds it to the agent's input processors (runs before the LLM)
3. Requires a storage adapter to be configured

**What it does:**

- **Input**: Retrieves working memory state for the current thread and prepends it to the conversation
- **Output**: No output processing

**Example:**

```typescript
import { Agent } from '@mastra/core/agent'
import { Memory } from '@mastra/memory'
import { LibSQLStore } from '@mastra/libsql'

const agent = new Agent({
  name: 'working-memory-agent',
  instructions: 'You are an assistant with working memory',
  model: 'openai/gpt-4o',
  memory: new Memory({
    storage: new LibSQLStore({
      id: 'memory-store',
      url: 'file:memory.db',
    }),
    workingMemory: { enabled: true }, // WorkingMemory processor automatically added
  }),
})
```
## Manual Control and Deduplication

If you manually add a memory processor to `inputProcessors` or `outputProcessors`, Mastra will **not** automatically add it. This gives you full control over processor ordering:

```typescript
import { Agent } from '@mastra/core/agent'
import { Memory } from '@mastra/memory'
import { MessageHistory, TokenLimiter } from '@mastra/core/processors'
import { LibSQLStore } from '@mastra/libsql'

// Custom MessageHistory with different configuration
const customMessageHistory = new MessageHistory({
  storage: new LibSQLStore({ id: 'memory-store', url: 'file:memory.db' }),
  lastMessages: 20,
})

const agent = new Agent({
  name: 'custom-memory-agent',
  instructions: 'You are a helpful assistant',
  model: 'openai/gpt-4o',
  memory: new Memory({
    storage: new LibSQLStore({ id: 'memory-store', url: 'file:memory.db' }),
    lastMessages: 10, // This would normally add MessageHistory(10)
  }),
  inputProcessors: [
    customMessageHistory, // Your custom one is used instead
    new TokenLimiter({ limit: 4000 }), // Runs after your custom MessageHistory
  ],
})
```
## Processor Execution Order

Understanding the execution order is important when combining guardrails with memory:

### Input Processors

```text
[Memory Processors] → [Your inputProcessors]
```

1. **Memory processors run FIRST**: `WorkingMemory`, `MessageHistory`, `SemanticRecall`
2. **Your input processors run AFTER**: guardrails, filters, validators

This means memory loads message history before your processors can validate or filter the input.
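The ordering rules can be pictured as simple array composition. The sketch below uses a hypothetical, simplified processor type (a function over message strings) purely to show where the memory stages sit relative to yours:

```typescript
// Hypothetical processor type: each stage transforms a list of messages.
type Processor = (messages: string[]) => string[]

// Memory stages run before user-supplied input stages,
// and after user-supplied output stages.
function composeInput(memory: Processor[], user: Processor[]): Processor[] {
  return [...memory, ...user]
}
function composeOutput(memory: Processor[], user: Processor[]): Processor[] {
  return [...user, ...memory]
}

const loadHistory: Processor = msgs => ['(history)', ...msgs]
const validate: Processor = msgs => msgs.filter(m => m !== 'blocked')

// History is loaded first, then your validator filters the result.
const inputPipeline = composeInput([loadHistory], [validate])
const processed = inputPipeline.reduce((msgs, p) => p(msgs), ['hello', 'blocked'])

console.log(processed) // → ['(history)', 'hello']
```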
### Output Processors

```text
[Your outputProcessors] → [Memory Processors]
```

1. **Your output processors run FIRST**: guardrails, filters, validators
2. **Memory processors run AFTER**: `SemanticRecall` (embeddings), `MessageHistory` (persistence)

This ordering is designed to be **safe by default**: if your output guardrail calls `abort()`, the memory processors never run and **no messages are saved**.

## Guardrails and Memory

The default execution order provides safe guardrail behavior:

### Output guardrails (recommended)

Output guardrails run **before** memory processors save messages. If a guardrail aborts:

- The tripwire is triggered
- Memory processors are skipped
- **No messages are persisted to storage**

```typescript
import { Agent } from '@mastra/core/agent'
import { Memory } from '@mastra/memory'

// Output guardrail that blocks inappropriate content
const contentBlocker = {
  id: 'content-blocker',
  processOutputResult: async ({ messages, abort }) => {
    // containsBadContent is a placeholder for your own moderation check
    const hasInappropriateContent = messages.some(msg => containsBadContent(msg))
    if (hasInappropriateContent) {
      abort('Content blocked by guardrail')
    }
    return messages
  },
}

const agent = new Agent({
  name: 'safe-agent',
  instructions: 'You are a helpful assistant',
  model: 'openai/gpt-4o',
  memory: new Memory({ lastMessages: 10 }),
  // Your guardrail runs BEFORE memory saves
  outputProcessors: [contentBlocker],
})

// If the guardrail aborts, nothing is saved to memory
const result = await agent.generate('Hello')
if (result.tripwire) {
  console.log('Blocked:', result.tripwire.reason)
  // Memory is empty - no messages were persisted
}
```
### Input guardrails

Input guardrails run **after** memory processors load history. If a guardrail aborts:

- The tripwire is triggered
- The LLM is never called
- Output processors (including memory persistence) are skipped
- **No messages are persisted to storage**

```typescript
// Input guardrail that validates user input
const inputValidator = {
  id: 'input-validator',
  processInput: async ({ messages, abort }) => {
    const lastUserMessage = messages.findLast(m => m.role === 'user')
    // isInvalidInput is a placeholder for your own validation logic
    if (isInvalidInput(lastUserMessage)) {
      abort('Invalid input detected')
    }
    return messages
  },
}

const agent = new Agent({
  name: 'validated-agent',
  instructions: 'You are a helpful assistant',
  model: 'openai/gpt-4o',
  memory: new Memory({ lastMessages: 10 }),
  // Your guardrail runs AFTER memory loads history
  inputProcessors: [inputValidator],
})
```
### Summary

| Guardrail Type | When it runs               | If it aborts                  |
| -------------- | -------------------------- | ----------------------------- |
| Input          | After memory loads history | LLM not called, nothing saved |
| Output         | Before memory saves        | Nothing saved to storage      |

Both scenarios are safe: guardrails prevent inappropriate content from being persisted to memory.

## Related documentation

- [Processors](https://mastra.ai/docs/agents/processors) - General processor concepts and custom processor creation
- [Guardrails](https://mastra.ai/docs/agents/guardrails) - Security and validation processors
- [Memory Overview](https://mastra.ai/docs/memory/overview) - Memory types and configuration

When creating custom processors, avoid mutating the input `messages` array or its objects directly.
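For instance, a processor that redacts content should return new message objects rather than editing the input in place. A minimal plain-TypeScript sketch with a simplified message shape (illustrative only):

```typescript
type Msg = { role: string; content: string }

// Returns a NEW array of NEW message objects; the input stays untouched.
function redact(messages: Msg[]): Msg[] {
  return messages.map(m => ({
    ...m,
    content: m.content.replace(/\d{3}-\d{2}-\d{4}/g, '[redacted]'),
  }))
}

const input: Msg[] = [{ role: 'user', content: 'My SSN is 123-45-6789' }]
const output = redact(input)

console.log(output[0].content) // → 'My SSN is [redacted]'
console.log(input[0].content)  // → 'My SSN is 123-45-6789' (unchanged)
```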
|