@mastra/memory 1.0.0-beta.8 → 1.0.0

> Learn how to use memory processors in Mastra to filter, trim, and transform messages before they reach the language model.

# Memory Processors

Memory processors transform and filter messages as they pass through an agent with memory enabled. They manage context window limits, remove unnecessary content, and optimize the information sent to the language model.

When memory is enabled on an agent, Mastra adds memory processors to the agent's processor pipeline. These processors retrieve message history, working memory, and semantically relevant messages, then persist new messages after the model responds.

Memory processors are [processors](https://mastra.ai/docs/v1/agents/processors) that operate specifically on memory-related messages and state.

## Built-in Memory Processors

Mastra automatically adds these processors when memory is enabled:

### MessageHistory

Retrieves message history and persists new messages.

**When you configure:**

```typescript
memory: new Memory({
  lastMessages: 10,
});
```

**Mastra internally:**

1. Creates a `MessageHistory` processor with `limit: 10`
2. Adds it to the agent's input processors (runs before the LLM)
3. Adds it to the agent's output processors (runs after the LLM)

**What it does:**

- **Input**: Fetches the last 10 messages from storage and prepends them to the conversation
- **Output**: Persists new messages to storage after the model responds

**Example:**

```typescript
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";

const agent = new Agent({
  id: "test-agent",
  name: "Test Agent",
  instructions: "You are a helpful assistant",
  model: "openai/gpt-4o",
  memory: new Memory({
    storage: new LibSQLStore({
      id: "memory-store",
      url: "file:memory.db",
    }),
    lastMessages: 10, // MessageHistory processor automatically added
  }),
});
```

### SemanticRecall

Retrieves semantically relevant messages based on the current input and creates embeddings for new messages.

**When you configure:**

```typescript
memory: new Memory({
  semanticRecall: { enabled: true },
  vector: myVectorStore,
  embedder: myEmbedder,
});
```

**Mastra internally:**

1. Creates a `SemanticRecall` processor
2. Adds it to the agent's input processors (runs before the LLM)
3. Adds it to the agent's output processors (runs after the LLM)
4. Requires both a vector store and embedder to be configured

**What it does:**

- **Input**: Performs vector similarity search to find relevant past messages and prepends them to the conversation
- **Output**: Creates embeddings for new messages and stores them in the vector store for future retrieval

**Example:**

```typescript
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";
import { PineconeVector } from "@mastra/pinecone";
import { OpenAIEmbedder } from "@mastra/openai";

const agent = new Agent({
  name: "semantic-agent",
  instructions: "You are a helpful assistant with semantic memory",
  model: "openai/gpt-4o",
  memory: new Memory({
    storage: new LibSQLStore({
      id: "memory-store",
      url: "file:memory.db",
    }),
    vector: new PineconeVector({
      id: "memory-vector",
      apiKey: process.env.PINECONE_API_KEY!,
    }),
    embedder: new OpenAIEmbedder({
      model: "text-embedding-3-small",
      apiKey: process.env.OPENAI_API_KEY!,
    }),
    semanticRecall: { enabled: true }, // SemanticRecall processor automatically added
  }),
});
```

### WorkingMemory

Manages working memory state across conversations.

**When you configure:**

```typescript
memory: new Memory({
  workingMemory: { enabled: true },
});
```

**Mastra internally:**

1. Creates a `WorkingMemory` processor
2. Adds it to the agent's input processors (runs before the LLM)
3. Requires a storage adapter to be configured

**What it does:**

- **Input**: Retrieves working memory state for the current thread and prepends it to the conversation
- **Output**: No output processing

**Example:**

```typescript
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";

const agent = new Agent({
  name: "working-memory-agent",
  instructions: "You are an assistant with working memory",
  model: "openai/gpt-4o",
  memory: new Memory({
    storage: new LibSQLStore({
      id: "memory-store",
      url: "file:memory.db",
    }),
    workingMemory: { enabled: true }, // WorkingMemory processor automatically added
  }),
});
```

## Manual Control and Deduplication

If you manually add a memory processor to `inputProcessors` or `outputProcessors`, Mastra will **not** automatically add it. This gives you full control over processor ordering:

```typescript
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { MessageHistory, TokenLimiter } from "@mastra/core/processors";
import { LibSQLStore } from "@mastra/libsql";

// Custom MessageHistory with different configuration
const customMessageHistory = new MessageHistory({
  storage: new LibSQLStore({ id: "memory-store", url: "file:memory.db" }),
  lastMessages: 20,
});

const agent = new Agent({
  name: "custom-memory-agent",
  instructions: "You are a helpful assistant",
  model: "openai/gpt-4o",
  memory: new Memory({
    storage: new LibSQLStore({ id: "memory-store", url: "file:memory.db" }),
    lastMessages: 10, // This would normally add MessageHistory(10)
  }),
  inputProcessors: [
    customMessageHistory, // Your custom one is used instead
    new TokenLimiter({ limit: 4000 }), // Runs after your custom MessageHistory
  ],
});
```

## Processor Execution Order

Understanding the execution order is important when combining guardrails with memory:

### Input Processors

```
[Memory Processors] → [Your inputProcessors]
```

1. **Memory processors run FIRST**: `WorkingMemory`, `MessageHistory`, `SemanticRecall`
2. **Your input processors run AFTER**: guardrails, filters, validators

This means memory loads message history before your processors can validate or filter the input.
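
As a framework-free sketch of this ordering (not Mastra's actual internals; the processor shape here is simplified to a plain function over messages), the input stage behaves like composing the memory-provided processors ahead of your own:

```typescript
type Message = { role: "user" | "assistant" | "system"; content: string };
type InputProcessor = (messages: Message[]) => Message[];

// Hypothetical stand-in for a memory processor: prepends stored history.
const loadHistory: InputProcessor = (messages) => [
  { role: "user", content: "earlier message loaded from storage" },
  ...messages,
];

// Hypothetical stand-in for a user inputProcessor: drops blank messages.
const dropBlank: InputProcessor = (messages) =>
  messages.filter((m) => m.content.trim().length > 0);

// Memory processors run first, then your inputProcessors.
const runInputStage = (processors: InputProcessor[], messages: Message[]) =>
  processors.reduce((msgs, p) => p(msgs), messages);

const prepared = runInputStage(
  [loadHistory, dropBlank], // memory first, then yours
  [
    { role: "user", content: "hi" },
    { role: "user", content: "   " },
  ],
);
// prepared now holds the loaded history followed by "hi";
// the blank message was removed by the user processor.
```

The point of the sketch: because `dropBlank` runs after `loadHistory`, a user processor sees (and can filter) the recalled history as well as the new input.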

### Output Processors

```
[Your outputProcessors] → [Memory Processors]
```

1. **Your output processors run FIRST**: guardrails, filters, validators
2. **Memory processors run AFTER**: `SemanticRecall` (embeddings), `MessageHistory` (persistence)

This ordering is designed to be **safe by default**: if your output guardrail calls `abort()`, the memory processors never run and **no messages are saved**.

## Guardrails and Memory

The default execution order provides safe guardrail behavior:

### Output guardrails (recommended)

Output guardrails run **before** memory processors save messages. If a guardrail aborts:

- The tripwire is triggered
- Memory processors are skipped
- **No messages are persisted to storage**

```typescript
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";

// Output guardrail that blocks inappropriate content
const contentBlocker = {
  id: "content-blocker",
  processOutputResult: async ({ messages, abort }) => {
    // containsBadContent is your own moderation check (not shown)
    const hasInappropriateContent = messages.some((msg) =>
      containsBadContent(msg),
    );
    if (hasInappropriateContent) {
      abort("Content blocked by guardrail");
    }
    return messages;
  },
};

const agent = new Agent({
  name: "safe-agent",
  instructions: "You are a helpful assistant",
  model: "openai/gpt-4o",
  memory: new Memory({ lastMessages: 10 }),
  // Your guardrail runs BEFORE memory saves
  outputProcessors: [contentBlocker],
});

// If the guardrail aborts, nothing is saved to memory
const result = await agent.generate("Hello");
if (result.tripwire) {
  console.log("Blocked:", result.tripwire.reason);
  // Memory is empty - no messages were persisted
}
```

### Input guardrails

Input guardrails run **after** memory processors load history. If a guardrail aborts:

- The tripwire is triggered
- The LLM is never called
- Output processors (including memory persistence) are skipped
- **No messages are persisted to storage**

```typescript
// Input guardrail that validates user input
const inputValidator = {
  id: "input-validator",
  processInput: async ({ messages, abort }) => {
    const lastUserMessage = messages.findLast((m) => m.role === "user");
    // isInvalidInput is your own validation check (not shown)
    if (isInvalidInput(lastUserMessage)) {
      abort("Invalid input detected");
    }
    return messages;
  },
};

const agent = new Agent({
  name: "validated-agent",
  instructions: "You are a helpful assistant",
  model: "openai/gpt-4o",
  memory: new Memory({ lastMessages: 10 }),
  // Your guardrail runs AFTER memory loads history
  inputProcessors: [inputValidator],
});
```

### Summary

| Guardrail Type | When it runs | If it aborts |
| -------------- | ------------ | ------------ |
| Input | After memory loads history | LLM not called, nothing saved |
| Output | Before memory saves | Nothing saved to storage |

Both scenarios are safe: guardrails prevent inappropriate content from being persisted to memory.

## Related documentation

- [Processors](https://mastra.ai/docs/v1/agents/processors) - General processor concepts and custom processor creation
- [Guardrails](https://mastra.ai/docs/v1/agents/guardrails) - Security and validation processors
- [Memory Overview](https://mastra.ai/docs/v1/memory/overview) - Memory types and configuration

When creating custom processors, avoid mutating the input `messages` array or its objects directly.
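
A minimal sketch of that non-mutating style (the message shape and processor fields here are simplified assumptions, not Mastra's exact types): return new message objects rather than editing the ones you received.

```typescript
type Message = { role: string; content: string };

// Hypothetical custom input processor that redacts email addresses.
// It builds new message objects instead of mutating the input array.
const piiRedactor = {
  id: "pii-redactor",
  processInput: async ({ messages }: { messages: Message[] }): Promise<Message[]> =>
    messages.map((msg) =>
      msg.role === "user"
        ? { ...msg, content: msg.content.replace(/\S+@\S+\.\S+/g, "[redacted]") }
        : msg,
    ),
};
```

Because the processor copies each message it changes, anything else holding a reference to the original messages still sees them untouched.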