@mastra/libsql 1.6.0 → 1.6.1-alpha.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (32)
  1. package/CHANGELOG.md +11 -0
  2. package/dist/index.cjs +17 -8
  3. package/dist/index.cjs.map +1 -1
  4. package/dist/index.js +17 -8
  5. package/dist/index.js.map +1 -1
  6. package/dist/storage/domains/prompt-blocks/index.d.ts.map +1 -1
  7. package/package.json +4 -4
  8. package/dist/docs/SKILL.md +0 -50
  9. package/dist/docs/assets/SOURCE_MAP.json +0 -6
  10. package/dist/docs/references/docs-agents-agent-approval.md +0 -377
  11. package/dist/docs/references/docs-agents-agent-memory.md +0 -212
  12. package/dist/docs/references/docs-agents-network-approval.md +0 -275
  13. package/dist/docs/references/docs-agents-networks.md +0 -290
  14. package/dist/docs/references/docs-memory-memory-processors.md +0 -316
  15. package/dist/docs/references/docs-memory-message-history.md +0 -260
  16. package/dist/docs/references/docs-memory-overview.md +0 -45
  17. package/dist/docs/references/docs-memory-semantic-recall.md +0 -272
  18. package/dist/docs/references/docs-memory-storage.md +0 -261
  19. package/dist/docs/references/docs-memory-working-memory.md +0 -400
  20. package/dist/docs/references/docs-observability-overview.md +0 -70
  21. package/dist/docs/references/docs-observability-tracing-exporters-default.md +0 -211
  22. package/dist/docs/references/docs-rag-retrieval.md +0 -521
  23. package/dist/docs/references/docs-workflows-snapshots.md +0 -238
  24. package/dist/docs/references/guides-agent-frameworks-ai-sdk.md +0 -140
  25. package/dist/docs/references/reference-core-getMemory.md +0 -50
  26. package/dist/docs/references/reference-core-listMemory.md +0 -56
  27. package/dist/docs/references/reference-core-mastra-class.md +0 -66
  28. package/dist/docs/references/reference-memory-memory-class.md +0 -147
  29. package/dist/docs/references/reference-storage-composite.md +0 -235
  30. package/dist/docs/references/reference-storage-dynamodb.md +0 -282
  31. package/dist/docs/references/reference-storage-libsql.md +0 -135
  32. package/dist/docs/references/reference-vectors-libsql.md +0 -305
@@ -1,316 +0,0 @@
# Memory Processors

Memory processors transform and filter messages as they pass through an agent with memory enabled. They manage context window limits, remove unnecessary content, and optimize the information sent to the language model.

When memory is enabled on an agent, Mastra adds memory processors to the agent's processor pipeline. These processors retrieve message history, working memory, and semantically relevant messages, then persist new messages after the model responds.

Memory processors are [processors](https://mastra.ai/docs/agents/processors) that operate specifically on memory-related messages and state.

## Built-in Memory Processors

Mastra automatically adds these processors when memory is enabled:

### MessageHistory

Retrieves message history and persists new messages.

**When you configure:**

```typescript
memory: new Memory({
  lastMessages: 10,
});
```

**Mastra internally:**

1. Creates a `MessageHistory` processor with `limit: 10`
2. Adds it to the agent's input processors (runs before the LLM)
3. Adds it to the agent's output processors (runs after the LLM)

**What it does:**

- **Input**: Fetches the last 10 messages from storage and prepends them to the conversation
- **Output**: Persists new messages to storage after the model responds

**Example:**

```typescript
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";

const agent = new Agent({
  id: "test-agent",
  name: "Test Agent",
  instructions: "You are a helpful assistant",
  model: "openai/gpt-4o",
  memory: new Memory({
    storage: new LibSQLStore({
      id: "memory-store",
      url: "file:memory.db",
    }),
    lastMessages: 10, // MessageHistory processor automatically added
  }),
});
```

### SemanticRecall

Retrieves semantically relevant messages based on the current input and creates embeddings for new messages.

**When you configure:**

```typescript
memory: new Memory({
  semanticRecall: { enabled: true },
  vector: myVectorStore,
  embedder: myEmbedder,
});
```

**Mastra internally:**

1. Creates a `SemanticRecall` processor
2. Adds it to the agent's input processors (runs before the LLM)
3. Adds it to the agent's output processors (runs after the LLM)
4. Requires both a vector store and embedder to be configured

**What it does:**

- **Input**: Performs vector similarity search to find relevant past messages and prepends them to the conversation
- **Output**: Creates embeddings for new messages and stores them in the vector store for future retrieval

**Example:**

```typescript
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";
import { PineconeVector } from "@mastra/pinecone";
import { OpenAIEmbedder } from "@mastra/openai";

const agent = new Agent({
  name: "semantic-agent",
  instructions: "You are a helpful assistant with semantic memory",
  model: "openai/gpt-4o",
  memory: new Memory({
    storage: new LibSQLStore({
      id: "memory-store",
      url: "file:memory.db",
    }),
    vector: new PineconeVector({
      id: "memory-vector",
      apiKey: process.env.PINECONE_API_KEY!,
    }),
    embedder: new OpenAIEmbedder({
      model: "text-embedding-3-small",
      apiKey: process.env.OPENAI_API_KEY!,
    }),
    semanticRecall: { enabled: true }, // SemanticRecall processor automatically added
  }),
});
```

### WorkingMemory

Manages working memory state across conversations.

**When you configure:**

```typescript
memory: new Memory({
  workingMemory: { enabled: true },
});
```

**Mastra internally:**

1. Creates a `WorkingMemory` processor
2. Adds it to the agent's input processors (runs before the LLM)
3. Requires a storage adapter to be configured

**What it does:**

- **Input**: Retrieves working memory state for the current thread and prepends it to the conversation
- **Output**: No output processing

**Example:**

```typescript
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";

const agent = new Agent({
  name: "working-memory-agent",
  instructions: "You are an assistant with working memory",
  model: "openai/gpt-4o",
  memory: new Memory({
    storage: new LibSQLStore({
      id: "memory-store",
      url: "file:memory.db",
    }),
    workingMemory: { enabled: true }, // WorkingMemory processor automatically added
  }),
});
```

## Manual Control and Deduplication

If you manually add a memory processor to `inputProcessors` or `outputProcessors`, Mastra will **not** add it again automatically. This gives you full control over processor ordering:

```typescript
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { MessageHistory, TokenLimiter } from "@mastra/core/processors";
import { LibSQLStore } from "@mastra/libsql";

// Custom MessageHistory with a different configuration
const customMessageHistory = new MessageHistory({
  storage: new LibSQLStore({ id: "memory-store", url: "file:memory.db" }),
  lastMessages: 20,
});

const agent = new Agent({
  name: "custom-memory-agent",
  instructions: "You are a helpful assistant",
  model: "openai/gpt-4o",
  memory: new Memory({
    storage: new LibSQLStore({ id: "memory-store", url: "file:memory.db" }),
    lastMessages: 10, // This would normally add MessageHistory(10)
  }),
  inputProcessors: [
    customMessageHistory, // Your custom one is used instead
    new TokenLimiter({ limit: 4000 }), // Runs after your custom MessageHistory
  ],
});
```

## Processor Execution Order

Understanding the execution order is important when combining guardrails with memory:

### Input Processors

```text
[Memory Processors] → [Your inputProcessors]
```

1. **Memory processors run FIRST**: `WorkingMemory`, `MessageHistory`, `SemanticRecall`
2. **Your input processors run AFTER**: guardrails, filters, validators

This means memory loads message history before your processors can validate or filter the input.

### Output Processors

```text
[Your outputProcessors] → [Memory Processors]
```

1. **Your output processors run FIRST**: guardrails, filters, validators
2. **Memory processors run AFTER**: `SemanticRecall` (embeddings), `MessageHistory` (persistence)

This ordering is designed to be **safe by default**: if your output guardrail calls `abort()`, the memory processors never run and **no messages are saved**.

## Guardrails and Memory

The default execution order provides safe guardrail behavior:

### Output guardrails (recommended)

Output guardrails run **before** memory processors save messages. If a guardrail aborts:

- The tripwire is triggered
- Memory processors are skipped
- **No messages are persisted to storage**

```typescript
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";

// Output guardrail that blocks inappropriate content
const contentBlocker = {
  id: "content-blocker",
  processOutputResult: async ({ messages, abort }) => {
    const hasInappropriateContent = messages.some((msg) =>
      containsBadContent(msg),
    );
    if (hasInappropriateContent) {
      abort("Content blocked by guardrail");
    }
    return messages;
  },
};

const agent = new Agent({
  name: "safe-agent",
  instructions: "You are a helpful assistant",
  model: "openai/gpt-4o",
  memory: new Memory({ lastMessages: 10 }),
  // Your guardrail runs BEFORE memory saves
  outputProcessors: [contentBlocker],
});

// If the guardrail aborts, nothing is saved to memory
const result = await agent.generate("Hello");
if (result.tripwire) {
  console.log("Blocked:", result.tripwire.reason);
  // Memory is empty - no messages were persisted
}
```

### Input guardrails

Input guardrails run **after** memory processors load history. If a guardrail aborts:

- The tripwire is triggered
- The LLM is never called
- Output processors (including memory persistence) are skipped
- **No messages are persisted to storage**

```typescript
// Input guardrail that validates user input
const inputValidator = {
  id: "input-validator",
  processInput: async ({ messages, abort }) => {
    const lastUserMessage = messages.findLast((m) => m.role === "user");
    if (isInvalidInput(lastUserMessage)) {
      abort("Invalid input detected");
    }
    return messages;
  },
};

const agent = new Agent({
  name: "validated-agent",
  instructions: "You are a helpful assistant",
  model: "openai/gpt-4o",
  memory: new Memory({ lastMessages: 10 }),
  // Your guardrail runs AFTER memory loads history
  inputProcessors: [inputValidator],
});
```

### Summary

| Guardrail Type | When it runs               | If it aborts                  |
| -------------- | -------------------------- | ----------------------------- |
| Input          | After memory loads history | LLM not called, nothing saved |
| Output         | Before memory saves        | Nothing saved to storage      |

Both scenarios are safe: guardrails prevent inappropriate content from being persisted to memory.

## Related documentation

- [Processors](https://mastra.ai/docs/agents/processors) - General processor concepts and custom processor creation
- [Guardrails](https://mastra.ai/docs/agents/guardrails) - Security and validation processors
- [Memory Overview](https://mastra.ai/docs/memory/overview) - Memory types and configuration

When creating custom processors, avoid mutating the input `messages` array or its objects directly.
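A non-mutating processor can be sketched like this. The processor shape is simplified for illustration; the point is that it returns new message objects rather than editing its input in place:

```typescript
type Message = { role: string; content: string };

// A processor that redacts email addresses without mutating its input.
function redactEmails(messages: Message[]): Message[] {
  return messages.map((msg) => ({
    ...msg, // copy the object rather than mutating it
    content: msg.content.replace(/\S+@\S+\.\S+/g, "[redacted]"),
  }));
}

const input: Message[] = [
  { role: "user", content: "reach me at jo@example.com" },
];
const output = redactEmails(input);

console.log(output[0].content); // "reach me at [redacted]"
console.log(input[0].content); // the original message is unchanged
```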
@@ -1,260 +0,0 @@
# Message History

Message history is the most basic and important form of memory. It gives the LLM a view of recent messages in the context window, enabling your agent to reference earlier exchanges and respond coherently.

You can also retrieve message history to display past conversations in your UI.

> **Info:** Each message belongs to a thread (the conversation) and a resource (the user or entity it's associated with). See [Threads and resources](https://mastra.ai/docs/memory/storage) for more detail.

## Getting started

Install the Mastra memory module along with a [storage adapter](https://mastra.ai/docs/memory/storage) for your database. The examples below use `@mastra/libsql`, which stores data locally in a `mastra.db` file.

**npm**:

```bash
npm install @mastra/memory@latest @mastra/libsql@latest
```

**pnpm**:

```bash
pnpm add @mastra/memory@latest @mastra/libsql@latest
```

**Yarn**:

```bash
yarn add @mastra/memory@latest @mastra/libsql@latest
```

**Bun**:

```bash
bun add @mastra/memory@latest @mastra/libsql@latest
```

Message history requires a storage adapter to persist conversations. Configure storage on your Mastra instance if you haven't already:

```typescript
import { Mastra } from "@mastra/core";
import { LibSQLStore } from "@mastra/libsql";

export const mastra = new Mastra({
  storage: new LibSQLStore({
    id: "mastra-storage",
    url: "file:./mastra.db",
  }),
});
```

Give your agent a `Memory`:

```typescript
import { Memory } from "@mastra/memory";
import { Agent } from "@mastra/core/agent";

export const agent = new Agent({
  id: "test-agent",
  memory: new Memory({
    options: {
      lastMessages: 10,
    },
  }),
});
```

When you call the agent, messages are automatically saved to the database. You can specify a `threadId`, `resourceId`, and optional `metadata`:

**Generate**:

```typescript
await agent.generate("Hello", {
  memory: {
    thread: {
      id: "thread-123",
      title: "Support conversation",
      metadata: { category: "billing" },
    },
    resource: "user-456",
  },
});
```

**Stream**:

```typescript
await agent.stream("Hello", {
  memory: {
    thread: {
      id: "thread-123",
      title: "Support conversation",
      metadata: { category: "billing" },
    },
    resource: "user-456",
  },
});
```

> **Info:** Threads and messages are created automatically when you call `agent.generate()` or `agent.stream()`, but you can also create them manually with [`createThread()`](https://mastra.ai/reference/memory/createThread) and [`saveMessages()`](https://mastra.ai/reference/memory/memory-class).

There are two ways to use this history:

- **Automatic inclusion** - Mastra automatically fetches and includes recent messages in the context window. By default, it includes the last 10 messages, keeping agents grounded in the conversation. You can adjust this number with `lastMessages`, but in most cases you don't need to think about it.
- [**Manual querying**](#querying) - For more control, use the `recall()` function to query threads and messages directly. This lets you choose exactly which memories are included in the context window, or fetch messages to render conversation history in your UI.
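Automatic inclusion behaves like a sliding window over the thread. A conceptual sketch (Mastra fetches the window from storage rather than slicing an in-memory array):

```typescript
type Message = { role: string; content: string };

// Keep only the most recent `lastMessages` entries for the context window.
function contextWindow(thread: Message[], lastMessages = 10): Message[] {
  return thread.slice(-lastMessages);
}

// A thread of 25 messages only contributes its last 10 to the context.
const thread: Message[] = Array.from({ length: 25 }, (_, i) => ({
  role: i % 2 === 0 ? "user" : "assistant",
  content: `message ${i + 1}`,
}));

const recent = contextWindow(thread, 10);
console.log(recent.length); // 10
console.log(recent[0].content); // "message 16"
```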

## Accessing Memory

To access memory functions for querying, cloning, or deleting threads and messages, call `getMemory()` on an agent:

```typescript
const agent = mastra.getAgent("weatherAgent");
const memory = await agent.getMemory();
```

The `Memory` instance gives you access to functions for listing threads, recalling messages, cloning conversations, and more.

## Querying

Use these methods to fetch threads and messages for displaying conversation history in your UI or for custom memory retrieval logic.

> **Warning:** The memory system does not enforce access control. Before running any query, verify in your application logic that the current user is authorized to access the `resourceId` being queried.

### Threads

Use [`listThreads()`](https://mastra.ai/reference/memory/listThreads) to retrieve threads for a resource:

```typescript
const result = await memory.listThreads({
  filter: { resourceId: "user-123" },
  perPage: false,
});
```

Paginate through threads:

```typescript
const result = await memory.listThreads({
  filter: { resourceId: "user-123" },
  page: 0,
  perPage: 10,
});

console.log(result.threads); // thread objects
console.log(result.hasMore); // more pages available?
```

You can also filter by metadata and control sort order:

```typescript
const result = await memory.listThreads({
  filter: {
    resourceId: "user-123",
    metadata: { status: "active" },
  },
  orderBy: { field: "createdAt", direction: "DESC" },
});
```

To fetch a single thread by ID, use [`getThreadById()`](https://mastra.ai/reference/memory/getThreadById):

```typescript
const thread = await memory.getThreadById({ threadId: "thread-123" });
```

### Messages

Once you have a thread, use [`recall()`](https://mastra.ai/reference/memory/recall) to retrieve its messages. It supports pagination, date filtering, and [semantic search](https://mastra.ai/docs/memory/semantic-recall).

Basic recall returns all messages from a thread:

```typescript
const { messages } = await memory.recall({
  threadId: "thread-123",
  perPage: false,
});
```

Paginate through messages:

```typescript
const { messages } = await memory.recall({
  threadId: "thread-123",
  page: 0,
  perPage: 50,
});
```

Filter by date range:

```typescript
const { messages } = await memory.recall({
  threadId: "thread-123",
  filter: {
    dateRange: {
      start: new Date("2025-01-01"),
      end: new Date("2025-06-01"),
    },
  },
});
```

Fetch a single message by ID:

```typescript
const { messages } = await memory.recall({
  threadId: "thread-123",
  include: [{ id: "msg-123" }],
});
```

Fetch multiple messages by ID with surrounding context:

```typescript
const { messages } = await memory.recall({
  threadId: "thread-123",
  include: [
    { id: "msg-123" },
    {
      id: "msg-456",
      withPreviousMessages: 3,
      withNextMessages: 1,
    },
  ],
});
```

Search by meaning (see [Semantic recall](https://mastra.ai/docs/memory/semantic-recall) for setup):

```typescript
const { messages } = await memory.recall({
  threadId: "thread-123",
  vectorSearchString: "project deadline discussion",
  threadConfig: {
    semanticRecall: true,
  },
});
```

### UI format

Message queries return messages in `MastraDBMessage[]` format. To display them in a frontend, you may need to convert them to the format your UI library expects. For example, [`toAISdkV5Messages`](https://mastra.ai/reference/ai-sdk/to-ai-sdk-v5-messages) converts messages to AI SDK UI format.
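As an illustration of what such a conversion does, here is a minimal hand-rolled mapper. The shapes below are simplified stand-ins, not Mastra's actual `MastraDBMessage` type or the AI SDK's real message type; in practice you would use `toAISdkV5Messages`:

```typescript
// Simplified stand-in for a stored message record.
type DBMessage = {
  id: string;
  role: "user" | "assistant";
  content: string;
  createdAt: Date;
};

// Simplified stand-in for an AI SDK style UI message.
type UIMessage = {
  id: string;
  role: "user" | "assistant";
  parts: { type: "text"; text: string }[];
};

// Map each stored record to the shape the UI expects.
function toUIMessages(messages: DBMessage[]): UIMessage[] {
  return messages.map((m) => ({
    id: m.id,
    role: m.role,
    parts: [{ type: "text", text: m.content }],
  }));
}

const ui = toUIMessages([
  { id: "msg-1", role: "user", content: "Hello", createdAt: new Date() },
]);
console.log(ui[0].parts[0].text); // "Hello"
```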

## Thread cloning

Thread cloning creates a copy of an existing thread with its messages. This is useful for branching conversations, creating checkpoints before a potentially destructive operation, or testing variations of a conversation.

```typescript
const { thread, clonedMessages } = await memory.cloneThread({
  sourceThreadId: "thread-123",
  title: "Branched conversation",
});
```

You can filter which messages get cloned (by count or date range), specify custom thread IDs, and use utility methods to inspect clone relationships.

See [`cloneThread()`](https://mastra.ai/reference/memory/cloneThread) and [clone utilities](https://mastra.ai/reference/memory/clone-utilities) for the full API.

## Deleting messages

To remove messages from a thread, use [`deleteMessages()`](https://mastra.ai/reference/memory/deleteMessages). You can delete by message ID or clear all messages from a thread.
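The two deletion modes described above can be sketched against a toy in-memory store. This is a conceptual illustration only; see the `deleteMessages()` reference for the real API and signature:

```typescript
type Message = { id: string; content: string };

// Toy store: thread ID → messages.
const store = new Map<string, Message[]>();
store.set("thread-123", [
  { id: "msg-1", content: "hello" },
  { id: "msg-2", content: "world" },
  { id: "msg-3", content: "bye" },
]);

// Mode 1: delete specific messages by ID.
function deleteByIds(threadId: string, ids: string[]): void {
  const messages = store.get(threadId) ?? [];
  store.set(threadId, messages.filter((m) => !ids.includes(m.id)));
}

// Mode 2: clear every message in the thread.
function clearThread(threadId: string): void {
  store.set(threadId, []);
}

deleteByIds("thread-123", ["msg-2"]);
console.log(store.get("thread-123")!.map((m) => m.id)); // ["msg-1", "msg-3"]

clearThread("thread-123");
console.log(store.get("thread-123")!.length); // 0
```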
@@ -1,45 +0,0 @@
# Memory

Memory enables your agent to remember user messages, agent replies, and tool results across interactions, giving it the context it needs to stay consistent, maintain conversation flow, and produce better answers over time.

Mastra supports four complementary memory types:

- [**Message history**](https://mastra.ai/docs/memory/message-history) - keeps recent messages from the current conversation so they can be rendered in the UI and used to maintain short-term continuity within the exchange.
- [**Working memory**](https://mastra.ai/docs/memory/working-memory) - stores persistent, structured user data such as names, preferences, and goals.
- [**Semantic recall**](https://mastra.ai/docs/memory/semantic-recall) - retrieves relevant messages from older conversations based on semantic meaning rather than exact keywords, mirroring how humans recall information by association. Requires a [vector database](https://mastra.ai/docs/memory/semantic-recall) and an [embedding model](https://mastra.ai/docs/memory/semantic-recall).
- [**Observational memory**](https://mastra.ai/docs/memory/observational-memory) - uses background Observer and Reflector agents to maintain a dense observation log that replaces raw message history as it grows, keeping the context window small while preserving long-term memory across conversations.

If the combined memory exceeds the model's context limit, [memory processors](https://mastra.ai/docs/memory/memory-processors) can filter, trim, or prioritize content so the most relevant information is preserved.

## Getting started

Choose a memory option to get started:

- [Message history](https://mastra.ai/docs/memory/message-history)
- [Working memory](https://mastra.ai/docs/memory/working-memory)
- [Semantic recall](https://mastra.ai/docs/memory/semantic-recall)
- [Observational memory](https://mastra.ai/docs/memory/observational-memory)

## Storage

Before enabling memory, you must first configure a storage adapter. Mastra supports several databases, including PostgreSQL, MongoDB, libSQL, and [more](https://mastra.ai/docs/memory/storage).

Storage can be configured at the [instance level](https://mastra.ai/docs/memory/storage) (shared across all agents) or at the [agent level](https://mastra.ai/docs/memory/storage) (dedicated per agent).

For semantic recall, you can use a separate vector database like Pinecone alongside your primary storage.

See the [Storage](https://mastra.ai/docs/memory/storage) documentation for configuration options, supported providers, and examples.

## Debugging memory

When [tracing](https://mastra.ai/docs/observability/tracing/overview) is enabled, you can inspect exactly which messages the agent uses for context in each request. The trace output shows all memory included in the agent's context window - both recent message history and messages recalled via semantic recall.

![Trace output showing memory context included in an agent request](https://mastra.ai/_next/image?url=%2Ftracingafter.png&w=1920&q=75)

This visibility helps you understand why an agent made specific decisions and verify that memory retrieval is working as expected.

## Next steps

- Learn more about [Storage](https://mastra.ai/docs/memory/storage) providers and configuration options
- Add [Message history](https://mastra.ai/docs/memory/message-history), [Working memory](https://mastra.ai/docs/memory/working-memory), [Semantic recall](https://mastra.ai/docs/memory/semantic-recall), or [Observational memory](https://mastra.ai/docs/memory/observational-memory)
- Visit the [Memory configuration reference](https://mastra.ai/reference/memory/memory-class) for all available options