@mastra/pg 1.0.0-beta.9 → 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (41)
  1. package/CHANGELOG.md +1404 -0
  2. package/dist/docs/README.md +36 -0
  3. package/dist/docs/SKILL.md +37 -0
  4. package/dist/docs/SOURCE_MAP.json +6 -0
  5. package/dist/docs/memory/01-storage.md +233 -0
  6. package/dist/docs/memory/02-working-memory.md +390 -0
  7. package/dist/docs/memory/03-semantic-recall.md +233 -0
  8. package/dist/docs/memory/04-reference.md +133 -0
  9. package/dist/docs/processors/01-reference.md +297 -0
  10. package/dist/docs/rag/01-overview.md +74 -0
  11. package/dist/docs/rag/02-vector-databases.md +643 -0
  12. package/dist/docs/rag/03-retrieval.md +548 -0
  13. package/dist/docs/rag/04-reference.md +369 -0
  14. package/dist/docs/storage/01-reference.md +828 -0
  15. package/dist/docs/tools/01-reference.md +440 -0
  16. package/dist/docs/vectors/01-reference.md +307 -0
  17. package/dist/index.cjs +1003 -223
  18. package/dist/index.cjs.map +1 -1
  19. package/dist/index.d.ts +1 -1
  20. package/dist/index.d.ts.map +1 -1
  21. package/dist/index.js +1000 -225
  22. package/dist/index.js.map +1 -1
  23. package/dist/shared/config.d.ts +61 -66
  24. package/dist/shared/config.d.ts.map +1 -1
  25. package/dist/storage/client.d.ts +91 -0
  26. package/dist/storage/client.d.ts.map +1 -0
  27. package/dist/storage/db/index.d.ts +82 -17
  28. package/dist/storage/db/index.d.ts.map +1 -1
  29. package/dist/storage/domains/memory/index.d.ts +3 -2
  30. package/dist/storage/domains/memory/index.d.ts.map +1 -1
  31. package/dist/storage/domains/observability/index.d.ts +23 -0
  32. package/dist/storage/domains/observability/index.d.ts.map +1 -1
  33. package/dist/storage/domains/scores/index.d.ts.map +1 -1
  34. package/dist/storage/domains/workflows/index.d.ts +1 -0
  35. package/dist/storage/domains/workflows/index.d.ts.map +1 -1
  36. package/dist/storage/index.d.ts +44 -17
  37. package/dist/storage/index.d.ts.map +1 -1
  38. package/dist/storage/test-utils.d.ts.map +1 -1
  39. package/dist/vector/index.d.ts.map +1 -1
  40. package/dist/vector/sql-builder.d.ts.map +1 -1
  41. package/package.json +11 -11
@@ -0,0 +1,233 @@
+ > Learn how to use semantic recall in Mastra to retrieve relevant messages from past conversations using vector search and embeddings.
+
+ # Semantic Recall
+
+ If you ask your friend what they did last weekend, they will search their memory for events associated with "last weekend" and then tell you what they did. That's roughly how semantic recall works in Mastra.
+
+ > **Watch 📹**
+ >
+ > What semantic recall is, how it works, and how to configure it in Mastra → [YouTube (5 minutes)](https://youtu.be/UVZtK8cK8xQ)
+
+ ## How Semantic Recall Works
+
+ Semantic recall is RAG-based search that helps agents maintain context across longer interactions when messages are no longer within [recent message history](./message-history).
+
+ It uses vector embeddings of messages for similarity search, integrates with various vector stores, and has configurable context windows around retrieved messages.
+
+ ![Diagram showing Mastra Memory semantic recall](/img/semantic-recall.png)
+
+ When semantic recall is enabled, new messages are used to query a vector DB for semantically similar messages.
+
+ After getting a response from the LLM, all new messages (user, assistant, and tool calls/results) are inserted into the vector DB to be recalled in later interactions.
+
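+ As a rough sketch of that lifecycle (the `embedder`, `vectorStore`, and `llm` interfaces below are hypothetical stand-ins, not Mastra's actual implementation, which also handles batching, scoping, and deduplication):
+
+ ```typescript
+ // Minimal sketch of the semantic recall lifecycle.
+ type Msg = { id: string; role: string; text: string };
+
+ async function recallTurn(
+   embedder: { embed(text: string): Promise<number[]> },
+   vectorStore: {
+     query(args: { vector: number[]; topK: number }): Promise<{ metadata: { message: Msg } }[]>;
+     upsert(args: { vector: number[]; metadata: { message: Msg } }): Promise<void>;
+   },
+   llm: { generate(messages: Msg[]): Promise<Msg> },
+   newUserMessage: Msg,
+ ): Promise<Msg> {
+   // 1. Embed the new message and query the vector DB for similar past messages.
+   const queryVector = await embedder.embed(newUserMessage.text);
+   const matches = await vectorStore.query({ vector: queryVector, topK: 3 });
+   const recalled = matches.map((m) => m.metadata.message);
+
+   // 2. Send the recalled context plus the new message to the LLM.
+   const response = await llm.generate([...recalled, newUserMessage]);
+
+   // 3. Embed and insert the new messages so future turns can recall them.
+   for (const message of [newUserMessage, response]) {
+     await vectorStore.upsert({
+       vector: await embedder.embed(message.text),
+       metadata: { message },
+     });
+   }
+   return response;
+ }
+ ```
+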
+ ## Quick Start
+
+ Semantic recall is enabled by default, so if you give your agent memory, it will be included:
+
+ ```typescript {9}
+ import { Agent } from "@mastra/core/agent";
+ import { Memory } from "@mastra/memory";
+
+ const agent = new Agent({
+   id: "support-agent",
+   name: "SupportAgent",
+   instructions: "You are a helpful support agent.",
+   model: "openai/gpt-5.1",
+   memory: new Memory(),
+ });
+ ```
+
+ ## Storage configuration
+
+ Semantic recall relies on a [storage and vector database](https://mastra.ai/reference/v1/memory/memory-class) to store messages and their embeddings.
+
+ ```ts {8-16}
+ import { Memory } from "@mastra/memory";
+ import { Agent } from "@mastra/core/agent";
+ import { LibSQLStore, LibSQLVector } from "@mastra/libsql";
+
+ const agent = new Agent({
+   memory: new Memory({
+     // this is the default storage db if omitted
+     storage: new LibSQLStore({
+       id: 'agent-storage',
+       url: "file:./local.db",
+     }),
+     // this is the default vector db if omitted
+     vector: new LibSQLVector({
+       id: 'agent-vector',
+       url: "file:./local.db",
+     }),
+   }),
+ });
+ ```
+
+ Each vector store page below includes installation instructions, configuration parameters, and usage examples:
+
+ - [Astra](https://mastra.ai/reference/v1/vectors/astra)
+ - [Chroma](https://mastra.ai/reference/v1/vectors/chroma)
+ - [Cloudflare Vectorize](https://mastra.ai/reference/v1/vectors/vectorize)
+ - [Convex](https://mastra.ai/reference/v1/vectors/convex)
+ - [Couchbase](https://mastra.ai/reference/v1/vectors/couchbase)
+ - [DuckDB](https://mastra.ai/reference/v1/vectors/duckdb)
+ - [Elasticsearch](https://mastra.ai/reference/v1/vectors/elasticsearch)
+ - [LanceDB](https://mastra.ai/reference/v1/vectors/lance)
+ - [libSQL](https://mastra.ai/reference/v1/vectors/libsql)
+ - [MongoDB](https://mastra.ai/reference/v1/vectors/mongodb)
+ - [OpenSearch](https://mastra.ai/reference/v1/vectors/opensearch)
+ - [Pinecone](https://mastra.ai/reference/v1/vectors/pinecone)
+ - [PostgreSQL](https://mastra.ai/reference/v1/vectors/pg)
+ - [Qdrant](https://mastra.ai/reference/v1/vectors/qdrant)
+ - [S3 Vectors](https://mastra.ai/reference/v1/vectors/s3vectors)
+ - [Turbopuffer](https://mastra.ai/reference/v1/vectors/turbopuffer)
+ - [Upstash](https://mastra.ai/reference/v1/vectors/upstash)
+
+ ## Recall configuration
+
+ The three main parameters that control semantic recall behavior are:
+
+ 1. **topK**: How many semantically similar messages to retrieve
+ 2. **messageRange**: How much surrounding context to include with each match
+ 3. **scope**: Whether to search within the current thread or across all threads owned by a resource (the default is resource scope)
+
+ ```typescript {5-7}
+ const agent = new Agent({
+   memory: new Memory({
+     options: {
+       semanticRecall: {
+         topK: 3, // Retrieve 3 most similar messages
+         messageRange: 2, // Include 2 messages before and after each match
+         scope: "resource", // Search across all threads for this user (default setting if omitted)
+       },
+     },
+   }),
+ });
+ ```
+
+ ## Embedder configuration
+
+ Semantic recall relies on an [embedding model](https://mastra.ai/reference/v1/memory/memory-class) to convert messages into embeddings. Mastra supports embedding models through the model router using `provider/model` strings, or you can use any [embedding model](https://sdk.vercel.ai/docs/ai-sdk-core/embeddings) compatible with the AI SDK.
+
+ #### Using the Model Router (Recommended)
+
+ The simplest way is to use a `provider/model` string with autocomplete support:
+
+ ```ts {7}
+ import { Memory } from "@mastra/memory";
+ import { Agent } from "@mastra/core/agent";
+ import { ModelRouterEmbeddingModel } from "@mastra/core/llm";
+
+ const agent = new Agent({
+   memory: new Memory({
+     embedder: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),
+   }),
+ });
+ ```
+
+ Supported embedding models:
+
+ - **OpenAI**: `text-embedding-3-small`, `text-embedding-3-large`, `text-embedding-ada-002`
+ - **Google**: `gemini-embedding-001`, `text-embedding-004`
+
+ The model router automatically handles API key detection from environment variables (`OPENAI_API_KEY`, `GOOGLE_GENERATIVE_AI_API_KEY`).
+
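+ For instance, switching to one of the Google models listed above should only change the model string, assuming the same `provider/model` convention applies and `GOOGLE_GENERATIVE_AI_API_KEY` is set in your environment:
+
+ ```ts
+ import { Memory } from "@mastra/memory";
+ import { Agent } from "@mastra/core/agent";
+ import { ModelRouterEmbeddingModel } from "@mastra/core/llm";
+
+ const agent = new Agent({
+   memory: new Memory({
+     // Hypothetical variant of the example above with a Google embedding model.
+     embedder: new ModelRouterEmbeddingModel("google/gemini-embedding-001"),
+   }),
+ });
+ ```
+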
134
+ #### Using AI SDK Packages
135
+
136
+ You can also use AI SDK embedding models directly:
137
+
138
+ ```ts {2,7}
139
+ import { Memory } from "@mastra/memory";
140
+ import { Agent } from "@mastra/core/agent";
141
+ import { ModelRouterEmbeddingModel } from "@mastra/core/llm";
142
+
143
+ const agent = new Agent({
144
+ memory: new Memory({
145
+ embedder: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),
146
+ }),
147
+ });
148
+ ```
+
+ #### Using FastEmbed (Local)
+
+ To use FastEmbed (a local embedding model), install `@mastra/fastembed`:
+
+ ```bash npm2yarn
+ npm install @mastra/fastembed@beta
+ ```
+
+ Then configure it in your memory:
+
+ ```ts {3,7}
+ import { Memory } from "@mastra/memory";
+ import { Agent } from "@mastra/core/agent";
+ import { fastembed } from "@mastra/fastembed";
+
+ const agent = new Agent({
+   memory: new Memory({
+     embedder: fastembed,
+   }),
+ });
+ ```
+
+ ## PostgreSQL Index Optimization
+
+ When using PostgreSQL as your vector store, you can optimize semantic recall performance by configuring the vector index. This is particularly important for large-scale deployments with thousands of messages.
+
+ PostgreSQL supports both IVFFlat and HNSW indexes. By default, Mastra creates an IVFFlat index, but HNSW indexes typically provide better performance, especially with OpenAI embeddings, which use inner product distance.
+
+ ```typescript {19-24}
+ import { Memory } from "@mastra/memory";
+ import { Agent } from "@mastra/core/agent";
+ import { PgStore, PgVector } from "@mastra/pg";
+
+ const agent = new Agent({
+   memory: new Memory({
+     storage: new PgStore({
+       id: 'agent-storage',
+       connectionString: process.env.DATABASE_URL,
+     }),
+     vector: new PgVector({
+       id: 'agent-vector',
+       connectionString: process.env.DATABASE_URL,
+     }),
+     options: {
+       semanticRecall: {
+         topK: 5,
+         messageRange: 2,
+         indexConfig: {
+           type: "hnsw", // Use HNSW for better performance
+           metric: "dotproduct", // Best for OpenAI embeddings
+           m: 16, // Number of bi-directional links (default: 16)
+           efConstruction: 64, // Size of candidate list during construction (default: 64)
+         },
+       },
+     },
+   }),
+ });
+ ```
+
+ For detailed information about index configuration options and performance tuning, see the [PgVector configuration guide](https://mastra.ai/reference/v1/vectors/pg#index-configuration-guide).
+
+ ## Disabling
+
+ There is a performance impact to using semantic recall: new messages are converted into embeddings and used to query a vector database before they are sent to the LLM.
+
+ Semantic recall is enabled by default but can be disabled when not needed:
+
+ ```typescript {4}
+ const agent = new Agent({
+   memory: new Memory({
+     options: {
+       semanticRecall: false,
+     },
+   }),
+ });
+ ```
+
+ You might want to disable semantic recall in scenarios like:
+
+ - When message history provides sufficient context for the current conversation.
+ - In performance-sensitive applications, like realtime two-way audio, where the added latency of creating embeddings and running vector queries is noticeable.
+
+ ## Viewing Recalled Messages
+
+ When tracing is enabled, any messages retrieved via semantic recall will appear in the agent's trace output, alongside recent message history (if configured).
@@ -0,0 +1,133 @@
+ # Memory API Reference
+
+ > API reference for memory - 1 entry
+
+ ---
+
+ ## Reference: Memory Class
+
+ > Documentation for the `Memory` class in Mastra, which provides a robust system for managing conversation history and thread-based message storage.
+
+ The `Memory` class provides a robust system for managing conversation history and thread-based message storage in Mastra. It enables persistent storage of conversations, semantic search capabilities, and efficient message retrieval. You must configure a storage provider for conversation history, and if you enable semantic recall you will also need to provide a vector store and embedder.
+
+ ## Usage example
+
+ ```typescript title="src/mastra/agents/test-agent.ts"
+ import { Memory } from "@mastra/memory";
+ import { Agent } from "@mastra/core/agent";
+
+ export const agent = new Agent({
+   name: "test-agent",
+   instructions: "You are an agent with memory.",
+   model: "openai/gpt-5.1",
+   memory: new Memory({
+     options: {
+       workingMemory: {
+         enabled: true,
+       },
+     },
+   }),
+ });
+ ```
+
+ > To enable `workingMemory` on an agent, you’ll need a storage provider configured on your main Mastra instance. See [Mastra class](../core/mastra-class) for more information.
+
+ ## Constructor parameters
+
+ ### Options parameters
+
+ ## Returns
+
+ ## Extended usage example
+
+ ```typescript title="src/mastra/agents/test-agent.ts"
+ import { Memory } from "@mastra/memory";
+ import { Agent } from "@mastra/core/agent";
+ import { LibSQLStore, LibSQLVector } from "@mastra/libsql";
+
+ export const agent = new Agent({
+   name: "test-agent",
+   instructions: "You are an agent with memory.",
+   model: "openai/gpt-5.1",
+   memory: new Memory({
+     storage: new LibSQLStore({
+       id: 'test-agent-storage',
+       url: "file:./working-memory.db",
+     }),
+     vector: new LibSQLVector({
+       id: 'test-agent-vector',
+       url: "file:./vector-memory.db",
+     }),
+     options: {
+       lastMessages: 10,
+       semanticRecall: {
+         topK: 3,
+         messageRange: 2,
+         scope: "resource",
+       },
+       workingMemory: {
+         enabled: true,
+       },
+       generateTitle: true,
+     },
+   }),
+ });
+ ```
+
+ ## PostgreSQL with index configuration
+
+ ```typescript title="src/mastra/agents/pg-agent.ts"
+ import { Memory } from "@mastra/memory";
+ import { Agent } from "@mastra/core/agent";
+ import { ModelRouterEmbeddingModel } from "@mastra/core/llm";
+ import { PgStore, PgVector } from "@mastra/pg";
+
+ export const agent = new Agent({
+   name: "pg-agent",
+   instructions: "You are an agent with optimized PostgreSQL memory.",
+   model: "openai/gpt-5.1",
+   memory: new Memory({
+     storage: new PgStore({
+       id: 'pg-agent-storage',
+       connectionString: process.env.DATABASE_URL,
+     }),
+     vector: new PgVector({
+       id: 'pg-agent-vector',
+       connectionString: process.env.DATABASE_URL,
+     }),
+     embedder: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),
+     options: {
+       lastMessages: 20,
+       semanticRecall: {
+         topK: 5,
+         messageRange: 3,
+         scope: "resource",
+         indexConfig: {
+           type: "hnsw", // Use HNSW for better performance
+           metric: "dotproduct", // Optimal for OpenAI embeddings
+           m: 16, // Number of bi-directional links
+           efConstruction: 64, // Construction-time candidate list size
+         },
+       },
+       workingMemory: {
+         enabled: true,
+       },
+     },
+   }),
+ });
+ ```
+
+ ### Related
+
+ - [Getting Started with Memory](https://mastra.ai/docs/v1/memory/overview)
+ - [Semantic Recall](https://mastra.ai/docs/v1/memory/semantic-recall)
+ - [Working Memory](https://mastra.ai/docs/v1/memory/working-memory)
+ - [Memory Processors](https://mastra.ai/docs/v1/memory/memory-processors)
+ - [createThread](https://mastra.ai/reference/v1/memory/createThread)
+ - [recall](https://mastra.ai/reference/v1/memory/recall)
+ - [getThreadById](https://mastra.ai/reference/v1/memory/getThreadById)
+ - [listThreads](https://mastra.ai/reference/v1/memory/listThreads)
+ - [deleteMessages](https://mastra.ai/reference/v1/memory/deleteMessages)
+ - [cloneThread](https://mastra.ai/reference/v1/memory/cloneThread)
+ - [Clone Utility Methods](https://mastra.ai/reference/v1/memory/clone-utilities)
@@ -0,0 +1,297 @@
+ # Processors API Reference
+
+ > API reference for processors - 3 entries
+
+ ---
+
+ ## Reference: Message History Processor
+
+ > Documentation for the MessageHistory processor in Mastra, which handles retrieval and persistence of conversation history.
+
+ `MessageHistory` is a **hybrid processor** that handles both retrieval and persistence of message history. On input, it fetches historical messages from storage and prepends them to the conversation. On output, it persists new messages to storage.
+
+ ## Usage example
+
+ ```typescript
+ import { MessageHistory } from "@mastra/core/processors";
+
+ const processor = new MessageHistory({
+   storage: memoryStorage,
+   lastMessages: 50,
+ });
+ ```
+
+ ## Constructor parameters
+
+ ### Options
+
+ ## Returns
+
+ ## Extended usage example
+
+ ```typescript title="src/mastra/agents/memory-agent.ts"
+ import { Agent } from "@mastra/core/agent";
+ import { MessageHistory } from "@mastra/core/processors";
+ import { PostgresStorage } from "@mastra/pg";
+
+ const storage = new PostgresStorage({
+   connectionString: process.env.DATABASE_URL,
+ });
+
+ export const agent = new Agent({
+   name: "memory-agent",
+   instructions: "You are a helpful assistant with conversation memory",
+   model: "openai/gpt-4o",
+   inputProcessors: [
+     new MessageHistory({
+       storage,
+       lastMessages: 100,
+     }),
+   ],
+   outputProcessors: [
+     new MessageHistory({
+       storage,
+     }),
+   ],
+ });
+ ```
+
+ ## Behavior
+
+ ### Input processing
+
+ 1. Retrieves `threadId` from the request context
+ 2. Fetches historical messages from storage (ordered by creation date, descending)
+ 3. Filters out system messages (they should not be stored in the database)
+ 4. Merges historical messages with incoming messages, avoiding duplicates by ID (see the sketch after this list)
+ 5. Adds historical messages with `source: 'memory'` tag
+
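+ A minimal sketch of the merge step (step 4), assuming a simplified message shape; the actual processor also preserves ordering and source tags:
+
+ ```typescript
+ type Message = { id: string; role: string; content: string };
+
+ // Merge historical and incoming messages, skipping any incoming IDs
+ // that are already present in history (deduplication by ID).
+ function mergeMessages(historical: Message[], incoming: Message[]): Message[] {
+   const seen = new Set(historical.map((m) => m.id));
+   return [...historical, ...incoming.filter((m) => !seen.has(m.id))];
+ }
+ ```
+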
+ ### Output processing
+
+ 1. Retrieves `threadId` from the request context
+ 2. Skips persistence if `readOnly` is set in memory config
+ 3. Filters out incomplete tool calls from messages
+ 4. Persists new user input and assistant response messages to storage
+ 5. Updates the thread's `updatedAt` timestamp
+
+ ## Related
+
+ - [Guardrails](https://mastra.ai/docs/v1/agents/guardrails)
+
+ ---
+
+ ## Reference: Semantic Recall Processor
+
+ > Documentation for the SemanticRecall processor in Mastra, which enables semantic search over conversation history using vector embeddings.
+
+ `SemanticRecall` is a **hybrid processor** that enables semantic search over conversation history using vector embeddings. On input, it performs semantic search to find relevant historical messages. On output, it creates embeddings for new messages to enable future semantic retrieval.
+
+ ## Usage example
+
+ ```typescript
+ import { SemanticRecall } from "@mastra/core/processors";
+ import { openai } from "@ai-sdk/openai";
+
+ const processor = new SemanticRecall({
+   storage: memoryStorage,
+   vector: vectorStore,
+   embedder: openai.embedding("text-embedding-3-small"),
+   topK: 5,
+   messageRange: 2,
+   scope: "resource",
+ });
+ ```
+
+ ## Constructor parameters
+
+ ### Options
+
+ ## Returns
+
+ ## Extended usage example
+
+ ```typescript title="src/mastra/agents/semantic-memory-agent.ts"
+ import { Agent } from "@mastra/core/agent";
+ import { SemanticRecall, MessageHistory } from "@mastra/core/processors";
+ import { PostgresStorage, PgVector } from "@mastra/pg";
+ import { openai } from "@ai-sdk/openai";
+
+ const storage = new PostgresStorage({
+   id: 'pg-storage',
+   connectionString: process.env.DATABASE_URL,
+ });
+
+ const vector = new PgVector({
+   id: 'pg-vector',
+   connectionString: process.env.DATABASE_URL,
+ });
+
+ const semanticRecall = new SemanticRecall({
+   storage,
+   vector,
+   embedder: openai.embedding("text-embedding-3-small"),
+   topK: 5,
+   messageRange: { before: 2, after: 1 },
+   scope: "resource",
+   threshold: 0.7,
+ });
+
+ export const agent = new Agent({
+   name: "semantic-memory-agent",
+   instructions: "You are a helpful assistant with semantic memory recall",
+   model: "openai/gpt-4o",
+   inputProcessors: [
+     semanticRecall,
+     new MessageHistory({ storage, lastMessages: 50 }),
+   ],
+   outputProcessors: [
+     semanticRecall,
+     new MessageHistory({ storage }),
+   ],
+ });
+ ```
+
+ ## Behavior
+
+ ### Input processing
+
+ 1. Extracts the user query from the last user message
+ 2. Generates embeddings for the query
+ 3. Performs vector search to find semantically similar messages
+ 4. Retrieves matched messages along with surrounding context (based on `messageRange`)
+ 5. For `scope: 'resource'`, formats cross-thread messages as a system message with timestamps
+ 6. Adds recalled messages with `source: 'memory'` tag
+
+ ### Output processing
+
+ 1. Extracts text content from new user and assistant messages
+ 2. Generates embeddings for each message
+ 3. Stores embeddings in the vector store with metadata (message ID, thread ID, resource ID, role, content, timestamp)
+ 4. Uses LRU caching for embeddings to avoid redundant API calls
+
+ ### Cross-thread recall
+
+ When `scope` is set to `'resource'`, the processor can recall messages from other threads. These cross-thread messages are formatted as a system message with timestamps and conversation labels to provide context about when and where the conversation occurred, roughly as sketched below.
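+
+ A minimal sketch of that formatting, assuming hypothetical `recalled` entries carrying `threadId`, `createdAt`, and `content`; the exact wording Mastra uses may differ:
+
+ ```typescript
+ type Recalled = { threadId: string; createdAt: Date; content: string };
+
+ // Group cross-thread matches into one labeled system message so the
+ // LLM can tell when and where each recalled message came from.
+ function formatCrossThreadRecall(recalled: Recalled[]): string {
+   const lines = recalled.map(
+     (r) => `[conversation ${r.threadId} at ${r.createdAt.toISOString()}] ${r.content}`,
+   );
+   return `Relevant messages from earlier conversations:\n${lines.join("\n")}`;
+ }
+ ```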
+
+ ## Related
+
+ - [Guardrails](https://mastra.ai/docs/v1/agents/guardrails)
+
+ ---
+
+ ## Reference: Working Memory Processor
+
+ > Documentation for the WorkingMemory processor in Mastra, which injects persistent user/context data as system instructions.
+
+ `WorkingMemory` is an **input processor** that injects working memory data as a system message. It retrieves persistent information from storage and formats it as instructions for the LLM, enabling the agent to maintain context about users across conversations.
+
+ ## Usage example
+
+ ```typescript
+ import { WorkingMemory } from "@mastra/core/processors";
+
+ const processor = new WorkingMemory({
+   storage: memoryStorage,
+   scope: "resource",
+   template: {
+     format: "markdown",
+     content: `# User Profile
+ - **Name**:
+ - **Preferences**:
+ - **Goals**:
+ `,
+   },
+ });
+ ```
+
+ ## Constructor parameters
+
+ ### Options
+
+ ### WorkingMemoryTemplate
+
+ ## Returns
+
+ ## Extended usage example
+
+ ```typescript title="src/mastra/agents/personalized-agent.ts"
+ import { Agent } from "@mastra/core/agent";
+ import { WorkingMemory, MessageHistory } from "@mastra/core/processors";
+ import { PostgresStorage } from "@mastra/pg";
+
+ const storage = new PostgresStorage({
+   connectionString: process.env.DATABASE_URL,
+ });
+
+ export const agent = new Agent({
+   name: "personalized-agent",
+   instructions: "You are a helpful assistant that remembers user preferences",
+   model: "openai/gpt-4o",
+   inputProcessors: [
+     new WorkingMemory({
+       storage,
+       scope: "resource",
+       template: {
+         format: "markdown",
+         content: `# User Information
+ - **Name**:
+ - **Location**:
+ - **Preferences**:
+ - **Communication Style**:
+ - **Current Projects**:
+ `,
+       },
+     }),
+     new MessageHistory({ storage, lastMessages: 50 }),
+   ],
+   outputProcessors: [
+     new MessageHistory({ storage }),
+   ],
+ });
+ ```
+
+ ## JSON format example
+
+ ```typescript
+ import { WorkingMemory } from "@mastra/core/processors";
+
+ const processor = new WorkingMemory({
+   storage: memoryStorage,
+   scope: "resource",
+   template: {
+     format: "json",
+     content: JSON.stringify({
+       user: {
+         name: { type: "string" },
+         preferences: { type: "object" },
+         goals: { type: "array" },
+       },
+     }),
+   },
+ });
+ ```
+
+ ## Behavior
+
+ ### Input processing
+
+ 1. Retrieves `threadId` and `resourceId` from the request context
+ 2. Based on scope, fetches working memory from either:
+    - Thread metadata (`scope: 'thread'`)
+    - Resource record (`scope: 'resource'`)
+ 3. Resolves the template (from provider, options, or default)
+ 4. Generates system instructions that include (see the sketch after this list):
+    - Guidelines for the LLM on storing and updating information
+    - The template structure
+    - Current working memory data
+ 5. Adds the instruction as a system message with `source: 'memory'` tag
+
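+ A minimal sketch of how that system instruction might be assembled (hypothetical helper; the real guidelines text is maintained by Mastra):
+
+ ```typescript
+ // Compose a system instruction from the resolved template and the
+ // current working memory data fetched for this thread or resource.
+ function buildWorkingMemoryInstruction(template: string, data: string): string {
+   return [
+     "You maintain a working memory about the user.",
+     "Update it via the updateWorkingMemory tool when you learn new information.",
+     `Template:\n${template}`,
+     `Current working memory:\n${data}`,
+   ].join("\n\n");
+ }
+ ```
+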
+ ### Working memory updates
+
+ Working memory updates happen through the `updateWorkingMemory` tool provided by the Memory class, not through this processor. The processor only handles injecting the current working memory state into conversations.
+
+ ### Default template
+
+ If no template is provided, the processor uses a default markdown template with fields for:
+
+ - First Name, Last Name
+ - Location, Occupation
+ - Interests, Goals
+ - Events, Facts, Projects
+
+ ## Related
+
+ - [Guardrails](https://mastra.ai/docs/v1/agents/guardrails)