@mastra/pg 1.6.1 → 1.7.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +74 -0
- package/dist/docs/SKILL.md +40 -0
- package/dist/docs/assets/SOURCE_MAP.json +6 -0
- package/dist/docs/references/docs-memory-semantic-recall.md +272 -0
- package/dist/docs/references/docs-memory-storage.md +261 -0
- package/dist/docs/references/docs-memory-working-memory.md +400 -0
- package/dist/docs/references/docs-rag-overview.md +72 -0
- package/dist/docs/references/docs-rag-retrieval.md +515 -0
- package/dist/docs/references/docs-rag-vector-databases.md +645 -0
- package/dist/docs/references/reference-memory-memory-class.md +147 -0
- package/dist/docs/references/reference-processors-message-history-processor.md +85 -0
- package/dist/docs/references/reference-processors-semantic-recall-processor.md +117 -0
- package/dist/docs/references/reference-processors-working-memory-processor.md +152 -0
- package/dist/docs/references/reference-rag-metadata-filters.md +216 -0
- package/dist/docs/references/reference-storage-composite.md +235 -0
- package/dist/docs/references/reference-storage-dynamodb.md +282 -0
- package/dist/docs/references/reference-storage-postgresql.md +526 -0
- package/dist/docs/references/reference-tools-vector-query-tool.md +459 -0
- package/dist/docs/references/reference-vectors-pg.md +408 -0
- package/dist/index.cjs +62 -5
- package/dist/index.cjs.map +1 -1
- package/dist/index.js +62 -5
- package/dist/index.js.map +1 -1
- package/dist/storage/db/index.d.ts.map +1 -1
- package/dist/storage/domains/memory/index.d.ts.map +1 -1
- package/dist/vector/index.d.ts.map +1 -1
- package/package.json +5 -5
@@ -0,0 +1,147 @@

# Memory Class

The `Memory` class provides a robust system for managing conversation history and thread-based message storage in Mastra. It enables persistent storage of conversations, semantic search capabilities, and efficient message retrieval. You must configure a storage provider for conversation history, and if you enable semantic recall you will also need to provide a vector store and embedder.

## Usage example

```typescript
import { Memory } from '@mastra/memory'
import { Agent } from '@mastra/core/agent'

export const agent = new Agent({
  name: 'test-agent',
  instructions: 'You are an agent with memory.',
  model: 'openai/gpt-5.1',
  memory: new Memory({
    options: {
      workingMemory: {
        enabled: true,
      },
    },
  }),
})
```

> To enable `workingMemory` on an agent, you’ll need a storage provider configured on your main Mastra instance. See [Mastra class](https://mastra.ai/reference/core/mastra-class) for more information.
## Constructor parameters

**storage?:** (`MastraCompositeStore`): Storage implementation for persisting memory data. Defaults to `new DefaultStorage({ config: { url: "file:memory.db" } })` if not provided.

**vector?:** (`MastraVector | false`): Vector store for semantic search capabilities. Set to `false` to disable vector operations.

**embedder?:** (`EmbeddingModel<string> | EmbeddingModelV2<string>`): Embedder instance for vector embeddings. Required when semantic recall is enabled.

**options?:** (`MemoryConfig`): Memory configuration options.

### Options parameters

**lastMessages?:** (`number | false`): Number of most recent messages to include in context. Set to `false` to disable loading conversation history into context. Use `Number.MAX_SAFE_INTEGER` to retrieve all messages with no limit. To prevent saving new messages, use the `readOnly` option instead. (Default: `10`)

**readOnly?:** (`boolean`): When `true`, prevents memory from saving new messages and provides working memory as read-only context (without the `updateWorkingMemory` tool). Useful for read-only operations such as previews, internal routing agents, or sub-agents that should reference but not modify memory. (Default: `false`)

**semanticRecall?:** (`boolean | { topK: number; messageRange: number | { before: number; after: number }; scope?: 'thread' | 'resource' }`): Enable semantic search in message history. Can be a boolean or an object with configuration options. When enabled, requires both a vector store and an embedder to be configured. The default `topK` is `4` and the default `messageRange` is `{ before: 1, after: 1 }`. (Default: `false`)

**workingMemory?:** (`WorkingMemory`): Configuration for the working memory feature. Can be `{ enabled: boolean; template?: string; schema?: ZodObject<any> | JSONSchema7; scope?: 'thread' | 'resource' }`, or `{ enabled: false }` to disable. (Default: `{ enabled: false, template: '# User Information\n- **First Name**:\n- **Last Name**:\n...' }`)

**observationalMemory?:** (`boolean | ObservationalMemoryOptions`): Enable Observational Memory for long-context agentic memory. Set to `true` for defaults, or pass a config object to customize token budgets, models, and scope. See the [Observational Memory reference](https://mastra.ai/reference/memory/observational-memory) for configuration details. (Default: `false`)

**generateTitle?:** (`boolean | { model: DynamicArgument<MastraLanguageModel>; instructions?: DynamicArgument<string> }`): Controls automatic thread title generation from the user's first message. Can be a boolean or an object with a custom model and instructions. (Default: `false`)

## Returns

**memory:** (`Memory`): A new `Memory` instance with the specified configuration.
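The object form of `generateTitle` can be sketched as follows; the model id and instruction wording are illustrative examples, not library defaults:

```typescript
// Illustrative only: a generateTitle configuration with a custom model and
// custom instructions. The model id and wording below are examples.
const options = {
  generateTitle: {
    model: 'openai/gpt-5.1',
    instructions: 'Summarize the conversation in a title of at most five words.',
  },
}
// Would be passed as `new Memory({ options })`.
```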
## Extended usage example

```typescript
import { Memory } from '@mastra/memory'
import { Agent } from '@mastra/core/agent'
import { LibSQLStore, LibSQLVector } from '@mastra/libsql'

export const agent = new Agent({
  name: 'test-agent',
  instructions: 'You are an agent with memory.',
  model: 'openai/gpt-5.1',
  memory: new Memory({
    storage: new LibSQLStore({
      id: 'test-agent-storage',
      url: 'file:./working-memory.db',
    }),
    vector: new LibSQLVector({
      id: 'test-agent-vector',
      url: 'file:./vector-memory.db',
    }),
    options: {
      lastMessages: 10,
      semanticRecall: {
        topK: 3,
        messageRange: 2,
        scope: 'resource',
      },
      workingMemory: {
        enabled: true,
      },
      generateTitle: true,
    },
  }),
})
```
## PostgreSQL with index configuration

```typescript
import { Memory } from '@mastra/memory'
import { Agent } from '@mastra/core/agent'
import { ModelRouterEmbeddingModel } from '@mastra/core/llm'
import { PgStore, PgVector } from '@mastra/pg'

export const agent = new Agent({
  name: 'pg-agent',
  instructions: 'You are an agent with optimized PostgreSQL memory.',
  model: 'openai/gpt-5.1',
  memory: new Memory({
    storage: new PgStore({
      id: 'pg-agent-storage',
      connectionString: process.env.DATABASE_URL,
    }),
    vector: new PgVector({
      id: 'pg-agent-vector',
      connectionString: process.env.DATABASE_URL,
    }),
    embedder: new ModelRouterEmbeddingModel('openai/text-embedding-3-small'),
    options: {
      lastMessages: 20,
      semanticRecall: {
        topK: 5,
        messageRange: 3,
        scope: 'resource',
        indexConfig: {
          type: 'hnsw', // Use HNSW for better performance
          metric: 'dotproduct', // Optimal for OpenAI embeddings
          m: 16, // Number of bi-directional links
          efConstruction: 64, // Construction-time candidate list size
        },
      },
      workingMemory: {
        enabled: true,
      },
    },
  }),
})
```
### Related

- [Getting Started with Memory](https://mastra.ai/docs/memory/overview)
- [Semantic Recall](https://mastra.ai/docs/memory/semantic-recall)
- [Working Memory](https://mastra.ai/docs/memory/working-memory)
- [Observational Memory](https://mastra.ai/docs/memory/observational-memory)
- [Memory Processors](https://mastra.ai/docs/memory/memory-processors)
- [createThread](https://mastra.ai/reference/memory/createThread)
- [recall](https://mastra.ai/reference/memory/recall)
- [getThreadById](https://mastra.ai/reference/memory/getThreadById)
- [listThreads](https://mastra.ai/reference/memory/listThreads)
- [deleteMessages](https://mastra.ai/reference/memory/deleteMessages)
- [cloneThread](https://mastra.ai/reference/memory/cloneThread)
- [Clone Utility Methods](https://mastra.ai/reference/memory/clone-utilities)
@@ -0,0 +1,85 @@

# MessageHistory

`MessageHistory` is a **hybrid processor** that handles both retrieval and persistence of message history. On input, it fetches historical messages from storage and prepends them to the conversation. On output, it persists new messages to storage.

## Usage example

```typescript
import { MessageHistory } from '@mastra/core/processors'

const processor = new MessageHistory({
  storage: memoryStorage,
  lastMessages: 50,
})
```
## Constructor parameters

**options:** (`MessageHistoryOptions`): Configuration options for the message history processor

### Options

**storage:** (`MemoryStorage`): Storage instance for retrieving and persisting messages

**lastMessages?:** (`number`): Maximum number of historical messages to retrieve. If not specified, retrieves all messages

## Returns

**id:** (`string`): Processor identifier set to 'message-history'

**name:** (`string`): Processor display name set to 'MessageHistory'

**processInput:** (`(args: { messages: MastraDBMessage[]; messageList: MessageList; abort: (reason?: string) => never; tracingContext?: TracingContext; requestContext?: RequestContext }) => Promise<MessageList | MastraDBMessage[]>`): Fetches historical messages from storage and adds them to the message list

**processOutputResult:** (`(args: { messages: MastraDBMessage[]; messageList: MessageList; abort: (reason?: string) => never; tracingContext?: TracingContext; requestContext?: RequestContext }) => Promise<MessageList>`): Persists new messages (user input and assistant response) to storage, excluding system messages
## Extended usage example

```typescript
import { Agent } from '@mastra/core/agent'
import { MessageHistory } from '@mastra/core/processors'
import { PostgresStorage } from '@mastra/pg'

const storage = new PostgresStorage({
  connectionString: process.env.DATABASE_URL,
})

export const agent = new Agent({
  name: 'memory-agent',
  instructions: 'You are a helpful assistant with conversation memory',
  model: 'openai:gpt-4o',
  inputProcessors: [
    new MessageHistory({
      storage,
      lastMessages: 100,
    }),
  ],
  outputProcessors: [
    new MessageHistory({
      storage,
    }),
  ],
})
```
## Behavior

### Input processing

1. Retrieves `threadId` from the request context
2. Fetches historical messages from storage (ordered by creation date, descending)
3. Filters out system messages (they should not be stored in the database)
4. Merges historical messages with incoming messages, avoiding duplicates by ID
5. Adds historical messages with `source: 'memory'` tag
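Step 4 above (merging while avoiding duplicates) can be sketched in plain TypeScript. This is an illustrative model only: the real processor operates on `MastraDBMessage` objects, for which a minimal `{ id, content }` shape stands in here.

```typescript
// Minimal stand-in for a stored message.
type Msg = { id: string; content: string }

// Historical messages come first, then incoming messages, skipping any
// incoming message whose id was already loaded from storage.
function mergeById(historical: Msg[], incoming: Msg[]): Msg[] {
  const seen = new Set(historical.map((m) => m.id))
  const merged = [...historical]
  for (const m of incoming) {
    if (!seen.has(m.id)) {
      seen.add(m.id)
      merged.push(m)
    }
  }
  return merged
}

const merged = mergeById(
  [{ id: 'a', content: 'hi' }, { id: 'b', content: 'hello' }],
  [{ id: 'b', content: 'hello' }, { id: 'c', content: 'how are you?' }],
)
// merged contains ids a, b, c — the duplicate 'b' is dropped
```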
### Output processing

1. Retrieves `threadId` from the request context
2. Skips persistence if `readOnly` is set in memory config
3. Filters out incomplete tool calls from messages
4. Persists new user input and assistant response messages to storage
5. Updates the thread's `updatedAt` timestamp

## Related

- [Guardrails](https://mastra.ai/docs/agents/guardrails)
@@ -0,0 +1,117 @@

# SemanticRecall

`SemanticRecall` is a **hybrid processor** that enables semantic search over conversation history using vector embeddings. On input, it performs semantic search to find relevant historical messages. On output, it creates embeddings for new messages to enable future semantic retrieval.

## Usage example

```typescript
import { SemanticRecall } from '@mastra/core/processors'
import { openai } from '@ai-sdk/openai'

const processor = new SemanticRecall({
  storage: memoryStorage,
  vector: vectorStore,
  embedder: openai.embedding('text-embedding-3-small'),
  topK: 5,
  messageRange: 2,
  scope: 'resource',
})
```
## Constructor parameters

**options:** (`SemanticRecallOptions`): Configuration options for the semantic recall processor

### Options

**storage:** (`MemoryStorage`): Storage instance for retrieving messages

**vector:** (`MastraVector`): Vector store for semantic search

**embedder:** (`MastraEmbeddingModel<string>`): Embedder for generating query embeddings

**topK?:** (`number`): Number of most similar messages to retrieve

**messageRange?:** (`number | { before: number; after: number }`): Number of context messages to include before/after each match. Can be a single number (same for both) or an object with separate values

**scope?:** (`'thread' | 'resource'`): Scope of semantic search. 'thread' searches within the current thread only. 'resource' searches across all threads for the resource

**threshold?:** (`number`): Minimum similarity score threshold (0-1). Messages below this threshold are filtered out

**indexName?:** (`string`): Index name for the vector store. If not provided, auto-generated based on embedder model

**logger?:** (`IMastraLogger`): Optional logger instance for structured logging
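How `messageRange` widens each match can be sketched as an index calculation. This is an illustrative model of the documented semantics, not the library's internal code:

```typescript
// Expand each matched message index by `before`/`after` neighbouring
// messages, clamped to the conversation bounds, and deduplicate overlaps.
function expandMatches(
  matched: number[],
  total: number,
  range: number | { before: number; after: number },
): number[] {
  const { before, after } =
    typeof range === 'number' ? { before: range, after: range } : range
  const included = new Set<number>()
  for (const i of matched) {
    const start = Math.max(0, i - before)
    const end = Math.min(total - 1, i + after)
    for (let j = start; j <= end; j++) included.add(j)
  }
  return Array.from(included).sort((a, b) => a - b)
}

// A match at index 5 in a 10-message thread, with 2 before and 1 after:
const indices = expandMatches([5], 10, { before: 2, after: 1 })
// → [3, 4, 5, 6]
```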
## Returns

**id:** (`string`): Processor identifier set to 'semantic-recall'

**name:** (`string`): Processor display name set to 'SemanticRecall'

**processInput:** (`(args: { messages: MastraDBMessage[]; messageList: MessageList; abort: (reason?: string) => never; tracingContext?: TracingContext; requestContext?: RequestContext }) => Promise<MessageList | MastraDBMessage[]>`): Performs semantic search on historical messages and adds relevant context to the message list

**processOutputResult:** (`(args: { messages: MastraDBMessage[]; messageList?: MessageList; abort: (reason?: string) => never; tracingContext?: TracingContext; requestContext?: RequestContext }) => Promise<MessageList | MastraDBMessage[]>`): Creates embeddings for new messages to enable future semantic search
## Extended usage example

```typescript
import { Agent } from '@mastra/core/agent'
import { SemanticRecall, MessageHistory } from '@mastra/core/processors'
import { PostgresStorage, PgVector } from '@mastra/pg'
import { openai } from '@ai-sdk/openai'

const storage = new PostgresStorage({
  id: 'pg-storage',
  connectionString: process.env.DATABASE_URL,
})

const vector = new PgVector({
  id: 'pg-vector',
  connectionString: process.env.DATABASE_URL,
})

const semanticRecall = new SemanticRecall({
  storage,
  vector,
  embedder: openai.embedding('text-embedding-3-small'),
  topK: 5,
  messageRange: { before: 2, after: 1 },
  scope: 'resource',
  threshold: 0.7,
})

export const agent = new Agent({
  name: 'semantic-memory-agent',
  instructions: 'You are a helpful assistant with semantic memory recall',
  model: 'openai:gpt-4o',
  inputProcessors: [semanticRecall, new MessageHistory({ storage, lastMessages: 50 })],
  outputProcessors: [semanticRecall, new MessageHistory({ storage })],
})
```
## Behavior

### Input processing

1. Extracts the user query from the last user message
2. Generates embeddings for the query
3. Performs vector search to find semantically similar messages
4. Retrieves matched messages along with surrounding context (based on `messageRange`)
5. For `scope: 'resource'`, formats cross-thread messages as a system message with timestamps
6. Adds recalled messages with `source: 'memory'` tag
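The `threshold` option filters the matches found in step 3 by similarity score. The actual scoring happens inside the vector store; the sketch below only illustrates the filtering semantics using cosine similarity:

```typescript
// Cosine similarity between two equal-length vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    na += a[i] * a[i]
    nb += b[i] * b[i]
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb))
}

const query = [1, 0]
const candidates = [
  { id: 'm1', vec: [1, 0] },     // similarity 1.0
  { id: 'm2', vec: [0.8, 0.6] }, // similarity 0.8
  { id: 'm3', vec: [0, 1] },     // similarity 0.0
]

// With threshold: 0.7, only m1 and m2 survive.
const kept = candidates.filter((c) => cosine(query, c.vec) >= 0.7)
```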
### Output processing

1. Extracts text content from new user and assistant messages
2. Generates embeddings for each message
3. Stores embeddings in the vector store with metadata (message ID, thread ID, resource ID, role, content, timestamp)
4. Uses LRU caching for embeddings to avoid redundant API calls
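The LRU caching in step 4 can be sketched with a `Map`, which iterates in insertion order. This is an illustrative implementation, not the processor's actual cache:

```typescript
// A tiny LRU cache: Map keys iterate in insertion order, so the first key
// is always the least-recently-used entry.
class LruCache<V> {
  private map = new Map<string, V>()
  private capacity: number

  constructor(capacity: number) {
    this.capacity = capacity
  }

  get(key: string): V | undefined {
    const value = this.map.get(key)
    if (value !== undefined) {
      // Re-insert to mark as most recently used.
      this.map.delete(key)
      this.map.set(key, value)
    }
    return value
  }

  set(key: string, value: V): void {
    if (this.map.has(key)) {
      this.map.delete(key)
    } else if (this.map.size >= this.capacity) {
      // Evict the least-recently-used entry.
      this.map.delete(this.map.keys().next().value as string)
    }
    this.map.set(key, value)
  }
}

const cache = new LruCache<number[]>(2)
cache.set('hello', [0.1, 0.2])
cache.set('world', [0.3, 0.4])
cache.get('hello')        // touch 'hello' so 'world' becomes LRU
cache.set('again', [0.5]) // evicts 'world'
```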
### Cross-thread recall

When `scope` is set to `'resource'`, the processor can recall messages from other threads. These cross-thread messages are formatted as a system message with timestamps and conversation labels to provide context about when and where the conversation occurred.

## Related

- [Guardrails](https://mastra.ai/docs/agents/guardrails)
@@ -0,0 +1,152 @@

# WorkingMemory

`WorkingMemory` is an **input processor** that injects working memory data as a system message. It retrieves persistent information from storage and formats it as instructions for the LLM, enabling the agent to maintain context about users across conversations.

## Usage example

```typescript
import { WorkingMemory } from '@mastra/core/processors'

const processor = new WorkingMemory({
  storage: memoryStorage,
  scope: 'resource',
  template: {
    format: 'markdown',
    content: `# User Profile
- **Name**:
- **Preferences**:
- **Goals**:
`,
  },
})
```
## Constructor parameters

**options:** (`Options`): Configuration options for the working memory processor

### Options

**storage:** (`MemoryStorage`): Storage instance for retrieving working memory data

**template?:** (`WorkingMemoryTemplate`): Template defining the format and structure of working memory

**scope?:** (`'thread' | 'resource'`): Scope of working memory. 'thread' scopes to the current thread, 'resource' shares across all threads for the resource

**useVNext?:** (`boolean`): Use the next-generation instruction format with improved guidelines

**readOnly?:** (`boolean`): When true, working memory is provided as read-only context. The data is injected into the conversation but without the updateWorkingMemory tool or update instructions. Useful for agents that should reference working memory without modifying it.

**templateProvider?:** (`{ getWorkingMemoryTemplate(args: { memoryConfig?: MemoryConfig }): Promise<WorkingMemoryTemplate | null> }`): Dynamic template provider for runtime template resolution

**logger?:** (`IMastraLogger`): Optional logger instance for structured logging

### WorkingMemoryTemplate

**format:** (`'markdown' | 'json'`): Format of the working memory content

**content:** (`string`): Template content defining the structure of working memory data
## Returns

**id:** (`string`): Processor identifier set to 'working-memory'

**name:** (`string`): Processor display name set to 'WorkingMemory'

**defaultWorkingMemoryTemplate:** (`string`): The default markdown template used when no custom template is provided

**processInput:** (`(args: { messages: MastraDBMessage[]; messageList: MessageList; abort: (reason?: string) => never; requestContext?: RequestContext }) => Promise<MessageList | MastraDBMessage[]>`): Retrieves working memory and adds it as a system message to the message list
## Extended usage example

```typescript
import { Agent } from '@mastra/core/agent'
import { WorkingMemory, MessageHistory } from '@mastra/core/processors'
import { PostgresStorage } from '@mastra/pg'

const storage = new PostgresStorage({
  connectionString: process.env.DATABASE_URL,
})

export const agent = new Agent({
  name: 'personalized-agent',
  instructions: 'You are a helpful assistant that remembers user preferences',
  model: 'openai:gpt-4o',
  inputProcessors: [
    new WorkingMemory({
      storage,
      scope: 'resource',
      template: {
        format: 'markdown',
        content: `# User Information
- **Name**:
- **Location**:
- **Preferences**:
- **Communication Style**:
- **Current Projects**:
`,
      },
    }),
    new MessageHistory({ storage, lastMessages: 50 }),
  ],
  outputProcessors: [new MessageHistory({ storage })],
})
```
## JSON format example

```typescript
import { WorkingMemory } from '@mastra/core/processors'

const processor = new WorkingMemory({
  storage: memoryStorage,
  scope: 'resource',
  template: {
    format: 'json',
    content: JSON.stringify({
      user: {
        name: { type: 'string' },
        preferences: { type: 'object' },
        goals: { type: 'array' },
      },
    }),
  },
})
```
## Behavior

### Input processing

1. Retrieves `threadId` and `resourceId` from the request context

2. Based on scope, fetches working memory from either:

   - Thread metadata (`scope: 'thread'`)
   - Resource record (`scope: 'resource'`)

3. Resolves the template (from provider, options, or default)

4. Generates system instructions based on mode:

   - **Normal mode**: Includes guidelines for storing/updating information, template structure, and current data
   - **Read-only mode** (`readOnly: true`): Includes only the current data as context without update instructions

5. Adds the instruction as a system message with `source: 'memory'` tag
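The instruction assembly in steps 3 through 5 can be sketched as simple string composition. The wording below is illustrative; the processor's real instructions differ:

```typescript
// Illustrative sketch: combine the resolved template with the current
// working memory data into one system-message string.
function buildSystemMessage(template: string, data: string | null): string {
  const lines = [
    'You have working memory for persistent user information.',
    'Template:',
    template,
  ]
  if (data) {
    lines.push('Current working memory:', data)
  } else {
    lines.push('Working memory is currently empty.')
  }
  return lines.join('\n')
}

const msg = buildSystemMessage(
  '# User Profile\n- **Name**:',
  '# User Profile\n- **Name**: Ada',
)
// msg contains both the template and the current data
```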
### Working memory updates

Working memory updates happen through the `updateWorkingMemory` tool provided by the Memory class, not through this processor. The processor only handles injecting the current working memory state into conversations.

### Default template

If no template is provided, the processor uses a default markdown template with fields for:

- First Name, Last Name
- Location, Occupation
- Interests, Goals
- Events, Facts, Projects
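Given those fields, the default template presumably resembles the following markdown (an illustrative sketch, not the exact built-in string):

```markdown
# User Information
- **First Name**:
- **Last Name**:
- **Location**:
- **Occupation**:
- **Interests**:
- **Goals**:
- **Events**:
- **Facts**:
- **Projects**:
```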
## Related

- [Guardrails](https://mastra.ai/docs/agents/guardrails)