@mastra/pinecone 1.0.0-beta.3 → 1.0.0-beta.4
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +18 -0
- package/dist/docs/README.md +33 -0
- package/dist/docs/SKILL.md +34 -0
- package/dist/docs/SOURCE_MAP.json +6 -0
- package/dist/docs/memory/01-storage.md +181 -0
- package/dist/docs/memory/02-memory-processors.md +319 -0
- package/dist/docs/rag/01-vector-databases.md +638 -0
- package/dist/docs/rag/02-retrieval.md +549 -0
- package/dist/docs/vectors/01-reference.md +101 -0
- package/package.json +7 -7
package/CHANGELOG.md
CHANGED
@@ -1,5 +1,23 @@

# @mastra/pinecone

## 1.0.0-beta.4

### Patch Changes

- Add embedded documentation support for Mastra packages ([#11472](https://github.com/mastra-ai/mastra/pull/11472))

  Mastra packages now include embedded documentation in the published npm package under `dist/docs/`. This enables coding agents and AI assistants to understand and use the framework by reading documentation directly from `node_modules`.

  Each package includes:

  - **SKILL.md** - Entry point explaining the package's purpose and capabilities
  - **SOURCE_MAP.json** - Machine-readable index mapping exports to types and implementation files
  - **Topic folders** - Conceptual documentation organized by feature area

  Documentation is driven by the `packages` frontmatter field in MDX files, which maps docs to their corresponding packages. CI validation ensures all docs include this field.
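
  For illustration, a docs page's frontmatter might then carry something like the following (a hypothetical sketch; only the `packages` field name comes from this changelog, while the title and the value shape are assumptions):

  ```yaml
  ---
  title: Vector Databases
  packages:
    - "@mastra/pinecone"
  ---
  ```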

- Updated dependencies [[`d2d3e22`](https://github.com/mastra-ai/mastra/commit/d2d3e22a419ee243f8812a84e3453dd44365ecb0), [`bc72b52`](https://github.com/mastra-ai/mastra/commit/bc72b529ee4478fe89ecd85a8be47ce0127b82a0), [`05b8bee`](https://github.com/mastra-ai/mastra/commit/05b8bee9e50e6c2a4a2bf210eca25ee212ca24fa), [`c042bd0`](https://github.com/mastra-ai/mastra/commit/c042bd0b743e0e86199d0cb83344ca7690e34a9c), [`940a2b2`](https://github.com/mastra-ai/mastra/commit/940a2b27480626ed7e74f55806dcd2181c1dd0c2), [`e0941c3`](https://github.com/mastra-ai/mastra/commit/e0941c3d7fc75695d5d258e7008fd5d6e650800c), [`0c0580a`](https://github.com/mastra-ai/mastra/commit/0c0580a42f697cd2a7d5973f25bfe7da9055038a), [`28f5f89`](https://github.com/mastra-ai/mastra/commit/28f5f89705f2409921e3c45178796c0e0d0bbb64), [`e601b27`](https://github.com/mastra-ai/mastra/commit/e601b272c70f3a5ecca610373aa6223012704892), [`3d3366f`](https://github.com/mastra-ai/mastra/commit/3d3366f31683e7137d126a3a57174a222c5801fb), [`5a4953f`](https://github.com/mastra-ai/mastra/commit/5a4953f7d25bb15ca31ed16038092a39cb3f98b3), [`eb9e522`](https://github.com/mastra-ai/mastra/commit/eb9e522ce3070a405e5b949b7bf5609ca51d7fe2), [`20e6f19`](https://github.com/mastra-ai/mastra/commit/20e6f1971d51d3ff6dd7accad8aaaae826d540ed), [`4f0b3c6`](https://github.com/mastra-ai/mastra/commit/4f0b3c66f196c06448487f680ccbb614d281e2f7), [`74c4f22`](https://github.com/mastra-ai/mastra/commit/74c4f22ed4c71e72598eacc346ba95cdbc00294f), [`81b6a8f`](https://github.com/mastra-ai/mastra/commit/81b6a8ff79f49a7549d15d66624ac1a0b8f5f971), [`e4d366a`](https://github.com/mastra-ai/mastra/commit/e4d366aeb500371dd4210d6aa8361a4c21d87034), [`a4f010b`](https://github.com/mastra-ai/mastra/commit/a4f010b22e4355a5fdee70a1fe0f6e4a692cc29e), [`73b0bb3`](https://github.com/mastra-ai/mastra/commit/73b0bb394dba7c9482eb467a97ab283dbc0ef4db), [`5627a8c`](https://github.com/mastra-ai/mastra/commit/5627a8c6dc11fe3711b3fa7a6ffd6eb34100a306), [`3ff45d1`](https://github.com/mastra-ai/mastra/commit/3ff45d10e0c80c5335a957ab563da72feb623520), [`251df45`](https://github.com/mastra-ai/mastra/commit/251df4531407dfa46d805feb40ff3fb49769f455), [`f894d14`](https://github.com/mastra-ai/mastra/commit/f894d148946629af7b1f452d65a9cf864cec3765), [`c2b9547`](https://github.com/mastra-ai/mastra/commit/c2b9547bf435f56339f23625a743b2147ab1c7a6), [`580b592`](https://github.com/mastra-ai/mastra/commit/580b5927afc82fe460dfdf9a38a902511b6b7e7f), [`58e3931`](https://github.com/mastra-ai/mastra/commit/58e3931af9baa5921688566210f00fb0c10479fa), [`08bb631`](https://github.com/mastra-ai/mastra/commit/08bb631ae2b14684b2678e3549d0b399a6f0561e), [`4fba91b`](https://github.com/mastra-ai/mastra/commit/4fba91bec7c95911dc28e369437596b152b04cd0), [`12b0cc4`](https://github.com/mastra-ai/mastra/commit/12b0cc4077d886b1a552637dedb70a7ade93528c)]:
  - @mastra/core@1.0.0-beta.20

## 1.0.0-beta.3

### Patch Changes

package/dist/docs/README.md
ADDED
@@ -0,0 +1,33 @@

# @mastra/pinecone Documentation

> Embedded documentation for coding agents

## Quick Start

```bash
# Read the skill overview
cat docs/SKILL.md

# Get the source map
cat docs/SOURCE_MAP.json

# Read topic documentation
cat docs/<topic>/01-overview.md
```

## Structure

```
docs/
├── SKILL.md          # Entry point
├── README.md         # This file
├── SOURCE_MAP.json   # Export index
├── memory/ (2 files)
├── rag/ (2 files)
├── vectors/ (1 file)
```

## Version

Package: @mastra/pinecone
Version: 1.0.0-beta.4

package/dist/docs/SKILL.md
ADDED
@@ -0,0 +1,34 @@

---
name: mastra-pinecone-docs
description: Documentation for @mastra/pinecone. Includes links to type definitions and readable implementation code in dist/.
---

# @mastra/pinecone Documentation

> **Version**: 1.0.0-beta.4
> **Package**: @mastra/pinecone

## Quick Navigation

Use SOURCE_MAP.json to find any export:

```bash
cat docs/SOURCE_MAP.json
```

Each export maps to:

- **types**: `.d.ts` file with JSDoc and API signatures
- **implementation**: `.js` chunk file with readable source
- **docs**: Conceptual documentation in `docs/`
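
SOURCE_MAP.json itself isn't reproduced in this diff, so as a purely hypothetical sketch (the paths and shape below are assumptions, not taken from the real file), an entry might look like:

```typescript
// Hypothetical SOURCE_MAP.json entry, written as a TypeScript literal so the
// fields can be annotated. None of these paths are confirmed by the diff.
const entry = {
  PineconeVector: {
    types: "dist/index.d.ts", // .d.ts with JSDoc and API signatures
    implementation: "dist/index.js", // readable implementation chunk
    docs: ["vectors/01-reference.md"], // conceptual documentation in docs/
  },
};
```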

## Top Exports

See SOURCE_MAP.json for the complete list.

## Available Topics

- [Memory](memory/) - 2 file(s)
- [Rag](rag/) - 2 file(s)
- [Vectors](vectors/) - 1 file(s)

package/dist/docs/memory/01-storage.md
ADDED
@@ -0,0 +1,181 @@

> Configure storage for Mastra

# Storage

For Mastra to remember previous interactions, you must configure a storage adapter. Mastra is designed to work with your preferred database provider - choose from the [supported providers](#supported-providers) and pass it to your Mastra instance.

```typescript
import { Mastra } from "@mastra/core";
import { LibSQLStore } from "@mastra/libsql";

const mastra = new Mastra({
  storage: new LibSQLStore({
    id: 'mastra-storage',
    url: "file:./mastra.db",
  }),
});
```

On first interaction, Mastra automatically creates the necessary tables following the [core schema](https://mastra.ai/reference/v1/storage/overview#core-schema). This includes tables for messages, threads, resources, workflows, traces, and evaluation datasets.

## Supported Providers

Each provider page includes installation instructions, configuration parameters, and usage examples:

- [libSQL Storage](https://mastra.ai/reference/v1/storage/libsql)
- [PostgreSQL Storage](https://mastra.ai/reference/v1/storage/postgresql)
- [MongoDB Storage](https://mastra.ai/reference/v1/storage/mongodb)
- [Upstash Storage](https://mastra.ai/reference/v1/storage/upstash)
- [Cloudflare D1](https://mastra.ai/reference/v1/storage/cloudflare-d1)
- [Cloudflare Durable Objects](https://mastra.ai/reference/v1/storage/cloudflare)
- [Convex](https://mastra.ai/reference/v1/storage/convex)
- [DynamoDB](https://mastra.ai/reference/v1/storage/dynamodb)
- [LanceDB](https://mastra.ai/reference/v1/storage/lance)
- [Microsoft SQL Server](https://mastra.ai/reference/v1/storage/mssql)

> **Note:**
> libSQL is the easiest way to get started because it doesn’t require running a separate database server.

## Configuration Scope

You can configure storage at two different scopes:

### Instance-level storage

Add storage to your Mastra instance so all agents share the same memory provider:

```typescript
import { Mastra } from "@mastra/core";
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { PostgresStore } from "@mastra/pg";

const mastra = new Mastra({
  storage: new PostgresStore({
    id: 'mastra-storage',
    connectionString: process.env.DATABASE_URL,
  }),
});

// All agents automatically use this storage
const agent1 = new Agent({ memory: new Memory() });
const agent2 = new Agent({ memory: new Memory() });
```

### Agent-level storage

Add storage to a specific agent when you need data boundaries or compliance requirements:

```typescript
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { PostgresStore } from "@mastra/pg";

const agent = new Agent({
  memory: new Memory({
    storage: new PostgresStore({
      id: 'agent-storage',
      connectionString: process.env.AGENT_DATABASE_URL,
    }),
  }),
});
```

This is useful when different agents need to store data in separate databases for security, compliance, or organizational reasons.

## Threads and Resources

Mastra organizes memory into threads using two identifiers:

- **Thread**: A conversation session containing a sequence of messages (e.g., `convo_123`)
- **Resource**: An identifier for the entity the thread belongs to, typically a user (e.g., `user_123`)

Both identifiers are required for agents to store and recall information:

```typescript
const stream = await agent.stream("message for agent", {
  memory: {
    thread: "convo_123",
    resource: "user_123",
  },
});
```

> **Note:**
> [Studio](https://mastra.ai/docs/v1/getting-started/studio) automatically generates a thread and resource ID for you. Remember to pass these explicitly when calling `stream` or `generate` yourself.

### Thread title generation

Mastra can automatically generate descriptive thread titles based on the user's first message.

Use this option when implementing a ChatGPT-style chat interface to render a title alongside each thread in the conversation list (for example, in a sidebar) derived from the thread’s initial user message.

```typescript
export const testAgent = new Agent({
  memory: new Memory({
    options: {
      generateTitle: true,
    },
  }),
});
```

Title generation runs asynchronously after the agent responds and does not affect response time.

To optimize cost or behavior, provide a smaller `model` and custom `instructions`:

```typescript
export const testAgent = new Agent({
  memory: new Memory({
    options: {
      threads: {
        generateTitle: {
          model: "openai/gpt-4o-mini",
          instructions: "Generate a concise title based on the user's first message",
        },
      },
    },
  }),
});
```

## Semantic recall

Semantic recall uses vector embeddings to retrieve relevant past messages based on meaning rather than recency. This requires a vector database instance, which can be configured at the instance or agent level.

The vector database doesn't have to be the same as your storage provider. For example, you might use PostgreSQL for storage and Pinecone for vectors:

```typescript
import { Mastra } from "@mastra/core";
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { PostgresStore } from "@mastra/pg";
import { PineconeVector } from "@mastra/pinecone";

// Instance-level storage configuration
const mastra = new Mastra({
  storage: new PostgresStore({
    id: 'mastra-storage',
    connectionString: process.env.DATABASE_URL,
  }),
});

// Agent-level vector configuration
const agent = new Agent({
  memory: new Memory({
    vector: new PineconeVector({
      id: 'agent-vector',
      apiKey: process.env.PINECONE_API_KEY,
      environment: process.env.PINECONE_ENVIRONMENT,
      indexName: 'agent-embeddings',
    }),
    options: {
      semanticRecall: {
        topK: 5,
        messageRange: 2,
      },
    },
  }),
});
```

We support all popular vector providers including [Pinecone](https://mastra.ai/reference/v1/vectors/pinecone), [Chroma](https://mastra.ai/reference/v1/vectors/chroma), [Qdrant](https://mastra.ai/reference/v1/vectors/qdrant), and many more.

For more information on configuring semantic recall, see the [Semantic Recall](./semantic-recall) documentation.

package/dist/docs/memory/02-memory-processors.md
ADDED
@@ -0,0 +1,319 @@

> Learn how to use memory processors in Mastra to filter, trim, and transform messages before they reach the language model

# Memory Processors

Memory processors transform and filter messages as they pass through an agent with memory enabled. They manage context window limits, remove unnecessary content, and optimize the information sent to the language model.

When memory is enabled on an agent, Mastra adds memory processors to the agent's processor pipeline. These processors retrieve message history, working memory, and semantically relevant messages, then persist new messages after the model responds.

Memory processors are [processors](https://mastra.ai/docs/v1/agents/processors) that operate specifically on memory-related messages and state.

## Built-in Memory Processors

Mastra automatically adds these processors when memory is enabled:

### MessageHistory

Retrieves message history and persists new messages.

**When you configure:**

```typescript
memory: new Memory({
  lastMessages: 10,
});
```

**Mastra internally:**

1. Creates a `MessageHistory` processor with `limit: 10`
2. Adds it to the agent's input processors (runs before the LLM)
3. Adds it to the agent's output processors (runs after the LLM)

**What it does:**

- **Input**: Fetches the last 10 messages from storage and prepends them to the conversation
- **Output**: Persists new messages to storage after the model responds

**Example:**

```typescript
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";
import { openai } from "@ai-sdk/openai";

const agent = new Agent({
  id: "test-agent",
  name: "Test Agent",
  instructions: "You are a helpful assistant",
  model: 'openai/gpt-4o',
  memory: new Memory({
    storage: new LibSQLStore({
      id: "memory-store",
      url: "file:memory.db",
    }),
    lastMessages: 10, // MessageHistory processor automatically added
  }),
});
```

### SemanticRecall

Retrieves semantically relevant messages based on the current input and creates embeddings for new messages.

**When you configure:**

```typescript
memory: new Memory({
  semanticRecall: { enabled: true },
  vector: myVectorStore,
  embedder: myEmbedder,
});
```

**Mastra internally:**

1. Creates a `SemanticRecall` processor
2. Adds it to the agent's input processors (runs before the LLM)
3. Adds it to the agent's output processors (runs after the LLM)
4. Requires both a vector store and embedder to be configured

**What it does:**

- **Input**: Performs vector similarity search to find relevant past messages and prepends them to the conversation
- **Output**: Creates embeddings for new messages and stores them in the vector store for future retrieval

**Example:**

```typescript
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";
import { PineconeVector } from "@mastra/pinecone";
import { OpenAIEmbedder } from "@mastra/openai";
import { openai } from "@ai-sdk/openai";

const agent = new Agent({
  name: "semantic-agent",
  instructions: "You are a helpful assistant with semantic memory",
  model: 'openai/gpt-4o',
  memory: new Memory({
    storage: new LibSQLStore({
      id: "memory-store",
      url: "file:memory.db",
    }),
    vector: new PineconeVector({
      id: "memory-vector",
      apiKey: process.env.PINECONE_API_KEY!,
      environment: "us-east-1",
    }),
    embedder: new OpenAIEmbedder({
      model: "text-embedding-3-small",
      apiKey: process.env.OPENAI_API_KEY!,
    }),
    semanticRecall: { enabled: true }, // SemanticRecall processor automatically added
  }),
});
```

### WorkingMemory

Manages working memory state across conversations.

**When you configure:**

```typescript
memory: new Memory({
  workingMemory: { enabled: true },
});
```

**Mastra internally:**

1. Creates a `WorkingMemory` processor
2. Adds it to the agent's input processors (runs before the LLM)
3. Requires a storage adapter to be configured

**What it does:**

- **Input**: Retrieves working memory state for the current thread and prepends it to the conversation
- **Output**: No output processing

**Example:**

```typescript
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";
import { openai } from "@ai-sdk/openai";

const agent = new Agent({
  name: "working-memory-agent",
  instructions: "You are an assistant with working memory",
  model: 'openai/gpt-4o',
  memory: new Memory({
    storage: new LibSQLStore({
      id: "memory-store",
      url: "file:memory.db",
    }),
    workingMemory: { enabled: true }, // WorkingMemory processor automatically added
  }),
});
```

## Manual Control and Deduplication

If you manually add a memory processor to `inputProcessors` or `outputProcessors`, Mastra will **not** automatically add it. This gives you full control over processor ordering:

```typescript
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { MessageHistory } from "@mastra/memory/processors";
import { TokenLimiter } from "@mastra/core/processors";
import { LibSQLStore } from "@mastra/libsql";
import { openai } from "@ai-sdk/openai";

// Custom MessageHistory with different configuration
const customMessageHistory = new MessageHistory({
  storage: new LibSQLStore({ id: "memory-store", url: "file:memory.db" }),
  lastMessages: 20,
});

const agent = new Agent({
  name: "custom-memory-agent",
  instructions: "You are a helpful assistant",
  model: 'openai/gpt-4o',
  memory: new Memory({
    storage: new LibSQLStore({ id: "memory-store", url: "file:memory.db" }),
    lastMessages: 10, // This would normally add MessageHistory(10)
  }),
  inputProcessors: [
    customMessageHistory, // Your custom one is used instead
    new TokenLimiter({ limit: 4000 }), // Runs after your custom MessageHistory
  ],
});
```

## Processor Execution Order

Understanding the execution order is important when combining guardrails with memory:

### Input Processors

```
[Memory Processors] → [Your inputProcessors]
```

1. **Memory processors run FIRST**: `WorkingMemory`, `MessageHistory`, `SemanticRecall`
2. **Your input processors run AFTER**: guardrails, filters, validators

This means memory loads message history before your processors can validate or filter the input.

### Output Processors

```
[Your outputProcessors] → [Memory Processors]
```

1. **Your output processors run FIRST**: guardrails, filters, validators
2. **Memory processors run AFTER**: `SemanticRecall` (embeddings), `MessageHistory` (persistence)

This ordering is designed to be **safe by default**: if your output guardrail calls `abort()`, the memory processors never run and **no messages are saved**.

## Guardrails and Memory

The default execution order provides safe guardrail behavior:

### Output guardrails (recommended)

Output guardrails run **before** memory processors save messages. If a guardrail aborts:

- The tripwire is triggered
- Memory processors are skipped
- **No messages are persisted to storage**

```typescript
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { openai } from "@ai-sdk/openai";

// Output guardrail that blocks inappropriate content
const contentBlocker = {
  id: "content-blocker",
  processOutputResult: async ({ messages, abort }) => {
    const hasInappropriateContent = messages.some((msg) =>
      containsBadContent(msg)
    );
    if (hasInappropriateContent) {
      abort("Content blocked by guardrail");
    }
    return messages;
  },
};

const agent = new Agent({
  name: "safe-agent",
  instructions: "You are a helpful assistant",
  model: 'openai/gpt-4o',
  memory: new Memory({ lastMessages: 10 }),
  // Your guardrail runs BEFORE memory saves
  outputProcessors: [contentBlocker],
});

// If the guardrail aborts, nothing is saved to memory
const result = await agent.generate("Hello");
if (result.tripwire) {
  console.log("Blocked:", result.tripwireReason);
  // Memory is empty - no messages were persisted
}
```

### Input guardrails

Input guardrails run **after** memory processors load history. If a guardrail aborts:

- The tripwire is triggered
- The LLM is never called
- Output processors (including memory persistence) are skipped
- **No messages are persisted to storage**

```typescript
// Input guardrail that validates user input
const inputValidator = {
  id: "input-validator",
  processInput: async ({ messages, abort }) => {
    const lastUserMessage = messages.findLast((m) => m.role === "user");
    if (isInvalidInput(lastUserMessage)) {
      abort("Invalid input detected");
    }
    return messages;
  },
};

const agent = new Agent({
  name: "validated-agent",
  instructions: "You are a helpful assistant",
  model: 'openai/gpt-4o',
  memory: new Memory({ lastMessages: 10 }),
  // Your guardrail runs AFTER memory loads history
  inputProcessors: [inputValidator],
});
```

### Summary

| Guardrail Type | When it runs               | If it aborts                  |
| -------------- | -------------------------- | ----------------------------- |
| Input          | After memory loads history | LLM not called, nothing saved |
| Output         | Before memory saves        | Nothing saved to storage      |

Both scenarios are safe - guardrails prevent inappropriate content from being persisted to memory.

## Related documentation

- [Processors](https://mastra.ai/docs/v1/agents/processors) - General processor concepts and custom processor creation
- [Guardrails](https://mastra.ai/docs/v1/agents/guardrails) - Security and validation processors
- [Memory Overview](https://mastra.ai/docs/v1/memory/overview) - Memory types and configuration

When creating custom processors, avoid mutating the input `messages` array or its objects directly.
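
As a minimal sketch of that non-mutating pattern (following the processor shape from the guardrail examples above; `redactSecrets` is an illustrative helper, not a Mastra API):

```typescript
// Rewrite message content by copying, never by editing `messages` in place.
const redactSecrets = (text: string) =>
  text.replace(/sk-[A-Za-z0-9]+/g, "[redacted]");

const redactingProcessor = {
  id: "redacting-processor",
  processInput: async ({ messages }: { messages: any[] }) => {
    // Map to a new array and spread any message object you change.
    return messages.map((msg) =>
      typeof msg.content === "string"
        ? { ...msg, content: redactSecrets(msg.content) }
        : msg,
    );
  },
};
```

Returning fresh objects keeps the original history intact for the memory processors that run alongside your own.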
|