@mastra/pinecone 1.0.0 → 1.0.1-alpha.0

This diff shows the changes between publicly released versions of the package, as they appear in their respective public registries. It is provided for informational purposes only.
package/CHANGELOG.md CHANGED
@@ -1,5 +1,14 @@
  # @mastra/pinecone
 
+ ## 1.0.1-alpha.0
+
+ ### Patch Changes
+
+ - Add a clear runtime error when `queryVector` is omitted for vector stores that require a vector for queries. Previously, omitting `queryVector` would produce confusing SDK-level errors; now each store throws a structured `MastraError` with `ErrorCategory.USER` explaining that metadata-only queries are not supported by that backend. ([#13286](https://github.com/mastra-ai/mastra/pull/13286))
+
+ - Updated dependencies [[`df170fd`](https://github.com/mastra-ai/mastra/commit/df170fd139b55f845bfd2de8488b16435bd3d0da), [`ae55343`](https://github.com/mastra-ai/mastra/commit/ae5534397fc006fd6eef3e4f80c235bcdc9289ef), [`c290cec`](https://github.com/mastra-ai/mastra/commit/c290cec5bf9107225de42942b56b487107aa9dce), [`f03e794`](https://github.com/mastra-ai/mastra/commit/f03e794630f812b56e95aad54f7b1993dc003add), [`aa4a5ae`](https://github.com/mastra-ai/mastra/commit/aa4a5aedb80d8d6837bab8cbb2e301215d1ba3e9), [`de3f584`](https://github.com/mastra-ai/mastra/commit/de3f58408752a8d80a295275c7f23fc306cf7f4f), [`d3fb010`](https://github.com/mastra-ai/mastra/commit/d3fb010c98f575f1c0614452667396e2653815f6), [`702ee1c`](https://github.com/mastra-ai/mastra/commit/702ee1c41be67cc532b4dbe89bcb62143508f6f0), [`f495051`](https://github.com/mastra-ai/mastra/commit/f495051eb6496a720f637fc85b6d69941c12554c), [`e622f1d`](https://github.com/mastra-ai/mastra/commit/e622f1d3ab346a8e6aca6d1fe2eac99bd961e50b), [`861f111`](https://github.com/mastra-ai/mastra/commit/861f11189211b20ddb70d8df81a6b901fc78d11e), [`00f43e8`](https://github.com/mastra-ai/mastra/commit/00f43e8e97a80c82b27d5bd30494f10a715a1df9), [`1b6f651`](https://github.com/mastra-ai/mastra/commit/1b6f65127d4a0d6c38d0a1055cb84527db529d6b), [`96a1702`](https://github.com/mastra-ai/mastra/commit/96a1702ce362c50dda20c8b4a228b4ad1a36a17a), [`cb9f921`](https://github.com/mastra-ai/mastra/commit/cb9f921320913975657abb1404855d8c510f7ac5), [`114e7c1`](https://github.com/mastra-ai/mastra/commit/114e7c146ac682925f0fb37376c1be70e5d6e6e5), [`1b6f651`](https://github.com/mastra-ai/mastra/commit/1b6f65127d4a0d6c38d0a1055cb84527db529d6b), [`72df4a8`](https://github.com/mastra-ai/mastra/commit/72df4a8f9bf1a20cfd3d9006a4fdb597ad56d10a)]:
+ - @mastra/core@1.8.0-alpha.0
+
  ## 1.0.0
 
  ### Major Changes
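The `queryVector` patch note above can be sketched as a guard clause. This is a hypothetical mock, not the actual @mastra/pinecone source; `MastraError` and `ErrorCategory` here are simplified stand-ins for the real `@mastra/core` types.

```typescript
// Hypothetical mock of the behavior in the patch note -- NOT the actual
// @mastra/pinecone implementation. MastraError and ErrorCategory are
// simplified stand-ins for the real @mastra/core types.
const ErrorCategory = { USER: 'USER' }

class MastraError extends Error {
  category: string
  constructor(message: string, category: string) {
    super(message)
    this.category = category
  }
}

interface QueryArgs {
  indexName: string
  queryVector?: number[]
  topK?: number
}

// Pinecone requires a vector for every query, so a metadata-only query
// fails fast with a structured user-facing error instead of a confusing
// SDK-level one.
function validateQuery(args: QueryArgs): number[] {
  if (!args.queryVector) {
    throw new MastraError(
      `queryVector is required: metadata-only queries are not supported for index "${args.indexName}"`,
      ErrorCategory.USER,
    )
  }
  return args.queryVector
}
```

On the happy path the vector passes through unchanged; the real store forwards it to Pinecone's query API.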
@@ -1,34 +1,29 @@
  ---
- name: mastra-pinecone-docs
- description: Documentation for @mastra/pinecone. Includes links to type definitions and readable implementation code in dist/.
+ name: mastra-pinecone
+ description: Documentation for @mastra/pinecone. Use when working with @mastra/pinecone APIs, configuration, or implementation.
+ metadata:
+   package: "@mastra/pinecone"
+   version: "1.0.1-alpha.0"
  ---
 
- # @mastra/pinecone Documentation
+ ## When to use
 
- > **Version**: 1.0.0
- > **Package**: @mastra/pinecone
+ Use this skill whenever you are working with @mastra/pinecone to obtain domain-specific knowledge.
 
- ## Quick Navigation
+ ## How to use
 
- Use SOURCE_MAP.json to find any export:
+ Read the individual reference documents for detailed explanations and code examples.
 
- ```bash
- cat docs/SOURCE_MAP.json
- ```
+ ### Docs
 
- Each export maps to:
- - **types**: `.d.ts` file with JSDoc and API signatures
- - **implementation**: `.js` chunk file with readable source
- - **docs**: Conceptual documentation in `docs/`
+ - [Memory Processors](references/docs-memory-memory-processors.md) - Learn how to use memory processors in Mastra to filter, trim, and transform messages before they're sent to the language model to manage context window limits.
+ - [Storage](references/docs-memory-storage.md) - Configure storage for Mastra's memory system to persist conversations, workflows, and traces.
+ - [Retrieval, Semantic Search, Reranking](references/docs-rag-retrieval.md) - Guide on retrieval processes in Mastra's RAG systems, including semantic search, filtering, and re-ranking.
+ - [Storing Embeddings in A Vector Database](references/docs-rag-vector-databases.md) - Guide on vector storage options in Mastra, including embedded and dedicated vector databases for similarity search.
 
- ## Top Exports
+ ### Reference
 
+ - [Reference: Pinecone Vector Store](references/reference-vectors-pinecone.md) - Documentation for the PineconeVector class in Mastra, which provides an interface to Pinecone's vector database.
 
 
- See SOURCE_MAP.json for the complete list.
-
- ## Available Topics
-
- - [Memory](memory/) - 2 file(s)
- - [Rag](rag/) - 2 file(s)
- - [Vectors](vectors/) - 1 file(s)
+ Read [assets/SOURCE_MAP.json](assets/SOURCE_MAP.json) for source code references.
@@ -1,5 +1,5 @@
  {
- "version": "1.0.0",
+ "version": "1.0.1-alpha.0",
  "package": "@mastra/pinecone",
  "exports": {},
  "modules": {}
@@ -1,12 +1,10 @@
- > Learn how to use memory processors in Mastra to filter, trim, and transform messages before they
-
  # Memory Processors
 
  Memory processors transform and filter messages as they pass through an agent with memory enabled. They manage context window limits, remove unnecessary content, and optimize the information sent to the language model.
 
  When memory is enabled on an agent, Mastra adds memory processors to the agent's processor pipeline. These processors retrieve message history, working memory, and semantically relevant messages, then persist new messages after the model responds.
 
- Memory processors are [processors](https://mastra.ai/docs/v1/agents/processors) that operate specifically on memory-related messages and state.
+ Memory processors are [processors](https://mastra.ai/docs/agents/processors) that operate specifically on memory-related messages and state.
 
  ## Built-in Memory Processors
 
@@ -21,7 +19,7 @@ Retrieves message history and persists new messages.
  ```typescript
  memory: new Memory({
    lastMessages: 10,
- });
+ })
  ```
 
  **Mastra internally:**
@@ -38,24 +36,24 @@ memory: new Memory({
  **Example:**
 
  ```typescript
- import { Agent } from "@mastra/core/agent";
- import { Memory } from "@mastra/memory";
- import { LibSQLStore } from "@mastra/libsql";
- import { openai } from "@ai-sdk/openai";
+ import { Agent } from '@mastra/core/agent'
+ import { Memory } from '@mastra/memory'
+ import { LibSQLStore } from '@mastra/libsql'
+ import { openai } from '@ai-sdk/openai'
 
  const agent = new Agent({
-   id: "test-agent",
-   name: "Test Agent",
-   instructions: "You are a helpful assistant",
+   id: 'test-agent',
+   name: 'Test Agent',
+   instructions: 'You are a helpful assistant',
    model: 'openai/gpt-4o',
    memory: new Memory({
      storage: new LibSQLStore({
-       id: "memory-store",
-       url: "file:memory.db",
+       id: 'memory-store',
+       url: 'file:memory.db',
      }),
      lastMessages: 10, // MessageHistory processor automatically added
    }),
- });
+ })
  ```
 
  ### SemanticRecall
@@ -69,7 +67,7 @@ memory: new Memory({
    semanticRecall: { enabled: true },
    vector: myVectorStore,
    embedder: myEmbedder,
- });
+ })
  ```
 
  **Mastra internally:**
@@ -87,33 +85,33 @@ memory: new Memory({
  **Example:**
 
  ```typescript
- import { Agent } from "@mastra/core/agent";
- import { Memory } from "@mastra/memory";
- import { LibSQLStore } from "@mastra/libsql";
- import { PineconeVector } from "@mastra/pinecone";
- import { OpenAIEmbedder } from "@mastra/openai";
- import { openai } from "@ai-sdk/openai";
+ import { Agent } from '@mastra/core/agent'
+ import { Memory } from '@mastra/memory'
+ import { LibSQLStore } from '@mastra/libsql'
+ import { PineconeVector } from '@mastra/pinecone'
+ import { OpenAIEmbedder } from '@mastra/openai'
+ import { openai } from '@ai-sdk/openai'
 
  const agent = new Agent({
-   name: "semantic-agent",
-   instructions: "You are a helpful assistant with semantic memory",
+   name: 'semantic-agent',
+   instructions: 'You are a helpful assistant with semantic memory',
    model: 'openai/gpt-4o',
    memory: new Memory({
      storage: new LibSQLStore({
-       id: "memory-store",
-       url: "file:memory.db",
+       id: 'memory-store',
+       url: 'file:memory.db',
      }),
      vector: new PineconeVector({
-       id: "memory-vector",
+       id: 'memory-vector',
        apiKey: process.env.PINECONE_API_KEY!,
      }),
      embedder: new OpenAIEmbedder({
-       model: "text-embedding-3-small",
+       model: 'text-embedding-3-small',
        apiKey: process.env.OPENAI_API_KEY!,
      }),
      semanticRecall: { enabled: true }, // SemanticRecall processor automatically added
    }),
- });
+ })
  ```
 
  ### WorkingMemory
@@ -125,7 +123,7 @@ Manages working memory state across conversations.
  ```typescript
  memory: new Memory({
    workingMemory: { enabled: true },
- });
+ })
  ```
 
  **Mastra internally:**
@@ -142,23 +140,23 @@ memory: new Memory({
  **Example:**
 
  ```typescript
- import { Agent } from "@mastra/core/agent";
- import { Memory } from "@mastra/memory";
- import { LibSQLStore } from "@mastra/libsql";
- import { openai } from "@ai-sdk/openai";
+ import { Agent } from '@mastra/core/agent'
+ import { Memory } from '@mastra/memory'
+ import { LibSQLStore } from '@mastra/libsql'
+ import { openai } from '@ai-sdk/openai'
 
  const agent = new Agent({
-   name: "working-memory-agent",
-   instructions: "You are an assistant with working memory",
+   name: 'working-memory-agent',
+   instructions: 'You are an assistant with working memory',
    model: 'openai/gpt-4o',
    memory: new Memory({
      storage: new LibSQLStore({
-       id: "memory-store",
-       url: "file:memory.db",
+       id: 'memory-store',
+       url: 'file:memory.db',
      }),
      workingMemory: { enabled: true }, // WorkingMemory processor automatically added
    }),
- });
+ })
  ```
 
  ## Manual Control and Deduplication
@@ -166,32 +164,32 @@ const agent = new Agent({
  If you manually add a memory processor to `inputProcessors` or `outputProcessors`, Mastra will **not** automatically add it. This gives you full control over processor ordering:
 
  ```typescript
- import { Agent } from "@mastra/core/agent";
- import { Memory } from "@mastra/memory";
- import { MessageHistory } from "@mastra/core/processors";
- import { TokenLimiter } from "@mastra/core/processors";
- import { LibSQLStore } from "@mastra/libsql";
- import { openai } from "@ai-sdk/openai";
+ import { Agent } from '@mastra/core/agent'
+ import { Memory } from '@mastra/memory'
+ import { MessageHistory } from '@mastra/core/processors'
+ import { TokenLimiter } from '@mastra/core/processors'
+ import { LibSQLStore } from '@mastra/libsql'
+ import { openai } from '@ai-sdk/openai'
 
  // Custom MessageHistory with different configuration
  const customMessageHistory = new MessageHistory({
-   storage: new LibSQLStore({ id: "memory-store", url: "file:memory.db" }),
+   storage: new LibSQLStore({ id: 'memory-store', url: 'file:memory.db' }),
    lastMessages: 20,
- });
+ })
 
  const agent = new Agent({
-   name: "custom-memory-agent",
-   instructions: "You are a helpful assistant",
+   name: 'custom-memory-agent',
+   instructions: 'You are a helpful assistant',
    model: 'openai/gpt-4o',
    memory: new Memory({
-     storage: new LibSQLStore({ id: "memory-store", url: "file:memory.db" }),
+     storage: new LibSQLStore({ id: 'memory-store', url: 'file:memory.db' }),
      lastMessages: 10, // This would normally add MessageHistory(10)
    }),
    inputProcessors: [
      customMessageHistory, // Your custom one is used instead
      new TokenLimiter({ limit: 4000 }), // Runs after your custom MessageHistory
    ],
- });
+ })
  ```
 
  ## Processor Execution Order
@@ -200,7 +198,7 @@ Understanding the execution order is important when combining guardrails with me
 
  ### Input Processors
 
- ```
+ ```text
  [Memory Processors] → [Your inputProcessors]
  ```
 
@@ -211,7 +209,7 @@ This means memory loads message history before your processors can validate or f
 
  ### Output Processors
 
- ```
+ ```text
  [Your outputProcessors] → [Memory Processors]
  ```
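The input and output orderings described in this section can be sketched with a toy pipeline. This is a hypothetical illustration of the documented order, not Mastra's actual pipeline code; the processor ids are invented.

```typescript
// Toy pipeline illustrating the documented ordering -- hypothetical, not
// Mastra's actual implementation. Processor ids are invented.
type Processor = { id: string; process: (messages: string[]) => string[] }

const order: string[] = []

const track = (id: string): Processor => ({
  id,
  process: messages => {
    order.push(id) // record execution order
    return messages
  },
})

// Input side: [Memory Processors] -> [Your inputProcessors]
function runInput(memory: Processor[], yours: Processor[], messages: string[]) {
  return [...memory, ...yours].reduce((msgs, p) => p.process(msgs), messages)
}

// Output side: [Your outputProcessors] -> [Memory Processors]
function runOutput(yours: Processor[], memory: Processor[], messages: string[]) {
  return [...yours, ...memory].reduce((msgs, p) => p.process(msgs), messages)
}

runInput([track('message-history')], [track('token-limiter')], ['hi'])
runOutput([track('guardrail')], [track('memory-save')], ['ok'])
// order: message-history, token-limiter, guardrail, memory-save
```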
 
@@ -233,37 +231,35 @@ Output guardrails run **before** memory processors save messages. If a guardrail
  - **No messages are persisted to storage**
 
  ```typescript
- import { Agent } from "@mastra/core/agent";
- import { Memory } from "@mastra/memory";
- import { openai } from "@ai-sdk/openai";
+ import { Agent } from '@mastra/core/agent'
+ import { Memory } from '@mastra/memory'
+ import { openai } from '@ai-sdk/openai'
 
  // Output guardrail that blocks inappropriate content
  const contentBlocker = {
-   id: "content-blocker",
+   id: 'content-blocker',
    processOutputResult: async ({ messages, abort }) => {
-     const hasInappropriateContent = messages.some((msg) =>
-       containsBadContent(msg)
-     );
+     const hasInappropriateContent = messages.some(msg => containsBadContent(msg))
      if (hasInappropriateContent) {
-       abort("Content blocked by guardrail");
+       abort('Content blocked by guardrail')
      }
-     return messages;
+     return messages
    },
- };
+ }
 
  const agent = new Agent({
-   name: "safe-agent",
-   instructions: "You are a helpful assistant",
+   name: 'safe-agent',
+   instructions: 'You are a helpful assistant',
    model: 'openai/gpt-4o',
    memory: new Memory({ lastMessages: 10 }),
    // Your guardrail runs BEFORE memory saves
    outputProcessors: [contentBlocker],
- });
+ })
 
  // If the guardrail aborts, nothing is saved to memory
- const result = await agent.generate("Hello");
+ const result = await agent.generate('Hello')
  if (result.tripwire) {
-   console.log("Blocked:", result.tripwire.reason);
+   console.log('Blocked:', result.tripwire.reason)
    // Memory is empty - no messages were persisted
  }
  ```
@@ -280,39 +276,39 @@ Input guardrails run **after** memory processors load history. If a guardrail ab
  ```typescript
  // Input guardrail that validates user input
  const inputValidator = {
-   id: "input-validator",
+   id: 'input-validator',
    processInput: async ({ messages, abort }) => {
-     const lastUserMessage = messages.findLast((m) => m.role === "user");
+     const lastUserMessage = messages.findLast(m => m.role === 'user')
      if (isInvalidInput(lastUserMessage)) {
-       abort("Invalid input detected");
+       abort('Invalid input detected')
      }
-     return messages;
+     return messages
    },
- };
+ }
 
  const agent = new Agent({
-   name: "validated-agent",
-   instructions: "You are a helpful assistant",
+   name: 'validated-agent',
+   instructions: 'You are a helpful assistant',
    model: 'openai/gpt-4o',
    memory: new Memory({ lastMessages: 10 }),
    // Your guardrail runs AFTER memory loads history
    inputProcessors: [inputValidator],
- });
+ })
  ```
 
  ### Summary
 
- | Guardrail Type | When it runs | If it aborts |
- | -------------- | ------------ | ------------ |
- | Input | After memory loads history | LLM not called, nothing saved |
- | Output | Before memory saves | Nothing saved to storage |
+ | Guardrail Type | When it runs               | If it aborts                  |
+ | -------------- | -------------------------- | ----------------------------- |
+ | Input          | After memory loads history | LLM not called, nothing saved |
+ | Output         | Before memory saves        | Nothing saved to storage      |
 
  Both scenarios are safe - guardrails prevent inappropriate content from being persisted to memory.
 
  ## Related documentation
 
- - [Processors](https://mastra.ai/docs/v1/agents/processors) - General processor concepts and custom processor creation
- - [Guardrails](https://mastra.ai/docs/v1/agents/guardrails) - Security and validation processors
- - [Memory Overview](https://mastra.ai/docs/v1/memory/overview) - Memory types and configuration
+ - [Processors](https://mastra.ai/docs/agents/processors) - General processor concepts and custom processor creation
+ - [Guardrails](https://mastra.ai/docs/agents/guardrails) - Security and validation processors
+ - [Memory Overview](https://mastra.ai/docs/memory/overview) - Memory types and configuration
 
  When creating custom processors, avoid mutating the input `messages` array or its objects directly.
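The non-mutation advice above can be illustrated with a minimal sketch (hypothetical message shape, not Mastra's real types): return new arrays and objects instead of editing the input in place.

```typescript
// Hypothetical message shape, not Mastra's real types.
type Msg = { role: string; content: string }

// A processor step that trims message content WITHOUT mutating its input:
// map to fresh objects instead of assigning to msg.content in place.
function trimContents(messages: Msg[]): Msg[] {
  return messages.map(msg => ({ ...msg, content: msg.content.trim() }))
}

const input: Msg[] = [{ role: 'user', content: '  hello  ' }]
const output = trimContents(input)
// input[0].content is still '  hello  '; output[0].content is 'hello'
```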
@@ -0,0 +1,261 @@
+ # Storage
+
+ For agents to remember previous interactions, Mastra needs a database. Use a storage adapter for one of the [supported databases](#supported-providers) and pass it to your Mastra instance.
+
+ ```typescript
+ import { Mastra } from '@mastra/core'
+ import { LibSQLStore } from '@mastra/libsql'
+
+ export const mastra = new Mastra({
+   storage: new LibSQLStore({
+     id: 'mastra-storage',
+     url: 'file:./mastra.db',
+   }),
+ })
+ ```
+
+ > **Sharing the database with Mastra Studio:** When running `mastra dev` alongside your application (e.g., Next.js), use an absolute path to ensure both processes access the same database:
+ >
+ > ```typescript
+ > url: 'file:/absolute/path/to/your/project/mastra.db'
+ > ```
+ >
+ > Relative paths like `file:./mastra.db` resolve based on each process's working directory, which may differ.
+
+ This configures instance-level storage, which all agents share by default. You can also configure [agent-level storage](#agent-level-storage) for isolated data boundaries.
+
+ Mastra automatically creates the necessary tables on first interaction. See the [core schema](https://mastra.ai/reference/storage/overview) for details on what gets created, including tables for messages, threads, resources, workflows, traces, and evaluation datasets.
+
+ ## Supported providers
+
+ Each provider page includes installation instructions, configuration parameters, and usage examples:
+
+ - [libSQL](https://mastra.ai/reference/storage/libsql)
+ - [PostgreSQL](https://mastra.ai/reference/storage/postgresql)
+ - [MongoDB](https://mastra.ai/reference/storage/mongodb)
+ - [Upstash](https://mastra.ai/reference/storage/upstash)
+ - [Cloudflare D1](https://mastra.ai/reference/storage/cloudflare-d1)
+ - [Cloudflare Durable Objects](https://mastra.ai/reference/storage/cloudflare)
+ - [Convex](https://mastra.ai/reference/storage/convex)
+ - [DynamoDB](https://mastra.ai/reference/storage/dynamodb)
+ - [LanceDB](https://mastra.ai/reference/storage/lance)
+ - [Microsoft SQL Server](https://mastra.ai/reference/storage/mssql)
+
+ > **Tip:** libSQL is the easiest way to get started because it doesn't require running a separate database server.
+
+ ## Configuration scope
+
+ Storage can be configured at the instance level (shared by all agents) or at the agent level (isolated to a specific agent).
+
+ ### Instance-level storage
+
+ Add storage to your Mastra instance so all agents, workflows, observability traces, and scores share the same memory provider:
+
+ ```typescript
+ import { Mastra } from '@mastra/core'
+ import { PostgresStore } from '@mastra/pg'
+
+ export const mastra = new Mastra({
+   storage: new PostgresStore({
+     id: 'mastra-storage',
+     connectionString: process.env.DATABASE_URL,
+   }),
+ })
+
+ // Both agents inherit storage from the Mastra instance above
+ const agent1 = new Agent({ id: 'agent-1', memory: new Memory() })
+ const agent2 = new Agent({ id: 'agent-2', memory: new Memory() })
+ ```
+
+ This is useful when all primitives share the same storage backend and have similar performance, scaling, and operational requirements.
+
+ #### Composite storage
+
+ [Composite storage](https://mastra.ai/reference/storage/composite) is an alternative way to configure instance-level storage. Use `MastraCompositeStore` to set the `memory` domain (and any other [domains](https://mastra.ai/reference/storage/composite) you need) to different storage providers.
+
+ ```typescript
+ import { Mastra } from '@mastra/core'
+ import { MastraCompositeStore } from '@mastra/core/storage'
+ import { MemoryLibSQL } from '@mastra/libsql'
+ import { WorkflowsPG } from '@mastra/pg'
+ import { ObservabilityStorageClickhouse } from '@mastra/clickhouse'
+
+ export const mastra = new Mastra({
+   storage: new MastraCompositeStore({
+     id: 'composite',
+     domains: {
+       memory: new MemoryLibSQL({ url: 'file:./memory.db' }),
+       workflows: new WorkflowsPG({ connectionString: process.env.DATABASE_URL }),
+       observability: new ObservabilityStorageClickhouse({
+         url: process.env.CLICKHOUSE_URL,
+         username: process.env.CLICKHOUSE_USERNAME,
+         password: process.env.CLICKHOUSE_PASSWORD,
+       }),
+     },
+   }),
+ })
+ ```
+
+ This is useful when different types of data have different performance or operational requirements, such as low-latency storage for memory, durable storage for workflows, and high-throughput storage for observability.
+
+ ### Agent-level storage
+
+ Agent-level storage overrides storage configured at the instance level. Add storage to a specific agent when you need isolated data boundaries or must meet compliance requirements:
+
+ ```typescript
+ import { Agent } from '@mastra/core/agent'
+ import { Memory } from '@mastra/memory'
+ import { PostgresStore } from '@mastra/pg'
+
+ export const agent = new Agent({
+   id: 'agent',
+   memory: new Memory({
+     storage: new PostgresStore({
+       id: 'agent-storage',
+       connectionString: process.env.AGENT_DATABASE_URL,
+     }),
+   }),
+ })
+ ```
+
+ > **Warning:** [Mastra Cloud Store](https://mastra.ai/docs/mastra-cloud/deployment) doesn't support agent-level storage.
+
+ ## Threads and resources
+
+ Mastra organizes conversations using two identifiers:
+
+ - **Thread** - a conversation session containing a sequence of messages.
+ - **Resource** - the entity that owns the thread, such as a user, organization, project, or any other domain entity in your application.
+
+ Both identifiers are required for agents to store information:
+
+ **Generate**:
+
+ ```typescript
+ const response = await agent.generate('hello', {
+   memory: {
+     thread: 'conversation-abc-123',
+     resource: 'user_123',
+   },
+ })
+ ```
+
+ **Stream**:
+
+ ```typescript
+ const stream = await agent.stream('hello', {
+   memory: {
+     thread: 'conversation-abc-123',
+     resource: 'user_123',
+   },
+ })
+ ```
+
+ > **Note:** [Studio](https://mastra.ai/docs/getting-started/studio) automatically generates a thread and resource ID for you. When calling `stream()` or `generate()` yourself, remember to provide these identifiers explicitly.
+
+ ### Thread title generation
+
+ Mastra can automatically generate descriptive thread titles based on the user's first message when `generateTitle` is enabled.
+
+ Use this option when implementing a ChatGPT-style chat interface to render a title alongside each thread in the conversation list (for example, in a sidebar) derived from the thread's initial user message.
+
+ ```typescript
+ export const agent = new Agent({
+   id: 'agent',
+   memory: new Memory({
+     options: {
+       generateTitle: true,
+     },
+   }),
+ })
+ ```
+
+ Title generation runs asynchronously after the agent responds and does not affect response time.
+
+ To optimize cost or behavior, provide a smaller [`model`](https://mastra.ai/models) and custom `instructions`:
+
+ ```typescript
+ export const agent = new Agent({
+   id: 'agent',
+   memory: new Memory({
+     options: {
+       generateTitle: {
+         model: 'openai/gpt-4o-mini',
+         instructions: 'Generate a 1 word title',
+       },
+     },
+   }),
+ })
+ ```
+
+ ## Semantic recall
+
+ Semantic recall has different storage requirements - it needs a vector database in addition to the standard storage adapter. See [Semantic recall](https://mastra.ai/docs/memory/semantic-recall) for setup and supported vector providers.
+
+ ## Handling large attachments
+
+ Some storage providers enforce record size limits that base64-encoded file attachments (such as images) can exceed:
+
+ | Provider                                                           | Record size limit |
+ | ------------------------------------------------------------------ | ----------------- |
+ | [DynamoDB](https://mastra.ai/reference/storage/dynamodb)           | 400 KB            |
+ | [Convex](https://mastra.ai/reference/storage/convex)               | 1 MiB             |
+ | [Cloudflare D1](https://mastra.ai/reference/storage/cloudflare-d1) | 1 MiB             |
+
+ PostgreSQL, MongoDB, and libSQL have higher limits and are generally unaffected.
+
+ To avoid this, use an input processor to upload attachments to external storage (S3, R2, GCS, [Convex file storage](https://docs.convex.dev/file-storage), etc.) and replace them with URL references before persistence.
+
+ ```typescript
+ import type { Processor } from '@mastra/core/processors'
+ import type { MastraDBMessage } from '@mastra/core/memory'
+
+ export class AttachmentUploader implements Processor {
+   id = 'attachment-uploader'
+
+   async processInput({ messages }: { messages: MastraDBMessage[] }) {
+     return Promise.all(messages.map(msg => this.processMessage(msg)))
+   }
+
+   async processMessage(msg: MastraDBMessage) {
+     const attachments = msg.content.experimental_attachments
+     if (!attachments?.length) return msg
+
+     const uploaded = await Promise.all(
+       attachments.map(async att => {
+         // Skip if already a URL
+         if (!att.url?.startsWith('data:')) return att
+
+         // Upload base64 data and replace with URL
+         const url = await this.upload(att.url, att.contentType)
+         return { ...att, url }
+       }),
+     )
+
+     return { ...msg, content: { ...msg.content, experimental_attachments: uploaded } }
+   }
+
+   async upload(dataUri: string, contentType?: string): Promise<string> {
+     const base64 = dataUri.split(',')[1]
+     const buffer = Buffer.from(base64, 'base64')
+
+     // Replace with your storage provider (S3, R2, GCS, Convex, etc.)
+     // return await s3.upload(buffer, contentType);
+     throw new Error('Implement upload() with your storage provider')
+   }
+ }
+ ```
+
+ Use the processor with your agent:
+
+ ```typescript
+ import { Agent } from '@mastra/core/agent'
+ import { Memory } from '@mastra/memory'
+ import { AttachmentUploader } from './processors/attachment-uploader'
+
+ const agent = new Agent({
+   id: 'my-agent',
+   memory: new Memory({ storage: yourStorage }),
+   inputProcessors: [new AttachmentUploader()],
+ })
+ ```
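As a rough sanity check against the record-size limits in the table above: base64 encodes every 3 raw bytes as 4 characters, so an attachment grows by about a third once embedded in a message record. A small helper (hypothetical, not part of Mastra) can estimate whether an attachment will fit:

```typescript
// Hypothetical helper, not part of Mastra: estimate the base64-encoded
// size of an attachment. base64 emits 4 characters per 3 input bytes.
function base64Size(rawBytes: number): number {
  return Math.ceil(rawBytes / 3) * 4
}

function fitsLimit(rawBytes: number, limitBytes: number): boolean {
  return base64Size(rawBytes) <= limitBytes
}

// A ~350 KB image already exceeds DynamoDB's 400 KB record limit once
// base64-encoded (~467 KB), which is why uploading the file and storing
// a URL reference is the safer pattern.
const DYNAMO_LIMIT = 400 * 1024
fitsLimit(350 * 1024, DYNAMO_LIMIT) // false
```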