@mastra/memory 1.0.0-beta.10 → 1.0.0-beta.11

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,317 @@
1
+ > Learn how to require approvals, suspend tool execution, and automatically resume suspended tools while keeping humans in control of agent workflows.
2
+
3
+ # Agent Approval
4
+
5
+ Agents sometimes require the same [human-in-the-loop](https://mastra.ai/docs/v1/workflows/human-in-the-loop) oversight used in workflows when calling tools that handle sensitive operations, like deleting resources or running long processes. With agent approval you can suspend a tool call and provide feedback to the user, or approve or decline a tool call based on specific application conditions.
6
+
7
+ ## Tool call approval
8
+
9
+ Tool call approval can be enabled at the agent level, where it applies to every tool the agent uses, or at the tool level, which provides more granular control over individual tool calls.
10
+
11
+ ### Storage
12
+
13
+ Agent approval uses a snapshot to capture the state of the request. Ensure you've enabled a storage provider in your main Mastra instance. If storage isn't enabled, you'll see an error indicating that the snapshot could not be found.
14
+
15
+ ```typescript title="src/mastra/index.ts"
16
+ import { Mastra } from "@mastra/core/mastra";
17
+ import { LibSQLStore } from "@mastra/libsql";
18
+
19
+ export const mastra = new Mastra({
20
+ storage: new LibSQLStore({
21
+ id: "mastra-storage",
22
+ url: ":memory:"
23
+ })
24
+ });
25
+ ```
26
+
27
+ ## Agent-level approval
28
+
29
+ When calling an agent using `.stream()`, set `requireToolApproval` to `true`. This suspends the run before the agent calls any of the tools defined in its configuration, until you approve or decline the tool call.
30
+
31
+ ```typescript
32
+ const stream = await agent.stream("What's the weather in London?", {
33
+ requireToolApproval: true
34
+ });
35
+ ```
36
+
37
+ ### Approving tool calls
38
+
39
+ To approve a tool call, access `approveToolCall` from the `agent`, passing in the `runId` of the stream. This lets the agent know it's now OK to call its tools.
40
+
41
+ ```typescript
42
+ const handleApproval = async () => {
43
+ const approvedStream = await agent.approveToolCall({ runId: stream.runId });
44
+
45
+ for await (const chunk of approvedStream.textStream) {
46
+ process.stdout.write(chunk);
47
+ }
48
+ process.stdout.write("\n");
49
+ };
50
+ ```
51
+
52
+ ### Declining tool calls
53
+
54
+ To decline a tool call, access `declineToolCall` from the `agent`, passing in the `runId` of the stream. You will see the streamed response from the agent, but it won't call its tools.
55
+
56
+ ```typescript
57
+ const handleDecline = async () => {
58
+ const declinedStream = await agent.declineToolCall({ runId: stream.runId });
59
+
60
+ for await (const chunk of declinedStream.textStream) {
61
+ process.stdout.write(chunk);
62
+ }
63
+ process.stdout.write("\n");
64
+ };
65
+ ```
66
+
67
+ ## Tool-level approval
68
+
69
+ There are two approaches to tool-level approval. The first uses `requireApproval`, a property on the tool definition (not to be confused with `requireToolApproval`, the parameter passed to `agent.stream()`). The second uses `suspend`, which lets the tool provide context or confirmation prompts so the user can decide whether the tool call should continue.
70
+
71
+ ### Tool approval using `requireApproval`
72
+
73
+ In this approach, `requireApproval` is configured on the tool definition (shown below) rather than on the agent.
74
+
75
+ ```typescript
76
+ export const testTool = createTool({
77
+ id: "test-tool",
78
+ description: "Fetches weather for a location",
79
+ inputSchema: z.object({
80
+ location: z.string()
81
+ }),
82
+ outputSchema: z.object({
83
+ weather: z.string()
84
+ }),
85
+ resumeSchema: z.object({
86
+ approved: z.boolean()
87
+ }),
88
+ execute: async ({ location }) => {
89
+ const response = await fetch(`https://wttr.in/${location}?format=3`);
90
+ const weather = await response.text();
91
+
92
+ return { weather };
93
+ },
94
+ requireApproval: true
95
+ });
96
+ ```
97
+
98
+ When `requireApproval` is true for a tool, the stream will include chunks of type `tool-call-approval` to indicate that the call is paused. To continue the call, invoke `resumeStream` with data matching the tool's `resumeSchema`, along with the `runId`.
99
+
100
+ ```typescript
101
+ const stream = await agent.stream("What's the weather in London?");
102
+
103
+ for await (const chunk of stream.fullStream) {
104
+ if (chunk.type === "tool-call-approval") {
105
+ console.log("Approval required.");
106
+ }
107
+ }
108
+
109
+ const handleResume = async () => {
110
+ const resumedStream = await agent.resumeStream({ approved: true }, { runId: stream.runId });
111
+
112
+ for await (const chunk of resumedStream.textStream) {
113
+ process.stdout.write(chunk);
114
+ }
115
+ process.stdout.write("\n");
116
+ };
117
+ ```
118
+
119
+ ### Tool approval using `suspend`
120
+
121
+ With this approach, neither the agent nor the tool uses `requireApproval`. Instead, the tool implementation calls `suspend` to pause execution and return context or confirmation prompts to the user.
122
+
123
+ ```typescript
125
+ export const testToolB = createTool({
126
+ id: "test-tool-b",
127
+ description: "Fetches weather for a location",
128
+ inputSchema: z.object({
129
+ location: z.string()
130
+ }),
131
+ outputSchema: z.object({
132
+ weather: z.string()
133
+ }),
134
+ resumeSchema: z.object({
135
+ approved: z.boolean()
136
+ }),
137
+ suspendSchema: z.object({
138
+ reason: z.string()
139
+ }),
140
+ execute: async ({ location }, { agent } = {}) => {
141
+ const { resumeData: { approved } = {}, suspend } = agent ?? {};
142
+
143
+ if (!approved) {
144
+ return suspend?.({ reason: "Approval required." });
145
+ }
146
+
147
+ const response = await fetch(`https://wttr.in/${location}?format=3`);
148
+ const weather = await response.text();
149
+
150
+ return { weather };
151
+ }
152
+ });
153
+ ```
154
+
155
+ With this approach the stream will include a `tool-call-suspended` chunk, and the `suspendPayload` will contain the `reason` defined by the tool's `suspendSchema`. To continue the call, invoke `resumeStream` with data matching the tool's `resumeSchema`, along with the `runId`.
156
+
157
+ ```typescript
158
+ const stream = await agent.stream("What's the weather in London?");
159
+
160
+ for await (const chunk of stream.fullStream) {
161
+ if (chunk.type === "tool-call-suspended") {
162
+ console.log(chunk.payload.suspendPayload);
163
+ }
164
+ }
165
+
166
+ const handleResume = async () => {
167
+ const resumedStream = await agent.resumeStream({ approved: true }, { runId: stream.runId });
168
+
169
+ for await (const chunk of resumedStream.textStream) {
170
+ process.stdout.write(chunk);
171
+ }
172
+ process.stdout.write("\n");
173
+ };
175
+ ```
176
+
177
+ ## Automatic tool resumption
178
+
179
+ When using tools that call `suspend()`, you can enable automatic resumption so the agent resumes suspended tools based on the user's next message. This creates a conversational flow where users provide the required information naturally, without your application needing to call `resumeStream()` explicitly.
180
+
181
+ ### Enabling auto-resume
182
+
183
+ Set `autoResumeSuspendedTools` to `true` in the agent's default options or when calling `stream()`:
184
+
185
+ ```typescript
186
+ import { Agent } from "@mastra/core/agent";
187
+ import { Memory } from "@mastra/memory";
188
+
189
+ // Option 1: In agent configuration
190
+ const agent = new Agent({
191
+ id: "my-agent",
192
+ name: "My Agent",
193
+ instructions: "You are a helpful assistant",
194
+ model: "openai/gpt-4o-mini",
195
+ tools: { weatherTool },
196
+ memory: new Memory(),
197
+ defaultOptions: {
198
+ autoResumeSuspendedTools: true,
199
+ },
200
+ });
201
+
202
+ // Option 2: Per-request
203
+ const stream = await agent.stream("What's the weather?", {
204
+ autoResumeSuspendedTools: true,
205
+ });
206
+ ```
207
+
208
+ ### How it works
209
+
210
+ When `autoResumeSuspendedTools` is enabled:
211
+
212
+ 1. A tool suspends execution by calling `suspend()` with a payload (e.g., requesting more information)
213
+ 2. The suspension is persisted to memory along with the conversation
214
+ 3. When the user sends their next message on the same thread, the agent:
215
+ - Detects the suspended tool from message history
216
+ - Extracts `resumeData` from the user's message based on the tool's `resumeSchema`
217
+ - Automatically resumes the tool with the extracted data
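The flow above can be sketched as plain logic. This is an illustrative sketch only: the message shapes and the naive single-field extraction below are assumptions, not Mastra's internals.

```typescript
// Minimal message shapes for the sketch (assumed, not Mastra's types).
type Suspension = { toolId: string; resumeField: string };
type Message =
  | { role: "user"; text: string }
  | { role: "tool-suspended"; suspension: Suspension };

// Find the most recent suspended tool in the thread's history.
function findSuspendedTool(history: Message[]): Suspension | undefined {
  for (let i = history.length - 1; i >= 0; i--) {
    const msg = history[i];
    if (msg.role === "tool-suspended") return msg.suspension;
  }
  return undefined;
}

// Naive extraction: treat the user's whole message as the value of the
// single field named by the tool's resumeSchema.
function extractResumeData(
  suspension: Suspension,
  userMessage: string,
): Record<string, string> {
  return { [suspension.resumeField]: userMessage.trim() };
}

const history: Message[] = [
  { role: "user", text: "What's the weather like?" },
  { role: "tool-suspended", suspension: { toolId: "weather-info", resumeField: "city" } },
];

const suspended = findSuspendedTool(history);
const resumeData = suspended
  ? extractResumeData(suspended, "San Francisco")
  : undefined;
console.log(resumeData); // → { city: "San Francisco" }
```

A real implementation extracts structured data matching the tool's `resumeSchema` from free-form text; the sketch only shows the shape of the decision.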
218
+
219
+ ### Example
220
+
221
+ ```typescript
222
+ import { createTool } from "@mastra/core/tools";
223
+ import { z } from "zod";
224
+
225
+ export const weatherTool = createTool({
226
+ id: "weather-info",
227
+ description: "Fetches weather information for a city",
228
+ suspendSchema: z.object({
229
+ message: z.string(),
230
+ }),
231
+ resumeSchema: z.object({
232
+ city: z.string(),
233
+ }),
234
+ execute: async (_inputData, context) => {
235
+ // Check if this is a resume with data
236
+ if (!context?.agent?.resumeData) {
237
+ // First call - suspend and ask for the city
238
+ return context?.agent?.suspend({
239
+ message: "What city do you want to know the weather for?",
240
+ });
241
+ }
242
+
243
+ // Resume call - city was extracted from user's message
244
+ const { city } = context.agent.resumeData;
245
+ const response = await fetch(`https://wttr.in/${city}?format=3`);
246
+ const weather = await response.text();
247
+
248
+ return { city, weather };
249
+ },
250
+ });
251
+
252
+ const agent = new Agent({
253
+ id: "my-agent",
254
+ name: "My Agent",
255
+ instructions: "You are a helpful assistant",
256
+ model: "openai/gpt-4o-mini",
257
+ tools: { weatherTool },
258
+ memory: new Memory(),
259
+ defaultOptions: {
260
+ autoResumeSuspendedTools: true,
261
+ },
262
+ });
263
+
264
+ const stream = await agent.stream("What's the weather like?", { memory: { thread: "convo_123", resource: "user_123" } });
265
+
266
+ for await (const chunk of stream.fullStream) {
267
+ if (chunk.type === "tool-call-suspended") {
268
+ console.log(chunk.payload.suspendPayload);
269
+ }
270
+ }
271
+
272
+ const handleResume = async () => {
273
+ const resumedStream = await agent.stream("San Francisco", { memory: { thread: "convo_123", resource: "user_123" } });
274
+
275
+ for await (const chunk of resumedStream.textStream) {
276
+ process.stdout.write(chunk);
277
+ }
278
+ process.stdout.write("\n");
279
+ };
280
+ ```
281
+
282
+ **Conversation flow:**
283
+
284
+ ```
285
+ User: "What's the weather like?"
286
+ Agent: "What city do you want to know the weather for?"
287
+
288
+ User: "San Francisco"
289
+ Agent: "The weather in San Francisco is: San Francisco: ☀️ +72°F"
290
+ ```
291
+
292
+ The second message automatically resumes the suspended tool: the agent extracts `{ city: "San Francisco" }` from the user's message and passes it as `resumeData`.
293
+
294
+ ### Requirements
295
+
296
+ For automatic tool resumption to work:
297
+
298
+ - **Memory configured**: The agent needs memory to track suspended tools across messages
299
+ - **Same thread**: The follow-up message must use the same memory thread and resource identifiers
300
+ - **`resumeSchema` defined**: The tool must define a `resumeSchema` so the agent knows what data structure to extract from the user's message
301
+
302
+ ### Manual vs automatic resumption
303
+
304
+ | Approach | Use case |
305
+ |----------|----------|
306
+ | Manual (`resumeStream()`) | Programmatic control, webhooks, button clicks, external triggers |
307
+ | Automatic (`autoResumeSuspendedTools`) | Conversational flows where users provide resume data in natural language |
308
+
309
+ Both approaches work with the same tool definitions. Automatic resumption triggers only when suspended tools exist in the message history and the user sends a new message on the same thread.
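That trigger condition can be expressed as a small predicate. The entry shapes below are assumptions for illustration, not Mastra's internals:

```typescript
type HistoryEntry = { type: "text" | "tool-call-suspended"; threadId: string };

// Auto-resume fires only when the option is on and the new message's
// thread contains a suspended tool call in its history.
function shouldAutoResume(
  history: HistoryEntry[],
  newMessageThreadId: string,
  autoResumeSuspendedTools: boolean,
): boolean {
  if (!autoResumeSuspendedTools) return false;
  return history.some(
    (e) => e.type === "tool-call-suspended" && e.threadId === newMessageThreadId,
  );
}

const entries: HistoryEntry[] = [
  { type: "text", threadId: "convo_123" },
  { type: "tool-call-suspended", threadId: "convo_123" },
];

console.log(shouldAutoResume(entries, "convo_123", true));  // true
console.log(shouldAutoResume(entries, "convo_456", true));  // false: different thread
console.log(shouldAutoResume(entries, "convo_123", false)); // false: option disabled
```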
310
+
311
+ ## Related
312
+
313
+ - [Using Tools](./using-tools)
314
+ - [Agent Overview](./overview)
315
+ - [Tools Overview](../mcp/overview)
316
+ - [Agent Memory](./agent-memory)
317
+ - [Request Context](https://mastra.ai/docs/v1/server/request-context)
@@ -0,0 +1,114 @@
1
+ # Core API Reference
2
+
3
+ > API reference for core - 2 entries
4
+
5
+
6
+ ---
7
+
8
+ ## Reference: Mastra.getMemory()
9
+
10
+ > Documentation for the `Mastra.getMemory()` method in Mastra, which retrieves a registered memory instance by its registry key.
11
+
12
+ The `.getMemory()` method retrieves a memory instance from the Mastra registry by its key. Memory instances are registered in the Mastra constructor and can be referenced by stored agents.
13
+
14
+ ## Usage example
15
+
16
+ ```typescript
17
+ const memory = mastra.getMemory("conversationMemory");
18
+
19
+ // Use the memory instance
20
+ const thread = await memory.createThread({
21
+ resourceId: "user-123",
22
+ title: "New Conversation",
23
+ });
24
+ ```
25
+
26
+ ## Parameters
+
+ - `key` (`string`): the registry key the memory instance was registered under.
+
+ ## Returns
+
+ The `Memory` instance registered under the given key.
29
+
30
+ ## Example: Registering and Retrieving Memory
31
+
32
+ ```typescript
33
+ import { Mastra } from "@mastra/core";
34
+ import { Memory } from "@mastra/memory";
35
+ import { LibSQLStore } from "@mastra/libsql";
36
+
37
+ const conversationMemory = new Memory({
38
+ storage: new LibSQLStore({ url: ":memory:" }),
39
+ });
40
+
41
+ const mastra = new Mastra({
42
+ memory: {
43
+ conversationMemory,
44
+ },
45
+ });
46
+
47
+ // Later, retrieve the memory instance
48
+ const memory = mastra.getMemory("conversationMemory");
49
+ ```
50
+
51
+ ## Related
52
+
53
+ - [Mastra.listMemory()](https://mastra.ai/reference/v1/core/listMemory)
54
+ - [Memory overview](https://mastra.ai/docs/v1/memory/overview)
55
+ - [Agent Memory](https://mastra.ai/docs/v1/agents/agent-memory)
56
+
57
+ ---
58
+
59
+ ## Reference: Mastra.listMemory()
60
+
61
+ > Documentation for the `Mastra.listMemory()` method in Mastra, which returns all registered memory instances.
62
+
63
+ The `.listMemory()` method returns all memory instances registered with the Mastra instance.
64
+
65
+ ## Usage example
66
+
67
+ ```typescript
68
+ const memoryInstances = mastra.listMemory();
69
+
70
+ for (const [key, memory] of Object.entries(memoryInstances)) {
71
+ console.log(`Memory "${key}": ${memory.id}`);
72
+ }
73
+ ```
74
+
75
+ ## Parameters
76
+
77
+ This method takes no parameters.
78
+
79
+ ## Returns
+
+ An object mapping each registry key to its registered `Memory` instance.
80
+
81
+ ## Example: Checking Registered Memory
82
+
83
+ ```typescript
84
+ import { Mastra } from "@mastra/core";
85
+ import { Memory } from "@mastra/memory";
86
+ import { LibSQLStore } from "@mastra/libsql";
87
+
88
+ const conversationMemory = new Memory({
89
+ id: "conversation-memory",
90
+ storage: new LibSQLStore({ url: ":memory:" }),
91
+ });
92
+
93
+ const analyticsMemory = new Memory({
94
+ id: "analytics-memory",
95
+ storage: new LibSQLStore({ url: ":memory:" }),
96
+ });
97
+
98
+ const mastra = new Mastra({
99
+ memory: {
100
+ conversationMemory,
101
+ analyticsMemory,
102
+ },
103
+ });
104
+
105
+ // List all registered memory instances
106
+ const allMemory = mastra.listMemory();
107
+ console.log(Object.keys(allMemory)); // ["conversationMemory", "analyticsMemory"]
108
+ ```
109
+
110
+ ## Related
111
+
112
+ - [Mastra.getMemory()](https://mastra.ai/reference/v1/core/getMemory)
113
+ - [Memory overview](https://mastra.ai/docs/v1/memory/overview)
114
+ - [Agent Memory](https://mastra.ai/docs/v1/agents/agent-memory)
@@ -0,0 +1,76 @@
1
+ > Learn how Mastra gives agents memory across interactions using message history, working memory, and semantic recall.
2
+
3
+ # Memory
4
+
5
+ Memory gives your agent coherence across interactions and allows it to improve over time by retaining relevant information from past conversations.
6
+
7
+ Mastra requires a [storage provider](./storage) to persist memory, and supports three types of memory:
8
+
9
+ - [**Message history**](https://mastra.ai/docs/v1/memory/message-history) captures recent messages from the current conversation, providing short-term continuity and maintaining dialogue flow.
10
+ - [**Working memory**](https://mastra.ai/docs/v1/memory/working-memory) stores persistent user-specific details such as names, preferences, goals, and other structured data.
11
+ - [**Semantic recall**](https://mastra.ai/docs/v1/memory/semantic-recall) retrieves older messages from past conversations based on semantic relevance. Matches are retrieved using vector search and can include surrounding context for better comprehension.
12
+
13
+ You can enable any combination of these memory types. Mastra assembles the relevant memories into the model’s context window. If the total exceeds the model's token limit, use [memory processors](https://mastra.ai/docs/v1/memory/memory-processors) to trim or filter messages before sending them to the model.
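As an illustration of that trimming step, here is a minimal sketch of dropping the oldest messages until the context fits a token budget. The 4-characters-per-token estimate and the message shape are assumptions; Mastra's memory processors have their own API:

```typescript
type Msg = { role: string; content: string };

// Rough token estimate: ~4 characters per token (assumption for the sketch).
const estimateTokens = (m: Msg) => Math.ceil(m.content.length / 4);

// Drop the oldest messages until the remainder fits the budget,
// mirroring what a trimming memory processor does conceptually.
function trimToBudget(messages: Msg[], tokenBudget: number): Msg[] {
  const out = [...messages];
  let total = out.reduce((sum, m) => sum + estimateTokens(m), 0);
  while (out.length > 1 && total > tokenBudget) {
    total -= estimateTokens(out.shift()!);
  }
  return out;
}

const msgs: Msg[] = [
  { role: "user", content: "a".repeat(400) },      // ~100 tokens
  { role: "assistant", content: "b".repeat(400) }, // ~100 tokens
  { role: "user", content: "c".repeat(40) },       // ~10 tokens
];
console.log(trimToBudget(msgs, 120).length); // 2: the oldest message is dropped
```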
14
+
15
+ ## Getting started
16
+
17
+ Install Mastra's memory module and the storage adapter for your preferred database (see the storage section below):
18
+
19
+ ```bash
20
+ npm install @mastra/memory@beta @mastra/libsql@beta
21
+ ```
22
+
23
+ Add the storage adapter to the main Mastra instance:
24
+
25
+ ```typescript title="src/mastra/index.ts"
26
+ import { Mastra } from "@mastra/core";
27
+ import { LibSQLStore } from "@mastra/libsql";
28
+
29
+ export const mastra = new Mastra({
30
+ storage: new LibSQLStore({
31
+ id: 'mastra-storage',
32
+ url: ":memory:",
33
+ }),
34
+ });
35
+ ```
36
+
37
+ Enable memory by passing a `Memory` instance to your agent:
38
+
39
+ ```typescript title="src/mastra/agents/test-agent.ts"
40
+ import { Memory } from "@mastra/memory";
41
+ import { Agent } from "@mastra/core/agent";
42
+
43
+ export const testAgent = new Agent({
44
+ id: "test-agent",
45
+ memory: new Memory({
46
+ options: {
47
+ lastMessages: 20,
48
+ },
49
+ }),
50
+ });
51
+ ```
52
+ When you send a new message, the model can now "see" the previous 20 messages, which gives it better context for the conversation and leads to more coherent, accurate replies.
53
+
54
+ This example configures basic [message history](https://mastra.ai/docs/v1/memory/message-history). You can also enable [working memory](https://mastra.ai/docs/v1/memory/working-memory) and [semantic recall](https://mastra.ai/docs/v1/memory/semantic-recall) by passing additional options to `Memory`.
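Conceptually, `lastMessages: 20` acts as a sliding window over the thread. The sketch below is illustrative only, not Mastra's implementation:

```typescript
type ThreadMessage = { role: string; content: string };

// Keep only the most recent `lastMessages` entries for the model's context.
function windowHistory(thread: ThreadMessage[], lastMessages: number): ThreadMessage[] {
  return thread.slice(-lastMessages);
}

// A thread of 25 messages with lastMessages: 20 keeps messages 6..25.
const thread: ThreadMessage[] = Array.from({ length: 25 }, (_, i) => ({
  role: i % 2 === 0 ? "user" : "assistant",
  content: `message ${i + 1}`,
}));

const context = windowHistory(thread, 20);
console.log(context.length);     // 20
console.log(context[0].content); // "message 6"
```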
55
+
56
+ ## Storage
57
+
58
+ Before enabling memory, you must first configure a storage adapter. Mastra supports multiple database providers including PostgreSQL, MongoDB, libSQL, and more.
59
+
60
+ Storage can be configured at the instance level (shared across all agents) or at the agent level (dedicated per agent). You can also use different databases for storage and vector operations.
61
+
62
+ See the [Storage](https://mastra.ai/docs/v1/memory/storage) documentation for configuration options, supported providers, and examples.
63
+
64
+ ## Debugging memory
65
+
66
+ When tracing is enabled, you can inspect exactly which messages the agent uses for context in each request. The trace output shows all memory included in the agent's context window - both recent message history and messages recalled via semantic recall.
67
+
68
+ This visibility helps you understand why an agent made specific decisions and verify that memory retrieval is working as expected.
69
+
70
+ For more details on enabling and configuring tracing, see [Tracing](https://mastra.ai/docs/v1/observability/tracing/overview).
71
+
72
+ ## Next Steps
73
+
74
+ - Learn more about [Storage](https://mastra.ai/docs/v1/memory/storage) providers and configuration options
75
+ - Add [Message History](https://mastra.ai/docs/v1/memory/message-history), [Working Memory](https://mastra.ai/docs/v1/memory/working-memory), or [Semantic Recall](https://mastra.ai/docs/v1/memory/semantic-recall)
76
+ - Visit [Memory configuration reference](https://mastra.ai/reference/v1/memory/memory-class) for all available options
@@ -0,0 +1,181 @@
1
> Configure storage for Mastra so agents can persist memory across interactions.
2
+
3
+ # Storage
4
+
5
+ For Mastra to remember previous interactions, you must configure a storage adapter. Mastra is designed to work with your preferred database provider - choose from the [supported providers](#supported-providers) and pass it to your Mastra instance.
6
+
7
+ ```typescript
8
+ import { Mastra } from "@mastra/core";
9
+ import { LibSQLStore } from "@mastra/libsql";
10
+
11
+ const mastra = new Mastra({
12
+ storage: new LibSQLStore({
13
+ id: 'mastra-storage',
14
+ url: "file:./mastra.db",
15
+ }),
16
+ });
17
+ ```
18
+ On first interaction, Mastra automatically creates the necessary tables following the [core schema](https://mastra.ai/reference/v1/storage/overview#core-schema). This includes tables for messages, threads, resources, workflows, traces, and evaluation datasets.
19
+
20
+ ## Supported Providers
21
+
22
+ Each provider page includes installation instructions, configuration parameters, and usage examples:
23
+
24
+ - [libSQL Storage](https://mastra.ai/reference/v1/storage/libsql)
25
+ - [PostgreSQL Storage](https://mastra.ai/reference/v1/storage/postgresql)
26
+ - [MongoDB Storage](https://mastra.ai/reference/v1/storage/mongodb)
27
+ - [Upstash Storage](https://mastra.ai/reference/v1/storage/upstash)
28
+ - [Cloudflare D1](https://mastra.ai/reference/v1/storage/cloudflare-d1)
29
+ - [Cloudflare Durable Objects](https://mastra.ai/reference/v1/storage/cloudflare)
30
+ - [Convex](https://mastra.ai/reference/v1/storage/convex)
31
+ - [DynamoDB](https://mastra.ai/reference/v1/storage/dynamodb)
32
+ - [LanceDB](https://mastra.ai/reference/v1/storage/lance)
33
+ - [Microsoft SQL Server](https://mastra.ai/reference/v1/storage/mssql)
34
+
35
+ > **Note:**
36
+ > libSQL is the easiest way to get started because it doesn’t require running a separate database server.
37
+
38
+ ## Configuration Scope
39
+
40
+ You can configure storage at two different scopes:
41
+
42
+ ### Instance-level storage
43
+
44
+ Add storage to your Mastra instance so all agents share the same memory provider:
45
+
46
+ ```typescript
47
+ import { Mastra } from "@mastra/core";
48
+ import { PostgresStore } from "@mastra/pg";
49
+
50
+ const mastra = new Mastra({
51
+ storage: new PostgresStore({
52
+ id: 'mastra-storage',
53
+ connectionString: process.env.DATABASE_URL,
54
+ }),
55
+ });
56
+
57
+ // All agents automatically use this storage
58
+ const agent1 = new Agent({ memory: new Memory() });
59
+ const agent2 = new Agent({ memory: new Memory() });
60
+ ```
61
+
62
+ ### Agent-level storage
63
+
64
+ Add storage to a specific agent when you need data boundaries or compliance requirements:
65
+
66
+ ```typescript
67
+ import { Agent } from "@mastra/core/agent";
68
+ import { Memory } from "@mastra/memory";
69
+ import { PostgresStore } from "@mastra/pg";
70
+
71
+ const agent = new Agent({
72
+ memory: new Memory({
73
+ storage: new PostgresStore({
74
+ id: 'agent-storage',
75
+ connectionString: process.env.AGENT_DATABASE_URL,
76
+ }),
77
+ }),
78
+ });
79
+ ```
80
+
81
+ This is useful when different agents need to store data in separate databases for security, compliance, or organizational reasons.
82
+
83
+ ## Threads and Resources
84
+
85
+ Mastra organizes memory into threads using two identifiers:
86
+
87
+ - **Thread**: A conversation session containing a sequence of messages (e.g., `convo_123`)
88
+ - **Resource**: An identifier for the entity the thread belongs to, typically a user (e.g., `user_123`)
89
+
90
+ Both identifiers are required for agents to store and recall information:
91
+
92
+ ```typescript
93
+ const stream = await agent.stream("message for agent", {
94
+ memory: {
95
+ thread: "convo_123",
96
+ resource: "user_123",
97
+ },
98
+ });
99
+ ```
100
+
101
+ > **Note:**
102
+ > [Studio](https://mastra.ai/docs/v1/getting-started/studio) automatically generates a thread and resource ID for you. Remember to pass these explicitly when calling `stream` or `generate` yourself.
103
+
104
+ ### Thread title generation
105
+
106
+ Mastra can automatically generate descriptive thread titles based on the user's first message.
107
+
108
+ Use this option when implementing a ChatGPT-style chat interface, where each thread in the conversation list (for example, in a sidebar) shows a title derived from the thread’s initial user message.
109
+
110
+ ```typescript
111
+ export const testAgent = new Agent({
112
+ memory: new Memory({
113
+ options: {
+ threads: {
+ generateTitle: true,
+ },
+ },
116
+ }),
117
+ });
118
+ ```
119
+
120
+ Title generation runs asynchronously after the agent responds and does not affect response time.
121
+
122
+ To optimize cost or behavior, provide a smaller `model` and custom `instructions`:
123
+
124
+ ```typescript
125
+ export const testAgent = new Agent({
126
+ memory: new Memory({
127
+ options: {
128
+ threads: {
129
+ generateTitle: {
130
+ model: "openai/gpt-4o-mini",
131
+ instructions: "Generate a concise title based on the user's first message",
132
+ },
133
+ },
134
+ },
135
+ }),
136
+ });
137
+ ```
138
+
139
+ ## Semantic recall
140
+
141
+ Semantic recall uses vector embeddings to retrieve relevant past messages based on meaning rather than recency. This requires a vector database instance, which can be configured at the instance or agent level.
142
+
143
+ The vector database doesn't have to be the same as your storage provider. For example, you might use PostgreSQL for storage and Pinecone for vectors:
144
+
145
+ ```typescript
146
+ import { Mastra } from "@mastra/core";
147
+ import { Agent } from "@mastra/core/agent";
148
+ import { Memory } from "@mastra/memory";
149
+ import { PostgresStore } from "@mastra/pg";
150
+ import { PineconeVector } from "@mastra/pinecone";
151
+
152
+ // Instance-level storage configuration
153
+ const mastra = new Mastra({
154
+ storage: new PostgresStore({
155
+ id: 'mastra-storage',
156
+ connectionString: process.env.DATABASE_URL,
157
+ }),
158
+ });
159
+
160
+ // Agent-level vector configuration
161
+ const agent = new Agent({
162
+ memory: new Memory({
163
+ vector: new PineconeVector({
164
+ id: 'agent-vector',
165
+ apiKey: process.env.PINECONE_API_KEY,
166
+ environment: process.env.PINECONE_ENVIRONMENT,
167
+ indexName: 'agent-embeddings',
168
+ }),
169
+ options: {
170
+ semanticRecall: {
171
+ topK: 5,
172
+ messageRange: 2,
173
+ },
174
+ },
175
+ }),
176
+ });
177
+ ```
178
+
179
+ We support all popular vector providers including [Pinecone](https://mastra.ai/reference/v1/vectors/pinecone), [Chroma](https://mastra.ai/reference/v1/vectors/chroma), [Qdrant](https://mastra.ai/reference/v1/vectors/qdrant), and many more.
180
+
181
+ For more information on configuring semantic recall, see the [Semantic Recall](./semantic-recall) documentation.