@mastra/memory 1.0.0-beta.12 → 1.0.0-beta.14

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -1,5 +1,45 @@
1
1
  # @mastra/memory
2
2
 
3
+ ## 1.0.0-beta.14
4
+
5
+ ### Patch Changes
6
+
7
+ - Updated dependencies [[`1dbd8c7`](https://github.com/mastra-ai/mastra/commit/1dbd8c729fb6536ec52f00064d76b80253d346e9), [`c59e13c`](https://github.com/mastra-ai/mastra/commit/c59e13c7688284bd96b2baee3e314335003548de), [`f93e2f5`](https://github.com/mastra-ai/mastra/commit/f93e2f575e775e627e5c1927cefdd72db07858ed), [`f9a2509`](https://github.com/mastra-ai/mastra/commit/f9a25093ea72d210a5e52cfcb3bcc8b5e02dc25c), [`7a010c5`](https://github.com/mastra-ai/mastra/commit/7a010c56b846a313a49ae42fccd3d8de2b9f292d)]:
8
+ - @mastra/core@1.0.0-beta.24
9
+ - @mastra/schema-compat@1.0.0-beta.7
10
+
11
+ ## 1.0.0-beta.13
12
+
13
+ ### Major Changes
14
+
15
+ - Refactor workflow and tool types to remove Zod-specific constraints ([#11814](https://github.com/mastra-ai/mastra/pull/11814))
16
+
17
+ Removed Zod-specific type constraints across all workflow implementations and tool types, replacing them with generic types. This ensures type consistency across default, evented, and inngest workflows while preparing for Zod v4 migration.
18
+
19
+ **Workflow Changes:**
20
+ - Removed `z.ZodObject<any>` and `z.ZodType<any>` constraints from all workflow generic types
21
+ - Updated method signatures to use `TInput` and `TState` directly instead of `z.infer<TInput>` and `z.infer<TState>`
22
+ - Aligned conditional types across all workflow implementations using `TInput extends unknown` pattern
23
+ - Fixed `TSteps` generic to properly use `TEngineType` instead of `any`
24
+
25
+ **Tool Changes:**
26
+ - Removed Zod schema constraints from `ToolExecutionContext` and related interfaces
27
+ - Simplified type parameters from `TSuspendSchema extends ZodLikeSchema` to `TSuspend` and `TResume`
28
+ - Updated tool execution context types to use generic types
29
+
30
+ **Type Utilities:**
31
+ - Refactored type helpers to work with generic schemas instead of Zod-specific types
32
+ - Updated type extraction utilities for better compatibility
33
+
34
+ This change maintains backward compatibility while improving type consistency and preparing for Zod v4 support across all affected packages.
35
+
36
+ ### Patch Changes
37
+
38
+ - Fix updateMessages() to sync vector database for semantic recall. Previously, updating a message's content would not update the corresponding vector embeddings, causing semantic recall to return stale results based on old content. ([#11842](https://github.com/mastra-ai/mastra/pull/11842))
39
+
40
+ - Updated dependencies [[`ebae12a`](https://github.com/mastra-ai/mastra/commit/ebae12a2dd0212e75478981053b148a2c246962d), [`c61a0a5`](https://github.com/mastra-ai/mastra/commit/c61a0a5de4904c88fd8b3718bc26d1be1c2ec6e7), [`69136e7`](https://github.com/mastra-ai/mastra/commit/69136e748e32f57297728a4e0f9a75988462f1a7), [`449aed2`](https://github.com/mastra-ai/mastra/commit/449aed2ba9d507b75bf93d427646ea94f734dfd1), [`eb648a2`](https://github.com/mastra-ai/mastra/commit/eb648a2cc1728f7678768dd70cd77619b448dab9), [`0131105`](https://github.com/mastra-ai/mastra/commit/0131105532e83bdcbb73352fc7d0879eebf140dc), [`9d5059e`](https://github.com/mastra-ai/mastra/commit/9d5059eae810829935fb08e81a9bb7ecd5b144a7), [`ef756c6`](https://github.com/mastra-ai/mastra/commit/ef756c65f82d16531c43f49a27290a416611e526), [`b00ccd3`](https://github.com/mastra-ai/mastra/commit/b00ccd325ebd5d9e37e34dd0a105caae67eb568f), [`3bdfa75`](https://github.com/mastra-ai/mastra/commit/3bdfa7507a91db66f176ba8221aa28dd546e464a), [`e770de9`](https://github.com/mastra-ai/mastra/commit/e770de941a287a49b1964d44db5a5763d19890a6), [`52e2716`](https://github.com/mastra-ai/mastra/commit/52e2716b42df6eff443de72360ae83e86ec23993), [`27b4040`](https://github.com/mastra-ai/mastra/commit/27b4040bfa1a95d92546f420a02a626b1419a1d6), [`610a70b`](https://github.com/mastra-ai/mastra/commit/610a70bdad282079f0c630e0d7bb284578f20151), [`8dc7f55`](https://github.com/mastra-ai/mastra/commit/8dc7f55900395771da851dc7d78d53ae84fe34ec), [`8379099`](https://github.com/mastra-ai/mastra/commit/8379099fc467af6bef54dd7f80c9bd75bf8bbddf), [`8c0ec25`](https://github.com/mastra-ai/mastra/commit/8c0ec25646c8a7df253ed1e5ff4863a0d3f1316c), [`ff4d9a6`](https://github.com/mastra-ai/mastra/commit/ff4d9a6704fc87b31a380a76ed22736fdedbba5a), [`69821ef`](https://github.com/mastra-ai/mastra/commit/69821ef806482e2c44e2197ac0b050c3fe3a5285), [`1ed5716`](https://github.com/mastra-ai/mastra/commit/1ed5716830867b3774c4a1b43cc0d82935f32b96), [`4186bdd`](https://github.com/mastra-ai/mastra/commit/4186bdd00731305726fa06adba0b076a1d50b49f), [`7aaf973`](https://github.com/mastra-ai/mastra/commit/7aaf973f83fbbe9521f1f9e7a4fd99b8de464617)]:
41
+ - @mastra/core@1.0.0-beta.22
42
+
3
43
  ## 1.0.0-beta.12
4
44
 
5
45
  ### Patch Changes
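
The beta.13 type refactor described above can be pictured with a minimal sketch. These are hypothetical, simplified types written for illustration, not Mastra's actual definitions:

```typescript
// Hypothetical simplification of the beta.13 change (not Mastra's real types).
//
// Before: generics were constrained to Zod schemas and unwrapped with z.infer:
//   interface ToolCtx<TSuspendSchema extends z.ZodType<any>> {
//     suspend: (payload: z.infer<TSuspendSchema>) => z.infer<TSuspendSchema>;
//   }
//
// After: a plain generic carries the payload type directly, so any schema
// library (or none) can supply it.
interface ToolCtx<TSuspend> {
  suspend: (payload: TSuspend) => TSuspend;
}

// Identity implementation, just to show the payload type flowing through.
const ctx: ToolCtx<{ reason: string }> = {
  suspend: (payload) => payload,
};

console.log(ctx.suspend({ reason: "Approval required." }).reason);
// prints "Approval required."
```

The generic form is what allows default, evented, and inngest workflows to share one set of types while the Zod v4 migration proceeds.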
@@ -22,7 +22,7 @@ docs/
22
22
  ├── SKILL.md # Entry point
23
23
  ├── README.md # This file
24
24
  ├── SOURCE_MAP.json # Export index
25
- ├── agents/ (3 files)
25
+ ├── agents/ (4 files)
26
26
  ├── core/ (2 files)
27
27
  ├── memory/ (11 files)
28
28
  ├── processors/ (1 files)
@@ -33,4 +33,4 @@ docs/
33
33
  ## Version
34
34
 
35
35
  Package: @mastra/memory
36
- Version: 1.0.0-beta.12
36
+ Version: 1.0.0-beta.14
@@ -5,7 +5,7 @@ description: Documentation for @mastra/memory. Includes links to type definition
5
5
 
6
6
  # @mastra/memory Documentation
7
7
 
8
- > **Version**: 1.0.0-beta.12
8
+ > **Version**: 1.0.0-beta.14
9
9
  > **Package**: @mastra/memory
10
10
 
11
11
  ## Quick Navigation
@@ -34,7 +34,7 @@ See SOURCE_MAP.json for the complete list.
34
34
 
35
35
  ## Available Topics
36
36
 
37
- - [Agents](agents/) - 3 file(s)
37
+ - [Agents](agents/) - 4 file(s)
38
38
  - [Core](core/) - 2 file(s)
39
39
  - [Memory](memory/) - 11 file(s)
40
40
  - [Processors](processors/) - 1 file(s)
@@ -1,5 +1,5 @@
1
1
  {
2
- "version": "1.0.0-beta.12",
2
+ "version": "1.0.0-beta.14",
3
3
  "package": "@mastra/memory",
4
4
  "exports": {
5
5
  "extractWorkingMemoryContent": {
@@ -90,6 +90,9 @@ export const memoryAgent = new Agent({
90
90
  });
91
91
  ```
92
92
 
93
+ > **Mastra Cloud Store limitation**
94
+ > Agent-level storage is not supported when using [Mastra Cloud Store](https://mastra.ai/docs/v1/mastra-cloud/deployment#using-mastra-cloud-store). If you use Mastra Cloud Store, configure storage on the Mastra instance instead. This limitation does not apply if you bring your own database.
95
+
93
96
  ## Message history
94
97
 
95
98
  Include a `memory` object with both `resource` and `thread` to track message history during agent calls.
@@ -85,8 +85,8 @@ export const testTool = createTool({
85
85
  resumeSchema: z.object({
86
86
  approved: z.boolean()
87
87
  }),
88
- execute: async ({ location }) => {
89
- const response = await fetch(`https://wttr.in/${location}?format=3`);
88
+ execute: async (inputData) => {
89
+ const response = await fetch(`https://wttr.in/${inputData.location}?format=3`);
90
90
  const weather = await response.text();
91
91
 
92
92
  return { weather };
@@ -121,7 +121,6 @@ const handleResume = async () => {
121
121
  With this approach, neither the agent nor the tool uses `requireApproval`. Instead, the tool implementation calls `suspend` to pause execution and return context or confirmation prompts to the user.
122
122
 
123
123
  ```typescript
124
-
125
124
  export const testToolB = createTool({
126
125
  id: "test-tool-b",
127
126
  description: "Fetches weather for a location",
@@ -137,14 +136,14 @@ export const testToolB = createTool({
137
136
  suspendSchema: z.object({
138
137
  reason: z.string()
139
138
  }),
140
- execute: async ({ location }, { agent } = {}) => {
141
- const { resumeData: { approved } = {}, suspend } = agent ?? {};
139
+ execute: async (inputData, context) => {
140
+ const { resumeData: { approved } = {}, suspend } = context?.agent ?? {};
142
141
 
143
142
  if (!approved) {
144
143
  return suspend?.({ reason: "Approval required." });
145
144
  }
146
145
 
147
- const response = await fetch(`https://wttr.in/${location}?format=3`);
146
+ const response = await fetch(`https://wttr.in/${inputData.location}?format=3`);
148
147
  const weather = await response.text();
149
148
 
150
149
  return { weather };
@@ -0,0 +1,274 @@
1
+ > Learn how to require approvals, suspend execution, and resume suspended networks while keeping humans in control of agent network workflows.
2
+
3
+ # Network Approval
4
+
5
+ Agent networks can require the same [human-in-the-loop](https://mastra.ai/docs/v1/workflows/human-in-the-loop) oversight used in individual agents and workflows. When a tool, sub-agent, or workflow within a network requires approval or suspends execution, the network pauses and emits events that allow your application to collect user input before resuming.
6
+
7
+ ## Storage
8
+
9
+ Network approval uses snapshots to capture execution state. Ensure you've enabled a storage provider in your Mastra instance. If storage isn't enabled, you'll see a "snapshot not found" error.
10
+
11
+ ```typescript title="src/mastra/index.ts"
12
+ import { Mastra } from "@mastra/core/mastra";
13
+ import { LibSQLStore } from "@mastra/libsql";
14
+
15
+ export const mastra = new Mastra({
16
+ storage: new LibSQLStore({
17
+ id: "mastra-storage",
18
+ url: ":memory:"
19
+ })
20
+ });
21
+ ```
22
+
23
+ ## Approving network tool calls
24
+
25
+ When a tool within a network has `requireApproval: true`, the network stream emits an `agent-execution-approval` chunk and pauses. To allow the tool to execute, call `approveNetworkToolCall` with the `runId`.
26
+
27
+ ```typescript
28
+ const stream = await routingAgent.network("Process this query", {
29
+ memory: {
30
+ thread: "user-123",
31
+ resource: "my-app"
32
+ }
33
+ });
34
+
35
+ let runId: string;
36
+
37
+ for await (const chunk of stream) {
38
+ runId = stream.runId;
39
+ // if requireApproval is set on a tool inside a subAgent, or the subAgent has requireToolApproval set to true
40
+ if (chunk.type === "agent-execution-approval") {
41
+ console.log("Tool requires approval:", chunk.payload);
42
+ }
43
+
44
+ // if requireApproval is set on a tool directly on the network agent
45
+ if (chunk.type === "tool-execution-approval") {
46
+ console.log("Tool requires approval:", chunk.payload);
47
+ }
48
+ }
49
+
50
+ // Approve and resume execution
51
+ const approvedStream = await routingAgent.approveNetworkToolCall({
52
+ runId,
53
+ memory: {
54
+ thread: "user-123",
55
+ resource: "my-app"
56
+ }
57
+ });
58
+
59
+ for await (const chunk of approvedStream) {
60
+ if (chunk.type === "network-execution-event-step-finish") {
61
+ console.log(chunk.payload.result);
62
+ }
63
+ }
64
+ ```
65
+
66
+ ## Declining network tool calls
67
+
68
+ To decline a pending tool call and prevent execution, call `declineNetworkToolCall`. The network continues without executing the tool.
69
+
70
+ ```typescript
71
+ const declinedStream = await routingAgent.declineNetworkToolCall({
72
+ runId,
73
+ memory: {
74
+ thread: "user-123",
75
+ resource: "my-app"
76
+ }
77
+ });
78
+
79
+ for await (const chunk of declinedStream) {
80
+ if (chunk.type === "network-execution-event-step-finish") {
81
+ console.log(chunk.payload.result);
82
+ }
83
+ }
84
+ ```
85
+
86
+ ## Resuming suspended networks
87
+
88
+ When a primitive in the network calls `suspend()`, the stream emits an `agent-execution-suspended`/`tool-execution-suspended`/`workflow-execution-suspended` chunk with a `suspendPayload` containing context from the primitive. Use `resumeNetwork` to provide the data requested by the primitive and continue execution.
89
+
90
+ ```typescript
91
+ import { createTool } from "@mastra/core/tools";
92
+ import { z } from "zod";
93
+
94
+ const confirmationTool = createTool({
95
+ id: "confirmation-tool",
96
+ description: "Requests user confirmation before proceeding",
97
+ inputSchema: z.object({
98
+ action: z.string()
99
+ }),
100
+ outputSchema: z.object({
101
+ confirmed: z.boolean(),
102
+ action: z.string()
103
+ }),
104
+ suspendSchema: z.object({
105
+ message: z.string(),
106
+ action: z.string()
107
+ }),
108
+ resumeSchema: z.object({
109
+ confirmed: z.boolean()
110
+ }),
111
+ execute: async (inputData, context) => {
112
+ const { resumeData, suspend } = context?.agent ?? {};
113
+
114
+ if (!resumeData?.confirmed) {
115
+ return suspend?.({
116
+ message: `Please confirm: ${inputData.action}`,
117
+ action: inputData.action
118
+ });
119
+ }
120
+
121
+ return { confirmed: true, action: inputData.action };
122
+ }
123
+ });
124
+ ```
125
+
126
+ Handle the suspension and resume with user-provided data:
127
+
128
+ ```typescript
129
+ const stream = await routingAgent.network("Delete the old records", {
130
+ memory: {
131
+ thread: "user-123",
132
+ resource: "my-app"
133
+ }
134
+ });
135
+
136
+ for await (const chunk of stream) {
137
+ if (chunk.type === "workflow-execution-suspended") {
138
+ console.log(chunk.payload.suspendPayload);
139
+ // { message: "Please confirm: delete old records", action: "delete old records" }
140
+ }
141
+ }
142
+
143
+ // Resume with user confirmation
144
+ const resumedStream = await routingAgent.resumeNetwork(
145
+ { confirmed: true },
146
+ {
147
+ runId: stream.runId,
148
+ memory: {
149
+ thread: "user-123",
150
+ resource: "my-app"
151
+ }
152
+ }
153
+ );
154
+
155
+ for await (const chunk of resumedStream) {
156
+ if (chunk.type === "network-execution-event-step-finish") {
157
+ console.log(chunk.payload.result);
158
+ }
159
+ }
160
+ ```
161
+
162
+ ## Automatic primitive resumption
163
+
164
+ When using primitives that call `suspend()`, you can enable automatic resumption so the network resumes suspended primitives based on the user's next message. This creates a conversational flow where users provide the required information naturally.
165
+
166
+ ### Enabling auto-resume
167
+
168
+ Set `autoResumeSuspendedTools` to `true` in the agent's `defaultNetworkOptions` or when calling `network()`:
169
+
170
+ ```typescript
171
+ import { Agent } from "@mastra/core/agent";
172
+ import { Memory } from "@mastra/memory";
173
+
174
+ // Option 1: In agent configuration
175
+ const routingAgent = new Agent({
176
+ id: "routing-agent",
177
+ name: "Routing Agent",
178
+ instructions: "You coordinate tasks across multiple agents",
179
+ model: "openai/gpt-4o-mini",
180
+ tools: { confirmationTool },
181
+ memory: new Memory(),
182
+ defaultNetworkOptions: {
183
+ autoResumeSuspendedTools: true,
184
+ },
185
+ });
186
+
187
+ // Option 2: Per-request
188
+ const stream = await routingAgent.network("Process this request", {
189
+ autoResumeSuspendedTools: true,
190
+ memory: {
191
+ thread: "user-123",
192
+ resource: "my-app"
193
+ }
194
+ });
195
+ ```
196
+
197
+ ### How it works
198
+
199
+ When `autoResumeSuspendedTools` is enabled:
200
+
201
+ 1. A primitive suspends execution by calling `suspend()` with a payload
202
+ 2. The suspension is persisted to memory along with the conversation
203
+ 3. When the user sends their next message on the same thread, the network:
204
+ - Detects the suspended primitive from message history
205
+ - Extracts `resumeData` from the user's message based on the tool's `resumeSchema`
206
+ - Automatically resumes the primitive with the extracted data
207
+
208
+ ### Example
209
+
210
+ ```typescript
211
+ const stream = await routingAgent.network("Delete the old records", {
212
+ autoResumeSuspendedTools: true,
213
+ memory: {
214
+ thread: "user-123",
215
+ resource: "my-app"
216
+ }
217
+ });
218
+
219
+ for await (const chunk of stream) {
220
+ if (chunk.type === "workflow-execution-suspended") {
221
+ console.log(chunk.payload.suspendPayload);
222
+ // { message: "Please confirm: delete old records", action: "delete old records" }
223
+ }
224
+ }
225
+
226
+ // User provides confirmation in their next message
227
+ const resumedStream = await routingAgent.network("Yes, confirmed", {
228
+ autoResumeSuspendedTools: true,
229
+ memory: {
230
+ thread: "user-123",
231
+ resource: "my-app"
232
+ }
233
+ });
234
+
235
+ for await (const chunk of resumedStream) {
236
+ if (chunk.type === "network-execution-event-step-finish") {
237
+ console.log(chunk.payload.result);
238
+ }
239
+ }
240
+ ```
241
+
242
+ **Conversation flow:**
243
+
244
+ ```
245
+ User: "Delete the old records"
246
+ Agent: "Please confirm: delete old records"
247
+
248
+ User: "Yes, confirmed"
249
+ Agent: "Records deleted successfully"
250
+ ```
251
+
252
+ ### Requirements
253
+
254
+ For automatic tool resumption to work:
255
+
256
+ - **Memory configured**: The agent needs memory to track suspended tools across messages
257
+ - **Same thread**: The follow-up message must use the same memory thread and resource identifiers
258
+ - **`resumeSchema` defined**: The suspended tool (whether defined directly on the network agent or in a subAgent) or the suspended workflow step must define a `resumeSchema` so the agent knows what data to extract from the user's message
259
+
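
The `resumeSchema` requirement can be illustrated with a toy extractor. This is a hypothetical sketch: Mastra's actual extraction is model-driven, mapping the user's message onto the schema, not pattern matching:

```typescript
// Toy illustration of mapping a user message onto resumeSchema { confirmed: boolean }.
// Hypothetical: Mastra's real extraction uses the model, not a regex.
function extractResumeData(message: string): { confirmed: boolean } | null {
  if (/\b(yes|confirm|confirmed|approve|approved)\b/i.test(message)) {
    return { confirmed: true };
  }
  if (/\b(no|cancel|decline|declined)\b/i.test(message)) {
    return { confirmed: false };
  }
  return null; // nothing in the message matched the schema
}

console.log(extractResumeData("Yes, confirmed")); // { confirmed: true }
```

When extraction yields `null`, there is nothing matching the schema to resume with, which is why a follow-up message unrelated to the suspension does not trigger resumption.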
260
+ ### Manual vs automatic resumption
261
+
262
+ | Approach | Use case |
263
+ |----------|----------|
264
+ | Manual (`resumeNetwork()`) | Programmatic control, webhooks, button clicks, external triggers |
265
+ | Automatic (`autoResumeSuspendedTools`) | Conversational flows where users provide resume data in natural language |
266
+
267
+ Both approaches work with the same tool definitions. Automatic resumption triggers only when suspended tools exist in the message history and the user sends a new message on the same thread.
268
+
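
The trigger condition above can be sketched as a predicate. This is a hypothetical shape for illustration; the real check lives inside Mastra's network runtime:

```typescript
// Hypothetical sketch of when auto-resume is attempted: the most recent tool
// event on the incoming thread must be a suspension that was never resumed.
interface ThreadEvent {
  kind: "message" | "tool-suspended" | "tool-resumed";
  thread: string;
}

function shouldAttemptAutoResume(history: ThreadEvent[], incomingThread: string): boolean {
  const lastToolEvent = [...history]
    .reverse()
    .find((e) => e.thread === incomingThread && e.kind !== "message");
  return lastToolEvent?.kind === "tool-suspended";
}

const history: ThreadEvent[] = [
  { kind: "message", thread: "user-123" },
  { kind: "tool-suspended", thread: "user-123" },
];

console.log(shouldAttemptAutoResume(history, "user-123")); // true
console.log(shouldAttemptAutoResume(history, "other-thread")); // false
```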
269
+ ## Related
270
+
271
+ - [Agent Networks](./networks)
272
+ - [Agent Approval](./agent-approval)
273
+ - [Human-in-the-Loop](https://mastra.ai/docs/v1/workflows/human-in-the-loop)
274
+ - [Agent Memory](./agent-memory)
@@ -35,7 +35,7 @@ import { Memory } from "@mastra/memory";
35
35
  import { LibSQLStore } from "@mastra/libsql";
36
36
 
37
37
  const conversationMemory = new Memory({
38
- storage: new LibSQLStore({ url: ":memory:" }),
38
+ storage: new LibSQLStore({ id: 'conversation-store', url: ":memory:" }),
39
39
  });
40
40
 
41
41
  const mastra = new Mastra({
@@ -87,12 +87,12 @@ import { LibSQLStore } from "@mastra/libsql";
87
87
 
88
88
  const conversationMemory = new Memory({
89
89
  id: "conversation-memory",
90
- storage: new LibSQLStore({ url: ":memory:" }),
90
+ storage: new LibSQLStore({ id: 'conversation-store', url: ":memory:" }),
91
91
  });
92
92
 
93
93
  const analyticsMemory = new Memory({
94
94
  id: "analytics-memory",
95
- storage: new LibSQLStore({ url: ":memory:" }),
95
+ storage: new LibSQLStore({ id: 'analytics-store', url: ":memory:" }),
96
96
  });
97
97
 
98
98
  const mastra = new Mastra({
@@ -4,11 +4,11 @@
4
4
 
5
5
  For Mastra to remember previous interactions, you must configure a storage adapter. Mastra is designed to work with your preferred database provider - choose from the [supported providers](#supported-providers) and pass it to your Mastra instance.
6
6
 
7
- ```typescript
7
+ ```typescript title="src/mastra/index.ts"
8
8
  import { Mastra } from "@mastra/core";
9
9
  import { LibSQLStore } from "@mastra/libsql";
10
10
 
11
- const mastra = new Mastra({
11
+ export const mastra = new Mastra({
12
12
  storage: new LibSQLStore({
13
13
  id: 'mastra-storage',
14
14
  url: "file:./mastra.db",
@@ -17,7 +17,7 @@ const mastra = new Mastra({
17
17
  ```
18
18
  On first interaction, Mastra automatically creates the necessary tables following the [core schema](https://mastra.ai/reference/v1/storage/overview#core-schema). This includes tables for messages, threads, resources, workflows, traces, and evaluation datasets.
19
19
 
20
- ## Supported Providers
20
+ ## Supported providers
21
21
 
22
22
  Each provider page includes installation instructions, configuration parameters, and usage examples:
23
23
 
@@ -35,19 +35,19 @@ Each provider page includes installation instructions, configuration parameters,
35
35
  > **Note:**
36
36
  > libSQL is the easiest way to get started because it doesn't require running a separate database server.
37
37
 
38
- ## Configuration Scope
38
+ ## Configuration scope
39
39
 
40
40
  You can configure storage at two different scopes:
41
41
 
42
42
  ### Instance-level storage
43
43
 
44
- Add storage to your Mastra instance so all agents share the same memory provider:
44
+ Add storage to your Mastra instance so all agents, workflows, observability traces, and scores share the same memory provider:
45
45
 
46
- ```typescript
46
+ ```typescript
47
47
  import { Mastra } from "@mastra/core";
48
48
  import { PostgresStore } from "@mastra/pg";
49
49
 
50
- const mastra = new Mastra({
50
+ export const mastra = new Mastra({
51
51
  storage: new PostgresStore({
52
52
  id: 'mastra-storage',
53
53
  connectionString: process.env.DATABASE_URL,
@@ -55,20 +55,55 @@ const mastra = new Mastra({
55
55
  });
56
56
 
57
57
  // All agents automatically use this storage
58
- const agent1 = new Agent({ memory: new Memory() });
59
- const agent2 = new Agent({ memory: new Memory() });
58
+ const agent1 = new Agent({ id: "agent-1", memory: new Memory() });
59
+ const agent2 = new Agent({ id: "agent-2", memory: new Memory() });
60
60
  ```
61
61
 
62
+ This is useful when all primitives share the same storage backend and have similar performance, scaling, and operational requirements.
63
+
64
+ #### Composite storage
65
+
66
+ Add storage to your Mastra instance using `MastraStorage` and configure individual storage domains to use different storage providers.
67
+
68
+ ```typescript title="src/mastra/index.ts"
69
+ import { Mastra } from "@mastra/core";
70
+ import { MastraStorage } from "@mastra/core/storage";
71
+ import { MemoryLibSQL } from "@mastra/libsql";
72
+ import { WorkflowsPG } from "@mastra/pg";
73
+ import { ObservabilityStorageClickhouse } from "@mastra/clickhouse";
74
+
75
+ export const mastra = new Mastra({
76
+ storage: new MastraStorage({
77
+ id: "composite",
78
+ domains: {
79
+ memory: new MemoryLibSQL({ url: "file:./memory.db" }),
80
+ workflows: new WorkflowsPG({ connectionString: process.env.DATABASE_URL }),
81
+ observability: new ObservabilityStorageClickhouse({
82
+ url: process.env.CLICKHOUSE_URL,
83
+ username: process.env.CLICKHOUSE_USERNAME,
84
+ password: process.env.CLICKHOUSE_PASSWORD,
85
+ }),
86
+ },
87
+ }),
88
+ });
89
+ ```
90
+
91
+ This is useful when different types of data have different performance or operational requirements, such as low-latency storage for memory, durable storage for workflows, and high-throughput storage for observability.
92
+
93
+ > **Note:**
94
+ See [Storage Domains](https://mastra.ai/reference/v1/storage/composite#storage-domains) for more information.
95
+
62
96
  ### Agent-level storage
63
97
 
64
- Add storage to a specific agent when you need data boundaries or compliance requirements:
98
+ Agent-level storage overrides storage configured at the instance level. Add storage to a specific agent when you need data boundaries or compliance requirements:
65
99
 
66
- ```typescript
100
+ ```typescript title="src/mastra/agents/memory-agent.ts"
67
101
  import { Agent } from "@mastra/core/agent";
68
102
  import { Memory } from "@mastra/memory";
69
103
  import { PostgresStore } from "@mastra/pg";
70
104
 
71
- const agent = new Agent({
105
+ export const agent = new Agent({
106
+ id: "agent",
72
107
  memory: new Memory({
73
108
  storage: new PostgresStore({
74
109
  id: 'agent-storage',
@@ -80,7 +115,10 @@ const agent = new Agent({
80
115
 
81
116
  This is useful when different agents need to store data in separate databases for security, compliance, or organizational reasons.
82
117
 
83
- ## Threads and Resources
118
+ > **Mastra Cloud Store limitation**
119
+ > Agent-level storage is not supported when using [Mastra Cloud Store](https://mastra.ai/docs/v1/mastra-cloud/deployment#using-mastra-cloud-store). If you use Mastra Cloud Store, configure storage on the Mastra instance instead. This limitation does not apply if you bring your own database.
120
+
121
+ ## Threads and resources
84
122
 
85
123
  Mastra organizes memory into threads using two identifiers:
86
124
 
@@ -89,7 +127,7 @@ Mastra organizes memory into threads using two identifiers:
89
127
 
90
128
  Both identifiers are required for agents to store and recall information:
91
129
 
92
- ```typescript
130
+ ```typescript
93
131
  const stream = await agent.stream("message for agent", {
94
132
  memory: {
95
133
  thread: "convo_123",
@@ -107,8 +145,9 @@ Mastra can automatically generate descriptive thread titles based on the user's
107
145
 
108
146
  Use this option when implementing a ChatGPT-style chat interface to render a title alongside each thread in the conversation list (for example, in a sidebar) derived from the thread’s initial user message.
109
147
 
110
- ```typescript
148
+ ```typescript
111
149
  export const testAgent = new Agent({
150
+ id: "test-agent",
112
151
  memory: new Memory({
113
152
  options: {
114
153
  generateTitle: true,
@@ -123,13 +162,12 @@ To optimize cost or behavior, provide a smaller `model` and custom `instructions
123
162
 
124
163
  ```typescript
125
164
  export const testAgent = new Agent({
165
+ id: "test-agent",
126
166
  memory: new Memory({
127
167
  options: {
128
- threads: {
129
- generateTitle: {
130
- model: "openai/gpt-4o-mini",
131
- instructions: "Generate a concise title based on the user's first message",
132
- },
168
+ generateTitle: {
169
+ model: "openai/gpt-4o-mini",
170
+ instructions: "Generate a concise title based on the user's first message",
133
171
  },
134
172
  },
135
173
  }),
@@ -142,7 +180,7 @@ Semantic recall uses vector embeddings to retrieve relevant past messages based
142
180
 
143
181
  The vector database doesn't have to be the same as your storage provider. For example, you might use PostgreSQL for storage and Pinecone for vectors:
144
182
 
145
- ```typescript
183
+ ```typescript
146
184
  import { Mastra } from "@mastra/core";
147
185
  import { Agent } from "@mastra/core/agent";
148
186
  import { Memory } from "@mastra/memory";
@@ -150,7 +188,7 @@ import { PostgresStore } from "@mastra/pg";
150
188
  import { PineconeVector } from "@mastra/pinecone";
151
189
 
152
190
  // Instance-level vector configuration
153
- const mastra = new Mastra({
191
+ export const mastra = new Mastra({
154
192
  storage: new PostgresStore({
155
193
  id: 'mastra-storage',
156
194
  connectionString: process.env.DATABASE_URL,
@@ -158,13 +196,12 @@ const mastra = new Mastra({
158
196
  });
159
197
 
160
198
  // Agent-level vector configuration
161
- const agent = new Agent({
199
+ export const agent = new Agent({
200
+ id: "agent",
162
201
  memory: new Memory({
163
202
  vector: new PineconeVector({
164
203
  id: 'agent-vector',
165
204
  apiKey: process.env.PINECONE_API_KEY,
166
- environment: process.env.PINECONE_ENVIRONMENT,
167
- indexName: 'agent-embeddings',
168
205
  }),
169
206
  options: {
170
207
  semanticRecall: {
@@ -80,13 +80,15 @@ const memory = new Memory({
80
80
 
81
81
  ### Usage with Agents
82
82
 
83
- When using resource-scoped memory, make sure to pass the `resourceId` parameter:
83
+ When using resource-scoped memory, make sure to pass the `resource` parameter in the memory options:
84
84
 
85
85
  ```typescript
86
- // Resource-scoped memory requires resourceId
86
+ // Resource-scoped memory requires resource
87
87
  const response = await agent.generate("Hello!", {
88
- threadId: "conversation-123",
89
- resourceId: "user-alice-456", // Same user across different threads
88
+ memory: {
89
+ thread: "conversation-123",
90
+ resource: "user-alice-456", // Same user across different threads
91
+ },
90
92
  });
91
93
  ```
92
94
 
@@ -339,8 +341,10 @@ const thread = await memory.createThread({
339
341
 
340
342
  // The agent will now have access to this information in all messages
341
343
  await agent.generate("What's my blood type?", {
342
- threadId: thread.id,
343
- resourceId: "user-456",
344
+ memory: {
345
+ thread: thread.id,
346
+ resource: "user-456",
347
+ },
344
348
  });
345
349
  // Response: "Your blood type is O+."
346
350
  ```
@@ -56,7 +56,7 @@ const agent = new Agent({
56
56
  // this is the default vector db if omitted
57
57
  vector: new LibSQLVector({
58
58
  id: 'agent-vector',
59
- connectionUrl: "file:./local.db",
59
+ url: "file:./local.db",
60
60
  }),
61
61
  }),
62
62
  });
@@ -230,6 +230,4 @@ You might want to disable semantic recall in scenarios like:
230
230
 
231
231
  ## Viewing Recalled Messages
232
232
 
233
- When tracing is enabled, any messages retrieved via semantic recall will appear in the agent's trace output, alongside recent message history (if configured).
234
-
235
- For more info on viewing message traces, see [Viewing Retrieved Messages](./overview#viewing-retrieved-messages).
233
+ When tracing is enabled, any messages retrieved via semantic recall will appear in the agent's trace output, alongside recent message history (if configured).