@mastra/libsql 1.2.0-alpha.0 → 1.3.0-alpha.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (46)
  1. package/CHANGELOG.md +176 -0
  2. package/dist/docs/SKILL.md +36 -26
  3. package/dist/docs/{SOURCE_MAP.json → assets/SOURCE_MAP.json} +1 -1
  4. package/dist/docs/{agents/03-agent-approval.md → references/docs-agents-agent-approval.md} +19 -19
  5. package/dist/docs/references/docs-agents-agent-memory.md +212 -0
  6. package/dist/docs/{agents/04-network-approval.md → references/docs-agents-network-approval.md} +13 -12
  7. package/dist/docs/{agents/02-networks.md → references/docs-agents-networks.md} +10 -12
  8. package/dist/docs/{memory/06-memory-processors.md → references/docs-memory-memory-processors.md} +6 -8
  9. package/dist/docs/{memory/03-message-history.md → references/docs-memory-message-history.md} +31 -20
  10. package/dist/docs/{memory/01-overview.md → references/docs-memory-overview.md} +8 -8
  11. package/dist/docs/{memory/05-semantic-recall.md → references/docs-memory-semantic-recall.md} +33 -17
  12. package/dist/docs/{memory/02-storage.md → references/docs-memory-storage.md} +29 -39
  13. package/dist/docs/{memory/04-working-memory.md → references/docs-memory-working-memory.md} +16 -27
  14. package/dist/docs/{observability/01-overview.md → references/docs-observability-overview.md} +4 -7
  15. package/dist/docs/{observability/02-default.md → references/docs-observability-tracing-exporters-default.md} +11 -14
  16. package/dist/docs/{rag/01-retrieval.md → references/docs-rag-retrieval.md} +26 -53
  17. package/dist/docs/{workflows/01-snapshots.md → references/docs-workflows-snapshots.md} +3 -5
  18. package/dist/docs/{guides/01-ai-sdk.md → references/guides-agent-frameworks-ai-sdk.md} +25 -9
  19. package/dist/docs/references/reference-core-getMemory.md +50 -0
  20. package/dist/docs/references/reference-core-listMemory.md +56 -0
  21. package/dist/docs/references/reference-core-mastra-class.md +66 -0
  22. package/dist/docs/{memory/07-reference.md → references/reference-memory-memory-class.md} +28 -14
  23. package/dist/docs/references/reference-storage-composite.md +235 -0
  24. package/dist/docs/references/reference-storage-dynamodb.md +282 -0
  25. package/dist/docs/references/reference-storage-libsql.md +135 -0
  26. package/dist/docs/{vectors/01-reference.md → references/reference-vectors-libsql.md} +105 -13
  27. package/dist/index.cjs +1676 -194
  28. package/dist/index.cjs.map +1 -1
  29. package/dist/index.js +1676 -196
  30. package/dist/index.js.map +1 -1
  31. package/dist/storage/db/index.d.ts.map +1 -1
  32. package/dist/storage/domains/agents/index.d.ts +9 -12
  33. package/dist/storage/domains/agents/index.d.ts.map +1 -1
  34. package/dist/storage/domains/memory/index.d.ts +7 -1
  35. package/dist/storage/domains/memory/index.d.ts.map +1 -1
  36. package/dist/storage/domains/prompt-blocks/index.d.ts +25 -0
  37. package/dist/storage/domains/prompt-blocks/index.d.ts.map +1 -0
  38. package/dist/storage/domains/scorer-definitions/index.d.ts +26 -0
  39. package/dist/storage/domains/scorer-definitions/index.d.ts.map +1 -0
  40. package/dist/storage/index.d.ts +3 -1
  41. package/dist/storage/index.d.ts.map +1 -1
  42. package/package.json +5 -6
  43. package/dist/docs/README.md +0 -39
  44. package/dist/docs/agents/01-agent-memory.md +0 -166
  45. package/dist/docs/core/01-reference.md +0 -151
  46. package/dist/docs/storage/01-reference.md +0 -556
@@ -1,8 +1,6 @@
- > Learn how to coordinate multiple agents, workflows, and tools using agent networks for complex, non-deterministic task execution.
-
  # Agent Networks
 
- Agent networks in Mastra coordinate multiple agents, workflows, and tools to handle tasks that aren't clearly defined upfront but can be inferred from the user's message or context. A top-level **routing agent** (a Mastra agent with other agents, workflows, and tools configured) uses an LLM to interpret the request and decide which primitives (sub-agents, workflows, or tools) to call, in what order, and with what data.
+ Agent networks in Mastra coordinate multiple agents, workflows, and tools to handle tasks that aren't clearly defined upfront but can be inferred from the user's message or context. A top-level **routing agent** (a Mastra agent with other agents, workflows, and tools configured) uses an LLM to interpret the request and decide which primitives (subagents, workflows, or tools) to call, in what order, and with what data.
 
  ## When to use networks
 
@@ -18,9 +16,9 @@ Mastra agent networks operate using these principles:
 
  ## Creating an agent network
 
- An agent network is built around a top-level routing agent that delegates tasks to agents, workflows, and tools defined in its configuration. Memory is configured on the routing agent using the `memory` option, and `instructions` define the agent's routing behavior.
+ An agent network is built around a top-level routing agent that delegates tasks to subagents, workflows, and tools defined in its configuration. Memory is configured on the routing agent using the `memory` option, and `instructions` define the agent's routing behavior.
 
- ```typescript {22-23,26,29} title="src/mastra/agents/routing-agent.ts"
+ ```typescript
  import { Agent } from "@mastra/core/agent";
  import { Memory } from "@mastra/memory";
  import { LibSQLStore } from "@mastra/libsql";
@@ -66,9 +64,9 @@ When configuring a Mastra agent network, each primitive (agent, workflow, or too
 
  #### Agent descriptions
 
- Each agent in a network should include a clear `description` that explains what the agent does.
+ Each subagent in a network should include a clear `description` that explains what the agent does.
 
- ```typescript title="src/mastra/agents/research-agent.ts"
+ ```typescript
  export const researchAgent = new Agent({
  id: "research-agent",
  name: "Research Agent",
@@ -78,7 +76,7 @@ export const researchAgent = new Agent({
  });
  ```
 
- ```typescript title="src/mastra/agents/writing-agent.ts"
+ ```typescript
  export const writingAgent = new Agent({
  id: "writing-agent",
  name: "Writing Agent",
@@ -92,7 +90,7 @@ export const writingAgent = new Agent({
 
  Workflows in a network should include a `description` to explain their purpose, along with `inputSchema` and `outputSchema` to describe the expected data.
 
- ```typescript title="src/mastra/workflows/city-workflow.ts"
+ ```typescript
  export const cityWorkflow = createWorkflow({
  id: "city-workflow",
  description: `This workflow handles city-specific research tasks.
@@ -112,7 +110,7 @@ export const cityWorkflow = createWorkflow({
 
  Tools in a network should include a `description` to explain their purpose, along with `inputSchema` and `outputSchema` to describe the expected data.
 
- ```typescript title="src/mastra/tools/weather-tool.ts"
+ ```typescript
  export const weatherTool = createTool({
  id: "weather-tool",
  description: ` Retrieves current weather information using the wttr.in API.
@@ -286,7 +284,7 @@ const final = await stream.object;
 
  ## Related
 
- - [Agent Memory](./agent-memory)
- - [Workflows Overview](../workflows/overview)
+ - [Agent Memory](https://mastra.ai/docs/agents/agent-memory)
+ - [Workflows Overview](https://mastra.ai/docs/workflows/overview)
  - [Request Context](https://mastra.ai/docs/server/request-context)
  - [Supervisor example](https://github.com/mastra-ai/mastra/tree/main/examples/supervisor-agent)
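Taken together, the hunks above only show fragments of the routing agent's configuration. As a rough composite sketch — the `agents`/`workflows`/`tools` option names and the model string are assumptions based on the prose ("a Mastra agent with other agents, workflows, and tools configured"), and the import paths are hypothetical since the diff strips the original file titles:

```typescript
// Sketch only — combines fragments visible in the hunks above; not verbatim package code.
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";
import { LibSQLStore } from "@mastra/libsql";

// These primitives are defined in the doc's other snippets; paths are hypothetical.
import { researchAgent } from "./research-agent";
import { writingAgent } from "./writing-agent";
import { cityWorkflow } from "../workflows/city-workflow";
import { weatherTool } from "../tools/weather-tool";

export const routingAgent = new Agent({
  id: "routing-agent",
  name: "Routing Agent",
  instructions:
    "Route research tasks to the research agent, then pass results to the writing agent.",
  model: "openai/gpt-4o", // assumed model id
  agents: { researchAgent, writingAgent }, // subagents the router can delegate to
  workflows: { cityWorkflow },
  tools: { weatherTool },
  memory: new Memory({
    storage: new LibSQLStore({ url: "file:./mastra.db" }),
  }),
});
```

The routing agent then picks among these primitives at runtime based on their `description` fields, which is why the doc stresses writing clear descriptions.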
@@ -1,5 +1,3 @@
- > Learn how to use memory processors in Mastra to filter, trim, and transform messages before they
-
  # Memory Processors
 
  Memory processors transform and filter messages as they pass through an agent with memory enabled. They manage context window limits, remove unnecessary content, and optimize the information sent to the language model.
@@ -200,7 +198,7 @@ Understanding the execution order is important when combining guardrails with me
 
  ### Input Processors
 
- ```
+ ```text
  [Memory Processors] → [Your inputProcessors]
  ```
 
@@ -211,7 +209,7 @@ This means memory loads message history before your processors can validate or f
 
  ### Output Processors
 
- ```
+ ```text
  [Your outputProcessors] → [Memory Processors]
  ```
 
@@ -302,10 +300,10 @@ const agent = new Agent({
 
  ### Summary
 
- | Guardrail Type | When it runs | If it aborts |
- | -------------- | ------------ | ------------ |
- | Input | After memory loads history | LLM not called, nothing saved |
- | Output | Before memory saves | Nothing saved to storage |
+ | Guardrail Type | When it runs | If it aborts |
+ | -------------- | -------------------------- | ----------------------------- |
+ | Input | After memory loads history | LLM not called, nothing saved |
+ | Output | Before memory saves | Nothing saved to storage |
 
  Both scenarios are safe - guardrails prevent inappropriate content from being persisted to memory
 
@@ -1,25 +1,42 @@
- > Learn how to configure message history in Mastra to store recent messages from the current conversation.
-
  # Message History
 
- Message history is the most basic and important form of memory. It gives the LLM a view of recent messages in the context window, enabling your agent to reference earlier exchanges and respond coherently.
+ Message history is the most basic and important form of memory. It gives the LLM a view of recent messages in the context window, enabling your agent to reference earlier exchanges and respond coherently.
 
  You can also retrieve message history to display past conversations in your UI.
 
- > **Note:**
- Each message belongs to a thread (the conversation) and a resource (the user or entity it's associated with). See [Threads and resources](https://mastra.ai/docs/memory/storage#threads-and-resources) for more detail.
+ > **Info:** Each message belongs to a thread (the conversation) and a resource (the user or entity it's associated with). See [Threads and resources](https://mastra.ai/docs/memory/storage) for more detail.
 
  ## Getting started
 
- Install the Mastra memory module along with a [storage adapter](https://mastra.ai/docs/memory/storage#supported-providers) for your database. The examples below use `@mastra/libsql`, which stores data locally in a `mastra.db` file.
+ Install the Mastra memory module along with a [storage adapter](https://mastra.ai/docs/memory/storage) for your database. The examples below use `@mastra/libsql`, which stores data locally in a `mastra.db` file.
+
+ **npm**:
 
- ```bash npm2yarn
+ ```bash
  npm install @mastra/memory@latest @mastra/libsql@latest
  ```
 
+ **pnpm**:
+
+ ```bash
+ pnpm add @mastra/memory@latest @mastra/libsql@latest
+ ```
+
+ **Yarn**:
+
+ ```bash
+ yarn add @mastra/memory@latest @mastra/libsql@latest
+ ```
+
+ **Bun**:
+
+ ```bash
+ bun add @mastra/memory@latest @mastra/libsql@latest
+ ```
+
  Message history requires a storage adapter to persist conversations. Configure storage on your Mastra instance if you haven't already:
 
- ```typescript title="src/mastra/index.ts"
+ ```typescript
  import { Mastra } from "@mastra/core";
  import { LibSQLStore } from "@mastra/libsql";
 
@@ -33,7 +50,7 @@ export const mastra = new Mastra({
 
  Give your agent a `Memory`:
 
- ```typescript title="src/mastra/agents/your-agent.ts"
+ ```typescript
  import { Memory } from "@mastra/memory";
  import { Agent } from "@mastra/core/agent";
 
@@ -49,7 +66,7 @@ export const agent = new Agent({
 
  When you call the agent, messages are automatically saved to the database. You can specify a `threadId`, `resourceId`, and optional `metadata`:
 
- **generate:**
+ **Generate**:
 
  ```typescript
  await agent.generate("Hello", {
@@ -64,8 +81,7 @@ await agent.generate("Hello", {
  });
  ```
 
-
- **stream:**
+ **Stream**:
 
  ```typescript
  await agent.stream("Hello", {
@@ -80,11 +96,7 @@ await agent.stream("Hello", {
  });
  ```
 
-
-
- > **Note:**
-
- Threads and messages are created automatically when you call `agent.generate()` or `agent.stream()`, but you can also create them manually with [`createThread()`](https://mastra.ai/reference/memory/createThread) and [`saveMessages()`](https://mastra.ai/reference/memory/memory-class).
+ > **Info:** Threads and messages are created automatically when you call `agent.generate()` or `agent.stream()`, but you can also create them manually with [`createThread()`](https://mastra.ai/reference/memory/createThread) and [`saveMessages()`](https://mastra.ai/reference/memory/memory-class).
 
  There are two ways to use this history:
 
@@ -106,8 +118,7 @@ The `Memory` instance gives you access to functions for listing threads, recalli
 
  Use these methods to fetch threads and messages for displaying conversation history in your UI or for custom memory retrieval logic.
 
- > **Note:**
- The memory system does not enforce access control. Before running any query, verify in your application logic that the current user is authorized to access the `resourceId` being queried.
+ > **Warning:** The memory system does not enforce access control. Before running any query, verify in your application logic that the current user is authorized to access the `resourceId` being queried.
 
  ### Threads
 
@@ -240,7 +251,7 @@ const { thread, clonedMessages } = await memory.cloneThread({
  });
  ```
 
- You can filter which messages get cloned (by count or date range), specify custom thread IDs, and use utility methods to inspect clone relationships.
+ You can filter which messages get cloned (by count or date range), specify custom thread IDs, and use utility methods to inspect clone relationships.
 
  See [`cloneThread()`](https://mastra.ai/reference/memory/cloneThread) and [clone utilities](https://mastra.ai/reference/memory/clone-utilities) for the full API.
 
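The `generate`/`stream` options objects are truncated in the hunks above; the surrounding prose says you can pass a `threadId`, `resourceId`, and optional `metadata`. One plausible shape of that call, with all identifiers hypothetical and the option structure an assumption rather than confirmed package API:

```typescript
// Sketch only: persisting a conversation under an explicit thread and resource.
// The diff truncates the real example; field nesting below is an assumption.
await agent.generate("Hello", {
  memory: {
    thread: { id: "thread-123", metadata: { topic: "onboarding" } }, // threadId + metadata
    resource: "user-456", // resourceId — the user or entity that owns the thread
  },
});
```

Check the linked message-history doc for the authoritative option names before relying on this shape.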
@@ -1,14 +1,13 @@
- > Learn how Mastra
-
  # Memory
 
  Memory enables your agent to remember user messages, agent replies, and tool results across interactions, giving it the context it needs to stay consistent, maintain conversation flow, and produce better answers over time.
 
- Mastra supports three complementary memory types:
+ Mastra supports four complementary memory types:
 
  - [**Message history**](https://mastra.ai/docs/memory/message-history) - keeps recent messages from the current conversation so they can be rendered in the UI and used to maintain short-term continuity within the exchange.
  - [**Working memory**](https://mastra.ai/docs/memory/working-memory) - stores persistent, structured user data such as names, preferences, and goals.
- - [**Semantic recall**](https://mastra.ai/docs/memory/semantic-recall) - retrieves relevant messages from older conversations based on semantic meaning rather than exact keywords, mirroring how humans recall information by association. Requires a [vector database](https://mastra.ai/docs/memory/semantic-recall#storage-configuration) and an [embedding model](https://mastra.ai/docs/memory/semantic-recall#embedder-configuration).
+ - [**Semantic recall**](https://mastra.ai/docs/memory/semantic-recall) - retrieves relevant messages from older conversations based on semantic meaning rather than exact keywords, mirroring how humans recall information by association. Requires a [vector database](https://mastra.ai/docs/memory/semantic-recall) and an [embedding model](https://mastra.ai/docs/memory/semantic-recall).
+ - [**Observational memory**](https://mastra.ai/docs/memory/observational-memory) - uses background Observer and Reflector agents to maintain a dense observation log that replaces raw message history as it grows, keeping the context window small while preserving long-term memory across conversations.
 
  If the combined memory exceeds the model's context limit, [memory processors](https://mastra.ai/docs/memory/memory-processors) can filter, trim, or prioritize content so the most relevant information is preserved.
 
@@ -19,12 +18,13 @@ Choose a memory option to get started:
  - [Message history](https://mastra.ai/docs/memory/message-history)
  - [Working memory](https://mastra.ai/docs/memory/working-memory)
  - [Semantic recall](https://mastra.ai/docs/memory/semantic-recall)
+ - [Observational memory](https://mastra.ai/docs/memory/observational-memory)
 
  ## Storage
 
- Before enabling memory, you must first configure a storage adapter. Mastra supports several databases including PostgreSQL, MongoDB, libSQL, and [more](https://mastra.ai/docs/memory/storage#supported-providers).
+ Before enabling memory, you must first configure a storage adapter. Mastra supports several databases including PostgreSQL, MongoDB, libSQL, and [more](https://mastra.ai/docs/memory/storage).
 
- Storage can be configured at the [instance level](https://mastra.ai/docs/memory/storage#instance-level-storage) (shared across all agents) or at the [agent level](https://mastra.ai/docs/memory/storage#agent-level-storage) (dedicated per agent).
+ Storage can be configured at the [instance level](https://mastra.ai/docs/memory/storage) (shared across all agents) or at the [agent level](https://mastra.ai/docs/memory/storage) (dedicated per agent).
 
  For semantic recall, you can use a separate vector database like Pinecone alongside your primary storage.
 
@@ -34,12 +34,12 @@ See the [Storage](https://mastra.ai/docs/memory/storage) documentation for confi
 
  When [tracing](https://mastra.ai/docs/observability/tracing/overview) is enabled, you can inspect exactly which messages the agent uses for context in each request. The trace output shows all memory included in the agent's context window - both recent message history and messages recalled via semantic recall.
 
- ![Trace output showing memory context included in an agent request](https://mastra.ai/_next/image?url=%2Ftracingafter.png&w=1920&q=75)
+ ![Trace output showing memory context included in an agent request](https://mastra.ai/_next/image?url=%2Ftracingafter.png\&w=1920\&q=75)
 
  This visibility helps you understand why an agent made specific decisions and verify that memory retrieval is working as expected.
 
  ## Next steps
 
  - Learn more about [Storage](https://mastra.ai/docs/memory/storage) providers and configuration options
- - Add [Message history](https://mastra.ai/docs/memory/message-history), [Working memory](https://mastra.ai/docs/memory/working-memory), or [Semantic recall](https://mastra.ai/docs/memory/semantic-recall)
+ - Add [Message history](https://mastra.ai/docs/memory/message-history), [Working memory](https://mastra.ai/docs/memory/working-memory), [Semantic recall](https://mastra.ai/docs/memory/semantic-recall), or [Observational memory](https://mastra.ai/docs/memory/observational-memory)
  - Visit [Memory configuration reference](https://mastra.ai/reference/memory/memory-class) for all available options
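The memory types in this overview are all configured through a single `Memory` instance. A minimal combined sketch, assuming the option names (`lastMessages`, `workingMemory`, `semanticRecall`) match the linked reference pages rather than quoting them from this diff:

```typescript
// Sketch only: option names are assumptions drawn from the memory types listed above.
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";

export const agent = new Agent({
  id: "agent",
  memory: new Memory({
    options: {
      lastMessages: 20,                // message history kept in the context window
      workingMemory: { enabled: true }, // persistent structured user data
      semanticRecall: { topK: 3 },      // needs a vector store + embedding model
    },
  }),
});
```

See the [Memory configuration reference](https://mastra.ai/reference/memory/memory-class) for the authoritative option list.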
@@ -1,20 +1,16 @@
- > Learn how to use semantic recall in Mastra to retrieve relevant messages from past conversations using vector search and embeddings.
-
  # Semantic Recall
 
  If you ask your friend what they did last weekend, they will search in their memory for events associated with "last weekend" and then tell you what they did. That's sort of like how semantic recall works in Mastra.
 
- > **Watch 📹**
-
- What semantic recall is, how it works, and how to configure it in Mastra → [YouTube (5 minutes)](https://youtu.be/UVZtK8cK8xQ)
+ > **Watch 📹:** What semantic recall is, how it works, and how to configure it in Mastra → [YouTube (5 minutes)](https://youtu.be/UVZtK8cK8xQ)
 
  ## How Semantic Recall Works
 
- Semantic recall is RAG-based search that helps agents maintain context across longer interactions when messages are no longer within [recent message history](./message-history).
+ Semantic recall is RAG-based search that helps agents maintain context across longer interactions when messages are no longer within [recent message history](https://mastra.ai/docs/memory/message-history).
 
  It uses vector embeddings of messages for similarity search, integrates with various vector stores, and has configurable context windows around retrieved messages.
 
- ![Diagram showing Mastra Memory semantic recall](/img/semantic-recall.png)
+ ![Diagram showing Mastra Memory semantic recall](/assets/images/semantic-recall-fd7b9336a6d0d18019216cb6d3dbe710.png)
 
  When it's enabled, new messages are used to query a vector DB for semantically similar messages.
 
@@ -24,7 +20,7 @@ After getting a response from the LLM, all new messages (user, assistant, and to
 
  Semantic recall is enabled by default, so if you give your agent memory it will be included:
 
- ```typescript {9}
+ ```typescript
  import { Agent } from "@mastra/core/agent";
  import { Memory } from "@mastra/memory";
 
@@ -64,7 +60,7 @@ const { messages: relevantMessages } = await memory!.recall({
 
  Semantic recall relies on a [storage and vector db](https://mastra.ai/reference/memory/memory-class) to store messages and their embeddings.
 
- ```ts {8-16}
+ ```ts
  import { Memory } from "@mastra/memory";
  import { Agent } from "@mastra/core/agent";
  import { LibSQLStore, LibSQLVector } from "@mastra/libsql";
@@ -113,7 +109,7 @@ The three main parameters that control semantic recall behavior are:
  2. **messageRange**: How much surrounding context to include with each match
  3. **scope**: Whether to search within the current thread or across all threads owned by a resource (the default is resource scope).
 
- ```typescript {5-7}
+ ```typescript
  const agent = new Agent({
  memory: new Memory({
  options: {
@@ -135,7 +131,7 @@ Semantic recall relies on an [embedding model](https://mastra.ai/reference/memor
 
  The simplest way is to use a `provider/model` string with autocomplete support:
 
- ```ts {7}
+ ```ts
  import { Memory } from "@mastra/memory";
  import { Agent } from "@mastra/core/agent";
  import { ModelRouterEmbeddingModel } from "@mastra/core/llm";
@@ -158,7 +154,7 @@ The model router automatically handles API key detection from environment variab
 
  You can also use AI SDK embedding models directly:
 
- ```ts {2,7}
+ ```ts
  import { Memory } from "@mastra/memory";
  import { Agent } from "@mastra/core/agent";
  import { ModelRouterEmbeddingModel } from "@mastra/core/llm";
@@ -174,13 +170,33 @@ const agent = new Agent({
 
  To use FastEmbed (a local embedding model), install `@mastra/fastembed`:
 
- ```bash npm2yarn
+ **npm**:
+
+ ```bash
  npm install @mastra/fastembed@latest
  ```
 
+ **pnpm**:
+
+ ```bash
+ pnpm add @mastra/fastembed@latest
+ ```
+
+ **Yarn**:
+
+ ```bash
+ yarn add @mastra/fastembed@latest
+ ```
+
+ **Bun**:
+
+ ```bash
+ bun add @mastra/fastembed@latest
+ ```
+
  Then configure it in your memory:
 
- ```ts {3,7}
+ ```ts
  import { Memory } from "@mastra/memory";
  import { Agent } from "@mastra/core/agent";
  import { fastembed } from "@mastra/fastembed";
@@ -198,7 +214,7 @@ When using PostgreSQL as your vector store, you can optimize semantic recall per
 
  PostgreSQL supports both IVFFlat and HNSW indexes. By default, Mastra creates an IVFFlat index, but HNSW indexes typically provide better performance, especially with OpenAI embeddings which use inner product distance.
 
- ```typescript {18-23}
+ ```typescript
  import { Memory } from "@mastra/memory";
  import { PgStore, PgVector } from "@mastra/pg";
 
@@ -228,7 +244,7 @@ const agent = new Agent({
  });
  ```
 
- For detailed information about index configuration options and performance tuning, see the [PgVector configuration guide](https://mastra.ai/reference/vectors/pg#index-configuration-guide).
+ For detailed information about index configuration options and performance tuning, see the [PgVector configuration guide](https://mastra.ai/reference/vectors/pg).
 
  ## Disabling
 
@@ -236,7 +252,7 @@ There is a performance impact to using semantic recall. New messages are convert
 
  Semantic recall is enabled by default but can be disabled when not needed:
 
- ```typescript {4}
+ ```typescript
  const agent = new Agent({
  memory: new Memory({
  options: {
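The hunks above truncate the configuration snippets, but the doc names the three tuning parameters explicitly. Putting them together in one sketch — the values are illustrative, not the package defaults:

```typescript
// Illustrative values for the three semantic recall parameters described above.
import { Agent } from "@mastra/core/agent";
import { Memory } from "@mastra/memory";

const agent = new Agent({
  id: "agent",
  memory: new Memory({
    options: {
      semanticRecall: {
        topK: 3,           // number of semantically similar messages to retrieve
        messageRange: 2,   // surrounding messages included with each match
        scope: "resource", // search all threads owned by the resource (the default)
      },
    },
  }),
});
```

Raising `topK` or `messageRange` pulls more recalled context into the prompt, which trades recall quality against context window size and embedding cost.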
@@ -1,10 +1,8 @@
- > Configure storage for Mastra
-
  # Storage
 
- For agents to remember previous interactions, Mastra needs a database. Use a storage adapter for one of the [supported databases](#supported-providers) and pass it to your Mastra instance.
+ For agents to remember previous interactions, Mastra needs a database. Use a storage adapter for one of the [supported databases](#supported-providers) and pass it to your Mastra instance.
 
- ```typescript title="src/mastra/index.ts"
+ ```typescript
  import { Mastra } from "@mastra/core";
  import { LibSQLStore } from "@mastra/libsql";
 
@@ -16,18 +14,17 @@ export const mastra = new Mastra({
  });
  ```
 
- > **Sharing the database with Mastra Studio**
- When running `mastra dev` alongside your application (e.g., Next.js), use an absolute path to ensure both processes access the same database:
-
- ```typescript
- url: "file:/absolute/path/to/your/project/mastra.db"
- ```
-
- Relative paths like `file:./mastra.db` resolve based on each process's working directory, which may differ.
+ > **Sharing the database with Mastra Studio:** When running `mastra dev` alongside your application (e.g., Next.js), use an absolute path to ensure both processes access the same database:
+ >
+ > ```typescript
+ > url: "file:/absolute/path/to/your/project/mastra.db"
+ > ```
+ >
+ > Relative paths like `file:./mastra.db` resolve based on each process's working directory, which may differ.
 
  This configures instance-level storage, which all agents share by default. You can also configure [agent-level storage](#agent-level-storage) for isolated data boundaries.
 
- Mastra automatically creates the necessary tables on first interaction. See the [core schema](https://mastra.ai/reference/storage/overview#core-schema) for details on what gets created, including tables for messages, threads, resources, workflows, traces, and evaluation datasets.
+ Mastra automatically creates the necessary tables on first interaction. See the [core schema](https://mastra.ai/reference/storage/overview) for details on what gets created, including tables for messages, threads, resources, workflows, traces, and evaluation datasets.
 
  ## Supported providers
 
@@ -44,8 +41,7 @@ Each provider page includes installation instructions, configuration parameters,
  - [LanceDB](https://mastra.ai/reference/storage/lance)
  - [Microsoft SQL Server](https://mastra.ai/reference/storage/mssql)
 
- > **Note:**
- libSQL is the easiest way to get started because it doesn’t require running a separate database server.
+ > **Tip:** libSQL is the easiest way to get started because it doesn’t require running a separate database server.
 
  ## Configuration scope
 
@@ -55,7 +51,7 @@ Storage can be configured at the instance level (shared by all agents) or at the
 
  Add storage to your Mastra instance so all agents, workflows, observability traces and scores share the same memory provider:
 
- ```typescript title="src/mastra/index.ts"
+ ```typescript
  import { Mastra } from "@mastra/core";
  import { PostgresStore } from "@mastra/pg";
 
@@ -75,9 +71,9 @@ This is useful when all primitives share the same storage backend and have simil
 
  #### Composite storage
 
- [Composite storage](https://mastra.ai/reference/storage/composite) is an alternative way to configure instance-level storage. Use `MastraCompositeStore` to set the `memory` domain (and any other [domains](https://mastra.ai/reference/storage/composite#storage-domains) you need) to different storage providers.
+ [Composite storage](https://mastra.ai/reference/storage/composite) is an alternative way to configure instance-level storage. Use `MastraCompositeStore` to set the `memory` domain (and any other [domains](https://mastra.ai/reference/storage/composite) you need) to different storage providers.
 
- ```typescript title="src/mastra/index.ts"
+ ```typescript
  import { Mastra } from "@mastra/core";
  import { MastraCompositeStore } from "@mastra/core/storage";
  import { MemoryLibSQL } from "@mastra/libsql";
@@ -88,7 +84,6 @@ export const mastra = new Mastra({
  storage: new MastraCompositeStore({
  id: "composite",
  domains: {
- // highlight-next-line
  memory: new MemoryLibSQL({ url: "file:./memory.db" }),
  workflows: new WorkflowsPG({ connectionString: process.env.DATABASE_URL }),
  observability: new ObservabilityStorageClickhouse({
@@ -107,7 +102,7 @@ This is useful when different types of data have different performance or operat
 
  Agent-level storage overrides storage configured at the instance level. Add storage to a specific agent when you need data boundaries or compliance requirements:
 
- ```typescript title="src/mastra/agents/your-agent.ts"
+ ```typescript
  import { Agent } from "@mastra/core/agent";
  import { Memory } from "@mastra/memory";
  import { PostgresStore } from "@mastra/pg";
@@ -123,19 +118,18 @@ export const agent = new Agent({
  });
  ```
 
- > **Note:**
- [Mastra Cloud Store](https://mastra.ai/docs/mastra-cloud/deployment#using-mastra-cloud-store) doesn't support agent-level storage.
+ > **Warning:** [Mastra Cloud Store](https://mastra.ai/docs/mastra-cloud/deployment) doesn't support agent-level storage.
 
  ## Threads and resources
 
- Mastra organizes conversations using two identifiers:
+ Mastra organizes conversations using two identifiers:
 
  - **Thread** - a conversation session containing a sequence of messages.
  - **Resource** - the entity that owns the thread, such as a user, organization, project, or any other domain entity in your application.
 
  Both identifiers are required for agents to store information:
 
- **generate:**
+ **Generate**:
 
  ```typescript
  const response = await agent.generate("hello", {
@@ -146,8 +140,7 @@ const response = await agent.generate("hello", {
  });
  ```
 
-
- **stream:**
+ **Stream**:
 
  ```typescript
  const stream = await agent.stream("hello", {
@@ -158,10 +151,7 @@ const stream = await agent.stream("hello", {
  });
  ```
 
-
-
- > **Note:**
- [Studio](https://mastra.ai/docs/getting-started/studio) automatically generates a thread and resource ID for you. When calling `stream()` or `generate()` yourself, remember to provide these identifiers explicitly.
+ > **Note:** [Studio](https://mastra.ai/docs/getting-started/studio) automatically generates a thread and resource ID for you. When calling `stream()` or `generate()` yourself, remember to provide these identifiers explicitly.
 
  ### Thread title generation
 
@@ -169,7 +159,7 @@ Mastra can automatically generate descriptive thread titles based on the user's
 
  Use this option when implementing a ChatGPT-style chat interface to render a title alongside each thread in the conversation list (for example, in a sidebar) derived from the thread’s initial user message.
 
- ```typescript title="src/mastra/agents/my-agent.ts"
+ ```typescript
  export const agent = new Agent({
  id: "agent",
  memory: new Memory({
@@ -182,9 +172,9 @@ export const agent = new Agent({
 
  Title generation runs asynchronously after the agent responds and does not affect response time.
 
- To optimize cost or behavior, provide a smaller [`model`](/models) and custom `instructions`:
+ To optimize cost or behavior, provide a smaller [`model`](https://mastra.ai/models) and custom `instructions`:
 
- ```typescript title="src/mastra/agents/my-agent.ts"
+ ```typescript
  export const agent = new Agent({
  id: "agent",
  memory: new Memory({
@@ -206,17 +196,17 @@ Semantic recall has different storage requirements - it needs a vector database
 
  Some storage providers enforce record size limits that base64-encoded file attachments (such as images) can exceed:
 
- | Provider | Record size limit |
- | -------- | ----------------- |
- | [DynamoDB](https://mastra.ai/reference/storage/dynamodb) | 400 KB |
- | [Convex](https://mastra.ai/reference/storage/convex) | 1 MiB |
- | [Cloudflare D1](https://mastra.ai/reference/storage/cloudflare-d1) | 1 MiB |
+ | Provider | Record size limit |
+ | ------------------------------------------------------------------ | ----------------- |
+ | [DynamoDB](https://mastra.ai/reference/storage/dynamodb) | 400 KB |
+ | [Convex](https://mastra.ai/reference/storage/convex) | 1 MiB |
+ | [Cloudflare D1](https://mastra.ai/reference/storage/cloudflare-d1) | 1 MiB |
 
  PostgreSQL, MongoDB, and libSQL have higher limits and are generally unaffected.
 
  To avoid this, use an input processor to upload attachments to external storage (S3, R2, GCS, [Convex file storage](https://docs.convex.dev/file-storage), etc.) and replace them with URL references before persistence.
 
- ```typescript title="src/mastra/processors/attachment-uploader.ts"
+ ```typescript
  import type { Processor } from "@mastra/core/processors";
  import type { MastraDBMessage } from "@mastra/core/memory";
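The attachment-uploader example is cut off at the end of this diff. As a rough illustration of the technique the prose describes (upload oversized base64 attachments, persist only a URL reference) — this is not the package's actual processor, and the message-part shape and the injected `upload` function are assumptions:

```typescript
// Hypothetical sketch of the core transformation an attachment-uploading
// processor would perform. Replaces inline base64 payloads that exceed a size
// threshold with the URL returned by an injected upload function, so stored
// records stay under provider limits (e.g. DynamoDB's 400 KB).
type FilePart = { type: "file"; mimeType: string; data: string };

async function replaceAttachments(
  parts: FilePart[],
  upload: (data: string, mimeType: string) => Promise<string>,
  maxInlineBytes = 100_000,
): Promise<FilePart[]> {
  return Promise.all(
    parts.map(async (part) => {
      // Small payloads stay inline; only oversized ones are uploaded.
      if (part.data.length <= maxInlineBytes) return part;
      const url = await upload(part.data, part.mimeType);
      return { ...part, data: url }; // keep only the URL reference
    }),
  );
}
```

Wiring this into a real `Processor` implementation would follow the `@mastra/core/processors` interface imported in the truncated snippet above.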