@mastra/libsql 1.2.0 → 1.3.0-alpha.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +87 -0
- package/dist/docs/SKILL.md +36 -26
- package/dist/docs/{SOURCE_MAP.json → assets/SOURCE_MAP.json} +1 -1
- package/dist/docs/{agents/03-agent-approval.md → references/docs-agents-agent-approval.md} +19 -19
- package/dist/docs/references/docs-agents-agent-memory.md +212 -0
- package/dist/docs/{agents/04-network-approval.md → references/docs-agents-network-approval.md} +13 -12
- package/dist/docs/{agents/02-networks.md → references/docs-agents-networks.md} +10 -12
- package/dist/docs/{memory/06-memory-processors.md → references/docs-memory-memory-processors.md} +6 -8
- package/dist/docs/{memory/03-message-history.md → references/docs-memory-message-history.md} +31 -20
- package/dist/docs/{memory/01-overview.md → references/docs-memory-overview.md} +8 -8
- package/dist/docs/{memory/05-semantic-recall.md → references/docs-memory-semantic-recall.md} +33 -17
- package/dist/docs/{memory/02-storage.md → references/docs-memory-storage.md} +29 -39
- package/dist/docs/{memory/04-working-memory.md → references/docs-memory-working-memory.md} +16 -27
- package/dist/docs/{observability/01-overview.md → references/docs-observability-overview.md} +4 -7
- package/dist/docs/{observability/02-default.md → references/docs-observability-tracing-exporters-default.md} +11 -14
- package/dist/docs/{rag/01-retrieval.md → references/docs-rag-retrieval.md} +26 -53
- package/dist/docs/{workflows/01-snapshots.md → references/docs-workflows-snapshots.md} +3 -5
- package/dist/docs/{guides/01-ai-sdk.md → references/guides-agent-frameworks-ai-sdk.md} +25 -9
- package/dist/docs/references/reference-core-getMemory.md +50 -0
- package/dist/docs/references/reference-core-listMemory.md +56 -0
- package/dist/docs/references/reference-core-mastra-class.md +66 -0
- package/dist/docs/{memory/07-reference.md → references/reference-memory-memory-class.md} +28 -14
- package/dist/docs/references/reference-storage-composite.md +235 -0
- package/dist/docs/references/reference-storage-dynamodb.md +282 -0
- package/dist/docs/references/reference-storage-libsql.md +135 -0
- package/dist/docs/{vectors/01-reference.md → references/reference-vectors-libsql.md} +105 -13
- package/dist/index.cjs +1676 -194
- package/dist/index.cjs.map +1 -1
- package/dist/index.js +1676 -196
- package/dist/index.js.map +1 -1
- package/dist/storage/db/index.d.ts.map +1 -1
- package/dist/storage/domains/agents/index.d.ts +9 -12
- package/dist/storage/domains/agents/index.d.ts.map +1 -1
- package/dist/storage/domains/memory/index.d.ts +7 -1
- package/dist/storage/domains/memory/index.d.ts.map +1 -1
- package/dist/storage/domains/prompt-blocks/index.d.ts +25 -0
- package/dist/storage/domains/prompt-blocks/index.d.ts.map +1 -0
- package/dist/storage/domains/scorer-definitions/index.d.ts +26 -0
- package/dist/storage/domains/scorer-definitions/index.d.ts.map +1 -0
- package/dist/storage/index.d.ts +3 -1
- package/dist/storage/index.d.ts.map +1 -1
- package/package.json +3 -4
- package/dist/docs/README.md +0 -39
- package/dist/docs/agents/01-agent-memory.md +0 -166
- package/dist/docs/core/01-reference.md +0 -151
- package/dist/docs/storage/01-reference.md +0 -556
package/dist/docs/{memory/06-memory-processors.md → references/docs-memory-memory-processors.md}
RENAMED
@@ -1,5 +1,3 @@
-> Learn how to use memory processors in Mastra to filter, trim, and transform messages before they
-
 # Memory Processors
 
 Memory processors transform and filter messages as they pass through an agent with memory enabled. They manage context window limits, remove unnecessary content, and optimize the information sent to the language model.
@@ -200,7 +198,7 @@ Understanding the execution order is important when combining guardrails with me
 
 ### Input Processors
 
-```
+```text
 [Memory Processors] → [Your inputProcessors]
 ```
 
@@ -211,7 +209,7 @@ This means memory loads message history before your processors can validate or f
 
 ### Output Processors
 
-```
+```text
 [Your outputProcessors] → [Memory Processors]
 ```
 
@@ -302,10 +300,10 @@ const agent = new Agent({
 
 ### Summary
 
-| Guardrail Type | When it runs
-| -------------- |
-| Input
-| Output
+| Guardrail Type | When it runs               | If it aborts                  |
+| -------------- | -------------------------- | ----------------------------- |
+| Input          | After memory loads history | LLM not called, nothing saved |
+| Output         | Before memory saves        | Nothing saved to storage      |
 
 Both scenarios are safe - guardrails prevent inappropriate content from being persisted to memory
 
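The hunks above change the execution-order diagrams: memory processors run before user-defined `inputProcessors`, and after user-defined `outputProcessors`. A minimal, framework-free sketch of that pipeline idea — this is not the Mastra API; the `Message`, `Processor`, and function names here are illustrative only:

```typescript
// Illustrative sketch of processor ordering: on input, memory processors run
// first, then user-defined inputProcessors, in a simple left-to-right pipeline.
type Message = { role: "user" | "assistant"; content: string };
type Processor = (messages: Message[]) => Message[];

// A memory-style processor that trims history to the last `limit` messages.
const trimToLast =
  (limit: number): Processor =>
  (messages) =>
    messages.slice(-limit);

// A user-defined input processor that drops empty messages.
const dropEmpty: Processor = (messages) =>
  messages.filter((m) => m.content.trim().length > 0);

// Run processors in order: [Memory Processors] → [Your inputProcessors].
function runPipeline(messages: Message[], processors: Processor[]): Message[] {
  return processors.reduce((acc, p) => p(acc), messages);
}

const history: Message[] = [
  { role: "user", content: "first" },
  { role: "assistant", content: "" },
  { role: "user", content: "second" },
  { role: "assistant", content: "third" },
];

const context = runPipeline(history, [trimToLast(3), dropEmpty]);
```

The ordering matters because, as the docs note, memory loads message history before user processors see it — so `dropEmpty` here filters the already-trimmed window, not the raw history.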
package/dist/docs/{memory/03-message-history.md → references/docs-memory-message-history.md}
RENAMED
@@ -1,25 +1,42 @@
-> Learn how to configure message history in Mastra to store recent messages from the current conversation.
-
 # Message History
 
-Message history is the most basic and important form of memory. It gives the LLM a view of recent messages in the context window, enabling your agent to reference earlier exchanges and respond coherently.
+Message history is the most basic and important form of memory. It gives the LLM a view of recent messages in the context window, enabling your agent to reference earlier exchanges and respond coherently.
 
 You can also retrieve message history to display past conversations in your UI.
 
-> **
-Each message belongs to a thread (the conversation) and a resource (the user or entity it's associated with). See [Threads and resources](https://mastra.ai/docs/memory/storage#threads-and-resources) for more detail.
+> **Info:** Each message belongs to a thread (the conversation) and a resource (the user or entity it's associated with). See [Threads and resources](https://mastra.ai/docs/memory/storage) for more detail.
 
 ## Getting started
 
-Install the Mastra memory module along with a [storage adapter](https://mastra.ai/docs/memory/storage
+Install the Mastra memory module along with a [storage adapter](https://mastra.ai/docs/memory/storage) for your database. The examples below use `@mastra/libsql`, which stores data locally in a `mastra.db` file.
+
+**npm**:
 
-```bash
+```bash
 npm install @mastra/memory@latest @mastra/libsql@latest
 ```
 
+**pnpm**:
+
+```bash
+pnpm add @mastra/memory@latest @mastra/libsql@latest
+```
+
+**Yarn**:
+
+```bash
+yarn add @mastra/memory@latest @mastra/libsql@latest
+```
+
+**Bun**:
+
+```bash
+bun add @mastra/memory@latest @mastra/libsql@latest
+```
+
 Message history requires a storage adapter to persist conversations. Configure storage on your Mastra instance if you haven't already:
 
-```typescript
+```typescript
 import { Mastra } from "@mastra/core";
 import { LibSQLStore } from "@mastra/libsql";
 
@@ -33,7 +50,7 @@ export const mastra = new Mastra({
 
 Give your agent a `Memory`:
 
-```typescript
+```typescript
 import { Memory } from "@mastra/memory";
 import { Agent } from "@mastra/core/agent";
 
@@ -49,7 +66,7 @@ export const agent = new Agent({
 
 When you call the agent, messages are automatically saved to the database. You can specify a `threadId`, `resourceId`, and optional `metadata`:
 
-
+**Generate**:
 
 ```typescript
 await agent.generate("Hello", {
@@ -64,8 +81,7 @@ await agent.generate("Hello", {
 });
 ```
 
-
-**stream:**
+**Stream**:
 
 ```typescript
 await agent.stream("Hello", {
@@ -80,11 +96,7 @@ await agent.stream("Hello", {
 });
 ```
 
-
-
-> **Note:**
-
-Threads and messages are created automatically when you call `agent.generate()` or `agent.stream()`, but you can also create them manually with [`createThread()`](https://mastra.ai/reference/memory/createThread) and [`saveMessages()`](https://mastra.ai/reference/memory/memory-class).
+> **Info:** Threads and messages are created automatically when you call `agent.generate()` or `agent.stream()`, but you can also create them manually with [`createThread()`](https://mastra.ai/reference/memory/createThread) and [`saveMessages()`](https://mastra.ai/reference/memory/memory-class).
 
 There are two ways to use this history:
 
@@ -106,8 +118,7 @@ The `Memory` instance gives you access to functions for listing threads, recalli
 
 Use these methods to fetch threads and messages for displaying conversation history in your UI or for custom memory retrieval logic.
 
-> **
-The memory system does not enforce access control. Before running any query, verify in your application logic that the current user is authorized to access the `resourceId` being queried.
+> **Warning:** The memory system does not enforce access control. Before running any query, verify in your application logic that the current user is authorized to access the `resourceId` being queried.
 
 ### Threads
 
@@ -240,7 +251,7 @@ const { thread, clonedMessages } = await memory.cloneThread({
 });
 ```
 
-You can filter which messages get cloned (by count or date range), specify custom thread IDs, and use utility methods to inspect clone relationships.
+You can filter which messages get cloned (by count or date range), specify custom thread IDs, and use utility methods to inspect clone relationships.
 
 See [`cloneThread()`](https://mastra.ai/reference/memory/cloneThread) and [clone utilities](https://mastra.ai/reference/memory/clone-utilities) for the full API.
 
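The message-history docs in the diff above describe every message as belonging to a thread (the conversation) and a resource (its owner), with an explicit warning that `resourceId` authorization is the application's job. A generic in-memory sketch of that data model — not Mastra's storage schema; the `MessageStore` class and its method names are hypothetical:

```typescript
// Illustrative in-memory store keyed the way the docs describe: each message
// belongs to a thread (the conversation) and a resource (its owner).
type StoredMessage = { threadId: string; resourceId: string; content: string };

class MessageStore {
  private messages: StoredMessage[] = [];

  save(msg: StoredMessage): void {
    this.messages.push(msg);
  }

  // Recent history for one conversation, e.g. to render a chat UI.
  byThread(threadId: string): StoredMessage[] {
    return this.messages.filter((m) => m.threadId === threadId);
  }

  // All of a user's threads, e.g. for a sidebar listing. As the docs warn,
  // authorization for `resourceId` must be checked by the caller.
  threadsForResource(resourceId: string): string[] {
    const ids = this.messages
      .filter((m) => m.resourceId === resourceId)
      .map((m) => m.threadId);
    return [...new Set(ids)];
  }
}

const store = new MessageStore();
store.save({ threadId: "support_123", resourceId: "user_123", content: "Hello" });
store.save({ threadId: "support_456", resourceId: "user_123", content: "Hi again" });
```

Keying threads to a resource is what lets one lookup power a conversation-list sidebar while per-thread queries power the chat view itself.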
package/dist/docs/{memory/01-overview.md → references/docs-memory-overview.md}
RENAMED
|
@@ -1,14 +1,13 @@
-> Learn how Mastra
-
 # Memory
 
 Memory enables your agent to remember user messages, agent replies, and tool results across interactions, giving it the context it needs to stay consistent, maintain conversation flow, and produce better answers over time.
 
-Mastra supports
+Mastra supports four complementary memory types:
 
 - [**Message history**](https://mastra.ai/docs/memory/message-history) - keeps recent messages from the current conversation so they can be rendered in the UI and used to maintain short-term continuity within the exchange.
 - [**Working memory**](https://mastra.ai/docs/memory/working-memory) - stores persistent, structured user data such as names, preferences, and goals.
-- [**Semantic recall**](https://mastra.ai/docs/memory/semantic-recall) - retrieves relevant messages from older conversations based on semantic meaning rather than exact keywords, mirroring how humans recall information by association. Requires a [vector database](https://mastra.ai/docs/memory/semantic-recall
+- [**Semantic recall**](https://mastra.ai/docs/memory/semantic-recall) - retrieves relevant messages from older conversations based on semantic meaning rather than exact keywords, mirroring how humans recall information by association. Requires a [vector database](https://mastra.ai/docs/memory/semantic-recall) and an [embedding model](https://mastra.ai/docs/memory/semantic-recall).
+- [**Observational memory**](https://mastra.ai/docs/memory/observational-memory) - uses background Observer and Reflector agents to maintain a dense observation log that replaces raw message history as it grows, keeping the context window small while preserving long-term memory across conversations.
 
 If the combined memory exceeds the model's context limit, [memory processors](https://mastra.ai/docs/memory/memory-processors) can filter, trim, or prioritize content so the most relevant information is preserved.
 
@@ -19,12 +18,13 @@ Choose a memory option to get started:
 - [Message history](https://mastra.ai/docs/memory/message-history)
 - [Working memory](https://mastra.ai/docs/memory/working-memory)
 - [Semantic recall](https://mastra.ai/docs/memory/semantic-recall)
+- [Observational memory](https://mastra.ai/docs/memory/observational-memory)
 
 ## Storage
 
-Before enabling memory, you must first configure a storage adapter. Mastra supports several databases including PostgreSQL, MongoDB, libSQL, and [more](https://mastra.ai/docs/memory/storage
+Before enabling memory, you must first configure a storage adapter. Mastra supports several databases including PostgreSQL, MongoDB, libSQL, and [more](https://mastra.ai/docs/memory/storage).
 
-Storage can be configured at the [instance level](https://mastra.ai/docs/memory/storage
+Storage can be configured at the [instance level](https://mastra.ai/docs/memory/storage) (shared across all agents) or at the [agent level](https://mastra.ai/docs/memory/storage) (dedicated per agent).
 
 For semantic recall, you can use a separate vector database like Pinecone alongside your primary storage.
 
@@ -34,12 +34,12 @@ See the [Storage](https://mastra.ai/docs/memory/storage) documentation for confi
 
 When [tracing](https://mastra.ai/docs/observability/tracing/overview) is enabled, you can inspect exactly which messages the agent uses for context in each request. The trace output shows all memory included in the agent's context window - both recent message history and messages recalled via semantic recall.
 
-
 
 This visibility helps you understand why an agent made specific decisions and verify that memory retrieval is working as expected.
 
 ## Next steps
 
 - Learn more about [Storage](https://mastra.ai/docs/memory/storage) providers and configuration options
-- Add [Message history](https://mastra.ai/docs/memory/message-history), [Working memory](https://mastra.ai/docs/memory/working-memory),
+- Add [Message history](https://mastra.ai/docs/memory/message-history), [Working memory](https://mastra.ai/docs/memory/working-memory), [Semantic recall](https://mastra.ai/docs/memory/semantic-recall), or [Observational memory](https://mastra.ai/docs/memory/observational-memory)
 - Visit [Memory configuration reference](https://mastra.ai/reference/memory/memory-class) for all available options
package/dist/docs/{memory/05-semantic-recall.md → references/docs-memory-semantic-recall.md}
RENAMED
@@ -1,20 +1,16 @@
-> Learn how to use semantic recall in Mastra to retrieve relevant messages from past conversations using vector search and embeddings.
-
 # Semantic Recall
 
 If you ask your friend what they did last weekend, they will search in their memory for events associated with "last weekend" and then tell you what they did. That's sort of like how semantic recall works in Mastra.
 
-> **Watch
-
-What semantic recall is, how it works, and how to configure it in Mastra → [YouTube (5 minutes)](https://youtu.be/UVZtK8cK8xQ)
+> **Watch 📹:** What semantic recall is, how it works, and how to configure it in Mastra → [YouTube (5 minutes)](https://youtu.be/UVZtK8cK8xQ)
 
 ## How Semantic Recall Works
 
-Semantic recall is RAG-based search that helps agents maintain context across longer interactions when messages are no longer within [recent message history](
+Semantic recall is RAG-based search that helps agents maintain context across longer interactions when messages are no longer within [recent message history](https://mastra.ai/docs/memory/message-history).
 
 It uses vector embeddings of messages for similarity search, integrates with various vector stores, and has configurable context windows around retrieved messages.
 
-
 
 When it's enabled, new messages are used to query a vector DB for semantically similar messages.
 
@@ -24,7 +20,7 @@ After getting a response from the LLM, all new messages (user, assistant, and to
 
 Semantic recall is enabled by default, so if you give your agent memory it will be included:
 
-```typescript
+```typescript
 import { Agent } from "@mastra/core/agent";
 import { Memory } from "@mastra/memory";
 
@@ -64,7 +60,7 @@ const { messages: relevantMessages } = await memory!.recall({
 
 Semantic recall relies on a [storage and vector db](https://mastra.ai/reference/memory/memory-class) to store messages and their embeddings.
 
-```ts
+```ts
 import { Memory } from "@mastra/memory";
 import { Agent } from "@mastra/core/agent";
 import { LibSQLStore, LibSQLVector } from "@mastra/libsql";
@@ -113,7 +109,7 @@ The three main parameters that control semantic recall behavior are:
 2. **messageRange**: How much surrounding context to include with each match
 3. **scope**: Whether to search within the current thread or across all threads owned by a resource (the default is resource scope).
 
-```typescript
+```typescript
 const agent = new Agent({
   memory: new Memory({
     options: {
@@ -135,7 +131,7 @@ Semantic recall relies on an [embedding model](https://mastra.ai/reference/memor
 
 The simplest way is to use a `provider/model` string with autocomplete support:
 
-```ts
+```ts
 import { Memory } from "@mastra/memory";
 import { Agent } from "@mastra/core/agent";
 import { ModelRouterEmbeddingModel } from "@mastra/core/llm";
@@ -158,7 +154,7 @@ The model router automatically handles API key detection from environment variab
 
 You can also use AI SDK embedding models directly:
 
-```ts
+```ts
 import { Memory } from "@mastra/memory";
 import { Agent } from "@mastra/core/agent";
 import { ModelRouterEmbeddingModel } from "@mastra/core/llm";
@@ -174,13 +170,33 @@ const agent = new Agent({
 
 To use FastEmbed (a local embedding model), install `@mastra/fastembed`:
 
-
+**npm**:
+
+```bash
 npm install @mastra/fastembed@latest
 ```
 
+**pnpm**:
+
+```bash
+pnpm add @mastra/fastembed@latest
+```
+
+**Yarn**:
+
+```bash
+yarn add @mastra/fastembed@latest
+```
+
+**Bun**:
+
+```bash
+bun add @mastra/fastembed@latest
+```
+
 Then configure it in your memory:
 
-```ts
+```ts
 import { Memory } from "@mastra/memory";
 import { Agent } from "@mastra/core/agent";
 import { fastembed } from "@mastra/fastembed";
@@ -198,7 +214,7 @@ When using PostgreSQL as your vector store, you can optimize semantic recall per
 
 PostgreSQL supports both IVFFlat and HNSW indexes. By default, Mastra creates an IVFFlat index, but HNSW indexes typically provide better performance, especially with OpenAI embeddings which use inner product distance.
 
-```typescript
+```typescript
 import { Memory } from "@mastra/memory";
 import { PgStore, PgVector } from "@mastra/pg";
 
@@ -228,7 +244,7 @@ const agent = new Agent({
 });
 ```
 
-For detailed information about index configuration options and performance tuning, see the [PgVector configuration guide](https://mastra.ai/reference/vectors/pg
+For detailed information about index configuration options and performance tuning, see the [PgVector configuration guide](https://mastra.ai/reference/vectors/pg).
 
 ## Disabling
 
@@ -236,7 +252,7 @@ There is a performance impact to using semantic recall. New messages are convert
 
 Semantic recall is enabled by default but can be disabled when not needed:
 
-```typescript
+```typescript
 const agent = new Agent({
   memory: new Memory({
     options: {
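The semantic-recall docs above describe embedding messages and querying a vector DB for the most similar ones. A toy, dependency-free sketch of that similarity search — real setups use an embedding model and vector store as shown in the diff; the tiny 3-dimensional "embeddings" below are made up purely for demonstration:

```typescript
// Toy nearest-neighbor recall over pre-computed embedding vectors,
// illustrating the cosine-similarity ranking that semantic recall performs.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

type Embedded = { text: string; vector: number[] };

// Return the topK stored messages most similar to the query vector.
function recallSimilar(query: number[], stored: Embedded[], topK: number): Embedded[] {
  return [...stored]
    .sort(
      (x, y) =>
        cosineSimilarity(query, y.vector) - cosineSimilarity(query, x.vector),
    )
    .slice(0, topK);
}

// Hypothetical embeddings: the first axis loosely encodes "deployment" topics.
const stored: Embedded[] = [
  { text: "We discussed the deployment plan", vector: [0.9, 0.1, 0.0] },
  { text: "My favorite color is blue", vector: [0.0, 0.2, 0.9] },
  { text: "Deployment is scheduled Friday", vector: [0.8, 0.2, 0.1] },
];

const matches = recallSimilar([1, 0, 0], stored, 2);
```

The `topK` parameter here plays the role the docs assign to the recall message count, and expanding each match with neighbors would correspond to `messageRange`.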
package/dist/docs/{memory/02-storage.md → references/docs-memory-storage.md}
RENAMED
|
@@ -1,10 +1,8 @@
-> Configure storage for Mastra
-
 # Storage
 
-For agents to remember previous interactions, Mastra needs a database. Use a storage adapter for one of the [supported databases](#supported-providers) and pass it to your Mastra instance.
+For agents to remember previous interactions, Mastra needs a database. Use a storage adapter for one of the [supported databases](#supported-providers) and pass it to your Mastra instance.
 
-```typescript
+```typescript
 import { Mastra } from "@mastra/core";
 import { LibSQLStore } from "@mastra/libsql";
 
@@ -16,18 +14,17 @@ export const mastra = new Mastra({
 });
 ```
 
-> **Sharing the database with Mastra Studio
-
-
-
-
-
-
-Relative paths like `file:./mastra.db` resolve based on each process's working directory, which may differ.
+> **Sharing the database with Mastra Studio:** When running `mastra dev` alongside your application (e.g., Next.js), use an absolute path to ensure both processes access the same database:
+>
+> ```typescript
+> url: "file:/absolute/path/to/your/project/mastra.db"
+> ```
+>
+> Relative paths like `file:./mastra.db` resolve based on each process's working directory, which may differ.
 
 This configures instance-level storage, which all agents share by default. You can also configure [agent-level storage](#agent-level-storage) for isolated data boundaries.
 
-Mastra automatically creates the necessary tables on first interaction. See the [core schema](https://mastra.ai/reference/storage/overview
+Mastra automatically creates the necessary tables on first interaction. See the [core schema](https://mastra.ai/reference/storage/overview) for details on what gets created, including tables for messages, threads, resources, workflows, traces, and evaluation datasets.
 
 ## Supported providers
 
@@ -44,8 +41,7 @@ Each provider page includes installation instructions, configuration parameters,
 - [LanceDB](https://mastra.ai/reference/storage/lance)
 - [Microsoft SQL Server](https://mastra.ai/reference/storage/mssql)
 
-> **
-libSQL is the easiest way to get started because it doesn’t require running a separate database server.
+> **Tip:** libSQL is the easiest way to get started because it doesn’t require running a separate database server.
 
 ## Configuration scope
 
@@ -55,7 +51,7 @@ Storage can be configured at the instance level (shared by all agents) or at the
 
 Add storage to your Mastra instance so all agents, workflows, observability traces and scores share the same memory provider:
 
-```typescript
+```typescript
 import { Mastra } from "@mastra/core";
 import { PostgresStore } from "@mastra/pg";
 
@@ -75,9 +71,9 @@ This is useful when all primitives share the same storage backend and have simil
 
 #### Composite storage
 
-[Composite storage](https://mastra.ai/reference/storage/composite) is an alternative way to configure instance-level storage. Use `MastraCompositeStore` to set the `memory` domain (and any other [domains](https://mastra.ai/reference/storage/composite
+[Composite storage](https://mastra.ai/reference/storage/composite) is an alternative way to configure instance-level storage. Use `MastraCompositeStore` to set the `memory` domain (and any other [domains](https://mastra.ai/reference/storage/composite) you need) to different storage providers.
 
-```typescript
+```typescript
 import { Mastra } from "@mastra/core";
 import { MastraCompositeStore } from "@mastra/core/storage";
 import { MemoryLibSQL } from "@mastra/libsql";
@@ -88,7 +84,6 @@ export const mastra = new Mastra({
   storage: new MastraCompositeStore({
     id: "composite",
     domains: {
-      // highlight-next-line
       memory: new MemoryLibSQL({ url: "file:./memory.db" }),
       workflows: new WorkflowsPG({ connectionString: process.env.DATABASE_URL }),
       observability: new ObservabilityStorageClickhouse({
@@ -107,7 +102,7 @@ This is useful when different types of data have different performance or operat
 
 Agent-level storage overrides storage configured at the instance level. Add storage to a specific agent when you need data boundaries or compliance requirements:
 
-```typescript
+```typescript
 import { Agent } from "@mastra/core/agent";
 import { Memory } from "@mastra/memory";
 import { PostgresStore } from "@mastra/pg";
@@ -123,19 +118,18 @@ export const agent = new Agent({
 });
 ```
 
-> **
-[Mastra Cloud Store](https://mastra.ai/docs/mastra-cloud/deployment#using-mastra-cloud-store) doesn't support agent-level storage.
+> **Warning:** [Mastra Cloud Store](https://mastra.ai/docs/mastra-cloud/deployment) doesn't support agent-level storage.
 
 ## Threads and resources
 
-Mastra organizes conversations using two identifiers:
+Mastra organizes conversations using two identifiers:
 
 - **Thread** - a conversation session containing a sequence of messages.
 - **Resource** - the entity that owns the thread, such as a user, organization, project, or any other domain entity in your application.
 
 Both identifiers are required for agents to store information:
 
-
+**Generate**:
 
 ```typescript
 const response = await agent.generate("hello", {
@@ -146,8 +140,7 @@ const response = await agent.generate("hello", {
 });
 ```
 
-
-**stream:**
+**Stream**:
 
 ```typescript
 const stream = await agent.stream("hello", {
@@ -158,10 +151,7 @@ const stream = await agent.stream("hello", {
 });
 ```
 
-
-
-> **Note:**
-[Studio](https://mastra.ai/docs/getting-started/studio) automatically generates a thread and resource ID for you. When calling `stream()` or `generate()` yourself, remember to provide these identifiers explicitly.
+> **Note:** [Studio](https://mastra.ai/docs/getting-started/studio) automatically generates a thread and resource ID for you. When calling `stream()` or `generate()` yourself, remember to provide these identifiers explicitly.
 
 ### Thread title generation
 
@@ -169,7 +159,7 @@ Mastra can automatically generate descriptive thread titles based on the user's
 
 Use this option when implementing a ChatGPT-style chat interface to render a title alongside each thread in the conversation list (for example, in a sidebar) derived from the thread’s initial user message.
 
-```typescript
+```typescript
 export const agent = new Agent({
   id: "agent",
   memory: new Memory({
@@ -182,9 +172,9 @@ export const agent = new Agent({
 
 Title generation runs asynchronously after the agent responds and does not affect response time.
 
-To optimize cost or behavior, provide a smaller [`model`](/models) and custom `instructions`:
+To optimize cost or behavior, provide a smaller [`model`](https://mastra.ai/models) and custom `instructions`:
 
-```typescript
+```typescript
 export const agent = new Agent({
   id: "agent",
   memory: new Memory({
@@ -206,17 +196,17 @@ Semantic recall has different storage requirements - it needs a vector database
 
 Some storage providers enforce record size limits that base64-encoded file attachments (such as images) can exceed:
 
-| Provider
-|
-| [DynamoDB](https://mastra.ai/reference/storage/dynamodb)
-| [Convex](https://mastra.ai/reference/storage/convex)
-| [Cloudflare D1](https://mastra.ai/reference/storage/cloudflare-d1) | 1 MiB
+| Provider                                                           | Record size limit |
+| ------------------------------------------------------------------ | ----------------- |
+| [DynamoDB](https://mastra.ai/reference/storage/dynamodb)           | 400 KB            |
+| [Convex](https://mastra.ai/reference/storage/convex)               | 1 MiB             |
+| [Cloudflare D1](https://mastra.ai/reference/storage/cloudflare-d1) | 1 MiB             |
 
 PostgreSQL, MongoDB, and libSQL have higher limits and are generally unaffected.
 
 To avoid this, use an input processor to upload attachments to external storage (S3, R2, GCS, [Convex file storage](https://docs.convex.dev/file-storage), etc.) and replace them with URL references before persistence.
 
-```typescript
+```typescript
 import type { Processor } from "@mastra/core/processors";
 import type { MastraDBMessage } from "@mastra/core/memory";
 
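The storage docs above recommend offloading oversized base64 attachments to external object storage and persisting only URL references, since providers like DynamoDB cap records at 400 KB. A framework-free sketch of that idea — this is not Mastra's `Processor` interface; the `Part` union and the `uploadToObjectStore` function are stand-ins for a real S3/R2/GCS client:

```typescript
// Illustrative sketch: replace large base64 attachments with URL references
// before messages are persisted to a size-limited store.
type Part =
  | { type: "text"; text: string }
  | { type: "file"; data: string } // base64 payload
  | { type: "file-url"; url: string };

const SIZE_LIMIT_BYTES = 400 * 1024; // e.g. DynamoDB's 400 KB record limit

function uploadToObjectStore(base64: string): string {
  // Stand-in for a real object-storage upload that returns a public URL.
  return `https://files.example.com/object-${base64.length}`;
}

function offloadAttachments(parts: Part[]): Part[] {
  return parts.map((part): Part =>
    part.type === "file" && part.data.length > SIZE_LIMIT_BYTES
      ? { type: "file-url", url: uploadToObjectStore(part.data) }
      : part,
  );
}

const processed = offloadAttachments([
  { type: "text", text: "here is the screenshot" },
  { type: "file", data: "A".repeat(500 * 1024) }, // ~500 KB, over the limit
]);
```

Small attachments pass through untouched, so the transformation only pays the upload cost when a record would otherwise exceed the provider's limit.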
package/dist/docs/{memory/04-working-memory.md → references/docs-memory-working-memory.md}
RENAMED

@@ -1,8 +1,6 @@
-> Learn how to configure working memory in Mastra to store persistent user data, preferences.
-
 # Working Memory
 
-While [message history](https://mastra.ai/docs/memory/message-history) and [semantic recall](
+While [message history](https://mastra.ai/docs/memory/message-history) and [semantic recall](https://mastra.ai/docs/memory/semantic-recall) help agents remember conversations, working memory allows them to maintain persistent information about users across interactions.
 
 Think of it as the agent's active thoughts or scratchpad – the key information they keep available about the user or task. It's similar to how a person would naturally remember someone's name, preferences, or important details during a conversation.
 
@@ -19,7 +17,7 @@ Working memory can persist at two different scopes:
 
 Here's a minimal example of setting up an agent with working memory:
 
-```typescript
+```typescript
 import { Agent } from "@mastra/core/agent";
 import { Memory } from "@mastra/memory";
 
@@ -43,7 +41,7 @@ const agent = new Agent({
 
 Working memory is a block of Markdown text that the agent is able to update over time to store continuously relevant information:
 
-
+[YouTube video player](https://www.youtube-nocookie.com/embed/UMy_JHLf1n8)
 
 ## Memory Persistence Scopes
 
@@ -136,7 +134,7 @@ Templates guide the agent on what information to track and update in working mem
 
 Here's an example of a custom template. In this example the agent will store the users name, location, timezone, etc as soon as the user sends a message containing any of the info:
 
-```typescript
+```typescript
 const memory = new Memory({
   options: {
     workingMemory: {
@@ -172,19 +170,13 @@ const memory = new Memory({
 
 ## Designing Effective Templates
 
-A well-structured template keeps the information easy for the agent to parse and update. Treat the
-template as a short form that you want the assistant to keep up to date.
+A well-structured template keeps the information easy for the agent to parse and update. Treat the template as a short form that you want the assistant to keep up to date.
 
-- **Short, focused labels.** Avoid paragraphs or very long headings. Keep labels brief (for example
-
-- **
-
-- **
-fill in the correct spots.
-- **Abbreviate very long values.** If you only need a short form, include guidance like
-`- Name: [First name or nickname]` or `- Address (short):` rather than the full legal text.
-- **Mention update rules in `instructions`.** You can instruct how and when to fill or clear parts of
-the template directly in the agent's `instructions` field.
+- **Short, focused labels.** Avoid paragraphs or very long headings. Keep labels brief (for example `## Personal Info` or `- Name:`) so updates are easy to read and less likely to be truncated.
+- **Use consistent casing.** Inconsistent capitalization (`Timezone:` vs `timezone:`) can cause messy updates. Stick to Title Case or lower case for headings and bullet labels.
+- **Keep placeholder text simple.** Use hints such as `[e.g., Formal]` or `[Date]` to help the LLM fill in the correct spots.
+- **Abbreviate very long values.** If you only need a short form, include guidance like `- Name: [First name or nickname]` or `- Address (short):` rather than the full legal text.
+- **Mention update rules in `instructions`.** You can instruct how and when to fill or clear parts of the template directly in the agent's `instructions` field.
 
 ### Alternative Template Styles
 
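The template-design bullets added in this hunk can be illustrated with a compact template string. This is a hypothetical template, not one shipped by Mastra; the field names are illustrative only.

```typescript
// A compact working-memory template following the added guidance:
// short labels, consistent Title Case, and simple [placeholder] hints.
const userProfileTemplate = `# User Profile

## Personal Info

- Name: [First name or nickname]
- Location: [City]
- Timezone: [e.g., CET]

## Preferences

- Communication Style: [e.g., Formal]
- Interests: [Short list]
`;

// Each label stays on one short line, which keeps agent updates easy to
// read and less likely to be truncated.
const labels = userProfileTemplate
  .split("\n")
  .filter((line) => line.startsWith("- "));
```

A template like this would be passed as the `template` string under `workingMemory` in the `Memory` options shown earlier in the diff.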
@@ -281,8 +273,7 @@ Schema-based working memory uses **merge semantics**, meaning the agent only nee
 
 ## Example: Multi-step Retention
 
-Below is a simplified view of how the `User Profile` template updates across a short user
-conversation:
+Below is a simplified view of how the `User Profile` template updates across a short user conversation:
 
 ```nohighlight
 # User Profile
@@ -308,11 +299,9 @@ conversation:
 - Timezone: CET
 ```
 
-The agent can now refer to `Sam` or `Berlin` in later responses without requesting the information
-again because it has been stored in working memory.
+The agent can now refer to `Sam` or `Berlin` in later responses without requesting the information again because it has been stored in working memory.
 
-If your agent is not properly updating working memory when you expect it to, you can add system
-instructions on _how_ and _when_ to use this template in your agent's `instructions` setting.
+If your agent is not properly updating working memory when you expect it to, you can add system instructions on _how_ and _when_ to use this template in your agent's `instructions` setting.
 
 ## Setting Initial Working Memory
 
@@ -322,7 +311,7 @@ While agents typically update working memory through the `updateWorkingMemory` t
 
 When creating a thread, you can provide initial working memory through the metadata's `workingMemory` key:
 
-```typescript
+```typescript
 // Create a thread with initial working memory
 const thread = await memory.createThread({
   threadId: "thread-123",
@@ -353,7 +342,7 @@ await agent.generate("What's my blood type?", {
 
 You can also update an existing thread's working memory:
 
-```typescript
+```typescript
 // Update thread metadata to add/modify working memory
 await memory.updateThread({
   id: "thread-123",
@@ -375,7 +364,7 @@ await memory.updateThread({
 
 Alternatively, use the `updateWorkingMemory` method directly:
 
-```typescript
+```typescript
 await memory.updateWorkingMemory({
   threadId: "thread-123",
   resourceId: "user-456", // Required for resource-scoped memory
package/dist/docs/{observability/01-overview.md → references/docs-observability-overview.md}
RENAMED
|
@@ -1,5 +1,3 @@
|
|
|
1
|
-
> Monitor and debug applications with Mastra
|
|
2
|
-
|
|
3
1
|
# Observability Overview
|
|
4
2
|
|
|
5
3
|
Mastra provides observability features for AI applications. Monitor LLM operations, trace agent decisions, and debug complex workflows with tools that understand AI-specific patterns.
|
|
@@ -17,15 +15,15 @@ Specialized tracing for AI operations that captures:
|
|
|
17
15
|
|
|
18
16
|
## Storage Requirements
|
|
19
17
|
|
|
20
|
-
The `DefaultExporter` persists traces to your configured storage backend. Not all storage providers support observability—for the full list, see [Storage Provider Support](https://mastra.ai/docs/observability/tracing/exporters/default
|
|
18
|
+
The `DefaultExporter` persists traces to your configured storage backend. Not all storage providers support observability—for the full list, see [Storage Provider Support](https://mastra.ai/docs/observability/tracing/exporters/default).
|
|
21
19
|
|
|
22
|
-
For production environments with high traffic, we recommend using **ClickHouse** for the observability domain via [composite storage](https://mastra.ai/reference/storage/composite). See [Production Recommendations](https://mastra.ai/docs/observability/tracing/exporters/default
|
|
20
|
+
For production environments with high traffic, we recommend using **ClickHouse** for the observability domain via [composite storage](https://mastra.ai/reference/storage/composite). See [Production Recommendations](https://mastra.ai/docs/observability/tracing/exporters/default) for details.
|
|
23
21
|
|
|
24
22
|
## Quick Start
|
|
25
23
|
|
|
26
24
|
Configure Observability in your Mastra instance:
|
|
27
25
|
|
|
28
|
-
```typescript
|
|
26
|
+
```typescript
|
|
29
27
|
import { Mastra } from "@mastra/core";
|
|
30
28
|
import { PinoLogger } from "@mastra/loggers";
|
|
31
29
|
import { LibSQLStore } from "@mastra/libsql";
|
|
@@ -59,8 +57,7 @@ export const mastra = new Mastra({
|
|
|
59
57
|
});
|
|
60
58
|
```
|
|
61
59
|
|
|
62
|
-
> **Serverless environments
|
|
63
|
-
The `file:./mastra.db` storage URL uses the local filesystem, which doesn't work in serverless environments like Vercel, AWS Lambda, or Cloudflare Workers. For serverless deployments, use external storage. See the [Vercel deployment guide](https://mastra.ai/guides/deployment/vercel-deployer#observability) for a complete example.
|
|
60
|
+
> **Serverless environments:** The `file:./mastra.db` storage URL uses the local filesystem, which doesn't work in serverless environments like Vercel, AWS Lambda, or Cloudflare Workers. For serverless deployments, use external storage. See the [Vercel deployment guide](https://mastra.ai/guides/deployment/vercel) for a complete example.
|
|
64
61
|
|
|
65
62
|
With this basic setup, you will see Traces and Logs in both Studio and in Mastra Cloud.
|
|
66
63
|
|
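The serverless caveat in the last hunk can be made concrete with a small guard. The helper name is hypothetical; it is not a Mastra API, just a sketch of a check you might run before deploying.

```typescript
// Hypothetical guard: file-backed storage URLs (e.g., "file:./mastra.db")
// rely on a local filesystem, which serverless platforms don't persist.
function isServerlessSafeStorageUrl(url: string): boolean {
  return !url.startsWith("file:");
}

const candidates = ["file:./mastra.db", "libsql://my-db.turso.io"];
const safe = candidates.filter(isServerlessSafeStorageUrl);
```

A check like this could run in CI to catch a local `file:` URL before it ships to Vercel, AWS Lambda, or Cloudflare Workers.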