@mastra/libsql 1.7.0-alpha.0 → 1.7.1-alpha.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +28 -0
- package/dist/docs/SKILL.md +19 -19
- package/dist/docs/assets/SOURCE_MAP.json +1 -1
- package/dist/docs/references/docs-agents-agent-approval.md +7 -7
- package/dist/docs/references/docs-agents-agent-memory.md +1 -1
- package/dist/docs/references/docs-agents-network-approval.md +2 -2
- package/dist/docs/references/docs-agents-networks.md +2 -2
- package/dist/docs/references/docs-memory-memory-processors.md +14 -14
- package/dist/docs/references/docs-memory-message-history.md +2 -2
- package/dist/docs/references/docs-memory-overview.md +3 -3
- package/dist/docs/references/docs-memory-semantic-recall.md +8 -8
- package/dist/docs/references/docs-memory-storage.md +6 -6
- package/dist/docs/references/docs-memory-working-memory.md +15 -15
- package/dist/docs/references/docs-observability-overview.md +5 -5
- package/dist/docs/references/docs-observability-tracing-exporters-default.md +7 -7
- package/dist/docs/references/docs-rag-retrieval.md +16 -16
- package/dist/docs/references/reference-core-getMemory.md +1 -1
- package/dist/docs/references/reference-core-listMemory.md +1 -1
- package/dist/docs/references/reference-core-mastra-class.md +2 -2
- package/dist/docs/references/reference-memory-memory-class.md +4 -4
- package/dist/docs/references/reference-storage-composite.md +12 -4
- package/dist/docs/references/reference-storage-dynamodb.md +5 -5
- package/dist/docs/references/reference-storage-libsql.md +1 -1
- package/dist/docs/references/reference-vectors-libsql.md +16 -16
- package/dist/index.cjs +8 -5
- package/dist/index.cjs.map +1 -1
- package/dist/index.js +8 -5
- package/dist/index.js.map +1 -1
- package/dist/storage/domains/memory/index.d.ts.map +1 -1
- package/package.json +6 -6
package/CHANGELOG.md
CHANGED

@@ -1,5 +1,33 @@
 # @mastra/libsql

+## 1.7.1-alpha.0
+
+### Patch Changes
+
+- Added dated message boundary delimiters when activating buffered observations for improved cache stability. ([#14367](https://github.com/mastra-ai/mastra/pull/14367))
+
+- Updated dependencies [[`4444280`](https://github.com/mastra-ai/mastra/commit/444428094253e916ec077e66284e685fde67021e), [`dbb879a`](https://github.com/mastra-ai/mastra/commit/dbb879af0b809c668e9b3a9d8bac97d806caa267), [`8de3555`](https://github.com/mastra-ai/mastra/commit/8de355572c6fd838f863a3e7e6fe24d0947b774f)]:
+  - @mastra/core@1.14.0-alpha.2
+
+## 1.7.0
+
+### Minor Changes
+
+- Added `requestContext` column to the spans table. Request context data from tracing is now persisted alongside other span data. ([#14020](https://github.com/mastra-ai/mastra/pull/14020))
+
+- Added `requestContext` and `requestContextSchema` column support to dataset storage. Dataset items now persist request context alongside input and ground truth data. ([#13938](https://github.com/mastra-ai/mastra/pull/13938))
+
+### Patch Changes
+
+- Added resilient column handling to insert and update operations. Unknown columns in records are now silently dropped instead of causing SQL errors, ensuring forward compatibility when newer domain packages add fields that haven't been migrated yet. ([#14021](https://github.com/mastra-ai/mastra/pull/14021))
+
+  For example, calling `db.insert({ tableName, record: { id: '1', title: 'Hello', futureField: 'value' } })` will silently ignore `futureField` if it doesn't exist in the database table, rather than throwing. The same applies to `update` — unknown fields in the data payload are dropped before building the SQL statement.
+
+- Fixed slow semantic recall in the libsql, Cloudflare D1, and ClickHouse storage adapters. Recall performance no longer degrades as threads grow larger. (Fixes #11702) ([#14022](https://github.com/mastra-ai/mastra/pull/14022))
+
- Updated dependencies [[`4f71b43`](https://github.com/mastra-ai/mastra/commit/4f71b436a4a6b8839842d8da47b57b84509af56c), [`a070277`](https://github.com/mastra-ai/mastra/commit/a07027766ce195ba74d0783116d894cbab25d44c), [`b628b91`](https://github.com/mastra-ai/mastra/commit/b628b9128b372c0f54214d902b07279f03443900), [`332c014`](https://github.com/mastra-ai/mastra/commit/332c014e076b81edf7fe45b58205882726415e90), [`6b63153`](https://github.com/mastra-ai/mastra/commit/6b63153878ea841c0f4ce632ba66bb33e57e9c1b), [`4246e34`](https://github.com/mastra-ai/mastra/commit/4246e34cec9c26636d0965942268e6d07c346671), [`b8837ee`](https://github.com/mastra-ai/mastra/commit/b8837ee77e2e84197609762bfabd8b3da326d30c), [`866cc2c`](https://github.com/mastra-ai/mastra/commit/866cc2cb1f0e3b314afab5194f69477fada745d1), [`5d950f7`](https://github.com/mastra-ai/mastra/commit/5d950f7bf426a215a1808f0abef7de5c8336ba1c), [`28c85b1`](https://github.com/mastra-ai/mastra/commit/28c85b184fc32b40f7f160483c982da6d388ecbd), [`e9a08fb`](https://github.com/mastra-ai/mastra/commit/e9a08fbef1ada7e50e961e2f54f55e8c10b4a45c), [`1d0a8a8`](https://github.com/mastra-ai/mastra/commit/1d0a8a8acf33203d5744fc429b090ad8598aa8ed), [`631ffd8`](https://github.com/mastra-ai/mastra/commit/631ffd82fed108648b448b28e6a90e38c5f53bf5), [`6bcbf8a`](https://github.com/mastra-ai/mastra/commit/6bcbf8a6774d5a53b21d61db8a45ce2593ca1616), [`aae2295`](https://github.com/mastra-ai/mastra/commit/aae2295838a2d329ad6640829e87934790ffe5b8), [`aa61f29`](https://github.com/mastra-ai/mastra/commit/aa61f29ff8095ce46a4ae16e46c4d8c79b2b685b), [`7ff3714`](https://github.com/mastra-ai/mastra/commit/7ff37148515439bb3be009a60e02c3e363299760), [`18c3a90`](https://github.com/mastra-ai/mastra/commit/18c3a90c9e48cf69500e308affeb8eba5860b2af), [`41d79a1`](https://github.com/mastra-ai/mastra/commit/41d79a14bd8cb6de1e2565fd0a04786bae2f211b), [`f35487b`](https://github.com/mastra-ai/mastra/commit/f35487bb2d46c636e22aa71d90025613ae38235a), 
[`6dc2192`](https://github.com/mastra-ai/mastra/commit/6dc21921aef0f0efab15cd0805fa3d18f277a76f), [`eeb3a3f`](https://github.com/mastra-ai/mastra/commit/eeb3a3f43aca10cf49479eed2a84b7d9ecea02ba), [`e673376`](https://github.com/mastra-ai/mastra/commit/e6733763ad1321aa7e5ae15096b9c2104f93b1f3), [`05f8d90`](https://github.com/mastra-ai/mastra/commit/05f8d9009290ce6aa03428b3add635268615db85), [`b2204c9`](https://github.com/mastra-ai/mastra/commit/b2204c98a42848bbfb6f0440f005dc2b6354f1cd), [`a1bf1e3`](https://github.com/mastra-ai/mastra/commit/a1bf1e385ed4c0ef6f11b56c5887442970d127f2), [`b6f647a`](https://github.com/mastra-ai/mastra/commit/b6f647ae2388e091f366581595feb957e37d5b40), [`0c57b8b`](https://github.com/mastra-ai/mastra/commit/0c57b8b0a69a97b5a4ae3f79be6c610f29f3cf7b), [`b081f27`](https://github.com/mastra-ai/mastra/commit/b081f272cf411716e1d6bd72ceac4bcee2657b19), [`4b8da97`](https://github.com/mastra-ai/mastra/commit/4b8da97a5ce306e97869df6c39535d9069e563db), [`0c09eac`](https://github.com/mastra-ai/mastra/commit/0c09eacb1926f64cfdc9ae5c6d63385cf8c9f72c), [`6b9b93d`](https://github.com/mastra-ai/mastra/commit/6b9b93d6f459d1ba6e36f163abf62a085ddb3d64), [`31b6067`](https://github.com/mastra-ai/mastra/commit/31b6067d0cc3ab10e1b29c36147f3b5266bc714a), [`797ac42`](https://github.com/mastra-ai/mastra/commit/797ac4276de231ad2d694d9aeca75980f6cd0419), [`0bc289e`](https://github.com/mastra-ai/mastra/commit/0bc289e2d476bf46c5b91c21969e8d0c6864691c), [`9b75a06`](https://github.com/mastra-ai/mastra/commit/9b75a06e53ebb0b950ba7c1e83a0142047185f46), [`4c3a1b1`](https://github.com/mastra-ai/mastra/commit/4c3a1b122ea083e003d71092f30f3b31680b01c0), [`256df35`](https://github.com/mastra-ai/mastra/commit/256df3571d62beb3ad4971faa432927cc140e603), [`85cc3b3`](https://github.com/mastra-ai/mastra/commit/85cc3b3b6f32ae4b083c26498f50d5b250ba944b), [`97ea28c`](https://github.com/mastra-ai/mastra/commit/97ea28c746e9e4147d56047bbb1c4a92417a3fec), 
[`d567299`](https://github.com/mastra-ai/mastra/commit/d567299cf81e02bd9d5221d4bc05967d6c224161), [`716ffe6`](https://github.com/mastra-ai/mastra/commit/716ffe68bed81f7c2690bc8581b9e140f7bf1c3d), [`8296332`](https://github.com/mastra-ai/mastra/commit/8296332de21c16e3dfc3d0b2d615720a6dc88f2f), [`4df2116`](https://github.com/mastra-ai/mastra/commit/4df211619dd922c047d396ca41cd7027c8c4c8e7), [`2219c1a`](https://github.com/mastra-ai/mastra/commit/2219c1acbd21da116da877f0036ffb985a9dd5a3), [`17c4145`](https://github.com/mastra-ai/mastra/commit/17c4145166099354545582335b5252bdfdfd908b)]:
+  - @mastra/core@1.11.0
+
 ## 1.7.0-alpha.0

 ### Minor Changes
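The resilient column handling described in the 1.7.0 patch notes can be sketched as follows. This is an illustrative sketch only, not the actual @mastra/libsql implementation; `filterToKnownColumns` and the `knownColumns` set are hypothetical names used for demonstration.

```typescript
// Illustrative sketch of "resilient column handling": before building an
// INSERT or UPDATE statement, keep only the record keys that exist as
// columns in the target table, silently dropping unknown fields so the
// SQL never references a column that hasn't been migrated yet.
// (Hypothetical helper, not the @mastra/libsql source.)
type Row = Record<string, unknown>;

function filterToKnownColumns(record: Row, knownColumns: Set<string>): Row {
  const filtered: Row = {};
  for (const [key, value] of Object.entries(record)) {
    if (knownColumns.has(key)) {
      filtered[key] = value; // unknown keys are dropped, not errors
    }
  }
  return filtered;
}

// Mirrors the changelog example: futureField is ignored rather than throwing.
const knownColumns = new Set(['id', 'title']);
const safeRecord = filterToKnownColumns(
  { id: '1', title: 'Hello', futureField: 'value' },
  knownColumns,
);
// safeRecord is { id: '1', title: 'Hello' }
```

Only the filtered record would then be used to build the parameterized SQL statement, which is what makes older database schemas forward compatible with newer domain packages.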
package/dist/docs/SKILL.md
CHANGED
@@ -3,7 +3,7 @@ name: mastra-libsql
 description: Documentation for @mastra/libsql. Use when working with @mastra/libsql APIs, configuration, or implementation.
 metadata:
   package: "@mastra/libsql"
-  version: "1.7.0-alpha.0"
+  version: "1.7.1-alpha.0"
 ---

 ## When to use

@@ -16,19 +16,19 @@ Read the individual reference documents for detailed explanations and code examp

 ### Docs

-- [Agent
-- [Agent
-- [Network
-- [Agent
-- [Memory
-- [Message
+- [Agent approval](references/docs-agents-agent-approval.md) - Learn how to require approvals, suspend tool execution, and automatically resume suspended tools while keeping humans in control of agent workflows.
+- [Agent memory](references/docs-agents-agent-memory.md) - Learn how to add memory to agents to store message history and maintain context across interactions.
+- [Network approval](references/docs-agents-network-approval.md) - Learn how to require approvals, suspend execution, and resume suspended networks while keeping humans in control of agent network workflows.
+- [Agent networks](references/docs-agents-networks.md) - Learn how to coordinate multiple agents, workflows, and tools using agent networks for complex, non-deterministic task execution.
+- [Memory processors](references/docs-memory-memory-processors.md) - Learn how to use memory processors in Mastra to filter, trim, and transform messages before they're sent to the language model to manage context window limits.
+- [Message history](references/docs-memory-message-history.md) - Learn how to configure message history in Mastra to store recent messages from the current conversation.
 - [Memory overview](references/docs-memory-overview.md) - Learn how Mastra's memory system works with working memory, message history, semantic recall, and observational memory.
-- [Semantic
-- [Storage](references/docs-memory-storage.md) - Configure storage for Mastra
-- [Working
-- [Observability
-- [Default
-- [Retrieval,
+- [Semantic recall](references/docs-memory-semantic-recall.md) - Learn how to use semantic recall in Mastra to retrieve relevant messages from past conversations using vector search and embeddings.
+- [Storage](references/docs-memory-storage.md) - Configure storage for Mastra to persist conversations and other runtime state.
+- [Working memory](references/docs-memory-working-memory.md) - Learn how to configure working memory in Mastra to store persistent user data, preferences.
+- [Observability overview](references/docs-observability-overview.md) - Monitor and debug applications with Mastra's Observability features.
+- [Default exporter](references/docs-observability-tracing-exporters-default.md) - Store traces locally for development and debugging
+- [Retrieval, semantic search, reranking](references/docs-rag-retrieval.md) - Guide on retrieval processes in Mastra's RAG systems, including semantic search, filtering, and re-ranking.
 - [Snapshots](references/docs-workflows-snapshots.md) - Learn how to save and resume workflow execution state with snapshots in Mastra

 ### Guides

@@ -39,12 +39,12 @@ Read the individual reference documents for detailed explanations and code examp

 - [Reference: Mastra.getMemory()](references/reference-core-getMemory.md) - Documentation for the `Mastra.getMemory()` method in Mastra, which retrieves a registered memory instance by its registry key.
 - [Reference: Mastra.listMemory()](references/reference-core-listMemory.md) - Documentation for the `Mastra.listMemory()` method in Mastra, which returns all registered memory instances.
-- [Reference: Mastra
-- [Reference: Memory
-- [Reference: Composite
-- [Reference: DynamoDB
-- [Reference: libSQL
-- [Reference: libSQL
+- [Reference: Mastra class](references/reference-core-mastra-class.md) - Documentation for the `Mastra` class in Mastra, the core entry point for managing agents, workflows, MCP servers, and server endpoints.
+- [Reference: Memory class](references/reference-memory-memory-class.md) - Documentation for the `Memory` class in Mastra, which provides a robust system for managing conversation history and thread-based message storage.
+- [Reference: Composite storage](references/reference-storage-composite.md) - Documentation for combining multiple storage backends in Mastra.
+- [Reference: DynamoDB storage](references/reference-storage-dynamodb.md) - Documentation for the DynamoDB storage implementation in Mastra, using a single-table design with ElectroDB.
+- [Reference: libSQL storage](references/reference-storage-libsql.md) - Documentation for the libSQL storage implementation in Mastra.
+- [Reference: libSQL vector store](references/reference-vectors-libsql.md) - Documentation for the LibSQLVector class in Mastra, which provides vector search using libSQL with vector extensions.


 Read [assets/SOURCE_MAP.json](assets/SOURCE_MAP.json) for source code references.
package/dist/docs/references/docs-agents-agent-approval.md
CHANGED

@@ -92,7 +92,7 @@ const handleDecline = async () => {
 }
 ```

-## Tool approval with generate()
+## Tool approval with `generate()`

 Tool approval also works with the `generate()` method for non-streaming use cases. When a tool requires approval during a `generate()` call, the method returns immediately instead of executing the tool.

@@ -278,7 +278,7 @@ const agent = new Agent({
   id: 'my-agent',
   name: 'My Agent',
   instructions: 'You are a helpful assistant',
-  model: 'openai/gpt-
+  model: 'openai/gpt-5-mini',
   tools: { weatherTool },
   memory: new Memory(),
   defaultOptions: {

@@ -343,7 +343,7 @@ const agent = new Agent({
   id: 'my-agent',
   name: 'My Agent',
   instructions: 'You are a helpful assistant',
-  model: 'openai/gpt-
+  model: 'openai/gpt-5-mini',
   tools: { weatherTool },
   memory: new Memory(),
   defaultOptions: {

@@ -445,7 +445,7 @@ const dataAgent = new Agent({
   id: 'data-agent',
   name: 'Data Agent',
   description: 'Handles database queries and user data retrieval',
-  model: 'openai/gpt-
+  model: 'openai/gpt-5-mini',
   tools: { findUserTool },
 })

@@ -454,7 +454,7 @@ const supervisorAgent = new Agent({
   name: 'Supervisor Agent',
   instructions: `You coordinate data retrieval tasks.
 Delegate to data-agent for user lookups.`,
-  model: 'openai/gpt-5.
+  model: 'openai/gpt-5.4',
   agents: { dataAgent },
   memory: new Memory(),
 })

@@ -504,7 +504,7 @@ for await (const chunk of stream.fullStream) {
 }
 ```

-### Using suspend() in supervisor pattern
+### Using `suspend()` in supervisor pattern

 Tools can also use [`suspend()`](#approval-using-suspend) to pause execution and return context to the user. This approach works through the supervisor delegation chain the same way `requireApproval` does — the suspension surfaces at the supervisor level:

@@ -553,7 +553,7 @@ for await (const chunk of stream.fullStream) {
 }
 ```

-### Tool approval with generate()
+### Tool approval with `generate()`

 Tool approval propagation also works with `generate()` in supervisor pattern:

package/dist/docs/references/docs-agents-agent-memory.md
CHANGED

@@ -131,7 +131,7 @@ const response = await memoryAgent.generate("What's my favorite color?", {

 To learn more about memory see the [Memory](https://mastra.ai/docs/memory/overview) documentation.

-## Observational
+## Observational memory

 For long-running conversations, raw message history grows until it fills the context window, degrading agent performance. [Observational Memory](https://mastra.ai/docs/memory/observational-memory) solves this by running background agents that compress old messages into dense observations, keeping the context window small while preserving long-term memory.

package/dist/docs/references/docs-agents-network-approval.md
CHANGED

@@ -1,4 +1,4 @@
-# Network
+# Network approval

 > **Deprecated:** Agent networks are deprecated and will be removed in a future release. Use the [supervisor pattern](https://mastra.ai/docs/agents/supervisor-agents) instead. See the [migration guide](https://mastra.ai/guides/migrations/network-to-supervisor) to upgrade.


@@ -176,7 +176,7 @@ const routingAgent = new Agent({
   id: 'routing-agent',
   name: 'Routing Agent',
   instructions: 'You coordinate tasks across multiple agents',
-  model: 'openai/gpt-
+  model: 'openai/gpt-5-mini',
   tools: { confirmationTool },
   memory: new Memory(),
   defaultNetworkOptions: {
package/dist/docs/references/docs-agents-networks.md
CHANGED

@@ -1,4 +1,4 @@
-# Agent
+# Agent networks

 > **Agent Network Deprecated — Supervisor Pattern Recommended:** Agent networks are deprecated and will be removed in a future release. The [supervisor pattern](https://mastra.ai/docs/agents/supervisor-agents) using `agent.stream()` or `agent.generate()` is now the recommended approach for coordinating multiple agents. It provides the same multi-agent coordination capabilities as `.network()` with significant improvements:
 >

@@ -48,7 +48,7 @@ export const routingAgent = new Agent({
 Always respond with a complete report—no bullet points.
 Write in full paragraphs, like a blog post.
 Do not answer with incomplete or uncertain information.`,
-  model: 'openai/gpt-5.
+  model: 'openai/gpt-5.4',
   agents: {
     researchAgent,
     writingAgent,
package/dist/docs/references/docs-memory-memory-processors.md
CHANGED

@@ -1,4 +1,4 @@
-# Memory
+# Memory processors

 Memory processors transform and filter messages as they pass through an agent with memory enabled. They manage context window limits, remove unnecessary content, and optimize the information sent to the language model.

@@ -6,11 +6,11 @@ When memory is enabled on an agent, Mastra adds memory processors to the agent's

 Memory processors are [processors](https://mastra.ai/docs/agents/processors) that operate specifically on memory-related messages and state.

-## Built-in
+## Built-in memory processors

 Mastra automatically adds these processors when memory is enabled:

-### MessageHistory
+### `MessageHistory`

 Retrieves message history and persists new messages.

@@ -45,7 +45,7 @@ const agent = new Agent({
   id: 'test-agent',
   name: 'Test Agent',
   instructions: 'You are a helpful assistant',
-  model: 'openai/gpt-
+  model: 'openai/gpt-5.4',
   memory: new Memory({
     storage: new LibSQLStore({
       id: 'memory-store',

@@ -56,7 +56,7 @@ const agent = new Agent({
 })
 ```

-### SemanticRecall
+### `SemanticRecall`

 Retrieves semantically relevant messages based on the current input and creates embeddings for new messages.

@@ -95,7 +95,7 @@ import { openai } from '@ai-sdk/openai'
 const agent = new Agent({
   name: 'semantic-agent',
   instructions: 'You are a helpful assistant with semantic memory',
-  model: 'openai/gpt-
+  model: 'openai/gpt-5.4',
   memory: new Memory({
     storage: new LibSQLStore({
       id: 'memory-store',

@@ -114,7 +114,7 @@ const agent = new Agent({
 })
 ```

-### WorkingMemory
+### `WorkingMemory`

 Manages working memory state across conversations.

@@ -148,7 +148,7 @@ import { openai } from '@ai-sdk/openai'
 const agent = new Agent({
   name: 'working-memory-agent',
   instructions: 'You are an assistant with working memory',
-  model: 'openai/gpt-
+  model: 'openai/gpt-5.4',
   memory: new Memory({
     storage: new LibSQLStore({
       id: 'memory-store',

@@ -159,7 +159,7 @@ const agent = new Agent({
 })
 ```

-## Manual
+## Manual control and deduplication

 If you manually add a memory processor to `inputProcessors` or `outputProcessors`, Mastra **won't** automatically add it. This gives you full control over processor ordering:

@@ -180,7 +180,7 @@ const customMessageHistory = new MessageHistory({
 const agent = new Agent({
   name: 'custom-memory-agent',
   instructions: 'You are a helpful assistant',
-  model: 'openai/gpt-
+  model: 'openai/gpt-5.4',
   memory: new Memory({
     storage: new LibSQLStore({ id: 'memory-store', url: 'file:memory.db' }),
     lastMessages: 10, // This would normally add MessageHistory(10)

@@ -192,7 +192,7 @@ const agent = new Agent({
 })
 ```

-## Processor
+## Processor execution order

 Understanding the execution order is important when combining guardrails with memory:

@@ -218,7 +218,7 @@ This means memory loads message history before your processors can validate or f

 This ordering is designed to be **safe by default**: if your output guardrail calls `abort()`, the memory processors never run and **no messages are saved**.

-## Guardrails and
+## Guardrails and memory

 The default execution order provides safe guardrail behavior:

@@ -250,7 +250,7 @@ const contentBlocker = {
 const agent = new Agent({
   name: 'safe-agent',
   instructions: 'You are a helpful assistant',
-  model: 'openai/gpt-
+  model: 'openai/gpt-5.4',
   memory: new Memory({ lastMessages: 10 }),
   // Your guardrail runs BEFORE memory saves
   outputProcessors: [contentBlocker],

@@ -289,7 +289,7 @@ const inputValidator = {
 const agent = new Agent({
   name: 'validated-agent',
   instructions: 'You are a helpful assistant',
-  model: 'openai/gpt-
+  model: 'openai/gpt-5.4',
   memory: new Memory({ lastMessages: 10 }),
   // Your guardrail runs AFTER memory loads history
   inputProcessors: [inputValidator],
package/dist/docs/references/docs-memory-message-history.md
CHANGED

@@ -1,4 +1,4 @@
-# Message
+# Message history

 Message history is the most basic and important form of memory. It gives the LLM a view of recent messages in the context window, enabling your agent to reference earlier exchanges and respond coherently.

@@ -103,7 +103,7 @@ You can use this history in two ways:
 - **Automatic inclusion** - Mastra automatically fetches and includes recent messages in the context window. By default, it includes the last 10 messages, keeping agents grounded in the conversation. You can adjust this number with `lastMessages`, but in most cases you don't need to think about it.
 - [**Manual querying**](#querying) - For more control, use the `recall()` function to query threads and messages directly. This lets you choose exactly which memories are included in the context window, or fetch messages to render conversation history in your UI.

-## Accessing
+## Accessing memory

 To access memory functions for querying, cloning, or deleting threads and messages, call `getMemory()` on an agent:

package/dist/docs/references/docs-memory-overview.md
CHANGED

@@ -5,9 +5,9 @@ Memory enables your agent to remember user messages, agent replies, and tool res
 Mastra supports four complementary memory types:

 - [**Message history**](https://mastra.ai/docs/memory/message-history) - keeps recent messages from the current conversation so they can be rendered in the UI and used to maintain short-term continuity within the exchange.
+- [**Observational memory**](https://mastra.ai/docs/memory/observational-memory) - uses background Observer and Reflector agents to maintain a dense observation log that replaces raw message history as it grows, keeping the context window small while preserving long-term memory across conversations.
 - [**Working memory**](https://mastra.ai/docs/memory/working-memory) - stores persistent, structured user data such as names, preferences, and goals.
 - [**Semantic recall**](https://mastra.ai/docs/memory/semantic-recall) - retrieves relevant messages from older conversations based on semantic meaning rather than exact keywords, mirroring how humans recall information by association. Requires a [vector database](https://mastra.ai/docs/memory/semantic-recall) and an [embedding model](https://mastra.ai/docs/memory/semantic-recall).
-- [**Observational memory**](https://mastra.ai/docs/memory/observational-memory) - uses background Observer and Reflector agents to maintain a dense observation log that replaces raw message history as it grows, keeping the context window small while preserving long-term memory across conversations.

 If the combined memory exceeds the model's context limit, [memory processors](https://mastra.ai/docs/memory/memory-processors) can filter, trim, or prioritize content so the most relevant information is preserved.

@@ -16,9 +16,9 @@ If the combined memory exceeds the model's context limit, [memory processors](ht
 Choose a memory option to get started:

 - [Message history](https://mastra.ai/docs/memory/message-history)
+- [Observational memory](https://mastra.ai/docs/memory/observational-memory)
 - [Working memory](https://mastra.ai/docs/memory/working-memory)
 - [Semantic recall](https://mastra.ai/docs/memory/semantic-recall)
-- [Observational memory](https://mastra.ai/docs/memory/observational-memory)

 ## Storage

@@ -41,5 +41,5 @@ This visibility helps you understand why an agent made specific decisions and ve
 ## Next steps

 - Learn more about [Storage](https://mastra.ai/docs/memory/storage) providers and configuration options
-- Add [Message history](https://mastra.ai/docs/memory/message-history), [
+- Add [Message history](https://mastra.ai/docs/memory/message-history), [Observational memory](https://mastra.ai/docs/memory/observational-memory), [Working memory](https://mastra.ai/docs/memory/working-memory), or [Semantic recall](https://mastra.ai/docs/memory/semantic-recall)
 - Visit [Memory configuration reference](https://mastra.ai/reference/memory/memory-class) for all available options
package/dist/docs/references/docs-memory-semantic-recall.md
CHANGED

@@ -1,10 +1,10 @@
-# Semantic
+# Semantic recall

 If you ask your friend what they did last weekend, they will search in their memory for events associated with "last weekend" and then tell you what they did. That's sort of like how semantic recall works in Mastra.

 > **Watch 📹:** What semantic recall is, how it works, and how to configure it in Mastra → [YouTube (5 minutes)](https://youtu.be/UVZtK8cK8xQ)

-## How
+## How semantic recall works

 Semantic recall is RAG-based search that helps agents maintain context across longer interactions when messages are no longer within [recent message history](https://mastra.ai/docs/memory/message-history).

@@ -16,7 +16,7 @@ When it's enabled, new messages are used to query a vector DB for semantically s

 After getting a response from the LLM, all new messages (user, assistant, and tool calls/results) are inserted into the vector DB to be recalled in later interactions.

-## Quick
+## Quick start

 Semantic recall is enabled by default, so if you give your agent memory it will be included:

@@ -28,12 +28,12 @@ const agent = new Agent({
   id: 'support-agent',
   name: 'SupportAgent',
   instructions: 'You are a helpful support agent.',
-  model: 'openai/gpt-5.
+  model: 'openai/gpt-5.4',
   memory: new Memory(),
 })
 ```

-## Using the recall()
+## Using the `recall()` method

 While `listMessages` retrieves messages by thread ID with basic pagination, [`recall()`](https://mastra.ai/reference/memory/recall) adds support for **semantic search**. When you need to find messages by meaning rather than recency, use `recall()` with a `vectorSearchString`:

@@ -182,7 +182,7 @@ const agent = new Agent({
 })
 ```

-### Using FastEmbed (
+### Using FastEmbed (local)

 To use FastEmbed (a local embedding model), install `@mastra/fastembed`:

@@ -224,7 +224,7 @@ const agent = new Agent({
 })
 ```

-## PostgreSQL
+## PostgreSQL index optimization

 When using PostgreSQL as your vector store, you can optimize semantic recall performance by configuring the vector index. This is particularly important for large-scale deployments with thousands of messages.

@@ -283,6 +283,6 @@ You might want to disable semantic recall in scenarios like:
 - When message history provides sufficient context for the current conversation.
 - In performance-sensitive applications, like realtime two-way audio, where the added latency of creating embeddings and running vector queries is noticeable.

-## Viewing
+## Viewing recalled messages

 When tracing is enabled, any messages retrieved via semantic recall will appear in the agent's trace output, alongside recent message history (if configured).
@@ -1,6 +1,6 @@
 # Storage

-For agents to remember previous interactions, Mastra needs a
+For agents to remember previous interactions, Mastra needs a storage adapter. Use one of the [supported providers](#supported-providers) and pass it to your Mastra instance.

 ```typescript
 import { Mastra } from '@mastra/core'
@@ -24,7 +24,7 @@ export const mastra = new Mastra({

 This configures instance-level storage, which all agents share by default. You can also configure [agent-level storage](#agent-level-storage) for isolated data boundaries.

-Mastra automatically
+Mastra automatically initializes the necessary storage structures on first interaction. See [Storage Overview](https://mastra.ai/reference/storage/overview) for domain coverage and the schema used by the built-in database-backed domains.

 ## Supported providers
@@ -35,7 +35,7 @@ Each provider page includes installation instructions, configuration parameters,
 - [MongoDB](https://mastra.ai/reference/storage/mongodb)
 - [Upstash](https://mastra.ai/reference/storage/upstash)
 - [Cloudflare D1](https://mastra.ai/reference/storage/cloudflare-d1)
-- [Cloudflare Durable Objects](https://mastra.ai/reference/storage/cloudflare)
+- [Cloudflare KV & Durable Objects](https://mastra.ai/reference/storage/cloudflare)
 - [Convex](https://mastra.ai/reference/storage/convex)
 - [DynamoDB](https://mastra.ai/reference/storage/dynamodb)
 - [LanceDB](https://mastra.ai/reference/storage/lance)
@@ -49,7 +49,7 @@ Storage can be configured at the instance level (shared by all agents) or at the

 ### Instance-level storage

-Add storage to your Mastra instance so all agents, workflows, observability traces and scores share the same
+Add storage to your Mastra instance so all agents, workflows, observability traces, and scores share the same storage backend:

 ```typescript
 import { Mastra } from '@mastra/core'
@@ -71,7 +71,7 @@ This is useful when all primitives share the same storage backend and have simil

 #### Composite storage

-[Composite storage](https://mastra.ai/reference/storage/composite) is an alternative way to configure instance-level storage. Use `MastraCompositeStore` to
+[Composite storage](https://mastra.ai/reference/storage/composite) is an alternative way to configure instance-level storage. Use `MastraCompositeStore` to route `memory` and any other [supported domains](https://mastra.ai/reference/storage/composite) to different storage providers.

 ```typescript
 import { Mastra } from '@mastra/core'
@@ -180,7 +180,7 @@ export const agent = new Agent({
   memory: new Memory({
     options: {
       generateTitle: {
-        model: 'openai/gpt-
+        model: 'openai/gpt-5-mini',
         instructions: 'Generate a 1 word title',
       },
     },
@@ -13,7 +13,7 @@ Working memory can persist at two different scopes:

 **Important:** Switching between scopes means the agent won't see memory from the other scope - thread-scoped memory is completely separate from resource-scoped memory.

-## Quick
+## Quick start

 Here's a minimal example of setting up an agent with working memory:
@@ -26,7 +26,7 @@ const agent = new Agent({
   id: 'personal-assistant',
   name: 'PersonalAssistant',
   instructions: 'You are a helpful personal assistant.',
-  model: 'openai/gpt-5.
+  model: 'openai/gpt-5.4',
   memory: new Memory({
     options: {
       workingMemory: {
@@ -37,13 +37,13 @@ const agent = new Agent({
 })
 ```

-## How it
+## How it works

 Working memory is a block of Markdown text that the agent is able to update over time to store continuously relevant information:

 [YouTube video player](https://www.youtube-nocookie.com/embed/UMy_JHLf1n8)

-## Memory
+## Memory persistence scopes

 Working memory can operate in two different scopes, allowing you to choose how memory persists across conversations:
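The two scopes in the hunk above amount to two different keys for the same stored value. A hypothetical sketch (not Mastra's storage schema): thread-scoped memory keys on the thread ID, while resource-scoped memory keys on the user/resource ID and therefore follows the user across threads:

```typescript
// Hypothetical illustration of working-memory scoping (not Mastra's
// actual storage layout): the scope decides which ID keys the record.
type Scope = "thread" | "resource";

class WorkingMemoryStore {
  private records = new Map<string, string>();

  constructor(private scope: Scope) {}

  // Thread scope keys on threadId; resource scope keys on resourceId.
  private key(threadId: string, resourceId: string): string {
    return this.scope === "thread" ? `thread:${threadId}` : `resource:${resourceId}`;
  }

  save(threadId: string, resourceId: string, memory: string): void {
    this.records.set(this.key(threadId, resourceId), memory);
  }

  load(threadId: string, resourceId: string): string | undefined {
    return this.records.get(this.key(threadId, resourceId));
  }
}

const threadScoped = new WorkingMemoryStore("thread");
threadScoped.save("thread-1", "user-42", "Name: Sam");
// A new thread for the same user starts blank under thread scope:
console.log(threadScoped.load("thread-2", "user-42")); // undefined

const resourceScoped = new WorkingMemoryStore("resource");
resourceScoped.save("thread-1", "user-42", "Name: Sam");
// Under resource scope, the same user's memory follows them to new threads:
console.log(resourceScoped.load("thread-2", "user-42")); // "Name: Sam"
```

This also makes the "Important" note above concrete: the two scopes write under different keys, so records stored under one scope are invisible to the other.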
@@ -117,7 +117,7 @@ const memory = new Memory({
 - Temporary or session-specific information
 - Workflows where each thread needs working memory but threads are ephemeral and not related to each other

-## Storage
+## Storage adapter support

 Resource-scoped working memory requires specific storage adapters that support the `mastra_resources` table:
@@ -128,7 +128,7 @@ Resource-scoped working memory requires specific storage adapters that support t
 - **Upstash** (`@mastra/upstash`)
 - **MongoDB** (`@mastra/mongodb`)

-## Custom
+## Custom templates

 Templates guide the agent on what information to track and update in working memory. While a default template is used if none is provided, you'll typically want to define a custom template tailored to your agent's specific use case to ensure it remembers the most relevant information.
@@ -142,7 +142,7 @@ const memory = new Memory({
       template: `
 # User Profile

-## Personal
+## Personal info

 - Name:
 - Location:
@@ -156,7 +156,7 @@ const memory = new Memory({
 - [Deadline 1]: [Date]
 - [Deadline 2]: [Date]

-## Session
+## Session state

 - Last Task Discussed:
 - Open Questions:
@@ -168,7 +168,7 @@ const memory = new Memory({
 })
 ```

-## Designing
+## Designing effective templates

 A well-structured template keeps the information straightforward for the agent to parse and update. Treat the template as a short form that you want the assistant to keep up to date.
@@ -206,7 +206,7 @@ const paragraphMemory = new Memory({
 })
 ```

-## Structured
+## Structured working memory

 Working memory can also be defined using a structured schema instead of a Markdown template. This allows you to specify the exact fields and types that should be tracked, using a [Zod](https://zod.dev/) schema. When using a schema, the agent will see and update working memory as a JSON object matching your schema.
@@ -265,20 +265,20 @@ Schema-based working memory uses **merge semantics**, meaning the agent only nee
 - **Set a field to `null` to delete it:** This explicitly removes the field from memory
 - **Arrays are replaced entirely:** When an array field is provided, it replaces the existing array (arrays aren't merged element-by-element)
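The merge rules in the bullets above can be sketched as a plain function (an illustrative reimplementation, not Mastra's code): provided fields overwrite, `null` deletes, omitted fields persist, and arrays are swapped wholesale:

```typescript
// Illustrative sketch of schema-based merge semantics (not Mastra's
// implementation): apply a partial update to existing working memory.
type WorkingMemory = Record<string, unknown>;

function mergeWorkingMemory(existing: WorkingMemory, update: WorkingMemory): WorkingMemory {
  const result: WorkingMemory = { ...existing }; // omitted fields are preserved
  for (const [key, value] of Object.entries(update)) {
    if (value === null) {
      delete result[key]; // null explicitly deletes a field
    } else if (Array.isArray(value)) {
      result[key] = value; // arrays are replaced entirely, not merged
    } else {
      result[key] = value; // other values simply overwrite
    }
  }
  return result;
}

const before = { name: "Sam", city: "Berlin", tags: ["vip", "beta"] };
const after = mergeWorkingMemory(before, { city: null, tags: ["beta"] });
console.log(after); // { name: "Sam", tags: ["beta"] }
```

Contrast this with template mode's replace semantics, where an update like `{ tags: ["beta"] }` would stand alone and `name` would be lost unless the agent re-sent it.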

-## Choosing
+## Choosing between template and schema

 - Use a **template** (Markdown) if you want the agent to maintain memory as a free-form text block, such as a user profile or scratchpad. Templates use **replace semantics** — the agent must provide the complete memory content on each update.
 - Use a **schema** if you need structured, type-safe data that can be validated and programmatically accessed as JSON. Schemas use **merge semantics** — the agent only provides fields to update, and existing fields are preserved.
 - Only one mode can be active at a time: setting both `template` and `schema` isn't supported.

-## Example: Multi-step
+## Example: Multi-step retention

 Below is a simplified view of how the `User Profile` template updates across a short user conversation:

 ```nohighlight
 # User Profile

-## Personal
+## Personal info

 - Name:
 - Location:
@@ -303,7 +303,7 @@ The agent can now refer to `Sam` or `Berlin` in later responses without requesti

 If your agent isn't properly updating working memory when you expect it to, you can add system instructions on _how_ and _when_ to use this template in your agent's `instructions` setting.

-## Setting
+## Setting initial working memory

 While agents typically update working memory through the `updateWorkingMemory` tool, you can also set initial working memory programmatically when creating or updating threads. This is useful for injecting user data (like their name, preferences, or other info) that you want available to the agent without passing it in every request.
@@ -372,7 +372,7 @@ await memory.updateWorkingMemory({
 })
 ```

-## Read-
+## Read-only working memory

 In some scenarios, you may want an agent to have access to working memory data without the ability to modify it. This is useful for: