@mastra/libsql 1.7.0-alpha.0 → 1.7.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -1,5 +1,24 @@
  # @mastra/libsql
 
+ ## 1.7.0
+
+ ### Minor Changes
+
+ - Added `requestContext` column to the spans table. Request context data from tracing is now persisted alongside other span data. ([#14020](https://github.com/mastra-ai/mastra/pull/14020))
+
+ - Added `requestContext` and `requestContextSchema` column support to dataset storage. Dataset items now persist request context alongside input and ground truth data. ([#13938](https://github.com/mastra-ai/mastra/pull/13938))
+
+ ### Patch Changes
+
+ - Added resilient column handling to insert and update operations. Unknown columns in records are now silently dropped instead of causing SQL errors, ensuring forward compatibility when newer domain packages add fields that haven't been migrated yet. ([#14021](https://github.com/mastra-ai/mastra/pull/14021))
+
+ For example, calling `db.insert({ tableName, record: { id: '1', title: 'Hello', futureField: 'value' } })` will silently ignore `futureField` if it doesn't exist in the database table, rather than throwing. The same applies to `update` — unknown fields in the data payload are dropped before building the SQL statement.
+
+ - Fixed slow semantic recall in the libsql, Cloudflare D1, and ClickHouse storage adapters. Recall performance no longer degrades as threads grow larger. (Fixes #11702) ([#14022](https://github.com/mastra-ai/mastra/pull/14022))
+
+ - Updated dependencies [[`4f71b43`](https://github.com/mastra-ai/mastra/commit/4f71b436a4a6b8839842d8da47b57b84509af56c), [`a070277`](https://github.com/mastra-ai/mastra/commit/a07027766ce195ba74d0783116d894cbab25d44c), [`b628b91`](https://github.com/mastra-ai/mastra/commit/b628b9128b372c0f54214d902b07279f03443900), [`332c014`](https://github.com/mastra-ai/mastra/commit/332c014e076b81edf7fe45b58205882726415e90), [`6b63153`](https://github.com/mastra-ai/mastra/commit/6b63153878ea841c0f4ce632ba66bb33e57e9c1b), [`4246e34`](https://github.com/mastra-ai/mastra/commit/4246e34cec9c26636d0965942268e6d07c346671), [`b8837ee`](https://github.com/mastra-ai/mastra/commit/b8837ee77e2e84197609762bfabd8b3da326d30c), [`866cc2c`](https://github.com/mastra-ai/mastra/commit/866cc2cb1f0e3b314afab5194f69477fada745d1), [`5d950f7`](https://github.com/mastra-ai/mastra/commit/5d950f7bf426a215a1808f0abef7de5c8336ba1c), [`28c85b1`](https://github.com/mastra-ai/mastra/commit/28c85b184fc32b40f7f160483c982da6d388ecbd), [`e9a08fb`](https://github.com/mastra-ai/mastra/commit/e9a08fbef1ada7e50e961e2f54f55e8c10b4a45c), [`1d0a8a8`](https://github.com/mastra-ai/mastra/commit/1d0a8a8acf33203d5744fc429b090ad8598aa8ed), [`631ffd8`](https://github.com/mastra-ai/mastra/commit/631ffd82fed108648b448b28e6a90e38c5f53bf5), [`6bcbf8a`](https://github.com/mastra-ai/mastra/commit/6bcbf8a6774d5a53b21d61db8a45ce2593ca1616), [`aae2295`](https://github.com/mastra-ai/mastra/commit/aae2295838a2d329ad6640829e87934790ffe5b8), [`aa61f29`](https://github.com/mastra-ai/mastra/commit/aa61f29ff8095ce46a4ae16e46c4d8c79b2b685b), [`7ff3714`](https://github.com/mastra-ai/mastra/commit/7ff37148515439bb3be009a60e02c3e363299760), [`18c3a90`](https://github.com/mastra-ai/mastra/commit/18c3a90c9e48cf69500e308affeb8eba5860b2af), [`41d79a1`](https://github.com/mastra-ai/mastra/commit/41d79a14bd8cb6de1e2565fd0a04786bae2f211b), [`f35487b`](https://github.com/mastra-ai/mastra/commit/f35487bb2d46c636e22aa71d90025613ae38235a), 
[`6dc2192`](https://github.com/mastra-ai/mastra/commit/6dc21921aef0f0efab15cd0805fa3d18f277a76f), [`eeb3a3f`](https://github.com/mastra-ai/mastra/commit/eeb3a3f43aca10cf49479eed2a84b7d9ecea02ba), [`e673376`](https://github.com/mastra-ai/mastra/commit/e6733763ad1321aa7e5ae15096b9c2104f93b1f3), [`05f8d90`](https://github.com/mastra-ai/mastra/commit/05f8d9009290ce6aa03428b3add635268615db85), [`b2204c9`](https://github.com/mastra-ai/mastra/commit/b2204c98a42848bbfb6f0440f005dc2b6354f1cd), [`a1bf1e3`](https://github.com/mastra-ai/mastra/commit/a1bf1e385ed4c0ef6f11b56c5887442970d127f2), [`b6f647a`](https://github.com/mastra-ai/mastra/commit/b6f647ae2388e091f366581595feb957e37d5b40), [`0c57b8b`](https://github.com/mastra-ai/mastra/commit/0c57b8b0a69a97b5a4ae3f79be6c610f29f3cf7b), [`b081f27`](https://github.com/mastra-ai/mastra/commit/b081f272cf411716e1d6bd72ceac4bcee2657b19), [`4b8da97`](https://github.com/mastra-ai/mastra/commit/4b8da97a5ce306e97869df6c39535d9069e563db), [`0c09eac`](https://github.com/mastra-ai/mastra/commit/0c09eacb1926f64cfdc9ae5c6d63385cf8c9f72c), [`6b9b93d`](https://github.com/mastra-ai/mastra/commit/6b9b93d6f459d1ba6e36f163abf62a085ddb3d64), [`31b6067`](https://github.com/mastra-ai/mastra/commit/31b6067d0cc3ab10e1b29c36147f3b5266bc714a), [`797ac42`](https://github.com/mastra-ai/mastra/commit/797ac4276de231ad2d694d9aeca75980f6cd0419), [`0bc289e`](https://github.com/mastra-ai/mastra/commit/0bc289e2d476bf46c5b91c21969e8d0c6864691c), [`9b75a06`](https://github.com/mastra-ai/mastra/commit/9b75a06e53ebb0b950ba7c1e83a0142047185f46), [`4c3a1b1`](https://github.com/mastra-ai/mastra/commit/4c3a1b122ea083e003d71092f30f3b31680b01c0), [`256df35`](https://github.com/mastra-ai/mastra/commit/256df3571d62beb3ad4971faa432927cc140e603), [`85cc3b3`](https://github.com/mastra-ai/mastra/commit/85cc3b3b6f32ae4b083c26498f50d5b250ba944b), [`97ea28c`](https://github.com/mastra-ai/mastra/commit/97ea28c746e9e4147d56047bbb1c4a92417a3fec), 
[`d567299`](https://github.com/mastra-ai/mastra/commit/d567299cf81e02bd9d5221d4bc05967d6c224161), [`716ffe6`](https://github.com/mastra-ai/mastra/commit/716ffe68bed81f7c2690bc8581b9e140f7bf1c3d), [`8296332`](https://github.com/mastra-ai/mastra/commit/8296332de21c16e3dfc3d0b2d615720a6dc88f2f), [`4df2116`](https://github.com/mastra-ai/mastra/commit/4df211619dd922c047d396ca41cd7027c8c4c8e7), [`2219c1a`](https://github.com/mastra-ai/mastra/commit/2219c1acbd21da116da877f0036ffb985a9dd5a3), [`17c4145`](https://github.com/mastra-ai/mastra/commit/17c4145166099354545582335b5252bdfdfd908b)]:
+ - @mastra/core@1.11.0
+
  ## 1.7.0-alpha.0
 
  ### Minor Changes
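The resilient column handling called out in the patch notes above can be sketched roughly like this. This is a toy illustration, not Mastra's implementation: `filterToKnownColumns`, `buildInsertSql`, and the hard-coded `knownColumns` set are hypothetical stand-ins for the adapter's real schema lookup.

```typescript
// Toy sketch of resilient column handling (not Mastra's actual code):
// record keys are filtered against the table's known columns before the
// SQL statement is built, so unknown fields are dropped instead of
// producing a SQL error.
function filterToKnownColumns(
  record: Record<string, unknown>,
  knownColumns: Set<string>,
): Record<string, unknown> {
  return Object.fromEntries(
    Object.entries(record).filter(([key]) => knownColumns.has(key)),
  );
}

function buildInsertSql(tableName: string, record: Record<string, unknown>): string {
  const cols = Object.keys(record);
  const placeholders = cols.map(() => "?").join(", ");
  return `INSERT INTO ${tableName} (${cols.join(", ")}) VALUES (${placeholders})`;
}

// `futureField` is silently ignored because it isn't a known column.
const safe = filterToKnownColumns(
  { id: "1", title: "Hello", futureField: "value" },
  new Set(["id", "title"]),
);
const sql = buildInsertSql("my_table", safe);
```

The same filtering step would run before an `UPDATE` statement is assembled, which is what makes newer domain packages' extra fields harmless against an older schema.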
@@ -3,7 +3,7 @@ name: mastra-libsql
  description: Documentation for @mastra/libsql. Use when working with @mastra/libsql APIs, configuration, or implementation.
  metadata:
  package: "@mastra/libsql"
- version: "1.7.0-alpha.0"
+ version: "1.7.0"
  ---
 
  ## When to use
@@ -16,19 +16,19 @@ Read the individual reference documents for detailed explanations and code examp
 
  ### Docs
 
- - [Agent Approval](references/docs-agents-agent-approval.md) - Learn how to require approvals, suspend tool execution, and automatically resume suspended tools while keeping humans in control of agent workflows.
- - [Agent Memory](references/docs-agents-agent-memory.md) - Learn how to add memory to agents to store message history and maintain context across interactions.
- - [Network Approval](references/docs-agents-network-approval.md) - Learn how to require approvals, suspend execution, and resume suspended networks while keeping humans in control of agent network workflows.
- - [Agent Networks](references/docs-agents-networks.md) - Learn how to coordinate multiple agents, workflows, and tools using agent networks for complex, non-deterministic task execution.
- - [Memory Processors](references/docs-memory-memory-processors.md) - Learn how to use memory processors in Mastra to filter, trim, and transform messages before they're sent to the language model to manage context window limits.
- - [Message History](references/docs-memory-message-history.md) - Learn how to configure message history in Mastra to store recent messages from the current conversation.
+ - [Agent approval](references/docs-agents-agent-approval.md) - Learn how to require approvals, suspend tool execution, and automatically resume suspended tools while keeping humans in control of agent workflows.
+ - [Agent memory](references/docs-agents-agent-memory.md) - Learn how to add memory to agents to store message history and maintain context across interactions.
+ - [Network approval](references/docs-agents-network-approval.md) - Learn how to require approvals, suspend execution, and resume suspended networks while keeping humans in control of agent network workflows.
+ - [Agent networks](references/docs-agents-networks.md) - Learn how to coordinate multiple agents, workflows, and tools using agent networks for complex, non-deterministic task execution.
+ - [Memory processors](references/docs-memory-memory-processors.md) - Learn how to use memory processors in Mastra to filter, trim, and transform messages before they're sent to the language model to manage context window limits.
+ - [Message history](references/docs-memory-message-history.md) - Learn how to configure message history in Mastra to store recent messages from the current conversation.
  - [Memory overview](references/docs-memory-overview.md) - Learn how Mastra's memory system works with working memory, message history, semantic recall, and observational memory.
- - [Semantic Recall](references/docs-memory-semantic-recall.md) - Learn how to use semantic recall in Mastra to retrieve relevant messages from past conversations using vector search and embeddings.
+ - [Semantic recall](references/docs-memory-semantic-recall.md) - Learn how to use semantic recall in Mastra to retrieve relevant messages from past conversations using vector search and embeddings.
  - [Storage](references/docs-memory-storage.md) - Configure storage for Mastra's memory system to persist conversations, workflows, and traces.
- - [Working Memory](references/docs-memory-working-memory.md) - Learn how to configure working memory in Mastra to store persistent user data, preferences.
- - [Observability Overview](references/docs-observability-overview.md) - Monitor and debug applications with Mastra's Observability features.
- - [Default Exporter](references/docs-observability-tracing-exporters-default.md) - Store traces locally for development and debugging
- - [Retrieval, Semantic Search, Reranking](references/docs-rag-retrieval.md) - Guide on retrieval processes in Mastra's RAG systems, including semantic search, filtering, and re-ranking.
+ - [Working memory](references/docs-memory-working-memory.md) - Learn how to configure working memory in Mastra to store persistent user data, preferences.
+ - [Observability overview](references/docs-observability-overview.md) - Monitor and debug applications with Mastra's Observability features.
+ - [Default exporter](references/docs-observability-tracing-exporters-default.md) - Store traces locally for development and debugging
+ - [Retrieval, semantic search, reranking](references/docs-rag-retrieval.md) - Guide on retrieval processes in Mastra's RAG systems, including semantic search, filtering, and re-ranking.
  - [Snapshots](references/docs-workflows-snapshots.md) - Learn how to save and resume workflow execution state with snapshots in Mastra
 
  ### Guides
@@ -39,12 +39,12 @@ Read the individual reference documents for detailed explanations and code examp
 
  - [Reference: Mastra.getMemory()](references/reference-core-getMemory.md) - Documentation for the `Mastra.getMemory()` method in Mastra, which retrieves a registered memory instance by its registry key.
  - [Reference: Mastra.listMemory()](references/reference-core-listMemory.md) - Documentation for the `Mastra.listMemory()` method in Mastra, which returns all registered memory instances.
- - [Reference: Mastra Class](references/reference-core-mastra-class.md) - Documentation for the `Mastra` class in Mastra, the core entry point for managing agents, workflows, MCP servers, and server endpoints.
- - [Reference: Memory Class](references/reference-memory-memory-class.md) - Documentation for the `Memory` class in Mastra, which provides a robust system for managing conversation history and thread-based message storage.
- - [Reference: Composite Storage](references/reference-storage-composite.md) - Documentation for combining multiple storage backends in Mastra.
- - [Reference: DynamoDB Storage](references/reference-storage-dynamodb.md) - Documentation for the DynamoDB storage implementation in Mastra, using a single-table design with ElectroDB.
- - [Reference: libSQL Storage](references/reference-storage-libsql.md) - Documentation for the libSQL storage implementation in Mastra.
- - [Reference: libSQL Vector Store](references/reference-vectors-libsql.md) - Documentation for the LibSQLVector class in Mastra, which provides vector search using libSQL with vector extensions.
+ - [Reference: Mastra class](references/reference-core-mastra-class.md) - Documentation for the `Mastra` class in Mastra, the core entry point for managing agents, workflows, MCP servers, and server endpoints.
+ - [Reference: Memory class](references/reference-memory-memory-class.md) - Documentation for the `Memory` class in Mastra, which provides a robust system for managing conversation history and thread-based message storage.
+ - [Reference: Composite storage](references/reference-storage-composite.md) - Documentation for combining multiple storage backends in Mastra.
+ - [Reference: DynamoDB storage](references/reference-storage-dynamodb.md) - Documentation for the DynamoDB storage implementation in Mastra, using a single-table design with ElectroDB.
+ - [Reference: libSQL storage](references/reference-storage-libsql.md) - Documentation for the libSQL storage implementation in Mastra.
+ - [Reference: libSQL vector store](references/reference-vectors-libsql.md) - Documentation for the LibSQLVector class in Mastra, which provides vector search using libSQL with vector extensions.
 
 
  Read [assets/SOURCE_MAP.json](assets/SOURCE_MAP.json) for source code references.
@@ -1,5 +1,5 @@
  {
- "version": "1.7.0-alpha.0",
+ "version": "1.7.0",
  "package": "@mastra/libsql",
  "exports": {},
  "modules": {}
@@ -92,7 +92,7 @@ const handleDecline = async () => {
  }
  ```
 
- ## Tool approval with generate()
+ ## Tool approval with `generate()`
 
  Tool approval also works with the `generate()` method for non-streaming use cases. When a tool requires approval during a `generate()` call, the method returns immediately instead of executing the tool.
 
@@ -504,7 +504,7 @@ for await (const chunk of stream.fullStream) {
  }
  ```
 
- ### Using suspend() in supervisor pattern
+ ### Using `suspend()` in supervisor pattern
 
  Tools can also use [`suspend()`](#approval-using-suspend) to pause execution and return context to the user. This approach works through the supervisor delegation chain the same way `requireApproval` does — the suspension surfaces at the supervisor level:
 
@@ -553,7 +553,7 @@ for await (const chunk of stream.fullStream) {
  }
  ```
 
- ### Tool approval with generate()
+ ### Tool approval with `generate()`
 
  Tool approval propagation also works with `generate()` in supervisor pattern:
 
@@ -131,7 +131,7 @@ const response = await memoryAgent.generate("What's my favorite color?", {
 
  To learn more about memory see the [Memory](https://mastra.ai/docs/memory/overview) documentation.
 
- ## Observational Memory
+ ## Observational memory
 
  For long-running conversations, raw message history grows until it fills the context window, degrading agent performance. [Observational Memory](https://mastra.ai/docs/memory/observational-memory) solves this by running background agents that compress old messages into dense observations, keeping the context window small while preserving long-term memory.
 
@@ -1,4 +1,4 @@
- # Network Approval
+ # Network approval
 
  > **Deprecated:** Agent networks are deprecated and will be removed in a future release. Use the [supervisor pattern](https://mastra.ai/docs/agents/supervisor-agents) instead. See the [migration guide](https://mastra.ai/guides/migrations/network-to-supervisor) to upgrade.
 
@@ -1,4 +1,4 @@
- # Agent Networks
+ # Agent networks
 
  > **Agent Network Deprecated — Supervisor Pattern Recommended:** Agent networks are deprecated and will be removed in a future release. The [supervisor pattern](https://mastra.ai/docs/agents/supervisor-agents) using `agent.stream()` or `agent.generate()` is now the recommended approach for coordinating multiple agents. It provides the same multi-agent coordination capabilities as `.network()` with significant improvements:
  >
@@ -1,4 +1,4 @@
- # Memory Processors
+ # Memory processors
 
  Memory processors transform and filter messages as they pass through an agent with memory enabled. They manage context window limits, remove unnecessary content, and optimize the information sent to the language model.
 
@@ -6,11 +6,11 @@ When memory is enabled on an agent, Mastra adds memory processors to the agent's
 
  Memory processors are [processors](https://mastra.ai/docs/agents/processors) that operate specifically on memory-related messages and state.
 
- ## Built-in Memory Processors
+ ## Built-in memory processors
 
  Mastra automatically adds these processors when memory is enabled:
 
- ### MessageHistory
+ ### `MessageHistory`
 
  Retrieves message history and persists new messages.
 
@@ -56,7 +56,7 @@ const agent = new Agent({
  })
  ```
 
- ### SemanticRecall
+ ### `SemanticRecall`
 
  Retrieves semantically relevant messages based on the current input and creates embeddings for new messages.
 
@@ -114,7 +114,7 @@ const agent = new Agent({
  })
  ```
 
- ### WorkingMemory
+ ### `WorkingMemory`
 
  Manages working memory state across conversations.
 
@@ -159,7 +159,7 @@ const agent = new Agent({
  })
  ```
 
- ## Manual Control and Deduplication
+ ## Manual control and deduplication
 
  If you manually add a memory processor to `inputProcessors` or `outputProcessors`, Mastra **won't** automatically add it. This gives you full control over processor ordering:
 
@@ -192,7 +192,7 @@ const agent = new Agent({
  })
  ```
 
- ## Processor Execution Order
+ ## Processor execution order
 
  Understanding the execution order is important when combining guardrails with memory:
 
@@ -218,7 +218,7 @@ This means memory loads message history before your processors can validate or f
 
  This ordering is designed to be **safe by default**: if your output guardrail calls `abort()`, the memory processors never run and **no messages are saved**.
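The safe-by-default ordering can be sketched as a tiny pipeline model. This is illustrative TypeScript, not Mastra's actual processor API: `runOutputPipeline`, `AbortError`, and `persist` are hypothetical names standing in for the real guardrail and memory-persistence steps.

```typescript
// Illustrative model of safe-by-default ordering (not Mastra's code):
// output guardrails run before the memory persistence step, so if a
// guardrail aborts, nothing is ever saved.
class AbortError extends Error {}

type Processor = (messages: string[]) => string[];

function runOutputPipeline(
  messages: string[],
  guardrails: Processor[],
  persist: (messages: string[]) => void,
): { saved: boolean } {
  try {
    for (const guard of guardrails) {
      messages = guard(messages); // a guardrail may throw AbortError
    }
  } catch (e) {
    // an aborted response skips persistence entirely
    if (e instanceof AbortError) return { saved: false };
    throw e;
  }
  persist(messages); // memory processors only run when no guardrail aborted
  return { saved: true };
}
```

In this model, swapping the order (persist before guardrails) would save messages that the guardrail later rejects, which is exactly the hazard the default ordering avoids.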
 
- ## Guardrails and Memory
+ ## Guardrails and memory
 
  The default execution order provides safe guardrail behavior:
 
@@ -1,4 +1,4 @@
- # Message History
+ # Message history
 
  Message history is the most basic and important form of memory. It gives the LLM a view of recent messages in the context window, enabling your agent to reference earlier exchanges and respond coherently.
 
@@ -103,7 +103,7 @@ You can use this history in two ways:
  - **Automatic inclusion** - Mastra automatically fetches and includes recent messages in the context window. By default, it includes the last 10 messages, keeping agents grounded in the conversation. You can adjust this number with `lastMessages`, but in most cases you don't need to think about it.
  - [**Manual querying**](#querying) - For more control, use the `recall()` function to query threads and messages directly. This lets you choose exactly which memories are included in the context window, or fetch messages to render conversation history in your UI.
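The automatic-inclusion behavior amounts to taking a tail slice of the stored history; a minimal sketch (hypothetical helper, not Mastra's code):

```typescript
// Toy model of automatic inclusion (illustrative only): only the most
// recent `lastMessages` entries are placed in the context window.
function takeRecentMessages<T>(history: T[], lastMessages = 10): T[] {
  // negative start index counts back from the end of the array
  return history.slice(-lastMessages);
}
```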
 
- ## Accessing Memory
+ ## Accessing memory
 
  To access memory functions for querying, cloning, or deleting threads and messages, call `getMemory()` on an agent:
 
@@ -1,10 +1,10 @@
- # Semantic Recall
+ # Semantic recall
 
  If you ask your friend what they did last weekend, they will search in their memory for events associated with "last weekend" and then tell you what they did. That's sort of like how semantic recall works in Mastra.
 
  > **Watch 📹:** What semantic recall is, how it works, and how to configure it in Mastra → [YouTube (5 minutes)](https://youtu.be/UVZtK8cK8xQ)
 
- ## How Semantic Recall Works
+ ## How semantic recall works
 
  Semantic recall is RAG-based search that helps agents maintain context across longer interactions when messages are no longer within [recent message history](https://mastra.ai/docs/memory/message-history).
 
@@ -16,7 +16,7 @@ When it's enabled, new messages are used to query a vector DB for semantically s
 
  After getting a response from the LLM, all new messages (user, assistant, and tool calls/results) are inserted into the vector DB to be recalled in later interactions.
 
- ## Quick Start
+ ## Quick start
 
  Semantic recall is enabled by default, so if you give your agent memory it will be included:
 
@@ -33,7 +33,7 @@ const agent = new Agent({
  })
  ```
 
- ## Using the recall() Method
+ ## Using the `recall()` method
 
  While `listMessages` retrieves messages by thread ID with basic pagination, [`recall()`](https://mastra.ai/reference/memory/recall) adds support for **semantic search**. When you need to find messages by meaning rather than recency, use `recall()` with a `vectorSearchString`:
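Under the hood, this kind of semantic search ranks stored messages by vector similarity to the query's embedding. A toy sketch, assuming precomputed embeddings (the real adapter uses a vector store and an embedding model; all names here are hypothetical):

```typescript
// Toy semantic search (illustrative only): rank stored messages by
// cosine similarity to the query embedding and return the closest topK.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function recallTopK(
  queryEmbedding: number[],
  stored: { text: string; embedding: number[] }[],
  topK: number,
): string[] {
  return [...stored]
    .map((m) => ({
      text: m.text,
      score: cosineSimilarity(queryEmbedding, m.embedding),
    }))
    .sort((x, y) => y.score - x.score)
    .slice(0, topK)
    .map((m) => m.text);
}
```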
 
@@ -182,7 +182,7 @@ const agent = new Agent({
  })
  ```
 
- ### Using FastEmbed (Local)
+ ### Using FastEmbed (local)
 
  To use FastEmbed (a local embedding model), install `@mastra/fastembed`:
 
@@ -224,7 +224,7 @@ const agent = new Agent({
  })
  ```
 
- ## PostgreSQL Index Optimization
+ ## PostgreSQL index optimization
 
  When using PostgreSQL as your vector store, you can optimize semantic recall performance by configuring the vector index. This is particularly important for large-scale deployments with thousands of messages.
 
@@ -283,6 +283,6 @@ You might want to disable semantic recall in scenarios like:
  - When message history provides sufficient context for the current conversation.
  - In performance-sensitive applications, like realtime two-way audio, where the added latency of creating embeddings and running vector queries is noticeable.
 
- ## Viewing Recalled Messages
+ ## Viewing recalled messages
 
  When tracing is enabled, any messages retrieved via semantic recall will appear in the agent's trace output, alongside recent message history (if configured).
@@ -13,7 +13,7 @@ Working memory can persist at two different scopes:
 
  **Important:** Switching between scopes means the agent won't see memory from the other scope - thread-scoped memory is completely separate from resource-scoped memory.
 
- ## Quick Start
+ ## Quick start
 
  Here's a minimal example of setting up an agent with working memory:
 
@@ -37,13 +37,13 @@ const agent = new Agent({
  })
  ```
 
- ## How it Works
+ ## How it works
 
  Working memory is a block of Markdown text that the agent is able to update over time to store continuously relevant information:
 
  [YouTube video player](https://www.youtube-nocookie.com/embed/UMy_JHLf1n8)
 
- ## Memory Persistence Scopes
+ ## Memory persistence scopes
 
  Working memory can operate in two different scopes, allowing you to choose how memory persists across conversations:
 
@@ -117,7 +117,7 @@ const memory = new Memory({
  - Temporary or session-specific information
  - Workflows where each thread needs working memory but threads are ephemeral and not related to each other
 
- ## Storage Adapter Support
+ ## Storage adapter support
 
  Resource-scoped working memory requires specific storage adapters that support the `mastra_resources` table:
 
@@ -128,7 +128,7 @@ Resource-scoped working memory requires specific storage adapters that support t
  - **Upstash** (`@mastra/upstash`)
  - **MongoDB** (`@mastra/mongodb`)
 
- ## Custom Templates
+ ## Custom templates
 
  Templates guide the agent on what information to track and update in working memory. While a default template is used if none is provided, you'll typically want to define a custom template tailored to your agent's specific use case to ensure it remembers the most relevant information.
 
@@ -142,7 +142,7 @@ const memory = new Memory({
  template: `
  # User Profile
 
- ## Personal Info
+ ## Personal info
 
  - Name:
  - Location:
@@ -156,7 +156,7 @@ const memory = new Memory({
  - [Deadline 1]: [Date]
  - [Deadline 2]: [Date]
 
- ## Session State
+ ## Session state
 
  - Last Task Discussed:
  - Open Questions:
@@ -168,7 +168,7 @@ const memory = new Memory({
  })
  ```
 
- ## Designing Effective Templates
+ ## Designing effective templates
 
  A well-structured template keeps the information straightforward for the agent to parse and update. Treat the template as a short form that you want the assistant to keep up to date.
 
@@ -206,7 +206,7 @@ const paragraphMemory = new Memory({
  })
  ```
 
- ## Structured Working Memory
+ ## Structured working memory
 
  Working memory can also be defined using a structured schema instead of a Markdown template. This allows you to specify the exact fields and types that should be tracked, using a [Zod](https://zod.dev/) schema. When using a schema, the agent will see and update working memory as a JSON object matching your schema.
 
@@ -265,20 +265,20 @@ Schema-based working memory uses **merge semantics**, meaning the agent only nee
  - **Set a field to `null` to delete it:** This explicitly removes the field from memory
  - **Arrays are replaced entirely:** When an array field is provided, it replaces the existing array (arrays aren't merged element-by-element)
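The merge semantics above can be sketched as a small function. This is an illustrative model, not Mastra's implementation, and the recursive handling of nested objects is an assumption beyond what the text states:

```typescript
// Hedged sketch of merge semantics (illustrative only): provided fields
// overwrite, `null` deletes a field, arrays are replaced wholesale, and
// omitted fields are preserved. Nested-object recursion is an assumption.
type WorkingMemory = Record<string, unknown>;

function mergeWorkingMemory(current: WorkingMemory, update: WorkingMemory): WorkingMemory {
  const next: WorkingMemory = { ...current };
  for (const [key, value] of Object.entries(update)) {
    if (value === null) {
      delete next[key]; // explicit null removes the field
    } else if (Array.isArray(value) || typeof value !== "object") {
      next[key] = value; // arrays and primitives are replaced entirely
    } else {
      // nested objects merge field by field (assumed behavior)
      next[key] = mergeWorkingMemory(
        (current[key] as WorkingMemory) ?? {},
        value as WorkingMemory,
      );
    }
  }
  return next;
}
```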
 
- ## Choosing Between Template and Schema
+ ## Choosing between template and schema
 
  - Use a **template** (Markdown) if you want the agent to maintain memory as a free-form text block, such as a user profile or scratchpad. Templates use **replace semantics** — the agent must provide the complete memory content on each update.
  - Use a **schema** if you need structured, type-safe data that can be validated and programmatically accessed as JSON. Schemas use **merge semantics** — the agent only provides fields to update, and existing fields are preserved.
  - Only one mode can be active at a time: setting both `template` and `schema` isn't supported.
 
- ## Example: Multi-step Retention
+ ## Example: Multi-step retention
 
  Below is a simplified view of how the `User Profile` template updates across a short user conversation:
 
  ```nohighlight
  # User Profile
 
- ## Personal Info
+ ## Personal info
 
  - Name:
  - Location:
@@ -303,7 +303,7 @@ The agent can now refer to `Sam` or `Berlin` in later responses without requesti
 
  If your agent isn't properly updating working memory when you expect it to, you can add system instructions on _how_ and _when_ to use this template in your agent's `instructions` setting.
 
- ## Setting Initial Working Memory
+ ## Setting initial working memory
 
  While agents typically update working memory through the `updateWorkingMemory` tool, you can also set initial working memory programmatically when creating or updating threads. This is useful for injecting user data (like their name, preferences, or other info) that you want available to the agent without passing it in every request.
 
@@ -372,7 +372,7 @@ await memory.updateWorkingMemory({
  })
  ```
 
- ## Read-Only Working Memory
+ ## Read-only working memory
 
  In some scenarios, you may want an agent to have access to working memory data without the ability to modify it. This is useful for:
 
@@ -1,8 +1,8 @@
- # Observability Overview
+ # Observability overview
 
  Mastra provides observability features for AI applications. Monitor LLM operations, trace agent decisions, and debug complex workflows with tools that understand AI-specific patterns.
 
- ## Key Features
+ ## Key features
 
  ### Tracing
 
@@ -13,13 +13,13 @@ Specialized tracing for AI operations that captures:
  - **Workflow steps**: Branching logic, parallel execution, and step outputs
  - **Automatic instrumentation**: Tracing with decorators
 
- ## Storage Requirements
+ ## Storage requirements
 
  The `DefaultExporter` persists traces to your configured storage backend. Not all storage providers support observability—for the full list, see [Storage Provider Support](https://mastra.ai/docs/observability/tracing/exporters/default).
 
  For production environments with high traffic, we recommend using **ClickHouse** for the observability domain via [composite storage](https://mastra.ai/reference/storage/composite). See [Production Recommendations](https://mastra.ai/docs/observability/tracing/exporters/default) for details.
 
- ## Quick Start
+ ## Quick start
 
  Configure Observability in your Mastra instance:
 
@@ -63,7 +63,7 @@ With this basic setup, you will see Traces and Logs in both Studio and in Mastra
 
  We also support various external tracing providers like MLflow, Langfuse, Braintrust, and any OpenTelemetry-compatible platform (Datadog, New Relic, SigNoz, etc.). See more about this in the [Tracing](https://mastra.ai/docs/observability/tracing/overview) documentation.
 
- ## What's Next?
+ ## What's next?
 
  - **[Set up Tracing](https://mastra.ai/docs/observability/tracing/overview)**: Configure tracing for your application
  - **[Configure Logging](https://mastra.ai/docs/observability/logging)**: Add structured logging
@@ -1,4 +1,4 @@
- # Default Exporter
+ # Default exporter
 
  The `DefaultExporter` persists traces to your configured storage backend, making them accessible through Studio. It's automatically enabled when using the default observability configuration and requires no external services.
 
@@ -68,7 +68,7 @@ export const mastra = new Mastra({
  })
  ```
 
- ## Viewing Traces
+ ## Viewing traces
 
  ### Studio
 
@@ -79,7 +79,7 @@ Access your traces through Studio:
  3. Filter and search your local traces
  4. Inspect detailed span information
81
81
 
82
- ## Tracing Strategies
82
+ ## Tracing strategies
83
83
 
84
84
  DefaultExporter automatically selects the optimal tracing strategy based on your storage provider. You can also override this selection if needed.
85
85
 
@@ -106,7 +106,7 @@ new DefaultExporter({
106
106
  })
107
107
  ```
108
108
 
109
- ## Storage Provider Support
109
+ ## Storage provider support
110
110
 
111
111
  Different storage providers support different tracing strategies. Some providers support observability for production workloads, while others are intended primarily for local development.
112
112
 
@@ -139,7 +139,7 @@ The following storage providers **don't support** the observability domain. If y
139
139
  - **batch-with-updates**: 10-100x throughput improvement, full span lifecycle
140
140
  - **insert-only**: Additional 70% reduction in database operations, perfect for analytics
141
141
 
142
- ## Production Recommendations
142
+ ## Production recommendations
143
143
 
144
144
  Observability data grows quickly in production environments. A single agent interaction can generate hundreds of spans, and high-traffic applications can produce thousands of traces per day. Most general-purpose databases aren't optimized for this write-heavy, append-only workload.
145
145
 
@@ -156,7 +156,7 @@ Observability data grows quickly in production environments. A single agent inte
156
156
 
157
157
  If you're using a provider without observability support (like Convex or DynamoDB) or want to optimize performance, use [composite storage](https://mastra.ai/reference/storage/composite) to route observability data to ClickHouse while keeping other data in your primary database.
158
158
 
159
- ## Batching Behavior
159
+ ## Batching behavior
160
160
 
161
161
  ### Flush Triggers
162
162
 
@@ -167,7 +167,7 @@ For both batch strategies (`batch-with-updates` and `insert-only`), traces are f
167
167
  3. **Emergency flush**: Buffer approaches `maxBufferSize` limit
168
168
  4. **Shutdown**: Force flush all pending events
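The triggers above can be modeled with a toy buffer (hypothetical names, not the actual `DefaultExporter` implementation; the interval trigger is omitted and only the size, emergency, and shutdown triggers are shown):

```typescript
// Toy sketch of batch flushing: accumulate spans, flush on a full batch,
// never exceed a hard buffer limit, and force-flush on shutdown.
class SpanBuffer<T> {
  private buffer: T[] = [];
  public flushed: T[][] = [];

  constructor(
    private maxBatchSize = 100, // size trigger
    private maxBufferSize = 1000, // emergency trigger
  ) {}

  add(span: T) {
    this.buffer.push(span);
    // Size trigger: flush once a full batch has accumulated.
    if (this.buffer.length >= this.maxBatchSize) this.flush();
    // Emergency trigger: never let the buffer exceed its hard limit.
    if (this.buffer.length >= this.maxBufferSize) this.flush();
  }

  flush() {
    if (this.buffer.length === 0) return;
    this.flushed.push(this.buffer.splice(0));
  }

  shutdown() {
    // Shutdown trigger: force flush all pending events.
    this.flush();
  }
}
```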

- ### Error Handling
+ ### Error handling

  The DefaultExporter includes robust error handling for production use:

@@ -1,10 +1,10 @@
- # Retrieval in RAG Systems
+ # Retrieval in RAG systems

  After storing embeddings, you need to retrieve relevant chunks to answer user queries.

  Mastra provides flexible retrieval options with support for semantic search, filtering, and re-ranking.

- ## How Retrieval Works
+ ## How retrieval works

  1. The user's query is converted to an embedding using the same model used for document embeddings
  2. This embedding is compared to stored embeddings using vector similarity
@@ -14,7 +14,7 @@ Mastra provides flexible retrieval options with support for semantic search, fil
  - Re-ranked for better relevance
  - Processed through a knowledge graph
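The similarity comparison in the first two steps can be sketched in a self-contained way (toy vectors and hypothetical helper names; a real setup embeds both sides with the same model):

```typescript
// Cosine similarity between two vectors of equal length.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Score every stored chunk against the query embedding and keep the top-k.
function topK(
  query: number[],
  chunks: { text: string; embedding: number[] }[],
  k: number,
) {
  return chunks
    .map(c => ({ text: c.text, score: cosine(query, c.embedding) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}
```

A vector store performs the same ranking with indexed search rather than a full scan.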

- ## Basic Retrieval
+ ## Basic retrieval

  The simplest approach is direct semantic search. This method uses vector similarity to find chunks that are semantically similar to the query:

@@ -63,7 +63,7 @@ Results include both the text content and a similarity score:
  ]
  ```

- ## Advanced Retrieval options
+ ## Advanced retrieval options

  ### Metadata Filtering

@@ -22,7 +22,7 @@ const thread = await memory.createThread({

  **memory** (`TMemory[TMemoryKey]`): The memory instance with the specified key. Throws an error if the memory is not found.

- ## Example: Registering and Retrieving Memory
+ ## Example: Registering and retrieving memory

  ```typescript
  import { Mastra } from '@mastra/core'
@@ -20,7 +20,7 @@ This method takes no parameters.

  **memory** (`Record<string, MastraMemory>`): An object containing all registered memory instances, keyed by their registry keys.

- ## Example: Checking Registered Memory
+ ## Example: Checking registered memory

  ```typescript
  import { Mastra } from '@mastra/core'
@@ -1,4 +1,4 @@
- # Mastra Class
+ # Mastra class

  The `Mastra` class is the central orchestrator in any Mastra application, managing agents, workflows, storage, logging, observability, and more. Typically, you create a single instance of `Mastra` to coordinate your application.

@@ -1,4 +1,4 @@
- # Memory Class
+ # Memory class

  The `Memory` class provides a robust system for managing conversation history and thread-based message storage in Mastra. It enables persistent storage of conversations, semantic search capabilities, and efficient message retrieval. You must configure a storage provider for conversation history, and if you enable semantic recall you will also need to provide a vector store and embedder.

@@ -1,4 +1,4 @@
- # Composite Storage
+ # Composite storage

  `MastraCompositeStore` can compose storage domains from different providers. Use it when you need different databases for different purposes. For example, use LibSQL for memory and PostgreSQL for workflows.
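The routing idea can be sketched generically (hypothetical names, not the `MastraCompositeStore` API; see the composite storage reference for the actual constructor shape):

```typescript
// Each storage domain resolves to its own provider, with a fallback
// provider for everything not explicitly routed.
class CompositeStore<S> {
  constructor(
    private domains: Record<string, S>,
    private fallback: S,
  ) {}

  // Return the provider registered for a domain, or the fallback.
  forDomain(domain: string): S {
    return this.domains[domain] ?? this.fallback;
  }
}

const libsql = { name: 'libsql' };
const pg = { name: 'pg' };

// Route memory to LibSQL and workflows to PostgreSQL, as in the example above.
const store = new CompositeStore({ memory: libsql, workflows: pg }, libsql);
```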

@@ -1,4 +1,4 @@
- # DynamoDB Storage
+ # DynamoDB storage

  The DynamoDB storage implementation provides a scalable and performant NoSQL database solution for Mastra, leveraging a single-table design pattern with [ElectroDB](https://electrodb.dev/).

@@ -120,7 +120,7 @@ For local development, you can use [DynamoDB Local](https://docs.aws.amazon.com/

  **config.ttl** (`object`): TTL (Time To Live) configuration for automatic data expiration. Configure per entity type: thread, message, trace, eval, workflow\_snapshot, resource, score. Each entity config includes: enabled (boolean), attributeName (string, default: 'ttl'), defaultTtlSeconds (number).
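As a sketch, a per-entity TTL configuration built from the fields listed above might look like this (hypothetical values; the exact accepted shape is defined by the adapter's config):

```typescript
// Hypothetical TTL configuration: enabled, attributeName, and
// defaultTtlSeconds per entity type.
const ttl = {
  // Expire messages after 30 days.
  message: { enabled: true, attributeName: 'ttl', defaultTtlSeconds: 60 * 60 * 24 * 30 },
  // Expire traces after 7 days.
  trace: { enabled: true, attributeName: 'ttl', defaultTtlSeconds: 60 * 60 * 24 * 7 },
  // Keep workflow snapshots indefinitely.
  workflow_snapshot: { enabled: false },
};
```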

- ## TTL (Time To Live) Configuration
+ ## TTL (time to live) configuration

  DynamoDB TTL allows you to automatically delete items after a specified time period. This is useful for:

@@ -216,7 +216,7 @@ aws dynamodb update-time-to-live \

  > **Note:** DynamoDB deletes expired items within 48 hours after expiration. Items remain queryable until actually deleted.

- ## AWS IAM Permissions
+ ## AWS IAM permissions

  The IAM role or user executing the code needs appropriate permissions to interact with the specified DynamoDB table and its indexes. Below is a sample policy. Replace `${YOUR_TABLE_NAME}` with your actual table name and `${YOUR_AWS_REGION}` and `${YOUR_AWS_ACCOUNT_ID}` with appropriate values.

@@ -246,7 +246,7 @@ The IAM role or user executing the code needs appropriate permissions to interac
  }
  ```

- ## Key Considerations
+ ## Key considerations

  Before diving into the architectural details, keep these key points in mind when working with the DynamoDB storage adapter:

@@ -255,7 +255,7 @@ Before diving into the architectural details, keep these key points in mind when
  - **Understanding GSIs:** Familiarity with how the GSIs are structured (as per `TABLE_SETUP.md`) is important for understanding data retrieval and potential query patterns.
  - **ElectroDB:** The adapter uses ElectroDB to manage interactions with DynamoDB, providing a layer of abstraction and type safety over raw DynamoDB operations.

- ## Architectural Approach
+ ## Architectural approach

  This storage adapter utilizes a **single-table design pattern** leveraging [ElectroDB](https://electrodb.dev/), a common and recommended approach for DynamoDB. This differs architecturally from relational database adapters (like `@mastra/pg` or `@mastra/libsql`) that typically use multiple tables, each dedicated to a specific entity (threads, messages, etc.).

@@ -1,4 +1,4 @@
- # libSQL Storage
+ # libSQL storage

  [libSQL](https://docs.turso.tech/libsql) is an open-source, SQLite-compatible database that supports both local and remote deployments. It can be used to store message history, workflow snapshots, traces, and eval scores.

@@ -1,4 +1,4 @@
- # libSQL Vector Store
+ # libSQL vector store

  The libSQL storage implementation provides SQLite-compatible vector search built on [libSQL](https://github.com/tursodatabase/libsql), a fork of SQLite with vector extensions, and [Turso](https://turso.tech/), a managed libSQL service with the same vector support, offering a lightweight and efficient vector database solution. It's part of the `@mastra/libsql` package and supports vector similarity search with metadata filtering.

@@ -69,7 +69,7 @@ const results = await store.query({
  });
  ```

- ## Constructor Options
+ ## Constructor options

  **url** (`string`): libSQL database URL. Use ':memory:' for in-memory database, 'file:dbname.db' for local file, or a libSQL-compatible connection string like 'libsql://your-database.turso.io'.

@@ -81,7 +81,7 @@ const results = await store.query({

  ## Methods

- ### createIndex()
+ ### `createIndex()`

  Creates a new vector collection. The index name must start with a letter or underscore and can only contain letters, numbers, and underscores. The dimension must be a positive integer.

@@ -91,7 +91,7 @@ Creates a new vector collection. The index name must start with a letter or unde

  **metric** (`'cosine' | 'euclidean' | 'dotproduct'`): Distance metric for similarity search. Note: Currently only cosine similarity is supported by libSQL. (Default: `cosine`)

- ### upsert()
+ ### `upsert()`

  Adds or updates vectors and their metadata in the index. Uses a transaction to ensure all vectors are inserted atomically; if any insert fails, the entire operation is rolled back.

@@ -103,7 +103,7 @@ Adds or updates vectors and their metadata in the index. Uses a transaction to e

  **ids** (`string[]`): Optional vector IDs (auto-generated if not provided)

- ### query()
+ ### `query()`

  Searches for similar vectors with optional metadata filtering.

@@ -119,7 +119,7 @@ Searches for similar vectors with optional metadata filtering.

  **minScore** (`number`): Minimum similarity score threshold (Default: `0`)

- ### describeIndex()
+ ### `describeIndex()`

  Gets information about an index.

@@ -135,25 +135,25 @@ interface IndexStats {
  }
  ```

- ### deleteIndex()
+ ### `deleteIndex()`

  Deletes an index and all its data.

  **indexName** (`string`): Name of the index to delete

- ### listIndexes()
+ ### `listIndexes()`

  Lists all vector indexes in the database.

  Returns: `Promise<string[]>`

- ### truncateIndex()
+ ### `truncateIndex()`

  Removes all vectors from an index while keeping the index structure.

  **indexName** (`string`): Name of the index to truncate

- ### updateVector()
+ ### `updateVector()`

  Update a single vector by ID or by metadata filter. Either `id` or `filter` must be provided, but not both.
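The either-`id`-or-`filter` rule amounts to a mutual-exclusion check, sketched here with a hypothetical helper (not the adapter's internal code):

```typescript
// Exactly one selector must be provided: an ID or a metadata filter.
function resolveSelector(
  id?: string,
  filter?: Record<string, unknown>,
): { kind: 'id'; id: string } | { kind: 'filter'; filter: Record<string, unknown> } {
  if ((id === undefined) === (filter === undefined)) {
    // Both present or both absent: reject.
    throw new Error('Provide exactly one of `id` or `filter`');
  }
  return id !== undefined ? { kind: 'id', id } : { kind: 'filter', filter: filter! };
}
```

The same rule applies to `deleteVectors()` with `ids` in place of `id`.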

@@ -169,7 +169,7 @@ Update a single vector by ID or by metadata filter. Either `id` or `filter` must

  **update.metadata** (`Record<string, any>`): New metadata to update

- ### deleteVector()
+ ### `deleteVector()`

  Deletes a specific vector entry from an index by its ID.

@@ -177,7 +177,7 @@ Deletes a specific vector entry from an index by its ID.

  **id** (`string`): ID of the vector entry to delete

- ### deleteVectors()
+ ### `deleteVectors()`

  Delete multiple vectors by IDs or by metadata filter. Either `ids` or `filter` must be provided, but not both.

@@ -187,7 +187,7 @@ Delete multiple vectors by IDs or by metadata filter. Either `ids` or `filter` m

  **filter** (`Record<string, any>`): Metadata filter to identify vectors to delete (mutually exclusive with ids)

- ## Response Types
+ ## Response types

  Query results are returned in this format:

@@ -200,7 +200,7 @@ interface QueryResult {
  }
  ```

- ## Error Handling
+ ## Error handling

  The store throws specific errors for different failure cases:

@@ -232,7 +232,7 @@ Common error cases include:
  - Database connection issues
  - Transaction failures during upsert

- ## Usage Example
+ ## Usage example

  ### Local embeddings with fastembed

package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@mastra/libsql",
- "version": "1.7.0-alpha.0",
+ "version": "1.7.0",
  "description": "Libsql provider for Mastra - includes both vector and db storage capabilities",
  "type": "module",
  "main": "dist/index.js",
@@ -30,10 +30,10 @@
  "tsup": "^8.5.1",
  "typescript": "^5.9.3",
  "vitest": "4.0.18",
- "@internal/lint": "0.0.66",
- "@internal/types-builder": "0.0.41",
- "@internal/storage-test-utils": "0.0.62",
- "@mastra/core": "1.11.0-alpha.0"
+ "@internal/lint": "0.0.67",
+ "@internal/storage-test-utils": "0.0.63",
+ "@mastra/core": "1.11.0",
+ "@internal/types-builder": "0.0.42"
  },
  "peerDependencies": {
  "@mastra/core": ">=1.0.0-0 <2.0.0-0"