@mastra/libsql 1.2.0 → 1.3.0-alpha.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (46)
  1. package/CHANGELOG.md +87 -0
  2. package/dist/docs/SKILL.md +36 -26
  3. package/dist/docs/{SOURCE_MAP.json → assets/SOURCE_MAP.json} +1 -1
  4. package/dist/docs/{agents/03-agent-approval.md → references/docs-agents-agent-approval.md} +19 -19
  5. package/dist/docs/references/docs-agents-agent-memory.md +212 -0
  6. package/dist/docs/{agents/04-network-approval.md → references/docs-agents-network-approval.md} +13 -12
  7. package/dist/docs/{agents/02-networks.md → references/docs-agents-networks.md} +10 -12
  8. package/dist/docs/{memory/06-memory-processors.md → references/docs-memory-memory-processors.md} +6 -8
  9. package/dist/docs/{memory/03-message-history.md → references/docs-memory-message-history.md} +31 -20
  10. package/dist/docs/{memory/01-overview.md → references/docs-memory-overview.md} +8 -8
  11. package/dist/docs/{memory/05-semantic-recall.md → references/docs-memory-semantic-recall.md} +33 -17
  12. package/dist/docs/{memory/02-storage.md → references/docs-memory-storage.md} +29 -39
  13. package/dist/docs/{memory/04-working-memory.md → references/docs-memory-working-memory.md} +16 -27
  14. package/dist/docs/{observability/01-overview.md → references/docs-observability-overview.md} +4 -7
  15. package/dist/docs/{observability/02-default.md → references/docs-observability-tracing-exporters-default.md} +11 -14
  16. package/dist/docs/{rag/01-retrieval.md → references/docs-rag-retrieval.md} +26 -53
  17. package/dist/docs/{workflows/01-snapshots.md → references/docs-workflows-snapshots.md} +3 -5
  18. package/dist/docs/{guides/01-ai-sdk.md → references/guides-agent-frameworks-ai-sdk.md} +25 -9
  19. package/dist/docs/references/reference-core-getMemory.md +50 -0
  20. package/dist/docs/references/reference-core-listMemory.md +56 -0
  21. package/dist/docs/references/reference-core-mastra-class.md +66 -0
  22. package/dist/docs/{memory/07-reference.md → references/reference-memory-memory-class.md} +28 -14
  23. package/dist/docs/references/reference-storage-composite.md +235 -0
  24. package/dist/docs/references/reference-storage-dynamodb.md +282 -0
  25. package/dist/docs/references/reference-storage-libsql.md +135 -0
  26. package/dist/docs/{vectors/01-reference.md → references/reference-vectors-libsql.md} +105 -13
  27. package/dist/index.cjs +1676 -194
  28. package/dist/index.cjs.map +1 -1
  29. package/dist/index.js +1676 -196
  30. package/dist/index.js.map +1 -1
  31. package/dist/storage/db/index.d.ts.map +1 -1
  32. package/dist/storage/domains/agents/index.d.ts +9 -12
  33. package/dist/storage/domains/agents/index.d.ts.map +1 -1
  34. package/dist/storage/domains/memory/index.d.ts +7 -1
  35. package/dist/storage/domains/memory/index.d.ts.map +1 -1
  36. package/dist/storage/domains/prompt-blocks/index.d.ts +25 -0
  37. package/dist/storage/domains/prompt-blocks/index.d.ts.map +1 -0
  38. package/dist/storage/domains/scorer-definitions/index.d.ts +26 -0
  39. package/dist/storage/domains/scorer-definitions/index.d.ts.map +1 -0
  40. package/dist/storage/index.d.ts +3 -1
  41. package/dist/storage/index.d.ts.map +1 -1
  42. package/package.json +3 -4
  43. package/dist/docs/README.md +0 -39
  44. package/dist/docs/agents/01-agent-memory.md +0 -166
  45. package/dist/docs/core/01-reference.md +0 -151
  46. package/dist/docs/storage/01-reference.md +0 -556
package/CHANGELOG.md CHANGED
@@ -1,5 +1,92 @@
  # @mastra/libsql

+ ## 1.3.0-alpha.0
+
+ ### Minor Changes
+
+ - **Updated storage adapters for generic storage domain API** ([#12846](https://github.com/mastra-ai/mastra/pull/12846))
+
+   All storage adapters now implement the unified `VersionedStorageDomain` method names. Entity-specific methods (`createAgent`, `getAgentById`, `deleteAgent`, etc.) have been replaced with generic equivalents (`create`, `getById`, `delete`, etc.) across agents, prompt blocks, and scorer definitions domains.
+
+   Added `scorer-definitions` domain support to all adapters.
+
+   **Before:**
+
+   ```ts
+   const store = storage.getStore('agents');
+   await store.createAgent({ agent: input });
+   await store.getAgentById({ id: 'abc' });
+   await store.deleteAgent({ id: 'abc' });
+   ```
+
+   **After:**
+
+   ```ts
+   const store = storage.getStore('agents');
+   await store.create({ agent: input });
+   await store.getById('abc');
+   await store.delete('abc');
+   ```
+
+ ### Patch Changes
+
+ - Fixed issues with stored agents ([#12790](https://github.com/mastra-ai/mastra/pull/12790))
+
+ - **Async buffering for observational memory is now enabled by default.** Observations are pre-computed in the background as conversations grow — when the context window fills up, buffered observations activate instantly with no blocking LLM call. This keeps agents responsive during long conversations. ([#12891](https://github.com/mastra-ai/mastra/pull/12891))
+
+   **Default settings:**
+   - `observation.bufferTokens: 0.2` — buffer every 20% of `messageTokens` (~6k tokens with the default 30k threshold)
+   - `observation.bufferActivation: 0.8` — on activation, retain 20% of the message window
+   - `reflection.bufferActivation: 0.5` — start background reflection at 50% of the observation threshold
+
+   **Disabling async buffering:**
+
+   Set `observation.bufferTokens: false` to disable async buffering for both observations and reflections:
+
+   ```ts
+   const memory = new Memory({
+     options: {
+       observationalMemory: {
+         model: 'google/gemini-2.5-flash',
+         observation: {
+           bufferTokens: false,
+         },
+       },
+     },
+   });
+   ```
+
+   **Model is now required** when passing an observational memory config object. Use `observationalMemory: true` for the default (google/gemini-2.5-flash), or set a model explicitly:
+
+   ```ts
+   // Uses default model (google/gemini-2.5-flash)
+   observationalMemory: true
+
+   // Explicit model
+   observationalMemory: {
+     model: "google/gemini-2.5-flash",
+   }
+   ```
+
+   **`shareTokenBudget` requires `bufferTokens: false`** (temporary limitation). If you use `shareTokenBudget: true`, you must explicitly disable async buffering:
+
+   ```ts
+   observationalMemory: {
+     model: "google/gemini-2.5-flash",
+     shareTokenBudget: true,
+     observation: { bufferTokens: false },
+   }
+   ```
+
+   **New streaming event:** `data-om-status` replaces `data-om-progress` with a structured status object containing active window usage, buffered observation/reflection state, and projected activation impact.
+
+   **Buffering markers:** New `data-om-buffering-start`, `data-om-buffering-end`, and `data-om-buffering-failed` streaming events for UI feedback during background operations.
+
+ - Added prompt block storage implementations. Each store supports full CRUD for prompt blocks and their versions, including JSON serialization for rules and metadata. Also updated agent instructions serialization to support the new `AgentInstructionBlock` array format alongside plain strings. ([#12776](https://github.com/mastra-ai/mastra/pull/12776))
+
+ - Updated dependencies [[`717ffab`](https://github.com/mastra-ai/mastra/commit/717ffab42cfd58ff723b5c19ada4939997773004), [`e4b6dab`](https://github.com/mastra-ai/mastra/commit/e4b6dab171c5960e340b3ea3ea6da8d64d2b8672), [`5719fa8`](https://github.com/mastra-ai/mastra/commit/5719fa8880e86e8affe698ec4b3807c7e0e0a06f), [`83cda45`](https://github.com/mastra-ai/mastra/commit/83cda4523e588558466892bff8f80f631a36945a), [`11804ad`](https://github.com/mastra-ai/mastra/commit/11804adf1d6be46ebe216be40a43b39bb8b397d7), [`aa95f95`](https://github.com/mastra-ai/mastra/commit/aa95f958b186ae5c9f4219c88e268f5565c277a2), [`f5501ae`](https://github.com/mastra-ai/mastra/commit/f5501aedb0a11106c7db7e480d6eaf3971b7bda8), [`44573af`](https://github.com/mastra-ai/mastra/commit/44573afad0a4bc86f627d6cbc0207961cdcb3bc3), [`00e3861`](https://github.com/mastra-ai/mastra/commit/00e3861863fbfee78faeb1ebbdc7c0223aae13ff), [`7bfbc52`](https://github.com/mastra-ai/mastra/commit/7bfbc52a8604feb0fff2c0a082c13c0c2a3df1a2), [`1445994`](https://github.com/mastra-ai/mastra/commit/1445994aee19c9334a6a101cf7bd80ca7ed4d186), [`61f44a2`](https://github.com/mastra-ai/mastra/commit/61f44a26861c89e364f367ff40825bdb7f19df55), [`37145d2`](https://github.com/mastra-ai/mastra/commit/37145d25f99dc31f1a9105576e5452609843ce32), [`fdad759`](https://github.com/mastra-ai/mastra/commit/fdad75939ff008b27625f5ec0ce9c6915d99d9ec), [`e4569c5`](https://github.com/mastra-ai/mastra/commit/e4569c589e00c4061a686c9eb85afe1b7050b0a8), [`7309a85`](https://github.com/mastra-ai/mastra/commit/7309a85427281a8be23f4fb80ca52e18eaffd596), [`99424f6`](https://github.com/mastra-ai/mastra/commit/99424f6862ffb679c4ec6765501486034754a4c2), [`44eb452`](https://github.com/mastra-ai/mastra/commit/44eb4529b10603c279688318bebf3048543a1d61), [`6c40593`](https://github.com/mastra-ai/mastra/commit/6c40593d6d2b1b68b0c45d1a3a4c6ac5ecac3937), [`8c1135d`](https://github.com/mastra-ai/mastra/commit/8c1135dfb91b057283eae7ee11f9ec28753cc64f), [`dd39e54`](https://github.com/mastra-ai/mastra/commit/dd39e54ea34532c995b33bee6e0e808bf41a7341), [`b6fad9a`](https://github.com/mastra-ai/mastra/commit/b6fad9a602182b1cc0df47cd8c55004fa829ad61), [`4129c07`](https://github.com/mastra-ai/mastra/commit/4129c073349b5a66643fd8136ebfe9d7097cf793), [`5b930ab`](https://github.com/mastra-ai/mastra/commit/5b930aba1834d9898e8460a49d15106f31ac7c8d), [`4be93d0`](https://github.com/mastra-ai/mastra/commit/4be93d09d68e20aaf0ea3f210749422719618b5f), [`047635c`](https://github.com/mastra-ai/mastra/commit/047635ccd7861d726c62d135560c0022a5490aec), [`8c90ff4`](https://github.com/mastra-ai/mastra/commit/8c90ff4d3414e7f2a2d216ea91274644f7b29133), [`ed232d1`](https://github.com/mastra-ai/mastra/commit/ed232d1583f403925dc5ae45f7bee948cf2a182b), [`3891795`](https://github.com/mastra-ai/mastra/commit/38917953518eb4154a984ee36e6ededdcfe80f72), [`4f955b2`](https://github.com/mastra-ai/mastra/commit/4f955b20c7f66ed282ee1fd8709696fa64c4f19d), [`55a4c90`](https://github.com/mastra-ai/mastra/commit/55a4c9044ac7454349b9f6aeba0bbab5ee65d10f)]:
+   - @mastra/core@1.3.0-alpha.1
+
  ## 1.2.0

  ### Minor Changes
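The changelog entry above introduces `data-om-status` and the buffering-marker events but shows no example of consuming them. A minimal self-contained sketch follows; the event names are from the changelog, but the payload shapes are assumptions for illustration, not the actual `@mastra/memory` types.

```typescript
// Sketch of a consumer for the new observational-memory stream events.
// Payload fields below (activeWindowUsage, error) are hypothetical stand-ins.
type OMStreamChunk =
  | { type: "data-om-status"; payload: { activeWindowUsage: number } }
  | { type: "data-om-buffering-start" }
  | { type: "data-om-buffering-end" }
  | { type: "data-om-buffering-failed"; payload: { error: string } };

function describeOMChunk(chunk: OMStreamChunk): string {
  switch (chunk.type) {
    case "data-om-status":
      // Structured status object replaces the old data-om-progress event.
      return `context window ${Math.round(chunk.payload.activeWindowUsage * 100)}% full`;
    case "data-om-buffering-start":
      return "buffering observations in the background...";
    case "data-om-buffering-end":
      return "buffered observations ready";
    case "data-om-buffering-failed":
      return `background buffering failed: ${chunk.payload.error}`;
  }
}
```

In a UI, a handler like this would typically run inside the `for await` loop over the agent's stream, alongside the existing chunk types.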
@@ -1,40 +1,50 @@
  ---
- name: mastra-libsql-docs
- description: Documentation for @mastra/libsql. Includes links to type definitions and readable implementation code in dist/.
+ name: mastra-libsql
+ description: Documentation for @mastra/libsql. Use when working with @mastra/libsql APIs, configuration, or implementation.
+ metadata:
+   package: "@mastra/libsql"
+   version: "1.3.0-alpha.0"
  ---

- # @mastra/libsql Documentation
+ ## When to use

- > **Version**: 1.2.0
- > **Package**: @mastra/libsql
+ Use this skill whenever you are working with @mastra/libsql to obtain domain-specific knowledge.

- ## Quick Navigation
+ ## How to use

- Use SOURCE_MAP.json to find any export:
+ Read the individual reference documents for detailed explanations and code examples.

- ```bash
- cat docs/SOURCE_MAP.json
- ```
+ ### Docs

- Each export maps to:
- - **types**: `.d.ts` file with JSDoc and API signatures
- - **implementation**: `.js` chunk file with readable source
- - **docs**: Conceptual documentation in `docs/`
+ - [Agent Approval](references/docs-agents-agent-approval.md) - Learn how to require approvals, suspend tool execution, and automatically resume suspended tools while keeping humans in control of agent workflows.
+ - [Agent Memory](references/docs-agents-agent-memory.md) - Learn how to add memory to agents to store message history and maintain context across interactions.
+ - [Network Approval](references/docs-agents-network-approval.md) - Learn how to require approvals, suspend execution, and resume suspended networks while keeping humans in control of agent network workflows.
+ - [Agent Networks](references/docs-agents-networks.md) - Learn how to coordinate multiple agents, workflows, and tools using agent networks for complex, non-deterministic task execution.
+ - [Memory Processors](references/docs-memory-memory-processors.md) - Learn how to use memory processors in Mastra to filter, trim, and transform messages before they're sent to the language model to manage context window limits.
+ - [Message History](references/docs-memory-message-history.md) - Learn how to configure message history in Mastra to store recent messages from the current conversation.
+ - [Memory overview](references/docs-memory-overview.md) - Learn how Mastra's memory system works with working memory, message history, semantic recall, and observational memory.
+ - [Semantic Recall](references/docs-memory-semantic-recall.md) - Learn how to use semantic recall in Mastra to retrieve relevant messages from past conversations using vector search and embeddings.
+ - [Storage](references/docs-memory-storage.md) - Configure storage for Mastra's memory system to persist conversations, workflows, and traces.
+ - [Working Memory](references/docs-memory-working-memory.md) - Learn how to configure working memory in Mastra to store persistent user data and preferences.
+ - [Observability Overview](references/docs-observability-overview.md) - Monitor and debug applications with Mastra's Observability features.
+ - [Default Exporter](references/docs-observability-tracing-exporters-default.md) - Store traces locally for development and debugging
+ - [Retrieval, Semantic Search, Reranking](references/docs-rag-retrieval.md) - Guide on retrieval processes in Mastra's RAG systems, including semantic search, filtering, and re-ranking.
+ - [Snapshots](references/docs-workflows-snapshots.md) - Learn how to save and resume workflow execution state with snapshots in Mastra

- ## Top Exports
+ ### Guides

+ - [AI SDK](references/guides-agent-frameworks-ai-sdk.md) - Use Mastra processors and memory with the Vercel AI SDK

+ ### Reference

- See SOURCE_MAP.json for the complete list.
+ - [Reference: Mastra.getMemory()](references/reference-core-getMemory.md) - Documentation for the `Mastra.getMemory()` method in Mastra, which retrieves a registered memory instance by its registry key.
+ - [Reference: Mastra.listMemory()](references/reference-core-listMemory.md) - Documentation for the `Mastra.listMemory()` method in Mastra, which returns all registered memory instances.
+ - [Reference: Mastra Class](references/reference-core-mastra-class.md) - Documentation for the `Mastra` class in Mastra, the core entry point for managing agents, workflows, MCP servers, and server endpoints.
+ - [Reference: Memory Class](references/reference-memory-memory-class.md) - Documentation for the `Memory` class in Mastra, which provides a robust system for managing conversation history and thread-based message storage.
+ - [Reference: Composite Storage](references/reference-storage-composite.md) - Documentation for combining multiple storage backends in Mastra.
+ - [Reference: DynamoDB Storage](references/reference-storage-dynamodb.md) - Documentation for the DynamoDB storage implementation in Mastra, using a single-table design with ElectroDB.
+ - [Reference: libSQL Storage](references/reference-storage-libsql.md) - Documentation for the libSQL storage implementation in Mastra.
+ - [Reference: libSQL Vector Store](references/reference-vectors-libsql.md) - Documentation for the LibSQLVector class in Mastra, which provides vector search using libSQL with vector extensions.

- ## Available Topics

- - [Agents](agents/) - 4 file(s)
- - [Core](core/) - 3 file(s)
- - [Guides](guides/) - 1 file(s)
- - [Memory](memory/) - 7 file(s)
- - [Observability](observability/) - 2 file(s)
- - [Rag](rag/) - 1 file(s)
- - [Storage](storage/) - 3 file(s)
- - [Vectors](vectors/) - 1 file(s)
- - [Workflows](workflows/) - 1 file(s)
+ Read [assets/SOURCE_MAP.json](assets/SOURCE_MAP.json) for source code references.
@@ -1,5 +1,5 @@
  {
-   "version": "1.2.0",
+   "version": "1.3.0-alpha.0",
    "package": "@mastra/libsql",
    "exports": {},
    "modules": {}
@@ -1,5 +1,3 @@
- > Learn how to require approvals, suspend tool execution, and automatically resume suspended tools while keeping humans in control of agent workflows.
-
  # Agent Approval

  Agents sometimes require the same [human-in-the-loop](https://mastra.ai/docs/workflows/human-in-the-loop) oversight used in workflows when calling tools that handle sensitive operations, like deleting resources or running long processes. With agent approval you can suspend a tool call and provide feedback to the user, or approve or decline a tool call based on targeted application conditions.
@@ -12,7 +10,7 @@ Tool call approval can be enabled at the agent level and apply to every tool the

  Agent approval uses a snapshot to capture the state of the request. Ensure you've enabled a storage provider in your main Mastra instance. If storage isn't enabled you'll see an error relating to snapshot not found.

- ```typescript title="src/mastra/index.ts"
+ ```typescript
  import { Mastra } from "@mastra/core/mastra";
  import { LibSQLStore } from "@mastra/libsql";

@@ -117,13 +115,13 @@ if (output.finishReason === "suspended") {

  ### Stream vs Generate comparison

- | Aspect | `stream()` | `generate()` |
- |--------|-----------|--------------|
- | Response type | Streaming chunks | Complete response |
- | Approval detection | `tool-call-approval` chunk | `finishReason: 'suspended'` |
- | Approve method | `approveToolCall({ runId })` | `approveToolCallGenerate({ runId, toolCallId })` |
- | Decline method | `declineToolCall({ runId })` | `declineToolCallGenerate({ runId, toolCallId })` |
- | Result | Stream to iterate | Full output object |
+ | Aspect             | `stream()`                   | `generate()`                                     |
+ | ------------------ | ---------------------------- | ------------------------------------------------ |
+ | Response type      | Streaming chunks             | Complete response                                |
+ | Approval detection | `tool-call-approval` chunk   | `finishReason: 'suspended'`                      |
+ | Approve method     | `approveToolCall({ runId })` | `approveToolCallGenerate({ runId, toolCallId })` |
+ | Decline method     | `declineToolCall({ runId })` | `declineToolCallGenerate({ runId, toolCallId })` |
+ | Result             | Stream to iterate            | Full output object                               |

  ## Tool-level approval

@@ -231,7 +229,6 @@ const handleResume = async () => {
    }
    process.stdout.write("\n");
  };
-
  ```

  ## Automatic tool resumption

@@ -270,8 +267,11 @@ const stream = await agent.stream("What's the weather?", {
  When `autoResumeSuspendedTools` is enabled:

  1. A tool suspends execution by calling `suspend()` with a payload (e.g., requesting more information)
+
  2. The suspension is persisted to memory along with the conversation
+
  3. When the user sends their next message on the same thread, the agent:
+
     - Detects the suspended tool from message history
     - Extracts `resumeData` from the user's message based on the tool's `resumeSchema`
     - Automatically resumes the tool with the extracted data

@@ -341,7 +341,7 @@ const handleResume = async () => {

  **Conversation flow:**

- ```
+ ```text
  User: "What's the weather like?"
  Agent: "What city do you want to know the weather for?"

@@ -361,17 +361,17 @@ For automatic tool resumption to work:

  ### Manual vs automatic resumption

- | Approach | Use case |
- |----------|----------|
- | Manual (`resumeStream()`) | Programmatic control, webhooks, button clicks, external triggers |
+ | Approach                               | Use case                                                                 |
+ | -------------------------------------- | ------------------------------------------------------------------------ |
+ | Manual (`resumeStream()`)              | Programmatic control, webhooks, button clicks, external triggers         |
  | Automatic (`autoResumeSuspendedTools`) | Conversational flows where users provide resume data in natural language |

  Both approaches work with the same tool definitions. Automatic resumption triggers only when suspended tools exist in the message history and the user sends a new message on the same thread.

  ## Related

- - [Using Tools](./using-tools)
- - [Agent Overview](./overview)
- - [Tools Overview](../mcp/overview)
- - [Agent Memory](./agent-memory)
+ - [Using Tools](https://mastra.ai/docs/agents/using-tools)
+ - [Agent Overview](https://mastra.ai/docs/agents/overview)
+ - [Tools Overview](https://mastra.ai/docs/mcp/overview)
+ - [Agent Memory](https://mastra.ai/docs/agents/agent-memory)
  - [Request Context](https://mastra.ai/docs/server/request-context)
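The stream-vs-generate comparison in the Agent Approval doc above comes down to two detection paths. A minimal sketch follows; the chunk type and `finishReason` value are taken from the doc, but the surrounding types are illustrative stand-ins, not the actual `@mastra/core` result shapes.

```typescript
// Illustrative approval detection for the two call styles:
// stream() surfaces a "tool-call-approval" chunk; generate() returns
// a complete response with finishReason set to "suspended".
type StreamChunk = { type: string };
type GenerateOutput = { finishReason: string };

function streamNeedsApproval(chunk: StreamChunk): boolean {
  return chunk.type === "tool-call-approval";
}

function generateWasSuspended(output: GenerateOutput): boolean {
  return output.finishReason === "suspended";
}
```

After detection, the doc's table shows which method to call: `approveToolCall({ runId })` for streams, `approveToolCallGenerate({ runId, toolCallId })` for generate.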
@@ -0,0 +1,212 @@
+ # Agent memory
+
+ Agents use memory to maintain context across interactions. LLMs are stateless and don't retain information between calls, so agents need memory to track message history and recall relevant information.
+
+ Mastra agents can be configured to store message history, with optional [working memory](https://mastra.ai/docs/memory/working-memory) to maintain recent context, [semantic recall](https://mastra.ai/docs/memory/semantic-recall) to retrieve past messages based on meaning, or [observational memory](https://mastra.ai/docs/memory/observational-memory) for automatic long-term memory that compresses conversations as they grow.
+
+ ## When to use memory
+
+ Use memory when your agent needs to maintain multi-turn conversations that reference prior exchanges, recall user preferences or facts from earlier in a session, or build context over time within a conversation thread. Skip memory for single-turn requests where each interaction is independent.
+
+ ## Setting up memory
+
+ To enable memory in Mastra, install the `@mastra/memory` package along with a storage provider.
+
+ **npm**:
+
+ ```bash
+ npm install @mastra/memory@latest @mastra/libsql@latest
+ ```
+
+ **pnpm**:
+
+ ```bash
+ pnpm add @mastra/memory@latest @mastra/libsql@latest
+ ```
+
+ **Yarn**:
+
+ ```bash
+ yarn add @mastra/memory@latest @mastra/libsql@latest
+ ```
+
+ **Bun**:
+
+ ```bash
+ bun add @mastra/memory@latest @mastra/libsql@latest
+ ```
+
+ ## Storage providers
+
+ Memory requires a storage provider to persist message history, including user messages and agent responses. For more details on available providers and how storage works in Mastra, see the [Storage](https://mastra.ai/docs/memory/storage) documentation.
+
+ ## Configuring memory
+
+ 1. Enable memory by creating a `Memory` instance and passing it to the agent’s `memory` option.
+
+    ```typescript
+    import { Agent } from "@mastra/core/agent";
+    import { Memory } from "@mastra/memory";
+
+    export const memoryAgent = new Agent({
+      id: 'memory-agent',
+      name: 'Memory Agent',
+      memory: new Memory({
+        options: {
+          lastMessages: 20,
+        },
+      }),
+    });
+    ```
+
+    > **Info:** Visit [Memory Class](https://mastra.ai/reference/memory/memory-class) for a full list of configuration options.
+
+ 2. Add a storage provider to your main Mastra instance to enable memory across all configured agents.
+
+    ```typescript
+    import { Mastra } from "@mastra/core";
+    import { LibSQLStore } from "@mastra/libsql";
+
+    export const mastra = new Mastra({
+      storage: new LibSQLStore({
+        id: 'mastra-storage',
+        url: ":memory:",
+      }),
+    });
+    ```
+
+    > **Info:** Visit [libSQL Storage](https://mastra.ai/reference/storage/libsql) for a full list of configuration options.
+
+ Alternatively, add storage directly to an agent’s memory to keep data separate or use different providers per agent.
+
+ ```typescript
+ import { Agent } from "@mastra/core/agent";
+ import { Memory } from "@mastra/memory";
+ import { LibSQLStore } from "@mastra/libsql";
+
+ export const memoryAgent = new Agent({
+   id: 'memory-agent',
+   name: 'Memory Agent',
+   memory: new Memory({
+     storage: new LibSQLStore({
+       id: 'mastra-storage',
+       url: ":memory:",
+     }),
+   }),
+ });
+ ```
+
+ > **Mastra Cloud Store limitation:** Agent-level storage is not supported when using [Mastra Cloud Store](https://mastra.ai/docs/mastra-cloud/deployment). If you use Mastra Cloud Store, configure storage on the Mastra instance instead. This limitation does not apply if you bring your own database.
+
+ ## Message history
+
+ Include a `memory` object with both `resource` and `thread` to track message history during agent calls.
+
+ - `resource`: A stable identifier for the user or entity.
+ - `thread`: An ID that isolates a specific conversation or session.
+
+ These fields tell the agent where to store and retrieve context, enabling persistent, thread-aware memory across a conversation.
+
+ ```typescript
+ const response = await memoryAgent.generate(
+   "Remember my favorite color is blue.",
+   {
+     memory: {
+       resource: "user-123",
+       thread: "conversation-123",
+     },
+   },
+ );
+ ```
+
+ To recall information stored in memory, call the agent with the same `resource` and `thread` values used in the original conversation.
+
+ ```typescript
+ const response = await memoryAgent.generate("What's my favorite color?", {
+   memory: {
+     resource: "user-123",
+     thread: "conversation-123",
+   },
+ });
+ ```
+
+ > **Warning:** Each thread has an owner (`resourceId`) that cannot be changed after creation. Avoid reusing the same thread ID for threads with different owners, as this will cause errors when querying.
+
+ To learn more about memory see the [Memory](https://mastra.ai/docs/memory/overview) documentation.
+
+ ## Observational Memory
+
+ For long-running conversations, raw message history grows until it fills the context window, degrading agent performance. [Observational Memory](https://mastra.ai/docs/memory/observational-memory) solves this by running background agents that compress old messages into dense observations, keeping the context window small while preserving long-term memory.
+
+ ```typescript
+ import { Agent } from "@mastra/core/agent";
+ import { Memory } from "@mastra/memory";
+
+ export const memoryAgent = new Agent({
+   id: 'memory-agent',
+   name: 'Memory Agent',
+   memory: new Memory({
+     options: {
+       observationalMemory: true,
+     },
+   }),
+ });
+ ```
+
+ Setting `observationalMemory: true` uses `google/gemini-2.5-flash` as the default model for the Observer and Reflector. To use a different model or customize thresholds, pass a config object:
+
+ ```typescript
+ import { Agent } from "@mastra/core/agent";
+ import { Memory } from "@mastra/memory";
+
+ export const memoryAgent = new Agent({
+   id: 'memory-agent',
+   name: 'Memory Agent',
+   memory: new Memory({
+     options: {
+       observationalMemory: {
+         model: "deepseek/deepseek-reasoner",
+         observation: {
+           messageTokens: 20_000,
+         },
+       },
+     },
+   }),
+ });
+ ```
+
+ > **Info:** See [Observational Memory](https://mastra.ai/docs/memory/observational-memory) for details on how observations and reflections work, and [the reference](https://mastra.ai/reference/memory/observational-memory) for all configuration options.
+
+ ## Using `RequestContext`
+
+ Use [RequestContext](https://mastra.ai/docs/server/request-context) to access request-specific values. This lets you conditionally select different memory or storage configurations based on the context of the request.
+
+ ```typescript
+ export type UserTier = {
+   "user-tier": "enterprise" | "pro";
+ };
+
+ const premiumMemory = new Memory();
+
+ const standardMemory = new Memory();
+
+ export const memoryAgent = new Agent({
+   id: 'memory-agent',
+   name: 'Memory Agent',
+   memory: ({ requestContext }) => {
+     const userTier = requestContext.get("user-tier") as UserTier["user-tier"];
+
+     return userTier === "enterprise" ? premiumMemory : standardMemory;
+   },
+ });
+ ```
+
+ > **Info:** Visit [Request Context](https://mastra.ai/docs/server/request-context) for more information.
+
+ ## Related
+
+ - [Observational Memory](https://mastra.ai/docs/memory/observational-memory)
+ - [Working Memory](https://mastra.ai/docs/memory/working-memory)
+ - [Semantic Recall](https://mastra.ai/docs/memory/semantic-recall)
+ - [Storage](https://mastra.ai/docs/memory/storage)
+ - [Request Context](https://mastra.ai/docs/server/request-context)
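The `messageTokens` threshold in the agent-memory doc above interacts with the async-buffering defaults stated in the changelog (`observation.bufferTokens: 0.2`, a 30k default threshold). A small sketch of the implied arithmetic; the formula is inferred from the stated percentages, not taken from the actual implementation.

```typescript
// Inferred buffering arithmetic (not the actual @mastra/memory code):
// when bufferTokens is a fraction, observations are buffered roughly every
// bufferTokens * messageTokens tokens of conversation growth.
function bufferIntervalTokens(messageTokens: number, bufferTokens: number): number {
  // Round to avoid floating-point dust on fractional multipliers.
  return Math.round(messageTokens * bufferTokens);
}

// Changelog defaults: 0.2 * 30_000 tokens, i.e. the "~6k tokens" it mentions.
const defaultInterval = bufferIntervalTokens(30_000, 0.2);

// With the 20k messageTokens threshold from the example above.
const customInterval = bufferIntervalTokens(20_000, 0.2);
```

This is why lowering `messageTokens` also makes background observation passes more frequent, since the buffer interval scales with it.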
@@ -1,14 +1,12 @@
1
- > Learn how to require approvals, suspend execution, and resume suspended networks while keeping humans in control of agent network workflows.
2
-
3
1
  # Network Approval
4
2
 
5
- Agent networks can require the same [human-in-the-loop](https://mastra.ai/docs/workflows/human-in-the-loop) oversight used in individual agents and workflows. When a tool, sub-agent, or workflow within a network requires approval or suspends execution, the network pauses and emits events that allow your application to collect user input before resuming.
3
+ Agent networks can require the same [human-in-the-loop](https://mastra.ai/docs/workflows/human-in-the-loop) oversight used in individual agents and workflows. When a tool, subagent, or workflow within a network requires approval or suspends execution, the network pauses and emits events that allow your application to collect user input before resuming.
6
4
 
7
5
  ## Storage
8
6
 
9
7
  Network approval uses snapshots to capture execution state. Ensure you've enabled a storage provider in your Mastra instance. If storage isn't enabled you'll see an error relating to snapshot not found.
10
8
 
11
- ```typescript title="src/mastra/index.ts"
9
+ ```typescript
12
10
  import { Mastra } from "@mastra/core/mastra";
13
11
  import { LibSQLStore } from "@mastra/libsql";
14
12
 
@@ -37,7 +35,7 @@ let runId: string;
37
35
  for await (const chunk of stream) {
38
36
  runId = stream.runId;
39
37
  // if the requirApproval is in a tool inside a subAgent or the subAgent has requireToolApproval set to true
40
- if (chunk.type === "agent-execution-approval") {
38
+ if (chunk.type === "agent-execution-approval") {
41
39
  console.log("Tool requires approval:", chunk.payload);
42
40
  }
43
41
 
@@ -199,8 +197,11 @@ const stream = await routingAgent.network("Process this request", {
  When `autoResumeSuspendedTools` is enabled:

  1. A primitive suspends execution by calling `suspend()` with a payload
+
  2. The suspension is persisted to memory along with the conversation
+
  3. When the user sends their next message on the same thread, the network:
+
  - Detects the suspended primitive from message history
  - Extracts `resumeData` from the user's message based on the tool's `resumeSchema`
  - Automatically resumes the primitive with the extracted data
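The extraction step above can be sketched as a self-contained simulation. This is not the Mastra implementation: the `ResumeSchema` shape and the keyword matching are hypothetical, purely to illustrate how a reply might be mapped onto a suspended tool's expected resume data.

```typescript
// Hypothetical resume schema: a single field with a simple type tag.
type ResumeSchema = { field: string; type: "boolean" | "string" };

// A suspended tool as it might be recorded in message history.
interface SuspendedTool {
  toolId: string;
  resumeSchema: ResumeSchema;
}

// Naive extraction: interpret a confirmation-style reply as a boolean field.
function extractResumeData(
  message: string,
  schema: ResumeSchema,
): Record<string, unknown> | null {
  if (schema.type === "boolean") {
    const normalized = message.trim().toLowerCase();
    if (["yes", "confirm", "approved"].some((w) => normalized.includes(w))) {
      return { [schema.field]: true };
    }
    if (["no", "cancel", "denied"].some((w) => normalized.includes(w))) {
      return { [schema.field]: false };
    }
  }
  return null; // no extractable resume data; the tool stays suspended
}

const suspended: SuspendedTool = {
  toolId: "delete-records",
  resumeSchema: { field: "confirmed", type: "boolean" },
};

console.log(extractResumeData("Yes, delete them", suspended.resumeSchema));
```

A real network would validate the extracted object against the tool's actual `resumeSchema` before resuming the primitive.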
@@ -241,7 +242,7 @@ for await (const chunk of resumedStream) {

  **Conversation flow:**

- ```
+ ```text
  User: "Delete the old records"
  Agent: "Please confirm: delete old records"

@@ -259,16 +260,16 @@ For automatic tool resumption to work:

  ### Manual vs automatic resumption

- | Approach | Use case |
- |----------|----------|
- | Manual (`resumeNetwork()`) | Programmatic control, webhooks, button clicks, external triggers |
+ | Approach                               | Use case                                                                 |
+ | -------------------------------------- | ------------------------------------------------------------------------ |
+ | Manual (`resumeNetwork()`)             | Programmatic control, webhooks, button clicks, external triggers         |
  | Automatic (`autoResumeSuspendedTools`) | Conversational flows where users provide resume data in natural language |

  Both approaches work with the same tool definitions. Automatic resumption triggers only when suspended tools exist in the message history and the user sends a new message on the same thread.
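The pause-and-collect pattern both approaches rely on can be simulated without any Mastra imports. The chunk shape is assumed from the `agent-execution-approval` events shown earlier, and `mockNetworkStream` is a stand-in for the real network stream:

```typescript
// Assumed chunk shape, modeled on the approval events above.
interface Chunk {
  type: string;
  payload?: unknown;
}

// Mock stream: a text delta, then an approval request.
async function* mockNetworkStream(): AsyncGenerator<Chunk> {
  yield { type: "text-delta", payload: "Working on it..." };
  yield { type: "agent-execution-approval", payload: { toolId: "delete-records" } };
}

// Collect approval events; a real app would surface each payload to the user
// and later resume the run with the user's decision.
async function collectApprovals(stream: AsyncGenerator<Chunk>): Promise<Chunk[]> {
  const approvals: Chunk[] = [];
  for await (const chunk of stream) {
    if (chunk.type === "agent-execution-approval") {
      approvals.push(chunk);
    }
  }
  return approvals;
}
```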

  ## Related

- - [Agent Networks](./networks)
- - [Agent Approval](./agent-approval)
+ - [Agent Networks](https://mastra.ai/docs/agents/networks)
+ - [Agent Approval](https://mastra.ai/docs/agents/agent-approval)
  - [Human-in-the-Loop](https://mastra.ai/docs/workflows/human-in-the-loop)
- - [Agent Memory](./agent-memory)
+ - [Agent Memory](https://mastra.ai/docs/agents/agent-memory)
@@ -1,8 +1,6 @@
- > Learn how to coordinate multiple agents, workflows, and tools using agent networks for complex, non-deterministic task execution.
-
  # Agent Networks

- Agent networks in Mastra coordinate multiple agents, workflows, and tools to handle tasks that aren't clearly defined upfront but can be inferred from the user's message or context. A top-level **routing agent** (a Mastra agent with other agents, workflows, and tools configured) uses an LLM to interpret the request and decide which primitives (sub-agents, workflows, or tools) to call, in what order, and with what data.
+ Agent networks in Mastra coordinate multiple agents, workflows, and tools to handle tasks that aren't clearly defined upfront but can be inferred from the user's message or context. A top-level **routing agent** (a Mastra agent with other agents, workflows, and tools configured) uses an LLM to interpret the request and decide which primitives (subagents, workflows, or tools) to call, in what order, and with what data.

  ## When to use networks

@@ -18,9 +16,9 @@ Mastra agent networks operate using these principles:

  ## Creating an agent network

- An agent network is built around a top-level routing agent that delegates tasks to agents, workflows, and tools defined in its configuration. Memory is configured on the routing agent using the `memory` option, and `instructions` define the agent's routing behavior.
+ An agent network is built around a top-level routing agent that delegates tasks to subagents, workflows, and tools defined in its configuration. Memory is configured on the routing agent using the `memory` option, and `instructions` define the agent's routing behavior.

- ```typescript {22-23,26,29} title="src/mastra/agents/routing-agent.ts"
+ ```typescript
  import { Agent } from "@mastra/core/agent";
  import { Memory } from "@mastra/memory";
  import { LibSQLStore } from "@mastra/libsql";
@@ -66,9 +64,9 @@ When configuring a Mastra agent network, each primitive (agent, workflow, or too

  #### Agent descriptions

- Each agent in a network should include a clear `description` that explains what the agent does.
+ Each subagent in a network should include a clear `description` that explains what the agent does.

- ```typescript title="src/mastra/agents/research-agent.ts"
+ ```typescript
  export const researchAgent = new Agent({
  id: "research-agent",
  name: "Research Agent",
@@ -78,7 +76,7 @@ export const researchAgent = new Agent({
  });
  ```

- ```typescript title="src/mastra/agents/writing-agent.ts"
+ ```typescript
  export const writingAgent = new Agent({
  id: "writing-agent",
  name: "Writing Agent",
@@ -92,7 +90,7 @@ export const writingAgent = new Agent({

  Workflows in a network should include a `description` to explain their purpose, along with `inputSchema` and `outputSchema` to describe the expected data.

- ```typescript title="src/mastra/workflows/city-workflow.ts"
+ ```typescript
  export const cityWorkflow = createWorkflow({
  id: "city-workflow",
  description: `This workflow handles city-specific research tasks.
@@ -112,7 +110,7 @@ export const cityWorkflow = createWorkflow({

  Tools in a network should include a `description` to explain their purpose, along with `inputSchema` and `outputSchema` to describe the expected data.

- ```typescript title="src/mastra/tools/weather-tool.ts"
+ ```typescript
  export const weatherTool = createTool({
  id: "weather-tool",
  description: ` Retrieves current weather information using the wttr.in API.
@@ -286,7 +284,7 @@ const final = await stream.object;

  ## Related

- - [Agent Memory](./agent-memory)
- - [Workflows Overview](../workflows/overview)
+ - [Agent Memory](https://mastra.ai/docs/agents/agent-memory)
+ - [Workflows Overview](https://mastra.ai/docs/workflows/overview)
  - [Request Context](https://mastra.ai/docs/server/request-context)
  - [Supervisor example](https://github.com/mastra-ai/mastra/tree/main/examples/supervisor-agent)