@mastra/libsql 1.0.0-beta.11 → 1.0.0-beta.13

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (33)
  1. package/CHANGELOG.md +154 -0
  2. package/dist/docs/README.md +2 -2
  3. package/dist/docs/SKILL.md +2 -2
  4. package/dist/docs/SOURCE_MAP.json +1 -1
  5. package/dist/docs/agents/01-agent-memory.md +6 -0
  6. package/dist/docs/agents/02-networks.md +56 -0
  7. package/dist/docs/agents/03-agent-approval.md +66 -6
  8. package/dist/docs/agents/04-network-approval.md +274 -0
  9. package/dist/docs/core/01-reference.md +7 -8
  10. package/dist/docs/memory/02-storage.md +77 -25
  11. package/dist/docs/memory/03-working-memory.md +10 -6
  12. package/dist/docs/memory/04-semantic-recall.md +2 -4
  13. package/dist/docs/memory/05-memory-processors.md +2 -3
  14. package/dist/docs/memory/06-reference.md +3 -5
  15. package/dist/docs/rag/01-retrieval.md +5 -6
  16. package/dist/docs/storage/01-reference.md +106 -15
  17. package/dist/docs/vectors/01-reference.md +3 -5
  18. package/dist/index.cjs +165 -71
  19. package/dist/index.cjs.map +1 -1
  20. package/dist/index.js +165 -71
  21. package/dist/index.js.map +1 -1
  22. package/dist/storage/db/index.d.ts.map +1 -1
  23. package/dist/storage/db/utils.d.ts +16 -1
  24. package/dist/storage/db/utils.d.ts.map +1 -1
  25. package/dist/storage/domains/memory/index.d.ts +2 -2
  26. package/dist/storage/domains/memory/index.d.ts.map +1 -1
  27. package/dist/storage/domains/scores/index.d.ts +0 -1
  28. package/dist/storage/domains/scores/index.d.ts.map +1 -1
  29. package/dist/storage/domains/workflows/index.d.ts +1 -0
  30. package/dist/storage/domains/workflows/index.d.ts.map +1 -1
  31. package/dist/vector/index.d.ts +6 -2
  32. package/dist/vector/index.d.ts.map +1 -1
  33. package/package.json +3 -3
@@ -9,12 +9,9 @@
 
 > Documentation for the `Mastra` class in Mastra, the core entry point for managing agents, workflows, MCP servers, and server endpoints.
 
-The `Mastra` class is the central orchestrator in any Mastra application, managing agents, workflows, storage, logging, telemetry, and more. Typically, you create a single instance of `Mastra` to coordinate your application.
+The `Mastra` class is the central orchestrator in any Mastra application, managing agents, workflows, storage, logging, observability, and more. Typically, you create a single instance of `Mastra` to coordinate your application.
 
-Think of `Mastra` as a top-level registry:
-
-- Registering **integrations** makes them accessible to **agents**, **workflows**, and **tools** alike.
-- **tools** aren’t registered on `Mastra` directly but are associated with agents and discovered automatically.
+Think of `Mastra` as a top-level registry where you register agents, workflows, tools, and other components that need to be accessible throughout your application.
 
 ## Usage example
 
@@ -41,6 +38,8 @@ export const mastra = new Mastra({
 
 ## Constructor parameters
 
+Visit the [Configuration reference](https://mastra.ai/reference/v1/configuration) for detailed documentation on all available configuration options.
+
 ---
 
 ## Reference: Mastra.getMemory()
@@ -73,7 +72,7 @@ import { Memory } from "@mastra/memory";
 import { LibSQLStore } from "@mastra/libsql";
 
 const conversationMemory = new Memory({
-  storage: new LibSQLStore({ url: ":memory:" }),
+  storage: new LibSQLStore({ id: 'conversation-store', url: ":memory:" }),
 });
 
 const mastra = new Mastra({
@@ -125,12 +124,12 @@ import { LibSQLStore } from "@mastra/libsql";
 
 const conversationMemory = new Memory({
   id: "conversation-memory",
-  storage: new LibSQLStore({ url: ":memory:" }),
+  storage: new LibSQLStore({ id: 'conversation-store', url: ":memory:" }),
 });
 
 const analyticsMemory = new Memory({
   id: "analytics-memory",
-  storage: new LibSQLStore({ url: ":memory:" }),
+  storage: new LibSQLStore({ id: 'analytics-store', url: ":memory:" }),
 });
 
 const mastra = new Mastra({
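The hunks above all make the same mechanical change: every `LibSQLStore` now takes a required `id` alongside `url`. If you construct stores in your own code, the upgrade is a sketch of this shape (based only on the constructor calls shown in this diff):

```typescript
import { LibSQLStore } from "@mastra/libsql";

// beta.11: no identifier required
// const storage = new LibSQLStore({ url: ":memory:" });

// beta.13: each store instance carries a unique id
const storage = new LibSQLStore({ id: "conversation-store", url: ":memory:" });
```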
@@ -4,11 +4,11 @@
 
 For Mastra to remember previous interactions, you must configure a storage adapter. Mastra is designed to work with your preferred database provider - choose from the [supported providers](#supported-providers) and pass it to your Mastra instance.
 
-```typescript
+```typescript title="src/mastra/index.ts"
 import { Mastra } from "@mastra/core";
 import { LibSQLStore } from "@mastra/libsql";
 
-const mastra = new Mastra({
+export const mastra = new Mastra({
   storage: new LibSQLStore({
     id: 'mastra-storage',
     url: "file:./mastra.db",
@@ -17,7 +17,7 @@ const mastra = new Mastra({
 ```
 On first interaction, Mastra automatically creates the necessary tables following the [core schema](https://mastra.ai/reference/v1/storage/overview#core-schema). This includes tables for messages, threads, resources, workflows, traces, and evaluation datasets.
 
-## Supported Providers
+## Supported providers
 
 Each provider page includes installation instructions, configuration parameters, and usage examples:
 
@@ -35,19 +35,19 @@ Each provider page includes installation instructions, configuration parameters,
 > **Note:**
 libSQL is the easiest way to get started because it doesn’t require running a separate database server
 
-## Configuration Scope
+## Configuration scope
 
 You can configure storage at two different scopes:
 
 ### Instance-level storage
 
-Add storage to your Mastra instance so all agents share the same memory provider:
+Add storage to your Mastra instance so all agents, workflows, observability traces and scores share the same memory provider:
 
-```typescript
+```typescript
 import { Mastra } from "@mastra/core";
 import { PostgresStore } from "@mastra/pg";
 
-const mastra = new Mastra({
+export const mastra = new Mastra({
   storage: new PostgresStore({
     id: 'mastra-storage',
     connectionString: process.env.DATABASE_URL,
@@ -55,20 +55,55 @@ const mastra = new Mastra({
 });
 
 // All agents automatically use this storage
-const agent1 = new Agent({ memory: new Memory() });
-const agent2 = new Agent({ memory: new Memory() });
+const agent1 = new Agent({ id: "agent-1", memory: new Memory() });
+const agent2 = new Agent({ id: "agent-2", memory: new Memory() });
+```
+
+This is useful when all primitives share the same storage backend and have similar performance, scaling, and operational requirements.
+
+#### Composite storage
+
+Add storage to your Mastra instance using `MastraStorage` and configure individual storage domains to use different storage providers.
+
+```typescript title="src/mastra/index.ts"
+import { Mastra } from "@mastra/core";
+import { MastraStorage } from "@mastra/core/storage";
+import { MemoryLibSQL } from "@mastra/libsql";
+import { WorkflowsPG } from "@mastra/pg";
+import { ObservabilityStorageClickhouse } from "@mastra/clickhouse";
+
+export const mastra = new Mastra({
+  storage: new MastraStorage({
+    id: "composite",
+    domains: {
+      memory: new MemoryLibSQL({ url: "file:./memory.db" }),
+      workflows: new WorkflowsPG({ connectionString: process.env.DATABASE_URL }),
+      observability: new ObservabilityStorageClickhouse({
+        url: process.env.CLICKHOUSE_URL,
+        username: process.env.CLICKHOUSE_USERNAME,
+        password: process.env.CLICKHOUSE_PASSWORD,
+      }),
+    },
+  }),
+});
 ```
 
+This is useful when different types of data have different performance or operational requirements, such as low-latency storage for memory, durable storage for workflows, and high-throughput storage for observability.
+
+> **Note:**
+See [Storage Domains](https://mastra.ai/reference/v1/storage/composite#storage-domains) for more information.
+
 ### Agent-level storage
 
-Add storage to a specific agent when you need data boundaries or compliance requirements:
+Agent-level storage overrides storage configured at the instance level. Add storage to a specific agent when you need data boundaries or compliance requirements:
 
-```typescript
+```typescript title="src/mastra/agents/memory-agent.ts"
 import { Agent } from "@mastra/core/agent";
 import { Memory } from "@mastra/memory";
 import { PostgresStore } from "@mastra/pg";
 
-const agent = new Agent({
+export const agent = new Agent({
+  id: "agent",
   memory: new Memory({
     storage: new PostgresStore({
       id: 'agent-storage',
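The `domains` map introduced in the hunk above resolves storage per domain, with a `default` fallback appearing in a later hunk of this diff. A dependency-free sketch of that lookup rule; the `Store` type and `resolveStore` helper are illustrative, not part of the Mastra API:

```typescript
type Store = { id: string };

// Hypothetical helper mirroring how a composite store could pick a backend:
// an explicit domain entry wins, otherwise the default is used.
function resolveStore(
  domains: Record<string, Store | undefined>,
  fallback: Store,
  domain: string,
): Store {
  return domains[domain] ?? fallback;
}

const pg: Store = { id: "pg-default" };
const libsql: Store = { id: "memory-libsql" };

const memoryStore = resolveStore({ memory: libsql }, pg, "memory");
const workflowStore = resolveStore({ memory: libsql }, pg, "workflows");
```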
@@ -80,7 +115,10 @@ const agent = new Agent({
 
 This is useful when different agents need to store data in separate databases for security, compliance, or organizational reasons.
 
-## Threads and Resources
+> **Mastra Cloud Store limitation**
+Agent-level storage is not supported when using [Mastra Cloud Store](https://mastra.ai/docs/v1/mastra-cloud/deployment#using-mastra-cloud-store). If you use Mastra Cloud Store, configure storage on the Mastra instance instead. This limitation does not apply if you bring your own database.
+
+## Threads and resources
 
 Mastra organizes memory into threads using two identifiers:
 
@@ -89,7 +127,7 @@ Mastra organizes memory into threads using two identifiers:
 
 Both identifiers are required for agents to store and recall information:
 
-```typescript
+```typescript
 const stream = await agent.stream("message for agent", {
   memory: {
     thread: "convo_123",
@@ -101,14 +139,30 @@ const stream = await agent.stream("message for agent", {
 > **Note:**
 [Studio](https://mastra.ai/docs/v1/getting-started/studio) automatically generates a thread and resource ID for you. Remember to pass these explicitly when calling `stream` or `generate` yourself.
 
+### Thread and resource relationship
+
+Each thread has an owner (its `resourceId`) that is set when the thread is created and cannot be changed. When you query a thread, you must use the correct owner's resource ID. Attempting to query a thread with a different resource ID will result in an error:
+
+```text
+Thread with id <thread_id> is for resource with id <resource_a>
+but resource <resource_b> was queried
+```
+
+Note that while each thread has one owner, messages within that thread can have different `resourceId` values. This is used for message attribution and filtering (e.g., distinguishing between different agents in a multi-agent system, or filtering messages for analytics).
+
+**Security:** Memory is a storage layer, not an authorization layer. Your application must implement access control before calling memory APIs. The `resourceId` parameter controls both validation and filtering - provide it to validate ownership and filter messages, or omit it for server-side access without validation.
+
+To avoid accidentally reusing thread IDs across different owners, use UUIDs: `crypto.randomUUID()`
+
 ### Thread title generation
 
 Mastra can automatically generate descriptive thread titles based on the user's first message.
 
 Use this option when implementing a ChatGPT-style chat interface to render a title alongside each thread in the conversation list (for example, in a sidebar) derived from the thread’s initial user message.
 
-```typescript
+```typescript
 export const testAgent = new Agent({
+  id: "test-agent",
   memory: new Memory({
     options: {
       generateTitle: true,
@@ -123,13 +177,12 @@ To optimize cost or behavior, provide a smaller `model` and custom `instructions
 
 ```typescript
 export const testAgent = new Agent({
+  id: "test-agent",
   memory: new Memory({
     options: {
-      threads: {
-        generateTitle: {
-          model: "openai/gpt-4o-mini",
-          instructions: "Generate a concise title based on the user's first message",
-        },
+      generateTitle: {
+        model: "openai/gpt-4o-mini",
+        instructions: "Generate a concise title based on the user's first message",
       },
     },
   }),
@@ -142,7 +195,7 @@ Semantic recall uses vector embeddings to retrieve relevant past messages based
 
 The vector database doesn't have to be the same as your storage provider. For example, you might use PostgreSQL for storage and Pinecone for vectors:
 
-```typescript
+```typescript
 import { Mastra } from "@mastra/core";
 import { Agent } from "@mastra/core/agent";
 import { Memory } from "@mastra/memory";
@@ -150,7 +203,7 @@ import { PostgresStore } from "@mastra/pg";
 import { PineconeVector } from "@mastra/pinecone";
 
 // Instance-level vector configuration
-const mastra = new Mastra({
+export const mastra = new Mastra({
   storage: new PostgresStore({
     id: 'mastra-storage',
     connectionString: process.env.DATABASE_URL,
@@ -158,13 +211,12 @@ const mastra = new Mastra({
 });
 
 // Agent-level vector configuration
-const agent = new Agent({
+export const agent = new Agent({
+  id: "agent",
   memory: new Memory({
     vector: new PineconeVector({
       id: 'agent-vector',
      apiKey: process.env.PINECONE_API_KEY,
-      environment: process.env.PINECONE_ENVIRONMENT,
-      indexName: 'agent-embeddings',
    }),
    options: {
      semanticRecall: {
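The removed lines above show that `PineconeVector` no longer takes `environment` or `indexName` at construction; judging from this diff, only `id` and `apiKey` remain. The reduced constructor shape, as a sketch:

```typescript
import { PineconeVector } from "@mastra/pinecone";

// beta.13: environment and indexName are no longer constructor options
const vector = new PineconeVector({
  id: "agent-vector",
  apiKey: process.env.PINECONE_API_KEY,
});
```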
@@ -80,13 +80,15 @@ const memory = new Memory({
 
 ### Usage with Agents
 
-When using resource-scoped memory, make sure to pass the `resourceId` parameter:
+When using resource-scoped memory, make sure to pass the `resource` parameter in the memory options:
 
 ```typescript
-// Resource-scoped memory requires resourceId
+// Resource-scoped memory requires resource
 const response = await agent.generate("Hello!", {
-  threadId: "conversation-123",
-  resourceId: "user-alice-456", // Same user across different threads
+  memory: {
+    thread: "conversation-123",
+    resource: "user-alice-456", // Same user across different threads
+  },
 });
 ```
 
@@ -339,8 +341,10 @@ const thread = await memory.createThread({
 
 // The agent will now have access to this information in all messages
 await agent.generate("What's my blood type?", {
-  threadId: thread.id,
-  resourceId: "user-456",
+  memory: {
+    thread: thread.id,
+    resource: "user-456",
+  },
 });
 // Response: "Your blood type is O+."
 ```
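Both hunks above apply the same call-site migration: top-level `threadId`/`resourceId` options move under a `memory` key as `thread`/`resource`. A throwaway helper for porting your own call sites; `toMemoryOptions` is illustrative, not a Mastra export:

```typescript
interface LegacyOptions {
  threadId: string;
  resourceId: string;
}

interface MemoryOptions {
  memory: { thread: string; resource: string };
}

// Rewrites the beta.11 option shape into the beta.13 one.
function toMemoryOptions({ threadId, resourceId }: LegacyOptions): MemoryOptions {
  return { memory: { thread: threadId, resource: resourceId } };
}

const options = toMemoryOptions({
  threadId: "conversation-123",
  resourceId: "user-alice-456",
});
```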
@@ -56,7 +56,7 @@ const agent = new Agent({
     // this is the default vector db if omitted
     vector: new LibSQLVector({
       id: 'agent-vector',
-      connectionUrl: "file:./local.db",
+      url: "file:./local.db",
     }),
   }),
 });
@@ -230,6 +230,4 @@ You might want to disable semantic recall in scenarios like:
 
 ## Viewing Recalled Messages
 
-When tracing is enabled, any messages retrieved via semantic recall will appear in the agent's trace output, alongside recent message history (if configured).
-
-For more info on viewing message traces, see [Viewing Retrieved Messages](./overview#viewing-retrieved-messages).
+When tracing is enabled, any messages retrieved via semantic recall will appear in the agent's trace output, alongside recent message history (if configured).
@@ -106,7 +106,6 @@ const agent = new Agent({
     vector: new PineconeVector({
       id: "memory-vector",
       apiKey: process.env.PINECONE_API_KEY!,
-      environment: "us-east-1",
     }),
     embedder: new OpenAIEmbedder({
       model: "text-embedding-3-small",
@@ -169,7 +168,7 @@ If you manually add a memory processor to `inputProcessors` or `outputProcessors
 ```typescript
 import { Agent } from "@mastra/core/agent";
 import { Memory } from "@mastra/memory";
-import { MessageHistory } from "@mastra/memory/processors";
+import { MessageHistory } from "@mastra/core/processors";
 import { TokenLimiter } from "@mastra/core/processors";
 import { LibSQLStore } from "@mastra/libsql";
 import { openai } from "@ai-sdk/openai";
@@ -264,7 +263,7 @@ const agent = new Agent({
 // If the guardrail aborts, nothing is saved to memory
 const result = await agent.generate("Hello");
 if (result.tripwire) {
-  console.log("Blocked:", result.tripwireReason);
+  console.log("Blocked:", result.tripwire.reason);
   // Memory is empty - no messages were persisted
 }
 ```
@@ -57,7 +57,7 @@ export const agent = new Agent({
     }),
     vector: new LibSQLVector({
       id: 'test-agent-vector',
-      connectionUrl: "file:./vector-memory.db",
+      url: "file:./vector-memory.db",
     }),
     options: {
       lastMessages: 10,
@@ -69,9 +69,7 @@ export const agent = new Agent({
       workingMemory: {
         enabled: true,
       },
-      threads: {
-        generateTitle: true,
-      },
+      generateTitle: true,
     },
   }),
 });
@@ -129,7 +127,7 @@ export const agent = new Agent({
 - [createThread](https://mastra.ai/reference/v1/memory/createThread)
 - [recall](https://mastra.ai/reference/v1/memory/recall)
 - [getThreadById](https://mastra.ai/reference/v1/memory/getThreadById)
-- [listThreadsByResourceId](https://mastra.ai/reference/v1/memory/listThreadsByResourceId)
+- [listThreads](https://mastra.ai/reference/v1/memory/listThreads)
 - [deleteMessages](https://mastra.ai/reference/v1/memory/deleteMessages)
 - [cloneThread](https://mastra.ai/reference/v1/memory/cloneThread)
 - [Clone Utility Methods](https://mastra.ai/reference/v1/memory/clone-utilities)
@@ -171,7 +171,7 @@ The Vector Query Tool supports database-specific configurations that enable you
 > **Note:**
 These configurations are for **query-time options** like namespaces, performance tuning, and filtering—not for database connection setup.
 
-Connection credentials (URLs, auth tokens) are configured when you instantiate the vector store class (e.g., `new LibSQLVector({ connectionUrl: '...' })`).
+Connection credentials (URLs, auth tokens) are configured when you instantiate the vector store class (e.g., `new LibSQLVector({ url: '...' })`).
 
 ```ts
 import { createVectorQueryTool } from "@mastra/rag";
@@ -258,11 +258,10 @@ requestContext.set("databaseConfig", {
   },
 });
 
-await pineconeQueryTool.execute({
-  context: { queryText: "search query" },
-  mastra,
-  requestContext,
-});
+await pineconeQueryTool.execute(
+  { queryText: "search query" },
+  { mastra, requestContext }
+);
 ```
 
 For detailed configuration options and advanced usage, see the [Vector Query Tool Reference](https://mastra.ai/reference/v1/tools/vector-query-tool).
@@ -5,27 +5,27 @@
 
 ---
 
-## Reference: Storage Composition
+## Reference: Composite Storage
 
 > Documentation for combining multiple storage backends in Mastra.
 
-MastraStorage can compose storage domains from different adapters. Use it when you need different databases for different purposes. For example, use LibSQL for memory and PostgreSQL for workflows.
+`MastraStorage` can compose storage domains from different providers. Use it when you need different databases for different purposes. For example, use LibSQL for memory and PostgreSQL for workflows.
 
 ## Installation
 
-MastraStorage is included in `@mastra/core`:
+`MastraStorage` is included in `@mastra/core`:
 
 ```bash
 npm install @mastra/core@beta
 ```
 
-You'll also need to install the storage adapters you want to compose:
+You'll also need to install the storage providers you want to compose:
 
 ```bash
 npm install @mastra/pg@beta @mastra/libsql@beta
 ```
 
-## Storage Domains
+## Storage domains
 
 Mastra organizes storage into five specialized domains, each handling a specific type of data. Each domain can be backed by a different storage adapter, and domain classes are exported from each storage package.
 
@@ -43,13 +43,13 @@ Mastra organizes storage into five specialized domains, each handling a specific
 
 Import domain classes directly from each store package and compose them:
 
-```typescript
+```typescript title="src/mastra/index.ts"
 import { MastraStorage } from "@mastra/core/storage";
 import { WorkflowsPG, ScoresPG } from "@mastra/pg";
 import { MemoryLibSQL } from "@mastra/libsql";
 import { Mastra } from "@mastra/core";
 
-const mastra = new Mastra({
+export const mastra = new Mastra({
   storage: new MastraStorage({
     id: "composite",
     domains: {
@@ -65,7 +65,7 @@ const mastra = new Mastra({
 
 Use `default` to specify a fallback storage, then override specific domains:
 
-```typescript
+```typescript title="src/mastra/index.ts"
 import { MastraStorage } from "@mastra/core/storage";
 import { PostgresStore } from "@mastra/pg";
 import { MemoryLibSQL } from "@mastra/libsql";
@@ -76,7 +76,7 @@ const pgStore = new PostgresStore({
   connectionString: process.env.DATABASE_URL,
 });
 
-const mastra = new Mastra({
+export const mastra = new Mastra({
   storage: new MastraStorage({
     id: "composite",
     default: pgStore,
@@ -91,9 +91,9 @@ const mastra = new Mastra({
 
 ## Initialization
 
-MastraStorage initializes each configured domain independently. When passed to the Mastra class, `init()` is called automatically:
+`MastraStorage` initializes each configured domain independently. When passed to the Mastra class, `init()` is called automatically:
 
-```typescript
+```typescript title="src/mastra/index.ts"
 import { MastraStorage } from "@mastra/core/storage";
 import { MemoryPG, WorkflowsPG, ScoresPG } from "@mastra/pg";
 import { Mastra } from "@mastra/core";
@@ -107,7 +107,7 @@ const storage = new MastraStorage({
   },
 });
 
-const mastra = new Mastra({
+export const mastra = new Mastra({
   storage, // init() called automatically
 });
 ```
@@ -132,7 +132,7 @@ const memoryStore = await storage.getStore("memory");
 const thread = await memoryStore?.getThreadById({ threadId: "..." });
 ```
 
-## Use Cases
+## Use cases
 
 ### Separate databases for different workloads
 
@@ -197,6 +197,7 @@ The DynamoDB storage implementation provides a scalable and performant NoSQL dat
 - Compatible with AWS DynamoDB Local for development
 - Stores Thread, Message, Trace, Eval, and Workflow data
 - Optimized for serverless environments
+- Configurable TTL (Time To Live) for automatic data expiration per entity type
 
 ## Installation
 
@@ -224,7 +225,7 @@ import { DynamoDBStore } from "@mastra/dynamodb";
 
 // Initialize the DynamoDB storage
 const storage = new DynamoDBStore({
-  name: "dynamodb", // A name for this storage instance
+  id: "dynamodb", // Unique identifier for this storage instance
   config: {
     tableName: "mastra-single-table", // Name of your DynamoDB table
     region: "us-east-1", // Optional: AWS region, defaults to 'us-east-1'
@@ -258,7 +259,7 @@ For local development, you can use [DynamoDB Local](https://docs.aws.amazon.com/
 import { DynamoDBStore } from "@mastra/dynamodb";
 
 const storage = new DynamoDBStore({
-  name: "dynamodb-local",
+  id: "dynamodb-local",
   config: {
     tableName: "mastra-single-table", // Ensure this table is created in your local DynamoDB
     region: "localhost", // Can be any string for local, 'localhost' is common
@@ -274,6 +275,96 @@ For local development, you can use [DynamoDB Local](https://docs.aws.amazon.com/
 
 ## Parameters
 
+## TTL (Time To Live) Configuration
+
+DynamoDB TTL allows you to automatically delete items after a specified time period. This is useful for:
+
+- **Cost optimization**: Automatically remove old data to reduce storage costs
+- **Data lifecycle management**: Implement retention policies for compliance
+- **Performance**: Prevent tables from growing indefinitely
+- **Privacy compliance**: Automatically purge personal data after specified periods
+
+### Enabling TTL
+
+To use TTL, you must:
+
+1. **Configure TTL in DynamoDBStore** (shown below)
+2. **Enable TTL on your DynamoDB table** via AWS Console or CLI, specifying the attribute name (default: `ttl`)
+
+```typescript
+import { DynamoDBStore } from "@mastra/dynamodb";
+
+const storage = new DynamoDBStore({
+  id: "dynamodb",
+  config: {
+    tableName: "mastra-single-table",
+    region: "us-east-1",
+    ttl: {
+      // Messages expire after 30 days
+      message: {
+        enabled: true,
+        defaultTtlSeconds: 30 * 24 * 60 * 60, // 30 days
+      },
+      // Threads expire after 90 days
+      thread: {
+        enabled: true,
+        defaultTtlSeconds: 90 * 24 * 60 * 60, // 90 days
+      },
+      // Traces expire after 7 days with custom attribute name
+      trace: {
+        enabled: true,
+        attributeName: "expiresAt", // Custom TTL attribute
+        defaultTtlSeconds: 7 * 24 * 60 * 60, // 7 days
+      },
+      // Workflow snapshots don't expire
+      workflow_snapshot: {
+        enabled: false,
+      },
+    },
+  },
+});
+```
+
+### Supported Entity Types
+
+TTL can be configured for these entity types:
+
+| Entity | Description |
+|--------|-------------|
+| `thread` | Conversation threads |
+| `message` | Messages within threads |
+| `trace` | Observability traces |
+| `eval` | Evaluation results |
+| `workflow_snapshot` | Workflow state snapshots |
+| `resource` | User/resource data |
+| `score` | Scoring results |
+
+### TTL Entity Configuration
+
+Each entity type accepts the following configuration:
+
+### Enabling TTL on Your DynamoDB Table
+
+After configuring TTL in your code, you must enable TTL on the DynamoDB table itself:
+
+**Using AWS CLI:**
+
+```bash
+aws dynamodb update-time-to-live \
+  --table-name mastra-single-table \
+  --time-to-live-specification "Enabled=true, AttributeName=ttl"
+```
+
+**Using AWS Console:**
+
+1. Go to the DynamoDB console
+2. Select your table
+3. Go to "Additional settings" tab
+4. Under "Time to Live (TTL)", click "Manage TTL"
+5. Enable TTL and specify the attribute name (default: `ttl`)
+
+> **Note**: DynamoDB deletes expired items within 48 hours after expiration. Items remain queryable until actually deleted.
+
 ## AWS IAM Permissions
 
 The IAM role or user executing the code needs appropriate permissions to interact with the specified DynamoDB table and its indexes. Below is a sample policy. Replace `${YOUR_TABLE_NAME}` with your actual table name and `${YOUR_AWS_REGION}` and `${YOUR_AWS_ACCOUNT_ID}` with appropriate values.
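One detail worth noting about the TTL hunk above: `defaultTtlSeconds` is a relative duration, while the attribute DynamoDB actually inspects must hold an absolute expiry as a Unix timestamp in seconds. A dependency-free sketch of that conversion; `expiresAt` is an illustrative helper, not part of `@mastra/dynamodb`:

```typescript
// Converts a relative TTL (e.g. 30 days) into the absolute epoch-seconds
// value DynamoDB's TTL feature expects in the configured attribute.
function expiresAt(ttlSeconds: number, nowMs: number = Date.now()): number {
  return Math.floor(nowMs / 1000) + ttlSeconds;
}

const THIRTY_DAYS = 30 * 24 * 60 * 60; // 2_592_000 seconds
const ttlAttribute = expiresAt(THIRTY_DAYS, 1_700_000_000_000);
// 1_700_000_000 + 2_592_000 = 1_702_592_000
```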
@@ -26,7 +26,7 @@ import { LibSQLVector } from "@mastra/libsql";
 // Create a new vector store instance
 const store = new LibSQLVector({
   id: 'libsql-vector',
-  connectionUrl: process.env.DATABASE_URL,
+  url: process.env.DATABASE_URL,
   // Optional: for Turso cloud databases
   authToken: process.env.DATABASE_AUTH_TOKEN,
 });
@@ -193,7 +193,7 @@ export const libsqlAgent = new Agent({
   }),
   vector: new LibSQLVector({
     id: 'libsql-agent-vector',
-    connectionUrl: "file:libsql-agent.db",
+    url: "file:libsql-agent.db",
   }),
   embedder: fastembed,
   options: {
@@ -202,9 +202,7 @@ export const libsqlAgent = new Agent({
     topK: 3,
     messageRange: 2,
   },
-  threads: {
-    generateTitle: true, // Explicitly enable automatic title generation
-  },
+  generateTitle: true, // Explicitly enable automatic title generation
 },
 }),
 });
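The final hunks complete a rename that recurs throughout this diff: `LibSQLVector`'s `connectionUrl` parameter is now `url`. When upgrading, a project-wide search for `connectionUrl` should find every affected call site. The before/after shape, per the diff:

```typescript
import { LibSQLVector } from "@mastra/libsql";

// beta.11
// const vector = new LibSQLVector({ id: "agent-vector", connectionUrl: "file:./local.db" });

// beta.13
const vector = new LibSQLVector({ id: "agent-vector", url: "file:./local.db" });
```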