@mastra/libsql 1.6.1-alpha.0 → 1.6.2-alpha.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (33)
  1. package/CHANGELOG.md +22 -0
  2. package/dist/docs/SKILL.md +50 -0
  3. package/dist/docs/assets/SOURCE_MAP.json +6 -0
  4. package/dist/docs/references/docs-agents-agent-approval.md +558 -0
  5. package/dist/docs/references/docs-agents-agent-memory.md +209 -0
  6. package/dist/docs/references/docs-agents-network-approval.md +275 -0
  7. package/dist/docs/references/docs-agents-networks.md +299 -0
  8. package/dist/docs/references/docs-memory-memory-processors.md +314 -0
  9. package/dist/docs/references/docs-memory-message-history.md +260 -0
  10. package/dist/docs/references/docs-memory-overview.md +45 -0
  11. package/dist/docs/references/docs-memory-semantic-recall.md +272 -0
  12. package/dist/docs/references/docs-memory-storage.md +261 -0
  13. package/dist/docs/references/docs-memory-working-memory.md +400 -0
  14. package/dist/docs/references/docs-observability-overview.md +70 -0
  15. package/dist/docs/references/docs-observability-tracing-exporters-default.md +209 -0
  16. package/dist/docs/references/docs-rag-retrieval.md +515 -0
  17. package/dist/docs/references/docs-workflows-snapshots.md +238 -0
  18. package/dist/docs/references/guides-agent-frameworks-ai-sdk.md +140 -0
  19. package/dist/docs/references/reference-core-getMemory.md +50 -0
  20. package/dist/docs/references/reference-core-listMemory.md +56 -0
  21. package/dist/docs/references/reference-core-mastra-class.md +66 -0
  22. package/dist/docs/references/reference-memory-memory-class.md +147 -0
  23. package/dist/docs/references/reference-storage-composite.md +235 -0
  24. package/dist/docs/references/reference-storage-dynamodb.md +282 -0
  25. package/dist/docs/references/reference-storage-libsql.md +135 -0
  26. package/dist/docs/references/reference-vectors-libsql.md +305 -0
  27. package/dist/index.cjs +14 -3
  28. package/dist/index.cjs.map +1 -1
  29. package/dist/index.js +14 -3
  30. package/dist/index.js.map +1 -1
  31. package/dist/storage/domains/memory/index.d.ts.map +1 -1
  32. package/dist/vector/index.d.ts.map +1 -1
  33. package/package.json +5 -5
@@ -0,0 +1,147 @@
# Memory Class

The `Memory` class provides a robust system for managing conversation history and thread-based message storage in Mastra. It enables persistent storage of conversations, semantic search capabilities, and efficient message retrieval. You must configure a storage provider for conversation history, and if you enable semantic recall you will also need to provide a vector store and embedder.

## Usage example

```typescript
import { Memory } from '@mastra/memory'
import { Agent } from '@mastra/core/agent'

export const agent = new Agent({
  name: 'test-agent',
  instructions: 'You are an agent with memory.',
  model: 'openai/gpt-5.1',
  memory: new Memory({
    options: {
      workingMemory: {
        enabled: true,
      },
    },
  }),
})
```

> To enable `workingMemory` on an agent, you’ll need a storage provider configured on your main Mastra instance. See [Mastra class](https://mastra.ai/reference/core/mastra-class) for more information.

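As a sketch of the note above, registering a storage provider on the Mastra instance might look like this (the agent import path is illustrative, not part of this reference):

```typescript
import { Mastra } from '@mastra/core'
import { LibSQLStore } from '@mastra/libsql'

// Hypothetical agent module path — adjust to your project layout.
import { agent } from './agents/test-agent'

export const mastra = new Mastra({
  agents: { agent },
  // Storage configured here backs features like workingMemory.
  storage: new LibSQLStore({ id: 'mastra-storage', url: 'file:./mastra.db' }),
})
```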
## Constructor parameters

**storage?:** (`MastraCompositeStore`): Storage implementation for persisting memory data. Defaults to `new DefaultStorage({ config: { url: "file:memory.db" } })` if not provided.

**vector?:** (`MastraVector | false`): Vector store for semantic search capabilities. Set to `false` to disable vector operations.

**embedder?:** (`EmbeddingModel<string> | EmbeddingModelV2<string>`): Embedder instance for vector embeddings. Required when semantic recall is enabled.

**options?:** (`MemoryConfig`): Memory configuration options.

### Options parameters

**lastMessages?:** (`number | false`): Number of most recent messages to include in context. Set to `false` to disable loading conversation history into context. Use `Number.MAX_SAFE_INTEGER` to retrieve all messages with no limit. To prevent saving new messages, use the `readOnly` option instead. (Default: `10`)

**readOnly?:** (`boolean`): When `true`, prevents memory from saving new messages and provides working memory as read-only context (without the `updateWorkingMemory` tool). Useful for read-only operations like previews, internal routing agents, or sub-agents that should reference but not modify memory. (Default: `false`)

**semanticRecall?:** (`boolean | { topK: number; messageRange: number | { before: number; after: number }; scope?: 'thread' | 'resource' }`): Enables semantic search in message history. Can be a boolean or an object with configuration options. When enabled, requires both a vector store and an embedder to be configured. The default `topK` is `4` and the default `messageRange` is `{ before: 1, after: 1 }`. (Default: `false`)

**workingMemory?:** (`WorkingMemory`): Configuration for the working memory feature. Can be `{ enabled: boolean; template?: string; schema?: ZodObject<any> | JSONSchema7; scope?: 'thread' | 'resource' }`, or `{ enabled: false }` to disable. (Default: `{ enabled: false, template: '# User Information\n- **First Name**:\n- **Last Name**:\n...' }`)

**observationalMemory?:** (`boolean | ObservationalMemoryOptions`): Enables Observational Memory for long-context agentic memory. Set to `true` for defaults, or pass a config object to customize token budgets, models, and scope. See the [Observational Memory reference](https://mastra.ai/reference/memory/observational-memory) for configuration details. (Default: `false`)

**generateTitle?:** (`boolean | { model: DynamicArgument<MastraLanguageModel>; instructions?: DynamicArgument<string> }`): Controls automatic thread title generation from the user's first message. Can be a boolean or an object with a custom model and instructions. (Default: `false`)

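To illustrate how these options compose, here is a minimal sketch of a read-only memory for an internal routing agent, using only the options documented above (the `routerMemory` name is illustrative):

```typescript
import { Memory } from '@mastra/memory'

// A memory that loads recent history for context but never writes:
// no new messages are saved, and working memory is read-only context.
export const routerMemory = new Memory({
  options: {
    lastMessages: 20,
    readOnly: true,
  },
})
```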
## Returns

**memory:** (`Memory`): A new `Memory` instance with the specified configuration.

## Extended usage example

```typescript
import { Memory } from '@mastra/memory'
import { Agent } from '@mastra/core/agent'
import { LibSQLStore, LibSQLVector } from '@mastra/libsql'

export const agent = new Agent({
  name: 'test-agent',
  instructions: 'You are an agent with memory.',
  model: 'openai/gpt-5.1',
  memory: new Memory({
    storage: new LibSQLStore({
      id: 'test-agent-storage',
      url: 'file:./working-memory.db',
    }),
    vector: new LibSQLVector({
      id: 'test-agent-vector',
      url: 'file:./vector-memory.db',
    }),
    options: {
      lastMessages: 10,
      semanticRecall: {
        topK: 3,
        messageRange: 2,
        scope: 'resource',
      },
      workingMemory: {
        enabled: true,
      },
      generateTitle: true,
    },
  }),
})
```

## PostgreSQL with index configuration

```typescript
import { Memory } from '@mastra/memory'
import { Agent } from '@mastra/core/agent'
import { ModelRouterEmbeddingModel } from '@mastra/core/llm'
import { PgStore, PgVector } from '@mastra/pg'

export const agent = new Agent({
  name: 'pg-agent',
  instructions: 'You are an agent with optimized PostgreSQL memory.',
  model: 'openai/gpt-5.1',
  memory: new Memory({
    storage: new PgStore({
      id: 'pg-agent-storage',
      connectionString: process.env.DATABASE_URL,
    }),
    vector: new PgVector({
      id: 'pg-agent-vector',
      connectionString: process.env.DATABASE_URL,
    }),
    embedder: new ModelRouterEmbeddingModel('openai/text-embedding-3-small'),
    options: {
      lastMessages: 20,
      semanticRecall: {
        topK: 5,
        messageRange: 3,
        scope: 'resource',
        indexConfig: {
          type: 'hnsw', // Use HNSW for better performance
          metric: 'dotproduct', // Optimal for OpenAI embeddings
          m: 16, // Number of bi-directional links
          efConstruction: 64, // Construction-time candidate list size
        },
      },
      workingMemory: {
        enabled: true,
      },
    },
  }),
})
```

### Related

- [Getting Started with Memory](https://mastra.ai/docs/memory/overview)
- [Semantic Recall](https://mastra.ai/docs/memory/semantic-recall)
- [Working Memory](https://mastra.ai/docs/memory/working-memory)
- [Observational Memory](https://mastra.ai/docs/memory/observational-memory)
- [Memory Processors](https://mastra.ai/docs/memory/memory-processors)
- [createThread](https://mastra.ai/reference/memory/createThread)
- [recall](https://mastra.ai/reference/memory/recall)
- [getThreadById](https://mastra.ai/reference/memory/getThreadById)
- [listThreads](https://mastra.ai/reference/memory/listThreads)
- [deleteMessages](https://mastra.ai/reference/memory/deleteMessages)
- [cloneThread](https://mastra.ai/reference/memory/cloneThread)
- [Clone Utility Methods](https://mastra.ai/reference/memory/clone-utilities)
@@ -0,0 +1,235 @@
# Composite Storage

`MastraCompositeStore` can compose storage domains from different providers. Use it when you need different databases for different purposes, for example LibSQL for memory and PostgreSQL for workflows.

## Installation

`MastraCompositeStore` is included in `@mastra/core`:

**npm**:

```bash
npm install @mastra/core@latest
```

**pnpm**:

```bash
pnpm add @mastra/core@latest
```

**Yarn**:

```bash
yarn add @mastra/core@latest
```

**Bun**:

```bash
bun add @mastra/core@latest
```

You'll also need to install the storage providers you want to compose:

**npm**:

```bash
npm install @mastra/pg@latest @mastra/libsql@latest
```

**pnpm**:

```bash
pnpm add @mastra/pg@latest @mastra/libsql@latest
```

**Yarn**:

```bash
yarn add @mastra/pg@latest @mastra/libsql@latest
```

**Bun**:

```bash
bun add @mastra/pg@latest @mastra/libsql@latest
```

## Storage domains

Mastra organizes storage into five specialized domains, each handling a specific type of data. Each domain can be backed by a different storage adapter, and domain classes are exported from each storage package.

| Domain | Description |
| --- | --- |
| `memory` | Conversation persistence for agents. Stores threads (conversation sessions), messages, resources (user identities), and working memory (persistent context across conversations). |
| `workflows` | Workflow execution state. When workflows suspend for human input, external events, or scheduled resumption, their state is persisted here to enable resumption after server restarts. |
| `scores` | Evaluation results from Mastra's evals system. Scores and metrics are persisted here for analysis and comparison over time. |
| `observability` | Telemetry data including traces and spans. Agent interactions, tool calls, and LLM requests generate spans collected into traces for debugging and performance analysis. |
| `agents` | Agent configurations for stored agents. Enables agents to be defined and updated at runtime without code deployments. |

## Usage

### Basic composition

Import domain classes directly from each store package and compose them:

```typescript
import { MastraCompositeStore } from '@mastra/core/storage'
import { WorkflowsPG, ScoresPG } from '@mastra/pg'
import { MemoryLibSQL } from '@mastra/libsql'
import { Mastra } from '@mastra/core'

export const mastra = new Mastra({
  storage: new MastraCompositeStore({
    id: 'composite',
    domains: {
      memory: new MemoryLibSQL({ url: 'file:./local.db' }),
      workflows: new WorkflowsPG({ connectionString: process.env.DATABASE_URL }),
      scores: new ScoresPG({ connectionString: process.env.DATABASE_URL }),
    },
  }),
})
```

### With a default storage

Use `default` to specify a fallback storage, then override specific domains:

```typescript
import { MastraCompositeStore } from '@mastra/core/storage'
import { PostgresStore } from '@mastra/pg'
import { MemoryLibSQL } from '@mastra/libsql'
import { Mastra } from '@mastra/core'

const pgStore = new PostgresStore({
  id: 'pg',
  connectionString: process.env.DATABASE_URL,
})

export const mastra = new Mastra({
  storage: new MastraCompositeStore({
    id: 'composite',
    default: pgStore,
    domains: {
      memory: new MemoryLibSQL({ url: 'file:./local.db' }),
    },
  }),
})
```

## Options

**id:** (`string`): Unique identifier for this storage instance.

**default?:** (`MastraCompositeStore`): Default storage adapter. Domains not explicitly specified in `domains` will use this storage's domains as fallbacks.

**domains?:** (`object`): Individual domain overrides. Each domain can come from a different storage adapter. These take precedence over the default storage.

**domains.memory?:** (`MemoryStorage`): Storage for threads, messages, and resources.

**domains.workflows?:** (`WorkflowsStorage`): Storage for workflow snapshots.

**domains.scores?:** (`ScoresStorage`): Storage for evaluation scores.

**domains.observability?:** (`ObservabilityStorage`): Storage for traces and spans.

**domains.agents?:** (`AgentsStorage`): Storage for stored agent configurations.

**disableInit?:** (`boolean`): When `true`, automatic initialization is disabled. You must call `init()` explicitly.

## Initialization

`MastraCompositeStore` initializes each configured domain independently. When passed to the Mastra class, `init()` is called automatically:

```typescript
import { MastraCompositeStore } from '@mastra/core/storage'
import { MemoryPG, WorkflowsPG, ScoresPG } from '@mastra/pg'
import { Mastra } from '@mastra/core'

const storage = new MastraCompositeStore({
  id: 'composite',
  domains: {
    memory: new MemoryPG({ connectionString: process.env.DATABASE_URL }),
    workflows: new WorkflowsPG({ connectionString: process.env.DATABASE_URL }),
    scores: new ScoresPG({ connectionString: process.env.DATABASE_URL }),
  },
})

export const mastra = new Mastra({
  storage, // init() called automatically
})
```

If using storage directly, call `init()` explicitly:

```typescript
import { MastraCompositeStore } from '@mastra/core/storage'
import { MemoryPG } from '@mastra/pg'

const storage = new MastraCompositeStore({
  id: 'composite',
  domains: {
    memory: new MemoryPG({ connectionString: process.env.DATABASE_URL }),
  },
})

await storage.init()

// Access domain-specific stores via getStore()
const memoryStore = await storage.getStore('memory')
const thread = await memoryStore?.getThreadById({ threadId: '...' })
```

## Use cases

### Separate databases for different workloads

Use a local database for development while keeping production data in a managed service:

```typescript
import { MastraCompositeStore } from '@mastra/core/storage'
import { MemoryPG, WorkflowsPG, ScoresPG } from '@mastra/pg'
import { MemoryLibSQL } from '@mastra/libsql'

const storage = new MastraCompositeStore({
  id: 'composite',
  domains: {
    // Use local SQLite for development, PostgreSQL for production
    memory:
      process.env.NODE_ENV === 'development'
        ? new MemoryLibSQL({ url: 'file:./dev.db' })
        : new MemoryPG({ connectionString: process.env.DATABASE_URL }),
    workflows: new WorkflowsPG({ connectionString: process.env.DATABASE_URL }),
    scores: new ScoresPG({ connectionString: process.env.DATABASE_URL }),
  },
})
```

### Specialized storage for observability

Observability data can quickly overwhelm general-purpose databases in production. A single agent interaction can generate hundreds of spans, and high-traffic applications can produce thousands of traces per day.

**ClickHouse** is recommended for production observability because it's optimized for high-volume, write-heavy analytics workloads. Use composite storage to route observability to ClickHouse while keeping other data in your primary database:

```typescript
import { MastraCompositeStore } from '@mastra/core/storage'
import { MemoryPG, WorkflowsPG, ScoresPG } from '@mastra/pg'
import { ObservabilityStorageClickhouse } from '@mastra/clickhouse'

const storage = new MastraCompositeStore({
  id: 'composite',
  domains: {
    memory: new MemoryPG({ connectionString: process.env.DATABASE_URL }),
    workflows: new WorkflowsPG({ connectionString: process.env.DATABASE_URL }),
    scores: new ScoresPG({ connectionString: process.env.DATABASE_URL }),
    observability: new ObservabilityStorageClickhouse({
      url: process.env.CLICKHOUSE_URL,
      username: process.env.CLICKHOUSE_USERNAME,
      password: process.env.CLICKHOUSE_PASSWORD,
    }),
  },
})
```

> **Info:** This approach is also required when using storage providers that don't support observability (like Convex, DynamoDB, or Cloudflare). See the [DefaultExporter documentation](https://mastra.ai/docs/observability/tracing/exporters/default) for the full list of supported providers.
@@ -0,0 +1,282 @@
# DynamoDB Storage

The DynamoDB storage implementation provides a scalable and performant NoSQL database solution for Mastra, leveraging a single-table design pattern with [ElectroDB](https://electrodb.dev/).

> **Observability Not Supported:** DynamoDB storage **does not support the observability domain**. Traces from the `DefaultExporter` cannot be persisted to DynamoDB, and Mastra Studio's observability features won't work with DynamoDB as your only storage provider. To enable observability, use [composite storage](https://mastra.ai/reference/storage/composite) to route observability data to a supported provider like ClickHouse or PostgreSQL.

> **Item Size Limit:** DynamoDB enforces a **400 KB maximum item size**. This limit can be exceeded when storing messages with base64-encoded attachments such as images. See [Handling large attachments](https://mastra.ai/docs/memory/storage) for workarounds, including uploading attachments to external storage.

## Features

- Efficient single-table design for all Mastra storage needs
- Based on ElectroDB for type-safe DynamoDB access
- Support for AWS credentials, regions, and endpoints
- Compatible with AWS DynamoDB Local for development
- Stores Thread, Message, Eval, and Workflow data
- Optimized for serverless environments
- Configurable TTL (Time To Live) for automatic data expiration per entity type

## Installation

**npm**:

```bash
npm install @mastra/dynamodb@latest
```

**pnpm**:

```bash
pnpm add @mastra/dynamodb@latest
```

**Yarn**:

```bash
yarn add @mastra/dynamodb@latest
```

**Bun**:

```bash
bun add @mastra/dynamodb@latest
```

## Prerequisites

Before using this package, you **must** create a DynamoDB table with a specific structure, including primary keys and Global Secondary Indexes (GSIs). This adapter expects the DynamoDB table and its GSIs to be provisioned externally.

Detailed instructions for setting up the table using AWS CloudFormation or AWS CDK are available in [TABLE_SETUP.md](https://github.com/mastra-ai/mastra/blob/main/stores/dynamodb/TABLE_SETUP.md). Please ensure your table is configured according to those instructions before proceeding.

## Usage

### Basic Usage

```typescript
import { Memory } from '@mastra/memory'
import { DynamoDBStore } from '@mastra/dynamodb'

// Initialize the DynamoDB storage
const storage = new DynamoDBStore({
  id: 'dynamodb', // Unique identifier for this storage instance
  config: {
    tableName: 'mastra-single-table', // Name of your DynamoDB table
    region: 'us-east-1', // Optional: AWS region, defaults to 'us-east-1'
    // endpoint: "http://localhost:8000", // Optional: For local DynamoDB
    // credentials: { accessKeyId: "YOUR_ACCESS_KEY", secretAccessKey: "YOUR_SECRET_KEY" } // Optional
  },
})

// Example: Initialize Memory with DynamoDB storage
const memory = new Memory({
  storage,
  options: {
    lastMessages: 10,
  },
})
```

### Local Development with DynamoDB Local

For local development, you can use [DynamoDB Local](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html).

1. **Run DynamoDB Local (e.g., using Docker):**

   ```bash
   docker run -p 8000:8000 amazon/dynamodb-local
   ```

2. **Configure `DynamoDBStore` to use the local endpoint:**

   ```typescript
   import { DynamoDBStore } from '@mastra/dynamodb'

   const storage = new DynamoDBStore({
     id: 'dynamodb-local',
     config: {
       tableName: 'mastra-single-table', // Ensure this table is created in your local DynamoDB
       region: 'localhost', // Can be any string for local; 'localhost' is common
       endpoint: 'http://localhost:8000',
       // For DynamoDB Local, credentials are not typically required unless configured.
       // If you've configured local credentials:
       // credentials: { accessKeyId: "fakeMyKeyId", secretAccessKey: "fakeSecretAccessKey" }
     },
   })
   ```

You will still need to create the table and GSIs in your local DynamoDB instance, for example, using the AWS CLI pointed to your local endpoint.

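For instance, a table could be created against the local endpoint with the AWS CLI. The key schema below is only a placeholder; use the exact attribute definitions and GSIs from `TABLE_SETUP.md`:

```shell
# Placeholder key schema — replace with the definitions from TABLE_SETUP.md.
aws dynamodb create-table \
  --table-name mastra-single-table \
  --attribute-definitions AttributeName=pk,AttributeType=S AttributeName=sk,AttributeType=S \
  --key-schema AttributeName=pk,KeyType=HASH AttributeName=sk,KeyType=RANGE \
  --billing-mode PAY_PER_REQUEST \
  --endpoint-url http://localhost:8000
```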
## Parameters

**id:** (`string`): Unique identifier for this storage instance.

**config.tableName:** (`string`): The name of your DynamoDB table.

**config.region?:** (`string`): AWS region. Defaults to `'us-east-1'`. For local development, can be set to `'localhost'` or similar.

**config.endpoint?:** (`string`): Custom endpoint for DynamoDB (e.g., `'http://localhost:8000'` for local development).

**config.credentials?:** (`object`): AWS credentials object with `accessKeyId` and `secretAccessKey`. If not provided, the AWS SDK will attempt to source credentials from environment variables, IAM roles (e.g., for EC2/Lambda), or the shared AWS credentials file.

**config.ttl?:** (`object`): TTL (Time To Live) configuration for automatic data expiration. Configure per entity type: `thread`, `message`, `trace`, `eval`, `workflow_snapshot`, `resource`, `score`. Each entity config includes `enabled` (boolean), `attributeName` (string, default: `'ttl'`), and `defaultTtlSeconds` (number).

## TTL (Time To Live) Configuration

DynamoDB TTL allows you to automatically delete items after a specified time period. This is useful for:

- **Cost optimization**: Automatically remove old data to reduce storage costs
- **Data lifecycle management**: Implement retention policies for compliance
- **Performance**: Prevent tables from growing indefinitely
- **Privacy compliance**: Automatically purge personal data after specified periods

### Enabling TTL

To use TTL, you must:

1. **Configure TTL in `DynamoDBStore`** (shown below)
2. **Enable TTL on your DynamoDB table** via the AWS Console or CLI, specifying the attribute name (default: `ttl`)

```typescript
import { DynamoDBStore } from '@mastra/dynamodb'

const storage = new DynamoDBStore({
  id: 'dynamodb',
  config: {
    tableName: 'mastra-single-table',
    region: 'us-east-1',
    ttl: {
      // Messages expire after 30 days
      message: {
        enabled: true,
        defaultTtlSeconds: 30 * 24 * 60 * 60, // 30 days
      },
      // Threads expire after 90 days
      thread: {
        enabled: true,
        defaultTtlSeconds: 90 * 24 * 60 * 60, // 90 days
      },
      // Traces expire after 7 days with a custom attribute name
      trace: {
        enabled: true,
        attributeName: 'expiresAt', // Custom TTL attribute
        defaultTtlSeconds: 7 * 24 * 60 * 60, // 7 days
      },
      // Workflow snapshots don't expire
      workflow_snapshot: {
        enabled: false,
      },
    },
  },
})
```

### Supported Entity Types

TTL can be configured for these entity types:

| Entity | Description |
| --- | --- |
| `thread` | Conversation threads |
| `message` | Messages within threads |
| `trace` | Observability traces |
| `eval` | Evaluation results |
| `workflow_snapshot` | Workflow state snapshots |
| `resource` | User/resource data |
| `score` | Scoring results |

### TTL Entity Configuration

Each entity type accepts the following configuration:

**enabled:** (`boolean`): Whether TTL is enabled for this entity type.

**attributeName?:** (`string`): The DynamoDB attribute name to use for TTL. Must match the TTL attribute configured on your DynamoDB table. Defaults to `'ttl'`.

**defaultTtlSeconds?:** (`number`): Default TTL in seconds from item creation time. Items will be automatically deleted by DynamoDB after this duration.

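DynamoDB's TTL mechanism expects the TTL attribute to hold an absolute Unix epoch timestamp in seconds, so a relative `defaultTtlSeconds` is offset from the item's creation time. A minimal sketch of that computation (the `ttlFor` helper is illustrative, not part of the adapter's API):

```typescript
// Derive a DynamoDB TTL attribute value (epoch seconds) from a
// relative defaultTtlSeconds offset.
const ttlFor = (defaultTtlSeconds: number, nowMs: number = Date.now()): number =>
  Math.floor(nowMs / 1000) + defaultTtlSeconds

// A 30-day TTL, matching the message config shown above:
const thirtyDays = 30 * 24 * 60 * 60 // 2592000 seconds
```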
### Enabling TTL on Your DynamoDB Table

After configuring TTL in your code, you must enable TTL on the DynamoDB table itself:

**Using the AWS CLI:**

```bash
aws dynamodb update-time-to-live \
  --table-name mastra-single-table \
  --time-to-live-specification "Enabled=true, AttributeName=ttl"
```

**Using the AWS Console:**

1. Go to the DynamoDB console
2. Select your table
3. Go to the "Additional settings" tab
4. Under "Time to Live (TTL)", click "Manage TTL"
5. Enable TTL and specify the attribute name (default: `ttl`)

> **Note:** DynamoDB deletes expired items within 48 hours after expiration. Items remain queryable until actually deleted.

## AWS IAM Permissions

The IAM role or user executing the code needs appropriate permissions to interact with the specified DynamoDB table and its indexes. Below is a sample policy. Replace `${YOUR_TABLE_NAME}` with your actual table name and `${YOUR_AWS_REGION}` and `${YOUR_AWS_ACCOUNT_ID}` with appropriate values.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "dynamodb:DescribeTable",
        "dynamodb:GetItem",
        "dynamodb:PutItem",
        "dynamodb:UpdateItem",
        "dynamodb:DeleteItem",
        "dynamodb:Query",
        "dynamodb:Scan",
        "dynamodb:BatchGetItem",
        "dynamodb:BatchWriteItem"
      ],
      "Resource": [
        "arn:aws:dynamodb:${YOUR_AWS_REGION}:${YOUR_AWS_ACCOUNT_ID}:table/${YOUR_TABLE_NAME}",
        "arn:aws:dynamodb:${YOUR_AWS_REGION}:${YOUR_AWS_ACCOUNT_ID}:table/${YOUR_TABLE_NAME}/index/*"
      ]
    }
  ]
}
```

## Key Considerations

Before diving into the architectural details, keep these key points in mind when working with the DynamoDB storage adapter:

- **External Table Provisioning:** This adapter _requires_ you to create and configure the DynamoDB table and its Global Secondary Indexes (GSIs) yourself, prior to using the adapter. Follow the guide in [TABLE_SETUP.md](https://github.com/mastra-ai/mastra/blob/main/stores/dynamodb/TABLE_SETUP.md).
- **Single-Table Design:** All Mastra data (threads, messages, etc.) is stored in one DynamoDB table. This is a deliberate design choice optimized for DynamoDB, differing from relational database approaches.
- **Understanding GSIs:** Familiarity with how the GSIs are structured (as per `TABLE_SETUP.md`) is important for understanding data retrieval and potential query patterns.
- **ElectroDB:** The adapter uses ElectroDB to manage interactions with DynamoDB, providing a layer of abstraction and type safety over raw DynamoDB operations.

## Architectural Approach

This storage adapter utilizes a **single-table design pattern** leveraging [ElectroDB](https://electrodb.dev/), a common and recommended approach for DynamoDB. This differs architecturally from relational database adapters (like `@mastra/pg` or `@mastra/libsql`) that typically use multiple tables, each dedicated to a specific entity (threads, messages, etc.).

Key aspects of this approach:

- **DynamoDB Native:** The single-table design is optimized for DynamoDB's key-value and query capabilities, often leading to better performance and scalability compared to mimicking relational models.
- **External Table Management:** Unlike some adapters that might offer helper functions to create tables via code, this adapter **expects the DynamoDB table and its associated Global Secondary Indexes (GSIs) to be provisioned externally** before use. Please refer to [TABLE_SETUP.md](https://github.com/mastra-ai/mastra/blob/main/stores/dynamodb/TABLE_SETUP.md) for detailed instructions using tools like AWS CloudFormation or CDK. The adapter focuses solely on interacting with the pre-existing table structure.
- **Consistency via Interface:** While the underlying storage model differs, this adapter adheres to the same `MastraStorage` interface as other adapters, ensuring it can be used interchangeably within the Mastra `Memory` component.

### Mastra Data in the Single Table

Within the single DynamoDB table, different Mastra data entities (such as Threads, Messages, Traces, Evals, and Workflows) are managed and distinguished using ElectroDB. ElectroDB defines specific models for each entity type, which include unique key structures and attributes. This allows the adapter to store and retrieve diverse data types efficiently within the same table.

For example, a `Thread` item might have a primary key like `THREAD#<threadId>`, while a `Message` item belonging to that thread might use `THREAD#<threadId>` as a partition key and `MESSAGE#<messageId>` as a sort key. The Global Secondary Indexes (GSIs), detailed in `TABLE_SETUP.md`, are strategically designed to support common access patterns across these different entities, such as fetching all messages for a thread or querying traces associated with a particular workflow.

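A minimal sketch of that composite-key pattern (these helpers are illustrative only; the adapter's actual ElectroDB entity definitions differ):

```typescript
// Illustrative single-table keys: a thread item keyed by its own id,
// and a message item co-located under its thread's partition key.
const threadKey = (threadId: string) => ({
  pk: `THREAD#${threadId}`,
  sk: `THREAD#${threadId}`,
})

const messageKey = (threadId: string, messageId: string) => ({
  pk: `THREAD#${threadId}`,
  sk: `MESSAGE#${messageId}`,
})
```

Because messages share their thread's partition key, fetching a thread's messages is a single-partition query rather than a scan.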
### Advantages of Single-Table Design

This implementation uses a single-table design pattern with ElectroDB, which offers several advantages within the context of DynamoDB:

1. **Lower cost (potentially):** Fewer tables can simplify Read/Write Capacity Unit (RCU/WCU) provisioning and management, especially with on-demand capacity.
2. **Better performance:** Related data can be co-located or accessed efficiently through GSIs, enabling fast lookups for common access patterns.
3. **Simplified administration:** Fewer distinct tables to monitor, back up, and manage.
4. **Reduced complexity in access patterns:** ElectroDB helps manage the complexity of item types and access patterns on a single table.
5. **Transaction support:** DynamoDB transactions can be used across different "entity" types stored within the same table if needed.