@mastra/pg 1.0.0-beta.11 → 1.0.0-beta.13

@@ -0,0 +1,828 @@
1
+ # Storage API Reference
2
+
3
+ > API reference for storage - 3 entries
4
+
5
+
6
+ ---
7
+
8
+ ## Reference: Composite Storage
9
+
10
+ > Documentation for combining multiple storage backends in Mastra.
11
+
12
+ `MastraStorage` can compose storage domains from different providers. Use it when you need different databases for different purposes. For example, use LibSQL for memory and PostgreSQL for workflows.
13
+
14
+ ## Installation
15
+
16
+ `MastraStorage` is included in `@mastra/core`:
17
+
18
+ ```bash
19
+ npm install @mastra/core@beta
20
+ ```
21
+
22
+ You'll also need to install the storage providers you want to compose:
23
+
24
+ ```bash
25
+ npm install @mastra/pg@beta @mastra/libsql@beta
26
+ ```
27
+
28
+ ## Storage domains
29
+
30
+ Mastra organizes storage into five specialized domains, each handling a specific type of data. Each domain can be backed by a different storage adapter, and domain classes are exported from each storage package.
31
+
32
+ | Domain | Description |
33
+ |--------|-------------|
34
+ | `memory` | Conversation persistence for agents. Stores threads (conversation sessions), messages, resources (user identities), and working memory (persistent context across conversations). |
35
+ | `workflows` | Workflow execution state. When workflows suspend for human input, external events, or scheduled resumption, their state is persisted here to enable resumption after server restarts. |
36
+ | `scores` | Evaluation results from Mastra's evals system. Scores and metrics are persisted here for analysis and comparison over time. |
37
+ | `observability` | Telemetry data including traces and spans. Agent interactions, tool calls, and LLM requests generate spans collected into traces for debugging and performance analysis. |
38
+ | `agents` | Agent configurations for stored agents. Enables agents to be defined and updated at runtime without code deployments. |
39
+
40
+ ## Usage
41
+
42
+ ### Basic composition
43
+
44
+ Import domain classes directly from each store package and compose them:
45
+
46
+ ```typescript title="src/mastra/index.ts"
47
+ import { MastraStorage } from "@mastra/core/storage";
48
+ import { WorkflowsPG, ScoresPG } from "@mastra/pg";
49
+ import { MemoryLibSQL } from "@mastra/libsql";
50
+ import { Mastra } from "@mastra/core";
51
+
52
+ export const mastra = new Mastra({
53
+ storage: new MastraStorage({
54
+ id: "composite",
55
+ domains: {
56
+ memory: new MemoryLibSQL({ url: "file:./local.db" }),
57
+ workflows: new WorkflowsPG({ connectionString: process.env.DATABASE_URL }),
58
+ scores: new ScoresPG({ connectionString: process.env.DATABASE_URL }),
59
+ },
60
+ }),
61
+ });
62
+ ```
63
+
64
+ ### With a default storage
65
+
66
+ Use `default` to specify a fallback storage, then override specific domains:
67
+
68
+ ```typescript title="src/mastra/index.ts"
69
+ import { MastraStorage } from "@mastra/core/storage";
70
+ import { PostgresStore } from "@mastra/pg";
71
+ import { MemoryLibSQL } from "@mastra/libsql";
72
+ import { Mastra } from "@mastra/core";
73
+
74
+ const pgStore = new PostgresStore({
75
+ id: "pg",
76
+ connectionString: process.env.DATABASE_URL,
77
+ });
78
+
79
+ export const mastra = new Mastra({
80
+ storage: new MastraStorage({
81
+ id: "composite",
82
+ default: pgStore,
83
+ domains: {
84
+ memory: new MemoryLibSQL({ url: "file:./local.db" }),
85
+ },
86
+ }),
87
+ });
88
+ ```
89
+
90
+ ## Options
91
+
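+ The full option types ship with `@mastra/core/storage`. A minimal sketch of the shape used throughout this reference, inferred from the examples below (treat the type names and optionality here as assumptions):
+
+ ```typescript
+ // Hypothetical option shape inferred from the examples in this reference;
+ // the exact types are exported by @mastra/core/storage.
+ type DomainStore = unknown; // e.g. MemoryLibSQL, WorkflowsPG, or ScoresPG instances
+
+ interface MastraStorageOptions {
+   /** Unique identifier for this composite storage instance. */
+   id: string;
+   /** Fallback store used for any domain not listed under `domains`. */
+   default?: unknown; // e.g. a PostgresStore instance
+   /** Per-domain stores; any subset of the five domains can be provided. */
+   domains?: {
+     memory?: DomainStore;
+     workflows?: DomainStore;
+     scores?: DomainStore;
+     observability?: DomainStore;
+     agents?: DomainStore;
+   };
+ }
+ ```
+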
92
+ ## Initialization
93
+
94
+ `MastraStorage` initializes each configured domain independently. When the storage instance is passed to the `Mastra` class, `init()` is called automatically:
95
+
96
+ ```typescript title="src/mastra/index.ts"
97
+ import { MastraStorage } from "@mastra/core/storage";
98
+ import { MemoryPG, WorkflowsPG, ScoresPG } from "@mastra/pg";
99
+ import { Mastra } from "@mastra/core";
100
+
101
+ const storage = new MastraStorage({
102
+ id: "composite",
103
+ domains: {
104
+ memory: new MemoryPG({ connectionString: process.env.DATABASE_URL }),
105
+ workflows: new WorkflowsPG({ connectionString: process.env.DATABASE_URL }),
106
+ scores: new ScoresPG({ connectionString: process.env.DATABASE_URL }),
107
+ },
108
+ });
109
+
110
+ export const mastra = new Mastra({
111
+ storage, // init() called automatically
112
+ });
113
+ ```
114
+
115
+ If using storage directly, call `init()` explicitly:
116
+
117
+ ```typescript
118
+ import { MastraStorage } from "@mastra/core/storage";
119
+ import { MemoryPG } from "@mastra/pg";
120
+
121
+ const storage = new MastraStorage({
122
+ id: "composite",
123
+ domains: {
124
+ memory: new MemoryPG({ connectionString: process.env.DATABASE_URL }),
125
+ },
126
+ });
127
+
128
+ await storage.init();
129
+
130
+ // Access domain-specific stores via getStore()
131
+ const memoryStore = await storage.getStore("memory");
132
+ const thread = await memoryStore?.getThreadById({ threadId: "..." });
133
+ ```
134
+
135
+ ## Use cases
136
+
137
+ ### Different databases per environment
138
+
139
+ Use a local database for development while keeping production data in a managed service:
140
+
141
+ ```typescript
142
+ import { MastraStorage } from "@mastra/core/storage";
143
+ import { MemoryPG, WorkflowsPG, ScoresPG } from "@mastra/pg";
144
+ import { MemoryLibSQL } from "@mastra/libsql";
145
+
146
+ const storage = new MastraStorage({
147
+ id: "composite",
148
+ domains: {
149
+ // Use local SQLite for development, PostgreSQL for production
150
+ memory:
151
+ process.env.NODE_ENV === "development"
152
+ ? new MemoryLibSQL({ url: "file:./dev.db" })
153
+ : new MemoryPG({ connectionString: process.env.DATABASE_URL }),
154
+ workflows: new WorkflowsPG({ connectionString: process.env.DATABASE_URL }),
155
+ scores: new ScoresPG({ connectionString: process.env.DATABASE_URL }),
156
+ },
157
+ });
158
+ ```
159
+
160
+ ### Specialized storage for observability
161
+
162
+ Use a time-series database for traces while keeping other data in PostgreSQL:
163
+
164
+ ```typescript
165
+ import { MastraStorage } from "@mastra/core/storage";
166
+ import { MemoryPG, WorkflowsPG, ScoresPG } from "@mastra/pg";
167
+ import { ObservabilityStorageClickhouse } from "@mastra/clickhouse";
168
+
169
+ const storage = new MastraStorage({
170
+ id: "composite",
171
+ domains: {
172
+ memory: new MemoryPG({ connectionString: process.env.DATABASE_URL }),
173
+ workflows: new WorkflowsPG({ connectionString: process.env.DATABASE_URL }),
174
+ scores: new ScoresPG({ connectionString: process.env.DATABASE_URL }),
175
+ observability: new ObservabilityStorageClickhouse({
176
+ url: process.env.CLICKHOUSE_URL,
177
+ username: process.env.CLICKHOUSE_USERNAME,
178
+ password: process.env.CLICKHOUSE_PASSWORD,
179
+ }),
180
+ },
181
+ });
182
+ ```
183
+
184
+ ---
185
+
186
+ ## Reference: DynamoDB Storage
187
+
188
+ > Documentation for the DynamoDB storage implementation in Mastra, using a single-table design with ElectroDB.
189
+
190
+ The DynamoDB storage implementation provides a scalable and performant NoSQL database solution for Mastra, leveraging a single-table design pattern with [ElectroDB](https://electrodb.dev/).
191
+
192
+ ## Features
193
+
194
+ - Efficient single-table design for all Mastra storage needs
195
+ - Based on ElectroDB for type-safe DynamoDB access
196
+ - Support for AWS credentials, regions, and endpoints
197
+ - Compatible with AWS DynamoDB Local for development
198
+ - Stores Thread, Message, Trace, Eval, and Workflow data
199
+ - Optimized for serverless environments
200
+ - Configurable TTL (Time To Live) for automatic data expiration per entity type
201
+
202
+ ## Installation
203
+
204
+ ```bash
205
+ npm install @mastra/dynamodb@beta
206
+ # or
207
+ pnpm add @mastra/dynamodb@beta
208
+ # or
209
+ yarn add @mastra/dynamodb@beta
210
+ ```
211
+
212
+ ## Prerequisites
213
+
214
+ Before using this package, you **must** create a DynamoDB table with a specific structure, including primary keys and Global Secondary Indexes (GSIs). This adapter expects the DynamoDB table and its GSIs to be provisioned externally.
215
+
216
+ Detailed instructions for setting up the table using AWS CloudFormation or AWS CDK are available in [TABLE_SETUP.md](https://github.com/mastra-ai/mastra/blob/main/stores/dynamodb/TABLE_SETUP.md). Please ensure your table is configured according to those instructions before proceeding.
217
+
218
+ ## Usage
219
+
220
+ ### Basic Usage
221
+
222
+ ```typescript
223
+ import { Memory } from "@mastra/memory";
224
+ import { DynamoDBStore } from "@mastra/dynamodb";
225
+
226
+ // Initialize the DynamoDB storage
227
+ const storage = new DynamoDBStore({
228
+ id: "dynamodb", // Unique identifier for this storage instance
229
+ config: {
230
+ tableName: "mastra-single-table", // Name of your DynamoDB table
231
+ region: "us-east-1", // Optional: AWS region, defaults to 'us-east-1'
232
+ // endpoint: "http://localhost:8000", // Optional: For local DynamoDB
233
+ // credentials: { accessKeyId: "YOUR_ACCESS_KEY", secretAccessKey: "YOUR_SECRET_KEY" } // Optional
234
+ },
235
+ });
236
+
237
+ // Example: Initialize Memory with DynamoDB storage
238
+ const memory = new Memory({
239
+ storage,
240
+ options: {
241
+ lastMessages: 10,
242
+ },
243
+ });
244
+ ```
245
+
246
+ ### Local Development with DynamoDB Local
247
+
248
+ For local development, you can use [DynamoDB Local](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html).
249
+
250
+ 1. **Run DynamoDB Local (e.g., using Docker):**
251
+
252
+ ```bash
253
+ docker run -p 8000:8000 amazon/dynamodb-local
254
+ ```
255
+
256
+ 2. **Configure `DynamoDBStore` to use the local endpoint:**
257
+
258
+ ```typescript
259
+ import { DynamoDBStore } from "@mastra/dynamodb";
260
+
261
+ const storage = new DynamoDBStore({
262
+ id: "dynamodb-local",
263
+ config: {
264
+ tableName: "mastra-single-table", // Ensure this table is created in your local DynamoDB
265
+ region: "localhost", // Can be any string for local, 'localhost' is common
266
+ endpoint: "http://localhost:8000",
267
+ // For DynamoDB Local, credentials are not typically required unless configured.
268
+ // If you've configured local credentials:
269
+ // credentials: { accessKeyId: "fakeMyKeyId", secretAccessKey: "fakeSecretAccessKey" }
270
+ },
271
+ });
272
+ ```
273
+
274
+ You will still need to create the table and GSIs in your local DynamoDB instance, for example, using the AWS CLI pointed to your local endpoint.
275
+
276
+ ## Parameters
277
+
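+ The exact parameter types ship with `@mastra/dynamodb`. A minimal sketch of the constructor shape, inferred from the usage examples above (anything not shown in those examples is an assumption):
+
+ ```typescript
+ // Hypothetical constructor shape inferred from the usage examples; not the package's exact types.
+ interface DynamoDBStoreOptions {
+   /** Unique identifier for this storage instance. */
+   id: string;
+   config: {
+     /** Name of the externally provisioned single table (see Prerequisites). */
+     tableName: string;
+     /** AWS region; the examples default to "us-east-1". */
+     region?: string;
+     /** Custom endpoint, e.g. "http://localhost:8000" for DynamoDB Local. */
+     endpoint?: string;
+     /** Static credentials; usually resolved from the environment instead. */
+     credentials?: { accessKeyId: string; secretAccessKey: string };
+     /** Per-entity TTL settings; see the TTL section below. */
+     ttl?: Record<
+       string,
+       { enabled: boolean; attributeName?: string; defaultTtlSeconds?: number }
+     >;
+   };
+ }
+ ```
+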
278
+ ## TTL (Time To Live) Configuration
279
+
280
+ DynamoDB TTL allows you to automatically delete items after a specified time period. This is useful for:
281
+
282
+ - **Cost optimization**: Automatically remove old data to reduce storage costs
283
+ - **Data lifecycle management**: Implement retention policies for compliance
284
+ - **Performance**: Prevent tables from growing indefinitely
285
+ - **Privacy compliance**: Automatically purge personal data after specified periods
286
+
287
+ ### Enabling TTL
288
+
289
+ To use TTL, you must:
290
+
291
+ 1. **Configure TTL in DynamoDBStore** (shown below)
292
+ 2. **Enable TTL on your DynamoDB table** via AWS Console or CLI, specifying the attribute name (default: `ttl`)
293
+
294
+ ```typescript
295
+ import { DynamoDBStore } from "@mastra/dynamodb";
296
+
297
+ const storage = new DynamoDBStore({
298
+ name: "dynamodb",
299
+ config: {
300
+ tableName: "mastra-single-table",
301
+ region: "us-east-1",
302
+ ttl: {
303
+ // Messages expire after 30 days
304
+ message: {
305
+ enabled: true,
306
+ defaultTtlSeconds: 30 * 24 * 60 * 60, // 30 days
307
+ },
308
+ // Threads expire after 90 days
309
+ thread: {
310
+ enabled: true,
311
+ defaultTtlSeconds: 90 * 24 * 60 * 60, // 90 days
312
+ },
313
+ // Traces expire after 7 days with custom attribute name
314
+ trace: {
315
+ enabled: true,
316
+ attributeName: "expiresAt", // Custom TTL attribute
317
+ defaultTtlSeconds: 7 * 24 * 60 * 60, // 7 days
318
+ },
319
+ // Workflow snapshots don't expire
320
+ workflow_snapshot: {
321
+ enabled: false,
322
+ },
323
+ },
324
+ },
325
+ });
326
+ ```
327
+
328
+ ### Supported Entity Types
329
+
330
+ TTL can be configured for these entity types:
331
+
332
+ | Entity | Description |
333
+ |--------|-------------|
334
+ | `thread` | Conversation threads |
335
+ | `message` | Messages within threads |
336
+ | `trace` | Observability traces |
337
+ | `eval` | Evaluation results |
338
+ | `workflow_snapshot` | Workflow state snapshots |
339
+ | `resource` | User/resource data |
340
+ | `score` | Scoring results |
341
+
342
+ ### TTL Entity Configuration
343
+
344
+ Each entity type accepts the following configuration:
345
+
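+ A minimal sketch of the per-entity options, based on the fields that appear in the configuration example above (any defaults beyond the `ttl` attribute name are assumptions):
+
+ ```typescript
+ // Hypothetical per-entity TTL shape inferred from the configuration example above.
+ interface TtlEntityConfig {
+   /** Turns TTL on or off for this entity type. */
+   enabled: boolean;
+   /** DynamoDB attribute that stores the expiry timestamp; defaults to "ttl". */
+   attributeName?: string;
+   /** Default lifetime in seconds applied to new items, e.g. 30 * 24 * 60 * 60. */
+   defaultTtlSeconds?: number;
+ }
+ ```
+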
346
+ ### Enabling TTL on Your DynamoDB Table
347
+
348
+ After configuring TTL in your code, you must enable TTL on the DynamoDB table itself:
349
+
350
+ **Using AWS CLI:**
351
+
352
+ ```bash
353
+ aws dynamodb update-time-to-live \
354
+ --table-name mastra-single-table \
355
+ --time-to-live-specification "Enabled=true, AttributeName=ttl"
356
+ ```
357
+
358
+ **Using AWS Console:**
359
+
360
+ 1. Go to the DynamoDB console
361
+ 2. Select your table
362
+ 3. Go to "Additional settings" tab
363
+ 4. Under "Time to Live (TTL)", click "Manage TTL"
364
+ 5. Enable TTL and specify the attribute name (default: `ttl`)
365
+
366
+ > **Note**: DynamoDB deletes expired items within 48 hours after expiration. Items remain queryable until actually deleted.
367
+
368
+ ## AWS IAM Permissions
369
+
370
+ The IAM role or user executing the code needs appropriate permissions to interact with the specified DynamoDB table and its indexes. Below is a sample policy; replace `${YOUR_TABLE_NAME}`, `${YOUR_AWS_REGION}`, and `${YOUR_AWS_ACCOUNT_ID}` with your table name, AWS region, and account ID.
371
+
372
+ ```json
373
+ {
374
+ "Version": "2012-10-17",
375
+ "Statement": [
376
+ {
377
+ "Effect": "Allow",
378
+ "Action": [
379
+ "dynamodb:DescribeTable",
380
+ "dynamodb:GetItem",
381
+ "dynamodb:PutItem",
382
+ "dynamodb:UpdateItem",
383
+ "dynamodb:DeleteItem",
384
+ "dynamodb:Query",
385
+ "dynamodb:Scan",
386
+ "dynamodb:BatchGetItem",
387
+ "dynamodb:BatchWriteItem"
388
+ ],
389
+ "Resource": [
390
+ "arn:aws:dynamodb:${YOUR_AWS_REGION}:${YOUR_AWS_ACCOUNT_ID}:table/${YOUR_TABLE_NAME}",
391
+ "arn:aws:dynamodb:${YOUR_AWS_REGION}:${YOUR_AWS_ACCOUNT_ID}:table/${YOUR_TABLE_NAME}/index/*"
392
+ ]
393
+ }
394
+ ]
395
+ }
396
+ ```
397
+
398
+ ## Key Considerations
399
+
400
+ Before diving into the architectural details, keep these key points in mind when working with the DynamoDB storage adapter:
401
+
402
+ - **External Table Provisioning:** This adapter _requires_ you to create and configure the DynamoDB table and its Global Secondary Indexes (GSIs) yourself, prior to using the adapter. Follow the guide in [TABLE_SETUP.md](https://github.com/mastra-ai/mastra/blob/main/stores/dynamodb/TABLE_SETUP.md).
403
+ - **Single-Table Design:** All Mastra data (threads, messages, etc.) is stored in one DynamoDB table. This is a deliberate design choice optimized for DynamoDB, differing from relational database approaches.
404
+ - **Understanding GSIs:** Familiarity with how the GSIs are structured (as per `TABLE_SETUP.md`) is important for understanding data retrieval and potential query patterns.
405
+ - **ElectroDB:** The adapter uses ElectroDB to manage interactions with DynamoDB, providing a layer of abstraction and type safety over raw DynamoDB operations.
406
+
407
+ ## Architectural Approach
408
+
409
+ This storage adapter utilizes a **single-table design pattern** leveraging [ElectroDB](https://electrodb.dev/), a common and recommended approach for DynamoDB. This differs architecturally from relational database adapters (like `@mastra/pg` or `@mastra/libsql`) that typically use multiple tables, each dedicated to a specific entity (threads, messages, etc.).
410
+
411
+ Key aspects of this approach:
412
+
413
+ - **DynamoDB Native:** The single-table design is optimized for DynamoDB's key-value and query capabilities, often leading to better performance and scalability compared to mimicking relational models.
414
+ - **External Table Management:** Unlike some adapters that might offer helper functions to create tables via code, this adapter **expects the DynamoDB table and its associated Global Secondary Indexes (GSIs) to be provisioned externally** before use. Please refer to [TABLE_SETUP.md](https://github.com/mastra-ai/mastra/blob/main/stores/dynamodb/TABLE_SETUP.md) for detailed instructions using tools like AWS CloudFormation or CDK. The adapter focuses solely on interacting with the pre-existing table structure.
415
+ - **Consistency via Interface:** While the underlying storage model differs, this adapter adheres to the same `MastraStorage` interface as other adapters, ensuring it can be used interchangeably within the Mastra `Memory` component.
416
+
417
+ ### Mastra Data in the Single Table
418
+
419
+ Within the single DynamoDB table, different Mastra data entities (such as Threads, Messages, Traces, Evals, and Workflows) are managed and distinguished using ElectroDB. ElectroDB defines specific models for each entity type, which include unique key structures and attributes. This allows the adapter to store and retrieve diverse data types efficiently within the same table.
420
+
421
+ For example, a `Thread` item might have a primary key like `THREAD#<threadId>`, while a `Message` item belonging to that thread might use `THREAD#<threadId>` as a partition key and `MESSAGE#<messageId>` as a sort key. The Global Secondary Indexes (GSIs), detailed in `TABLE_SETUP.md`, are strategically designed to support common access patterns across these different entities, such as fetching all messages for a thread or querying traces associated with a particular workflow.
422
+
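+ To make that key structure concrete, the sketch below models a message-like entity with ElectroDB. It is purely illustrative: the adapter's real entity definitions, key formats, and GSIs are internal to `@mastra/dynamodb` and documented in `TABLE_SETUP.md`, and the attribute names here are invented for the example.
+
+ ```typescript
+ import { Entity } from "electrodb";
+ import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
+
+ const client = new DynamoDBClient({ region: "us-east-1" });
+
+ // Illustrative only -- not the adapter's actual entity definition.
+ const MessageEntity = new Entity(
+   {
+     model: { entity: "message", version: "1", service: "mastra" },
+     attributes: {
+       threadId: { type: "string", required: true },
+       messageId: { type: "string", required: true },
+       content: { type: "string" },
+       createdAt: { type: "string" },
+     },
+     indexes: {
+       byThread: {
+         // Partition key derived from the thread id (conceptually "THREAD#<threadId>")
+         pk: { field: "pk", composite: ["threadId"] },
+         // Sort key derived from the message id (conceptually "MESSAGE#<messageId>")
+         sk: { field: "sk", composite: ["messageId"] },
+       },
+     },
+   },
+   { client, table: "mastra-single-table" },
+ );
+
+ // Fetch all messages for a thread with a single query against the shared table.
+ const { data } = await MessageEntity.query.byThread({ threadId: "thread-123" }).go();
+ ```
+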
423
+ ### Advantages of Single-Table Design
424
+
425
+ This implementation uses a single-table design pattern with ElectroDB, which offers several advantages within the context of DynamoDB:
426
+
427
+ 1. **Lower cost (potentially):** Fewer tables can simplify Read/Write Capacity Unit (RCU/WCU) provisioning and management, especially with on-demand capacity.
428
+ 2. **Better performance:** Related data can be co-located or accessed efficiently through GSIs, enabling fast lookups for common access patterns.
429
+ 3. **Simplified administration:** Fewer distinct tables to monitor, back up, and manage.
430
+ 4. **Reduced complexity in access patterns:** ElectroDB helps manage the complexity of item types and access patterns on a single table.
431
+ 5. **Transaction support:** DynamoDB transactions can be used across different "entity" types stored within the same table if needed.
432
+
433
+ ---
434
+
435
+ ## Reference: PostgreSQL Storage
436
+
437
+ > Documentation for the PostgreSQL storage implementation in Mastra.
438
+
439
+ The PostgreSQL storage implementation provides a production-ready storage solution using PostgreSQL databases.
440
+
441
+ ## Installation
442
+
443
+ ```bash
444
+ npm install @mastra/pg@beta
445
+ ```
446
+
447
+ ## Usage
448
+
449
+ ```typescript
450
+ import { PostgresStore } from "@mastra/pg";
451
+
452
+ const storage = new PostgresStore({
453
+ id: 'pg-storage',
454
+ connectionString: process.env.DATABASE_URL,
455
+ });
456
+ ```
457
+
458
+ ## Parameters
459
+
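+ The exact parameter types ship with `@mastra/pg`. A minimal sketch of the constructor options, inferred from the examples in this reference (the optionality and the union below are assumptions):
+
+ ```typescript
+ // Hypothetical constructor shape inferred from the constructor and index examples below.
+ type PostgresStoreOptions = {
+   /** Unique identifier for this storage instance. */
+   id: string;
+   /** Optional Postgres schema to create tables and indexes in. */
+   schemaName?: string;
+   /** Skip creation of the default composite indexes (see Index Management). */
+   skipDefaultIndexes?: boolean;
+   /** Extra indexes to create during initialization (see Index Management). */
+   indexes?: Array<{ name: string; table: string; columns: string[] }>;
+ } & (
+   | {
+       /** Full connection string, remote or local. */
+       connectionString: string;
+       /** SSL settings, e.g. { rejectUnauthorized: false } or false. */
+       ssl?: boolean | { rejectUnauthorized: boolean };
+     }
+   | {
+       /** Individual connection parameters. */
+       host: string;
+       port: number;
+       database: string;
+       user: string;
+       password: string;
+     }
+ );
+ ```
+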
460
+ ## Constructor Examples
461
+
462
+ You can instantiate `PostgresStore` in the following ways:
463
+
464
+ ```ts
465
+ import { PostgresStore } from "@mastra/pg";
466
+
467
+ // Using a connection string only
468
+ const store1 = new PostgresStore({
469
+ id: 'pg-storage-1',
470
+ connectionString: "postgresql://user:password@localhost:5432/mydb",
471
+ });
472
+
473
+ // Using a connection string with a custom schema name
474
+ const store2 = new PostgresStore({
475
+ id: 'pg-storage-2',
476
+ connectionString: "postgresql://user:password@localhost:5432/mydb",
477
+ schemaName: "custom_schema", // optional
478
+ });
479
+
480
+ // Using individual connection parameters
481
+ const store3 = new PostgresStore({
482
+ id: 'pg-storage-3',
483
+ host: "localhost",
484
+ port: 5432,
485
+ database: "mydb",
486
+ user: "user",
487
+ password: "password",
488
+ });
489
+
490
+ // Individual parameters with schemaName
491
+ const store4 = new PostgresStore({
492
+ id: 'pg-storage-4',
493
+ host: "localhost",
494
+ port: 5432,
495
+ database: "mydb",
496
+ user: "user",
497
+ password: "password",
498
+ schemaName: "custom_schema", // optional
499
+ });
500
+ ```
501
+
502
+ ## Additional Notes
503
+
504
+ ### Schema Management
505
+
506
+ The storage implementation handles schema creation and updates automatically. It creates the following tables:
507
+
508
+ - `mastra_workflow_snapshot`: Stores workflow state and execution data
509
+ - `mastra_evals`: Stores evaluation results and metadata
510
+ - `mastra_threads`: Stores conversation threads
511
+ - `mastra_messages`: Stores individual messages
512
+ - `mastra_traces`: Stores telemetry and tracing data
513
+ - `mastra_scorers`: Stores scoring and evaluation data
514
+ - `mastra_resources`: Stores resource working memory data
515
+
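+ Once `init()` has run (see Initialization below), you can verify which tables were created through the `db` accessor described under Direct Database and Pool Access. A small sketch, assuming the default `public` schema:
+
+ ```typescript
+ // List the Mastra tables present in the default schema.
+ const tables = await storage.db.any(
+   "SELECT tablename FROM pg_tables WHERE schemaname = 'public' AND tablename LIKE 'mastra_%'",
+ );
+ console.log(tables);
+ ```
+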
516
+ ### Initialization
517
+
518
+ When you pass storage to the Mastra class, `init()` is called automatically before any storage operation:
519
+
520
+ ```typescript
521
+ import { Mastra } from "@mastra/core";
522
+ import { PostgresStore } from "@mastra/pg";
523
+
524
+ const storage = new PostgresStore({
525
+ id: 'pg-storage',
526
+ connectionString: process.env.DATABASE_URL,
527
+ });
528
+
529
+ const mastra = new Mastra({
530
+ storage, // init() is called automatically
531
+ });
532
+ ```
533
+
534
+ If you're using storage directly without Mastra, you must call `init()` explicitly to create the tables:
535
+
536
+ ```typescript
537
+ import { PostgresStore } from "@mastra/pg";
538
+
539
+ const storage = new PostgresStore({
540
+ id: 'pg-storage',
541
+ connectionString: process.env.DATABASE_URL,
542
+ });
543
+
544
+ // Required when using storage directly
545
+ await storage.init();
546
+
547
+ // Access domain-specific stores via getStore()
548
+ const memoryStore = await storage.getStore('memory');
549
+ const thread = await memoryStore?.getThreadById({ threadId: "..." });
550
+ ```
551
+
552
+ > **Note:**
553
+ > If `init()` is not called, the tables won't be created and storage operations will fail.
554
+
555
+ ### Direct Database and Pool Access
556
+
557
+ `PostgresStore` exposes both the underlying database object and the pg-promise instance as public fields:
558
+
559
+ ```typescript
560
+ store.db; // pg-promise database instance
561
+ store.pgp; // pg-promise main instance
562
+ ```
563
+
564
+ This enables direct queries and custom transaction management. When using these fields:
565
+
566
+ - You are responsible for proper connection and transaction handling.
567
+ - Closing the store (`store.close()`) will destroy the associated connection pool.
568
+ - Direct access bypasses any additional logic or validation provided by PostgresStore methods.
569
+
570
+ This approach is intended for advanced scenarios where low-level access is required.
571
+
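+ For example, custom transaction management can go through pg-promise's `tx` helper on the exposed `db` object. This is a sketch only: the table names come from the schema list above, the column names are assumptions based on the default index definitions, and direct writes like this bypass the adapter's own logic.
+
+ ```typescript
+ // Delete a thread and its messages atomically using the exposed pg-promise database.
+ await storage.db.tx(async (t) => {
+   await t.none("DELETE FROM mastra_messages WHERE thread_id = $1", ["thread-123"]);
+   await t.none("DELETE FROM mastra_threads WHERE id = $1", ["thread-123"]);
+ });
+ ```
+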
572
+ ### Using with Next.js
573
+
574
+ When using `PostgresStore` in Next.js applications, [Hot Module Replacement (HMR)](https://nextjs.org/docs/architecture/fast-refresh) during development can cause multiple storage instances to be created, resulting in this warning:
575
+
576
+ ```
577
+ WARNING: Creating a duplicate database object for the same connection.
578
+ ```
579
+
580
+ To prevent this, store the `PostgresStore` instance on the global object so it persists across HMR reloads:
581
+
582
+ ```typescript title="src/mastra/storage.ts"
583
+ import { PostgresStore } from "@mastra/pg";
584
+ import { Memory } from "@mastra/memory";
585
+
586
+ // Extend the global type to include our instances
587
+ declare global {
588
+ var pgStore: PostgresStore | undefined;
589
+ var memory: Memory | undefined;
590
+ }
591
+
592
+ // Get or create the PostgresStore instance
593
+ function getPgStore(): PostgresStore {
594
+ if (!global.pgStore) {
595
+ if (!process.env.DATABASE_URL) {
596
+ throw new Error("DATABASE_URL is not defined in environment variables");
597
+ }
598
+ global.pgStore = new PostgresStore({
599
+ id: "pg-storage",
600
+ connectionString: process.env.DATABASE_URL,
601
+ ssl:
602
+ process.env.DATABASE_SSL === "true"
603
+ ? { rejectUnauthorized: false }
604
+ : false,
605
+ });
606
+ }
607
+ return global.pgStore;
608
+ }
609
+
610
+ // Get or create the Memory instance
611
+ function getMemory(): Memory {
612
+ if (!global.memory) {
613
+ global.memory = new Memory({
614
+ storage: getPgStore(),
615
+ });
616
+ }
617
+ return global.memory;
618
+ }
619
+
620
+ export const storage = getPgStore();
621
+ export const memory = getMemory();
622
+ ```
623
+
624
+ Then use the exported instances in your Mastra configuration:
625
+
626
+ ```typescript title="src/mastra/index.ts"
627
+ import { Mastra } from "@mastra/core/mastra";
628
+ import { storage } from "./storage";
629
+
630
+ export const mastra = new Mastra({
631
+ storage,
632
+ // ...other config
633
+ });
634
+ ```
635
+
636
+ This pattern ensures only one `PostgresStore` instance is created regardless of how many times the module is reloaded during development. The same pattern can be applied to other storage providers like `LibSQLStore`.
637
+
638
+ > **Note:**
639
+ > This singleton pattern is only necessary during local development with HMR. In production builds, modules are only loaded once.
640
+
641
+ ## Usage Example
642
+
643
+ ### Adding memory to an agent
644
+
645
+ To add PostgreSQL memory to an agent, use the `Memory` class and pass a `PostgresStore` instance as its `storage`. The `connectionString` can point to either a remote database or a local one.
646
+
647
+ ```typescript title="src/mastra/agents/example-pg-agent.ts"
648
+ import { Memory } from "@mastra/memory";
649
+ import { Agent } from "@mastra/core/agent";
650
+ import { PostgresStore } from "@mastra/pg";
651
+
652
+ export const pgAgent = new Agent({
653
+ id: "pg-agent",
654
+ name: "PG Agent",
655
+ instructions:
656
+ "You are an AI agent with the ability to automatically recall memories from previous interactions.",
657
+ model: "openai/gpt-5.1",
658
+ memory: new Memory({
659
+ storage: new PostgresStore({
660
+ id: 'pg-agent-storage',
661
+ connectionString: process.env.DATABASE_URL!,
662
+ }),
663
+ options: {
664
+ generateTitle: true, // Explicitly enable automatic title generation
665
+ },
666
+ }),
667
+ });
668
+ ```
669
+
670
+ ### Using the agent
671
+
672
+ Use `memoryOptions` to scope recall for this request. Set `lastMessages: 5` to limit recency-based recall, and use `semanticRecall` to fetch the `topK: 3` most relevant messages, including `messageRange: 2` neighboring messages for context around each match.
673
+
674
+ ```typescript title="src/test-pg-agent.ts"
675
+ import "dotenv/config";
676
+
677
+ import { mastra } from "./mastra";
678
+
679
+ const threadId = "123";
680
+ const resourceId = "user-456";
681
+
682
+ const agent = mastra.getAgent("pg-agent");
683
+
684
+ const message = await agent.stream("My name is Mastra", {
685
+ memory: {
686
+ thread: threadId,
687
+ resource: resourceId,
688
+ },
689
+ });
690
+
691
+ await message.textStream.pipeTo(new WritableStream());
692
+
693
+ const stream = await agent.stream("What's my name?", {
694
+ memory: {
695
+ thread: threadId,
696
+ resource: resourceId,
697
+ },
698
+ memoryOptions: {
699
+ lastMessages: 5,
700
+ semanticRecall: {
701
+ topK: 3,
702
+ messageRange: 2,
703
+ },
704
+ },
705
+ });
706
+
707
+ for await (const chunk of stream.textStream) {
708
+ process.stdout.write(chunk);
709
+ }
710
+ ```
711
+
712
+ ## Index Management
713
+
714
+ PostgreSQL storage provides index management to optimize query performance.
715
+
716
+ ### Default Indexes
717
+
718
+ PostgreSQL storage creates composite indexes during initialization for common query patterns:
719
+
720
+ - `mastra_threads_resourceid_createdat_idx`: (resourceId, createdAt DESC)
721
+ - `mastra_messages_thread_id_createdat_idx`: (thread_id, createdAt DESC)
722
+ - `mastra_ai_spans_traceid_startedat_idx`: (traceId, startedAt DESC)
723
+ - `mastra_ai_spans_parentspanid_startedat_idx`: (parentSpanId, startedAt DESC)
724
+ - `mastra_ai_spans_name_startedat_idx`: (name, startedAt DESC)
725
+ - `mastra_ai_spans_scope_startedat_idx`: (scope, startedAt DESC)
726
+ - `mastra_scores_trace_id_span_id_created_at_idx`: (traceId, spanId, createdAt DESC)
727
+
728
+ These indexes improve performance for filtered queries with sorting, including `dateRange` filters on message queries.
729
+
730
+ ### Configuring Indexes
731
+
732
+ You can control index creation via constructor options:
733
+
734
+ ```typescript
735
+ import { PostgresStore } from "@mastra/pg";
736
+
737
+ // Skip default indexes (manage indexes separately)
738
+ const store = new PostgresStore({
739
+ id: 'pg-storage',
740
+ connectionString: process.env.DATABASE_URL,
741
+ skipDefaultIndexes: true,
742
+ });
743
+
744
+ // Add custom indexes during initialization
745
+ const storeWithCustomIndexes = new PostgresStore({
746
+ id: 'pg-storage',
747
+ connectionString: process.env.DATABASE_URL,
748
+ indexes: [
749
+ {
750
+ name: "idx_threads_metadata_type",
751
+ table: "mastra_threads",
752
+ columns: ["metadata->>'type'"],
753
+ },
754
+ {
755
+ name: "idx_messages_status",
756
+ table: "mastra_messages",
757
+ columns: ["metadata->>'status'"],
758
+ },
759
+ ],
760
+ });
761
+ ```
762
+
763
+ For advanced index types, you can specify additional options:
764
+
765
+ - `unique: true` for unique constraints
766
+ - `where: 'condition'` for partial indexes
767
+ - `method: 'brin'` for time-series data
768
+ - `storage: { fillfactor: 90 }` for update-heavy tables
769
+ - `concurrent: true` for non-blocking creation (default)
770
+
771
+ ### Index Options
772
+
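+ A sketch of a single index definition, combining the fields used in the examples with the advanced options listed above (treat the exact types as an approximation of what `@mastra/pg` accepts):
+
+ ```typescript
+ // Hypothetical index-definition shape based on the examples and options above.
+ interface IndexConfig {
+   /** Index name; prefixed with the schema name when schemaName is set. */
+   name: string;
+   /** Target table, e.g. "mastra_messages". */
+   table: string;
+   /** Column expressions, e.g. ["thread_id"] or ["metadata->>'status'"]. */
+   columns: string[];
+   /** Create a unique index. */
+   unique?: boolean;
+   /** Condition for a partial index. */
+   where?: string;
+   /** Index method: "btree" (default), "hash", "gin", "gist", "spgist", or "brin". */
+   method?: string;
+   /** Storage parameters, e.g. { fillfactor: 90 } for update-heavy tables. */
+   storage?: Record<string, number | string>;
+   /** Create the index without blocking writes (default: true). */
+   concurrent?: boolean;
+ }
+ ```
+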
773
+ ### Schema-Specific Indexes
774
+
775
+ When using custom schemas, index names are prefixed with the schema name:
776
+
777
+ ```typescript
778
+ const storage = new PostgresStore({
779
+ id: 'pg-storage',
780
+ connectionString: process.env.DATABASE_URL,
781
+ schemaName: "custom_schema",
782
+ indexes: [
783
+ {
784
+ name: "idx_threads_status",
785
+ table: "mastra_threads",
786
+ columns: ["status"],
787
+ },
788
+ ],
789
+ });
790
+
791
+ // Creates index as: custom_schema_idx_threads_status
792
+ ```
793
+
794
+ ### Managing Indexes via SQL
795
+
796
+ For advanced index management (listing, dropping, analyzing), use direct SQL queries via the `db` accessor:
797
+
798
+ ```typescript
799
+ // List indexes for a table
800
+ const indexes = await storage.db.any(`
801
+ SELECT indexname, indexdef
802
+ FROM pg_indexes
803
+ WHERE tablename = 'mastra_messages'
804
+ `);
805
+
806
+ // Drop an index
807
+ await storage.db.none('DROP INDEX IF EXISTS idx_my_custom_index');
808
+
809
+ // Analyze index usage
810
+ const stats = await storage.db.one(`
811
+ SELECT idx_scan, idx_tup_read
812
+ FROM pg_stat_user_indexes
813
+ WHERE indexrelname = 'mastra_messages_thread_id_createdat_idx'
814
+ `);
815
+ ```
816
+
817
+ ### Index Types and Use Cases
818
+
819
+ PostgreSQL offers different index types optimized for specific scenarios:
820
+
821
+ | Index Type | Best For | Storage | Speed |
822
+ | ------------------- | --------------------------------------- | ---------- | -------------------------- |
823
+ | **btree** (default) | Range queries, sorting, general purpose | Moderate | Fast |
824
+ | **hash** | Equality comparisons only | Small | Very fast for `=` |
825
+ | **gin** | JSONB, arrays, full-text search | Large | Fast for contains |
826
+ | **gist** | Geometric data, full-text search | Moderate | Fast for nearest-neighbor |
827
+ | **spgist** | Non-balanced data, text patterns | Small | Fast for specific patterns |
828
+ | **brin** | Large tables with natural ordering | Very small | Fast for ranges |
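+
+ For example, the table above points to a GIN index for JSONB filtering and a BRIN index for large time-ordered tables. A sketch using the `indexes` option (the index names are hypothetical, and the column names are taken from the examples and default indexes earlier in this reference):
+
+ ```typescript
+ import { PostgresStore } from "@mastra/pg";
+
+ const storage = new PostgresStore({
+   id: "pg-storage",
+   connectionString: process.env.DATABASE_URL,
+   indexes: [
+     {
+       // JSONB containment and key lookups over thread metadata
+       name: "idx_threads_metadata_gin",
+       table: "mastra_threads",
+       columns: ["metadata"],
+       method: "gin",
+     },
+     {
+       // Compact index for time-ordered message data
+       name: "idx_messages_createdat_brin",
+       table: "mastra_messages",
+       columns: ["createdAt"],
+       method: "brin",
+     },
+   ],
+ });
+ ```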