@mastra/pg 1.0.0-beta.9 → 1.1.0-alpha.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (43)
  1. package/CHANGELOG.md +1481 -0
  2. package/dist/docs/README.md +36 -0
  3. package/dist/docs/SKILL.md +37 -0
  4. package/dist/docs/SOURCE_MAP.json +6 -0
  5. package/dist/docs/memory/01-storage.md +261 -0
  6. package/dist/docs/memory/02-working-memory.md +411 -0
  7. package/dist/docs/memory/03-semantic-recall.md +256 -0
  8. package/dist/docs/memory/04-reference.md +133 -0
  9. package/dist/docs/processors/01-reference.md +296 -0
  10. package/dist/docs/rag/01-overview.md +74 -0
  11. package/dist/docs/rag/02-vector-databases.md +643 -0
  12. package/dist/docs/rag/03-retrieval.md +548 -0
  13. package/dist/docs/rag/04-reference.md +369 -0
  14. package/dist/docs/storage/01-reference.md +905 -0
  15. package/dist/docs/tools/01-reference.md +440 -0
  16. package/dist/docs/vectors/01-reference.md +307 -0
  17. package/dist/index.cjs +1293 -260
  18. package/dist/index.cjs.map +1 -1
  19. package/dist/index.d.ts +1 -1
  20. package/dist/index.d.ts.map +1 -1
  21. package/dist/index.js +1290 -262
  22. package/dist/index.js.map +1 -1
  23. package/dist/shared/config.d.ts +61 -66
  24. package/dist/shared/config.d.ts.map +1 -1
  25. package/dist/storage/client.d.ts +91 -0
  26. package/dist/storage/client.d.ts.map +1 -0
  27. package/dist/storage/db/index.d.ts +82 -17
  28. package/dist/storage/db/index.d.ts.map +1 -1
  29. package/dist/storage/domains/agents/index.d.ts +11 -1
  30. package/dist/storage/domains/agents/index.d.ts.map +1 -1
  31. package/dist/storage/domains/memory/index.d.ts +3 -2
  32. package/dist/storage/domains/memory/index.d.ts.map +1 -1
  33. package/dist/storage/domains/observability/index.d.ts +24 -1
  34. package/dist/storage/domains/observability/index.d.ts.map +1 -1
  35. package/dist/storage/domains/scores/index.d.ts.map +1 -1
  36. package/dist/storage/domains/workflows/index.d.ts +1 -0
  37. package/dist/storage/domains/workflows/index.d.ts.map +1 -1
  38. package/dist/storage/index.d.ts +44 -17
  39. package/dist/storage/index.d.ts.map +1 -1
  40. package/dist/storage/test-utils.d.ts.map +1 -1
  41. package/dist/vector/index.d.ts.map +1 -1
  42. package/dist/vector/sql-builder.d.ts.map +1 -1
  43. package/package.json +14 -14
@@ -0,0 +1,905 @@
1
+ # Storage API Reference
2
+
3
+ > API reference for storage - 3 entries
4
+
5
+
6
+ ---
7
+
8
+ ## Reference: Composite Storage
9
+
10
+ > Documentation for combining multiple storage backends in Mastra.
11
+
12
+ `MastraCompositeStore` can compose storage domains from different providers. Use it when you need different databases for different purposes. For example, use LibSQL for memory and PostgreSQL for workflows.
13
+
14
+ ## Installation
15
+
16
+ `MastraCompositeStore` is included in `@mastra/core`:
17
+
18
+ ```bash npm2yarn
19
+ npm install @mastra/core@latest
20
+ ```
21
+
22
+ You'll also need to install the storage providers you want to compose:
23
+
24
+ ```bash npm2yarn
25
+ npm install @mastra/pg@latest @mastra/libsql@latest
26
+ ```
27
+
28
+ ## Storage domains
29
+
30
+ Mastra organizes storage into five specialized domains, each handling a specific type of data. Each domain can be backed by a different storage adapter, and domain classes are exported from each storage package.
31
+
32
+ | Domain | Description |
33
+ |--------|-------------|
34
+ | `memory` | Conversation persistence for agents. Stores threads (conversation sessions), messages, resources (user identities), and working memory (persistent context across conversations). |
35
+ | `workflows` | Workflow execution state. When workflows suspend for human input, external events, or scheduled resumption, their state is persisted here to enable resumption after server restarts. |
36
+ | `scores` | Evaluation results from Mastra's evals system. Scores and metrics are persisted here for analysis and comparison over time. |
37
+ | `observability` | Telemetry data including traces and spans. Agent interactions, tool calls, and LLM requests generate spans collected into traces for debugging and performance analysis. |
38
+ | `agents` | Agent configurations for stored agents. Enables agents to be defined and updated at runtime without code deployments. |
39
+
40
+ ## Usage
41
+
42
+ ### Basic composition
43
+
44
+ Import domain classes directly from each store package and compose them:
45
+
46
+ ```typescript title="src/mastra/index.ts"
47
+ import { MastraCompositeStore } from "@mastra/core/storage";
48
+ import { WorkflowsPG, ScoresPG } from "@mastra/pg";
49
+ import { MemoryLibSQL } from "@mastra/libsql";
50
+ import { Mastra } from "@mastra/core";
51
+
52
+ export const mastra = new Mastra({
53
+ storage: new MastraCompositeStore({
54
+ id: "composite",
55
+ domains: {
56
+ memory: new MemoryLibSQL({ url: "file:./local.db" }),
57
+ workflows: new WorkflowsPG({ connectionString: process.env.DATABASE_URL }),
58
+ scores: new ScoresPG({ connectionString: process.env.DATABASE_URL }),
59
+ },
60
+ }),
61
+ });
62
+ ```
63
+
64
+ ### With a default storage
65
+
66
+ Use `default` to specify a fallback storage, then override specific domains:
67
+
68
+ ```typescript title="src/mastra/index.ts"
69
+ import { MastraCompositeStore } from "@mastra/core/storage";
70
+ import { PostgresStore } from "@mastra/pg";
71
+ import { MemoryLibSQL } from "@mastra/libsql";
72
+ import { Mastra } from "@mastra/core";
73
+
74
+ const pgStore = new PostgresStore({
75
+ id: "pg",
76
+ connectionString: process.env.DATABASE_URL,
77
+ });
78
+
79
+ export const mastra = new Mastra({
80
+ storage: new MastraCompositeStore({
81
+ id: "composite",
82
+ default: pgStore,
83
+ domains: {
84
+ memory: new MemoryLibSQL({ url: "file:./local.db" }),
85
+ },
86
+ }),
87
+ });
88
+ ```
89
+
90
+ ## Options
91
+
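The options table from the original page is not included in this rendering. As a rough sketch inferred from the usage examples above (names follow those snippets, not the package's exported types), the constructor accepts roughly:

```typescript
// Sketch of the MastraCompositeStore options exercised in the examples above.
// The real exported types may differ; treat this as illustrative only.
interface CompositeStoreOptionsSketch {
  id: string; // identifier for this storage instance, e.g. "composite"
  domains: {
    memory?: unknown; // e.g. a MemoryLibSQL or MemoryPG instance
    workflows?: unknown; // e.g. a WorkflowsPG instance
    scores?: unknown; // e.g. a ScoresPG instance
    observability?: unknown; // e.g. an ObservabilityStorageClickhouse instance
    agents?: unknown; // store for runtime agent configurations
  };
  default?: unknown; // optional fallback store, e.g. a PostgresStore
}
```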
92
+ ## Initialization
93
+
94
+ `MastraCompositeStore` initializes each configured domain independently. When passed to the Mastra class, `init()` is called automatically:
95
+
96
+ ```typescript title="src/mastra/index.ts"
97
+ import { MastraCompositeStore } from "@mastra/core/storage";
98
+ import { MemoryPG, WorkflowsPG, ScoresPG } from "@mastra/pg";
99
+ import { Mastra } from "@mastra/core";
100
+
101
+ const storage = new MastraCompositeStore({
102
+ id: "composite",
103
+ domains: {
104
+ memory: new MemoryPG({ connectionString: process.env.DATABASE_URL }),
105
+ workflows: new WorkflowsPG({ connectionString: process.env.DATABASE_URL }),
106
+ scores: new ScoresPG({ connectionString: process.env.DATABASE_URL }),
107
+ },
108
+ });
109
+
110
+ export const mastra = new Mastra({
111
+ storage, // init() called automatically
112
+ });
113
+ ```
114
+
115
+ If using storage directly, call `init()` explicitly:
116
+
117
+ ```typescript
118
+ import { MastraCompositeStore } from "@mastra/core/storage";
119
+ import { MemoryPG } from "@mastra/pg";
120
+
121
+ const storage = new MastraCompositeStore({
122
+ id: "composite",
123
+ domains: {
124
+ memory: new MemoryPG({ connectionString: process.env.DATABASE_URL }),
125
+ },
126
+ });
127
+
128
+ await storage.init();
129
+
130
+ // Access domain-specific stores via getStore()
131
+ const memoryStore = await storage.getStore("memory");
132
+ const thread = await memoryStore?.getThreadById({ threadId: "..." });
133
+ ```
134
+
135
+ ## Use cases
136
+
137
+ ### Separate databases for different workloads
138
+
139
+ Use a local database for development while keeping production data in a managed service:
140
+
141
+ ```typescript
142
+ import { MastraCompositeStore } from "@mastra/core/storage";
143
+ import { MemoryPG, WorkflowsPG, ScoresPG } from "@mastra/pg";
144
+ import { MemoryLibSQL } from "@mastra/libsql";
145
+
146
+ const storage = new MastraCompositeStore({
147
+ id: "composite",
148
+ domains: {
149
+ // Use local SQLite for development, PostgreSQL for production
150
+ memory:
151
+ process.env.NODE_ENV === "development"
152
+ ? new MemoryLibSQL({ url: "file:./dev.db" })
153
+ : new MemoryPG({ connectionString: process.env.DATABASE_URL }),
154
+ workflows: new WorkflowsPG({ connectionString: process.env.DATABASE_URL }),
155
+ scores: new ScoresPG({ connectionString: process.env.DATABASE_URL }),
156
+ },
157
+ });
158
+ ```
159
+
160
+ ### Specialized storage for observability
161
+
162
+ Observability data can quickly overwhelm general-purpose databases in production. A single agent interaction can generate hundreds of spans, and high-traffic applications can produce thousands of traces per day.
163
+
164
+ **ClickHouse** is recommended for production observability because it's optimized for high-volume, write-heavy analytics workloads. Use composite storage to route observability to ClickHouse while keeping other data in your primary database:
165
+
166
+ ```typescript
167
+ import { MastraCompositeStore } from "@mastra/core/storage";
168
+ import { MemoryPG, WorkflowsPG, ScoresPG } from "@mastra/pg";
169
+ import { ObservabilityStorageClickhouse } from "@mastra/clickhouse";
170
+
171
+ const storage = new MastraCompositeStore({
172
+ id: "composite",
173
+ domains: {
174
+ memory: new MemoryPG({ connectionString: process.env.DATABASE_URL }),
175
+ workflows: new WorkflowsPG({ connectionString: process.env.DATABASE_URL }),
176
+ scores: new ScoresPG({ connectionString: process.env.DATABASE_URL }),
177
+ observability: new ObservabilityStorageClickhouse({
178
+ url: process.env.CLICKHOUSE_URL,
179
+ username: process.env.CLICKHOUSE_USERNAME,
180
+ password: process.env.CLICKHOUSE_PASSWORD,
181
+ }),
182
+ },
183
+ });
184
+ ```
185
+
186
+ > **Note:**
187
+ >
188
+ > This approach is also required when using storage providers that don't support observability (like Convex, DynamoDB, or Cloudflare). See the [DefaultExporter documentation](https://mastra.ai/docs/observability/tracing/exporters/default#storage-provider-support) for the full list of supported providers.
189
+
190
+ ---
191
+
192
+ ## Reference: DynamoDB Storage
193
+
194
+ > Documentation for the DynamoDB storage implementation in Mastra, using a single-table design with ElectroDB.
195
+
196
+ The DynamoDB storage implementation provides a scalable and performant NoSQL database solution for Mastra, leveraging a single-table design pattern with [ElectroDB](https://electrodb.dev/).
197
+
198
+ > **Observability Not Supported**
199
+ DynamoDB storage **does not support the observability domain**. Traces from the `DefaultExporter` cannot be persisted to DynamoDB, and Mastra Studio's observability features won't work with DynamoDB as your only storage provider. To enable observability, use [composite storage](https://mastra.ai/reference/storage/composite#specialized-storage-for-observability) to route observability data to a supported provider like ClickHouse or PostgreSQL.
200
+
201
+ > **Item Size Limit**
202
+ DynamoDB enforces a **400 KB maximum item size**. This limit can be exceeded when storing messages with base64-encoded attachments such as images. See [Handling large attachments](https://mastra.ai/docs/memory/storage#handling-large-attachments) for workarounds including uploading attachments to external storage.
203
+
204
+ ## Features
205
+
206
+ - Efficient single-table design for all Mastra storage needs
207
+ - Based on ElectroDB for type-safe DynamoDB access
208
+ - Support for AWS credentials, regions, and endpoints
209
+ - Compatible with AWS DynamoDB Local for development
210
+ - Stores Thread, Message, Eval, and Workflow data
211
+ - Optimized for serverless environments
212
+ - Configurable TTL (Time To Live) for automatic data expiration per entity type
213
+
214
+ ## Installation
215
+
216
+ ```bash npm2yarn
217
+ npm install @mastra/dynamodb@latest
218
+ ```
219
+
220
+ ## Prerequisites
221
+
222
+ Before using this package, you **must** create a DynamoDB table with a specific structure, including primary keys and Global Secondary Indexes (GSIs). This adapter expects the DynamoDB table and its GSIs to be provisioned externally.
223
+
224
+ Detailed instructions for setting up the table using AWS CloudFormation or AWS CDK are available in [TABLE_SETUP.md](https://github.com/mastra-ai/mastra/blob/main/stores/dynamodb/TABLE_SETUP.md). Please ensure your table is configured according to those instructions before proceeding.
225
+
226
+ ## Usage
227
+
228
+ ### Basic Usage
229
+
230
+ ```typescript
231
+ import { Memory } from "@mastra/memory";
232
+ import { DynamoDBStore } from "@mastra/dynamodb";
233
+
234
+ // Initialize the DynamoDB storage
235
+ const storage = new DynamoDBStore({
236
+ id: "dynamodb", // Unique identifier for this storage instance
237
+ config: {
238
+ tableName: "mastra-single-table", // Name of your DynamoDB table
239
+ region: "us-east-1", // Optional: AWS region, defaults to 'us-east-1'
240
+ // endpoint: "http://localhost:8000", // Optional: For local DynamoDB
241
+ // credentials: { accessKeyId: "YOUR_ACCESS_KEY", secretAccessKey: "YOUR_SECRET_KEY" } // Optional
242
+ },
243
+ });
244
+
245
+ // Example: Initialize Memory with DynamoDB storage
246
+ const memory = new Memory({
247
+ storage,
248
+ options: {
249
+ lastMessages: 10,
250
+ },
251
+ });
252
+ ```
253
+
254
+ ### Local Development with DynamoDB Local
255
+
256
+ For local development, you can use [DynamoDB Local](https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DynamoDBLocal.html).
257
+
258
+ 1. **Run DynamoDB Local (e.g., using Docker):**
259
+
260
+ ```bash
261
+ docker run -p 8000:8000 amazon/dynamodb-local
262
+ ```
263
+
264
+ 2. **Configure `DynamoDBStore` to use the local endpoint:**
265
+
266
+ ```typescript
267
+ import { DynamoDBStore } from "@mastra/dynamodb";
268
+
269
+ const storage = new DynamoDBStore({
270
+ id: "dynamodb-local",
271
+ config: {
272
+ tableName: "mastra-single-table", // Ensure this table is created in your local DynamoDB
273
+ region: "localhost", // Can be any string for local, 'localhost' is common
274
+ endpoint: "http://localhost:8000",
275
+ // For DynamoDB Local, credentials are not typically required unless configured.
276
+ // If you've configured local credentials:
277
+ // credentials: { accessKeyId: "fakeMyKeyId", secretAccessKey: "fakeSecretAccessKey" }
278
+ },
279
+ });
280
+ ```
281
+
282
+ You will still need to create the table and GSIs in your local DynamoDB instance, for example, using the AWS CLI pointed to your local endpoint.
283
+
284
+ ## Parameters
285
+
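The parameters table from the original page is not reproduced in this rendering. As a stand-in, here is a sketch of the fields exercised by the examples in this reference (names follow those snippets; the package's published types are authoritative):

```typescript
// Sketch of the DynamoDBStore options as used in the examples above.
interface DynamoDBStoreOptionsSketch {
  id: string; // unique identifier for this storage instance
  config: {
    tableName: string; // name of the externally provisioned single table
    region?: string; // AWS region; the example notes it defaults to "us-east-1"
    endpoint?: string; // e.g. "http://localhost:8000" for DynamoDB Local
    credentials?: {
      accessKeyId: string;
      secretAccessKey: string;
    };
    ttl?: Record<string, unknown>; // per-entity TTL settings, see the TTL section below
  };
}
```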
286
+ ## TTL (Time To Live) Configuration
287
+
288
+ DynamoDB TTL allows you to automatically delete items after a specified time period. This is useful for:
289
+
290
+ - **Cost optimization**: Automatically remove old data to reduce storage costs
291
+ - **Data lifecycle management**: Implement retention policies for compliance
292
+ - **Performance**: Prevent tables from growing indefinitely
293
+ - **Privacy compliance**: Automatically purge personal data after specified periods
294
+
295
+ ### Enabling TTL
296
+
297
+ To use TTL, you must:
298
+
299
+ 1. **Configure TTL in DynamoDBStore** (shown below)
300
+ 2. **Enable TTL on your DynamoDB table** via AWS Console or CLI, specifying the attribute name (default: `ttl`)
301
+
302
+ ```typescript
303
+ import { DynamoDBStore } from "@mastra/dynamodb";
304
+
305
+ const storage = new DynamoDBStore({
306
+ id: "dynamodb",
307
+ config: {
308
+ tableName: "mastra-single-table",
309
+ region: "us-east-1",
310
+ ttl: {
311
+ // Messages expire after 30 days
312
+ message: {
313
+ enabled: true,
314
+ defaultTtlSeconds: 30 * 24 * 60 * 60, // 30 days
315
+ },
316
+ // Threads expire after 90 days
317
+ thread: {
318
+ enabled: true,
319
+ defaultTtlSeconds: 90 * 24 * 60 * 60, // 90 days
320
+ },
321
+ // Traces expire after 7 days with custom attribute name
322
+ trace: {
323
+ enabled: true,
324
+ attributeName: "expiresAt", // Custom TTL attribute
325
+ defaultTtlSeconds: 7 * 24 * 60 * 60, // 7 days
326
+ },
327
+ // Workflow snapshots don't expire
328
+ workflow_snapshot: {
329
+ enabled: false,
330
+ },
331
+ },
332
+ },
333
+ });
334
+ ```
335
+
336
+ ### Supported Entity Types
337
+
338
+ TTL can be configured for these entity types:
339
+
340
+ | Entity | Description |
341
+ |--------|-------------|
342
+ | `thread` | Conversation threads |
343
+ | `message` | Messages within threads |
344
+ | `trace` | Observability traces |
345
+ | `eval` | Evaluation results |
346
+ | `workflow_snapshot` | Workflow state snapshots |
347
+ | `resource` | User/resource data |
348
+ | `score` | Scoring results |
349
+
350
+ ### TTL Entity Configuration
351
+
352
+ Each entity type accepts the following configuration:
353
+
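The configuration table from the original page is not reproduced here. Inferred from the TTL example above (and only from that example), each entity entry looks roughly like:

```typescript
// Sketch of a per-entity TTL entry as exercised in the example above.
interface TtlEntityConfigSketch {
  enabled: boolean; // turn TTL on or off for this entity type
  attributeName?: string; // item attribute holding the expiry timestamp, defaults to "ttl"
  defaultTtlSeconds?: number; // lifetime for new items, e.g. 30 * 24 * 60 * 60 for 30 days
}
```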
354
+ ### Enabling TTL on Your DynamoDB Table
355
+
356
+ After configuring TTL in your code, you must enable TTL on the DynamoDB table itself:
357
+
358
+ **Using AWS CLI:**
359
+
360
+ ```bash
361
+ aws dynamodb update-time-to-live \
362
+ --table-name mastra-single-table \
363
+ --time-to-live-specification "Enabled=true, AttributeName=ttl"
364
+ ```
365
+
366
+ **Using AWS Console:**
367
+
368
+ 1. Go to the DynamoDB console
369
+ 2. Select your table
370
+ 3. Go to "Additional settings" tab
371
+ 4. Under "Time to Live (TTL)", click "Manage TTL"
372
+ 5. Enable TTL and specify the attribute name (default: `ttl`)
373
+
374
+ > **Note**: DynamoDB deletes expired items within 48 hours after expiration. Items remain queryable until actually deleted.
375
+
376
+ ## AWS IAM Permissions
377
+
378
+ The IAM role or user executing the code needs appropriate permissions to interact with the specified DynamoDB table and its indexes. Below is a sample policy. Replace `${YOUR_TABLE_NAME}` with your actual table name and `${YOUR_AWS_REGION}` and `${YOUR_AWS_ACCOUNT_ID}` with appropriate values.
379
+
380
+ ```json
381
+ {
382
+ "Version": "2012-10-17",
383
+ "Statement": [
384
+ {
385
+ "Effect": "Allow",
386
+ "Action": [
387
+ "dynamodb:DescribeTable",
388
+ "dynamodb:GetItem",
389
+ "dynamodb:PutItem",
390
+ "dynamodb:UpdateItem",
391
+ "dynamodb:DeleteItem",
392
+ "dynamodb:Query",
393
+ "dynamodb:Scan",
394
+ "dynamodb:BatchGetItem",
395
+ "dynamodb:BatchWriteItem"
396
+ ],
397
+ "Resource": [
398
+ "arn:aws:dynamodb:${YOUR_AWS_REGION}:${YOUR_AWS_ACCOUNT_ID}:table/${YOUR_TABLE_NAME}",
399
+ "arn:aws:dynamodb:${YOUR_AWS_REGION}:${YOUR_AWS_ACCOUNT_ID}:table/${YOUR_TABLE_NAME}/index/*"
400
+ ]
401
+ }
402
+ ]
403
+ }
404
+ ```
405
+
406
+ ## Key Considerations
407
+
408
+ Before diving into the architectural details, keep these key points in mind when working with the DynamoDB storage adapter:
409
+
410
+ - **External Table Provisioning:** This adapter _requires_ you to create and configure the DynamoDB table and its Global Secondary Indexes (GSIs) yourself, prior to using the adapter. Follow the guide in [TABLE_SETUP.md](https://github.com/mastra-ai/mastra/blob/main/stores/dynamodb/TABLE_SETUP.md).
411
+ - **Single-Table Design:** All Mastra data (threads, messages, etc.) is stored in one DynamoDB table. This is a deliberate design choice optimized for DynamoDB, differing from relational database approaches.
412
+ - **Understanding GSIs:** Familiarity with how the GSIs are structured (as per `TABLE_SETUP.md`) is important for understanding data retrieval and potential query patterns.
413
+ - **ElectroDB:** The adapter uses ElectroDB to manage interactions with DynamoDB, providing a layer of abstraction and type safety over raw DynamoDB operations.
414
+
415
+ ## Architectural Approach
416
+
417
+ This storage adapter uses a **single-table design pattern** built on [ElectroDB](https://electrodb.dev/), a common and recommended approach for DynamoDB. This differs architecturally from relational database adapters (like `@mastra/pg` or `@mastra/libsql`) that typically use multiple tables, each dedicated to a specific entity (threads, messages, etc.).
418
+
419
+ Key aspects of this approach:
420
+
421
+ - **DynamoDB Native:** The single-table design is optimized for DynamoDB's key-value and query capabilities, often leading to better performance and scalability compared to mimicking relational models.
422
+ - **External Table Management:** Unlike some adapters that might offer helper functions to create tables via code, this adapter **expects the DynamoDB table and its associated Global Secondary Indexes (GSIs) to be provisioned externally** before use. Please refer to [TABLE_SETUP.md](https://github.com/mastra-ai/mastra/blob/main/stores/dynamodb/TABLE_SETUP.md) for detailed instructions using tools like AWS CloudFormation or CDK. The adapter focuses solely on interacting with the pre-existing table structure.
423
+ - **Consistency via Interface:** While the underlying storage model differs, this adapter adheres to the same `MastraStorage` interface as other adapters, ensuring it can be used interchangeably within the Mastra `Memory` component.
424
+
425
+ ### Mastra Data in the Single Table
426
+
427
+ Within the single DynamoDB table, different Mastra data entities (such as Threads, Messages, Traces, Evals, and Workflows) are managed and distinguished using ElectroDB. ElectroDB defines specific models for each entity type, which include unique key structures and attributes. This allows the adapter to store and retrieve diverse data types efficiently within the same table.
428
+
429
+ For example, a `Thread` item might have a primary key like `THREAD#<threadId>`, while a `Message` item belonging to that thread might use `THREAD#<threadId>` as a partition key and `MESSAGE#<messageId>` as a sort key. The Global Secondary Indexes (GSIs), detailed in `TABLE_SETUP.md`, are strategically designed to support common access patterns across these different entities, such as fetching all messages for a thread or querying traces associated with a particular workflow.
430
+
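To make that keying concrete, the following is a deliberately simplified, hypothetical ElectroDB entity. It is not the adapter's actual model; the attribute and access-pattern names are invented here purely to illustrate how composite keys scope messages to a thread on one table.

```typescript
import { Entity } from "electrodb";

// Hypothetical entity sketch: messages keyed under their thread so that a
// single query on the partition key returns every message in that thread.
const MessageExample = new Entity({
  model: { entity: "message", version: "1", service: "mastra" },
  attributes: {
    threadId: { type: "string", required: true },
    messageId: { type: "string", required: true },
    content: { type: "string" },
  },
  indexes: {
    byThread: {
      pk: { field: "pk", composite: ["threadId"] }, // partition key derived from the thread id
      sk: { field: "sk", composite: ["messageId"] }, // sort key derived from the message id
    },
  },
});
```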
431
+ ### Advantages of Single-Table Design
432
+
433
+ This implementation uses a single-table design pattern with ElectroDB, which offers several advantages within the context of DynamoDB:
434
+
435
+ 1. **Lower cost (potentially):** Fewer tables can simplify Read/Write Capacity Unit (RCU/WCU) provisioning and management, especially with on-demand capacity.
436
+ 2. **Better performance:** Related data can be co-located or accessed efficiently through GSIs, enabling fast lookups for common access patterns.
437
+ 3. **Simplified administration:** Fewer distinct tables to monitor, back up, and manage.
438
+ 4. **Reduced complexity in access patterns:** ElectroDB helps manage the complexity of item types and access patterns on a single table.
439
+ 5. **Transaction support:** DynamoDB transactions can be used across different "entity" types stored within the same table if needed.
440
+
441
+ ---
442
+
443
+ ## Reference: PostgreSQL Storage
444
+
445
+ > Documentation for the PostgreSQL storage implementation in Mastra.
446
+
447
+ The PostgreSQL storage implementation provides a production-ready storage solution backed by PostgreSQL databases.
448
+
449
+ ## Installation
450
+
451
+ ```bash npm2yarn
452
+ npm install @mastra/pg@latest
453
+ ```
454
+
455
+ ## Usage
456
+
457
+ ```typescript
458
+ import { PostgresStore } from "@mastra/pg";
459
+
460
+ const storage = new PostgresStore({
461
+ id: 'pg-storage',
462
+ connectionString: process.env.DATABASE_URL,
463
+ });
464
+ ```
465
+
466
+ ## Parameters
467
+
468
+ ## Constructor Examples
469
+
470
+ You can instantiate `PostgresStore` in the following ways:
471
+
472
+ ```ts
473
+ import { PostgresStore } from "@mastra/pg";
474
+ import { Pool } from "pg";
475
+
476
+ // Using a connection string
477
+ const store1 = new PostgresStore({
478
+ id: 'pg-storage-1',
479
+ connectionString: "postgresql://user:password@localhost:5432/mydb",
480
+ });
481
+
482
+ // Using a connection string with pool options
483
+ const store2 = new PostgresStore({
484
+ id: 'pg-storage-2',
485
+ connectionString: "postgresql://user:password@localhost:5432/mydb",
486
+ schemaName: "custom_schema",
487
+ max: 30, // Max pool connections
488
+ idleTimeoutMillis: 60000, // Idle timeout
489
+ ssl: { rejectUnauthorized: false },
490
+ });
491
+
492
+ // Using individual connection parameters
493
+ const store3 = new PostgresStore({
494
+ id: 'pg-storage-3',
495
+ host: "localhost",
496
+ port: 5432,
497
+ database: "mydb",
498
+ user: "user",
499
+ password: "password",
500
+ });
501
+
502
+ // Using a pre-configured pg.Pool (recommended for pool reuse)
503
+ const existingPool = new Pool({
504
+ connectionString: "postgresql://user:password@localhost:5432/mydb",
505
+ max: 20,
506
+ // ... your custom pool configuration
507
+ });
508
+
509
+ const store4 = new PostgresStore({
510
+ id: 'pg-storage-4',
511
+ pool: existingPool,
512
+ schemaName: "custom_schema", // optional
513
+ });
514
+ ```
515
+
516
+ ## Additional Notes
517
+
518
+ ### Schema Management
519
+
520
+ The storage implementation handles schema creation and updates automatically. It creates the following tables:
521
+
522
+ - `mastra_workflow_snapshot`: Stores workflow state and execution data
523
+ - `mastra_evals`: Stores evaluation results and metadata
524
+ - `mastra_threads`: Stores conversation threads
525
+ - `mastra_messages`: Stores individual messages
526
+ - `mastra_traces`: Stores telemetry and tracing data
527
+ - `mastra_scorers`: Stores scoring and evaluation data
528
+ - `mastra_resources`: Stores resource working memory data
529
+
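If you want to confirm what was created, one option is to query `information_schema` through the `store.db` helper described under "Direct Database and Pool Access" below (a sketch; only standard SQL and the `db.any` helper shown later in this reference are assumed):

```typescript
// List the Mastra-created tables in the current schema.
const tables = await storage.db.any(
  `SELECT table_name
   FROM information_schema.tables
   WHERE table_schema = current_schema()
     AND table_name LIKE 'mastra_%'`,
);
console.log(tables.map((t) => t.table_name));
```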
530
+ ### Observability
531
+
532
+ PostgreSQL supports observability and can handle low trace volumes. Throughput capacity depends on deployment factors such as hardware, schema design, indexing, and retention policies, and should be validated for your specific environment. For high-volume production environments, consider:
533
+
534
+ - Using the `insert-only` [tracing strategy](https://mastra.ai/docs/observability/tracing/exporters/default#tracing-strategies) to reduce database write operations
535
+ - Setting up table partitioning for efficient data retention
536
+ - Migrating observability to [ClickHouse via composite storage](https://mastra.ai/reference/storage/composite#specialized-storage-for-observability) if you need to scale further
537
+
538
+ ### Initialization
539
+
540
+ When you pass storage to the Mastra class, `init()` is called automatically before any storage operation:
541
+
542
+ ```typescript
543
+ import { Mastra } from "@mastra/core";
544
+ import { PostgresStore } from "@mastra/pg";
545
+
546
+ const storage = new PostgresStore({
547
+ id: 'pg-storage',
548
+ connectionString: process.env.DATABASE_URL,
549
+ });
550
+
551
+ const mastra = new Mastra({
552
+ storage, // init() is called automatically
553
+ });
554
+ ```
555
+
556
+ If you're using storage directly without Mastra, you must call `init()` explicitly to create the tables:
557
+
558
+ ```typescript
559
+ import { PostgresStore } from "@mastra/pg";
560
+
561
+ const storage = new PostgresStore({
562
+ id: 'pg-storage',
563
+ connectionString: process.env.DATABASE_URL,
564
+ });
565
+
566
+ // Required when using storage directly
567
+ await storage.init();
568
+
569
+ // Access domain-specific stores via getStore()
570
+ const memoryStore = await storage.getStore('memory');
571
+ const thread = await memoryStore?.getThreadById({ threadId: "..." });
572
+ ```
573
+
574
+ > **Note:**
575
+ If `init()` is not called, tables won't be created and storage operations will fail silently or throw errors.
576
+
577
+ ### Using an Existing Pool
578
+
579
+ If you already have a `pg.Pool` in your application (e.g., shared with an ORM or for Row Level Security), you can pass it directly to `PostgresStore`:
580
+
581
+ ```typescript
582
+ import { Pool } from "pg";
583
+ import { PostgresStore } from "@mastra/pg";
584
+
585
+ // Your existing pool (shared across your application)
586
+ const pool = new Pool({
587
+ connectionString: process.env.DATABASE_URL,
588
+ max: 20,
589
+ });
590
+
591
+ const storage = new PostgresStore({
592
+ id: "shared-storage",
593
+ pool: pool,
594
+ });
595
+ ```
596
+
597
+ **Pool lifecycle behavior:**
598
+
599
+ - When you **provide a pool**: Mastra uses your pool but does **not** close it when `store.close()` is called. You manage the pool lifecycle.
600
+ - When Mastra **creates a pool**: Mastra owns the pool and will close it when `store.close()` is called.
601
+
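For example, under the ownership rule above, a pool you created is yours to shut down (a minimal sketch; `storage.close()` is the method named above and `pool.end()` is standard `pg`):

```typescript
import { Pool } from "pg";
import { PostgresStore } from "@mastra/pg";

const pool = new Pool({ connectionString: process.env.DATABASE_URL });
const storage = new PostgresStore({ id: "shared-storage", pool });

// ...use the store...

await storage.close(); // Mastra releases its reference but does not end a pool it did not create
await pool.end(); // you created the pool, so you end it
```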
602
+ ### Direct Database and Pool Access
603
+
604
+ `PostgresStore` exposes the underlying database client and pool for advanced use cases:
605
+
606
+ ```typescript
607
+ store.db; // DbClient - query interface with helpers (any, one, tx, etc.)
608
+ store.pool; // pg.Pool - the underlying connection pool
609
+ ```
610
+
611
+ **Using `store.db` for queries:**
612
+
613
+ ```typescript
614
+ // Execute queries with helper methods
615
+ const users = await store.db.any("SELECT * FROM users WHERE active = $1", [true]);
616
+ const user = await store.db.one("SELECT * FROM users WHERE id = $1", [userId]);
617
+ const maybeUser = await store.db.oneOrNone("SELECT * FROM users WHERE email = $1", [email]);
618
+
619
+ // Use transactions
620
+ const result = await store.db.tx(async (t) => {
621
+ await t.none("INSERT INTO logs (message) VALUES ($1)", ["Started"]);
622
+ const data = await t.any("SELECT * FROM items");
623
+ return data;
624
+ });
625
+ ```
626
+
627
+ **Using `store.pool` directly:**
628
+
629
+ ```typescript
630
+ // Get a client for manual connection management
631
+ const client = await store.pool.connect();
632
+ try {
633
+ await client.query("SET LOCAL app.user_id = $1", [userId]);
634
+ const result = await client.query("SELECT * FROM protected_table");
635
+ return result.rows;
636
+ } finally {
637
+ client.release();
638
+ }
639
+ ```
640
+
641
+ When using these fields:
642
+
643
+ - You are responsible for proper connection and transaction handling.
644
+ - Closing the store (`store.close()`) will destroy the pool only if Mastra created it.
645
+ - Direct access bypasses any additional logic or validation provided by PostgresStore methods.
646
+
647
+ This approach is intended for advanced scenarios where low-level access is required.
648
+
649
+ ### Using with Next.js
650
+
651
+ When using `PostgresStore` in Next.js applications, [Hot Module Replacement (HMR)](https://nextjs.org/docs/architecture/fast-refresh) during development can cause multiple storage instances to be created, resulting in this warning:
652
+
653
+ ```
654
+ WARNING: Creating a duplicate database object for the same connection.
655
+ ```
656
+
657
+ To prevent this, store the `PostgresStore` instance on the global object so it persists across HMR reloads:
658
+
659
+ ```typescript title="src/mastra/storage.ts"
660
+ import { PostgresStore } from "@mastra/pg";
661
+ import { Memory } from "@mastra/memory";
662
+
663
+ // Extend the global type to include our instances
664
+ declare global {
665
+ var pgStore: PostgresStore | undefined;
666
+ var memory: Memory | undefined;
667
+ }
668
+
669
+ // Get or create the PostgresStore instance
670
+ function getPgStore(): PostgresStore {
671
+ if (!global.pgStore) {
672
+ if (!process.env.DATABASE_URL) {
673
+ throw new Error("DATABASE_URL is not defined in environment variables");
674
+ }
675
+ global.pgStore = new PostgresStore({
676
+ id: "pg-storage",
677
+ connectionString: process.env.DATABASE_URL,
678
+ ssl:
679
+ process.env.DATABASE_SSL === "true"
680
+ ? { rejectUnauthorized: false }
681
+ : false,
682
+ });
683
+ }
684
+ return global.pgStore;
685
+ }
686
+
687
+ // Get or create the Memory instance
688
+ function getMemory(): Memory {
689
+ if (!global.memory) {
690
+ global.memory = new Memory({
691
+ storage: getPgStore(),
692
+ });
693
+ }
694
+ return global.memory;
695
+ }
696
+
697
+ export const storage = getPgStore();
698
+ export const memory = getMemory();
699
+ ```
700
+
701
+ Then use the exported instances in your Mastra configuration:
702
+
703
+ ```typescript title="src/mastra/index.ts"
704
+ import { Mastra } from "@mastra/core/mastra";
705
+ import { storage } from "./storage";
706
+
707
+ export const mastra = new Mastra({
708
+ storage,
709
+ // ...other config
710
+ });
711
+ ```
712
+
713
+ This pattern ensures only one `PostgresStore` instance is created regardless of how many times the module is reloaded during development. The same pattern can be applied to other storage providers like `LibSQLStore`.
714
+
715
+ > **Note:**
716
+ This singleton pattern is only necessary during local development with HMR. In production builds, modules are only loaded once.
717
+
718
+ ## Usage Example
719
+
720
+ ### Adding memory to an agent
721
+
722
+ To add PostgreSQL memory to an agent, use the `Memory` class and set its `storage` option to a `PostgresStore` instance. The `connectionString` can point to either a remote database or a local one.
723
+
724
+ ```typescript title="src/mastra/agents/example-pg-agent.ts"
725
+ import { Memory } from "@mastra/memory";
726
+ import { Agent } from "@mastra/core/agent";
727
+ import { PostgresStore } from "@mastra/pg";
728
+
729
+ export const pgAgent = new Agent({
730
+ id: "pg-agent",
731
+ name: "PG Agent",
732
+ instructions:
733
+ "You are an AI agent with the ability to automatically recall memories from previous interactions.",
734
+ model: "openai/gpt-5.1",
735
+ memory: new Memory({
736
+ storage: new PostgresStore({
737
+ id: 'pg-agent-storage',
738
+ connectionString: process.env.DATABASE_URL!,
739
+ }),
740
+ options: {
741
+ generateTitle: true, // Explicitly enable automatic title generation
742
+ },
743
+ }),
744
+ });
745
+ ```
746
+
747
+ ### Using the agent
748
+
749
+ Use `memoryOptions` to scope recall for this request. Set `lastMessages: 5` to limit recency-based recall, and use `semanticRecall` to fetch the `topK: 3` most relevant messages, including `messageRange: 2` neighboring messages for context around each match.
750
+
751
+ ```typescript title="src/test-pg-agent.ts"
752
+ import "dotenv/config";
753
+
754
+ import { mastra } from "./mastra";
755
+
756
+ const threadId = "123";
757
+ const resourceId = "user-456";
758
+
759
+ const agent = mastra.getAgent("pg-agent");
760
+
761
+ const message = await agent.stream("My name is Mastra", {
762
+ memory: {
763
+ thread: threadId,
764
+ resource: resourceId,
765
+ },
766
+ });
767
+
768
+ await message.textStream.pipeTo(new WritableStream());
769
+
770
+ const stream = await agent.stream("What's my name?", {
771
+ memory: {
772
+ thread: threadId,
773
+ resource: resourceId,
774
+ },
775
+ memoryOptions: {
776
+ lastMessages: 5,
777
+ semanticRecall: {
778
+ topK: 3,
779
+ messageRange: 2,
780
+ },
781
+ },
782
+ });
783
+
784
+ for await (const chunk of stream.textStream) {
785
+ process.stdout.write(chunk);
786
+ }
787
+ ```
788
+
789
+ ## Index Management
790
+
791
+ PostgreSQL storage provides index management to optimize query performance.
792
+
793
+ ### Default Indexes
794
+
795
+ PostgreSQL storage creates composite indexes during initialization for common query patterns:
796
+
797
+ - `mastra_threads_resourceid_createdat_idx`: (resourceId, createdAt DESC)
798
+ - `mastra_messages_thread_id_createdat_idx`: (thread_id, createdAt DESC)
799
+ - `mastra_ai_spans_traceid_startedat_idx`: (traceId, startedAt DESC)
800
+ - `mastra_ai_spans_parentspanid_startedat_idx`: (parentSpanId, startedAt DESC)
801
+ - `mastra_ai_spans_name_startedat_idx`: (name, startedAt DESC)
802
+ - `mastra_ai_spans_scope_startedat_idx`: (scope, startedAt DESC)
803
+ - `mastra_scores_trace_id_span_id_created_at_idx`: (traceId, spanId, createdAt DESC)
804
+
805
+ These indexes improve performance for filtered queries with sorting, including `dateRange` filters on message queries.
806
+
807
+ ### Configuring Indexes
808
+
809
+ You can control index creation via constructor options:
810
+
811
+ ```typescript
812
+ import { PostgresStore } from "@mastra/pg";
813
+
814
+ // Skip default indexes (manage indexes separately)
815
+ const store = new PostgresStore({
816
+ id: 'pg-storage',
817
+ connectionString: process.env.DATABASE_URL,
818
+ skipDefaultIndexes: true,
819
+ });
820
+
821
+ // Add custom indexes during initialization
822
+ const storeWithCustomIndexes = new PostgresStore({
823
+ id: 'pg-storage',
824
+ connectionString: process.env.DATABASE_URL,
825
+ indexes: [
826
+ {
827
+ name: "idx_threads_metadata_type",
828
+ table: "mastra_threads",
829
+ columns: ["metadata->>'type'"],
830
+ },
831
+ {
832
+ name: "idx_messages_status",
833
+ table: "mastra_messages",
834
+ columns: ["metadata->>'status'"],
835
+ },
836
+ ],
837
+ });
838
+ ```
839
+
840
+ For advanced index types, you can specify additional options:
841
+
842
+ - `unique: true` for unique constraints
843
+ - `where: 'condition'` for partial indexes
844
+ - `method: 'brin'` for time-series data
845
+ - `storage: { fillfactor: 90 }` for update-heavy tables
846
+ - `concurrent: true` for non-blocking creation (default)
847
+
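For instance, a sketch combining a few of these options in the `indexes` array (the option names come from the list above; the exact accepted values are defined by the package, not by this example):

```typescript
import { PostgresStore } from "@mastra/pg";

const storeWithAdvancedIndexes = new PostgresStore({
  id: "pg-storage",
  connectionString: process.env.DATABASE_URL,
  indexes: [
    {
      name: "idx_messages_createdat_brin",
      table: "mastra_messages",
      columns: ["createdAt"],
      method: "brin", // compact index suited to time-ordered data
      concurrent: true, // non-blocking creation (the default)
    },
    {
      name: "idx_threads_active",
      table: "mastra_threads",
      columns: ["resourceId"],
      where: "metadata->>'status' = 'active'", // partial index
    },
  ],
});
```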
848
+ ### Index Options
849
+
850
+ ### Schema-Specific Indexes
851
+
852
+ When using custom schemas, index names are prefixed with the schema name:
853
+
854
+ ```typescript
855
+ const storage = new PostgresStore({
856
+ id: 'pg-storage',
857
+ connectionString: process.env.DATABASE_URL,
858
+ schemaName: "custom_schema",
859
+ indexes: [
860
+ {
861
+ name: "idx_threads_status",
862
+ table: "mastra_threads",
863
+ columns: ["status"],
864
+ },
865
+ ],
866
+ });
867
+
868
+ // Creates index as: custom_schema_idx_threads_status
869
+ ```
870
+
871
+ ### Managing Indexes via SQL
872
+
873
+ For advanced index management (listing, dropping, analyzing), use direct SQL queries via the `db` accessor:
874
+
875
+ ```typescript
876
+ // List indexes for a table
877
+ const indexes = await storage.db.any(`
878
+ SELECT indexname, indexdef
879
+ FROM pg_indexes
880
+ WHERE tablename = 'mastra_messages'
881
+ `);
882
+
883
+ // Drop an index
884
+ await storage.db.none('DROP INDEX IF EXISTS idx_my_custom_index');
885
+
886
+ // Analyze index usage
887
+ const stats = await storage.db.one(`
888
+ SELECT idx_scan, idx_tup_read
889
+ FROM pg_stat_user_indexes
890
+ WHERE indexrelname = 'mastra_messages_thread_id_createdat_idx'
891
+ `);
892
+ ```
893
+
894
+ ### Index Types and Use Cases
895
+
896
+ PostgreSQL offers different index types optimized for specific scenarios:
897
+
898
+ | Index Type | Best For | Storage | Speed |
899
+ | ------------------- | --------------------------------------- | ---------- | -------------------------- |
900
+ | **btree** (default) | Range queries, sorting, general purpose | Moderate | Fast |
901
+ | **hash** | Equality comparisons only | Small | Very fast for `=` |
902
+ | **gin** | JSONB, arrays, full-text search | Large | Fast for contains |
903
+ | **gist** | Geometric data, full-text search | Moderate | Fast for nearest-neighbor |
904
+ | **spgist** | Non-balanced data, text patterns | Small | Fast for specific patterns |
905
+ | **brin** | Large tables with natural ordering | Very small | Fast for ranges |