@mastra/pg 1.7.2 → 1.8.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (32)
  1. package/CHANGELOG.md +124 -0
  2. package/dist/docs/SKILL.md +14 -14
  3. package/dist/docs/assets/SOURCE_MAP.json +1 -1
  4. package/dist/docs/references/docs-memory-semantic-recall.md +9 -9
  5. package/dist/docs/references/docs-memory-storage.md +1 -1
  6. package/dist/docs/references/docs-memory-working-memory.md +20 -20
  7. package/dist/docs/references/docs-rag-overview.md +2 -2
  8. package/dist/docs/references/docs-rag-retrieval.md +4 -4
  9. package/dist/docs/references/docs-rag-vector-databases.md +13 -13
  10. package/dist/docs/references/reference-memory-memory-class.md +1 -1
  11. package/dist/docs/references/reference-processors-message-history-processor.md +1 -1
  12. package/dist/docs/references/reference-rag-metadata-filters.md +15 -15
  13. package/dist/docs/references/reference-storage-composite.md +1 -1
  14. package/dist/docs/references/reference-storage-dynamodb.md +7 -7
  15. package/dist/docs/references/reference-storage-postgresql.md +7 -7
  16. package/dist/docs/references/reference-tools-vector-query-tool.md +12 -12
  17. package/dist/docs/references/reference-vectors-pg.md +23 -21
  18. package/dist/index.cjs +379 -91
  19. package/dist/index.cjs.map +1 -1
  20. package/dist/index.js +379 -91
  21. package/dist/index.js.map +1 -1
  22. package/dist/storage/db/index.d.ts +13 -0
  23. package/dist/storage/db/index.d.ts.map +1 -1
  24. package/dist/storage/domains/datasets/index.d.ts.map +1 -1
  25. package/dist/storage/domains/memory/index.d.ts +7 -2
  26. package/dist/storage/domains/memory/index.d.ts.map +1 -1
  27. package/dist/storage/domains/observability/index.d.ts.map +1 -1
  28. package/dist/vector/index.d.ts +44 -10
  29. package/dist/vector/index.d.ts.map +1 -1
  30. package/dist/vector/types.d.ts +32 -2
  31. package/dist/vector/types.d.ts.map +1 -1
  32. package/package.json +7 -7
package/dist/docs/references/reference-rag-metadata-filters.md

@@ -1,8 +1,8 @@
- # Metadata Filters
+ # Metadata filters

  Mastra provides a unified metadata filtering syntax across all vector stores, based on MongoDB/Sift query syntax. Each vector store translates these filters into their native format.

- ## Basic Example
+ ## Basic example

  ```typescript
  import { PgVector } from '@mastra/pg'
@@ -24,7 +24,7 @@ const results = await store.query({
  })
  ```

- ## Supported Operators
+ ## Supported operators

  ### Basic Comparison

@@ -46,9 +46,9 @@ const results = await store.query({

  | Operator | Description | Example | Supported by |
  | --- | --- | --- | --- |
  | `$contains` | Text contains substring | `{ description: { $contains: "sale" } }` | Upstash, libSQL, PgVector |
  | `$regex` | Regular expression match | `{ name: { $regex: "^test" } }` | Qdrant, PgVector, Upstash, MongoDB |
  | `$size` | Array length check | `{ tags: { $size: { $gt: 2 } } }` | Astra, libSQL, PgVector, MongoDB |
  | `$geo` | Geospatial query | `{ location: { $geo: { type: "radius", ... } } }` | Qdrant |
  | `$datetime` | Datetime range query | `{ created: { $datetime: { range: { gt: "2024-01-01" } } } }` | Qdrant |
  | `$hasId` | Vector ID existence check | `{ $hasId: ["id1", "id2"] }` | Qdrant |
  | `$hasVector` | Vector existence check | `{ $hasVector: true }` | Qdrant |

- ## Common Rules and Restrictions
+ ## Common rules and restrictions

- 1. Field names cannot:
+ 1. Field names can't:

  - Contain dots (.) unless referring to nested fields
  - Start with $ or contain null characters
@@ -63,11 +63,11 @@ const results = await store.query({
  3. Logical operators:

  - Must contain valid conditions
- - Cannot be empty
+ - Can't be empty
  - Must be properly nested
  - Can only be used at top level or nested within other logical operators
- - Cannot be used at field level or nested inside a field
- - Cannot be used inside an operator
+ - Can't be used at field level or nested inside a field
+ - Can't be used inside an operator
  - Valid: `{ "$and": [{ "field": { "$gt": 100 } }] }`
  - Valid: `{ "$or": [{ "$and": [{ "field": { "$gt": 100 } }] }] }`
  - Invalid: `{ "field": { "$and": [{ "$gt": 100 }] } }`
@@ -76,7 +76,7 @@ const results = await store.query({
  4. $not operator:

  - Must be an object
- - Cannot be empty
+ - Can't be empty
  - Can be used at field level or top level
  - Valid: `{ "$not": { "field": "value" } }`
  - Valid: `{ "field": { "$not": { "$eq": "value" } } }`
@@ -87,7 +87,7 @@ const results = await store.query({
  - Valid: `{ "$and": [{ "field": { "$gt": 100 } }] }`
  - Invalid: `{ "$and": [{ "$gt": 100 }] }`

- ## Store-Specific Notes
+ ## Store-specific notes

  ### Astra

@@ -98,7 +98,7 @@ const results = await store.query({
  ### ChromaDB

  - Where filters only return results where the filtered field exists in metadata
- - Empty metadata fields are not included in filter results
+ - Empty metadata fields aren't included in filter results
  - Metadata fields must be present for negative matches (e.g., $ne won't match documents missing the field)

  ### Cloudflare Vectorize
@@ -109,7 +109,7 @@ const results = await store.query({
  - String values are indexed up to first 64 bytes (truncated on UTF-8 boundaries)
  - Number values use float64 precision
  - Filter JSON must be under 2048 bytes
- - Field names cannot contain dots (.) or start with $
+ - Field names can't contain dots (.) or start with $
  - Field names limited to 512 characters
  - Vectors must be re-upserted after creating new metadata indexes to be included in filtered results
  - Range queries may have reduced accuracy with very large datasets (~10M+ vectors)
@@ -186,12 +186,12 @@ const results = await store.query({

  ### Couchbase

- - Currently does not have support for metadata filters. Filtering must be done client-side after retrieving results or by using the Couchbase SDK's Search capabilities directly for more complex queries.
+ - Currently doesn't have support for metadata filters. Filtering must be done client-side after retrieving results or by using the Couchbase SDK's Search capabilities directly for more complex queries.

  ### Amazon S3 Vectors

- - Equality values must be primitives (string/number/boolean). `null`/`undefined`, arrays, objects, and Date are not allowed for equality. Range operators accept numbers or Date (Dates are normalized to epoch ms).
- - `$in`/`$nin` require **non-empty arrays of primitives**; Date elements are allowed and normalized to epoch ms. **Array equality** is not supported.
+ - Equality values must be primitives (string/number/boolean). `null`/`undefined`, arrays, objects, and Date aren't allowed for equality. Range operators accept numbers or Date (Dates are normalized to epoch ms).
+ - `$in`/`$nin` require **non-empty arrays of primitives**; Date elements are allowed and normalized to epoch ms. **Array equality** isn't supported.
  - Implicit AND is canonicalized (`{a:1,b:2}` → `{$and:[{a:1},{b:2}]}`). Logical operators must contain field conditions, use non-empty arrays, and appear only at the root or within other logical operators (not inside field values).
  - Keys listed in `nonFilterableMetadataKeys` at index creation are stored but not filterable; this setting is immutable.
  - $exists requires a boolean value.
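The Amazon S3 Vectors notes above say implicit AND is canonicalized (`{a:1,b:2}` → `{$and:[{a:1},{b:2}]}`). As a sketch of what that rule means, here is a hypothetical helper (not an `@mastra/pg` API) that performs the same rewrite:

```typescript
// Hypothetical illustration of implicit-AND canonicalization; not part of @mastra/pg.
type Filter = Record<string, unknown>;

function canonicalizeImplicitAnd(filter: Filter): Filter {
  const keys = Object.keys(filter);
  // A single field condition, or a filter already using a $-operator, is left as-is.
  if (keys.length <= 1 || keys.some(k => k.startsWith('$'))) return filter;
  // Multiple sibling fields become an explicit $and of single-field conditions.
  return { $and: keys.map(k => ({ [k]: filter[k] })) };
}

console.log(JSON.stringify(canonicalizeImplicitAnd({ a: 1, b: 2 })));
// → {"$and":[{"a":1},{"b":2}]}
```

Filters that already start with a logical operator pass through untouched, consistent with the rule that logical operators may only appear at the root or inside other logical operators.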
package/dist/docs/references/reference-storage-composite.md

@@ -1,4 +1,4 @@
- # Composite Storage
+ # Composite storage

  `MastraCompositeStore` can compose storage domains from different providers. Use it when you need different databases for different purposes. For example, use LibSQL for memory and PostgreSQL for workflows.

package/dist/docs/references/reference-storage-dynamodb.md

@@ -1,8 +1,8 @@
- # DynamoDB Storage
+ # DynamoDB storage

  The DynamoDB storage implementation provides a scalable and performant NoSQL database solution for Mastra, leveraging a single-table design pattern with [ElectroDB](https://electrodb.dev/).

- > **Observability Not Supported:** DynamoDB storage **does not support the observability domain**. Traces from the `DefaultExporter` cannot be persisted to DynamoDB, and Mastra Studio's observability features won't work with DynamoDB as your only storage provider. To enable observability, use [composite storage](https://mastra.ai/reference/storage/composite) to route observability data to a supported provider like ClickHouse or PostgreSQL.
+ > **Observability Not Supported:** DynamoDB storage **doesn't support the observability domain**. Traces from the `DefaultExporter` can't be persisted to DynamoDB, and Mastra Studio's observability features won't work with DynamoDB as your only storage provider. To enable observability, use [composite storage](https://mastra.ai/reference/storage/composite) to route observability data to a supported provider like ClickHouse or PostgreSQL.

  > **Item Size Limit:** DynamoDB enforces a **400 KB maximum item size**. This limit can be exceeded when storing messages with base64-encoded attachments such as images. See [Handling large attachments](https://mastra.ai/docs/memory/storage) for workarounds including uploading attachments to external storage.

@@ -120,7 +120,7 @@ For local development, you can use [DynamoDB Local](https://docs.aws.amazon.com/

  **config.ttl** (`object`): TTL (Time To Live) configuration for automatic data expiration. Configure per entity type: thread, message, trace, eval, workflow_snapshot, resource, score. Each entity config includes: enabled (boolean), attributeName (string, default: 'ttl'), defaultTtlSeconds (number).

- ## TTL (Time To Live) Configuration
+ ## TTL (time to live) configuration

  DynamoDB TTL allows you to automatically delete items after a specified time period. This is useful for:

@@ -211,12 +211,12 @@ aws dynamodb update-time-to-live \
  1. Go to the DynamoDB console
  2. Select your table
  3. Go to "Additional settings" tab
- 4. Under "Time to Live (TTL)", click "Manage TTL"
+ 4. Under "Time to Live (TTL)", select "Manage TTL"
  5. Enable TTL and specify the attribute name (default: `ttl`)

  > **Note:** DynamoDB deletes expired items within 48 hours after expiration. Items remain queryable until actually deleted.

- ## AWS IAM Permissions
+ ## AWS IAM permissions

  The IAM role or user executing the code needs appropriate permissions to interact with the specified DynamoDB table and its indexes. Below is a sample policy. Replace `${YOUR_TABLE_NAME}` with your actual table name and `${YOUR_AWS_REGION}` and `${YOUR_AWS_ACCOUNT_ID}` with appropriate values.

@@ -246,7 +246,7 @@ The IAM role or user executing the code needs appropriate permissions to interac
  }
  ```

- ## Key Considerations
+ ## Key considerations

  Before diving into the architectural details, keep these key points in mind when working with the DynamoDB storage adapter:

@@ -255,7 +255,7 @@ Before diving into the architectural details, keep these key points in mind when
  - **Understanding GSIs:** Familiarity with how the GSIs are structured (as per `TABLE_SETUP.md`) is important for understanding data retrieval and potential query patterns.
  - **ElectroDB:** The adapter uses ElectroDB to manage interactions with DynamoDB, providing a layer of abstraction and type safety over raw DynamoDB operations.

- ## Architectural Approach
+ ## Architectural approach

  This storage adapter utilizes a **single-table design pattern** leveraging [ElectroDB](https://electrodb.dev/), a common and recommended approach for DynamoDB. This differs architecturally from relational database adapters (like `@mastra/pg` or `@mastra/libsql`) that typically use multiple tables, each dedicated to a specific entity (threads, messages, etc.).

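For context on the TTL settings above: DynamoDB's TTL feature expects the configured attribute (default `ttl`) to hold an absolute expiry time as epoch *seconds*, not milliseconds. A sketch of how a `defaultTtlSeconds` value could translate into that attribute (hypothetical helper, not the adapter's actual code):

```typescript
// Hypothetical helper: compute a DynamoDB TTL attribute value.
// DynamoDB TTL expects epoch time in seconds, not milliseconds.
function ttlEpochSeconds(defaultTtlSeconds: number, now: Date = new Date()): number {
  return Math.floor(now.getTime() / 1000) + defaultTtlSeconds;
}

// Example: expire an item 7 days after it is written. The item shape here is
// illustrative only; the adapter's real key schema comes from ElectroDB.
const item = {
  pk: 'thread#123',
  sk: 'message#456',
  ttl: ttlEpochSeconds(7 * 24 * 60 * 60), // attributeName defaults to 'ttl'
};
```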
package/dist/docs/references/reference-storage-postgresql.md

@@ -1,4 +1,4 @@
- # PostgreSQL Storage
+ # PostgreSQL storage

  The PostgreSQL storage implementation provides a production-ready storage solution using PostgreSQL databases.

@@ -71,7 +71,7 @@ const storage = new PostgresStore({

  **indexes** (`CreateIndexOptions[]`): Custom indexes to create during initialization.

- ## Constructor Examples
+ ## Constructor examples

  You can instantiate `PostgresStore` in the following ways:

@@ -119,7 +119,7 @@ const store4 = new PostgresStore({
  })
  ```

- ## Additional Notes
+ ## Additional notes

  ### Schema Management

@@ -177,7 +177,7 @@ const memoryStore = await storage.getStore('memory')
  const thread = await memoryStore?.getThreadById({ threadId: '...' })
  ```

- > **Warning:** If `init()` is not called, tables won't be created and storage operations will fail silently or throw errors.
+ > **Warning:** If `init()` isn't called, tables won't be created and storage operations will fail silently or throw errors.

  ### Using an Existing Pool

@@ -201,7 +201,7 @@ const storage = new PostgresStore({

  **Pool lifecycle behavior:**

- - When you **provide a pool**: Mastra uses your pool but does **not** close it when `store.close()` is called. You manage the pool lifecycle.
+ - When you **provide a pool**: Mastra uses your pool but **doesn't** close it when `store.close()` is called. You manage the pool lifecycle.
  - When Mastra **creates a pool**: Mastra owns the pool and will close it when `store.close()` is called.

  ### Direct Database and Pool Access
@@ -316,7 +316,7 @@ This pattern ensures only one `PostgresStore` instance is created regardless of

  > **Tip:** This singleton pattern is only necessary during local development with HMR. In production builds, modules are only loaded once.

- ## Usage Example
+ ## Usage example

  ### Adding memory to an agent

@@ -387,7 +387,7 @@ for await (const chunk of stream.textStream) {
  }
  ```

- ## Index Management
+ ## Index management

  PostgreSQL storage provides index management to optimize query performance.

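The HMR singleton tip above can be sketched by caching the instance on `globalThis`, so that a re-evaluated module reuses the first instance. The factory below is a stand-in; a real app would construct `PostgresStore` inside it:

```typescript
// Sketch of an HMR-safe singleton: hot module reloads re-run this module,
// but the cached instance on globalThis survives across evaluations.
type StoreLike = { connectionString: string };

const globalCache = globalThis as unknown as { __pgStore?: StoreLike };

function getStore(factory: () => StoreLike): StoreLike {
  const existing = globalCache.__pgStore;
  if (existing) return existing; // reuse across hot reloads
  const created = factory(); // runs only on the first load
  globalCache.__pgStore = created;
  return created;
}

// Stand-in factory; in a real app this would be `new PostgresStore({ ... })`.
const store = getStore(() => ({ connectionString: 'postgresql://localhost:5432/mastra' }));
```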
package/dist/docs/references/reference-tools-vector-query-tool.md

@@ -2,7 +2,7 @@

  The `createVectorQueryTool()` function creates a tool for semantic search over vector stores. It supports filtering, reranking, database-specific configurations, and integrates with various vector store backends.

- ## Basic Usage
+ ## Basic usage

  ```typescript
  import { createVectorQueryTool } from '@mastra/rag'
@@ -79,7 +79,7 @@ The tool returns an object with:

  **sources** (`QueryResult[]`): Array of full retrieval result objects. Each object contains all information needed to reference the original document, chunk, and similarity score.

- ### QueryResult object structure
+ ### `QueryResult` object structure

  ```typescript
  {
@@ -91,7 +91,7 @@ The tool returns an object with:
  }
  ```

- ## Default Tool Description
+ ## Default tool description

  The default description focuses on:

@@ -99,11 +99,11 @@ The default description focuses on:
  - Answering user questions
  - Retrieving factual content

- ## Result Handling
+ ## Result handling

  The tool determines the number of results to return based on the user's query, with a default of 10 results. This can be adjusted based on the query requirements.

- ## Example with Filters
+ ## Example with filters

  ```typescript
  const queryTool = createVectorQueryTool({
@@ -134,7 +134,7 @@ For detailed filter syntax and store-specific capabilities, see the [Metadata Fi

  For an example of how agent-driven filtering works, see the [Agent-Driven Metadata Filtering](https://github.com/mastra-ai/mastra/tree/main/examples/basics/rag/filter-rag) example.

- ## Example with Reranking
+ ## Example with reranking

  ```typescript
  const queryTool = createVectorQueryTool({
@@ -164,7 +164,7 @@ Reranking improves result quality by combining:

  The reranker processes the initial vector search results and returns a reordered list optimized for relevance.

- ## Example with Custom Description
+ ## Example with custom description

  ```typescript
  const queryTool = createVectorQueryTool({
@@ -178,7 +178,7 @@ const queryTool = createVectorQueryTool({

  This example shows how to customize the tool description for a specific use case while maintaining its core purpose of information retrieval.

- ## Database-Specific Configuration Examples
+ ## Database-specific configuration examples

  The `databaseConfig` parameter allows you to leverage unique features and optimizations specific to each vector database. These configurations are automatically applied during query execution.

@@ -335,7 +335,7 @@ This approach allows you to:
  - Adjust performance parameters based on load
  - Apply different filtering strategies per request

- ## Example: Using Request Context
+ ## Example: Using request context

  ```typescript
  const queryTool = createVectorQueryTool({
@@ -374,7 +374,7 @@ For more information on request context, please see:
  - [Agent Request Context](https://mastra.ai/docs/server/request-context)
  - [Request Context](https://mastra.ai/docs/server/request-context)

- ## Usage Without a Mastra Server
+ ## Usage without a Mastra server

  The tool can be used by itself to retrieve documents matching a query:

@@ -401,7 +401,7 @@ const queryResult = await vectorQueryTool.execute({ queryText: 'foo', topK: 1 },
  console.log(queryResult.sources)
  ```

- ## Dynamic Vector Store for Multi-Tenant Applications
+ ## Dynamic vector store for multi-tenant applications

  For multi-tenant applications where each tenant has isolated data (e.g., separate PostgreSQL schemas), you can pass a resolver function instead of a static vector store instance. The function receives the request context and can return the appropriate vector store for the current tenant:

@@ -457,7 +457,7 @@ This pattern is similar to how `Agent.memory` supports dynamic configuration and
  - **Database isolation**: Route to different database instances per tenant
  - **Dynamic configuration**: Adjust vector store settings based on request context

- ## Tool Details
+ ## Tool details

  The tool is created with:

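For the multi-tenant resolver described above, a schema-per-tenant setup needs a deterministic, valid PostgreSQL schema name for each tenant. The helper below is a hypothetical illustration (PostgreSQL folds unquoted identifiers to lower case and limits them to 63 bytes); the resolver would then return a `PgVector` configured with this schema:

```typescript
// Hypothetical helper: derive a safe PostgreSQL schema name from a tenant ID.
// Non-alphanumeric characters are replaced and the result is truncated to the
// 63-byte PostgreSQL identifier limit.
function tenantSchemaName(tenantId: string): string {
  const safe = tenantId.toLowerCase().replace(/[^a-z0-9_]/g, '_');
  return `tenant_${safe}`.slice(0, 63);
}

console.log(tenantSchemaName('Acme-Corp'));
// → tenant_acme_corp
```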
package/dist/docs/references/reference-vectors-pg.md

@@ -1,8 +1,8 @@
- # PG Vector Store
+ # PG vector store

  The PgVector class provides vector search using [PostgreSQL](https://www.postgresql.org/) with [pgvector](https://github.com/pgvector/pgvector) extension. It provides robust vector similarity search capabilities within your existing PostgreSQL database.

- ## Constructor Options
+ ## Constructor options

  **connectionString** (`string`): PostgreSQL connection URL

@@ -26,7 +26,7 @@ The PgVector class provides vector search using [PostgreSQL](https://www.postgre

  **pgPoolOptions** (`PoolConfig`): Additional pg pool configuration options

- ## Constructor Examples
+ ## Constructor examples

  ### Connection String

@@ -70,7 +70,7 @@ const vectorStore = new PgVector({

  ## Methods

- ### createIndex()
+ ### `createIndex()`

  **indexName** (`string`): Name of the index to create

@@ -82,7 +82,9 @@ const vectorStore = new PgVector({

  **buildIndex** (`boolean`): Whether to build the index (Default: `true`)

- #### IndexConfig
+ **metadataIndexes** (`string[]`): Array of metadata field names to create btree indexes on. Improves query performance when filtering by these metadata fields.
+
+ #### `IndexConfig`

  **type** (`'flat' | 'hnsw' | 'ivfflat'`): Index type (Default: `ivfflat`)

@@ -112,7 +114,7 @@ HNSW indexes require significant shared memory during construction. For 100K vec

  Higher M values or efConstruction values will increase memory requirements significantly. Adjust your system's shared memory limits if needed.

- ### upsert()
+ ### `upsert()`

  **indexName** (`string`): Name of the index to upsert vectors into

@@ -122,7 +124,7 @@ Higher M values or efConstruction values will increase memory requirements signi

  **ids** (`string[]`): Optional vector IDs (auto-generated if not provided)

- ### query()
+ ### `query()`

  **indexName** (`string`): Name of the index to query

@@ -142,11 +144,11 @@ Higher M values or efConstruction values will increase memory requirements signi

  **options.probes** (`number`): IVF search parameter

- ### listIndexes()
+ ### `listIndexes()`

  Returns an array of index names as strings.

- ### describeIndex()
+ ### `describeIndex()`

  **indexName** (`string`): Name of the index to describe

@@ -167,11 +169,11 @@ interface PGIndexStats {
  }
  ```

- ### deleteIndex()
+ ### `deleteIndex()`

  **indexName** (`string`): Name of the index to delete

- ### updateVector()
+ ### `updateVector()`

  Update a single vector by ID or by metadata filter. Either `id` or `filter` must be provided, but not both.

@@ -206,7 +208,7 @@ await pgVector.updateVector({
  })
  ```

- ### deleteVector()
+ ### `deleteVector()`

  **indexName** (`string`): Name of the index containing the vector

@@ -218,7 +220,7 @@ Deletes a single vector by ID from the specified index.
  await pgVector.deleteVector({ indexName: 'my_vectors', id: 'vector123' })
  ```

- ### deleteVectors()
+ ### `deleteVectors()`

  Delete multiple vectors by IDs or by metadata filter. Either `ids` or `filter` must be provided, but not both.

@@ -228,11 +230,11 @@ Delete multiple vectors by IDs or by metadata filter. Either `ids` or `filter` m

  **filter** (`Record<string, any>`): Metadata filter to identify vectors to delete (mutually exclusive with ids)

- ### disconnect()
+ ### `disconnect()`

  Closes the database connection pool. Should be called when done using the store.

- ### buildIndex()
+ ### `buildIndex()`

  **indexName** (`string`): Name of the index to define

@@ -266,7 +268,7 @@ await pgVector.buildIndex('my_vectors', 'cosine', {
  })
  ```

- ## Response Types
+ ## Response types

  Query results are returned in this format:

@@ -279,7 +281,7 @@ interface QueryResult {
  }
  ```

- ## Error Handling
+ ## Error handling

  The store throws typed errors that can be caught:

@@ -297,7 +299,7 @@ try {
  }
  ```

- ## Index Configuration Guide
+ ## Index configuration guide

  ### Performance Optimization

@@ -329,14 +331,14 @@ The system automatically detects configuration changes and only rebuilds indexes
  - Changed configuration: Index is dropped and rebuilt
  - This prevents the performance issues from unnecessary index recreations

- ## Best Practices
+ ## Best practices

  - Regularly evaluate your index configuration to ensure optimal performance.
  - Adjust parameters like `lists` and `m` based on dataset size and query requirements.
  - **Monitor index performance** using `describeIndex()` to track usage
  - Rebuild indexes periodically to maintain efficiency, especially after significant data changes

- ## Direct Pool Access
+ ## Direct pool access

  The `PgVector` class exposes its underlying PostgreSQL connection pool as a public field:

@@ -352,7 +354,7 @@ This enables advanced usage such as running direct SQL queries, managing transac

  This design supports advanced use cases but requires careful resource management by the user.

- ## Usage Example
+ ## Usage example

  ### Local embeddings with fastembed
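`updateVector()` and `deleteVectors()` above both require an ID (or ID list) or a `filter`, but not both. That contract can be sketched as a simple mutual-exclusion check (hypothetical validation, not the package's implementation):

```typescript
// Hypothetical check mirroring the documented contract: exactly one of
// `id` and `filter` must be provided.
function assertIdXorFilter(args: { id?: string; filter?: Record<string, unknown> }): void {
  const hasId = args.id !== undefined;
  const hasFilter = args.filter !== undefined;
  if (hasId === hasFilter) {
    throw new Error('Provide either `id` or `filter`, but not both.');
  }
}

assertIdXorFilter({ id: 'vector123' }); // ok: update/delete by ID
assertIdXorFilter({ filter: { category: 'docs' } }); // ok: by metadata filter
```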