@mastra/pg 1.0.0-beta.12 → 1.0.0-beta.13

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -1,5 +1,132 @@
  # @mastra/pg
 
+ ## 1.0.0-beta.13
+
+ ### Minor Changes
+
+ - Changed JSON columns from TEXT to JSONB in `mastra_threads` and `mastra_workflow_snapshot` tables. ([#11853](https://github.com/mastra-ai/mastra/pull/11853))
+
+   **Why this change?**
+
+   These were the last remaining columns storing JSON as TEXT. This change aligns them with other tables that already use JSONB, enabling native JSON operators and improved performance. See [#8978](https://github.com/mastra-ai/mastra/issues/8978) for details.
+
+   **Columns Changed:**
+   - `mastra_threads.metadata` - Thread metadata
+   - `mastra_workflow_snapshot.snapshot` - Workflow run state
+
+   **PostgreSQL**
+
+   Migration Required - PostgreSQL enforces column types, so existing tables must be migrated. Note: the migration will fail if existing column values contain invalid JSON.
+
+   ```sql
+   ALTER TABLE mastra_threads
+   ALTER COLUMN metadata TYPE jsonb
+   USING metadata::jsonb;
+
+   ALTER TABLE mastra_workflow_snapshot
+   ALTER COLUMN snapshot TYPE jsonb
+   USING snapshot::jsonb;
+   ```
+
+   **LibSQL**
+
+   No Migration Required - LibSQL now uses the native SQLite JSONB format (added in SQLite 3.45) for a ~3x performance improvement on JSON operations. The changes are fully backwards compatible:
+   - Existing TEXT JSON data continues to work
+   - New data is stored in binary JSONB format
+   - Both formats can coexist in the same table
+   - All JSON functions (`json_extract`, etc.) work on both formats
+
+   New installations automatically use JSONB. Existing applications continue to work without any changes.
+
+ ### Patch Changes
+
+ - Fixed `listWorkflowRuns` failing with an "unsupported Unicode escape sequence" error when filtering by status on snapshots containing null characters (`\u0000`) or unpaired Unicode surrogates (`\uD800`-`\uDFFF`). ([#11616](https://github.com/mastra-ai/mastra/pull/11616))
+
+   The fix uses `regexp_replace` to sanitize problematic escape sequences before casting to JSONB in the WHERE clause, while preserving the original data in the returned results.
+
+ - Fixed PostgreSQL migration errors when upgrading from v0.x to v1 ([#11633](https://github.com/mastra-ai/mastra/pull/11633))
+
+   **What changed:** PostgreSQL storage now automatically adds missing `spanId` and `requestContext` columns to the scorers table during initialization, preventing "column does not exist" errors when upgrading from v0.x to v1.0.0.
+
+   **Why:** Previously, upgrading to v1 could fail with errors like `column "requestContext" of relation "mastra_scorers" does not exist` if your database was created with an older version.
+
+   Related: #11631
+
+ - Aligned vector store configuration with the underlying library APIs, giving you direct access to all library options. ([#11742](https://github.com/mastra-ai/mastra/pull/11742))
+
+   **Why this change?**
+
+   Previously, each vector store defined its own configuration types that exposed only a subset of the underlying library's options. This meant users couldn't access advanced features like authentication, SSL, compression, or custom headers without creating their own client instances. Now the configuration types extend the library types directly, so all options are available.
+
+   **@mastra/libsql** (Breaking)
+
+   Renamed `connectionUrl` to `url` to match the `@libsql/client` API and align with LibSQLStorage.
+
+   ```typescript
+   // Before
+   new LibSQLVector({ id: 'my-vector', connectionUrl: 'file:./db.sqlite' });
+
+   // After
+   new LibSQLVector({ id: 'my-vector', url: 'file:./db.sqlite' });
+   ```
+
+   **@mastra/opensearch** (Breaking)
+
+   Renamed `url` to `node` and added support for all OpenSearch `ClientOptions`, including authentication, SSL, and compression.
+
+   ```typescript
+   // Before
+   new OpenSearchVector({ id: 'my-vector', url: 'http://localhost:9200' });
+
+   // After
+   new OpenSearchVector({ id: 'my-vector', node: 'http://localhost:9200' });
+
+   // With authentication (now possible)
+   new OpenSearchVector({
+     id: 'my-vector',
+     node: 'https://localhost:9200',
+     auth: { username: 'admin', password: 'admin' },
+     ssl: { rejectUnauthorized: false },
+   });
+   ```
+
+   **@mastra/pinecone** (Breaking)
+
+   Removed the `environment` parameter. Use `controllerHostUrl` instead (the actual Pinecone SDK field name). Added support for all `PineconeConfiguration` options.
+
+   ```typescript
+   // Before
+   new PineconeVector({ id: 'my-vector', apiKey: '...', environment: '...' });
+
+   // After
+   new PineconeVector({ id: 'my-vector', apiKey: '...' });
+
+   // With custom controller host (if needed)
+   new PineconeVector({ id: 'my-vector', apiKey: '...', controllerHostUrl: '...' });
+   ```
+
+   **@mastra/clickhouse**
+
+   Added support for all `ClickHouseClientConfigOptions`, such as `request_timeout`, `compression`, `keep_alive`, and `database`. Existing configurations continue to work unchanged.
+
+   **@mastra/cloudflare, @mastra/cloudflare-d1, @mastra/lance, @mastra/libsql, @mastra/mongodb, @mastra/pg, @mastra/upstash**
+
+   Improved logging by replacing `console.warn` with the structured logger in workflow storage domains.
+
+   **@mastra/deployer-cloud**
+
+   Updated the internal LibSQLVector configuration for compatibility with the new API.
+
+ - Fixed PostgreSQL storage issues after the JSONB migration. ([#11906](https://github.com/mastra-ai/mastra/pull/11906))
+
+   **Bug Fixes**
+   - Fixed `clearTable()` using an incorrect default schema. The method checked for table existence in the 'mastra' schema instead of PostgreSQL's default 'public' schema, so table truncation was skipped, causing duplicate key violations in tests and in production code that uses `dangerouslyClearAll()`.
+   - Fixed the `listWorkflowRuns()` status filter failing with a "function regexp_replace(jsonb, ...) does not exist" error. After the TEXT to JSONB migration, the query tried to use `regexp_replace()` directly on a JSONB column. It now casts to text first: `regexp_replace(snapshot::text, ...)`.
+   - Added Unicode sanitization when persisting workflow snapshots to handle null characters (`\u0000`) and unpaired surrogates (`\uD800`-`\uDFFF`) that PostgreSQL's JSONB type rejects, preventing "unsupported Unicode escape sequence" errors.
+
+ - Updated dependencies [[`ebae12a`](https://github.com/mastra-ai/mastra/commit/ebae12a2dd0212e75478981053b148a2c246962d), [`c61a0a5`](https://github.com/mastra-ai/mastra/commit/c61a0a5de4904c88fd8b3718bc26d1be1c2ec6e7), [`69136e7`](https://github.com/mastra-ai/mastra/commit/69136e748e32f57297728a4e0f9a75988462f1a7), [`449aed2`](https://github.com/mastra-ai/mastra/commit/449aed2ba9d507b75bf93d427646ea94f734dfd1), [`eb648a2`](https://github.com/mastra-ai/mastra/commit/eb648a2cc1728f7678768dd70cd77619b448dab9), [`0131105`](https://github.com/mastra-ai/mastra/commit/0131105532e83bdcbb73352fc7d0879eebf140dc), [`9d5059e`](https://github.com/mastra-ai/mastra/commit/9d5059eae810829935fb08e81a9bb7ecd5b144a7), [`ef756c6`](https://github.com/mastra-ai/mastra/commit/ef756c65f82d16531c43f49a27290a416611e526), [`b00ccd3`](https://github.com/mastra-ai/mastra/commit/b00ccd325ebd5d9e37e34dd0a105caae67eb568f), [`3bdfa75`](https://github.com/mastra-ai/mastra/commit/3bdfa7507a91db66f176ba8221aa28dd546e464a), [`e770de9`](https://github.com/mastra-ai/mastra/commit/e770de941a287a49b1964d44db5a5763d19890a6), [`52e2716`](https://github.com/mastra-ai/mastra/commit/52e2716b42df6eff443de72360ae83e86ec23993), [`27b4040`](https://github.com/mastra-ai/mastra/commit/27b4040bfa1a95d92546f420a02a626b1419a1d6), [`610a70b`](https://github.com/mastra-ai/mastra/commit/610a70bdad282079f0c630e0d7bb284578f20151), [`8dc7f55`](https://github.com/mastra-ai/mastra/commit/8dc7f55900395771da851dc7d78d53ae84fe34ec), [`8379099`](https://github.com/mastra-ai/mastra/commit/8379099fc467af6bef54dd7f80c9bd75bf8bbddf), [`8c0ec25`](https://github.com/mastra-ai/mastra/commit/8c0ec25646c8a7df253ed1e5ff4863a0d3f1316c), [`ff4d9a6`](https://github.com/mastra-ai/mastra/commit/ff4d9a6704fc87b31a380a76ed22736fdedbba5a), [`69821ef`](https://github.com/mastra-ai/mastra/commit/69821ef806482e2c44e2197ac0b050c3fe3a5285), [`1ed5716`](https://github.com/mastra-ai/mastra/commit/1ed5716830867b3774c4a1b43cc0d82935f32b96), [`4186bdd`](https://github.com/mastra-ai/mastra/commit/4186bdd00731305726fa06adba0b076a1d50b49f), [`7aaf973`](https://github.com/mastra-ai/mastra/commit/7aaf973f83fbbe9521f1f9e7a4fd99b8de464617)]:
+   - @mastra/core@1.0.0-beta.22
+
  ## 1.0.0-beta.12
 
  ### Patch Changes
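The snapshot sanitization described in the patch notes above can be sketched as follows. This is an illustrative sketch, not the package's actual implementation, and the `sanitizeForJsonb` name is hypothetical: it strips escaped null characters and unpaired surrogate escapes from a JSON-serialized snapshot, the two cases PostgreSQL's JSONB type rejects.

```typescript
// Hypothetical sketch: strip escape sequences that PostgreSQL's JSONB rejects.
function sanitizeForJsonb(serialized: string): string {
  return serialized
    // Escaped null characters (\u0000) are never accepted by JSONB.
    .replace(/\\u0000/g, "")
    // Keep valid surrogate pairs (a high-surrogate escape followed by a
    // low-surrogate escape, 12 characters total); drop lone high or low
    // surrogate escapes (6 characters).
    .replace(
      /\\u[dD][89abAB][0-9a-fA-F]{2}\\u[dD][c-fC-F][0-9a-fA-F]{2}|\\u[dD][89a-fA-F][0-9a-fA-F]{2}/g,
      (match) => (match.length === 12 ? match : ""),
    );
}
```

A valid pair such as `\ud83d\ude00` is left intact, while a lone `\ud800` is removed before the string is cast to `jsonb`.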
@@ -33,4 +33,4 @@ docs/
  ## Version
 
  Package: @mastra/pg
- Version: 1.0.0-beta.12
+ Version: 1.0.0-beta.13
@@ -5,7 +5,7 @@ description: Documentation for @mastra/pg. Includes links to type definitions an
 
  # @mastra/pg Documentation
 
- > **Version**: 1.0.0-beta.12
+ > **Version**: 1.0.0-beta.13
  > **Package**: @mastra/pg
 
  ## Quick Navigation
@@ -1,5 +1,5 @@
  {
- "version": "1.0.0-beta.12",
+ "version": "1.0.0-beta.13",
  "package": "@mastra/pg",
  "exports": {},
  "modules": {}
@@ -4,11 +4,11 @@
 
  For Mastra to remember previous interactions, you must configure a storage adapter. Mastra is designed to work with your preferred database provider - choose from the [supported providers](#supported-providers) and pass it to your Mastra instance.
 
- ```typescript
+ ```typescript title="src/mastra/index.ts"
  import { Mastra } from "@mastra/core";
  import { LibSQLStore } from "@mastra/libsql";
 
- const mastra = new Mastra({
+ export const mastra = new Mastra({
    storage: new LibSQLStore({
      id: 'mastra-storage',
      url: "file:./mastra.db",
@@ -17,7 +17,7 @@ const mastra = new Mastra({
  ```
  On first interaction, Mastra automatically creates the necessary tables following the [core schema](https://mastra.ai/reference/v1/storage/overview#core-schema). This includes tables for messages, threads, resources, workflows, traces, and evaluation datasets.
 
- ## Supported Providers
+ ## Supported providers
 
  Each provider page includes installation instructions, configuration parameters, and usage examples:
 
@@ -35,19 +35,19 @@ Each provider page includes installation instructions, configuration parameters,
  > **Note:**
  libSQL is the easiest way to get started because it doesn’t require running a separate database server.
 
- ## Configuration Scope
+ ## Configuration scope
 
  You can configure storage at two different scopes:
 
  ### Instance-level storage
 
- Add storage to your Mastra instance so all agents share the same memory provider:
+ Add storage to your Mastra instance so all agents, workflows, observability traces, and scores share the same memory provider:
 
- ```typescript
+ ```typescript
  import { Mastra } from "@mastra/core";
  import { PostgresStore } from "@mastra/pg";
 
- const mastra = new Mastra({
+ export const mastra = new Mastra({
    storage: new PostgresStore({
      id: 'mastra-storage',
      connectionString: process.env.DATABASE_URL,
@@ -55,20 +55,55 @@ const mastra = new Mastra({
  });
 
  // All agents automatically use this storage
- const agent1 = new Agent({ memory: new Memory() });
- const agent2 = new Agent({ memory: new Memory() });
+ const agent1 = new Agent({ id: "agent-1", memory: new Memory() });
+ const agent2 = new Agent({ id: "agent-2", memory: new Memory() });
+ ```
+
+ This is useful when all primitives share the same storage backend and have similar performance, scaling, and operational requirements.
+
+ #### Composite storage
+
+ Add storage to your Mastra instance using `MastraStorage` and configure individual storage domains to use different storage providers.
+
+ ```typescript title="src/mastra/index.ts"
+ import { Mastra } from "@mastra/core";
+ import { MastraStorage } from "@mastra/core/storage";
+ import { MemoryLibSQL } from "@mastra/libsql";
+ import { WorkflowsPG } from "@mastra/pg";
+ import { ObservabilityStorageClickhouse } from "@mastra/clickhouse";
+
+ export const mastra = new Mastra({
+   storage: new MastraStorage({
+     id: "composite",
+     domains: {
+       memory: new MemoryLibSQL({ url: "file:./memory.db" }),
+       workflows: new WorkflowsPG({ connectionString: process.env.DATABASE_URL }),
+       observability: new ObservabilityStorageClickhouse({
+         url: process.env.CLICKHOUSE_URL,
+         username: process.env.CLICKHOUSE_USERNAME,
+         password: process.env.CLICKHOUSE_PASSWORD,
+       }),
+     },
+   }),
+ });
  ```
 
+ This is useful when different types of data have different performance or operational requirements, such as low-latency storage for memory, durable storage for workflows, and high-throughput storage for observability.
+
+ > **Note:**
+ See [Storage Domains](https://mastra.ai/reference/v1/storage/composite#storage-domains) for more information.
+
  ### Agent-level storage
 
- Add storage to a specific agent when you need data boundaries or compliance requirements:
+ Agent-level storage overrides storage configured at the instance level. Add storage to a specific agent when you need data boundaries or compliance requirements:
 
- ```typescript
+ ```typescript title="src/mastra/agents/memory-agent.ts"
  import { Agent } from "@mastra/core/agent";
  import { Memory } from "@mastra/memory";
  import { PostgresStore } from "@mastra/pg";
 
- const agent = new Agent({
+ export const agent = new Agent({
+   id: "agent",
    memory: new Memory({
      storage: new PostgresStore({
        id: 'agent-storage',
@@ -80,7 +115,7 @@ const agent = new Agent({
 
  This is useful when different agents need to store data in separate databases for security, compliance, or organizational reasons.
 
- ## Threads and Resources
+ ## Threads and resources
 
  Mastra organizes memory into threads using two identifiers:
 
@@ -89,7 +124,7 @@ Mastra organizes memory into threads using two identifiers:
 
  Both identifiers are required for agents to store and recall information:
 
- ```typescript
+ ```typescript
  const stream = await agent.stream("message for agent", {
    memory: {
      thread: "convo_123",
@@ -107,8 +142,9 @@ Mastra can automatically generate descriptive thread titles based on the user's
 
  Use this option when implementing a ChatGPT-style chat interface to render a title alongside each thread in the conversation list (for example, in a sidebar) derived from the thread’s initial user message.
 
- ```typescript
+ ```typescript
  export const testAgent = new Agent({
+   id: "test-agent",
    memory: new Memory({
      options: {
        generateTitle: true,
@@ -123,13 +159,12 @@ To optimize cost or behavior, provide a smaller `model` and custom `instructions
 
  ```typescript
  export const testAgent = new Agent({
+   id: "test-agent",
    memory: new Memory({
      options: {
-       threads: {
-         generateTitle: {
-           model: "openai/gpt-4o-mini",
-           instructions: "Generate a concise title based on the user's first message",
-         },
+       generateTitle: {
+         model: "openai/gpt-4o-mini",
+         instructions: "Generate a concise title based on the user's first message",
        },
      },
  }),
@@ -142,7 +177,7 @@ Semantic recall uses vector embeddings to retrieve relevant past messages based
 
  The vector database doesn't have to be the same as your storage provider. For example, you might use PostgreSQL for storage and Pinecone for vectors:
 
- ```typescript
+ ```typescript
  import { Mastra } from "@mastra/core";
  import { Agent } from "@mastra/core/agent";
  import { Memory } from "@mastra/memory";
@@ -150,7 +185,7 @@ import { PostgresStore } from "@mastra/pg";
  import { PineconeVector } from "@mastra/pinecone";
 
  // Instance-level vector configuration
- const mastra = new Mastra({
+ export const mastra = new Mastra({
    storage: new PostgresStore({
      id: 'mastra-storage',
      connectionString: process.env.DATABASE_URL,
@@ -158,13 +193,12 @@ const mastra = new Mastra({
  });
 
  // Agent-level vector configuration
- const agent = new Agent({
+ export const agent = new Agent({
+   id: "agent",
    memory: new Memory({
      vector: new PineconeVector({
        id: 'agent-vector',
        apiKey: process.env.PINECONE_API_KEY,
-       environment: process.env.PINECONE_ENVIRONMENT,
-       indexName: 'agent-embeddings',
      }),
      options: {
        semanticRecall: {
@@ -80,13 +80,15 @@ const memory = new Memory({
 
  ### Usage with Agents
 
- When using resource-scoped memory, make sure to pass the `resourceId` parameter:
+ When using resource-scoped memory, make sure to pass the `resource` parameter in the memory options:
 
  ```typescript
- // Resource-scoped memory requires resourceId
+ // Resource-scoped memory requires resource
  const response = await agent.generate("Hello!", {
-   threadId: "conversation-123",
-   resourceId: "user-alice-456", // Same user across different threads
+   memory: {
+     thread: "conversation-123",
+     resource: "user-alice-456", // Same user across different threads
+   },
  });
  ```
 
@@ -339,8 +341,10 @@ const thread = await memory.createThread({
 
  // The agent will now have access to this information in all messages
  await agent.generate("What's my blood type?", {
-   threadId: thread.id,
-   resourceId: "user-456",
+   memory: {
+     thread: thread.id,
+     resource: "user-456",
+   },
  });
  // Response: "Your blood type is O+."
  ```
@@ -56,7 +56,7 @@ const agent = new Agent({
    // this is the default vector db if omitted
    vector: new LibSQLVector({
      id: 'agent-vector',
-     connectionUrl: "file:./local.db",
+     url: "file:./local.db",
    }),
  }),
  });
@@ -230,6 +230,4 @@ You might want to disable semantic recall in scenarios like:
 
  ## Viewing Recalled Messages
 
- When tracing is enabled, any messages retrieved via semantic recall will appear in the agent's trace output, alongside recent message history (if configured).
-
- For more info on viewing message traces, see [Viewing Retrieved Messages](./overview#viewing-retrieved-messages).
+ When tracing is enabled, any messages retrieved via semantic recall will appear in the agent's trace output, alongside recent message history (if configured).
@@ -57,7 +57,7 @@ export const agent = new Agent({
    }),
    vector: new LibSQLVector({
      id: 'test-agent-vector',
-     connectionUrl: "file:./vector-memory.db",
+     url: "file:./vector-memory.db",
    }),
    options: {
      lastMessages: 10,
@@ -69,9 +69,7 @@ export const agent = new Agent({
      workingMemory: {
        enabled: true,
      },
-     threads: {
-       generateTitle: true,
-     },
+     generateTitle: true,
    },
  }),
  });
@@ -117,10 +117,12 @@ import { PgVector } from "@mastra/pg";
  import { openai } from "@ai-sdk/openai";
 
  const storage = new PostgresStorage({
+   id: 'pg-storage',
    connectionString: process.env.DATABASE_URL,
  });
 
  const vector = new PgVector({
+   id: 'pg-vector',
    connectionString: process.env.DATABASE_URL,
  });
 
@@ -12,6 +12,7 @@ After generating embeddings, you need to store them in a database that supports
  import { MongoDBVector } from "@mastra/mongodb";
 
  const store = new MongoDBVector({
+   id: 'mongodb-vector',
    uri: process.env.MONGODB_URI,
    dbName: process.env.MONGODB_DATABASE,
  });
@@ -144,6 +145,7 @@ await store.upsert({
  import { AstraVector } from "@mastra/astra";
 
  const store = new AstraVector({
+   id: 'astra-vector',
    token: process.env.ASTRA_DB_TOKEN,
    endpoint: process.env.ASTRA_DB_ENDPOINT,
    keyspace: process.env.ASTRA_DB_KEYSPACE,
@@ -170,7 +172,7 @@ import { LibSQLVector } from "@mastra/core/vector/libsql";
 
  const store = new LibSQLVector({
    id: 'libsql-vector',
-   connectionUrl: process.env.DATABASE_URL,
+   url: process.env.DATABASE_URL,
    authToken: process.env.DATABASE_AUTH_TOKEN, // Optional: for Turso cloud databases
  });
 
@@ -217,6 +219,7 @@ await store.upsert({
  import { CloudflareVector } from "@mastra/vectorize";
 
  const store = new CloudflareVector({
+   id: 'cloudflare-vector',
    accountId: process.env.CF_ACCOUNT_ID,
    apiToken: process.env.CF_API_TOKEN,
  });
@@ -238,7 +241,7 @@ await store.upsert({
  ```ts title="vector-store.ts"
  import { OpenSearchVector } from "@mastra/opensearch";
 
- const store = new OpenSearchVector({ url: process.env.OPENSEARCH_URL });
+ const store = new OpenSearchVector({ id: "opensearch", node: process.env.OPENSEARCH_URL });
 
  await store.createIndex({
    indexName: "my-collection",
@@ -259,7 +262,7 @@ await store.upsert({
  ```ts title="vector-store.ts"
  import { ElasticSearchVector } from "@mastra/elasticsearch";
 
- const store = new ElasticSearchVector({ url: process.env.ELASTICSEARCH_URL });
+ const store = new ElasticSearchVector({ id: 'elasticsearch-vector', url: process.env.ELASTICSEARCH_URL });
 
  await store.createIndex({
    indexName: "my-collection",
@@ -280,6 +283,7 @@ await store.upsert({
  import { CouchbaseVector } from "@mastra/couchbase";
 
  const store = new CouchbaseVector({
+   id: 'couchbase-vector',
    connectionString: process.env.COUCHBASE_CONNECTION_STRING,
    username: process.env.COUCHBASE_USERNAME,
    password: process.env.COUCHBASE_PASSWORD,
@@ -331,6 +335,7 @@ For detailed setup instructions and best practices, see the [official LanceDB do
  import { S3Vectors } from "@mastra/s3vectors";
 
  const store = new S3Vectors({
+   id: 's3-vectors',
    vectorBucketName: "my-vector-bucket",
    clientConfig: {
      region: "us-east-1",
@@ -373,7 +378,7 @@ The dimension size must match the output dimension of your chosen embedding mode
  - Cohere embed-multilingual-v3: 1024 dimensions
  - Google text-embedding-004: 768 dimensions (or custom)
 
- important
+ > **Note:**
  Index dimensions cannot be changed after creation. To use a different model, delete and recreate the index with the new dimension size.
 
  ### Naming Rules for Databases
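The dimension-matching rule in the hunk above can be illustrated with a small standalone guard. This is a hypothetical helper (the model keys and the `checkDimension` name are made up for illustration); the dimensions mirror the list above:

```typescript
// Hypothetical guard: an index's dimension is fixed at creation time, so an
// embedding must have exactly the length the index was created with.
const MODEL_DIMENSIONS: Record<string, number> = {
  "cohere/embed-multilingual-v3": 1024, // Cohere embed-multilingual-v3
  "google/text-embedding-004": 768, // Google text-embedding-004 (default)
};

function checkDimension(model: string, vector: number[]): boolean {
  const expected = MODEL_DIMENSIONS[model];
  // Models not in the table are not validated here.
  return expected === undefined || vector.length === expected;
}
```

If the check fails, recreate the index with the new model's dimension rather than reusing the old one, since index dimensions cannot be changed after creation.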
@@ -537,7 +542,7 @@ The upsert operation:
 
  Vector stores support rich metadata (any JSON-serializable fields) for filtering and organization. Since metadata is stored with no fixed schema, use consistent field naming to avoid unexpected query results.
 
- important
+ > **Note:**
  Metadata is crucial for vector storage - without it, you'd only have numerical embeddings with no way to return the original text or filter results. Always store at least the source text as metadata.
 
  ```ts
@@ -171,7 +171,7 @@ The Vector Query Tool supports database-specific configurations that enable you
  > **Note:**
  These configurations are for **query-time options** like namespaces, performance tuning, and filtering, not for database connection setup.
 
- Connection credentials (URLs, auth tokens) are configured when you instantiate the vector store class (e.g., `new LibSQLVector({ connectionUrl: '...' })`).
+ Connection credentials (URLs, auth tokens) are configured when you instantiate the vector store class (e.g., `new LibSQLVector({ url: '...' })`).
 
  ```ts
  import { createVectorQueryTool } from "@mastra/rag";
@@ -258,11 +258,10 @@ requestContext.set("databaseConfig", {
  },
  });
 
- await pineconeQueryTool.execute({
-   context: { queryText: "search query" },
-   mastra,
-   requestContext,
- });
+ await pineconeQueryTool.execute(
+   { queryText: "search query" },
+   { mastra, requestContext }
+ );
  ```
 
  For detailed configuration options and advanced usage, see the [Vector Query Tool Reference](https://mastra.ai/reference/v1/tools/vector-query-tool).
@@ -295,6 +295,24 @@ const results = await store.query({
 
  - Supports advanced filtering with nested conditions
  - Payload (metadata) fields must be explicitly indexed for filtering
+ - Use `createPayloadIndex()` to index fields you want to filter on:
+
+   ```typescript
+   // Index a field before filtering on it
+   await store.createPayloadIndex({
+     indexName: "my_index",
+     fieldName: "source",
+     fieldSchema: "keyword", // 'keyword' | 'integer' | 'float' | 'geo' | 'text' | 'bool' | 'datetime' | 'uuid'
+   });
+
+   // Now filtering works
+   const results = await store.query({
+     indexName: "my_index",
+     queryVector: queryVector,
+     filter: { source: "document-a" },
+   });
+   ```
+
  - Efficient handling of geo-spatial queries
  - Special handling for null and empty values
  - Vector-specific filtering capabilities