@mastra/s3vectors 1.0.0-beta.1 → 1.0.0-beta.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,549 @@
+ > Guide on retrieval processes in Mastra
+
+ # Retrieval in RAG Systems
+
+ After storing embeddings, you need to retrieve relevant chunks to answer user queries.
+
+ Mastra provides flexible retrieval options with support for semantic search, filtering, and re-ranking.
+
+ ## How Retrieval Works
+
+ 1. The user's query is converted to an embedding using the same model used for document embeddings
+ 2. This embedding is compared to stored embeddings using vector similarity
+ 3. The most similar chunks are retrieved and can then optionally be:
+
+ - Filtered by metadata
+ - Re-ranked for better relevance
+ - Processed through a knowledge graph
+
+ ## Basic Retrieval
+
+ The simplest approach is direct semantic search. This method uses vector similarity to find chunks that are semantically similar to the query:
+
+ ```ts
+ import { embed } from "ai";
+ import { PgVector } from "@mastra/pg";
+ import { ModelRouterEmbeddingModel } from "@mastra/core/llm";
+
+ // Convert query to embedding
+ const { embedding } = await embed({
+   value: "What are the main points in the article?",
+   model: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),
+ });
+
+ // Query vector store
+ const pgVector = new PgVector({
+   id: "pg-vector",
+   connectionString: process.env.POSTGRES_CONNECTION_STRING,
+ });
+ const results = await pgVector.query({
+   indexName: "embeddings",
+   queryVector: embedding,
+   topK: 10,
+ });
+
+ // Display results
+ console.log(results);
+ ```
+
+ The `topK` parameter specifies the maximum number of most similar results to return from the vector search.
+
+ Results include both the text content and a similarity score:
+
+ ```ts
+ [
+   {
+     text: "Climate change poses significant challenges...",
+     score: 0.89,
+     metadata: { source: "article1.txt" },
+   },
+   {
+     text: "Rising temperatures affect crop yields...",
+     score: 0.82,
+     metadata: { source: "article1.txt" },
+   },
+ ];
+ ```
+
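+ These retrieved chunks are typically concatenated into a context block for whichever model answers the query. A minimal sketch, assuming the result shape shown above (each result carrying its chunk text in a top-level `text` field):
+
+ ```ts
+ // Join the retrieved chunks into one context block; the separator keeps
+ // chunk boundaries visible to the model.
+ const context = results.map((result) => result.text).join("\n---\n");
+
+ // Hand the context plus the original question to your LLM of choice.
+ const prompt = `Answer using only the context below.\n\nContext:\n${context}\n\nQuestion: What are the main points in the article?`;
+ ```
+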
+ ## Advanced Retrieval Options
+
+ ### Metadata Filtering
+
+ Filter results based on metadata fields to narrow down the search space. This approach, which combines vector similarity search with metadata filters, is sometimes called hybrid vector search because it merges semantic search with structured filtering criteria.
+
+ This is useful when you have documents from different sources, time periods, or with specific attributes. Mastra provides a unified MongoDB-style query syntax that works across all supported vector stores.
+
+ For detailed information about available operators and syntax, see the [Metadata Filters Reference](https://mastra.ai/reference/v1/rag/metadata-filters).
+
+ Basic filtering examples:
+
+ ```ts
+ // Simple equality filter
+ const results = await pgVector.query({
+   indexName: "embeddings",
+   queryVector: embedding,
+   topK: 10,
+   filter: {
+     source: "article1.txt",
+   },
+ });
+
+ // Numeric comparison
+ const results = await pgVector.query({
+   indexName: "embeddings",
+   queryVector: embedding,
+   topK: 10,
+   filter: {
+     price: { $gt: 100 },
+   },
+ });
+
+ // Multiple conditions
+ const results = await pgVector.query({
+   indexName: "embeddings",
+   queryVector: embedding,
+   topK: 10,
+   filter: {
+     category: "electronics",
+     price: { $lt: 1000 },
+     inStock: true,
+   },
+ });
+
+ // Array operations
+ const results = await pgVector.query({
+   indexName: "embeddings",
+   queryVector: embedding,
+   topK: 10,
+   filter: {
+     tags: { $in: ["sale", "new"] },
+   },
+ });
+
+ // Logical operators
+ const results = await pgVector.query({
+   indexName: "embeddings",
+   queryVector: embedding,
+   topK: 10,
+   filter: {
+     $or: [{ category: "electronics" }, { category: "accessories" }],
+     $and: [{ price: { $gt: 50 } }, { price: { $lt: 200 } }],
+   },
+ });
+ ```
+
+ Common use cases for metadata filtering:
+
+ - Filter by document source or type
+ - Filter by date ranges (see the sketch after this list)
+ - Filter by specific categories or tags
+ - Filter by numerical ranges (e.g., price, rating)
+ - Combine multiple conditions for precise querying
+ - Filter by document attributes (e.g., language, author)
+
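+ To make the date-range case concrete, here is a minimal sketch, assuming chunks were stored with a hypothetical numeric `publishedAt` timestamp (milliseconds since epoch) in their metadata:
+
+ ```ts
+ // Restrict results to chunks published during 2024 by comparing
+ // against numeric timestamps with range operators.
+ const results = await pgVector.query({
+   indexName: "embeddings",
+   queryVector: embedding,
+   topK: 10,
+   filter: {
+     publishedAt: {
+       $gte: new Date("2024-01-01").getTime(), // hypothetical metadata field
+       $lt: new Date("2025-01-01").getTime(),
+     },
+   },
+ });
+ ```
+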
+ ### Vector Query Tool
+
+ Sometimes you want to give your agent the ability to query a vector database directly. The Vector Query Tool puts your agent in charge of retrieval decisions, combining semantic search with optional filtering and re-ranking based on the agent's understanding of the user's needs.
+
+ ```ts
+ import { createVectorQueryTool } from "@mastra/rag";
+ import { ModelRouterEmbeddingModel } from "@mastra/core/llm";
+
+ const vectorQueryTool = createVectorQueryTool({
+   vectorStoreName: "pgVector",
+   indexName: "embeddings",
+   model: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),
+ });
+ ```
+
+ When creating the tool, pay special attention to its name and description: these help the agent understand when and how to use the retrieval capabilities. For example, you might name it "SearchKnowledgeBase" and describe it as "Search through our documentation to find relevant information about X topic."
+
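+ As a sketch of that advice, the example below passes a custom `id` and `description` when creating the tool; treat these two options as an assumption here and confirm them in the Vector Query Tool Reference linked later in this section:
+
+ ```ts
+ // Hypothetical naming overrides: a task-specific id and description
+ // make it clearer to the agent when this tool should be called.
+ const searchKnowledgeBase = createVectorQueryTool({
+   vectorStoreName: "pgVector",
+   indexName: "embeddings",
+   model: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),
+   id: "SearchKnowledgeBase", // assumed option
+   description:
+     "Search through our documentation to find relevant information about X topic.", // assumed option
+ });
+ ```
+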
+ This is particularly useful when:
+
+ - Your agent needs to dynamically decide what information to retrieve
+ - The retrieval process requires complex decision-making
+ - You want the agent to combine multiple retrieval strategies based on context
+
+ #### Database-Specific Configurations
+
+ The Vector Query Tool supports database-specific configurations that enable you to leverage unique features and optimizations of different vector stores.
+
+ > **Note:**
+ > These configurations are for **query-time options** like namespaces, performance tuning, and filtering, not for database connection setup.
+
+ Connection credentials (URLs, auth tokens) are configured when you instantiate the vector store class (e.g., `new LibSQLVector({ connectionUrl: '...' })`).
+
+ ```ts
+ import { createVectorQueryTool } from "@mastra/rag";
+ import { ModelRouterEmbeddingModel } from "@mastra/core/llm";
+
+ // Pinecone with namespace
+ const pineconeQueryTool = createVectorQueryTool({
+   vectorStoreName: "pinecone",
+   indexName: "docs",
+   model: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),
+   databaseConfig: {
+     pinecone: {
+       namespace: "production", // Isolate data by environment
+     },
+   },
+ });
+
+ // pgVector with performance tuning
+ const pgVectorQueryTool = createVectorQueryTool({
+   vectorStoreName: "postgres",
+   indexName: "embeddings",
+   model: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),
+   databaseConfig: {
+     pgvector: {
+       minScore: 0.7, // Filter low-quality results
+       ef: 200, // HNSW search parameter
+       probes: 10, // IVFFlat probe parameter
+     },
+   },
+ });
+
+ // Chroma with advanced filtering
+ const chromaQueryTool = createVectorQueryTool({
+   vectorStoreName: "chroma",
+   indexName: "documents",
+   model: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),
+   databaseConfig: {
+     chroma: {
+       where: { category: "technical" },
+       whereDocument: { $contains: "API" },
+     },
+   },
+ });
+
+ // LanceDB with table specificity
+ const lanceQueryTool = createVectorQueryTool({
+   vectorStoreName: "lance",
+   indexName: "documents",
+   model: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),
+   databaseConfig: {
+     lance: {
+       tableName: "myVectors", // Specify which table to query
+       includeAllColumns: true, // Include all metadata columns in results
+     },
+   },
+ });
+ ```
+
+ **Key Benefits:**
+
+ - **Pinecone namespaces**: Organize vectors by tenant, environment, or data type
+ - **pgVector optimization**: Control search accuracy and speed with ef/probes parameters
+ - **Quality filtering**: Set minimum similarity thresholds to improve result relevance
+ - **LanceDB tables**: Separate data into tables for better organization and performance
+ - **Runtime flexibility**: Override configurations dynamically based on context
+
+ **Common Use Cases:**
+
+ - Multi-tenant applications using Pinecone namespaces
+ - Performance optimization in high-load scenarios
+ - Environment-specific configurations (dev/staging/prod)
+ - Quality-gated search results
+ - Embedded, file-based vector storage with LanceDB for edge deployment scenarios
+
+ You can also override these configurations at runtime using the request context:
+
+ ```ts
+ import { RequestContext } from "@mastra/core/request-context";
+
+ const requestContext = new RequestContext();
+ requestContext.set("databaseConfig", {
+   pinecone: {
+     namespace: "runtime-namespace",
+   },
+ });
+
+ await pineconeQueryTool.execute({
+   context: { queryText: "search query" },
+   mastra, // your Mastra instance
+   requestContext,
+ });
+ ```
+
+ For detailed configuration options and advanced usage, see the [Vector Query Tool Reference](https://mastra.ai/reference/v1/tools/vector-query-tool).
+
+ ### Vector Store Prompts
+
+ Vector store prompts define query patterns and filtering capabilities for each vector database implementation. When implementing filtering, these prompts are required in the agent's instructions to specify valid operators and syntax for each vector store implementation.
+
+ **pgvector:**
+
+ ```ts title="vector-store.ts"
+ import { Agent } from "@mastra/core/agent";
+ import { PGVECTOR_PROMPT } from "@mastra/pg";
+
+ export const ragAgent = new Agent({
+   id: "rag-agent",
+   name: "RAG Agent",
+   model: "openai/gpt-5.1",
+   instructions: `
+   Process queries using the provided context. Structure responses to be concise and relevant.
+   ${PGVECTOR_PROMPT}
+   `,
+   tools: { vectorQueryTool },
+ });
+ ```
+
+ **pinecone:**
+
+ ```ts title="vector-store.ts"
+ import { Agent } from "@mastra/core/agent";
+ import { PINECONE_PROMPT } from "@mastra/pinecone";
+
+ export const ragAgent = new Agent({
+   id: "rag-agent",
+   name: "RAG Agent",
+   model: "openai/gpt-5.1",
+   instructions: `
+   Process queries using the provided context. Structure responses to be concise and relevant.
+   ${PINECONE_PROMPT}
+   `,
+   tools: { vectorQueryTool },
+ });
+ ```
+
+ **qdrant:**
+
+ ```ts title="vector-store.ts"
+ import { Agent } from "@mastra/core/agent";
+ import { QDRANT_PROMPT } from "@mastra/qdrant";
+
+ export const ragAgent = new Agent({
+   id: "rag-agent",
+   name: "RAG Agent",
+   model: "openai/gpt-5.1",
+   instructions: `
+   Process queries using the provided context. Structure responses to be concise and relevant.
+   ${QDRANT_PROMPT}
+   `,
+   tools: { vectorQueryTool },
+ });
+ ```
+
+ **chroma:**
+
+ ```ts title="vector-store.ts"
+ import { Agent } from "@mastra/core/agent";
+ import { CHROMA_PROMPT } from "@mastra/chroma";
+
+ export const ragAgent = new Agent({
+   id: "rag-agent",
+   name: "RAG Agent",
+   model: "openai/gpt-5.1",
+   instructions: `
+   Process queries using the provided context. Structure responses to be concise and relevant.
+   ${CHROMA_PROMPT}
+   `,
+   tools: { vectorQueryTool },
+ });
+ ```
+
352
+
353
+ ```ts title="vector-store.ts"
354
+ import { ASTRA_PROMPT } from "@mastra/astra";
355
+
356
+ export const ragAgent = new Agent({
357
+ id: "rag-agent",
358
+ name: "RAG Agent",
359
+ model: "openai/gpt-5.1",
360
+ instructions: `
361
+ Process queries using the provided context. Structure responses to be concise and relevant.
362
+ ${ASTRA_PROMPT}
363
+ `,
364
+ tools: { vectorQueryTool },
365
+ });
366
+ ```
367
+
368
+
369
+
+ **libsql:**
+
+ ```ts title="vector-store.ts"
+ import { Agent } from "@mastra/core/agent";
+ import { LIBSQL_PROMPT } from "@mastra/libsql";
+
+ export const ragAgent = new Agent({
+   id: "rag-agent",
+   name: "RAG Agent",
+   model: "openai/gpt-5.1",
+   instructions: `
+   Process queries using the provided context. Structure responses to be concise and relevant.
+   ${LIBSQL_PROMPT}
+   `,
+   tools: { vectorQueryTool },
+ });
+ ```
+
+ **upstash:**
+
+ ```ts title="vector-store.ts"
+ import { Agent } from "@mastra/core/agent";
+ import { UPSTASH_PROMPT } from "@mastra/upstash";
+
+ export const ragAgent = new Agent({
+   id: "rag-agent",
+   name: "RAG Agent",
+   model: "openai/gpt-5.1",
+   instructions: `
+   Process queries using the provided context. Structure responses to be concise and relevant.
+   ${UPSTASH_PROMPT}
+   `,
+   tools: { vectorQueryTool },
+ });
+ ```
+
+ **vectorize:**
+
+ ```ts title="vector-store.ts"
+ import { Agent } from "@mastra/core/agent";
+ import { VECTORIZE_PROMPT } from "@mastra/vectorize";
+
+ export const ragAgent = new Agent({
+   id: "rag-agent",
+   name: "RAG Agent",
+   model: "openai/gpt-5.1",
+   instructions: `
+   Process queries using the provided context. Structure responses to be concise and relevant.
+   ${VECTORIZE_PROMPT}
+   `,
+   tools: { vectorQueryTool },
+ });
+ ```
+
+ **mongodb:**
+
+ ```ts title="vector-store.ts"
+ import { Agent } from "@mastra/core/agent";
+ import { MONGODB_PROMPT } from "@mastra/mongodb";
+
+ export const ragAgent = new Agent({
+   id: "rag-agent",
+   name: "RAG Agent",
+   model: "openai/gpt-5.1",
+   instructions: `
+   Process queries using the provided context. Structure responses to be concise and relevant.
+   ${MONGODB_PROMPT}
+   `,
+   tools: { vectorQueryTool },
+ });
+ ```
+
+ **opensearch:**
+
+ ```ts title="vector-store.ts"
+ import { Agent } from "@mastra/core/agent";
+ import { OPENSEARCH_PROMPT } from "@mastra/opensearch";
+
+ export const ragAgent = new Agent({
+   id: "rag-agent",
+   name: "RAG Agent",
+   model: "openai/gpt-5.1",
+   instructions: `
+   Process queries using the provided context. Structure responses to be concise and relevant.
+   ${OPENSEARCH_PROMPT}
+   `,
+   tools: { vectorQueryTool },
+ });
+ ```
+
+ **s3vectors:**
+
+ ```ts title="vector-store.ts"
+ import { Agent } from "@mastra/core/agent";
+ import { S3VECTORS_PROMPT } from "@mastra/s3vectors";
+
+ export const ragAgent = new Agent({
+   id: "rag-agent",
+   name: "RAG Agent",
+   model: "openai/gpt-5.1",
+   instructions: `
+   Process queries using the provided context. Structure responses to be concise and relevant.
+   ${S3VECTORS_PROMPT}
+   `,
+   tools: { vectorQueryTool },
+ });
+ ```
+
+ ### Re-ranking
+
+ Initial vector similarity search can sometimes miss nuanced relevance. Re-ranking is a more computationally expensive but more accurate process that improves results by:
+
+ - Considering word order and exact matches
+ - Applying more sophisticated relevance scoring
+ - Using cross-attention between the query and documents
+
+ Here's how to use re-ranking:
+
+ ```ts
+ import {
+   rerankWithScorer as rerank,
+   MastraAgentRelevanceScorer,
+ } from "@mastra/rag";
+
+ const query = "What are the main points in the article?";
+
+ // Get initial results from vector search (queryEmbedding is the
+ // embedding of `query`, produced as in the basic retrieval example)
+ const initialResults = await pgVector.query({
+   indexName: "embeddings",
+   queryVector: queryEmbedding,
+   topK: 10,
+ });
+
+ // Create a relevance scorer
+ const relevanceProvider = new MastraAgentRelevanceScorer(
+   "relevance-scorer",
+   "openai/gpt-5.1",
+ );
+
+ // Re-rank the results
+ const rerankedResults = await rerank({
+   results: initialResults,
+   query,
+   scorer: relevanceProvider,
+   options: {
+     weights: {
+       semantic: 0.5, // How well the content matches the query semantically
+       vector: 0.3, // Original vector similarity score
+       position: 0.2, // Preserves original result ordering
+     },
+     topK: 10,
+   },
+ });
+ ```
+
+ The weights control how different factors influence the final ranking:
+
+ - `semantic`: Higher values prioritize semantic understanding and relevance to the query
+ - `vector`: Higher values favor the original vector similarity scores
+ - `position`: Higher values help maintain the original ordering of results
+
+ > **Note:**
+ > For semantic scoring to work properly during re-ranking, each result must include the text content in its `metadata.text` field.
+
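+ If your stored results keep their chunk text elsewhere, you can reshape them before re-ranking. A minimal sketch, assuming each result exposes its chunk text on a top-level `text` field as in the basic retrieval example:
+
+ ```ts
+ // Copy each chunk's text into metadata.text so the semantic scorer can read it.
+ const rerankableResults = initialResults.map((result) => ({
+   ...result,
+   metadata: { ...result.metadata, text: result.text },
+ }));
+ ```
+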
+ You can also use other relevance score providers like Cohere or ZeroEntropy:
+
+ ```ts
+ const relevanceProvider = new CohereRelevanceScorer("rerank-v3.5");
+ ```
+
+ ```ts
+ const relevanceProvider = new ZeroEntropyRelevanceScorer("zerank-1");
+ ```
+
545
+ The re-ranked results combine vector similarity with semantic understanding to improve retrieval quality.
546
+
547
+ For more details about re-ranking, see the [rerank()](https://mastra.ai/reference/v1/rag/rerankWithScorer) method.
548
+
549
+ For graph-based retrieval that follows connections between chunks, see the [GraphRAG](https://mastra.ai/docs/v1/rag/graph-rag) documentation.