@mastra/vectorize 1.0.0-beta.3 → 1.0.1-alpha.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,5 +1,3 @@
- > Guide on retrieval processes in Mastra
-
  # Retrieval in RAG Systems
 
  After storing embeddings, you need to retrieve relevant chunks to answer user queries.
@@ -21,29 +19,29 @@ Mastra provides flexible retrieval options with support for semantic search, fil
  The simplest approach is direct semantic search. This method uses vector similarity to find chunks that are semantically similar to the query:
 
  ```ts
- import { embed } from "ai";
- import { PgVector } from "@mastra/pg";
- import { ModelRouterEmbeddingModel } from "@mastra/core/llm";
+ import { embed } from 'ai'
+ import { PgVector } from '@mastra/pg'
+ import { ModelRouterEmbeddingModel } from '@mastra/core/llm'
 
  // Convert query to embedding
  const { embedding } = await embed({
-   value: "What are the main points in the article?",
-   model: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),
- });
+   value: 'What are the main points in the article?',
+   model: new ModelRouterEmbeddingModel('openai/text-embedding-3-small'),
+ })
 
  // Query vector store
  const pgVector = new PgVector({
    id: 'pg-vector',
    connectionString: process.env.POSTGRES_CONNECTION_STRING,
- });
+ })
  const results = await pgVector.query({
-   indexName: "embeddings",
+   indexName: 'embeddings',
    queryVector: embedding,
    topK: 10,
- });
+ })
 
  // Display results
- console.log(results);
+ console.log(results)
  ```
 
  The `topK` parameter specifies the maximum number of most similar results to return from the vector search.
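To make the effect of `topK` concrete, here is a small self-contained sketch (illustrative only, not Mastra internals) of what "top K most similar" means over a list of scored matches:

```typescript
// Illustrative sketch: `topK` conceptually means "sort candidates by
// similarity score, highest first, and keep at most K of them".
type Match = { text: string; score: number }

function topKResults(matches: Match[], topK: number): Match[] {
  return [...matches].sort((a, b) => b.score - a.score).slice(0, topK)
}

const ranked = topKResults(
  [
    { text: 'chunk A', score: 0.71 },
    { text: 'chunk B', score: 0.89 },
    { text: 'chunk C', score: 0.64 },
  ],
  2,
)
// ranked holds 'chunk B' then 'chunk A'; 'chunk C' is cut off by topK
```

Raising `topK` trades more downstream context for more noise; the vector store performs this ranking for you, so the sketch only shows the semantics.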
@@ -53,16 +51,16 @@ Results include both the text content and a similarity score:
  ```ts
  [
    {
-     text: "Climate change poses significant challenges...",
+     text: 'Climate change poses significant challenges...',
      score: 0.89,
-     metadata: { source: "article1.txt" },
+     metadata: { source: 'article1.txt' },
    },
    {
-     text: "Rising temperatures affect crop yields...",
+     text: 'Rising temperatures affect crop yields...',
      score: 0.82,
-     metadata: { source: "article1.txt" },
+     metadata: { source: 'article1.txt' },
    },
- ];
+ ]
  ```
 
  ## Advanced Retrieval Options
@@ -73,63 +71,63 @@ Filter results based on metadata fields to narrow down the search space. This ap
 
  This is useful when you have documents from different sources, time periods, or with specific attributes. Mastra provides a unified MongoDB-style query syntax that works across all supported vector stores.
 
- For detailed information about available operators and syntax, see the [Metadata Filters Reference](https://mastra.ai/reference/v1/rag/metadata-filters).
+ For detailed information about available operators and syntax, see the [Metadata Filters Reference](https://mastra.ai/reference/rag/metadata-filters).
 
  Basic filtering examples:
 
  ```ts
  // Simple equality filter
  const results = await pgVector.query({
-   indexName: "embeddings",
+   indexName: 'embeddings',
    queryVector: embedding,
    topK: 10,
    filter: {
-     source: "article1.txt",
+     source: 'article1.txt',
    },
- });
+ })
 
  // Numeric comparison
  const results = await pgVector.query({
-   indexName: "embeddings",
+   indexName: 'embeddings',
    queryVector: embedding,
    topK: 10,
    filter: {
      price: { $gt: 100 },
    },
- });
+ })
 
  // Multiple conditions
  const results = await pgVector.query({
-   indexName: "embeddings",
+   indexName: 'embeddings',
    queryVector: embedding,
    topK: 10,
    filter: {
-     category: "electronics",
+     category: 'electronics',
      price: { $lt: 1000 },
      inStock: true,
    },
- });
+ })
 
  // Array operations
  const results = await pgVector.query({
-   indexName: "embeddings",
+   indexName: 'embeddings',
    queryVector: embedding,
    topK: 10,
    filter: {
-     tags: { $in: ["sale", "new"] },
+     tags: { $in: ['sale', 'new'] },
    },
- });
+ })
 
  // Logical operators
  const results = await pgVector.query({
-   indexName: "embeddings",
+   indexName: 'embeddings',
    queryVector: embedding,
    topK: 10,
    filter: {
-     $or: [{ category: "electronics" }, { category: "accessories" }],
+     $or: [{ category: 'electronics' }, { category: 'accessories' }],
      $and: [{ price: { $gt: 50 } }, { price: { $lt: 200 } }],
    },
- });
+ })
  ```
 
  Common use cases for metadata filtering:
@@ -146,14 +144,14 @@ Common use cases for metadata filtering:
 
  Sometimes you want to give your agent the ability to query a vector database directly. The Vector Query Tool allows your agent to be in charge of retrieval decisions, combining semantic search with optional filtering and reranking based on the agent's understanding of the user's needs.
 
  ```ts
- import { createVectorQueryTool } from "@mastra/rag";
- import { ModelRouterEmbeddingModel } from "@mastra/core/llm";
+ import { createVectorQueryTool } from '@mastra/rag'
+ import { ModelRouterEmbeddingModel } from '@mastra/core/llm'
 
  const vectorQueryTool = createVectorQueryTool({
-   vectorStoreName: "pgVector",
-   indexName: "embeddings",
-   model: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),
- });
+   vectorStoreName: 'pgVector',
+   indexName: 'embeddings',
+   model: new ModelRouterEmbeddingModel('openai/text-embedding-3-small'),
+ })
  ```
 
  When creating the tool, pay special attention to the tool's name and description - these help the agent understand when and how to use the retrieval capabilities. For example, you might name it "SearchKnowledgeBase" and describe it as "Search through our documentation to find relevant information about X topic."
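As a sketch of that advice (hypothetical values throughout; the `id` and `description` options are assumptions about the fields `createVectorQueryTool` accepts, not confirmed by this diff):

```typescript
import { createVectorQueryTool } from '@mastra/rag'
import { ModelRouterEmbeddingModel } from '@mastra/core/llm'

// Hypothetical naming/description; adjust to the fields your version accepts.
const searchKnowledgeBase = createVectorQueryTool({
  id: 'SearchKnowledgeBase', // assumed option: a descriptive tool name
  description:
    'Search through our documentation to find relevant information about deployment.', // assumed option
  vectorStoreName: 'pgVector',
  indexName: 'embeddings',
  model: new ModelRouterEmbeddingModel('openai/text-embedding-3-small'),
})
```

A specific, task-oriented description gives the agent a much stronger signal about when to invoke the tool than a generic "search the database".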
@@ -168,32 +166,31 @@ This is particularly useful when:
 
  The Vector Query Tool supports database-specific configurations that enable you to leverage unique features and optimizations of different vector stores.
 
- > **Note:**
- These configurations are for **query-time options** like namespaces, performance tuning, and filtering—not for database connection setup.
-
- Connection credentials (URLs, auth tokens) are configured when you instantiate the vector store class (e.g., `new LibSQLVector({ connectionUrl: '...' })`).
+ > **Note:** These configurations are for **query-time options** like namespaces, performance tuning, and filtering—not for database connection setup.
+ >
+ > Connection credentials (URLs, auth tokens) are configured when you instantiate the vector store class (e.g., `new LibSQLVector({ url: '...' })`).
 
  ```ts
- import { createVectorQueryTool } from "@mastra/rag";
- import { ModelRouterEmbeddingModel } from "@mastra/core/llm";
+ import { createVectorQueryTool } from '@mastra/rag'
+ import { ModelRouterEmbeddingModel } from '@mastra/core/llm'
 
  // Pinecone with namespace
  const pineconeQueryTool = createVectorQueryTool({
-   vectorStoreName: "pinecone",
-   indexName: "docs",
-   model: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),
+   vectorStoreName: 'pinecone',
+   indexName: 'docs',
+   model: new ModelRouterEmbeddingModel('openai/text-embedding-3-small'),
    databaseConfig: {
      pinecone: {
-       namespace: "production", // Isolate data by environment
+       namespace: 'production', // Isolate data by environment
      },
    },
- });
+ })
 
  // pgVector with performance tuning
  const pgVectorQueryTool = createVectorQueryTool({
-   vectorStoreName: "postgres",
-   indexName: "embeddings",
-   model: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),
+   vectorStoreName: 'postgres',
+   indexName: 'embeddings',
+   model: new ModelRouterEmbeddingModel('openai/text-embedding-3-small'),
    databaseConfig: {
      pgvector: {
        minScore: 0.7, // Filter low-quality results
@@ -201,33 +198,33 @@ const pgVectorQueryTool = createVectorQueryTool({
        probes: 10, // IVFFlat probe parameter
      },
    },
- });
+ })
 
  // Chroma with advanced filtering
  const chromaQueryTool = createVectorQueryTool({
-   vectorStoreName: "chroma",
-   indexName: "documents",
-   model: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),
+   vectorStoreName: 'chroma',
+   indexName: 'documents',
+   model: new ModelRouterEmbeddingModel('openai/text-embedding-3-small'),
    databaseConfig: {
      chroma: {
-       where: { category: "technical" },
-       whereDocument: { $contains: "API" },
+       where: { category: 'technical' },
+       whereDocument: { $contains: 'API' },
      },
    },
- });
+ })
 
  // LanceDB with table specificity
  const lanceQueryTool = createVectorQueryTool({
-   vectorStoreName: "lance",
-   indexName: "documents",
-   model: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),
+   vectorStoreName: 'lance',
+   indexName: 'documents',
+   model: new ModelRouterEmbeddingModel('openai/text-embedding-3-small'),
    databaseConfig: {
      lance: {
-       tableName: "myVectors", // Specify which table to query
+       tableName: 'myVectors', // Specify which table to query
        includeAllColumns: true, // Include all metadata columns in results
      },
    },
- });
+ })
  ```
 
  **Key Benefits:**
@@ -249,238 +246,211 @@ const lanceQueryTool = createVectorQueryTool({
  You can also override these configurations at runtime using the request context:
 
  ```ts
- import { RequestContext } from "@mastra/core/request-context";
+ import { RequestContext } from '@mastra/core/request-context'
 
- const requestContext = new RequestContext();
- requestContext.set("databaseConfig", {
+ const requestContext = new RequestContext()
+ requestContext.set('databaseConfig', {
    pinecone: {
-     namespace: "runtime-namespace",
+     namespace: 'runtime-namespace',
    },
- });
+ })
 
- await pineconeQueryTool.execute({
-   context: { queryText: "search query" },
-   mastra,
-   requestContext,
- });
+ await pineconeQueryTool.execute({ queryText: 'search query' }, { mastra, requestContext })
  ```
 
- For detailed configuration options and advanced usage, see the [Vector Query Tool Reference](https://mastra.ai/reference/v1/tools/vector-query-tool).
+ For detailed configuration options and advanced usage, see the [Vector Query Tool Reference](https://mastra.ai/reference/tools/vector-query-tool).
 
  ### Vector Store Prompts
 
- Vector store prompts define query patterns and filtering capabilities for each vector database implementation.
- When implementing filtering, these prompts are required in the agent's instructions to specify valid operators and syntax for each vector store implementation.
+ Vector store prompts define query patterns and filtering capabilities for each vector database implementation. When implementing filtering, these prompts are required in the agent's instructions to specify valid operators and syntax for each vector store implementation.
 
- **pgvector:**
+ **pgVector**:
 
  ```ts
- import { PGVECTOR_PROMPT } from "@mastra/pg";
+ import { PGVECTOR_PROMPT } from '@mastra/pg'
 
  export const ragAgent = new Agent({
-   id: "rag-agent",
-   name: "RAG Agent",
-   model: "openai/gpt-5.1",
+   id: 'rag-agent',
+   name: 'RAG Agent',
+   model: 'openai/gpt-5.1',
    instructions: `
      Process queries using the provided context. Structure responses to be concise and relevant.
      ${PGVECTOR_PROMPT}
    `,
    tools: { vectorQueryTool },
- });
+ })
  ```
 
-
-
- **pinecone:**
+ **Pinecone**:
 
- ```ts title="vector-store.ts"
- import { PINECONE_PROMPT } from "@mastra/pinecone";
+ ```ts
+ import { PINECONE_PROMPT } from '@mastra/pinecone'
 
  export const ragAgent = new Agent({
-   id: "rag-agent",
-   name: "RAG Agent",
-   model: "openai/gpt-5.1",
+   id: 'rag-agent',
+   name: 'RAG Agent',
+   model: 'openai/gpt-5.1',
    instructions: `
      Process queries using the provided context. Structure responses to be concise and relevant.
      ${PINECONE_PROMPT}
    `,
    tools: { vectorQueryTool },
- });
+ })
  ```
 
-
-
- **qdrant:**
+ **Qdrant**:
 
- ```ts title="vector-store.ts"
- import { QDRANT_PROMPT } from "@mastra/qdrant";
+ ```ts
+ import { QDRANT_PROMPT } from '@mastra/qdrant'
 
  export const ragAgent = new Agent({
-   id: "rag-agent",
-   name: "RAG Agent",
-   model: "openai/gpt-5.1",
+   id: 'rag-agent',
+   name: 'RAG Agent',
+   model: 'openai/gpt-5.1',
    instructions: `
      Process queries using the provided context. Structure responses to be concise and relevant.
      ${QDRANT_PROMPT}
    `,
    tools: { vectorQueryTool },
- });
+ })
  ```
 
-
+ **Chroma**:
 
- **chroma:**
-
- ```ts title="vector-store.ts"
- import { CHROMA_PROMPT } from "@mastra/chroma";
+ ```ts
+ import { CHROMA_PROMPT } from '@mastra/chroma'
 
  export const ragAgent = new Agent({
-   id: "rag-agent",
-   name: "RAG Agent",
-   model: "openai/gpt-5.1",
+   id: 'rag-agent',
+   name: 'RAG Agent',
+   model: 'openai/gpt-5.1',
    instructions: `
      Process queries using the provided context. Structure responses to be concise and relevant.
      ${CHROMA_PROMPT}
    `,
    tools: { vectorQueryTool },
- });
+ })
  ```
 
-
-
- **astra:**
+ **Astra**:
 
- ```ts title="vector-store.ts"
- import { ASTRA_PROMPT } from "@mastra/astra";
+ ```ts
+ import { ASTRA_PROMPT } from '@mastra/astra'
 
  export const ragAgent = new Agent({
-   id: "rag-agent",
-   name: "RAG Agent",
-   model: "openai/gpt-5.1",
+   id: 'rag-agent',
+   name: 'RAG Agent',
+   model: 'openai/gpt-5.1',
    instructions: `
      Process queries using the provided context. Structure responses to be concise and relevant.
      ${ASTRA_PROMPT}
    `,
    tools: { vectorQueryTool },
- });
+ })
  ```
 
-
-
- **libsql:**
+ **libSQL**:
 
- ```ts title="vector-store.ts"
- import { LIBSQL_PROMPT } from "@mastra/libsql";
+ ```ts
+ import { LIBSQL_PROMPT } from '@mastra/libsql'
 
  export const ragAgent = new Agent({
-   id: "rag-agent",
-   name: "RAG Agent",
-   model: "openai/gpt-5.1",
+   id: 'rag-agent',
+   name: 'RAG Agent',
+   model: 'openai/gpt-5.1',
    instructions: `
      Process queries using the provided context. Structure responses to be concise and relevant.
      ${LIBSQL_PROMPT}
    `,
    tools: { vectorQueryTool },
- });
+ })
  ```
 
-
+ **Upstash**:
 
- **upstash:**
-
- ```ts title="vector-store.ts"
- import { UPSTASH_PROMPT } from "@mastra/upstash";
+ ```ts
+ import { UPSTASH_PROMPT } from '@mastra/upstash'
 
  export const ragAgent = new Agent({
-   id: "rag-agent",
-   name: "RAG Agent",
-   model: "openai/gpt-5.1",
+   id: 'rag-agent',
+   name: 'RAG Agent',
+   model: 'openai/gpt-5.1',
    instructions: `
      Process queries using the provided context. Structure responses to be concise and relevant.
      ${UPSTASH_PROMPT}
    `,
    tools: { vectorQueryTool },
- });
+ })
  ```
 
-
-
- **vectorize:**
+ **Vectorize**:
 
- ```ts title="vector-store.ts"
- import { VECTORIZE_PROMPT } from "@mastra/vectorize";
+ ```ts
+ import { VECTORIZE_PROMPT } from '@mastra/vectorize'
 
  export const ragAgent = new Agent({
-   id: "rag-agent",
-   name: "RAG Agent",
-   model: "openai/gpt-5.1",
+   id: 'rag-agent',
+   name: 'RAG Agent',
+   model: 'openai/gpt-5.1',
    instructions: `
      Process queries using the provided context. Structure responses to be concise and relevant.
      ${VECTORIZE_PROMPT}
    `,
    tools: { vectorQueryTool },
- });
+ })
  ```
 
-
-
- **mongodb:**
+ **MongoDB**:
 
- ```ts title="vector-store.ts"
- import { MONGODB_PROMPT } from "@mastra/mongodb";
+ ```ts
+ import { MONGODB_PROMPT } from '@mastra/mongodb'
 
  export const ragAgent = new Agent({
-   id: "rag-agent",
-   name: "RAG Agent",
-   model: "openai/gpt-5.1",
+   id: 'rag-agent',
+   name: 'RAG Agent',
+   model: 'openai/gpt-5.1',
    instructions: `
      Process queries using the provided context. Structure responses to be concise and relevant.
      ${MONGODB_PROMPT}
    `,
    tools: { vectorQueryTool },
- });
+ })
  ```
 
-
+ **OpenSearch**:
 
- **opensearch:**
-
- ```ts title="vector-store.ts"
- import { OPENSEARCH_PROMPT } from "@mastra/opensearch";
+ ```ts
+ import { OPENSEARCH_PROMPT } from '@mastra/opensearch'
 
  export const ragAgent = new Agent({
-   id: "rag-agent",
-   name: "RAG Agent",
-   model: "openai/gpt-5.1",
+   id: 'rag-agent',
+   name: 'RAG Agent',
+   model: 'openai/gpt-5.1',
    instructions: `
      Process queries using the provided context. Structure responses to be concise and relevant.
      ${OPENSEARCH_PROMPT}
    `,
    tools: { vectorQueryTool },
- });
+ })
  ```
 
-
-
- **s3vectors:**
+ **S3Vectors**:
 
- ```ts title="vector-store.ts"
- import { S3VECTORS_PROMPT } from "@mastra/s3vectors";
+ ```ts
+ import { S3VECTORS_PROMPT } from '@mastra/s3vectors'
 
  export const ragAgent = new Agent({
-   id: "rag-agent",
-   name: "RAG Agent",
-   model: "openai/gpt-5.1",
+   id: 'rag-agent',
+   name: 'RAG Agent',
+   model: 'openai/gpt-5.1',
    instructions: `
      Process queries using the provided context. Structure responses to be concise and relevant.
      ${S3VECTORS_PROMPT}
    `,
    tools: { vectorQueryTool },
- });
+ })
  ```
 
-
-
  ### Re-ranking
 
  Initial vector similarity search can sometimes miss nuanced relevance. Re-ranking is a more computationally expensive but more accurate process that improves results by:
@@ -492,20 +462,17 @@ Initial vector similarity search can sometimes miss nuanced relevance. Re-rankin
  Here's how to use re-ranking:
 
  ```ts
- import {
-   rerankWithScorer as rerank,
-   MastraAgentRelevanceScorer
- } from "@mastra/rag";
+ import { rerankWithScorer as rerank, MastraAgentRelevanceScorer } from '@mastra/rag'
 
  // Get initial results from vector search
  const initialResults = await pgVector.query({
-   indexName: "embeddings",
+   indexName: 'embeddings',
    queryVector: queryEmbedding,
    topK: 10,
- });
+ })
 
  // Create a relevance scorer
- const relevanceProvider = new MastraAgentRelevanceScorer('relevance-scorer', "openai/gpt-5.1");
+ const relevanceProvider = new MastraAgentRelevanceScorer('relevance-scorer', 'openai/gpt-5.1')
 
  // Re-rank the results
  const rerankedResults = await rerank({
@@ -520,7 +487,7 @@ const rerankedResults = await rerank({
    },
    topK: 10,
  },
- );
+ })
  ```
 
  The weights control how different factors influence the final ranking:
@@ -529,21 +496,20 @@ The weights control how different factors influence the final ranking:
  - `vector`: Higher values favor the original vector similarity scores
  - `position`: Higher values help maintain the original ordering of results
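The weighted factors can be made concrete with a small sketch (illustrative arithmetic only, not Mastra's actual scoring code):

```typescript
// Illustrative only: a weighted blend of the three re-ranking factors.
type Weights = { semantic: number; vector: number; position: number }

function combinedScore(
  semantic: number, // relevance judged by the scoring model
  vector: number, // original vector-similarity score
  position: number, // bonus for appearing early in the initial ordering
  w: Weights,
): number {
  return w.semantic * semantic + w.vector * vector + w.position * position
}

// With semantic relevance weighted most heavily:
const score = combinedScore(0.9, 0.8, 0.5, { semantic: 0.5, vector: 0.3, position: 0.2 })
// 0.5 * 0.9 + 0.3 * 0.8 + 0.2 * 0.5 = 0.79
```

Shifting weight toward `vector` makes the re-ranker conservative (trusting the embedding space), while weight on `semantic` lets the scoring model reorder results more aggressively.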
 
- > **Note:**
- For semantic scoring to work properly during re-ranking, each result must include the text content in its `metadata.text` field.
+ > **Note:** For semantic scoring to work properly during re-ranking, each result must include the text content in its `metadata.text` field.
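A small defensive sketch (illustrative, not part of Mastra's API) that copies each result's text into `metadata.text` before re-ranking:

```typescript
// Illustrative helper: ensure every result carries metadata.text, which the
// semantic scorer reads during re-ranking. Existing metadata.text is kept.
type QueryResult = { text: string; score: number; metadata?: Record<string, unknown> }

function withTextMetadata(results: QueryResult[]): QueryResult[] {
  return results.map((r) => ({
    ...r,
    metadata: { ...(r.metadata ?? {}), text: r.metadata?.text ?? r.text },
  }))
}

const prepared = withTextMetadata([{ text: 'Climate change poses...', score: 0.89 }])
// prepared[0].metadata?.text === 'Climate change poses...'
```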
 
  You can also use other relevance score providers like Cohere or ZeroEntropy:
 
  ```ts
- const relevanceProvider = new CohereRelevanceScorer("rerank-v3.5");
+ const relevanceProvider = new CohereRelevanceScorer('rerank-v3.5')
  ```
 
  ```ts
- const relevanceProvider = new ZeroEntropyRelevanceScorer("zerank-1");
+ const relevanceProvider = new ZeroEntropyRelevanceScorer('zerank-1')
  ```
 
  The re-ranked results combine vector similarity with semantic understanding to improve retrieval quality.
 
- For more details about re-ranking, see the [rerank()](https://mastra.ai/reference/v1/rag/rerankWithScorer) method.
+ For more details about re-ranking, see the [rerank()](https://mastra.ai/reference/rag/rerankWithScorer) method.
 
- For graph-based retrieval that follows connections between chunks, see the [GraphRAG](https://mastra.ai/docs/v1/rag/graph-rag) documentation.
+ For graph-based retrieval that follows connections between chunks, see the [GraphRAG](https://mastra.ai/docs/rag/graph-rag) documentation.