@mastra/vectorize 1.0.0 → 1.0.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,5 +1,3 @@
- > Guide on retrieval processes in Mastra
-
  # Retrieval in RAG Systems
 
  After storing embeddings, you need to retrieve relevant chunks to answer user queries.
@@ -21,29 +19,29 @@ Mastra provides flexible retrieval options with support for semantic search, fil
  The simplest approach is direct semantic search. This method uses vector similarity to find chunks that are semantically similar to the query:
 
  ```ts
- import { embed } from "ai";
- import { PgVector } from "@mastra/pg";
- import { ModelRouterEmbeddingModel } from "@mastra/core/llm";
+ import { embed } from 'ai'
+ import { PgVector } from '@mastra/pg'
+ import { ModelRouterEmbeddingModel } from '@mastra/core/llm'
 
  // Convert query to embedding
  const { embedding } = await embed({
- value: "What are the main points in the article?",
- model: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),
- });
+ value: 'What are the main points in the article?',
+ model: new ModelRouterEmbeddingModel('openai/text-embedding-3-small'),
+ })
 
  // Query vector store
  const pgVector = new PgVector({
  id: 'pg-vector',
  connectionString: process.env.POSTGRES_CONNECTION_STRING,
- });
+ })
  const results = await pgVector.query({
- indexName: "embeddings",
+ indexName: 'embeddings',
  queryVector: embedding,
  topK: 10,
- });
+ })
 
  // Display results
- console.log(results);
+ console.log(results)
  ```
 
  The `topK` parameter specifies the maximum number of most similar results to return from the vector search.
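Conceptually, `topK` retrieval scores every stored vector against the query and keeps the K highest-scoring chunks. A self-contained sketch in plain TypeScript (the `cosineSimilarity` and `queryTopK` helpers below are illustrative, not part of the Mastra API):

```typescript
// Illustrative sketch (not the Mastra API): how a vector store ranks
// stored embeddings by cosine similarity and keeps the topK best.
type StoredChunk = { text: string; vector: number[] };

function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

function queryTopK(query: number[], chunks: StoredChunk[], topK: number) {
  return chunks
    .map((chunk) => ({ text: chunk.text, score: cosineSimilarity(query, chunk.vector) }))
    .sort((a, b) => b.score - a.score) // highest similarity first
    .slice(0, topK); // keep at most topK results
}

// Toy 2-dimensional "embeddings"
const chunks: StoredChunk[] = [
  { text: "climate", vector: [1, 0] },
  { text: "crops", vector: [0.9, 0.1] },
  { text: "sports", vector: [0, 1] },
];
console.log(queryTopK([1, 0.05], chunks, 2).map((r) => r.text)); // most similar first
```

Production stores use approximate nearest-neighbor indexes (HNSW, IVFFlat) rather than this exhaustive scan, but the ranking contract is the same.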
@@ -53,16 +51,16 @@ Results include both the text content and a similarity score:
  ```ts
  [
  {
- text: "Climate change poses significant challenges...",
+ text: 'Climate change poses significant challenges...',
  score: 0.89,
- metadata: { source: "article1.txt" },
+ metadata: { source: 'article1.txt' },
  },
  {
- text: "Rising temperatures affect crop yields...",
+ text: 'Rising temperatures affect crop yields...',
  score: 0.82,
- metadata: { source: "article1.txt" },
+ metadata: { source: 'article1.txt' },
  },
- ];
+ ]
  ```
 
  ## Advanced Retrieval options
@@ -73,63 +71,63 @@ Filter results based on metadata fields to narrow down the search space. This ap
 
  This is useful when you have documents from different sources, time periods, or with specific attributes. Mastra provides a unified MongoDB-style query syntax that works across all supported vector stores.
 
- For detailed information about available operators and syntax, see the [Metadata Filters Reference](https://mastra.ai/reference/v1/rag/metadata-filters).
+ For detailed information about available operators and syntax, see the [Metadata Filters Reference](https://mastra.ai/reference/rag/metadata-filters).
 
  Basic filtering examples:
 
  ```ts
  // Simple equality filter
  const results = await pgVector.query({
- indexName: "embeddings",
+ indexName: 'embeddings',
  queryVector: embedding,
  topK: 10,
  filter: {
- source: "article1.txt",
+ source: 'article1.txt',
  },
- });
+ })
 
  // Numeric comparison
  const results = await pgVector.query({
- indexName: "embeddings",
+ indexName: 'embeddings',
  queryVector: embedding,
  topK: 10,
  filter: {
  price: { $gt: 100 },
  },
- });
+ })
 
  // Multiple conditions
  const results = await pgVector.query({
- indexName: "embeddings",
+ indexName: 'embeddings',
  queryVector: embedding,
  topK: 10,
  filter: {
- category: "electronics",
+ category: 'electronics',
  price: { $lt: 1000 },
  inStock: true,
  },
- });
+ })
 
  // Array operations
  const results = await pgVector.query({
- indexName: "embeddings",
+ indexName: 'embeddings',
  queryVector: embedding,
  topK: 10,
  filter: {
- tags: { $in: ["sale", "new"] },
+ tags: { $in: ['sale', 'new'] },
  },
- });
+ })
 
  // Logical operators
  const results = await pgVector.query({
- indexName: "embeddings",
+ indexName: 'embeddings',
  queryVector: embedding,
  topK: 10,
  filter: {
- $or: [{ category: "electronics" }, { category: "accessories" }],
+ $or: [{ category: 'electronics' }, { category: 'accessories' }],
  $and: [{ price: { $gt: 50 } }, { price: { $lt: 200 } }],
  },
- });
+ })
  ```
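The MongoDB-style filters shown above can be understood as predicates evaluated against each chunk's metadata. A self-contained sketch of such a matcher (illustrative only; real stores translate filters into native queries rather than scanning):

```typescript
// Illustrative sketch (not a Mastra internal): evaluating a MongoDB-style
// filter against a chunk's metadata. Supports the operators used above:
// equality, $gt, $lt, $in, $and, $or.
type Metadata = Record<string, unknown>;
type Filter = Record<string, unknown>;

function matches(meta: Metadata, filter: Filter): boolean {
  return Object.entries(filter).every(([key, cond]) => {
    if (key === "$and") return (cond as Filter[]).every((f) => matches(meta, f));
    if (key === "$or") return (cond as Filter[]).some((f) => matches(meta, f));
    const value = meta[key];
    if (cond !== null && typeof cond === "object" && !Array.isArray(cond)) {
      // Operator object like { $gt: 100 } or { $in: [...] }
      return Object.entries(cond as Record<string, unknown>).every(([op, operand]) => {
        switch (op) {
          case "$gt": return (value as number) > (operand as number);
          case "$lt": return (value as number) < (operand as number);
          case "$in": return (operand as unknown[]).includes(value);
          default: return false; // unsupported operator in this sketch
        }
      });
    }
    return value === cond; // simple equality
  });
}

const meta: Metadata = { category: "electronics", price: 120, inStock: true, tags: "sale" };
console.log(matches(meta, { category: "electronics", price: { $lt: 1000 }, inStock: true })); // true
console.log(matches(meta, { $or: [{ category: "accessories" }, { price: { $gt: 100 } }] })); // true
```

The sketch shows why the same filter object can be portable across stores: it is just a declarative predicate that each backend compiles to its own query language.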
 
  Common use cases for metadata filtering:
@@ -146,14 +144,14 @@ Common use cases for metadata filtering:
  Sometimes you want to give your agent the ability to query a vector database directly. The Vector Query Tool allows your agent to be in charge of retrieval decisions, combining semantic search with optional filtering and reranking based on the agent's understanding of the user's needs.
 
  ```ts
- import { createVectorQueryTool } from "@mastra/rag";
- import { ModelRouterEmbeddingModel } from "@mastra/core/llm";
+ import { createVectorQueryTool } from '@mastra/rag'
+ import { ModelRouterEmbeddingModel } from '@mastra/core/llm'
 
  const vectorQueryTool = createVectorQueryTool({
- vectorStoreName: "pgVector",
- indexName: "embeddings",
- model: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),
- });
+ vectorStoreName: 'pgVector',
+ indexName: 'embeddings',
+ model: new ModelRouterEmbeddingModel('openai/text-embedding-3-small'),
+ })
  ```
 
  When creating the tool, pay special attention to the tool's name and description - these help the agent understand when and how to use the retrieval capabilities. For example, you might name it "SearchKnowledgeBase" and describe it as "Search through our documentation to find relevant information about X topic."
@@ -168,32 +166,31 @@ This is particularly useful when:
 
  The Vector Query Tool supports database-specific configurations that enable you to leverage unique features and optimizations of different vector stores.
 
- > **Note:**
- These configurations are for **query-time options** like namespaces, performance tuning, and filtering—not for database connection setup.
-
- Connection credentials (URLs, auth tokens) are configured when you instantiate the vector store class (e.g., `new LibSQLVector({ url: '...' })`).
+ > **Note:** These configurations are for **query-time options** like namespaces, performance tuning, and filtering—not for database connection setup.
+ >
+ > Connection credentials (URLs, auth tokens) are configured when you instantiate the vector store class (e.g., `new LibSQLVector({ url: '...' })`).
 
  ```ts
- import { createVectorQueryTool } from "@mastra/rag";
- import { ModelRouterEmbeddingModel } from "@mastra/core/llm";
+ import { createVectorQueryTool } from '@mastra/rag'
+ import { ModelRouterEmbeddingModel } from '@mastra/core/llm'
 
  // Pinecone with namespace
  const pineconeQueryTool = createVectorQueryTool({
- vectorStoreName: "pinecone",
- indexName: "docs",
- model: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),
+ vectorStoreName: 'pinecone',
+ indexName: 'docs',
+ model: new ModelRouterEmbeddingModel('openai/text-embedding-3-small'),
  databaseConfig: {
  pinecone: {
- namespace: "production", // Isolate data by environment
+ namespace: 'production', // Isolate data by environment
  },
  },
- });
+ })
 
  // pgVector with performance tuning
  const pgVectorQueryTool = createVectorQueryTool({
- vectorStoreName: "postgres",
- indexName: "embeddings",
- model: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),
+ vectorStoreName: 'postgres',
+ indexName: 'embeddings',
+ model: new ModelRouterEmbeddingModel('openai/text-embedding-3-small'),
  databaseConfig: {
  pgvector: {
  minScore: 0.7, // Filter low-quality results
@@ -201,33 +198,33 @@ const pgVectorQueryTool = createVectorQueryTool({
  probes: 10, // IVFFlat probe parameter
  },
  },
- });
+ })
 
  // Chroma with advanced filtering
  const chromaQueryTool = createVectorQueryTool({
- vectorStoreName: "chroma",
- indexName: "documents",
- model: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),
+ vectorStoreName: 'chroma',
+ indexName: 'documents',
+ model: new ModelRouterEmbeddingModel('openai/text-embedding-3-small'),
  databaseConfig: {
  chroma: {
- where: { category: "technical" },
- whereDocument: { $contains: "API" },
+ where: { category: 'technical' },
+ whereDocument: { $contains: 'API' },
  },
  },
- });
+ })
 
  // LanceDB with table specificity
  const lanceQueryTool = createVectorQueryTool({
- vectorStoreName: "lance",
- indexName: "documents",
- model: new ModelRouterEmbeddingModel("openai/text-embedding-3-small"),
+ vectorStoreName: 'lance',
+ indexName: 'documents',
+ model: new ModelRouterEmbeddingModel('openai/text-embedding-3-small'),
  databaseConfig: {
  lance: {
- tableName: "myVectors", // Specify which table to query
+ tableName: 'myVectors', // Specify which table to query
  includeAllColumns: true, // Include all metadata columns in results
  },
  },
- });
+ })
  ```
 
  **Key Benefits:**
@@ -249,237 +246,211 @@ const lanceQueryTool = createVectorQueryTool({
  You can also override these configurations at runtime using the request context:
 
  ```ts
- import { RequestContext } from "@mastra/core/request-context";
+ import { RequestContext } from '@mastra/core/request-context'
 
- const requestContext = new RequestContext();
- requestContext.set("databaseConfig", {
+ const requestContext = new RequestContext()
+ requestContext.set('databaseConfig', {
  pinecone: {
- namespace: "runtime-namespace",
+ namespace: 'runtime-namespace',
  },
- });
+ })
 
- await pineconeQueryTool.execute(
- { queryText: "search query" },
- { mastra, requestContext }
- );
+ await pineconeQueryTool.execute({ queryText: 'search query' }, { mastra, requestContext })
  ```
 
- For detailed configuration options and advanced usage, see the [Vector Query Tool Reference](https://mastra.ai/reference/v1/tools/vector-query-tool).
+ For detailed configuration options and advanced usage, see the [Vector Query Tool Reference](https://mastra.ai/reference/tools/vector-query-tool).
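Conceptually, the runtime override is a request-scoped value merged over the static `databaseConfig`. A minimal sketch with a plain `Map` standing in for the request context (the `resolveConfig` helper is hypothetical, not Mastra's implementation):

```typescript
// Minimal sketch of request-scoped overrides (hypothetical helper, not
// Mastra's implementation): values set on a per-request context take
// precedence over the static databaseConfig passed at tool creation.
type DatabaseConfig = Record<string, Record<string, unknown>>;

const staticConfig: DatabaseConfig = { pinecone: { namespace: "production" } };

// A request context is just a per-request key/value store here.
const requestContext = new Map<string, DatabaseConfig>();
requestContext.set("databaseConfig", { pinecone: { namespace: "runtime-namespace" } });

function resolveConfig(defaults: DatabaseConfig, ctx: Map<string, DatabaseConfig>): DatabaseConfig {
  const override = ctx.get("databaseConfig") ?? {};
  const merged: DatabaseConfig = { ...defaults };
  for (const [store, options] of Object.entries(override)) {
    merged[store] = { ...merged[store], ...options }; // per-store shallow merge
  }
  return merged;
}

console.log(resolveConfig(staticConfig, requestContext).pinecone.namespace); // "runtime-namespace"
```

Because the merge happens per request, two concurrent calls can hit different namespaces without rebuilding the tool.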
 
  ### Vector Store Prompts
 
- Vector store prompts define query patterns and filtering capabilities for each vector database implementation.
- When implementing filtering, these prompts are required in the agent's instructions to specify valid operators and syntax for each vector store implementation.
+ Vector store prompts define query patterns and filtering capabilities for each vector database implementation. When implementing filtering, these prompts are required in the agent's instructions to specify valid operators and syntax for each vector store implementation.
 
- **pgvector:**
+ **pgVector**:
 
  ```ts
- import { PGVECTOR_PROMPT } from "@mastra/pg";
+ import { PGVECTOR_PROMPT } from '@mastra/pg'
 
  export const ragAgent = new Agent({
- id: "rag-agent",
- name: "RAG Agent",
- model: "openai/gpt-5.1",
+ id: 'rag-agent',
+ name: 'RAG Agent',
+ model: 'openai/gpt-5.1',
  instructions: `
  Process queries using the provided context. Structure responses to be concise and relevant.
  ${PGVECTOR_PROMPT}
  `,
  tools: { vectorQueryTool },
- });
+ })
  ```
 
-
-
- **pinecone:**
+ **Pinecone**:
 
- ```ts title="vector-store.ts"
- import { PINECONE_PROMPT } from "@mastra/pinecone";
+ ```ts
+ import { PINECONE_PROMPT } from '@mastra/pinecone'
 
  export const ragAgent = new Agent({
- id: "rag-agent",
- name: "RAG Agent",
- model: "openai/gpt-5.1",
+ id: 'rag-agent',
+ name: 'RAG Agent',
+ model: 'openai/gpt-5.1',
  instructions: `
  Process queries using the provided context. Structure responses to be concise and relevant.
  ${PINECONE_PROMPT}
  `,
  tools: { vectorQueryTool },
- });
+ })
  ```
 
-
-
- **qdrant:**
+ **Qdrant**:
 
- ```ts title="vector-store.ts"
- import { QDRANT_PROMPT } from "@mastra/qdrant";
+ ```ts
+ import { QDRANT_PROMPT } from '@mastra/qdrant'
 
  export const ragAgent = new Agent({
- id: "rag-agent",
- name: "RAG Agent",
- model: "openai/gpt-5.1",
+ id: 'rag-agent',
+ name: 'RAG Agent',
+ model: 'openai/gpt-5.1',
  instructions: `
  Process queries using the provided context. Structure responses to be concise and relevant.
  ${QDRANT_PROMPT}
  `,
  tools: { vectorQueryTool },
- });
+ })
  ```
 
-
+ **Chroma**:
 
- **chroma:**
-
- ```ts title="vector-store.ts"
- import { CHROMA_PROMPT } from "@mastra/chroma";
+ ```ts
+ import { CHROMA_PROMPT } from '@mastra/chroma'
 
  export const ragAgent = new Agent({
- id: "rag-agent",
- name: "RAG Agent",
- model: "openai/gpt-5.1",
+ id: 'rag-agent',
+ name: 'RAG Agent',
+ model: 'openai/gpt-5.1',
  instructions: `
  Process queries using the provided context. Structure responses to be concise and relevant.
  ${CHROMA_PROMPT}
  `,
  tools: { vectorQueryTool },
- });
+ })
  ```
 
-
-
- **astra:**
+ **Astra**:
 
- ```ts title="vector-store.ts"
- import { ASTRA_PROMPT } from "@mastra/astra";
+ ```ts
+ import { ASTRA_PROMPT } from '@mastra/astra'
 
  export const ragAgent = new Agent({
- id: "rag-agent",
- name: "RAG Agent",
- model: "openai/gpt-5.1",
+ id: 'rag-agent',
+ name: 'RAG Agent',
+ model: 'openai/gpt-5.1',
  instructions: `
  Process queries using the provided context. Structure responses to be concise and relevant.
  ${ASTRA_PROMPT}
  `,
  tools: { vectorQueryTool },
- });
+ })
  ```
 
-
-
- **libsql:**
+ **libSQL**:
 
- ```ts title="vector-store.ts"
- import { LIBSQL_PROMPT } from "@mastra/libsql";
+ ```ts
+ import { LIBSQL_PROMPT } from '@mastra/libsql'
 
  export const ragAgent = new Agent({
- id: "rag-agent",
- name: "RAG Agent",
- model: "openai/gpt-5.1",
+ id: 'rag-agent',
+ name: 'RAG Agent',
+ model: 'openai/gpt-5.1',
  instructions: `
  Process queries using the provided context. Structure responses to be concise and relevant.
  ${LIBSQL_PROMPT}
  `,
  tools: { vectorQueryTool },
- });
+ })
  ```
 
-
+ **Upstash**:
 
- **upstash:**
-
- ```ts title="vector-store.ts"
- import { UPSTASH_PROMPT } from "@mastra/upstash";
+ ```ts
+ import { UPSTASH_PROMPT } from '@mastra/upstash'
 
  export const ragAgent = new Agent({
- id: "rag-agent",
- name: "RAG Agent",
- model: "openai/gpt-5.1",
+ id: 'rag-agent',
+ name: 'RAG Agent',
+ model: 'openai/gpt-5.1',
  instructions: `
  Process queries using the provided context. Structure responses to be concise and relevant.
  ${UPSTASH_PROMPT}
  `,
  tools: { vectorQueryTool },
- });
+ })
  ```
 
-
-
- **vectorize:**
+ **Vectorize**:
 
- ```ts title="vector-store.ts"
- import { VECTORIZE_PROMPT } from "@mastra/vectorize";
+ ```ts
+ import { VECTORIZE_PROMPT } from '@mastra/vectorize'
 
  export const ragAgent = new Agent({
- id: "rag-agent",
- name: "RAG Agent",
- model: "openai/gpt-5.1",
+ id: 'rag-agent',
+ name: 'RAG Agent',
+ model: 'openai/gpt-5.1',
  instructions: `
  Process queries using the provided context. Structure responses to be concise and relevant.
  ${VECTORIZE_PROMPT}
  `,
  tools: { vectorQueryTool },
- });
+ })
  ```
 
-
-
- **mongodb:**
+ **MongoDB**:
 
- ```ts title="vector-store.ts"
- import { MONGODB_PROMPT } from "@mastra/mongodb";
+ ```ts
+ import { MONGODB_PROMPT } from '@mastra/mongodb'
 
  export const ragAgent = new Agent({
- id: "rag-agent",
- name: "RAG Agent",
- model: "openai/gpt-5.1",
+ id: 'rag-agent',
+ name: 'RAG Agent',
+ model: 'openai/gpt-5.1',
  instructions: `
  Process queries using the provided context. Structure responses to be concise and relevant.
  ${MONGODB_PROMPT}
  `,
  tools: { vectorQueryTool },
- });
+ })
  ```
 
-
+ **OpenSearch**:
 
- **opensearch:**
-
- ```ts title="vector-store.ts"
- import { OPENSEARCH_PROMPT } from "@mastra/opensearch";
+ ```ts
+ import { OPENSEARCH_PROMPT } from '@mastra/opensearch'
 
  export const ragAgent = new Agent({
- id: "rag-agent",
- name: "RAG Agent",
- model: "openai/gpt-5.1",
+ id: 'rag-agent',
+ name: 'RAG Agent',
+ model: 'openai/gpt-5.1',
  instructions: `
  Process queries using the provided context. Structure responses to be concise and relevant.
  ${OPENSEARCH_PROMPT}
  `,
  tools: { vectorQueryTool },
- });
+ })
  ```
 
-
-
- **s3vectors:**
+ **S3Vectors**:
 
- ```ts title="vector-store.ts"
- import { S3VECTORS_PROMPT } from "@mastra/s3vectors";
+ ```ts
+ import { S3VECTORS_PROMPT } from '@mastra/s3vectors'
 
  export const ragAgent = new Agent({
- id: "rag-agent",
- name: "RAG Agent",
- model: "openai/gpt-5.1",
+ id: 'rag-agent',
+ name: 'RAG Agent',
+ model: 'openai/gpt-5.1',
  instructions: `
  Process queries using the provided context. Structure responses to be concise and relevant.
  ${S3VECTORS_PROMPT}
  `,
  tools: { vectorQueryTool },
- });
+ })
  ```
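Every store-specific example above relies on the same mechanism: the exported prompt constant is interpolated into the agent's instructions via a template literal, so the operator reference travels with the instructions. A self-contained sketch (the `FAKE_STORE_PROMPT` constant is a stand-in, not a real export):

```typescript
// Stand-in for a store prompt export like PGVECTOR_PROMPT (illustrative only).
const FAKE_STORE_PROMPT = "Supported filter operators: $gt, $lt, $in, $and, $or.";

// Template-literal interpolation appends the operator reference to the
// agent's base instructions, as in the Agent examples above.
const instructions = `
Process queries using the provided context. Structure responses to be concise and relevant.
${FAKE_STORE_PROMPT}
`;

console.log(instructions.includes("$gt")); // true
```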
 
-
-
  ### Re-ranking
 
  Initial vector similarity search can sometimes miss nuanced relevance. Re-ranking is a more computationally expensive but more accurate process that improves results by:
@@ -491,20 +462,17 @@ Initial vector similarity search can sometimes miss nuanced relevance. Re-rankin
  Here's how to use re-ranking:
 
  ```ts
- import {
- rerankWithScorer as rerank,
- MastraAgentRelevanceScorer
- } from "@mastra/rag";
+ import { rerankWithScorer as rerank, MastraAgentRelevanceScorer } from '@mastra/rag'
 
  // Get initial results from vector search
  const initialResults = await pgVector.query({
- indexName: "embeddings",
+ indexName: 'embeddings',
  queryVector: queryEmbedding,
  topK: 10,
- });
+ })
 
  // Create a relevance scorer
- const relevanceProvider = new MastraAgentRelevanceScorer('relevance-scorer', "openai/gpt-5.1");
+ const relevanceProvider = new MastraAgentRelevanceScorer('relevance-scorer', 'openai/gpt-5.1')
 
  // Re-rank the results
  const rerankedResults = await rerank({
@@ -519,7 +487,7 @@ const rerankedResults = await rerank({
  },
  topK: 10,
  },
- );
+ })
  ```
 
  The weights control how different factors influence the final ranking:
@@ -528,21 +496,20 @@ The weights control how different factors influence the final ranking:
  - `vector`: Higher values favor the original vector similarity scores
  - `position`: Higher values help maintain the original ordering of results
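A weighted combination like the one described above can be sketched in plain TypeScript (illustrative only; the position term and toy scorer below are assumptions, not the `@mastra/rag` implementation):

```typescript
// Illustrative sketch (not @mastra/rag internals): combining semantic,
// vector, and position signals into one re-ranking score.
type Candidate = { text: string; vectorScore: number; position: number };
type Weights = { semantic: number; vector: number; position: number };

function rerankSketch(
  candidates: Candidate[],
  semanticScore: (text: string) => number, // e.g. an LLM relevance scorer
  weights: Weights,
  topK: number,
) {
  const n = candidates.length;
  return candidates
    .map((c) => ({
      text: c.text,
      // Earlier positions get a higher position score (first item scores 1).
      score:
        weights.semantic * semanticScore(c.text) +
        weights.vector * c.vectorScore +
        weights.position * ((n - c.position) / n),
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, topK);
}

const candidates: Candidate[] = [
  { text: "rising temperatures affect crops", vectorScore: 0.82, position: 0 },
  { text: "climate change challenges", vectorScore: 0.89, position: 1 },
];
// Toy scorer: pretend the first text is judged far more relevant.
const scorer = (text: string) => (text.startsWith("rising") ? 0.95 : 0.4);
const ranked = rerankSketch(candidates, scorer, { semantic: 0.6, vector: 0.3, position: 0.1 }, 2);
console.log(ranked[0].text); // "rising temperatures affect crops"
```

Note how a strong semantic weight lets the scorer promote a result past one with a higher raw vector score.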
 
- > **Note:**
- For semantic scoring to work properly during re-ranking, each result must include the text content in its `metadata.text` field.
+ > **Note:** For semantic scoring to work properly during re-ranking, each result must include the text content in its `metadata.text` field.
 
  You can also use other relevance score providers like Cohere or ZeroEntropy:
 
  ```ts
- const relevanceProvider = new CohereRelevanceScorer("rerank-v3.5");
+ const relevanceProvider = new CohereRelevanceScorer('rerank-v3.5')
  ```
 
  ```ts
- const relevanceProvider = new ZeroEntropyRelevanceScorer("zerank-1");
+ const relevanceProvider = new ZeroEntropyRelevanceScorer('zerank-1')
  ```
 
  The re-ranked results combine vector similarity with semantic understanding to improve retrieval quality.
 
- For more details about re-ranking, see the [rerank()](https://mastra.ai/reference/v1/rag/rerankWithScorer) method.
+ For more details about re-ranking, see the [rerank()](https://mastra.ai/reference/rag/rerankWithScorer) method.
 
- For graph-based retrieval that follows connections between chunks, see the [GraphRAG](https://mastra.ai/docs/v1/rag/graph-rag) documentation.
+ For graph-based retrieval that follows connections between chunks, see the [GraphRAG](https://mastra.ai/docs/rag/graph-rag) documentation.