@mastra/rag 2.1.0 → 2.1.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,868 +0,0 @@
# Rag API Reference

> API reference for rag - 7 entries

---

## Reference: GraphRAG

> Documentation for the GraphRAG class in Mastra, which implements a graph-based approach to retrieval augmented generation.

The `GraphRAG` class implements a graph-based approach to retrieval augmented generation. It creates a knowledge graph from document chunks, where nodes represent chunks and edges represent semantic relationships, enabling both direct similarity matching and discovery of related content through graph traversal.

## Basic Usage

```typescript
import { GraphRAG } from "@mastra/rag";

const graphRag = new GraphRAG({
  dimension: 1536,
  threshold: 0.7,
});

// Create the graph from chunks and embeddings
graphRag.createGraph(documentChunks, embeddings);

// Query the graph with an embedding
const results = await graphRag.query({
  query: queryEmbedding,
  topK: 10,
  randomWalkSteps: 100,
  restartProb: 0.15,
});
```
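The `threshold` setting above governs graph construction: two chunks are linked by an edge when their embeddings are sufficiently similar. A minimal sketch of that idea (this is not the library's internal code, and cosine similarity is an assumption about the metric used):

```typescript
// Toy illustration of threshold-based edge creation over embeddings.
// NOT GraphRAG's internal implementation, just the concept.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Link chunk pairs whose similarity clears the threshold.
function buildEdges(embeddings: number[][], threshold: number): Array<[number, number]> {
  const edges: Array<[number, number]> = [];
  for (let i = 0; i < embeddings.length; i++) {
    for (let j = i + 1; j < embeddings.length; j++) {
      if (cosineSimilarity(embeddings[i], embeddings[j]) > threshold) {
        edges.push([i, j]);
      }
    }
  }
  return edges;
}
```

A stricter threshold (e.g. `0.8` in the advanced example below) produces a sparser graph with fewer, higher-confidence edges.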
## Constructor Parameters

## Methods

### createGraph

Creates a knowledge graph from document chunks and their embeddings.

```typescript
createGraph(chunks: GraphChunk[], embeddings: GraphEmbedding[]): void
```

#### Parameters

### query

Performs a graph-based search combining vector similarity and graph traversal.

```typescript
query({
  query,
  topK = 10,
  randomWalkSteps = 100,
  restartProb = 0.15
}: {
  query: number[];
  topK?: number;
  randomWalkSteps?: number;
  restartProb?: number;
}): RankedNode[]
```

#### Parameters

#### Returns

Returns an array of `RankedNode` objects, where each node contains:

## Advanced Example

```typescript
const graphRag = new GraphRAG({
  dimension: 1536,
  threshold: 0.8, // Stricter similarity threshold
});

// Create graph from chunks and embeddings
graphRag.createGraph(documentChunks, embeddings);

// Query with custom parameters
const results = await graphRag.query({
  query: queryEmbedding,
  topK: 5,
  randomWalkSteps: 200,
  restartProb: 0.2,
});
```
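The `randomWalkSteps` and `restartProb` parameters describe a random walk with restart: traversal repeatedly hops to a random neighbor, jumps back to the starting node with probability `restartProb`, and uses visit counts to bias ranking toward content that is well connected to the query's neighborhood. A toy version of the walk, with an injectable random source so it stays deterministic (the adjacency-list shape is illustrative, not the library's types):

```typescript
// Toy random walk with restart over an adjacency list.
// `rand` is injected so the walk can be made deterministic in tests.
function randomWalkWithRestart(
  neighbors: number[][], // neighbors[i] = nodes adjacent to node i
  start: number,
  steps: number,
  restartProb: number,
  rand: () => number = Math.random,
): number[] {
  const visits = new Array(neighbors.length).fill(0);
  let current = start;
  for (let s = 0; s < steps; s++) {
    if (rand() < restartProb || neighbors[current].length === 0) {
      current = start; // restart at the entry node
    } else {
      const next = Math.floor(rand() * neighbors[current].length);
      current = neighbors[current][next];
    }
    visits[current]++;
  }
  return visits; // higher counts ≈ more central to the start node's neighborhood
}
```

More steps smooth out the visit counts; a higher restart probability keeps results closer to the direct similarity matches.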
## Related

- [createGraphRAGTool](../tools/graph-rag-tool)

---
## Reference: .chunk()

> Documentation for the chunk function in Mastra, which splits documents into smaller segments using various strategies.

The `.chunk()` function splits documents into smaller segments using various strategies and options.

## Example

```typescript
import { MDocument } from "@mastra/rag";

const doc = MDocument.fromMarkdown(`
# Introduction
This is a sample document that we want to split into chunks.

## Section 1
Here is the first section with some content.

## Section 2
Here is another section with different content.
`);

// Basic chunking with defaults
const chunks = await doc.chunk();

// Markdown-specific chunking with header extraction
const chunksWithMetadata = await doc.chunk({
  strategy: "markdown",
  headers: [
    ["#", "title"],
    ["##", "section"],
  ],
  extract: {
    summary: true, // Extract summaries with default settings
    keywords: true, // Extract keywords with default settings
  },
});
```

## Parameters

The following parameters are available for all chunking strategies.
**Important:** Each strategy will only utilize the subset of these parameters relevant to its specific use case.

See the [ExtractParams reference](https://mastra.ai/reference/rag/extract-params) for details on the `extract` parameter.

## Strategy-Specific Options

Strategy-specific options are passed as top-level parameters alongside the `strategy` parameter. For example:

```typescript
// Character strategy example
const chunks = await doc.chunk({
  strategy: "character",
  separator: ".", // Character-specific option
  isSeparatorRegex: false, // Character-specific option
  maxSize: 300, // General option
});

// Recursive strategy example
const chunks = await doc.chunk({
  strategy: "recursive",
  separators: ["\n\n", "\n", " "], // Recursive-specific option
  language: "markdown", // Recursive-specific option
  maxSize: 500, // General option
});

// Sentence strategy example
const chunks = await doc.chunk({
  strategy: "sentence",
  maxSize: 450, // Required for the sentence strategy
  minSize: 50, // Sentence-specific option
  sentenceEnders: ["."], // Sentence-specific option
  fallbackToCharacters: false, // Sentence-specific option
});

// HTML strategy example
const chunks = await doc.chunk({
  strategy: "html",
  headers: [
    ["h1", "title"],
    ["h2", "subtitle"],
  ], // HTML-specific option
});

// Markdown strategy example
const chunks = await doc.chunk({
  strategy: "markdown",
  headers: [
    ["#", "title"],
    ["##", "section"],
  ], // Markdown-specific option
  stripHeaders: true, // Markdown-specific option
});

// Semantic Markdown strategy example
const chunks = await doc.chunk({
  strategy: "semantic-markdown",
  joinThreshold: 500, // Semantic Markdown-specific option
  modelName: "gpt-3.5-turbo", // Semantic Markdown-specific option
});

// Token strategy example
const chunks = await doc.chunk({
  strategy: "token",
  encodingName: "gpt2", // Token-specific option
  modelName: "gpt-3.5-turbo", // Token-specific option
  maxSize: 1000, // General option
});
```

The options documented below are passed directly at the top level of the configuration object, not nested within a separate options object.

### Character
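Conceptually, the character strategy cuts text on the configured `separator` and then packs the pieces into chunks no larger than `maxSize`. A simplified sketch of that behavior (greedy packing is an assumption here; the real splitter also handles overlap and regex separators):

```typescript
// Simplified character-style splitting: split on a separator, then greedily
// pack pieces into chunks of at most maxSize characters.
function characterSplit(text: string, separator: string, maxSize: number): string[] {
  const pieces = text.split(separator).map((p) => p.trim()).filter(Boolean);
  const chunks: string[] = [];
  let current = "";
  for (const piece of pieces) {
    const candidate = current ? `${current}${separator} ${piece}` : piece;
    if (candidate.length <= maxSize) {
      current = candidate; // keep packing into the current chunk
    } else {
      if (current) chunks.push(current); // flush and start a new chunk
      current = piece;
    }
  }
  if (current) chunks.push(current);
  return chunks;
}
```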
### Recursive
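The recursive strategy works through its `separators` list in order: split on the first separator, then re-split any piece that is still larger than `maxSize` using the remaining separators. A bare-bones sketch of the idea (a real implementation also merges small pieces and applies language-aware separator lists):

```typescript
// Bare-bones recursive splitting: try separators in order, recursing into
// oversized pieces with the remaining separators.
function recursiveSplit(text: string, separators: string[], maxSize: number): string[] {
  if (text.length <= maxSize) return [text];
  if (separators.length === 0) {
    // No separators left: fall back to hard cuts at maxSize
    const out: string[] = [];
    for (let i = 0; i < text.length; i += maxSize) out.push(text.slice(i, i + maxSize));
    return out;
  }
  const [sep, ...rest] = separators;
  const parts = text.split(sep).filter((p) => p.length > 0);
  return parts.flatMap((part) =>
    part.length <= maxSize ? [part] : recursiveSplit(part, rest, maxSize),
  );
}
```

This is why `separators: ["\n\n", "\n", " "]` reads as "prefer paragraph breaks, then line breaks, then word breaks".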
### Sentence

### HTML

**Important:** When using the HTML strategy, all general options are ignored. Use `headers` for header-based splitting or `sections` for section-based splitting. If both are provided, `sections` is ignored.

### Markdown

**Important:** When the `headers` option is used, the markdown strategy ignores all general options and splits content based on the markdown header structure. To use size-based chunking with markdown, omit the `headers` parameter.

### Semantic Markdown

### Token

### JSON

### Latex

The Latex strategy uses only the general chunking options listed above. It provides LaTeX-aware splitting optimized for mathematical and academic documents.

## Return Value

Returns an `MDocument` instance containing the chunked documents. Each chunk includes:

```typescript
interface DocumentNode {
  text: string;
  metadata: Record<string, any>;
  embedding?: number[];
}
```

---
## Reference: DatabaseConfig

> API reference for database-specific configuration types used with vector query tools in Mastra RAG systems.

The `DatabaseConfig` type allows you to specify database-specific configurations when using vector query tools. These configurations enable you to leverage unique features and optimizations offered by different vector stores.

## Type Definition

```typescript
export type DatabaseConfig = {
  pinecone?: PineconeConfig;
  pgvector?: PgVectorConfig;
  chroma?: ChromaConfig;
  [key: string]: any; // Extensible for future databases
};
```

## Database-Specific Types

### PineconeConfig

Configuration options specific to the Pinecone vector store.

**Use Cases:**

- Multi-tenant applications (separate namespaces per tenant)
- Environment isolation (dev/staging/prod namespaces)
- Hybrid search combining semantic and keyword matching

### PgVectorConfig

Configuration options specific to PostgreSQL with the pgvector extension.

**Performance Guidelines:**

- **ef**: Start with 2-4x your topK value; increase for better accuracy
- **probes**: Start with 1-10; increase for better recall
- **minScore**: Use values between 0.5 and 0.9, depending on your quality requirements
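The `ef` guideline above is easy to encode as a starting-point helper. This is just the rule of thumb from this page expressed in code, not an official API:

```typescript
// Rule-of-thumb starting range for pgvector's HNSW `ef` search parameter,
// per the guideline above: begin at 2-4x topK, then tune for accuracy.
function suggestEfRange(topK: number): { min: number; max: number } {
  return { min: 2 * topK, max: 4 * topK };
}
```

For example, with `topK: 10` a reasonable starting `ef` lies between 20 and 40.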
**Use Cases:**

- Performance optimization for high-load scenarios
- Quality filtering to remove irrelevant results
- Fine-tuning search accuracy vs. speed tradeoffs

### ChromaConfig

Configuration options specific to the Chroma vector store.

**Filter Syntax Examples:**

```typescript
// Simple equality
where: { "category": "technical" }

// Operators
where: { "price": { "$gt": 100 } }

// Multiple conditions
where: {
  "category": "electronics",
  "inStock": true
}

// Document content filtering
whereDocument: { "$contains": "API documentation" }
```

**Use Cases:**

- Advanced metadata filtering
- Content-based document filtering
- Complex query combinations

## Usage Examples

### Basic Database Configuration

```typescript
import { createVectorQueryTool } from '@mastra/rag';

const vectorTool = createVectorQueryTool({
  vectorStoreName: 'pinecone',
  indexName: 'documents',
  model: embedModel,
  databaseConfig: {
    pinecone: {
      namespace: 'production'
    }
  }
});
```

### Runtime Configuration Override

```typescript
import { RequestContext } from '@mastra/core/request-context';

// Initial configuration
const vectorTool = createVectorQueryTool({
  vectorStoreName: 'pinecone',
  indexName: 'documents',
  model: embedModel,
  databaseConfig: {
    pinecone: {
      namespace: 'development'
    }
  }
});

// Override at runtime
const requestContext = new RequestContext();
requestContext.set('databaseConfig', {
  pinecone: {
    namespace: 'production'
  }
});

await vectorTool.execute(
  { queryText: 'search query' },
  { mastra, requestContext }
);
```

### Multi-Database Configuration

```typescript
const vectorTool = createVectorQueryTool({
  vectorStoreName: 'dynamic', // Will be determined at runtime
  indexName: 'documents',
  model: embedModel,
  databaseConfig: {
    pinecone: {
      namespace: 'default'
    },
    pgvector: {
      minScore: 0.8,
      ef: 150
    },
    chroma: {
      where: { 'type': 'documentation' }
    }
  }
});
```

> **Note:** **Multi-Database Support**: When you configure multiple databases, only the configuration matching the actual vector store being used will be applied.

### Performance Tuning

```typescript
// High accuracy configuration
const highAccuracyTool = createVectorQueryTool({
  vectorStoreName: 'postgres',
  indexName: 'embeddings',
  model: embedModel,
  databaseConfig: {
    pgvector: {
      ef: 400, // High accuracy
      probes: 20, // High recall
      minScore: 0.85 // High quality threshold
    }
  }
});

// High speed configuration
const highSpeedTool = createVectorQueryTool({
  vectorStoreName: 'postgres',
  indexName: 'embeddings',
  model: embedModel,
  databaseConfig: {
    pgvector: {
      ef: 50, // Lower accuracy, faster
      probes: 3, // Lower recall, faster
      minScore: 0.6 // Lower quality threshold
    }
  }
});
```

## Extensibility

The `DatabaseConfig` type is designed to be extensible. To add support for a new vector database:

```typescript
// 1. Define the configuration interface
export interface NewDatabaseConfig {
  customParam1?: string;
  customParam2?: number;
}

// 2. Extend the DatabaseConfig type
export type DatabaseConfig = {
  pinecone?: PineconeConfig;
  pgvector?: PgVectorConfig;
  chroma?: ChromaConfig;
  newdatabase?: NewDatabaseConfig;
  [key: string]: any;
};

// 3. Use in a vector query tool
const vectorTool = createVectorQueryTool({
  vectorStoreName: "newdatabase",
  indexName: "documents",
  model: embedModel,
  databaseConfig: {
    newdatabase: {
      customParam1: "value",
      customParam2: 42,
    },
  },
});
```

## Best Practices

1. **Environment Configuration**: Use different namespaces or configurations for different environments
2. **Performance Tuning**: Start with default values and adjust based on your specific needs
3. **Quality Filtering**: Use minScore to filter out low-quality results
4. **Runtime Flexibility**: Override configurations at runtime for dynamic scenarios
5. **Documentation**: Document your specific configuration choices for team members

## Migration Guide

Existing vector query tools continue to work without changes. To add database configurations:

```diff
const vectorTool = createVectorQueryTool({
  vectorStoreName: 'pinecone',
  indexName: 'documents',
  model: embedModel,
+ databaseConfig: {
+   pinecone: {
+     namespace: 'production'
+   }
+ }
});
```

## Related

- [createVectorQueryTool()](https://mastra.ai/reference/tools/vector-query-tool)
- [Hybrid Vector Search](https://mastra.ai/docs/rag/retrieval#metadata-filtering)
- [Metadata Filters](https://mastra.ai/reference/rag/metadata-filters)

---
## Reference: MDocument

> Documentation for the MDocument class in Mastra, which handles document processing and chunking.

The `MDocument` class processes documents for RAG applications. Its main methods are `.chunk()` and `.extractMetadata()`.

## Constructor

## Static Methods

### fromText()

Creates a document from plain text content.

```typescript
static fromText(text: string, metadata?: Record<string, any>): MDocument
```

### fromHTML()

Creates a document from HTML content.

```typescript
static fromHTML(html: string, metadata?: Record<string, any>): MDocument
```

### fromMarkdown()

Creates a document from Markdown content.

```typescript
static fromMarkdown(markdown: string, metadata?: Record<string, any>): MDocument
```

### fromJSON()

Creates a document from JSON content.

```typescript
static fromJSON(json: string, metadata?: Record<string, any>): MDocument
```

## Instance Methods

### chunk()

Splits the document into chunks and optionally extracts metadata.

```typescript
async chunk(params?: ChunkParams): Promise<Chunk[]>
```

See the [chunk() reference](./chunk) for detailed options.

### getDocs()

Returns the array of processed document chunks.

```typescript
getDocs(): Chunk[]
```

### getText()

Returns the array of text strings from the chunks.

```typescript
getText(): string[]
```

### getMetadata()

Returns the array of metadata objects from the chunks.

```typescript
getMetadata(): Record<string, any>[]
```

### extractMetadata()

Extracts metadata using the specified extractors. See the [ExtractParams reference](./extract-params) for details.

```typescript
async extractMetadata(params: ExtractParams): Promise<MDocument>
```

## Examples

```typescript
import { MDocument } from "@mastra/rag";

// Create a document from text
const doc = MDocument.fromText("Your content here");

// Split into chunks with metadata extraction
const chunks = await doc.chunk({
  strategy: "markdown",
  headers: [
    ["#", "title"],
    ["##", "section"],
  ],
  extract: {
    summary: true, // Extract summaries with default settings
    keywords: true, // Extract keywords with default settings
  },
});

// Get processed chunks
const docs = doc.getDocs();
const texts = doc.getText();
const metadata = doc.getMetadata();
```

---
## Reference: ExtractParams

> Documentation for metadata extraction configuration in Mastra.

`ExtractParams` configures metadata extraction from document chunks using LLM analysis.

## Example

```typescript
import { MDocument } from "@mastra/rag";

const doc = MDocument.fromText(text);
const chunks = await doc.chunk({
  extract: {
    title: true, // Extract titles using default settings
    summary: true, // Generate summaries using default settings
    keywords: true, // Extract keywords using default settings
  },
});

// Example output:
// chunks[0].metadata = {
//   documentTitle: "AI Systems Overview",
//   sectionSummary: "Overview of artificial intelligence concepts and applications",
//   excerptKeywords: "KEYWORDS: AI, machine learning, algorithms"
// }
```

## Parameters

The `extract` parameter accepts the following fields:

## Extractor Arguments

### TitleExtractorsArgs

### SummaryExtractArgs

### QuestionAnswerExtractArgs

### KeywordExtractArgs

### SchemaExtractArgs

## Advanced Example

```typescript
import { z } from "zod";

import { MDocument } from "@mastra/rag";

const doc = MDocument.fromText(text);
const chunks = await doc.chunk({
  extract: {
    // Title extraction with custom settings
    title: {
      nodes: 2, // Extract 2 title nodes
      nodeTemplate: "Generate a title for this: {context}",
      combineTemplate: "Combine these titles: {context}",
    },

    // Summary extraction with custom settings
    summary: {
      summaries: ["self"], // Generate summaries for the current chunk
      promptTemplate: "Summarize this: {context}",
    },

    // Question generation with custom settings
    questions: {
      questions: 3, // Generate 3 questions
      promptTemplate: "Generate {numQuestions} questions about: {context}",
      embeddingOnly: false,
    },

    // Keyword extraction with custom settings
    keywords: {
      keywords: 5, // Extract 5 keywords
      promptTemplate: "Extract {maxKeywords} key terms from: {context}",
    },

    // Schema extraction with Zod
    schema: {
      schema: z.object({
        productName: z.string(),
        category: z.enum(["electronics", "clothing"]),
      }),
      instructions: "Extract product information.",
      metadataKey: "product",
    },
  },
});

// Example output:
// chunks[0].metadata = {
//   documentTitle: "AI in Modern Computing",
//   sectionSummary: "Overview of AI concepts and their applications in computing",
//   questionsThisExcerptCanAnswer: "1. What is machine learning?\n2. How do neural networks work?",
//   excerptKeywords: "1. Machine learning\n2. Neural networks\n3. Training data",
//   product: {
//     productName: "Neural Net 2000",
//     category: "electronics"
//   }
// }
```

## Document Grouping for Title Extraction

When using the `TitleExtractor`, you can group multiple chunks together for title extraction by specifying a shared `docId` in the `metadata` field of each chunk. All chunks with the same `docId` will receive the same extracted title. If no `docId` is set, each chunk is treated as its own document for title extraction.

**Example:**

```ts
import { MDocument } from "@mastra/rag";

const doc = new MDocument({
  docs: [
    { text: "chunk 1", metadata: { docId: "docA" } },
    { text: "chunk 2", metadata: { docId: "docA" } },
    { text: "chunk 3", metadata: { docId: "docB" } },
  ],
  type: "text",
});

await doc.extractMetadata({ title: true });
// The first two chunks will share a title, while the third chunk will be assigned a separate title.
```

---
## Reference: rerank()

> Documentation for the rerank function in Mastra, which provides advanced reranking capabilities for vector search results.

The `rerank()` function provides advanced reranking capabilities for vector search results by combining semantic relevance, vector similarity, and position-based scoring.

```typescript
function rerank(
  results: QueryResult[],
  query: string,
  modelConfig: ModelConfig,
  options?: RerankerFunctionOptions,
): Promise<RerankResult[]>;
```

## Usage Example

```typescript
import { rerank } from "@mastra/rag";

const model = "openai/gpt-5.1";

const rerankedResults = await rerank(
  vectorSearchResults,
  "How do I deploy to production?",
  model,
  {
    weights: {
      semantic: 0.5,
      vector: 0.3,
      position: 0.2,
    },
    topK: 3,
  },
);
```
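The `weights` option blends three per-result signals into a single score. A sketch of how such a weighted combination behaves (the field names `semantic`, `vector`, and `position` follow the options above; the normalization details are assumptions, not the library's exact formula):

```typescript
// Blend per-result scores with the configured weights.
interface ScoreParts {
  semantic: number; // relevance judged by the reranking model, 0..1
  vector: number; // original vector-similarity score, 0..1
  position: number; // position-based score, higher for earlier results
}

function combineScores(
  parts: ScoreParts,
  weights: { semantic: number; vector: number; position: number },
): number {
  return (
    weights.semantic * parts.semantic +
    weights.vector * parts.vector +
    weights.position * parts.position
  );
}

// Example position score: earlier results keep more credit.
function positionScore(index: number, total: number): number {
  return total <= 1 ? 1 : 1 - index / (total - 1);
}
```

With the weights in the example above, a result's final score leans mostly on the model's semantic judgment, with the vector score and original position acting as tiebreakers.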
## Parameters

The `rerank` function accepts any `LanguageModel` from the Vercel AI SDK. When using the Cohere model `rerank-v3.5`, it will automatically use Cohere's reranking capabilities.

> **Note:** For semantic scoring to work properly during re-ranking, each result must include the text content in its `metadata.text` field.

### RerankerFunctionOptions

## Returns

The function returns an array of `RerankResult` objects:

### ScoringDetails

## Related

- [createVectorQueryTool](../tools/vector-query-tool)

---
## Reference: rerankWithScorer()

> Documentation for the rerankWithScorer function in Mastra, which provides advanced reranking capabilities for vector search results.

The `rerankWithScorer()` function provides advanced reranking capabilities for vector search results by combining semantic relevance, vector similarity, and position-based scoring.

```typescript
function rerankWithScorer({
  results: QueryResult[],
  query: string,
  scorer: RelevanceScoreProvider,
  options?: RerankerFunctionOptions,
}): Promise<RerankResult[]>;
```

## Usage Example

```typescript
import { rerankWithScorer as rerank, CohereRelevanceScorer } from "@mastra/rag";

const scorer = new CohereRelevanceScorer("rerank-v3.5");

const rerankedResults = await rerank({
  results: vectorSearchResults,
  query: "How do I deploy to production?",
  scorer,
  options: {
    weights: {
      semantic: 0.5,
      vector: 0.3,
      position: 0.2,
    },
    topK: 3,
  },
});
```

## Parameters

The `rerankWithScorer` function accepts any `RelevanceScoreProvider` from `@mastra/rag`.
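Besides the built-in `CohereRelevanceScorer`, any object satisfying the scorer contract can be plugged in. Below is a toy keyword-overlap scorer; the `getRelevanceScore(query, text)` method shape is an assumption based on how the scorer is used here, so check the package's exported types before relying on it:

```typescript
// Toy relevance scorer: fraction of query terms that appear in the text.
// Illustrative only; a real provider would call a reranking model.
interface RelevanceScoreProviderLike {
  getRelevanceScore(query: string, text: string): Promise<number>;
}

class KeywordOverlapScorer implements RelevanceScoreProviderLike {
  async getRelevanceScore(query: string, text: string): Promise<number> {
    const terms = query.toLowerCase().split(/\s+/).filter(Boolean);
    if (terms.length === 0) return 0;
    const haystack = text.toLowerCase();
    const hits = terms.filter((t) => haystack.includes(t)).length;
    return hits / terms.length; // 0..1, higher means more query terms matched
  }
}
```

A scorer like this could stand in during tests where calling a hosted reranking model is undesirable.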
> **Note:** For semantic scoring to work properly during re-ranking, each result must include the text content in its `metadata.text` field.

### RerankerFunctionOptions

## Returns

The function returns an array of `RerankResult` objects:

### ScoringDetails

## Related

- [createVectorQueryTool](../tools/vector-query-tool)