@mastra/rag 2.1.2 → 2.1.3-alpha.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +11 -0
- package/LICENSE.md +15 -0
- package/dist/docs/SKILL.md +3 -3
- package/dist/docs/assets/SOURCE_MAP.json +1 -1
- package/dist/docs/references/docs-rag-chunking-and-embedding.md +5 -5
- package/dist/docs/references/docs-rag-graph-rag.md +2 -2
- package/dist/docs/references/docs-rag-overview.md +2 -2
- package/dist/docs/references/docs-rag-retrieval.md +16 -16
- package/dist/docs/references/reference-rag-chunk.md +40 -40
- package/dist/docs/references/reference-rag-database-config.md +19 -15
- package/dist/docs/references/reference-rag-document.md +13 -13
- package/dist/docs/references/reference-rag-extract-params.md +31 -31
- package/dist/docs/references/reference-rag-graph-rag.md +16 -16
- package/dist/docs/references/reference-rag-rerank.md +28 -20
- package/dist/docs/references/reference-rag-rerankWithScorer.md +27 -19
- package/dist/docs/references/reference-tools-document-chunker-tool.md +11 -11
- package/dist/docs/references/reference-tools-graph-rag-tool.md +23 -25
- package/dist/docs/references/reference-tools-vector-query-tool.md +47 -35
- package/dist/document/validation.d.ts.map +1 -1
- package/dist/index.cjs +6 -5
- package/dist/index.cjs.map +1 -1
- package/dist/index.js +6 -5
- package/dist/index.js.map +1 -1
- package/dist/tools/document-chunker.d.ts +1 -3
- package/dist/tools/document-chunker.d.ts.map +1 -1
- package/dist/tools/graph-rag.d.ts +5 -19
- package/dist/tools/graph-rag.d.ts.map +1 -1
- package/dist/tools/vector-query.d.ts +5 -19
- package/dist/tools/vector-query.d.ts.map +1 -1
- package/dist/utils/tool-schemas.d.ts +9 -47
- package/dist/utils/tool-schemas.d.ts.map +1 -1
- package/package.json +9 -9
--- package/dist/docs/references/reference-rag-extract-params.md
+++ package/dist/docs/references/reference-rag-extract-params.md
@@ -28,65 +28,65 @@ const chunks = await doc.chunk({
 
 The `extract` parameter accepts the following fields:
 
-**title
+**title** (`boolean | TitleExtractorsArgs`): Enable title extraction. Set to true for default settings, or provide custom configuration.
 
-**summary
+**summary** (`boolean | SummaryExtractArgs`): Enable summary extraction. Set to true for default settings, or provide custom configuration.
 
-**questions
+**questions** (`boolean | QuestionAnswerExtractArgs`): Enable question generation. Set to true for default settings, or provide custom configuration.
 
-**keywords
+**keywords** (`boolean | KeywordExtractArgs`): Enable keyword extraction. Set to true for default settings, or provide custom configuration.
 
-**schema
+**schema** (`SchemaExtractArgs`): Enable structured metadata extraction using a Zod schema.
 
-## Extractor
+## Extractor arguments
 
-### TitleExtractorsArgs
+### `TitleExtractorsArgs`
 
-**llm
+**llm** (`MastraLanguageModel`): AI SDK language model to use for title extraction
 
-**nodes
+**nodes** (`number`): Number of title nodes to extract
 
-**nodeTemplate
+**nodeTemplate** (`string`): Custom prompt template for title node extraction. Must include {context} placeholder
 
-**combineTemplate
+**combineTemplate** (`string`): Custom prompt template for combining titles. Must include {context} placeholder
 
-### SummaryExtractArgs
+### `SummaryExtractArgs`
 
-**llm
+**llm** (`MastraLanguageModel`): AI SDK language model to use for summary extraction
 
-**summaries
+**summaries** (`('self' | 'prev' | 'next')[]`): List of summary types to generate. Can only include 'self' (current chunk), 'prev' (previous chunk), or 'next' (next chunk)
 
-**promptTemplate
+**promptTemplate** (`string`): Custom prompt template for summary generation. Must include {context} placeholder
 
-### QuestionAnswerExtractArgs
+### `QuestionAnswerExtractArgs`
 
-**llm
+**llm** (`MastraLanguageModel`): AI SDK language model to use for question generation
 
-**questions
+**questions** (`number`): Number of questions to generate
 
-**promptTemplate
+**promptTemplate** (`string`): Custom prompt template for question generation. Must include both {context} and {numQuestions} placeholders
 
-**embeddingOnly
+**embeddingOnly** (`boolean`): If true, only generate embeddings without actual questions
 
-### KeywordExtractArgs
+### `KeywordExtractArgs`
 
-**llm
+**llm** (`MastraLanguageModel`): AI SDK language model to use for keyword extraction
 
-**keywords
+**keywords** (`number`): Number of keywords to extract
 
-**promptTemplate
+**promptTemplate** (`string`): Custom prompt template for keyword extraction. Must include both {context} and {maxKeywords} placeholders
 
-### SchemaExtractArgs
+### `SchemaExtractArgs`
 
-**schema
+**schema** (`ZodType`): Zod schema defining the structure of the data to extract.
 
-**llm
+**llm** (`MastraLanguageModel`): AI SDK language model to use for extraction.
 
-**instructions
+**instructions** (`string`): Instructions for the LLM on what to extract.
 
-**metadataKey
+**metadataKey** (`string`): Key to nest extraction results under. If omitted, results are spread into the metadata object.
 
-## Advanced
+## Advanced example
 
 ```typescript
 import { MDocument } from '@mastra/rag'
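Taken together, the fields above form the `extract` object passed to `doc.chunk()`. A minimal sketch of that configuration shape, with illustrative values (the field names come from the diff above; the values are examples, not defaults):

```typescript
// Illustrative `extract` configuration shape; values are examples only.
const extract = {
  title: true,                              // boolean | TitleExtractorsArgs
  summary: { summaries: ['self', 'prev'] }, // SummaryExtractArgs
  questions: { questions: 3 },              // QuestionAnswerExtractArgs
  keywords: { keywords: 5 },                // KeywordExtractArgs
};
```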
@@ -145,7 +145,7 @@ const chunks = await doc.chunk({
 // }
 ```
 
-## Document
+## Document grouping for title extraction
 
 When using the `TitleExtractor`, you can group multiple chunks together for title extraction by specifying a shared `docId` in the `metadata` field of each chunk. All chunks with the same `docId` will receive the same extracted title. If no `docId` is set, each chunk is treated as its own document for title extraction.
 
--- package/dist/docs/references/reference-rag-graph-rag.md
+++ package/dist/docs/references/reference-rag-graph-rag.md
@@ -2,7 +2,7 @@
 
 The `GraphRAG` class implements a graph-based approach to retrieval augmented generation. It creates a knowledge graph from document chunks where nodes represent documents and edges represent semantic relationships, enabling both direct similarity matching and discovery of related content through graph traversal.
 
-## Basic
+## Basic usage
 
 ```typescript
 import { GraphRAG } from '@mastra/rag'
@@ -24,15 +24,15 @@ const results = await graphRag.query({
 })
 ```
 
-## Constructor
+## Constructor parameters
 
-**dimension
+**dimension** (`number`): Dimension of the embedding vectors (Default: `1536`)
 
-**threshold
+**threshold** (`number`): Similarity threshold for creating edges between nodes (0-1) (Default: `0.7`)
 
 ## Methods
 
-### createGraph
+### `createGraph`
 
 Creates a knowledge graph from document chunks and their embeddings.
 
@@ -42,9 +42,9 @@ createGraph(chunks: GraphChunk[], embeddings: GraphEmbedding[]): void
 
 #### Parameters
 
-**chunks
+**chunks** (`GraphChunk[]`): Array of document chunks with text and metadata
 
-**embeddings
+**embeddings** (`GraphEmbedding[]`): Array of embeddings corresponding to chunks
 
 ### query
 
@@ -66,27 +66,27 @@ query({
 
 #### Parameters
 
-**query
+**query** (`number[]`): Query embedding vector
 
-**topK
+**topK** (`number`): Number of results to return (Default: `10`)
 
-**randomWalkSteps
+**randomWalkSteps** (`number`): Number of steps in random walk (Default: `100`)
 
-**restartProb
+**restartProb** (`number`): Probability of restarting walk from query node (Default: `0.15`)
 
 #### Returns
 
 Returns an array of `RankedNode` objects, where each node contains:
 
-**id
+**id** (`string`): Unique identifier for the node
 
-**content
+**content** (`string`): Text content of the document chunk
 
-**metadata
+**metadata** (`Record<string, any>`): Additional metadata associated with the chunk
 
-**score
+**score** (`number`): Combined relevance score from graph traversal
 
-## Advanced
+## Advanced example
 
 ```typescript
 const graphRag = new GraphRAG({
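The `threshold` constructor parameter governs edge creation: two chunks are linked when the similarity of their embeddings reaches the threshold. A self-contained sketch of that check, assuming cosine similarity (illustrative only, not the library's internal code):

```typescript
// Sketch of the edge-creation rule implied by `threshold` (0-1):
// connect two nodes when the cosine similarity of their embeddings
// is at or above the threshold. Assumes equal-length vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Default threshold mirrors the documented `0.7`.
function shouldConnect(a: number[], b: number[], threshold = 0.7): boolean {
  return cosineSimilarity(a, b) >= threshold;
}
```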
--- package/dist/docs/references/reference-rag-rerank.md
+++ package/dist/docs/references/reference-rag-rerank.md
@@ -11,12 +11,12 @@ function rerank(
 ): Promise<RerankResult[]>
 ```
 
-## Usage
+## Usage example
 
 ```typescript
 import { rerank } from '@mastra/rag'
 
-const model = 'openai/gpt-5.
+const model = 'openai/gpt-5.4'
 
 const rerankedResults = await rerank(vectorSearchResults, 'How do I deploy to production?', model, {
   weights: {
@@ -30,45 +30,53 @@ const rerankedResults = await rerank(vectorSearchResults, 'How do I deploy to pr
 
 ## Parameters
 
-**results
+**results** (`QueryResult[]`): The vector search results to rerank
 
-**query
+**query** (`string`): The search query text used to evaluate relevance
 
-**model
+**model** (`MastraLanguageModel`): The language model to use for reranking
 
-**options
+**options** (`RerankerFunctionOptions`): Options for the reranking model
 
-
+**options.weights** (`WeightConfig`): Weights for different scoring components (must add up to 1)
 
-
+**options.weights.semantic** (`number (default: 0.4)`): Weight for semantic relevance
+
+**options.weights.vector** (`number (default: 0.4)`): Weight for vector similarity
+
+**options.weights.position** (`number (default: 0.2)`): Weight for position-based scoring
 
-
+**options.queryEmbedding** (`number[]`): Embedding of the query
 
-**
+**options.topK** (`number`): Number of top results to return
 
-
+The rerank function accepts any LanguageModel from the Vercel AI SDK. When using the Cohere model `rerank-v3.5`, it will automatically use Cohere's reranking capabilities.
 
-**
+> **Note:** For semantic scoring to work properly during re-ranking, each result must include the text content in its `metadata.text` field.
 
 ## Returns
 
 The function returns an array of `RerankResult` objects:
 
-**result
+**result** (`QueryResult`): The original query result
+
+**score** (`number`): Combined reranking score (0-1)
+
+**details** (`ScoringDetails`): Detailed scoring information
 
-
+### `ScoringDetails`
 
-**
+**semantic** (`number`): Semantic relevance score (0-1)
 
-
+**vector** (`number`): Vector similarity score (0-1)
 
-**
+**position** (`number`): Position-based score (0-1)
 
-**
+**queryAnalysis** (`object`): Query analysis details
 
-**
+**queryAnalysis.magnitude**: Magnitude of the query
 
-**queryAnalysis
+**queryAnalysis.dominantFeatures**: Dominant features of the query
 
 ## Related
 
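Per the parameter docs above, the three weights default to 0.4/0.4/0.2 and must sum to 1, which suggests a weighted sum of the component scores. A hedged sketch of that arithmetic (the library's exact combination may differ):

```typescript
// Illustrative weighted combination of the three documented scoring
// components. Default weights mirror the docs: semantic 0.4, vector 0.4,
// position 0.2 (summing to 1).
interface ScoreParts {
  semantic: number;
  vector: number;
  position: number;
}

function combineScores(
  parts: ScoreParts,
  weights: ScoreParts = { semantic: 0.4, vector: 0.4, position: 0.2 },
): number {
  return (
    parts.semantic * weights.semantic +
    parts.vector * weights.vector +
    parts.position * weights.position
  );
}
```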
--- package/dist/docs/references/reference-rag-rerankWithScorer.md
+++ package/dist/docs/references/reference-rag-rerankWithScorer.md
@@ -11,7 +11,7 @@ function rerankWithScorer({
 }): Promise<RerankResult[]>;
 ```
 
-## Usage
+## Usage example
 
 ```typescript
 import { rerankWithScorer as rerank, CohereRelevanceScorer } from '@mastra/rag'
@@ -35,45 +35,53 @@ const rerankedResults = await rerank({
 
 ## Parameters
 
-**results
+**results** (`QueryResult[]`): The vector search results to rerank
 
-**query
+**query** (`string`): The search query text used to evaluate relevance
 
-**scorer
+**scorer** (`RelevanceScoreProvider`): The relevance scorer to use for reranking
 
-**options
+**options** (`RerankerFunctionOptions`): Options for the reranking model
 
-
+**options.weights** (`WeightConfig`): Weights for different scoring components (must add up to 1)
 
-
+**options.weights.semantic** (`number (default: 0.4)`): Weight for semantic relevance
+
+**options.weights.vector** (`number (default: 0.4)`): Weight for vector similarity
+
+**options.weights.position** (`number (default: 0.2)`): Weight for position-based scoring
 
-
+**options.queryEmbedding** (`number[]`): Embedding of the query
 
-**
+**options.topK** (`number`): Number of top results to return
 
-
+The `rerankWithScorer` function accepts any `RelevanceScoreProvider` from `@mastra/rag`.
 
-**
+> **Note:** For semantic scoring to work properly during re-ranking, each result must include the text content in its `metadata.text` field.
 
 ## Returns
 
 The function returns an array of `RerankResult` objects:
 
-**result
+**result** (`QueryResult`): The original query result
+
+**score** (`number`): Combined reranking score (0-1)
+
+**details** (`ScoringDetails`): Detailed scoring information
 
-
+### `ScoringDetails`
 
-**
+**semantic** (`number`): Semantic relevance score (0-1)
 
-
+**vector** (`number`): Vector similarity score (0-1)
 
-**
+**position** (`number`): Position-based score (0-1)
 
-**
+**queryAnalysis** (`object`): Query analysis details
 
-**
+**queryAnalysis.magnitude**: Magnitude of the query
 
-**queryAnalysis
+**queryAnalysis.dominantFeatures**: Dominant features of the query
 
 ## Related
 
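The `scorer` parameter only needs to produce a relevance score for a query/text pair. A toy stand-in scorer, assuming a single-method provider shape (the method name `getRelevanceScore` and the interface sketch are assumptions for illustration, not copied from `@mastra/rag`):

```typescript
// Hypothetical sketch of a custom relevance scorer. The single-method
// shape below is an assumption about RelevanceScoreProvider, used only
// to illustrate what a scorer must do: map (query, text) to a 0-1 score.
interface RelevanceScoreProviderSketch {
  getRelevanceScore(query: string, text: string): Promise<number>;
}

// Toy scorer: fraction of query words that appear in the text.
const wordOverlapScorer: RelevanceScoreProviderSketch = {
  async getRelevanceScore(query, text) {
    const words = query.toLowerCase().split(/\s+/).filter(Boolean);
    if (words.length === 0) return 0;
    const haystack = text.toLowerCase();
    const hits = words.filter((w) => haystack.includes(w)).length;
    return hits / words.length;
  },
};
```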
--- package/dist/docs/references/reference-tools-document-chunker-tool.md
+++ package/dist/docs/references/reference-tools-document-chunker-tool.md
@@ -2,7 +2,7 @@
 
 The `createDocumentChunkerTool()` function creates a tool for splitting documents into smaller chunks for efficient processing and retrieval. It supports different chunking strategies and configurable parameters.
 
-## Basic
+## Basic usage
 
 ```typescript
 import { createDocumentChunkerTool, MDocument } from '@mastra/rag'
@@ -27,25 +27,25 @@ const { chunks } = await chunker.execute()
 
 ## Parameters
 
-**doc
+**doc** (`MDocument`): The document to be chunked
 
-**params
+**params** (`ChunkParams`): Configuration parameters for chunking (Default: `Default chunking parameters`)
 
-### ChunkParams
+### `ChunkParams`
 
-**strategy
+**strategy** (`'recursive'`): The chunking strategy to use (Default: `'recursive'`)
 
-**size
+**size** (`number`): Target size of each chunk in tokens/characters (Default: `512`)
 
-**overlap
+**overlap** (`number`): Number of overlapping tokens/characters between chunks (Default: `50`)
 
-**separator
+**separator** (`string`): Character(s) to use as chunk separator (Default: `'\n'`)
 
 ## Returns
 
-**chunks
+**chunks** (`DocumentChunk[]`): Array of document chunks with their content and metadata
 
-## Example with
+## Example with custom parameters
 
 ```typescript
 const technicalDoc = new MDocument({
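With the documented defaults, consecutive chunks advance by `size - overlap` characters, so each chunk shares its last 50 characters with the next. A simplified windowing sketch of how `size` and `overlap` interact (the actual `'recursive'` strategy also splits on separators; this is not the library's implementation):

```typescript
// Simplified fixed-window chunking to illustrate the size/overlap
// arithmetic: each window starts `size - overlap` characters after
// the previous one, so adjacent chunks share `overlap` characters.
function windowChunks(text: string, size = 512, overlap = 50): string[] {
  const step = size - overlap;
  const chunks: string[] = [];
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + size));
    if (start + size >= text.length) break; // last window reached the end
  }
  return chunks;
}
```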
@@ -74,7 +74,7 @@ chunks.forEach((chunk, index) => {
 })
 ```
 
-## Tool
+## Tool details
 
 The chunker is created as a Mastra tool with the following properties:
 
--- package/dist/docs/references/reference-tools-graph-rag-tool.md
+++ package/dist/docs/references/reference-tools-graph-rag-tool.md
@@ -2,7 +2,7 @@
 
 The `createGraphRAGTool()` creates a tool that enhances RAG by building a graph of semantic relationships between documents. It uses the `GraphRAG` system under the hood to provide graph-based retrieval, finding relevant content through both direct similarity and connected relationships.
 
-## Usage
+## Usage example
 
 ```typescript
 import { createGraphRAGTool } from '@mastra/rag'
@@ -25,45 +25,43 @@ const graphTool = createGraphRAGTool({
 
 > **Note:** **Parameter Requirements:** Most fields can be set at creation as defaults. Some fields can be overridden at runtime via the request context or input. If a required field is missing from both creation and runtime, an error will be thrown. Note that `model`, `id`, and `description` can only be set at creation time.
 
-**id
+**id** (`string`): Custom ID for the tool. By default: 'GraphRAG {vectorStoreName} {indexName} Tool'. (Set at creation only.)
 
-**description
+**description** (`string`): Custom description for the tool. By default: 'Access and analyze relationships between information in the knowledge base to answer complex questions about connections and patterns.' (Set at creation only.)
 
-**vectorStoreName
+**vectorStoreName** (`string`): Name of the vector store to query. (Can be set at creation or overridden at runtime.)
 
-**indexName
+**indexName** (`string`): Name of the index within the vector store. (Can be set at creation or overridden at runtime.)
 
-**model
+**model** (`EmbeddingModel`): Embedding model to use for vector search. (Set at creation only.)
 
-**enableFilter
+**enableFilter** (`boolean`): Enable filtering of results based on metadata. (Set at creation only, but will be automatically enabled if a filter is provided in the request context.) (Default: `false`)
 
-**includeSources
+**includeSources** (`boolean`): Include the full retrieval objects in the results. (Can be set at creation or overridden at runtime.) (Default: `true`)
 
-**graphOptions
+**graphOptions** (`GraphOptions`): Configuration for the graph-based retrieval (Default: `Default graph options`)
 
-**
+**graphOptions.dimension** (`number`): Dimension of the embedding vectors
 
-**
+**graphOptions.threshold** (`number`): Similarity threshold for creating edges between nodes (0-1)
 
-
+**graphOptions.randomWalkSteps** (`number`): Number of steps in random walk for graph traversal. (Can be set at creation or overridden at runtime.)
 
-**
+**graphOptions.restartProb** (`number`): Probability of restarting random walk from query node. (Can be set at creation or overridden at runtime.)
 
-**
+**providerOptions** (`Record<string, Record<string, any>>`): Provider-specific options for the embedding model (e.g., outputDimensionality). **Important**: Only works with AI SDK EmbeddingModelV2 models. For V1 models, configure options when creating the model itself.
 
-**
-
-**restartProb?:** (`number`): Probability of restarting random walk from query node. (Can be set at creation or overridden at runtime.) (Default: `0.15`)
+**vectorStore** (`MastraVector | VectorStoreResolver`): Direct vector store instance or a resolver function for dynamic selection. Use a function for multi-tenant applications where the vector store is selected based on request context. When provided, `vectorStoreName` becomes optional.
 
 ## Returns
 
 The tool returns an object with:
 
-**relevantContext
+**relevantContext** (`string`): Combined text from the most relevant document chunks, retrieved using graph-based ranking
 
-**sources
+**sources** (`QueryResult[]`): Array of full retrieval result objects. Each object contains all information needed to reference the original document, chunk, and similarity score.
 
-### QueryResult object structure
+### `QueryResult` object structure
 
 ```typescript
 {
@@ -75,7 +73,7 @@ The tool returns an object with:
 }
 ```
 
-## Default
+## Default tool description
 
 The default description focuses on:
 
@@ -83,7 +81,7 @@ The default description focuses on:
 - Finding patterns and connections
 - Answering complex queries
 
-## Advanced
+## Advanced example
 
 ```typescript
 const graphTool = createGraphRAGTool({
@@ -99,7 +97,7 @@ const graphTool = createGraphRAGTool({
 })
 ```
 
-## Example with
+## Example with custom description
 
 ```typescript
 const graphTool = createGraphRAGTool({
@@ -113,7 +111,7 @@ const graphTool = createGraphRAGTool({
 
 This example shows how to customize the tool description for a specific use case while maintaining its core purpose of relationship analysis.
 
-## Example: Using
+## Example: Using request context
 
 ```typescript
 const graphTool = createGraphRAGTool({
@@ -149,7 +147,7 @@ For more information on request context, please see:
 - [Agent Request Context](https://mastra.ai/docs/server/request-context)
 - [Request Context](https://mastra.ai/docs/server/request-context)
 
-## Dynamic
+## Dynamic vector store for multi-tenant applications
 
 For multi-tenant applications where each tenant has isolated data, you can pass a resolver function instead of a static vector store:
 
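The new `vectorStore` parameter above accepts a resolver function for exactly this case: picking a store per tenant at request time. A hypothetical sketch with a stand-in store type (the real `MastraVector` and `VectorStoreResolver` signatures are not shown in this diff; all names here are illustrative):

```typescript
// Hypothetical multi-tenant resolver sketch. `FakeStore` stands in for
// a real vector store instance; the lookup-by-tenant pattern is the point.
type FakeStore = { name: string };

const tenantStores = new Map<string, FakeStore>([
  ['acme', { name: 'pgvector-acme' }],
  ['globex', { name: 'pgvector-globex' }],
]);

// Resolve the store for a tenant, failing loudly if none is configured.
function resolveStore(tenantId: string): FakeStore {
  const store = tenantStores.get(tenantId);
  if (!store) {
    throw new Error(`No vector store configured for tenant ${tenantId}`);
  }
  return store;
}
```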