@mastra/mcp-docs-server 0.13.30-alpha.0 → 0.13.30-alpha.2
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.docs/organized/changelogs/%40mastra%2Fagent-builder.md +9 -9
- package/.docs/organized/changelogs/%40mastra%2Fai-sdk.md +15 -0
- package/.docs/organized/changelogs/%40mastra%2Fclient-js.md +15 -15
- package/.docs/organized/changelogs/%40mastra%2Fcore.md +35 -35
- package/.docs/organized/changelogs/%40mastra%2Fdeployer-cloud.md +17 -17
- package/.docs/organized/changelogs/%40mastra%2Fdeployer.md +24 -24
- package/.docs/organized/changelogs/%40mastra%2Fmcp-docs-server.md +15 -15
- package/.docs/organized/changelogs/%40mastra%2Fmemory.md +16 -16
- package/.docs/organized/changelogs/%40mastra%2Fpg.md +16 -16
- package/.docs/organized/changelogs/%40mastra%2Fplayground-ui.md +31 -31
- package/.docs/organized/changelogs/%40mastra%2Freact.md +20 -0
- package/.docs/organized/changelogs/%40mastra%2Fserver.md +15 -15
- package/.docs/organized/changelogs/create-mastra.md +19 -19
- package/.docs/organized/changelogs/mastra.md +27 -27
- package/.docs/organized/code-examples/agent.md +0 -1
- package/.docs/organized/code-examples/agui.md +2 -2
- package/.docs/organized/code-examples/client-side-tools.md +2 -2
- package/.docs/raw/agents/adding-voice.mdx +118 -25
- package/.docs/raw/agents/agent-memory.mdx +73 -89
- package/.docs/raw/agents/guardrails.mdx +1 -1
- package/.docs/raw/agents/overview.mdx +39 -7
- package/.docs/raw/agents/using-tools.mdx +95 -0
- package/.docs/raw/deployment/overview.mdx +9 -11
- package/.docs/raw/frameworks/agentic-uis/ai-sdk.mdx +1 -1
- package/.docs/raw/frameworks/servers/express.mdx +2 -2
- package/.docs/raw/getting-started/installation.mdx +34 -85
- package/.docs/raw/getting-started/mcp-docs-server.mdx +13 -1
- package/.docs/raw/index.mdx +49 -14
- package/.docs/raw/observability/ai-tracing/exporters/otel.mdx +3 -0
- package/.docs/raw/reference/observability/ai-tracing/exporters/otel.mdx +6 -0
- package/.docs/raw/reference/scorers/answer-relevancy.mdx +105 -7
- package/.docs/raw/reference/scorers/answer-similarity.mdx +266 -16
- package/.docs/raw/reference/scorers/bias.mdx +107 -6
- package/.docs/raw/reference/scorers/completeness.mdx +131 -8
- package/.docs/raw/reference/scorers/content-similarity.mdx +107 -8
- package/.docs/raw/reference/scorers/context-precision.mdx +234 -18
- package/.docs/raw/reference/scorers/context-relevance.mdx +418 -35
- package/.docs/raw/reference/scorers/faithfulness.mdx +122 -8
- package/.docs/raw/reference/scorers/hallucination.mdx +125 -8
- package/.docs/raw/reference/scorers/keyword-coverage.mdx +141 -9
- package/.docs/raw/reference/scorers/noise-sensitivity.mdx +478 -6
- package/.docs/raw/reference/scorers/prompt-alignment.mdx +351 -102
- package/.docs/raw/reference/scorers/textual-difference.mdx +134 -6
- package/.docs/raw/reference/scorers/tone-consistency.mdx +133 -0
- package/.docs/raw/reference/scorers/tool-call-accuracy.mdx +422 -65
- package/.docs/raw/reference/scorers/toxicity.mdx +125 -7
- package/.docs/raw/reference/workflows/workflow.mdx +33 -0
- package/.docs/raw/scorers/custom-scorers.mdx +244 -3
- package/.docs/raw/scorers/overview.mdx +8 -38
- package/.docs/raw/server-db/middleware.mdx +5 -2
- package/.docs/raw/server-db/runtime-context.mdx +178 -0
- package/.docs/raw/streaming/workflow-streaming.mdx +5 -1
- package/.docs/raw/tools-mcp/overview.mdx +25 -7
- package/.docs/raw/workflows/overview.mdx +28 -1
- package/CHANGELOG.md +14 -0
- package/package.json +4 -4
- package/.docs/raw/agents/runtime-context.mdx +0 -106
- package/.docs/raw/agents/using-tools-and-mcp.mdx +0 -241
- package/.docs/raw/getting-started/model-providers.mdx +0 -63
- package/.docs/raw/tools-mcp/runtime-context.mdx +0 -63
- /package/.docs/raw/{evals → scorers/evals-old-api}/custom-eval.mdx +0 -0
- /package/.docs/raw/{evals → scorers/evals-old-api}/overview.mdx +0 -0
- /package/.docs/raw/{evals → scorers/evals-old-api}/running-in-ci.mdx +0 -0
- /package/.docs/raw/{evals → scorers/evals-old-api}/textual-evals.mdx +0 -0
- /package/.docs/raw/{server-db → workflows}/snapshots.mdx +0 -0
````diff
@@ -7,8 +7,6 @@ description: Documentation for the Hallucination Scorer in Mastra, which evaluat
 
 The `createHallucinationScorer()` function evaluates whether an LLM generates factually correct information by comparing its output against the provided context. This scorer measures hallucination by identifying direct contradictions between the context and the output.
 
-For a usage example, see the [Hallucination Examples](/examples/scorers/hallucination).
-
 ## Parameters
 
 The `createHallucinationScorer()` function accepts a single options object with the following properties:
````
````diff
@@ -117,16 +115,135 @@ Final score: `(hallucinated_statements / total_statements) * scale`
 
 ### Score interpretation
 
-
+A hallucination score between 0 and 1:
 
-
-- 0.
-- 0.5
-- 0.
-- 0.0
+- **0.0**: No hallucination — all claims match the context.
+- **0.3–0.4**: Low hallucination — a few contradictions.
+- **0.5–0.6**: Mixed hallucination — several contradictions.
+- **0.7–0.8**: High hallucination — many contradictions.
+- **0.9–1.0**: Complete hallucination — most or all claims contradict the context.
 
 **Note:** The score represents the degree of hallucination - lower scores indicate better factual alignment with the provided context
 
+## Examples
+
+### No hallucination example
+
+In this example, the response is fully aligned with the provided context. All claims are factually correct and directly supported by the source material, resulting in a low hallucination score.
+
+```typescript filename="src/example-no-hallucination.ts" showLineNumbers copy
+import { openai } from "@ai-sdk/openai";
+import { createHallucinationScorer } from "@mastra/evals/scorers/llm";
+
+const scorer = createHallucinationScorer({ model: openai("gpt-4o-mini"), options: {
+  context: [
+    "The iPhone was first released in 2007.",
+    "Steve Jobs unveiled it at Macworld.",
+    "The original model had a 3.5-inch screen."
+  ]
+}});
+
+const query = "When was the first iPhone released?";
+const response = "The iPhone was first released in 2007, when Steve Jobs unveiled it at Macworld. The original iPhone featured a 3.5-inch screen.";
+
+const result = await scorer.run({
+  input: [{ role: 'user', content: query }],
+  output: { text: response },
+});
+
+console.log(result);
+```
+
+#### No hallucination output
+
+The response receives a score of 0 because there are no contradictions. Every statement is consistent with the context, and no new or fabricated information has been introduced.
+
+```typescript
+{
+  score: 0,
+  reason: 'The score is 0 because none of the statements from the context were contradicted by the output.'
+}
+```
+
+### Mixed hallucination example
+
+In this example, the response includes both accurate and inaccurate claims. Some details align with the context, while others directly contradict it—such as inflated numbers or incorrect locations. These contradictions increase the hallucination score.
+
+```typescript filename="src/example-mixed-hallucination.ts" showLineNumbers copy
+import { openai } from "@ai-sdk/openai";
+import { createHallucinationScorer } from "@mastra/evals/scorers/llm";
+
+const scorer = createHallucinationScorer({ model: openai("gpt-4o-mini"), options: {
+  context: [
+    "The first Star Wars movie was released in 1977.",
+    "It was directed by George Lucas.",
+    "The film earned $775 million worldwide.",
+    "The movie was filmed in Tunisia and England."
+  ]
+}});
+
+const query = "Tell me about the first Star Wars movie.";
+const response = "The first Star Wars movie came out in 1977 and was directed by George Lucas. It made over $1 billion at the box office and was filmed entirely in California.";
+
+const result = await scorer.run({
+  input: [{ role: 'user', content: query }],
+  output: { text: response },
+});
+
+console.log(result);
+```
+
+#### Mixed hallucination output
+
+The scorer assigns a mid-range score because parts of the response conflict with the context. While some facts are correct, others are inaccurate or fabricated, reducing overall reliability.
+
+```typescript
+{
+  score: 0.5,
+  reason: 'The score is 0.5 because two out of four statements from the output were contradicted by claims in the context, indicating a balance of accurate and inaccurate information.'
+}
+```
+
+### Complete hallucination example
+
+In this example, the response contradicts every key fact in the context. None of the claims can be verified, and all presented details are factually incorrect.
+
+```typescript filename="src/example-complete-hallucination.ts" showLineNumbers copy
+import { openai } from "@ai-sdk/openai";
+import { createHallucinationScorer } from "@mastra/evals/scorers/llm";
+
+const scorer = createHallucinationScorer({ model: openai("gpt-4o-mini"), options: {
+  context: [
+    "The Wright brothers made their first flight in 1903.",
+    "The flight lasted 12 seconds.",
+    "It covered a distance of 120 feet."
+  ]
+}});
+
+const query = "When did the Wright brothers first fly?";
+const response = "The Wright brothers achieved their historic first flight in 1908. The flight lasted about 2 minutes and covered nearly a mile.";
+
+const result = await scorer.run({
+  input: [{ role: 'user', content: query }],
+  output: { text: response },
+});
+
+console.log(result);
+```
+
+#### Complete hallucination output
+
+The scorer assigns a score of 1 because every statement in the response conflicts with the context. The details are fabricated or inaccurate across the board.
+
+```typescript
+{
+  score: 1,
+  reason: 'The score is 1.0 because all three statements from the output directly contradict the context: the first flight was in 1903, not 1908; it lasted 12 seconds, not about 2 minutes; and it covered 120 feet, not nearly a mile.'
+}
+```
+
 ## Related
 
 - [Faithfulness Scorer](./faithfulness)
````
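The hunk above documents the score arithmetic as `(hallucinated_statements / total_statements) * scale`. As a minimal sketch of just that final step (the function name and inputs here are illustrative; the real scorer uses an LLM to extract and compare claims before this arithmetic applies):

```typescript
// Sketch of the documented hallucination formula:
// score = (hallucinated_statements / total_statements) * scale
// `contradicted` and `total` stand in for the claim counts the
// scorer's LLM analysis step would produce.
function hallucinationScore(
  contradicted: number,
  total: number,
  scale = 1,
): number {
  if (total === 0) return 0; // no claims to contradict
  return (contradicted / total) * scale;
}

// 2 of 4 claims contradicted mirrors the mixed example's 0.5 score.
console.log(hallucinationScore(2, 4));
```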
````diff
@@ -7,8 +7,6 @@ description: Documentation for the Keyword Coverage Scorer in Mastra, which eval
 
 The `createKeywordCoverageScorer()` function evaluates how well an LLM's output covers the important keywords from the input. It analyzes keyword presence and matches while ignoring common words and stop words.
 
-For a usage example, see the [Keyword Coverage Examples](/examples/scorers/keyword-coverage).
-
 ## Parameters
 
 The `createKeywordCoverageScorer()` function does not take any options.
````
````diff
@@ -42,6 +40,23 @@ This function returns an instance of the MastraScorer class. See the [MastraScor
 ]}
 />
 
+`.run()` returns a result in the following shape:
+
+```typescript
+{
+  runId: string,
+  extractStepResult: {
+    referenceKeywords: Set<string>,
+    responseKeywords: Set<string>
+  },
+  analyzeStepResult: {
+    totalKeywords: number,
+    matchedKeywords: number
+  },
+  score: number
+}
+```
+
 ## Scoring Details
 
 The scorer evaluates keyword coverage by matching keywords with the following features:
````
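The `.run()` result shape documented in the keyword-coverage hunk can also be written as a TypeScript type for quick reference. This is a sketch inferred from the documented fields, not an exported Mastra type:

```typescript
// Type sketch of the documented keyword-coverage `.run()` result.
// Field names come from the doc; `KeywordCoverageResult` itself is
// a hypothetical name for illustration only.
interface KeywordCoverageResult {
  runId: string;
  extractStepResult: {
    referenceKeywords: Set<string>;
    responseKeywords: Set<string>;
  };
  analyzeStepResult: {
    totalKeywords: number;
    matchedKeywords: number;
  };
  score: number;
}

// A sample value matching the "full coverage" output shown later.
const sample: KeywordCoverageResult = {
  runId: "run-1",
  extractStepResult: {
    referenceKeywords: new Set(["javascript", "frameworks", "react", "vue"]),
    responseKeywords: new Set(["javascript", "frameworks", "react", "vue"]),
  },
  analyzeStepResult: { totalKeywords: 4, matchedKeywords: 4 },
  score: 1,
};
console.log(sample.score);
```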
````diff
@@ -66,15 +81,15 @@ Final score: `(matched_keywords / total_keywords) * scale`
 
 ### Score interpretation
 
-
+A coverage score between 0 and 1:
 
-- 1.0
-- 0.7
-- 0.4
-- 0.1
-- 0.0
+- **1.0**: Complete coverage – all keywords present.
+- **0.7–0.9**: High coverage – most keywords included.
+- **0.4–0.6**: Partial coverage – some keywords present.
+- **0.1–0.3**: Low coverage – few keywords matched.
+- **0.0**: No coverage – no keywords found.
 
-
+### Special Cases
 
 The scorer handles several special cases:
 
````
````diff
@@ -84,6 +99,123 @@ The scorer handles several special cases:
 - Case differences: "JavaScript" matches "javascript"
 - Common words: Ignored in scoring to focus on meaningful keywords
 
+## Examples
+
+### Full coverage example
+
+In this example, the response fully reflects the key terms from the input. All required keywords are present, resulting in complete coverage with no omissions.
+
+```typescript filename="src/example-full-keyword-coverage.ts" showLineNumbers copy
+import { createKeywordCoverageScorer } from "@mastra/evals/scorers/code";
+
+const scorer = createKeywordCoverageScorer();
+
+const input = 'JavaScript frameworks like React and Vue';
+const output = 'Popular JavaScript frameworks include React and Vue for web development';
+
+const result = await scorer.run({
+  input: [{ role: 'user', content: input }],
+  output: { role: 'assistant', text: output },
+});
+
+console.log('Score:', result.score);
+console.log('AnalyzeStepResult:', result.analyzeStepResult);
+```
+
+#### Full coverage output
+
+A score of 1 indicates that all expected keywords were found in the response. The `analyzeStepResult` field confirms that the number of matched keywords equals the total number extracted from the input.
+
+```typescript
+{
+  score: 1,
+  analyzeStepResult: {
+    totalKeywords: 4,
+    matchedKeywords: 4
+  }
+}
+```
+
+### Partial coverage example
+
+In this example, the response includes some, but not all, of the important keywords from the input. The score reflects partial coverage, with key terms either missing or only partially matched.
+
+```typescript filename="src/example-partial-keyword-coverage.ts" showLineNumbers copy
+import { createKeywordCoverageScorer } from "@mastra/evals/scorers/code";
+
+const scorer = createKeywordCoverageScorer();
+
+const input = 'TypeScript offers interfaces, generics, and type inference';
+const output = 'TypeScript provides type inference and some advanced features';
+
+const result = await scorer.run({
+  input: [{ role: 'user', content: input }],
+  output: { role: 'assistant', text: output },
+});
+
+console.log('Score:', result.score);
+console.log('AnalyzeStepResult:', result.analyzeStepResult);
+```
+
+#### Partial coverage output
+
+A score of 0.5 indicates that only half of the expected keywords were found in the response. The `analyzeStepResult` field shows how many terms were matched compared to the total identified in the input.
+
+```typescript
+{
+  score: 0.5,
+  analyzeStepResult: {
+    totalKeywords: 6,
+    matchedKeywords: 3
+  }
+}
+```
+
+### Minimal coverage example
+
+In this example, the response includes very few of the important keywords from the input. The score reflects minimal coverage, with most key terms missing or unaccounted for.
+
+```typescript filename="src/example-minimal-keyword-coverage.ts" showLineNumbers copy
+import { createKeywordCoverageScorer } from "@mastra/evals/scorers/code";
+
+const scorer = createKeywordCoverageScorer();
+
+const input = 'Machine learning models require data preprocessing, feature engineering, and hyperparameter tuning';
+const output = 'Data preparation is important for models';
+
+const result = await scorer.run({
+  input: [{ role: 'user', content: input }],
+  output: { role: 'assistant', text: output },
+});
+
+console.log('Score:', result.score);
+console.log('AnalyzeStepResult:', result.analyzeStepResult);
+```
+
+#### Minimal coverage output
+
+A low score indicates that only a small number of the expected keywords were present in the response. The `analyzeStepResult` field highlights the gap between total and matched keywords, signaling insufficient coverage.
+
+```typescript
+{
+  score: 0.2,
+  analyzeStepResult: {
+    totalKeywords: 10,
+    matchedKeywords: 2
+  }
+}
+```
+
+### Metric configuration
+
+You can create a `KeywordCoverageMetric` instance with default settings. No additional configuration is required.
+
+```typescript
+const metric = new KeywordCoverageMetric();
+```
+
+> See [KeywordCoverageScorer](/reference/scorers/keyword-coverage.mdx) for a full list of configuration options.
+
 ## Related
 
 - [Completeness Scorer](./completeness)
````