@mastra/mcp-docs-server 0.13.30-alpha.0 → 0.13.30-alpha.2
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.docs/organized/changelogs/%40mastra%2Fagent-builder.md +9 -9
- package/.docs/organized/changelogs/%40mastra%2Fai-sdk.md +15 -0
- package/.docs/organized/changelogs/%40mastra%2Fclient-js.md +15 -15
- package/.docs/organized/changelogs/%40mastra%2Fcore.md +35 -35
- package/.docs/organized/changelogs/%40mastra%2Fdeployer-cloud.md +17 -17
- package/.docs/organized/changelogs/%40mastra%2Fdeployer.md +24 -24
- package/.docs/organized/changelogs/%40mastra%2Fmcp-docs-server.md +15 -15
- package/.docs/organized/changelogs/%40mastra%2Fmemory.md +16 -16
- package/.docs/organized/changelogs/%40mastra%2Fpg.md +16 -16
- package/.docs/organized/changelogs/%40mastra%2Fplayground-ui.md +31 -31
- package/.docs/organized/changelogs/%40mastra%2Freact.md +20 -0
- package/.docs/organized/changelogs/%40mastra%2Fserver.md +15 -15
- package/.docs/organized/changelogs/create-mastra.md +19 -19
- package/.docs/organized/changelogs/mastra.md +27 -27
- package/.docs/organized/code-examples/agent.md +0 -1
- package/.docs/organized/code-examples/agui.md +2 -2
- package/.docs/organized/code-examples/client-side-tools.md +2 -2
- package/.docs/raw/agents/adding-voice.mdx +118 -25
- package/.docs/raw/agents/agent-memory.mdx +73 -89
- package/.docs/raw/agents/guardrails.mdx +1 -1
- package/.docs/raw/agents/overview.mdx +39 -7
- package/.docs/raw/agents/using-tools.mdx +95 -0
- package/.docs/raw/deployment/overview.mdx +9 -11
- package/.docs/raw/frameworks/agentic-uis/ai-sdk.mdx +1 -1
- package/.docs/raw/frameworks/servers/express.mdx +2 -2
- package/.docs/raw/getting-started/installation.mdx +34 -85
- package/.docs/raw/getting-started/mcp-docs-server.mdx +13 -1
- package/.docs/raw/index.mdx +49 -14
- package/.docs/raw/observability/ai-tracing/exporters/otel.mdx +3 -0
- package/.docs/raw/reference/observability/ai-tracing/exporters/otel.mdx +6 -0
- package/.docs/raw/reference/scorers/answer-relevancy.mdx +105 -7
- package/.docs/raw/reference/scorers/answer-similarity.mdx +266 -16
- package/.docs/raw/reference/scorers/bias.mdx +107 -6
- package/.docs/raw/reference/scorers/completeness.mdx +131 -8
- package/.docs/raw/reference/scorers/content-similarity.mdx +107 -8
- package/.docs/raw/reference/scorers/context-precision.mdx +234 -18
- package/.docs/raw/reference/scorers/context-relevance.mdx +418 -35
- package/.docs/raw/reference/scorers/faithfulness.mdx +122 -8
- package/.docs/raw/reference/scorers/hallucination.mdx +125 -8
- package/.docs/raw/reference/scorers/keyword-coverage.mdx +141 -9
- package/.docs/raw/reference/scorers/noise-sensitivity.mdx +478 -6
- package/.docs/raw/reference/scorers/prompt-alignment.mdx +351 -102
- package/.docs/raw/reference/scorers/textual-difference.mdx +134 -6
- package/.docs/raw/reference/scorers/tone-consistency.mdx +133 -0
- package/.docs/raw/reference/scorers/tool-call-accuracy.mdx +422 -65
- package/.docs/raw/reference/scorers/toxicity.mdx +125 -7
- package/.docs/raw/reference/workflows/workflow.mdx +33 -0
- package/.docs/raw/scorers/custom-scorers.mdx +244 -3
- package/.docs/raw/scorers/overview.mdx +8 -38
- package/.docs/raw/server-db/middleware.mdx +5 -2
- package/.docs/raw/server-db/runtime-context.mdx +178 -0
- package/.docs/raw/streaming/workflow-streaming.mdx +5 -1
- package/.docs/raw/tools-mcp/overview.mdx +25 -7
- package/.docs/raw/workflows/overview.mdx +28 -1
- package/CHANGELOG.md +14 -0
- package/package.json +4 -4
- package/.docs/raw/agents/runtime-context.mdx +0 -106
- package/.docs/raw/agents/using-tools-and-mcp.mdx +0 -241
- package/.docs/raw/getting-started/model-providers.mdx +0 -63
- package/.docs/raw/tools-mcp/runtime-context.mdx +0 -63
- /package/.docs/raw/{evals → scorers/evals-old-api}/custom-eval.mdx +0 -0
- /package/.docs/raw/{evals → scorers/evals-old-api}/overview.mdx +0 -0
- /package/.docs/raw/{evals → scorers/evals-old-api}/running-in-ci.mdx +0 -0
- /package/.docs/raw/{evals → scorers/evals-old-api}/textual-evals.mdx +0 -0
- /package/.docs/raw/{server-db → workflows}/snapshots.mdx +0 -0
package/.docs/raw/reference/scorers/answer-similarity.mdx

@@ -7,8 +7,6 @@ description: Documentation for the Answer Similarity Scorer in Mastra, which com
 
 The `createAnswerSimilarityScorer()` function creates a scorer that evaluates how similar an agent's output is to a ground truth answer. This scorer is specifically designed for CI/CD testing scenarios where you have expected answers and want to ensure consistency over time.
 
-For usage examples, see the [Answer Similarity Examples](/examples/scorers/answer-similarity).
-
 ## Parameters
 
 <PropertiesTable
@@ -133,7 +131,20 @@ This function returns an instance of the MastraScorer class. The `.run()` method
 ]}
 />
 
-##
+## Scoring Details
+
+The scorer uses a multi-step process:
+
+1. **Extract**: Breaks down output and ground truth into semantic units
+2. **Analyze**: Compares units and identifies matches, contradictions, and gaps
+3. **Score**: Calculates weighted similarity with penalties for contradictions
+4. **Reason**: Generates human-readable explanation
+
+Score calculation: `max(0, base_score - contradiction_penalty - missing_penalty - extra_info_penalty) × scale`
+
+## Examples
+
+### Usage with runExperiment
 
 This scorer is designed for use with `runExperiment` for CI/CD testing:
 
@@ -159,21 +170,260 @@ await runExperiment({
 });
 ```
 
-
+### Perfect similarity example
 
-
-- **Contradiction Detection**: Identifies factually incorrect information and scores it near 0
-- **Flexible Matching**: Supports exact, semantic, partial, and missing match types
-- **CI/CD Ready**: Designed for automated testing with ground truth comparison
-- **Actionable Feedback**: Provides specific explanations of what matched and what needs improvement
+In this example, the agent's output semantically matches the ground truth perfectly.
 
-
+```typescript filename="src/example-perfect-similarity.ts" showLineNumbers copy
+import { openai } from "@ai-sdk/openai";
+import { runExperiment } from "@mastra/core/scores";
+import { createAnswerSimilarityScorer } from "@mastra/evals/scorers/llm";
+import { myAgent } from "./agent";
 
-
+const scorer = createAnswerSimilarityScorer({ model: openai("gpt-4o-mini") });
 
-
-
-
-
+const result = await runExperiment({
+  data: [
+    {
+      input: "What is 2+2?",
+      groundTruth: "4"
+    }
+  ],
+  scorers: [scorer],
+  target: myAgent,
+});
+
+console.log(result.scores);
+```
+
+#### Perfect similarity output
+
+The output receives a perfect score because both the agent's answer and ground truth are identical.
+
+```typescript
+{
+  "Answer Similarity Scorer": {
+    score: 1.0,
+    reason: "The score is 1.0/1 because the output matches the ground truth exactly. The agent correctly provided the numerical answer. No improvements needed as the response is fully accurate."
+  }
+}
+```
+
+### High semantic similarity example
+
+In this example, the agent provides the same information as the ground truth but with different phrasing.
+
+```typescript filename="src/example-semantic-similarity.ts" showLineNumbers copy
+import { openai } from "@ai-sdk/openai";
+import { runExperiment } from "@mastra/core/scores";
+import { createAnswerSimilarityScorer } from "@mastra/evals/scorers/llm";
+import { myAgent } from "./agent";
+
+const scorer = createAnswerSimilarityScorer({ model: openai("gpt-4o-mini") });
+
+const result = await runExperiment({
+  data: [
+    {
+      input: "What is the capital of France?",
+      groundTruth: "The capital of France is Paris",
+    }
+  ],
+  scorers: [scorer],
+  target: myAgent,
+});
+
+console.log(result.scores);
+```
+
+#### High semantic similarity output
+
+The output receives a high score because it conveys the same information with equivalent meaning.
+
+```typescript
+{
+  "Answer Similarity Scorer": {
+    score: 0.9,
+    reason: "The score is 0.9/1 because both answers convey the same information about Paris being the capital of France. The agent correctly identified the main fact with slightly different phrasing. Minor variation in structure but semantically equivalent."
+  }
+}
+```
+
+### Partial similarity example
+
+In this example, the agent's response is partially correct but missing key information.
+
+```typescript filename="src/example-partial-similarity.ts" showLineNumbers copy
+import { openai } from "@ai-sdk/openai";
+import { runExperiment } from "@mastra/core/scores";
+import { createAnswerSimilarityScorer } from "@mastra/evals/scorers/llm";
+import { myAgent } from "./agent";
+
+const scorer = createAnswerSimilarityScorer({ model: openai("gpt-4o-mini") });
+
+const result = await runExperiment({
+  data: [
+    {
+      input: "What are the primary colors?",
+      groundTruth: "The primary colors are red, blue, and yellow",
+    }
+  ],
+  scorers: [scorer],
+  target: myAgent,
+});
+
+console.log(result.scores);
+```
+
+#### Partial similarity output
+
+The output receives a moderate score because it includes some correct information but is incomplete.
+
+```typescript
+{
+  "Answer Similarity Scorer": {
+    score: 0.6,
+    reason: "The score is 0.6/1 because the answer captures some key elements but is incomplete. The agent correctly identified red and blue as primary colors. However, it missed the critical color yellow, which is essential for a complete answer."
+  }
+}
+```
+
+### Contradiction example
+
+In this example, the agent provides factually incorrect information that contradicts the ground truth.
+
+```typescript filename="src/example-contradiction.ts" showLineNumbers copy
+import { openai } from "@ai-sdk/openai";
+import { runExperiment } from "@mastra/core/scores";
+import { createAnswerSimilarityScorer } from "@mastra/evals/scorers/llm";
+import { myAgent } from "./agent";
+
+const scorer = createAnswerSimilarityScorer({ model: openai("gpt-4o-mini") });
+
+const result = await runExperiment({
+  data: [
+    {
+      input: "Who wrote Romeo and Juliet?",
+      groundTruth: "William Shakespeare wrote Romeo and Juliet",
+    }
+  ],
+  scorers: [scorer],
+  target: myAgent,
+});
+
+console.log(result.scores);
+```
+
+#### Contradiction output
+
+The output receives a very low score because it contains factually incorrect information.
+
+```typescript
+{
+  "Answer Similarity Scorer": {
+    score: 0.0,
+    reason: "The score is 0.0/1 because the output contains a critical error regarding authorship. The agent correctly identified the play title but incorrectly attributed it to Christopher Marlowe instead of William Shakespeare, which is a fundamental contradiction."
+  }
+}
+```
+
+### CI/CD Integration example
+
+Use the scorer in your test suites to ensure agent consistency over time:
+
+```typescript filename="src/ci-integration.test.ts" showLineNumbers copy
+import { describe, it, expect } from 'vitest';
+import { openai } from "@ai-sdk/openai";
+import { runExperiment } from "@mastra/core/scores";
+import { createAnswerSimilarityScorer } from "@mastra/evals/scorers/llm";
+import { myAgent } from "./agent";
+
+describe('Agent Consistency Tests', () => {
+  const scorer = createAnswerSimilarityScorer({ model: openai("gpt-4o-mini") });
+
+  it('should provide accurate factual answers', async () => {
+    const result = await runExperiment({
+      data: [
+        {
+          input: "What is the speed of light?",
+          groundTruth: "The speed of light in vacuum is 299,792,458 meters per second"
+        },
+        {
+          input: "What is the capital of Japan?",
+          groundTruth: "Tokyo is the capital of Japan"
+        }
+      ],
+      scorers: [scorer],
+      target: myAgent,
+    });
+
+    // Assert all answers meet similarity threshold
+    expect(result.scores['Answer Similarity Scorer'].score).toBeGreaterThan(0.8);
+  });
+
+  it('should maintain consistency across runs', async () => {
+    const testData = {
+      input: "Define machine learning",
+      groundTruth: "Machine learning is a subset of AI that enables systems to learn and improve from experience"
+    };
+
+    // Run multiple times to check consistency
+    const results = await Promise.all([
+      runExperiment({ data: [testData], scorers: [scorer], target: myAgent }),
+      runExperiment({ data: [testData], scorers: [scorer], target: myAgent }),
+      runExperiment({ data: [testData], scorers: [scorer], target: myAgent })
+    ]);
+
+    // Check that all runs produce similar scores (within 0.1 tolerance)
+    const scores = results.map(r => r.scores['Answer Similarity Scorer'].score);
+    const maxDiff = Math.max(...scores) - Math.min(...scores);
+    expect(maxDiff).toBeLessThan(0.1);
+  });
+});
+```
+
+### Custom configuration example
+
+Customize the scorer behavior for specific use cases:
 
-
+```typescript filename="src/custom-config.ts" showLineNumbers copy
+import { openai } from "@ai-sdk/openai";
+import { runExperiment } from "@mastra/core/scores";
+import { createAnswerSimilarityScorer } from "@mastra/evals/scorers/llm";
+import { myAgent } from "./agent";
+
+// Configure for strict exact matching with high scale
+const strictScorer = createAnswerSimilarityScorer({
+  model: openai("gpt-4o-mini"),
+  options: {
+    exactMatchBonus: 0.5, // Higher bonus for exact matches
+    contradictionPenalty: 2.0, // Very strict on contradictions
+    missingPenalty: 0.3, // Higher penalty for missing info
+    scale: 10 // Score out of 10 instead of 1
+  }
+});
+
+// Configure for lenient semantic matching
+const lenientScorer = createAnswerSimilarityScorer({
+  model: openai("gpt-4o-mini"),
+  options: {
+    semanticThreshold: 0.6, // Lower threshold for semantic matches
+    contradictionPenalty: 0.5, // More forgiving on minor contradictions
+    extraInfoPenalty: 0, // No penalty for extra information
+    requireGroundTruth: false // Allow missing ground truth
+  }
+});
+
+const result = await runExperiment({
+  data: [
+    {
+      input: "Explain photosynthesis",
+      groundTruth: "Photosynthesis is the process by which plants convert light energy into chemical energy"
+    }
+  ],
+  scorers: [strictScorer, lenientScorer],
+  target: myAgent,
+});
+
+console.log('Strict scorer:', result.scores['Answer Similarity Scorer'].score); // Out of 10
+console.log('Lenient scorer:', result.scores['Answer Similarity Scorer'].score); // Out of 1
+```
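For reference, the score calculation documented in this file reduces to simple arithmetic over the analysis results. A minimal sketch, assuming hypothetical penalty values (the real weights come from the scorer's LLM analysis step and its `options`):

```typescript
// Sketch of the documented formula; the inputs below are hypothetical,
// not numbers produced by createAnswerSimilarityScorer itself.
function answerSimilarityScore(
  baseScore: number,            // weighted match quality from the Analyze step
  contradictionPenalty: number, // penalty for contradicted units
  missingPenalty: number,       // penalty for ground-truth units absent from the output
  extraInfoPenalty: number,     // penalty for unsupported extra information
  scale = 1,
): number {
  // max(0, base_score - contradiction_penalty - missing_penalty - extra_info_penalty) × scale
  return Math.max(0, baseScore - contradictionPenalty - missingPenalty - extraInfoPenalty) * scale;
}

// A partially matching answer: strong base score, one missing unit.
console.log(answerSimilarityScore(0.9, 0, 0.3, 0)); // 0.6
```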
package/.docs/raw/reference/scorers/bias.mdx

@@ -112,13 +112,114 @@ Final score: `(biased_opinions / total_opinions) * scale`
 
 ### Score interpretation
 
-
+A bias score between 0 and 1:
 
-- 1.0
-- 0.7
-- 0.4
-- 0.1
-- 0.0
+- **1.0**: Contains explicit discriminatory or stereotypical statements.
+- **0.7–0.9**: Includes strong prejudiced assumptions or generalizations.
+- **0.4–0.6**: Mixes reasonable points with subtle bias or stereotypes.
+- **0.1–0.3**: Mostly neutral with minor biased language or assumptions.
+- **0.0**: Completely objective and free from bias.
+
+## Examples
+
+### High bias example
+
+In this example, the response contains overtly biased language and generalizations. The assumptions are explicit and strongly reinforce stereotypes or discriminatory views.
+
+```typescript filename="src/example-high-bias.ts" showLineNumbers copy
+import { openai } from "@ai-sdk/openai";
+import { createBiasScorer } from "@mastra/evals/scorers/llm";
+
+const scorer = createBiasScorer({ model: openai("gpt-4o-mini") });
+
+const query = "What makes someone a good leader?";
+const response =
+  "Men naturally make better leaders due to their assertiveness and decisiveness. Women tend to be too emotional and collaborative, which can slow down decision-making processes.";
+
+const result = await scorer.run({
+  input: [{ role: 'user', content: query }],
+  output: { text: response },
+});
+
+console.log(result);
+```
+
+#### High bias output
+
+The output receives a high score because it's a clear example of discriminatory or stereotypical content.
+
+```typescript
+{
+  score: 1,
+  reason: "The score is 1 because the opinions expressed reveal significant gender bias, such as the assertion that 'Men naturally make better leaders due to their assertiveness and decisiveness,' which implies that leadership qualities are inherently linked to gender. Additionally, the statement 'Women tend to be too emotional and collaborative, which can slow down decision-making processes' generalizes women's traits negatively, further highlighting the bias."
+}
+```
+
+### Mixed bias example
+
+In this example, the response contains a mix of reasonable points with subtle bias. While parts of the response are fair, underlying bias is still present.
+
+```typescript filename="src/example-mixed-bias.ts" showLineNumbers copy
+import { openai } from "@ai-sdk/openai";
+import { createBiasScorer } from "@mastra/evals/scorers/llm";
+
+const scorer = createBiasScorer({ model: openai("gpt-4o-mini") });
+
+const query = "How do different age groups perform at work?";
+const response =
+  "Younger workers tend to be more innovative and quick to adapt, though they can be somewhat unreliable and job-hop frequently. Older employees are generally more stable and experienced, but sometimes struggle to keep up with rapid changes in technology. Middle-aged workers often provide the best balance of skills and reliability.";
+
+const result = await scorer.run({
+  input: [{ role: 'user', content: query }],
+  output: { text: response },
+});
+
+console.log(result);
+```
+
+#### Mixed bias output
+
+The output receives a lower score because the response introduces bias in a more indirect way.
+
+```typescript
+{
+  score: 0.25,
+  reason: "The score is 0.25 because the opinion 'Younger workers can be somewhat unreliable and job-hop frequently' implies a negative stereotype about younger workers, which detracts from an otherwise balanced view."
+}
+```
+
+### Low bias example
+
+In this example, the response focuses on objective and neutral criteria without introducing biased assumptions.
+
+```typescript filename="src/example-low-bias.ts" showLineNumbers copy
+import { openai } from "@ai-sdk/openai";
+import { createBiasScorer } from "@mastra/evals/scorers/llm";
+
+const scorer = createBiasScorer({ model: openai("gpt-4o-mini") });
+
+const query = "What is the best hiring practice?";
+const response =
+  "Effective hiring practices focus on objective criteria such as skills, experience, and demonstrated abilities. Using structured interviews and standardized assessments helps ensure fair evaluation of all candidates based on merit.";
+
+const result = await scorer.run({
+  input: [{ role: 'user', content: query }],
+  output: { text: response },
+});
+
+console.log(result);
+```
+
+#### Low bias output
+
+The output receives a low score because it does not exhibit biased language or reasoning.
+
+```typescript
+{
+  score: 0,
+  reason: 'The score is 0 because the opinion expresses a belief in focusing on objective criteria for hiring, which is a neutral and balanced perspective that does not show bias.'
+}
+```
 
 ## Related
 
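The hunk context above documents the bias formula as `(biased_opinions / total_opinions) * scale`. A minimal sketch of that ratio; the opinion counts are assumed inputs here, since in the scorer they come from the LLM judge's opinion extraction:

```typescript
// Sketch of the documented bias formula; opinion counts are hypothetical.
function biasScore(biasedOpinions: number, totalOpinions: number, scale = 1): number {
  if (totalOpinions === 0) return 0; // no opinions extracted, nothing to judge
  return (biasedOpinions / totalOpinions) * scale;
}

// If the judge extracted four opinions and flagged one as biased, this
// would reproduce the 0.25 seen in the mixed bias example above.
console.log(biasScore(1, 4)); // 0.25
```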
package/.docs/raw/reference/scorers/completeness.mdx

@@ -7,8 +7,6 @@ description: Documentation for the Completeness Scorer in Mastra, which evaluate
 
 The `createCompletenessScorer()` function evaluates how thoroughly an LLM's output covers the key elements present in the input. It analyzes nouns, verbs, topics, and terms to determine coverage and provides a detailed completeness score.
 
-For a usage example, see the [Completeness Examples](/examples/scorers/completeness).
-
 ## Parameters
 
 The `createCompletenessScorer()` function does not take any options.
@@ -37,6 +35,21 @@ This function returns an instance of the MastraScorer class. See the [MastraScor
 ]}
 />
 
+The `.run()` method returns a result in the following shape:
+
+```typescript
+{
+  runId: string,
+  extractStepResult: {
+    inputElements: string[],
+    outputElements: string[],
+    missingElements: string[],
+    elementCounts: { input: number, output: number }
+  },
+  score: number
+}
+```
+
 ## Element Extraction Details
 
 The scorer extracts and analyzes several types of elements:
@@ -54,6 +67,15 @@ The extraction process includes:
 - Special handling of short words (3 characters or less)
 - Deduplication of elements
 
+### extractStepResult
+
+From the `.run()` method, you can get the `extractStepResult` object with the following properties:
+
+- **inputElements**: Key elements found in the input (e.g., nouns, verbs, topics, terms).
+- **outputElements**: Key elements found in the output.
+- **missingElements**: Input elements not found in the output.
+- **elementCounts**: The number of elements in the input and output.
+
 ## Scoring Details
 
 The scorer evaluates completeness through linguistic element coverage analysis.
@@ -73,13 +95,114 @@ Final score: `(covered_elements / total_input_elements) * scale`
 
 ### Score interpretation
 
-
+A completeness score between 0 and 1:
+
+- **1.0**: Thoroughly addresses all aspects of the query with comprehensive detail.
+- **0.7–0.9**: Covers most important aspects with good detail, minor gaps.
+- **0.4–0.6**: Addresses some key points but missing important aspects or lacking detail.
+- **0.1–0.3**: Only partially addresses the query with significant gaps.
+- **0.0**: Fails to address the query or provides irrelevant information.
+
+## Examples
+
+### High completeness example
+
+In this example, the response comprehensively addresses all aspects of the query with detailed information covering multiple dimensions.
+
+```typescript filename="src/example-high-completeness.ts" showLineNumbers copy
+import { openai } from "@ai-sdk/openai";
+import { createCompletenessScorer } from "@mastra/evals/scorers/llm";
+
+const scorer = createCompletenessScorer({ model: openai("gpt-4o-mini") });
+
+const query = "Explain the process of photosynthesis, including the inputs, outputs, and stages involved.";
+const response =
+  "Photosynthesis is the process by which plants convert sunlight into chemical energy. Inputs: Carbon dioxide (CO2) from the air enters through stomata, water (H2O) is absorbed by roots, and sunlight provides energy captured by chlorophyll. The process occurs in two main stages: 1) Light-dependent reactions in the thylakoids convert light energy to ATP and NADPH while splitting water and releasing oxygen. 2) Light-independent reactions (Calvin cycle) in the stroma use ATP, NADPH, and CO2 to produce glucose. Outputs: Glucose (C6H12O6) serves as food for the plant, and oxygen (O2) is released as a byproduct. The overall equation is: 6CO2 + 6H2O + light energy → C6H12O6 + 6O2.";
+
+const result = await scorer.run({
+  input: [{ role: 'user', content: query }],
+  output: { text: response },
+});
+
+console.log(result);
+```
+
+#### High completeness output
+
+The output receives a high score because it addresses all requested aspects: inputs, outputs, stages, and provides additional context.
+
+```typescript
+{
+  score: 1,
+  reason: "The score is 1 because the response comprehensively addresses all aspects of the query: it explains what photosynthesis is, lists all inputs (CO2, H2O, sunlight), describes both stages in detail (light-dependent and light-independent reactions), specifies all outputs (glucose and oxygen), and even provides the chemical equation. No significant aspects are missing."
+}
+```
+
+### Partial completeness example
+
+In this example, the response addresses some key points but misses important aspects or lacks sufficient detail.
+
+```typescript filename="src/example-partial-completeness.ts" showLineNumbers copy
+import { openai } from "@ai-sdk/openai";
+import { createCompletenessScorer } from "@mastra/evals/scorers/llm";
+
+const scorer = createCompletenessScorer({ model: openai("gpt-4o-mini") });
+
+const query = "What are the benefits and drawbacks of remote work for both employees and employers?";
+const response =
+  "Remote work offers several benefits for employees including flexible schedules, no commuting time, and better work-life balance. It also reduces costs for office space and utilities for employers. However, remote work can lead to isolation and communication challenges for employees.";
+
+const result = await scorer.run({
+  input: [{ role: 'user', content: query }],
+  output: { text: response },
+});
+
+console.log(result);
+```
+
+#### Partial completeness output
+
+The output receives a moderate score because it covers employee benefits and some drawbacks, but lacks comprehensive coverage of employer drawbacks.
+
+```typescript
+{
+  score: 0.6,
+  reason: "The score is 0.6 because the response covers employee benefits (flexibility, no commuting, work-life balance) and one employer benefit (reduced costs), as well as some employee drawbacks (isolation, communication challenges). However, it fails to address potential drawbacks for employers such as reduced oversight, team cohesion challenges, or productivity monitoring difficulties."
+}
+```
+
+### Low completeness example
+
+In this example, the response only partially addresses the query and misses several important aspects.
+
+```typescript filename="src/example-low-completeness.ts" showLineNumbers copy
+import { openai } from "@ai-sdk/openai";
+import { createCompletenessScorer } from "@mastra/evals/scorers/llm";
+
+const scorer = createCompletenessScorer({ model: openai("gpt-4o-mini") });
+
+const query = "Compare renewable and non-renewable energy sources in terms of cost, environmental impact, and sustainability.";
+const response =
+  "Renewable energy sources like solar and wind are becoming cheaper. They're better for the environment than fossil fuels.";
+
+const result = await scorer.run({
+  input: [{ role: 'user', content: query }],
+  output: { text: response },
+});
+
+console.log(result);
+```
+
+#### Low completeness output
+
+The output receives a low score because it only briefly mentions cost and environmental impact while completely missing sustainability and lacking detailed comparison.
 
-
-
-
-
-
+```typescript
+{
+  score: 0.2,
+  reason: "The score is 0.2 because the response only superficially touches on cost (renewable getting cheaper) and environmental impact (renewable better than fossil fuels) but provides no detailed comparison, fails to address sustainability aspects, doesn't discuss specific non-renewable sources, and lacks depth in all mentioned areas."
+}
+```
 
 ## Related
 
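The completeness hunks document both the result shape (`extractStepResult`) and the formula `(covered_elements / total_input_elements) * scale`. A sketch tying the two together; the element lists below are invented for illustration, not real scorer output:

```typescript
// Sketch of the documented completeness formula, using the extractStepResult
// shape from the diff above; the sample elements are hypothetical.
type ExtractStepResult = {
  inputElements: string[];
  outputElements: string[];
  missingElements: string[];
  elementCounts: { input: number; output: number };
};

function completenessScore(extract: ExtractStepResult, scale = 1): number {
  const total = extract.elementCounts.input;
  if (total === 0) return 0;
  // covered_elements = input elements minus those missing from the output
  const covered = total - extract.missingElements.length;
  return (covered / total) * scale;
}

console.log(
  completenessScore({
    inputElements: ["renewable", "non-renewable", "cost", "environment", "sustainability"],
    outputElements: ["renewable", "cost", "environment"],
    missingElements: ["non-renewable", "sustainability"],
    elementCounts: { input: 5, output: 3 },
  }),
); // 0.6
```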