@gaffer-sh/mcp 0.3.0 → 0.4.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +158 -118
- package/dist/index.js +694 -135
- package/package.json +1 -1
package/README.md
CHANGED
@@ -4,11 +4,12 @@ MCP (Model Context Protocol) server for [Gaffer](https://gaffer.sh) - give your
 
 ## What is this?
 
-This MCP server connects AI coding assistants like Claude Code and Cursor to your Gaffer test history. It allows AI to:
+This MCP server connects AI coding assistants like Claude Code and Cursor to your Gaffer test history and coverage data. It allows AI to:
 
 - Check your project's test health (pass rate, flaky tests, trends)
 - Look up the history of specific tests to understand stability
 - Get context about test failures when debugging
+- Analyze code coverage and identify untested areas
 - Browse all your projects (with user API Keys)
 - Access test report files (HTML reports, coverage, etc.)
 
@@ -17,20 +18,6 @@ This MCP server connects AI coding assistants like Claude Code and Cursor to you
 1. A [Gaffer](https://gaffer.sh) account with test results uploaded
 2. An API Key from Account Settings > API Keys
 
-## Authentication
-
-The MCP server supports two types of authentication:
-
-### User API Keys (Recommended)
-
-User API Keys (`gaf_` prefix) provide read-only access to all projects across your organizations. This is the recommended approach as it allows your AI assistant to work across multiple projects.
-
-Get your API Key from: **Account Settings > API Keys**
-
-### Project Upload Tokens (Legacy)
-
-Project Upload Tokens (`gfr_` prefix) are designed for uploading test results and only provide access to a single project. While still supported for backward compatibility, user API Keys are preferred for the MCP server.
-
 ## Setup
 
 ### Claude Code
@@ -71,144 +58,197 @@ Add to `.cursor/mcp.json` in your project:
 
 ## Available Tools
 
-###
+### Project & Test Run Tools
 
-
+| Tool | Description |
+|------|-------------|
+| `list_projects` | List all projects you have access to |
+| `get_project_health` | Get health metrics (pass rate, flaky count, trends) |
+| `list_test_runs` | List recent test runs with optional filtering |
+| `get_test_run_details` | Get parsed test results for a specific test run |
+| `get_report` | Get report file URLs for a test run |
+| `get_report_browser_url` | Get a browser-navigable URL for viewing reports |
 
-
-- `organizationId` (optional): Filter by organization
-- `limit` (optional): Max results (default: 50)
+### Test Analysis Tools
 
-
-
+| Tool | Description |
+|------|-------------|
+| `get_test_history` | Get pass/fail history for a specific test |
+| `get_flaky_tests` | Get tests with high flip rates (pass↔fail) |
+| `get_slowest_tests` | Get slowest tests by P95 duration |
+| `compare_test_metrics` | Compare test performance between commits |
 
-
+### Coverage Tools
 
-
+| Tool | Description |
+|------|-------------|
+| `get_coverage_summary` | Get overall coverage metrics and trends |
+| `get_coverage_for_file` | Get coverage for specific files or paths |
+| `get_untested_files` | Get files below a coverage threshold |
+| `find_uncovered_failure_areas` | Find files with low coverage AND test failures |
 
-
+## Tool Details
 
-
-- `projectId` (required with user API Keys): Project ID from `list_projects`
-- `days` (optional): Number of days to analyze (default: 30)
+### `list_projects`
 
-
-- Health score (0-100)
-- Pass rate percentage
-- Number of test runs
-- Flaky test count
-- Trend (up/down/stable)
+List all projects you have access to.
 
-**
+- **Input:** `organizationId` (optional), `limit` (optional, default: 50)
+- **Returns:** List of projects with IDs, names, and organization info
+- **Example:** "What projects do I have in Gaffer?"
 
-### `
+### `get_project_health`
 
-Get the
+Get the health metrics for a project.
 
-**Input:**
--
--
-
-
+- **Input:** `projectId` (required), `days` (optional, default: 30)
+- **Returns:** Health score (0-100), pass rate, test run count, flaky test count, trend
+- **Example:** "What's the health of my test suite?"
+
+### `get_test_history`
 
-
-- History of test runs with pass/fail status
-- Duration of each run
-- Branch and commit info
-- Error messages for failures
-- Summary statistics
+Get the pass/fail history for a specific test.
 
-**
--
-- "
+- **Input:** `projectId` (required), `testName` or `filePath` (one required), `limit` (optional)
+- **Returns:** History of runs with status, duration, branch, commit, errors
+- **Example:** "Is the login test flaky? Check its history"
 
 ### `get_flaky_tests`
 
 Get the list of flaky tests in a project.
 
-**Input:**
--
--
-- `limit` (optional): Max results (default: 50)
-- `days` (optional): Analysis period in days (default: 30)
-
-**Returns:**
-- List of flaky tests with flip rates
-- Number of status transitions
-- Total runs analyzed
-- When each test last ran
-
-**Example prompts:**
-- "Which tests are flaky in my project?"
-- "Show me the most unstable tests"
+- **Input:** `projectId` (required), `threshold` (optional, default: 0.1), `days` (optional), `limit` (optional)
+- **Returns:** List of flaky tests with flip rates, transition counts, run counts
+- **Example:** "Which tests are flaky in my project?"
 
 ### `list_test_runs`
 
 List recent test runs with optional filtering.
 
-**Input:**
--
--
-
-
-- `limit` (optional): Max results (default: 20)
+- **Input:** `projectId` (required), `commitSha` (optional), `branch` (optional), `status` (optional), `limit` (optional)
+- **Returns:** List of test runs with pass/fail/skip counts, commit and branch info
+- **Example:** "What tests failed in the last commit?"
+
+### `get_test_run_details`
 
-
-- List of test runs with pass/fail/skip counts
-- Commit SHA and branch info
-- Pagination info
+Get parsed test results for a specific test run.
 
-**
--
-- "Show me
-- "Did any tests fail on my feature branch?"
+- **Input:** `testRunId` (required), `projectId` (required), `status` (optional filter), `limit` (optional)
+- **Returns:** Individual test results with name, status, duration, file path, errors
+- **Example:** "Show me all failed tests from this test run"
 
 ### `get_report`
 
-Get
+Get URLs for report files uploaded with a test run.
 
-**Input:**
--
+- **Input:** `testRunId` (required)
+- **Returns:** List of files with filename, size, content type, download URL
+- **Example:** "Get the Playwright report for the latest test run"
 
-
-- Test run ID and project info
-- Framework used (e.g., playwright, vitest)
-- List of files with:
-  - Filename (e.g., "report.html", "coverage/index.html")
-  - File size in bytes
-  - Content type (e.g., "text/html")
-  - Download URL
+### `get_report_browser_url`
 
-
-
--
--
+Get a browser-navigable URL for viewing a test report.
+
+- **Input:** `projectId` (required), `testRunId` (required), `filename` (optional)
+- **Returns:** Signed URL valid for 30 minutes
+- **Example:** "Give me a link to view the test report"
 
 ### `get_slowest_tests`
 
-Get the slowest tests in a project, sorted by P95 duration.
-
-**Input:**
--
--
-
-
-
-
-
-
-
-
-
-
-
-
-**
-- "
-
-
+Get the slowest tests in a project, sorted by P95 duration.
+
+- **Input:** `projectId` (required), `days` (optional), `limit` (optional), `framework` (optional), `branch` (optional)
+- **Returns:** List of tests with average and P95 duration, run count
+- **Example:** "Which tests are slowing down my CI pipeline?"
+
+### `compare_test_metrics`
+
+Compare test metrics between two commits or test runs.
+
+- **Input:** `projectId` (required), `testName` (required), `beforeCommit`/`afterCommit` OR `beforeRunId`/`afterRunId`
+- **Returns:** Before/after metrics with duration change and percentage
+- **Example:** "Did my fix make this test faster?"
+
+### `get_coverage_summary`
+
+Get the coverage metrics summary for a project.
+
+- **Input:** `projectId` (required), `days` (optional, default: 30)
+- **Returns:** Line/branch/function coverage percentages, trend, report count, lowest coverage files
+- **Example:** "What's our test coverage?"
+
+### `get_coverage_for_file`
+
+Get coverage metrics for specific files or paths.
+
+- **Input:** `projectId` (required), `filePath` (required - exact or partial match)
+- **Returns:** List of matching files with line/branch/function coverage
+- **Example:** "What's the coverage for our API routes?"
+
+### `get_untested_files`
+
+Get files with little or no test coverage.
+
+- **Input:** `projectId` (required), `maxCoverage` (optional, default: 10%), `limit` (optional)
+- **Returns:** List of files below threshold sorted by coverage (lowest first)
+- **Example:** "Which files have no tests?"
+
+### `find_uncovered_failure_areas`
+
+Find code areas with both low coverage AND test failures (high risk).
+
+- **Input:** `projectId` (required), `days` (optional), `coverageThreshold` (optional, default: 80%)
+- **Returns:** Risk areas ranked by score, with file path, coverage %, failure count
+- **Example:** "Where should we focus our testing efforts?"
+
+## Prioritizing Coverage Improvements
+
+When using coverage tools to improve your test suite, combine coverage data with codebase exploration for best results:
+
+### 1. Understand Code Utilization
+
+Before targeting files purely by coverage percentage, explore which code is actually critical:
+
+- **Find entry points:** Look for route definitions, event handlers, exported functions - these reveal what code actually executes in production
+- **Find heavily-imported files:** Files imported by many others are high-value targets
+- **Identify critical business logic:** Look for files handling auth, payments, data mutations, or core domain logic
+
+### 2. Prioritize by Impact
+
+Low coverage alone doesn't indicate priority. Consider:
+
+- **High utilization + low coverage = highest priority** - Code that runs frequently but lacks tests
+- **Large files with 0% coverage** - More uncovered lines means bigger impact on overall coverage
+- **Files with both failures and low coverage** - Use `find_uncovered_failure_areas` for this
+
+### 3. Use Path-Based Queries
+
+The `get_untested_files` tool may return many frontend components. For backend or specific areas:
+
+```
+# Query specific paths with get_coverage_for_file
+get_coverage_for_file(filePath="server/services")
+get_coverage_for_file(filePath="src/api")
+get_coverage_for_file(filePath="lib/core")
+```
+
+### 4. Iterative Improvement
+
+1. Get baseline with `get_coverage_summary`
+2. Identify targets with `get_coverage_for_file` on critical paths
+3. Write tests for highest-impact files
+4. Re-check coverage after CI uploads new results
+5. Repeat
+
+## Authentication
+
+### User API Keys (Recommended)
+
+User API Keys (`gaf_` prefix) provide read-only access to all projects across your organizations. Get your API Key from: **Account Settings > API Keys**
+
+### Project Upload Tokens (Legacy)
+
+Project Upload Tokens (`gfr_` prefix) are designed for uploading test results and only provide access to a single project. User API Keys are preferred for the MCP server.
 
 ## Environment Variables
 
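The `find_uncovered_failure_areas` tool above returns risk areas "ranked by score"; the tool's description in `dist/index.js` below states the formula as riskScore = (100 - coverage%) × failureCount. A minimal standalone sketch of that ranking — the helper name and sample data here are hypothetical, not part of the package:

```javascript
// Sketch of the documented risk ranking:
// riskScore = (100 - coverage%) * failureCount, highest risk first.
// The input field names follow the tool's documented output shape.
function rankRiskAreas(files) {
  return files
    .map(({ filePath, coverage, failureCount }) => ({
      filePath,
      coverage,
      failureCount,
      riskScore: (100 - coverage) * failureCount,
    }))
    .sort((a, b) => b.riskScore - a.riskScore);
}

// Hypothetical data for illustration only.
const ranked = rankRiskAreas([
  { filePath: "src/auth/login.ts", coverage: 40, failureCount: 3 },
  { filePath: "src/ui/button.tsx", coverage: 90, failureCount: 1 },
]);
// login.ts scores (100-40)*3 = 180; button.tsx scores (100-90)*1 = 10.
```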
package/dist/index.js
CHANGED
|
@@ -341,6 +341,112 @@ var GafferApiClient = class _GafferApiClient {
       }
     );
   }
+  /**
+   * Get coverage summary for a project
+   *
+   * @param options - Query options
+   * @param options.projectId - The project ID (required)
+   * @param options.days - Analysis period in days (default: 30)
+   * @returns Coverage summary with trends and lowest coverage files
+   */
+  async getCoverageSummary(options) {
+    if (!this.isUserToken()) {
+      throw new Error("getCoverageSummary requires a user API Key (gaf_).");
+    }
+    if (!options.projectId) {
+      throw new Error("projectId is required");
+    }
+    return this.request(
+      `/user/projects/${options.projectId}/coverage-summary`,
+      {
+        ...options.days && { days: options.days }
+      }
+    );
+  }
+  /**
+   * Get coverage files for a project with filtering
+   *
+   * @param options - Query options
+   * @param options.projectId - The project ID (required)
+   * @param options.filePath - Filter to specific file path
+   * @param options.minCoverage - Minimum coverage percentage
+   * @param options.maxCoverage - Maximum coverage percentage
+   * @param options.limit - Maximum number of results
+   * @param options.offset - Pagination offset
+   * @param options.sortBy - Sort by 'path' or 'coverage'
+   * @param options.sortOrder - Sort order 'asc' or 'desc'
+   * @returns List of files with coverage data
+   */
+  async getCoverageFiles(options) {
+    if (!this.isUserToken()) {
+      throw new Error("getCoverageFiles requires a user API Key (gaf_).");
+    }
+    if (!options.projectId) {
+      throw new Error("projectId is required");
+    }
+    return this.request(
+      `/user/projects/${options.projectId}/coverage/files`,
+      {
+        ...options.filePath && { filePath: options.filePath },
+        ...options.minCoverage !== void 0 && { minCoverage: options.minCoverage },
+        ...options.maxCoverage !== void 0 && { maxCoverage: options.maxCoverage },
+        ...options.limit && { limit: options.limit },
+        ...options.offset && { offset: options.offset },
+        ...options.sortBy && { sortBy: options.sortBy },
+        ...options.sortOrder && { sortOrder: options.sortOrder }
+      }
+    );
+  }
+  /**
+   * Get risk areas (files with low coverage AND test failures)
+   *
+   * @param options - Query options
+   * @param options.projectId - The project ID (required)
+   * @param options.days - Analysis period in days (default: 30)
+   * @param options.coverageThreshold - Include files below this coverage (default: 80)
+   * @returns List of risk areas sorted by risk score
+   */
+  async getCoverageRiskAreas(options) {
+    if (!this.isUserToken()) {
+      throw new Error("getCoverageRiskAreas requires a user API Key (gaf_).");
+    }
+    if (!options.projectId) {
+      throw new Error("projectId is required");
+    }
+    return this.request(
+      `/user/projects/${options.projectId}/coverage/risk-areas`,
+      {
+        ...options.days && { days: options.days },
+        ...options.coverageThreshold !== void 0 && { coverageThreshold: options.coverageThreshold }
+      }
+    );
+  }
+  /**
+   * Get a browser-navigable URL for viewing a test report
+   *
+   * @param options - Query options
+   * @param options.projectId - The project ID (required)
+   * @param options.testRunId - The test run ID (required)
+   * @param options.filename - Specific file to open (default: index.html)
+   * @returns URL with signed token for browser access
+   */
+  async getReportBrowserUrl(options) {
+    if (!this.isUserToken()) {
+      throw new Error("getReportBrowserUrl requires a user API Key (gaf_).");
+    }
+    if (!options.projectId) {
+      throw new Error("projectId is required");
+    }
+    if (!options.testRunId) {
+      throw new Error("testRunId is required");
+    }
+    return this.request(
+      `/user/projects/${options.projectId}/reports/${options.testRunId}/browser-url`,
+      {
+        ...options.filename && { filename: options.filename }
+      }
+    );
+  }
 };
 
 // src/tools/compare-test-metrics.ts
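The new client methods above all build their query objects with conditional spreads. Note the two variants in the diff: `...options.minCoverage !== void 0 && { ... }` keeps an explicit `0`, while `...options.limit && { ... }` would silently drop a falsy value. A standalone sketch of the pattern — `buildParams` is a hypothetical distillation, not a function in the package:

```javascript
// Conditional-spread pattern from the new GafferApiClient methods:
// spreading `false` (or any non-object primitive) adds no properties,
// so `...cond && { key: value }` includes the key only when cond is truthy.
function buildParams(options) {
  return {
    ...options.filePath && { filePath: options.filePath },
    // Explicit undefined check: 0 is a meaningful coverage bound.
    ...options.maxCoverage !== void 0 && { maxCoverage: options.maxCoverage },
    // Truthiness check: a limit of 0 is dropped here.
    ...options.limit && { limit: options.limit },
  };
}

const params = buildParams({ filePath: "src/api", maxCoverage: 0, limit: 0 });
// filePath and maxCoverage survive; limit: 0 is dropped by the truthiness check.
```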
@@ -443,26 +549,219 @@ Use cases:
 Tip: Use get_test_history first to find the commit SHAs or test run IDs you want to compare.`
 };
 
-// src/tools/
+// src/tools/find-uncovered-failure-areas.ts
 import { z as z2 } from "zod";
+var findUncoveredFailureAreasInputSchema = {
+  projectId: z2.string().describe("Project ID to analyze. Required. Use list_projects to find project IDs."),
+  days: z2.number().int().min(1).max(365).optional().describe("Number of days to analyze for test failures (default: 30)"),
+  coverageThreshold: z2.number().min(0).max(100).optional().describe("Include files with coverage below this percentage (default: 80)")
+};
+var findUncoveredFailureAreasOutputSchema = {
+  hasCoverage: z2.boolean(),
+  hasTestResults: z2.boolean(),
+  riskAreas: z2.array(z2.object({
+    filePath: z2.string(),
+    coverage: z2.number(),
+    failureCount: z2.number(),
+    riskScore: z2.number(),
+    testNames: z2.array(z2.string())
+  })),
+  message: z2.string().optional()
+};
+async function executeFindUncoveredFailureAreas(client, input) {
+  const response = await client.getCoverageRiskAreas({
+    projectId: input.projectId,
+    days: input.days,
+    coverageThreshold: input.coverageThreshold
+  });
+  return {
+    hasCoverage: response.hasCoverage,
+    hasTestResults: response.hasTestResults,
+    riskAreas: response.riskAreas,
+    message: response.message
+  };
+}
+var findUncoveredFailureAreasMetadata = {
+  name: "find_uncovered_failure_areas",
+  title: "Find Uncovered Failure Areas",
+  description: `Find areas of code that have both low coverage AND test failures.
+
+This cross-references test failures with coverage data to identify high-risk
+areas in your codebase that need attention. Files are ranked by a "risk score"
+calculated as: (100 - coverage%) \xD7 failureCount.
+
+When using a user API Key (gaf_), you must provide a projectId.
+Use list_projects first to find available project IDs.
+
+Parameters:
+- projectId: The project to analyze (required)
+- days: Analysis period for test failures (default: 30)
+- coverageThreshold: Include files below this coverage % (default: 80)
+
+Returns:
+- List of risk areas sorted by risk score (highest risk first)
+- Each area includes: file path, coverage %, failure count, risk score, test names
+
+Use this to prioritize which parts of your codebase need better test coverage.`
+};
+
+// src/tools/get-coverage-for-file.ts
+import { z as z3 } from "zod";
+var getCoverageForFileInputSchema = {
+  projectId: z3.string().describe("Project ID to get coverage for. Required. Use list_projects to find project IDs."),
+  filePath: z3.string().describe("File path to get coverage for. Can be exact path or partial match.")
+};
+var getCoverageForFileOutputSchema = {
+  hasCoverage: z3.boolean(),
+  files: z3.array(z3.object({
+    path: z3.string(),
+    lines: z3.object({
+      covered: z3.number(),
+      total: z3.number(),
+      percentage: z3.number()
+    }),
+    branches: z3.object({
+      covered: z3.number(),
+      total: z3.number(),
+      percentage: z3.number()
+    }),
+    functions: z3.object({
+      covered: z3.number(),
+      total: z3.number(),
+      percentage: z3.number()
+    })
+  })),
+  message: z3.string().optional()
+};
+async function executeGetCoverageForFile(client, input) {
+  const response = await client.getCoverageFiles({
+    projectId: input.projectId,
+    filePath: input.filePath,
+    limit: 10
+    // Return up to 10 matching files
+  });
+  return {
+    hasCoverage: response.hasCoverage,
+    files: response.files.map((f) => ({
+      path: f.path,
+      lines: f.lines,
+      branches: f.branches,
+      functions: f.functions
+    })),
+    message: response.message
+  };
+}
+var getCoverageForFileMetadata = {
+  name: "get_coverage_for_file",
+  title: "Get Coverage for File",
+  description: `Get coverage metrics for a specific file or files matching a path pattern.
+
+When using a user API Key (gaf_), you must provide a projectId.
+Use list_projects first to find available project IDs.
+
+Parameters:
+- projectId: The project to query (required)
+- filePath: File path to search for (exact or partial match)
+
+Returns:
+- Line coverage (covered/total/percentage)
+- Branch coverage (covered/total/percentage)
+- Function coverage (covered/total/percentage)
+
+This is the preferred tool for targeted coverage analysis. Use path prefixes to focus on
+specific areas of the codebase:
+- "server/services" - Backend service layer
+- "server/utils" - Backend utilities
+- "src/api" - API routes
+- "lib/core" - Core business logic
+
+Before querying, explore the codebase to identify critical paths - entry points,
+heavily-imported files, and code handling auth/payments/data mutations.
+Prioritize: high utilization + low coverage = highest impact.`
+};
+
+// src/tools/get-coverage-summary.ts
+import { z as z4 } from "zod";
+var getCoverageSummaryInputSchema = {
+  projectId: z4.string().describe("Project ID to get coverage for. Required. Use list_projects to find project IDs."),
+  days: z4.number().int().min(1).max(365).optional().describe("Number of days to analyze for trends (default: 30)")
+};
+var getCoverageSummaryOutputSchema = {
+  hasCoverage: z4.boolean(),
+  current: z4.object({
+    lines: z4.number(),
+    branches: z4.number(),
+    functions: z4.number()
+  }).optional(),
+  trend: z4.object({
+    direction: z4.enum(["up", "down", "stable"]),
+    change: z4.number()
+  }).optional(),
+  totalReports: z4.number(),
+  latestReportDate: z4.string().nullable().optional(),
+  lowestCoverageFiles: z4.array(z4.object({
+    path: z4.string(),
+    coverage: z4.number()
+  })).optional(),
+  message: z4.string().optional()
+};
+async function executeGetCoverageSummary(client, input) {
+  const response = await client.getCoverageSummary({
+    projectId: input.projectId,
+    days: input.days
+  });
+  return {
+    hasCoverage: response.hasCoverage,
+    current: response.current,
+    trend: response.trend,
+    totalReports: response.totalReports,
+    latestReportDate: response.latestReportDate,
+    lowestCoverageFiles: response.lowestCoverageFiles,
+    message: response.message
+  };
+}
+var getCoverageSummaryMetadata = {
+  name: "get_coverage_summary",
+  title: "Get Coverage Summary",
+  description: `Get the coverage metrics summary for a project.
+
+When using a user API Key (gaf_), you must provide a projectId.
+Use list_projects first to find available project IDs.
+
+Returns:
+- Current coverage percentages (lines, branches, functions)
+- Trend direction (up, down, stable) and change amount
+- Total number of coverage reports
+- Latest report date
+- Top 5 files with lowest coverage
+
+Use this to understand your project's overall test coverage health.
+
+After getting the summary, use get_coverage_for_file with path prefixes to drill into
+specific areas (e.g., "server/services", "src/api", "lib/core"). This helps identify
+high-value targets in critical code paths rather than just the files with lowest coverage.`
+};
+
+// src/tools/get-flaky-tests.ts
+import { z as z5 } from "zod";
 var getFlakyTestsInputSchema = {
-  projectId:
-  threshold:
-  limit:
-  days:
+  projectId: z5.string().optional().describe("Project ID to get flaky tests for. Required when using a user API Key (gaf_). Use list_projects to find project IDs."),
+  threshold: z5.number().min(0).max(1).optional().describe("Minimum flip rate to be considered flaky (0-1, default: 0.1 = 10%)"),
+  limit: z5.number().int().min(1).max(100).optional().describe("Maximum number of flaky tests to return (default: 50)"),
+  days: z5.number().int().min(1).max(365).optional().describe("Analysis period in days (default: 30)")
 };
 var getFlakyTestsOutputSchema = {
-  flakyTests:
-    name:
-    flipRate:
-    flipCount:
-    totalRuns:
-    lastSeen:
+  flakyTests: z5.array(z5.object({
+    name: z5.string(),
+    flipRate: z5.number(),
+    flipCount: z5.number(),
+    totalRuns: z5.number(),
+    lastSeen: z5.string()
   })),
-  summary:
-    threshold:
-    totalFlaky:
-    period:
+  summary: z5.object({
+    threshold: z5.number(),
+    totalFlaky: z5.number(),
+    period: z5.number()
   })
 };
 async function executeGetFlakyTests(client, input) {
@@ -502,22 +801,22 @@ specific tests are flaky and need investigation.`
 };
 
 // src/tools/get-project-health.ts
-import { z as
+import { z as z6 } from "zod";
 var getProjectHealthInputSchema = {
-  projectId:
-  days:
+  projectId: z6.string().optional().describe("Project ID to get health for. Required when using a user API Key (gaf_). Use list_projects to find project IDs."),
+  days: z6.number().int().min(1).max(365).optional().describe("Number of days to analyze (default: 30)")
 };
 var getProjectHealthOutputSchema = {
-  projectName:
-  healthScore:
-  passRate:
-  testRunCount:
-  flakyTestCount:
-  trend:
-  period:
-    days:
-    start:
-    end:
+  projectName: z6.string(),
+  healthScore: z6.number(),
+  passRate: z6.number().nullable(),
+  testRunCount: z6.number(),
+  flakyTestCount: z6.number(),
+  trend: z6.enum(["up", "down", "stable"]),
+  period: z6.object({
+    days: z6.number(),
+    start: z6.string(),
+    end: z6.string()
   })
 };
 async function executeGetProjectHealth(client, input) {
@@ -553,23 +852,77 @@ Returns:
 Use this to understand the current state of your test suite.`
 };
 
+// src/tools/get-report-browser-url.ts
+import { z as z7 } from "zod";
+var getReportBrowserUrlInputSchema = {
+  projectId: z7.string().describe("Project ID the test run belongs to. Required. Use list_projects to find project IDs."),
+  testRunId: z7.string().describe("The test run ID to get the report URL for. Use list_test_runs to find test run IDs."),
+  filename: z7.string().optional().describe("Specific file to open (default: index.html or first HTML file)")
+};
+var getReportBrowserUrlOutputSchema = {
+  url: z7.string(),
+  filename: z7.string(),
+  testRunId: z7.string(),
+  expiresAt: z7.string(),
+  expiresInSeconds: z7.number()
+};
+async function executeGetReportBrowserUrl(client, input) {
+  const response = await client.getReportBrowserUrl({
+    projectId: input.projectId,
+    testRunId: input.testRunId,
+    filename: input.filename
+  });
+  return {
+    url: response.url,
+    filename: response.filename,
+    testRunId: response.testRunId,
+    expiresAt: response.expiresAt,
+    expiresInSeconds: response.expiresInSeconds
+  };
+}
+var getReportBrowserUrlMetadata = {
+  name: "get_report_browser_url",
+  title: "Get Report Browser URL",
+  description: `Get a browser-navigable URL for viewing a test report (Playwright, Vitest, etc.).
+
+Returns a signed URL that can be opened directly in a browser without requiring
+the user to log in. The URL expires after 30 minutes for security.
+
+When using a user API Key (gaf_), you must provide both projectId and testRunId.
+Use list_projects and list_test_runs to find available IDs.
+
+Parameters:
+- projectId: The project the test run belongs to (required)
+- testRunId: The test run to view (required)
+- filename: Specific file to open (optional, defaults to index.html)
+
+Returns:
+- url: Browser-navigable URL with signed token
|
|
901
|
+
- filename: The file being accessed
|
|
902
|
+
- expiresAt: ISO timestamp when the URL expires
|
|
903
|
+
- expiresInSeconds: Time until expiration
|
|
904
|
+
|
|
905
|
+
The returned URL can be shared with users who need to view the report.
|
|
906
|
+
Note: URLs expire after 30 minutes for security.`
|
|
907
|
+
};
|
|
908
|
+
|
|
556
909
|
// src/tools/get-report.ts
|
|
557
|
-
import { z as
|
|
910
|
+
import { z as z8 } from "zod";
|
|
558
911
|
var getReportInputSchema = {
|
|
559
|
-
testRunId:
|
|
912
|
+
testRunId: z8.string().describe("The test run ID to get report files for. Use list_test_runs to find test run IDs.")
|
|
560
913
|
};
|
|
561
914
|
var getReportOutputSchema = {
|
|
562
|
-
testRunId:
|
|
563
|
-
projectId:
|
|
564
|
-
projectName:
|
|
565
|
-
resultSchema:
|
|
566
|
-
files:
|
|
567
|
-
filename:
|
|
568
|
-
size:
|
|
569
|
-
contentType:
|
|
570
|
-
downloadUrl:
|
|
915
|
+
testRunId: z8.string(),
|
|
916
|
+
projectId: z8.string(),
|
|
917
|
+
projectName: z8.string(),
|
|
918
|
+
resultSchema: z8.string().optional(),
|
|
919
|
+
files: z8.array(z8.object({
|
|
920
|
+
filename: z8.string(),
|
|
921
|
+
size: z8.number(),
|
|
922
|
+
contentType: z8.string(),
|
|
923
|
+
downloadUrl: z8.string()
|
|
571
924
|
})),
|
|
572
|
-
urlExpiresInSeconds:
|
|
925
|
+
urlExpiresInSeconds: z8.number().optional()
|
|
573
926
|
};
|
|
574
927
|
async function executeGetReport(client, input) {
|
|
575
928
|
const response = await client.getReport(input.testRunId);
|
|
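The new report-URL tool returns both `expiresAt` (an ISO timestamp) and `expiresInSeconds`. A client could derive one from the other; a small sketch, using the 30-minute TTL stated in the tool description (the helper name is hypothetical):

```javascript
// Derive a remaining-lifetime value from an ISO expiry timestamp,
// clamped at zero once the URL has expired.
function secondsUntil(expiresAtIso, now = new Date()) {
  const remainingMs = new Date(expiresAtIso).getTime() - now.getTime();
  return Math.max(0, Math.floor(remainingMs / 1000));
}

const now = new Date("2024-01-01T00:00:00Z");
const expiresAt = new Date(now.getTime() + 30 * 60 * 1000).toISOString();
console.log(secondsUntil(expiresAt, now)); // 1800
```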
@@ -627,29 +980,29 @@ Use cases:
 };

 // src/tools/get-slowest-tests.ts
-import { z as
+import { z as z9 } from "zod";
 var getSlowestTestsInputSchema = {
-  projectId:
-  days:
-  limit:
-  framework:
-  branch:
+  projectId: z9.string().describe("Project ID to get slowest tests for. Required. Use list_projects to find project IDs."),
+  days: z9.number().int().min(1).max(365).optional().describe("Analysis period in days (default: 30)"),
+  limit: z9.number().int().min(1).max(100).optional().describe("Maximum number of tests to return (default: 20)"),
+  framework: z9.string().optional().describe('Filter by test framework (e.g., "playwright", "vitest", "jest")'),
+  branch: z9.string().optional().describe('Filter by git branch name (e.g., "main", "develop")')
 };
 var getSlowestTestsOutputSchema = {
-  slowestTests:
-    name:
-    fullName:
-    filePath:
-    framework:
-    avgDurationMs:
-    p95DurationMs:
-    runCount:
+  slowestTests: z9.array(z9.object({
+    name: z9.string(),
+    fullName: z9.string(),
+    filePath: z9.string().optional(),
+    framework: z9.string().optional(),
+    avgDurationMs: z9.number(),
+    p95DurationMs: z9.number(),
+    runCount: z9.number()
   })),
-  summary:
-    projectId:
-    projectName:
-    period:
-    totalReturned:
+  summary: z9.object({
+    projectId: z9.string(),
+    projectName: z9.string(),
+    period: z9.number(),
+    totalReturned: z9.number()
   })
 };
 async function executeGetSlowestTests(client, input) {
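Each slow-test entry carries `avgDurationMs`, `p95DurationMs`, and `runCount`. These come from the API; as an illustration of how such aggregates are commonly computed, here is a sketch using the nearest-rank percentile method (Gaffer's exact percentile method is not specified in this diff):

```javascript
// Aggregate a list of run durations into the avg/p95/count shape used
// by the slowestTests schema. Nearest-rank p95 is an assumption.
function aggregate(durationsMs) {
  const sorted = [...durationsMs].sort((a, b) => a - b);
  const avg = sorted.reduce((sum, d) => sum + d, 0) / sorted.length;
  const rank = Math.ceil(0.95 * sorted.length); // nearest-rank method
  return {
    avgDurationMs: avg,
    p95DurationMs: sorted[rank - 1],
    runCount: sorted.length
  };
}

console.log(aggregate([100, 200, 300, 400]));
```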
@@ -707,28 +1060,28 @@ Use cases:
 };

 // src/tools/get-test-history.ts
-import { z as
+import { z as z10 } from "zod";
 var getTestHistoryInputSchema = {
-  projectId:
-  testName:
-  filePath:
-  limit:
+  projectId: z10.string().optional().describe("Project ID to get test history for. Required when using a user API Key (gaf_). Use list_projects to find project IDs."),
+  testName: z10.string().optional().describe("Exact test name to search for"),
+  filePath: z10.string().optional().describe("File path containing the test"),
+  limit: z10.number().int().min(1).max(100).optional().describe("Maximum number of results (default: 20)")
 };
 var getTestHistoryOutputSchema = {
-  history:
-    testRunId:
-    createdAt:
-    branch:
-    commitSha:
-    status:
-    durationMs:
-    message:
+  history: z10.array(z10.object({
+    testRunId: z10.string(),
+    createdAt: z10.string(),
+    branch: z10.string().optional(),
+    commitSha: z10.string().optional(),
+    status: z10.enum(["passed", "failed", "skipped", "pending"]),
+    durationMs: z10.number(),
+    message: z10.string().optional()
   })),
-  summary:
-    totalRuns:
-    passedRuns:
-    failedRuns:
-    passRate:
+  summary: z10.object({
+    totalRuns: z10.number(),
+    passedRuns: z10.number(),
+    failedRuns: z10.number(),
+    passRate: z10.number().nullable()
   })
 };
 async function executeGetTestHistory(client, input) {
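The `summary` block above (with a nullable `passRate`) is derivable from the history entries. A sketch of that shape; whether skipped/pending runs are excluded from the rate is an assumption made here, as is rounding to whole percent:

```javascript
// Summarize a test's run history into the summary schema shape.
// passRate is null when no completed (passed/failed) runs exist.
function summarize(history) {
  const totalRuns = history.length;
  const passedRuns = history.filter((h) => h.status === "passed").length;
  const failedRuns = history.filter((h) => h.status === "failed").length;
  const measured = passedRuns + failedRuns; // skipped/pending excluded (assumption)
  const passRate = measured === 0 ? null : Math.round((passedRuns / measured) * 100);
  return { totalRuns, passedRuns, failedRuns, passRate };
}

const history = [
  { status: "passed" }, { status: "failed" },
  { status: "passed" }, { status: "skipped" },
];
console.log(summarize(history));
```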
@@ -783,35 +1136,35 @@ Use this to investigate flaky tests or understand test stability.`
 };

 // src/tools/get-test-run-details.ts
-import { z as
+import { z as z11 } from "zod";
 var getTestRunDetailsInputSchema = {
-  testRunId:
-  projectId:
-  status:
-  limit:
-  offset:
+  testRunId: z11.string().describe("The test run ID to get details for. Use list_test_runs to find test run IDs."),
+  projectId: z11.string().describe("Project ID the test run belongs to. Required when using a user API Key (gaf_). Use list_projects to find project IDs."),
+  status: z11.enum(["passed", "failed", "skipped"]).optional().describe("Filter tests by status. Returns only tests matching this status."),
+  limit: z11.number().int().min(1).max(500).optional().describe("Maximum number of tests to return (default: 100, max: 500)"),
+  offset: z11.number().int().min(0).optional().describe("Number of tests to skip for pagination (default: 0)")
 };
 var getTestRunDetailsOutputSchema = {
-  testRunId:
-  summary:
-    passed:
-    failed:
-    skipped:
-    total:
+  testRunId: z11.string(),
+  summary: z11.object({
+    passed: z11.number(),
+    failed: z11.number(),
+    skipped: z11.number(),
+    total: z11.number()
   }),
-  tests:
-    name:
-    fullName:
-    status:
-    durationMs:
-    filePath:
-    error:
+  tests: z11.array(z11.object({
+    name: z11.string(),
+    fullName: z11.string(),
+    status: z11.enum(["passed", "failed", "skipped"]),
+    durationMs: z11.number().nullable(),
+    filePath: z11.string().nullable(),
+    error: z11.string().nullable()
   })),
-  pagination:
-    total:
-    limit:
-    offset:
-    hasMore:
+  pagination: z11.object({
+    total: z11.number(),
+    limit: z11.number(),
+    offset: z11.number(),
+    hasMore: z11.boolean()
   })
 };
 async function executeGetTestRunDetails(client, input) {
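The `pagination` object above (`total`, `limit`, `offset`, `hasMore`) follows the usual offset-pagination contract. A small self-contained sketch of that contract with made-up data (the `paginate` helper is hypothetical, not part of the package):

```javascript
// Offset pagination: hasMore is true when items remain past this page.
function paginate(items, { limit = 100, offset = 0 } = {}) {
  const page = items.slice(offset, offset + limit);
  return {
    tests: page,
    pagination: {
      total: items.length,
      limit,
      offset,
      hasMore: offset + page.length < items.length
    }
  };
}

const allTests = ["a", "b", "c", "d", "e"];
console.log(paginate(allTests, { limit: 2, offset: 0 }).pagination); // hasMore: true
```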
@@ -866,24 +1219,107 @@ Note: For aggregate analytics like flaky test detection or duration trends,
 use get_test_history, get_flaky_tests, or get_slowest_tests instead.`
 };

+// src/tools/get-untested-files.ts
+import { z as z12 } from "zod";
+var getUntestedFilesInputSchema = {
+  projectId: z12.string().describe("Project ID to analyze. Required. Use list_projects to find project IDs."),
+  maxCoverage: z12.number().min(0).max(100).optional().describe('Maximum coverage percentage to include (default: 10 for "untested")'),
+  limit: z12.number().int().min(1).max(100).optional().describe("Maximum number of files to return (default: 20)")
+};
+var getUntestedFilesOutputSchema = {
+  hasCoverage: z12.boolean(),
+  files: z12.array(z12.object({
+    path: z12.string(),
+    lines: z12.object({
+      covered: z12.number(),
+      total: z12.number(),
+      percentage: z12.number()
+    }),
+    branches: z12.object({
+      covered: z12.number(),
+      total: z12.number(),
+      percentage: z12.number()
+    }),
+    functions: z12.object({
+      covered: z12.number(),
+      total: z12.number(),
+      percentage: z12.number()
+    })
+  })),
+  totalCount: z12.number(),
+  message: z12.string().optional()
+};
+async function executeGetUntestedFiles(client, input) {
+  const maxCoverage = input.maxCoverage ?? 10;
+  const limit = input.limit ?? 20;
+  const response = await client.getCoverageFiles({
+    projectId: input.projectId,
+    maxCoverage,
+    limit,
+    sortBy: "coverage",
+    sortOrder: "asc"
+    // Lowest coverage first
+  });
+  return {
+    hasCoverage: response.hasCoverage,
+    files: response.files.map((f) => ({
+      path: f.path,
+      lines: f.lines,
+      branches: f.branches,
+      functions: f.functions
+    })),
+    totalCount: response.pagination.total,
+    message: response.message
+  };
+}
+var getUntestedFilesMetadata = {
+  name: "get_untested_files",
+  title: "Get Untested Files",
+  description: `Get files with little or no test coverage.
+
+Returns files sorted by coverage percentage (lowest first), filtered
+to only include files below a coverage threshold.
+
+When using a user API Key (gaf_), you must provide a projectId.
+Use list_projects first to find available project IDs.
+
+Parameters:
+- projectId: The project to analyze (required)
+- maxCoverage: Include files with coverage at or below this % (default: 10)
+- limit: Maximum number of files to return (default: 20, max: 100)
+
+Returns:
+- List of files sorted by coverage (lowest first)
+- Each file includes line/branch/function coverage metrics
+- Total count of files matching the criteria
+
+IMPORTANT: Results may be dominated by certain file types (e.g., UI components) that are
+numerous but not necessarily the highest priority. For targeted analysis of specific code
+areas (backend, services, utilities), use get_coverage_for_file with path prefixes instead.
+
+To prioritize effectively, explore the codebase to understand which code is heavily utilized
+(entry points, frequently-imported files, critical business logic) and then query coverage
+for those specific paths.`
+};
+
 // src/tools/list-projects.ts
-import { z as
+import { z as z13 } from "zod";
 var listProjectsInputSchema = {
-  organizationId:
-  limit:
+  organizationId: z13.string().optional().describe("Filter by organization ID (optional)"),
+  limit: z13.number().int().min(1).max(100).optional().describe("Maximum number of projects to return (default: 50)")
 };
 var listProjectsOutputSchema = {
-  projects:
-    id:
-    name:
-    description:
-    organization:
-      id:
-      name:
-      slug:
+  projects: z13.array(z13.object({
+    id: z13.string(),
+    name: z13.string(),
+    description: z13.string().nullable().optional(),
+    organization: z13.object({
+      id: z13.string(),
+      name: z13.string(),
+      slug: z13.string()
     })
   })),
-  total:
+  total: z13.number()
 };
 async function executeListProjects(client, input) {
   const response = await client.listProjects({
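`executeGetUntestedFiles` above delegates the filtering and sorting to the API (`maxCoverage`, `sortBy: "coverage"`, `sortOrder: "asc"`). The equivalent client-side logic, with made-up coverage data for illustration:

```javascript
// Keep files at or below the coverage threshold, lowest coverage
// first, capped at limit. Mirrors the API-side query shape above.
function untestedFiles(files, { maxCoverage = 10, limit = 20 } = {}) {
  return files
    .filter((f) => f.lines.percentage <= maxCoverage)
    .sort((a, b) => a.lines.percentage - b.lines.percentage)
    .slice(0, limit);
}

const coverage = [
  { path: "src/auth.ts", lines: { covered: 0, total: 120, percentage: 0 } },
  { path: "src/ui/button.tsx", lines: { covered: 4, total: 50, percentage: 8 } },
  { path: "src/utils.ts", lines: { covered: 90, total: 100, percentage: 90 } },
];
console.log(untestedFiles(coverage).map((f) => f.path));
```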
@@ -912,28 +1348,28 @@ Requires a user API Key (gaf_). Get one from Account Settings in the Gaffer dash
 };

 // src/tools/list-test-runs.ts
-import { z as
+import { z as z14 } from "zod";
 var listTestRunsInputSchema = {
-  projectId:
-  commitSha:
-  branch:
-  status:
-  limit:
+  projectId: z14.string().optional().describe("Project ID to list test runs for. Required when using a user API Key (gaf_). Use list_projects to find project IDs."),
+  commitSha: z14.string().optional().describe("Filter by commit SHA (exact or prefix match)"),
+  branch: z14.string().optional().describe("Filter by branch name"),
+  status: z14.enum(["passed", "failed"]).optional().describe('Filter by status: "passed" (no failures) or "failed" (has failures)'),
+  limit: z14.number().int().min(1).max(100).optional().describe("Maximum number of test runs to return (default: 20)")
 };
 var listTestRunsOutputSchema = {
-  testRuns:
-    id:
-    commitSha:
-    branch:
-    passedCount:
-    failedCount:
-    skippedCount:
-    totalCount:
-    createdAt:
+  testRuns: z14.array(z14.object({
+    id: z14.string(),
+    commitSha: z14.string().optional(),
+    branch: z14.string().optional(),
+    passedCount: z14.number(),
+    failedCount: z14.number(),
+    skippedCount: z14.number(),
+    totalCount: z14.number(),
+    createdAt: z14.string()
   })),
-  pagination:
-    total:
-    hasMore:
+  pagination: z14.object({
+    total: z14.number(),
+    hasMore: z14.boolean()
   })
 };
 async function executeListTestRuns(client, input) {
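The filter semantics described above (`status: "passed"` means no failures, `commitSha` matches exact or prefix) can be expressed client-side. A sketch with sample runs; the `filterRuns` helper is hypothetical, the actual filtering happens in the Gaffer API:

```javascript
// Apply the list_test_runs filter semantics to an in-memory list:
// status maps to failedCount, commitSha is a prefix match.
function filterRuns(runs, { status, commitSha, branch } = {}) {
  return runs.filter((r) =>
    (status === undefined ||
      (status === "passed" ? r.failedCount === 0 : r.failedCount > 0)) &&
    (commitSha === undefined || (r.commitSha ?? "").startsWith(commitSha)) &&
    (branch === undefined || r.branch === branch)
  );
}

const runs = [
  { id: "r1", commitSha: "abc1234", branch: "main", failedCount: 0 },
  { id: "r2", commitSha: "def5678", branch: "main", failedCount: 3 },
];
console.log(filterRuns(runs, { status: "failed" }).map((r) => r.id)); // ["r2"]
```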
@@ -1029,10 +1465,33 @@ async function main() {
     process.exit(1);
   }
   const client = GafferApiClient.fromEnv();
-  const server = new McpServer(
-
-
-
+  const server = new McpServer(
+    {
+      name: "gaffer",
+      version: "0.1.0"
+    },
+    {
+      instructions: `Gaffer provides test analytics and coverage data for your projects.
+
+## Coverage Analysis Best Practices
+
+When helping users improve test coverage, combine coverage data with codebase exploration:
+
+1. **Understand code utilization first**: Before targeting files by coverage percentage, explore which code is critical:
+   - Find entry points (route definitions, event handlers, exported functions)
+   - Find heavily-imported files (files imported by many others are high-value targets)
+   - Identify critical business logic (auth, payments, data mutations)
+
+2. **Prioritize by impact**: Low coverage alone doesn't indicate priority. Consider:
+   - High utilization + low coverage = highest priority
+   - Large files with 0% coverage have bigger impact than small files
+   - Use find_uncovered_failure_areas for files with both low coverage AND test failures
+
+3. **Use path-based queries**: The get_untested_files tool may return many files of a certain type (e.g., UI components). For targeted analysis, use get_coverage_for_file with path prefixes to focus on specific areas of the codebase.
+
+4. **Iterate**: Get baseline \u2192 identify targets \u2192 write tests \u2192 re-check coverage after CI uploads new results.`
+    }
+  );
   server.registerTool(
     getProjectHealthMetadata.name,
     {
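The server instructions above suggest weighting coverage gaps by utilization ("high utilization + low coverage = highest priority", larger files matter more). One hypothetical way to turn that advice into a score; both the formula and the `importedBy` data are illustrative, not something the package computes:

```javascript
// Hypothetical priority score: uncovered lines weighted by how many
// other files import this one. Purely an illustration of the guidance.
function priorityScore(file) {
  const uncovered = file.lines.total - file.lines.covered;
  return uncovered * (1 + file.importedBy);
}

const candidates = [
  { path: "src/payments.ts", lines: { covered: 0, total: 200 }, importedBy: 12 },
  { path: "src/ui/tooltip.tsx", lines: { covered: 0, total: 40 }, importedBy: 1 },
];
candidates.sort((a, b) => priorityScore(b) - priorityScore(a));
console.log(candidates[0].path); // heavily-imported payments code ranks first
```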
@@ -1213,6 +1672,106 @@ async function main() {
       }
     }
   );
+  server.registerTool(
+    getCoverageSummaryMetadata.name,
+    {
+      title: getCoverageSummaryMetadata.title,
+      description: getCoverageSummaryMetadata.description,
+      inputSchema: getCoverageSummaryInputSchema,
+      outputSchema: getCoverageSummaryOutputSchema
+    },
+    async (input) => {
+      try {
+        const output = await executeGetCoverageSummary(client, input);
+        return {
+          content: [{ type: "text", text: JSON.stringify(output, null, 2) }],
+          structuredContent: output
+        };
+      } catch (error) {
+        return handleToolError(getCoverageSummaryMetadata.name, error);
+      }
+    }
+  );
+  server.registerTool(
+    getCoverageForFileMetadata.name,
+    {
+      title: getCoverageForFileMetadata.title,
+      description: getCoverageForFileMetadata.description,
+      inputSchema: getCoverageForFileInputSchema,
+      outputSchema: getCoverageForFileOutputSchema
+    },
+    async (input) => {
+      try {
+        const output = await executeGetCoverageForFile(client, input);
+        return {
+          content: [{ type: "text", text: JSON.stringify(output, null, 2) }],
+          structuredContent: output
+        };
+      } catch (error) {
+        return handleToolError(getCoverageForFileMetadata.name, error);
+      }
+    }
+  );
+  server.registerTool(
+    findUncoveredFailureAreasMetadata.name,
+    {
+      title: findUncoveredFailureAreasMetadata.title,
+      description: findUncoveredFailureAreasMetadata.description,
+      inputSchema: findUncoveredFailureAreasInputSchema,
+      outputSchema: findUncoveredFailureAreasOutputSchema
+    },
+    async (input) => {
+      try {
+        const output = await executeFindUncoveredFailureAreas(client, input);
+        return {
+          content: [{ type: "text", text: JSON.stringify(output, null, 2) }],
+          structuredContent: output
+        };
+      } catch (error) {
+        return handleToolError(findUncoveredFailureAreasMetadata.name, error);
+      }
+    }
+  );
+  server.registerTool(
+    getUntestedFilesMetadata.name,
+    {
+      title: getUntestedFilesMetadata.title,
+      description: getUntestedFilesMetadata.description,
+      inputSchema: getUntestedFilesInputSchema,
+      outputSchema: getUntestedFilesOutputSchema
+    },
+    async (input) => {
+      try {
+        const output = await executeGetUntestedFiles(client, input);
+        return {
+          content: [{ type: "text", text: JSON.stringify(output, null, 2) }],
+          structuredContent: output
+        };
+      } catch (error) {
+        return handleToolError(getUntestedFilesMetadata.name, error);
+      }
+    }
+  );
+  server.registerTool(
+    getReportBrowserUrlMetadata.name,
+    {
+      title: getReportBrowserUrlMetadata.title,
+      description: getReportBrowserUrlMetadata.description,
+      inputSchema: getReportBrowserUrlInputSchema,
+      outputSchema: getReportBrowserUrlOutputSchema
+    },
+    async (input) => {
+      try {
+        const output = await executeGetReportBrowserUrl(client, input);
+        return {
+          content: [{ type: "text", text: JSON.stringify(output, null, 2) }],
+          structuredContent: output
+        };
+      } catch (error) {
+        return handleToolError(getReportBrowserUrlMetadata.name, error);
+      }
+    }
+  );
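All five registrations added in this hunk repeat the same wrapper: run the executor, emit the result as both JSON text content and structuredContent, and route failures to handleToolError. That repeated shape can be factored out; a standalone sketch (`makeHandler` is a hypothetical helper, not part of the package):

```javascript
// Build a tool handler that serializes the executor's output to both
// the text content array and structuredContent, and delegates errors.
function makeHandler(name, execute, handleToolError) {
  return async (input) => {
    try {
      const output = await execute(input);
      return {
        content: [{ type: "text", text: JSON.stringify(output, null, 2) }],
        structuredContent: output
      };
    } catch (error) {
      return handleToolError(name, error);
    }
  };
}

makeHandler("demo", async () => ({ ok: true }), () => null)({}).then((result) => {
  console.log(result.structuredContent); // { ok: true }
});
```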
   const transport = new StdioServerTransport();
   await server.connect(transport);
 }
package/package.json
CHANGED