@gaffer-sh/mcp 0.4.0 → 0.4.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +158 -118
- package/dist/index.js +49 -7
- package/package.json +1 -1
package/README.md
CHANGED
````diff
@@ -4,11 +4,12 @@ MCP (Model Context Protocol) server for [Gaffer](https://gaffer.sh) - give your
 
 ## What is this?
 
-This MCP server connects AI coding assistants like Claude Code and Cursor to your Gaffer test history. It allows AI to:
+This MCP server connects AI coding assistants like Claude Code and Cursor to your Gaffer test history and coverage data. It allows AI to:
 
 - Check your project's test health (pass rate, flaky tests, trends)
 - Look up the history of specific tests to understand stability
 - Get context about test failures when debugging
+- Analyze code coverage and identify untested areas
 - Browse all your projects (with user API Keys)
 - Access test report files (HTML reports, coverage, etc.)
 
@@ -17,20 +18,6 @@ This MCP server connects AI coding assistants like Claude Code and Cursor to you
 1. A [Gaffer](https://gaffer.sh) account with test results uploaded
 2. An API Key from Account Settings > API Keys
 
-## Authentication
-
-The MCP server supports two types of authentication:
-
-### User API Keys (Recommended)
-
-User API Keys (`gaf_` prefix) provide read-only access to all projects across your organizations. This is the recommended approach as it allows your AI assistant to work across multiple projects.
-
-Get your API Key from: **Account Settings > API Keys**
-
-### Project Upload Tokens (Legacy)
-
-Project Upload Tokens (`gfr_` prefix) are designed for uploading test results and only provide access to a single project. While still supported for backward compatibility, user API Keys are preferred for the MCP server.
-
 ## Setup
 
 ### Claude Code
@@ -71,144 +58,197 @@ Add to `.cursor/mcp.json` in your project:
 
 ## Available Tools
 
+### Project & Test Run Tools
+
+| Tool | Description |
+|------|-------------|
+| `list_projects` | List all projects you have access to |
+| `get_project_health` | Get health metrics (pass rate, flaky count, trends) |
+| `list_test_runs` | List recent test runs with optional filtering |
+| `get_test_run_details` | Get parsed test results for a specific test run |
+| `get_report` | Get report file URLs for a test run |
+| `get_report_browser_url` | Get a browser-navigable URL for viewing reports |
 
-- `organizationId` (optional): Filter by organization
-- `limit` (optional): Max results (default: 50)
+### Test Analysis Tools
 
+| Tool | Description |
+|------|-------------|
+| `get_test_history` | Get pass/fail history for a specific test |
+| `get_flaky_tests` | Get tests with high flip rates (pass↔fail) |
+| `get_slowest_tests` | Get slowest tests by P95 duration |
+| `compare_test_metrics` | Compare test performance between commits |
 
+### Coverage Tools
 
+| Tool | Description |
+|------|-------------|
+| `get_coverage_summary` | Get overall coverage metrics and trends |
+| `get_coverage_for_file` | Get coverage for specific files or paths |
+| `get_untested_files` | Get files below a coverage threshold |
+| `find_uncovered_failure_areas` | Find files with low coverage AND test failures |
 
+## Tool Details
 
-- `projectId` (required with user API Keys): Project ID from `list_projects`
-- `days` (optional): Number of days to analyze (default: 30)
+### `list_projects`
 
-- Health score (0-100)
-- Pass rate percentage
-- Number of test runs
-- Flaky test count
-- Trend (up/down/stable)
+List all projects you have access to.
 
+- **Input:** `organizationId` (optional), `limit` (optional, default: 50)
+- **Returns:** List of projects with IDs, names, and organization info
+- **Example:** "What projects do I have in Gaffer?"
 
+### `get_project_health`
 
+Get the health metrics for a project.
 
-**Input:**
+- **Input:** `projectId` (required), `days` (optional, default: 30)
+- **Returns:** Health score (0-100), pass rate, test run count, flaky test count, trend
+- **Example:** "What's the health of my test suite?"
+
+### `get_test_history`
 
-- History of test runs with pass/fail status
-- Duration of each run
-- Branch and commit info
-- Error messages for failures
-- Summary statistics
+Get the pass/fail history for a specific test.
 
+- **Input:** `projectId` (required), `testName` or `filePath` (one required), `limit` (optional)
+- **Returns:** History of runs with status, duration, branch, commit, errors
+- **Example:** "Is the login test flaky? Check its history"
 
 ### `get_flaky_tests`
 
 Get the list of flaky tests in a project.
 
-**Input:**
-- `limit` (optional): Max results (default: 50)
-- `days` (optional): Analysis period in days (default: 30)
-
-**Returns:**
-- List of flaky tests with flip rates
-- Number of status transitions
-- Total runs analyzed
-- When each test last ran
-
-**Example prompts:**
-- "Which tests are flaky in my project?"
-- "Show me the most unstable tests"
+- **Input:** `projectId` (required), `threshold` (optional, default: 0.1), `days` (optional), `limit` (optional)
+- **Returns:** List of flaky tests with flip rates, transition counts, run counts
+- **Example:** "Which tests are flaky in my project?"
 
 ### `list_test_runs`
 
 List recent test runs with optional filtering.
 
-**Input:**
-- `limit` (optional): Max results (default: 20)
+- **Input:** `projectId` (required), `commitSha` (optional), `branch` (optional), `status` (optional), `limit` (optional)
+- **Returns:** List of test runs with pass/fail/skip counts, commit and branch info
+- **Example:** "What tests failed in the last commit?"
+
+### `get_test_run_details`
 
-- List of test runs with pass/fail/skip counts
-- Commit SHA and branch info
-- Pagination info
+Get parsed test results for a specific test run.
 
-- "Did any tests fail on my feature branch?"
+- **Input:** `testRunId` (required), `projectId` (required), `status` (optional filter), `limit` (optional)
+- **Returns:** Individual test results with name, status, duration, file path, errors
+- **Example:** "Show me all failed tests from this test run"
 
 ### `get_report`
 
+Get URLs for report files uploaded with a test run.
 
-**Input:**
+- **Input:** `testRunId` (required)
+- **Returns:** List of files with filename, size, content type, download URL
+- **Example:** "Get the Playwright report for the latest test run"
 
-- Test run ID and project info
-- Framework used (e.g., playwright, vitest)
-- List of files with:
-  - Filename (e.g., "report.html", "coverage/index.html")
-  - File size in bytes
-  - Content type (e.g., "text/html")
-  - Download URL
+### `get_report_browser_url`
 
+Get a browser-navigable URL for viewing a test report.
+
+- **Input:** `projectId` (required), `testRunId` (required), `filename` (optional)
+- **Returns:** Signed URL valid for 30 minutes
+- **Example:** "Give me a link to view the test report"
 
 ### `get_slowest_tests`
 
-Get the slowest tests in a project, sorted by P95 duration.
-
-**Input:**
+Get the slowest tests in a project, sorted by P95 duration.
+
+- **Input:** `projectId` (required), `days` (optional), `limit` (optional), `framework` (optional), `branch` (optional)
+- **Returns:** List of tests with average and P95 duration, run count
+- **Example:** "Which tests are slowing down my CI pipeline?"
+
+### `compare_test_metrics`
+
+Compare test metrics between two commits or test runs.
+
+- **Input:** `projectId` (required), `testName` (required), `beforeCommit`/`afterCommit` OR `beforeRunId`/`afterRunId`
+- **Returns:** Before/after metrics with duration change and percentage
+- **Example:** "Did my fix make this test faster?"
+
+### `get_coverage_summary`
+
+Get the coverage metrics summary for a project.
+
+- **Input:** `projectId` (required), `days` (optional, default: 30)
+- **Returns:** Line/branch/function coverage percentages, trend, report count, lowest coverage files
+- **Example:** "What's our test coverage?"
+
+### `get_coverage_for_file`
+
+Get coverage metrics for specific files or paths.
+
+- **Input:** `projectId` (required), `filePath` (required - exact or partial match)
+- **Returns:** List of matching files with line/branch/function coverage
+- **Example:** "What's the coverage for our API routes?"
+
+### `get_untested_files`
+
+Get files with little or no test coverage.
+
+- **Input:** `projectId` (required), `maxCoverage` (optional, default: 10%), `limit` (optional)
+- **Returns:** List of files below threshold sorted by coverage (lowest first)
+- **Example:** "Which files have no tests?"
+
+### `find_uncovered_failure_areas`
+
+Find code areas with both low coverage AND test failures (high risk).
+
+- **Input:** `projectId` (required), `days` (optional), `coverageThreshold` (optional, default: 80%)
+- **Returns:** Risk areas ranked by score, with file path, coverage %, failure count
+- **Example:** "Where should we focus our testing efforts?"
+
+## Prioritizing Coverage Improvements
+
+When using coverage tools to improve your test suite, combine coverage data with codebase exploration for best results:
+
+### 1. Understand Code Utilization
+
+Before targeting files purely by coverage percentage, explore which code is actually critical:
+
+- **Find entry points:** Look for route definitions, event handlers, exported functions - these reveal what code actually executes in production
+- **Find heavily-imported files:** Files imported by many others are high-value targets
+- **Identify critical business logic:** Look for files handling auth, payments, data mutations, or core domain logic
+
+### 2. Prioritize by Impact
+
+Low coverage alone doesn't indicate priority. Consider:
+
+- **High utilization + low coverage = highest priority** - Code that runs frequently but lacks tests
+- **Large files with 0% coverage** - More uncovered lines means bigger impact on overall coverage
+- **Files with both failures and low coverage** - Use `find_uncovered_failure_areas` for this
+
+### 3. Use Path-Based Queries
+
+The `get_untested_files` tool may return many frontend components. For backend or specific areas:
+
+```
+# Query specific paths with get_coverage_for_file
+get_coverage_for_file(filePath="server/services")
+get_coverage_for_file(filePath="src/api")
+get_coverage_for_file(filePath="lib/core")
+```
+
+### 4. Iterative Improvement
+
+1. Get baseline with `get_coverage_summary`
+2. Identify targets with `get_coverage_for_file` on critical paths
+3. Write tests for highest-impact files
+4. Re-check coverage after CI uploads new results
+5. Repeat
+
+## Authentication
+
+### User API Keys (Recommended)
+
+User API Keys (`gaf_` prefix) provide read-only access to all projects across your organizations. Get your API Key from: **Account Settings > API Keys**
+
+### Project Upload Tokens (Legacy)
+
+Project Upload Tokens (`gfr_` prefix) are designed for uploading test results and only provide access to a single project. User API Keys are preferred for the MCP server.
 
 ## Environment Variables
 
````
package/dist/index.js
CHANGED
```diff
@@ -668,7 +668,16 @@ Returns:
 - Branch coverage (covered/total/percentage)
 - Function coverage (covered/total/percentage)
 
+This is the preferred tool for targeted coverage analysis. Use path prefixes to focus on
+specific areas of the codebase:
+- "server/services" - Backend service layer
+- "server/utils" - Backend utilities
+- "src/api" - API routes
+- "lib/core" - Core business logic
+
+Before querying, explore the codebase to identify critical paths - entry points,
+heavily-imported files, and code handling auth/payments/data mutations.
+Prioritize: high utilization + low coverage = highest impact.`
 };
 
 // src/tools/get-coverage-summary.ts
@@ -726,7 +735,11 @@ Returns:
 - Latest report date
 - Top 5 files with lowest coverage
 
-Use this to understand your project's overall test coverage health
+Use this to understand your project's overall test coverage health.
+
+After getting the summary, use get_coverage_for_file with path prefixes to drill into
+specific areas (e.g., "server/services", "src/api", "lib/core"). This helps identify
+high-value targets in critical code paths rather than just the files with lowest coverage.`
 };
 
 // src/tools/get-flaky-tests.ts
@@ -1280,7 +1293,13 @@ Returns:
 - Each file includes line/branch/function coverage metrics
 - Total count of files matching the criteria
 
+IMPORTANT: Results may be dominated by certain file types (e.g., UI components) that are
+numerous but not necessarily the highest priority. For targeted analysis of specific code
+areas (backend, services, utilities), use get_coverage_for_file with path prefixes instead.
+
+To prioritize effectively, explore the codebase to understand which code is heavily utilized
+(entry points, frequently-imported files, critical business logic) and then query coverage
+for those specific paths.`
 };
 
 // src/tools/list-projects.ts
@@ -1446,10 +1465,33 @@ async function main() {
   process.exit(1);
 }
 const client = GafferApiClient.fromEnv();
-const server = new McpServer(
+const server = new McpServer(
+  {
+    name: "gaffer",
+    version: "0.1.0"
+  },
+  {
+    instructions: `Gaffer provides test analytics and coverage data for your projects.
+
+## Coverage Analysis Best Practices
+
+When helping users improve test coverage, combine coverage data with codebase exploration:
+
+1. **Understand code utilization first**: Before targeting files by coverage percentage, explore which code is critical:
+   - Find entry points (route definitions, event handlers, exported functions)
+   - Find heavily-imported files (files imported by many others are high-value targets)
+   - Identify critical business logic (auth, payments, data mutations)
+
+2. **Prioritize by impact**: Low coverage alone doesn't indicate priority. Consider:
+   - High utilization + low coverage = highest priority
+   - Large files with 0% coverage have bigger impact than small files
+   - Use find_uncovered_failure_areas for files with both low coverage AND test failures
+
+3. **Use path-based queries**: The get_untested_files tool may return many files of a certain type (e.g., UI components). For targeted analysis, use get_coverage_for_file with path prefixes to focus on specific areas of the codebase.
+
+4. **Iterate**: Get baseline \u2192 identify targets \u2192 write tests \u2192 re-check coverage after CI uploads new results.`
+  }
+);
 server.registerTool(
   getProjectHealthMetadata.name,
   {
```
package/package.json
CHANGED