qaa-agent 1.6.3 → 1.7.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +22 -0
- package/agents/qaa-analyzer.md +16 -1
- package/agents/qaa-bug-detective.md +33 -0
- package/agents/qaa-discovery.md +384 -0
- package/agents/qaa-e2e-runner.md +7 -6
- package/agents/qaa-planner.md +16 -1
- package/agents/qaa-testid-injector.md +60 -2
- package/agents/qaa-validator.md +38 -0
- package/bin/install.cjs +25 -13
- package/commands/qa-audit.md +119 -0
- package/commands/qa-create-test.md +288 -0
- package/commands/qa-fix.md +395 -0
- package/commands/qa-map.md +137 -0
- package/package.json +40 -41
- package/{.claude/settings.json → settings.json} +19 -20
- package/{.claude/skills → skills}/qa-bug-detective/SKILL.md +122 -122
- package/{.claude/skills → skills}/qa-repo-analyzer/SKILL.md +88 -88
- package/{.claude/skills → skills}/qa-self-validator/SKILL.md +109 -109
- package/{.claude/skills → skills}/qa-template-engine/SKILL.md +113 -113
- package/{.claude/skills → skills}/qa-testid-injector/SKILL.md +93 -93
- package/{.claude/skills → skills}/qa-workflow-documenter/SKILL.md +87 -87
- package/workflows/qa-gap.md +7 -1
- package/workflows/qa-start.md +25 -1
- package/workflows/qa-testid.md +29 -1
- package/workflows/qa-validate.md +5 -1
- package/.claude/commands/create-test.md +0 -164
- package/.claude/commands/qa-audit.md +0 -37
- package/.claude/commands/qa-blueprint.md +0 -54
- package/.claude/commands/qa-fix.md +0 -36
- package/.claude/commands/qa-from-ticket.md +0 -24
- package/.claude/commands/qa-gap.md +0 -20
- package/.claude/commands/qa-map.md +0 -47
- package/.claude/commands/qa-pom.md +0 -36
- package/.claude/commands/qa-pyramid.md +0 -37
- package/.claude/commands/qa-report.md +0 -38
- package/.claude/commands/qa-research.md +0 -33
- package/.claude/commands/qa-validate.md +0 -42
- package/.claude/commands/update-test.md +0 -58
- package/.claude/skills/qa-learner/SKILL.md +0 -150
- package/{.claude/commands → commands}/qa-pr.md +0 -0
- package/{.claude/commands → commands}/qa-start.md +0 -0
- package/{.claude/commands → commands}/qa-testid.md +0 -0
package/agents/qaa-validator.md
CHANGED

```diff
@@ -22,6 +22,21 @@ Read ALL of the following files BEFORE performing any validation. Do NOT skip.
 
 - **~/.claude/qaa/MY_PREFERENCES.md** (optional -- read if exists). User's personal QA preferences saved by the qa-learner skill. If a preference conflicts with CLAUDE.md, the preference wins (it is a user override). Check for rules about: assertion style, locator strategy, naming conventions, framework choices.
 
+- **Locator Registry** (optional -- read if it exists):
+  - **`.qa-output/locators/LOCATOR_REGISTRY.md`** -- Central index of all locators extracted from the live app across all features.
+  - **`.qa-output/locators/{feature}.locators.md`** -- Per-feature locator files with detailed page-by-page locator tables.
+
+  When locator registry files exist, use them during Layer 4 (Logic) validation:
+  - Verify that locators used in generated test files and POMs match the locators in the registry (real DOM values take precedence over guessed values)
+  - Flag any POM locator that uses a `data-testid` value NOT found in the registry as a potential mismatch
+  - Flag any POM using Tier 4 locators (CSS/XPath) when a Tier 1 locator exists in the registry for the same element
+
+- **Codebase map documents** (optional -- read if they exist in `.qa-output/codebase/`):
+  - **CODE_PATTERNS.md** -- Naming conventions, import patterns, code style. Use during Layer 2 (Structure) to verify generated files follow the project's naming conventions and import patterns.
+  - **TEST_SURFACE.md** -- Function signatures, parameter types, return types. Use during Layer 4 (Logic) to verify that test targets (file paths, function names) actually exist in the codebase and that mock setup matches real function signatures.
+  - **API_CONTRACTS.md** -- Request/response shapes, auth patterns. Use during Layer 4 (Logic) to verify API test payloads and assertions match the real API contracts.
+  If these files exist, they enable higher-quality validation that catches mismatches between generated tests and the actual codebase.
+
 Note: Read these files in full. Extract the layer definitions, pass criteria, confidence calculation rules, and quality gate checklist. These define your validation contract and output requirements.
 
 **Important:** The generation plan is the source of truth for which files to validate. If a file exists in the test directory but is NOT in the generation plan, it is a pre-existing file and MUST be excluded from validation scope. The only exception is Layer 4's cross-check for duplicate IDs, which reads (but does not validate or modify) existing test files.
@@ -55,6 +70,18 @@ Read all required input files before performing any validation.
 4. **Read templates/validation-report.md** -- extract the 5 required sections, field definitions, and confidence criteria table for report generation.
 
 5. **Read .claude/skills/qa-self-validator/SKILL.md** -- extract the 4 layer definitions and pass criteria.
+
+6. **Read Locator Registry** (if it exists):
+   - Check for `.qa-output/locators/LOCATOR_REGISTRY.md` (central index)
+   - Check for `.qa-output/locators/{feature}.locators.md` (feature-specific)
+   - Extract all locators per page: element name, locator type, locator value, tier
+   - Index by page name and element for cross-referencing during Layer 4 validation
+
+7. **Read codebase map documents** (if they exist in `.qa-output/codebase/`):
+   - **CODE_PATTERNS.md** -- Extract naming conventions and import patterns for Layer 2 validation
+   - **TEST_SURFACE.md** -- Extract function signatures and component exports for Layer 4 target verification
+   - **API_CONTRACTS.md** -- Extract real API request/response shapes for Layer 4 assertion verification
+   If any of these files do not exist, proceed without them -- validation quality is improved but not dependent on them.
 </step>
 
 <step name="validate_layer_1_syntax">
@@ -196,6 +223,17 @@ Check test logic quality against CLAUDE.md standards. This layer includes cross-
 - Every `test()`, `it()`, or `def test_` block contains at least one `expect()`, `assert`, or `.should()` call
 - Empty test bodies or tests with only setup/action but no assertion are flagged
 
+7. **Locator registry cross-check (if registry exists):**
+   - For each POM file, compare every locator property against the locator registry
+   - If a POM uses `getByTestId('login-submit-btn')` but the registry shows the real `data-testid` is `login-submit-button-btn`, flag the mismatch
+   - If a POM uses a Tier 4 locator (CSS/XPath) but the registry has a Tier 1 locator for the same element, flag as upgradeable
+   - If a POM references a `data-testid` value that does NOT exist in the registry AND was not found in the codebase, flag as potentially incorrect
+
+8. **API contract cross-check (if API_CONTRACTS.md exists):**
+   - For each API test file, compare request payloads and response assertions against the real API contracts
+   - Flag payload fields not in the contract or missing required fields
+   - Flag assertion values that don't match the contract's response shape
+
 **Cross-check for overlapping selectors:**
 - If the generated tests use `getByTestId('login-submit-btn')` and an existing test also targets `login-submit-btn`, note the overlap. This is informational (not necessarily a collision), but helps identify potential test interference.
 - If generated tests define custom selectors that conflict with existing test helper selectors, flag for review.
```
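The registry cross-check added in Layer 4 is essentially set membership plus a tier comparison. A minimal sketch in JavaScript, assuming the registry has been parsed into `{ element, tier, testId }` records and POM locators into `{ kind, value, element }` records (both shapes are hypothetical here, the real registry is a markdown table):

```javascript
// Sketch of the Layer 4 locator-registry cross-check described above.
// Record shapes are illustrative assumptions, not the real parser output.
function crossCheckLocators(registry, pomLocators) {
  const byTestId = new Map(registry.map((e) => [e.testId, e]));
  const findings = [];
  for (const loc of pomLocators) {
    // data-testid used in a POM but never seen in the live DOM -> mismatch
    if (loc.kind === 'testid' && !byTestId.has(loc.value)) {
      findings.push({ locator: loc.value, issue: 'not-in-registry' });
    }
    // Tier 4 CSS/XPath when a Tier 1 locator is known -> upgradeable
    if (loc.kind === 'css' || loc.kind === 'xpath') {
      const tier1 = registry.find((e) => e.element === loc.element && e.tier === 1);
      if (tier1) {
        findings.push({ locator: loc.value, issue: 'upgradeable', suggest: tier1.testId });
      }
    }
  }
  return findings;
}
```

Each finding maps onto one of the flag rules above; the validator would fold these into the Layer 4 section of VALIDATION_REPORT.md.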
package/bin/install.cjs
CHANGED

```diff
@@ -101,14 +101,14 @@ async function main() {
   console.log(`  Installing for ${runtime.name} to ${isGlobal ? '~/' + path.relative(HOME, runtime.dir) : './.claude'}`);
   console.log('');
 
-  // Install commands
-  const commandsSrc = path.join(ROOT, '
+  // Install commands (from commands/ in package root to ~/.claude/commands/)
+  const commandsSrc = path.join(ROOT, 'commands');
   const commandsDest = path.join(baseDir, 'commands');
   const cmdCount = copyDir(commandsSrc, commandsDest);
   ok(`Installed ${cmdCount} slash commands`);
 
-  // Install skills (
-  const skillsSrc = path.join(ROOT, '
+  // Install skills (from skills/ in package root to ~/.claude/skills/)
+  const skillsSrc = path.join(ROOT, 'skills');
   const skillsDest = path.join(baseDir, 'skills');
   const skillCount = copyDir(skillsSrc, skillsDest);
   const skillDirCount = countEntries(skillsSrc, 'dirs');
@@ -143,20 +143,30 @@ async function main() {
   copyFile(path.join(ROOT, 'CLAUDE.md'), path.join(qaaDir, 'CLAUDE.md'));
   ok('Installed QA standards (CLAUDE.md)');
 
-  // Install .mcp.json (Playwright MCP server config)
+  // Install .mcp.json (Playwright MCP server config) -- both to qaaDir AND global baseDir
   const mcpSrc = path.join(ROOT, '.mcp.json');
   if (fs.existsSync(mcpSrc)) {
-
-    copyFile(mcpSrc,
-
+    // Copy to qaa dir for reference
+    copyFile(mcpSrc, path.join(qaaDir, '.mcp.json'));
+    // Merge into global ~/.claude/.mcp.json so Playwright MCP is available in ALL projects
+    const globalMcpPath = path.join(baseDir, '.mcp.json');
+    let globalMcp = { mcpServers: {} };
+    if (fs.existsSync(globalMcpPath)) {
+      try { globalMcp = JSON.parse(fs.readFileSync(globalMcpPath, 'utf8')); } catch {}
+      globalMcp.mcpServers = globalMcp.mcpServers || {};
+    }
+    const qaaMcp = JSON.parse(fs.readFileSync(mcpSrc, 'utf8'));
+    Object.assign(globalMcp.mcpServers, qaaMcp.mcpServers);
+    fs.writeFileSync(globalMcpPath, JSON.stringify(globalMcp, null, 2));
+    ok('Installed Playwright MCP server config (global — available in all projects)');
   }
 
   // Write version
   fs.writeFileSync(path.join(qaaDir, 'VERSION'), VERSION);
   ok(`Wrote VERSION (${VERSION})`);
 
-  // Merge settings
-  const settingsSrc = path.join(ROOT, '
+  // Merge settings (from settings.json in package root)
+  const settingsSrc = path.join(ROOT, 'settings.json');
   const settingsDest = path.join(baseDir, 'settings.json');
   if (fs.existsSync(settingsSrc)) {
     let existing = {};
@@ -184,9 +194,11 @@ async function main() {
   console.log('');
   console.log('  \x1b[1m/qa-start\x1b[0m        Full QA pipeline (multi-agent)');
   console.log('  \x1b[1m/qa-map\x1b[0m          Codebase map + analysis');
-  console.log('  \x1b[1m/create-test\x1b[0m
-  console.log('  \x1b[1m/qa-
-  console.log('  \x1b[1m/qa-
+  console.log('  \x1b[1m/qa-create-test\x1b[0m  Tests for a feature/ticket');
+  console.log('  \x1b[1m/qa-audit\x1b[0m        Audit existing tests');
+  console.log('  \x1b[1m/qa-fix\x1b[0m          Fix broken tests');
+  console.log('  \x1b[1m/qa-testid\x1b[0m       Inject data-testid attributes');
+  console.log('  \x1b[1m/qa-pr\x1b[0m           Create QA pull request');
   console.log('');
   console.log(`  ${cmdCount} commands + ${skillDirCount} skills + ${agentCount} agents ready.`);
   console.log('');
```
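The `.mcp.json` change above is a shallow merge of the `mcpServers` map: servers already registered globally survive, and a server with the same name is overwritten by the qaa-agent entry. A standalone sketch of that semantics, with the file I/O stripped out (the function name is illustrative; install.cjs does this inline):

```javascript
// Pure-function sketch of install.cjs's .mcp.json merge semantics:
// keep existing global servers, overlay qaa-agent's servers on top.
// Later (qaa) entries win on name collisions; inputs are not mutated.
function mergeMcpConfigs(globalConfig, qaaConfig) {
  const merged = {
    ...globalConfig,
    mcpServers: { ...(globalConfig.mcpServers || {}) },
  };
  Object.assign(merged.mcpServers, qaaConfig.mcpServers || {});
  return merged;
}
```

Note one design consequence of the installer's `try { ... } catch {}`: a corrupt global `.mcp.json` is silently replaced by a fresh `{ mcpServers: {} }`, so any unparseable user config is dropped rather than reported.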
package/commands/qa-audit.md
ADDED (+119)

````markdown
# QA Audit & Report

Comprehensive quality audit of a test suite with 6-dimension scoring, testing pyramid analysis, and status reporting. Supports three output modes: full audit, pyramid analysis only, or status report adapted to audience.

## Usage

```
/qa-audit <path-to-tests> [options]
```

### Options

- `<path-to-tests>` — directory containing test files to audit
- `--dev-repo <path>` — path to developer repository (for coverage cross-reference)
- `--app-url <url>` — URL of running application for locator verification via Playwright MCP
- `--pyramid` — pyramid analysis only: compare actual vs target distribution with action plan
- `--report [team|management|client]` — generate status report adapted to audience (default: team)

### Mode Detection

```
if --pyramid:
  MODE = "pyramid" → PYRAMID_ANALYSIS.md
elif --report:
  MODE = "report" → QA_STATUS_REPORT.md (adapted to audience)
else:
  MODE = "audit" → QA_AUDIT_REPORT.md (full 6-dimension audit, default)
```

## What It Produces

| Mode | Artifact | Description |
|------|----------|-------------|
| audit | QA_AUDIT_REPORT.md | 6-dimension scoring, critical issues, recommendations with effort estimates |
| pyramid | PYRAMID_ANALYSIS.md | Current vs target distribution, gap table, prioritized action plan |
| report | QA_STATUS_REPORT.md | Metrics, pyramid distribution, risk areas, adapted to audience level |

## Instructions

### AUDIT MODE (default)

Scores across 6 dimensions: Locator Quality (20%), Assertion Specificity (20%), POM Compliance (15%), Test Coverage (20%), Naming Convention (15%), Test Data Management (10%).

1. Read `CLAUDE.md` — quality gates, locator tiers, assertion rules, POM rules, naming conventions.
2. Invoke validator agent in audit mode:

   Task(
     prompt="
       <objective>Audit test suite quality and produce QA_AUDIT_REPORT.md with 6-dimension scoring. If Playwright MCP is connected and an app URL is available, verify E2E test locators against the live DOM via browser_navigate + browser_snapshot. Flag stale locators (Tier 4 CSS/XPath that could be upgraded to Tier 1 data-testid) and locators that no longer match any DOM element.</objective>
       <execution_context>@agents/qaa-validator.md</execution_context>
       <files_to_read>
       - CLAUDE.md
       </files_to_read>
       <parameters>
       user_input: $ARGUMENTS
       mode: audit
       app_url: {auto-detect from test config baseURL, or ask user}
       </parameters>
     "
   )

3. Present results with overall score and prioritized recommendations.

---

### PYRAMID MODE (`--pyramid`)

Analyzes test distribution against the ideal testing pyramid from CLAUDE.md (Unit 60-70%, Integration 10-15%, API 20-25%, E2E 3-5%). Compares actual percentages to targets and produces an action plan.

1. Read `CLAUDE.md` — testing pyramid target percentages.
2. Invoke analyzer agent for pyramid analysis:

   Task(
     prompt="
       <objective>Produce PYRAMID_ANALYSIS.md comparing actual test distribution to target pyramid. Count tests by type (unit, integration, API, E2E), calculate percentages, compare to CLAUDE.md targets, identify gaps, and produce a prioritized action plan to reach the recommended distribution. Adjust target percentages based on the actual app architecture.</objective>
       <execution_context>@agents/qaa-analyzer.md</execution_context>
       <files_to_read>
       - CLAUDE.md
       </files_to_read>
       <parameters>
       user_input: $ARGUMENTS
       mode: pyramid-analysis
       </parameters>
     "
   )

3. Present analysis with gap table and action plan.

---

### REPORT MODE (`--report`)

Generates a summary report of current QA status. Adapts detail level to audience.

**Audience levels:**
- `team` (default) — file-level details, specific locator/assertion issues, technical recommendations
- `management` — high-level metrics, risk areas, coverage percentages, trend indicators
- `client` — coverage summary, confidence level, test pass rates, risk mitigation status

1. Read `CLAUDE.md` — testing pyramid targets, quality gates.
2. Invoke analyzer agent for status reporting:

   Task(
     prompt="
       <objective>Produce QA_STATUS_REPORT.md with current test suite metrics and coverage. Adapt detail level to the specified audience: team (file-level technical details), management (high-level metrics and risks), or client (coverage summary and confidence). Include testing pyramid distribution, pass/fail rates, risk areas, and actionable recommendations appropriate for the audience.</objective>
       <execution_context>@agents/qaa-analyzer.md</execution_context>
       <files_to_read>
       - CLAUDE.md
       </files_to_read>
       <parameters>
       user_input: $ARGUMENTS
       mode: status-report
       </parameters>
     "
   )

3. Present report to user.

$ARGUMENTS
````
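The `--pyramid` gap analysis reduces to comparing per-type percentages against the target ranges quoted in the command doc. A sketch, assuming test counts have already been tallied into a `{ unit, integration, api, e2e }` object (that count shape and the function names are illustrative, not the agent's actual interface):

```javascript
// Sketch of the pyramid gap calculation behind /qa-audit --pyramid.
// Target ranges (min%, max%) mirror the CLAUDE.md defaults quoted above.
const TARGETS = {
  unit: [60, 70],
  api: [20, 25],
  integration: [10, 15],
  e2e: [3, 5],
};

function pyramidGaps(counts) {
  const total = Object.values(counts).reduce((a, b) => a + b, 0);
  return Object.entries(TARGETS).map(([type, [lo, hi]]) => {
    const actual = total === 0 ? 0 : (100 * (counts[type] || 0)) / total;
    let status = 'on-target';
    if (actual < lo) status = 'under';
    else if (actual > hi) status = 'over';
    return { type, actual: Math.round(actual), targetRange: [lo, hi], status };
  });
}
```

A suite with 80% E2E tests, for example, comes back `over` on e2e and `under` on unit, which is exactly the shape of gap table PYRAMID_ANALYSIS.md reports.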
package/commands/qa-create-test.md
ADDED (+288)

````markdown
# QA Create Test

Create, update, or generate tests from tickets — all in one command. Supports three modes: generate tests from code analysis, generate tests from a ticket (Jira/Linear/GitHub), or update/improve existing tests. Uses Playwright MCP to extract real locators from the live app when available.

## Usage

```
/qa-create-test <feature-or-source> [options]
```

### Modes (auto-detected from arguments)

| Mode | Trigger | Example |
|------|---------|---------|
| **From code** | Feature name (no URL, no path to tests) | `/qa-create-test login` |
| **From ticket** | URL, shorthand (#123), or `--ticket` flag | `/qa-create-test https://github.com/org/repo/issues/42` |
| **Update existing** | Path to existing test files or `--update` flag | `/qa-create-test --update tests/e2e/` |
| **POM only** | `--pom-only` flag | `/qa-create-test --pom-only src/pages/` |

### Options

- `--dev-repo <path>` — path to developer repository (default: current directory)
- `--app-url <url>` — URL of running application for E2E execution and locator extraction (auto-detects if not provided)
- `--skip-run` — skip E2E execution, only generate and statically validate
- `--ticket <source>` — force ticket mode with: URL, shorthand (#123, org/repo#123), file path, or plain text
- `--update <path>` — force update mode: audit and improve existing tests at path
- `--scope fix|improve|add|full` — for update mode only (default: full)
- `--pom-only [path]` — generate only Page Object Model files (BasePage + feature POMs), no test specs
- `--framework <name>` — override framework auto-detection (playwright, cypress, selenium) — used with --pom-only

### Mode Detection Logic

```
if --pom-only:
  MODE = "pom-only"
elif argument matches URL pattern (github.com, atlassian.net, linear.app) OR contains "#" + digits OR --ticket flag:
  MODE = "from-ticket"
elif --update flag OR argument is path to existing test directory/files:
  MODE = "update"
else:
  MODE = "from-code"
```

## What It Produces

### From Code Mode
- Test spec files (unit, API, E2E as appropriate)
- Page Object Model files (for E2E tests)
- Fixture files (test data)
- Locator registry entries (`.qa-output/locators/`)
- E2E_RUN_REPORT.md (if E2E tests ran against live app)

### From Ticket Mode
- TEST_CASES_FROM_TICKET.md — traceability matrix (AC → test case)
- GENERATION_PLAN_TICKET.md — synthetic generation plan
- Test spec files with `traces_to` fields linking back to ticket ACs
- VALIDATION_REPORT.md

### Update Mode
- QA_AUDIT_REPORT.md — current quality assessment
- Improved test files (after user approval)

## Instructions

### Step 1: Detect Mode

Parse `$ARGUMENTS` to determine mode using the detection logic above.

Print mode banner:
```
=== QA Create Test ===
Mode: {from-code | from-ticket | update | pom-only}
Target: {feature name | ticket URL | test path}
App URL: {url or "auto-detect"}
===========================
```

---

### FROM CODE MODE

1. Read `CLAUDE.md` — POM rules, locator tiers, assertion rules, naming conventions, quality gates.
2. Read existing analysis artifacts if available:
   - `.qa-output/QA_ANALYSIS.md` — architecture context
   - `.qa-output/TEST_INVENTORY.md` — pre-defined test cases for this feature
3. **Check for codebase map** (`.qa-output/codebase/`):
   - Look for: `CODE_PATTERNS.md`, `API_CONTRACTS.md`, `TEST_SURFACE.md`, `TESTABILITY.md`
   - If at least 2 of these files exist: read them all for project context (naming conventions, API shapes, testable surfaces).
   - **If NONE of these files exist: STOP and tell the user:**
     ```
     ⚠ No codebase map found (.qa-output/codebase/ is empty or missing).

     The codebase map provides critical context: naming conventions, API contracts,
     testable surfaces, and project structure. Without it, generated tests will lack
     project-specific context and may not follow your repo's conventions.

     Run /qa-map first to generate the codebase map, then re-run /qa-create-test.

     To skip this check and proceed without context: re-run with --skip-map
     ```
     Only proceed without the codebase map if the user explicitly passes `--skip-map`.

4. **Check existing locator registry and extract new locators from live app:**

   a. Read `.qa-output/locators/LOCATOR_REGISTRY.md` if it exists.

   b. If locators for this feature already exist in the registry AND no `--app-url` was provided: reuse cached locators.

   c. If locators are missing or `--app-url` was provided: use Playwright MCP to navigate the app and extract real locators:
      ```
      mcp__playwright__browser_navigate({ url: "{app_url}/{feature_path}" })
      mcp__playwright__browser_snapshot()
      ```
      Extract all data-testid, ARIA roles, labels, placeholders.
      Navigate through multi-page flows if needed.
      Write per-feature locator file to `.qa-output/locators/{feature}.locators.md`.
      Update the registry `.qa-output/locators/LOCATOR_REGISTRY.md`.

   If no app URL is available and no locators are in the registry, skip — the executor proposes locators from source code.

5. Invoke executor agent to generate test files:

   Task(
     prompt="
       <objective>Generate test files for the specified feature following CLAUDE.md standards, using codebase map for context</objective>
       <execution_context>@agents/qaa-executor.md</execution_context>
       <files_to_read>
       - CLAUDE.md
       - .qa-output/locators/LOCATOR_REGISTRY.md (if exists)
       - .qa-output/locators/{feature}.locators.md (if exists)
       - .qa-output/codebase/CODE_PATTERNS.md (if exists)
       - .qa-output/codebase/API_CONTRACTS.md (if exists)
       - .qa-output/codebase/TEST_SURFACE.md (if exists)
       - .qa-output/codebase/TESTABILITY.md (if exists)
       </files_to_read>
       <parameters>
       user_input: $ARGUMENTS
       mode: feature-test
       codebase_map_dir: .qa-output/codebase
       locator_registry: .qa-output/locators/LOCATOR_REGISTRY.md
       </parameters>
     "
   )

6. If E2E test files were generated AND `--skip-run` was NOT passed, invoke E2E runner:

   Task(
     prompt="
       <objective>Run generated E2E tests against live application, capture real locators, fix mismatches, loop until pass</objective>
       <execution_context>@agents/qaa-e2e-runner.md</execution_context>
       <files_to_read>
       - CLAUDE.md
       - {generated E2E test files from executor return}
       - {generated POM files from executor return}
       </files_to_read>
       <parameters>
       app_url: {from --app-url flag or auto-detect}
       output_dir: .qa-output
       </parameters>
     "
   )

7. Present results with file counts and suggest `/qa-pr`.

---

### FROM TICKET MODE

1. Read `CLAUDE.md` — all QA standards.
2. Execute the ticket workflow end-to-end:

   Follow the workflow defined in `@workflows/qa-from-ticket.md` end-to-end.
   Preserve all workflow gates (ticket parsing, acceptance criteria extraction, traceability matrix, validation).

   Key steps in the workflow:
   - Parse ticket source (GitHub URL, Jira URL, Linear URL, file, or plain text)
   - Fetch ticket content (via `gh issue view`, WebFetch, or file read)
   - Extract acceptance criteria, user stories, edge cases
   - Scan dev repo for related source files
   - Extract locators from live app via Playwright MCP (if app URL available)
   - Generate test cases with traceability matrix (every AC maps to ≥1 test case)
   - Spawn executor agent to produce test files
   - Spawn validator agent for 4-layer validation
   - Print summary with AC coverage and traceability

**Traceability guarantee:** Every acceptance criterion maps to at least one test case via the `traces_to` field.

**Supported ticket sources:**

| Format | Example | Detection |
|--------|---------|-----------|
| GitHub Issue URL | `https://github.com/org/repo/issues/123` | Contains `github.com` + `/issues/` |
| GitHub shorthand | `org/repo#123` or `#123` | Contains `#` + digits |
| Jira URL | `https://company.atlassian.net/browse/PROJ-123` | Contains `.atlassian.net/browse/` |
| Linear URL | `https://linear.app/team/issue/TEAM-123` | Contains `linear.app` |
| File path | `./tickets/feature-spec.md` | Path exists on disk |
| Plain text | `"As a user I want to..."` | None of the above match |

---

### UPDATE MODE

1. Read `CLAUDE.md` — quality gates, locator tiers, assertion rules, POM rules.
2. Invoke validator agent in audit mode:

   Task(
     prompt="
       <objective>Audit existing test quality and produce QA_AUDIT_REPORT.md. If Playwright MCP is connected, verify E2E test locators against the live DOM via browser_navigate + browser_snapshot.</objective>
       <execution_context>@agents/qaa-validator.md</execution_context>
       <files_to_read>
       - CLAUDE.md
       </files_to_read>
       <parameters>
       user_input: $ARGUMENTS
       mode: audit
       app_url: {auto-detect from test config baseURL, or ask user}
       </parameters>
     "
   )

3. Present audit results and wait for user approval.
4. Invoke executor agent to apply approved improvements:

   Task(
     prompt="
       <objective>Apply approved improvements to existing tests without deleting working tests. If Playwright MCP is connected, use browser_navigate + browser_snapshot to extract real locators when upgrading from Tier 4 to Tier 1.</objective>
       <execution_context>@agents/qaa-executor.md</execution_context>
       <files_to_read>
       - CLAUDE.md
       - .qa-output/QA_AUDIT_REPORT.md
       - .qa-output/locators/LOCATOR_REGISTRY.md (if exists)
       </files_to_read>
       <parameters>
       user_input: $ARGUMENTS
       mode: update
       app_url: {auto-detect from test config baseURL, or ask user}
       </parameters>
     "
   )

**Update scopes:**
- `fix` — repair broken tests only
- `improve` — upgrade locators, assertions, POM structure
- `add` — add missing test cases without modifying existing
- `full` — audit everything, then improve with approval (default)

**Rule:** NEVER delete or rewrite working tests without user approval. Surgical: add, fix, improve — never replace.

---

### POM ONLY MODE (`--pom-only`)

Generate only Page Object Model files — no test specs.

1. Read `CLAUDE.md` — POM rules, locator tier hierarchy, naming conventions.
2. Invoke executor agent in POM-only mode:

   Task(
     prompt="
       <objective>Generate Page Object Models following CLAUDE.md POM rules. If Playwright MCP is connected and an app URL is available, navigate each page first to extract real locators (data-testid, ARIA roles, labels) from the live DOM via browser_navigate + browser_snapshot before generating POMs. This ensures POM locators match the real app instead of guessing from source code.</objective>
       <execution_context>@agents/qaa-executor.md</execution_context>
       <files_to_read>
       - CLAUDE.md
       - .qa-output/locators/LOCATOR_REGISTRY.md (if exists)
       </files_to_read>
       <parameters>
       user_input: $ARGUMENTS
       mode: pom-only
       app_url: {auto-detect from test config baseURL, or ask user}
       </parameters>
     "
   )

**Produces:**
- BasePage file (if not already present)
- Feature-specific POM files following `[PageName]Page.[ext]` naming convention
- No test specs, no fixtures

**POM rules enforced:**
- One class per page — no god objects
- No assertions in page objects — assertions belong ONLY in test specs
- Locators as readonly properties — Tier 1 preferred (data-testid, ARIA roles)
- Actions return void or next page — for fluent chaining
- State queries return data — let the test decide what to assert
- Every POM extends BasePage

$ARGUMENTS
````
|