qaa-agent 1.6.2 → 1.7.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.mcp.json +8 -8
- package/CHANGELOG.md +93 -71
- package/CLAUDE.md +553 -553
- package/agents/qa-pipeline-orchestrator.md +1378 -1378
- package/agents/qaa-analyzer.md +539 -524
- package/agents/qaa-bug-detective.md +479 -446
- package/agents/qaa-codebase-mapper.md +935 -935
- package/agents/qaa-discovery.md +384 -0
- package/agents/qaa-e2e-runner.md +416 -415
- package/agents/qaa-executor.md +651 -651
- package/agents/qaa-planner.md +405 -390
- package/agents/qaa-project-researcher.md +319 -319
- package/agents/qaa-scanner.md +424 -424
- package/agents/qaa-testid-injector.md +643 -585
- package/agents/qaa-validator.md +490 -452
- package/bin/install.cjs +200 -198
- package/bin/lib/commands.cjs +709 -709
- package/bin/lib/config.cjs +307 -307
- package/bin/lib/core.cjs +497 -497
- package/bin/lib/frontmatter.cjs +299 -299
- package/bin/lib/init.cjs +989 -989
- package/bin/lib/milestone.cjs +241 -241
- package/bin/lib/model-profiles.cjs +60 -60
- package/bin/lib/phase.cjs +911 -911
- package/bin/lib/roadmap.cjs +306 -306
- package/bin/lib/state.cjs +748 -748
- package/bin/lib/template.cjs +222 -222
- package/bin/lib/verify.cjs +842 -842
- package/bin/qaa-tools.cjs +607 -607
- package/commands/qa-audit.md +119 -0
- package/commands/qa-create-test.md +288 -0
- package/commands/qa-fix.md +147 -0
- package/commands/qa-map.md +137 -0
- package/{.claude/commands → commands}/qa-pr.md +23 -23
- package/{.claude/commands → commands}/qa-start.md +22 -22
- package/{.claude/commands → commands}/qa-testid.md +19 -19
- package/docs/COMMANDS.md +341 -341
- package/docs/DEMO.md +182 -182
- package/docs/TESTING.md +156 -156
- package/package.json +6 -7
- package/{.claude/settings.json → settings.json} +1 -2
- package/templates/failure-classification.md +391 -391
- package/templates/gap-analysis.md +409 -409
- package/templates/pr-template.md +48 -48
- package/templates/qa-analysis.md +381 -381
- package/templates/qa-audit-report.md +465 -465
- package/templates/qa-repo-blueprint.md +636 -636
- package/templates/scan-manifest.md +312 -312
- package/templates/test-inventory.md +582 -582
- package/templates/testid-audit-report.md +354 -354
- package/templates/validation-report.md +243 -243
- package/workflows/qa-analyze.md +296 -296
- package/workflows/qa-from-ticket.md +536 -536
- package/workflows/qa-gap.md +309 -303
- package/workflows/qa-pr.md +389 -389
- package/workflows/qa-start.md +1192 -1168
- package/workflows/qa-testid.md +384 -356
- package/workflows/qa-validate.md +299 -295
- package/.claude/commands/create-test.md +0 -164
- package/.claude/commands/qa-audit.md +0 -37
- package/.claude/commands/qa-blueprint.md +0 -54
- package/.claude/commands/qa-fix.md +0 -36
- package/.claude/commands/qa-from-ticket.md +0 -24
- package/.claude/commands/qa-gap.md +0 -20
- package/.claude/commands/qa-map.md +0 -47
- package/.claude/commands/qa-pom.md +0 -36
- package/.claude/commands/qa-pyramid.md +0 -37
- package/.claude/commands/qa-report.md +0 -38
- package/.claude/commands/qa-research.md +0 -33
- package/.claude/commands/qa-validate.md +0 -42
- package/.claude/commands/update-test.md +0 -58
- package/.claude/skills/qa-learner/SKILL.md +0 -150
- /package/{.claude/skills → skills}/qa-bug-detective/SKILL.md +0 -0
- /package/{.claude/skills → skills}/qa-repo-analyzer/SKILL.md +0 -0
- /package/{.claude/skills → skills}/qa-self-validator/SKILL.md +0 -0
- /package/{.claude/skills → skills}/qa-template-engine/SKILL.md +0 -0
- /package/{.claude/skills → skills}/qa-testid-injector/SKILL.md +0 -0
- /package/{.claude/skills → skills}/qa-workflow-documenter/SKILL.md +0 -0
@@ -1,536 +1,536 @@
<purpose>
Generate test cases and test files from a ticket (GitHub Issue, Jira, Linear, plain text, or file). Extracts acceptance criteria, user stories, and edge cases from the ticket content, scans the dev repo for related source files, generates a traceability matrix mapping acceptance criteria to test cases, then spawns the executor and validator agents to produce and verify test files. Use this workflow to ensure every acceptance criterion has corresponding test coverage before a feature ships.
</purpose>

<required_reading>
- `CLAUDE.md` -- QA automation standards, test spec rules, naming conventions, quality gates, testing pyramid
- `agents/qaa-scanner.md` -- Scanner agent definition (repo scanning for related source files)
- `agents/qaa-executor.md` -- Executor agent definition (test file generation)
- `agents/qaa-validator.md` -- Validator agent definition (4-layer validation)
- `templates/test-inventory.md` -- TEST_INVENTORY.md format contract (test case structure)
</required_reading>

<process>

<step name="parse_arguments">
## Step 1: Parse Ticket Source

Parse `$ARGUMENTS` to determine the ticket source type and content.

**Supported argument formats:**

| Format | Example | Detection |
|--------|---------|-----------|
| GitHub Issue URL | `https://github.com/org/repo/issues/123` | Starts with `https://github.com` and contains `/issues/` |
| GitHub Issue shorthand | `org/repo#123` or `#123` | Contains `#` followed by digits |
| Jira URL | `https://company.atlassian.net/browse/PROJ-123` | Contains `.atlassian.net/browse/` |
| Linear URL | `https://linear.app/team/issue/TEAM-123` | Contains `linear.app` |
| File path | `./tickets/feature-spec.md` | Path exists on disk |
| Plain text | `"As a user I want to..."` | None of the above patterns match |

**Parsing logic:**

```bash
TICKET_SOURCE="$ARGUMENTS"
TICKET_TYPE=""
TICKET_CONTENT=""

# Detect source type from the argument pattern
if matches GitHub URL or shorthand:
    TICKET_TYPE="github"
elif matches Jira URL:
    TICKET_TYPE="jira"
elif matches Linear URL:
    TICKET_TYPE="linear"
elif file exists at path:
    TICKET_TYPE="file"
else:
    TICKET_TYPE="text"
```

**If no arguments provided:**

Print error and STOP:
```
Error: No ticket source provided.
Usage: /qa-from-ticket <source>

Supported sources:
  GitHub: https://github.com/org/repo/issues/123 or #123
  Jira: https://company.atlassian.net/browse/PROJ-123
  Linear: https://linear.app/team/issue/TEAM-123
  File: ./path/to/ticket.md
  Text: "As a user I want to log in so that I can access my dashboard"
```
</step>
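The detection branch above is pseudocode; a minimal sketch of it as a real bash function might look like this. The exact regexes are illustrative assumptions mirroring the Detection column of the table, not part of the workflow contract.

```shell
# Hypothetical sketch of Step 1's source-type detection. Patterns follow the
# Detection column above; the exact regexes are assumptions.
detect_ticket_type() {
  local src="$1"
  if [[ "$src" =~ ^https://github\.com/.+/issues/[0-9]+ || "$src" =~ \#[0-9]+$ ]]; then
    echo "github"
  elif [[ "$src" == *".atlassian.net/browse/"* ]]; then
    echo "jira"
  elif [[ "$src" == *"linear.app"* ]]; then
    echo "linear"
  elif [[ -f "$src" ]]; then
    echo "file"
  else
    echo "text"
  fi
}
```

Order matters: the file-existence check must come after the URL patterns so a URL is never mistaken for a path, and plain text is the fall-through case.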

<step name="fetch_ticket_content">
## Step 2: Fetch Ticket Content

Retrieve the ticket content based on the source type.

**GitHub Issue:**

```bash
# For full URL: extract owner, repo, issue number
# For shorthand #123: use current repo context
gh issue view {issue_number} --repo {owner/repo} --json title,body,labels,assignees,milestone
```

Extract from JSON response:
- `title` -- Issue title
- `body` -- Issue body (markdown)
- `labels` -- Issue labels (may indicate priority)
- `milestone` -- Milestone (may indicate release target)

**Jira / Linear URL:**

```bash
# Use WebFetch to retrieve the ticket page content
# Extract structured content from the response
```

If WebFetch fails or authentication is required:

```
CHECKPOINT:
  type: human-action
  blocking: "Cannot access ticket URL -- authentication required"
  details: "Attempted to fetch: {TICKET_SOURCE}. Received authentication error."
  awaiting: "Provide ticket content as plain text or a file path instead. Or authenticate with the service."
```

**File path:**

```bash
# Read the file content directly
TICKET_CONTENT=$(cat "{TICKET_SOURCE}")
```

If the file is empty or unreadable, print error: `"Error: Cannot read ticket file: {path}"` and STOP.

**Plain text:**

```bash
TICKET_CONTENT="${TICKET_SOURCE}"
```

**Print ticket summary:**

```
Ticket Source: {TICKET_TYPE}
Title: {extracted title or first line}
Content Length: {character count}
```
</step>
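The file-path branch plus its empty/unreadable guard can be made concrete as below. `read_ticket_file` is a hypothetical helper name; the error wording follows the step above.

```shell
# Sketch of Step 2's file-path branch with the empty/unreadable guard.
# `read_ticket_file` is an illustrative name, not part of the workflow.
read_ticket_file() {
  local path="$1"
  # -r: readable, -s: non-empty; either failing triggers the STOP error
  if [[ ! -r "$path" || ! -s "$path" ]]; then
    echo "Error: Cannot read ticket file: ${path}" >&2
    return 1
  fi
  cat "$path"
}
```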

<step name="extract_acceptance_criteria">
## Step 3: Extract Acceptance Criteria

Parse the ticket content to identify testable acceptance criteria, user stories, edge cases, and priority.

**Extraction targets:**

| Target | What to Look For | Example |
|--------|-----------------|---------|
| Title | Issue title or first heading | "Add password reset flow" |
| User Story | "As a [role], I want to [action], so that [benefit]" pattern | "As a user, I want to reset my password via email" |
| Acceptance Criteria | Bullet lists after "Acceptance Criteria", "AC:", "Given/When/Then" patterns | "Given a valid email, When I request reset, Then I receive a reset link" |
| Edge Cases | Items mentioning: invalid, expired, duplicate, empty, boundary, error, timeout | "Handle expired reset tokens gracefully" |
| Priority | Labels, priority fields, or keywords: critical, blocker, P0, P1, P2 | "Priority: P0" |
| Affected Components | File paths, module names, feature areas mentioned | "Auth module", "src/services/auth.service.ts" |

**Structured extraction output:**

```
TICKET_TITLE: "{title}"
TICKET_PRIORITY: "{P0|P1|P2}"

ACCEPTANCE_CRITERIA:
  AC-1: "{criterion text}"
  AC-2: "{criterion text}"
  ...

USER_STORIES:
  US-1: "{story text}"
  ...

EDGE_CASES:
  EC-1: "{edge case description}"
  EC-2: "{edge case description}"
  ...

KEYWORDS: ["{keyword1}", "{keyword2}", ...]
```

**If no acceptance criteria can be extracted:**

```
CHECKPOINT:
  type: human-action
  blocking: "Cannot extract acceptance criteria from ticket"
  details: "Ticket content does not contain recognizable acceptance criteria, Given/When/Then patterns, or bullet-pointed requirements. Raw content: {first 500 chars}"
  awaiting: "Provide acceptance criteria explicitly as a numbered list, or reformat the ticket."
```
</step>
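A minimal grep-based sketch of the Given/When/Then extraction is shown below. The regex and the AC-N numbering follow the structured output above, but are assumptions: a real pass would also handle "Acceptance Criteria" headings and plain bullets without G/W/T keywords.

```shell
# Illustrative extraction of Given/When/Then criteria into AC-N lines.
# The pattern is an assumption; bullets lacking G/W/T keywords are ignored here.
extract_acceptance_criteria() {
  grep -iE 'given .*when .*then' "$1" \
    | sed 's/^[-* ]*//' \
    | awk '{ printf "AC-%d: %s\n", NR, $0 }'
}
```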

<step name="scan_related_source">
## Step 4: Scan Dev Repo for Related Source Files

Search the dev repo for source files related to the ticket's feature area.

**Search strategy (using keywords from step 3):**

1. **File name search:** Glob for files matching keywords from the ticket
   - Keywords: module names, feature areas, component names mentioned in the ticket
   - Patterns: `**/*{keyword}*.*` for each keyword

2. **Content search:** Grep for function names, route paths, class names mentioned in the ticket
   - Search for acceptance-criteria-related terms: endpoints, function names, component names
   - Grep patterns: route paths (e.g., `/api/v1/password-reset`), handler names (e.g., `resetPassword`)

3. **Directory search:** Check for feature-organized directories matching the ticket's domain
   - Directories: `src/services/{feature}*`, `src/controllers/{feature}*`, `src/routes/{feature}*`

**Build related files list:**

```
RELATED_FILES:
  - path: "src/services/auth.service.ts"
    relevance: "Contains resetPassword function referenced in AC-1"
  - path: "src/routes/auth.routes.ts"
    relevance: "Contains /api/v1/password-reset endpoint from AC-2"
  - path: "src/controllers/auth.controller.ts"
    relevance: "Controller handling password reset requests"
  ...
```

**If no related files found:**

Print warning but continue:
```
Warning: No source files found matching ticket keywords.
Keywords searched: {keywords}
Generating test cases based on ticket content only (no source-level analysis).
```
</step>
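The file-name search from strategy 1 could be realized roughly as follows; the `find` invocation and the configurable search root are assumptions about tooling and layout, and relevance annotation happens separately.

```shell
# Sketch of strategy 1's keyword-driven file-name search. `find` usage and the
# root argument are assumptions; deduplication via sort -u.
find_related_files() {
  local root="$1"; shift
  local keyword
  for keyword in "$@"; do
    find "$root" -type f -iname "*${keyword}*" 2>/dev/null
  done | sort -u
}
```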

<step name="extract_locators_from_app">
## Step 5: Check Locator Registry and Extract from Live App (Optional)

Check the locator registry for existing locators, and if needed, use Playwright MCP to extract new ones from the live app.

**Step 5a: Check existing registry**

Read `.qa-output/locators/LOCATOR_REGISTRY.md` if it exists. Check if locators for pages related to this ticket's feature already exist. If they do and no `--app-url` was provided, reuse them and skip browser extraction.

**Step 5b: When to extract from browser**
- Locators for this feature's pages are NOT in the registry, OR
- An `--app-url` argument was explicitly provided (forces re-extraction)

**When to skip entirely:**
- No app URL available, no dev server detected, AND no registry exists
- The ticket describes only backend/API functionality with no UI

**Extraction process:**

1. Identify relevant pages from the ticket's acceptance criteria and affected components (from Step 3 and Step 4).

2. For each relevant page, navigate and capture:
   ```
   mcp__playwright__browser_navigate({ url: "{app_url}/{page_path}" })
   mcp__playwright__browser_snapshot()
   ```

3. If the ticket describes a user flow (e.g., "user fills form and submits"), walk through the flow:
   ```
   mcp__playwright__browser_fill_form({ ... })
   mcp__playwright__browser_click({ element: "Submit button" })
   mcp__playwright__browser_snapshot() // capture resulting page
   ```

4. From each snapshot, extract:
   - All `data-testid` attributes
   - ARIA roles with accessible names
   - Form labels and placeholders
   - Page structure and navigation elements

5. Write per-feature locator file to `.qa-output/locators/{feature}.locators.md`:
   ```markdown
   # Locators -- {feature}

   Extracted: {date}
   App URL: {app_url}

   ## Page: {page_name} ({url})

   | Element | Locator Type | Locator Value | Tier |
   |---------|-------------|---------------|------|
   | ... | data-testid | ... | 1 |
   | ... | role + name | ... | 1 |
   | ... | label | ... | 2 |
   ```

6. Update the registry `.qa-output/locators/LOCATOR_REGISTRY.md` -- merge new locators into the central index without overwriting locators from other features.

If this step is skipped entirely, the executor will propose locators based on source code analysis and CLAUDE.md conventions.
</step>
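One naive way to satisfy step 6's "merge without overwriting" is to rebuild the registry from all per-feature locator files on every update, so no feature's section is ever clobbered. The paths follow the conventions above; the rebuild-from-parts strategy itself is an assumption, not the prescribed implementation.

```shell
# Sketch: regenerate LOCATOR_REGISTRY.md by concatenating every per-feature
# *.locators.md file. Rebuilding from parts is an assumed strategy that
# trivially guarantees no other feature's locators are overwritten.
update_locator_registry() {
  local dir="$1"   # e.g. .qa-output/locators
  {
    echo "# Locator Registry"
    local f
    for f in "$dir"/*.locators.md; do
      [ -e "$f" ] || continue
      echo
      cat "$f"
    done
  } > "$dir/LOCATOR_REGISTRY.md"
}
```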

<step name="generate_test_cases">
## Step 6: Generate Test Cases with Traceability Matrix

Map each acceptance criterion to one or more test cases, following CLAUDE.md test spec rules.

**Test case generation rules:**

1. **One or more test cases per acceptance criterion** -- Each AC must have at least one test case. Complex ACs may produce multiple test cases (happy path + error cases).

2. **One test case per edge case** -- Each extracted edge case becomes a dedicated test case.

3. **Follow naming convention:**
   - Unit tests: `UT-{MODULE}-{NNN}` (for logic-level ACs)
   - API tests: `API-{RESOURCE}-{NNN}` (for endpoint-level ACs)
   - Integration tests: `INT-{MODULE}-{NNN}` (for cross-module ACs)
   - E2E tests: `E2E-{FLOW}-{NNN}` (for user-journey ACs)

4. **Follow CLAUDE.md Test Spec Rules:**
   - Every test case MUST have: unique ID, exact target, concrete inputs, explicit expected outcome, priority
   - Expected outcomes must be concrete (no "works correctly", "handles properly")
   - Concrete inputs with actual values (no "valid data")

5. **Pyramid level assignment:**
   - Pure function or service logic -> Unit test
   - API endpoint behavior -> API test
   - Cross-module interaction -> Integration test
   - Full user journey described in ticket -> E2E test

**Write TEST_CASES_FROM_TICKET.md:**

```markdown
# Test Cases from Ticket

## Ticket Info

| Field | Value |
|-------|-------|
| Source | {TICKET_SOURCE} |
| Title | {TICKET_TITLE} |
| Priority | {TICKET_PRIORITY} |
| Acceptance Criteria | {AC_COUNT} |
| Edge Cases | {EC_COUNT} |
| Test Cases Generated | {TOTAL_TEST_CASES} |

## Traceability Matrix

| Acceptance Criterion | Test Case ID | Pyramid Level | Priority |
|---------------------|--------------|---------------|----------|
| AC-1: {text} | UT-AUTH-001 | Unit | P0 |
| AC-1: {text} | API-AUTH-001 | API | P0 |
| AC-2: {text} | E2E-RESET-001 | E2E | P0 |
| EC-1: {text} | UT-AUTH-002 | Unit | P1 |
| ... | ... | ... | ... |

## Test Cases

### Unit Tests

#### UT-{MODULE}-{NNN}: {description}

| Field | Value |
|-------|-------|
| test_id | UT-{MODULE}-{NNN} |
| target | {file_path}:{function_name} |
| what_to_validate | {behavior description} |
| concrete_inputs | {actual input values} |
| mocks_needed | {dependencies to mock or "None (pure function)"} |
| expected_outcome | {exact return value, error message, or state change} |
| priority | {P0|P1|P2} |
| traces_to | AC-{N} or EC-{N} |

[... repeat for all unit tests ...]

### API Tests

[... same structure with API-specific fields ...]

### Integration Tests

[... same structure with integration-specific fields ...]

### E2E Smoke Tests

[... same structure with E2E-specific fields ...]
```

**Set output directory:**

```bash
OUTPUT_DIR=".qa-output"
mkdir -p "${OUTPUT_DIR}"
```

Write to `{OUTPUT_DIR}/TEST_CASES_FROM_TICKET.md`.
</step>
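The ID scheme in rule 3 can be pinned down with a tiny helper. Zero-padding `NNN` to three digits is an assumption consistent with examples like `UT-AUTH-001` in the traceability matrix above; the helper name is illustrative.

```shell
# Hypothetical helper for rule 3's naming convention. Uppercasing the module
# and three-digit zero-padding are assumptions matching the matrix examples.
make_test_id() {
  local level="$1" module="$2" n="$3"   # level: UT | API | INT | E2E
  printf '%s-%s-%03d\n' "$level" \
    "$(printf '%s' "$module" | tr '[:lower:]' '[:upper:]')" "$n"
}
```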

<step name="generate_test_files">
## Step 7: Spawn Executor Agent

Build a synthetic generation plan from the test cases and spawn the executor to write test files.

**Build generation plan:**

Group test cases by feature (from ticket domain) and create task entries following the same structure the executor expects:

```markdown
# Generation Plan (from ticket)

## Summary

| Metric | Value |
|--------|-------|
| Total tasks | {N} |
| Total files | {N} |
| Feature groups | {N} |
| Test cases covered | {N} |
| Framework | {detected from project} |
| File extension | {ext from project} |

## Tasks

### Task: {feature}-unit
| Field | Value |
|-------|-------|
| task_id | {feature}-unit |
| feature_group | {feature} |
| files_to_create | tests/unit/{feature}.unit.spec.{ext} |
| test_case_ids | UT-{MODULE}-001, UT-{MODULE}-002, ... |
| depends_on | none |
| estimated_complexity | {LOW|MEDIUM|HIGH} |

[... additional tasks for API, E2E, POM, fixtures ...]
```

Write to `{OUTPUT_DIR}/GENERATION_PLAN_TICKET.md`.

**Spawn executor:**

```
Task(
  prompt="
    <objective>Generate test files from ticket-derived test cases</objective>
    <execution_context>@agents/qaa-executor.md</execution_context>
    <files_to_read>
    - {OUTPUT_DIR}/GENERATION_PLAN_TICKET.md
    - {OUTPUT_DIR}/TEST_CASES_FROM_TICKET.md
    - {OUTPUT_DIR}/locators/LOCATOR_REGISTRY.md (if exists -- accumulated real locators)
    - CLAUDE.md
    </files_to_read>
    <parameters>
    output_base: {test output directory}
    </parameters>
  "
)
```

**Handle executor return:**

Extract: `files_created`, `total_files`, `commit_count`, `test_case_count`.
</step>

<step name="validate_generated_tests">
## Step 8: Spawn Validator Agent

Validate the generated test files against CLAUDE.md standards.

```
Task(
  prompt="
    <objective>Validate generated test files across 4 layers</objective>
    <execution_context>@agents/qaa-validator.md</execution_context>
    <files_to_read>
    - CLAUDE.md
    - {OUTPUT_DIR}/GENERATION_PLAN_TICKET.md
    </files_to_read>
    <parameters>
    output_path: {OUTPUT_DIR}/VALIDATION_REPORT.md
    </parameters>
  "
)
```

**Handle validator return:**

Extract: `overall_status`, `confidence`, `issues_found`, `issues_fixed`, `unresolved_count`.
</step>

<step name="print_summary">
## Step 9: Print Summary

Print a comprehensive summary showing traceability from ticket to tests.

```
=== Test Generation from Ticket Complete ===

Ticket: {TICKET_TITLE}
Source: {TICKET_TYPE} ({TICKET_SOURCE})

Acceptance Criteria Coverage:
  Total ACs: {AC_COUNT}
  Covered: {COVERED_COUNT}
  Uncovered: {UNCOVERED_COUNT}

Edge Cases:
  Extracted: {EC_COUNT}
  With tests: {EC_TESTED_COUNT}

Test Cases Generated:
  Unit Tests: {unit_count}
  Integration Tests: {integration_count}
  API Tests: {api_count}
  E2E Tests: {e2e_count}
  --------------------------
  Total: {total_count}

Files Created: {file_count}

Validation:
  Status: {PASS|PASS_WITH_WARNINGS|FAIL}
  Confidence: {HIGH|MEDIUM|LOW}

Artifacts:
  - {OUTPUT_DIR}/TEST_CASES_FROM_TICKET.md (traceability matrix)
  - {OUTPUT_DIR}/GENERATION_PLAN_TICKET.md (generation plan)
  - {OUTPUT_DIR}/VALIDATION_REPORT.md (validation results)
  - {test file paths...} (generated test files)
===========================================
```
</step>
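The per-level counts in the summary can be tallied by counting unique test-case IDs per prefix in TEST_CASES_FROM_TICKET.md. This is a sketch assuming the IDs appear literally in the matrix rows; the helper name is hypothetical.

```shell
# Sketch: count unique test-case IDs with a given prefix (UT, API, INT, E2E)
# in a test-cases file. Assumes IDs appear literally, e.g. UT-AUTH-001.
count_cases() {
  local prefix="$1" file="$2"
  grep -oE "${prefix}-[A-Z0-9]+-[0-9]{3}" "$file" | sort -u | wc -l | tr -d ' '
}
```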

</process>

<output>
This workflow generates tests mapped 1:1 to ticket acceptance criteria.

**Artifacts produced:**

| Artifact | When Produced | Description |
|----------|---------------|-------------|
| TEST_CASES_FROM_TICKET.md | Always | Test cases with traceability matrix mapping ACs to test IDs |
| GENERATION_PLAN_TICKET.md | Always | Synthetic generation plan for the executor agent |
| Test files (unit, API, E2E, POM, fixtures) | Always | Actual test code following CLAUDE.md standards |
| VALIDATION_REPORT.md | Always | 4-layer validation of generated test files |

**Traceability guarantee:** Every acceptance criterion in the ticket maps to at least one test case. The traceability matrix in TEST_CASES_FROM_TICKET.md documents this mapping with `traces_to` fields.
</output>

<error_handling>
| Error | Cause | Action |
|-------|-------|--------|
| No ticket source provided | Missing argument | Print usage help, STOP |
| Cannot access ticket URL | Auth required or URL invalid | Checkpoint: ask user for content as text or file |
| Cannot read ticket file | File does not exist or is empty | Print error with path, STOP |
| No acceptance criteria found | Ticket lacks structured requirements | Checkpoint: ask user to provide ACs explicitly |
| No related source files found | Keywords do not match any files | Warning only -- continue with ticket-only analysis |
| Test framework not detected | No config files in project | Executor checkpoints for user to specify framework |
| Validation FAIL | Generated tests have quality issues | Report issues in VALIDATION_REPORT.md for review |
</error_handling>
|
|
1
|
+
<purpose>
|
|
2
|
+
Generate test cases and test files from a ticket (GitHub Issue, Jira, Linear, plain text, or file). Extracts acceptance criteria, user stories, and edge cases from the ticket content, scans the dev repo for related source files, generates a traceability matrix mapping acceptance criteria to test cases, then spawns the executor and validator agents to produce and verify test files. Use this workflow to ensure every acceptance criterion has corresponding test coverage before a feature ships.
|
|
3
|
+
</purpose>
|
|
4
|
+
|
|
5
|
+
<required_reading>
|
|
6
|
+
- `CLAUDE.md` -- QA automation standards, test spec rules, naming conventions, quality gates, testing pyramid
|
|
7
|
+
- `agents/qaa-scanner.md` -- Scanner agent definition (repo scanning for related source files)
|
|
8
|
+
- `agents/qaa-executor.md` -- Executor agent definition (test file generation)
|
|
9
|
+
- `agents/qaa-validator.md` -- Validator agent definition (4-layer validation)
|
|
10
|
+
- `templates/test-inventory.md` -- TEST_INVENTORY.md format contract (test case structure)
|
|
11
|
+
</required_reading>
|
|
12
|
+
|
|
13
|
+
<process>
|
|
14
|
+
|
|
15
|
+
<step name="parse_arguments">
|
|
16
|
+
## Step 1: Parse Ticket Source
|
|
17
|
+
|
|
18
|
+
Parse `$ARGUMENTS` to determine the ticket source type and content.
|
|
19
|
+
|
|
20
|
+
**Supported argument formats:**
|
|
21
|
+
|
|
22
|
+
| Format | Example | Detection |
|
|
23
|
+
|--------|---------|-----------|
|
|
24
|
+
| GitHub Issue URL | `https://github.com/org/repo/issues/123` | Starts with `https://github.com` and contains `/issues/` |
|
|
25
|
+
| GitHub Issue shorthand | `org/repo#123` or `#123` | Contains `#` followed by digits |
|
|
26
|
+
| Jira URL | `https://company.atlassian.net/browse/PROJ-123` | Contains `.atlassian.net/browse/` |
|
|
27
|
+
| Linear URL | `https://linear.app/team/issue/TEAM-123` | Contains `linear.app` |
|
|
28
|
+
| File path | `./tickets/feature-spec.md` | Path exists on disk |
|
|
29
|
+
| Plain text | `"As a user I want to..."` | None of the above patterns match |
|
|
30
|
+
|
|
31
|
+
**Parsing logic:**
|
|
32
|
+
|
|
33
|
+
```bash
|
|
34
|
+
TICKET_SOURCE="$ARGUMENTS"
|
|
35
|
+
TICKET_TYPE=""
|
|
36
|
+
TICKET_CONTENT=""
|
|
37
|
+
|
|
38
|
+
# Detect source type from the argument pattern
|
|
39
|
+
if matches GitHub URL or shorthand:
|
|
40
|
+
TICKET_TYPE="github"
|
|
41
|
+
elif matches Jira URL:
|
|
42
|
+
TICKET_TYPE="jira"
|
|
43
|
+
elif matches Linear URL:
|
|
44
|
+
TICKET_TYPE="linear"
|
|
45
|
+
elif file exists at path:
|
|
46
|
+
TICKET_TYPE="file"
|
|
47
|
+
else:
|
|
48
|
+
TICKET_TYPE="text"
|
|
49
|
+
```
|
|
50
|
+
|
|
51
|
+
**If no arguments provided:**
|
|
52
|
+
|
|
53
|
+
Print error and STOP:
|
|
54
|
+
```
|
|
55
|
+
Error: No ticket source provided.
|
|
56
|
+
Usage: /qa-from-ticket <source>
|
|
57
|
+
|
|
58
|
+
Supported sources:
|
|
59
|
+
GitHub: https://github.com/org/repo/issues/123 or #123
|
|
60
|
+
Jira: https://company.atlassian.net/browse/PROJ-123
|
|
61
|
+
Linear: https://linear.app/team/issue/TEAM-123
|
|
62
|
+
File: ./path/to/ticket.md
|
|
63
|
+
Text: "As a user I want to log in so that I can access my dashboard"
|
|
64
|
+
```
|
|
65
|
+
</step>
|
|
66
|
+
|
|
67
|
+
<step name="fetch_ticket_content">
|
|
68
|
+
## Step 2: Fetch Ticket Content
|
|
69
|
+
|
|
70
|
+
Retrieve the ticket content based on the source type.
|
|
71
|
+
|
|
72
|
+
**GitHub Issue:**
|
|
73
|
+
|
|
74
|
+
```bash
|
|
75
|
+
# For full URL: extract owner, repo, issue number
|
|
76
|
+
# For shorthand #123: use current repo context
|
|
77
|
+
gh issue view {issue_number} --repo {owner/repo} --json title,body,labels,assignees,milestone
|
|
78
|
+
```
|
|
79
|
+
|
|
80
|
+
Extract from JSON response:
|
|
81
|
+
- `title` -- Issue title
|
|
82
|
+
- `body` -- Issue body (markdown)
|
|
83
|
+
- `labels` -- Issue labels (may indicate priority)
|
|
84
|
+
- `milestone` -- Milestone (may indicate release target)
|
|
85
|
+
|
|
86
|
+
**Jira / Linear URL:**
|
|
87
|
+
|
|
88
|
+
```bash
|
|
89
|
+
# Use WebFetch to retrieve the ticket page content
|
|
90
|
+
# Extract structured content from the response
|
|
91
|
+
```
|
|
92
|
+
|
|
93
|
+
If WebFetch fails or authentication is required:
|
|
94
|
+
|
|
95
|
+
```
|
|
96
|
+
CHECKPOINT:
|
|
97
|
+
type: human-action
|
|
98
|
+
blocking: "Cannot access ticket URL -- authentication required"
|
|
99
|
+
details: "Attempted to fetch: {TICKET_SOURCE}. Received authentication error."
|
|
100
|
+
awaiting: "Provide ticket content as plain text or a file path instead. Or authenticate with the service."
|
|
101
|
+
```

**File path:**

```bash
# Read the file content directly
TICKET_CONTENT=$(cat "{TICKET_SOURCE}")
```

If the file is empty or unreadable, print error: `"Error: Cannot read ticket file: {path}"` and STOP.

**Plain text:**

```bash
TICKET_CONTENT="${TICKET_SOURCE}"
```

**Print ticket summary:**

```
Ticket Source: {TICKET_TYPE}
Title: {extracted title or first line}
Content Length: {character count}
```
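
The summary fields can be derived from `TICKET_CONTENT` as in this sketch (the sample value is illustrative):

```shell
# Illustrative derivation of the summary fields from a sample ticket text.
TICKET_CONTENT='As a user I want to log in
so that I can access my dashboard'
TITLE=$(printf '%s' "$TICKET_CONTENT" | head -n 1)            # first line as title
LENGTH=$(printf '%s' "$TICKET_CONTENT" | wc -c | tr -d ' ')   # character count
```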
</step>

<step name="extract_acceptance_criteria">
## Step 3: Extract Acceptance Criteria

Parse the ticket content to identify testable acceptance criteria, user stories, edge cases, and priority.

**Extraction targets:**

| Target | What to Look For | Example |
|--------|-----------------|---------|
| Title | Issue title or first heading | "Add password reset flow" |
| User Story | "As a [role], I want to [action], so that [benefit]" pattern | "As a user, I want to reset my password via email" |
| Acceptance Criteria | Bullet lists after "Acceptance Criteria", "AC:", "Given/When/Then" patterns | "Given a valid email, When I request reset, Then I receive a reset link" |
| Edge Cases | Items mentioning: invalid, expired, duplicate, empty, boundary, error, timeout | "Handle expired reset tokens gracefully" |
| Priority | Labels, priority fields, or keywords: critical, blocker, P0, P1, P2 | "Priority: P0" |
| Affected Components | File paths, module names, feature areas mentioned | "Auth module", "src/services/auth.service.ts" |
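
The pattern matching in the table can be sketched with plain `grep` (the sample ticket body and keyword list are illustrative):

```shell
# Illustrative: count Given/When/Then criteria and edge-case keyword lines
# in a sample ticket body.
TICKET='## Acceptance Criteria
- Given a valid email, When I request reset, Then I receive a reset link
- Given an expired token, When I open the link, Then I see an error

Edge: handle duplicate reset requests'
AC_COUNT=$(printf '%s\n' "$TICKET" | grep -ic 'given.*when.*then')
EDGE_COUNT=$(printf '%s\n' "$TICKET" | grep -iEc 'invalid|expired|duplicate|empty|boundary|error|timeout')
```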

**Structured extraction output:**

```
TICKET_TITLE: "{title}"
TICKET_PRIORITY: "{P0|P1|P2}"

ACCEPTANCE_CRITERIA:
  AC-1: "{criterion text}"
  AC-2: "{criterion text}"
  ...

USER_STORIES:
  US-1: "{story text}"
  ...

EDGE_CASES:
  EC-1: "{edge case description}"
  EC-2: "{edge case description}"
  ...

KEYWORDS: ["{keyword1}", "{keyword2}", ...]
```

**If no acceptance criteria can be extracted:**

```
CHECKPOINT:
  type: human-action
  blocking: "Cannot extract acceptance criteria from ticket"
  details: "Ticket content does not contain recognizable acceptance criteria, Given/When/Then patterns, or bullet-pointed requirements. Raw content: {first 500 chars}"
  awaiting: "Provide acceptance criteria explicitly as a numbered list, or reformat the ticket."
```
</step>

<step name="scan_related_source">
## Step 4: Scan Dev Repo for Related Source Files

Search the dev repo for source files related to the ticket's feature area.

**Search strategy (using keywords from step 3):**

1. **File name search:** Glob for files matching keywords from the ticket
   - Keywords: module names, feature areas, component names mentioned in the ticket
   - Patterns: `**/*{keyword}*.*` for each keyword

2. **Content search:** Grep for function names, route paths, class names mentioned in the ticket
   - Search for acceptance-criteria-related terms: endpoints, function names, component names
   - Grep patterns: route paths (e.g., `/api/v1/password-reset`), handler names (e.g., `resetPassword`)

3. **Directory search:** Check for feature-organized directories matching the ticket's domain
   - Directories: `src/services/{feature}*`, `src/controllers/{feature}*`, `src/routes/{feature}*`
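
The first two searches can be sketched against a throwaway "repo" (all file names, contents, and the `auth` keyword are hypothetical):

```shell
# Sketch of the search strategy against a temp dir standing in for the dev repo.
repo=$(mktemp -d)
mkdir -p "$repo/src/services" "$repo/src/routes"
printf 'export function resetPassword() {}\n' > "$repo/src/services/auth.service.ts"
printf "router.post('/api/v1/password-reset', handler)\n" > "$repo/src/routes/auth.routes.ts"

keyword="auth"
# 1. File name search
name_hits=$(find "$repo" -type f -name "*${keyword}*" | wc -l | tr -d ' ')
# 2. Content search for a handler name mentioned in the ticket
content_hit=$(grep -rl 'resetPassword' "$repo")
```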

**Build related files list:**

```
RELATED_FILES:
  - path: "src/services/auth.service.ts"
    relevance: "Contains resetPassword function referenced in AC-1"
  - path: "src/routes/auth.routes.ts"
    relevance: "Contains /api/v1/password-reset endpoint from AC-2"
  - path: "src/controllers/auth.controller.ts"
    relevance: "Controller handling password reset requests"
  ...
```

**If no related files found:**

Print a warning but continue:
```
Warning: No source files found matching ticket keywords.
Keywords searched: {keywords}
Generating test cases based on ticket content only (no source-level analysis).
```
</step>

<step name="extract_locators_from_app">
## Step 5: Check Locator Registry and Extract from Live App (Optional)

Check the locator registry for existing locators and, if needed, use Playwright MCP to extract new ones from the live app.

**Step 5a: Check existing registry**

Read `.qa-output/locators/LOCATOR_REGISTRY.md` if it exists. Check whether locators for pages related to this ticket's feature already exist. If they do and no `--app-url` was provided, reuse them and skip browser extraction.

**Step 5b: When to extract from browser**
- Locators for this feature's pages are NOT in the registry, OR
- An `--app-url` argument was explicitly provided (forces re-extraction)

**When to skip entirely:**
- No app URL available, no dev server detected, AND no registry exists
- The ticket describes only backend/API functionality with no UI
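
The decision above can be sketched as a function. This is a simplification under stated assumptions -- the argument names are hypothetical stand-ins for the facts gathered in 5a/5b (registry hit for this feature, `--app-url` flag, dev server detected, UI relevance):

```shell
# Sketch of the step-5 decision: prints skip | extract | reuse.
decide_locators() {
  in_registry=$1 app_url=$2 dev_server=$3 has_ui=$4
  if [ "$has_ui" = no ]; then echo skip; return; fi       # backend-only ticket
  if [ -n "$app_url" ]; then echo extract; return; fi     # --app-url forces re-extraction
  if [ "$in_registry" = yes ]; then echo reuse; return; fi
  if [ "$dev_server" = yes ]; then echo extract; else echo skip; fi
}
```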

**Extraction process:**

1. Identify relevant pages from the ticket's acceptance criteria and affected components (from Step 3 and Step 4).

2. For each relevant page, navigate and capture:
   ```
   mcp__playwright__browser_navigate({ url: "{app_url}/{page_path}" })
   mcp__playwright__browser_snapshot()
   ```

3. If the ticket describes a user flow (e.g., "user fills form and submits"), walk through the flow:
   ```
   mcp__playwright__browser_fill_form({ ... })
   mcp__playwright__browser_click({ element: "Submit button" })
   mcp__playwright__browser_snapshot() // capture resulting page
   ```

4. From each snapshot, extract:
   - All `data-testid` attributes
   - ARIA roles with accessible names
   - Form labels and placeholders
   - Page structure and navigation elements

5. Write a per-feature locator file to `.qa-output/locators/{feature}.locators.md`:
   ```markdown
   # Locators -- {feature}

   Extracted: {date}
   App URL: {app_url}

   ## Page: {page_name} ({url})

   | Element | Locator Type | Locator Value | Tier |
   |---------|-------------|---------------|------|
   | ... | data-testid | ... | 1 |
   | ... | role + name | ... | 1 |
   | ... | label | ... | 2 |
   ```

6. Update the registry `.qa-output/locators/LOCATOR_REGISTRY.md` -- merge new locators into the central index without overwriting locators from other features.

If this step is skipped entirely, the executor will propose locators based on source code analysis and CLAUDE.md conventions.
</step>

<step name="generate_test_cases">
## Step 6: Generate Test Cases with Traceability Matrix

Map each acceptance criterion to one or more test cases, following CLAUDE.md test spec rules.

**Test case generation rules:**

1. **One or more test cases per acceptance criterion** -- Each AC must have at least one test case. Complex ACs may produce multiple test cases (happy path + error cases).

2. **One test case per edge case** -- Each extracted edge case becomes a dedicated test case.

3. **Follow naming convention:**
   - Unit tests: `UT-{MODULE}-{NNN}` (for logic-level ACs)
   - API tests: `API-{RESOURCE}-{NNN}` (for endpoint-level ACs)
   - Integration tests: `INT-{MODULE}-{NNN}` (for cross-module ACs)
   - E2E tests: `E2E-{FLOW}-{NNN}` (for user-journey ACs)

4. **Follow CLAUDE.md Test Spec Rules:**
   - Every test case MUST have: unique ID, exact target, concrete inputs, explicit expected outcome, priority
   - Expected outcomes must be concrete (no "works correctly", "handles properly")
   - Concrete inputs with actual values (no "valid data")

5. **Pyramid level assignment:**
   - Pure function or service logic -> Unit test
   - API endpoint behavior -> API test
   - Cross-module interaction -> Integration test
   - Full user journey described in ticket -> E2E test
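
The naming convention is mechanical; as a sketch (`make_test_id` is a hypothetical helper):

```shell
# Hypothetical helper applying the {PREFIX}-{MODULE}-{NNN} convention above.
make_test_id() { printf '%s-%s-%03d' "$1" "$2" "$3"; }

make_test_id UT AUTH 1    # -> UT-AUTH-001
```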

**Write TEST_CASES_FROM_TICKET.md:**

```markdown
# Test Cases from Ticket

## Ticket Info

| Field | Value |
|-------|-------|
| Source | {TICKET_SOURCE} |
| Title | {TICKET_TITLE} |
| Priority | {TICKET_PRIORITY} |
| Acceptance Criteria | {AC_COUNT} |
| Edge Cases | {EC_COUNT} |
| Test Cases Generated | {TOTAL_TEST_CASES} |

## Traceability Matrix

| Acceptance Criterion | Test Case ID | Pyramid Level | Priority |
|---------------------|--------------|---------------|----------|
| AC-1: {text} | UT-AUTH-001 | Unit | P0 |
| AC-1: {text} | API-AUTH-001 | API | P0 |
| AC-2: {text} | E2E-RESET-001 | E2E | P0 |
| EC-1: {text} | UT-AUTH-002 | Unit | P1 |
| ... | ... | ... | ... |

## Test Cases

### Unit Tests

#### UT-{MODULE}-{NNN}: {description}

| Field | Value |
|-------|-------|
| test_id | UT-{MODULE}-{NNN} |
| target | {file_path}:{function_name} |
| what_to_validate | {behavior description} |
| concrete_inputs | {actual input values} |
| mocks_needed | {dependencies to mock or "None (pure function)"} |
| expected_outcome | {exact return value, error message, or state change} |
| priority | {P0|P1|P2} |
| traces_to | AC-{N} or EC-{N} |

[... repeat for all unit tests ...]

### API Tests

[... same structure with API-specific fields ...]

### Integration Tests

[... same structure with integration-specific fields ...]

### E2E Smoke Tests

[... same structure with E2E-specific fields ...]
```
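
Coverage can be sanity-checked by comparing AC IDs in the matrix against the AC list from Step 3, as in this sketch (the matrix rows and AC list are sample values; note the substring match is a simplification):

```shell
# Sketch: flag acceptance criteria that never appear in the traceability matrix.
MATRIX='| AC-1: reset link sent | UT-AUTH-001 | Unit | P0 |
| AC-2: token expiry | E2E-RESET-001 | E2E | P0 |'
covered=$(printf '%s\n' "$MATRIX" | grep -oE 'AC-[0-9]+' | sort -u)
uncovered=""
for ac in AC-1 AC-2 AC-3; do   # the AC list would come from Step 3
  case "$covered" in *"$ac"*) ;; *) uncovered="$uncovered $ac" ;; esac
done
```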

**Set output directory:**

```bash
OUTPUT_DIR=".qa-output"
mkdir -p "${OUTPUT_DIR}"
```

Write to `{OUTPUT_DIR}/TEST_CASES_FROM_TICKET.md`.
</step>

<step name="generate_test_files">
## Step 7: Spawn Executor Agent

Build a synthetic generation plan from the test cases and spawn the executor to write test files.

**Build generation plan:**

Group test cases by feature (from ticket domain) and create task entries following the same structure the executor expects:

```markdown
# Generation Plan (from ticket)

## Summary

| Metric | Value |
|--------|-------|
| Total tasks | {N} |
| Total files | {N} |
| Feature groups | {N} |
| Test cases covered | {N} |
| Framework | {detected from project} |
| File extension | {ext from project} |

## Tasks

### Task: {feature}-unit

| Field | Value |
|-------|-------|
| task_id | {feature}-unit |
| feature_group | {feature} |
| files_to_create | tests/unit/{feature}.unit.spec.{ext} |
| test_case_ids | UT-{MODULE}-001, UT-{MODULE}-002, ... |
| depends_on | none |
| estimated_complexity | {LOW|MEDIUM|HIGH} |

[... additional tasks for API, E2E, POM, fixtures ...]
```

Write to `{OUTPUT_DIR}/GENERATION_PLAN_TICKET.md`.

**Spawn executor:**

```
Task(
  prompt="
  <objective>Generate test files from ticket-derived test cases</objective>
  <execution_context>@agents/qaa-executor.md</execution_context>
  <files_to_read>
  - {OUTPUT_DIR}/GENERATION_PLAN_TICKET.md
  - {OUTPUT_DIR}/TEST_CASES_FROM_TICKET.md
  - {OUTPUT_DIR}/locators/LOCATOR_REGISTRY.md (if exists -- accumulated real locators)
  - CLAUDE.md
  </files_to_read>
  <parameters>
  output_base: {test output directory}
  </parameters>
  "
)
```

**Handle executor return:**

Extract: `files_created`, `total_files`, `commit_count`, `test_case_count`.
</step>

<step name="validate_generated_tests">
## Step 8: Spawn Validator Agent

Validate the generated test files against CLAUDE.md standards.

```
Task(
  prompt="
  <objective>Validate generated test files across 4 layers</objective>
  <execution_context>@agents/qaa-validator.md</execution_context>
  <files_to_read>
  - CLAUDE.md
  - {OUTPUT_DIR}/GENERATION_PLAN_TICKET.md
  </files_to_read>
  <parameters>
  output_path: {OUTPUT_DIR}/VALIDATION_REPORT.md
  </parameters>
  "
)
```

**Handle validator return:**

Extract: `overall_status`, `confidence`, `issues_found`, `issues_fixed`, `unresolved_count`.
</step>

<step name="print_summary">
## Step 9: Print Summary

Print a comprehensive summary showing traceability from ticket to tests.

```
=== Test Generation from Ticket Complete ===

Ticket: {TICKET_TITLE}
Source: {TICKET_TYPE} ({TICKET_SOURCE})

Acceptance Criteria Coverage:
  Total ACs: {AC_COUNT}
  Covered: {COVERED_COUNT}
  Uncovered: {UNCOVERED_COUNT}

Edge Cases:
  Extracted: {EC_COUNT}
  With tests: {EC_TESTED_COUNT}

Test Cases Generated:
  Unit Tests: {unit_count}
  Integration Tests: {integration_count}
  API Tests: {api_count}
  E2E Tests: {e2e_count}
  --------------------------
  Total: {total_count}

Files Created: {file_count}

Validation:
  Status: {PASS|PASS_WITH_WARNINGS|FAIL}
  Confidence: {HIGH|MEDIUM|LOW}

Artifacts:
  - {OUTPUT_DIR}/TEST_CASES_FROM_TICKET.md (traceability matrix)
  - {OUTPUT_DIR}/GENERATION_PLAN_TICKET.md (generation plan)
  - {OUTPUT_DIR}/VALIDATION_REPORT.md (validation results)
  - {test file paths...} (generated test files)
===========================================
```
</step>

</process>

<output>
This workflow generates tests that map directly to ticket acceptance criteria.

**Artifacts produced:**

| Artifact | When Produced | Description |
|----------|---------------|-------------|
| TEST_CASES_FROM_TICKET.md | Always | Test cases with traceability matrix mapping ACs to test IDs |
| GENERATION_PLAN_TICKET.md | Always | Synthetic generation plan for the executor agent |
| Test files (unit, API, E2E, POM, fixtures) | Always | Actual test code following CLAUDE.md standards |
| VALIDATION_REPORT.md | Always | 4-layer validation of generated test files |

**Traceability guarantee:** Every acceptance criterion in the ticket maps to at least one test case. The traceability matrix in TEST_CASES_FROM_TICKET.md documents this mapping with `traces_to` fields.
</output>

<error_handling>
| Error | Cause | Action |
|-------|-------|--------|
| No ticket source provided | Missing argument | Print usage help, STOP |
| Cannot access ticket URL | Auth required or URL invalid | Checkpoint: ask user for content as text or file |
| Cannot read ticket file | File does not exist or is empty | Print error with path, STOP |
| No acceptance criteria found | Ticket lacks structured requirements | Checkpoint: ask user to provide ACs explicitly |
| No related source files found | Keywords do not match any files | Warning only -- continue with ticket-only analysis |
| Test framework not detected | No config files in project | Executor checkpoints for user to specify framework |
| Validation FAIL | Generated tests have quality issues | Report issues in VALIDATION_REPORT.md for review |
</error_handling>