qaa-agent 1.8.6 → 1.9.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +29 -0
- package/CLAUDE.md +18 -14
- package/README.md +3 -1
- package/agents/qaa-bug-detective.md +57 -0
- package/agents/qaa-e2e-runner.md +51 -0
- package/agents/qaa-executor.md +57 -0
- package/agents/qaa-project-researcher.md +23 -3
- package/commands/qa-create-test.md +8 -2
- package/commands/qa-fix.md +9 -3
- package/commands/qa-research.md +157 -0
- package/commands/qa-test-report.md +219 -0
- package/package.json +1 -1
- package/workflows/qa-start.md +70 -0
package/CHANGELOG.md
CHANGED

@@ -3,6 +3,35 @@

 All notable changes to QAA (QA Automation Agent) are documented here.

+## [1.9.1] - 2026-04-27
+
+### Added
+
+- **`/qa-test-report` command** — generates a per-ticket QA execution summary and appends it to the Azure DevOps work item's `Custom.QATestCasesReport` field.
+  - Resolves the work item and its linked test cases (TestedBy-Forward relations)
+  - Pulls test case execution status via REST `/_apis/test/points` (using `ADO_MCP_AUTH_TOKEN` env var as PAT, Basic auth — same pattern as `/qa-create-test --ado`), or falls back to a manual prompt when no run result exists or the token is not set
+  - Renders a markdown report in chat (for review) and an HTML report appended to the ADO field — preserving prior content with a blank-line separator (no local file is written)
+  - Smoke-tested end-to-end (manual mode, all passed) — field write and ADO render verified
+
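The PAT handling this entry describes (empty username, token as password, Basic auth) can be sketched roughly as below. The helper name and parameter list are hypothetical; the actual `/qa-test-report` implementation is not included in this diff.

```typescript
// Sketch of the request setup described above: the ADO Test Points REST API,
// with ADO_MCP_AUTH_TOKEN used as a PAT via Basic auth. Azure DevOps PATs use
// Basic auth with an empty username, i.e. base64(":" + PAT).
function buildTestPointsRequest(
  org: string,
  project: string,
  planId: number,
  suiteId: number,
  pat: string
): { url: string; headers: Record<string, string> } {
  const token = Buffer.from(`:${pat}`).toString("base64");
  return {
    // Points - List endpoint; api-version value is an assumption here.
    url: `https://dev.azure.com/${org}/${project}/_apis/test/Plans/${planId}/Suites/${suiteId}/points?api-version=7.1`,
    headers: { Authorization: `Basic ${token}` },
  };
}

const req = buildTestPointsRequest("my-org", "my-project", 12, 34, "secret");
console.log(req.url);
```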
+## [1.9.0] - 2026-04-24
+
+### Added
+
+- **`/qa-research` command** — new standalone command that invokes the `qaa-project-researcher` agent to investigate the testing ecosystem for the current project. Supports 4 modes via `--focus` flag: `stack-testing` (default, full stack analysis), `framework-deep-dive` (deep dive into detected framework), `api-testing` (API testing strategy), `e2e-strategy` (E2E strategy). Produces research documents in `.qa-output/research/` consumed by all downstream agents.
+- **Research stage in `/qa-start` pipeline** — the full pipeline now runs `scan → research → codebase-map → analyze → ...`. The researcher agent runs after the scanner, using Context7 MCP as the primary source for framework documentation. Research is non-blocking: if it fails, downstream agents fall back to querying Context7 directly.
+- **Context7 MCP mandatory in 3 agents** — `qaa-executor`, `qaa-e2e-runner`, and `qaa-bug-detective` now have a non-negotiable "Framework Verification via Context7" section. Before generating code, fixing locators, or auto-fixing test errors, these agents MUST query Context7 for the framework's current API and syntax. This applies to ALL frameworks including well-known ones (Playwright, Cypress, Jest) — training data may be outdated.
+- **Research documents propagated across commands** — `/qa-create-test`, `/qa-fix`, and `/qa-start` now pass research documents (`FRAMEWORK_CAPABILITIES.md`, `TESTING_STACK.md`, `E2E_STRATEGY.md`, `API_TESTING_STRATEGY.md`) to executor, e2e-runner, and bug-detective agents via `files_to_read`. Documents are optional — if they don't exist, agents query Context7 directly.
+- **Context7 verification in quality gate checklists** — executor, e2e-runner, and bug-detective now include Context7-specific items in their verification checklists (e.g., "Context7 was queried for selector API before generating POM files").
+
+### Changed
+
+- **`qaa-project-researcher` agent updated** — Context7 MCP tools (`resolve-library-id`, `query-docs`) now declared explicitly in frontmatter. Output path changed from `specs/research/` to `.qa-output/research/`. Added version awareness section: detects project's current framework version, generates all syntax for that version, and includes an informational note about newer versions (without recommending upgrade).
+- **Pipeline diagram updated** — all pipeline references across CLAUDE.md, README.md, and workflows now include the `research` stage: `scan → research → codebase-map → analyze → ...`.
+- **Module boundaries and dependency ordering updated** — CLAUDE.md now includes `qa-project-researcher` in the module boundaries table, stage transitions table, dependency ordering, and read-before-write rules.
+
+### Fixed
+
+- **`qaa-project-researcher` was an orphaned agent** — the agent existed since v1.2.0 but was never invoked by any command, workflow, or the pipeline orchestrator. The `/qa-research` command documented in `docs/COMMANDS.md` never had a corresponding file in `/commands/`. No downstream agent read its output files. This release connects the researcher to the pipeline and creates the missing command.
+
 ## [1.8.6] - 2026-04-20

 ### Added
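The "non-blocking research" behavior described in the 1.9.0 entry (pipeline continues on failure, downstream agents fall back to Context7) could be sketched like this. The function and context shape are illustrative assumptions, not the package's actual orchestrator code, and the real stage is presumably asynchronous.

```typescript
// Hypothetical sketch: a research stage that never blocks the pipeline.
// On failure (or empty output), downstream agents are told to query
// Context7 directly instead of reading research documents.
interface PipelineContext {
  researchDocs: string[];       // paths under .qa-output/research/
  fallbackToContext7: boolean;  // downstream agents query Context7 directly
}

function researchStage(runResearcher: () => string[]): PipelineContext {
  try {
    const docs = runResearcher();
    return { researchDocs: docs, fallbackToContext7: docs.length === 0 };
  } catch {
    // Non-blocking: swallow the failure and signal the fallback path.
    return { researchDocs: [], fallbackToContext7: true };
  }
}
```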
package/CLAUDE.md
CHANGED

@@ -197,7 +197,7 @@ The QA automation system runs agents in a defined pipeline. Each stage produces
 ### Pipeline Stages

 ```
-scan -> codebase-map -> analyze -> [testid-inject if frontend] -> plan -> generate -> validate -> [e2e-runner if E2E tests] -> [bug-detective if failures] -> deliver
+scan -> research -> codebase-map -> analyze -> [testid-inject if frontend] -> plan -> generate -> validate -> [e2e-runner if E2E tests] -> [bug-detective if failures] -> deliver
 ```

 ### Workflow Options

@@ -205,7 +205,7 @@ scan -> codebase-map -> analyze -> [testid-inject if frontend] -> plan -> genera
 **Option 1: Dev-Only Repo (no existing QA repo)**
 Full pipeline from scratch:
 ```
-scan -> codebase-map -> analyze -> [testid-inject if frontend] -> plan -> generate -> validate -> [e2e-runner if E2E tests] -> [bug-detective if failures] -> deliver
+scan -> research -> codebase-map -> analyze -> [testid-inject if frontend] -> plan -> generate -> validate -> [e2e-runner if E2E tests] -> [bug-detective if failures] -> deliver
 ```
 Produces: SCAN_MANIFEST.md -> 8 codebase map documents -> QA_ANALYSIS.md + TEST_INVENTORY.md + QA_REPO_BLUEPRINT.md -> [TESTID_AUDIT_REPORT.md] -> generation plan -> test files + POMs + fixtures + configs -> VALIDATION_REPORT.md -> [E2E_RUN_REPORT.md] -> branch + PR

@@ -227,7 +227,8 @@ Produces: SCAN_MANIFEST.md (both repos) -> GAP_ANALYSIS.md (thin areas only) ->
 | From | To | Condition |
 |------|----|-----------|
-| scan |
+| scan | research | SCAN_MANIFEST.md exists with > 0 testable surfaces |
+| research | codebase-map | Research complete (non-blocking — continues even if no research output) |
 | codebase-map | analyze | At least 4 of 8 codebase map documents exist |
 | analyze | testid-inject | QA_ANALYSIS.md exists AND frontend components detected |
 | analyze | plan | QA_ANALYSIS.md + TEST_INVENTORY.md exist (skip testid-inject if no frontend) |

@@ -249,6 +250,7 @@ Each agent owns specific artifacts. No agent may produce artifacts assigned to a
 | Agent | Reads | Produces | Template |
 |-------|-------|----------|----------|
 | qa-scanner | repo source files, package.json, file tree | SCAN_MANIFEST.md | templates/scan-manifest.md |
+| qa-project-researcher | SCAN_MANIFEST.md, repo source files, Context7 MCP, WebSearch, WebFetch | TESTING_STACK.md, FRAMEWORK_CAPABILITIES.md, API_TESTING_STRATEGY.md, E2E_STRATEGY.md | -- |
 | qa-codebase-mapper | SCAN_MANIFEST.md, repo source files, CLAUDE.md | TESTABILITY.md, TEST_SURFACE.md, RISK_MAP.md, CRITICAL_PATHS.md, CODE_PATTERNS.md, API_CONTRACTS.md, TEST_ASSESSMENT.md, COVERAGE_GAPS.md | -- |
 | qa-analyzer | SCAN_MANIFEST.md, codebase map documents, CLAUDE.md | QA_ANALYSIS.md, TEST_INVENTORY.md, QA_REPO_BLUEPRINT.md (Option 1) or GAP_ANALYSIS.md (Option 2/3) | templates/qa-analysis.md, templates/test-inventory.md, templates/qa-repo-blueprint.md, templates/gap-analysis.md |
 | qa-planner | TEST_INVENTORY.md, QA_ANALYSIS.md, codebase map documents | Generation plan (internal) | -- |

@@ -257,7 +259,6 @@ Each agent owns specific artifacts. No agent may produce artifacts assigned to a
 | qa-e2e-runner | generated E2E test files, POM files, CLAUDE.md, live application | E2E_RUN_REPORT.md, fixed test/POM files with real locators | -- |
 | qa-testid-injector | repo source files, SCAN_MANIFEST.md, CLAUDE.md | TESTID_AUDIT_REPORT.md, modified source files with data-testid attributes | templates/scan-manifest.md, templates/testid-audit-report.md |
 | qa-bug-detective | test execution results, test source files, CLAUDE.md | FAILURE_CLASSIFICATION_REPORT.md | templates/failure-classification.md |
-| qa-project-researcher | SCAN_MANIFEST.md, repo source files | TESTING_STACK.md, FRAMEWORK_CAPABILITIES.md, API_TESTING_STRATEGY.md, E2E_STRATEGY.md | -- |

 **Rule:** An agent MUST NOT produce artifacts assigned to another agent.

@@ -398,13 +399,14 @@ Each agent operates on the same branch. No worktree splits are needed for this s

 Respect stage transitions from the Agent Pipeline section:
 1. qa-scanner runs first (no dependencies)
-2. qa-
-3. qa-
-4. qa-
-5. qa-
-6. qa-
-7. qa-
-8. qa-
+2. qa-project-researcher runs after scanner (depends on SCAN_MANIFEST.md, uses Context7 MCP — non-blocking, pipeline continues if it fails)
+3. qa-codebase-mapper runs after researcher (depends on SCAN_MANIFEST.md, spawns 4 parallel focus agents)
+4. qa-analyzer and qa-testid-injector run after codebase-map (both depend on SCAN_MANIFEST.md + codebase map documents + research documents if available)
+5. qa-planner runs after analyzer (depends on QA_ANALYSIS.md + TEST_INVENTORY.md + codebase map documents)
+6. qa-executor runs after planner (depends on generation plan + codebase map documents + research documents if available + Context7 MCP)
+7. qa-validator runs after executor (depends on generated test files)
+8. qa-e2e-runner runs after validator (depends on E2E test files + live application + Context7 MCP)
+9. qa-bug-detective runs after e2e-runner or validator if failures (depends on test results + Context7 MCP)

 ### Auto-Advance Mode

@@ -428,12 +430,14 @@ Every agent MUST read its required inputs before producing any output. Failure t
 | Agent | MUST Read Before Producing Output |
 |-------|-----------------------------------|
 | qa-scanner | package.json (or equivalent), folder tree structure, all source file extensions to detect framework and language |
-| qa-
+| qa-project-researcher | SCAN_MANIFEST.md, repo source files, MY_PREFERENCES.md. Uses Context7 MCP as primary source. |
+| qa-analyzer | SCAN_MANIFEST.md (complete, verified), CLAUDE.md (all QA standards sections), research documents (if available) |
 | qa-planner | TEST_INVENTORY.md (all test cases), QA_ANALYSIS.md (architecture and risk context) |
-| qa-executor | TEST_INVENTORY.md (test cases to implement), CLAUDE.md (POM rules, locator hierarchy, assertion rules, naming conventions, quality gates) |
+| qa-executor | TEST_INVENTORY.md (test cases to implement), CLAUDE.md (POM rules, locator hierarchy, assertion rules, naming conventions, quality gates), research documents (if available). Must query Context7 MCP before generating code. |
 | qa-validator | CLAUDE.md (quality gates, locator tiers, assertion rules), all generated test files to validate |
 | qa-testid-injector | SCAN_MANIFEST.md (component file list), CLAUDE.md (data-testid Convention section for naming rules) |
-| qa-bug-detective | Test execution output (stdout/stderr, exit codes), test source files (to read the failing code), CLAUDE.md (classification rules) |
+| qa-bug-detective | Test execution output (stdout/stderr, exit codes), test source files (to read the failing code), CLAUDE.md (classification rules), research documents (if available). Must query Context7 MCP before auto-fixing. |
+| qa-e2e-runner | Generated E2E test files, POM files, CLAUDE.md, live application, research documents (if available). Must query Context7 MCP before fixing locators. |

 ### Handoff Patterns
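The stage-transition conditions added to CLAUDE.md are concrete enough to express as code. As a rough illustration (hypothetical helper, not the orchestrator's actual implementation), the `codebase-map -> analyze` gate, which requires at least 4 of the 8 codebase map documents, might look like:

```typescript
// The 8 codebase map documents listed in the module boundaries table.
const CODEBASE_DOCS = [
  "TESTABILITY.md", "TEST_SURFACE.md", "RISK_MAP.md", "CRITICAL_PATHS.md",
  "CODE_PATTERNS.md", "API_CONTRACTS.md", "TEST_ASSESSMENT.md", "COVERAGE_GAPS.md",
];

// Gate check for codebase-map -> analyze: at least 4 of 8 documents exist.
// Takes an existence predicate so the check is decoupled from the filesystem.
function canAdvanceToAnalyze(exists: (doc: string) => boolean): boolean {
  return CODEBASE_DOCS.filter(exists).length >= 4;
}
```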
package/README.md
CHANGED

@@ -6,7 +6,7 @@
 Multi-agent QA pipeline for [Claude Code](https://docs.anthropic.com/en/docs/claude-code). Analyzes any codebase, generates a complete test suite following industry standards, validates everything, and delivers the result as a draft pull request.

 ```
-scan → map → analyze → plan → generate → validate → deliver
+scan → research → map → analyze → plan → generate → validate → deliver
 ```

 ---

@@ -25,6 +25,7 @@ QAA runs a pipeline of 12 specialized AI agents, each responsible for one stage:
 | Stage | What happens | Output |
 |-------|-------------|--------|
 | **Scan** | Detects framework, language, testable surfaces | `SCAN_MANIFEST.md` |
+| **Research** | Investigates testing ecosystem via Context7 MCP and official docs | `TESTING_STACK.md`, `FRAMEWORK_CAPABILITIES.md` |
 | **Map** | Deep-scans codebase with 4 parallel agents (testability, risk, patterns, existing tests) | 8 codebase documents |
 | **Analyze** | Produces risk assessment, test inventory, testing pyramid | `QA_ANALYSIS.md`, `TEST_INVENTORY.md` |
 | **Plan** | Groups test cases by feature, assigns to files, resolves dependencies | `GENERATION_PLAN.md` |

@@ -142,6 +143,7 @@ Runs the full pipeline end-to-end: scan, map, analyze, plan, generate, validate,
 | Command | Purpose |
 |---------|---------|
 | `/qa-start` | Full pipeline end-to-end (scan through PR) |
+| `/qa-research` | Research testing ecosystem via Context7 MCP |
 | `/qa-map` | Deep codebase analysis with 4 parallel agents |
 | `/qa-create-test <feature>` | Generate tests for a specific feature |
 | `/qa-fix [path]` | Diagnose and fix broken tests |
package/agents/qaa-bug-detective.md
CHANGED

@@ -26,9 +26,59 @@ Read ALL of the following files BEFORE classifying any failures. Do NOT skip.

 - **~/.claude/qaa/MY_PREFERENCES.md** (optional -- read if exists). User's personal QA preferences saved by the qa-learner skill. If a preference conflicts with CLAUDE.md, the preference wins (it is a user override). Check for rules about: framework choices, assertion style, language preferences.

+- **Codebase map documents** (optional -- read if they exist in `.qa-output/codebase/`):
+  - **CODE_PATTERNS.md** -- Naming conventions, import patterns
+  - **API_CONTRACTS.md** -- API shapes for diagnosing API test failures
+  - **TEST_SURFACE.md** -- Function signatures for diagnosing unit test failures
+  - **TESTABILITY.md** -- Mock boundaries for diagnosing mock-related failures
+
+- **Research documents** (optional -- read if they exist in `.qa-output/research/`):
+  - **FRAMEWORK_CAPABILITIES.md** -- Verified framework API, selector syntax, assertion patterns. Critical for writing correct auto-fixes.
+  - **TESTING_STACK.md** -- Recommended stack configuration. Useful for diagnosing configuration-related failures.
+
+  If these files exist, use them as the primary source for framework-specific syntax when auto-fixing.
+
 Note: Read these files in full. Extract the decision tree, evidence field requirements, confidence level definitions, and auto-fix eligibility rules. These define your classification contract and output format.
 </required_reading>

+<context7_verification>
+
+## Non-negotiable: Framework Verification via Context7 Before Auto-Fixing
+
+**BEFORE auto-fixing any TEST CODE ERROR**, the bug-detective MUST verify the correct fix syntax using Context7 MCP. An auto-fix that uses incorrect syntax (wrong selector engine, wrong API method, wrong import path) is worse than no fix at all — it introduces a new TEST CODE ERROR.
+
+### When to query Context7
+
+1. **When detecting an unfamiliar framework** — if the test files use a framework you haven't seen in the research documents (e.g., Robot Framework, Selenium WebDriver, TestCafe), query Context7 before classifying or fixing:
+   ```
+   mcp__context7__resolve-library-id({ libraryName: "{framework-name}" })
+   mcp__context7__query-docs({ libraryId: "{resolved-id}", query: "selector syntax locator API" })
+   ```
+
+2. **Before writing any auto-fix that changes selectors or locators** — verify the correct syntax for the specific framework:
+   ```
+   mcp__context7__query-docs({ libraryId: "{resolved-id}", query: "{specific selector pattern}" })
+   ```
+
+3. **Before writing any auto-fix that changes assertion syntax** — verify the correct assertion API:
+   ```
+   mcp__context7__query-docs({ libraryId: "{resolved-id}", query: "assertion API expect" })
+   ```
+
+4. **When diagnosing failures that might be caused by framework API changes** — a test that used to pass but now fails may be using a deprecated API. Query Context7 for the current API.
+
+### Auto-fix validation rule
+
+Every auto-fix MUST have its syntax verified against Context7 or research documents before being applied. If Context7 is unavailable and no research documents cover the framework, downgrade the fix confidence to MEDIUM (which means it will be flagged for review instead of auto-applied).
+
+### If Context7 is unavailable
+
+If Context7 MCP is not connected or `resolve-library-id` fails:
+1. Use WebFetch to access official documentation
+2. Flag in MCP evidence file: `context7_available: false, fallback: webfetch`
+3. If neither source can verify the fix syntax, do NOT auto-fix — classify as TEST CODE ERROR but set confidence to MEDIUM so it gets flagged for user review instead of auto-applied
+
+</context7_verification>
+
 <process>

 <step name="read_inputs" priority="first">

@@ -509,6 +559,13 @@ Before considering the classification complete, verify ALL of the following.
 - [ ] Recommendations are grouped by category and specific to the failures found (not generic advice)
 - [ ] INCONCLUSIVE entries (if any) explain what information is missing

+**Context7 verification checks:**
+
+- [ ] Context7 was queried for the framework's syntax before writing any auto-fix that changes selectors or assertions
+- [ ] If research documents exist (`.qa-output/research/`), FRAMEWORK_CAPABILITIES.md was read before auto-fixing
+- [ ] If the test framework is not covered by research documents, Context7 was queried for it
+- [ ] No auto-fix was applied using unverified syntax (all fix syntax confirmed via Context7, research docs, or official docs)
+
 **Additional detective-specific checks:**

 - [ ] Test suite was actually executed (not static analysis) -- real test runner output captured with stdout, stderr, and exit code
package/agents/qaa-e2e-runner.md
CHANGED

@@ -27,8 +27,57 @@ Read ALL of the following files BEFORE running any tests. Do NOT skip.
 - **Codebase map documents** (optional -- read if they exist in `.qa-output/codebase/`):
   - **CODE_PATTERNS.md** -- Naming conventions to match when fixing test code
   - **TEST_SURFACE.md** -- Testable entry points for reference
+
+- **Research documents** (optional -- read if they exist in `.qa-output/research/`):
+  - **FRAMEWORK_CAPABILITIES.md** -- Verified framework API, selector syntax, patterns. Use as primary reference for correct syntax when fixing locators and assertions.
+  - **E2E_STRATEGY.md** -- E2E patterns, POM patterns, selector strategies for this project's stack.
+
+  If these files exist, use them as the primary source for framework-specific syntax when fixing code.
+
+- **Locator Registry** (optional -- read if it exists):
+  - **`.qa-output/locators/LOCATOR_REGISTRY.md`** -- Central index of all locators extracted from the live app.
+  - **`.qa-output/locators/{feature}.locators.md`** -- Per-feature locator files.
 </required_reading>

+<context7_verification>
+
+## Non-negotiable: Framework Verification via Context7
+
+**BEFORE fixing any locator or assertion**, the e2e-runner MUST verify the correct syntax using Context7 MCP. This is critical when the test framework is not standard Playwright JS/TS (e.g., Robot Framework, Cypress, Selenium, pytest).
+
+### When to query Context7
+
+1. **At the start of the run** (once per framework detected):
+   - Detect the framework from test file imports and config (Playwright, Cypress, Robot Framework, etc.)
+   - Query Context7 for the framework's selector/locator syntax:
+   ```
+   mcp__context7__resolve-library-id({ libraryName: "{framework-name}" })
+   mcp__context7__query-docs({ libraryId: "{resolved-id}", query: "selector syntax locator API" })
+   ```
+
+2. **When fixing locators** — before rewriting a locator, verify the correct syntax for the framework:
+   - Playwright JS/TS: `page.getByTestId()`, `page.getByRole()`, `page.locator()`
+   - Cypress: `cy.get('[data-cy="..."]')`, `cy.findByRole()`
+   - Robot Framework Browser: `Get Element`, `Click`, selectors use `id=`, `css=`, `text=` engines
+   - Other frameworks: query Context7 first, do NOT guess
+
+3. **When the framework is unfamiliar** — if the test files use a framework you haven't queried yet, STOP and query Context7 before making any changes.
+
+### Priority order for syntax decisions
+
+1. **Context7 query result** — always current, most authoritative
+2. **Research documents** (`.qa-output/research/FRAMEWORK_CAPABILITIES.md`) — verified
+3. **CLAUDE.md examples** — general patterns
+4. **Training data** — last resort
+
+### If Context7 is unavailable
+
+If Context7 MCP is not connected or `resolve-library-id` fails:
+1. Use WebFetch to access official documentation
+2. Flag in MCP evidence file: `context7_available: false, fallback: webfetch`
+3. If neither Context7 nor WebFetch can resolve the framework syntax, do NOT guess — flag the fix as INCONCLUSIVE and report to user
+
+</context7_verification>
+
 <tools>
 This agent uses the Playwright MCP browser tools for all browser interaction:

@@ -449,6 +498,8 @@ E2E runner is complete when:
 - [ ] Tests were executed against the live app
 - [ ] Failures were diagnosed using browser tools (snapshot, screenshot, evaluate)
 - [ ] Fixable issues (locators, assertions) were auto-fixed (up to 5 loops)
+- [ ] Context7 was queried for the framework's selector syntax before fixing any locators
+- [ ] If research documents exist (`.qa-output/research/`), FRAMEWORK_CAPABILITIES.md was read
 - [ ] Application bugs were classified with evidence (not auto-fixed)
 - [ ] E2E_RUN_REPORT.md was written with full results
 - [ ] Locator registry updated with all real locators discovered during execution (`.qa-output/locators/`)
package/agents/qaa-executor.md
CHANGED

@@ -60,8 +60,58 @@ Read ALL of the following files BEFORE producing any output. The executor's code
 If these files exist, they enable the executor to generate higher-quality tests that match the project's actual code patterns and API shapes.

 Note: The executor MUST read CLAUDE.md POM rules and locator tiers before writing any page object or test file. These rules are non-negotiable and must be applied to every generated file.
+
+- **Research documents** (optional -- read if they exist in `.qa-output/research/`):
+  - **TESTING_STACK.md** -- Recommended test framework, assertion libraries, mock strategies. Verified against current docs via Context7.
+  - **FRAMEWORK_CAPABILITIES.md** -- Full capabilities of the detected test framework: API, patterns, pitfalls, selector syntax. This is the most important research file for the executor.
+  - **API_TESTING_STRATEGY.md** -- API testing patterns, contract testing, auth testing.
+  - **E2E_STRATEGY.md** -- E2E framework patterns, POM patterns, selector strategies.
+
+  If these files exist, use them as the primary source for framework-specific syntax and patterns. They contain verified, up-to-date information from Context7 and official docs.
+
 </required_reading>

+<context7_verification>
+
+## Non-negotiable: Framework Verification via Context7
+
+**BEFORE generating any test file, POM, or fixture**, the executor MUST verify the framework's current API and syntax using Context7 MCP. This applies to ALL frameworks — including Playwright, Cypress, Jest, and other "well-known" frameworks. Training data may be outdated; Context7 provides current documentation.
+
+### When to query Context7
+
+1. **At the start of generation** (once per framework detected):
+   ```
+   mcp__context7__resolve-library-id({ libraryName: "{framework-name}" })
+   mcp__context7__query-docs({ libraryId: "{resolved-id}", query: "{framework} selector syntax and locator API" })
+   mcp__context7__query-docs({ libraryId: "{resolved-id}", query: "{framework} assertion API" })
+   mcp__context7__query-docs({ libraryId: "{resolved-id}", query: "{framework} configuration and setup" })
+   ```
+
+2. **When generating for a framework not covered by research documents** — if the user requests a framework (e.g., Robot Framework, Selenium, pytest) and `.qa-output/research/FRAMEWORK_CAPABILITIES.md` either does not exist or covers a different framework:
+   ```
+   mcp__context7__resolve-library-id({ libraryName: "{new-framework}" })
+   mcp__context7__query-docs({ libraryId: "{resolved-id}", query: "getting started setup imports" })
+   mcp__context7__query-docs({ libraryId: "{resolved-id}", query: "selector syntax locator API" })
+   mcp__context7__query-docs({ libraryId: "{resolved-id}", query: "assertion API expect" })
+   ```
+
+3. **When writing syntax you are not 100% certain about** — if you hesitate on an import path, method name, or API signature, query Context7 before writing it. Do NOT guess.
+
+### Priority order for framework information
+
+1. **Context7 query result** — most authoritative, always current
+2. **Research documents** (`.qa-output/research/`) — verified but may not cover all details
+3. **CLAUDE.md examples** — general patterns, may be outdated for specific framework versions
+4. **Training data** — last resort, flag as LOW confidence if used alone
+
+### If Context7 is unavailable
+
+If the Context7 MCP is not connected or `resolve-library-id` fails for the requested framework:
+1. Use WebFetch to access official documentation directly
+2. Flag in MCP evidence file: `context7_available: false, fallback: webfetch`
+3. Continue with WebFetch results, but mark generated code as MEDIUM confidence
+
+</context7_verification>
+
 <process>

 <step name="read_inputs" priority="first">

@@ -658,6 +708,13 @@ Before considering the executor's work complete, verify ALL of the following.
 - [ ] Priority assigned to every test case (P0, P1, or P2)
 - [ ] Framework matches what the project already uses

+**Context7 verification checks:**
+
+- [ ] Context7 was queried for the framework's selector/locator API before generating POM files
+- [ ] Context7 was queried for the framework's assertion API before generating test specs
+- [ ] If research documents exist (`.qa-output/research/`), they were read before generation
+- [ ] If the requested framework differs from what research documents cover, Context7 was queried for the new framework
+
 **Additional executor-specific checks:**

 - [ ] All planned files exist on disk (every file_path from generation plan verified)
package/agents/qaa-project-researcher.md
CHANGED

@@ -1,7 +1,7 @@
 ---
 name: qaa-project-researcher
-description: Researches testing ecosystem for a project's stack. Investigates framework capabilities, best practices, and testing patterns. Produces research files consumed by
-tools: Read, Write, Bash, Grep, Glob, WebSearch, WebFetch
+description: Researches testing ecosystem for a project's stack. Investigates framework capabilities, best practices, and testing patterns using Context7 MCP as primary source. Produces research files consumed by all downstream QA agents.
+tools: Read, Write, Bash, Grep, Glob, WebSearch, WebFetch, mcp__context7__resolve-library-id, mcp__context7__query-docs
 color: cyan
 ---

@@ -191,7 +191,7 @@ Answer these for every project (depth varies by mode):
 <output_formats>

-All output files are written to the path specified by the orchestrator (typically
+All output files are written to the path specified by the orchestrator (typically `.qa-output/research/`). If no path is specified, write to `.qa-output/research/`.

 **Every output file follows this common structure:**

@@ -295,6 +295,26 @@ Return one of these to the orchestrator:
 </structured_returns>

+<version_awareness>
+
+## Version Detection and Reporting
+
+**Always detect the version the project currently uses** from `package.json`, `requirements.txt`, `go.mod`, lock files, or config files. Generate all research and syntax examples targeting the version in use — never assume the latest version.
+
+**Informational note about newer versions:** At the end of each output file, include a section:
+
+```markdown
+## Version Note (Informational)
+
+- **Project version:** {framework} {version detected from project}
+- **Latest stable version:** {latest version from Context7 or official docs}
+- **Notable changes since project version:** {brief list of relevant changes, if any}
+```
+
+This is informational only — do NOT recommend upgrading. The user decides if and when to upgrade. All syntax, examples, and patterns in the research must target the version the project currently uses.
+
+</version_awareness>
+
 <success_criteria>

 Research is complete when:
package/commands/qa-create-test.md
CHANGED

@@ -137,7 +137,7 @@ App URL: {url or "auto-detect"}

 Task(
 prompt="
-<objective>Generate test files for the specified feature following CLAUDE.md standards, using codebase map for context
+<objective>Generate test files for the specified feature following CLAUDE.md standards, using codebase map and research documents for context. Query Context7 MCP to verify framework syntax before generating code.</objective>
 <execution_context>@agents/qaa-executor.md</execution_context>
 <files_to_read>
 - CLAUDE.md

@@ -148,6 +148,10 @@ Task(
 - .qa-output/codebase/API_CONTRACTS.md (if exists)
 - .qa-output/codebase/TEST_SURFACE.md (if exists)
 - .qa-output/codebase/TESTABILITY.md (if exists)
+- .qa-output/research/FRAMEWORK_CAPABILITIES.md (if exists)
+- .qa-output/research/E2E_STRATEGY.md (if exists)
+- .qa-output/research/API_TESTING_STRATEGY.md (if exists)
+- .qa-output/research/TESTING_STACK.md (if exists)
 </files_to_read>
 <parameters>
 user_input: $ARGUMENTS

@@ -162,12 +166,14 @@ Task(

 Task(
 prompt="
-<objective>Run generated E2E tests against live application, capture real locators, fix mismatches, loop until pass
+<objective>Run generated E2E tests against live application, capture real locators, fix mismatches, loop until pass. Query Context7 MCP to verify framework selector syntax before fixing locators.</objective>
 <execution_context>@agents/qaa-e2e-runner.md</execution_context>
 <files_to_read>
 - CLAUDE.md
 - ~/.claude/qaa/MY_PREFERENCES.md (if exists)
 - .qa-output/locators/LOCATOR_REGISTRY.md (if exists)
+- .qa-output/research/FRAMEWORK_CAPABILITIES.md (if exists)
+- .qa-output/research/E2E_STRATEGY.md (if exists)
 - {generated E2E test files from executor return}
 - {generated POM files from executor return}
 </files_to_read>
package/commands/qa-fix.md
CHANGED

@@ -331,12 +331,14 @@ If `--run` flag is also present and E2E test files exist, invoke E2E runner afte

 Task(
 prompt="
-<objective>Run E2E tests against live application, capture real locators, fix mismatches, loop until pass
+<objective>Run E2E tests against live application, capture real locators, fix mismatches, loop until pass. Query Context7 MCP to verify framework selector syntax before fixing locators.</objective>
 <execution_context>@agents/qaa-e2e-runner.md</execution_context>
 <files_to_read>
 - CLAUDE.md
 - ~/.claude/qaa/MY_PREFERENCES.md (if exists)
 - .qa-output/locators/LOCATOR_REGISTRY.md (if exists)
+- .qa-output/research/FRAMEWORK_CAPABILITIES.md (if exists)
+- .qa-output/research/E2E_STRATEGY.md (if exists)
 - {E2E test files from validated directory}
 - {POM files from validated directory}
 </files_to_read>

@@ -366,7 +368,7 @@ Fix mode runs in two phases: **analyze first, then fix after user confirmation.*

 Task(
 prompt="
-<objective>Run tests and classify failures. Do NOT auto-fix anything yet — this is the analysis phase. Use Playwright MCP to reproduce E2E failures in the browser when available — navigate to failing pages, snapshot DOM, reproduce actions, and screenshot failure state for evidence.</objective>
+<objective>Run tests and classify failures. Do NOT auto-fix anything yet — this is the analysis phase. Use Playwright MCP to reproduce E2E failures in the browser when available — navigate to failing pages, snapshot DOM, reproduce actions, and screenshot failure state for evidence. Query Context7 MCP to verify framework syntax when diagnosing failures.</objective>
 <execution_context>@agents/qaa-bug-detective.md</execution_context>
 <files_to_read>
 - CLAUDE.md

@@ -376,6 +378,8 @@ Task(
 - .qa-output/codebase/API_CONTRACTS.md (if exists)
 - .qa-output/codebase/TEST_SURFACE.md (if exists)
 - .qa-output/codebase/TESTABILITY.md (if exists)
+- .qa-output/research/FRAMEWORK_CAPABILITIES.md (if exists)
+- .qa-output/research/TESTING_STACK.md (if exists)
 </files_to_read>
 <parameters>
 user_input: $ARGUMENTS

@@ -435,7 +439,7 @@ Proceed with auto-fixes? [yes / modify / cancel]

 Task(
 prompt="
-<objective>Auto-fix the confirmed TEST CODE ERRORS from the analysis phase. Use Playwright MCP to reproduce E2E failures in the browser when available
+<objective>Auto-fix the confirmed TEST CODE ERRORS from the analysis phase. Use Playwright MCP to reproduce E2E failures in the browser when available. Query Context7 MCP to verify correct framework syntax before applying any fix.</objective>
 <execution_context>@agents/qaa-bug-detective.md</execution_context>
 <files_to_read>
 - CLAUDE.md

@@ -445,6 +449,8 @@ Task(
 - .qa-output/codebase/API_CONTRACTS.md (if exists)
 - .qa-output/codebase/TEST_SURFACE.md (if exists)
 - .qa-output/codebase/TESTABILITY.md (if exists)
+- .qa-output/research/FRAMEWORK_CAPABILITIES.md (if exists)
+- .qa-output/research/TESTING_STACK.md (if exists)
 - .qa-output/FAILURE_CLASSIFICATION_REPORT.md
 </files_to_read>
 <parameters>
package/commands/qa-research.md
ADDED

@@ -0,0 +1,157 @@
+# QA Research
+
+Researches the testing ecosystem for the current project: investigates framework capabilities, best practices, selector syntax, and testing patterns using Context7 MCP and official documentation. Produces research files consumed by all downstream QA agents (analyzer, planner, executor, validator).
+
+## Usage
+
+```
+/qa-research [options]
+```
+
+### Options
+
+- `--focus <mode>` — research mode (default: `stack-testing`)
+  - `stack-testing` — full stack analysis: test runner, assertions, mocking, E2E framework, CI/CD patterns
+  - `framework-deep-dive` — deep dive into the detected/specified framework: full API, patterns, pitfalls, selector syntax
+  - `api-testing` — API testing strategy: endpoint patterns, contract testing, auth testing, error testing
+  - `e2e-strategy` — E2E strategy: framework comparison, POM patterns, selector strategies, visual testing
+- `--dev-repo <path>` — path to developer repository (default: current directory)
+
+### Mode Detection
+
+```
+if --focus framework-deep-dive:
+  MODE = "framework-deep-dive"
+elif --focus api-testing:
+  MODE = "api-testing"
+elif --focus e2e-strategy:
+  MODE = "e2e-strategy"
+else:
+  MODE = "stack-testing"
+```
+
+## What It Produces
+
+All output files are written to `.qa-output/research/`.
+
+| Mode | Output |
+|------|--------|
+| stack-testing | `TESTING_STACK.md` (+ `API_TESTING_STRATEGY.md` and `E2E_STRATEGY.md` if project has APIs and frontend) |
+| framework-deep-dive | `FRAMEWORK_CAPABILITIES.md` |
+| api-testing | `API_TESTING_STRATEGY.md` |
+| e2e-strategy | `E2E_STRATEGY.md` |
+
+These files are consumed by downstream agents:
+
+| File | Consumed By |
+|------|-------------|
+| `TESTING_STACK.md` | Analyzer (framework selection), Planner (dependency setup) |
+| `FRAMEWORK_CAPABILITIES.md` | Executor (idiomatic test writing), Validator (pattern checking), E2E Runner (correct syntax) |
+| `API_TESTING_STRATEGY.md` | Planner (API test case design), Executor (implementation patterns) |
+| `E2E_STRATEGY.md` | Planner (E2E scope decisions), Executor (POM and selector patterns) |
+
+## Instructions
+
+### Step 1: Initialize
+
+Parse `$ARGUMENTS` for flags.
+
+```bash
+MODE="stack-testing"
+DEV_REPO=""
+
+if echo "$ARGUMENTS" | grep -qE '\-\-focus'; then
+  MODE=$(echo "$ARGUMENTS" | grep -oE '\-\-focus\s+[^\s]+' | awk '{print $2}')
+fi
+
+if echo "$ARGUMENTS" | grep -qE '\-\-dev-repo'; then
+  DEV_REPO=$(echo "$ARGUMENTS" | grep -oE '\-\-dev-repo\s+[^\s]+' | awk '{print $2}')
+fi
+
+if [ -z "$DEV_REPO" ]; then
+  DEV_REPO=$(pwd)
+fi
+
+mkdir -p .qa-output/research
+```
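The flag resolution in the Step 1 bash above can be expressed as a small pure function. This Python sketch mirrors the documented behavior (it is not code the package ships) and assumes whitespace-separated arguments:

```python
import os

def parse_args(arguments: str) -> tuple[str, str]:
    """Resolve MODE and DEV_REPO the way the Step 1 bash block does."""
    mode, dev_repo = "stack-testing", ""
    tokens = arguments.split()
    for i, token in enumerate(tokens):
        if token == "--focus" and i + 1 < len(tokens):
            mode = tokens[i + 1]
        elif token == "--dev-repo" and i + 1 < len(tokens):
            dev_repo = tokens[i + 1]
    # Fall back to the current directory, like DEV_REPO=$(pwd)
    return mode, dev_repo or os.getcwd()

print(parse_args("--focus e2e-strategy --dev-repo /tmp/app"))  # ('e2e-strategy', '/tmp/app')
```

One portability note on the original: `grep -oE '...\s...'` relies on `\s` being understood by ERE, which is a GNU extension; on BSD grep the documented one-liner may need `[[:space:]]` instead.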
+
+Print banner:
+```
+=== QA Research ===
+Mode: {MODE}
+Dev Repo: {DEV_REPO}
+Output: .qa-output/research/
+====================
+```
+
+### Step 2: Read Existing Context (if available)
+
+Read these files if they exist — they provide context to the researcher:
+
+- `.qa-output/SCAN_MANIFEST.md` — detected project stack from scanner
+- `CLAUDE.md` — QA standards that the research must align with
+- `~/.claude/qaa/MY_PREFERENCES.md` — user preferences (framework choices, conventions)
+
+### Step 3: Spawn Researcher Agent
+
+Task(
+prompt="
+<objective>Research the testing ecosystem for this project. Mode: {MODE}. Use Context7 MCP as the primary source for all framework and library questions. Verify everything against current documentation — do not rely on training data alone.</objective>
+<execution_context>@agents/qaa-project-researcher.md</execution_context>
+<files_to_read>
+- CLAUDE.md
+- ~/.claude/qaa/MY_PREFERENCES.md (if exists)
+- .qa-output/SCAN_MANIFEST.md (if exists)
+</files_to_read>
+<parameters>
+mode: {MODE}
+dev_repo_path: {DEV_REPO}
+output_dir: .qa-output/research
+</parameters>
+"
+)
+
+### Step 4: Present Results
+
+After the researcher completes, summarize what was found:
+
+```
+=== Research Complete ===
+Mode: {MODE}
+Files produced: {list of files in .qa-output/research/}
+Framework detected: {name + version}
+Key findings:
+- {finding 1}
+- {finding 2}
+- {finding 3}
+
+These files will be automatically consumed by downstream agents
+when you run /qa-create-test, /qa-fix, or /qa-start.
+=========================
+```
+
+$ARGUMENTS
+
+## MANDATORY verification — run ALL commands below, no exceptions, no skipping
+
+Before returning control, copy-paste and run this ENTIRE block. Do NOT decide which commands "apply" — run all of them every time. The output confirms what happened; you do not get to assume the answer.
+
+```bash
+echo "=== QA-RESEARCH CHECKLIST START ==="
+echo "1. Research output files:"
+ls .qa-output/research/ 2>/dev/null || echo "NO_RESEARCH_FILES"
+echo "2. TESTING_STACK.md content check:"
+head -20 .qa-output/research/TESTING_STACK.md 2>/dev/null || echo "NO_TESTING_STACK"
+echo "3. FRAMEWORK_CAPABILITIES.md content check:"
+head -20 .qa-output/research/FRAMEWORK_CAPABILITIES.md 2>/dev/null || echo "NO_FRAMEWORK_CAPABILITIES"
+echo "4. Context7 was used (check for confidence levels):"
+grep -c "HIGH\|MEDIUM\|LOW" .qa-output/research/*.md 2>/dev/null || echo "NO_CONFIDENCE_LEVELS"
+echo "5. Version information present:"
+grep -i "version" .qa-output/research/*.md 2>/dev/null | head -5 || echo "NO_VERSION_INFO"
+echo "=== QA-RESEARCH CHECKLIST END ==="
+```
+
+**Rules:**
+- Run the block AS-IS. Do not modify it. Do not split it. Do not skip lines.
+- If any output shows a problem (NO_RESEARCH_FILES after research completed), fix it before returning.
+- Do NOT mark this task as complete until the block has been executed and you have read every line of output.
package/commands/qa-test-report.md
ADDED

@@ -0,0 +1,219 @@
+# QA Test Report
+
+Generate a per-ticket QA execution summary and append it to the Azure DevOps work item's `Custom.QATestCasesReport` field. For each test case linked to the work item, pull the execution status from ADO Test Runs — or prompt the user when ADO has no run result — then render a bulleted "Tested Scenarios" list (with a separate "Failures" section when anything failed).
+
+## Usage
+
+```
+/qa-test-report <work-item-id>
+```
+
+### Arguments
+
+| Parameter | Purpose | Default |
+|-----------|---------|---------|
+| `<work-item-id>` | Azure DevOps work item ID (Bug, Feature, User Story, Ticket) whose linked test cases you want reported | Required — prompt the user if missing |
+
+## What It Produces
+
+- Markdown report printed to chat (for user review)
+- HTML report **appended** to the work item's `Custom.QATestCasesReport` field, separated from prior content by a blank line
+- No local file is written
+
+---
+
+## Process
+
+### Phase 1 — Resolve the Work Item
+
+1. If `$ARGUMENTS` is empty or missing a work item ID, ask the user: *"Which Azure DevOps work item ID should I build the QA Test Cases report for?"* Wait for the answer before proceeding.
+2. Call `wit_get_work_item` with `expand: "relations"` for the resolved ID.
+   - Capture: **title**, **type** (`Bug`, `Feature`, `User Story`, `Ticket`), **state**, **area path**, **iteration path**.
+   - For context used in Phase 4, capture type-appropriate content fields:
+     - `Bug` / `Ticket`: **Repro Steps** (`Microsoft.VSTS.TCM.ReproSteps`), **System Info** (`Microsoft.VSTS.TCM.SystemInfo`), **Description**, **QA Notes** (`CIIScrum.QANotes`), **What is expected to happen** (`Custom.Whatisexpectedtohappen`), **What is actually happening** (`Custom.Whatisactuallyhappening`).
+     - `User Story`: **Acceptance Criteria** (`Microsoft.VSTS.Common.AcceptanceCriteria`), **Description**.
+     - `Feature`: **Description**, **Acceptance Criteria** if present.
+   - Note the project for all subsequent ADO calls.
+3. Also call `wit_list_work_item_comments` — comments often contain fail reasons or tester observations referenced in Phase 3 fallbacks.
+
+---
+
+### Phase 2 — Resolve the Test Case List
+
+Source-resolution order:
+
+1. **Check for a local file** at `ai-tasks/ticket-{id}/test-cases.md` (produced by `/qa-create-test --ado`). If it exists, parse the TC IDs it contains — this is a hint list only.
+2. **Always query ADO** — inspect the relations returned in Phase 1, filter for link type `"Microsoft.VSTS.Common.TestedBy-Forward"` (*Tested By*), and build the authoritative list of linked TC IDs.
+3. **If both exist and disagree**: trust **ADO**. Log the discrepancy in chat as an FYI (e.g., `"TC#4521 was in local file but not linked in ADO — skipped"` or `"TC#4530 is linked in ADO but missing from local file — included"`) but proceed with the ADO list.
+4. **If no TCs are found in ADO** (and the local file has none either): print `"No test cases linked to work item #{id} — nothing to report."` and stop.
+
+For every TC ID in the final list, call `wit_get_work_item` with `expand: "relations"` to capture: **title**, **state**, **steps** (`Microsoft.VSTS.TCM.Steps`), **priority**, **tags**, and any linked runs.
+
+---
+
+### Phase 3 — Resolve Execution Status Per Test Case
+
+For each TC, determine a status of **Passed**, **Failed**, or **Blocked** using this order.
+
+**Step 1 — Fetch Test Point outcomes via ADO REST (formal path).**
+
+The ADO MCP surface does not expose Test Point outcomes, but the REST API does. Use the same auth pattern as `/qa-create-test --ado`: the `ADO_MCP_AUTH_TOKEN` env var as a PAT, sent via Basic auth.
+
+1. Derive **org** and **project** from the work item context captured in Phase 1 (not hardcoded). The work-item URL / project name on the `wit_get_work_item` response gives you both.
+2. If `ADO_MCP_AUTH_TOKEN` is **not set**, skip directly to Step 2 (and note in the final report: *"Auto-status skipped — ADO_MCP_AUTH_TOKEN not set."*).
+3. Otherwise, call the Test Points REST endpoint once per run with the list of linked TC IDs:
+
+```bash
+curl -s \
+  --header "Authorization: Basic $(echo -n :${ADO_MCP_AUTH_TOKEN} | base64)" \
+  --header "Content-Type: application/json" \
+  --request POST \
+  --data '{"pointsFilter":{"testcaseIds":[<id1>,<id2>,...]}}' \
+  "https://dev.azure.com/{org}/{project}/_apis/test/points?api-version=7.1"
+```
+
+4. Parse the response. For each returned point you get: `testCase.id`, `testPlan.id`, `suite.id`, `outcome`, `lastTestRun.id`, `lastResultDetails.dateCompleted`, `lastResultDetails.runBy`.
+5. **If a TC has multiple points** (e.g., it lives in several suites or runs on multiple configurations), pick the point with the **most recent `lastResultDetails.dateCompleted`**.
+6. Map `outcome` (case-insensitive) to our statuses:
+   - `passed` → **Passed**
+   - `failed` → **Failed**
+   - `blocked` → **Blocked**
+   - `notExecuted` / `none` / `notApplicable` / `paused` / `inProgress` / `warning` / `error` → treat as missing, fall through to Step 2 for this TC
+7. **If the call fails** (non-2xx, network error, token rejected): log the failure briefly in chat, skip to Step 2 for all TCs, and note in the final report that auto-status was unavailable.
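Steps 4 through 6 can be sketched as one small resolver. The field names follow the response shape listed in Step 4, the tie-break follows Step 5, and the mapping follows Step 6; comparing ISO-8601 `dateCompleted` strings lexicographically is an assumption that holds when all timestamps are UTC with a uniform format:

```python
OUTCOME_MAP = {"passed": "Passed", "failed": "Failed", "blocked": "Blocked"}

def resolve_statuses(points: list[dict]) -> dict[int, str]:
    """Map testCase.id -> Passed/Failed/Blocked using each TC's most recent point."""
    latest: dict[int, dict] = {}
    for point in points:
        tc_id = point["testCase"]["id"]
        completed = point.get("lastResultDetails", {}).get("dateCompleted", "")
        # Step 5: keep only the point with the most recent completion date.
        if tc_id not in latest or completed > latest[tc_id]["completed"]:
            latest[tc_id] = {"completed": completed, "outcome": point.get("outcome", "")}
    # Step 6: outcomes outside the map (notExecuted, none, ...) are omitted,
    # so those test cases fall through to the manual prompt in Step 2.
    return {
        tc_id: OUTCOME_MAP[entry["outcome"].lower()]
        for tc_id, entry in latest.items()
        if entry["outcome"].lower() in OUTCOME_MAP
    }
```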
+
+**Step 2 — Ask the user** (per TC that didn't resolve in Step 1). Show the TC ID and its synthesized scenario description (from Phase 4) for context:
+
+```
+TC #{id} — {scenario description}
+Status? [Passed / Failed / Blocked]
+```
+
+Accept case-insensitive input. Re-prompt if the answer is not one of the three.
+
+**Step 3 — If the resolved status is Failed**, capture a reason:
+
+1. Check the Test Run data first — when Step 1 returned an outcome, also fetch the Test Result for `lastTestRun.id` to read `errorMessage`/`comment`:
+   ```bash
+   curl -s \
+     --header "Authorization: Basic $(echo -n :${ADO_MCP_AUTH_TOKEN} | base64)" \
+     "https://dev.azure.com/{org}/{project}/_apis/test/Runs/{runId}/results?api-version=7.1"
+   ```
+   If the result has a non-empty `errorMessage` or `comment`, use that as the reason.
+2. Also check the TC's own comments (`wit_list_work_item_comments` for the TC ID) and the parent work item's comments for any line mentioning this TC.
+3. If no reason is found, ask the user:
+   ```
+   TC #{id} failed — what was the reason?
+   ```
+   Accept a free-text one-liner.
+
+Keep the per-TC answers (status, reason, and auto/manual source) in memory; do not write them anywhere intermediate.
+
+---
+
+### Phase 4 — Synthesize Scenario Descriptions
+
+For each TC, produce a **readable one-line scenario description** for the bullet list. Do **not** copy the raw TC title when it is terse or cryptic — TC titles are often shorthand.
+
+Build the description from:
+- TC **title** (starting point)
+- TC **steps** (each step's action + expected result — gives you what was actually verified)
+- Parent work item **description / repro steps / acceptance criteria** (gives you the user-facing phrasing of *what should be true after the fix*)
+
+Write the description in the voice of the tested outcome, not the test action. Examples:
+
+| Raw TC title | Good scenario description |
+|---|---|
+| `Verify PH row display logic` | `$0 Previous Balance row no longer appears in Payment History` |
+| `Check login 401 scenario` | `Invalid credentials return a 401 with an error banner` |
+| `Regression: limit 100` | `Entry limit enforced at 100 — the 101st entry is rejected` |
+
+Keep bullets short — one line, past-tense or assertive present-tense, user-visible wording.
+
+---
+
+### Phase 5 — Build the Report
+
+**Markdown version (for chat output):**
+
+```markdown
+Tested Scenarios
+
+- ✅ {scenario}
+- ✅ {scenario}
+- ❌ {scenario}
+- ⚠️ {scenario}
+
+Failures
+
+- ❌ {scenario}: {reason}
+```
+
+Rules:
+- Status emoji: **Passed → ✅**, **Failed → ❌**, **Blocked → ⚠️**
+- Include the `Failures` section **only if at least one TC failed**. One bullet per failure with the captured reason appended after `:`.
+- Do **not** include a "Result" section.
+- Do **not** include TC IDs, priority, tags, or counts in the bullets.
+- Keep order stable — preserve the order TCs appeared in the ADO linked list.
+
+**HTML version (for the ADO field):**
+
+Render with the minimum markup needed for ADO's rich-text field to display bullets and line breaks:
+
+```html
+<b>Tested Scenarios</b><br>
+<ul>
+<li>✅ {scenario}</li>
+<li>✅ {scenario}</li>
+<li>❌ {scenario}</li>
+<li>⚠️ {scenario}</li>
+</ul>
+<b>Failures</b><br>
+<ul>
+<li>❌ {scenario}: {reason}</li>
+</ul>
+```
+
+Rules:
+- Escape scenario text for HTML (`&` → `&amp;`, `<` → `&lt;`, `>` → `&gt;`).
+- Emit the `<b>Failures</b>…` block **only if at least one TC failed**.
+- No inline styles, no classes, no wrapping `<div>`.
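The HTML rules above reduce to a small renderer. This is a sketch, not shipped code; the tuple shape is illustrative, and Python's `html.escape(..., quote=False)` escapes exactly the three characters the first rule lists:

```python
import html

STATUS_EMOJI = {"Passed": "✅", "Failed": "❌", "Blocked": "⚠️"}

def render_html(results: list[tuple[str, str, str]]) -> str:
    """results: (scenario, status, reason) tuples in stable ADO link order."""
    bullets = "".join(
        f"<li>{STATUS_EMOJI[status]} {html.escape(scenario, quote=False)}</li>"
        for scenario, status, _ in results
    )
    report = f"<b>Tested Scenarios</b><br>\n<ul>{bullets}</ul>"
    failures = [
        f"<li>❌ {html.escape(scenario, quote=False)}: {html.escape(reason, quote=False)}</li>"
        for scenario, status, reason in results
        if status == "Failed"
    ]
    if failures:  # Failures block only when at least one TC failed
        report += f"\n<b>Failures</b><br>\n<ul>{''.join(failures)}</ul>"
    return report
```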
+
+---
+
+### Phase 6 — Append to the ADO Field
+
+1. Re-read the current `Custom.QATestCasesReport` value for the work item (it may contain prior runs you must preserve).
+2. Build the new field value:
+   - If the existing value is empty or whitespace: new value = the HTML report from Phase 5.
+   - Otherwise: new value = existing content + `<br><br>` + the HTML report from Phase 5.
+3. Call `wit_update_work_item` setting `Custom.QATestCasesReport` to the new value. Pass only this field — do not include any others in the update.
+
+Do **not** overwrite other fields. Do **not** create a local file.
+
+---
+
+### Phase 7 — Present to User
+
+Print the **markdown** version of the report to the user in chat. After the report, print a one-line confirmation:
+
+```
+Appended to work item #{id} → Custom.QATestCasesReport.
+```
+
+If any scenarios required a manual status answer or a manual failure reason during Phase 3, append a short note listing how many:
+
+```
+{n} status(es) entered manually · {m} failure reason(s) entered manually.
+```
+
+If the TC list had any local-vs-ADO discrepancies (Phase 2), append a short note listing them.
+
+---
+
+## Notes
+
+- This skill does **not** create test cases. Use `/qa-create-test --ado <id>` for that.
+- This skill is read-mostly on the ADO side — it only writes to `Custom.QATestCasesReport`. It does not change TC state, does not create Test Runs, and does not modify links.
+- Scenario descriptions are synthesized, not verbatim TC titles — review the chat output before trusting the field content.
+
+$ARGUMENTS

package/package.json
CHANGED

package/workflows/qa-start.md
CHANGED
@@ -271,6 +271,66 @@ If SCAN_MANIFEST.md is missing, treat as stage failure. Set status to failed and

 </step>

+<step name="research">
+
+## Step 3b: Research Testing Ecosystem
+
+**State update -- mark research as running:**
+```bash
+node bin/qaa-tools.cjs state patch --"Research Status" running --"Status" "Researching testing ecosystem" 2>/dev/null || true
+```
+
+**Print stage banner:**
+```
++------------------------------------------+
+| STAGE 1b: Project Researcher             |
+| Status: Running...                       |
++------------------------------------------+
+```
+
+**Create output directory:**
+```bash
+mkdir -p ${output_dir}/research
+```
+
+**Spawn researcher agent:**
+```
+Agent(subagent_type="general-purpose",
+prompt="
+<objective>Research the testing ecosystem for this project. Use Context7 MCP as the primary source for all framework and library questions. Produce research documents consumed by downstream agents.</objective>
+<execution_context>@agents/qaa-project-researcher.md</execution_context>
+<files_to_read>
+- CLAUDE.md
+- ~/.claude/qaa/MY_PREFERENCES.md (if exists)
+- ${output_dir}/SCAN_MANIFEST.md
+</files_to_read>
+<parameters>
+mode: stack-testing
+dev_repo_path: {DEV_REPO}
+output_dir: ${output_dir}/research
+</parameters>
+"
+)
+```
+
+**Verify research artifacts exist:**
+```bash
+ls ${output_dir}/research/*.md 2>/dev/null && echo "OK: Research files produced" || echo "WARNING: No research files produced"
+```
+
+**Research is non-blocking:** If the researcher fails or produces no output, the pipeline continues — downstream agents will fall back to Context7 queries directly. Log the warning but do NOT stop the pipeline.
+
+```bash
+if [ ! -f "${output_dir}/research/TESTING_STACK.md" ]; then
+  echo "WARNING: Research stage produced no output. Downstream agents will query Context7 directly."
+  node bin/qaa-tools.cjs state patch --"Research Status" "skipped (no output)" 2>/dev/null || true
+else
+  node bin/qaa-tools.cjs state patch --"Research Status" complete 2>/dev/null || true
+fi
+```
+
+</step>
+
 <step name="analyze">

 ## Step 4: Analyze Repository

@@ -301,6 +361,8 @@ Agent(subagent_type="general-purpose",
 <files_to_read>
 - {output_dir}/SCAN_MANIFEST.md
 - CLAUDE.md
+- {output_dir}/research/TESTING_STACK.md (if exists)
+- {output_dir}/research/FRAMEWORK_CAPABILITIES.md (if exists)
 </files_to_read>
 <parameters>
 mode: {mode}

@@ -528,6 +590,9 @@ Agent(subagent_type="general-purpose",
 - {output_dir}/GENERATION_PLAN.md
 - {output_dir}/TEST_INVENTORY.md (Option 1) or {output_dir}/GAP_ANALYSIS.md (Options 2/3)
 - CLAUDE.md
+- {output_dir}/research/FRAMEWORK_CAPABILITIES.md (if exists)
+- {output_dir}/research/E2E_STRATEGY.md (if exists)
+- {output_dir}/research/API_TESTING_STRATEGY.md (if exists)
 </files_to_read>
 <parameters>
 workflow_option: {option}

@@ -554,6 +619,9 @@ Agent(subagent_type="general-purpose",
 - {output_dir}/GENERATION_PLAN.md
 - {output_dir}/TEST_INVENTORY.md (Option 1) or {output_dir}/GAP_ANALYSIS.md (Options 2/3)
 - CLAUDE.md
+- {output_dir}/research/FRAMEWORK_CAPABILITIES.md (if exists)
+- {output_dir}/research/E2E_STRATEGY.md (if exists)
+- {output_dir}/research/API_TESTING_STRATEGY.md (if exists)
 </files_to_read>
 <parameters>
 workflow_option: {option}

@@ -745,6 +813,8 @@ Agent(subagent_type="general-purpose",
 - {test execution results -- from validator or direct test run}
 - {failing test source files -- paths from executor return}
 - CLAUDE.md
+- {output_dir}/research/FRAMEWORK_CAPABILITIES.md (if exists)
+- {output_dir}/research/TESTING_STACK.md (if exists)
 </files_to_read>
 <parameters>
 output_path: {output_dir}/FAILURE_CLASSIFICATION_REPORT.md