qaa-agent 1.8.1 → 1.8.5
This diff shows the content of publicly available package versions released to one of the supported registries. It is provided for informational purposes only and reflects the changes between those versions as they appear in their respective public registries.
- package/CHANGELOG.md +12 -0
- package/commands/qa-create-test-ado.md +404 -0
- package/commands/qa-create-test.md +46 -5
- package/package.json +1 -1
package/CHANGELOG.md
CHANGED

@@ -3,6 +3,18 @@
 
 All notable changes to QAA (QA Automation Agent) are documented here.
 
+## [1.8.5] - 2026-04-17
+
+### Added
+
+- **Azure DevOps mode in `/qa-create-test`** — new `--ado` flag enables creating Test Cases directly in Azure DevOps from a work item. Accepts a work item ID or full ADO URL, and auto-detects `dev.azure.com` and `*.visualstudio.com` URLs. Features include: boundary value triplet detection (N-1, N, N+1), deduplication against existing linked TCs, confidence scoring (Specified vs Draft), keyword-based Critical tagging, and a preconditions block per test case.
+- **`/qa-create-test-ado` standalone command** — dedicated command for Azure DevOps test case creation with a 7-phase workflow: retrieve the work item with comments/attachments, dedup check, type-based content extraction (Bug → Repro Steps, User Story → Acceptance Criteria), test case design, creation in ADO via `testplan_create_test_case`, structured report generation, and report attachment to the source work item.
+- **ADO-specific flags** — `--area-path`, `--iteration-path` (override paths for created TCs), `--skip-dedup` (skip the deduplication check).
+
+### Changed
+
+- **`/qa-create-test` now supports 5 modes** — from-code, from-ticket, ADO, update, and POM-only (previously 3 modes). Mode detection now recognizes ADO URLs before ticket URLs to avoid routing conflicts.
+
 ## [1.8.1] - 2026-04-16
 
 ### Added
package/commands/qa-create-test-ado.md
ADDED

@@ -0,0 +1,404 @@

# QA Create Test — Azure DevOps

Retrieve an Azure DevOps work item, analyze its content, and generate well-structured Test Cases directly in Azure DevOps using the ADO MCP tools. Each test case is tagged for test plan membership (Smoke, Regression, Critical) and linked back to the source work item for full traceability. Integrates with the QAA pipeline: reads the codebase map, locator registry, and user preferences for context-aware test case generation.

## Usage

```
/qa-create-test-ado <work-item-id> [--area-path=<path>] [--iteration-path=<path>] [--skip-map] [--skip-dedup] [--app-url <url>]
```

### Arguments

| Parameter | Purpose | Default |
|-----------|---------|---------|
| `<work-item-id>` | Azure DevOps work item ID to generate test cases from | Required |
| `--area-path=<path>` | Override area path for all created test artifacts | Source work item's area path |
| `--iteration-path=<path>` | Override iteration path for all created test artifacts | Source work item's iteration path |
| `--skip-map` | Skip the codebase map check and proceed without project context | false |
| `--skip-dedup` | Skip the deduplication check against existing linked test cases | false |
| `--app-url <url>` | URL of the running application for locator extraction via Playwright MCP | auto-detect |

## What It Produces

- Test Cases created directly in Azure DevOps (via `testplan_create_test_case`)
- Test Cases linked to the source work item via a *Tested By* relationship
- Tags applied: `Smoke`, `Regression`, `Critical`, `AutomationCandidate`, `NeedsReview`
- `ai-tasks/ticket-{id}/test-cases.md` — structured report
- Report attached to the work item (if `ADO_MCP_AUTH_TOKEN` is set) or written to the `Custom.QATestCasesReport` field (fallback)

---

## Process

### Phase 1: Read Pipeline Context

Before retrieving the work item, read the QAA pipeline artifacts for context-aware generation.

1. **Read `CLAUDE.md`** — POM rules, locator tiers, assertion rules, naming conventions, quality gates, test spec rules.

2. **Read user preferences** — `~/.claude/qaa/MY_PREFERENCES.md` (if it exists). User overrides win over defaults.

3. **Check for a codebase map** (`.qa-output/codebase/`):
   - Look for: `CODE_PATTERNS.md`, `API_CONTRACTS.md`, `TEST_SURFACE.md`, `TESTABILITY.md`, `RISK_MAP.md`, `CRITICAL_PATHS.md`
   - If at least 2 exist: read them all for project context (naming conventions, API shapes, testable surfaces, risk areas).
   - If NONE exist and `--skip-map` was not passed: warn the user that test cases will lack project context and suggest running `/qa-map` first. Continue anyway (ADO test cases are higher-level than code-level tests).

4. **Check the locator registry** — `.qa-output/locators/LOCATOR_REGISTRY.md` (if it exists):
   - If locators exist for pages related to the work item's feature: reference them in test step expected results (e.g., "Verify element `[data-testid='login-submit-btn']` is visible").
   - If `--app-url` is provided and locators are missing: use Playwright MCP to extract locators from the live app before designing test steps:
     ```
     mcp__playwright__browser_navigate({ url: "{app_url}/{feature_path}" })
     mcp__playwright__browser_snapshot()
     ```
   - Write the extracted locators to `.qa-output/locators/{feature}.locators.md` and update the registry.

---

### Phase 2: Retrieve the Work Item

Use `wit_get_work_item` with `expand: "relations"` to fetch the full work item:

- Capture: **title**, **type** (`Bug`, `User Story`, `Ticket`), **state**, **assigned-to**, **area path**, **iteration path**
- Capture all relevant content fields based on type (see Phase 3)
- Note the project for all subsequent calls

**Also retrieve comments** using `wit_list_work_item_comments`:

- Read all comments in chronological order
- Look for: acceptance criteria added in comments, QA notes, scope clarifications, tester feedback, or any conditions of satisfaction mentioned informally
- These often contain implied test cases not captured in the formal fields

**Also check attachments** from the relations list (entries where `rel` equals `AttachedFile`):

- Filter to `.csv` and `.txt` files (case-insensitive) by inspecting `attributes.name`
- If found, download via:
  ```bash
  curl -s --user ":{AZURE_DEVOPS_PAT}" "{attachment-url}"
  ```
- Read the content for test data, expected values, error logs, or sample datasets that define expected behavior
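The attachment filtering above can be sketched as follows. This is a minimal sketch: the relations shape mirrors what `wit_get_work_item` returns with `expand: "relations"`, but the exact field access is illustrative, not a guaranteed API contract.

```python
def filter_attachments(relations):
    """Keep only .csv/.txt attachments from a work item's relations list."""
    files = []
    for entry in relations:
        if entry.get("rel") != "AttachedFile":
            continue  # skip Tested By, parent/child, and other link types
        name = entry.get("attributes", {}).get("name", "")
        if name.lower().endswith((".csv", ".txt")):  # case-insensitive extension match
            files.append({"name": name, "url": entry.get("url")})
    return files

relations = [
    {"rel": "AttachedFile", "url": "https://dev.azure.com/example/att1",
     "attributes": {"name": "Expected-Values.CSV"}},
    {"rel": "AttachedFile", "url": "https://dev.azure.com/example/att2",
     "attributes": {"name": "mockup.png"}},
    {"rel": "Microsoft.VSTS.Common.TestedBy-Forward",
     "url": "https://dev.azure.com/example/wit/101"},
]
print([f["name"] for f in filter_attachments(relations)])  # ['Expected-Values.CSV']
```

Each matching entry's `url` is what gets passed to the `curl` download above.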

---

### Phase 2b: Deduplication Check — Query Existing Test Cases

Before generating any new test cases, check whether the source work item already has linked test cases, to prevent duplicates.

1. Inspect the relations returned in Phase 2 — filter for link type `"Microsoft.VSTS.Common.TestedBy-Forward"` (i.e., *Tested By* links).
2. For each linked test case ID found, call `wit_get_work_item` to retrieve its **title** and **state**.
3. Build an **existing TC registry** — a list of `{ id, title, state }` for all currently linked test cases.
4. In Phase 5, before calling `testplan_create_test_case` for each planned TC, compare its title (normalized: lowercase, trimmed) against every title in the registry.
   - **If a match is found** and the existing TC is in state `Design`, `Ready`, or `Closed`: skip creation and log `"Skipped — duplicate of TC #{id}"`.
   - **If a match is found** but the existing TC is in state `Removed`: create the new TC anyway (the old one was intentionally discarded).
   - **If no match**: proceed with creation.
5. Include a **Dedup Summary** section in the output report.

Skip this check with `--skip-dedup`.
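The comparison in step 4 can be sketched as a small helper. A minimal sketch: registry entries use the `{ id, title, state }` shape built in step 3, and the function names are illustrative.

```python
def normalize(title: str) -> str:
    """Normalize a TC title for comparison: trim whitespace, lowercase."""
    return title.strip().lower()

def dedup_decision(planned_title, registry):
    """Return (should_create, reason) for a planned TC against linked TCs."""
    key = normalize(planned_title)
    for tc in registry:
        if normalize(tc["title"]) != key:
            continue
        if tc["state"] in ("Design", "Ready", "Closed"):
            return False, f"Skipped — duplicate of TC #{tc['id']}"
        if tc["state"] == "Removed":
            # Old TC was intentionally discarded — safe to recreate
            return True, f"Recreating — TC #{tc['id']} was Removed"
    return True, "No duplicate found"

registry = [{"id": 101, "title": "Verify login with valid credentials", "state": "Ready"}]
print(dedup_decision("  verify login with VALID credentials ", registry))
```

Normalizing both sides makes the check robust to casing and stray whitespace, which are the most common sources of near-duplicate titles.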

---

### Phase 3: Identify Work Item Type and Extract Test Source Content

Apply the correct extraction strategy based on the work item type:

#### If type is `Bug` or `Ticket`:

Primary source — **Repro Steps** (`Microsoft.VSTS.TCM.ReproSteps`):
- Each distinct action sequence is a candidate test case
- The repro steps define the *negative path* (what triggers the bug)
- Derive the *positive/fix-verification path* by inverting the expected outcome
- Also read: **System Info** (`Microsoft.VSTS.TCM.SystemInfo`), **Description**, **QA Notes** (`CIIScrum.QANotes`)
- Check `Custom.Whatisexpectedtohappen` and `Custom.Whatisactuallyhappening` to anchor pass/fail assertions

Secondary sources:
- Comments, for tester observations or specific scenarios to cover
- Attachments, for error data or sample inputs

#### If type is `User Story`:

Primary source — **Acceptance Criteria** (`Microsoft.VSTS.Common.AcceptanceCriteria`):
- Each acceptance criterion (Given/When/Then or checklist) maps to one or more test cases
- Also read: **Description**, for context and implied behaviors

Secondary sources:
- Comments, for clarifications, edge cases raised in refinement, or stakeholder scenarios
- Attachments, for wireframes described in text, sample data, or business rules documents

#### If type is unrecognized or fields are empty:

Fall back to **Description** as the primary source. Extract any stated behaviors, expected outcomes, or constraints. Note the fallback in the output.

**Cross-reference with the codebase map** (if available):
- Match mentioned components/features against `TEST_SURFACE.md` entry points
- Check `RISK_MAP.md` for the risk level of affected areas
- Use `API_CONTRACTS.md` for exact endpoint shapes if the work item mentions API behavior
- Use `CODE_PATTERNS.md` to align test step language with project conventions

---

### Phase 4: Analyze and Design Test Cases

Before creating anything in Azure DevOps, plan out all test cases.

**For each distinct scenario identified, determine:**

1. **Test Case Title** — concise, action-oriented name (e.g., "Verify guest pass entry counter resets at midnight")
2. **Steps** — formatted as `{step action} | {expected result}` per step, using `|` as the delimiter
3. **Priority** — 1 (Critical), 2 (High), 3 (Medium), 4 (Low)
4. **Tags** — one or more of: `Smoke`, `Regression`, `Critical`, `AutomationCandidate`, `NeedsReview`
5. **Preconditions** — required setup before executing the test
6. **Confidence** — `Specified` or `Draft`

**Minimum test case coverage per work item type:**

| Scenario Type | Bug/Ticket | User Story |
|---------------|-----------|------------|
| Happy path (fix verified / AC met) | Required | Required per AC item |
| Negative / error path | Required (original repro) | Where AC implies failure states |
| Boundary / edge cases | If data-driven | If AC contains limits or conditions |
| Boundary value triplets (N-1, N, N+1) | If limits detected | If AC contains limits/ranges |
| Regression guard (related area) | Required | Required |

#### Boundary Value Detection

Scan all source content for **boundary keyword triggers**:

> `max`, `min`, `limit`, `threshold`, `cap`, `ceiling`, `floor`, `range`, `between`, `up to`, `at most`, `at least`, `no more than`, `no fewer than`, `maximum`, `minimum`, `exactly`, `exceeds`, `boundary`

When a trigger is found alongside a numeric value **N**:

1. **Generate three test cases** (the boundary triplet):
   - **N - 1** — just below the boundary
   - **N** — exactly at the boundary
   - **N + 1** — just above the boundary
2. Title them clearly: e.g., `"Verify entry limit at 99 (below threshold)"`, `"...at 100 (at threshold)"`, `"...at 101 (above threshold)"`.
3. Tag all three with `Regression`.
4. If the boundary is on a critical-path field (per `CRITICAL_PATHS.md` or keyword detection), also tag `Critical`.

If the source mentions a range, generate boundary triplets for **both** ends.
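The detection step can be sketched with a regular expression over a subset of the trigger keywords. This is a minimal sketch: it catches a single keyword–number pair per occurrence; a fuller implementation would also expand both ends of explicit ranges ("between 5 and 10") and handle thousands separators.

```python
import re

# Subset of the boundary trigger keywords above (illustrative, not exhaustive)
KEYWORDS = r"max|maximum|min|minimum|limit|threshold|cap|up to|at most|at least"
# A keyword, then at most 20 non-digit characters, then the numeric value N
PATTERN = re.compile(r"(?i)\b(" + KEYWORDS + r")\b\D{0,20}?(\d+)")

def boundary_triplets(text):
    """Expand each detected keyword+number pair into an (N-1, N, N+1) triplet."""
    return [(int(m.group(2)) - 1, int(m.group(2)), int(m.group(2)) + 1)
            for m in PATTERN.finditer(text)]

print(boundary_triplets("Guests may use at most 100 passes per day."))  # [(99, 100, 101)]
```

Each triplet then becomes three planned test cases titled per the convention in step 2.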

#### Tagging Rules

| Tag | Assign when... |
|-----|---------------|
| `Smoke` | Verifies core, user-facing functionality that must work for the app to be usable at all. Limit to the most essential 1-2 cases per work item. |
| `Regression` | Guards against the specific bug or behavior being re-introduced. Every fix-verification test for a Bug/Ticket should be tagged. For User Stories, tag tests covering AC that touches shared or high-traffic code paths. |
| `Critical` | Covers functionality whose failure would directly impact revenue, security, data integrity, or legal compliance. **Also apply when critical keywords are detected** (see Keyword-Based Critical Tagging below). Apply conservatively. |
| `AutomationCandidate` | The test has: (a) deterministic steps with no subjective judgment, (b) assertions based on concrete data/state, (c) no manual-only prerequisites. Advisory only — QA confirms. |

**Do not assign `Smoke` to every test case.** Smoke tests are a small, fast-running set.

#### Keyword-Based Critical Tagging

Automatically tag as `Critical` when any of the following keywords appear in the source content:

> `auth`, `authentication`, `login`, `password`, `OAuth`, `SSO`, `payment`, `billing`, `charge`, `invoice`, `PII`, `personal data`, `SSN`, `date of birth`, `security`, `encryption`, `token`, `certificate`, `data integrity`, `transaction`, `rollback`, `compliance`, `HIPAA`, `GDPR`, `SOC`, `audit`, `permission`, `role-based`, `access control`

Cross-reference with `RISK_MAP.md` (if available) for additional risk-based tagging.
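The keyword scan can be sketched as a case-insensitive substring check over a subset of the list above. Note that short keywords like `auth` would also match words like "author", so a production implementation might prefer word-boundary matching; this sketch uses the longer forms.

```python
# Subset of the critical keyword list above (lowercased for comparison)
CRITICAL_KEYWORDS = [
    "authentication", "login", "password", "oauth", "sso", "payment",
    "billing", "pii", "security", "encryption", "token", "transaction",
    "compliance", "gdpr", "audit", "permission", "access control",
]

def is_critical(text: str) -> bool:
    """True if any critical keyword appears in the source content."""
    lowered = text.lower()
    return any(kw in lowered for kw in CRITICAL_KEYWORDS)

print(is_critical("User must re-enter their password after session timeout"))  # True
print(is_critical("Update the footer copyright year"))  # False
```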

#### Confidence Scoring

| Confidence | Criteria | Behavior |
|------------|----------|----------|
| **Specified** | Source content explicitly describes the scenario, expected outcome, and data. | Create the TC normally. |
| **Draft** | Scenario is implied or partially described — inferred from context or sparse source. | Prefix the TC title with `[DRAFT]`. Add the `NeedsReview` tag. Add a final step: `"Review — this test case was auto-generated from sparse source material and requires QA validation before execution." \| "QA has reviewed and confirmed or updated the steps."` |

**Threshold**: If more than 50% of the source content fields are empty or contain fewer than 20 words, default all inferred TCs to Draft.
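The threshold rule can be sketched directly. A minimal sketch: `source_fields` maps field names (e.g., Repro Steps, Description) to their text content.

```python
def default_confidence(source_fields: dict) -> str:
    """Draft when more than half the fields are empty or under 20 words."""
    sparse = sum(1 for text in source_fields.values()
                 if not text or len(text.split()) < 20)
    return "Draft" if sparse > len(source_fields) / 2 else "Specified"

fields = {
    "Repro Steps": "",                       # empty → sparse
    "Description": "Login fails on Safari",  # 4 words → sparse
    "System Info": "macOS 14, Safari 17.4",  # 4 words → sparse
}
print(default_confidence(fields))  # Draft
```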

#### Preconditions Block

Every test case documents preconditions:

| Field | Description | Example |
|-------|-------------|---------|
| **Required Role(s)** | User role(s) or permission level(s) needed | `Admin`, `Property Manager`, `Resident` |
| **Application State** | System/feature state that must be true before step 1 | `User is logged in`, `Feature flag X is enabled` |
| **Test Data** | Specific data that must exist or be created | `Resident account with active lease` |
| **Environment** | Environment-specific requirements | `Staging`, `API key configured` |

Prepend the preconditions to the TC description field in Azure DevOps:

```
**Preconditions**
- Role(s): {roles}
- State: {state}
- Test Data: {data}
- Environment: {env}
```

If locator registry data is available, include relevant locator references in test steps for E2E-related scenarios.
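Building the preconditions string that gets prepended to the description field can be sketched as (a hypothetical helper; field names mirror the table above, with `N/A` as the default for unset fields):

```python
def preconditions_block(roles="N/A", state="N/A", data="N/A", env="N/A"):
    """Render the preconditions block prepended to the ADO description field."""
    return ("**Preconditions**\n"
            f"- Role(s): {roles}\n"
            f"- State: {state}\n"
            f"- Test Data: {data}\n"
            f"- Environment: {env}")

print(preconditions_block(roles="Admin", state="User is logged in"))
```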

---

### Phase 5: Create Test Cases in Azure DevOps

**Dedup gate**: Before creating each TC, check it against the registry from Phase 2b.

For each planned test case, call `testplan_create_test_case` with:

- `project`: the work item's project
- `title`: the test case title — prefixed with `[DRAFT]` if confidence is Draft
- `steps`: formatted as `1. {action}|{expected result}\n2. {action}|{expected result}` — use `|` as the delimiter. **Never pass XML or pre-formatted `<steps>` markup** — the tool generates XML from the plain-text format.
- `priority`: numeric priority (1-4)
- `iterationPath`: the `--iteration-path` override if provided, otherwise the source work item's iteration path
- `areaPath`: the `--area-path` override if provided, otherwise the source work item's area path

**After creating each test case:**

1. Call `wit_add_artifact_link` or `wit_work_items_link` to link the new TC to the source work item using link type `"tested by"`:
   ```
   source work item --[Tested By]--> test case
   ```
2. Call `wit_update_work_item` on the new TC to set `System.Tags` to semicolon-separated tags (e.g., `"Regression; Critical; AutomationCandidate"`).
   - Draft TCs always include `NeedsReview`.

Create all test cases sequentially — capture each new TC ID before proceeding.
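The plain-text `steps` format and the semicolon-separated tag string described above can be sketched with two small helpers (hypothetical names; the `|` delimiter and `; ` separator follow the rules in this phase):

```python
def format_steps(steps):
    """Render [(action, expected), ...] as the numbered pipe-delimited form."""
    return "\n".join(f"{i}. {action}|{expected}"
                     for i, (action, expected) in enumerate(steps, start=1))

def format_tags(tags, draft=False):
    """Semicolon-separated System.Tags value; Draft TCs always get NeedsReview."""
    if draft and "NeedsReview" not in tags:
        tags = list(tags) + ["NeedsReview"]
    return "; ".join(tags)

print(format_steps([("Open the login page", "Login form is visible"),
                    ("Submit valid credentials", "Dashboard loads")]))
print(format_tags(["Regression", "Critical"], draft=True))
```

The output of `format_steps` is what gets passed as `steps` — plain text, never XML.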

---

### Phase 6: Synthesize the Output Report

Save the report to `ai-tasks/ticket-$ARGUMENTS/test-cases.md`.

**Required document structure:**

```markdown
# Test Cases: {work-item-id} — {Work Item Title}

**Generated**: {current date}
**Work Item**: [{work-item-id}]({azure-devops-url}) — {type} | {state}
**Assigned To**: {assigned-to}
**Area Path**: {area path}
**Iteration**: {iteration path}
**Test Source**: {Repro Steps / Acceptance Criteria / Description (fallback)}
**Pipeline Context**: Codebase map: {yes/no}, Locator registry: {yes/no}, Preferences: {yes/no}

---

## Source Analysis

### Work Item Summary
{2-3 sentences describing the work item and what behavior needed to be tested.}

### Key Scenarios Identified
{Bulleted list of distinct testable scenarios extracted before designing test cases.}

### Source Content Notes
{Observations about the quality/completeness of the source material. Were repro steps/AC clear? Did comments add scenarios?}

### Codebase Context Used
{If the codebase map was available: list which documents were read and what context they provided. If not available: note that test cases were generated without codebase context.}

---

## Test Cases Created

### TC-{azure-devops-id}: {title}

**Confidence**: `Specified` or `[DRAFT] — NeedsReview`
**Tags**: `{Smoke}` · `{Regression}` · `{Critical}` · `{AutomationCandidate}` · `{NeedsReview}` *(show only tags that apply)*
**Priority**: {1 – Critical / 2 – High / 3 – Medium / 4 – Low}
**Linked To**: Work Item #{work-item-id} via *Tested By*
**Azure DevOps ID**: {test-case-id}

**Preconditions:**
- **Role(s)**: {required roles or N/A}
- **State**: {required application state or N/A}
- **Test Data**: {required data or N/A}
- **Environment**: {environment requirements or N/A}

**Test Steps:**

| # | Action | Expected Result |
|---|--------|-----------------|
| 1 | {action} | {expected result} |
| 2 | {action} | {expected result} |

{Repeat for each test case.}

---

## Tag Summary

| Tag | Count | Test Case IDs |
|-----|-------|---------------|
| Smoke | {n} | {comma-separated IDs} |
| Regression | {n} | {comma-separated IDs} |
| Critical | {n} | {comma-separated IDs} |
| AutomationCandidate | {n} | {comma-separated IDs} |
| NeedsReview | {n} | {comma-separated IDs} |

---

## Dedup Summary

| Planned Title | Skipped Reason | Existing TC |
|---------------|----------------|-------------|
| {title} | Duplicate of TC #{id} | #{id} — {state} |

{If no duplicates: "No duplicates detected — all test cases were created."}

---

## Traceability

All test cases linked to work item **#{work-item-id}** via *Tested By*.

**Path Overrides Applied**: {If --area-path or --iteration-path was provided, state them. Otherwise: "None — used source work item paths."}
**Confidence Breakdown**: {n} Specified, {n} Draft (NeedsReview)
**Boundary Triplets Generated**: {n} (from {n} detected boundaries)
```

---

### Phase 7: Attach Report to Source Work Item

**If `ADO_MCP_AUTH_TOKEN` is set:**

Upload `test-cases.md` as an attachment:

```bash
# Step 1: Upload the file
ATTACHMENT_URL=$(curl -s \
  --header "Authorization: Basic $(echo -n :${ADO_MCP_AUTH_TOKEN} | base64)" \
  --header "Content-Type: application/octet-stream" \
  --request POST \
  --data-binary "@ai-tasks/ticket-$ARGUMENTS/test-cases.md" \
  "https://dev.azure.com/{org}/{project}/_apis/wit/attachments?fileName=test-cases.md&api-version=7.1" \
  | python3 -c "import sys,json; print(json.load(sys.stdin)['url'])")

# Step 2: Link the attachment to the work item
curl -s \
  --header "Authorization: Basic $(echo -n :${ADO_MCP_AUTH_TOKEN} | base64)" \
  --header "Content-Type: application/json-patch+json" \
  --request PATCH \
  --data "[{\"op\":\"add\",\"path\":\"/relations/-\",\"value\":{\"rel\":\"AttachedFile\",\"url\":\"${ATTACHMENT_URL}\",\"attributes\":{\"comment\":\"Generated test cases report\"}}}]" \
  "https://dev.azure.com/{org}/{project}/_apis/wit/workItems/$ARGUMENTS?api-version=7.1"
```

**If `ADO_MCP_AUTH_TOKEN` is NOT set (fallback):**

Write the full report as HTML to the work item's `Custom.QATestCasesReport` field via `wit_update_work_item`. Include all sections converted to HTML.

Note in the final report which method was used.

---

## Final Report to User

After completing all phases, provide:

1. Brief inline summary (2-3 sentences) of the scenarios covered
2. Full path to the generated file: `ai-tasks/ticket-{id}/test-cases.md`
3. Table of every created TC: ID, title, tags, confidence
4. Counts by tag: Smoke, Regression, Critical, AutomationCandidate, NeedsReview
5. Dedup summary: how many planned TCs were skipped
6. Confidence summary: Specified vs Draft counts
7. Boundary summary: how many boundary triplets were generated
8. Pipeline context: which codebase map documents and locator registry data were used
9. Gaps or assumptions made
10. Path override confirmation (if used)
11. Report delivery confirmation (attached as a file or written to the custom field)

$ARGUMENTS
package/commands/qa-create-test.md
CHANGED

@@ -1,6 +1,6 @@
 # QA Create Test
 
-Create, update, or generate tests from tickets — all in one command. Supports
+Create, update, or generate tests from tickets — all in one command. Supports five modes: generate tests from code analysis, generate tests from a ticket (Jira/Linear/GitHub), create Test Cases in Azure DevOps from a work item, update/improve existing tests, or generate POM files only. Uses Playwright MCP to extract real locators from the live app when available.
 
 ## Usage
 
@@ -14,6 +14,7 @@ Create, update, or generate tests from tickets — all in one command. Supports
 |------|---------|---------|
 | **From code** | Feature name (no URL, no path to tests) | `/qa-create-test login` |
 | **From ticket** | URL, shorthand (#123), or `--ticket` flag | `/qa-create-test https://github.com/org/repo/issues/42` |
+| **Azure DevOps** | `--ado` flag with work item ID or ADO URL | `/qa-create-test --ado 85508` |
 | **Update existing** | Path to existing test files or `--update` flag | `/qa-create-test --update tests/e2e/` |
 | **POM only** | `--pom-only` flag | `/qa-create-test --pom-only src/pages/` |
 
@@ -25,6 +26,10 @@ Create, update, or generate tests from tickets — all in one command. Supports
 - `--ticket <source>` — force ticket mode with: URL, shorthand (#123, org/repo#123), file path, or plain text
 - `--update <path>` — force update mode: audit and improve existing tests at path
 - `--scope fix|improve|add|full` — for update mode only (default: full)
+- `--ado <work-item-id>` — Azure DevOps mode: read a work item and create Test Cases in ADO (accepts an ID or full ADO URL)
+- `--area-path <path>` — (ADO mode) override area path for created test cases (default: source work item's area path)
+- `--iteration-path <path>` — (ADO mode) override iteration path for created test cases (default: source work item's iteration path)
+- `--skip-dedup` — (ADO mode) skip deduplication check against existing linked test cases
 - `--pom-only [path]` — generate only Page Object Model files (BasePage + feature POMs), no test specs
 - `--framework <name>` — override framework auto-detection (playwright, cypress, selenium) — used with --pom-only
 
@@ -33,8 +38,9 @@ Create, update, or generate tests from tickets — all in one command. Supports
 ```
 if --pom-only:
     MODE = "pom-only"
-elif argument matches URL
-
+elif --ado flag OR argument matches ADO URL (dev.azure.com, *.visualstudio.com):
+    MODE = "ado"
+elif argument matches URL pattern (github.com, atlassian.net, linear.app) OR contains "#" + digits OR --ticket flag:
     MODE = "from-ticket"
 elif --update flag OR argument is path to existing test directory/files:
     MODE = "update"
 
@@ -57,6 +63,13 @@ else:
 - Test spec files with `traces_to` fields linking back to ticket ACs
 - VALIDATION_REPORT.md
 
+### Azure DevOps Mode
+- Test Cases created directly in Azure DevOps (via `testplan_create_test_case`)
+- Test Cases linked to the source work item via a *Tested By* relationship
+- Tags applied: `Smoke`, `Regression`, `Critical`, `AutomationCandidate`, `NeedsReview`
+- `ai-tasks/ticket-{id}/test-cases.md` — structured report
+- Report attached to the work item (if `ADO_MCP_AUTH_TOKEN` is set) or written to the `Custom.QATestCasesReport` field (fallback)
+
 ### Update Mode
 - QA_AUDIT_REPORT.md — current quality assessment
 - Improved test files (after user approval)
 
@@ -70,8 +83,8 @@ Parse `$ARGUMENTS` to determine mode using the detection logic above.
 Print mode banner:
 ```
 === QA Create Test ===
-Mode: {from-code | from-ticket | update}
-Target: {feature name | ticket URL | test path}
+Mode: {from-code | from-ticket | ado | update | pom-only}
+Target: {feature name | ticket URL | ADO work item ID | test path}
 App URL: {url or "auto-detect"}
 ===========================
 ```
 
@@ -203,6 +216,34 @@ Key steps in the workflow:
 
 ---
 
+### ADO MODE (Azure DevOps)
+
+Create Test Cases directly in Azure DevOps from a work item. Reads the work item content (repro steps, acceptance criteria, comments, attachments), designs test cases with boundary detection and deduplication, and creates them in ADO with full traceability.
+
+**Prerequisites:** The ADO MCP server must be connected (it provides `wit_get_work_item`, `testplan_create_test_case`, etc.).
+
+Execute the full ADO workflow defined in `@commands/qa-create-test-ado.md`:
+
+1. **Phase 1** — Read pipeline context: CLAUDE.md, MY_PREFERENCES.md, codebase map, locator registry
+2. **Phase 2** — Retrieve the work item with relations, comments, and attachments
+3. **Phase 2b** — Deduplication check against existing linked test cases (skip with `--skip-dedup`)
+4. **Phase 3** — Extract test source content based on work item type (Bug → Repro Steps, User Story → Acceptance Criteria)
+5. **Phase 4** — Design test cases with boundary value detection, tagging rules, confidence scoring, and preconditions
+6. **Phase 5** — Create test cases in ADO via `testplan_create_test_case`, link via *Tested By*, set tags
+7. **Phase 6** — Generate a structured report at `ai-tasks/ticket-{id}/test-cases.md`
+8. **Phase 7** — Attach the report to the source work item
+
+**Key features:**
+- Boundary value triplets: detects `max`, `min`, `limit`, `threshold` keywords with numeric values → generates N-1, N, N+1 test cases
+- Deduplication: checks existing linked TCs before creating, preventing duplicates
+- Confidence scoring: `Specified` (explicit source) vs `Draft` (inferred, tagged `NeedsReview`)
+- Cross-references the codebase map for project-specific context when available
+- Supports `--area-path` and `--iteration-path` overrides
+
+For the complete step-by-step process, see `@commands/qa-create-test-ado.md`.
+
+---
+
 ### UPDATE MODE
 
 1. Read `CLAUDE.md` — quality gates, locator tiers, assertion rules, POM rules.
package/package.json
CHANGED