qaa-agent 1.8.0 → 1.8.5
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.mcp.json +4 -0
- package/CHANGELOG.md +27 -0
- package/README.md +26 -44
- package/bin/install.cjs +253 -0
- package/commands/qa-create-test-ado.md +404 -0
- package/commands/qa-create-test.md +46 -5
- package/package.json +3 -2
package/.mcp.json
CHANGED
package/CHANGELOG.md
CHANGED
@@ -3,6 +3,33 @@
 
 All notable changes to QAA (QA Automation Agent) are documented here.
 
+## [1.8.5] - 2026-04-17
+
+### Added
+
+- **Azure DevOps mode in `/qa-create-test`** — new `--ado` flag enables creating Test Cases directly in Azure DevOps from a work item. Supports work item ID or full ADO URL, auto-detects `dev.azure.com` and `*.visualstudio.com` URLs. Features include: boundary value triplet detection (N-1, N, N+1), deduplication against existing linked TCs, confidence scoring (Specified vs Draft), keyword-based Critical tagging, and a preconditions block per test case.
+- **`/qa-create-test-ado` standalone command** — dedicated command for Azure DevOps test case creation with a 7-phase workflow: retrieve work item with comments/attachments, dedup check, type-based content extraction (Bug → Repro Steps, User Story → Acceptance Criteria), test case design, creation in ADO via `testplan_create_test_case`, structured report generation, and report attachment to the source work item.
+- **ADO-specific flags** — `--area-path`, `--iteration-path` (override paths for created TCs), `--skip-dedup` (skip the deduplication check).
+
+### Changed
+
+- **`/qa-create-test` now supports 5 modes** — from-code, from-ticket, ADO, update, and POM-only (previously 3 modes). Mode detection updated to recognize ADO URLs before ticket URLs to avoid routing conflicts.
+
+## [1.8.1] - 2026-04-16
+
+### Added
+
+- **Context7 MCP integration** — `@upstash/context7-mcp` is now bundled alongside Playwright MCP. The installer registers both MCP servers in the user-scope config (`~/.claude.json`) so they're available in every project on the machine, not just in the QAA repo. Context7 gives every QAA agent on-demand access to up-to-date library documentation (Playwright, Cypress, Jest, Vitest, pytest, and any other framework), keeping generated tests aligned with current APIs instead of outdated training data.
+- **`bin/install.cjs` installer script** — the file was referenced in `package.json` but didn't actually exist on npm, causing `npx qaa-agent` to fail silently (`No bin file found at bin/install.cjs`). The installer now performs three steps on every run: (1) copies agents, commands, skills, templates, workflows, docs, and config files into the chosen scope (`~/.claude/qaa` for global, `./.claude/qaa` for local), (2) registers both MCP servers in `~/.claude.json` with idempotency — existing entries are not duplicated, and (3) deep-merges the QAA permissions into the user's `settings.json` without overwriting their existing settings.
+
+### Changed
+
+- **MCP registration is now user-scope by default** — previously MCPs were defined only in the project-level `.mcp.json`, which meant they only activated when the user opened the QAA repo itself. They now register in `~/.claude.json`, making them available in every Claude Code project on the user's machine. The project-level `.mcp.json` is kept for QAA development purposes but is no longer the source of truth for end users.
+
+### Fixed
+
+- **Silent `npx qaa-agent` failure** — users who installed QAA via npm before this release did not get Playwright or Context7 MCPs registered because the installer script was missing from the published package. Publishing 1.8.1 restores the expected behavior: a single `npx qaa-agent` command copies all files and registers both MCPs globally.
+
 ## [1.8.0] - 2026-04-13
 
 ### Added
package/README.md
CHANGED

@@ -43,7 +43,9 @@ npx qaa-agent
 The interactive installer:
 
 1. Copies agents, commands, skills, templates, and workflows into your runtime directory
-2.
+2. Registers **two MCP servers** in your user-scope config (`~/.claude.json`) so they're available in **all projects**:
+   - [Playwright MCP](https://www.npmjs.com/package/@playwright/mcp) — live browser control for E2E tests and locator extraction
+   - [Context7 MCP](https://www.npmjs.com/package/@upstash/context7-mcp) — up-to-date library documentation on demand
 3. Merges required permissions into `settings.json`
 
 **Supported runtimes:** Claude Code, OpenCode

@@ -55,48 +57,34 @@ The interactive installer:
 - [Node.js](https://nodejs.org/) 18+
 - [Claude Code](https://docs.anthropic.com/en/docs/claude-code) installed
 
-###
+### Bundled MCP servers
 
-
+Both MCP servers are **registered automatically** in `~/.claude.json` when you run `npx qaa-agent`. No manual setup required — once installed, they're available in every Claude Code project on your machine.
 
-
+#### Playwright MCP — live browser control
 
-
-<summary><strong>VS Code (Claude Code extension)</strong></summary>
+Uses [`@playwright/mcp`](https://www.npmjs.com/package/@playwright/mcp) to:
 
-
-
+- Open a real browser and navigate your running app
+- Extract actual locators (`data-testid`, ARIA roles, labels) from live pages
+- Run E2E tests, capture failures, and auto-fix locator mismatches
+- Build a persistent **Locator Registry** (`.qa-output/locators/`) that caches real locators across features
 
-
-{
-  "claude-code.mcpServers": {
-    "playwright": {
-      "command": "npx",
-      "args": ["@playwright/mcp@latest"]
-    }
-  }
-}
-```
+#### Context7 MCP — up-to-date library docs
 
-
+Uses [`@upstash/context7-mcp`](https://www.npmjs.com/package/@upstash/context7-mcp) to:
 
-
-
-
-    "playwright": {
-      "command": "npx",
-      "args": ["@playwright/mcp@latest"]
-    }
-  }
-}
-```
+- Fetch the latest documentation for Playwright, Cypress, Jest, Vitest, pytest, and any other library the agent is working with
+- Keep generated tests aligned with current framework APIs instead of outdated training data
+- Free tier: ~60 requests/hour, ~3,300 tokens/query
 
-
+#### Verifying the MCPs are connected
 
-
-<summary><strong>Claude Code CLI</strong></summary>
+Open Claude Code in any project and type `/mcp`. You should see both `playwright` and `context7` listed as connected.
 
-
+#### Manual config (fallback)
+
+If for any reason the automatic registration fails, you can add the servers manually to `~/.claude.json`:
 
 ```json
 {

@@ -104,21 +92,15 @@ Add to `~/.claude.json` (user-scope, all projects):
     "playwright": {
       "command": "npx",
       "args": ["@playwright/mcp@latest"]
+    },
+    "context7": {
+      "command": "npx",
+      "args": ["-y", "@upstash/context7-mcp@latest"]
     }
   }
 }
 ```
 
-Or add a `.mcp.json` file in your project root for project-scope only.
-
-</details>
-
-Once configured, Playwright MCP enables QAA to:
-- Open a real browser and navigate your running app
-- Extract actual locators (`data-testid`, ARIA roles, labels) from live pages
-- Run E2E tests, capture failures, and auto-fix locator mismatches
-- Build a persistent **Locator Registry** (`.qa-output/locators/`) that caches real locators across features
-
 ---
 
 ## Quick Start

@@ -328,7 +310,7 @@ qaa-agent/
   bin/           # Installer and CLI tools
   docs/          # User documentation
   CLAUDE.md      # QA standards (read by every agent)
-  .mcp.json      # Playwright MCP server config
+  .mcp.json      # Playwright + Context7 MCP server config
   settings.json  # Claude Code permissions
 ```
package/bin/install.cjs
ADDED

@@ -0,0 +1,253 @@

```js
#!/usr/bin/env node

/**
 * QAA Agent Installer
 *
 * Installs QAA (QA Automation Agent) into the user's Claude Code environment.
 *
 * What it does:
 *   1. Copies agents, commands, skills, templates, workflows, docs, bin, and config files
 *      to the chosen install directory (global ~/.claude/qaa or local ./.claude/qaa)
 *   2. Registers Playwright MCP and Context7 MCP as global MCP servers
 *   3. Merges required permissions into Claude Code settings.json
 *
 * Usage:
 *   npx qaa-agent
 */

const fs = require('fs');
const path = require('path');
const readline = require('readline');
const { execSync } = require('child_process');

// ── Helpers ──────────────────────────────────────────────────────────────────

function log(msg) { console.log(`  ${msg}`); }
function success(msg) { console.log(`  ✓ ${msg}`); }
function warn(msg) { console.log(`  ⚠ ${msg}`); }
function fail(msg) { console.error(`  ✗ ${msg}`); }

function ask(question) {
  const rl = readline.createInterface({ input: process.stdin, output: process.stdout });
  return new Promise(resolve => {
    rl.question(`  ${question} `, answer => {
      rl.close();
      resolve(answer.trim());
    });
  });
}

function copyDirRecursive(src, dest) {
  if (!fs.existsSync(src)) return 0;
  fs.mkdirSync(dest, { recursive: true });
  let count = 0;
  const entries = fs.readdirSync(src, { withFileTypes: true });
  for (const entry of entries) {
    const srcPath = path.join(src, entry.name);
    const destPath = path.join(dest, entry.name);
    if (entry.isDirectory()) {
      count += copyDirRecursive(srcPath, destPath);
    } else {
      fs.copyFileSync(srcPath, destPath);
      count++;
    }
  }
  return count;
}

function deepMerge(target, source) {
  for (const key of Object.keys(source)) {
    if (
      source[key] && typeof source[key] === 'object' && !Array.isArray(source[key]) &&
      target[key] && typeof target[key] === 'object' && !Array.isArray(target[key])
    ) {
      deepMerge(target[key], source[key]);
    } else if (Array.isArray(source[key]) && Array.isArray(target[key])) {
      // Merge arrays without duplicates
      const merged = [...new Set([...target[key], ...source[key]])];
      target[key] = merged;
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// ── MCP Registration ─────────────────────────────────────────────────────────

function registerMcpServers(claudeJsonPath) {
  const mcpServers = {
    playwright: {
      command: 'npx',
      args: ['@playwright/mcp@latest']
    },
    context7: {
      command: 'npx',
      args: ['-y', '@upstash/context7-mcp@latest']
    }
  };

  let config = {};
  if (fs.existsSync(claudeJsonPath)) {
    try {
      config = JSON.parse(fs.readFileSync(claudeJsonPath, 'utf-8'));
    } catch {
      config = {};
    }
  }

  if (!config.mcpServers) config.mcpServers = {};

  let added = [];
  for (const [name, serverConfig] of Object.entries(mcpServers)) {
    if (!config.mcpServers[name]) {
      config.mcpServers[name] = serverConfig;
      added.push(name);
    }
  }

  fs.writeFileSync(claudeJsonPath, JSON.stringify(config, null, 2) + '\n');
  return added;
}

// ── Settings Merge ───────────────────────────────────────────────────────────

function mergeSettings(installDir, packageDir) {
  const srcSettings = path.join(packageDir, 'settings.json');
  if (!fs.existsSync(srcSettings)) return false;

  const claudeDir = path.dirname(installDir);
  const destSettings = path.join(claudeDir, 'settings.json');

  const source = JSON.parse(fs.readFileSync(srcSettings, 'utf-8'));

  let target = {};
  if (fs.existsSync(destSettings)) {
    try {
      target = JSON.parse(fs.readFileSync(destSettings, 'utf-8'));
    } catch {
      target = {};
    }
  }

  deepMerge(target, source);
  fs.writeFileSync(destSettings, JSON.stringify(target, null, 2) + '\n');
  return true;
}

// ── Main ─────────────────────────────────────────────────────────────────────

async function main() {
  console.log('');
  console.log('  ╔═══════════════════════════════════════╗');
  console.log('  ║  QAA — QA Automation Agent Installer  ║');
  console.log('  ╚═══════════════════════════════════════╝');
  console.log('');

  // Determine package root (where the npm package files are)
  const packageDir = path.resolve(__dirname, '..');

  // Check that package files exist
  const requiredDirs = ['agents', 'commands', 'skills'];
  const missing = requiredDirs.filter(d => !fs.existsSync(path.join(packageDir, d)));
  if (missing.length > 0) {
    fail(`Package incomplete — missing: ${missing.join(', ')}`);
    process.exit(1);
  }

  // Ask install scope
  console.log('  Install scope:');
  console.log('    1) Global — ~/.claude/qaa (available in all projects)');
  console.log('    2) Local  — ./.claude/qaa (this project only)');
  console.log('');
  const scopeChoice = await ask('Choose [1/2] (default: 1):');
  const isGlobal = scopeChoice !== '2';

  const homeDir = process.env.HOME || process.env.USERPROFILE;
  const claudeDir = isGlobal
    ? path.join(homeDir, '.claude')
    : path.join(process.cwd(), '.claude');
  const installDir = path.join(claudeDir, 'qaa');

  // Check for existing installation
  if (fs.existsSync(installDir)) {
    const overwrite = await ask('QAA already installed at this location. Overwrite? [y/N]:');
    if (overwrite.toLowerCase() !== 'y') {
      log('Installation cancelled.');
      process.exit(0);
    }
  }

  console.log('');
  log(`Installing to: ${installDir}`);
  console.log('');

  // ── Step 1: Copy files ──────────────────────────────────────────────────

  const dirsToCopy = ['agents', 'commands', 'skills', 'templates', 'workflows', 'docs', 'bin'];
  const filesToCopy = ['CLAUDE.md', 'CHANGELOG.md', '.mcp.json', 'package.json'];

  let totalFiles = 0;

  for (const dir of dirsToCopy) {
    const src = path.join(packageDir, dir);
    const dest = path.join(installDir, dir);
    if (fs.existsSync(src)) {
      const count = copyDirRecursive(src, dest);
      success(`${dir}/ — ${count} files`);
      totalFiles += count;
    }
  }

  for (const file of filesToCopy) {
    const src = path.join(packageDir, file);
    const dest = path.join(installDir, file);
    if (fs.existsSync(src)) {
      fs.mkdirSync(path.dirname(dest), { recursive: true });
      fs.copyFileSync(src, dest);
      success(file);
      totalFiles++;
    }
  }

  console.log('');

  // ── Step 2: Register MCP servers ────────────────────────────────────────

  const claudeJsonPath = path.join(homeDir, '.claude.json');
  const addedMcps = registerMcpServers(claudeJsonPath);

  if (addedMcps.length > 0) {
    success(`MCP servers registered: ${addedMcps.join(', ')} → ${claudeJsonPath}`);
  } else {
    success('MCP servers already configured (playwright, context7)');
  }

  // ── Step 3: Merge settings ──────────────────────────────────────────────

  const settingsMerged = mergeSettings(installDir, packageDir);
  if (settingsMerged) {
    success('Permissions merged into settings.json');
  }

  // ── Done ────────────────────────────────────────────────────────────────

  console.log('');
  console.log('  ╔═══════════════════════════════════════╗');
  console.log('  ║        Installation complete!         ║');
  console.log('  ╚═══════════════════════════════════════╝');
  console.log('');
  log(`${totalFiles} files installed to ${installDir}`);
  log('MCP servers: playwright, context7');
  log('');
  log('Restart Claude Code, then run any QAA command:');
  log('  /qa-start --dev-repo ./your-project');
  log('  /qa-create-test login');
  log('  /qa-map');
  console.log('');
}

main().catch(err => {
  fail(err.message);
  process.exit(1);
});
```
package/commands/qa-create-test-ado.md
ADDED

@@ -0,0 +1,404 @@

# QA Create Test — Azure DevOps

Retrieve an Azure DevOps work item, analyze its content, and generate well-structured Test Cases directly in Azure DevOps using the ADO MCP tools. Each test case is tagged for test plan membership (Smoke, Regression, Critical) and linked back to the source work item for full traceability. Integrates with the QAA pipeline: reads the codebase map, locator registry, and user preferences for context-aware test case generation.

## Usage

```
/qa-create-test-ado <work-item-id> [--area-path=<path>] [--iteration-path=<path>] [--skip-map] [--skip-dedup] [--app-url <url>]
```

### Arguments

| Parameter | Purpose | Default |
|-----------|---------|---------|
| `<work-item-id>` | Azure DevOps work item ID to generate test cases from | Required |
| `--area-path=<path>` | Override area path for all created test artifacts | Source work item's area path |
| `--iteration-path=<path>` | Override iteration path for all created test artifacts | Source work item's iteration path |
| `--skip-map` | Skip codebase map check and proceed without project context | false |
| `--skip-dedup` | Skip deduplication check against existing linked test cases | false |
| `--app-url <url>` | URL of running application for locator extraction via Playwright MCP | auto-detect |

## What It Produces

- Test Cases created directly in Azure DevOps (via `testplan_create_test_case`)
- Test Cases linked to the source work item via a *Tested By* relationship
- Tags applied: `Smoke`, `Regression`, `Critical`, `AutomationCandidate`, `NeedsReview`
- `ai-tasks/ticket-{id}/test-cases.md` — structured report
- Report attached to the work item (if `ADO_MCP_AUTH_TOKEN` is set) or written to the `Custom.QATestCasesReport` field (fallback)
---

## Process

### Phase 1: Read Pipeline Context

Before retrieving the work item, read QAA pipeline artifacts for context-aware generation.

1. **Read `CLAUDE.md`** — POM rules, locator tiers, assertion rules, naming conventions, quality gates, test spec rules.

2. **Read user preferences** — `~/.claude/qaa/MY_PREFERENCES.md` (if it exists). User overrides win over defaults.

3. **Check for the codebase map** (`.qa-output/codebase/`):
   - Look for: `CODE_PATTERNS.md`, `API_CONTRACTS.md`, `TEST_SURFACE.md`, `TESTABILITY.md`, `RISK_MAP.md`, `CRITICAL_PATHS.md`
   - If at least 2 exist: read them all for project context (naming conventions, API shapes, testable surfaces, risk areas).
   - If NONE exist and `--skip-map` was not passed: warn the user that test cases will lack project context and suggest running `/qa-map` first. Continue anyway (ADO test cases are higher-level than code-level tests).

4. **Check the locator registry** — `.qa-output/locators/LOCATOR_REGISTRY.md` (if it exists):
   - If locators exist for pages related to the work item's feature: reference them in test step expected results (e.g., "Verify element `[data-testid='login-submit-btn']` is visible").
   - If `--app-url` is provided and locators are missing: use Playwright MCP to extract locators from the live app before designing test steps:
     ```
     mcp__playwright__browser_navigate({ url: "{app_url}/{feature_path}" })
     mcp__playwright__browser_snapshot()
     ```
   - Write extracted locators to `.qa-output/locators/{feature}.locators.md` and update the registry.
---

### Phase 2: Retrieve the Work Item

Use `wit_get_work_item` with `expand: "relations"` to fetch the full work item:

- Capture: **title**, **type** (`Bug`, `User Story`, `Ticket`), **state**, **assigned-to**, **area path**, **iteration path**
- Capture all relevant content fields based on type (see Phase 3)
- Note the project for all subsequent calls

**Also retrieve comments** using `wit_list_work_item_comments`:

- Read all comments in chronological order
- Look for: acceptance criteria added in comments, QA notes, scope clarifications, tester feedback, or any conditions of satisfaction mentioned informally
- These often contain implied test cases not captured in the formal fields

**Also check attachments** from the relations list (entries where `rel` equals `AttachedFile`):

- Filter to `.csv` and `.txt` files (case-insensitive) by inspecting `attributes.name`
- If found, download via:
  ```bash
  curl -s --user ":{AZURE_DEVOPS_PAT}" "{attachment-url}"
  ```
- Read the content for test data, expected values, error logs, or sample datasets that define expected behavior
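As a minimal sketch of the attachment filter described above (the relation objects here are illustrative samples mirroring the ADO relation shape, not a real API response):

```javascript
// Sketch: pick .csv/.txt attachments out of a work item's relations list.
// Relations without an `attributes.name` (e.g. work item links) are skipped.
function findTextAttachments(relations) {
  return relations
    .filter(r => r.rel === 'AttachedFile')
    .filter(r => /\.(csv|txt)$/i.test((r.attributes && r.attributes.name) || ''))
    .map(r => ({ name: r.attributes.name, url: r.url }));
}
```

The case-insensitive regex handles names like `data.CSV` as well as `notes.txt`.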
---

### Phase 2b: Deduplication Check — Query Existing Test Cases

Before generating any new test cases, check whether the source work item already has linked test cases to prevent duplicates.

1. Inspect the relations returned in Phase 2 — filter for link type `"Microsoft.VSTS.Common.TestedBy-Forward"` (i.e., *Tested By* links).
2. For each linked test case ID found, call `wit_get_work_item` to retrieve its **title** and **state**.
3. Build an **existing TC registry** — a list of `{ id, title, state }` for all currently linked test cases.
4. In Phase 5, before calling `testplan_create_test_case` for each planned TC, compare its title (normalized: lowercase, trimmed) against every title in the registry.
   - **If a match is found** and the existing TC is in state `Design`, `Ready`, or `Closed`: skip creation and log `"Skipped — duplicate of TC #{id}"`.
   - **If a match is found** but the existing TC is in state `Removed`: create the new TC anyway (the old one was intentionally discarded).
   - **If no match**: proceed with creation.
5. Include a **Dedup Summary** section in the output report.

Skip this check with `--skip-dedup`.
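The title-matching rule above can be sketched as follows (a sketch only; the registry entries are illustrative, and the blocking states are the ones listed in step 4):

```javascript
// States in which an existing linked TC blocks creation of a same-title TC.
const BLOCKING_STATES = new Set(['Design', 'Ready', 'Closed']);

// Normalize exactly as described: lowercase, trimmed.
function normalizeTitle(title) {
  return title.trim().toLowerCase();
}

// registry: [{ id, title, state }, ...] built from Tested By links.
function isDuplicate(plannedTitle, registry) {
  const wanted = normalizeTitle(plannedTitle);
  return registry.some(tc =>
    normalizeTitle(tc.title) === wanted && BLOCKING_STATES.has(tc.state)
  );
}
```

A TC in state `Removed` deliberately does not block creation, matching the rule above.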
---

### Phase 3: Identify Work Item Type and Extract Test Source Content

Apply the correct extraction strategy based on work item type:

#### If type is `Bug` or `Ticket`:

Primary source — **Repro Steps** (`Microsoft.VSTS.TCM.ReproSteps`):

- Each distinct action sequence is a candidate test case
- The repro steps define the *negative path* (what triggers the bug)
- Derive the *positive/fix-verification path* by inverting the expected outcome
- Also read: **System Info** (`Microsoft.VSTS.TCM.SystemInfo`), **Description**, **QA Notes** (`CIIScrum.QANotes`)
- Check `Custom.Whatisexpectedtohappen` and `Custom.Whatisactuallyhappening` to anchor pass/fail assertions

Secondary sources:

- Comments for tester observations or specific scenarios to cover
- Attachments for error data or sample inputs

#### If type is `User Story`:

Primary source — **Acceptance Criteria** (`Microsoft.VSTS.Common.AcceptanceCriteria`):

- Each acceptance criterion (Given/When/Then or checklist) maps to one or more test cases
- Also read: **Description** for context and implied behaviors

Secondary sources:

- Comments for clarifications, edge cases raised in refinement, or stakeholder scenarios
- Attachments for wireframes described in text, sample data, or business rules documents

#### If the type is unrecognized or fields are empty:

Fall back to **Description** as the primary source. Extract any stated behaviors, expected outcomes, or constraints. Note the fallback in the output.

**Cross-reference with the codebase map** (if available):

- Match mentioned components/features against `TEST_SURFACE.md` entry points
- Check `RISK_MAP.md` for the risk level of affected areas
- Use `API_CONTRACTS.md` for exact endpoint shapes if the work item mentions API behavior
- Use `CODE_PATTERNS.md` to align test step language with project conventions
---

### Phase 4: Analyze and Design Test Cases

Before creating anything in Azure DevOps, plan out all test cases.

**For each distinct scenario identified, determine:**

1. **Test Case Title** — concise, action-oriented name (e.g., "Verify guest pass entry counter resets at midnight")
2. **Steps** — formatted as `{step action} | {expected result}` per step, using `|` as the delimiter
3. **Priority** — 1 (Critical), 2 (High), 3 (Medium), 4 (Low)
4. **Tags** — one or more of: `Smoke`, `Regression`, `Critical`, `AutomationCandidate`, `NeedsReview`
5. **Preconditions** — required setup before executing the test
6. **Confidence** — `Specified` or `Draft`

**Minimum test case coverage per work item type:**

| Scenario Type | Bug/Ticket | User Story |
|---------------|-----------|------------|
| Happy path (fix verified / AC met) | Required | Required per AC item |
| Negative / error path | Required (original repro) | Where AC implies failure states |
| Boundary / edge cases | If data-driven | If AC contains limits or conditions |
| Boundary value triplets (N-1, N, N+1) | If limits detected | If AC contains limits/ranges |
| Regression guard (related area) | Required | Required |

#### Boundary Value Detection

Scan all source content for **boundary keyword triggers**:

> `max`, `min`, `limit`, `threshold`, `cap`, `ceiling`, `floor`, `range`, `between`, `up to`, `at most`, `at least`, `no more than`, `no fewer than`, `maximum`, `minimum`, `exactly`, `exceeds`, `boundary`

When a trigger is found alongside a numeric value **N**:

1. **Generate three test cases** (the boundary triplet):
   - **N - 1** — just below the boundary
   - **N** — exactly at the boundary
   - **N + 1** — just above the boundary
2. Title them clearly: e.g., `"Verify entry limit at 99 (below threshold)"`, `"...at 100 (at threshold)"`, `"...at 101 (above threshold)"`.
3. Tag all three with `Regression`.
4. If the boundary is on a critical-path field (per `CRITICAL_PATHS.md` or keyword detection), also tag `Critical`.

If the source mentions a range, generate boundary triplets for **both** ends.
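The triplet expansion above can be sketched as a small helper (a sketch under the naming convention shown in step 2; the `label` wording is illustrative):

```javascript
// Expand a detected limit N into the N-1 / N / N+1 boundary triplet,
// all tagged Regression per the rules above.
function boundaryTriplet(n, label) {
  return [
    { value: n - 1, title: `Verify ${label} at ${n - 1} (below threshold)` },
    { value: n,     title: `Verify ${label} at ${n} (at threshold)` },
    { value: n + 1, title: `Verify ${label} at ${n + 1} (above threshold)` }
  ].map(tc => ({ ...tc, tags: ['Regression'] }));
}
```

For a range, this helper would simply be called once per endpoint.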
|
|
180
|
+
#### Tagging Rules
|
|
181
|
+
|
|
182
|
+
| Tag | Assign when... |
|
|
183
|
+
|-----|---------------|
|
|
184
|
+
| `Smoke` | Verifies core, user-facing functionality that must work for the app to be usable at all. Limit to the most essential 1-2 cases per work item. |
|
|
185
|
+
| `Regression` | Guards against the specific bug or behavior being re-introduced. Every fix-verification test for a Bug/Ticket should be tagged. For User Stories, tag tests covering AC that touches shared or high-traffic code paths. |
|
|
186
|
+
| `Critical` | Covers functionality whose failure would directly impact revenue, security, data integrity, or legal compliance. **Also apply when critical keywords are detected** (see Keyword-Based Critical Tagging below). Apply conservatively. |
|
|
187
|
+
| `AutomationCandidate` | Test has: (a) deterministic steps with no subjective judgment, (b) assertions based on concrete data/state, (c) no manual-only prerequisites. Advisory only — QA confirms. |
|
|
188
|
+
|
|
189
|
+
**Do not assign Smoke to every test case.** Smoke tests are a small, fast-running set.
|
|
190
|
+
|
|
191
|
+
#### Keyword-Based Critical Tagging
|
|
192
|
+
|
|
193
|
+
Automatically tag as `Critical` when any of the following keywords appear in the source content:
|
|
194
|
+
|
|
195
|
+
> `auth`, `authentication`, `login`, `password`, `OAuth`, `SSO`, `payment`, `billing`, `charge`, `invoice`, `PII`, `personal data`, `SSN`, `date of birth`, `security`, `encryption`, `token`, `certificate`, `data integrity`, `transaction`, `rollback`, `compliance`, `HIPAA`, `GDPR`, `SOC`, `audit`, `permission`, `role-based`, `access control`
|
|
196
|
+
|
|
197
|
+
Cross-reference with `RISK_MAP.md` (if available) for additional risk-based tagging.
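A whole-word, case-insensitive match is assumed here (the spec only says the keywords "appear"); this sketch shows one way to implement the check:

```python
import re

# Keyword list taken verbatim from the spec above, lowercased for matching
CRITICAL_KEYWORDS = [
    "auth", "authentication", "login", "password", "oauth", "sso",
    "payment", "billing", "charge", "invoice", "pii", "personal data",
    "ssn", "date of birth", "security", "encryption", "token",
    "certificate", "data integrity", "transaction", "rollback",
    "compliance", "hipaa", "gdpr", "soc", "audit", "permission",
    "role-based", "access control",
]

def critical_hits(source_text: str) -> list[str]:
    """Return the critical keywords found in the source (whole words only,
    so 'password' does not spuriously match 'sso')."""
    text = source_text.lower()
    return [
        kw for kw in CRITICAL_KEYWORDS
        if re.search(r"(?<!\w)" + re.escape(kw) + r"(?!\w)", text)
    ]

print(critical_hits("Resident login fails after password reset"))
```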

#### Confidence Scoring

| Confidence | Criteria | Behavior |
|------------|----------|----------|
| **Specified** | Source content explicitly describes the scenario, expected outcome, and data. | Create the TC normally. |
| **Draft** | Scenario is implied or partially described — inferred from context or sparse source. | Prefix TC title with `[DRAFT]`. Add `NeedsReview` tag. Add a final step with action `"Review — this test case was auto-generated from sparse source material and requires QA validation before execution."` and expected result `"QA has reviewed and confirmed or updated the steps."` |

**Threshold**: If more than 50% of the source content fields are empty or contain fewer than 20 words, default all inferred TCs to Draft.
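The threshold rule above reduces to a simple ratio check; this is a sketch, assuming the source fields arrive as a name-to-text mapping (the field names shown are hypothetical):

```python
def default_to_draft(fields: dict[str, str]) -> bool:
    """True when more than 50% of source fields are empty or under 20 words."""
    if not fields:
        return True  # no source content at all: everything is inferred
    sparse = sum(1 for text in fields.values() if len(text.split()) < 20)
    return sparse / len(fields) > 0.5

print(default_to_draft({"repro_steps": "", "ac": "short text", "desc": "also short"}))  # True
```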

#### Preconditions Block

Every test case documents preconditions:

| Field | Description | Example |
|-------|-------------|---------|
| **Required Role(s)** | User role(s) or permission level(s) needed | `Admin`, `Property Manager`, `Resident` |
| **Application State** | System/feature state that must be true before step 1 | `User is logged in`, `Feature flag X is enabled` |
| **Test Data** | Specific data that must exist or be created | `Resident account with active lease` |
| **Environment** | Environment-specific requirements | `Staging`, `API key configured` |

Prepend preconditions to the TC description field in Azure DevOps:

```
**Preconditions**
- Role(s): {roles}
- State: {state}
- Test Data: {data}
- Environment: {env}
```
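A small helper can render the template above; this is a sketch (the `N/A` defaults are an assumption consistent with the report format later in this document):

```python
def preconditions_block(roles="N/A", state="N/A", data="N/A", env="N/A") -> str:
    """Render the preconditions block prepended to the ADO description field."""
    return (
        "**Preconditions**\n"
        f"- Role(s): {roles}\n"
        f"- State: {state}\n"
        f"- Test Data: {data}\n"
        f"- Environment: {env}"
    )

print(preconditions_block(roles="Admin", state="User is logged in"))
```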

If locator registry data is available, include relevant locator references in test steps for E2E-related scenarios.

---

### Phase 5: Create Test Cases in Azure DevOps

**Dedup gate**: Before creating each TC, check against the registry from Phase 2b.
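One plausible dedup check, sketched under the assumption that titles are compared after normalization (lowercasing, stripping the `[DRAFT]` prefix and punctuation — the exact matching rule is not specified here):

```python
import re

def normalize(title: str) -> str:
    """Normalize a TC title for dedup comparison."""
    title = re.sub(r"\[draft\]", "", title, flags=re.I)
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def is_duplicate(planned_title: str, existing_titles: list[str]) -> bool:
    """True if a planned title matches any already-linked TC after normalization."""
    existing = {normalize(t) for t in existing_titles}
    return normalize(planned_title) in existing

print(is_duplicate("Verify entry limit at 100",
                   ["[DRAFT] Verify entry limit at 100!"]))  # True
```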

For each planned test case, call `testplan_create_test_case` with:

- `project`: the work item's project
- `title`: the test case title — prefixed with `[DRAFT]` if confidence is Draft
- `steps`: formatted as `1. {action}|{expected result}\n2. {action}|{expected result}` — use `|` as delimiter. **Never pass XML or pre-formatted `<steps>` markup** — the tool generates XML from plain-text format.
- `priority`: numeric priority (1-4)
- `iterationPath`: use `--iteration-path` override if provided, otherwise source work item's iteration path
- `areaPath`: use `--area-path` override if provided, otherwise source work item's area path
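The `steps` format above can be built from (action, expected result) pairs; a minimal sketch (the example step text is illustrative):

```python
def format_steps(steps: list[tuple[str, str]]) -> str:
    """Build the plain-text steps string expected by `testplan_create_test_case`:
    numbered lines, `|` between action and expected result, no XML markup."""
    return "\n".join(
        f"{i}. {action}|{expected}"
        for i, (action, expected) in enumerate(steps, start=1)
    )

print(format_steps([
    ("Open the login page", "Login form is displayed"),
    ("Submit valid credentials", "User lands on the dashboard"),
]))
```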

**After creating each test case:**

1. Call `wit_add_artifact_link` or `wit_work_items_link` to link the new TC to the source work item using link type `"tested by"`:

   ```
   source work item --[Tested By]--> test case
   ```

2. Call `wit_update_work_item` on the new TC to set `System.Tags` to semicolon-separated tags (e.g., `"Regression; Critical; AutomationCandidate"`).
   - Draft TCs always include `NeedsReview`.

Create all test cases sequentially — capture each new TC ID before proceeding.
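Building the `System.Tags` value is a one-liner; this sketch also encodes the rule that Draft TCs always carry `NeedsReview`:

```python
def tags_value(tags: list[str], draft: bool = False) -> str:
    """Semicolon-separated System.Tags value; Draft TCs always get NeedsReview."""
    result = list(tags)
    if draft and "NeedsReview" not in result:
        result.append("NeedsReview")
    return "; ".join(result)

print(tags_value(["Regression", "Critical"], draft=True))  # Regression; Critical; NeedsReview
```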

---

### Phase 6: Synthesize the Output Report

Save the report to `ai-tasks/ticket-$ARGUMENTS/test-cases.md`.

**Required document structure:**

```markdown
# Test Cases: {work-item-id} — {Work Item Title}

**Generated**: {current date}
**Work Item**: [{work-item-id}]({azure-devops-url}) — {type} | {state}
**Assigned To**: {assigned-to}
**Area Path**: {area path}
**Iteration**: {iteration path}
**Test Source**: {Repro Steps / Acceptance Criteria / Description (fallback)}
**Pipeline Context**: Codebase map: {yes/no}, Locator registry: {yes/no}, Preferences: {yes/no}

---

## Source Analysis

### Work Item Summary
{2-3 sentences describing the work item and what behavior needed to be tested.}

### Key Scenarios Identified
{Bulleted list of distinct testable scenarios extracted before designing test cases.}

### Source Content Notes
{Observations about quality/completeness of source material. Were repro steps/AC clear? Did comments add scenarios?}

### Codebase Context Used
{If codebase map was available: list which documents were read and what context they provided. If not available: note that test cases were generated without codebase context.}

---

## Test Cases Created

### TC-{azure-devops-id}: {title}

**Confidence**: `Specified` or `[DRAFT] — NeedsReview`
**Tags**: `{Smoke}` · `{Regression}` · `{Critical}` · `{AutomationCandidate}` · `{NeedsReview}` *(show only tags that apply)*
**Priority**: {1 – Critical / 2 – High / 3 – Medium / 4 – Low}
**Linked To**: Work Item #{work-item-id} via *Tested By*
**Azure DevOps ID**: {test-case-id}

**Preconditions:**
- **Role(s)**: {required roles or N/A}
- **State**: {required application state or N/A}
- **Test Data**: {required data or N/A}
- **Environment**: {environment requirements or N/A}

**Test Steps:**

| # | Action | Expected Result |
|---|--------|-----------------|
| 1 | {action} | {expected result} |
| 2 | {action} | {expected result} |

{Repeat for each test case.}

---

## Tag Summary

| Tag | Count | Test Case IDs |
|-----|-------|---------------|
| Smoke | {n} | {comma-separated IDs} |
| Regression | {n} | {comma-separated IDs} |
| Critical | {n} | {comma-separated IDs} |
| AutomationCandidate | {n} | {comma-separated IDs} |
| NeedsReview | {n} | {comma-separated IDs} |

---

## Dedup Summary

| Planned Title | Skipped Reason | Existing TC |
|---------------|----------------|-------------|
| {title} | Duplicate of TC #{id} | #{id} — {state} |

{If no duplicates: "No duplicates detected — all test cases were created."}

---

## Traceability

All test cases linked to work item **#{work-item-id}** via *Tested By*.

**Path Overrides Applied**: {If --area-path or --iteration-path provided, state them. Otherwise: "None — used source work item paths."}
**Confidence Breakdown**: {n} Specified, {n} Draft (NeedsReview)
**Boundary Triplets Generated**: {n} (from {n} detected boundaries)
```

---

### Phase 7: Attach Report to Source Work Item

**If `ADO_MCP_AUTH_TOKEN` is set:**

Upload `test-cases.md` as an attachment:

```bash
# Step 1: Upload file (`tr -d '\n'` prevents base64 from line-wrapping long PATs)
ATTACHMENT_URL=$(curl -s \
  --header "Authorization: Basic $(echo -n ":${ADO_MCP_AUTH_TOKEN}" | base64 | tr -d '\n')" \
  --header "Content-Type: application/octet-stream" \
  --request POST \
  --data-binary "@ai-tasks/ticket-$ARGUMENTS/test-cases.md" \
  "https://dev.azure.com/{org}/{project}/_apis/wit/attachments?fileName=test-cases.md&api-version=7.1" \
  | python3 -c "import sys,json; print(json.load(sys.stdin)['url'])")

# Step 2: Link attachment to work item
curl -s \
  --header "Authorization: Basic $(echo -n ":${ADO_MCP_AUTH_TOKEN}" | base64 | tr -d '\n')" \
  --header "Content-Type: application/json-patch+json" \
  --request PATCH \
  --data "[{\"op\":\"add\",\"path\":\"/relations/-\",\"value\":{\"rel\":\"AttachedFile\",\"url\":\"${ATTACHMENT_URL}\",\"attributes\":{\"comment\":\"Generated test cases report\"}}}]" \
  "https://dev.azure.com/{org}/{project}/_apis/wit/workItems/$ARGUMENTS?api-version=7.1"
```

**If `ADO_MCP_AUTH_TOKEN` is NOT set (fallback):**

Write the full report as HTML to the work item's `Custom.QATestCasesReport` field via `wit_update_work_item`. Include all sections converted to HTML.

Note in the final report which method was used.

---

## Final Report to User

After completing all phases, provide:

1. Brief inline summary (2-3 sentences) of scenarios covered
2. Full path to generated file: `ai-tasks/ticket-{id}/test-cases.md`
3. Table of every created TC: ID, title, tags, confidence
4. Counts by tag: Smoke, Regression, Critical, AutomationCandidate, NeedsReview
5. Dedup summary: how many planned TCs were skipped
6. Confidence summary: Specified vs Draft counts
7. Boundary summary: how many boundary triplets generated
8. Pipeline context: which codebase map documents and locator registry data were used
9. Gaps or assumptions made
10. Path override confirmation (if used)
11. Report delivery confirmation (attached as file or written to custom field)

$ARGUMENTS

package/commands/qa-create-test.md
CHANGED

@@ -1,6 +1,6 @@
 # QA Create Test
 
-Create, update, or generate tests from tickets — all in one command. Supports
+Create, update, or generate tests from tickets — all in one command. Supports five modes: generate tests from code analysis, generate tests from a ticket (Jira/Linear/GitHub), create Test Cases in Azure DevOps from a work item, update/improve existing tests, or generate POM files only. Uses Playwright MCP to extract real locators from the live app when available.
 
 ## Usage
 
@@ -14,6 +14,7 @@ Create, update, or generate tests from tickets — all in one command. Supports
 |------|---------|---------|
 | **From code** | Feature name (no URL, no path to tests) | `/qa-create-test login` |
 | **From ticket** | URL, shorthand (#123), or `--ticket` flag | `/qa-create-test https://github.com/org/repo/issues/42` |
+| **Azure DevOps** | `--ado` flag with work item ID or ADO URL | `/qa-create-test --ado 85508` |
 | **Update existing** | Path to existing test files or `--update` flag | `/qa-create-test --update tests/e2e/` |
 | **POM only** | `--pom-only` flag | `/qa-create-test --pom-only src/pages/` |
 
@@ -25,6 +26,10 @@ Create, update, or generate tests from tickets — all in one command. Supports
 - `--ticket <source>` — force ticket mode with: URL, shorthand (#123, org/repo#123), file path, or plain text
 - `--update <path>` — force update mode: audit and improve existing tests at path
 - `--scope fix|improve|add|full` — for update mode only (default: full)
+- `--ado <work-item-id>` — Azure DevOps mode: read a work item and create Test Cases in ADO (accepts ID or full ADO URL)
+- `--area-path <path>` — (ADO mode) override area path for created test cases (default: source work item's area path)
+- `--iteration-path <path>` — (ADO mode) override iteration path for created test cases (default: source work item's iteration path)
+- `--skip-dedup` — (ADO mode) skip deduplication check against existing linked test cases
 - `--pom-only [path]` — generate only Page Object Model files (BasePage + feature POMs), no test specs
 - `--framework <name>` — override framework auto-detection (playwright, cypress, selenium) — used with --pom-only
 
@@ -33,8 +38,9 @@ Create, update, or generate tests from tickets — all in one command. Supports
 ```
 if --pom-only:
     MODE = "pom-only"
-elif argument matches URL
-
+elif --ado flag OR argument matches ADO URL (dev.azure.com, *.visualstudio.com):
+    MODE = "ado"
+elif argument matches URL pattern (github.com, atlassian.net, linear.app) OR contains "#" + digits OR --ticket flag:
     MODE = "from-ticket"
 elif --update flag OR argument is path to existing test directory/files:
     MODE = "update"
@@ -57,6 +63,13 @@ else:
 - Test spec files with `traces_to` fields linking back to ticket ACs
 - VALIDATION_REPORT.md
 
+### Azure DevOps Mode
+- Test Cases created directly in Azure DevOps (via `testplan_create_test_case`)
+- Test Cases linked to source work item via *Tested By* relationship
+- Tags applied: `Smoke`, `Regression`, `Critical`, `AutomationCandidate`, `NeedsReview`
+- `ai-tasks/ticket-{id}/test-cases.md` — structured report
+- Report attached to work item (if `ADO_MCP_AUTH_TOKEN` is set) or written to `Custom.QATestCasesReport` field (fallback)
+
 ### Update Mode
 - QA_AUDIT_REPORT.md — current quality assessment
 - Improved test files (after user approval)
@@ -70,8 +83,8 @@ Parse `$ARGUMENTS` to determine mode using the detection logic above.
 Print mode banner:
 ```
 === QA Create Test ===
-Mode: {from-code | from-ticket | update}
-Target: {feature name | ticket URL | test path}
+Mode: {from-code | from-ticket | ado | update | pom-only}
+Target: {feature name | ticket URL | ADO work item ID | test path}
 App URL: {url or "auto-detect"}
 ===========================
 ```
@@ -203,6 +216,34 @@ Key steps in the workflow:
 
 ---
 
+### ADO MODE (Azure DevOps)
+
+Create Test Cases directly in Azure DevOps from a work item. Reads the work item content (repro steps, acceptance criteria, comments, attachments), designs test cases with boundary detection and deduplication, and creates them in ADO with full traceability.
+
+**Prerequisites:** ADO MCP server must be connected (provides `wit_get_work_item`, `testplan_create_test_case`, etc.).
+
+Execute the full ADO workflow defined in `@commands/qa-create-test-ado.md`:
+
+1. **Phase 1** — Read pipeline context: CLAUDE.md, MY_PREFERENCES.md, codebase map, locator registry
+2. **Phase 2** — Retrieve work item with relations, comments, and attachments
+3. **Phase 2b** — Deduplication check against existing linked test cases (skip with `--skip-dedup`)
+4. **Phase 3** — Extract test source content based on work item type (Bug → Repro Steps, User Story → Acceptance Criteria)
+5. **Phase 4** — Design test cases with boundary value detection, tagging rules, confidence scoring, and preconditions
+6. **Phase 5** — Create test cases in ADO via `testplan_create_test_case`, link via *Tested By*, set tags
+7. **Phase 6** — Generate structured report to `ai-tasks/ticket-{id}/test-cases.md`
+8. **Phase 7** — Attach report to source work item
+
+**Key features:**
+- Boundary value triplets: detects `max`, `min`, `limit`, `threshold` keywords with numeric values → generates N-1, N, N+1 test cases
+- Deduplication: checks existing linked TCs before creating, prevents duplicates
+- Confidence scoring: `Specified` (explicit source) vs `Draft` (inferred, tagged `NeedsReview`)
+- Cross-references codebase map for project-specific context when available
+- Supports `--area-path` and `--iteration-path` overrides
+
+For the complete step-by-step process, see `@commands/qa-create-test-ado.md`.
+
+---
+
 ### UPDATE MODE
 
 1. Read `CLAUDE.md` — quality gates, locator tiers, assertion rules, POM rules.

package/package.json
CHANGED

@@ -1,6 +1,6 @@
 {
   "name": "qaa-agent",
-  "version": "1.8.0",
+  "version": "1.8.5",
   "description": "QA Automation Agent for Claude Code — multi-agent pipeline that analyzes repos, generates tests, validates, and creates PRs",
   "bin": {
     "qaa-agent": "./bin/install.cjs"
@@ -22,7 +22,8 @@
   "author": "Backhaus7997",
   "license": "MIT",
   "dependencies": {
-    "@playwright/mcp": "latest"
+    "@playwright/mcp": "latest",
+    "@upstash/context7-mcp": "latest"
   },
   "files": [
     "bin/",