ado-sync 0.1.65 → 0.1.67

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (58)
  1. package/README.md +15 -15
  2. package/dist/__tests__/regressions.test.js +1011 -1
  3. package/dist/__tests__/regressions.test.js.map +1 -1
  4. package/dist/ai/summarizer.d.ts +2 -1
  5. package/dist/ai/summarizer.js +6 -1
  6. package/dist/ai/summarizer.js.map +1 -1
  7. package/dist/azure/test-cases.d.ts +11 -1
  8. package/dist/azure/test-cases.js +286 -43
  9. package/dist/azure/test-cases.js.map +1 -1
  10. package/dist/cli.js +85 -8
  11. package/dist/cli.js.map +1 -1
  12. package/dist/config.js +74 -1
  13. package/dist/config.js.map +1 -1
  14. package/dist/id-markers.d.ts +1 -0
  15. package/dist/id-markers.js +13 -0
  16. package/dist/id-markers.js.map +1 -1
  17. package/dist/sync/cache.d.ts +2 -0
  18. package/dist/sync/cache.js.map +1 -1
  19. package/dist/sync/engine.d.ts +12 -1
  20. package/dist/sync/engine.js +210 -41
  21. package/dist/sync/engine.js.map +1 -1
  22. package/dist/types.d.ts +52 -2
  23. package/llms.txt +11 -11
  24. package/package.json +8 -1
  25. package/docs/advanced.md +0 -989
  26. package/docs/agent-setup.md +0 -204
  27. package/docs/capability-roadmap.md +0 -280
  28. package/docs/cli.md +0 -614
  29. package/docs/configuration.md +0 -322
  30. package/docs/examples/csharp-mstest-local-llm.yaml +0 -35
  31. package/docs/examples/csharp-mstest.yaml +0 -21
  32. package/docs/examples/csharp-nunit.yaml +0 -21
  33. package/docs/examples/csharp-specflow.yaml +0 -16
  34. package/docs/examples/cypress.yaml +0 -21
  35. package/docs/examples/detox-react-native.yaml +0 -21
  36. package/docs/examples/espresso-android.yaml +0 -21
  37. package/docs/examples/flutter-dart.yaml +0 -21
  38. package/docs/examples/java-junit.yaml +0 -21
  39. package/docs/examples/java-testng.yaml +0 -21
  40. package/docs/examples/js-jasmine-wdio.yaml +0 -21
  41. package/docs/examples/js-jest.yaml +0 -21
  42. package/docs/examples/playwright-js.yaml +0 -21
  43. package/docs/examples/playwright-ts.yaml +0 -21
  44. package/docs/examples/puppeteer.yaml +0 -21
  45. package/docs/examples/python-pytest.yaml +0 -21
  46. package/docs/examples/robot-framework.yaml +0 -19
  47. package/docs/examples/testcafe.yaml +0 -21
  48. package/docs/examples/xcuitest-ios.yaml +0 -21
  49. package/docs/mcp-server.md +0 -312
  50. package/docs/publish-test-results.md +0 -947
  51. package/docs/spec-formats.md +0 -1357
  52. package/docs/troubleshooting.md +0 -101
  53. package/docs/vscode-extension.md +0 -139
  54. package/docs/work-item-links.md +0 -115
  55. package/docs/workflows.md +0 -457
  56. package/mkdocs.yml +0 -40
  57. package/requirements-docs.txt +0 -4
  58. package/scripts/build_site.sh +0 -6
package/docs/cli.md DELETED
@@ -1,614 +0,0 @@
# CLI Reference

```
ado-sync [options] [command]

Options:
  -c, --config <path>     Path to config file (default: ado-sync.json)
  --output <format>       Output format: text (default) or json
  -V, --version           Print version
  -h, --help              Show help

Commands:
  init                    Generate a starter config file (interactive wizard)
  validate                Check config and Azure DevOps connectivity
  push                    Push local specs to Azure DevOps
  pull                    Pull updates from Azure DevOps into local files
  status                  Show diff without making changes
  diff                    Show field-level diff between local and Azure
  generate                Generate local spec files from ADO User Stories
  story-context           Show AC, tags, and linked TCs for a User Story
  publish-test-results    Publish TRX / JUnit / Cucumber JSON results
  coverage                Spec link rate and story coverage report
  stale                   List (and optionally retire) TCs with no local spec
  ac-gate                 Validate stories have AC + linked TCs (CI gate)
  find-tagged             Find work items where a tag was added in the last N hours/days
  trend                   Flaky test detection and pass-rate trend report
  watch                   Auto-push on file save
  help [command]          Help for a specific command
```

---

## `init`

Generate a starter config file. Runs an **interactive wizard** when stdin is a TTY.

```bash
ado-sync init                   # creates ado-sync.json (wizard)
ado-sync init ado-sync.yml      # YAML format
ado-sync init --no-interactive  # dump template without prompting
```

The wizard asks for org URL, project, auth type, token env var, test plan ID, local spec type, and include glob — then writes a minimal valid config.

---

## `validate`

Check that the config is valid and Azure DevOps is reachable. Run this before your first `push`.

```bash
ado-sync validate
ado-sync validate -c path/to/ado-sync.json
```

Output:
```
✓ Config loaded — /project/ado-sync.json
✓ Azure connection — https://dev.azure.com/myorg
✓ Project "MyProject" found
✓ Test Plan #42 "Regression Suite" found

All checks passed.
```

---

## `push`

Push local spec files to Azure DevOps — creates new Test Cases or updates existing ones.

```bash
ado-sync push
ado-sync push --dry-run
ado-sync push --tags "@smoke and not @wip"
ado-sync push --config-override testPlan.id=9999

# AI-generated test steps for code files
ado-sync push --ai-provider heuristic        # fast regex, no model needed
ado-sync push --ai-provider local --ai-model ~/.cache/models/qwen2.5-coder-1.5b-instruct-q4_k_m.gguf
ado-sync push --ai-provider ollama --ai-model qwen2.5-coder:7b
ado-sync push --ai-provider docker --ai-model ai/llama3.2   # Docker Model Runner, no key needed
ado-sync push --ai-provider openai --ai-key $OPENAI_API_KEY
ado-sync push --ai-provider anthropic --ai-key $ANTHROPIC_API_KEY
ado-sync push --ai-provider huggingface --ai-model mistralai/Mistral-7B-Instruct-v0.3 --ai-key $HF_TOKEN
ado-sync push --ai-provider bedrock --ai-model anthropic.claude-3-haiku-20240307-v1:0 --ai-region us-east-1
ado-sync push --ai-provider azureai --ai-url https://myresource.openai.azure.com --ai-model gpt-4o --ai-key $AZURE_OPENAI_KEY
ado-sync push --ai-provider github --ai-model gpt-4o        # uses $GITHUB_TOKEN automatically
ado-sync push --ai-provider azureinference --ai-url https://myendpoint.inference.azure.com --ai-model gpt-4o --ai-key $AZURE_AI_KEY
ado-sync push --ai-provider none             # disable AI entirely
ado-sync push --ai-context ./docs/ai-context.md   # inject domain context
```

| Scenario | Action |
|----------|--------|
| No ID tag | Creates a new Test Case, writes ID back |
| ID tag, no changes | Skipped |
| ID tag, content changed | Updates the existing Test Case |
| Deleted locally, still in Azure | Tagged `ado-sync:removed` in Azure |

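The decision table above can be sketched as a small function. This is an illustrative model only, not ado-sync's internals — the `decidePushAction` name and the content-hash comparison are assumptions:

```javascript
// Hypothetical sketch of the push decision table. A spec with no ID tag is
// created (and the new ID written back); a tagged spec is skipped or updated
// depending on whether its content changed since the last sync; a spec
// deleted locally but still present in Azure is tagged for review.
function decidePushAction(spec, azureIds) {
  if (spec.deletedLocally && azureIds.has(spec.azureId)) {
    return { action: "tag", tag: "ado-sync:removed" };
  }
  if (!spec.azureId) {
    return { action: "create", writeBackId: true };
  }
  return spec.contentHash === spec.lastSyncedHash
    ? { action: "skip" }
    : { action: "update", id: spec.azureId };
}
```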
### AI auto-summary

For code-based types (`java`, `csharp`, `python`, `javascript`, `playwright`, `cypress`, `testcafe`, `detox`, `espresso`, `xcuitest`, `flutter`), ado-sync reads test function bodies and generates a TC **title**, **description**, and **steps** automatically.

> No setup required — `push` always works, falling back to fast heuristic analysis.

| Provider | Quality | Setup |
|---|---|---|
| `local` *(default)* | Good–Excellent | Download a GGUF model (see below) |
| `heuristic` | Basic | None — offline, zero dependencies |
| `ollama` | Good–Excellent | `ollama pull qwen2.5-coder:7b` |
| `docker` | Good–Excellent | Docker Desktop with Model Runner enabled — `--ai-model ai/llama3.2` |
| `openai` | Excellent | `--ai-key $OPENAI_API_KEY` |
| `anthropic` | Excellent | `--ai-key $ANTHROPIC_API_KEY` |
| `huggingface` | Good–Excellent | `--ai-model <model-id> --ai-key $HF_TOKEN` |
| `bedrock` | Excellent | AWS credentials + optional `--ai-region` |
| `azureai` | Excellent | `--ai-url <endpoint> --ai-key $AZURE_OPENAI_KEY` |
| `github` | Excellent | GitHub PAT with `models:read` — `$GITHUB_TOKEN` auto-detected |
| `azureinference` | Excellent | `--ai-url <endpoint> --ai-key $AZURE_AI_KEY` |
| `openai` + `--ai-url` | Excellent | Any OpenAI-compatible proxy (LiteLLM, vLLM) |

#### Local LLM — model download

```bash
# macOS / Linux
mkdir -p ~/.cache/ado-sync/models
curl -L -o ~/.cache/ado-sync/models/qwen2.5-coder-1.5b-instruct-q4_k_m.gguf \
  "https://huggingface.co/Qwen/Qwen2.5-Coder-1.5B-Instruct-GGUF/resolve/main/qwen2.5-coder-1.5b-instruct-q4_k_m.gguf"

ado-sync push --ai-model ~/.cache/ado-sync/models/qwen2.5-coder-1.5b-instruct-q4_k_m.gguf
```

| Model | RAM | Quality |
|-------|-----|---------|
| 1.5B Q4_K_M | ~1.1 GB | Good |
| 7B Q4_K_M | ~4.5 GB | Better |
| 14B Q4_K_M | ~8.5 GB | Excellent |

#### Inject domain context

```bash
ado-sync push --ai-context ./docs/ai-context.md
```

Or in config:
```json
{ "sync": { "ai": { "provider": "anthropic", "contextFile": "./docs/ai-context.md" } } }
```

#### Freeze steps in source files

Enable `writebackDocComment: true` to write AI-generated steps as JSDoc comments above each `test()` call. On subsequent pushes the parser reads the JSDoc, so AI is not re-invoked.

```json
{ "sync": { "ai": { "writebackDocComment": true } } }
```

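The freeze-and-reread idea can be illustrated with a tiny parser. The `Steps:` numbered-list shape inside the JSDoc is an assumption for illustration — ado-sync's actual writeback format may differ:

```javascript
// Minimal sketch (assumed comment format) of reading frozen steps from a
// JSDoc block, so a caller can skip re-invoking the AI on later pushes.
function readFrozenSteps(source) {
  const doc = source.match(/\/\*\*([\s\S]*?)\*\//);
  if (!doc) return null; // no JSDoc -> caller falls back to AI/heuristics
  const steps = [];
  for (const line of doc[1].split("\n")) {
    const m = line.match(/\*\s*(\d+)\.\s*(.+)/);
    if (m) steps.push(m[2].trim());
  }
  return steps.length ? steps : null;
}

const src = `
/**
 * Steps:
 * 1. Navigate to the login page
 * 2. Submit valid credentials
 */
test("login", async () => {});
`;
// readFrozenSteps(src) -> ["Navigate to the login page", "Submit valid credentials"]
```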
---

## `pull`

Pull Azure DevOps Test Case changes into local spec files.

```bash
ado-sync pull
ado-sync pull --dry-run
ado-sync pull --tags "@smoke"

# AI step generation — same providers as push, used when pull creates new local stubs
ado-sync pull --ai-provider anthropic --ai-key $ANTHROPIC_API_KEY
ado-sync pull --ai-provider openai --ai-key $OPENAI_API_KEY --ai-model gpt-4o
ado-sync pull --ai-provider ollama --ai-model qwen2.5-coder:7b
ado-sync pull --ai-provider docker --ai-model ai/llama3.2
ado-sync pull --ai-provider huggingface --ai-model mistralai/Mistral-7B-Instruct-v0.3 --ai-key $HF_TOKEN
ado-sync pull --ai-provider bedrock --ai-model anthropic.claude-3-haiku-20240307-v1:0 --ai-region us-east-1
ado-sync pull --ai-provider azureai --ai-url https://myresource.openai.azure.com --ai-model gpt-4o --ai-key $AZURE_OPENAI_KEY
ado-sync pull --ai-provider github --ai-model gpt-4o
ado-sync pull --ai-provider azureinference --ai-url https://myendpoint.inference.azure.com --ai-model gpt-4o --ai-key $AZURE_AI_KEY
```

---

## `status`

Show what would change on the next push without making any modifications.

```bash
ado-sync status
ado-sync status --tags "@smoke"
ado-sync status --output json    # machine-readable

# AI step generation — used when computing diff for code-based test types
ado-sync status --ai-provider anthropic --ai-key $ANTHROPIC_API_KEY
ado-sync status --ai-provider openai --ai-key $OPENAI_API_KEY
ado-sync status --ai-provider ollama --ai-model qwen2.5-coder:7b
ado-sync status --ai-provider docker --ai-model ai/llama3.2
ado-sync status --ai-provider bedrock --ai-model anthropic.claude-3-haiku-20240307-v1:0 --ai-region us-east-1
ado-sync status --ai-provider azureai --ai-url https://myresource.openai.azure.com --ai-model gpt-4o --ai-key $AZURE_OPENAI_KEY
ado-sync status --ai-provider github --ai-model gpt-4o
ado-sync status --ai-provider azureinference --ai-url https://myendpoint.inference.azure.com --ai-model gpt-4o --ai-key $AZURE_AI_KEY
```

---

## `diff`

Show a field-level diff between local specs and Azure DevOps — richer than `status`.

```bash
ado-sync diff
ado-sync diff --tags "@smoke"
```

Output example:
```
~ specs/login.feature:12 · Login with valid credentials [#1234]
    changed fields: title, steps
```

---

## `generate`

Generate local spec files (`.feature` or `.md`) from Azure DevOps User Stories, pulling title, description, and acceptance criteria to scaffold each file.

```bash
# By explicit work item IDs
ado-sync generate --story-ids 1234,5678

# By area path (fetches all User Stories under it)
ado-sync generate --area-path "MyProject\\Teams\\QA"

# By WIQL query
ado-sync generate --query "SELECT [System.Id] FROM WorkItems WHERE [System.WorkItemType]='User Story' AND [System.State]='Active'"

# Options
ado-sync generate --story-ids 1234 --format gherkin    # output .feature files
ado-sync generate --story-ids 1234 --format markdown   # output .md files (default)
ado-sync generate --story-ids 1234 --output-folder specs/generated
ado-sync generate --story-ids 1234 --force             # overwrite existing files
ado-sync generate --story-ids 1234 --dry-run           # preview without writing

# AI-powered generation — fills in real steps from the story description + AC
ado-sync generate --story-ids 1234 --ai-provider anthropic --ai-key $ANTHROPIC_API_KEY
ado-sync generate --story-ids 1234 --ai-provider openai --ai-key $OPENAI_API_KEY --ai-model gpt-4o
ado-sync generate --story-ids 1234 --ai-provider ollama --ai-model qwen2.5-coder:7b
ado-sync generate --story-ids 1234 --ai-provider docker --ai-model ai/llama3.2
ado-sync generate --story-ids 1234 --ai-provider huggingface --ai-model mistralai/Mistral-7B-Instruct-v0.3 --ai-key $HF_TOKEN
ado-sync generate --story-ids 1234 --ai-provider bedrock --ai-model anthropic.claude-3-haiku-20240307-v1:0 --ai-region us-east-1
ado-sync generate --story-ids 1234 --ai-provider azureai --ai-url https://myresource.openai.azure.com --ai-model gpt-4o --ai-key $AZURE_OPENAI_KEY
ado-sync generate --story-ids 1234 --ai-provider github --ai-model gpt-4o
ado-sync generate --story-ids 1234 --ai-provider azureinference --ai-url https://myendpoint.inference.azure.com --ai-model gpt-4o --ai-key $AZURE_AI_KEY
ado-sync generate --story-ids 1234 --ai-provider local --ai-model ~/.cache/models/qwen2.5-coder-1.5b.gguf
ado-sync generate --story-ids 1234 --ai-provider openai --ai-key $OPENAI_API_KEY \
  --ai-context src/orders/** --ai-context tests/orders/**
```

### AI generate flags

| Flag | Description |
|------|-------------|
| `--ai-provider` | Provider: `local`, `ollama`, `docker`, `openai`, `anthropic`, `huggingface`, `bedrock`, `azureai`, `github`, `azureinference` |
| `--ai-model` | Model name, path, or deployment ID |
| `--ai-key` | API key (or `$ENV_VAR` reference) |
| `--ai-url` | Base URL override (Ollama, Azure OpenAI full endpoint, OpenAI-compatible) |
| `--ai-region` | AWS region for `bedrock` (default: `AWS_REGION` env or `us-east-1`) |
| `--ai-context` | Additional context source for AI generation. Accepts a file, folder, or glob. Repeatable. |

### AI generation context

`generate` does not scan the entire workspace by default. For better AI-generated specs, pass a small set of relevant files or folders explicitly:

```bash
ado-sync generate --story-ids 1234 \
  --ai-provider openai --ai-key $OPENAI_API_KEY \
  --ai-context src/billing/** \
  --ai-context tests/billing/** \
  --ai-context docs/billing.md
```

Best results come from targeted context: relevant app code, existing automation, and feature docs. The CLI caps the number of files and total context size so prompts stay bounded and predictable.

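Bounded context gathering of this kind can be sketched as below. The caps, field names, and `gatherContext` helper are illustrative — the doc does not state ado-sync's actual limits:

```javascript
// Sketch of bounded context gathering matching the behaviour described above.
const MAX_FILES = 20;           // assumed cap on files per prompt
const MAX_TOTAL_BYTES = 64000;  // assumed cap on total context size

function gatherContext(files) {
  const picked = [];
  let total = 0;
  for (const f of files) {
    if (picked.length >= MAX_FILES) break;          // file-count cap
    if (total + f.content.length > MAX_TOTAL_BYTES) continue; // skip oversized
    picked.push(f.path);
    total += f.content.length;
  }
  return { picked, total };
}
```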
Without `--ai-provider`, `generate` uses the template scaffold (no AI required). With it, the AI reads the story's title, description, and acceptance criteria and produces realistic Given/When/Then steps.

For `bedrock`, AWS credentials are picked up from the standard chain (`AWS_ACCESS_KEY_ID` / `AWS_SECRET_ACCESS_KEY` env vars, `~/.aws/credentials`, or IAM role). Install the SDK if needed: `npm install @aws-sdk/client-bedrock-runtime`.

For `github`, create a GitHub PAT with `models:read` scope and set `GITHUB_TOKEN` in your environment — no `--ai-key` flag needed. Available models: `gpt-4o`, `gpt-4o-mini`, `Meta-Llama-3.1-70B-Instruct`, `Mistral-large`, and others from [GitHub Models](https://github.com/marketplace/models).

For `azureinference`, the endpoint is your Azure AI Inference resource URL (e.g. from Azure AI Foundry). Install the SDK if needed: `npm install @azure-rest/ai-inference @azure/core-auth`.

AI settings also fall back to `sync.ai` in `ado-sync.json`, so you can configure a provider once and all commands (`push`, `generate`, `publish-test-results`) will use it automatically.

#### Dry-run with AI preview

When `--dry-run` and `--ai-provider` are both set, the AI is called but no file is written. The first 20 lines of the generated content are printed to the terminal so you can review quality before committing:

```bash
ado-sync generate --story-ids 1234 --dry-run --ai-provider anthropic --ai-key $ANTHROPIC_API_KEY
```

#### Automatic `@tc:` tag writeback

If the ADO story already has a linked Test Case (from a previous `push` run), `generate` automatically injects the `@tc:<id>` tag into the new file. This prevents the file from appearing as an untracked new test on the next `status` run.

---

## `publish-test-results`

Publish test results from TRX, JUnit XML, NUnit XML, Cucumber JSON, or Playwright JSON files to Azure DevOps Test Runs.

```bash
ado-sync publish-test-results --testResult results/test.trx
ado-sync publish-test-results --testResult results/test.xml --testResultFormat junit --dry-run
ado-sync publish-test-results --testResult results/playwright.json --attachmentsFolder test-results/

# AI failure analysis — posts root-cause + fix suggestion as a comment on each failed result
ado-sync publish-test-results \
  --testResult results/playwright.json \
  --analyze-failures \
  --ai-provider anthropic \
  --ai-model claude-haiku-4-5-20251001 \
  --ai-key $ANTHROPIC_API_KEY

# Or with OpenAI
ado-sync publish-test-results \
  --testResult results/test.trx \
  --analyze-failures \
  --ai-provider openai \
  --ai-key $OPENAI_API_KEY

# Or with a local Ollama server (no cloud cost)
ado-sync publish-test-results \
  --testResult results/junit.xml \
  --analyze-failures \
  --ai-provider ollama \
  --ai-model qwen2.5-coder:7b

# Hugging Face Inference API
ado-sync publish-test-results \
  --testResult results/playwright.json \
  --analyze-failures \
  --ai-provider huggingface \
  --ai-model mistralai/Mistral-7B-Instruct-v0.3 \
  --ai-key $HF_TOKEN

# AWS Bedrock (uses default AWS credential chain)
ado-sync publish-test-results \
  --testResult results/playwright.json \
  --analyze-failures \
  --ai-provider bedrock \
  --ai-model anthropic.claude-3-haiku-20240307-v1:0 \
  --ai-region us-east-1

# Azure OpenAI Service
ado-sync publish-test-results \
  --testResult results/playwright.json \
  --analyze-failures \
  --ai-provider azureai \
  --ai-url https://myresource.openai.azure.com \
  --ai-model gpt-4o \
  --ai-key $AZURE_OPENAI_KEY

# GitHub Models (uses $GITHUB_TOKEN automatically)
ado-sync publish-test-results \
  --testResult results/playwright.json \
  --analyze-failures \
  --ai-provider github \
  --ai-model gpt-4o

# Azure AI Inference (Azure AI Foundry endpoint)
ado-sync publish-test-results \
  --testResult results/playwright.json \
  --analyze-failures \
  --ai-provider azureinference \
  --ai-url https://myendpoint.inference.azure.com \
  --ai-model gpt-4o \
  --ai-key $AZURE_AI_KEY
```

See [publish-test-results.md](publish-test-results.md) for full reference including config-based setup.

---

## `find-tagged`

Find work items where a specific tag was added within the last N hours or days. Uses the Azure DevOps **revisions API** to find the exact date and time the tag first appeared — not just the item's last-changed date.

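The revision walk can be sketched as follows. This is a simplified model, not ado-sync's implementation; it assumes the standard Azure DevOps revision shape where `System.Tags` is a semicolon-separated string and `System.ChangedDate` timestamps each revision:

```javascript
// Walk revisions in order and return the first revision where the tag
// appears that was absent in the previous revision — i.e. when it was added.
function findTagAddedAt(revisions, tag) {
  let previous = new Set();
  for (const rev of revisions) {
    const tags = new Set(
      (rev.fields["System.Tags"] || "")
        .split(";")
        .map((t) => t.trim())
        .filter(Boolean)
    );
    if (tags.has(tag) && !previous.has(tag)) {
      return { rev: rev.rev, date: rev.fields["System.ChangedDate"] };
    }
    previous = tags;
  }
  return null; // tag never added in this history
}
```

Filtering by `--hours`/`--days` then reduces to comparing the returned `date` against the cutoff.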
```bash
# Tag added in the last 24 hours
ado-sync find-tagged --tag "regression" --hours 24

# Tag added in the last 7 days
ado-sync find-tagged --tag "sprint-42" --days 7

# Different work item type
ado-sync find-tagged --tag "blocker" --hours 48 --work-item-type Bug

# Machine-readable output
ado-sync find-tagged --tag "regression" --days 3 --output json
```

| Flag | Description |
|------|-------------|
| `--tag <name>` | **Required.** Tag to search for |
| `--hours <n>` | Return items where the tag was added in the last N hours |
| `--days <n>` | Return items where the tag was added in the last N days |
| `--work-item-type <type>` | Work item type to search (default: `User Story`) |

One of `--hours` or `--days` is required.

Example output:
```
+ [#1234] [Active] User can reset password
    Tag added: Mar 18, 2026 14:32:07 by Jane Smith (rev 5)
    https://dev.azure.com/myorg/MyProject/_workitems/edit/1234

3 items found where "regression" was added in the last 24 hours.
```

---

## `--config-override`

All commands accept `--config-override path=value` (repeatable) to set config values at runtime:

```bash
ado-sync push --config-override testPlan.id=9999
ado-sync push --config-override sync.disableLocalChanges=true
ado-sync push --config-override sync.tagPrefix=test --config-override testPlan.id=42
```

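Applying such a dotted-path override to a nested config can be sketched like this. The `applyOverride` helper and its boolean/number coercion rules are illustrative assumptions, not ado-sync's actual parser:

```javascript
// Apply one `path=value` override to a nested config object, creating
// intermediate objects as needed and coercing obvious booleans/numbers.
function applyOverride(config, override) {
  const [path, raw] = override.split("=");
  let value = raw;
  if (raw === "true") value = true;
  else if (raw === "false") value = false;
  else if (/^\d+$/.test(raw)) value = Number(raw);
  const keys = path.split(".");
  let node = config;
  for (const key of keys.slice(0, -1)) {
    node = node[key] ??= {}; // create missing intermediate objects
  }
  node[keys.at(-1)] = value;
  return config;
}
```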
---

## `--output json`

All sync commands support `--output json` for machine-readable output:

```bash
ado-sync status --output json | jq '.[] | select(.action=="updated")'
ado-sync push --output json > results.json
```

Each result object: `{ action, filePath, title, azureId?, detail?, changedFields? }`

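The documented result shape makes the JSON easy to consume from a script; the grouping helper below is just one way to do it:

```javascript
// Group results from `--output json` by action. Each result object has the
// documented shape { action, filePath, title, azureId?, detail?, changedFields? }.
function summarizeResults(results) {
  const byAction = {};
  for (const r of results) {
    (byAction[r.action] ??= []).push(r.filePath);
  }
  return byAction;
}

// e.g. pipe `ado-sync push --output json` into a Node script:
//   const results = JSON.parse(require("node:fs").readFileSync(0, "utf8"));
//   console.log(summarizeResults(results));
```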
---

## `ac-gate`

Validate that ADO User Stories have Acceptance Criteria and at least one linked Test Case. Designed as a sprint or merge quality gate — exits with code 1 when any stories fail.

```bash
# Validate specific story IDs
ado-sync ac-gate --story-ids 1234,5678

# Validate all User Stories under an area path
ado-sync ac-gate --area-path "MyProject\\Teams\\QA"

# Validate via a WIQL query
ado-sync ac-gate --query "SELECT [System.Id] FROM WorkItems WHERE [System.WorkItemType]='User Story' AND [System.State]='Active'"

# Default: validates all Active/Resolved/Closed stories in the project
ado-sync ac-gate

# Machine-readable output
ado-sync ac-gate --story-ids 1234,5678 --output json
```

Options:

| Flag | Description |
|------|-------------|
| `--story-ids <ids>` | Comma-separated ADO work item IDs |
| `--area-path <path>` | Validate all User Stories under this area path |
| `--query <wiql>` | WIQL query to select stories |
| `--states <states>` | Story states to include (default: `Active,Resolved,Closed`) |

**Outcomes per story:**
- `pass` — has AC and at least one linked TC
- `no-ac` — missing Acceptance Criteria
- `no-tc` — has AC but no linked Test Cases (run `ado-sync push` to create them)

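The three outcomes above reduce to a tiny classifier. The helper and its input shape are hypothetical — they model the rule, not ado-sync's code:

```javascript
// Classify a story into the three documented outcomes.
function classifyStory(story) {
  const ac = (story.acceptanceCriteria || "").trim();
  if (!ac) return "no-ac";
  if (!story.linkedTestCaseIds || story.linkedTestCaseIds.length === 0) return "no-tc";
  return "pass";
}

// Gate semantics: exit code 1 when any story fails, mirroring `ac-gate`.
function gate(stories) {
  return stories.every((s) => classifyStory(s) === "pass") ? 0 : 1;
}
```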
**CI usage:**

```yaml
- name: AC gate
  run: ado-sync ac-gate --area-path "MyProject\\QA"
  env:
    AZURE_DEVOPS_TOKEN: ${{ secrets.AZURE_DEVOPS_TOKEN }}
```

---

## `stale --retire`

The `stale` command now supports automatically retiring orphaned Azure Test Cases rather than just listing them.

```bash
# List stale TCs (unchanged behaviour)
ado-sync stale

# Preview what --retire would do
ado-sync stale --retire --dry-run

# Transition stale TCs to Closed and tag ado-sync:retired
ado-sync stale --retire

# Use a custom target state
ado-sync stale --retire --retire-state "Inactive"
```

Options:

| Flag | Description |
|------|-------------|
| `--retire` | Transition stale TCs to the target state and add `ado-sync:retired` tag |
| `--retire-state <state>` | Target state (default: `Closed`) |
| `--dry-run` | Preview without making changes |

---

## `trend`

Analyse historical Azure DevOps test runs to detect flaky tests and failing patterns. Optionally post a summary to a Slack or Teams webhook.

```bash
# Analyse last 30 days (default)
ado-sync trend

# Narrow the period and number of runs sampled
ado-sync trend --days 14 --max-runs 20

# Show top 20 failing tests
ado-sync trend --top 20

# Filter to runs whose name contains "nightly"
ado-sync trend --run-name nightly

# Post summary to Slack
ado-sync trend --webhook-url https://hooks.slack.com/services/... --webhook-type slack

# Post to Microsoft Teams
ado-sync trend --webhook-url https://your-org.webhook.office.com/... --webhook-type teams

# Fail the build when flaky tests are detected
ado-sync trend --fail-on-flaky

# Fail when overall pass rate drops below 85%
ado-sync trend --fail-below 85

# Machine-readable output
ado-sync trend --output json
```

Options:

| Flag | Description |
|------|-------------|
| `--days <n>` | Days of history to analyse (default: 30) |
| `--max-runs <n>` | Max test runs to sample (default: 50) |
| `--top <n>` | Top N failing tests to surface (default: 10) |
| `--run-name <filter>` | Only include runs whose name contains this string |
| `--webhook-url <url>` | Webhook URL to post summary to |
| `--webhook-type <type>` | `slack` (default), `teams`, or `generic` |
| `--fail-on-flaky` | Exit 1 when any flaky tests are found |
| `--fail-below <percent>` | Exit 1 when overall pass rate is below threshold |

**Flaky test definition:** a test that both passed and failed at least once in the sampled period.

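That definition is directly computable. The sketch below assumes a flat list of per-run results with `name` and `outcome` fields, which is a simplification of the actual Test Runs API payload:

```javascript
// Flaky per the definition above: a test that both passed and failed at
// least once in the sampled runs. Also computes the overall pass rate.
function analyzeTrend(results) {
  const byTest = new Map(); // test name -> set of observed outcomes
  let passed = 0;
  for (const { name, outcome } of results) {
    if (!byTest.has(name)) byTest.set(name, new Set());
    byTest.get(name).add(outcome);
    if (outcome === "Passed") passed++;
  }
  const flaky = [...byTest]
    .filter(([, outcomes]) => outcomes.has("Passed") && outcomes.has("Failed"))
    .map(([name]) => name);
  const passRate = results.length ? (100 * passed) / results.length : 100;
  return { flaky, passRate };
}
```

`--fail-on-flaky` then maps to `flaky.length > 0`, and `--fail-below 85` to `passRate < 85`.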
**CI usage:**

```yaml
- name: Trend report
  run: ado-sync trend --days 30 --fail-on-flaky --webhook-url ${{ secrets.SLACK_WEBHOOK }}
  env:
    AZURE_DEVOPS_TOKEN: ${{ secrets.AZURE_DEVOPS_TOKEN }}
```

---

## GitHub Actions templates

Two ready-to-use workflow templates ship in `.github/workflows/`:

### `ado-sync-pr-check.yml`

Runs `ado-sync push --dry-run` on every pull request and posts (or updates) a structured comment listing unlinked specs, drift, and conflicts.

Required secrets: `ADO_PAT`

```yaml
# .github/workflows/ado-sync-pr-check.yml (already present in this repo)
on:
  pull_request:
    types: [opened, synchronize, reopened]
```

### `ado-sync-coverage-gate.yml`

Runs three checks on push/PR to main:
1. **Spec link rate gate** — `ado-sync coverage --fail-below N`
2. **AC gate** — `ado-sync ac-gate` scoped to an area path
3. **Trend report** — informational, posts to webhook if configured

Required secrets: `ADO_PAT`

Key repository variables:

| Variable | Default | Description |
|----------|---------|-------------|
| `ADO_SYNC_COVERAGE_MIN` | `80` | Minimum spec link rate % |
| `ADO_SYNC_AC_AREA_PATH` | *(all)* | Area path scope for AC gate |
| `ADO_SYNC_TREND_DAYS` | `30` | Days of history for trend report |
| `ADO_SYNC_TREND_WEBHOOK` | | Webhook URL for trend summary |
| `ADO_SYNC_TREND_WEBHOOK_TYPE` | `slack` | `slack`, `teams`, or `generic` |