ado-sync 0.1.65 → 0.1.68

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (67)
  1. package/README.md +15 -15
  2. package/dist/__tests__/regressions.test.js +1133 -1
  3. package/dist/__tests__/regressions.test.js.map +1 -1
  4. package/dist/ai/summarizer.d.ts +2 -1
  5. package/dist/ai/summarizer.js +6 -1
  6. package/dist/ai/summarizer.js.map +1 -1
  7. package/dist/azure/test-cases.d.ts +11 -1
  8. package/dist/azure/test-cases.js +286 -43
  9. package/dist/azure/test-cases.js.map +1 -1
  10. package/dist/cli-diagnostics.d.ts +66 -0
  11. package/dist/cli-diagnostics.js +75 -0
  12. package/dist/cli-diagnostics.js.map +1 -0
  13. package/dist/cli.js +335 -23
  14. package/dist/cli.js.map +1 -1
  15. package/dist/config.js +194 -9
  16. package/dist/config.js.map +1 -1
  17. package/dist/extensions.d.ts +8 -0
  18. package/dist/extensions.js +86 -0
  19. package/dist/extensions.js.map +1 -0
  20. package/dist/id-markers.d.ts +1 -0
  21. package/dist/id-markers.js +13 -0
  22. package/dist/id-markers.js.map +1 -1
  23. package/dist/sync/cache.d.ts +2 -0
  24. package/dist/sync/cache.js.map +1 -1
  25. package/dist/sync/engine.d.ts +29 -2
  26. package/dist/sync/engine.js +270 -41
  27. package/dist/sync/engine.js.map +1 -1
  28. package/dist/sync/publish-results.d.ts +25 -0
  29. package/dist/sync/publish-results.js +81 -2
  30. package/dist/sync/publish-results.js.map +1 -1
  31. package/dist/types.d.ts +98 -2
  32. package/llms.txt +11 -11
  33. package/package.json +9 -1
  34. package/docs/advanced.md +0 -989
  35. package/docs/agent-setup.md +0 -204
  36. package/docs/capability-roadmap.md +0 -280
  37. package/docs/cli.md +0 -614
  38. package/docs/configuration.md +0 -322
  39. package/docs/examples/csharp-mstest-local-llm.yaml +0 -35
  40. package/docs/examples/csharp-mstest.yaml +0 -21
  41. package/docs/examples/csharp-nunit.yaml +0 -21
  42. package/docs/examples/csharp-specflow.yaml +0 -16
  43. package/docs/examples/cypress.yaml +0 -21
  44. package/docs/examples/detox-react-native.yaml +0 -21
  45. package/docs/examples/espresso-android.yaml +0 -21
  46. package/docs/examples/flutter-dart.yaml +0 -21
  47. package/docs/examples/java-junit.yaml +0 -21
  48. package/docs/examples/java-testng.yaml +0 -21
  49. package/docs/examples/js-jasmine-wdio.yaml +0 -21
  50. package/docs/examples/js-jest.yaml +0 -21
  51. package/docs/examples/playwright-js.yaml +0 -21
  52. package/docs/examples/playwright-ts.yaml +0 -21
  53. package/docs/examples/puppeteer.yaml +0 -21
  54. package/docs/examples/python-pytest.yaml +0 -21
  55. package/docs/examples/robot-framework.yaml +0 -19
  56. package/docs/examples/testcafe.yaml +0 -21
  57. package/docs/examples/xcuitest-ios.yaml +0 -21
  58. package/docs/mcp-server.md +0 -312
  59. package/docs/publish-test-results.md +0 -947
  60. package/docs/spec-formats.md +0 -1357
  61. package/docs/troubleshooting.md +0 -101
  62. package/docs/vscode-extension.md +0 -139
  63. package/docs/work-item-links.md +0 -115
  64. package/docs/workflows.md +0 -457
  65. package/mkdocs.yml +0 -40
  66. package/requirements-docs.txt +0 -4
  67. package/scripts/build_site.sh +0 -6
--- package/docs/publish-test-results.md (removed)
@@ -1,947 +0,0 @@
# publish-test-results

Parses test result files (TRX, NUnit XML, JUnit, Cucumber JSON, Playwright JSON, CTRF JSON) and publishes them to an Azure DevOps Test Run, linking results back to Test Cases either directly by TC ID (when available in the result file) or by `AutomatedTestName` matching.

---

## Usage

```bash
ado-sync publish-test-results \
  --testResult results/test-results.trx \
  --runName "CI run #42"

# Multiple result files
ado-sync publish-test-results \
  --testResult results/unit.trx \
  --testResult results/integration.xml \
  --testResultFormat junit

# Dry run — parse and summarise without publishing
ado-sync publish-test-results --testResult results/test.trx --dry-run

# Associate with a build
ado-sync publish-test-results \
  --testResult results/test.trx \
  --buildId 12345

# Publish to a specific planned suite/configuration
ado-sync publish-test-results \
  --testResult results/test.trx \
  --testPlan "Smoke Plan" \
  --testSuite "BDD" \
  --testConfiguration "Windows 10"
```

### Options

| Option | Description |
|--------|-------------|
| `--testResult <path>` | Path to a result file. Repeatable. |
| `--testResultFormat <format>` | `trx` · `nunitXml` · `junit` · `cucumberJson` · `playwrightJson` · `ctrfJson`. Auto-detected when omitted. |
| `--attachmentsFolder <path>` | Folder to scan for screenshots/videos/logs to attach to test results. |
| `--runName <name>` | Name for the Test Run in Azure DevOps. Defaults to `ado-sync <ISO timestamp>`. |
| `--buildId <id>` | Build ID to associate with the Test Run. |
| `--testConfiguration <nameOrId>` | Azure test configuration name or numeric ID for the published run. |
| `--testSuite <nameOrId>` | Azure test suite (name or ID) for **planned run** publication. Enables TC linkage. |
| `--testPlan <nameOrId>` | Azure test plan (name or ID). Used with `--testSuite`. Falls back to `testPlan.id` from config. |
| `--dry-run` | Parse results and print a summary without creating a run in Azure. |
| `--create-issues-on-failure` | File GitHub Issues or ADO Bugs for each failed test after publishing. |
| `--issue-provider <github\|ado>` | Issue provider. Default: `github`. |
| `--github-repo <owner/repo>` | GitHub repository to file issues in. |
| `--github-token <token>` | GitHub token. Supports `$ENV_VAR` references. |
| `--bug-threshold <percent>` | If more than this % of tests fail, one environment-failure issue is filed instead of per-test issues. Default: `20`. |
| `--max-issues <n>` | Hard cap on issues filed per run. Default: `50`. |
| `--analyze-failures` | Use AI to analyse each failed test and post a root-cause + suggestion comment on the Azure test result. |
| `--ai-provider <provider>` | AI provider for failure analysis: `ollama`, `openai`, `anthropic`, or `docker`. |
| `--ai-model <model>` | Model name (e.g. `gpt-4o-mini`, `claude-haiku-4-5-20251001`, `gemma-4-e4b-it`). |
| `--ai-url <url>` | Base URL for Ollama or an OpenAI-compatible endpoint. |
| `--ai-key <key>` | API key. Supports `$ENV_VAR` references. |
| `--config-override` | Override config values (repeatable, same as other commands). |

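The interaction between `--bug-threshold` and `--max-issues` can be sketched as follows. This is an illustrative helper (the name `plan_issues` is hypothetical, not ado-sync's code), showing only the semantics the two flags document:

```python
def plan_issues(total: int, failed: list[str],
                bug_threshold: float = 20.0, max_issues: int = 50) -> list[str]:
    """Decide what to file after publishing, per the documented flag semantics.

    If more than bug_threshold percent of tests failed, file a single
    environment-failure issue; otherwise file one issue per failed test,
    capped at max_issues.
    """
    if total == 0 or not failed:
        return []
    failure_rate = 100.0 * len(failed) / total
    if failure_rate > bug_threshold:
        return ["environment-failure"]           # mass failure → one issue
    return [f"test-failure: {name}" for name in failed[:max_issues]]

# 3 failures out of 100 (3%) → per-test issues
print(plan_issues(100, ["a", "b", "c"]))
# 30 failures out of 100 (30% > 20%) → one environment-failure issue
print(plan_issues(100, [f"t{i}" for i in range(30)]))
```

The cap applies only in the per-test branch: a failure storm above the threshold always produces exactly one issue.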
---

## Planned runs (TC linkage)

Azure DevOps **silently ignores** `testCase.id` on unplanned test runs. To link published results to Test Cases in the Test Plans UI, you must create a **planned run** that uses test points.

Pass `--testPlan` and `--testSuite` to enable planned-run mode:

```bash
ado-sync publish-test-results \
  --testPlan 32953 \
  --testSuite 32954 \
  --testResult results/junit.xml
```

How it works:

1. ado-sync resolves the test suite's test points (each point links a Test Case to a configuration).
2. A planned run is created with those point IDs — ADO pre-populates result slots linked to each TC.
3. Parsed results are matched to TCs by the `tc` property/tag in the result file.
4. Matched results are patched with outcome, duration, and error message.

Results **without** a TC ID are skipped with a warning (the run still succeeds).
Test Cases **without** a matching result keep the default "Active" outcome.
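The matching step above can be sketched as a pure function. The dict shapes and the name `match_results_to_points` are hypothetical, chosen only to illustrate the documented behaviour (match by TC ID, warn-and-skip results with no point):

```python
def match_results_to_points(points: dict[int, int],
                            results: dict[int, dict]) -> tuple[list[dict], list[int]]:
    """points: test point id -> Test Case id (from the suite).
    results: TC id -> parsed outcome fields.
    Returns (patches for matched result slots, TC ids that had no test point)."""
    patches, unmatched = [], []
    tc_to_point = {tc: pt for pt, tc in points.items()}
    for tc_id, outcome in results.items():
        point = tc_to_point.get(tc_id)
        if point is None:
            unmatched.append(tc_id)   # skipped with a warning; the run still succeeds
        else:
            patches.append({"testPoint": point, "testCase": tc_id, **outcome})
    return patches, unmatched

points = {501: 1041, 502: 1042, 503: 1043}
results = {1041: {"outcome": "Passed"}, 1042: {"outcome": "Failed"},
           9999: {"outcome": "Passed"}}   # 9999 is not in the suite
patches, unmatched = match_results_to_points(points, results)
print(unmatched)   # [9999]
```

Points with no matching result (1043 here) are simply never patched, so their slots keep the default "Active" outcome.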
### Config equivalent

```yaml
publishTestResults:
  testSuite:
    id: 32954          # or name: "My Suite"
  testPlan: "32953"    # or plan name
  testConfiguration:
    id: 1              # optional — filter points by configuration
```

---

## AI failure analysis

When `--analyze-failures` is set, ado-sync calls the configured AI provider for each **failed** test result and posts a comment directly on the Azure DevOps test result containing:

- **Root cause** — a concise one-line explanation of why the test failed
- **Suggestion** — a concrete fix recommendation

The comment appears in Azure DevOps under the test result's **Comments** tab, alongside the error message and stack trace.

### CLI examples

```bash
# Analyse failures with OpenAI (gpt-4o-mini by default)
ado-sync publish-test-results \
  --testResult results/test.trx \
  --analyze-failures \
  --ai-provider openai \
  --ai-key $OPENAI_API_KEY

# Analyse with Claude (Haiku is fast and cost-effective)
ado-sync publish-test-results \
  --testResult results/playwright.json \
  --analyze-failures \
  --ai-provider anthropic \
  --ai-model claude-haiku-4-5-20251001 \
  --ai-key $ANTHROPIC_API_KEY

# Analyse with a local Ollama server (no cloud cost)
ado-sync publish-test-results \
  --testResult results/junit.xml \
  --analyze-failures \
  --ai-provider ollama \
  --ai-model gemma-4-e4b-it

# Analyse with Docker Model Runner (local, OpenAI-compatible, no API key)
ado-sync publish-test-results \
  --testResult results/junit.xml \
  --analyze-failures \
  --ai-provider docker \
  --ai-model ai/llama3.2
```

### Config-based (no CLI flags needed)

```json
{
  "sync": {
    "ai": {
      "provider": "anthropic",
      "model": "claude-haiku-4-5-20251001",
      "apiKey": "$ANTHROPIC_API_KEY",
      "analyzeFailures": true
    }
  }
}
```

With this in place, every `publish-test-results` run automatically analyses failures — no extra CLI flags needed.

### Supported providers

| Provider | Flag value | Notes |
|----------|-----------|-------|
| OpenAI | `openai` | Default model: `gpt-4o-mini`. Works with any OpenAI-compatible endpoint via `--ai-url`. |
| Anthropic | `anthropic` | Default model: `claude-haiku-4-5-20251001`. Fast and cost-effective. |
| Ollama | `ollama` | Default model: `gemma-4-e4b-it`. Runs locally — no cloud cost or data egress. |
| Docker Model Runner | `docker` | Default endpoint: `http://localhost:12434/engines/llama.cpp/v1`. Default model: `ai/llama3.2`. OpenAI-compatible local inference via Docker Desktop. No API key required. |

> `heuristic` and `local` (node-llama-cpp) providers are not supported for failure analysis — they are suited for step generation, not conversational reasoning.
---

## Supported formats

| Format | Extension | Auto-detected | TC ID in file? | Attachments extracted |
|--------|-----------|---------------|----------------|----------------------|
| TRX (MSTest / SpecFlow / VSTest) | `.trx` | Yes (`<TestRun>` root) | Yes — via `[TestProperty("tc","ID")]` | `<Output><StdOut>` + `<Output><ResultFiles>` |
| NUnit XML (native) | `.xml` | Yes (`<test-run>` root) | Yes — via `[Property("tc","ID")]` | `<output>` + `<attachments>` |
| JUnit XML | `.xml` | Yes (`<testsuites>` / `<testsuite>` root) | Optional — via `<property name="tc" value="ID"/>` | `<system-out>`, `<system-err>`, `[[ATTACHMENT\|path]]` (Playwright) |
| Cucumber JSON | `.json` | Yes (JSON array, Cucumber format) | Yes — via `@tc:ID` tag on scenario | `step.embeddings[]` (base64 screenshots/video) |
| Playwright JSON | `.json` | Yes (JSON object with `suites` key) | Yes — via `test.annotations[{ type: 'tc', description: 'ID' }]` (preferred) or `@tc:ID` in test title | `test.results[].attachments[]` (screenshots, videos, traces) |
| Robot Framework XML | `output.xml` | Yes (`<robot>` root element) | Yes — via `<tags><tag>tc:ID</tag></tags>` | — |
| CTRF JSON | `.json` | Yes (`results.tests` array) | Yes — via `tags: ["@tc:ID"]` or `@tc:ID` in test name | `attachments[].path` files, `stdout`/`stderr` arrays |

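The "Auto-detected" column boils down to inspecting the root XML element or the JSON shape. A minimal sketch of that dispatch (illustrative only — ado-sync's detector is not published here, and the format-name strings mirror the `--testResultFormat` values):

```python
import json
import re

def detect_format(text: str) -> str:
    """Guess a result-file format from its root element / JSON shape."""
    s = text.lstrip()
    if s.startswith("<"):
        # Drop the XML declaration and comments, then read the root element name.
        cleaned = re.sub(r"<\?.*?\?>|<!--.*?-->", "", s, flags=re.S)
        m = re.search(r"<([A-Za-z][\w.-]*)", cleaned)
        root = m.group(1) if m else ""
        return {"TestRun": "trx", "test-run": "nunitXml",
                "testsuites": "junit", "testsuite": "junit",
                "robot": "robotXml"}.get(root, "unknown")
    data = json.loads(s)
    if isinstance(data, list):
        return "cucumberJson"                     # Cucumber emits a top-level array
    if isinstance(data, dict):
        if isinstance(data.get("results"), dict) and "tests" in data["results"]:
            return "ctrfJson"
        if "suites" in data:
            return "playwrightJson"
    return "unknown"

print(detect_format('<?xml version="1.0"?><TestRun/>'))   # trx
print(detect_format('{"suites": []}'))                    # playwrightJson
print(detect_format('{"results": {"tests": []}}'))        # ctrfJson
```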
> **NUnit via TRX**: when NUnit tests are run through the VSTest adapter (`--logger trx`), `[Property]` values are **not** included in the TRX output. Use `--logger "nunit3;LogFileName=results.xml"` to get the native NUnit XML format, which does include property values.

> **TRX `<ResultFiles>` nesting**: In TRX format, `<ResultFiles>` is a child of `<Output>`, not a direct child of `<UnitTestResult>`. ado-sync reads from `UnitTestResult > Output > ResultFiles > ResultFile` — paths are resolved relative to the result file's directory.

> **Attachment paths**: All file paths embedded in result files (`<ResultFile path="...">` in TRX, `<filePath>` in NUnit XML, `[[ATTACHMENT|path]]` in JUnit, `attachments[].path` in Playwright JSON) are resolved **relative to the result file's directory**, not the working directory. Ensure screenshots and other artifacts stay in the same folder hierarchy as your test runner produces.

> **Automated vs planned runs**: when no `--testPlan`/`--testSuite` is given, ado-sync creates **standalone automated runs** without a test plan association. Do not add `plan.id` to the run model — doing so makes Azure DevOps treat the run as "planned", requiring `testPointId` and `testCaseRevision` for every result (which ado-sync doesn't provide). TC linking is done via `testCase.id` on individual results, which works for automated runs without a plan association.

> **Valid attachment types**: Azure DevOps only accepts `GeneralAttachment` and `ConsoleLog` as `attachmentType` values. ado-sync maps screenshots, images, and binary files to `GeneralAttachment`; plain text and log files to `ConsoleLog`. Other type names (e.g. `Screenshot`, `Log`, `VideoLog`) will cause a 400 error.

### How TC linking works

Results are linked to Azure Test Cases in priority order:

1. **TC ID from file** (preferred) — when the result file contains a TC ID (`[TestProperty]`, `[Property]`, `<property name="tc">`, `@tc:` tag, or Playwright `test.annotations[{ type: 'tc' }]`), the result is posted with `testCase.id` set directly. This is robust to class/method renames.
2. **AutomatedTestName matching** (fallback) — when no TC ID is found, the result is posted with `automatedTestName` = the fully-qualified method name. Azure DevOps links it to a TC whose `AutomatedTestName` field matches. Requires `sync.markAutomated: true` on push.
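The two-step priority above can be sketched as a tiny helper. The `result` dict shape and the name `link_key` are hypothetical, not ado-sync's internal types:

```python
def link_key(result: dict) -> dict:
    """Return the linking fields for one parsed result, in documented priority order."""
    tc_id = result.get("tcId")   # from [TestProperty], <property name="tc">, @tc: tag, or annotation
    if tc_id is not None:
        return {"testCase": {"id": int(tc_id)}}               # 1. direct link — survives renames
    return {"automatedTestName": result["fullyQualifiedName"]}  # 2. fallback matching

print(link_key({"tcId": "1041", "fullyQualifiedName": "Suite.test_foo"}))
# {'testCase': {'id': 1041}}
print(link_key({"fullyQualifiedName": "Suite.test_bar"}))
# {'automatedTestName': 'Suite.test_bar'}
```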
---

## Per-framework guide

### C# MSTest

```bash
dotnet test --logger "trx;LogFileName=results.trx"
ado-sync publish-test-results --testResult results/results.trx
```

TC IDs are read from `[TestProperty("tc","ID")]` embedded in the TRX — no extra config needed.

---

### C# NUnit

```bash
# Use native NUnit XML (includes [Property] values)
dotnet test --logger "nunit3;LogFileName=results.xml"
ado-sync publish-test-results --testResult results/results.xml

# TRX via VSTest adapter (TC IDs NOT included — uses AutomatedTestName matching)
dotnet test --logger "trx;LogFileName=results.trx"
ado-sync publish-test-results --testResult results/results.trx
```

---

### C# SpecFlow

SpecFlow generates TRX output via the VSTest adapter. ado-sync reads TC IDs from the Gherkin `@tc:ID` tags, which SpecFlow's runner embeds in the TRX as `[TestProperty("tc","ID")]` values.

```bash
# 1. Push feature files to create TCs — @tc:ID is written back into .feature files
ado-sync push

# 2. Run SpecFlow tests (generates TRX)
dotnet test --logger "trx;LogFileName=results.trx"

# 3. Publish results
ado-sync publish-test-results --testResult results/results.trx
```

SpecFlow uses `local.type: gherkin` (same as Cucumber), so the same `@tc:ID` tags drive both pushing Test Cases and linking published results.
---

### Java JUnit 4 / JUnit 5 (Maven Surefire)

Maven Surefire generates JUnit XML with `classname` = FQCN and `name` = method name. ado-sync builds `automatedTestName` as `FQCN.methodName` on push, which matches the `classname.name` format in the JUnit XML automatically.

```bash
# Run tests (Surefire writes target/surefire-reports/TEST-*.xml)
mvn test

# Publish — TC linking uses AutomatedTestName matching
ado-sync publish-test-results \
  --testResult "target/surefire-reports/TEST-*.xml" \
  --testResultFormat junit
```

Recommended config:

```json
{ "sync": { "markAutomated": true } }
```

**Optional — write TC IDs into JUnit XML** for direct linking (more reliable):

Add a JUnit 5 extension or JUnit 4 rule that reads the `@Tag("tc:ID")` / `// @tc:ID` value and calls `recordProperty` to embed it into the XML. With Surefire, test properties are written as `<property>` elements inside each `<testcase>`.

---

### Java TestNG

TestNG's Surefire reporter generates the same JUnit XML format. Same commands as JUnit above.

```bash
mvn test    # or: ./gradlew test

ado-sync publish-test-results \
  --testResult "target/surefire-reports/TEST-*.xml" \
  --testResultFormat junit
```

---

### Python pytest

```bash
# Run tests and generate JUnit XML
pytest --junitxml=results/junit.xml

# Publish — uses AutomatedTestName matching (classname.name from JUnit XML)
ado-sync publish-test-results --testResult results/junit.xml --testResultFormat junit
```

Recommended config:

```json
{ "sync": { "markAutomated": true } }
```

**Optional — embed TC IDs into JUnit XML** for direct linking.

Add the following to your `conftest.py`:

```python
# conftest.py
def pytest_runtest_makereport(item, call):
    """Write @pytest.mark.tc(N) as a JUnit XML property for ado-sync to pick up."""
    for marker in item.iter_markers("tc"):
        if marker.args:
            item.user_properties.append(("tc", str(marker.args[0])))
```

With this hook, pytest writes:

```xml
<testcase name="test_foo" classname="tests.module.TestClass">
  <properties>
    <property name="tc" value="1041"/>
  </properties>
</testcase>
```

ado-sync will extract the `tc` property and link the result directly to TC 1041, without needing AutomatedTestName matching.

---

### JavaScript / TypeScript — Jest

Install `jest-junit`:

```bash
npm install --save-dev jest-junit
```

Run tests:

```bash
JEST_JUNIT_OUTPUT_DIR=results JEST_JUNIT_OUTPUT_NAME=junit.xml \
  npx jest --reporters=default --reporters=jest-junit
```

Publish:

```bash
ado-sync publish-test-results --testResult results/junit.xml --testResultFormat junit
```

> **TC linking for Jest**: jest-junit does not embed TC IDs in the XML. Linking uses `AutomatedTestName` matching. Set `sync.markAutomated: true` and ensure the `JEST_JUNIT_CLASSNAME` and `JEST_JUNIT_TITLE` env vars match the `automatedTestName` format stored in the TC (`{fileBasename} > {describe} > {testTitle}`).
>
> Set these env vars to align the format:
> ```
> JEST_JUNIT_CLASSNAME="{classname}"   # default: suite hierarchy
> JEST_JUNIT_TITLE="{title}"           # default: test title
> ```

---

### JavaScript / TypeScript — WebdriverIO

WebdriverIO supports JUnit XML via `@wdio/junit-reporter`:

```bash
# Install (if not already present)
npm install --save-dev @wdio/junit-reporter
```

Add to `wdio.conf.ts`:

```typescript
reporters: [['junit', { outputDir: './results', outputFileFormat: () => 'junit.xml' }]]
```

Run tests:

```bash
npx wdio run wdio.conf.ts
```

Publish:

```bash
ado-sync publish-test-results --testResult results/junit.xml --testResultFormat junit
```

---

### Gherkin / Cucumber (JS)

Cucumber JS with Selenium captures screenshots/videos as base64 `embeddings` inside each step. These are extracted automatically and attached to the test result in Azure DevOps.

```bash
# Run with Cucumber JSON reporter (includes step embeddings)
npx cucumber-js --format json:results/cucumber.json

# Publish — TC IDs from @tc:ID tags, screenshots from embeddings
ado-sync publish-test-results --testResult results/cucumber.json
```

TC IDs from `@tc:12345` tags are extracted directly. Screenshots embedded by Selenium/WebDriver hooks are uploaded automatically.
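The embedding extraction described above can be sketched in a few lines. The field names (`steps`, `embeddings`, `mime_type`, `data`) follow the Cucumber JSON report format; the helper name is hypothetical:

```python
import base64

def extract_embeddings(scenario: dict) -> list[tuple[str, bytes]]:
    """Collect (mime_type, decoded bytes) pairs from step embeddings
    in one Cucumber JSON scenario."""
    out = []
    for step in scenario.get("steps", []):
        for emb in step.get("embeddings", []):
            out.append((emb["mime_type"], base64.b64decode(emb["data"])))
    return out

scenario = {"steps": [{"embeddings": [
    {"mime_type": "image/png",
     "data": base64.b64encode(b"\x89PNG...").decode()}]}]}
print(extract_embeddings(scenario)[0][0])   # image/png
```

Each decoded blob is then uploaded as a `GeneralAttachment` on the matching test result.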
---

### Playwright

Playwright supports two result formats — both extract attachments (screenshots, videos, traces):

**Option A — Playwright JSON reporter** (recommended, includes all attachments):

```bash
# playwright.config.ts
# reporter: [['json', { outputFile: 'results/playwright.json' }]]

npx playwright test
ado-sync publish-test-results --testResult results/playwright.json
```

Screenshots on failure, videos, and trace files are uploaded automatically from the `test-results/` folder referenced in the JSON.
**Option B — JUnit XML reporter** (for CI systems that need JUnit format):

```bash
# playwright.config.ts
# reporter: [['junit', { outputFile: 'results/junit.xml' }]]

npx playwright test
ado-sync publish-test-results --testResult results/junit.xml --testResultFormat junit
```

Playwright embeds `[[ATTACHMENT|path]]` markers in `<system-out>` — ado-sync reads these and uploads the referenced files (screenshots, videos).
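Those markers are plain text inside `<system-out>`, so extracting them is a one-line regex. A sketch (helper name is illustrative):

```python
import re

def attachment_paths(system_out: str) -> list[str]:
    """Extract the paths from [[ATTACHMENT|path]] markers that Playwright
    writes into <system-out>."""
    return re.findall(r"\[\[ATTACHMENT\|(.+?)\]\]", system_out)

out = ("ok\n"
       "[[ATTACHMENT|test-results/login/screenshot.png]]\n"
       "[[ATTACHMENT|test-results/login/video.webm]]")
print(attachment_paths(out))
# ['test-results/login/screenshot.png', 'test-results/login/video.webm']
```

The extracted paths are then resolved relative to the result file's directory, per the note in the formats section.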
**Using `--attachmentsFolder` for extra files:**

```bash
ado-sync publish-test-results \
  --testResult results/junit.xml \
  --attachmentsFolder test-results
```

---

### TestCafe

TestCafe requires the `testcafe-reporter-junit` package to produce JUnit XML:

```bash
npm install --save-dev testcafe-reporter-junit
```

```bash
# Run tests with JUnit reporter
npx testcafe chrome tests/ --reporter junit:results/junit.xml

# Publish results
ado-sync publish-test-results --testResult results/junit.xml --testResultFormat junit
```

TC linking uses `AutomatedTestName` matching — set `sync.markAutomated: true` on push. The `automatedTestName` format stored in the TC is `{fileBasename} > {fixture} > {testTitle}`, which matches the JUnit `suiteName.testName` format produced by `testcafe-reporter-junit`.

> **TC IDs are not embedded in JUnit output**: TestCafe's `test.meta('tc', 'N')` metadata is not written into the JUnit XML. Linking relies on AutomatedTestName matching only.

---

### Cypress

Cypress has a built-in JUnit reporter via Mocha:

```bash
# Run tests with JUnit reporter (outputs one file per spec by default)
npx cypress run \
  --reporter junit \
  --reporter-options "mochaFile=results/junit-[hash].xml"

# Publish all result files
ado-sync publish-test-results \
  --testResult "results/junit-*.xml" \
  --testResultFormat junit
```

TC linking uses `AutomatedTestName` matching — set `sync.markAutomated: true` on push.

> **JUnit classname format**: By default Cypress sets `classname` to the spec file path and `name` to the test title. Ensure your `automatedTestName` format matches by setting `reporterOptions: { suiteTitleSeparatedBy: ' > ' }` in `cypress.config.js`.

---

### Detox (React Native)

Detox uses Jest as its runner — use `jest-junit` the same way as Jest:

```bash
npm install --save-dev jest-junit
```

```bash
# Run Detox tests
JEST_JUNIT_OUTPUT_DIR=results JEST_JUNIT_OUTPUT_NAME=junit.xml \
  npx detox test --configuration ios.sim.release

# Publish results
ado-sync publish-test-results --testResult results/junit.xml --testResultFormat junit
```

TC linking uses `AutomatedTestName` matching — set `sync.markAutomated: true` on push.

---

### XCUITest (iOS / macOS)

Export results from Xcode's `.xcresult` bundle to JUnit XML using `xcresulttool`:

```bash
# Run tests and save result bundle
xcodebuild test \
  -project MyApp.xcodeproj \
  -scheme MyApp \
  -destination 'platform=iOS Simulator,name=iPhone 15' \
  -resultBundlePath TestResults.xcresult

# Export JUnit XML
xcrun xcresulttool get --path TestResults.xcresult --format junit > results/junit.xml

# Publish results
ado-sync publish-test-results --testResult results/junit.xml --testResultFormat junit
```

TC linking uses `AutomatedTestName` matching — set `sync.markAutomated: true` on push. The `automatedTestName` format is `{className}/{funcTestName}` (e.g. `LoginUITests/testValidCredentialsNavigateToInventory`).

> **Attachments**: `xcresulttool` does not embed screenshots in the JUnit export. Use `--attachmentsFolder` to attach screenshot files produced by your test hooks separately.

---

### Espresso (Android)

Gradle's `connectedAndroidTest` task writes JUnit XML to `app/build/outputs/androidTest-results/connected/`:

```bash
# Run instrumented tests
./gradlew connectedAndroidTest

# Publish results
ado-sync publish-test-results \
  --testResult "app/build/outputs/androidTest-results/connected/TEST-*.xml" \
  --testResultFormat junit
```

TC linking uses `AutomatedTestName` matching — set `sync.markAutomated: true` on push. The `automatedTestName` format is `{packageName}.{ClassName}.{methodName}`.

---

### Robot Framework

Robot Framework writes test results to `output.xml` by default. ado-sync auto-detects this format from the `<robot>` root element.

```bash
# Run Robot Framework tests (generates output.xml)
robot --outputdir results tests/

# Publish results — TC IDs from [Tags] tc:N values
ado-sync publish-test-results --testResult results/output.xml
```

TC IDs are extracted directly from the `<tags>` element in `output.xml` — the same `tc:N` tag written back by `ado-sync push`. No `--testResultFormat` flag is needed; the format is auto-detected.

```bash
# Custom output file location
robot --outputdir results --output my-results.xml tests/
ado-sync publish-test-results --testResult results/my-results.xml
```

Recommended config:

```json
{ "sync": { "markAutomated": true } }
```

> **TC linking for Robot**: `output.xml` includes `tc:N` tags in `<tags>` elements. ado-sync uses these for direct TC linking. If a test has no `tc:N` tag, it falls back to `AutomatedTestName` matching using the suite.test name path (e.g. `SuiteName.Test Case Name`).

---

### CTRF (Common Test Report Format)

[CTRF](https://ctrf.io) is a framework-agnostic JSON report format supported by reporters for Playwright, Cypress, Jest, k6, and many others. ado-sync auto-detects CTRF from the `results.tests` array structure.

```bash
# Example: Playwright with CTRF reporter
npm install --save-dev playwright-ctrf-json-reporter

# playwright.config.ts:
# reporter: [['playwright-ctrf-json-reporter', { outputFile: 'results/ctrf.json' }]]

npx playwright test
ado-sync publish-test-results --testResult results/ctrf.json
```

```bash
# Example: Jest with CTRF reporter
npm install --save-dev jest-ctrf-json-reporter

# jest.config.ts:
# reporters: [['jest-ctrf-json-reporter', { outputFile: 'results/ctrf.json' }]]

npx jest
ado-sync publish-test-results --testResult results/ctrf.json
```

TC IDs are extracted from the `tags` array (e.g. `["@tc:1234", "@smoke"]`) or, as a fallback, from `@tc:ID` in the test name. `stdout`/`stderr` arrays and `attachments[].path` files are uploaded automatically.

> **Status mapping**: CTRF `passed` → `Passed`, `failed` → `Failed`, `skipped`/`pending`/`other` → `NotExecuted`.
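The status mapping above is a plain lookup with a default. A sketch (helper name is illustrative; the outcome strings are the Azure DevOps values quoted in the note):

```python
def map_ctrf_status(status: str) -> str:
    """Map a CTRF test status to the Azure DevOps outcome per the table above:
    passed -> Passed, failed -> Failed, anything else -> NotExecuted."""
    return {"passed": "Passed", "failed": "Failed"}.get(status, "NotExecuted")

for s in ("passed", "failed", "skipped", "pending", "other"):
    print(s, "->", map_ctrf_status(s))
```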
605
-
606
- ---
607
-
608
- ### Flutter
609
-
610
- Flutter can produce JUnit XML via the `flutter_test_junit` package or by piping `--reporter junit`:
611
-
612
- ```bash
613
- # Option A — built-in reporter (Flutter ≥ 3.7)
614
- flutter test --reporter junit > results/junit.xml
615
-
616
- # Option B — flutter_test_junit package
617
- flutter pub add --dev flutter_test_junit
618
- dart run flutter_test_junit:main > results/junit.xml
619
- ```
620
-
621
- ```bash
622
- # Publish results
623
- ado-sync publish-test-results --testResult results/junit.xml --testResultFormat junit
624
- ```
625
-
626
- TC linking uses `AutomatedTestName` matching — set `sync.markAutomated: true` on push.
627
-
628
- ---
629
-
630
| Framework | Result format | TC ID in file | Attachments uploaded | Live-tested |
|-----------|---------------|---------------|----------------------|-------------|
| C# MSTest | TRX | ✅ `[TestProperty("tc","ID")]` | `<Output><StdOut>` + `<Output><ResultFiles>` files | ✅ |
| C# NUnit | NUnit XML | ✅ `[Property("tc","ID")]` | `<output>` text + `<attachments><filePath>` files | ✅ |
| C# SpecFlow | TRX | ✅ `@tc:ID` → `[TestProperty]` | `<Output><StdOut>` + `<Output><ResultFiles>` files | ✅ |
| Java JUnit 4/5 | JUnit XML | ⚠️ optional `<property name="tc">` | `<system-out>`, `<system-err>` | ✅ |
| Java TestNG | JUnit XML | ⚠️ optional `<property name="tc">` | `<system-out>`, `<system-err>` | ✅ |
| Python pytest | JUnit XML | ⚠️ optional (conftest.py hook) | `<system-out>`, `<system-err>` | ✅ |
| Jest | JUnit XML | ⚠️ optional `<property name="tc">` | `<system-out>`, `<system-err>` | ✅ |
| WebdriverIO / Jasmine | JUnit XML | ⚠️ optional `<property name="tc">` | `<system-out>`, `<system-err>` | ✅ |
| Cucumber JS | Cucumber JSON | ✅ `@tc:ID` tag | `step.embeddings[]` (base64 screenshots/video) | ✅ |
| Playwright | Playwright JSON | ✅ native `annotation: { type: 'tc', description: 'ID' }`; or `@tc:ID` in test title | Files from `attachments[].path` (screenshots, videos, traces) | ✅ |
| Playwright | JUnit XML | ⚠️ `@tc:ID` in test title only (no annotation in JUnit format) | `[[ATTACHMENT\|path]]` referenced files | ✅ |
| TestCafe | JUnit XML | ❌ AutomatedTestName matching only | `<system-out>`, `<system-err>` | |
| Cypress | JUnit XML | ❌ AutomatedTestName matching only | `<system-out>`, `<system-err>` | |
| Detox | JUnit XML | ❌ AutomatedTestName matching only | `<system-out>`, `<system-err>` | |
| XCUITest | JUnit XML | ❌ AutomatedTestName matching only | none (use `--attachmentsFolder`) | |
| Espresso | JUnit XML | ❌ AutomatedTestName matching only | `<system-out>`, `<system-err>` | |
| Flutter | JUnit XML | ❌ AutomatedTestName matching only | `<system-out>`, `<system-err>` | |
| Robot Framework | Robot XML (`output.xml`) | ✅ `tc:N` in `<tags>` | — | |
| CTRF (any framework) | CTRF JSON | ✅ `tags: ["@tc:ID"]` or `@tc:ID` in name | `attachments[].path` files + `stdout`/`stderr` | |

---

## Attachments

ado-sync uploads screenshots, videos, and logs from test results to the corresponding Azure DevOps test result entry. Attachments appear in the Azure Test Plans UI under each result.

### What is extracted automatically per format

| Format | Extracted automatically |
|--------|------------------------|
| TRX | `<Output><StdOut>` → console log; `<Output><ResultFiles><ResultFile path="...">` → files on disk |
| NUnit XML | `<output>` → console log; `<attachments><attachment><filePath>` → files on disk |
| JUnit XML | `<system-out>` → log; `<system-err>` → log; `[[ATTACHMENT\|path]]` → Playwright files |
| Cucumber JSON | `step.embeddings[]` → base64-encoded screenshots/video |
| Playwright JSON | `results[].attachments[].path` → files on disk (screenshots, videos, traces) |
| CTRF JSON | `tests[].attachments[].path` → files on disk; `tests[].stdout[]` / `tests[].stderr[]` → console logs |

> **Note**: All file paths are resolved relative to the result file's directory, not the process working directory. This matches how test runners (Playwright, MSTest, NUnit) write relative paths in their output.

> **Attachment types**: Azure DevOps accepts only `GeneralAttachment` (images, videos, binary files) and `ConsoleLog` (text, logs) as attachment type values. ado-sync maps all file types to one of these two automatically.
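As a rough sketch of the two notes above (illustrative only; these helper names are not part of ado-sync), path resolution and type mapping could look like:

```typescript
import * as path from "node:path";

// Resolve an attachment path relative to the result file's directory,
// not the process working directory (hypothetical helper, for illustration).
function resolveAttachmentPath(resultFile: string, attachment: string): string {
  if (path.isAbsolute(attachment)) return attachment;
  return path.resolve(path.dirname(resultFile), attachment);
}

// Azure DevOps accepts only two attachment types; map text-like files to
// ConsoleLog and everything else to GeneralAttachment (assumed heuristic).
function attachmentType(file: string): "GeneralAttachment" | "ConsoleLog" {
  const textLike = new Set([".log", ".txt", ".out"]);
  return textLike.has(path.extname(file).toLowerCase())
    ? "ConsoleLog"
    : "GeneralAttachment";
}
```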

### `--attachmentsFolder` — folder-based attachment upload

For any framework, point ado-sync at a folder containing screenshots and videos:

```bash
ado-sync publish-test-results \
  --testResult results/junit.xml \
  --attachmentsFolder test-results/screenshots
```

Files are matched to individual test results by looking for the test method name in the filename (case-insensitive). For example, `addItemAndCheckout_failed.png` → matched to `com.example.CheckoutTests.addItemAndCheckout`.

Unmatched files are attached at the test run level.
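The matching heuristic described above can be sketched roughly like this (an illustrative approximation, assuming a simple case-insensitive substring check; not the exact ado-sync algorithm):

```typescript
// Hypothetical sketch of the folder-matching heuristic: a file belongs to a
// test result when the test's method name appears in the filename, ignoring case.
function matchesTest(fileName: string, fullyQualifiedTest: string): boolean {
  // "com.example.CheckoutTests.addItemAndCheckout" -> "addItemAndCheckout"
  const methodName = fullyQualifiedTest.split(".").pop() ?? fullyQualifiedTest;
  return fileName.toLowerCase().includes(methodName.toLowerCase());
}
```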

### Config-based attachment folder

```json
{
  "publishTestResults": {
    "attachments": {
      "folder": "test-results/screenshots",
      "include": ["**/*.png", "**/*.mp4"],
      "matchByTestName": true
    },
    "publishAttachmentsForPassingTests": "files"
  }
}
```

### `publishAttachmentsForPassingTests`

Controls how attachments are handled for **passing** tests (failing tests always get all attachments):

| Value | Behaviour |
|-------|-----------|
| `"none"` *(default)* | No attachments uploaded for passing tests |
| `"files"` | Screenshots and videos uploaded; console logs skipped |
| `"all"` | All attachments including logs uploaded for passing tests |
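The three modes translate into a filter along these lines (a sketch, assuming each attachment is already classified as a console log or a file; names are hypothetical):

```typescript
type PassingMode = "none" | "files" | "all";

interface Attachment {
  path: string;
  isConsoleLog: boolean; // logs vs screenshots/videos
}

// Failing tests keep everything; passing tests are filtered by mode.
function attachmentsToUpload(
  outcome: "Passed" | "Failed",
  mode: PassingMode,
  attachments: Attachment[],
): Attachment[] {
  if (outcome === "Failed" || mode === "all") return attachments;
  if (mode === "none") return [];
  return attachments.filter((a) => !a.isConsoleLog); // mode === "files"
}
```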

### Framework-specific setup

**C# MSTest — attach files from test code:**

```csharp
TestContext.AddResultFile("screenshots/mytest.png");
```

**C# NUnit — attach files:**

```csharp
TestContext.AddAttachment("screenshots/mytest.png");
```

**Java (JUnit/TestNG) — capture a Selenium screenshot and write it to system-out:**

```java
// In an @AfterMethod / @After hook:
File screenshot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
// Surefire writes <system-out> to JUnit XML — log the path:
System.out.println("Screenshot: " + screenshot.getAbsolutePath());
// Or use --attachmentsFolder to pick up the file directly
```

**Python pytest — capture screenshot via conftest.py:**

```python
# conftest.py
import pytest

# request.node.rep_call is not set by default; expose it via this hook
@pytest.hookimpl(hookwrapper=True)
def pytest_runtest_makereport(item, call):
    report = (yield).get_result()
    setattr(item, "rep_" + report.when, report)

@pytest.fixture(autouse=True)
def screenshot_on_failure(request, driver):
    yield
    if getattr(request.node, "rep_call", None) and request.node.rep_call.failed:
        driver.save_screenshot(f"screenshots/{request.node.name}.png")
```

Then publish with:

```bash
ado-sync publish-test-results \
  --testResult results/junit.xml \
  --attachmentsFolder screenshots
```

**Cucumber JS — embed screenshot in step hook:**

```javascript
// hooks.js
After(async function({ pickle, result }) {
  if (result.status === Status.FAILED) {
    const screenshot = await driver.takeScreenshot();
    this.attach(Buffer.from(screenshot, 'base64'), 'image/png');
  }
});
```

Screenshots are embedded as base64 in the Cucumber JSON and uploaded automatically.

**Playwright — screenshots and videos are automatic** when configured in `playwright.config.ts`:

```typescript
use: {
  screenshot: 'only-on-failure',
  video: 'retain-on-failure',
  trace: 'on-first-retry',
}
```

Use the Playwright JSON reporter — attachments are uploaded automatically.

---

## Outcome mapping

| Source outcome | Azure outcome |
|----------------|---------------|
| `passed` / `pass` / `success` | `Passed` |
| `failed` / `fail` / `failure` / `error` | `Failed` |
| `skipped` / `ignored` / `pending` / `notExecuted` | `NotExecuted` |
| `inconclusive` | `Inconclusive` (or override with `treatInconclusiveAs`) |
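The table above translates directly into a normalisation function. A sketch (the unrecognised-outcome branch is an assumption, not documented behaviour):

```typescript
type AzureOutcome = "Passed" | "Failed" | "NotExecuted" | "Inconclusive";

// Sketch of the outcome table above, with the treatInconclusiveAs override.
function mapOutcome(raw: string, treatInconclusiveAs?: AzureOutcome): AzureOutcome {
  switch (raw.toLowerCase()) {
    case "passed": case "pass": case "success":
      return "Passed";
    case "failed": case "fail": case "failure": case "error":
      return "Failed";
    case "skipped": case "ignored": case "pending": case "notexecuted":
      return "NotExecuted";
    case "inconclusive":
      return treatInconclusiveAs ?? "Inconclusive";
    default:
      // Assumption: unrecognised outcomes are treated as inconclusive.
      return treatInconclusiveAs ?? "Inconclusive";
  }
}
```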

---

## Configuration

Results can also be configured in the config file under `publishTestResults`:

```json
{
  "publishTestResults": {
    "testResult": {
      "sources": [
        { "value": "results/unit.trx", "format": "trx" },
        { "value": "results/integration.xml", "format": "junit" }
      ]
    },
    "treatInconclusiveAs": "Failed",
    "testRunSettings": {
      "name": "My CI Run",
      "comment": "Automated sync run",
      "runType": "Automated"
    },
    "testResultSettings": {
      "comment": "Published by ado-sync"
    },
    "testConfiguration": {
      "name": "Default"
    }
  }
}
```

### `publishTestResults` fields

| Field | Description |
|-------|-------------|
| `testResult.sources` | Array of `{ value, format }` objects. `value` is a path relative to the config directory. |
| `treatInconclusiveAs` | Override for inconclusive outcomes, e.g. `"Failed"` or `"NotExecuted"`. |
| `flakyTestOutcome` | How to handle flaky tests: `"lastAttemptOutcome"` *(default)* · `"firstAttemptOutcome"` · `"worstOutcome"`. |
| `testConfiguration.name` | Name of the Azure test configuration to associate. |
| `testConfiguration.id` | ID of the Azure test configuration. |
| `testSuite.name` | Name of the Azure test suite to publish against. Requires every result to resolve to a test case ID. |
| `testSuite.id` | ID of the Azure test suite to publish against. |
| `testSuite.testPlan` | Optional test plan name or ID used when resolving the target suite. Defaults to `testPlan.id` from the main config. |
| `testRunSettings.name` | Name for the Test Run. |
| `testRunSettings.comment` | Comment attached to the Test Run. |
| `testRunSettings.runType` | `"Automated"` *(default)* · `"Manual"`. |
| `testResultSettings.comment` | Comment applied to every test result. |
| `publishAttachmentsForPassingTests` | `"none"` *(default)* · `"files"` · `"all"`. |

When `testSuite` is configured, ado-sync creates a planned run and binds each published result to the matching test point in that suite. If a suite contains multiple configurations for the same test case, set `testConfiguration.id` or `testConfiguration.name` so the target point can be resolved unambiguously.
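For `flakyTestOutcome`, the three strategies amount to picking one outcome from a test's retry attempts, roughly as below (a sketch; the severity ordering used by `worstOutcome` is an assumption):

```typescript
type Outcome = "Passed" | "Failed" | "NotExecuted";
type FlakyStrategy = "lastAttemptOutcome" | "firstAttemptOutcome" | "worstOutcome";

// Pick the published outcome for a test that ran more than once.
function resolveFlakyOutcome(attempts: Outcome[], strategy: FlakyStrategy): Outcome {
  switch (strategy) {
    case "firstAttemptOutcome":
      return attempts[0];
    case "lastAttemptOutcome":
      return attempts[attempts.length - 1];
    case "worstOutcome":
      // Assumed severity order: Failed > NotExecuted > Passed.
      if (attempts.includes("Failed")) return "Failed";
      if (attempts.includes("NotExecuted")) return "NotExecuted";
      return "Passed";
  }
}
```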

---

## Creating issues on failure

`--create-issues-on-failure` automatically files a GitHub Issue or ADO Bug for each failed test after the run is published. Multiple guards prevent flooding your tracker when the environment, rather than individual tests, is the problem.

### Guard logic (applied in order)

```
failures > threshold% of total?
├─ YES → 1 environment-failure issue, stop
└─ NO
   └─ cluster by error signature
      └─ cluster size > 1?
         ├─ YES → 1 issue per cluster (lists affected test names)
         └─ NO → 1 issue per TC (up to maxIssues cap)
            └─ cap hit? → 1 overflow summary issue
```

| Guard | Default | Description |
|---|---|---|
| Failure-rate threshold | 20% | Above this, one environment-failure issue is filed instead of per-test issues |
| Error clustering | enabled | Tests with the same error message are grouped into one issue |
| Hard cap | 50 | No more than this many issues per run; one overflow summary when exceeded |
| Dedup | enabled | Skip if an open issue already exists for the same TC (GitHub: by `tc:ID` label; ADO: by title) |
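The guard chain above can be sketched in code like this (an illustrative model of the flow, not the shipped implementation; the defaults match the table):

```typescript
interface Failure { testName: string; errorSignature: string; }

interface IssuePlan {
  environmentIssue: boolean;  // one environment-failure issue, nothing else
  clusterIssues: Failure[][]; // one issue per multi-test error cluster
  perTestIssues: Failure[];   // one issue per remaining test, capped
  overflowSummary: boolean;   // cap hit -> one extra summary issue
}

function planIssues(
  failures: Failure[],
  totalTests: number,
  thresholdPct = 20,
  maxIssues = 50,
): IssuePlan {
  const plan: IssuePlan = {
    environmentIssue: false, clusterIssues: [], perTestIssues: [], overflowSummary: false,
  };

  // Guard 1: failure rate above the threshold -> blame the environment.
  if (failures.length > (thresholdPct / 100) * totalTests) {
    plan.environmentIssue = true;
    return plan;
  }

  // Guard 2: group by error signature; groups of >1 share a single issue.
  const groups = new Map<string, Failure[]>();
  for (const f of failures) {
    groups.set(f.errorSignature, [...(groups.get(f.errorSignature) ?? []), f]);
  }
  const singles: Failure[] = [];
  for (const group of groups.values()) {
    if (group.length > 1) plan.clusterIssues.push(group);
    else singles.push(group[0]);
  }

  // Guard 3: hard-cap per-test issues; overflow collapses into one summary.
  plan.perTestIssues = singles.slice(0, maxIssues);
  plan.overflowSummary = singles.length > maxIssues;
  return plan;
}
```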

### GitHub Issues (recommended)

```bash
ado-sync publish-test-results \
  --testResult results/ctrf.json \
  --create-issues-on-failure \
  --github-repo myorg/myrepo \
  --github-token $GITHUB_TOKEN
```

Each issue is labelled `test-failure` and `tc:{ID}` (when a TC ID is available). The issue body contains the error message, stack trace, ADO TC link, and run URL — everything a healer agent needs to propose a fix PR.

### ADO Bugs

```bash
ado-sync publish-test-results \
  --testResult results/junit.xml \
  --create-issues-on-failure \
  --issue-provider ado
```

ADO Bugs are created as Bug work items in the same project. The `Repro Steps` field is populated with the error details. When a TC ID is known, a `TestedBy` relation is added linking the Bug to the Test Case.

### Config-based setup

```json
{
  "publishTestResults": {
    "createIssuesOnFailure": {
      "provider": "github",
      "repo": "myorg/myrepo",
      "token": "$GITHUB_TOKEN",
      "labels": ["test-failure", "automated"],
      "threshold": 20,
      "maxIssues": 50,
      "clusterByError": true,
      "dedupByTestCase": true
    }
  }
}
```

CLI flags override the config values when both are present.

### MCP tool: `create_issue`

The `create_issue` MCP tool lets healer agents file a single issue directly:

```
create_issue({
  title: "[FAILED] Login with valid credentials",
  body: "Error: Expected 200 but got 401\n\nStack: ...",
  provider: "github",
  githubRepo: "myorg/myrepo",
  githubToken: "$GITHUB_TOKEN",
  testCaseId: 1234
})
```

Returns the issue URL immediately, which the agent can embed in its fix PR.

---

## Output

```
ado-sync publish-test-results
Config: ado-sync.json

Total results: 42
  38 passed  3 failed  1 other
Run ID: 9876
URL: https://dev.azure.com/my-org/MyProject/_testManagement/runs?runId=9876