ado-sync 0.1.64 → 0.1.67

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (63)
  1. package/README.md +20 -15
  2. package/dist/__tests__/regressions.test.js +1011 -1
  3. package/dist/__tests__/regressions.test.js.map +1 -1
  4. package/dist/ai/generate-spec.d.ts +1 -1
  5. package/dist/ai/generate-spec.js +23 -0
  6. package/dist/ai/generate-spec.js.map +1 -1
  7. package/dist/ai/summarizer.d.ts +3 -2
  8. package/dist/ai/summarizer.js +50 -1
  9. package/dist/ai/summarizer.js.map +1 -1
  10. package/dist/azure/test-cases.d.ts +11 -1
  11. package/dist/azure/test-cases.js +286 -43
  12. package/dist/azure/test-cases.js.map +1 -1
  13. package/dist/cli.js +91 -14
  14. package/dist/cli.js.map +1 -1
  15. package/dist/config.js +74 -1
  16. package/dist/config.js.map +1 -1
  17. package/dist/id-markers.d.ts +1 -0
  18. package/dist/id-markers.js +13 -0
  19. package/dist/id-markers.js.map +1 -1
  20. package/dist/mcp-server.js +1 -1
  21. package/dist/mcp-server.js.map +1 -1
  22. package/dist/sync/cache.d.ts +2 -0
  23. package/dist/sync/cache.js.map +1 -1
  24. package/dist/sync/engine.d.ts +12 -1
  25. package/dist/sync/engine.js +210 -41
  26. package/dist/sync/engine.js.map +1 -1
  27. package/dist/types.d.ts +56 -4
  28. package/llms.txt +12 -11
  29. package/package.json +8 -1
  30. package/docs/advanced.md +0 -988
  31. package/docs/agent-setup.md +0 -204
  32. package/docs/capability-roadmap.md +0 -280
  33. package/docs/cli.md +0 -609
  34. package/docs/configuration.md +0 -322
  35. package/docs/examples/csharp-mstest-local-llm.yaml +0 -35
  36. package/docs/examples/csharp-mstest.yaml +0 -21
  37. package/docs/examples/csharp-nunit.yaml +0 -21
  38. package/docs/examples/csharp-specflow.yaml +0 -16
  39. package/docs/examples/cypress.yaml +0 -21
  40. package/docs/examples/detox-react-native.yaml +0 -21
  41. package/docs/examples/espresso-android.yaml +0 -21
  42. package/docs/examples/flutter-dart.yaml +0 -21
  43. package/docs/examples/java-junit.yaml +0 -21
  44. package/docs/examples/java-testng.yaml +0 -21
  45. package/docs/examples/js-jasmine-wdio.yaml +0 -21
  46. package/docs/examples/js-jest.yaml +0 -21
  47. package/docs/examples/playwright-js.yaml +0 -21
  48. package/docs/examples/playwright-ts.yaml +0 -21
  49. package/docs/examples/puppeteer.yaml +0 -21
  50. package/docs/examples/python-pytest.yaml +0 -21
  51. package/docs/examples/robot-framework.yaml +0 -19
  52. package/docs/examples/testcafe.yaml +0 -21
  53. package/docs/examples/xcuitest-ios.yaml +0 -21
  54. package/docs/mcp-server.md +0 -312
  55. package/docs/publish-test-results.md +0 -939
  56. package/docs/spec-formats.md +0 -1357
  57. package/docs/troubleshooting.md +0 -101
  58. package/docs/vscode-extension.md +0 -139
  59. package/docs/work-item-links.md +0 -115
  60. package/docs/workflows.md +0 -457
  61. package/mkdocs.yml +0 -40
  62. package/requirements-docs.txt +0 -4
  63. package/scripts/build_site.sh +0 -6
@@ -1,939 +0,0 @@
# publish-test-results

Parses test result files (TRX, NUnit XML, JUnit, Cucumber JSON, Playwright JSON, CTRF JSON) and publishes them to an Azure DevOps Test Run, linking results back to Test Cases either directly by TC ID (when available in the result file) or by `AutomatedTestName` matching.

---

## Usage

```bash
ado-sync publish-test-results \
  --testResult results/test-results.trx \
  --runName "CI run #42"

# Multiple result files
ado-sync publish-test-results \
  --testResult results/unit.trx \
  --testResult results/integration.xml \
  --testResultFormat junit

# Dry run — parse and summarise without publishing
ado-sync publish-test-results --testResult results/test.trx --dry-run

# Associate with a build
ado-sync publish-test-results \
  --testResult results/test.trx \
  --buildId 12345

# Publish to a specific planned suite/configuration
ado-sync publish-test-results \
  --testResult results/test.trx \
  --testPlan "Smoke Plan" \
  --testSuite "BDD" \
  --testConfiguration "Windows 10"
```

### Options

| Option | Description |
|--------|-------------|
| `--testResult <path>` | Path to a result file. Repeatable. |
| `--testResultFormat <format>` | `trx` · `nunitXml` · `junit` · `cucumberJson` · `playwrightJson` · `ctrfJson`. Auto-detected when omitted. |
| `--attachmentsFolder <path>` | Folder to scan for screenshots/videos/logs to attach to test results. |
| `--runName <name>` | Name for the Test Run in Azure DevOps. Defaults to `ado-sync <ISO timestamp>`. |
| `--buildId <id>` | Build ID to associate with the Test Run. |
| `--testConfiguration <nameOrId>` | Azure test configuration name or numeric ID for the published run. |
| `--testSuite <nameOrId>` | Azure test suite (name or ID) for **planned run** publication. Enables TC linkage. |
| `--testPlan <nameOrId>` | Azure test plan (name or ID). Used with `--testSuite`. Falls back to `testPlan.id` from config. |
| `--dry-run` | Parse results and print summary without creating a run in Azure. |
| `--create-issues-on-failure` | File GitHub Issues or ADO Bugs for each failed test after publishing. |
| `--issue-provider <github\|ado>` | Issue provider. Default: `github`. |
| `--github-repo <owner/repo>` | GitHub repository to file issues in. |
| `--github-token <token>` | GitHub token. Supports `$ENV_VAR` references. |
| `--bug-threshold <percent>` | If more than this % of tests fail, one environment-failure issue is filed instead of per-test issues. Default: `20`. |
| `--max-issues <n>` | Hard cap on issues filed per run. Default: `50`. |
| `--analyze-failures` | Use AI to analyse each failed test and post a root-cause + suggestion comment on the Azure test result. |
| `--ai-provider <provider>` | AI provider for failure analysis: `ollama`, `openai`, or `anthropic`. |
| `--ai-model <model>` | Model name (e.g. `gpt-4o-mini`, `claude-haiku-4-5-20251001`, `gemma-4-e4b-it`). |
| `--ai-url <url>` | Base URL for Ollama or an OpenAI-compatible endpoint. |
| `--ai-key <key>` | API key. Supports `$ENV_VAR` references. |
| `--config-override` | Override config values (repeatable, same as other commands). |
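
The `--bug-threshold` / `--max-issues` interplay can be sketched in a few lines of Python. This is an illustration of the documented rules, not ado-sync's actual (TypeScript) implementation; the function name and return shape are invented for the example:

```python
def plan_issue_filing(failed_tests, total, bug_threshold=20, max_issues=50):
    """Decide which issues to file for a run, per the documented rules.

    Above the threshold percentage, a single environment-failure issue is
    filed; otherwise one issue per failed test, capped at max_issues.
    """
    if total == 0 or not failed_tests:
        return []
    if len(failed_tests) / total * 100 > bug_threshold:
        return ["environment-failure"]
    return [f"failure: {name}" for name in failed_tests[:max_issues]]
```

With 30 failures out of 100 tests and the default threshold of 20, a single environment-failure issue is filed rather than 30 per-test issues.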
---

## Planned runs (TC linkage)

Azure DevOps **silently ignores** `testCase.id` on unplanned test runs. To link published results to Test Cases in the Test Plans UI, you must create a **planned run** that uses test points.

Pass `--testPlan` and `--testSuite` to enable planned-run mode:

```bash
ado-sync publish-test-results \
  --testPlan 32953 \
  --testSuite 32954 \
  --testResult results/junit.xml
```

How it works:

1. ado-sync resolves the test suite's test points (each point links a Test Case to a configuration).
2. A planned run is created with those point IDs — ADO pre-populates result slots linked to each TC.
3. Parsed results are matched to TCs by the `tc` property/tag in the result file.
4. Matched results are patched with outcome, duration, and error message.

Results **without** a TC ID are skipped with a warning (the run still succeeds).
Test Cases **without** a matching result keep the default "Active" outcome.

### Config equivalent

```yaml
publishTestResults:
  testSuite:
    id: 32954          # or name: "My Suite"
  testPlan: "32953"    # or plan name
  testConfiguration:
    id: 1              # optional — filter points by configuration
```
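
Step 3 — matching parsed results to the pre-populated point slots — can be sketched as follows. The dict shapes are hypothetical stand-ins for the Azure DevOps test-point objects and ado-sync's parsed results:

```python
def match_results_to_points(points, results):
    """Pair parsed results with planned-run test points by TC ID.

    Results without a TC ID, or whose TC ID has no point in the suite,
    are collected separately (ado-sync warns and skips them; the run
    still succeeds).
    """
    point_by_tc = {p["testCaseId"]: p["pointId"] for p in points}
    matched, skipped = [], []
    for result in results:
        tc_id = result.get("tc")
        if tc_id in point_by_tc:
            matched.append((point_by_tc[tc_id], result))
        else:
            skipped.append(result)
    return matched, skipped
```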

---

## AI failure analysis

When `--analyze-failures` is set, ado-sync calls the configured AI provider for each **failed** test result and posts a comment directly on the Azure DevOps test result containing:

- **Root cause** — a concise one-line explanation of why the test failed
- **Suggestion** — a concrete fix recommendation

The comment appears in Azure DevOps under the test result's **Comments** tab, alongside the error message and stack trace.

### CLI examples

```bash
# Analyse failures with OpenAI (gpt-4o-mini by default)
ado-sync publish-test-results \
  --testResult results/test.trx \
  --analyze-failures \
  --ai-provider openai \
  --ai-key $OPENAI_API_KEY

# Analyse with Claude (Haiku is fast and cost-effective)
ado-sync publish-test-results \
  --testResult results/playwright.json \
  --analyze-failures \
  --ai-provider anthropic \
  --ai-model claude-haiku-4-5-20251001 \
  --ai-key $ANTHROPIC_API_KEY

# Analyse with a local Ollama server (no cloud cost)
ado-sync publish-test-results \
  --testResult results/junit.xml \
  --analyze-failures \
  --ai-provider ollama \
  --ai-model gemma-4-e4b-it
```

### Config-based (no CLI flags needed)

```json
{
  "sync": {
    "ai": {
      "provider": "anthropic",
      "model": "claude-haiku-4-5-20251001",
      "apiKey": "$ANTHROPIC_API_KEY",
      "analyzeFailures": true
    }
  }
}
```

With this in place, every `publish-test-results` run automatically analyses failures — no extra CLI flags needed.
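
Values such as `"$ANTHROPIC_API_KEY"` are `$ENV_VAR` references resolved when the config is loaded. A rough sketch of that resolution — the behaviour on a missing variable is an assumption here (this sketch raises; the real tool may warn or pass the value through):

```python
import os

def resolve_env_ref(value, env=None):
    """Resolve a "$ENV_VAR" style string to the variable's value.

    Non-string values and plain strings pass through unchanged; only a
    leading "$" triggers an environment lookup.
    """
    env = os.environ if env is None else env
    if isinstance(value, str) and value.startswith("$"):
        name = value[1:]
        if name not in env:
            raise KeyError(f"environment variable {name} is not set")
        return env[name]
    return value
```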

### Supported providers

| Provider | Flag value | Notes |
|----------|-----------|-------|
| OpenAI | `openai` | Default model: `gpt-4o-mini`. Works with any OpenAI-compatible endpoint via `--ai-url`. |
| Anthropic | `anthropic` | Default model: `claude-haiku-4-5-20251001`. Fast and cost-effective. |
| Ollama | `ollama` | Default model: `gemma-4-e4b-it`. Runs locally — no cloud cost or data egress. |

> `heuristic` and `local` (node-llama-cpp) providers are not supported for failure analysis — they are suited for step generation, not conversational reasoning.

---

## Supported formats

| Format | Extension | Auto-detected | TC ID in file? | Attachments extracted |
|--------|-----------|---------------|----------------|----------------------|
| TRX (MSTest / SpecFlow / VSTest) | `.trx` | Yes (`<TestRun>` root) | Yes — via `[TestProperty("tc","ID")]` | `<Output><StdOut>` + `<Output><ResultFiles>` |
| NUnit XML (native) | `.xml` | Yes (`<test-run>` root) | Yes — via `[Property("tc","ID")]` | `<output>` + `<attachments>` |
| JUnit XML | `.xml` | Yes (`<testsuites>` / `<testsuite>` root) | Optional — via `<property name="tc" value="ID"/>` | `<system-out>`, `<system-err>`, `[[ATTACHMENT\|path]]` (Playwright) |
| Cucumber JSON | `.json` | Yes (JSON array, Cucumber format) | Yes — via `@tc:ID` tag on scenario | `step.embeddings[]` (base64 screenshots/video) |
| Playwright JSON | `.json` | Yes (JSON object with `suites` key) | Yes — via `test.annotations[{ type: 'tc', description: 'ID' }]` (preferred) or `@tc:ID` in test title | `test.results[].attachments[]` (screenshots, videos, traces) |
| Robot Framework XML | `output.xml` | Yes (`<robot>` root element) | Yes — via `<tags><tag>tc:ID</tag></tags>` | — |
| CTRF JSON | `.json` | Yes (`results.tests` array) | Yes — via `tags: ["@tc:ID"]` or `@tc:ID` in test name | `attachments[].path` files, `stdout`/`stderr` arrays |

> **NUnit via TRX**: when NUnit tests are run through the VSTest adapter (`--logger trx`), `[Property]` values are **not** included in the TRX output. Use `--logger "nunit3;LogFileName=results.xml"` to get the native NUnit XML format, which does include property values.

> **TRX `<ResultFiles>` nesting**: In TRX format, `<ResultFiles>` is a child of `<Output>`, not a direct child of `<UnitTestResult>`. ado-sync reads from `UnitTestResult > Output > ResultFiles > ResultFile` — paths are resolved relative to the result file's directory.

> **Attachment paths**: All file paths embedded in result files (`<ResultFile path="...">` in TRX, `<filePath>` in NUnit XML, `[[ATTACHMENT|path]]` in JUnit, `attachments[].path` in Playwright JSON) are resolved **relative to the result file's directory**, not the working directory. Ensure screenshots and other artifacts stay in the same folder hierarchy as your test runner produces.

> **Automated vs planned runs**: ado-sync creates **standalone automated runs** without a test plan association. Do not add `plan.id` to the run model — doing so makes Azure DevOps treat the run as "planned", requiring `testPointId` and `testCaseRevision` for every result (which ado-sync doesn't provide). TC linking is done via `testCase.id` on individual results, which works for automated runs without a plan association.

> **Valid attachment types**: Azure DevOps only accepts `GeneralAttachment` and `ConsoleLog` as `attachmentType` values. ado-sync maps screenshots, images, and binary files to `GeneralAttachment`; plain text and log files to `ConsoleLog`. Other type names (e.g. `Screenshot`, `Log`, `VideoLog`) will cause a 400 error.
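
The TRX nesting note can be made concrete with a short sketch using Python's standard XML parser — an illustration only (ado-sync itself is TypeScript), showing the lookup and the relative-path resolution:

```python
import os
import xml.etree.ElementTree as ET

def trx_result_files(trx_path):
    """Collect attachment paths from a TRX file.

    Walks every ResultFile element (in well-formed TRX these sit under
    UnitTestResult > Output > ResultFiles), matching by local name so the
    TRX XML namespace does not get in the way, and resolves each path
    relative to the result file's directory.
    """
    base_dir = os.path.dirname(os.path.abspath(trx_path))
    root = ET.parse(trx_path).getroot()
    paths = []
    for element in root.iter():
        if element.tag.rsplit("}", 1)[-1] == "ResultFile":
            rel = element.get("path")
            if rel:
                paths.append(os.path.normpath(os.path.join(base_dir, rel)))
    return paths
```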

### How TC linking works

Results are linked to Azure Test Cases in priority order:

1. **TC ID from file** (preferred) — when the result file contains a TC ID (`[TestProperty]`, `[Property]`, `<property name="tc">`, `@tc:` tag, or Playwright `test.annotations[{ type: 'tc' }]`), the result is posted with `testCase.id` set directly. This is robust to class/method renames.
2. **AutomatedTestName matching** (fallback) — when no TC ID is found, the result is posted with `automatedTestName` = the fully-qualified method name. Azure DevOps links it to a TC whose `AutomatedTestName` field matches. Requires `sync.markAutomated: true` on push.
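
The `@tc:` convention in priority 1 is simple to parse. A sketch of the extraction — the regex is illustrative, and ado-sync's per-format parsers may be stricter:

```python
import re

# Matches "@tc:123" in Gherkin/CTRF tags and Playwright titles, and the
# bare "tc:123" form Robot Framework uses in <tags>.
TC_PATTERN = re.compile(r"@?tc:(\d+)")

def extract_tc_id(text):
    """Return the first TC ID found in a tag or test title, else None."""
    match = TC_PATTERN.search(text)
    return int(match.group(1)) if match else None
```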

---

## Per-framework guide

### C# MSTest

```bash
dotnet test --logger "trx;LogFileName=results.trx"
ado-sync publish-test-results --testResult results/results.trx
```

TC IDs are read from `[TestProperty("tc","ID")]` embedded in the TRX — no extra config needed.

---

### C# NUnit

```bash
# Use native NUnit XML (includes [Property] values)
dotnet test --logger "nunit3;LogFileName=results.xml"
ado-sync publish-test-results --testResult results/results.xml

# TRX via VSTest adapter (TC IDs NOT included — uses AutomatedTestName matching)
dotnet test --logger "trx;LogFileName=results.trx"
ado-sync publish-test-results --testResult results/results.trx
```

---

### C# SpecFlow

SpecFlow generates TRX output via the VSTest adapter. ado-sync reads TC IDs from the Gherkin `@tc:ID` tag, which SpecFlow's runner embeds into the TRX as `[TestProperty("tc","ID")]`.

```bash
# 1. Push feature files to create TCs — @tc:ID is written back into .feature files
ado-sync push

# 2. Run SpecFlow tests (generates TRX)
dotnet test --logger "trx;LogFileName=results.trx"

# 3. Publish results
ado-sync publish-test-results --testResult results/results.trx
```

SpecFlow uses `local.type: gherkin` (same as Cucumber). The `@tc:ID` Gherkin tags double as TC IDs, and SpecFlow carries them into the TRX as `TestProperty` values automatically.

---

### Java JUnit 4 / JUnit 5 (Maven Surefire)

Maven Surefire generates JUnit XML with `classname` = FQCN and `name` = method name. ado-sync builds `automatedTestName` as `FQCN.methodName` on push, which matches the `classname.name` format in the JUnit XML automatically.

```bash
# Run tests (Surefire writes target/surefire-reports/TEST-*.xml)
mvn test

# Publish — TC linking uses AutomatedTestName matching
ado-sync publish-test-results \
  --testResult "target/surefire-reports/TEST-*.xml" \
  --testResultFormat junit
```

Recommended config:

```json
{ "sync": { "markAutomated": true } }
```

**Optional — write TC IDs into JUnit XML** for direct linking (more reliable):

Add a JUnit 5 extension or JUnit 4 rule that reads the `@Tag("tc:ID")` / `// @tc:ID` value and calls `recordProperty` to embed it into the XML. With Surefire, test properties are written as `<property>` elements inside each `<testcase>`.

---

### Java TestNG

TestNG's Surefire reporter generates the same JUnit XML format. Same commands as JUnit above.

```bash
mvn test   # or: ./gradlew test

ado-sync publish-test-results \
  --testResult "target/surefire-reports/TEST-*.xml" \
  --testResultFormat junit
```

---

### Python pytest

```bash
# Run tests and generate JUnit XML
pytest --junitxml=results/junit.xml

# Publish — uses AutomatedTestName matching (classname.name from JUnit XML)
ado-sync publish-test-results --testResult results/junit.xml --testResultFormat junit
```

Recommended config:

```json
{ "sync": { "markAutomated": true } }
```

**Optional — embed TC IDs into JUnit XML** for direct linking.

Add the following to your `conftest.py`:

```python
# conftest.py
def pytest_runtest_makereport(item, call):
    """Write @pytest.mark.tc(N) as a JUnit XML property for ado-sync to pick up."""
    if call.when != "call":
        return  # hook fires for setup/call/teardown; record only once
    for marker in item.iter_markers("tc"):
        if marker.args:
            item.user_properties.append(("tc", str(marker.args[0])))
```

With this hook, pytest writes:

```xml
<testcase name="test_foo" classname="tests.module.TestClass">
  <properties>
    <property name="tc" value="1041"/>
  </properties>
</testcase>
```

ado-sync will extract the `tc` property and link the result directly to TC 1041, without needing AutomatedTestName matching.

---

### JavaScript / TypeScript — Jest

Install `jest-junit`:

```bash
npm install --save-dev jest-junit
```

Run tests:

```bash
JEST_JUNIT_OUTPUT_DIR=results JEST_JUNIT_OUTPUT_NAME=junit.xml \
  npx jest --reporters=default --reporters=jest-junit
```

Publish:

```bash
ado-sync publish-test-results --testResult results/junit.xml --testResultFormat junit
```

> **TC linking for Jest**: jest-junit does not embed TC IDs in the XML. Linking uses `AutomatedTestName` matching. Set `sync.markAutomated: true` and ensure the `JEST_JUNIT_CLASSNAME` and `JEST_JUNIT_TITLE` env vars match the `automatedTestName` format stored in the TC (`{fileBasename} > {describe} > {testTitle}`).
>
> Set these env vars to align the format:
> ```
> JEST_JUNIT_CLASSNAME="{classname}"   # default: suite hierarchy
> JEST_JUNIT_TITLE="{title}"           # default: test title
> ```

---

### JavaScript / TypeScript — WebdriverIO

WebdriverIO supports JUnit XML via `@wdio/junit-reporter`:

```bash
# Install (if not already present)
npm install --save-dev @wdio/junit-reporter
```

Add to `wdio.conf.ts`:

```typescript
reporters: [['junit', { outputDir: './results', outputFileFormat: () => 'junit.xml' }]]
```

Run tests:

```bash
npx wdio run wdio.conf.ts
```

Publish:

```bash
ado-sync publish-test-results --testResult results/junit.xml --testResultFormat junit
```

---

### Gherkin / Cucumber (JS)

Cucumber JS with Selenium captures screenshots/videos as base64 `embeddings` inside each step. These are extracted automatically and attached to the test result in Azure DevOps.

```bash
# Run with Cucumber JSON reporter (includes step embeddings)
npx cucumber-js --format json:results/cucumber.json

# Publish — TC IDs from @tc:ID tags, screenshots from embeddings
ado-sync publish-test-results --testResult results/cucumber.json
```

TC IDs from `@tc:12345` tags are extracted directly. Screenshots embedded by Selenium/WebDriver hooks are uploaded automatically.
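
Each step embedding pairs a MIME type with base64 data. A sketch of decoding one to a file — the `mime_type`/`data` keys follow the common Cucumber JSON shape, but formatter versions vary, so treat the field names as assumptions:

```python
import base64
import mimetypes

def save_embedding(embedding, dest_stem):
    """Decode one Cucumber JSON step embedding to a file on disk.

    `embedding` is a dict like {"mime_type": "image/png", "data": "<base64>"}.
    The extension is derived from the MIME type, falling back to .bin.
    """
    ext = mimetypes.guess_extension(embedding["mime_type"]) or ".bin"
    path = dest_stem + ext
    with open(path, "wb") as f:
        f.write(base64.b64decode(embedding["data"]))
    return path
```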

---

### Playwright

Playwright supports two result formats — both extract attachments (screenshots, videos, traces):

**Option A — Playwright JSON reporter** (recommended, includes all attachments):

```bash
# playwright.config.ts
# reporter: [['json', { outputFile: 'results/playwright.json' }]]

npx playwright test
ado-sync publish-test-results --testResult results/playwright.json
```

Screenshots on failure, videos, and trace files are uploaded automatically from the `test-results/` folder referenced in the JSON.

**Option B — JUnit XML reporter** (for CI systems that need JUnit format):

```bash
# playwright.config.ts
# reporter: [['junit', { outputFile: 'results/junit.xml' }]]

npx playwright test
ado-sync publish-test-results --testResult results/junit.xml --testResultFormat junit
```

Playwright embeds `[[ATTACHMENT|path]]` markers in `<system-out>` — ado-sync reads these and uploads the referenced files (screenshots, videos).
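
The marker format is easy to recognise. A sketch of pulling the referenced paths out of a `<system-out>` blob (remember they resolve relative to the result file's directory):

```python
import re

# Playwright's JUnit reporter writes markers such as
#   [[ATTACHMENT|test-results/login-failed/screenshot.png]]
# into <system-out>.
ATTACHMENT_MARKER = re.compile(r"\[\[ATTACHMENT\|([^\]]+)\]\]")

def attachment_paths(system_out):
    """Return every attachment path referenced in a system-out text blob."""
    return ATTACHMENT_MARKER.findall(system_out)
```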

**Using `--attachmentsFolder` for extra files:**

```bash
ado-sync publish-test-results \
  --testResult results/junit.xml \
  --attachmentsFolder test-results
```

---

### TestCafe

TestCafe requires the `testcafe-reporter-junit` package to produce JUnit XML:

```bash
npm install --save-dev testcafe-reporter-junit
```

```bash
# Run tests with JUnit reporter
npx testcafe chrome tests/ --reporter junit:results/junit.xml

# Publish results
ado-sync publish-test-results --testResult results/junit.xml --testResultFormat junit
```

TC linking uses `AutomatedTestName` matching — set `sync.markAutomated: true` on push. The `automatedTestName` format stored in the TC is `{fileBasename} > {fixture} > {testTitle}`, which matches the JUnit `suiteName.testName` format produced by `testcafe-reporter-junit`.

> **TC IDs are not embedded in JUnit output**: TestCafe's `test.meta('tc', 'N')` metadata is not written into the JUnit XML. Linking relies on AutomatedTestName matching only.

---

### Cypress

Cypress has a built-in JUnit reporter via Mocha:

```bash
# Run tests with JUnit reporter (outputs one file per spec by default)
npx cypress run \
  --reporter junit \
  --reporter-options "mochaFile=results/junit-[hash].xml"

# Publish all result files
ado-sync publish-test-results \
  --testResult "results/junit-*.xml" \
  --testResultFormat junit
```

TC linking uses `AutomatedTestName` matching — set `sync.markAutomated: true` on push.

> **JUnit classname format**: By default Cypress sets `classname` to the spec file path and `name` to the test title. Ensure your `automatedTestName` format matches by setting `reporterOptions: { suiteTitleSeparatedBy: ' > ' }` in `cypress.config.js`.

---

### Detox (React Native)

Detox uses Jest as its runner — use `jest-junit` the same way as Jest:

```bash
npm install --save-dev jest-junit
```

```bash
# Run Detox tests
JEST_JUNIT_OUTPUT_DIR=results JEST_JUNIT_OUTPUT_NAME=junit.xml \
  npx detox test --configuration ios.sim.release

# Publish results
ado-sync publish-test-results --testResult results/junit.xml --testResultFormat junit
```

TC linking uses `AutomatedTestName` matching — set `sync.markAutomated: true` on push.

---

### XCUITest (iOS / macOS)

Export results from Xcode's `.xcresult` bundle to JUnit XML using `xcresulttool`:

```bash
# Run tests and save result bundle
xcodebuild test \
  -project MyApp.xcodeproj \
  -scheme MyApp \
  -destination 'platform=iOS Simulator,name=iPhone 15' \
  -resultBundlePath TestResults.xcresult

# Export JUnit XML
xcrun xcresulttool get --path TestResults.xcresult --format junit > results/junit.xml

# Publish results
ado-sync publish-test-results --testResult results/junit.xml --testResultFormat junit
```

TC linking uses `AutomatedTestName` matching — set `sync.markAutomated: true` on push. The `automatedTestName` format is `{className}/{funcTestName}` (e.g. `LoginUITests/testValidCredentialsNavigateToInventory`).

> **Attachments**: `xcresulttool` does not embed screenshots in the JUnit export. Use `--attachmentsFolder` to attach screenshot files produced by your test hooks separately.

---

### Espresso (Android)

Gradle's `connectedAndroidTest` task writes JUnit XML to `app/build/outputs/androidTest-results/connected/`:

```bash
# Run instrumented tests
./gradlew connectedAndroidTest

# Publish results
ado-sync publish-test-results \
  --testResult "app/build/outputs/androidTest-results/connected/TEST-*.xml" \
  --testResultFormat junit
```

TC linking uses `AutomatedTestName` matching — set `sync.markAutomated: true` on push. The `automatedTestName` format is `{packageName}.{ClassName}.{methodName}`.

---

### Robot Framework

Robot Framework writes test results to `output.xml` by default. ado-sync auto-detects this format from the `<robot>` root element.

```bash
# Run Robot Framework tests (generates output.xml)
robot --outputdir results tests/

# Publish results — TC IDs from [Tags] tc:N values
ado-sync publish-test-results --testResult results/output.xml
```

TC IDs are extracted directly from the `<tags>` element in `output.xml` — the same `tc:N` tag written back by `ado-sync push`. No `--testResultFormat` flag is needed; the format is auto-detected.

```bash
# Custom output file location
robot --outputdir results --output my-results.xml tests/
ado-sync publish-test-results --testResult results/my-results.xml
```

Recommended config:

```json
{ "sync": { "markAutomated": true } }
```

> **TC linking for Robot**: `output.xml` includes `tc:N` tags in `<tags>` elements. ado-sync uses these for direct TC linking. If a test has no `tc:N` tag, it falls back to `AutomatedTestName` matching using the suite.test name path (e.g. `SuiteName.Test Case Name`).

---

### CTRF (Common Test Report Format)

[CTRF](https://ctrf.io) is a framework-agnostic JSON report format supported by reporters for Playwright, Cypress, Jest, k6, and many others. ado-sync auto-detects CTRF from the `results.tests` array structure.

```bash
# Example: Playwright with CTRF reporter
npm install --save-dev playwright-ctrf-json-reporter

# playwright.config.ts:
# reporter: [['playwright-ctrf-json-reporter', { outputFile: 'results/ctrf.json' }]]

npx playwright test
ado-sync publish-test-results --testResult results/ctrf.json
```

```bash
# Example: Jest with CTRF reporter
npm install --save-dev jest-ctrf-json-reporter

# jest.config.ts:
# reporters: [['jest-ctrf-json-reporter', { outputFile: 'results/ctrf.json' }]]

npx jest
ado-sync publish-test-results --testResult results/ctrf.json
```

TC IDs are extracted from the `tags` array (e.g. `["@tc:1234", "@smoke"]`) or, as a fallback, from `@tc:ID` in the test name. `stdout`/`stderr` arrays and `attachments[].path` files are uploaded automatically.

> **Status mapping**: CTRF `passed` → `Passed`, `failed` → `Failed`, `skipped`/`pending`/`other` → `NotExecuted`.
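
The status mapping is a straight lookup, sketched here in Python for clarity (ado-sync itself is TypeScript; the default for unknown statuses is an assumption mirroring the `other` bucket):

```python
# CTRF test status → Azure DevOps outcome, per the mapping documented above.
CTRF_TO_ADO = {
    "passed": "Passed",
    "failed": "Failed",
    "skipped": "NotExecuted",
    "pending": "NotExecuted",
    "other": "NotExecuted",
}

def map_ctrf_status(status):
    """Map a CTRF test status to an Azure DevOps outcome string."""
    return CTRF_TO_ADO.get(status, "NotExecuted")
```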

---

### Flutter

Flutter can produce JUnit XML via the `flutter_test_junit` package or via the built-in `--reporter junit` option:

```bash
# Option A — built-in reporter (Flutter ≥ 3.7)
flutter test --reporter junit > results/junit.xml

# Option B — flutter_test_junit package
flutter pub add --dev flutter_test_junit
dart run flutter_test_junit:main > results/junit.xml
```

```bash
# Publish results
ado-sync publish-test-results --testResult results/junit.xml --testResultFormat junit
```

TC linking uses `AutomatedTestName` matching — set `sync.markAutomated: true` on push.

---

| Framework | Result format | TC ID in file | Attachments uploaded | Live-tested |
|-----------|---------------|---------------|----------------------|-------------|
| C# MSTest | TRX | ✅ `[TestProperty("tc","ID")]` | `<Output><StdOut>` + `<Output><ResultFiles>` files | ✅ |
| C# NUnit | NUnit XML | ✅ `[Property("tc","ID")]` | `<output>` text + `<attachments><filePath>` files | ✅ |
| C# SpecFlow | TRX | ✅ `@tc:ID` → `[TestProperty]` | `<Output><StdOut>` + `<Output><ResultFiles>` files | ✅ |
| Java JUnit 4/5 | JUnit XML | ⚠️ optional `<property name="tc">` | `<system-out>`, `<system-err>` | ✅ |
| Java TestNG | JUnit XML | ⚠️ optional `<property name="tc">` | `<system-out>`, `<system-err>` | ✅ |
| Python pytest | JUnit XML | ⚠️ optional (conftest.py hook) | `<system-out>`, `<system-err>` | ✅ |
| Jest | JUnit XML | ⚠️ optional `<property name="tc">` | `<system-out>`, `<system-err>` | ✅ |
| WebdriverIO / Jasmine | JUnit XML | ⚠️ optional `<property name="tc">` | `<system-out>`, `<system-err>` | ✅ |
| Cucumber JS | Cucumber JSON | ✅ `@tc:ID` tag | `step.embeddings[]` (base64 screenshots/video) | ✅ |
| Playwright | Playwright JSON | ✅ native `annotation: { type: 'tc', description: 'ID' }`; or `@tc:ID` in test title | Files from `attachments[].path` (screenshots, videos, traces) | ✅ |
| Playwright | JUnit XML | ⚠️ `@tc:ID` in test title only (no annotation in JUnit format) | `[[ATTACHMENT\|path]]` referenced files | ✅ |
| TestCafe | JUnit XML | ❌ AutomatedTestName matching only | `<system-out>`, `<system-err>` | |
| Cypress | JUnit XML | ❌ AutomatedTestName matching only | `<system-out>`, `<system-err>` | |
| Detox | JUnit XML | ❌ AutomatedTestName matching only | `<system-out>`, `<system-err>` | |
| XCUITest | JUnit XML | ❌ AutomatedTestName matching only | none (use `--attachmentsFolder`) | |
| Espresso | JUnit XML | ❌ AutomatedTestName matching only | `<system-out>`, `<system-err>` | |
| Flutter | JUnit XML | ❌ AutomatedTestName matching only | `<system-out>`, `<system-err>` | |
| Robot Framework | Robot XML (`output.xml`) | ✅ `tc:N` in `<tags>` | — | |
| CTRF (any framework) | CTRF JSON | ✅ `tags: ["@tc:ID"]` or `@tc:ID` in name | `attachments[].path` files + `stdout`/`stderr` | |

---

## Attachments

ado-sync uploads screenshots, videos, and logs from test results to the corresponding Azure DevOps test result entry. Attachments appear in the Azure Test Plans UI under each result.

### What is extracted automatically per format

| Format | Extracted automatically |
|--------|------------------------|
| TRX | `<Output><StdOut>` → console log; `<Output><ResultFiles><ResultFile path="...">` → files on disk |
| NUnit XML | `<output>` → console log; `<attachments><attachment><filePath>` → files on disk |
| JUnit XML | `<system-out>` → log; `<system-err>` → log; `[[ATTACHMENT\|path]]` → Playwright files |
| Cucumber JSON | `step.embeddings[]` → base64-encoded screenshots/video |
| Playwright JSON | `results[].attachments[].path` → files on disk (screenshots, videos, traces) |
| CTRF JSON | `tests[].attachments[].path` → files on disk; `tests[].stdout[]` / `tests[].stderr[]` → console logs |

> **Note**: All file paths are resolved relative to the result file's directory, not the process working directory. This matches how test runners (Playwright, MSTest, NUnit) write relative paths in their output.

> **Attachment types**: Azure DevOps accepts only `GeneralAttachment` (images, videos, binary files) and `ConsoleLog` (text, logs) as attachment type values. ado-sync maps all file types to one of these two automatically.

### `--attachmentsFolder` — folder-based attachment upload

For any framework, point ado-sync at a folder containing screenshots and videos:

```bash
ado-sync publish-test-results \
  --testResult results/junit.xml \
  --attachmentsFolder test-results/screenshots
```

Files are matched to individual test results by looking for the test method name in the filename (case-insensitive). For example, `addItemAndCheckout_failed.png` → matched to `com.example.CheckoutTests.addItemAndCheckout`.

Unmatched files are attached at the test run level.
679
- ### Config-based attachment folder
680
-
681
- ```json
682
- {
683
- "publishTestResults": {
684
- "attachments": {
685
- "folder": "test-results/screenshots",
686
- "include": ["**/*.png", "**/*.mp4"],
687
- "matchByTestName": true
688
- },
689
- "publishAttachmentsForPassingTests": "files"
690
- }
691
- }
692
- ```
693
-
694
- ### `publishAttachmentsForPassingTests`
695
-
696
- Controls how attachments are handled for **passing** tests (failing tests always get all attachments):
697
-
698
- | Value | Behaviour |
699
- |-------|-----------|
700
- | `"none"` *(default)* | No attachments uploaded for passing tests |
701
- | `"files"` | Screenshots and videos uploaded; console logs skipped |
702
- | `"all"` | All attachments including logs uploaded for passing tests |
703
-
704
- ### Framework-specific setup
705
-
706
- **C# MSTest — attach files from test code:**
707
-
708
- ```csharp
709
- TestContext.AddResultFile("screenshots/mytest.png");
710
- ```
711
-
712
- **C# NUnit — attach files:**
713
-
714
- ```csharp
715
- TestContext.AddAttachment("screenshots/mytest.png");
716
- ```
717
-
718
- **Java (JUnit/TestNG) — capture Selenium screenshot and write to system-out:**
719
-
720
- ```java
721
- // In an @AfterMethod / @After hook:
722
- File screenshot = ((TakesScreenshot) driver).getScreenshotAs(OutputType.FILE);
723
- // Surefire writes <system-out> to JUnit XML — log the path:
724
- System.out.println("Screenshot: " + screenshot.getAbsolutePath());
725
- // Or use --attachmentsFolder to pick up the file directly
726
- ```
727
-
728
- **Python pytest — capture screenshot via conftest.py:**
729
-
730
- ```python
731
- # conftest.py
732
- @pytest.fixture(autouse=True)
733
- def screenshot_on_failure(request, driver):
734
- yield
735
- if request.node.rep_call.failed:
736
- driver.save_screenshot(f"screenshots/{request.node.name}.png")
737
- ```
738
-
739
- Then publish with:
740
- ```bash
741
- ado-sync publish-test-results \
742
- --testResult results/junit.xml \
743
- --attachmentsFolder screenshots
744
- ```
745
-
746
- **Cucumber JS — embed screenshot in step hook:**
747
-
748
- ```javascript
749
- // hooks.js
750
- After(async function({ pickle, result }) {
751
- if (result.status === Status.FAILED) {
752
- const screenshot = await driver.takeScreenshot();
753
- this.attach(Buffer.from(screenshot, 'base64'), 'image/png');
754
- }
755
- });
756
- ```
757
-
758
- Screenshots are embedded as base64 in the Cucumber JSON and uploaded automatically.
759
-
760
- **Playwright — screenshots and videos are automatic** when configured in `playwright.config.ts`:
761
-
762
- ```typescript
763
- use: {
764
- screenshot: 'only-on-failure',
765
- video: 'retain-on-failure',
766
- trace: 'on-first-retry',
767
- }
768
- ```
769
-
770
- Use the Playwright JSON reporter — attachments are uploaded automatically.
771
-
772
- ---
773
-
774
- ## Outcome mapping
775
-
776
- | Source outcome | Azure outcome |
777
- |----------------|---------------|
778
- | `passed` / `pass` / `success` | `Passed` |
779
- | `failed` / `fail` / `failure` / `error` | `Failed` |
780
- | `skipped` / `ignored` / `pending` / `notExecuted` | `NotExecuted` |
781
- | `inconclusive` | `Inconclusive` (or override with `treatInconclusiveAs`) |
782
-
783
- ---
784
-
785
- ## Configuration
786
-
787
- Results can also be configured in the config file under `publishTestResults`:
788
-
789
- ```json
790
- {
791
- "publishTestResults": {
792
- "testResult": {
793
- "sources": [
794
- { "value": "results/unit.trx", "format": "trx" },
795
- { "value": "results/integration.xml", "format": "junit" }
796
- ]
797
- },
798
- "treatInconclusiveAs": "Failed",
799
- "testRunSettings": {
800
- "name": "My CI Run",
801
- "comment": "Automated sync run",
802
- "runType": "Automated"
803
- },
804
- "testResultSettings": {
805
- "comment": "Published by ado-sync"
806
- },
807
- "testConfiguration": {
808
- "name": "Default"
809
- }
810
- }
811
- }
812
- ```
813
-
814
- ### `publishTestResults` fields
815
-
816
- | Field | Description |
817
- |-------|-------------|
818
- | `testResult.sources` | Array of `{ value, format }` objects. `value` is a path relative to config dir. |
819
- | `treatInconclusiveAs` | Override inconclusive outcome. e.g. `"Failed"` or `"NotExecuted"`. |
820
- | `flakyTestOutcome` | How to handle flaky tests: `"lastAttemptOutcome"` *(default)* · `"firstAttemptOutcome"` · `"worstOutcome"`. |
821
- | `testConfiguration.name` | Name of the Azure test configuration to associate. |
822
- | `testConfiguration.id` | ID of the Azure test configuration. |
823
- | `testSuite.name` | Name of the Azure test suite to publish against. Requires every result to resolve to a test case ID. |
824
- | `testSuite.id` | ID of the Azure test suite to publish against. |
825
- | `testSuite.testPlan` | Optional test plan name or ID used when resolving the target suite. Defaults to `testPlan.id` from the main config. |
826
- | `testRunSettings.name` | Name for the Test Run. |
827
- | `testRunSettings.comment` | Comment attached to the Test Run. |
828
- | `testRunSettings.runType` | `"Automated"` *(default)* · `"Manual"`. |
829
- | `testResultSettings.comment` | Comment applied to every test result. |
830
- | `publishAttachmentsForPassingTests` | `"none"` *(default)* · `"files"` · `"all"`. |
831
-
832
- When `testSuite` is configured, ado-sync creates a planned run and binds each published result to the matching test point in that suite. If a suite contains multiple configurations for the same test case, set `testConfiguration.id` or `testConfiguration.name` so the target point can be resolved unambiguously.
833
-
834
- ---
835
-
836
- ## Creating issues on failure
837
-
838
- `--create-issues-on-failure` automatically files a GitHub Issue or ADO Bug for each failed test
839
- after the run is published. Multiple guards prevent flooding your tracker when the environment is
840
- the problem rather than individual tests.
841
-
842
- ### Guard logic (applied in order)
843
-
844
- ```
845
- failures > threshold% of total?
846
- └─ YES → 1 environment-failure issue, stop
847
- └─ NO
848
- └─ cluster by error signature
849
- └─ cluster size > 1?
850
- └─ YES → 1 issue per cluster (lists affected test names)
851
- └─ NO → 1 issue per TC (up to maxIssues cap)
852
- └─ cap hit? → 1 overflow summary issue
853
- ```
854
-
855
- | Guard | Default | Description |
856
- |---|---|---|
857
- | Failure-rate threshold | 20% | Above this, one env-failure issue is filed instead of per-test |
858
- | Error clustering | enabled | Tests with the same error message are grouped into one issue |
859
- | Hard cap | 50 | No more than this many issues per run; one overflow summary when exceeded |
860
- | Dedup | enabled | Skip if an open issue already exists for the same TC (GitHub: by `tc:ID` label; ADO: by title) |
861
-
862
- ### GitHub Issues (recommended)
863
-
864
- ```bash
865
- ado-sync publish-test-results \
866
- --testResult results/ctrf.json \
867
- --create-issues-on-failure \
868
- --github-repo myorg/myrepo \
869
- --github-token $GITHUB_TOKEN
870
- ```
871
-
872
- Each issue is labelled `test-failure` and `tc:{ID}` (when a TC ID is available). The issue body
873
- contains the error message, stack trace, ADO TC link, and run URL — everything a healer agent
874
- needs to propose a fix PR.
875
-
876
- ### ADO Bugs
877
-
878
- ```bash
879
- ado-sync publish-test-results \
880
- --testResult results/junit.xml \
881
- --create-issues-on-failure \
882
- --issue-provider ado
883
- ```
884
-
885
- ADO Bugs are created as Bug work items in the same project. The `Repro Steps` field is populated
886
- with the error details. When a TC ID is known, a `TestedBy` relation is added linking the Bug to
887
- the Test Case.
888
-
889
- ### Config-based setup
890
-
891
- ```json
892
- {
893
- "publishTestResults": {
894
- "createIssuesOnFailure": {
895
- "provider": "github",
896
- "repo": "myorg/myrepo",
897
- "token": "$GITHUB_TOKEN",
898
- "labels": ["test-failure", "automated"],
899
- "threshold": 20,
900
- "maxIssues": 50,
901
- "clusterByError": true,
902
- "dedupByTestCase": true
903
- }
904
- }
905
- }
906
- ```
907
-
908
- CLI flags override the config values when both are present.
909
-
910
- ### MCP tool: `create_issue`
911
-
912
- The `create_issue` MCP tool lets healer agents file a single issue directly:
913
-
914
- ```
915
- create_issue({
916
- title: "[FAILED] Login with valid credentials",
917
- body: "Error: Expected 200 but got 401\n\nStack: ...",
918
- provider: "github",
919
- githubRepo: "myorg/myrepo",
920
- githubToken: "$GITHUB_TOKEN",
921
- testCaseId: 1234
922
- })
923
- ```
924
-
925
- Returns the issue URL immediately, which the agent can embed in its fix PR.
926
-
927
- ---
928
-
929
- ## Output
930
-
931
- ```
932
- ado-sync publish-test-results
933
- Config: ado-sync.json
934
-
935
- Total results: 42
936
- 38 passed 3 failed 1 other
937
- Run ID: 9876
938
- URL: https://dev.azure.com/my-org/MyProject/_testManagement/runs?runId=9876
939
- ```