ado-sync 0.1.64 → 0.1.67

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (63)
  1. package/README.md +20 -15
  2. package/dist/__tests__/regressions.test.js +1011 -1
  3. package/dist/__tests__/regressions.test.js.map +1 -1
  4. package/dist/ai/generate-spec.d.ts +1 -1
  5. package/dist/ai/generate-spec.js +23 -0
  6. package/dist/ai/generate-spec.js.map +1 -1
  7. package/dist/ai/summarizer.d.ts +3 -2
  8. package/dist/ai/summarizer.js +50 -1
  9. package/dist/ai/summarizer.js.map +1 -1
  10. package/dist/azure/test-cases.d.ts +11 -1
  11. package/dist/azure/test-cases.js +286 -43
  12. package/dist/azure/test-cases.js.map +1 -1
  13. package/dist/cli.js +91 -14
  14. package/dist/cli.js.map +1 -1
  15. package/dist/config.js +74 -1
  16. package/dist/config.js.map +1 -1
  17. package/dist/id-markers.d.ts +1 -0
  18. package/dist/id-markers.js +13 -0
  19. package/dist/id-markers.js.map +1 -1
  20. package/dist/mcp-server.js +1 -1
  21. package/dist/mcp-server.js.map +1 -1
  22. package/dist/sync/cache.d.ts +2 -0
  23. package/dist/sync/cache.js.map +1 -1
  24. package/dist/sync/engine.d.ts +12 -1
  25. package/dist/sync/engine.js +210 -41
  26. package/dist/sync/engine.js.map +1 -1
  27. package/dist/types.d.ts +56 -4
  28. package/llms.txt +12 -11
  29. package/package.json +8 -1
  30. package/docs/advanced.md +0 -988
  31. package/docs/agent-setup.md +0 -204
  32. package/docs/capability-roadmap.md +0 -280
  33. package/docs/cli.md +0 -609
  34. package/docs/configuration.md +0 -322
  35. package/docs/examples/csharp-mstest-local-llm.yaml +0 -35
  36. package/docs/examples/csharp-mstest.yaml +0 -21
  37. package/docs/examples/csharp-nunit.yaml +0 -21
  38. package/docs/examples/csharp-specflow.yaml +0 -16
  39. package/docs/examples/cypress.yaml +0 -21
  40. package/docs/examples/detox-react-native.yaml +0 -21
  41. package/docs/examples/espresso-android.yaml +0 -21
  42. package/docs/examples/flutter-dart.yaml +0 -21
  43. package/docs/examples/java-junit.yaml +0 -21
  44. package/docs/examples/java-testng.yaml +0 -21
  45. package/docs/examples/js-jasmine-wdio.yaml +0 -21
  46. package/docs/examples/js-jest.yaml +0 -21
  47. package/docs/examples/playwright-js.yaml +0 -21
  48. package/docs/examples/playwright-ts.yaml +0 -21
  49. package/docs/examples/puppeteer.yaml +0 -21
  50. package/docs/examples/python-pytest.yaml +0 -21
  51. package/docs/examples/robot-framework.yaml +0 -19
  52. package/docs/examples/testcafe.yaml +0 -21
  53. package/docs/examples/xcuitest-ios.yaml +0 -21
  54. package/docs/mcp-server.md +0 -312
  55. package/docs/publish-test-results.md +0 -939
  56. package/docs/spec-formats.md +0 -1357
  57. package/docs/troubleshooting.md +0 -101
  58. package/docs/vscode-extension.md +0 -139
  59. package/docs/work-item-links.md +0 -115
  60. package/docs/workflows.md +0 -457
  61. package/mkdocs.yml +0 -40
  62. package/requirements-docs.txt +0 -4
  63. package/scripts/build_site.sh +0 -6
package/docs/advanced.md DELETED
@@ -1,988 +0,0 @@
# Advanced configuration

---

## Format configuration

`sync.format` controls how test case content is structured when pushed to Azure DevOps.

| Field | Default | Description |
|-------|---------|-------------|
| `prefixTitle` | `true` | Prefix TC title with `"Scenario: "` or `"Scenario Outline: "`. Set `false` to use the raw scenario name. |
| `prefixBackgroundSteps` | `true` | Include Background steps in the TC steps list, prefixed with `"Background: "`. Set `false` to exclude them. |
| `useExpectedResult` | `false` | When `true`, `Then`/`Verify` steps are moved to the Expected Result column instead of the Action column. |
| `syncDataTableAsText` | `false` | When `true`, inline Gherkin data tables are appended to the step action as plain `\| cell \| cell \|` text instead of being handled as sub-steps. |
| `showParameterListStep` | `"whenUnusedParameters"` | Append a `Parameters: @p1@, @p2@, ...` step to parametrized TCs. `"always"` — always append. `"never"` — never append. `"whenUnusedParameters"` — append only when at least one parameter is not already referenced in a step. |
| `emptyActionValue` | *(blank)* | Value to use when a step action would be empty (e.g. when `useExpectedResult` moves a step to the expected column). |
| `emptyExpectedResultValue` | *(blank)* | Value to use when the expected result column would be empty. |

### Example

```json
{
  "sync": {
    "format": {
      "prefixTitle": false,
      "useExpectedResult": true,
      "showParameterListStep": "always",
      "emptyActionValue": "-"
    }
  }
}
```
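The `"whenUnusedParameters"` default can be pictured as a small decision function (a hypothetical helper for illustration, not ado-sync's actual code), assuming parameters are referenced in steps as `@name@`:

```typescript
// Sketch of the showParameterListStep decision. Illustrative only —
// not ado-sync's implementation.
function shouldAppendParameterListStep(
  mode: "always" | "never" | "whenUnusedParameters",
  steps: string[],
  parameters: string[],
): boolean {
  if (mode === "always") return parameters.length > 0;
  if (mode === "never") return false;
  // "whenUnusedParameters": append only when at least one parameter
  // is not already referenced in any step.
  return parameters.some((p) => !steps.some((s) => s.includes(`@${p}@`)));
}
```

With steps `["Enter @user@"]` and parameters `["user", "password"]`, the default mode appends the step because `password` is never referenced.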
---

## State configuration

`sync.state` sets the Azure Test Case `State` field whenever a scenario is created or updated.

| Field | Description |
|-------|-------------|
| `setValueOnChangeTo` | The state value to set, e.g. `"Design"`, `"Ready"`. |
| `condition` | *(Optional)* Tag expression. Only scenarios matching this expression trigger the state change. |

```json
{
  "sync": {
    "state": {
      "setValueOnChangeTo": "Design",
      "condition": "@active"
    }
  }
}
```

---

## Field updates

`sync.fieldUpdates` applies custom field values on push. Each key is an Azure DevOps field reference name (e.g. `"System.AreaPath"`) or display name.

### Simple value (always set)

```json
{
  "sync": {
    "fieldUpdates": {
      "Custom.AutomationStatus": "Automated",
      "System.AreaPath": "MyProject\\QA Team"
    }
  }
}
```

### Conditional value (switch by tag)

```json
{
  "sync": {
    "fieldUpdates": {
      "System.AreaPath": {
        "conditionalValue": {
          "@smoke": "MyProject\\Smoke",
          "@regression": "MyProject\\Regression",
          "otherwise": "MyProject\\General"
        }
      }
    }
  }
}
```

### Tag wildcard capture

Wildcard `*` captures the matched portion and exposes it as `{1}`, `{2}`, ... in the value.

```json
{
  "sync": {
    "fieldUpdates": {
      "Custom.Priority": {
        "condition": "@priority:*",
        "value": "{1}"
      }
    }
  }
}
```

With tag `@priority:high`, this sets `Custom.Priority` to `"high"`.
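The capture behaviour can be sketched as a simple pattern match (illustrative only; ado-sync's actual matcher may differ):

```typescript
// Each `*` in the condition becomes a capture group; {1}, {2}, ... in
// the value are replaced with the captured text. Returns null when the
// tag does not satisfy the condition. Illustrative sketch only.
function applyWildcardCapture(condition: string, tag: string, value: string): string | null {
  const escaped = condition.replace(/[.+?^${}()|[\]\\]/g, "\\$&"); // escape regex chars, keep *
  const pattern = new RegExp("^" + escaped.replace(/\*/g, "(.*)") + "$");
  const match = tag.match(pattern);
  if (!match) return null;
  return value.replace(/\{(\d+)\}/g, (_, n) => match[Number(n)] ?? "");
}
```

Here `applyWildcardCapture("@priority:*", "@priority:high", "{1}")` yields `"high"`, matching the example above.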
### Update event

Control when the update fires:

| `update` | Behaviour |
|----------|-----------|
| `"always"` *(default)* | Apply on every push (create and update). |
| `"onCreate"` | Apply only when the TC is being created for the first time. |
| `"onChange"` | Apply only when the TC already exists and is being updated. |

```json
{
  "sync": {
    "fieldUpdates": {
      "Custom.CreatedBySync": { "value": "true", "update": "onCreate" }
    }
  }
}
```

### Placeholders

Value strings support these placeholders:

| Placeholder | Resolves to |
|-------------|-------------|
| `{scenario-name}` | Scenario title |
| `{feature-name}` | File name without extension |
| `{feature-file}` | File name with extension |
| `{scenario-description}` | Scenario description text |
| `{1}`, `{2}`, … | Wildcard captures from the `condition` |
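Placeholders can be combined in a single value. For example (the field name `Custom.SourceSpec` is hypothetical, shown only to illustrate the syntax):

```json
{
  "sync": {
    "fieldUpdates": {
      "Custom.SourceSpec": { "value": "{feature-file} / {scenario-name}" }
    }
  }
}
```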
---

## Customizations

### Field defaults

Set default Azure field values applied only when a Test Case is **created** (not on updates).

```json
{
  "customizations": {
    "fieldDefaults": {
      "enabled": true,
      "defaultValues": {
        "System.State": "Design",
        "Custom.AutomationStatus": "Planned"
      }
    }
  }
}
```

### Ignore test case tags

Prevent Azure-side tags from being removed during push. Useful for tags managed by Azure DevOps workflows (e.g. `reviewed`, `approved`).

```json
{
  "customizations": {
    "ignoreTestCaseTags": {
      "enabled": true,
      "tags": ["reviewed", "ado-*"]
    }
  }
}
```

Patterns support a trailing `*` wildcard: `"ado-*"` matches any tag starting with `ado-`.

### Tag text map transformation

Apply character or substring replacements to tags before they are pushed to Azure DevOps.

```json
{
  "customizations": {
    "tagTextMapTransformation": {
      "enabled": true,
      "textMap": { "_": " " }
    }
  }
}
```

With this config, `@my_feature_tag` is stored in Azure as `my feature tag`.
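The transformation amounts to a straight substring replacement over each tag name, which can be sketched as follows (illustrative only, not the actual implementation):

```typescript
// Apply every textMap entry to a tag name (leading @ already stripped).
// Illustrative sketch only.
function applyTextMap(tag: string, textMap: Record<string, string>): string {
  let result = tag;
  for (const [from, to] of Object.entries(textMap)) {
    result = result.split(from).join(to); // replace all occurrences
  }
  return result;
}
```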
---

## Attachments

Attach files to Test Cases via tags.

### Config

```json
{
  "sync": {
    "attachments": {
      "enabled": true,
      "tagPrefixes": ["wireframe", "spec"],
      "baseFolder": "specs/attachments"
    }
  }
}
```

| Field | Default | Description |
|-------|---------|-------------|
| `enabled` | `false` | Enable attachment sync. |
| `tagPrefixes` | `[]` | Additional tag prefixes beyond the built-in `attachment`. |
| `baseFolder` | *(feature file dir)* | Base directory for resolving file paths. Relative to config file. |

### Usage

```gherkin
@tc:1042 @attachment:screenshots/login.png @wireframe:mockups/login.fig
Scenario: Login page
  ...
```

The default `attachment` prefix is always active when `enabled: true`. Additional prefixes are configured via `tagPrefixes`.

File paths support glob patterns: `@attachment:screenshots/*.png` attaches all matching files.

Files are uploaded to the Azure Work Item as attachments. Already-attached files (by name) are not re-uploaded.

---

## Pull configuration

### Pull-create: generate local files from Azure

When `sync.pull.enableCreatingNewLocalTestCases` is `true`, a `pull` run will create new local spec files for Azure Test Cases that have no local counterpart (i.e. they exist in the configured suite but have no `@tc:ID` anywhere in the local files).

```json
{
  "sync": {
    "pull": {
      "enableCreatingNewLocalTestCases": true,
      "targetFolder": "specs/pulled"
    }
  }
}
```

| Field | Default | Description |
|-------|---------|-------------|
| `enableCreatingNewLocalTestCases` | `false` | When `true`, `pull` creates local files for unlinked Azure TCs. |
| `targetFolder` | `.` (config dir) | Directory where new files are created. Relative to config file. |

Generated files use the format matching `local.type` (`.feature` for Gherkin, `.md` for Markdown). The `@tc:ID` tag is written into the file so subsequent pushes link back to the same TC.

---

## Suite hierarchy

By default, all Test Cases go into a single flat suite (`suiteMapping: "flat"`). Two additional modes mirror local structure as child suites in Azure.

### `byFolder` — mirror folder structure

```json
{
  "testPlan": {
    "id": 1234,
    "suiteId": 5678,
    "suiteMapping": "byFolder"
  }
}
```

```
specs/
  login/
    basic.feature    → suite "login"    → TC "Successful login"
  checkout/
    happy.feature    → suite "checkout" → TC "Add item and checkout"
```

### `byFile` — one suite per spec file

```json
{
  "testPlan": {
    "id": 1234,
    "suiteId": 5678,
    "suiteMapping": "byFile"
  }
}
```

```
specs/
  login/
    basic.feature    → suite "login / basic"    → TC "Successful login"
  checkout/
    happy.feature    → suite "checkout / happy" → TC "Add item and checkout"
```

With `byFile`, each spec file gets its own dedicated child suite named after the file (without extension). The folder hierarchy is still reflected as parent suites. All Test Cases from the same file land in the same leaf suite.

Child suites are created automatically if they do not exist. The suite hierarchy is re-used across runs.
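The mapping from spec file path to suite chain can be sketched as a hypothetical helper (the real naming rules may differ in edge cases):

```typescript
// Derive the child-suite chain for a spec file path relative to the
// specs root. "byFolder" uses the folder names; "byFile" adds a leaf
// suite named after the file stem. Illustrative sketch only.
function suiteChain(relPath: string, mapping: "flat" | "byFolder" | "byFile"): string[] {
  if (mapping === "flat") return [];
  const parts = relPath.split("/");
  const file = parts.pop() ?? "";
  if (mapping === "byFolder") return parts;
  return [...parts, file.replace(/\.[^.]+$/, "")]; // byFile: folders + file stem
}
```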
---

## Multi-suite routing

`testPlan.suiteRouting` routes each Test Case to a specific child suite based on its tags. This is separate from `suiteMapping` — it assigns a **primary suite** per test based on tag expressions evaluated in order. The first matching route wins.

```json
{
  "testPlan": {
    "id": 1234,
    "suiteId": 5678,
    "suiteRouting": [
      { "tags": "@smoke", "suite": "Smoke" },
      { "tags": "@regression", "suite": "Regression" },
      { "suite": "General" }
    ]
  }
}
```

A route with no `tags` is a catch-all — it matches every test that didn't match an earlier route.

The `suite` value can be:

- A **string** — the named child suite is auto-created under `suiteId` if it doesn't exist.
- A **number** — the exact suite ID is used directly (must already exist).

If no route matches and no catch-all is defined, the test falls back to `suiteId`.
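First-match-wins routing reduces to a linear scan, sketched here with simple tag containment instead of full tag expressions (illustrative only):

```typescript
interface Route { tags?: string; suite: string | number; }

// Pick the first route whose tag matches; a route without `tags` is a
// catch-all. Falls back to the base suiteId when nothing matches.
// Simplified: treats `tags` as a single tag, not a tag expression.
function routeSuite(testTags: string[], routes: Route[], suiteId: number): string | number {
  for (const route of routes) {
    if (route.tags === undefined || testTags.includes(route.tags)) return route.suite;
  }
  return suiteId;
}
```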
### Combining routing with multi-plan mode

Each `testPlans` entry can define its own `suiteRouting`, overriding any base routing:

```json
{
  "testPlans": [
    {
      "id": 1001,
      "suiteId": 2001,
      "include": "specs/smoke/**/*.feature",
      "suiteRouting": [
        { "tags": "@critical", "suite": "Critical" },
        { "suite": "Smoke" }
      ]
    },
    {
      "id": 1002,
      "suiteId": 2002,
      "include": "specs/regression/**/*.feature",
      "suiteMapping": "byFile"
    }
  ]
}
```

---

## Conflict detection

ado-sync uses a local state cache (`.ado-sync-state.json`) to detect conflicts — cases where **both** the local file and the Azure Test Case were changed since the last sync.

The `sync.conflictAction` setting controls what happens:

| Value | Behaviour |
|-------|-----------|
| `"overwrite"` *(default)* | Push local version to Azure, overwriting the remote change. |
| `"skip"` | Emit a `!` conflict result and leave both sides unchanged. |
| `"fail"` | Throw an error listing all conflicting scenarios and abort. |

```json
{ "sync": { "conflictAction": "skip" } }
```

**Commit `.ado-sync-state.json` to version control** so all team members and CI share the same last-synced state.

The cache also speeds up `push` — unchanged scenarios (same local hash + same Azure `changedDate`) are skipped without an API call.

To reset the cache: delete `.ado-sync-state.json`. The next push re-populates it from Azure.
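The three-way comparison behind conflict detection can be sketched like this (hypothetical field names; this illustrates the logic, not the real cache schema):

```typescript
// Classify a scenario by comparing the current local hash and Azure
// changedDate against the values recorded at the last sync.
type SyncStatus = "unchanged" | "localChange" | "remoteChange" | "conflict";

function classify(
  localHash: string,
  azureChangedDate: string,
  cached: { localHash: string; azureChangedDate: string },
): SyncStatus {
  const localChanged = localHash !== cached.localHash;
  const remoteChanged = azureChangedDate !== cached.azureChangedDate;
  if (localChanged && remoteChanged) return "conflict"; // conflictAction applies
  if (localChanged) return "localChange";
  if (remoteChanged) return "remoteChange";
  return "unchanged"; // skipped without an API call
}
```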
---

## CI / build server mode

Set `sync.disableLocalChanges: true` to prevent ado-sync from writing back to local files:

- `push` — creates and updates Test Cases in Azure, but does **not** write ID tags to local files.
- `pull` — computes what would change but does **not** modify local files (behaves like `--dry-run`).

```json
{ "sync": { "disableLocalChanges": true } }
```

Or per-run via `--config-override`:

```bash
ado-sync push --config-override sync.disableLocalChanges=true
```

### GitHub Actions example

```yaml
- name: Sync test cases to Azure DevOps
  run: ado-sync push --config-override sync.disableLocalChanges=true
  env:
    AZURE_DEVOPS_TOKEN: ${{ secrets.AZURE_DEVOPS_TOKEN }}
```

---

## Removed scenario detection

When a scenario is deleted from a local file but its Test Case still exists in the Azure suite, ado-sync detects this on the next `push` and appends the tag `ado-sync:removed` to the Azure Test Case (without deleting it). A `−` removed line is printed in the output.

To completely remove the Test Case from Azure, delete it manually in Test Plans after reviewing.
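Detection is essentially a set difference between the TC IDs referenced in local files and those present in the suite, sketched here (illustrative only):

```typescript
// Azure TCs in the suite whose IDs no longer appear in any local file
// are flagged as removed. Illustrative sketch only.
function detectRemoved(localTcIds: Set<number>, suiteTcIds: number[]): number[] {
  return suiteTcIds.filter((id) => !localTcIds.has(id));
}
```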
---

## AI auto-summary for code tests

When pushing code-based test types (`java`, `csharp`, `python`, `javascript`, `playwright`), ado-sync reads each test function body and automatically generates a TC **title**, **description**, and numbered **steps**.

**What gets generated and when:**

| Test state | What AI generates |
|------------|------------------|
| No doc comment at all | Title + description + all steps |
| Has doc comment steps but no description | Description only (existing steps kept) |
| Has both steps and a description | Nothing — left unchanged |

Local source files are **never modified** by the AI summary feature — unless `sync.ai.writebackDocComment` is `true` (see [JSDoc writeback](#jsdoc-writeback-syncaiwritebackdoccomment) below).

### JSDoc writeback (`sync.ai.writebackDocComment`)

When `writebackDocComment: true` is set, ado-sync writes AI-generated steps back into the JS/TS source file as a JSDoc block above each `test()` call immediately after the first push. On subsequent pushes the parser reads the JSDoc back, so AI is not re-invoked and the steps remain stable even if the test body is edited.

**Why this matters:** Without writeback, AI re-reads the test body on every push and may produce slightly different phrasing each time — changing Azure Test Case steps unnecessarily. With writeback the steps are frozen in the source file on first push and never change unless you edit the JSDoc manually.

```json
{
  "sync": {
    "ai": {
      "provider": "anthropic",
      "model": "claude-sonnet-4-6",
      "apiKey": "$ANTHROPIC_API_KEY",
      "writebackDocComment": true
    }
  }
}
```

After the first `ado-sync push` the source file will contain:

```typescript
/**
 * User can log in with valid credentials
 * Description: Verifies the login form accepts a correct email/password pair
 * 1. Navigate to the login page
 * 2. Enter a valid email address
 * 3. Enter the matching password
 * 4. Click the Sign In button
 * 5. Check: The dashboard is displayed
 */
test('should log in with valid credentials', async ({ page }) => { ... });
```

**Rules:**

- Only applies to `javascript`, `playwright`, `puppeteer`, `cypress`, `detox`, and `xcuitest` framework types.
- Has no effect when `sync.disableLocalChanges: true`.
- If a JSDoc block already exists above a `test()` call it is replaced, not duplicated.
- Steps prefixed `Check:` map to Azure's **Expected Result** column when `sync.format.useExpectedResult: true`.
- You can populate JSDoc comments manually before the first push — the parser will read them and skip AI entirely.

**Recommended workflow for pre-populating existing specs:**

1. Write the JSDoc manually above each `test()` call (or use an LLM in your editor to batch-generate them).
2. Enable `writebackDocComment: true` in config.
3. Run `ado-sync push` — existing JSDoc is read, AI is skipped, steps are pushed to Azure.

### AI failure analysis

When `sync.ai.analyzeFailures: true` is set (and the provider is `ollama`, `openai`, or `anthropic`), ado-sync uses the AI provider to generate a root-cause summary for failing test results during `publish-test-results`. The summary is attached as a comment on the Azure Test Run result.

```json
{
  "sync": {
    "ai": {
      "provider": "anthropic",
      "apiKey": "$ANTHROPIC_API_KEY",
      "analyzeFailures": true
    }
  }
}
```

The AI receives the test name, error message, and stack trace (if available) and returns a `rootCause` and `suggestion`. These are appended to the Azure test result comment for easy triage.

> `analyzeFailures` has no effect for the `heuristic` and `local` providers, which do not perform failure analysis.

---

### Providers

| Provider | Quality | Requires |
|----------|---------|----------|
| `local` *(default)* | Good–Excellent | A GGUF model file (see setup below) |
| `heuristic` | Basic | Nothing — zero dependencies, works offline |
| `ollama` | Good–Excellent | [Ollama](https://ollama.com) server running locally |
| `openai` | Excellent | OpenAI API key, or any OpenAI-compatible proxy (LiteLLM, Azure OpenAI, vLLM, etc.) |
| `anthropic` | Excellent | Anthropic API key |

> **No setup required to try it.** If no `--ai-model` is passed for `local`, it falls back to `heuristic` silently — so `ado-sync push` always works.

### CLI flags

| Flag | Description |
|------|-------------|
| `--ai-provider <p>` | Provider to use. Default: `local`. Pass `none` to disable entirely. |
| `--ai-model <m>` | For `local`: path to `.gguf` file. For `ollama`/`openai`/`anthropic`: model name/tag. |
| `--ai-url <url>` | Base URL for `ollama` or an OpenAI-compatible endpoint. |
| `--ai-key <key>` | API key for `openai` or `anthropic`. Supports `$ENV_VAR` references. |
| `--ai-context <file>` | Path to a markdown file with domain context/instructions injected into the AI prompt. |
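A `$ENV_VAR` reference such as `$ANTHROPIC_API_KEY` is resolved from the environment at run time. A minimal sketch of that resolution (illustrative, not the actual implementation):

```typescript
// Resolve "$NAME" values from the environment; anything else is taken
// literally. Illustrative sketch only.
function resolveKey(value: string, env: Record<string, string | undefined>): string | undefined {
  return value.startsWith("$") ? env[value.slice(1)] : value;
}
```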
---

### Domain context file (`sync.ai.contextFile`)

You can provide a markdown file that gives the AI additional context about your application or team conventions. The file's content is injected into the prompt before the test code, so the AI can use it when writing titles, descriptions, and steps.

#### Config

```json
{
  "sync": {
    "ai": {
      "provider": "anthropic",
      "model": "claude-sonnet-4-6",
      "apiKey": "$ANTHROPIC_API_KEY",
      "contextFile": "./docs/ai-context.md"
    }
  }
}
```

```yaml
sync:
  ai:
    provider: anthropic
    model: claude-sonnet-4-6
    apiKey: $ANTHROPIC_API_KEY
    contextFile: ./docs/ai-context.md
```

The path is resolved relative to the config file directory. Absolute paths are also accepted.

#### CLI override

```bash
ado-sync push --ai-context ./docs/ai-context.md
```

The CLI flag takes precedence over `contextFile` in config.

#### What to put in the context file

The file is plain markdown — write whatever helps the AI produce better output for your domain. Common patterns:

```markdown
## Glossary
- "Checkout" means the 3-step payment flow (cart → shipping → payment)
- "PDP" means Product Detail Page
- "MFA" means multi-factor authentication via the Authenticator app

## Step writing style
- Start every action step with a verb: Click, Enter, Select, Navigate, Verify
- Use customer-facing button/field labels, not CSS selectors or test IDs
- Precondition steps ("Given the user is logged in") come before action steps
- End with at least one "Check:" verification step

## Out of scope
- Do not mention internal service names (e.g. auth-svc, cart-ms)
- Do not reference environment-specific URLs
```

#### Notes

- Context is injected for all LLM providers: `local`, `ollama`, `openai`, `anthropic`.
- The `heuristic` provider does not use a prompt and ignores this setting.
- If the file cannot be read, a warning is printed and the push continues without it.

---

### Setting up the local provider (step by step)

`node-llama-cpp` is bundled with ado-sync — **no separate install needed**. You only need to download a model file once.

#### Step 1 — Choose a model size

All models use the `Q4_K_M` quantization (best balance of size and quality).

| Model | RAM needed | Quality | HF repo |
|-------|-----------|---------|---------|
| E2B | ~3.2 GB | Good | `google/gemma-4-e2b-it-GGUF` |
| **E4B** *(start here)* | ~5 GB | Better | `google/gemma-4-e4b-it-GGUF` |
| 26B A4B (MoE) | ~15.6 GB | Excellent local | `google/gemma-4-26b-a4b-it-GGUF` |
| 31B | ~17.4 GB | Best | `google/gemma-4-31b-it-GGUF` |

#### Step 2 — Download the model

**macOS / Linux:**

```bash
mkdir -p ~/.cache/ado-sync/models

# curl (no extra tools needed)
curl -L -o ~/.cache/ado-sync/models/gemma-4-e4b-it-Q4_K_M.gguf \
  "https://huggingface.co/google/gemma-4-e4b-it-GGUF/resolve/main/gemma-4-e4b-it-Q4_K_M.gguf"

# or huggingface-cli (shows a progress bar — useful for larger models)
pip install -U huggingface_hub
huggingface-cli download google/gemma-4-e4b-it-GGUF \
  gemma-4-e4b-it-Q4_K_M.gguf \
  --local-dir ~/.cache/ado-sync/models
```

**Windows (PowerShell):**

```powershell
New-Item -ItemType Directory -Force "$env:LOCALAPPDATA\ado-sync\models"

# Invoke-WebRequest
Invoke-WebRequest `
  -Uri "https://huggingface.co/google/gemma-4-e4b-it-GGUF/resolve/main/gemma-4-e4b-it-Q4_K_M.gguf" `
  -OutFile "$env:LOCALAPPDATA\ado-sync\models\gemma-4-e4b-it-Q4_K_M.gguf"

# or huggingface-cli (shows a progress bar)
pip install -U huggingface_hub
huggingface-cli download google/gemma-4-e4b-it-GGUF `
  gemma-4-e4b-it-Q4_K_M.gguf `
  --local-dir "$env:LOCALAPPDATA\ado-sync\models"
```

#### Step 3 — Push with the model

```bash
# macOS / Linux
ado-sync push --ai-model ~/.cache/ado-sync/models/gemma-4-e4b-it-Q4_K_M.gguf

# Windows
ado-sync push --ai-model "$env:LOCALAPPDATA\ado-sync\models\gemma-4-e4b-it-Q4_K_M.gguf"
```

The model is loaded once and reused for all tests in the run — no repeated loading overhead.

### Complete example — C# MSTest with local LLM

A full `ado-sync.yaml` for a C# MSTest project using a local GGUF model (no API key, no internet required at push time):

```yaml
orgUrl: https://dev.azure.com/your-org
project: YourProject
auth:
  type: pat
  token: $AZURE_DEVOPS_TOKEN
testPlan:
  id: 12345
  suiteId: 12346
  suiteMapping: flat
local:
  type: csharp
  include: Tests/**/*.cs
sync:
  tagPrefix: tc
  titleField: System.Title
  markAutomated: true
  ai:
    provider: local
    model: ~/.cache/ado-sync/models/qwen2.5-coder-7b-instruct-q4_k_m.gguf
    # Windows: model: $env:LOCALAPPDATA\ado-sync\models\qwen2.5-coder-7b-instruct-q4_k_m.gguf
```

Run:

```bash
export AZURE_DEVOPS_TOKEN=your-pat
ado-sync push --config ado-sync.yaml
```

> No `apiKey` or `baseUrl` needed — the model runs entirely in-process via `node-llama-cpp`.

---
701
-
702
- ### Setting up Ollama
703
-
704
- ```bash
705
- # 1. Install Ollama from https://ollama.com
706
-
707
- # 2. Pull a model
708
- ollama pull gemma-4-e4b-it
709
-
710
- # 3. Push (Ollama server must be running)
711
- ado-sync push --ai-provider ollama --ai-model gemma-4-e4b-it
712
- ```
713
-
714
- ### Setting up OpenAI / Anthropic
715
-
716
- ```bash
717
- ado-sync push --ai-provider openai --ai-key $OPENAI_API_KEY
718
- ado-sync push --ai-provider anthropic --ai-key $ANTHROPIC_API_KEY
719
- ```
720
-
721
- ---
722
-
723
- ### Using GitHub Copilot or Claude Code
724
-
725
- If you already use **GitHub Copilot** or **Claude Code** as your IDE AI assistant, you can reuse the same credentials with ado-sync. The key point: these tools are IDE plugins — they don't expose an API endpoint ado-sync can call. Instead, use the underlying AI provider they run on.
726
-
727
- #### Claude Code → `anthropic` provider
728
-
729
- Claude Code is powered by Anthropic's Claude models. If you have an `ANTHROPIC_API_KEY` (required to run Claude Code), pass it directly:
730
-
731
- ```bash
732
- export ANTHROPIC_API_KEY=sk-ant-...
733
-
734
- ado-sync push --ai-provider anthropic --ai-key $ANTHROPIC_API_KEY
735
- ```
736
-
737
- Pin a specific model with `--ai-model` (default is `claude-haiku-4-5-20251001`):
738
-
739
- ```bash
740
- # Faster / cheaper
741
- ado-sync push --ai-provider anthropic --ai-key $ANTHROPIC_API_KEY --ai-model claude-haiku-4-5-20251001
742
-
743
- # Higher quality
744
- ado-sync push --ai-provider anthropic --ai-key $ANTHROPIC_API_KEY --ai-model claude-sonnet-4-6
745
- ```
746
-
747
- Config file equivalent — set once, never repeat the flag:
748
-
749
- ```json
750
- {
751
- "sync": {
752
- "ai": {
753
- "provider": "anthropic",
754
- "apiKey": "$ANTHROPIC_API_KEY",
755
- "model": "claude-haiku-4-5-20251001"
756
- }
757
- }
758
- }
759
- ```
760
-
761
- #### GitHub Copilot → `openai` or `openai` + `--ai-url` provider
762
-
763
- GitHub Copilot itself does not expose a public API endpoint. Use one of these alternatives depending on your subscription:
764
-
765
- **Option A — OpenAI API key** (Copilot Individual / Team subscribers)
766
-
767
- If you have a separate OpenAI API key:
768
-
769
- ```bash
770
- ado-sync push --ai-provider openai --ai-key $OPENAI_API_KEY
771
- ```
772
-
773
- **Option B — Azure OpenAI** (Copilot Enterprise / corporate Azure customers)
774
-
775
- If your org has an Azure OpenAI deployment (which also powers enterprise Copilot):
776
-
777
- ```bash
778
- ado-sync push \
779
- --ai-provider openai \
780
- --ai-url "https://<your-resource>.openai.azure.com/openai/deployments/<deployment>/v1" \
781
- --ai-key $AZURE_OPENAI_KEY \
782
- --ai-model gpt-4o-mini
783
- ```
784
-
785
- Config file equivalent:
786
-
787
- ```json
788
- {
789
- "sync": {
790
- "ai": {
791
- "provider": "openai",
792
- "baseUrl": "https://<your-resource>.openai.azure.com/openai/deployments/<deployment>/v1",
793
- "apiKey": "$AZURE_OPENAI_KEY",
794
- "model": "gpt-4o-mini"
795
- }
796
- }
797
- }
798
- ```
799
-
800
- **Option C — No API key (heuristic)**
801
-
802
- Works offline with zero setup — good when you don't want to spend API credits:
803
-
804
- ```bash
805
- ado-sync push --ai-provider heuristic
806
- ```
807
-
808
- #### Quick reference
809
-
810
- | You use | Recommended provider | Command |
811
- |---------|---------------------|---------|
812
- | Claude Code | `anthropic` | `ado-sync push --ai-provider anthropic --ai-key $ANTHROPIC_API_KEY` |
813
- | Copilot Individual / Team | `openai` | `ado-sync push --ai-provider openai --ai-key $OPENAI_API_KEY` |
814
- | Copilot Enterprise / Azure | `openai` + `--ai-url` | See Azure OpenAI option above |
815
- | Either, no API budget | `heuristic` | `ado-sync push --ai-provider heuristic` |
816
- | Privacy-sensitive / air-gapped | `local` | `ado-sync push --ai-model ~/.cache/ado-sync/models/...` |
817
-
818
#### Running ado-sync from within your IDE assistant

Both Claude Code and Copilot Chat can execute terminal commands, so you can ask them to run ado-sync for you directly.

**Claude Code:**

```
Run: ado-sync push --ai-provider anthropic --ai-key $ANTHROPIC_API_KEY --dry-run
```

Claude Code will execute it in the terminal and explain what would change before you commit to a real push.

**GitHub Copilot Chat (VS Code):**

Use the `@terminal` agent in Copilot Chat:

```
@terminal run ado-sync push --ai-provider heuristic --dry-run and explain the output
```

Copilot will propose the command in the terminal panel for you to accept and run.

### Using LiteLLM (or any OpenAI-compatible proxy)

[LiteLLM](https://github.com/BerriAI/litellm) is a proxy that exposes an OpenAI-compatible API for 100+ model providers (Azure OpenAI, Bedrock, Gemini, Mistral, Cohere, vLLM, and more). Use the `openai` provider with `--ai-url` pointing at your LiteLLM server:

```bash
# Start LiteLLM proxy (example)
litellm --model gpt-4o-mini   # listens on http://localhost:4000 by default

# Push using LiteLLM
ado-sync push \
  --ai-provider openai \
  --ai-url http://localhost:4000 \
  --ai-key $LITELLM_API_KEY \
  --ai-model gpt-4o-mini
```

The same `--ai-url` override works for any other OpenAI-compatible server:

| Service | `--ai-url` |
|---------|-----------|
| LiteLLM (local proxy) | `http://localhost:4000` |
| LiteLLM (hosted) | `https://<your-litellm-host>/v1` |
| Hugging Face Inference | `https://router.huggingface.co/v1` |
| Azure OpenAI | `https://<resource>.openai.azure.com/openai/deployments/<deployment>` |
| vLLM | `http://localhost:8000/v1` |
| LocalAI | `http://localhost:8080/v1` |
| LM Studio | `http://localhost:1234/v1` |

> **Note:** `api-inference.huggingface.co` is deprecated — use `router.huggingface.co` instead.

### Using Hugging Face Inference API

[Hugging Face](https://huggingface.co) provides a free serverless inference API for open-source models. Use the `openai` provider since HF exposes an OpenAI-compatible endpoint:

```bash
ado-sync push \
  --ai-provider openai \
  --ai-url https://router.huggingface.co/v1 \
  --ai-key $HF_TOKEN \
  --ai-model Qwen/Qwen2.5-Coder-7B-Instruct
```

Get a token at [huggingface.co/settings/tokens](https://huggingface.co/settings/tokens) (requires the **Inference** permission).

Recommended open-source models:

| Model | Notes |
|-------|-------|
| `Qwen/Qwen2.5-Coder-7B-Instruct` | Best for code/test understanding |
| `meta-llama/Llama-3.1-8B-Instruct` | Good general purpose |
| `mistralai/Mistral-7B-Instruct-v0.3` | Lightweight and fast |

Config file equivalent:

```json
{
  "sync": {
    "ai": {
      "provider": "openai",
      "baseUrl": "https://router.huggingface.co/v1",
      "apiKey": "$HF_TOKEN",
      "model": "Qwen/Qwen2.5-Coder-7B-Instruct"
    }
  }
}
```

> **LiteLLM model names:** When using a hosted LiteLLM instance that proxies Anthropic models, prefix the model name with `anthropic/`, e.g. `anthropic/claude-opus-4-6`. Check your instance's `/v1/models` endpoint for registered model names.

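OpenAI-compatible servers return model listings in a common shape (`{ "data": [{ "id": ... }] }`). A small sketch of pulling the registered names out of such a response — the sample payload is illustrative; query your own instance for real names:

```typescript
// Shape of an OpenAI-compatible GET /v1/models response (only the field we need).
interface ModelsResponse {
  data: { id: string }[];
}

// Extract the registered model IDs from the listing.
function listModelIds(response: ModelsResponse): string[] {
  return response.data.map((m) => m.id);
}

// Illustrative payload such as a LiteLLM proxy might return:
const sample: ModelsResponse = {
  data: [{ id: "gpt-4o-mini" }, { id: "anthropic/claude-opus-4-6" }],
};

console.log(listModelIds(sample)); // [ "gpt-4o-mini", "anthropic/claude-opus-4-6" ]
```

Whatever IDs appear here are the exact strings to pass as `--ai-model`.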
Config file equivalent for LiteLLM — set any `--ai-*` flag in `sync.ai` to avoid repeating it on every push. CLI flags always take precedence over config values:

```json
{
  "sync": {
    "ai": {
      "provider": "openai",
      "baseUrl": "http://localhost:4000",
      "apiKey": "$LITELLM_API_KEY",
      "model": "gpt-4o-mini"
    }
  }
}
```

The `sync.ai` block works for any provider:

```json
{ "sync": { "ai": { "provider": "ollama", "model": "gemma-4-e4b-it" } } }
```

```json
{ "sync": { "ai": { "provider": "anthropic", "apiKey": "$ANTHROPIC_API_KEY" } } }
```

```json
{ "sync": { "ai": { "provider": "none" } } }
```

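The precedence rule above — a CLI flag wins over the matching `sync.ai` value — amounts to a field-by-field merge. A sketch of that merge under assumed names (`AiConfig` and `mergeAiConfig` are illustrative, not ado-sync's internal API):

```typescript
// Illustrative sketch of flag-over-config precedence; not ado-sync's real code.
interface AiConfig {
  provider?: string;
  baseUrl?: string;
  apiKey?: string;
  model?: string;
}

// Any flag the user actually passed overrides the config-file value;
// fields the user omitted fall through to the file.
function mergeAiConfig(fromFile: AiConfig, fromFlags: AiConfig): AiConfig {
  return {
    provider: fromFlags.provider ?? fromFile.provider,
    baseUrl: fromFlags.baseUrl ?? fromFile.baseUrl,
    apiKey: fromFlags.apiKey ?? fromFile.apiKey,
    model: fromFlags.model ?? fromFile.model,
  };
}

const fileConfig: AiConfig = { provider: "openai", baseUrl: "http://localhost:4000", model: "gpt-4o-mini" };
const flags: AiConfig = { model: "gpt-4o" }; // user passed only --ai-model gpt-4o
console.log(mergeAiConfig(fileConfig, flags)); // model overridden, provider/baseUrl kept
```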
### Complete example — C# MSTest with Hugging Face

A full `ado-sync.yaml` for a C# MSTest project using the Hugging Face Inference API for AI-generated test steps:

```yaml
orgUrl: https://dev.azure.com/your-org
project: YourProject
auth:
  type: pat
  token: $AZURE_DEVOPS_TOKEN
testPlan:
  id: 12345
  suiteId: 12346
  suiteMapping: flat
local:
  type: csharp
  include: Tests/**/*.cs
sync:
  tagPrefix: tc
  titleField: System.Title
  markAutomated: true
  ai:
    provider: openai
    baseUrl: https://router.huggingface.co/v1
    apiKey: $HF_TOKEN
    model: Qwen/Qwen2.5-Coder-7B-Instruct
```

Run:

```bash
export AZURE_DEVOPS_TOKEN=your-pat
export HF_TOKEN=hf_xxxxxxxxxxxxxxxxxxxxxxxx
ado-sync push --config ado-sync.yaml
```

### Disabling AI summary

```bash
ado-sync push --ai-provider none
```

---

### How it works internally

1. After parsing local files, ado-sync checks each test for a missing description or missing steps.
2. For each test that needs either, ado-sync extracts the raw function body from the source file.
3. The body is sent to the configured provider with a prompt requesting `Title:`, `Description:`, and `N. Step` / `N. Check:` lines.
4. Title and steps are applied only when the test had no existing steps. Description is applied only when the test had no existing description.
5. If the LLM call fails (network error, model not found, etc.), ado-sync automatically falls back to `heuristic`.
6. The `local` provider caches the GGUF model in memory for the entire push run — a 50-test suite loads it only once.
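Steps 3–5 amount to "try the configured provider, fall back to heuristic on failure". A minimal sketch of that control flow — the function and provider names here are hypothetical, not ado-sync's internals:

```typescript
// Hypothetical sketch of the try-LLM-then-heuristic fallback in step 5.
type SpecGenerator = (functionBody: string) => Promise<string>;

async function generateSpec(
  body: string,
  llm: SpecGenerator,
  heuristic: SpecGenerator,
): Promise<string> {
  try {
    // Step 3: send the raw function body to the configured provider.
    return await llm(body);
  } catch {
    // Step 5: network error, model not found, etc. — fall back to heuristic.
    return heuristic(body);
  }
}

// Demo: an LLM that always fails, so the heuristic result is used.
const failing: SpecGenerator = async () => { throw new Error("model not found"); };
const heuristic: SpecGenerator = async (b) => `1. Step: run test (${b.length} chars)`;

generateSpec("assert(add(2, 2) === 4);", failing, heuristic).then(console.log);
```

The key property is that a provider failure degrades the output rather than failing the push.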