ado-sync 0.1.65 → 0.1.67
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +15 -15
- package/dist/__tests__/regressions.test.js +1011 -1
- package/dist/__tests__/regressions.test.js.map +1 -1
- package/dist/ai/summarizer.d.ts +2 -1
- package/dist/ai/summarizer.js +6 -1
- package/dist/ai/summarizer.js.map +1 -1
- package/dist/azure/test-cases.d.ts +11 -1
- package/dist/azure/test-cases.js +286 -43
- package/dist/azure/test-cases.js.map +1 -1
- package/dist/cli.js +85 -8
- package/dist/cli.js.map +1 -1
- package/dist/config.js +74 -1
- package/dist/config.js.map +1 -1
- package/dist/id-markers.d.ts +1 -0
- package/dist/id-markers.js +13 -0
- package/dist/id-markers.js.map +1 -1
- package/dist/sync/cache.d.ts +2 -0
- package/dist/sync/cache.js.map +1 -1
- package/dist/sync/engine.d.ts +12 -1
- package/dist/sync/engine.js +210 -41
- package/dist/sync/engine.js.map +1 -1
- package/dist/types.d.ts +52 -2
- package/llms.txt +11 -11
- package/package.json +8 -1
- package/docs/advanced.md +0 -989
- package/docs/agent-setup.md +0 -204
- package/docs/capability-roadmap.md +0 -280
- package/docs/cli.md +0 -614
- package/docs/configuration.md +0 -322
- package/docs/examples/csharp-mstest-local-llm.yaml +0 -35
- package/docs/examples/csharp-mstest.yaml +0 -21
- package/docs/examples/csharp-nunit.yaml +0 -21
- package/docs/examples/csharp-specflow.yaml +0 -16
- package/docs/examples/cypress.yaml +0 -21
- package/docs/examples/detox-react-native.yaml +0 -21
- package/docs/examples/espresso-android.yaml +0 -21
- package/docs/examples/flutter-dart.yaml +0 -21
- package/docs/examples/java-junit.yaml +0 -21
- package/docs/examples/java-testng.yaml +0 -21
- package/docs/examples/js-jasmine-wdio.yaml +0 -21
- package/docs/examples/js-jest.yaml +0 -21
- package/docs/examples/playwright-js.yaml +0 -21
- package/docs/examples/playwright-ts.yaml +0 -21
- package/docs/examples/puppeteer.yaml +0 -21
- package/docs/examples/python-pytest.yaml +0 -21
- package/docs/examples/robot-framework.yaml +0 -19
- package/docs/examples/testcafe.yaml +0 -21
- package/docs/examples/xcuitest-ios.yaml +0 -21
- package/docs/mcp-server.md +0 -312
- package/docs/publish-test-results.md +0 -947
- package/docs/spec-formats.md +0 -1357
- package/docs/troubleshooting.md +0 -101
- package/docs/vscode-extension.md +0 -139
- package/docs/work-item-links.md +0 -115
- package/docs/workflows.md +0 -457
- package/mkdocs.yml +0 -40
- package/requirements-docs.txt +0 -4
- package/scripts/build_site.sh +0 -6
package/docs/advanced.md
DELETED
@@ -1,989 +0,0 @@
# Advanced configuration

---

## Format configuration

`sync.format` controls how test case content is structured when pushed to Azure DevOps.

| Field | Default | Description |
|-------|---------|-------------|
| `prefixTitle` | `true` | Prefix TC title with `"Scenario: "` or `"Scenario Outline: "`. Set `false` to use the raw scenario name. |
| `prefixBackgroundSteps` | `true` | Include Background steps in the TC steps list, prefixed with `"Background: "`. Set `false` to exclude them. |
| `useExpectedResult` | `false` | When `true`, `Then`/`Verify` steps are moved to the Expected Result column instead of the Action column. |
| `syncDataTableAsText` | `false` | When `true`, inline Gherkin data tables are appended to the step action as plain `\| cell \| cell \|` text instead of being handled as sub-steps. |
| `showParameterListStep` | `"whenUnusedParameters"` | Append a `Parameters: @p1@, @p2@, ...` step to parametrized TCs. `"always"` — always append. `"never"` — never append. `"whenUnusedParameters"` — append only when at least one parameter is not already referenced in a step. |
| `emptyActionValue` | *(blank)* | Value to use when a step action would be empty (e.g. when `useExpectedResult` moves a step to the expected column). |
| `emptyExpectedResultValue` | *(blank)* | Value to use when the expected result column would be empty. |

### Example

```json
{
  "sync": {
    "format": {
      "prefixTitle": false,
      "useExpectedResult": true,
      "showParameterListStep": "always",
      "emptyActionValue": "-"
    }
  }
}
```

---

## State configuration

`sync.state` sets the Azure Test Case `State` field whenever a scenario is created or updated.

| Field | Description |
|-------|-------------|
| `setValueOnChangeTo` | The state value to set, e.g. `"Design"`, `"Ready"`. |
| `condition` | *(Optional)* Tag expression. Only scenarios matching this expression trigger the state change. |

```json
{
  "sync": {
    "state": {
      "setValueOnChangeTo": "Design",
      "condition": "@active"
    }
  }
}
```

---

## Field updates

`sync.fieldUpdates` applies custom field values on push. Each key is an Azure DevOps field reference name (e.g. `"System.AreaPath"`) or display name.

### Simple value (always set)

```json
{
  "sync": {
    "fieldUpdates": {
      "Custom.AutomationStatus": "Automated",
      "System.AreaPath": "MyProject\\QA Team"
    }
  }
}
```

### Conditional value (switch by tag)

```json
{
  "sync": {
    "fieldUpdates": {
      "System.AreaPath": {
        "conditionalValue": {
          "@smoke": "MyProject\\Smoke",
          "@regression": "MyProject\\Regression",
          "otherwise": "MyProject\\General"
        }
      }
    }
  }
}
```

### Tag wildcard capture

The `*` wildcard captures the matched portion of a tag and exposes it as `{1}`, `{2}`, ... in the value.

```json
{
  "sync": {
    "fieldUpdates": {
      "Custom.Priority": {
        "condition": "@priority:*",
        "value": "{1}"
      }
    }
  }
}
```

With tag `@priority:high`, this sets `Custom.Priority` to `"high"`.

### Update event

Control when the update fires:

| `update` | Behaviour |
|----------|-----------|
| `"always"` *(default)* | Apply on every push (create and update). |
| `"onCreate"` | Apply only when the TC is being created for the first time. |
| `"onChange"` | Apply only when the TC already exists and is being updated. |

```json
{
  "sync": {
    "fieldUpdates": {
      "Custom.CreatedBySync": { "value": "true", "update": "onCreate" }
    }
  }
}
```

### Placeholders

Value strings support these placeholders:

| Placeholder | Resolves to |
|-------------|-------------|
| `{scenario-name}` | Scenario title |
| `{feature-name}` | File name without extension |
| `{feature-file}` | File name with extension |
| `{scenario-description}` | Scenario description text |
| `{1}`, `{2}`, … | Wildcard captures from the `condition` |
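Placeholders can be combined with the update event. The sketch below is a hypothetical config — `Custom.SourceFeature` is an illustrative field name, not part of ado-sync — that stamps each Test Case with the feature file it came from, on creation only:

```json
{
  "sync": {
    "fieldUpdates": {
      "Custom.SourceFeature": {
        "value": "{feature-file}",
        "update": "onCreate"
      }
    }
  }
}
```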

---

## Customizations

### Field defaults

Set default Azure field values applied only when a Test Case is **created** (not on updates).

```json
{
  "customizations": {
    "fieldDefaults": {
      "enabled": true,
      "defaultValues": {
        "System.State": "Design",
        "Custom.AutomationStatus": "Planned"
      }
    }
  }
}
```

### Ignore test case tags

Prevent Azure-side tags from being removed during push. Useful for tags managed by Azure DevOps workflows (e.g. `reviewed`, `approved`).

```json
{
  "customizations": {
    "ignoreTestCaseTags": {
      "enabled": true,
      "tags": ["reviewed", "ado-*"]
    }
  }
}
```

Patterns support a trailing `*` wildcard: `"ado-*"` matches any tag starting with `ado-`.

### Tag text map transformation

Apply character or substring replacements to tags before they are pushed to Azure DevOps.

```json
{
  "customizations": {
    "tagTextMapTransformation": {
      "enabled": true,
      "textMap": { "_": " " }
    }
  }
}
```

With this config, `@my_feature_tag` is stored in Azure as `my feature tag`.

---

## Attachments

Attach files to Test Cases via tags.

### Config

```json
{
  "sync": {
    "attachments": {
      "enabled": true,
      "tagPrefixes": ["wireframe", "spec"],
      "baseFolder": "specs/attachments"
    }
  }
}
```

| Field | Default | Description |
|-------|---------|-------------|
| `enabled` | `false` | Enable attachment sync. |
| `tagPrefixes` | `[]` | Additional tag prefixes beyond the built-in `attachment`. |
| `baseFolder` | *(feature file dir)* | Base directory for resolving file paths. Relative to the config file. |

### Usage

```gherkin
@tc:1042 @attachment:screenshots/login.png @wireframe:mockups/login.fig
Scenario: Login page
  ...
```

The default `attachment` prefix is always active when `enabled: true`. Additional prefixes are configured via `tagPrefixes`.

File paths support glob patterns: `@attachment:screenshots/*.png` attaches all matching files.
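For example, a single glob tag can attach every PNG in a folder (the paths are illustrative):

```gherkin
@tc:1042 @attachment:screenshots/*.png
Scenario: Login page renders correctly
  ...
```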

Files are uploaded to the Azure Work Item as attachments. Already-attached files (by name) are not re-uploaded.

---

## Pull configuration

### Pull-create: generate local files from Azure

When `sync.pull.enableCreatingNewLocalTestCases` is `true`, a `pull` run will create new local spec files for Azure Test Cases that have no local counterpart (i.e. they exist in the configured suite but have no `@tc:ID` anywhere in the local files).

```json
{
  "sync": {
    "pull": {
      "enableCreatingNewLocalTestCases": true,
      "targetFolder": "specs/pulled"
    }
  }
}
```

| Field | Default | Description |
|-------|---------|-------------|
| `enableCreatingNewLocalTestCases` | `false` | When `true`, `pull` creates local files for unlinked Azure TCs. |
| `targetFolder` | `.` (config dir) | Directory where new files are created. Relative to the config file. |

Generated files use the format matching `local.type` (`.feature` for Gherkin, `.md` for Markdown). The `@tc:ID` tag is written into the file so subsequent pushes link back to the same TC.
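A pulled Gherkin file might look like the sketch below — the ID and title are illustrative, the actual content mirrors the Azure Test Case:

```gherkin
@tc:2051
Scenario: Password reset email is sent
  ...
```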

---

## Suite hierarchy

By default, all Test Cases go into a single flat suite (`suiteMapping: "flat"`). Two additional modes mirror local structure as child suites in Azure.

### `byFolder` — mirror folder structure

```json
{
  "testPlan": {
    "id": 1234,
    "suiteId": 5678,
    "suiteMapping": "byFolder"
  }
}
```

```
specs/
  login/
    basic.feature    → suite "login"    → TC "Successful login"
  checkout/
    happy.feature    → suite "checkout" → TC "Add item and checkout"
```

### `byFile` — one suite per spec file

```json
{
  "testPlan": {
    "id": 1234,
    "suiteId": 5678,
    "suiteMapping": "byFile"
  }
}
```

```
specs/
  login/
    basic.feature    → suite "login / basic"    → TC "Successful login"
  checkout/
    happy.feature    → suite "checkout / happy" → TC "Add item and checkout"
```

With `byFile`, each spec file gets its own dedicated child suite named after the file (without extension). The folder hierarchy is still reflected as parent suites. All Test Cases from the same file land in the same leaf suite.

Child suites are created automatically if they do not exist. The suite hierarchy is reused across runs.

---

## Multi-suite routing

`testPlan.suiteRouting` routes each Test Case to a specific child suite based on its tags. This is separate from `suiteMapping` — it assigns a **primary suite** per test based on tag expressions evaluated in order. The first matching route wins.

```json
{
  "testPlan": {
    "id": 1234,
    "suiteId": 5678,
    "suiteRouting": [
      { "tags": "@smoke", "suite": "Smoke" },
      { "tags": "@regression", "suite": "Regression" },
      { "suite": "General" }
    ]
  }
}
```

A route with no `tags` is a catch-all — it matches every test that didn't match an earlier route.

The `suite` value can be:
- A **string** — the named child suite is auto-created under `suiteId` if it doesn't exist.
- A **number** — the exact suite ID is used directly (must already exist).

If no route matches and no catch-all is defined, the test falls back to `suiteId`.
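Both forms can be mixed. In the sketch below (the IDs are hypothetical), `@legacy` tests go straight to an existing suite by ID while everything else lands in an auto-created child suite:

```json
{
  "testPlan": {
    "id": 1234,
    "suiteId": 5678,
    "suiteRouting": [
      { "tags": "@legacy", "suite": 9012 },
      { "suite": "General" }
    ]
  }
}
```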

### Combining routing with multi-plan mode

Each `testPlans` entry can define its own `suiteRouting`, overriding any base routing:

```json
{
  "testPlans": [
    {
      "id": 1001,
      "suiteId": 2001,
      "include": "specs/smoke/**/*.feature",
      "suiteRouting": [
        { "tags": "@critical", "suite": "Critical" },
        { "suite": "Smoke" }
      ]
    },
    {
      "id": 1002,
      "suiteId": 2002,
      "include": "specs/regression/**/*.feature",
      "suiteMapping": "byFile"
    }
  ]
}
```

---

## Conflict detection

ado-sync uses a local state cache (`.ado-sync-state.json`) to detect conflicts — cases where **both** the local file and the Azure Test Case were changed since the last sync.

The `sync.conflictAction` setting controls what happens:

| Value | Behaviour |
|-------|-----------|
| `"overwrite"` *(default)* | Push the local version to Azure, overwriting the remote change. |
| `"skip"` | Emit a `!` conflict result and leave both sides unchanged. |
| `"fail"` | Throw an error listing all conflicting scenarios and abort. |

```json
{ "sync": { "conflictAction": "skip" } }
```

**Commit `.ado-sync-state.json` to version control** so all team members and CI share the same last-synced state.

The cache also speeds up `push` — unchanged scenarios (same local hash + same Azure `changedDate`) are skipped without an API call.

To reset the cache, delete `.ado-sync-state.json`. The next push re-populates it from Azure.

---

## CI / build server mode

Set `sync.disableLocalChanges: true` to prevent ado-sync from writing back to local files:

- `push` — creates and updates Test Cases in Azure, but does **not** write ID tags to local files.
- `pull` — computes what would change but does **not** modify local files (behaves like `--dry-run`).

```json
{ "sync": { "disableLocalChanges": true } }
```

Or per-run via `--config-override`:

```bash
ado-sync push --config-override sync.disableLocalChanges=true
```

### GitHub Actions example

```yaml
- name: Sync test cases to Azure DevOps
  run: ado-sync push --config-override sync.disableLocalChanges=true
  env:
    AZURE_DEVOPS_TOKEN: ${{ secrets.AZURE_DEVOPS_TOKEN }}
```
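An equivalent step for Azure Pipelines might look like the sketch below — it assumes the PAT is available as a secret pipeline variable named `AZURE_DEVOPS_TOKEN`:

```yaml
- script: ado-sync push --config-override sync.disableLocalChanges=true
  displayName: Sync test cases to Azure DevOps
  env:
    AZURE_DEVOPS_TOKEN: $(AZURE_DEVOPS_TOKEN)
```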

---

## Removed scenario detection

When a scenario is deleted from a local file but its Test Case still exists in the Azure suite, ado-sync detects this on the next `push` and appends the tag `ado-sync:removed` to the Azure Test Case (without deleting it). A `−` removed line is printed in the output.

To completely remove the Test Case from Azure, delete it manually in Test Plans after reviewing.

---

## AI auto-summary for code tests

When pushing code-based test types (`java`, `csharp`, `python`, `javascript`, `playwright`), ado-sync reads each test function body and automatically generates a TC **title**, **description**, and numbered **steps**.

**What gets generated and when:**

| Test state | What AI generates |
|------------|------------------|
| No doc comment at all | Title + description + all steps |
| Has doc comment steps but no description | Description only (existing steps kept) |
| Has both steps and a description | Nothing — left unchanged |

Local source files are **never modified** by the AI summary feature — unless `sync.ai.writebackDocComment` is `true` (see [JSDoc writeback](#jsdoc-writeback-syncaiwritebackdoccomment) below).

### JSDoc writeback (`sync.ai.writebackDocComment`)

When `writebackDocComment: true` is set, ado-sync writes AI-generated steps back into the JS/TS source file as a JSDoc block above each `test()` call immediately after the first push. On subsequent pushes the parser reads the JSDoc back, so AI is not re-invoked and the steps remain stable even if the test body is edited.

**Why this matters:** Without writeback, AI re-reads the test body on every push and may produce slightly different phrasing each time — changing the Azure Test Case steps unnecessarily. With writeback, the steps are frozen in the source file on the first push and never change unless you edit the JSDoc manually.

```json
{
  "sync": {
    "ai": {
      "provider": "anthropic",
      "model": "claude-sonnet-4-6",
      "apiKey": "$ANTHROPIC_API_KEY",
      "writebackDocComment": true
    }
  }
}
```

After the first `ado-sync push` the source file will contain:

```typescript
/**
 * User can log in with valid credentials
 * Description: Verifies the login form accepts a correct email/password pair
 * 1. Navigate to the login page
 * 2. Enter a valid email address
 * 3. Enter the matching password
 * 4. Click the Sign In button
 * 5. Check: The dashboard is displayed
 */
test('should log in with valid credentials', async ({ page }) => { ... });
```

**Rules:**
- Only applies to the `javascript`, `playwright`, `puppeteer`, `cypress`, `detox`, and `xcuitest` framework types.
- Has no effect when `sync.disableLocalChanges: true`.
- If a JSDoc block already exists above a `test()` call, it is replaced, not duplicated.
- Steps prefixed `Check:` map to Azure's **Expected Result** column when `sync.format.useExpectedResult: true`.
- You can populate JSDoc comments manually before the first push — the parser will read them and skip AI entirely.

**Recommended workflow for pre-populating existing specs:**
1. Write the JSDoc manually above each `test()` call (or use an LLM in your editor to batch-generate them).
2. Enable `writebackDocComment: true` in config.
3. Run `ado-sync push` — existing JSDoc is read, AI is skipped, and steps are pushed to Azure.

### AI failure analysis

When `sync.ai.analyzeFailures: true` is set (and the provider is `ollama`, `docker`, `openai`, or `anthropic`), ado-sync uses the AI provider to generate a root-cause summary for failing test results during `publish-test-results`. The summary is attached as a comment on the Azure Test Run result.

```json
{
  "sync": {
    "ai": {
      "provider": "anthropic",
      "apiKey": "$ANTHROPIC_API_KEY",
      "analyzeFailures": true
    }
  }
}
```

The AI receives the test name, error message, and stack trace (if available) and returns a `rootCause` and `suggestion`. These are appended to the Azure test result comment for easy triage.

> `analyzeFailures` has no effect for the `heuristic` and `local` providers, which do not perform failure analysis.

---

### Providers

| Provider | Quality | Requires |
|----------|---------|---------|
| `local` *(default)* | Good–Excellent | A GGUF model file (see setup below) |
| `heuristic` | Basic | Nothing — zero dependencies, works offline |
| `ollama` | Good–Excellent | [Ollama](https://ollama.com) server running locally |
| `docker` | Good–Excellent | Docker Desktop with Model Runner enabled — `--ai-model ai/llama3.2`, no API key |
| `openai` | Excellent | OpenAI API key, or any OpenAI-compatible proxy (LiteLLM, Azure OpenAI, vLLM, etc.) |
| `anthropic` | Excellent | Anthropic API key |

> **No setup required to try it.** If no `--ai-model` is passed for `local`, it falls back to `heuristic` silently — so `ado-sync push` always works.

### CLI flags

| Flag | Description |
|------|-------------|
| `--ai-provider <p>` | Provider to use. Default: `local`. Pass `none` to disable entirely. |
| `--ai-model <m>` | For `local`: path to a `.gguf` file. For `ollama`/`docker`/`openai`/`anthropic`: model name/tag. |
| `--ai-url <url>` | Base URL for `ollama`, `docker`, or an OpenAI-compatible endpoint. |
| `--ai-key <key>` | API key for `openai` or `anthropic`. Supports `$ENV_VAR` references. |
| `--ai-context <file>` | Path to a markdown file with domain context/instructions injected into the AI prompt. |

---

### Domain context file (`sync.ai.contextFile`)

You can provide a markdown file that gives the AI additional context about your application or team conventions. The file's content is injected into the prompt before the test code, so the AI can use it when writing titles, descriptions, and steps.

#### Config

```json
{
  "sync": {
    "ai": {
      "provider": "anthropic",
      "model": "claude-sonnet-4-6",
      "apiKey": "$ANTHROPIC_API_KEY",
      "contextFile": "./docs/ai-context.md"
    }
  }
}
```

```yaml
sync:
  ai:
    provider: anthropic
    model: claude-sonnet-4-6
    apiKey: $ANTHROPIC_API_KEY
    contextFile: ./docs/ai-context.md
```

The path is resolved relative to the config file directory. Absolute paths are also accepted.

#### CLI override

```bash
ado-sync push --ai-context ./docs/ai-context.md
```

The CLI flag takes precedence over `contextFile` in config.

#### What to put in the context file

The file is plain markdown — write whatever helps the AI produce better output for your domain. Common patterns:

```markdown
## Glossary
- "Checkout" means the 3-step payment flow (cart → shipping → payment)
- "PDP" means Product Detail Page
- "MFA" means multi-factor authentication via the Authenticator app

## Step writing style
- Start every action step with a verb: Click, Enter, Select, Navigate, Verify
- Use customer-facing button/field labels, not CSS selectors or test IDs
- Precondition steps ("Given the user is logged in") come before action steps
- End with at least one "Check:" verification step

## Out of scope
- Do not mention internal service names (e.g. auth-svc, cart-ms)
- Do not reference environment-specific URLs
```

#### Notes

- Context is injected for all LLM providers: `local`, `ollama`, `docker`, `openai`, `anthropic`.
- The `heuristic` provider does not use a prompt and ignores this setting.
- If the file cannot be read, a warning is printed and the push continues without it.

---

### Setting up the local provider (step by step)

`node-llama-cpp` is bundled with ado-sync — **no separate install needed**. You only need to download a model file once.

#### Step 1 — Choose a model size

All models use the `Q4_K_M` quantization (the best balance of size and quality).

| Model | RAM needed | Quality | HF repo |
|-------|-----------|---------|---------|
| E2B | ~3.2 GB | Good | `google/gemma-4-e2b-it-GGUF` |
| **E4B** *(start here)* | ~5 GB | Better | `google/gemma-4-e4b-it-GGUF` |
| 26B A4B (MoE) | ~15.6 GB | Excellent local | `google/gemma-4-26b-a4b-it-GGUF` |
| 31B | ~17.4 GB | Best | `google/gemma-4-31b-it-GGUF` |

#### Step 2 — Download the model

**macOS / Linux:**
```bash
mkdir -p ~/.cache/ado-sync/models

# curl (no extra tools needed)
curl -L -o ~/.cache/ado-sync/models/gemma-4-e4b-it-Q4_K_M.gguf \
  "https://huggingface.co/google/gemma-4-e4b-it-GGUF/resolve/main/gemma-4-e4b-it-Q4_K_M.gguf"

# or huggingface-cli (shows a progress bar — useful for larger models)
pip install -U huggingface_hub
huggingface-cli download google/gemma-4-e4b-it-GGUF \
  gemma-4-e4b-it-Q4_K_M.gguf \
  --local-dir ~/.cache/ado-sync/models
```

**Windows (PowerShell):**
```powershell
New-Item -ItemType Directory -Force "$env:LOCALAPPDATA\ado-sync\models"

# Invoke-WebRequest
Invoke-WebRequest `
  -Uri "https://huggingface.co/google/gemma-4-e4b-it-GGUF/resolve/main/gemma-4-e4b-it-Q4_K_M.gguf" `
  -OutFile "$env:LOCALAPPDATA\ado-sync\models\gemma-4-e4b-it-Q4_K_M.gguf"

# or huggingface-cli (shows a progress bar)
pip install -U huggingface_hub
huggingface-cli download google/gemma-4-e4b-it-GGUF `
  gemma-4-e4b-it-Q4_K_M.gguf `
  --local-dir "$env:LOCALAPPDATA\ado-sync\models"
```

#### Step 3 — Push with the model

```bash
# macOS / Linux
ado-sync push --ai-model ~/.cache/ado-sync/models/gemma-4-e4b-it-Q4_K_M.gguf

# Windows
ado-sync push --ai-model "$env:LOCALAPPDATA\ado-sync\models\gemma-4-e4b-it-Q4_K_M.gguf"
```

The model is loaded once and reused for all tests in the run — no repeated loading overhead.

### Complete example — C# MSTest with local LLM

A full `ado-sync.yaml` for a C# MSTest project using a local GGUF model (no API key, no internet required at push time):

```yaml
orgUrl: https://dev.azure.com/your-org
project: YourProject
auth:
  type: pat
  token: $AZURE_DEVOPS_TOKEN
testPlan:
  id: 12345
  suiteId: 12346
  suiteMapping: flat
local:
  type: csharp
  include: Tests/**/*.cs
sync:
  tagPrefix: tc
  titleField: System.Title
  markAutomated: true
  ai:
    provider: local
    model: ~/.cache/ado-sync/models/qwen2.5-coder-7b-instruct-q4_k_m.gguf
    # Windows: model: $env:LOCALAPPDATA\ado-sync\models\qwen2.5-coder-7b-instruct-q4_k_m.gguf
```

Run:
```bash
export AZURE_DEVOPS_TOKEN=your-pat
ado-sync push --config ado-sync.yaml
```

> No `apiKey` or `baseUrl` needed — the model runs entirely in-process via `node-llama-cpp`.

---
|
|
702
|
-
|
|
703
|
-
### Setting up Ollama
|
|
704
|
-
|
|
705
|
-
```bash
|
|
706
|
-
# 1. Install Ollama from https://ollama.com
|
|
707
|
-
|
|
708
|
-
# 2. Pull a model
|
|
709
|
-
ollama pull gemma-4-e4b-it
|
|
710
|
-
|
|
711
|
-
# 3. Push (Ollama server must be running)
|
|
712
|
-
ado-sync push --ai-provider ollama --ai-model gemma-4-e4b-it
|
|
713
|
-
```
|
|
714
|
-
|
|
715
|
-
### Setting up OpenAI / Anthropic

```bash
ado-sync push --ai-provider openai --ai-key $OPENAI_API_KEY
ado-sync push --ai-provider anthropic --ai-key $ANTHROPIC_API_KEY
```

---

### Using GitHub Copilot or Claude Code

If you already use **GitHub Copilot** or **Claude Code** as your IDE AI assistant, you can reuse the same credentials with ado-sync. The key point: these tools are IDE plugins — they don't expose an API endpoint ado-sync can call. Instead, use the underlying AI provider they run on.

#### Claude Code → `anthropic` provider

Claude Code is powered by Anthropic's Claude models. If you have an `ANTHROPIC_API_KEY`, pass it directly:

```bash
export ANTHROPIC_API_KEY=sk-ant-...

ado-sync push --ai-provider anthropic --ai-key $ANTHROPIC_API_KEY
```

Pin a specific model with `--ai-model` (default is `claude-haiku-4-5-20251001`):

```bash
# Faster / cheaper
ado-sync push --ai-provider anthropic --ai-key $ANTHROPIC_API_KEY --ai-model claude-haiku-4-5-20251001

# Higher quality
ado-sync push --ai-provider anthropic --ai-key $ANTHROPIC_API_KEY --ai-model claude-sonnet-4-6
```

Config file equivalent — set once, never repeat the flag:

```json
{
  "sync": {
    "ai": {
      "provider": "anthropic",
      "apiKey": "$ANTHROPIC_API_KEY",
      "model": "claude-haiku-4-5-20251001"
    }
  }
}
```

#### GitHub Copilot → `openai` provider (optionally with `--ai-url`)

GitHub Copilot itself does not expose a public API endpoint. Use one of these alternatives depending on your subscription:

**Option A — OpenAI API key** (Copilot Individual / Team subscribers)

If you have a separate OpenAI API key:

```bash
ado-sync push --ai-provider openai --ai-key $OPENAI_API_KEY
```

**Option B — Azure OpenAI** (Copilot Enterprise / corporate Azure customers)

If your org has an Azure OpenAI deployment:

```bash
ado-sync push \
  --ai-provider openai \
  --ai-url "https://<your-resource>.openai.azure.com/openai/deployments/<deployment>/v1" \
  --ai-key $AZURE_OPENAI_KEY \
  --ai-model gpt-4o-mini
```

Config file equivalent:

```json
{
  "sync": {
    "ai": {
      "provider": "openai",
      "baseUrl": "https://<your-resource>.openai.azure.com/openai/deployments/<deployment>/v1",
      "apiKey": "$AZURE_OPENAI_KEY",
      "model": "gpt-4o-mini"
    }
  }
}
```

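The Azure base URL always follows the same shape; a throwaway helper (hypothetical, not part of ado-sync) makes the two placeholders explicit:

```typescript
// Build the OpenAI-compatible base URL for an Azure OpenAI deployment.
// `resource` and `deployment` are the two placeholders from the example above.
function azureOpenAiBaseUrl(resource: string, deployment: string): string {
  return `https://${resource}.openai.azure.com/openai/deployments/${deployment}/v1`;
}
```
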
**Option C — No API key (heuristic)**

Works offline with zero setup — good when you don't want to spend API credits:

```bash
ado-sync push --ai-provider heuristic
```
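
The `heuristic` provider derives titles from test names instead of calling a model. A sketch of the general idea (illustrative only; the real heuristics are more involved):

```typescript
// Turn a camelCase / snake_case test method name into readable words,
// e.g. "Login_WithValidCredentials_Succeeds" -> "Login With Valid Credentials Succeeds".
function titleFromTestName(name: string): string {
  return name
    .replace(/[_-]+/g, " ")                  // snake/kebab separators -> spaces
    .replace(/([a-z0-9])([A-Z])/g, "$1 $2")  // split camelCase boundaries
    .replace(/\s+/g, " ")
    .trim();
}
```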

#### Quick reference

| You use | Recommended provider | Command |
|---------|---------------------|---------|
| Claude Code | `anthropic` | `ado-sync push --ai-provider anthropic --ai-key $ANTHROPIC_API_KEY` |
| Copilot Individual / Team | `openai` | `ado-sync push --ai-provider openai --ai-key $OPENAI_API_KEY` |
| Copilot Enterprise / Azure | `openai` + `--ai-url` | See Azure OpenAI option above |
| Either, no API budget | `heuristic` | `ado-sync push --ai-provider heuristic` |
| Privacy-sensitive / air-gapped | `local` | `ado-sync push --ai-model ~/.cache/ado-sync/models/...` |

#### Running ado-sync from within your IDE assistant

Both tools can execute terminal commands, so you can ask them to run ado-sync for you directly.

**Claude Code:**

```
Run: ado-sync push --ai-provider anthropic --ai-key $ANTHROPIC_API_KEY --dry-run
```

Claude Code will execute it in the terminal and explain what would change before you commit to a real push.

**GitHub Copilot Chat (VS Code):**

Use the `@terminal` agent in Copilot Chat:

```
@terminal run ado-sync push --ai-provider heuristic --dry-run and explain the output
```

Copilot will propose the command in the terminal panel for you to accept and run.

### Using LiteLLM (or any OpenAI-compatible proxy)

[LiteLLM](https://github.com/BerriAI/litellm) is a proxy that exposes an OpenAI-compatible API for 100+ model providers (Azure OpenAI, Bedrock, Gemini, Mistral, Cohere, vLLM, and more). Use the `openai` provider with `--ai-url` pointing at your LiteLLM server:

```bash
# Start LiteLLM proxy (example)
litellm --model gpt-4o-mini  # listens on http://localhost:4000 by default

# Push using LiteLLM
ado-sync push \
  --ai-provider openai \
  --ai-url http://localhost:4000 \
  --ai-key $LITELLM_API_KEY \
  --ai-model gpt-4o-mini
```

The same `--ai-url` override works for any other OpenAI-compatible server:

| Service | `--ai-url` |
|---------|-----------|
| LiteLLM (local proxy) | `http://localhost:4000` |
| LiteLLM (hosted) | `https://<your-litellm-host>/v1` |
| Hugging Face Inference | `https://router.huggingface.co/v1` |
| Azure OpenAI | `https://<resource>.openai.azure.com/openai/deployments/<deployment>` |
| vLLM | `http://localhost:8000/v1` |
| LocalAI | `http://localhost:8080/v1` |
| LM Studio | `http://localhost:1234/v1` |

> **Note:** `api-inference.huggingface.co` is deprecated — use `router.huggingface.co` instead.

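"OpenAI-compatible" means all of these servers accept the same chat-completions request shape. A sketch of the request such a provider would send (the helper is hypothetical; the field names follow the OpenAI chat API):

```typescript
// Build an OpenAI-compatible chat completion request for a given base URL.
// Any server in the table above understands this exact shape.
function buildChatRequest(baseUrl: string, apiKey: string, model: string, prompt: string) {
  return {
    url: `${baseUrl.replace(/\/$/, "")}/chat/completions`,
    method: "POST" as const,
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${apiKey}`,
    },
    body: JSON.stringify({
      model,
      messages: [{ role: "user", content: prompt }],
    }),
  };
}
```
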
### Using Hugging Face Inference API

[Hugging Face](https://huggingface.co) provides a free serverless inference API for open-source models. Use the `openai` provider since HF exposes an OpenAI-compatible endpoint:

```bash
ado-sync push \
  --ai-provider openai \
  --ai-url https://router.huggingface.co/v1 \
  --ai-key $HF_TOKEN \
  --ai-model Qwen/Qwen2.5-Coder-7B-Instruct
```

Get a token at [huggingface.co/settings/tokens](https://huggingface.co/settings/tokens) (requires the **Inference** permission).

Recommended open-source models:

| Model | Notes |
|-------|-------|
| `Qwen/Qwen2.5-Coder-7B-Instruct` | Best for code/test understanding |
| `meta-llama/Llama-3.1-8B-Instruct` | Good general purpose |
| `mistralai/Mistral-7B-Instruct-v0.3` | Lightweight and fast |

Config file equivalent:

```json
{
  "sync": {
    "ai": {
      "provider": "openai",
      "baseUrl": "https://router.huggingface.co/v1",
      "apiKey": "$HF_TOKEN",
      "model": "Qwen/Qwen2.5-Coder-7B-Instruct"
    }
  }
}
```

> **LiteLLM model names:** When using a hosted LiteLLM instance that proxies Anthropic models, prefix the model name with `anthropic/`, e.g. `anthropic/claude-opus-4-6`. Check your instance's `/v1/models` endpoint for registered model names.
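
Config values written as `$NAME` (like `$HF_TOKEN` above) are resolved from the environment when the config loads. A minimal sketch of that substitution (the helper is hypothetical; check the configuration docs for the exact rules ado-sync applies):

```typescript
// Expand a config value of the form "$NAME" from an environment map.
// Values without a leading "$" pass through untouched.
function expandEnvValue(value: string, env: Record<string, string | undefined>): string {
  if (!value.startsWith("$")) return value;
  const name = value.slice(1);
  const resolved = env[name];
  if (resolved === undefined) {
    throw new Error(`environment variable ${name} is not set`);
  }
  return resolved;
}
```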

Config file equivalent — set any `--ai-*` flag in `sync.ai` to avoid repeating it on every push. CLI flags always take precedence over config values:

```json
{
  "sync": {
    "ai": {
      "provider": "openai",
      "baseUrl": "http://localhost:4000",
      "apiKey": "$LITELLM_API_KEY",
      "model": "gpt-4o-mini"
    }
  }
}
```

The `sync.ai` block works for any provider:

```json
{ "sync": { "ai": { "provider": "ollama", "model": "gemma-4-e4b-it" } } }
```

```json
{ "sync": { "ai": { "provider": "anthropic", "apiKey": "$ANTHROPIC_API_KEY" } } }
```

```json
{ "sync": { "ai": { "provider": "none" } } }
```

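The precedence rule (CLI flags beat `sync.ai` config values) amounts to a field-by-field merge. A sketch, with field names that mirror the flags (illustrative only, not ado-sync's actual resolver):

```typescript
interface AiConfig {
  provider?: string;
  model?: string;
  apiKey?: string;
  baseUrl?: string;
}

// CLI flags win field-by-field; config fills whatever the flags left unset.
function resolveAiConfig(cli: AiConfig, file: AiConfig): AiConfig {
  const merged: AiConfig = { ...file };
  for (const key of ["provider", "model", "apiKey", "baseUrl"] as const) {
    if (cli[key] !== undefined) merged[key] = cli[key];
  }
  return merged;
}
```
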
### Complete example — C# MSTest with Hugging Face

A full `ado-sync.yaml` for a C# MSTest project using the Hugging Face Inference API for AI-generated test steps:

```yaml
orgUrl: https://dev.azure.com/your-org
project: YourProject
auth:
  type: pat
  token: $AZURE_DEVOPS_TOKEN
testPlan:
  id: 12345
  suiteId: 12346
  suiteMapping: flat
local:
  type: csharp
  include: Tests/**/*.cs
sync:
  tagPrefix: tc
  titleField: System.Title
  markAutomated: true
  ai:
    provider: openai
    baseUrl: https://router.huggingface.co/v1
    apiKey: $HF_TOKEN
    model: Qwen/Qwen2.5-Coder-7B-Instruct
```

Run:
```bash
export AZURE_DEVOPS_TOKEN=your-pat
export HF_TOKEN=hf_xxxxxxxxxxxxxxxxxxxxxxxx
ado-sync push --config ado-sync.yaml
```

### Disabling AI summary

```bash
ado-sync push --ai-provider none
```

---

### How it works internally

1. After parsing local files, ado-sync checks each test for a missing description or missing steps.
2. For each test that needs either, ado-sync extracts the raw function body from the source file.
3. The body is sent to the configured provider with a prompt requesting `Title:`, `Description:`, and `N. Step` / `N. Check:` lines.
4. Title and steps are applied only when the test had no existing steps. Description is applied only when the test had no existing description.
5. If the LLM call fails (network error, model not found, etc.), it automatically falls back to `heuristic`.
6. The `local` provider caches the GGUF model in memory for the entire push run — a 50-test suite loads it only once.
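
The response format in step 3 can be parsed line by line. A sketch of such a parser (illustrative only, not the package's actual parser, which handles more edge cases):

```typescript
interface Summary {
  title?: string;
  description?: string;
  steps: { action: string; check?: string }[];
}

// Pull "Title:", "Description:", and numbered "N. ..." / "N. Check: ..." lines
// out of an LLM response in the prompt's requested format.
function parseSummary(text: string): Summary {
  const summary: Summary = { steps: [] };
  for (const raw of text.split("\n")) {
    const line = raw.trim();
    if (line.startsWith("Title:")) {
      summary.title = line.slice("Title:".length).trim();
    } else if (line.startsWith("Description:")) {
      summary.description = line.slice("Description:".length).trim();
    } else {
      const m = line.match(/^\d+\.\s*(.*)$/);
      if (!m) continue; // ignore any chatter outside the expected format
      const body = m[1];
      if (body.startsWith("Check:") && summary.steps.length > 0) {
        // A "Check:" line verifies the step that precedes it.
        summary.steps[summary.steps.length - 1].check = body.slice("Check:".length).trim();
      } else {
        summary.steps.push({ action: body });
      }
    }
  }
  return summary;
}
```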
|