executable-stories-formatters 0.1.0
- package/README.md +205 -0
- package/dist/adapters.cjs +324 -0
- package/dist/adapters.cjs.map +1 -0
- package/dist/adapters.d.cts +1 -0
- package/dist/adapters.d.ts +1 -0
- package/dist/adapters.js +295 -0
- package/dist/adapters.js.map +1 -0
- package/dist/cli.js +5399 -0
- package/dist/cli.js.map +1 -0
- package/dist/index-DCJ0NvAp.d.cts +378 -0
- package/dist/index-DCJ0NvAp.d.ts +378 -0
- package/dist/index.cjs +5519 -0
- package/dist/index.cjs.map +1 -0
- package/dist/index.d.cts +1468 -0
- package/dist/index.d.ts +1468 -0
- package/dist/index.js +5448 -0
- package/dist/index.js.map +1 -0
- package/package.json +77 -0
- package/schemas/README.md +663 -0
- package/schemas/examples/dotnet.json +108 -0
- package/schemas/examples/full.json +254 -0
- package/schemas/examples/go.json +108 -0
- package/schemas/examples/junit5.json +108 -0
- package/schemas/examples/minimal.json +18 -0
- package/schemas/examples/pytest.json +108 -0
- package/schemas/examples/rust.json +108 -0
- package/schemas/raw-run.schema.json +437 -0
# Executable Stories — Schema & Cross-Language Guide

Executable Stories turns test results from **any** language or framework into rich HTML, Markdown, JUnit XML, and Cucumber JSON reports. You produce a JSON file conforming to the [RawRun schema](raw-run.schema.json), then feed it to the `executable-stories` CLI.

> **The files in `examples/*.json` are RawRun JSON examples** — they are _not_ JUnit XML, pytest JSON, or any other native framework format. They show what the CLI expects as **input**.

---

## Installation

### Standalone binary (recommended for non-JS projects)

The CLI is compiled to a standalone binary via `bun build --compile` — **zero runtime dependencies**. Download the `executable-stories` binary for your platform from the [releases page](https://github.com/nicholasgriffintn/executable-stories/releases) and put it on your `PATH`.

```bash
# Example: Linux x64
chmod +x executable-stories
sudo mv executable-stories /usr/local/bin/
```

### npm (for JS/TS projects)

```bash
npm install -g executable-stories-formatters
# or locally
npm install --save-dev executable-stories-formatters
```

### Dev mode (for contributors)

```bash
pnpm install
pnpm build
node packages/executable-stories-formatters/dist/cli.js --help
```

---

## Quickstart

Create a minimal JSON file (`run.json`):

```json
{
  "schemaVersion": 1,
  "projectRoot": "/path/to/project",
  "testCases": [
    { "title": "Login succeeds", "status": "pass" },
    { "title": "Login fails with wrong password", "status": "fail" },
    { "title": "Registration disabled", "status": "skip" }
  ]
}
```

Generate an HTML report:

```bash
executable-stories format run.json --format html --output-dir reports
```

That's it. The `--synthesize-stories` flag (on by default) will create story metadata for test cases that don't have explicit BDD steps.

See [`examples/minimal.json`](examples/minimal.json) for the full minimal example.

---

## Schema Reference

The full schema is in [`raw-run.schema.json`](raw-run.schema.json). Here are the key types:

### RawRun (top level)

| Field            | Type            | Required | Description                                                  |
| ---------------- | --------------- | -------- | ------------------------------------------------------------ |
| `schemaVersion`  | `1`             | Yes      | Must be `1`                                                  |
| `testCases`      | `RawTestCase[]` | Yes      | All test cases from the run                                  |
| `projectRoot`    | `string`        | Yes      | Absolute path to project root (for resolving relative paths) |
| `startedAtMs`    | `number`        | No       | Run start time (Unix epoch ms)                               |
| `finishedAtMs`   | `number`        | No       | Run finish time (Unix epoch ms)                              |
| `packageVersion` | `string`        | No       | Version of the package under test                            |
| `gitSha`         | `string`        | No       | Git commit SHA                                               |
| `ci`             | `RawCIInfo`     | No       | CI environment info (`name`, `url`, `buildNumber`)           |
| `meta`           | `object`        | No       | Arbitrary metadata (escape hatch)                            |
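
Assembled in code, the three required fields above are all a valid RawRun needs; a minimal Python sketch (the `make_raw_run` helper is ours, not part of the package):

```python
import json
import os
import time

def make_raw_run(test_cases):
    """Assemble a RawRun dict: the required fields plus an optional start time."""
    return {
        "schemaVersion": 1,                      # must be the literal 1
        "projectRoot": os.getcwd(),              # absolute path, used to resolve relative paths
        "startedAtMs": int(time.time() * 1000),  # optional: Unix epoch milliseconds
        "testCases": test_cases,
    }

run = make_raw_run([{"title": "Login works", "status": "pass"}])
json.dumps(run)  # ready to write out as run.json
```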

### RawTestCase

| Field         | Type                   | Required | Description                                         |
| ------------- | ---------------------- | -------- | --------------------------------------------------- |
| `status`      | `RawStatus`            | Yes      | Test result status (see below)                      |
| `title`       | `string`               | No       | Human-readable test name                            |
| `externalId`  | `string`               | No       | Framework's native test ID                          |
| `titlePath`   | `string[]`             | No       | Suite/describe path down to the test name           |
| `sourceFile`  | `string`               | No       | Path to source file (relative to `projectRoot`)     |
| `sourceLine`  | `integer`              | No       | Line number (1-based)                               |
| `durationMs`  | `number`               | No       | Duration in **milliseconds**                        |
| `error`       | `{ message?, stack? }` | No       | Error info for failures                             |
| `story`       | `StoryMeta`            | No       | BDD story metadata (scenario, steps, tags, tickets) |
| `attachments` | `RawAttachment[]`      | No       | Screenshots, logs, artifacts                        |
| `meta`        | `object`               | No       | Arbitrary metadata                                  |
| `retry`       | `integer`              | No       | Retry attempt number (0-based)                      |
| `retries`     | `integer`              | No       | Total retries configured                            |
| `projectName` | `string`               | No       | Project name (e.g. Playwright project)              |
| `stepEvents`  | `RawStepEvent[]`       | No       | Step-level execution events                         |

### RawStatus

```
pass | fail | skip | todo | pending | timeout | interrupted | unknown
```

Emitters/converters should output these RawStatus values. Canonicalization normalizes them to `passed | failed | skipped | pending`:

| RawStatus     | Canonical |
| ------------- | --------- |
| `pass`        | `passed`  |
| `fail`        | `failed`  |
| `skip`        | `skipped` |
| `todo`        | `pending` |
| `pending`     | `pending` |
| `timeout`     | `failed`  |
| `interrupted` | `failed`  |
| `unknown`     | `skipped` |
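
For converters written in Python, the table above can be carried as data; an illustrative sketch (the CLI performs this normalization internally, so a converter never needs to do it itself):

```python
# RawStatus → canonical status, straight from the table above.
CANONICAL = {
    "pass": "passed",
    "fail": "failed",
    "skip": "skipped",
    "todo": "pending",
    "pending": "pending",
    "timeout": "failed",
    "interrupted": "failed",
    "unknown": "skipped",
}

def canonicalize(raw_status):
    """Normalize a RawStatus. Values outside the enum fall back to the
    'unknown' mapping (that fallback is our assumption, not the spec)."""
    return CANONICAL.get(raw_status, CANONICAL["unknown"])
```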

### StoryMeta (BDD metadata)

| Field         | Type          | Required | Description                                |
| ------------- | ------------- | -------- | ------------------------------------------ |
| `scenario`    | `string`      | Yes      | Scenario title                             |
| `steps`       | `StoryStep[]` | No       | BDD steps (`keyword` + `text`)             |
| `tags`        | `string[]`    | No       | Tags for filtering (`smoke`, `auth`, etc.) |
| `tickets`     | `string[]`    | No       | Ticket references (`AUTH-101`, `SEC-42`)   |
| `suitePath`   | `string[]`    | No       | Parent suite names for grouping            |
| `docs`        | `DocEntry[]`  | No       | Story-level documentation                  |
| `meta`        | `object`      | No       | User-defined metadata                      |
| `sourceOrder` | `integer`     | No       | Source order for stable sorting            |

### StoryStep

| Field     | Type                                                    | Required | Description                    |
| --------- | ------------------------------------------------------- | -------- | ------------------------------ |
| `keyword` | `Given \| When \| Then \| And \| But`                   | Yes      | BDD keyword (case-insensitive) |
| `text`    | `string`                                                | Yes      | Step description               |
| `mode`    | `normal \| skip \| only \| todo \| fails \| concurrent` | No       | Execution mode                 |
| `docs`    | `DocEntry[]`                                            | No       | Step-level documentation       |

### DocEntry kinds

`note`, `tag`, `kv`, `code`, `table`, `link`, `section`, `mermaid`, `screenshot`, `custom` — see [raw-run.schema.json](raw-run.schema.json) for full details on each kind.

---

## CLI Reference

```
executable-stories — Generate reports from test results JSON.

USAGE
  executable-stories format <file> [options]
  executable-stories format --stdin [options]
  executable-stories validate <file>
  executable-stories validate --stdin

SUBCOMMANDS
  format      Read raw test results and generate reports
  validate    Validate a JSON file against the schema (no output generated)
```

### Options

| Flag                         | Default        | Description                                                   |
| ---------------------------- | -------------- | ------------------------------------------------------------- |
| `--format <formats>`         | `html`         | Comma-separated: `html`, `markdown`, `junit`, `cucumber-json` |
| `--input-type <type>`        | `raw`          | Input type: `raw` or `canonical`                              |
| `--output-dir <dir>`         | `reports`      | Output directory                                              |
| `--output-name <name>`       | `test-results` | Base filename                                                 |
| `--synthesize-stories`       | on             | Synthesize story metadata for plain test results              |
| `--no-synthesize-stories`    |                | Disable story synthesis (strict mode)                         |
| `--html-title <title>`       | `Test Results` | HTML report title                                             |
| `--html-syntax-highlighting` | off            | Enable syntax highlighting in HTML                            |
| `--html-mermaid`             | off            | Enable Mermaid diagrams in HTML                               |
| `--html-markdown`            | off            | Enable markdown parsing in HTML                               |
| `--stdin`                    |                | Read JSON from stdin instead of a file                        |
| `--json-summary`             | off            | Print machine-parsable JSON summary                           |
| `--emit-canonical <path>`    |                | Write canonical JSON to the given path                        |
| `--help`                     |                | Show help message                                             |

### Exit Codes

| Code | Meaning                      |
| ---- | ---------------------------- |
| `0`  | Success                      |
| `1`  | Schema validation failure    |
| `2`  | Canonical validation failure |
| `3`  | Formatter/generation failure |
| `4`  | Bad arguments / usage error  |

### Examples

```bash
# Validate a JSON file
executable-stories validate run.json

# Generate HTML + Markdown reports
executable-stories format run.json --format html,markdown --output-dir reports

# Pipe from stdin
cat run.json | executable-stories format --stdin --format html

# Generate with all HTML features enabled
executable-stories format run.json \
  --format html \
  --html-syntax-highlighting \
  --html-mermaid \
  --html-markdown \
  --html-title "Sprint 42 Results"

# Export canonical JSON for debugging
executable-stories format run.json --emit-canonical canonical.json
```

---

## Framework Guides

Each guide below shows the **converter** approach: run your tests with native tooling, convert the output to RawRun JSON, then feed it to `executable-stories`.

### Duration normalization

RawRun expects durations in **milliseconds** (`durationMs`). Most frameworks report seconds — multiply by 1000.
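
A one-line helper for that conversion (the name is ours, purely illustrative):

```python
def to_duration_ms(seconds):
    """Convert a framework-reported duration in seconds to whole milliseconds."""
    return round(seconds * 1000)

to_duration_ms(0.342)  # → 342
```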

### Status mapping

| Framework native         | RawStatus |
| ------------------------ | --------- |
| JUnit: `SUCCESSFUL`      | `pass`    |
| JUnit: `FAILED`          | `fail`    |
| JUnit: `ABORTED`         | `skip`    |
| pytest: `passed`         | `pass`    |
| pytest: `failed`         | `fail`    |
| pytest: `skipped`        | `skip`    |
| pytest: `xfailed`        | `pending` |
| Go: `pass`               | `pass`    |
| Go: `fail`               | `fail`    |
| Go: `skip`               | `skip`    |
| xUnit: `Pass`            | `pass`    |
| xUnit: `Fail`            | `fail`    |
| xUnit: `Skip` / `NotRun` | `skip`    |
| Rust: `ok`               | `pass`    |
| Rust: `FAILED`           | `fail`    |
| Rust: `ignored`          | `skip`    |
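
The same table as a Python lookup, handy when writing a converter (keys are taken verbatim from the table; the `map_status` helper is ours):

```python
# Per-framework native status → RawStatus, from the table above.
FRAMEWORK_STATUS_MAP = {
    "junit5": {"SUCCESSFUL": "pass", "FAILED": "fail", "ABORTED": "skip"},
    "pytest": {"passed": "pass", "failed": "fail", "skipped": "skip", "xfailed": "pending"},
    "go":     {"pass": "pass", "fail": "fail", "skip": "skip"},
    "xunit":  {"Pass": "pass", "Fail": "fail", "Skip": "skip", "NotRun": "skip"},
    "rust":   {"ok": "pass", "FAILED": "fail", "ignored": "skip"},
}

def map_status(framework, native_status):
    """Map a framework-native status to RawStatus, defaulting to 'unknown'."""
    return FRAMEWORK_STATUS_MAP.get(framework, {}).get(native_status, "unknown")
```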

---

### JUnit 5 (Java)

**Today**: Run tests with Maven/Gradle, parse Surefire XML, convert to RawRun JSON.

```bash
# 1. Run tests (produces target/surefire-reports/*.xml)
mvn test

# 2. Convert Surefire XML → RawRun JSON
python3 convert_surefire.py target/surefire-reports/ > run.json

# 3. Generate reports
executable-stories format run.json --format html --output-dir reports
```

Converter sketch (`convert_surefire.py`):

```python
#!/usr/bin/env python3
"""Convert Maven Surefire XML reports to RawRun JSON."""
import json, sys, os, xml.etree.ElementTree as ET

STATUS_MAP = {"SUCCESSFUL": "pass", "FAILED": "fail", "ABORTED": "skip"}

def convert(reports_dir):
    test_cases = []
    for fname in sorted(os.listdir(reports_dir)):
        if not fname.startswith("TEST-") or not fname.endswith(".xml"):
            continue
        tree = ET.parse(os.path.join(reports_dir, fname))
        for tc in tree.findall(".//testcase"):
            status = "pass"
            error = None
            # Surefire records assertion failures as <failure> and unexpected
            # exceptions as <error>; treat both as failed.
            failure = tc.find("failure")
            if failure is None:
                failure = tc.find("error")
            if failure is not None:
                status = "fail"
                error = {"message": failure.get("message", ""), "stack": failure.text or ""}
            elif tc.find("skipped") is not None:
                status = "skip"
            # Surefire reports time in seconds (decimal)
            duration_s = float(tc.get("time", "0"))
            test_cases.append({
                "title": tc.get("name"),
                "status": status,
                "durationMs": round(duration_s * 1000),
                "sourceFile": tc.get("classname", "").replace(".", "/") + ".java",
                **({"error": error} if error else {}),
                "meta": {
                    "framework": "junit5",
                    "frameworkCaseId": f"{tc.get('classname')}.{tc.get('name')}"
                }
            })
    return {
        "schemaVersion": 1,
        "projectRoot": os.getcwd(),
        "meta": {"framework": "junit5"},
        "testCases": test_cases
    }

if __name__ == "__main__":
    print(json.dumps(convert(sys.argv[1]), indent=2))
```

See [`examples/junit5.json`](examples/junit5.json) for sample output.

**Later**: A thin `executable-stories-junit5` adapter library with `@Story`, `@Given`, `@When`, `@Then` annotations is planned.

---

### pytest (Python)

**Today**: Use `pytest-json-report` to capture results, then convert to RawRun JSON.

```bash
# 1. Run tests with JSON report
pip install pytest-json-report
pytest --json-report --json-report-file=.report.json

# 2. Convert → RawRun JSON
python3 convert_pytest.py .report.json > run.json

# 3. Generate reports
executable-stories format run.json --format html --output-dir reports
```

Converter sketch (`convert_pytest.py`):

```python
#!/usr/bin/env python3
"""Convert pytest-json-report output to RawRun JSON."""
import json, sys, os

STATUS_MAP = {"passed": "pass", "failed": "fail", "skipped": "skip", "xfailed": "pending"}

def convert(report_path):
    with open(report_path) as f:
        report = json.load(f)
    test_cases = []
    for t in report.get("tests", []):
        status = STATUS_MAP.get(t.get("outcome", ""), "unknown")
        error = None
        if status == "fail" and "longrepr" in t.get("call", {}):
            call = t["call"]
            error = {"message": call.get("longrepr", "")[:200], "stack": call.get("longrepr", "")}
        # pytest reports duration in seconds (float)
        duration_s = t.get("duration", 0)
        nodeid = t.get("nodeid", "")
        parts = nodeid.split("::")
        source_file = parts[0] if parts else ""
        test_cases.append({
            "externalId": nodeid,
            "title": parts[-1] if parts else nodeid,
            "status": status,
            "durationMs": round(duration_s * 1000),
            "sourceFile": source_file,
            **({"error": error} if error else {}),
            "meta": {
                "framework": "pytest",
                "frameworkCaseId": nodeid
            }
        })
    return {
        "schemaVersion": 1,
        "projectRoot": os.getcwd(),
        "meta": {"framework": "pytest"},
        "testCases": test_cases
    }

if __name__ == "__main__":
    print(json.dumps(convert(sys.argv[1]), indent=2))
```

See [`examples/pytest.json`](examples/pytest.json) for sample output.

**Later**: An `executable-stories-pytest` plugin with `@story`, `@given`, `@when`, `@then` decorators is planned.

---

### Go

**Today**: Run tests with `go test -json`, parse the JSON stream, convert to RawRun JSON.

```bash
# 1. Run tests with JSON output
go test -json ./... > test-output.jsonl

# 2. Convert → RawRun JSON
python3 convert_gotest.py test-output.jsonl > run.json

# 3. Generate reports
executable-stories format run.json --format html --output-dir reports
```

Converter sketch (`convert_gotest.py`):

```python
#!/usr/bin/env python3
"""Convert `go test -json` output to RawRun JSON."""
import json, sys, os

STATUS_MAP = {"pass": "pass", "fail": "fail", "skip": "skip"}

def convert(jsonl_path):
    tests = {}  # key: (package, test_name)
    with open(jsonl_path) as f:
        for line in f:
            event = json.loads(line)
            action = event.get("Action")
            pkg = event.get("Package", "")
            test = event.get("Test")
            if not test:
                continue
            key = (pkg, test)
            if key not in tests:
                tests[key] = {"package": pkg, "test": test}
            if action in STATUS_MAP:
                tests[key]["status"] = STATUS_MAP[action]
                # Go reports elapsed in seconds (float64)
                elapsed = event.get("Elapsed", 0)
                tests[key]["durationMs"] = round(elapsed * 1000)
            elif action == "output":
                tests[key].setdefault("output", []).append(event.get("Output", ""))

    test_cases = []
    for (pkg, test_name), data in tests.items():
        status = data.get("status", "unknown")
        error = None
        if status == "fail":
            output = "".join(data.get("output", []))
            error = {"message": output[:200], "stack": output}
        test_cases.append({
            "externalId": f"{pkg}.{test_name}",
            "title": test_name,
            "titlePath": [pkg, test_name],
            "status": status,
            "durationMs": data.get("durationMs", 0),
            **({"error": error} if error else {}),
            "meta": {
                "framework": "go",
                "frameworkCaseId": f"{pkg}.{test_name}"
            }
        })
    return {
        "schemaVersion": 1,
        "projectRoot": os.getcwd(),
        "meta": {"framework": "go"},
        "testCases": test_cases
    }

if __name__ == "__main__":
    print(json.dumps(convert(sys.argv[1]), indent=2))
```

See [`examples/go.json`](examples/go.json) for sample output.

**Later**: An `executable-stories-go` helper package with `Story()`, `Given()`, `When()`, `Then()` helpers is planned.

---

### xUnit / .NET

**Today**: Run tests with `dotnet test`, export TRX, convert to RawRun JSON.

```bash
# 1. Run tests with TRX output
dotnet test --logger "trx;LogFileName=results.trx"

# 2. Convert TRX → RawRun JSON
python3 convert_trx.py TestResults/results.trx > run.json

# 3. Generate reports
executable-stories format run.json --format html --output-dir reports
```

Converter sketch (`convert_trx.py`):

```python
#!/usr/bin/env python3
"""Convert .NET TRX (Visual Studio Test Results) to RawRun JSON."""
import json, sys, os, xml.etree.ElementTree as ET

NS = {"t": "http://microsoft.com/schemas/VisualStudio/TeamTest/2010"}
STATUS_MAP = {"Passed": "pass", "Failed": "fail", "NotExecuted": "skip", "Skipped": "skip"}

def convert(trx_path):
    tree = ET.parse(trx_path)
    root = tree.getroot()
    test_cases = []
    for result in root.findall(".//t:UnitTestResult", NS):
        outcome = result.get("outcome", "NotExecuted")
        status = STATUS_MAP.get(outcome, "unknown")
        error = None
        err_el = result.find(".//t:ErrorInfo", NS)
        if err_el is not None:
            msg_el = err_el.find("t:Message", NS)
            stack_el = err_el.find("t:StackTrace", NS)
            error = {
                "message": msg_el.text if msg_el is not None else "",
                "stack": stack_el.text if stack_el is not None else ""
            }
        # TRX duration is HH:MM:SS.fffffff — convert to ms
        duration_str = result.get("duration", "0:00:00")
        parts = duration_str.split(":")
        duration_ms = round(
            (int(parts[0]) * 3600 + int(parts[1]) * 60 + float(parts[2])) * 1000
        )
        full_name = result.get("testName", "")
        test_cases.append({
            "externalId": full_name,
            "title": full_name.split(".")[-1] if "." in full_name else full_name,
            "status": status,
            "durationMs": duration_ms,
            **({"error": error} if error else {}),
            "meta": {
                "framework": "xunit",
                "frameworkCaseId": full_name
            }
        })
    return {
        "schemaVersion": 1,
        "projectRoot": os.getcwd(),
        "meta": {"framework": "xunit"},
        "testCases": test_cases
    }

if __name__ == "__main__":
    print(json.dumps(convert(sys.argv[1]), indent=2))
```

See [`examples/dotnet.json`](examples/dotnet.json) for sample output.

**Later**: An `executable-stories-xunit` package with `[Story]`, `[Given]`, `[When]`, `[Then]` attributes is planned.

---

### Rust

**Today**: Use `cargo-nextest` with JSON output (recommended), then convert to RawRun JSON.

```bash
# 1. Run tests with nextest JSON output
cargo install cargo-nextest
cargo nextest run --message-format json > test-output.jsonl

# 2. Convert → RawRun JSON
python3 convert_nextest.py test-output.jsonl > run.json

# 3. Generate reports
executable-stories format run.json --format html --output-dir reports
```

> **Note**: `cargo test -Z unstable-options --format json` works on nightly but is unstable. `cargo-nextest` is the recommended stable path.

Converter sketch (`convert_nextest.py`):

```python
#!/usr/bin/env python3
"""Convert cargo-nextest JSON output to RawRun JSON."""
import json, sys, os

STATUS_MAP = {"ok": "pass", "failed": "fail", "ignored": "skip"}

def convert(jsonl_path):
    test_cases = []
    with open(jsonl_path) as f:
        for line in f:
            event = json.loads(line)
            if event.get("type") != "test" or event.get("event") not in STATUS_MAP:
                continue
            name = event.get("name", "")
            status = STATUS_MAP[event["event"]]
            error = None
            if status == "fail" and "stdout" in event:
                error = {"message": event["stdout"][:200], "stack": event["stdout"]}
            # nextest reports exec_time in seconds (float)
            duration_s = event.get("exec_time", 0)
            test_cases.append({
                "externalId": name,
                "title": name.split("::")[-1] if "::" in name else name,
                "titlePath": name.split("::"),
                "status": status,
                "durationMs": round(duration_s * 1000),
                **({"error": error} if error else {}),
                "meta": {
                    "framework": "rust",
                    "frameworkCaseId": name
                }
            })
    return {
        "schemaVersion": 1,
        "projectRoot": os.getcwd(),
        "meta": {"framework": "rust"},
        "testCases": test_cases
    }

if __name__ == "__main__":
    print(json.dumps(convert(sys.argv[1]), indent=2))
```

See [`examples/rust.json`](examples/rust.json) for sample output.

**Later**: An `executable-stories-rust` proc-macro crate with `#[story]`, `#[given]`, `#[when]`, `#[then]` attributes is planned.

---

## Progressive Enrichment

You don't have to use all schema features at once. Start simple and add detail over time:

### Level 1: Just statuses

```json
{
  "schemaVersion": 1,
  "projectRoot": "/app",
  "testCases": [{ "title": "Login works", "status": "pass", "durationMs": 342 }]
}
```

Use `--synthesize-stories` (default) to auto-generate story metadata.

### Level 2: Add BDD steps

```json
{
  "schemaVersion": 1,
  "projectRoot": "/app",
  "testCases": [
    {
      "title": "Login works",
      "status": "pass",
      "durationMs": 342,
      "story": {
        "scenario": "Successful login with valid credentials",
        "steps": [
          { "keyword": "Given", "text": "a registered user" },
          { "keyword": "When", "text": "the user logs in" },
          { "keyword": "Then", "text": "a JWT token is returned" }
        ],
        "tags": ["auth", "smoke"]
      }
    }
  ]
}
```

### Level 3: Docs, tags, tickets, attachments

Add rich documentation (code blocks, tables, Mermaid diagrams), ticket references for traceability, and test attachments. See [`examples/full.json`](examples/full.json) for a comprehensive example.