deepflow 0.1.87 → 0.1.89

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -6,8 +6,7 @@ context: fork
 
  # /df:verify — Verify Specs Satisfied
 
- ## Purpose
- Check that implemented code satisfies spec requirements and acceptance criteria. All checks are machine-verifiable — no LLM agents are used.
+ **OUTPUT:** Terse. No narration. No reasoning. Only the compact report (section 3). One line per level, issues block if any, next step.
 
  **NEVER:** use EnterPlanMode, use ExitPlanMode
 
@@ -19,13 +18,7 @@ Check that implemented code satisfies spec requirements and acceptance criteria.
  ```
 
  ## Spec File States
-
- ```
- specs/
-   feature.md → Unplanned (skip)
-   doing-auth.md → Executed, ready for verification (default target)
-   done-upload.md → Already verified and merged (--re-verify only)
- ```
+ `specs/feature.md` → unplanned (skip) | `doing-*.md` → default target | `done-*.md` → `--re-verify` only
 
  ## Behavior
 
@@ -33,170 +26,55 @@ specs/
 
  Load: `!`ls specs/doing-*.md 2>/dev/null || echo 'NOT_FOUND'``, `!`cat PLAN.md 2>/dev/null || echo 'NOT_FOUND'``, source code. Load `specs/done-*.md` only if `--re-verify`.
 
- **Readiness check:** For each `doing-*` spec, check PLAN.md:
- - All tasks `[x]` → ready (proceed)
- - Some tasks `[ ]` → warn: "⚠ {spec} has {n} incomplete tasks. Run /df:execute first."
-
- If no `doing-*` specs found: report counts, suggest `/df:execute`.
+ **Readiness:** All tasks `[x]` → proceed. Some `[ ]` → warn incomplete, suggest `/df:execute`. No `doing-*` specs → report counts, suggest `/df:execute`.
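The readiness rule can be sketched in shell (a hypothetical sketch, not part of the package; it assumes GitHub-style `- [x]` / `- [ ]` checkboxes in PLAN.md and uses a throwaway sample file for illustration):

```shell
# Readiness sketch: a doing-* spec is ready when PLAN.md has no unchecked tasks.
plan=$(mktemp)
printf -- '- [x] T1: add endpoint\n- [ ] T2: write tests\n' > "$plan"

# Count open vs. completed checkboxes (GitHub-style syntax assumed).
open=$(grep -c '^- \[ \]' "$plan")
completed=$(grep -c '^- \[x\]' "$plan")

if [ "$open" -gt 0 ]; then
  echo "warn: $open incomplete task(s). Run /df:execute first."
else
  echo "ready: $completed task(s) complete"
fi
rm -f "$plan"
```

With the sample file above, the open count is 1, so the warning branch is taken.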
 
  ### 1.5. DETECT PROJECT COMMANDS
 
- **Config override always wins.** If `!`cat .deepflow/config.yaml 2>/dev/null || echo 'NOT_FOUND'`` has `quality.test_command` or `quality.build_command`, use those.
+ Config override always wins (`quality.test_command` / `quality.build_command` in `!`cat .deepflow/config.yaml 2>/dev/null || echo 'NOT_FOUND'``).
 
  **Auto-detection (first match wins):**
 
  | File | Build | Test |
  |------|-------|------|
- | `package.json` with `scripts.build` | `npm run build` | `npm test` (if scripts.test is not default placeholder) |
+ | `package.json` with `scripts.build` | `npm run build` | `npm test` (if scripts.test not default placeholder) |
  | `pyproject.toml` or `setup.py` | — | `pytest` |
  | `Cargo.toml` | `cargo build` | `cargo test` |
  | `go.mod` | `go build ./...` | `go test ./...` |
  | `Makefile` with `test` target | `make build` (if target exists) | `make test` |
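The override-then-detect order might be sketched as follows (hypothetical sketch, not part of the package; assumes `yq` is on PATH for the config read):

```shell
# Detect the test command: config override wins, else first matching project file.
TEST_CMD=$(yq '.quality.test_command' .deepflow/config.yaml 2>/dev/null)
if [ -z "$TEST_CMD" ] || [ "$TEST_CMD" = "null" ]; then
  if   [ -f package.json ];   then TEST_CMD="npm test"
  elif [ -f pyproject.toml ] || [ -f setup.py ]; then TEST_CMD="pytest"
  elif [ -f Cargo.toml ];     then TEST_CMD="cargo test"
  elif [ -f go.mod ];         then TEST_CMD="go test ./..."
  elif [ -f Makefile ] && grep -q '^test:' Makefile; then TEST_CMD="make test"
  fi
fi
echo "Test: ${TEST_CMD:-none detected}"
```

The `elif` chain encodes "first match wins"; a repo containing both `package.json` and `Cargo.toml` resolves to `npm test`.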
 
- - Commands found: `Build: npm run build | Test: npm test`
- - Nothing found: `⚠ No build/test commands detected. L0/L4 skipped. Set quality.test_command in .deepflow/config.yaml`
+ Nothing found → `⚠ No build/test commands detected. L0/L4 skipped. Set quality.test_command in .deepflow/config.yaml`
 
  ### 2. VERIFY EACH SPEC
 
- **L0: Build check** (if build command detected)
-
- Run the build command in the worktree:
- - Exit code 0 → L0 pass, continue to L1-L2
- - Exit code non-zero → L0 FAIL: report "✗ L0: Build failed" with last 30 lines, add fix task to PLAN.md, stop (skip L1-L4)
-
- **L1: Files exist** (machine-verifiable, via git)
+ **L0: Build** — Run build command. Exit 0 → pass. Non-zero → FAIL with last 30 lines, add fix task, skip L1-L4.
 
- Check that planned files appear in the worktree diff:
+ **L1: Files exist** — Compare `git diff main...HEAD --name-only` in worktree against PLAN.md `Files:` entries. All planned files in diff → pass. Missing → FAIL with list.
 
- ```bash
- # Get files changed in worktree branch
- CHANGED=$(cd ${WORKTREE_PATH} && git diff main...HEAD --name-only)
-
- # Parse PLAN.md for spec's "Files:" entries
- PLANNED=$(grep -A1 "Files:" PLAN.md | grep -v "Files:" | tr ',' '\n' | xargs)
-
- # Each planned file must appear in diff
- for file in ${PLANNED}; do
-   echo "${CHANGED}" | grep -q "${file}" || MISSING+=("${file}")
- done
- ```
-
- - All planned files in diff → L1 pass
- - Missing files → L1 FAIL: report "✗ L1: Files not in diff: {list}"
-
- **L2: Coverage** (coverage tool)
-
- **Step 1: Detect coverage tool** (first match wins):
-
- | File/Config | Coverage Tool | Command |
- |-------------|--------------|---------|
- | `package.json` with `c8` in devDeps | c8 (Node) | `npx c8 --reporter=json-summary npm test` |
- | `package.json` with `nyc` in devDeps | nyc (Node) | `npx nyc --reporter=json-summary npm test` |
- | `.nycrc` or `.nycrc.json` exists | nyc (Node) | `npx nyc --reporter=json-summary npm test` |
- | `pyproject.toml` or `setup.cfg` with coverage config | coverage.py | `python -m coverage run -m pytest && python -m coverage json` |
- | `Cargo.toml` + `cargo-tarpaulin` installed | tarpaulin (Rust) | `cargo tarpaulin --out json` |
- | `go.mod` | go cover (Go) | `go test -coverprofile=coverage.out ./...` |
-
- **Step 2: No tool detected** → L2 passes with warning: "⚠ L2: No coverage tool detected, skipping coverage check"
-
- **Step 3: Run coverage comparison** (when tool available):
- ```bash
- # Baseline: coverage on main branch (or from ratchet snapshot)
- cd ${WORKTREE_PATH}
- git stash  # Temporarily remove changes
- ${COVERAGE_COMMAND}
- BASELINE=$(parse_coverage_percentage)  # Extract total line coverage %
- git stash pop
-
- # Current: coverage with changes applied
- ${COVERAGE_COMMAND}
- CURRENT=$(parse_coverage_percentage)
-
- # Compare
- if [ "${CURRENT}" -lt "${BASELINE}" ]; then
-   echo "✗ L2: Coverage dropped ${BASELINE}% → ${CURRENT}%"
- else
-   echo "✓ L2: Coverage ${CURRENT}% (baseline: ${BASELINE}%)"
- fi
- ```
+ **L2: Coverage** — Detect coverage tool (first match wins):
 
- - Coverage same or improved → L2 pass
- - Coverage dropped → L2 FAIL: report "✗ L2: Coverage dropped {baseline}% → {current}%", add fix task
+ | File/Config | Tool | Command |
+ |-------------|------|---------|
+ | `package.json` with `c8` in devDeps | c8 | `npx c8 --reporter=json-summary npm test` |
+ | `package.json` with `nyc` in devDeps | nyc | `npx nyc --reporter=json-summary npm test` |
+ | `.nycrc` or `.nycrc.json` exists | nyc | `npx nyc --reporter=json-summary npm test` |
+ | `pyproject.toml`/`setup.cfg` with coverage config | coverage.py | `python -m coverage run -m pytest && python -m coverage json` |
+ | `Cargo.toml` + `cargo-tarpaulin` installed | tarpaulin | `cargo tarpaulin --out json` |
+ | `go.mod` | go cover | `go test -coverprofile=coverage.out ./...` |
 
- **L3: Integration** (subsumed by L0 + L4)
+ No tool → pass with warning. When available: stash changes → run coverage on baseline → stash pop → run coverage on current → compare. Drop → FAIL. Same/improved → pass.
 
- Subsumed by L0 (build) + L4 (tests). If code isn't imported/wired, build fails or tests fail. No separate verification needed.
+ **L3: Integration** — Subsumed by L0 + L4. No separate check.
 
- **L4: Test execution** (if test command detected)
-
- Run AFTER L0 passes and L1-L2 complete. Run even if L1-L2 found issues.
-
- - Exit code 0 → L4 pass
- - Exit code non-zero → L4 FAIL: capture last 50 lines, report "✗ L4: Tests failed (N of M)", add fix task
-
- **Flaky test handling** (if `quality.test_retry_on_fail: true` in config):
- - Re-run ONCE on failure. Second pass → "⚠ L4: Passed on retry (possible flaky test)". Second fail → genuine failure.
+ **L4: Tests** — Run AFTER L0 passes. Run even if L1-L2 had issues. Exit 0 → pass. Non-zero → FAIL with last 50 lines + fix task. If `quality.test_retry_on_fail: true`: re-run once; second pass → warn (flaky); second fail → genuine failure.
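The retry-once rule can be sketched as a small wrapper (hypothetical sketch, not part of the package; `TEST_CMD` stands in for whatever command step 1.5 detected):

```shell
# Run tests once; on failure, re-run a single time when retry is enabled.
TEST_CMD=${TEST_CMD:-true}   # placeholder command; the real value comes from step 1.5
RETRY=${RETRY:-true}         # would come from quality.test_retry_on_fail

if eval "$TEST_CMD"; then
  echo "✓ L4: tests passed"
elif [ "$RETRY" = "true" ] && eval "$TEST_CMD"; then
  echo "⚠ L4: Passed on retry (possible flaky test)"
else
  echo "✗ L4: Tests failed — adding fix task"
fi
```

With the placeholder `true` command, the first branch prints the pass line; a command that fails twice falls through to the genuine-failure branch.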
 
  **L5: Browser Verification** (if frontend detected)
 
- **Step 1: Detect frontend framework** (config override always wins):
-
- ```bash
- BROWSER_VERIFY=$(yq '.quality.browser_verify' .deepflow/config.yaml 2>/dev/null)
-
- if [ "${BROWSER_VERIFY}" = "false" ]; then
-   # Explicitly disabled — skip L5 unconditionally
-   echo "L5 — (no frontend)"
-   L5_RESULT="skipped-no-frontend"
- elif [ "${BROWSER_VERIFY}" = "true" ]; then
-   # Explicitly enabled — proceed even without frontend deps
-   FRONTEND_DETECTED=true
-   FRONTEND_FRAMEWORK="configured"
- else
-   # Auto-detect from package.json (both dependencies and devDependencies)
-   FRONTEND_DETECTED=false
-   FRONTEND_FRAMEWORK=""
-
-   if [ -f package.json ]; then
-     # Check for React / Next.js
-     if jq -e '(.dependencies + (.devDependencies // {})) | keys[] | select(. == "react" or . == "react-dom" or . == "next")' package.json >/dev/null 2>&1; then
-       FRONTEND_DETECTED=true
-       # Prefer Next.js label when next is present
-       if jq -e '(.dependencies + (.devDependencies // {}))["next"]' package.json >/dev/null 2>&1; then
-         FRONTEND_FRAMEWORK="Next.js"
-       else
-         FRONTEND_FRAMEWORK="React"
-       fi
-     # Check for Nuxt / Vue
-     elif jq -e '(.dependencies + (.devDependencies // {})) | keys[] | select(. == "vue" or . == "nuxt" or startswith("@vue/"))' package.json >/dev/null 2>&1; then
-       FRONTEND_DETECTED=true
-       if jq -e '(.dependencies + (.devDependencies // {}))["nuxt"]' package.json >/dev/null 2>&1; then
-         FRONTEND_FRAMEWORK="Nuxt"
-       else
-         FRONTEND_FRAMEWORK="Vue"
-       fi
-     # Check for Svelte / SvelteKit
-     elif jq -e '(.dependencies + (.devDependencies // {})) | keys[] | select(. == "svelte" or startswith("@sveltejs/"))' package.json >/dev/null 2>&1; then
-       FRONTEND_DETECTED=true
-       if jq -e '(.dependencies + (.devDependencies // {}))["@sveltejs/kit"]' package.json >/dev/null 2>&1; then
-         FRONTEND_FRAMEWORK="SvelteKit"
-       else
-         FRONTEND_FRAMEWORK="Svelte"
-       fi
-     fi
-   fi
-
-   if [ "${FRONTEND_DETECTED}" = "false" ]; then
-     echo "L5 — (no frontend)"
-     L5_RESULT="skipped-no-frontend"
-   fi
- fi
- ```
+ Algorithm: detect frontend → resolve dev command/port → start server → poll readiness → read assertions from PLAN.md → auto-install Playwright Chromium → evaluate via `locator.ariaSnapshot()` → screenshot → retry once on failure → report.
 
- Packages checked in both `dependencies` and `devDependencies`:
+ **Step 1: Detect frontend.** Config `quality.browser_verify` overrides: `false` → always skip (`L5 — (no frontend)`), `true` → always run, absent → auto-detect from package.json (both deps and devDeps):
 
- | Package(s) | Detected Framework |
- |------------|--------------------|
+ | Package(s) | Framework |
+ |------------|-----------|
  | `next` | Next.js |
  | `react`, `react-dom` | React |
  | `nuxt` | Nuxt |
@@ -204,245 +82,27 @@ Packages checked in both `dependencies` and `devDependencies`:
  | `@sveltejs/kit` | SvelteKit |
  | `svelte`, `@sveltejs/*` | Svelte |
 
- Config key `quality.browser_verify`:
- - `false` → always skip L5, output `L5 — (no frontend)`, even if frontend deps are present
- - `true` → always run L5, even if no frontend deps detected
- - absent → auto-detect from package.json as above
-
- No frontend deps found and `quality.browser_verify` not set → output `L5 — (no frontend)`, skip all remaining L5 steps.
-
- **Step 2: Dev server lifecycle**
-
- **2a. Resolve dev command** (config override always wins):
-
- ```bash
- # 1. Config override
- DEV_COMMAND=$(yq '.quality.dev_command' .deepflow/config.yaml 2>/dev/null)
-
- # 2. Auto-detect from package.json scripts.dev
- if [ -z "${DEV_COMMAND}" ] || [ "${DEV_COMMAND}" = "null" ]; then
-   if [ -f package.json ] && jq -e '.scripts.dev' package.json >/dev/null 2>&1; then
-     DEV_COMMAND="npm run dev"
-   fi
- fi
-
- # 3. No dev command found → skip L5 dev server steps
- if [ -z "${DEV_COMMAND}" ]; then
-   echo "⚠ L5: No dev command found (scripts.dev not in package.json, quality.dev_command not set). Skipping browser check."
-   L5_RESULT="skipped-no-dev-command"
- fi
- ```
-
- **2b. Resolve port:**
-
- ```bash
- # Config override wins; fallback to 3000
- DEV_PORT=$(yq '.quality.dev_port' .deepflow/config.yaml 2>/dev/null)
- if [ -z "${DEV_PORT}" ] || [ "${DEV_PORT}" = "null" ]; then
-   DEV_PORT=3000
- fi
- ```
-
- **2c. Check if dev server is already running (port already bound):**
-
- ```bash
- PORT_IN_USE=false
- if curl -s -o /dev/null -w "%{http_code}" "http://localhost:${DEV_PORT}" | grep -q "200"; then
-   PORT_IN_USE=true
-   echo "ℹ L5: Port ${DEV_PORT} already bound — using existing dev server, will not kill on exit."
- fi
- ```
-
- **2d. Start dev server and poll for readiness:**
-
- ```bash
- DEV_SERVER_PID=""
- if [ "${PORT_IN_USE}" = "false" ]; then
-   # Start in a new process group so all child processes can be killed together
-   setsid ${DEV_COMMAND} &
-   DEV_SERVER_PID=$!
- fi
-
- # Resolve timeout from config (default 30s)
- TIMEOUT=$(yq '.quality.browser_timeout' .deepflow/config.yaml 2>/dev/null)
- if [ -z "${TIMEOUT}" ] || [ "${TIMEOUT}" = "null" ]; then
-   TIMEOUT=30
- fi
- POLL_INTERVAL=0.5
- MAX_POLLS=$(echo "${TIMEOUT} / ${POLL_INTERVAL}" | bc)
-
- HTTP_STATUS=""
- for i in $(seq 1 ${MAX_POLLS}); do
-   HTTP_STATUS=$(curl -s -o /dev/null -w "%{http_code}" "http://localhost:${DEV_PORT}" 2>/dev/null)
-   [ "${HTTP_STATUS}" = "200" ] && break
-   sleep ${POLL_INTERVAL}
- done
-
- if [ "${HTTP_STATUS}" != "200" ]; then
-   # Kill process group before reporting failure
-   if [ -n "${DEV_SERVER_PID}" ]; then
-     kill -SIGTERM -${DEV_SERVER_PID} 2>/dev/null
-   fi
-   echo "✗ L5 FAIL: dev server did not start within ${TIMEOUT}s"
-   # add fix task to PLAN.md
-   exit 1
- fi
- ```
-
- **2e. Teardown — always runs on both pass and fail paths:**
-
- ```bash
- cleanup_dev_server() {
-   if [ -n "${DEV_SERVER_PID}" ]; then
-     # Kill the entire process group to catch any child processes spawned by the dev server
-     kill -SIGTERM -${DEV_SERVER_PID} 2>/dev/null
-     # Give it up to 5s to exit cleanly, then force-kill
-     for i in $(seq 1 10); do
-       kill -0 ${DEV_SERVER_PID} 2>/dev/null || break
-       sleep 0.5
-     done
-     kill -SIGKILL -${DEV_SERVER_PID} 2>/dev/null || true
-   fi
- }
- # Register cleanup for both success and failure paths
- trap cleanup_dev_server EXIT
- ```
-
- Note: When `PORT_IN_USE=true` (dev server was already running before L5 began), `DEV_SERVER_PID` is empty and `cleanup_dev_server` is a no-op — the pre-existing server is left running.
-
- **Step 3: Read assertions from PLAN.md**
-
- Assertions are written into PLAN.md at plan-time (REQ-8). Extract them for the current spec:
-
- ```bash
- # Parse structured browser assertions block from PLAN.md
- # Format expected in PLAN.md under each spec section:
- #   browser_assertions:
- #     - selector: "nav"
- #       role: "navigation"
- #       name: "Main navigation"
- #     - selector: "button[type=submit]"
- #       visible: true
- #       text: "Submit"
- ASSERTIONS=$(parse_yaml_block "browser_assertions" PLAN.md)
- ```
-
- If no `browser_assertions` block found for the spec → L5 — (no assertions), skip Playwright step.
-
- **Step 3.5: Playwright browser auto-install**
-
- Before launching Playwright, verify the Chromium browser binary is available. Run this check once per session; cache the result to avoid repeated installs.
-
- ```bash
- # Marker file path — presence means Playwright Chromium was verified this session
- PW_MARKER="${TMPDIR:-/tmp}/.deepflow-pw-chromium-ok"
-
- if [ ! -f "${PW_MARKER}" ]; then
-   # Dry-run to detect whether the browser binary is already installed
-   if ! npx --yes playwright install --dry-run chromium 2>&1 | grep -q "chromium.*already installed"; then
-     echo "ℹ L5: Playwright Chromium not found — installing (one-time setup)..."
-     if npx --yes playwright install chromium 2>&1; then
-       echo "✓ L5: Playwright Chromium installed successfully."
-       touch "${PW_MARKER}"
-     else
-       echo "✗ L5 FAIL: Playwright Chromium install failed. Browser verification skipped."
-       L5_RESULT="skipped-install-failed"
-       # Skip the remaining L5 steps for this run
-     fi
-   else
-     # Already installed — cache for this session
-     touch "${PW_MARKER}"
-   fi
- fi
-
- # If install failed, skip Playwright launch and jump to L5 outcome reporting
- if [ "${L5_RESULT}" = "skipped-install-failed" ]; then
-   # No assertions can be evaluated — treat as a non-blocking skip with error notice
-   :  # fall through to report section
- fi
- ```
-
- Skip Steps 4–6 when `L5_RESULT="skipped-install-failed"`.
-
- **Step 4: Playwright verification**
-
- Launch Chromium headlessly via Playwright and evaluate each assertion deterministically — no LLM judgment:
-
- ```javascript
- const { chromium } = require('playwright');
- const browser = await chromium.launch({ headless: true });
- const page = await browser.newPage();
- await page.goto('http://localhost:3000');
-
- const failures = [];
-
- for (const assertion of assertions) {
-   const locator = page.locator(assertion.selector);
-
-   // Capture accessibility tree (replaces deprecated page.accessibility.snapshot())
-   // locator.ariaSnapshot() returns YAML-like text with roles, names, hierarchy
-   const ariaSnapshot = await locator.ariaSnapshot();
-
-   if (assertion.role && !ariaSnapshot.includes(`role: ${assertion.role}`)) {
-     failures.push(`${assertion.selector}: expected role "${assertion.role}", not found in aria snapshot`);
-   }
-   if (assertion.name && !ariaSnapshot.includes(assertion.name)) {
-     failures.push(`${assertion.selector}: expected name "${assertion.name}", not found in aria snapshot`);
-   }
-
-   // Capture bounding boxes for visible assertions
-   if (assertion.visible !== undefined) {
-     const box = await locator.boundingBox();
-     const isVisible = box !== null && box.width > 0 && box.height > 0;
-     if (assertion.visible !== isVisible) {
-       failures.push(`${assertion.selector}: expected visible=${assertion.visible}, got visible=${isVisible}`);
-     }
-   }
-
-   if (assertion.text) {
-     const text = await locator.innerText();
-     if (!text.includes(assertion.text)) {
-       failures.push(`${assertion.selector}: expected text "${assertion.text}", got "${text}"`);
-     }
-   }
- }
- ```
-
- Note: `page.accessibility.snapshot()` was removed in Playwright 1.x. Always use `locator.ariaSnapshot()`, which returns YAML-like text describing roles, names, and hierarchy for the matched element subtree.
-
- **Step 5: Screenshot capture**
-
- After evaluation (pass or fail), capture a full-page screenshot:
+ No frontend detected and no config override → `L5 — (no frontend)`, skip remaining L5 steps.
 
- ```javascript
- const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
- const specName = 'doing-upload'; // derived from current spec filename
- const screenshotPath = `.deepflow/screenshots/${specName}/${timestamp}.png`;
- fs.mkdirSync(path.dirname(screenshotPath), { recursive: true });
- await page.screenshot({ path: screenshotPath, fullPage: true });
- ```
-
- Screenshot path: `.deepflow/screenshots/{spec-name}/{timestamp}.png`
+ **Step 2: Dev server lifecycle.**
+ 1. **Resolve dev command:** Config `quality.dev_command` wins → fallback to `npm run dev` if `scripts.dev` exists → none found → skip L5 with warning.
+ 2. **Resolve port:** Config `quality.dev_port` wins → fallback to 3000.
+ 3. **Check existing server:** curl localhost:{port}. If already responding, reuse it (do not kill on exit).
+ 4. **Start & poll:** If not already running, start via `setsid ${DEV_COMMAND} &`. Poll with 0.5s interval up to `quality.browser_timeout` (default 30s). Timeout → FAIL + kill process group + fix task.
+ 5. **Teardown (always runs):** trap EXIT kills the process group (SIGTERM → wait 5s → SIGKILL). No-op when reusing pre-existing server (`DEV_SERVER_PID` empty).
 
- **Step 6: Retry logic**
+ **Step 3: Read assertions from PLAN.md.** Extract `browser_assertions:` YAML block for current spec. Each assertion has `selector` + optional `role`, `name`, `visible`, `text`. No block found → `L5 — (no assertions)`, skip Playwright.
 
- On first failure, retry the FULL L5 check once (total 2 attempts). Re-navigate and re-evaluate all assertions from scratch on the retry:
+ **Step 3.5: Playwright auto-install.** Check `$TMPDIR/.deepflow-pw-chromium-ok` marker. If absent, run `npx --yes playwright install --dry-run chromium` to detect, install if needed, cache marker. Install failure → `L5 ✗ (install failed)`, skip Steps 4-6.
 
- ```javascript
- // attempt1_failures populated by Step 4 above
- let attempt2_failures = [];
+ **Step 4: Evaluate assertions.** Launch headless Chromium, navigate to `localhost:{port}`. For each assertion:
+ - `role`/`name` → check against `locator(selector).ariaSnapshot()` YAML output (NOT deprecated `page.accessibility.snapshot()`)
+ - `visible` → check `locator.boundingBox()` non-null with width/height > 0
+ - `text` → check `locator.innerText()` contains expected text
 
- if (attempt1_failures.length > 0) {
-   // Retry: re-navigate and re-evaluate all assertions (identical logic to Step 4)
-   await page.goto('http://localhost:' + DEV_PORT);
-   attempt2_failures = await evaluateAssertions(page, assertions); // same loop as Step 4
+ **Step 5: Screenshot.** Always capture full-page screenshot to `.deepflow/screenshots/{spec-name}/{timestamp}.png`.
 
-   // Capture a second screenshot for the retry attempt
-   const retryTimestamp = new Date().toISOString().replace(/[:.]/g, '-');
-   const retryScreenshotPath = `.deepflow/screenshots/${specName}/${retryTimestamp}-retry.png`;
-   await page.screenshot({ path: retryScreenshotPath, fullPage: true });
- }
- ```
+ **Step 6: Retry.** On first failure, retry FULL L5 once (re-navigate, re-evaluate all assertions, capture retry screenshot with `-retry` suffix). Compare failing selector sets between attempts (by selector string only, ignore detail text).
 
  **Outcome matrix:**
 
@@ -450,129 +110,18 @@ if (attempt1_failures.length > 0) {
  |-----------|-----------|--------|
  | Pass | — (not run) | L5 ✓ |
  | Fail | Pass | L5 ✓ with warning "(passed on retry)" |
- | Fail | Fail — same assertions | L5 ✗ — genuine failure |
- | Fail | Fail — different assertions | L5 ✗ (flaky) |
-
- **Outcome reporting:**
-
- - **First attempt passes:** `✓ L5: All assertions passed` — no retry needed.
-
- - **First fails, retry passes:**
-   ```
-   ⚠ L5: Passed on retry (possible flaky render)
-   First attempt failed on: {list of assertion selectors from attempt 1}
-   ```
-   → L5 pass with warning. No fix task added.
-
- - **Both fail on SAME assertions** (identical set of failing selectors):
-   ```
-   ✗ L5: Browser assertions failed (both attempts)
-   {selector}: {failure detail}
-   {selector}: {failure detail}
-   ...
-   ```
-   → L5 FAIL. Add fix task to PLAN.md.
-
- - **Both fail on DIFFERENT assertions** (flaky — assertion sets differ between attempts):
-   ```
-   ✗ L5: Browser assertions failed (flaky — inconsistent failures across attempts)
-   Attempt 1 failures:
-     {selector}: {failure detail}
-   Attempt 2 failures:
-     {selector}: {failure detail}
-   ```
-   → L5 ✗ (flaky). Add fix task to PLAN.md noting flakiness.
-
- **Fix task generation on L5 failure (both same and flaky):**
-
- When both attempts fail (`L5_RESULT = 'fail'` or `L5_RESULT = 'fail-flaky'`), generate a fix task and append it to PLAN.md under the spec's section:
-
- ```javascript
- // 1. Determine next task ID
- // Scan PLAN.md for highest T{n} and increment
- const planContent = fs.readFileSync('PLAN.md', 'utf8');
- const taskIds = [...planContent.matchAll(/\bT(\d+)\b/g)].map(m => parseInt(m[1], 10));
- const nextId = taskIds.length > 0 ? Math.max(...taskIds) + 1 : 1;
- const taskId = `T${nextId}`;
-
- // 2. Collect fix task context
- // - Failing assertions: the structured assertion objects that failed
- const failingAssertions = attempt2_failures.length > 0 ? attempt2_failures : attempt1_failures;
-
- // - DOM snapshot excerpt: capture aria snapshot of body at the time of failure
- const domSnapshotExcerpt = await page.locator('body').ariaSnapshot();
-
- // - Screenshot path: already captured in Step 5 / Step 6 retry
- //   screenshotPath / retryScreenshotPath are available from those steps
-
- // 3. Build task description
- const isFlaky = L5_RESULT === 'fail-flaky';
- const flakySuffix = isFlaky ? ' (flaky — inconsistent failures across attempts)' : '';
- const screenshotRef = isFlaky ? retryScreenshotPath : screenshotPath;
-
- const fixTaskBlock = `
- - [ ] ${taskId}: Fix L5 browser assertion failures in ${specName}${flakySuffix}
-   **Failing assertions:**
- ${failingAssertions.map(f => `  - ${f}`).join('\n')}
-   **DOM snapshot (aria tree excerpt at failure):**
-   \`\`\`
- ${domSnapshotExcerpt.split('\n').slice(0, 40).join('\n')}
-   \`\`\`
-   **Screenshot:** ${screenshotRef}
- `;
-
- // 4. Append fix task under spec section in PLAN.md
- // Find the spec section and append before the next section header or EOF
- const specSectionPattern = new RegExp(`(## ${specName}[\\s\\S]*?)(\n## |$)`);
- const updated = planContent.replace(specSectionPattern, (_, section, next) => section + fixTaskBlock + next);
- fs.writeFileSync('PLAN.md', updated);
-
- console.log(`Fix task added to PLAN.md: ${taskId}: Fix L5 browser assertion failures in ${specName}`);
- ```
+ | Fail | Fail — same selectors | L5 ✗ — genuine failure |
+ | Fail | Fail — different selectors | L5 ✗ (flaky) |
 
- Fix task context included:
- - **Failing assertions**: the structured assertion data (selector + failure detail) from whichever attempt(s) failed
- - **DOM snapshot excerpt**: first 40 lines of `locator('body').ariaSnapshot()` output at time of failure (textual a11y tree)
- - **Screenshot path**: `.deepflow/screenshots/{spec-name}/{timestamp}.png` (retry screenshot when available)
- - **Flakiness note**: appended to task title when assertion sets differed between attempts
-
- **Comparing assertion sets (same vs. different):**
-
- ```javascript
- // Compare by selector strings only — ignore detail text differences
- const attempt1_selectors = attempt1_failures.map(f => f.split(':')[0]).sort();
- const attempt2_selectors = attempt2_failures.map(f => f.split(':')[0]).sort();
- const same_assertions = JSON.stringify(attempt1_selectors) === JSON.stringify(attempt2_selectors);
-
- if (attempt2_failures.length === 0) {
-   // Retry passed
-   L5_RESULT = 'pass-on-retry';
- } else if (same_assertions) {
-   // Genuine failure — same assertions failed both times
-   L5_RESULT = 'fail';
- } else {
-   // Flaky — different assertions failed each time
-   L5_RESULT = 'fail-flaky';
- }
- ```
+ All L5 outcomes: `✓` pass | `⚠` passed on retry | `✗` both failed (same) | `✗ (flaky)` both failed (different) | `— (no frontend)` | `— (no assertions)` | `✗ (install failed)`
 
- **L5 outcomes:**
- - L5 ✓ — all assertions pass on first attempt
- - L5 ⚠ — passed on retry (possible flaky render); first-attempt failures listed as context
- - L5 ✗ — assertions failed on both attempts (same assertions), fix tasks added
- - L5 ✗ (flaky) — assertions failed on both attempts but on different assertions; fix tasks added noting flakiness
- - L5 — (no frontend) — no frontend deps detected and no config override
- - L5 — (no assertions) — frontend detected but no `browser_assertions` in PLAN.md
- - L5 ✗ (install failed) — Playwright Chromium install failed; browser verification skipped for this run
+ **Fix task on L5 failure:** Append to PLAN.md under spec section with next T{n} ID. Include: failing assertions (selector + detail), first 40 lines of `locator('body').ariaSnapshot()` DOM excerpt, screenshot path, flakiness note if assertion sets differed.
 
  ### 3. GENERATE REPORT
 
- **Format on success:**
- ```
- doing-upload.md: L0 ✓ | L1 ✓ (5/5 files) | L2 ⚠ (no coverage tool) | L3 — (subsumed) | L4 ✓ (12 tests) | L5 ✓ | 0 quality issues
- ```
+ **Success:** `doing-upload.md: L0 ✓ | L1 ✓ (5/5 files) | L2 ⚠ (no coverage tool) | L3 — (subsumed) | L4 ✓ (12 tests) | L5 ✓ | 0 quality issues`
 
- **Format on failure:**
+ **Failure:**
  ```
  doing-upload.md: L0 ✓ | L1 ✗ (3/5 files) | L2 ⚠ | L3 — | L4 ✗ (3 failed) | L5 ✗ (2 assertions failed)
 
@@ -580,137 +129,41 @@ Issues:
  ✗ L1: Missing files: src/api/upload.ts, src/services/storage.ts
  ✗ L4: 3 test failures
    FAIL src/upload.test.ts > should validate file type
-   FAIL src/upload.test.ts > should reject oversized files
 
  Fix tasks added to PLAN.md:
  T10: Implement missing upload endpoint and storage service
- T11: Fix 3 failing tests in upload module
 
  Run /df:execute --continue to fix in the same worktree.
  ```
 
- **Gate conditions (ALL must pass to merge):**
- - L0: Build passes (or no build command detected)
- - L1: All planned files appear in diff
- - L2: Coverage didn't drop (or no coverage tool detected)
- - L4: Tests pass (or no test command detected)
- - L5: Browser assertions pass (or no frontend detected, or no assertions defined)
-
- **If all gates pass:** Proceed to Post-Verification merge.
+ **Gate conditions (ALL must pass to merge):** L0 build (or no command) | L1 all files in diff | L2 coverage held (or no tool) | L4 tests pass (or no command) | L5 assertions pass (or no frontend/assertions).
 
- **If issues found:** Add fix tasks to PLAN.md in the worktree and register as native tasks:
- 1. Discover worktree (same logic as Post-Verification step 1)
- 2. Write fix tasks to `{worktree_path}/PLAN.md` under existing spec section (IDs continue from last)
- 3. Register each fix task: `TaskCreate(subject: "T10: Fix {description}", ...)` + `TaskUpdate(addBlockedBy: [...])` if dependencies exist
- 4. Output report + "Run /df:execute --continue to fix in the same worktree."
-
- **Do NOT** create new specs, new worktrees, or merge with issues pending.
+ **All pass →** Post-Verification merge. **Issues found →** Add fix tasks to worktree PLAN.md (IDs continue from last), register via TaskCreate/TaskUpdate, output report + "Run /df:execute --continue". Do NOT create new specs, worktrees, or merge with issues pending.
608
142
 
  ### 4. CAPTURE LEARNINGS

- On success, write to `.deepflow/experiments/{domain}--{approach}--success.md` when: non-trivial approach used, alternatives rejected, performance optimization made, or integration pattern discovered. Skip simple CRUD/standard patterns.
-
- ```markdown
+ On success, if non-trivial approach used (not simple CRUD), write to `.deepflow/experiments/{domain}--{approach}--success.md`:
+ ```
  # {Approach} [SUCCESS]
- Objective: ...
- Approach: ...
- Why it worked: ...
- Files: ...
+ Objective: ... | Approach: ... | Why it worked: ... | Files: ...
  ```

- ## Verification Levels
-
- | Level | Check | Method | Runner |
- |-------|-------|--------|--------|
- | L0: Builds | Code compiles/builds | Run build command | Orchestrator (Bash) |
- | L1: Files exist | Planned files in diff | `git diff --name-only` vs PLAN.md | Orchestrator (Bash) |
- | L2: Coverage | Coverage didn't drop | Coverage tool (before/after) | Orchestrator (Bash) |
- | L3: Integration | Build + tests pass | Subsumed by L0 + L4 | — |
- | L4: Tested | Tests pass | Run test command | Orchestrator (Bash) |
- | L5: Browser | UI assertions pass | Playwright + `locator.ariaSnapshot()` | Orchestrator (Bash + Node) |
-
- **Default: L0 through L5.** L0 and L4 skipped ONLY if no build/test command detected (see step 1.5). L5 skipped if no frontend detected and no config override. All checks are machine-verifiable. No LLM agents are used.
-
  ## Rules
  - Verify against spec, not assumptions
  - Flag partial implementations
  - All checks machine-verifiable — no LLM judgment
  - Don't auto-fix — add fix tasks to PLAN.md, then `/df:execute --continue`
- - Capture learnings — Write experiments for significant approaches
+ - Capture learnings for significant approaches
+ - **Terse output** — Output ONLY the compact report format (section 3)

  ## Post-Verification: Worktree Merge & Cleanup

- **Only runs when ALL gates pass.** If any gate fails, fix tasks were added to PLAN.md instead (see step 3).
-
- ### 1. DISCOVER WORKTREE
-
- Find worktree info (checkpoint → fallback to git):
+ **Only runs when ALL gates pass.**

- ```bash
- # Strategy 1: checkpoint.json
- if [ -f .deepflow/checkpoint.json ]; then
-   WORKTREE_BRANCH=$(cat .deepflow/checkpoint.json | jq -r '.worktree_branch')
-   WORKTREE_PATH=$(cat .deepflow/checkpoint.json | jq -r '.worktree_path')
- fi
+ 1. **Discover worktree:** Read `.deepflow/checkpoint.json` for `worktree_branch`/`worktree_path`. Fallback: infer from `doing-*` spec name + `git worktree list --porcelain`. No worktree → "nothing to merge", exit.
+ 2. **Merge:** `git checkout main && git merge ${BRANCH} --no-ff -m "feat({spec}): merge verified changes"`. On conflict → keep worktree, output "Resolve manually, run /df:verify --merge-only", exit.
+ 3. **Cleanup:** `git worktree remove --force ${PATH} && git branch -d ${BRANCH} && rm -f .deepflow/checkpoint.json`
+ 4. **Rename spec:** `mv specs/doing-${NAME}.md specs/done-${NAME}.md`
+ 5. **Extract decisions:** Read done spec, extract `[APPROACH]`/`[ASSUMPTION]`/`[PROVISIONAL]` decisions, append to `.deepflow/decisions.md` as `### {date} — {spec}\n- [TAG] decision — rationale`. Delete done spec after successful write; preserve on failure.

- # Strategy 2: Infer from doing-* spec + git worktree list
- if [ -z "${WORKTREE_BRANCH}" ]; then
-   SPEC_NAME=$(basename specs/doing-*.md .md | sed 's/doing-//')
-   WORKTREE_PATH=".deepflow/worktrees/${SPEC_NAME}"
-   WORKTREE_BRANCH=$(git worktree list --porcelain | grep -A2 "${WORKTREE_PATH}" | grep 'branch' | sed 's|branch refs/heads/||')
- fi
-
- # No worktree found
- if [ -z "${WORKTREE_BRANCH}" ]; then
-   echo "No worktree found — nothing to merge. Workflow may already be on main."
-   exit 0
- fi
- ```
-
- ### 2. MERGE TO MAIN
-
- ```bash
- git checkout main
- git merge "${WORKTREE_BRANCH}" --no-ff -m "feat({spec}): merge verified changes"
- ```
-
- **On merge conflict:** Keep worktree intact, output "Merge conflict detected. Resolve manually, then run /df:verify --merge-only", exit without cleanup.
-
- ### 3. CLEANUP WORKTREE
-
- ```bash
- git worktree remove --force "${WORKTREE_PATH}"
- git branch -d "${WORKTREE_BRANCH}"
- rm -f .deepflow/checkpoint.json
- ```
-
- ### 4. RENAME SPEC
-
- ```bash
- # Rename spec to done
- mv specs/doing-${SPEC_NAME}.md specs/done-${SPEC_NAME}.md
- ```
-
- ### 5. EXTRACT DECISIONS
-
- Read the renamed `specs/done-${SPEC_NAME}.md` file. Model-extract architectural decisions:
- - Explicit choices → `[APPROACH]`
- - Unvalidated assumptions → `[ASSUMPTION]`
- - "For now" decisions → `[PROVISIONAL]`
-
- Append to `.deepflow/decisions.md`:
- ```
- ### {YYYY-MM-DD} — {spec-name}
- - [TAG] decision text — rationale
- ```
-
- After successful append, delete `specs/done-${SPEC_NAME}.md`. If write fails, preserve the file.
-
- Output:
- ```
- ✓ Merged df/upload to main
- ✓ Cleaned up worktree and branch
- ✓ Spec complete: doing-upload → done-upload
-
- Workflow complete! Ready for next feature: /df:spec <name>
- ```
+ Output: `✓ Merged main | Cleaned worktree | ✓ Spec complete | Workflow complete! Ready: /df:spec <name>`
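
The rename-and-extract step (4–5) of the new version is described only in prose on both sides of the diff. A minimal POSIX-shell sketch of that flow, assuming the tags appear at the start of bullet lines; the spec name, file contents, and paths here are illustrative, not part of the command:

```shell
#!/bin/sh
set -eu
# Demo fixture: a verified spec with two tagged decisions (illustrative content).
mkdir -p demo/specs demo/.deepflow && cd demo
NAME=upload
cat > "specs/doing-${NAME}.md" <<'EOF'
# Upload
- [APPROACH] presigned URLs for uploads — avoids proxying large bodies
- [PROVISIONAL] local disk storage for now — swap to object storage later
EOF
# Step 4: rename doing- to done-.
mv "specs/doing-${NAME}.md" "specs/done-${NAME}.md"
# Step 5: append tagged decisions under a dated heading; delete the done spec
# only if the append succeeded (preserve it on failure).
{
  printf '### %s — %s\n' "$(date +%Y-%m-%d)" "${NAME}"
  grep -E '^- \[(APPROACH|ASSUMPTION|PROVISIONAL)\]' "specs/done-${NAME}.md"
} >> .deepflow/decisions.md && rm "specs/done-${NAME}.md"
cat .deepflow/decisions.md
```

Running it leaves `.deepflow/decisions.md` holding the dated heading plus both tagged lines, and the done spec is removed only because the append succeeded.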