deepflow 0.1.89 → 0.1.91

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -3,146 +3,4 @@ name: df:auto-cycle
  description: Execute one task from PLAN.md with ratchet health checks and state tracking for autonomous mode
  ---
 
- # /df:auto-cycle — Single Cycle of Auto Mode
-
- Execute one task from PLAN.md. Called by `/loop 1m /df:auto-cycle` — each invocation gets fresh context.
-
- **NEVER:** use EnterPlanMode, use ExitPlanMode
-
- ## Behavior
-
- ### 1. LOAD STATE
-
- Shell injection (use output directly):
- - `` !`cat PLAN.md 2>/dev/null || echo 'NOT_FOUND'` `` — required, error if missing
- - `` !`cat .deepflow/auto-memory.yaml 2>/dev/null || echo 'NOT_FOUND'` `` — optional cross-cycle state
-
- **auto-memory.yaml schema:** see `/df:execute`. Each section optional, missing keys = empty. Created on first write if absent.
-
- ### 2. PICK NEXT TASK
-
- **Optimize-active override:** Check `optimize_state.task_id` in auto-memory.yaml first. If present and task still `[ ]` in PLAN.md, resume it (skip normal scan). If task is `[x]`, clear `optimize_state.task_id` and fall through.
-
- **Normal scan:** First `[ ]` task in PLAN.md where all `Blocked by:` deps are `[x]`.
-
- **No `[ ]` tasks:** Skip to step 5 (completion check).
-
- **All remaining blocked:** Error with blocker details, suggest `/df:execute` for manual resolution.
-
- ### 3. EXECUTE
-
- Run via Skill tool: `skill: "df:execute", args: "{task_id}"`. Handles worktree, agent spawning, ratchet, commit.
-
- **Bootstrap handling:** If execute returns `"bootstrap: completed"` (zero pre-existing tests, baseline written):
- - Record as task `BOOTSTRAP`, status `passed`
- - Do NOT run a regular task in the same cycle
- - Next cycle picks up the first regular task
-
- ### 3.5. WRITE STATE
-
- After execute returns, update `.deepflow/auto-memory.yaml` (read-merge-write, preserve all keys):
-
- | Outcome | Write |
- |---------|-------|
- | Success (non-optimize) | `task_results[id]: {status: success, commit: {hash}, cycle: {N}}` |
- | Revert (non-optimize) | `task_results[id]: {status: reverted, reason: "{msg}", cycle: {N}}` + append to `revert_history` |
- | Optimize cycle | Merge updated `optimize_state` from execute (confirm `cycles_run`, `current_best`, `history`) |
-
- ### 3.6. CIRCUIT BREAKER
-
- **Failure = any L0-L5 verification failure** (build, files, coverage, tests, browser assertions). Does NOT count: L5 skip (no frontend), L5 pass-on-retry.
-
- **On revert (non-optimize):**
- 1. Increment `consecutive_reverts[task_id]` in auto-memory.yaml
- 2. Read `circuit_breaker_threshold` from `.deepflow/config.yaml` (default: 3)
- 3. If `consecutive_reverts[task_id] >= threshold`: halt loop, report "Circuit breaker tripped: T{n} failed {N} times. Reason: {msg}"
- 4. Else: continue to step 4
-
- **On success (non-optimize):** Reset `consecutive_reverts[task_id]` to 0.
-
- **Optimize stop conditions** (from execute terminal outcomes):
-
- | Outcome | Action |
- |---------|--------|
- | `"target reached: {value}"` | Confirm task [x], write optimize completion (3.7), report, continue |
- | `"max cycles reached, best: {value}"` | Confirm task [x], write optimize completion (3.7), report, continue |
- | `"circuit breaker: 3 consecutive reverts"` | Task stays [ ], write failure to experiments (3.7), preserve optimize_state, halt loop |
-
- ### 3.7. OPTIMIZE COMPLETION
-
- **On target reached or max cycles (task [x]):**
- 1. Write each `failed_hypotheses` entry to `.deepflow/experiments/{spec}--optimize-{task_id}--{slug}--failed.md`
- 2. Write summary to `.deepflow/experiments/{spec}--optimize-{task_id}--summary--{status}.md` with metric/target/direction/baseline/best/cycles/history table
- 3. Clear `optimize_state` from auto-memory.yaml
-
- **On circuit breaker halt:** Same experiment writes but with status `circuit_breaker`. Preserve `optimize_state` in auto-memory.yaml (add `halted: circuit_breaker` note).
-
- ### 4. UPDATE REPORT
-
- Write to `.deepflow/auto-report.md` — append each cycle, never overwrite. First cycle creates skeleton, subsequent cycles update in-place.
-
- **File sections:** Summary table, Cycle Log, Probe Results, Optimize Runs, Secondary Metric Warnings, Health Score, Reverted Tasks.
-
- #### Per-cycle update rules
-
- | Section | When | Action |
- |---------|------|--------|
- | Cycle Log | Every cycle | Append row: `cycle | task_id | status | commit/reverted | delta | metric_delta | reason | timestamp` |
- | Summary | Every cycle | Recalculate from Cycle Log: total cycles, committed, reverted, optimize cycles/best (if applicable) |
- | Last updated | Every cycle | Overwrite timestamp |
- | Probe Results | Probe/spike task | Append row from `probe_learnings` in auto-memory.yaml |
- | Optimize Runs | Optimize terminal event | Append row: task/metric/baseline/best/target/cycles/status |
- | Secondary Metric Warnings | >5% regression | Append row (severity: WARNING, advisory only — no auto-revert) |
- | Health Score | Every cycle | Replace with latest: tests passed, build status, ratchet green/red, optimize status |
- | Reverted Tasks | On revert | Append row from `revert_history` |
-
- **Status values:** `passed`, `failed` (reverted), `skipped` (already done), `optimize` (inner cycle).
-
- **Delta format:** `tests: {before}→{after}, build: ok/fail`. Include coverage if available. On revert, show regression.
-
- **Optimize status in Health Score:** `in_progress` | `reached` | `max_cycles` | `circuit_breaker` | `—` (omit row if no optimize tasks in PLAN.md).
-
- ### 5. CHECK COMPLETION
-
- Count `[x]` and `[ ]` tasks in PLAN.md. Per-spec verify+merge happens in `/df:execute` step 8 automatically.
-
- - **No `[ ]` remaining:** "All specs verified and merged. Workflow complete." → exit
- - **Tasks remain:** "Cycle complete. {N} tasks remaining." → exit (next /loop invocation picks up)
-
- ## Rules
-
- | Rule | Detail |
- |------|--------|
- | One task per cycle | Fresh context each invocation — no multi-task batching |
- | Bootstrap = sole task | No regular task runs in a bootstrap cycle |
- | Idempotent | Safe to call with no work — reports "0 tasks remaining" |
- | Never modifies PLAN.md | `/df:execute` handles PLAN.md updates |
- | Auto-memory after every cycle | `task_results`, `revert_history`, `consecutive_reverts` always written |
- | Circuit breaker halts loop | Default 3 consecutive reverts (configurable: `circuit_breaker_threshold` in config.yaml) |
- | One optimize at a time | Defers other optimize tasks until active one terminates |
- | Optimize resumes across contexts | `optimize_state.task_id` overrides normal scan |
- | Optimize CB preserves state | On halt: task stays [ ], optimize_state kept for diagnosis |
- | Secondary metric regression advisory | >5% = WARNING in report, never auto-revert |
- | Optimize completion writes experiments | Failed hypotheses + summary to `.deepflow/experiments/` |
-
- ## Example
-
- ### Normal Cycle
- ```
- /df:auto-cycle
- Loading PLAN.md... 3 tasks, 1 done, 2 pending
- Next: T2 (T1 satisfied)
- Running: /df:execute T2 → ✓ ratchet passed (abc1234)
- Updated auto-report.md: cycles=2, committed=2
- Cycle complete. 1 tasks remaining.
- ```
-
- ### Circuit Breaker Tripped
- ```
- /df:auto-cycle
- Loading PLAN.md... 3 tasks, 1 done, 2 pending
- Next: T3
- Running: /df:execute T3 → ✗ ratchet failed — "2 tests regressed"
- Circuit breaker: consecutive_reverts[T3] = 3 (threshold: 3)
- Loop halted. Resolve T3 manually, then resume.
- ```
+ Use the Skill tool to invoke the `auto-cycle` skill, passing through any arguments.
@@ -42,4 +42,4 @@ Each invocation gets fresh context — zero LLM tokens on loop management.
  | Plan once | Only runs `/df:plan` if PLAN.md absent |
  | Snapshot before loop | Ratchet baseline set before any agents run |
  | No lead agent | `/loop` is native Claude Code — no custom orchestrator |
- | Cycle logic in `/df:auto-cycle` | This command is setup only |
+ | Cycle logic in `src/skills/auto-cycle/SKILL.md` | This command is setup only; `/df:auto-cycle` is a shim that delegates to the skill |
@@ -101,24 +101,37 @@ Context ≥50% → checkpoint and exit. Before spawning: `TaskUpdate(status: "in
 
  ### 5.5. RATCHET CHECK
 
- Run health checks in worktree after each agent completes.
+ Run `node bin/ratchet.js` in the worktree directory after each agent completes:
+ ```bash
+ node bin/ratchet.js --worktree ${WORKTREE_PATH} --snapshot .deepflow/auto-snapshot.txt
+ ```
+
+ The script handles all health checks internally and outputs structured JSON:
+ ```json
+ {"status": "PASS"|"FAIL"|"SALVAGEABLE", "reason": "...", "details": "..."}
+ ```
 
- | File | Build | Test | Typecheck | Lint |
- |------|-------|------|-----------|------|
- | `package.json` | `npm run build` | `npm test` | `npx tsc --noEmit` | `npm run lint` |
- | `pyproject.toml` | — | `pytest` | `mypy .` | `ruff check .` |
- | `Cargo.toml` | `cargo build` | `cargo test` | — | `cargo clippy` |
- | `go.mod` | `go build ./...` | `go test ./...` | — | `go vet ./...` |
+ **Exit codes:** 0 = PASS, 1 = FAIL (script already ran `git revert HEAD --no-edit`), 2 = SALVAGEABLE (lint/typecheck only; build+tests passed).
 
- Run Build → Test → Typecheck → Lint (stop on first failure). Ratchet uses ONLY pre-existing tests from `.deepflow/auto-snapshot.txt`.
+ **You MUST NOT inspect, classify, or reinterpret test failures. FAIL means revert. No exceptions.**
+
+ **Prohibited actions during ratchet:**
+ - No `git stash` or `git checkout` for investigation purposes
+ - No inline edits to pre-existing test files
+ - No reading raw test output to decide what "really" failed
+
+ **Broken-tests policy:** Updating pre-existing tests requires a separate dedicated task in PLAN.md with explicit justification — never inline during execution.
+
+ **Orchestrator response by exit code:**
+ - **Exit 0 (PASS):** Commit stands. Proceed to §5.6 wave test agent.
+ - **Exit 1 (FAIL):** Script already reverted. Set `TaskUpdate(status: "pending")`. Report: `"✗ T{n}: reverted"`.
+ - **Exit 2 (SALVAGEABLE):** Spawn `Agent(model="haiku")` to fix lint/typecheck issues. Re-run `node bin/ratchet.js`. If still non-zero → revert both commits, set status pending.
 
  **Edit scope validation:** `git diff HEAD~1 --name-only` vs allowed globs. Violation → revert, report.
  **Impact completeness:** diff vs Impact callers/duplicates. Gap → advisory warning (no revert).
 
  **Metric gate (Optimize only):** Run `eval "${metric_command}"` with cwd=`${WORKTREE_PATH}` (never `cd && eval`). Parse float (non-numeric → revert). Compare using `direction`+`min_improvement_threshold`. Both ratchet AND metric must pass → keep. Ratchet pass + metric stagnant → revert. Secondary metrics: regression > `regression_threshold` (5%) → WARNING in auto-report.md (no revert).
 
- **Output truncation:** Success → suppress. Build fail → last 15 lines. Test fail → names + last 20 lines. Typecheck/lint → count + first 5 errors.
-
  **Token tracking result (on pass):** Read `end_percentage`. Sum token fields from `.deepflow/token-history.jsonl` between start/end timestamps (awk ISO 8601 compare). Write to `.deepflow/results/T{N}.yaml`:
  ```yaml
  tokens:
@@ -131,10 +144,6 @@ tokens:
  ```
  Omit if context.json/token-history.jsonl/awk unavailable. Never fail ratchet for tracking errors.
 
- **Evaluate:** All pass → commit stands. Failure → partial salvage:
- 1. Lint/typecheck-only (build+tests passed): spawn `Agent(model="haiku")` to fix. Re-ratchet. Fail → revert both.
- 2. Build/test failure → `git revert HEAD --no-edit` (no salvage).
-
  ### 5.6. WAVE TEST AGENT
 
  <!-- AC-8: After wave ratchet passes, Opus test agent spawns and writes unit tests -->
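The orchestrator responses by exit code above reduce to a small pure dispatch. A minimal sketch (illustrative only — `ratchet_action` and its action strings are not part of deepflow's code; the real decisions are carried out by the orchestrating agent):

```python
# Hypothetical mapping of bin/ratchet.js exit codes to orchestrator actions,
# per the "Orchestrator response by exit code" rules in section 5.5.

def ratchet_action(exit_code: int, salvage_attempted: bool = False) -> str:
    """Return the orchestrator's next step for a given ratchet exit code."""
    if exit_code == 0:
        return "proceed"           # PASS: commit stands, go to 5.6 wave test agent
    if exit_code == 1:
        return "mark_pending"      # FAIL: script already reverted the commit
    if exit_code == 2:
        # SALVAGEABLE: one haiku fix attempt, then re-run ratchet;
        # a second non-zero result means revert both commits
        return "revert_both" if salvage_attempted else "spawn_haiku_fixer"
    return "error"                 # unexpected exit code
```

Note that FAIL never branches on failure content — consistent with the rule that test failures are never inspected or reinterpreted.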
@@ -147,8 +156,11 @@ Omit if context.json/token-history.jsonl/awk unavailable. Never fail ratchet for
 
  **Flow:**
  1. Capture the implementation diff: `git -C ${WORKTREE_PATH} diff HEAD~1` → store as `IMPL_DIFF`.
- 2. Spawn `Agent(model="opus")` with Wave Test prompt (§6). `run_in_background=true`. End turn, wait.
- 3. On notification:
+ 2. Gather dedup context:
+ - Read `.deepflow/auto-snapshot.txt` → store full file list as `SNAPSHOT_FILES`.
+ - Extract existing test function names: `grep -h 'describe\|it(\|test(\|def test_\|func Test' $(cat .deepflow/auto-snapshot.txt) 2>/dev/null | head -50` → store as `EXISTING_TEST_NAMES`.
+ 3. Spawn `Agent(model="opus")` with Wave Test prompt (§6), passing `SNAPSHOT_FILES` and `EXISTING_TEST_NAMES`. `run_in_background=true`. End turn, wait.
+ 4. On notification:
  a. Run ratchet check (§5.5) — all new + pre-existing tests must pass.
  b. **Tests pass** → commit stands. **Re-snapshot** immediately so wave N+1 ratchet includes wave N tests:
  ```bash
@@ -167,7 +179,7 @@ Omit if context.json/token-history.jsonl/awk unavailable. Never fail ratchet for
  {failure_feedback}
  Fix the issues above. Do NOT repeat the same mistakes.
  ```
- - On implementer notification: ratchet check (§5.5). Passed → goto step 1 (spawn test agent again). Failed → same retry logic.
+ - On implementer notification: ratchet check (§5.5). Passed → goto step 2 (gather dedup context, spawn test agent again). Failed → same retry logic.
  - If `attempt_count >= 3`:
  - Revert ALL commits back to pre-task state: `git -C ${WORKTREE_PATH} reset --hard {pre_task_commit}`
  - `TaskUpdate(status: "pending")`
@@ -185,7 +197,7 @@ Trigger: ≥2 [SPIKE] tasks with same blocker or identical hypothesis.
  4. Per notification: ratchet (§5.5). Record: ratchet_passed, regressions, coverage_delta, files_changed, commit.
  5. **Winner selection** (no LLM judge): disqualify regressions. Standard: fewer regressions > coverage > fewer files > first complete. Optimize: best metric delta > fewer regressions > fewer files. No passes → reset pending for debugger.
  6. Preserve all worktrees. Losers: branch + `-failed`. Record in checkpoint.json.
- 7. Log all outcomes to `.deepflow/auto-memory.yaml` under `spike_insights`+`probe_learnings` (schema in auto-cycle.md). Both winners and losers.
+ 7. Log all outcomes to `.deepflow/auto-memory.yaml` under `spike_insights`+`probe_learnings` (schema in src/skills/auto-cycle/SKILL.md). Both winners and losers.
  8. Cherry-pick winner into shared worktree. Winner → `[x] [PROBE_WINNER]`, losers → `[~] [PROBE_FAILED]`.
 
  #### 5.7.1. PROBE DIVERSITY (Optimize Probes)
@@ -279,9 +291,16 @@ Implementation diff:
  Files changed: {changed_files}
  Existing test patterns: {test_file_examples from auto-snapshot.txt, first 3}
 
+ Pre-existing test files (from auto-snapshot.txt):
+ {SNAPSHOT_FILES}
+
+ Existing test function names (do NOT duplicate these):
+ {EXISTING_TEST_NAMES}
+
  --- END ---
  Write thorough unit tests covering: happy paths, edge cases, error handling.
  Follow existing test conventions in the codebase.
+ Do not duplicate tests for functionality already covered by the existing tests listed above.
  Commit as: test({spec}): wave-{N} unit tests
  Do NOT modify implementation files. ONLY add/edit test files.
  Last line of your response MUST be: TASK_STATUS:pass or TASK_STATUS:fail
@@ -361,23 +380,31 @@ Before merge, spawn an independent Opus QA agent that sees ONLY the spec and exp
 
  3. On notification:
  a. Run ratchet check (§5.5) — all integration tests must pass.
- b. **Tests pass** → commit stands. Proceed to step 8.2 (merge).
- c. **Tests fail** → **merge is blocked**. Do NOT retry. Report:
- `"✗ Final integration tests failed for {spec} — merge blocked, requires human review"`
- Leave worktree intact. Set all spec tasks back to `TaskUpdate(status: "pending")`.
- Write failure details to `.deepflow/results/final-test-{spec}.yaml`:
+ b. **Tests pass** → commit stands. Proceed to step 8.2 (full L0-L5 verify + merge).
+ c. **Tests fail** → **merge is blocked**. Do NOT retry. Run diagnostic verify:
+ ```
+ skill: "df:verify", args: "--diagnostic doing-{name}"
+ ```
+ Capture the L0-L4 results from verify output (pass/fail/warn per level). Write to `.deepflow/results/final-test-{spec}.yaml`:
  ```yaml
  spec: {spec}
  status: blocked
  reason: "Final integration tests failed"
  output: |
  {truncated test output — last 30 lines}
+ diagnostics:
+ L0: {pass|fail}
+ L1: {pass|fail}
+ L2: {pass|warn|fail}
+ L4: {pass|fail}
  ```
- STOP. Do not proceed to merge.
+ Leave worktree intact. Set all spec tasks back to `TaskUpdate(status: "pending")`.
+ Report: `"✗ Final tests failed for {spec} — diagnostic verify: L0 {✓|✗} | L1 {✓|✗} | L2 {✓|⚠|✗} | L4 {✓|✗} — merge blocked"`
+ STOP. Do not proceed to merge. Diagnostic verify is informational only — no fix agents, no retries.
 
  **8.2. Merge and cleanup:**
  1. `skill: "df:verify", args: "doing-{name}"` — runs L0-L4 gates, merges, cleans worktree, renames doing→done, extracts decisions. Fail (fix tasks added) → stop; `--continue` picks them up.
- 2. Remove spec's ENTIRE section from PLAN.md. Recalculate Summary table.
+ 2. PLAN.md section cleanup handled by verify (step 6).
 
  ---
 
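The one-line diagnostic report added in this hunk follows a fixed template. A hedged sketch of the formatting (hypothetical helper — `diagnostic_report` and the `MARKS` table are illustrative, not deepflow API):

```python
# Renders the blocked-merge report line from per-level L0-L4 results,
# using the ✓/✗/⚠ symbols from the report template above.
MARKS = {"pass": "✓", "fail": "✗", "warn": "⚠"}

def diagnostic_report(spec: str, levels: dict) -> str:
    """Format e.g. {'L0': 'pass', ...} into the single-line report."""
    parts = " | ".join(f"{lvl} {MARKS[res]}" for lvl, res in levels.items())
    return f"✗ Final tests failed for {spec} — diagnostic verify: {parts} — merge blocked"
```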
@@ -428,5 +455,5 @@ Reverted task: `TaskUpdate(status: "pending")`, dependents stay blocked. Repeate
  | Plateau → probes | 3 cycles <1% triggers probes |
  | Circuit breaker = 3 reverts | Halts, needs human |
  | Wave test after ratchet | Opus writes tests; 3 attempts then revert |
- | Final test before merge | Opus black-box integration tests; failure blocks merge, no retry |
+ | Final test before merge | Opus black-box integration tests; pass → full L0-L5 verify + merge; failure → diagnostic L0-L4 verify, results in final-test-{spec}.yaml, merge blocked |
  | Probe diversity | ≥1 contrarian + ≥1 naive |
@@ -12,14 +12,36 @@ context: fork
 
  ## Usage
  ```
- /df:verify                  # Verify doing-* specs with all tasks completed
- /df:verify doing-upload     # Verify specific spec
- /df:verify --re-verify      # Re-verify done-* specs (already merged)
+ /df:verify                           # Verify doing-* specs with all tasks completed
+ /df:verify doing-upload              # Verify specific spec
+ /df:verify --re-verify               # Re-verify done-* specs (already merged)
+ /df:verify --diagnostic doing-upload # L0-L4 only; write results to diagnostics yaml; no merge/fix/rename
  ```
 
  ## Spec File States
  `specs/feature.md` → unplanned (skip) | `doing-*.md` → default target | `done-*.md` → `--re-verify` only
 
+ ## Diagnostic Mode (`--diagnostic`)
+
+ When invoked with `--diagnostic`:
+
+ - Run **L0-L4 only** (skip L5 entirely, even if frontend detected).
+ - Write results to `.deepflow/results/final-test-{spec}.yaml` under a `diagnostics:` key:
+ ```yaml
+ diagnostics:
+ spec: doing-upload
+ timestamp: 2024-01-15T10:30:00Z
+ L0: pass # or fail
+ L1: pass # or fail
+ L2: pass # or warn (no tool)
+ L4: fail # or pass
+ summary: "L0 ✓ | L1 ✓ | L2 ⚠ | L3 — | L4 ✗"
+ ```
+ - Prefix all report output with `[DIAGNOSTIC]`.
+ - **Skip entirely:** Post-Verification merge (§4), fix task creation, spec rename, decision extraction, PLAN.md cleanup (step 6).
+ - Does **not** count as a revert for the circuit breaker.
+ - Does **not** modify `auto-snapshot.txt`.
+
  ## Behavior
 
  ### 1. LOAD CONTEXT
@@ -71,7 +93,9 @@ No tool → pass with warning. When available: stash changes → run coverage on
 
  Algorithm: detect frontend → resolve dev command/port → start server → poll readiness → read assertions from PLAN.md → auto-install Playwright Chromium → evaluate via `locator.ariaSnapshot()` → screenshot → retry once on failure → report.
 
- **Step 1: Detect frontend.** Config `quality.browser_verify` overrides: `false` → always skip (`L5 — (no frontend)`), `true` → always run, absent → auto-detect from package.json (both deps and devDeps):
+ **Step 1: Detect frontend.** Config `quality.browser_verify` overrides: `false` → always skip (`L5 — (no frontend)`), `true` → always run, absent → auto-detect using BOTH conditions:
+
+ 1. Frontend framework found in package.json (deps or devDeps):
 
  | Package(s) | Framework |
  |------------|-----------|
@@ -82,7 +106,12 @@ Algorithm: detect frontend → resolve dev command/port → start server → pol
  | `@sveltejs/kit` | SvelteKit |
  | `svelte`, `@sveltejs/*` | Svelte |
 
- No frontend detected and no config override → `L5 — (no frontend)`, skip remaining L5 steps.
+ 2. A `browser_assertions:` block exists in PLAN.md scoped to the current spec.
+
+ **Auto-detect outcomes (no config override):**
+ - No frontend detected → `L5 — (no frontend)`, skip remaining L5 steps.
+ - Frontend detected but no `browser_assertions:` block in PLAN.md for current spec → `L5 — (no browser_assertions in PLAN.md)`, skip remaining L5 steps.
+ - Both conditions met → proceed to Steps 2–6.
 
  **Step 2: Dev server lifecycle.**
  1. **Resolve dev command:** Config `quality.dev_command` wins → fallback to `npm run dev` if `scripts.dev` exists → none found → skip L5 with warning.
@@ -113,7 +142,7 @@ No frontend detected and no config override → `L5 — (no frontend)`, skip rem
  | Fail | Fail — same selectors | L5 ✗ — genuine failure |
  | Fail | Fail — different selectors | L5 ✗ (flaky) |
 
- All L5 outcomes: `✓` pass | `⚠` passed on retry | `✗` both failed (same) | `✗ (flaky)` both failed (different) | `— (no frontend)` | `— (no assertions)` | `✗ (install failed)`
+ All L5 outcomes: `✓` pass | `⚠` passed on retry | `✗` both failed (same) | `✗ (flaky)` both failed (different) | `— (no frontend)` | `— (no browser_assertions in PLAN.md)` | `— (no assertions)` | `✗ (install failed)`
 
  **Fix task on L5 failure:** Append to PLAN.md under spec section with next T{n} ID. Include: failing assertions (selector + detail), first 40 lines of `locator('body').ariaSnapshot()` DOM excerpt, screenshot path, flakiness note if assertion sets differed.
 
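The retry-outcome table in this hunk can be read as a small decision function. A sketch under stated assumptions (`l5_outcome` is hypothetical; the real logic lives in the verify flow):

```python
# Classifies an L5 browser-verification result from the first attempt and
# the single retry, per the attempt/retry table in the L5 section.
def l5_outcome(first_failed, retry_failed=None, same_selectors=None):
    if not first_failed:
        return "✓"             # passed on first attempt, no retry needed
    if not retry_failed:
        return "⚠"             # failed once, passed on retry
    # both attempts failed: same failing selectors = genuine, different = flaky
    return "✗" if same_selectors else "✗ (flaky)"
```

The skip outcomes (`— (no frontend)`, `— (no assertions)`, `✗ (install failed)`) are decided before this classification ever runs.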
@@ -158,12 +187,13 @@ Objective: ... | Approach: ... | Why it worked: ... | Files: ...
 
  ## Post-Verification: Worktree Merge & Cleanup
 
- **Only runs when ALL gates pass.**
+ **Only runs when ALL gates pass AND `--diagnostic` was NOT used.**
 
  1. **Discover worktree:** Read `.deepflow/checkpoint.json` for `worktree_branch`/`worktree_path`. Fallback: infer from `doing-*` spec name + `git worktree list --porcelain`. No worktree → "nothing to merge", exit.
  2. **Merge:** `git checkout main && git merge ${BRANCH} --no-ff -m "feat({spec}): merge verified changes"`. On conflict → keep worktree, output "Resolve manually, run /df:verify --merge-only", exit.
  3. **Cleanup:** `git worktree remove --force ${PATH} && git branch -d ${BRANCH} && rm -f .deepflow/checkpoint.json`
  4. **Rename spec:** `mv specs/doing-${NAME}.md specs/done-${NAME}.md`
  5. **Extract decisions:** Read done spec, extract `[APPROACH]`/`[ASSUMPTION]`/`[PROVISIONAL]` decisions, append to `.deepflow/decisions.md` as `### {date} — {spec}\n- [TAG] decision — rationale`. Delete done spec after successful write; preserve on failure.
+ 6. **Clean PLAN.md:** Find the `### {spec-name}` section (match on name stem, strip `doing-`/`done-` prefix). Delete from header through the line before the next `### ` header (or EOF). Recalculate Summary table (recount `### ` headers for spec count, `- [ ]`/`- [x]` for task counts). If no spec sections remain, delete PLAN.md entirely. Skip silently if PLAN.md missing or section already gone.
 
- Output: `✓ Merged → main | ✓ Cleaned worktree | ✓ Spec complete | Workflow complete! Ready: /df:spec <name>`
+ Output: `✓ Merged → main | ✓ Cleaned worktree | ✓ Spec complete | ✓ Cleaned PLAN.md | Workflow complete! Ready: /df:spec <name>`
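The new "Clean PLAN.md" step describes a section-scan algorithm. A minimal sketch, assuming the `### {spec-name}` layout above (`clean_plan` is a hypothetical helper, and Summary-table recalculation is omitted):

```python
import re

def clean_plan(text: str, spec: str):
    """Drop one spec's '### ' section; return (new_text, sections_remain)."""
    stem = re.sub(r"^(doing-|done-)", "", spec)  # match on name stem
    out, skipping = [], False
    for line in text.splitlines():
        if line.startswith("### "):
            # section runs from its header to the line before the next header
            header_stem = re.sub(r"^(doing-|done-)", "", line[4:].strip())
            skipping = (header_stem == stem)
        if not skipping:
            out.append(line)
    remain = any(l.startswith("### ") for l in out)  # False → delete PLAN.md
    return "\n".join(out), remain
```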
@@ -0,0 +1,148 @@
1
+ ---
2
+ name: auto-cycle
3
+ description: Execute one task from PLAN.md with ratchet health checks and state tracking for autonomous mode
4
+ ---
5
+
6
+ # auto-cycle — Single Cycle of Auto Mode
7
+
8
+ Execute one task from PLAN.md. Called by `/loop 1m /df:auto-cycle` — each invocation gets fresh context.
9
+
10
+ **NEVER:** use EnterPlanMode, use ExitPlanMode
11
+
12
+ ## Behavior
13
+
14
+ ### 1. LOAD STATE
15
+
16
+ Shell injection (use output directly):
17
+ - `` !`cat PLAN.md 2>/dev/null || echo 'NOT_FOUND'` `` — required, error if missing
18
+ - `` !`cat .deepflow/auto-memory.yaml 2>/dev/null || echo 'NOT_FOUND'` `` — optional cross-cycle state
19
+
20
+ **auto-memory.yaml schema:** see `/df:execute`. Each section optional, missing keys = empty. Created on first write if absent.
21
+
22
+ ### 2. PICK NEXT TASK
23
+
24
+ **Optimize-active override:** Check `optimize_state.task_id` in auto-memory.yaml first. If present and task still `[ ]` in PLAN.md, resume it (skip normal scan). If task is `[x]`, clear `optimize_state.task_id` and fall through.
25
+
26
+ **Normal scan:** First `[ ]` task in PLAN.md where all `Blocked by:` deps are `[x]`.
27
+
28
+ **No `[ ]` tasks:** Skip to step 5 (completion check).
29
+
30
+ **All remaining blocked:** Error with blocker details, suggest `/df:execute` for manual resolution.
31
+
32
+ ### 3. EXECUTE
33
+
34
+ Run via Skill tool: `skill: "df:execute", args: "{task_id}"`. Handles worktree, agent spawning, ratchet, commit.
35
+
36
+ **Bootstrap handling:** If execute returns `"bootstrap: completed"` (zero pre-existing tests, baseline written):
37
+ - Record as task `BOOTSTRAP`, status `passed`
38
+ - Do NOT run a regular task in the same cycle
39
+ - Next cycle picks up the first regular task
40
+
41
+ ### 3.5. WRITE STATE
42
+
43
+ After execute returns, update `.deepflow/auto-memory.yaml` (read-merge-write, preserve all keys):
44
+
45
+ | Outcome | Write |
46
+ |---------|-------|
47
+ | Success (non-optimize) | `task_results[id]: {status: success, commit: {hash}, cycle: {N}}` |
48
+ | Revert (non-optimize) | `task_results[id]: {status: reverted, reason: "{msg}", cycle: {N}}` + append to `revert_history` |
49
+ | Optimize cycle | Merge updated `optimize_state` from execute (confirm `cycles_run`, `current_best`, `history`) |
50
+
51
+ ### 3.6. CIRCUIT BREAKER
52
+
53
+ **Failure = any L0-L5 verification failure** (build, files, coverage, tests, browser assertions). Does NOT count: L5 skip (no frontend), L5 pass-on-retry.
54
+
55
+ **On revert (non-optimize):**
56
+ 1. Increment `consecutive_reverts[task_id]` in auto-memory.yaml
57
+ 2. Read `circuit_breaker_threshold` from `.deepflow/config.yaml` (default: 3)
58
+ 3. If `consecutive_reverts[task_id] >= threshold`: halt loop, report "Circuit breaker tripped: T{n} failed {N} times. Reason: {msg}"
59
+ 4. Else: continue to step 4
60
+
61
+ **On success (non-optimize):** Reset `consecutive_reverts[task_id]` to 0.
62
+
63
+ **Optimize stop conditions** (from execute terminal outcomes):
64
+
65
+ | Outcome | Action |
66
+ |---------|--------|
67
+ | `"target reached: {value}"` | Confirm task [x], write optimize completion (3.7), report, continue |
68
+ | `"max cycles reached, best: {value}"` | Confirm task [x], write optimize completion (3.7), report, continue |
69
+ | `"circuit breaker: 3 consecutive reverts"` | Task stays [ ], write failure to experiments (3.7), preserve optimize_state, halt loop |
70
+
71
+ ### 3.7. OPTIMIZE COMPLETION
72
+
73
+ **On target reached or max cycles (task [x]):**
74
+ 1. Write each `failed_hypotheses` entry to `.deepflow/experiments/{spec}--optimize-{task_id}--{slug}--failed.md`
75
+ 2. Write summary to `.deepflow/experiments/{spec}--optimize-{task_id}--summary--{status}.md` with metric/target/direction/baseline/best/cycles/history table
76
+ 3. Clear `optimize_state` from auto-memory.yaml
77
+
78
+ **On circuit breaker halt:** Same experiment writes but with status `circuit_breaker`. Preserve `optimize_state` in auto-memory.yaml (add `halted: circuit_breaker` note).
79
+
80
+ ### 4. UPDATE REPORT
81
+
82
+ Write to `.deepflow/auto-report.md` — append each cycle, never overwrite. First cycle creates skeleton, subsequent cycles update in-place.
83
+
84
+ **File sections:** Summary table, Cycle Log, Probe Results, Optimize Runs, Secondary Metric Warnings, Health Score, Reverted Tasks.
85
+
86
+ #### Per-cycle update rules
87
+
88
+ | Section | When | Action |
89
+ |---------|------|--------|
90
+ | Cycle Log | Every cycle | Append row: `cycle | task_id | status | commit/reverted | delta | metric_delta | reason | timestamp` |
91
+ | Summary | Every cycle | Recalculate from Cycle Log: total cycles, committed, reverted, optimize cycles/best (if applicable) |
92
+ | Last updated | Every cycle | Overwrite timestamp |
93
+ | Probe Results | Probe/spike task | Append row from `probe_learnings` in auto-memory.yaml |
94
+ | Optimize Runs | Optimize terminal event | Append row: task/metric/baseline/best/target/cycles/status |
95
+ | Secondary Metric Warnings | >5% regression | Append row (severity: WARNING, advisory only — no auto-revert) |
96
+ | Health Score | Every cycle | Replace with latest: tests passed, build status, ratchet green/red, optimize status |
97
+ | Reverted Tasks | On revert | Append row from `revert_history` |
98
+
99
+ **Status values:** `passed`, `failed` (reverted), `skipped` (already done), `optimize` (inner cycle).
100
+
101
+ **Delta format:** `tests: {before}→{after}, build: ok/fail`. Include coverage if available. On revert, show regression.
102
+
103
+ **Optimize status in Health Score:** `in_progress` | `reached` | `max_cycles` | `circuit_breaker` | `—` (omit row if no optimize tasks in PLAN.md).
104
+
105
+ ### 5. CHECK COMPLETION
106
+
107
+ Count `[x]` and `[ ]` tasks in PLAN.md. Per-spec verify+merge happens in `/df:execute` step 8 automatically.
108
+
109
+ - **No `[ ]` remaining:** "All specs verified and merged. Workflow complete." → exit
110
+ - **Tasks remain:** "Cycle complete. {N} tasks remaining." → exit (next /loop invocation picks up)
+
+ ## Rules
+
+ | Rule | Detail |
+ |------|--------|
+ | One task per cycle | Fresh context each invocation — no multi-task batching |
+ | Bootstrap = sole task | No regular task runs in a bootstrap cycle |
+ | Idempotent | Safe to call with no work — reports "0 tasks remaining" |
+ | Never modifies PLAN.md | `/df:execute` handles PLAN.md updates |
+ | Auto-memory after every cycle | `task_results`, `revert_history`, `consecutive_reverts` always written |
+ | Circuit breaker halts loop | Default 3 consecutive reverts (configurable: `circuit_breaker_threshold` in config.yaml) |
+ | One optimize at a time | Defers other optimize tasks until the active one terminates |
+ | Optimize resumes across contexts | `optimize_state.task_id` overrides the normal scan |
+ | Optimize CB preserves state | On halt: task stays `[ ]`, optimize_state kept for diagnosis |
+ | Secondary metric regression advisory | >5% = WARNING in report, never auto-revert |
+ | Optimize completion writes experiments | Failed hypotheses + summary to `.deepflow/experiments/` |
+
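The circuit-breaker rule can be sketched as below. The field names (`consecutive_reverts`, `circuit_breaker_threshold`) follow this document; the function itself is illustrative, not deepflow's implementation.

```javascript
// Hypothetical sketch of the circuit-breaker rule: halt the loop once any
// task accumulates `threshold` consecutive reverts (default 3, per the table).
function circuitBreakerTripped(autoMemory, config) {
  const threshold = (config && config.circuit_breaker_threshold) || 3;
  const reverts = (autoMemory && autoMemory.consecutive_reverts) || {};
  const tripped = Object.entries(reverts).find(([, count]) => count >= threshold);
  return tripped ? { taskId: tripped[0], count: tripped[1] } : null;
}
```

A non-null return corresponds to the "Loop halted" message in the second example below; the task stays `[ ]` for manual resolution.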
+ ## Example
+
+ ### Normal Cycle
+ ```
+ /df:auto-cycle
+ Loading PLAN.md... 3 tasks, 1 done, 2 pending
+ Next: T2 (T1 satisfied)
+ Running: /df:execute T2 → ✓ ratchet passed (abc1234)
+ Updated auto-report.md: cycles=2, committed=2
+ Cycle complete. 1 tasks remaining.
+ ```
+
+ ### Circuit Breaker Tripped
+ ```
+ /df:auto-cycle
+ Loading PLAN.md... 3 tasks, 1 done, 2 pending
+ Next: T3
+ Running: /df:execute T3 → ✗ ratchet failed — "2 tests regressed"
+ Circuit breaker: consecutive_reverts[T3] = 3 (threshold: 3)
+ Loop halted. Resolve T3 manually, then resume.
+ ```
@@ -82,9 +82,8 @@ quality:
  # Retry flaky tests once before failing (default: true)
  test_retry_on_fail: true

- # Enable L5 browser verification after tests pass (default: false)
- # When true, deepflow will start the dev server and run visual checks
- browser_verify: false
+ # Three-state: true (force L5), false (skip L5), absent/commented (auto-detect from package.json + browser_assertions)
+ # browser_verify:
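The three-state behavior of `browser_verify` can be sketched as below. The auto-detect inputs (`pkgHasDevServer`, `hasBrowserAssertions`) are hypothetical stand-ins for deepflow's real package.json and browser_assertions checks, and requiring both signals is an assumption.

```javascript
// Hypothetical sketch of the three-state browser_verify resolution:
// true forces L5, false skips it, undefined (absent/commented) auto-detects.
function resolveBrowserVerify(configValue, pkgHasDevServer, hasBrowserAssertions) {
  if (configValue === true || configValue === false) {
    return configValue; // an explicit setting always wins
  }
  // Auto-detect path: assumed to require both signals
  return Boolean(pkgHasDevServer && hasBrowserAssertions);
}
```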

  # Override the dev server start command for browser verification
  # If empty, deepflow will attempt to auto-detect (e.g., "npm run dev", "yarn dev")
@@ -96,6 +95,30 @@ quality:
  # Timeout in seconds to wait for the dev server to become ready (default: 30)
  browser_timeout: 30

+ # Ratchet configuration for the /df:verify health gate
+ # Ratchet snapshots baseline metrics (tests passing, coverage, type checks) before execution
+ # and ensures subsequent runs don't regress. These overrides control which commands ratchet monitors.
+ ratchet:
+   # Override the auto-detected build command for ratchet health checks
+   # If empty, ratchet detects it from indicator files (package.json scripts, Cargo.toml, etc.)
+   # Examples: "npm run build", "cargo build", "go build ./..."
+   build_command: ""
+
+   # Override the auto-detected test command for ratchet health checks
+   # If empty, ratchet detects it from indicator files
+   # Examples: "npm test", "pytest", "go test ./...", "cargo test"
+   test_command: ""
+
+   # Override the auto-detected typecheck command for ratchet health checks
+   # If empty, ratchet detects it from indicator files (e.g., "tsc --noEmit" for TypeScript)
+   # Examples: "tsc --noEmit", "mypy .", "cargo check"
+   typecheck_command: ""
+
+   # Override the auto-detected lint command for ratchet health checks
+   # If empty, ratchet detects it from indicator files
+   # Examples: "eslint .", "flake8", "cargo clippy"
+   lint_command: ""
+
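Each ratchet override follows the same convention: a non-empty value wins, an empty string falls back to auto-detection. A minimal sketch, assuming a `detectedDefaults` map standing in for deepflow's indicator-file detection (the function name is hypothetical):

```javascript
// Hypothetical sketch of the override convention above: a non-empty configured
// command is used as-is; an empty string defers to auto-detection.
function resolveRatchetCommands(ratchetConfig, detectedDefaults) {
  const resolved = {};
  for (const key of ['build_command', 'test_command', 'typecheck_command', 'lint_command']) {
    const override = (ratchetConfig && ratchetConfig[key]) || '';
    resolved[key] = override.trim() !== '' ? override : detectedDefaults[key];
  }
  return resolved;
}
```

So a config that sets only `test_command: "pytest"` keeps auto-detected build, typecheck, and lint commands.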
  # deepflow-dashboard team mode settings
  # dashboard_url: URL of the shared team server for POST ingestion
  # Leave blank (or omit) to use local-only mode (no data is pushed)
@@ -1,67 +0,0 @@
- #!/usr/bin/env node
- /**
-  * deepflow consolidation checker
-  * Checks if decisions.md needs consolidation, outputs suggestion if overdue
-  */
-
- const fs = require('fs');
- const path = require('path');
-
- const DAYS_THRESHOLD = 7;
- const LINES_THRESHOLD = 20;
- const DEEPFLOW_DIR = path.join(process.cwd(), '.deepflow');
- const DECISIONS_FILE = path.join(DEEPFLOW_DIR, 'decisions.md');
- const LAST_CONSOLIDATED_FILE = path.join(DEEPFLOW_DIR, 'last-consolidated.json');
-
- function checkConsolidation() {
-   try {
-     // Check if decisions.md exists
-     if (!fs.existsSync(DECISIONS_FILE)) {
-       process.exit(0);
-     }
-
-     // Check if decisions.md has more than LINES_THRESHOLD lines
-     const decisionsContent = fs.readFileSync(DECISIONS_FILE, 'utf8');
-     const lineCount = decisionsContent.split('\n').length;
-     if (lineCount <= LINES_THRESHOLD) {
-       process.exit(0);
-     }
-
-     // Get last consolidated timestamp
-     let lastConsolidated;
-     if (fs.existsSync(LAST_CONSOLIDATED_FILE)) {
-       try {
-         const data = JSON.parse(fs.readFileSync(LAST_CONSOLIDATED_FILE, 'utf8'));
-         if (data.last_consolidated) {
-           lastConsolidated = new Date(data.last_consolidated);
-         }
-       } catch (e) {
-         // Fall through to use mtime
-       }
-     }
-
-     // Fallback: use mtime of decisions.md
-     if (!lastConsolidated || isNaN(lastConsolidated.getTime())) {
-       const stat = fs.statSync(DECISIONS_FILE);
-       lastConsolidated = stat.mtime;
-     }
-
-     // Calculate days since last consolidation
-     const now = new Date();
-     const diffMs = now - lastConsolidated;
-     const diffDays = Math.floor(diffMs / (1000 * 60 * 60 * 24));
-
-     if (diffDays >= DAYS_THRESHOLD) {
-       process.stderr.write(
-         `\u{1F4A1} decisions.md hasn't been consolidated in ${diffDays} days. Run /df:consolidate to clean up.\n`
-       );
-     }
-   } catch (e) {
-     // Fail silently
-   }
-
-   process.exit(0);
- }
-
- checkConsolidation();