wogiflow 2.22.4 → 2.24.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -2,26 +2,84 @@
  description: "Parallel hypothesis debugging - spawns agents for competing theories"
  effort: high
  ---
- Parallel hypothesis debugging - spawns multiple agents to investigate competing theories simultaneously.
+ Parallel hypothesis debugging spawns multiple agents to investigate competing theories simultaneously. **v2.23.0+**: now runs IGR-style assumption surfacing and scope verification BEFORE generating hypotheses, and a cross-hypothesis adversary step AFTER investigation to catch "three variations of the same wrong theory."
 
  ## Usage
 
  ```
  /wogi-debug-hypothesis "description of the bug or unexpected behavior"
+ /wogi-debug-hypothesis --no-assumptions   # skip Tier 2 assumption gate (not recommended)
+ /wogi-debug-hypothesis --no-adversary     # skip cross-hypothesis adversary
  ```
 
- ## How It Works
+ ## How It Works (v2.23.0+)
 
- 1. **Analyze** the bug description to generate 2-3 competing hypotheses
- 2. **Spawn** parallel Task agents, each investigating one hypothesis
- 3. **Consolidate** findings into a single diagnosis
- 4. **Recommend** the most likely root cause with evidence
+ ```
+ Bug description
+
+ Step 0: Scope-Confidence pre-check: verify files/modules you plan to
+         investigate actually exist (regex → grep); halt on contradictions
+
+ Step 1: Assumption Surfacing (Tier 2) — list domain-model assumptions
+         your hypotheses will depend on; WAIT for user confirmation
+
+ Step 2: Generate 2-3 hypotheses grounded in CONFIRMED assumptions
+
+ Step 3: Spawn parallel Explore agents per hypothesis (READ-ONLY)
+
+ Step 3.5: Consolidate findings
+
+ Step 4: Hypothesis Adversary — spawn an adversary on a different model to
+         challenge the winning hypothesis against the rejected ones and
+         surface any overlap that suggests the diagnosis missed the real cause
+
+ Step 5: Final diagnosis with confidence + suggested fix
+ ```
+
+ The "user is the adversary" assumption pass (Step 1) is the single most
+ important addition — it forces hypothesis generation to be grounded in
+ facts rather than whatever the AI's first read of the bug happened to be.
 
  ## Execution Steps
 
- ### Step 1: Generate Hypotheses
+ ### Step 0: Scope-Confidence Pre-Check (v2.23.0+)
+
+ Before generating hypotheses, extract noun-phrases from the bug description that look like code entities (function names, filenames, module names, API endpoints) and verify they exist in the codebase:
+
+ ```javascript
+ const gates = require('wogiflow/scripts/flow-story-gates');
+ const audit = gates.auditScopeConfidence(ARGUMENTS);
+ ```
+
+ If any assumption is **CONTRADICTED** (the user refers to a component that doesn't exist), HALT and ask: "You mentioned `<X>` but I don't see it in the codebase. Did you mean `<nearest match>`, or is `<X>` created under a different name?"
+
+ This catches the "investigate imaginary code" failure mode before wasting a parallel agent run.
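That `<nearest match>` suggestion can be sketched as plain edit-distance ranking over the entities that do exist. This is an illustration only: `precheckScope` and `levenshtein` are hypothetical names, and the real `auditScopeConfidence` may match names differently.

```javascript
// Hypothetical sketch: classify each mentioned entity and, for a
// CONTRADICTED one, suggest the closest real name by edit distance.
function levenshtein(a, b) {
  const d = Array.from({ length: a.length + 1 }, (_, i) =>
    [i, ...Array(b.length).fill(0)]);
  for (let j = 1; j <= b.length; j++) d[0][j] = j;
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      d[i][j] = Math.min(
        d[i - 1][j] + 1,                                   // deletion
        d[i][j - 1] + 1,                                   // insertion
        d[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1)  // substitution
      );
    }
  }
  return d[a.length][b.length];
}

function precheckScope(mentioned, existing) {
  return mentioned.map(name => {
    if (existing.includes(name)) return { name, verdict: 'VERIFIED' };
    const nearest = existing
      .map(e => ({ e, dist: levenshtein(name, e) }))
      .sort((x, y) => x.dist - y.dist)[0];
    return { name, verdict: 'CONTRADICTED', suggestion: nearest ? nearest.e : null };
  });
}

const report = precheckScope(
  ['SessionManger', 'flow-io'],
  ['SessionManager', 'flow-io', 'flow-paths']
);
// 'SessionManger' → CONTRADICTED with suggestion 'SessionManager'; 'flow-io' → VERIFIED
```

The real gate presumably also handles case folding and partial-path matches; the point here is only the shape of the verdict list.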
+
+ ### Step 1: Assumption Surfacing (Tier 2 — MANDATORY unless `--no-assumptions`)
+
+ Before generating ANY hypothesis, identify the domain-model assumptions your theories will depend on. Present them in a fenced block and **WAIT** for user confirmation.
+
+ ```
+ ━━━ ASSUMPTIONS (confirm before I generate hypotheses) ━━━
+ The bug description is: "[ARGUMENTS]"
+
+ My hypothesis generation will assume:
+ 1. <assumption about what X does>
+ 2. <assumption about when Y fires>
+ 3. <assumption about data shape>
+ 4. <assumption about call ordering>
+
+ Do these match your understanding? [confirm / correct <N>]
+ ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
+ ```
+
+ **Do NOT proceed to Step 2 while waiting.** The user's response becomes the ground truth for hypothesis generation.
+
+ **Rationale** (from `config.researchReasoningGate`): same-model self-critique rubber-stamps. The USER is the effective adversary — they validate the domain model before the AI builds theories on invisible guesses. Three hypotheses rooted in the same wrong assumption are still all wrong.
 
- Read the bug description from ARGUMENTS and generate 2-3 hypotheses.
+ ### Step 2: Generate Hypotheses
+
+ Using the confirmed assumptions from Step 1, generate 2-3 hypotheses.
 
  For each hypothesis, identify:
  - **Theory**: What might be causing this
@@ -52,7 +110,7 @@ Spawning 3 investigation agents in parallel...
  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
  ```
 
- ### Step 2: Spawn Parallel Investigators
+ ### Step 3: Spawn Parallel Investigators
 
  Launch one Task agent per hypothesis. **All agents must be launched in a single message** (parallel Task calls).
 
@@ -91,7 +149,7 @@ IMPORTANT: Only use read-only tools (Glob, Grep, Read, WebSearch, WebFetch). Do
 
  Use `subagent_type=Explore` for all investigation agents.
 
- ### Step 3: Consolidate Findings
+ ### Step 3.5: Consolidate Findings
 
  After all agents complete, display the consolidated results:
 
@@ -116,7 +174,61 @@ After all agents complete, display the consolidated results:
  ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
  ```
 
- ### Step 4: Diagnosis
+ ### Step 4: Hypothesis Adversary (v2.23.0+ — MANDATORY unless `--no-adversary`)
+
+ After consolidation, spawn a single Agent (different `model` param if `config.hybrid.enabled`, else same) with this prompt:
+
+ ```
+ You are the hypothesis adversary.
+
+ Confirmed domain assumptions (from the user):
+ [list from Step 1]
+
+ The 3 hypotheses and their verdicts:
+ H1 [CONFIRMED, confidence HIGH]: <theory>
+    Evidence: [top 3 findings]
+ H2 [REFUTED, confidence HIGH]: <theory>
+    Evidence: ...
+ H3 [INCONCLUSIVE, confidence LOW]: <theory>
+    Evidence: ...
+
+ Winning hypothesis: H1
+
+ Your job (10 min):
+ 1. Does H1's evidence actually prove H1, or could it also be consistent
+    with H2 or H3? List overlapping evidence.
+ 2. Is there a 4th hypothesis that would explain ALL the evidence better
+    than H1 alone?
+ 3. List 1-3 specific reasons H1 might be WRONG despite the verdict.
+ 4. Cite file:line for each concern.
+
+ Output format:
+ {
+   "overlap_risk": "low|medium|high",
+   "alternative_hypothesis": "<new theory or 'none'>",
+   "concerns": [
+     { "concern": "...", "evidence": "<file:line>" }
+   ]
+ }
+ ```
+
+ Present the adversary critique to the user alongside the original diagnosis:
+
+ ```
+ ━━━ DIAGNOSIS ━━━
+ [original Step 5 content]
+
+ ━━━ ADVERSARY CRITIQUE (reviewed by a different model) ━━━
+ Overlap risk: [low|medium|high]
+ Alternative hypothesis: [new theory or "none"]
+ Concerns:
+ • [concern 1] — [file:line]
+ • [concern 2] — [file:line]
+ ```
+
+ One pass only — no iteration loop (this is debug, not implementation).
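Before presenting the critique, the adversary's reply is worth validating defensively. A sketch of that parse, assuming the JSON shape from the prompt above (`parseAdversaryReply` is a hypothetical helper, not a shipped script):

```javascript
// Hypothetical sketch: validate the adversary's JSON reply before
// presenting it, and drop concerns that don't cite file:line evidence.
function parseAdversaryReply(raw) {
  let out;
  try {
    out = JSON.parse(raw);
  } catch {
    return { ok: false, reason: 'adversary reply was not valid JSON' };
  }
  if (!['low', 'medium', 'high'].includes(out.overlap_risk)) {
    return { ok: false, reason: 'overlap_risk must be low|medium|high' };
  }
  if (!Array.isArray(out.concerns)) {
    return { ok: false, reason: 'concerns must be an array' };
  }
  // Rule 4 of the prompt: every concern must cite file:line evidence.
  out.concerns = out.concerns.filter(c => /\S+:\d+/.test(c.evidence || ''));
  return { ok: true, critique: out };
}

const reply = parseAdversaryReply(JSON.stringify({
  overlap_risk: 'medium',
  alternative_hypothesis: 'none',
  concerns: [
    { concern: 'H2 evidence also fits H1', evidence: 'src/cache.js:88' },
    { concern: 'vague hunch', evidence: 'somewhere' }
  ]
}));
// Only the file:line-cited concern survives the filter.
```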
+
+ ### Step 5: Diagnosis
 
  Synthesize the findings into a final diagnosis:
 
@@ -165,6 +165,18 @@ You MAY:
 
  **If the user provides 9 items, the epic MUST contain 9 stories (or items grouped into stories where every item appears as an acceptance criterion). Verify this with a reconciliation count before proceeding.**
 
+ ## P0 Specification-Quality Gates (v2.24.0+)
+
+ Before finalizing an epic, run the same P0 gates `/wogi-story` uses (`scripts/flow-story-gates.js`):
+
+ 1. **Long Input Gate** — ≥40 lines or ≥5 items → route to `/wogi-extract-review`
+ 2. **Item Reconciliation** — already enforced via Anti-Deferral above; use `gates.reconcileItems()` to generate the manifest mechanically
+ 3. **Consumer Impact** — if epic description contains refactor/rename/migrate/etc. → grep consumers, list breaking count, recommend phased migration at ≥5
+ 4. **Scope-Confidence** — extract "new X"/"existing Y" assumptions → classify against codebase → surface CONTRADICTED/UNVERIFIED as Pending Clarifications
+ 5. **Intent Bootstrap Coordination** — schedule IGR bootstrap if missing; respects existing session-state flag
+
+ All fail-open. Call `require('wogiflow/scripts/flow-story-gates')` directly in the epic-creation flow.
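"Fail-open" means a crashing gate reports its failure but never blocks epic creation. A minimal sketch of that contract; the wrapper and the gate stubs are illustrative, only the gate names mirror `flow-story-gates.js`:

```javascript
// Illustrative fail-open wrapper: a gate that throws is reported as
// skipped, and the epic-creation flow continues with the other gates.
function runGateFailOpen(name, gateFn, input) {
  try {
    return { gate: name, ...gateFn(input) };
  } catch (err) {
    // Gate errors are surfaced in the result, never rethrown.
    return { gate: name, skipped: true, error: err.message };
  }
}

const results = [
  runGateFailOpen('reconcileItems', () => ({ itemCount: 9, mappedCount: 9, ok: true })),
  runGateFailOpen('analyzeConsumerImpact', () => { throw new Error('git grep unavailable'); })
];
// results[1] carries the error but nothing aborts.
```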
+
 
  ## Tips
 
  - **Start with epics for major features** - Break down into stories before implementation
@@ -182,6 +182,31 @@ Contradictions are resolved automatically using temporal ordering:
 
  The superseded statement is marked and excluded from story generation.
 
+ ## Item Manifest Export (v2.24.0+)
+
+ After review is complete, the confirmed items can be exported as an **Item Manifest** in the format `/wogi-story`'s P0 item-reconciliation gate consumes directly. This closes the re-extraction loop — when `/wogi-start` routes a long input to `/wogi-extract-review`, the resulting manifest is handed to `/wogi-story` with `bypassLongInput: true` so story creation doesn't re-route back to extraction.
+
+ ```bash
+ flow extract-zero-loss manifest   # JSON: { items, count, bypassLongInput, sourceSessionId, intentBootstrapScheduled }
+ ```
+
+ AI flow:
+ ```
+ User pastes large input
+
+ /wogi-start detects line/item threshold → routes to /wogi-extract-review
+
+ Extract → review → confirm completeness
+
+ flow extract-zero-loss manifest → JSON
+
+ /wogi-story --full-input=<manifest.items.join('\n')> --bypass-long-input
+
+ Item Reconciliation gate sees bypassLongInput, skips Gate 1, runs Gates 2-5 normally
+ ```
+
+ The manifest also carries an `intentBootstrapScheduled` flag so `/wogi-story` won't re-prompt the user if /wogi-extract-review already scheduled IGR bootstrap.
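On the consuming side, the handoff might look like this sketch. The field names follow the documented manifest shape; `buildStoryHandoff` is a hypothetical helper, not an actual wogiflow export:

```javascript
// Hypothetical consumer: turn a confirmed manifest into the arguments
// /wogi-story's P0 gates expect, verifying the count first.
function buildStoryHandoff(manifest) {
  if (manifest.count !== manifest.items.length) {
    throw new Error('manifest count does not match items; re-export before handoff');
  }
  return {
    fullInput: manifest.items.join('\n'),      // consumed by item reconciliation
    bypassLongInput: manifest.bypassLongInput, // skip Gate 1, no re-route to extraction
    skipIntentPrompt: manifest.intentBootstrapScheduled
  };
}

const handoff = buildStoryHandoff({
  items: ['Add retry to uploader', 'Surface 413 errors in UI'],
  count: 2,
  bypassLongInput: true,
  sourceSessionId: 'xr-20260416-01',  // illustrative id format
  intentBootstrapScheduled: true
});
```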
+
 
  ## Files
 
  | File | Location |
@@ -145,6 +145,41 @@ node node_modules/wogiflow/scripts/flow-feature.js progress ft-a1b2c3d4
  | → | In Progress (1-99%) |
  | ✓ | Completed (100%) |
 
+ ## Anti-Deferral Rule (MANDATORY — v2.24.0+)
+
+ **When creating a feature from user input, EVERY item the user provided MUST become a tracked story within the feature.**
+
+ You must NEVER:
+ - Create stories for items 1-3 and silently skip items 4-6 because you judged them as "enhancements"
+ - Label items as "deferred" or "long-term" and exclude them from the feature
+ - Apply your own priority filter to decide which items deserve stories
+
+ You MAY:
+ - Assign different priorities (P0/P1/P2/P3) — but ALL items get stories
+ - Suggest an execution order — but ALL items are tracked in the feature
+ - Ask the user "Should I defer items 4-6?" — explicit user consent is the ONLY valid reason to exclude items
+
+ **If the user provides 5 items, the feature MUST contain 5 stories (or items grouped into stories where every item appears as an acceptance criterion).** Verify with a reconciliation count before proceeding.
+
+ ## P0 Specification-Quality Gates (v2.24.0+)
+
+ When creating a feature from user input, apply the same P0 gates `/wogi-story` uses (from `scripts/flow-story-gates.js`):
+
+ 1. **Long Input Gate** — ≥40 lines or ≥5 items → route to `/wogi-extract-review` first
+ 2. **Item Reconciliation** — ≥3 items → enumerate items as a manifest + verify every item maps to at least one story
+ 3. **Consumer Impact Analysis** — if the feature title/description contains refactor/rename/migrate/replace/consolidate/split/extract/move keywords → run `git grep` on seeded filenames, list likely consumers, recommend phased migration if ≥5 breaking consumers
+ 4. **Scope-Confidence Audit** — extract "new X"/"existing Y"/"the Z service" assumptions from the description, classify against codebase (VERIFIED/CONTRADICTED/UNVERIFIED), write to a Pending Clarifications section on the feature
+ 5. **Intent Bootstrap Coordination** — if IGR is enabled and artifacts missing, schedule bootstrap via session-state flag (do NOT duplicate-prompt)
+
+ All gates fail-open. Use `flow-story-gates.js` directly:
+
+ ```javascript
+ const gates = require('wogiflow/scripts/flow-story-gates');
+ gates.reconcileItems(userInput);
+ gates.analyzeConsumerImpact(title + ' ' + description);
+ gates.auditScopeConfidence(description);
+ ```
+
  ## Tips
 
  - **Features represent user-facing capabilities** - Not technical components
@@ -30,6 +30,20 @@ KEY CONTEXT
  BLOCKERS
  - Waiting on backend team for /preferences endpoint
 
+ WORKSPACE DISPATCH ISSUES (v2.23.0 — manager mode only)
+ ━━━ OVERDUE WORKSPACE DISPATCHES (2) ━━━
+ • wf-a1b2c3d4 → worker-1 | dispatched 47m ago (17m past 30m deadline)
+ • wf-e5f67890 → worker-2 | dispatched 2h3m ago (1h33m past 30m deadline)
+
+ ━━━ LOST DISPATCHES — WORKER RESTARTED WITH EMPTY QUEUE (1) ━━━
+ • wf-55555555 → worker-3 | dispatched 4m ago | still pending after worker restart
+
+ HONESTY CHECK — claim/state contradictions (v2.23.0)
+ ⚠ 2 tasks with claim-vs-state mismatch
+   wf-abc12345 (class A, field notes): "fully completed, shipped yesterday..."
+   wf-def67890 (class B, field result): "0 outages reported, clean rollout..."
+ Review: flow health
+
  RULE VIOLATIONS (deferred from rule creation)
  Rule: "Split Prisma schema into domain files" (added 2026-02-20)
  Violations: 4 across 1 file
@@ -39,30 +39,40 @@ Models are selected once per session and remembered for subsequent runs.
 
  **Config**: Models configured in `.workflow/config.json` under `models.providers`. API keys in `.env`.
 
- ## Review Flow
+ ## Review Flow (v2.23.0+)
 
  ```
  ┌─────────────────────────────────────────────────────────┐
  │ /wogi-peer-review │
  ├─────────────────────────────────────────────────────────┤
  │ 1. Collect code changes (git diff or specified files) │
- │ 2. Generate improvement-focused prompt │
- │ 3. If includeClaude enabled: │
+ │ 2. Classify change size → effort tier: │
+ │    L0/L1 (>10 files) → opus-4-7 xhigh │
+ │    L2 (3-10 files) → sonnet medium │
+ │    L3 (<3 files) → haiku medium │
+ │ 3. Generate improvement-focused prompt │
+ │ 4. If includeClaude enabled: │
  │    - Launch Claude review (Task agent, Explore type) │
- │ 4. External model(s) review via API │
- │ 5. Collect all results │
- │ 6. Compare findings: │
+ │ 5. External model(s) review via API │
+ │ 6. Collect all results, tag each claim with Evidence │
+ │    Tier 0-4 (NONE/STATIC/COMPILED/INTERACTIVE/SHIPPED) │
+ │ 7. Compare findings: │
  │    - All agree → Strong suggestion │
  │    - Partial agree → Present perspectives │
  │    - Disagree → Surface disagreement │
- │ 7. Claude synthesizes and responds to feedback │
- │ 8. Output final synthesis │
+ │ 8. Synthesis Adversary (v2.23.0 NEW): │
+ │    spawn cross-model agent on DIFFERENT model to │
+ │    critique the synthesis itself — does "3/3 agreement" │
+ │    actually mean "3 models all hallucinated the same │
+ │    thing"? │
+ │ 9. Claude synthesizes + incorporates adversary critique │
+ │ 10. Output final synthesis with evidence-tier per claim │
  └─────────────────────────────────────────────────────────┘
  ```
 
  ## Review Prompt Template
 
- The peer review prompt focuses on improvements, not correctness:
+ The peer review prompt focuses on improvements, not correctness, and now demands evidence tiers per claim (v2.23.0+):
 
  ```
  Review this code for IMPROVEMENT OPPORTUNITIES, not bugs:
@@ -73,9 +83,47 @@ Review this code for IMPROVEMENT OPPORTUNITIES, not bugs:
  4. **Readability**: Could this be clearer/simpler?
  5. **Extensibility**: Will this be easy to extend?
 
- Respond with: specific suggestions, alternative approaches, trade-off analysis.
+ For EACH suggestion, tag it with an evidence tier:
+   Tier 0 (NONE) — no evidence, pure opinion
+   Tier 1 (STATIC) — based on reading the code
+   Tier 2 (COMPILED) — would affect compile / type errors
+   Tier 3 (INTERACTIVE) — would affect runtime behavior observably
+   Tier 4 (SHIPPED) — you can cite a production incident
+
+ Respond with: specific suggestions, alternative approaches, trade-off
+ analysis, EACH carrying an explicit evidence tier.
  ```
 
+ ## Synthesis Adversary (v2.23.0+ — MANDATORY unless `--no-adversary`)
+
+ After initial synthesis, spawn a single adversary agent on a DIFFERENT model from the synthesizer (default: if synthesizer is Opus, adversary is Sonnet; config via `peerReview.adversaryModel`). Prompt:
+
+ ```
+ You are the synthesis adversary.
+
+ Synthesis claims:
+ • [claim 1, tier 2, agreed by 3 models]
+ • [claim 2, tier 1, agreed by 2 models]
+ • [claim 3, tier 0, 1-model unique insight]
+
+ Your job (5 min):
+ 1. For each claim: could the "agreement" be shared hallucination? What
+    refuting evidence could all 3 reviewers have missed?
+ 2. For Tier 0/1 claims: is the evidence actually there in the code, or
+    is it vibes-based?
+ 3. Is there an important suggestion MISSING from the synthesis that a
+    human reviewer would flag?
+
+ Output:
+ {
+   "shared_hallucination_risk": [list of claim indices with reason],
+   "vibes_based_claims": [list of claim indices],
+   "missing_suggestions": [list of suggestions the synthesis missed]
+ }
+ ```
+
+ Merge adversary output into the final report — downgrade any claim flagged as shared-hallucination or vibes-based by one evidence tier.
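The downgrade merge can be sketched as follows; `applyAdversary` is a hypothetical helper, and it assumes the adversary's arrays carry plain claim indices:

```javascript
// Illustrative merge: any claim the adversary flags as shared-hallucination
// or vibes-based drops one evidence tier (floored at Tier 0).
function applyAdversary(claims, adversary) {
  const flagged = new Set([
    ...adversary.shared_hallucination_risk,
    ...adversary.vibes_based_claims
  ]);
  return claims.map((claim, i) =>
    flagged.has(i)
      ? { ...claim, tier: Math.max(0, claim.tier - 1), downgraded: true }
      : claim
  );
}

const merged = applyAdversary(
  [
    { text: 'extract cache layer', tier: 2 }, // flagged: drops to Tier 1
    { text: 'rename util module', tier: 0 }   // flagged: already at the floor
  ],
  { shared_hallucination_risk: [0], vibes_based_claims: [1], missing_suggestions: [] }
);
```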
126
+
79
127
  ## Output
80
128
 
81
129
  ```
@@ -150,5 +198,8 @@ For manual review (no API keys needed): `/wogi-peer-review --manual`
150
198
  | `--json` | Output JSON for automation |
151
199
  | `--verbose` | Show full model responses |
152
200
  | `--create-tasks` | Auto-create tasks for strong agreements |
201
+ | `--no-adversary` | Skip the v2.23.0 synthesis adversary (not recommended for L0/L1 diffs) |
202
+ | `--adversary-model <id>` | Override adversary model (default: cross-model from synthesizer) |
203
+ | `--effort <level>` | Override effort tier (low/medium/high/xhigh/max) — otherwise derived from diff size |
153
204
 
154
205
  ARGUMENTS: {args}
@@ -182,6 +182,34 @@ When `/wogi-plan` is invoked with a description argument, it should:
182
182
  1. Create the plan structure in `.workflow/plans/`
183
183
  2. Enter Claude Code plan mode with the description: `EnterPlanMode` with the plan context
184
184
 
185
+ ## Anti-Deferral Rule (MANDATORY — v2.24.0+)
186
+
187
+ **When creating a plan from user input, EVERY item the user provided MUST become a tracked epic or feature within the plan.**
188
+
189
+ You must NEVER:
190
+ - Create epics for items 1-3 and silently skip items 4-7 because you judged them as "enhancements" or "long-term"
191
+ - Label items as "deferred" and exclude them from the plan
192
+ - Apply your own filter to decide which items deserve tracking
193
+
194
+ You MAY:
195
+ - Assign different priorities (P0/P1/P2/P3) — but ALL items get epics/features
196
+ - Suggest an execution order — but ALL items are tracked in the plan
197
+ - Ask the user "Should I defer items 4-7?" — explicit user consent is the ONLY valid reason to exclude items
198
+
199
+ **If the user provides 7 items, the plan MUST contain 7 tracked sub-items (epics or features, possibly grouped where every item appears as a criterion).** Verify with a reconciliation count before proceeding.
200
+
201
+ ## P0 Specification-Quality Gates (v2.24.0+)
202
+
203
+ When creating a plan from user input, apply the same P0 gates `/wogi-story` uses (`scripts/flow-story-gates.js`):
204
+
205
+ 1. **Long Input Gate** — ≥40 lines or ≥5 items → route to `/wogi-extract-review`
206
+ 2. **Item Reconciliation** — ≥3 items → enumerate manifest + verify every item maps
207
+ 3. **Consumer Impact** — refactor keywords trigger a grep; ≥5 breaking consumers → recommend phased migration
208
+ 4. **Scope-Confidence** — extract "new X"/"existing Y"/"the Z service" claims; classify against codebase; surface contradictions as Pending Clarifications
209
+ 5. **Intent Bootstrap Coordination** — schedule IGR bootstrap if missing (don't re-prompt)
210
+
211
+ All fail-open.
212
+
185
213
  ## Tips
186
214
 
187
215
  - **Plans are for strategic visibility** - Track high-level progress
@@ -10,8 +10,10 @@ Steps:
  3. **Check app-map** - If new components created, verify they're added
  4. **Update progress.md** - Add handoff notes for next session
  5. **Community push** - If `config.community.enabled`, push anonymous learnings
- 6. **Commit changes** - Stage and commit all workflow files
- 7. **Offer to push** - Ask if should push to remote
+ 6. **Completion-claim honesty scan** - Surface done-in-text-but-not-in-status contradictions (2026-04-16 honesty-infra)
+ 7. **Workspace session-end message (v2.23.0+)** - If running inside a workspace manager, write a `heads-up` message to `.workspace/messages/` so workers know no new dispatches are coming
+ 8. **Commit changes** - Stage and commit all workflow files
+ 9. **Offer to push** - Ask whether to push to the remote
 
  Output:
  ```
@@ -328,3 +328,14 @@ Controlled by `.workflow/config.json`:
  - **WebMCP assertions**: ~100 tokens per step (tool call + JSON assertion)
  - **Savings**: ~95% token reduction for a 10-step test flow
  - **Bonus**: Deterministic results (no visual diff ambiguity)
+
+ ## Evidence Tier (v2.24.0+)
+
+ A successful `/wogi-test-browser` run emits **Evidence Tier 4 (SHIPPED)** — the highest tier on the Completion Truth Gate scale. Browser-driven WebMCP tests exercise real DOM interactions with real event listeners, network calls, and state transitions.
+
+ When reporting results:
+ ```json
+ { "evidenceTier": 4, "evidenceTierLabel": "SHIPPED" }
+ ```
+
+ See `/wogi-test` for the full tier scale. L0/L1 tasks touching UI should reach Tier 4 before closing; the Truth Gate (Step 3.9) will downgrade "done" claims that don't.
@@ -99,4 +99,15 @@ During `/wogi-start` Step 3, verify:
  - Run generated tests AFTER implementing → they should all pass
  - If any test passes before implementation → WARNING: test may be trivial
 
+ ## Evidence Tier (v2.24.0+)
+
+ Generated tests that only assert structural properties (class exists, function has signature) emit **Evidence Tier 1 (STATIC)**. Tests that exercise actual behavior emit **Evidence Tier 2 (COMPILED)** once they pass. To reach Tier 3+ (interactive / shipped), tests must make real network or DOM calls — see `/wogi-test` (API/UI) and `/wogi-test-browser` for those.
+
+ When recording test results in `.workflow/verifications/`, include:
+ ```json
+ { "evidenceTier": 1|2, "evidenceTierLabel": "STATIC"|"COMPILED" }
+ ```
+
+ The Completion Truth Gate reads these labels to decide whether "tests pass" is sufficient to accept a "done" claim. L0/L1 tasks cannot close on Tier 1 alone.
+
  ARGUMENTS: {args}
@@ -285,6 +285,31 @@ Include results in the summary:
  Generated Tests: [passed]/[total] passed
  ```
 
+ ## Evidence Tiers (v2.24.0+)
+
+ Every test invocation should emit an **evidence tier** label that the Completion Truth Gate (Step 3.9 in `/wogi-start`) uses to accept or downgrade "done" claims. A task that claims completion without sufficient tier evidence gets surfaced as "implemented (unverified)" rather than rubber-stamped as done.
+
+ | Tier | Label | What it means | Source |
+ |------|-------|---------------|--------|
+ | 0 | NONE | No test ran or all tests silently skipped | N/A |
+ | 1 | STATIC | Code parses / type-checks cleanly | `flow lint`, `flow typecheck` |
+ | 2 | COMPILED | Unit tests pass | `flow-test-integrity.js` generated/unit tests |
+ | 3 | INTERACTIVE | API calls succeed end-to-end (HTTP round-trips, DB reads) | `flow-test-api.js`, `flow-test-ui.js` backend hits |
+ | 4 | SHIPPED | UI interaction succeeds in a real browser (click, submit, see result) | `flow-test-ui.js` browser E2E |
+
+ Output format (JSON mode):
+ ```json
+ {
+   "passed": true,
+   "results": [...],
+   "evidenceTier": 3,
+   "evidenceTierLabel": "INTERACTIVE",
+   "gates": { "static": true, "unit": true, "api": true, "ui": false }
+ }
+ ```
+
+ Any `/wogi-start` task closing with `evidenceTier < 3` for features that touch UI or a service boundary will be flagged by the Truth Gate. L3 tasks (refactor/chore) accept tier 1–2. L0/L1 tasks require tier 3+.
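That acceptance rule can be written as a small threshold check. The L0/L1 and L3 minimums mirror the prose above; the L2 value and the function name are assumptions for illustration:

```javascript
// Sketch of the Truth Gate decision: minimum evidence tier per task class.
// L0/L1 = 3 and L3 = 1 come from the text; L2 = 2 is an assumed middle.
const MIN_TIER = { L0: 3, L1: 3, L2: 2, L3: 1 };

function truthGateVerdict(taskClass, evidenceTier) {
  const required = MIN_TIER[taskClass] ?? 2;
  return evidenceTier >= required
    ? { status: 'done' }
    : { status: 'implemented (unverified)', required, actual: evidenceTier };
}

const flagged = truthGateVerdict('L1', 2);  // UI feature closed on unit tests only
const accepted = truthGateVerdict('L3', 1); // refactor/chore accepts Tier 1-2
```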
+
 
  ## Important Notes
 
  - Testing is **disabled by default** — zero overhead for projects that don't use it

package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "wogiflow",
- "version": "2.22.4",
+ "version": "2.24.0",
  "description": "AI-powered development workflow management system with multi-model support",
  "main": "lib/index.js",
  "bin": {
@@ -10,7 +10,7 @@
  },
  "scripts": {
  "flow": "./scripts/flow",
- "test": "NODE_ENV=test node --test tests/auto-compact-prompt.test.js tests/flow-paths.test.js tests/flow-io.test.js tests/flow-config-loader.test.js tests/flow-damage-control.test.js tests/flow-output.test.js tests/flow-constants.test.js tests/flow-session-state.test.js tests/flow-hooks-integration.test.js tests/flow-utils.test.js tests/flow-security.test.js tests/flow-memory-db.test.js tests/flow-durable-session.test.js tests/flow-skill-matcher.test.js tests/flow-bridge.test.js tests/flow-proactive-compact.test.js tests/flow-cascade-completion.test.js tests/flow-capture-gate.test.js tests/flow-correction-detector-hybrid.test.js tests/flow-promote.test.js tests/flow-archive-runs.test.js tests/flow-memory.test.js tests/flow-hooks-pre-tool-helpers.test.js tests/flow-hooks-bugfix-scope-gate.test.js tests/flow-hooks-routing-gate.test.js tests/flow-hooks-phase-read-gate.test.js tests/flow-hooks-commit-log-gate.test.js tests/flow-hooks-deploy-gate.test.js tests/flow-hooks-todowrite-gate.test.js tests/flow-hooks-git-safety-gate.test.js tests/flow-hooks-scope-mutation-gate.test.js tests/flow-hooks-strike-gate.test.js tests/flow-hooks-component-check.test.js tests/flow-hooks-scope-gate.test.js tests/flow-hooks-implementation-gate.test.js tests/flow-hooks-research-gate.test.js tests/flow-hooks-loop-check.test.js tests/flow-hooks-manager-boundary-gate.test.js tests/flow-hooks-phase-gate.test.js tests/flow-hooks-pre-tool-orchestrator.test.js tests/flow-hooks-observation-capture.test.js tests/flow-hooks-task-gate.test.js tests/flow-durable-session-suspension.test.js tests/flow-health-mcp-scopes.test.js tests/flow-lean-config.test.js tests/flow-workspace-autopickup.test.js tests/flow-worker-boundary-gate.test.js tests/flow-worker-question-classifier.test.js tests/flow-completion-truth-gate-contradictions.test.js tests/flow-structure-sensor.test.js tests/flow-workspace-dispatch-tracking.test.js tests/flow-story-gates.test.js tests/flow-workspace-restart-handoff.test.js 
tests/flow-wogi-claude-wrapper.test.js && NODE_ENV=test node tests/run-quality-gates.test.js",
+ "test": "NODE_ENV=test node --test tests/auto-compact-prompt.test.js tests/flow-paths.test.js tests/flow-io.test.js tests/flow-config-loader.test.js tests/flow-damage-control.test.js tests/flow-output.test.js tests/flow-constants.test.js tests/flow-session-state.test.js tests/flow-hooks-integration.test.js tests/flow-utils.test.js tests/flow-security.test.js tests/flow-memory-db.test.js tests/flow-durable-session.test.js tests/flow-skill-matcher.test.js tests/flow-bridge.test.js tests/flow-proactive-compact.test.js tests/flow-cascade-completion.test.js tests/flow-capture-gate.test.js tests/flow-correction-detector-hybrid.test.js tests/flow-promote.test.js tests/flow-archive-runs.test.js tests/flow-memory.test.js tests/flow-hooks-pre-tool-helpers.test.js tests/flow-hooks-bugfix-scope-gate.test.js tests/flow-hooks-routing-gate.test.js tests/flow-hooks-phase-read-gate.test.js tests/flow-hooks-commit-log-gate.test.js tests/flow-hooks-deploy-gate.test.js tests/flow-hooks-todowrite-gate.test.js tests/flow-hooks-git-safety-gate.test.js tests/flow-hooks-scope-mutation-gate.test.js tests/flow-hooks-strike-gate.test.js tests/flow-hooks-component-check.test.js tests/flow-hooks-scope-gate.test.js tests/flow-hooks-implementation-gate.test.js tests/flow-hooks-research-gate.test.js tests/flow-hooks-loop-check.test.js tests/flow-hooks-manager-boundary-gate.test.js tests/flow-hooks-phase-gate.test.js tests/flow-hooks-pre-tool-orchestrator.test.js tests/flow-hooks-observation-capture.test.js tests/flow-hooks-task-gate.test.js tests/flow-durable-session-suspension.test.js tests/flow-health-mcp-scopes.test.js tests/flow-lean-config.test.js tests/flow-workspace-autopickup.test.js tests/flow-worker-boundary-gate.test.js tests/flow-worker-question-classifier.test.js tests/flow-completion-truth-gate-contradictions.test.js tests/flow-structure-sensor.test.js tests/flow-workspace-dispatch-tracking.test.js tests/flow-story-gates.test.js tests/flow-workspace-restart-handoff.test.js 
tests/flow-wogi-claude-wrapper.test.js tests/flow-wave1-integrations.test.js tests/flow-wave2-integrations.test.js && NODE_ENV=test node tests/run-quality-gates.test.js",
  "test:syntax": "find scripts/ lib/ -name '*.js' -not -path '*/node_modules/*' -exec node --check {} +",
  "lint": "eslint scripts/ lib/ tests/",
  "lint:ci": "eslint scripts/ lib/ tests/ --max-warnings 0",
@@ -390,6 +390,46 @@ function getConfirmedTasks() {
   }));
 }
 
+/**
+ * v2.24.0 — Export confirmed items as an "Item Manifest" compatible with
+ * /wogi-story's item-reconciliation gate (wf-63c0f4cc). Downstream callers
+ * (/wogi-story, /wogi-epics, /wogi-feature, /wogi-plan) can pass this to
+ * their P0 gates as `fullInput` and mark `bypassLongInput: true` so the
+ * re-extraction loop is skipped.
+ *
+ * @returns {{items: Array<string>, count: number, bypassLongInput: true, sourceSessionId: string, intentBootstrapScheduled: boolean}}
+ */
+function exportAsItemManifest() {
+  const session = loadReviewSession();
+  if (!session) throw new Error('No review session active');
+  if (!session.completeness_confirmed) {
+    throw new Error('Cannot export manifest: review not yet confirmed as complete');
+  }
+
+  const items = session.items
+    .filter(i => i.review_status === 'confirmed')
+    .map(i => (i.text || '').trim())
+    .filter(Boolean);
+
+  // Coordinate with Intent Bootstrap (see flow-story-gates.coordinateIntentBootstrap)
+  // so /wogi-start doesn't re-prompt if the user already scheduled bootstrap via
+  // /wogi-story during this session.
+  let intentBootstrapScheduled = false;
+  try {
+    const gates = require('./flow-story-gates');
+    const result = gates.coordinateIntentBootstrap();
+    intentBootstrapScheduled = !!(result && result.scheduled);
+  } catch (_err) { /* non-critical */ }
+
+  return {
+    items,
+    count: items.length,
+    bypassLongInput: true,
+    sourceSessionId: session.id || null,
+    intentBootstrapScheduled
+  };
+}
+
 // =============================================================================
 // DISPLAY HELPERS
 // =============================================================================
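The confirmed-item pipeline inside `exportAsItemManifest` above can be sketched in isolation. This is a minimal reproduction, not the module's actual API: `buildManifest` and the in-memory session literal are hypothetical stand-ins for `loadReviewSession()` and the on-disk review session, and the Intent Bootstrap coordination step is omitted.

```javascript
// Sketch of the v2.24.0 manifest pipeline, assuming a hypothetical
// in-memory session object in place of loadReviewSession().
function buildManifest(session) {
  if (!session || !session.completeness_confirmed) {
    throw new Error('Cannot export manifest: review not yet confirmed as complete');
  }
  const items = session.items
    .filter(i => i.review_status === 'confirmed')  // keep only confirmed items
    .map(i => (i.text || '').trim())               // normalize whitespace
    .filter(Boolean);                              // drop empty strings
  return {
    items,
    count: items.length,
    bypassLongInput: true,               // downstream P0 gates skip re-extraction
    sourceSessionId: session.id || null
  };
}

const manifest = buildManifest({
  id: 'sess-1',
  completeness_confirmed: true,
  items: [
    { review_status: 'confirmed', text: '  Add retry logic  ' },
    { review_status: 'rejected',  text: 'Old idea' },
    { review_status: 'confirmed', text: '' }  // removed by filter(Boolean)
  ]
});
```

Per the JSDoc, a downstream command would pass this object to its P0 gate as `fullInput` with `bypassLongInput: true`.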
@@ -779,6 +819,7 @@ module.exports = {
 
   // Get results
   getConfirmedTasks,
+  exportAsItemManifest, // v2.24.0 — Wave 2 alignment with /wogi-story P0 gates
 
   // Display
   formatReviewStatus,
@@ -858,6 +899,16 @@ if (require.main === module) {
       }
       break;
 
+    case 'manifest':
+      // v2.24.0 — export Item Manifest for downstream /wogi-story coordination
+      try {
+        const manifest = exportAsItemManifest();
+        console.log(JSON.stringify(manifest, null, 2));
+      } catch (err) {
+        console.error(`${c.red}✗ ${err.message}${c.reset}`);
+      }
+      break;
+
     default:
       console.log('Extraction Review Module');
       console.log('Commands:');
@@ -377,9 +377,31 @@ function collectBriefingData() {
     git: gitStatus,
     progress: progressData,
     metrics: sessionState.metrics,
-    suggestedPrompt: null // Will be generated if enabled
+    suggestedPrompt: null, // Will be generated if enabled
+    workspaceOverdue: null, // v2.23.0 — populated below when manager mode
+    honestyHits: [] // v2.23.0 — populated below
   };
 
+  // v2.23.0 — Workspace dispatch surfacing (manager mode only).
+  // If the user is working inside a workspace manager session, surface any
+  // overdue or restart-gap-lost dispatches so the morning briefing catches
+  // what the last manager turn would have caught. Fail-open.
+  try {
+    if (process.env.WOGI_WORKSPACE_ROOT) {
+      const { buildOverdueContext } = require('./hooks/core/overdue-dispatches');
+      const ctx = buildOverdueContext();
+      if (ctx) briefing.workspaceOverdue = ctx;
+    }
+  } catch (_err) { /* non-critical */ }
+
+  // v2.23.0 — Completion-claim honesty scan.
+  // Catches done-word-in-notes-while-status-partial and similar
+  // contradictions across ready.json (uses the honesty-infra from 2026-04-16).
+  try {
+    const { checkCompletionClaimHonesty } = require('./flow-health');
+    briefing.honestyHits = checkCompletionClaimHonesty();
+  } catch (_err) { /* non-critical */ }
+
   // Generate suggested prompt if enabled
   if (morningConfig.generatePrompt !== false) {
     briefing.suggestedPrompt = generateSuggestedPrompt(briefing);
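Both v2.23.0 additions to `collectBriefingData` follow the same fail-open shape: try to enrich the briefing, and on any error keep the default so the briefing is never blocked. A generic sketch of that pattern, with hypothetical names (`enrich`, the source callbacks) that are not part of the module:

```javascript
// Sketch of the fail-open enrichment pattern: each optional data source
// may throw, but a failure only means the default value is kept.
function enrich(briefing, sources) {
  for (const [key, fn] of Object.entries(sources)) {
    try {
      const value = fn();
      if (value != null) briefing[key] = value;  // only overwrite on success
    } catch (_err) { /* non-critical: keep the default */ }
  }
  return briefing;
}

const briefing = enrich(
  { workspaceOverdue: null, honestyHits: [] },  // defaults, as in the diff
  {
    workspaceOverdue: () => '2 dispatches overdue',
    honestyHits: () => { throw new Error('health module unavailable'); }
  }
);
```

Here the second source throws, so `honestyHits` stays at its default while `workspaceOverdue` is populated.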
@@ -532,6 +554,30 @@ function printBriefing(briefing) {
     }
   }
 
+  // v2.23.0 — Workspace dispatch issues (manager mode)
+  if (briefing.workspaceOverdue) {
+    printSection('WORKSPACE DISPATCH ISSUES');
+    for (const line of briefing.workspaceOverdue.split('\n')) {
+      console.log(`  ${line}`);
+    }
+    console.log('');
+  }
+
+  // v2.23.0 — Completion-claim honesty contradictions
+  if (Array.isArray(briefing.honestyHits) && briefing.honestyHits.length > 0) {
+    printSection('HONESTY CHECK — claim/state contradictions');
+    console.log(`  ${color('yellow', '\u26a0')} ${briefing.honestyHits.length} task${briefing.honestyHits.length === 1 ? '' : 's'} with claim-vs-state mismatch`);
+    for (const h of briefing.honestyHits.slice(0, 5)) {
+      const snippet = String(h.snippet || '').slice(0, 80);
+      console.log(`    ${color('dim', `${h.id} (class ${h.class}, field ${h.field}): "${snippet}..."`)}`);
+    }
+    if (briefing.honestyHits.length > 5) {
+      console.log(`    ${color('dim', `... and ${briefing.honestyHits.length - 5} more`)}`);
+    }
+    console.log(`  ${color('dim', 'Review: flow health')}`);
+    console.log('');
+  }
+
   // Changes since last session
   if (morningConfig.showChanges !== false) {
     const changes = briefing.changesSinceLastSession;
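The honesty-hit display rules in `printBriefing` (snippets capped at 80 characters, at most 5 hits printed, the rest summarized) can be sketched as a pure formatter. `formatHonestyHits` is an illustrative name, not the module's API, and the color/section helpers are dropped so the logic stands alone:

```javascript
// Sketch of the honesty-hit display rules from printBriefing:
// cap each snippet at 80 chars, show at most 5 hits, summarize the rest.
function formatHonestyHits(hits) {
  const lines = [
    `${hits.length} task${hits.length === 1 ? '' : 's'} with claim-vs-state mismatch`
  ];
  for (const h of hits.slice(0, 5)) {
    const snippet = String(h.snippet || '').slice(0, 80);
    lines.push(`${h.id} (class ${h.class}, field ${h.field}): "${snippet}..."`);
  }
  if (hits.length > 5) {
    lines.push(`... and ${hits.length - 5} more`);
  }
  return lines;
}

const out = formatHonestyHits([
  { id: 'T-1', class: 'done-word', field: 'notes', snippet: 'x'.repeat(100) },
  { id: 'T-2', class: 'status', field: 'ready', snippet: 'done but partial' }
]);
```

With two hits, this yields a header line plus one line per hit; the 100-character snippet is truncated to 80 before the trailing ellipsis.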
@@ -580,6 +580,53 @@ function runCompletionContradictionScan() {
   }
 }
 
+/**
+ * v2.23.0 — Workspace session-end handoff.
+ *
+ * When running inside a workspace manager session, write a structured
+ * `heads-up` message to .workspace/messages/ so workers (and subsequent
+ * manager sessions) can see that the manager is closing cleanly. Pairs
+ * with the worker-ready / worker-stopped / lost-dispatches protocol
+ * from 2.22.x — gives workers a definitive "manager went away, no new
+ * dispatches coming" signal instead of silent absence.
+ *
+ * Fail-open: any error is logged in DEBUG mode but never blocks session-end.
+ */
+function writeWorkspaceSessionEndMessage() {
+  const workspaceRoot = process.env.WOGI_WORKSPACE_ROOT;
+  if (!workspaceRoot) return;
+  const repo = process.env.WOGI_REPO_NAME;
+  // Only manager-mode sessions emit this signal. Workers use their own
+  // Stop-hook worker-stopped message (see lib/workspace-messages.js).
+  if (repo && repo !== 'manager') return;
+
+  try {
+    const messagesLib = path.resolve(__dirname, '..', 'lib', 'workspace-messages.js');
+    const { createMessage, saveMessage } = require(messagesLib);
+    const msg = createMessage({
+      from: repo || 'manager',
+      to: 'all',
+      type: 'heads-up',
+      subject: 'Manager session ended',
+      body: [
+        'The workspace manager session has ended cleanly.',
+        'No new dispatches will arrive until a new manager session starts.',
+        'Workers finishing current tasks can safely end-of-turn and restart.'
+      ].join('\n'),
+      priority: 'medium',
+      actionRequired: false
+    });
+    saveMessage(workspaceRoot, msg);
+    if (process.env.DEBUG) {
+      console.error(`[session-end] Wrote manager-session-ended message: ${msg.id}`);
+    }
+  } catch (err) {
+    if (process.env.DEBUG) {
+      console.error(`[session-end] Workspace session-end message failed (non-fatal): ${err.message}`);
+    }
+  }
+}
+
 /**
  * v1.7.0: Archive request log if threshold exceeded
  */
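The two early returns in `writeWorkspaceSessionEndMessage` reduce to a small emit-decision predicate. A sketch with the same environment-variable semantics as the diff; `shouldEmitSessionEnd` is an illustrative name, and passing `env` explicitly (rather than reading `process.env`) is only for testability:

```javascript
// Sketch of the emit-decision guards: only workspace sessions in manager
// mode write the session-end heads-up message.
function shouldEmitSessionEnd(env) {
  // Not inside a workspace at all: nothing to announce.
  if (!env.WOGI_WORKSPACE_ROOT) return false;
  const repo = env.WOGI_REPO_NAME;
  // Named non-manager repos are workers; they signal via their own
  // Stop-hook worker-stopped message instead.
  if (repo && repo !== 'manager') return false;
  // Manager mode (explicit 'manager' or unset repo name): emit.
  return true;
}
```

Note the asymmetry: an unset `WOGI_REPO_NAME` still counts as manager mode, matching the diff's `if (repo && repo !== 'manager') return;` guard.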
@@ -1256,6 +1303,9 @@ async function main() {
   // hotfixes[] evidence. Surface-and-prompt, not hard-fail (calibration first).
   runCompletionContradictionScan();
 
+  // v2.23.0 — Workspace session-end message (manager mode only).
+  writeWorkspaceSessionEndMessage();
+
   console.log('');
 
   // Offer to push