@exaudeus/workrail 3.33.0 → 3.34.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (39)
  1. package/dist/cli-worktrain.js +167 -8
  2. package/dist/console-ui/assets/{index-BuJFLLfY.js → index-BVU9OSOb.js} +1 -1
  3. package/dist/console-ui/index.html +1 -1
  4. package/dist/daemon/agent-loop.d.ts +1 -0
  5. package/dist/daemon/agent-loop.js +1 -1
  6. package/dist/daemon/daemon-events.d.ts +17 -1
  7. package/dist/daemon/workflow-runner.d.ts +1 -1
  8. package/dist/daemon/workflow-runner.js +96 -21
  9. package/dist/manifest.json +45 -69
  10. package/dist/mcp/handlers/v2-error-mapping.d.ts +3 -0
  11. package/dist/mcp/handlers/v2-error-mapping.js +2 -0
  12. package/dist/mcp/handlers/v2-execution/advance.js +25 -0
  13. package/dist/mcp/handlers/v2-execution/continue-advance.js +7 -0
  14. package/dist/mcp/transports/http-entry.js +0 -7
  15. package/dist/mcp/transports/stdio-entry.js +0 -8
  16. package/dist/mcp-server.d.ts +0 -2
  17. package/dist/mcp-server.js +1 -42
  18. package/dist/trigger/polled-event-store.js +8 -6
  19. package/dist/v2/durable-core/domain/observation-builder.d.ts +3 -0
  20. package/dist/v2/durable-core/domain/observation-builder.js +2 -2
  21. package/dist/v2/durable-core/domain/prompt-renderer.d.ts +2 -1
  22. package/dist/v2/durable-core/domain/prompt-renderer.js +10 -0
  23. package/dist/v2/usecases/console-service.js +65 -14
  24. package/dist/v2/usecases/console-types.d.ts +1 -0
  25. package/docs/design/bridge-removal-pr-a-candidates.md +115 -0
  26. package/docs/design/bridge-removal-pr-a-design-review.md +79 -0
  27. package/docs/design/bridge-removal-pr-a-implementation-plan.md +203 -0
  28. package/docs/discovery/design-candidates.md +180 -0
  29. package/docs/discovery/design-review-findings.md +110 -0
  30. package/docs/discovery/wr-discovery-goal-reframing.md +303 -0
  31. package/docs/ideas/backlog.md +361 -0
  32. package/package.json +1 -1
  33. package/workflows/wr.discovery.json +58 -7
  34. package/dist/mcp/transports/bridge-entry.d.ts +0 -102
  35. package/dist/mcp/transports/bridge-entry.js +0 -454
  36. package/dist/mcp/transports/bridge-events.d.ts +0 -55
  37. package/dist/mcp/transports/bridge-events.js +0 -24
  38. package/dist/mcp/transports/primary-tombstone.d.ts +0 -21
  39. package/dist/mcp/transports/primary-tombstone.js +0 -51
@@ -0,0 +1,303 @@
# Discovery: Improving wr.discovery Goal Reframing

**Status:** Discovery in progress
**Session:** wr.discovery (Phase 1 -- diagnosis)
**Date:** 2026-04-18

**Artifact strategy:** This document is for human reading. Execution truth (context variables, step notes) lives in WorkRail session state. This doc is updated at each phase.

---

## Context / Ask

The `wr.discovery` workflow (v3.1.0) takes the stated goal at face value. Users often state solutions instead of problems, carry faulty assumptions into the brief, or do not actually know what they want. The goal of this discovery is to diagnose what is specifically weak about the current workflow and identify improvement directions -- not to design the final solution.

---

## Path Recommendation

**Path:** `design_first`

**Rationale:** The dominant risk here is solving the wrong problem. "Make discovery better" is ambiguous. The brief itself suggests one diagnosis (takes requests at face value) but that framing may itself be incomplete -- the problem could be about *when* reframing happens, *how* it happens, *what triggers* it, or something else entirely (e.g., the problem is that the workflow is too sequential rather than too credulous). `design_first` is the right path because the framing is genuinely uncertain and jumping to "add a reframe step" risks solving a surface symptom. A `landscape_first` path would just survey other discovery frameworks -- interesting but not the center of gravity. `full_spectrum` would work but would diffuse effort equally on landscape and framing when framing is the real bottleneck.

**Why not landscape_first:** The landscape (design thinking, Jobs-to-be-Done, 5 Whys, pre-mortem, etc.) is well-known and would mostly confirm what we already know. The interesting question is not "what frameworks exist" but "what structural change to this specific workflow would make it genuinely better."

---

## Constraints / Anti-goals

**Constraints:**
- The improved workflow must remain usable in automated (daemon) contexts where no human is present to answer follow-up questions
- Changes to `wr.discovery.json` must follow the v2 workflow authoring format
- The workflow already has a path-selection mechanism (phase-0); any improvement should integrate with or extend this, not duplicate it
- Must not break existing sessions mid-flight

**Anti-goals:**
- Do not add interactive Q&A that assumes a human will respond (daemon contexts have no human)
- Do not make the workflow so heavy that it becomes a therapy session before doing any actual work
- Do not import external frameworks wholesale -- extract principles, do not copy playbooks
- Do not redesign the full workflow structure -- focus on the goal-reframing problem specifically

---

## Landscape Packet

### Current state summary

`wr.discovery` v3.1.0 has a Phase 0 that:
1. Captures `problemStatement`, `desiredOutcome`, `coreConstraints`, `antiGoals`, `primaryUncertainty`, `knownApproaches`
2. Selects a path: `landscape_first`, `full_spectrum`, `design_first`
3. Creates a design doc

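The captures above can be sketched as a single record type. This is an illustrative shape only -- the field names come from the list above, while the types and the `Phase0Capture` name are assumptions, since the diff does not show the actual workflow schema:

```typescript
// Illustrative sketch: the Phase 0 captures plus the selected path as one
// record. Field names are from the document; the types are assumptions.
type PathRecommendation = "landscape_first" | "full_spectrum" | "design_first";

interface Phase0Capture {
  problemStatement: string;
  desiredOutcome: string;
  coreConstraints: string[];
  antiGoals: string[];
  primaryUncertainty: string;
  knownApproaches: string[];
  pathRecommendation: PathRecommendation; // the path selected in step 2
}
```

Everything downstream (path selection, framing, candidates) reads from this record, which is why misframing in `problemStatement` propagates.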
**What Phase 0 does NOT do:**
- It does not challenge or probe the stated goal
- It does not distinguish between "user stated a problem" vs "user stated a solution"
- It does not surface faulty assumptions in the goal itself
- It does not ask "is this the real problem or is it a symptom?"
- The path selection criteria are about *emphasis* (landscape vs framing), not about *goal validity*

The workflow does have a `design_first` path choice and a "what would make this framing wrong" element in the problemFrameTemplate -- but these apply after the goal is accepted, not before.

### Existing approaches / precedents

**1. Design Thinking (IDEO/Stanford d.school):** The "Empathize" phase is explicitly about setting aside your solution hypothesis and observing real user behavior. Pre-work question: "Are we solving the right problem?" The structural insight: problem reframing is not a step *in* the process -- it is a prerequisite to starting the process.

**2. Jobs-to-be-Done (Christensen):** When a user states "I want a faster horse," the JTBD practitioner asks "What job are you trying to get done?" The goal is to surface the functional, emotional, and social dimensions of the actual outcome -- not the solution form. Structured probe: "What were you doing before? What did you try? What would success look like even if our solution didn't exist?"

**3. 5 Whys (Toyota):** Sequential causal probing. "Why do you want X?" -> "Because Y." "Why Y?" -> "Because Z." Three to five levels of probing reveal whether the stated goal is a root cause or a symptom. Works well for problem goals; less well for opportunity goals.

**4. Pre-mortem (Gary Klein):** "Imagine we did exactly what you asked and it failed. What went wrong?" Forces the requester to articulate hidden assumptions. Particularly effective at surfacing: wrong success criteria, ignored stakeholders, underestimated constraints.

**5. AI-specific: Prompt interrogation patterns:** The "goal elicitation" pattern in agentic AI systems (e.g., AutoGPT system prompts, OpenAI alignment research) distinguishes between *stated objectives* and *intended objectives*. The typical mechanism: present a synthetic interpretation back to the user and ask for confirmation or correction.

**6. Consulting intake (McKinsey, IDEO, etc.):** A structured discovery brief usually has: "What decision do you need to make?" + "What would change if you had the answer?" + "Who else has thought about this?" + "What have you already tried?" The intake is not about validating the solution -- it is about understanding the decision context.

### Option categories

1. **Pre-step interrogation:** Add a Phase -1 (before path selection) that explicitly challenges the goal statement
2. **Integrated reframing in Phase 0:** Extend Phase 0 to include goal challenge as part of classification
3. **Adversarial dual-track:** Run two interpretations in parallel -- "take it literally" vs "reframe it" -- and let synthesis surface the gap
4. **Progressive commitment:** Start with minimal goal acceptance, revisit the framing after each major phase, make reframing explicit at retriage points
5. **Structural annotation:** Add a `goalType` classification (solution-framed, problem-framed, opportunity-framed, decision-framed) that changes how Phase 0 proceeds

### Contradictions / disagreements

- The workflow already has `design_first` as a path and says "Choose `design_first` when the dominant risk is solving the wrong problem" -- but the path selection itself uses the stated goal as input. If the goal is wrong, the path selection can still be wrong.
- Phase 1g (re-triage) exists for course correction after landscape and framing work -- but it triggers on `retriageNeeded` being explicitly set, which assumes the agent identifies the need. A goal stated as a solution could pass through path selection and Phase 1 without triggering retriage.
- The metaGuidance says "Anti-anchoring: do not let the first framing or favorite option dominate the work" -- but this applies to *candidate generation*, not to goal framing. The stated goal is not challenged by this guidance.

### Evidence gaps

- No direct data on how often real discovery sessions start with solution-framed goals vs problem-framed goals
- No post-hoc analysis of sessions where the stated goal turned out to be wrong
- The two specific examples from the brief (MCP simplification, structured output) are known anecdotally but not documented

### Why this matters for path selection

The current workflow's structure means the *path selection step* inherits any misframing in the goal. A solution-framed goal can be classified as `landscape_first` when it should be `design_first`. The goal challenge needs to happen *before* path selection, or path selection needs to be goal-type-aware.

---

## Problem Frame Packet

### Users / stakeholders

- **Primary:** User (human or daemon) submitting a goal to `wr.discovery`
- **Secondary:** Etienne (workflow author) -- wants the workflow to produce genuinely surprising insights, not just organize what was already stated
- **Tertiary:** Downstream consumers of the design document

### Jobs / goals / outcomes

- **Actual job:** "Help me think through this problem so I reach the best decision, even if I described the problem wrong"
- **Stated job:** "Help me with [stated goal]"
- **The gap:** These are often not the same. The user often does not know the gap exists.

### Pains / tensions / constraints

- **T1: Goal-credulous processing:** Phase 0 processes the stated goal as if it is valid. A solution-framed goal ("build X") is treated as if "X is the right solution" is not an assumption.
- **T2: Daemon context constraint:** A daemon session has no human to answer follow-up questions. Any interactive "is this really what you want?" loop requires a human. Daemon sessions must use a non-interactive reframing strategy.
- **T3: Overhead vs signal ratio:** Heavy interrogation adds steps. If the goal is correctly framed (which it often is), the extra steps are pure overhead. The mechanism must be lightweight and skip gracefully when the goal is already well-framed.
- **T4: Reframing is hard to automate:** The AI cannot know what the user "really" wants -- it can only identify structural signals of a poorly-framed goal (solution-framing, missing success criteria, absent alternatives, hidden assumptions).

### Success criteria

1. The workflow identifies solution-framed goals and surfaces the implicit problem before generating candidates
2. The workflow works in daemon contexts (no interactive questioning required)
3. The overhead for a well-framed goal is minimal (a few structural notes, not a full reframing ceremony)
4. The output of a reframed session differs meaningfully from the output of a non-reframed session for the same goal

### Assumptions

- The goal text itself contains structural signals that distinguish "stated solution" from "stated problem" (e.g., "implement X" vs "improve Y" vs "decide between A and B")
- Surfacing the implicit problem does not require user confirmation -- it can be done by the agent itself as a reasoning step
- The most important case is solution-framed goals -- a user who says "add a reframe step to wr.discovery" is actually asking about "how to make discovery better" (this very brief is an example)

### Reframes / HMW questions

- HMW: How might we detect when a stated goal is a solution hypothesis rather than a problem statement -- without requiring human confirmation?
- HMW: How might we ensure the workflow explores the problem space *before* accepting the stated goal's framing?
- HMW: How might we make goal reframing something the agent does *for itself* rather than a ceremony it performs *for the user*?

### What would make this framing wrong

- If the agent model is already good enough at implicit reframing (without explicit prompting), the structural change adds ceremony without value
- If daemon sessions are a small minority of `wr.discovery` uses, an interactive probe could work for the majority case
- If the real problem is not "goal acceptance" but "insufficient candidate diversity" -- i.e., the workflow accepts the goal but generates too-narrow candidates -- then the fix belongs in Phase 3, not Phase 0

---

## Candidate Generation Expectations (design_first)

Because this is a `design_first` pass:
- At least one direction must meaningfully reframe the problem, not just add a "challenge" step
- The candidate set must address the daemon constraint directly -- solutions that require human interaction are non-starters
- Include the simplest change that could work alongside more structural alternatives

---

## Candidate Directions

### Direction A: Goal-type classification in Phase 0 (minimal change)

**Summary:** Extend Phase 0 to classify the stated goal into one of four structural types: `solution_framed` ("build/add/implement X"), `problem_framed` ("improve/fix/reduce Y"), `opportunity_framed` ("explore/understand Z"), `decision_framed` ("choose between A and B"). When the goal is `solution_framed`, Phase 0 must explicitly surface the implicit problem statement before proceeding.

**Mechanism:** Add a `goalType` context variable and a procedure step: "Before selecting a path, classify the goal as `solution_framed`, `problem_framed`, `opportunity_framed`, or `decision_framed`. If `solution_framed`, produce an explicit `impliedProblem` statement: 'The stated goal implies this underlying problem: [X]. Confirm this is correct or surface a different problem frame before proceeding.'"

**Why it fits:** Minimal change. Works in daemon context (no human confirmation needed -- the agent surfaces the implication and continues). Does not add steps; extends Phase 0. The goalType classification is cheap and structural.

**Strongest evidence for it:** The brief itself is a perfect example: "improve wr.discovery goal reframing" is opportunity-framed, but the actual problem statement needs to be made explicit ("the workflow accepts stated goals uncritically"). Phase 0 in the current workflow would process this as landscape/full_spectrum without ever articulating the underlying mechanism.

**Strongest risk against it:** The four-category taxonomy may be too rigid. Many goals are mixed (e.g., "decide whether to build X or Y" is both decision-framed and solution-framed). Also, the agent may classify incorrectly, and without human confirmation in daemon mode, the wrong classification goes uncorrected.

**When it should win:** When the change must be minimal, backward-compatible, and immediately implementable without restructuring the workflow.

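The structural signals Direction A relies on can be illustrated with a verb-pattern heuristic. This is a sketch only -- in the workflow the agent performs the classification, not a regex, and the function and pattern names here are hypothetical:

```typescript
// Illustrative heuristic only: the verb patterns come from Direction A's four
// types ("build/add/implement", "improve/fix/reduce", "explore/understand",
// "choose between"). In the workflow, the agent classifies, not a regex.
type GoalType =
  | "solution_framed"
  | "problem_framed"
  | "opportunity_framed"
  | "decision_framed";

const goalPatterns: Array<[GoalType, RegExp]> = [
  // Order matters: "choose/decide between A and B" should match first.
  ["decision_framed", /\b(choose|decide) between\b/i],
  ["solution_framed", /^(build|add|implement)\b/i],
  ["problem_framed", /^(improve|fix|reduce)\b/i],
  ["opportunity_framed", /^(explore|understand)\b/i],
];

function classifyGoal(goal: string): GoalType | "unclassified" {
  const text = goal.trim();
  for (const [type, pattern] of goalPatterns) {
    if (pattern.test(text)) return type;
  }
  return "unclassified"; // mixed or unusual phrasings need agent judgment
}
```

The "unclassified" fallthrough reflects the mixed-goal risk named above: the taxonomy covers common phrasings, not all of them.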
---

### Direction B: Adversarial goal interrogation as a distinct Phase 0b step

**Summary:** Add a new Phase 0b (between goal capture and path selection) that runs a structured adversarial interrogation of the stated goal. The step always runs and produces: the `impliedProblem`, the `hiddenAssumptions`, and at least one `alternativeFraming`. Path selection happens *after* this step, using the enriched understanding rather than the raw goal.

**Mechanism:** Phase 0b procedure: "(1) Restate the goal as a problem: what must be true for this goal to be the right thing to do? (2) List the 2-3 hidden assumptions the goal takes for granted. (3) Generate one alternative framing: if the stated goal is wrong, what would a better goal be? (4) Decide: is the original framing correct as-stated, or does the workflow proceed under the reframed problem? Set `goalValidated`, `impliedProblem`, `hiddenAssumptions`, `alternativeFraming` in context."

**Why it fits:** Makes reframing structural and always-on rather than path-dependent. The adversarial lens is familiar in WorkRail (adversarial challenge is already used in Phase 3d). This is the same discipline applied earlier in the process. Works in daemon context -- no human response needed, the agent conducts the interrogation with itself.

**Strongest evidence for it:** The two examples in the brief: (1) MCP simplification -- the discovery produced design candidates but missed that the immediate fix was just `artifacts` -- this is a case where the stated goal ("how do we simplify?") was accepted when the real problem was narrower ("what's the cheapest fix right now?"). (2) Structured output -- started with the wrong assumption about mixing `response_format + tools`, which Phase 0 would have surfaced if it asked "what assumptions does this goal take for granted?"

**Strongest risk against it:** Adds a mandatory step. For well-framed goals, the step is overhead with no signal. Also, the agent interrogating its own goal with itself may produce circular reasoning -- it surfaces the assumptions it already expects, not genuinely hidden ones.

**When it should win:** When the problem of goal acceptance is systematic and the overhead of an extra step is acceptable. This is the more thorough solution.

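The four context variables the mechanism sets can be sketched as a shape plus a completeness check. The variable names come from the mechanism above; the types, the `Phase0bOutputs` name, and the guard function are assumptions:

```typescript
// Sketch of the Phase 0b outputs named in the mechanism above. Field names
// come from the document; the types and the guard are assumptions.
interface Phase0bOutputs {
  goalValidated: boolean;        // is the original framing correct as-stated?
  impliedProblem: string;        // the goal restated as a problem
  hiddenAssumptions: string[];   // the 2-3 assumptions the goal takes for granted
  alternativeFraming: string;    // a better goal, if the stated one is wrong
}

// Step (2) asks for 2-3 hidden assumptions and step (3) for one alternative
// framing; a completeness check could enforce that before path selection.
function phase0bComplete(out: Phase0bOutputs): boolean {
  return (
    out.impliedProblem.trim().length > 0 &&
    out.hiddenAssumptions.length >= 2 &&
    out.alternativeFraming.trim().length > 0
  );
}
```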
---

### Direction C: Progressive commitment -- reframing woven across phases

**Summary:** Instead of a single upfront interrogation, weave explicit "is the framing still correct?" moments at multiple points in the workflow: after landscape (Phase 1b/1c), after problem framing (Phase 1e/1f), and explicitly in re-triage (Phase 1g). The mechanism: add a `framingChallenge` requirement at each of these steps -- a single structured question ("what would have to be true for the original goal to be wrong?") rather than a separate step.

**Mechanism:** At Phase 1b (landscape), after summarizing the current state, add: "Before continuing, ask: does the landscape evidence support or challenge the original goal? If it challenges, update `retriageNeeded = true` and note the specific challenge." At Phase 1e/1f (problem framing), the existing `problemFrameTemplate` already has "What would make this framing wrong" -- make this a required non-empty output, not a soft guideline. In Phase 1g (re-triage), add explicit procedure: "Revisit the original goal statement. Is the goal still correct as-stated given what you now know?"

**Why it fits:** Does not add steps. Upgrades existing checkpoints. The re-triage step already exists for this purpose but currently triggers on a set variable rather than mandating goal challenge. Most importantly: distributed reframing is more likely to catch late-arriving information than a single upfront interrogation.

**Strongest evidence for it:** The workflow already has `anti-anchoring` guidance in metaGuidance -- "do not let the first framing or favorite option dominate." This direction makes the same principle apply to the *goal*, not just the *candidates*. It is consistent with the workflow's existing philosophy.

**Strongest risk against it:** If the original goal is wrong in a way that affects path selection itself (e.g., choosing `landscape_first` when `design_first` was needed), distributed reframing happens too late. The path is already chosen; the work is already done in the wrong direction.

**When it should win:** When the problem is primarily about insufficient rigor late in the workflow, not about path selection being skewed by a bad goal.

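The "required non-empty output" upgrade in the mechanism above can be sketched as a small validation. The `framingRisk` field name and the length threshold are assumptions, not workflow schema:

```typescript
// Illustrative check only: Direction C upgrades "What would make this framing
// wrong" from a soft guideline to a required non-empty output. The field name
// `framingRisk` and the minimum length are assumptions.
function hasConcreteFramingRisk(framingRisk: string | undefined): boolean {
  if (!framingRisk) return false;
  const text = framingRisk.trim();
  // Reject blank or formulaic one-word answers; require an actual condition.
  return text.length >= 20;
}
```

A length check enforces form, not substance -- which is exactly the residual risk the document later notes about required-but-formulaic outputs.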
---

## Challenge Notes

**Against Direction A (goal-type classification):**
The four-category taxonomy solves the symptom (goal is solution-framed) but not the underlying mechanism. A goal classified as `problem_framed` ("improve Y") can still contain hidden assumptions. The classification alone is not sufficient -- it needs to be paired with explicit assumption surfacing.

**Against Direction B (adversarial Phase 0b):**
Phase 0b interrogates the goal before the agent has any landscape knowledge. This limits the quality of assumption surfacing -- the agent can only use its prior knowledge, not what it discovers in the codebase. The most surprising hidden assumptions are often ones that only become visible after seeing the actual state of the system.

**Against Direction C (progressive commitment):**
Distributed reframing is weaker than upfront reframing for the specific problem of path selection bias. If the original goal causes the wrong path to be selected, later reframing stages run within the wrong path's constraints. Retriage exists but only triggers when `retriageNeeded` is set -- and an agent anchored to the original framing may not set it.

**Synthesis:** The strongest design would combine A and B: goal-type classification *plus* an adversarial interrogation step. Direction C should also be incorporated -- make the "what would make this framing wrong" field in problemFrameTemplate mandatory and non-empty. But a minimal viable change is Direction B alone.

---

## Resolution Notes

**Primary diagnosis:**
The core weakness is that Phase 0 has no mechanism to distinguish "user stated a real problem" from "user stated a solution hypothesis." Path selection, framing, and candidate generation are all downstream of this -- if the goal is wrong, the entire workflow is scaffolded on the wrong foundation.

**Secondary diagnosis:**
Even when the path is correct, the workflow has weak *mandatory* goal challenge. The `anti-anchoring` guidance in metaGuidance is for candidates, not for the original goal. The `problemFrameTemplate`'s "what would make this framing wrong" section is a soft guideline, not a required output.

**Tertiary diagnosis:**
The re-triage step (Phase 1g) is underused. It only runs when `retriageNeeded = true`, and the agent sets this variable. An anchored agent will not set it.

**Improvement directions (in priority order):**

1. **Highest priority -- Phase 0 goal interrogation:** Add a structured adversarial examination of the stated goal to Phase 0 (or as a small new step before path selection). The key outputs: `goalType`, `impliedProblem`, `hiddenAssumptions`, `alternativeFraming`. This is the primary fix.

2. **Medium priority -- Make "what would make this wrong" mandatory:** In the problemFrameTemplate, change "What would make this framing wrong" from an optional field to a required output with at least one specific, concrete falsification condition.

3. **Medium priority -- Make re-triage always run for full_spectrum and design_first paths:** Remove the `retriageNeeded = true` condition gate on Phase 1g for these paths. For `landscape_first`, keep the gate.

4. **Lower priority -- Goal type affects path selection:** When `goalType = solution_framed`, bias toward `design_first` path selection rather than accepting the default, unless the user has explicitly confirmed that the stated solution is the correct framing.

---

## Decision Log

| Decision | Rationale |
|----------|-----------|
| `design_first` path chosen | The framing of "what's wrong with wr.discovery" is itself uncertain; dominant risk is solving the wrong problem |
| Diagnosis focused on Phase 0 | Phase 0 is where goal acceptance happens; this is the root cause |
| Daemon context preserved as hard constraint | Daemon sessions are real use cases; interactive questioning is not viable |
| C1+C3 hybrid selected (not C2) | C1 extends Phase 0 with goalType/impliedProblem/hiddenAssumptions; C3 strengthens existing checkpoints; C2 (a mandatory Phase 0a) is structurally stronger but adds mandatory overhead for every session (C1, C2, C3 correspond to Directions A, B, C above). YAGNI and graceful-no-op criteria favor C1+C3. |
| 4 refinements added from review | (1) goalType examples in procedure, (2) alternativeFraming in design doc, (3) Phase 1g OR runCondition, (4) specificity instruction for framing-risk required output |
| C2 named as escalation path | If the C1+C3 hybrid proves insufficient for daemon sessions, extract Phase 0a as a mandatory pre-step |
| direct_recommendation resolution | Remaining gap (goalType classification reliability) is a runtime testability question, not a design gap |

---

## Final Summary

### Selected direction: C1+C3 hybrid with 4 refinements

**Confidence band: MEDIUM-HIGH**

### What changes in the workflow

**Phase 0 (phase-0-select-path):**
1. Add to `Capture` list: `goalType` (4-value enum: `solution_framed | problem_framed | opportunity_framed | decision_framed`), `impliedProblem` (required when solution_framed), `hiddenAssumptions` (min 1 when goalType != problem_framed)
2. Add to procedure: "Before selecting a path, classify the goal type using these examples: solution_framed ('add X', 'implement Y', 'build X'), problem_framed ('reduce X', 'fix Y', 'understand why Z'), opportunity_framed ('explore X', 'decide whether Y'), decision_framed ('choose between A and B'). If solution_framed, derive the implied problem and record at least 1 hidden assumption. Generate one alternative framing ('if this goal is wrong, what would a better goal be?') and record it in the design doc."
3. Add to procedure: "Let goalType influence path selection: when goalType = solution_framed, bias toward design_first unless the stated solution is clearly the correct framing."

**Phase 1e and 1f (problem framing steps):**
4. Make 'What would make this framing wrong' a required non-empty output with specificity: "Name ONE concrete falsification condition -- a specific thing that, if discovered to be true, would change the path or direction."

**Phase 1g (retriage):**
5. Change `runCondition` from `{ var: "retriageNeeded", equals: true }` to an OR: `{ or: [{ var: "retriageNeeded", equals: true }, { var: "pathRecommendation", equals: "design_first" }, { var: "pathRecommendation", equals: "full_spectrum" }] }` so retriage always runs for design_first and full_spectrum paths.

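The behavior of the new Phase 1g gate in change 5 can be sketched as follows. The two condition objects are quoted from the change above; the evaluator itself is a hypothetical stand-in for WorkRail's condition engine, which this diff does not show:

```typescript
// Sketch of the proposed Phase 1g gate. The condition objects are quoted from
// change 5 above; the evaluator is an assumed stand-in for WorkRail's engine.
type RunCondition =
  | { var: string; equals: unknown }
  | { or: RunCondition[] };

function evalCondition(cond: RunCondition, ctx: Record<string, unknown>): boolean {
  if ("or" in cond) return cond.or.some((c) => evalCondition(c, ctx));
  return ctx[cond.var] === cond.equals;
}

// Old gate: retriage only runs when the agent explicitly flags it.
const oldGate: RunCondition = { var: "retriageNeeded", equals: true };

// New gate: retriage also always runs for design_first and full_spectrum.
const newGate: RunCondition = {
  or: [
    { var: "retriageNeeded", equals: true },
    { var: "pathRecommendation", equals: "design_first" },
    { var: "pathRecommendation", equals: "full_spectrum" },
  ],
};
```

Under the new gate, a `design_first` session retriages even when an anchored agent never sets `retriageNeeded`, while `landscape_first` keeps the original opt-in behavior.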
### Why this direction wins

- Addresses path-selection bias (the root cause) by making goalType available before path selection
- Works non-interactively (daemon compatible)
- Adds near-zero overhead for correctly-framed goals
- Strengthens three existing weak mechanisms rather than adding ceremony
- Fully backward compatible (new context variables default to unset in existing sessions)

### Strongest alternative: C2 (mandatory Phase 0a)

C2 is more structurally correct -- a mandatory separate step enforces that goal interrogation happens before path selection at the step-execution level. If the C1+C3 hybrid proves insufficient (tested by running a session with a known solution-framed goal), the correct escalation is to extract `phase-0a-goal-interrogation` as a mandatory pre-step before Phase 0.

### Residual risks

1. **goalType misclassification** (MEDIUM): a solution-framed goal classified as opportunity_framed bypasses the impliedProblem derivation. Mitigated by the examples in the procedure and the Phase 1e/1f backstop. C2 is the escalation.
2. **Quality of "what would make this framing wrong" output** (LOW-MEDIUM): a required non-empty field enforces form but not quality. The specificity instruction reduces formulaic responses.
3. **Phase 1g produces trivial output for well-framed sessions** (LOW): an acceptable graceful no-op.

### Next actions

These findings are the input to Phase 2: the `workflow-for-workflows` workflow will design the implementation based on this diagnosis.

1. The wfw workflow should receive: the full diagnosis (Phase 0 is the root cause), the specific changes needed (the 5 changes listed above), the priority order (Phase 0 goalType classification is highest priority), and the decision to implement the C1+C3 hybrid, not C2.
2. After wfw produces the improved workflow, write it to `workflows/wr.discovery.json`.
3. Create a PR on branch `feat/discovery-workflow-improve-goal-reframing`.