deepflow 0.1.48 → 0.1.50

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -41,29 +41,39 @@ npx deepflow --uninstall
  # In your project
  claude
 
- # 1. Discuss what you want to build
- # 2. Generate spec when ready
+ # 1. Explore the problem space
+ /df:discover image-upload
+
+ # 2. Debate tradeoffs (optional)
+ /df:debate upload-strategy
+
+ # 3. Generate spec from conversation
  /df:spec image-upload
 
- # 3. Compare specs to code, generate tasks
+ # 4. Compare specs to code, generate tasks
  /df:plan
 
- # 4. Execute tasks with parallel agents
+ # 5. Execute tasks with parallel agents
  /df:execute
 
- # 5. Verify specs are satisfied
+ # 6. Verify specs are satisfied
  /df:verify
  ```
 
  ## The Flow
 
  ```
- CONVERSATION
- Describe what you want
- LLM asks gap questions
+ /df:discover <name>
+ Socratic questioning (motivation, scope, constraints...)
+ Captures decisions to .deepflow/decisions.md
+
+ /df:debate <topic> ← optional
+ │ 4 perspectives: User Advocate, Tech Skeptic,
+ │ Systems Thinker, LLM Efficiency
+ │ Creates specs/.debate-{topic}.md
 
  /df:spec <name>
- │ Creates specs/{name}.md
+ │ Creates specs/{name}.md from conversation
 
  /df:plan
  │ Checks past experiments (learn from failures)
@@ -130,10 +140,14 @@ Statusline shows context usage. At ≥50%:
 
  | Command | Purpose |
  |---------|---------|
+ | `/df:discover <name>` | Explore problem space with Socratic questioning |
+ | `/df:debate <topic>` | Multi-perspective analysis (4 agents) |
  | `/df:spec <name>` | Generate spec from conversation |
  | `/df:plan` | Compare specs to code, create tasks |
  | `/df:execute` | Run tasks with parallel agents |
  | `/df:verify` | Check specs satisfied |
+ | `/df:note` | Capture decisions from conversation |
+ | `/df:resume` | Session continuity briefing |
  | `/df:update` | Update deepflow to latest |
 
  ## File Structure
@@ -147,6 +161,7 @@ your-project/
  ├── PLAN.md          # active tasks
  └── .deepflow/
      ├── config.yaml      # project settings
+     ├── decisions.md     # captured decisions (/df:note, /df:discover)
      ├── context.json     # context % tracking
      ├── experiments/     # spike results (pass/fail)
      └── worktrees/       # isolated execution
package/bin/install.js CHANGED
@@ -146,7 +146,7 @@ async function main() {
  console.log(`${c.green}Installation complete!${c.reset}`);
  console.log('');
  console.log(`Installed to ${c.cyan}${CLAUDE_DIR}${c.reset}:`);
- console.log(' commands/df/ — /df:discover, /df:debate, /df:spec, /df:plan, /df:execute, /df:verify');
+ console.log(' commands/df/ — /df:discover, /df:debate, /df:spec, /df:plan, /df:execute, /df:verify, /df:note, /df:resume, /df:update');
  console.log(' skills/ — gap-discovery, atomic-commits, code-completeness');
  console.log(' agents/ — reasoner');
  if (level === 'global') {
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "deepflow",
-   "version": "0.1.48",
+   "version": "0.1.50",
    "description": "Stay in flow state - lightweight spec-driven task orchestration for Claude Code",
    "keywords": [
      "claude",
@@ -4,9 +4,9 @@
 
  You coordinate reasoner agents to debate a problem from multiple perspectives, then synthesize their arguments into a structured document.
 
- **NEVER:** Read source files, use Glob/Grep directly, run git, use TaskOutput, use `run_in_background`, use Explore agents, use EnterPlanMode, use ExitPlanMode
+ **NEVER:** use TaskOutput, use `run_in_background`, use Explore agents, use EnterPlanMode, use ExitPlanMode
 
- **ONLY:** Spawn reasoner agents (non-background), write debate file, respond conversationally
+ **ONLY:** Gather codebase context (Glob/Grep/Read), spawn reasoner agents (non-background), write debate file, respond conversationally
 
  ---
 
@@ -35,111 +35,87 @@ Generate a multi-perspective analysis of a problem before formalizing into a spe
 
  ### 1. SUMMARIZE
 
- Summarize the conversation context (from prior discover/conversation) in ~200 words. This summary will be passed to each perspective agent.
+ Summarize conversation context in ~200 words: core problem, key requirements, constraints, user priorities. Passed to each perspective agent.
 
- The summary should capture:
- - The core problem being solved
- - Key requirements mentioned
- - Constraints and boundaries
- - User's stated preferences and priorities
+ ### 2. GATHER CODEBASE CONTEXT
 
- ### 2. SPAWN PERSPECTIVES
+ Ground the debate in what actually exists. Glob/Grep/Read relevant files (up to 5-6, focus on core logic).
 
- **Spawn ALL 4 perspective agents in ONE message (non-background, parallel):**
+ Produce a ~300 word codebase summary: what exists, key interfaces/contracts, current limitations, dependencies. Passed to every perspective agent so they argue from facts, not assumptions.
 
- Each agent receives the same context summary but a different role. Each must:
- - Argue from their perspective
- - Identify risks the other perspectives might miss
- - Propose concrete alternatives where they disagree with the likely approach
+ ### 3. SPAWN PERSPECTIVES
 
- ```python
- # All 4 in a single message — parallel, non-background:
- Task(subagent_type="reasoner", model="opus", prompt="""
- You are the USER ADVOCATE in a design debate.
+ **Spawn ALL 4 perspective agents in ONE message (non-background, parallel):**
 
+ Each agent receives the same preamble + codebase context but a different role lens.
+
+ **Shared preamble for all perspectives:**
+ ```
  ## Context
  {summary}
 
- ## Your Role
- Argue from the perspective of the end user. Focus on:
- - Simplicity and ease of use
- - Real user needs vs assumed needs
- - Friction points and cognitive load
- - Whether the solution matches how users actually think
+ ## Current Codebase
+ {codebase_summary}
 
  Provide:
  1. Your key arguments (3-5 points)
- 2. Risks you see from a user perspective
+ 2. Risks your perspective surfaces
  3. Concrete alternatives if you disagree with the current direction
 
  Keep response under 400 words.
- """)
+ ```
+
+ **Perspective-specific role lenses (append to preamble):**
+
+ ```python
+ # All 4 in a single message — parallel, non-background:
 
  Task(subagent_type="reasoner", model="opus", prompt="""
- You are the TECH SKEPTIC in a design debate.
+ {shared_preamble}
 
- ## Context
- {summary}
+ ## Your Role: USER ADVOCATE
+ Argue from the perspective of the end user. Focus on:
+ - Simplicity and ease of use
+ - Real user needs vs assumed needs
+ - Friction points and cognitive load
+ - Whether the solution matches how users actually think
+ """)
 
- ## Your Role
+ Task(subagent_type="reasoner", model="opus", prompt="""
+ {shared_preamble}
+
+ ## Your Role: TECH SKEPTIC
  Challenge technical assumptions and surface hidden complexity. Focus on:
  - What could go wrong technically
  - Hidden dependencies or coupling
  - Complexity that seems simple but isn't
  - Maintenance burden over time
-
- Provide:
- 1. Your key arguments (3-5 points)
- 2. Technical risks others might overlook
- 3. Simpler alternatives worth considering
-
- Keep response under 400 words.
  """)
 
  Task(subagent_type="reasoner", model="opus", prompt="""
- You are the SYSTEMS THINKER in a design debate.
+ {shared_preamble}
 
- ## Context
- {summary}
-
- ## Your Role
+ ## Your Role: SYSTEMS THINKER
  Analyze how this fits into the broader system. Focus on:
  - Integration with existing components
  - Scalability implications
  - Second-order effects and unintended consequences
  - Long-term evolution and extensibility
-
- Provide:
- 1. Your key arguments (3-5 points)
- 2. Systemic risks and ripple effects
- 3. Architectural alternatives worth considering
-
- Keep response under 400 words.
  """)
 
  Task(subagent_type="reasoner", model="opus", prompt="""
- You are the LLM EFFICIENCY expert in a design debate.
-
- ## Context
- {summary}
+ {shared_preamble}
 
- ## Your Role
+ ## Your Role: LLM EFFICIENCY
  Evaluate from the perspective of LLM consumption and interaction. Focus on:
  - Token density: can the output be consumed efficiently by LLMs?
  - Minimal scaffolding: avoid ceremony that adds tokens without information
  - Navigable structure: can an LLM quickly find what it needs?
  - Attention budget: does the design respect limited context windows?
-
- Provide:
- 1. Your key arguments (3-5 points)
- 2. Efficiency risks others might not consider
- 3. Alternatives that optimize for LLM consumption
-
- Keep response under 400 words.
  """)
  ```
 
- ### 3. SYNTHESIZE
+ ### 4. SYNTHESIZE
 
  After all 4 perspectives return, spawn 1 additional reasoner to synthesize:
 
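The new prompt layout above factors one shared preamble out of the four role prompts. A minimal sketch of that composition, assuming hypothetical names (`ROLES` and `build_prompts` are illustrative, not deepflow's actual code; the role text is taken from the diff above):

```python
# Sketch of step 3's prompt composition: one shared preamble plus four role lenses.
# ROLES and build_prompts are hypothetical names, not deepflow's implementation.
ROLES = {
    "USER ADVOCATE": "Argue from the perspective of the end user.",
    "TECH SKEPTIC": "Challenge technical assumptions and surface hidden complexity.",
    "SYSTEMS THINKER": "Analyze how this fits into the broader system.",
    "LLM EFFICIENCY": "Evaluate from the perspective of LLM consumption and interaction.",
}

def build_prompts(summary: str, codebase_summary: str) -> list[str]:
    """Return four per-agent prompts: the shared preamble plus a role-specific lens."""
    shared_preamble = (
        f"## Context\n{summary}\n\n"
        f"## Current Codebase\n{codebase_summary}\n\n"
        "Provide:\n"
        "1. Your key arguments (3-5 points)\n"
        "2. Risks your perspective surfaces\n"
        "3. Concrete alternatives if you disagree with the current direction\n\n"
        "Keep response under 400 words."
    )
    return [
        f"{shared_preamble}\n\n## Your Role: {role}\n{lens}"
        for role, lens in ROLES.items()
    ]
```

Each resulting string would then be passed to one of the four parallel, non-background reasoner spawns the step requires; only the role lens varies between agents.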
@@ -176,84 +152,25 @@ Keep response under 500 words.
  """)
  ```
 
- ### 4. WRITE DEBATE FILE
-
- Create `specs/.debate-{name}.md`:
-
- ```markdown
- # Debate: {Name}
-
- ## Context
- [~200 word summary from step 1]
-
- ## Perspectives
-
- ### User Advocate
- [arguments from agent]
-
- ### Tech Skeptic
- [arguments from agent]
-
- ### Systems Thinker
- [arguments from agent]
-
- ### LLM Efficiency
- [arguments from agent]
-
- ## Synthesis
-
- ### Consensus
- [from synthesizer]
-
- ### Tensions
- [from synthesizer]
-
- ### Open Decisions
- [from synthesizer]
-
- ### Recommendation
- [from synthesizer]
- ```
-
- ### 5. CONFIRM
+ ### 5. WRITE DEBATE FILE
 
- After writing the file, present a brief summary to the user:
+ Create `specs/.debate-{name}.md` with sections: Context · Codebase Context · Perspectives (User Advocate / Tech Skeptic / Systems Thinker / LLM Efficiency) · Synthesis (Consensus / Tensions / Open Decisions / Recommendation).
 
- ```
- ✓ Created specs/.debate-{name}.md
+ ### 6. CONFIRM
 
- Key tensions:
- - [tension 1]
- - [tension 2]
+ Present key tensions and open decisions, then: `Next: Run /df:spec {name} to formalize into a specification`
 
- Open decisions:
- - [decision 1]
- - [decision 2]
-
- Next: Run /df:spec {name} to formalize into a specification
- ```
+ ### 7. CAPTURE DECISIONS
 
- ### 6. CAPTURE DECISIONS
-
- Extract up to 4 candidates from consensus/resolved tensions. Ask user via `AskUserQuestion(multiSelect=True)` with options like `{ label: "[APPROACH] {decision}", description: "{rationale}" }`.
-
- For confirmed decisions, append to `.deepflow/decisions.md` (create if absent) using format:
- ```
- ### {YYYY-MM-DD} — debate
- - [{TAG}] {decision text} — {rationale}
- ```
- Tags: [APPROACH] directional choices · [PROVISIONAL] tentative · [ASSUMPTION] unverified premises. If a new decision contradicts an existing one, note the conflict inline.
+ Follow the **default** variant from `templates/decision-capture.md`. Command name: `debate`.
 
  ---
 
  ## Rules
 
  - **All 4 perspective agents MUST be spawned in ONE message** (parallel, non-background)
- - **NEVER use `run_in_background`** — causes late notifications that pollute output
- - **NEVER use TaskOutput** — returns full transcripts that explode context
- - **NEVER use Explore agents** — this command doesn't read code
- - **NEVER read source files directly** — agents receive context via prompt only
- - Reasoner agents receive context through their prompt, not by reading files
+ - **Codebase context is gathered by the orchestrator** (step 2) and passed to agents via prompt
+ - Reasoner agents receive context through their prompt, not by reading files themselves
  - The debate file goes in `specs/` so `/df:spec` can reference it
  - File name MUST be `.debate-{name}.md` (dot prefix = auxiliary file)
  - Keep each perspective under 400 words, synthesis under 500 words
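The new step 5 compresses the debate-file template into a section list. A sketch of assembling that file under the documented layout (`write_debate_file` is a hypothetical helper, not deepflow's actual implementation; the section names come from the diff above):

```python
# Sketch of step 5: write specs/.debate-{name}.md with the documented sections.
# write_debate_file is a hypothetical name; section order follows the diff above.
from pathlib import Path

PERSPECTIVES = ["User Advocate", "Tech Skeptic", "Systems Thinker", "LLM Efficiency"]
SYNTHESIS = ["Consensus", "Tensions", "Open Decisions", "Recommendation"]

def write_debate_file(specs_dir, name, context, codebase, perspectives, synthesis):
    """Write the auxiliary debate file (dot prefix) and return its path."""
    parts = [f"# Debate: {name}", "## Context", context,
             "## Codebase Context", codebase, "## Perspectives"]
    for p in PERSPECTIVES:
        parts += [f"### {p}", perspectives.get(p, "")]
    parts.append("## Synthesis")
    for s in SYNTHESIS:
        parts += [f"### {s}", synthesis.get(s, "")]
    path = Path(specs_dir) / f".debate-{name}.md"
    path.write_text("\n\n".join(parts) + "\n")
    return path
```

The dot-prefixed filename matches the rule above that `.debate-{name}.md` is an auxiliary file living in `specs/` so `/df:spec` can reference it.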
@@ -263,13 +180,16 @@ Tags: [APPROACH] directional choices · [PROVISIONAL] tentative · [ASSUMPTION]
  ```
  USER: /df:debate auth
 
- CLAUDE: Let me summarize what we've discussed and get multiple perspectives
- on the authentication design.
+ CLAUDE: Let me summarize what we've discussed and understand the current
+ codebase before getting multiple perspectives on the authentication design.
 
  [Summarizes: ~200 words about auth requirements from conversation]
 
- [Spawns 4 reasoner agents in parallel — User Advocate, Tech Skeptic,
- Systems Thinker, LLM Efficiency]
+ [Globs/Greps/Reads relevant auth files — middleware, routes, config]
+
+ [Produces ~300 word codebase summary of what exists]
+
+ [Spawns 4 reasoner agents in parallel — each receives both summaries]
 
  [All 4 return their arguments]
 
@@ -81,31 +81,12 @@ Example questions:
  - Mix structured questions (AskUserQuestion) with conversational follow-ups
  - Ask follow-up questions based on answers — don't just march through phases mechanically
  - Go deeper on surprising or unclear answers
-
  ### Behavioral Rules
- - **NEVER assume** — if something is ambiguous, ask
- - **NEVER suggest ending** — the user decides when they're done
- - **NEVER take action** — no code reading, no file creation, no agents
- - **NEVER skip phases** — but adapt depth based on the problem
  - Keep your responses short between questions — don't lecture
  - Acknowledge answers briefly before asking the next question
 
  ### Decision Capture
- When the user signals they are ready to move on, before presenting next-step options, extract up to 4 candidate decisions from the session (meaningful choices about approach, scope, or constraints). Present via `AskUserQuestion` with `multiSelect: true`, e.g.:
-
- ```json
- {"questions": [{"question": "Which decisions should be recorded?", "header": "Decisions", "multiSelect": true,
- "options": [{"label": "[APPROACH] Use event sourcing", "description": "Matches audit requirements"}]}]}
- ```
-
- For each confirmed decision, append to `.deepflow/decisions.md` (create if missing):
- ```
- ### {YYYY-MM-DD} — discover
- - [APPROACH] Decision text — rationale
- ```
-
- Tags: `[APPROACH]` firm choice · `[PROVISIONAL]` revisit later · `[ASSUMPTION]` unverified belief.
-
+ Follow the **default** variant from `templates/decision-capture.md`. Command name: `discover`.
  ### When the User Wants to Move On
  When the user signals they want to advance (e.g., "I think that's enough", "let's move on", "ready for next step"):
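Both commands now delegate decision capture to `templates/decision-capture.md`, but the entry format survives in the removed lines above. A sketch of appending in that format (`append_decisions` is a hypothetical helper; only the date heading and tagged bullet shape are taken from the diff):

```python
# Sketch of the decisions.md append format shown in the removed lines above.
# append_decisions is a hypothetical name; the entry shape comes from the diff.
from datetime import date
from pathlib import Path

def append_decisions(project_root, command, decisions):
    """Append confirmed decisions to .deepflow/decisions.md, creating it if missing.

    decisions: iterable of (tag, text, rationale) tuples, with tag one of
    APPROACH, PROVISIONAL, or ASSUMPTION per the tag legend above.
    """
    path = Path(project_root) / ".deepflow" / "decisions.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    lines = [f"### {date.today().isoformat()} — {command}"]
    lines += [f"- [{tag}] {text} — {rationale}" for tag, text, rationale in decisions]
    with path.open("a", encoding="utf-8") as f:
        f.write("\n".join(lines) + "\n\n")
    return path
```

Appending (rather than rewriting) keeps one dated block per capture session, so later entries can contradict earlier ones and the conflict stays visible in the file history.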