deepflow 0.1.28 → 0.1.30

package/README.md CHANGED
@@ -24,6 +24,7 @@
  - **Stay in flow** — Minimize context switches, maximize deep work
  - **Conversational ideation** with proactive gap discovery
  - **Specs define intent**, tasks close reality gaps
+ - **Worktree isolation** — Main branch stays clean during execution
  - **Parallel execution** with context-aware checkpointing
  - **Atomic commits** for clean rollback
 
@@ -70,7 +71,7 @@ CONVERSATION
  │ Renames: feature.md → doing-feature.md
 
  /df:execute
- Follows existing patterns
+ Creates isolated worktree (main stays clean)
  │ Parallel agents, file conflicts serialize
  │ Context-aware (≥50% → checkpoint)
  │ Atomic commit per task
@@ -95,6 +96,14 @@ specs/
 
  **Ongoing:** Detects existing patterns, follows conventions, integrates with current code.
 
+ ## Worktree Isolation
+
+ Execution happens in an isolated git worktree:
+ - Main branch stays clean during execution
+ - On failure, worktree preserved for debugging
+ - Resume with `/df:execute --continue`
+ - On success, changes merged back to main
+
  ## Context-Aware Execution
 
  Statusline shows context usage. At ≥50%:
@@ -124,7 +133,8 @@ your-project/
  └── .deepflow/
      ├── context.json    # context % for execution
      ├── checkpoint.json # resume state
-     └── results/        # agent results
+     └── worktrees/      # isolated execution (main stays clean)
+         └── df/doing-upload/20260202-1430/
  ```
 
  ## Configuration
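The worktree lifecycle the README describes (isolated execution, merge back on success, cleanup) can be sketched with plain git commands. This is an editor's illustration against a throwaway repository, not deepflow's actual implementation; the branch and path names simply mirror the tree above.

```shell
# Illustration only: demonstrate the isolated-worktree lifecycle in a scratch repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=demo@example.com -c user.name=demo commit -q --allow-empty -m "init"

branch="df/doing-upload/20260202-1430"   # timestamped branch, as in the tree above
wt=".deepflow/worktrees/$branch"         # isolated execution directory

# Execute in the worktree; the main branch's working tree stays clean.
git worktree add -q -b "$branch" "$wt"
(
  cd "$wt"
  echo "upload" > upload.py
  git add upload.py
  git -c user.email=demo@example.com -c user.name=demo commit -q -m "T1: upload"
)

# On success: merge back, then remove the worktree and delete the branch.
git merge -q "$branch"
git worktree remove "$wt"
git branch -q -d "$branch"
```

On failure, deepflow instead leaves the worktree in place, which is what makes resuming with `/df:execute --continue` possible.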
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "deepflow",
-   "version": "0.1.28",
+   "version": "0.1.30",
    "description": "Stay in flow state - lightweight spec-driven task orchestration for Claude Code",
    "keywords": [
      "claude",
@@ -2,11 +2,11 @@
 
  ## Orchestrator Role
 
- You spawn agents and poll results. You never implement.
+ You are a coordinator. Spawn agents, wait for results, update PLAN.md. Never implement code yourself.
 
- **NEVER:** Read source files, edit code, run tests, run git (except status), use `TaskOutput`
+ **NEVER:** Read source files, edit code, run tests, run git commands (except status)
 
- **ONLY:** Read `PLAN.md` + `specs/doing-*.md`, spawn background agents, poll `.deepflow/results/`, update PLAN.md
+ **ONLY:** Read PLAN.md, read specs/doing-*.md, spawn background agents, use TaskOutput to get results, update PLAN.md
 
  ---
 
@@ -43,11 +43,16 @@ Statusline writes to `.deepflow/context.json`: `{"percentage": 45}`
 
  ## Agent Protocol
 
- Every task = one background agent. Poll result files, never `TaskOutput`.
+ Each task = one background agent. Use TaskOutput to wait for results. Never poll files in a loop.
 
  ```python
- Task(subagent_type="general-purpose", run_in_background=True, prompt="T1: ...")
- # Poll: Glob(".deepflow/results/T*.yaml")
+ # Spawn agents in parallel (single message, multiple Task calls)
+ task_id_1 = Task(subagent_type="general-purpose", run_in_background=True, prompt="T1: ...")
+ task_id_2 = Task(subagent_type="general-purpose", run_in_background=True, prompt="T2: ...")
+
+ # Wait for all results (single message, multiple TaskOutput calls)
+ TaskOutput(task_id=task_id_1)
+ TaskOutput(task_id=task_id_2)
  ```
 
  Result file `.deepflow/results/{task_id}.yaml`:
@@ -211,15 +216,22 @@ Ready = `[ ]` + all `blocked_by` complete + experiment validated (if applicable)
 
  Context ≥50%: checkpoint and exit.
 
- **Use Task tool to spawn all ready tasks in ONE message (parallel):**
+ **CRITICAL: Spawn ALL ready tasks in a SINGLE response with MULTIPLE Task tool calls.**
+
+ DO NOT spawn one task, wait, then spawn another. Instead, call the Task tool multiple times in the SAME message block. This enables true parallelism.
+
+ Example: if T1, T2, T3 are ready, send ONE message containing THREE Task tool invocations:
+
  ```
- Task tool parameters for each task:
- - subagent_type: "general-purpose"
- - model: "sonnet"
- - run_in_background: true
- - prompt: "{task details from PLAN.md}"
+ // In a SINGLE assistant message, invoke Task THREE times:
+ Task(subagent_type="general-purpose", model="sonnet", run_in_background=true, prompt="T1: ...")
+ Task(subagent_type="general-purpose", model="sonnet", run_in_background=true, prompt="T2: ...")
+ Task(subagent_type="general-purpose", model="sonnet", run_in_background=true, prompt="T3: ...")
  ```
 
+ **WRONG (sequential):** Send a message with Task for T1 → wait → send a message with Task for T2 → wait → ...
+ **RIGHT (parallel):** Send ONE message with Task calls for T1, T2, T3 together.
+
  Same-file conflicts: spawn sequentially instead.
 
  **Spike Task Execution:**
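The scheduling rules in this hunk (a task is ready once its blockers are done; same-file conflicts serialize into later waves) can be sketched in plain Python. The task shape and the `plan_wave` helper are illustrative assumptions, not deepflow's actual code:

```python
# Illustrative sketch only -- the task shape and helper name are assumed,
# not deepflow's API.
def plan_wave(tasks, done):
    """Pick the next wave: ready tasks, with same-file conflicts serialized."""
    ready = [t for t in tasks
             if t["id"] not in done and set(t["blocked_by"]) <= done]
    wave, claimed_files = [], set()
    for t in ready:
        if claimed_files & set(t["files"]):
            continue  # same-file conflict: defer to a later wave (sequential)
        claimed_files |= set(t["files"])
        wave.append(t["id"])
    return wave

tasks = [
    {"id": "T1", "blocked_by": [], "files": ["upload.py"]},
    {"id": "T2", "blocked_by": [], "files": ["upload.py"]},  # conflicts with T1
    {"id": "T3", "blocked_by": [], "files": ["api.py"]},
    {"id": "T4", "blocked_by": ["T1"], "files": ["tests.py"]},
]
print(plan_wave(tasks, done=set()))         # → ['T1', 'T3']  (T2 deferred, T4 blocked)
print(plan_wave(tasks, done={"T1", "T3"}))  # → ['T2', 'T4']
```

Each wave's task IDs would then become one batch of parallel Task calls, as the hunk above prescribes.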
@@ -375,7 +387,18 @@ When all tasks done for a `doing-*` spec:
 
  ### 10. ITERATE
 
- Repeat until: all done, all blocked, or checkpoint.
+ After spawning agents, wait for results using TaskOutput. Call TaskOutput for ALL running agents in a SINGLE message (parallel wait).
+
+ ```python
+ # After spawning T1, T2, T3 in parallel, wait for all in parallel:
+ TaskOutput(task_id=t1_id)  # These three calls go in ONE message
+ TaskOutput(task_id=t2_id)
+ TaskOutput(task_id=t3_id)
+ ```
+
+ Then check which tasks completed, update PLAN.md, identify newly unblocked tasks, spawn next wave.
+
+ Repeat until: all done, all blocked, or context ≥50% (checkpoint).
 
  ## Rules
 
@@ -401,6 +424,8 @@ Wave 2: T3 (context: 48%)
 
  ✓ doing-upload → done-upload
  ✓ Complete: 3/3 tasks
+
+ Next: Run /df:verify to verify specs and merge to main
  ```
 
  ### Spike-First Execution
@@ -426,6 +451,8 @@ Wave 2: T2, T3 parallel (context: 40%)
 
  ✓ doing-upload → done-upload
  ✓ Complete: 3/3 tasks
+
+ Next: Run /df:verify to verify specs and merge to main
  ```
 
  ### Spike Failed (Agent Correctly Reported)
@@ -441,9 +468,9 @@ Verifying T1...
  → upload--streaming--failed.md
 
  ⚠ Spike T1 invalidated hypothesis
- → Run /df:plan to generate new hypothesis spike
-
  Complete: 1/3 tasks (2 blocked by failed experiment)
+
+ Next: Run /df:plan to generate new hypothesis spike
  ```
 
  ### Spike Failed (Verifier Override)
@@ -460,14 +487,16 @@ Verifying T1...
  → upload--streaming--failed.md
 
  ⚠ Spike T1 invalidated hypothesis
- → Run /df:plan to generate new hypothesis spike
-
  Complete: 1/3 tasks (2 blocked by failed experiment)
+
+ Next: Run /df:plan to generate new hypothesis spike
  ```
 
  ### With Checkpoint
 
  ```
  Wave 1 complete (context: 52%)
- Checkpoint saved. Run /df:execute --continue
+ Checkpoint saved.
+
+ Next: Run /df:execute --continue to resume execution
  ```
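The checkpoint trigger in this transcript follows directly from the statusline file described earlier (`.deepflow/context.json` containing `{"percentage": 45}`). A minimal sketch, assuming that file shape; `should_checkpoint` is a hypothetical helper, not deepflow's code:

```python
import json
from pathlib import Path

THRESHOLD = 50  # checkpoint and exit once context usage reaches 50%

def should_checkpoint(context_file=Path(".deepflow/context.json")):
    """Read the statusline's context file and decide whether to checkpoint.

    Hypothetical helper for illustration. A missing file is treated as 0%,
    so execution simply continues.
    """
    if not context_file.exists():
        return False
    percentage = json.loads(context_file.read_text()).get("percentage", 0)
    return percentage >= THRESHOLD
```

With the documented example value of 45%, execution continues; at 52%, as in the checkpoint transcript above, the orchestrator saves state and exits.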
@@ -81,12 +81,15 @@ Include patterns in task descriptions for agents to follow.
 
  ### 4. ANALYZE CODEBASE
 
- **Use Task tool to spawn Explore agents in parallel:**
+ **Spawn ALL Explore agents in ONE message, then wait for ALL with TaskOutput in ONE message:**
  ```
- Task tool parameters:
- - subagent_type: "Explore"
- - model: "haiku"
- - run_in_background: true (for parallel execution)
+ // Spawn all in a single message:
+ t1 = Task(subagent_type="Explore", model="haiku", run_in_background=true, prompt="...")
+ t2 = Task(subagent_type="Explore", model="haiku", run_in_background=true, prompt="...")
+
+ // Wait for all in a single message:
+ TaskOutput(task_id=t1)
+ TaskOutput(task_id=t2)
  ```
 
  Scale agent count based on codebase size:
@@ -6,7 +6,7 @@ You coordinate agents and ask questions. You never search code directly.
 
  **NEVER:** Read source files, use Glob/Grep directly, run git
 
- **ONLY:** Spawn agents, poll results, ask user questions, write spec file
+ **ONLY:** Spawn agents, use TaskOutput to get results, ask user questions, write spec file
 
  ---
 
@@ -31,12 +31,15 @@ Transform conversation context into a structured specification file.
 
  ### 1. GATHER CODEBASE CONTEXT
 
- **Use Task tool to spawn Explore agents in parallel:**
+ **Spawn ALL Explore agents in ONE message, then wait for ALL with TaskOutput in ONE message:**
  ```
- Task tool parameters:
- - subagent_type: "Explore"
- - model: "haiku"
- - run_in_background: true
+ // Spawn all in a single message:
+ t1 = Task(subagent_type="Explore", model="haiku", run_in_background=true, prompt="...")
+ t2 = Task(subagent_type="Explore", model="haiku", run_in_background=true, prompt="...")
+
+ // Wait for all in a single message:
+ TaskOutput(task_id=t1)
+ TaskOutput(task_id=t2)
  ```
 
  Find:
@@ -91,12 +91,15 @@ Default: L1-L3 (L4 optional, can be slow)
 
  ## Agent Usage
 
- **Use Task tool to spawn Explore agents:**
+ **Spawn ALL Explore agents in ONE message, then wait for ALL with TaskOutput in ONE message:**
  ```
- Task tool parameters:
- - subagent_type: "Explore"
- - model: "haiku"
- - run_in_background: true (for parallel)
+ // Spawn all in a single message:
+ t1 = Task(subagent_type="Explore", model="haiku", run_in_background=true, prompt="...")
+ t2 = Task(subagent_type="Explore", model="haiku", run_in_background=true, prompt="...")
+
+ // Wait for all in a single message:
+ TaskOutput(task_id=t1)
+ TaskOutput(task_id=t2)
  ```
 
  Scale: 1-2 agents per spec, cap 10.
@@ -157,4 +160,6 @@ rm .deepflow/checkpoint.json
  ✓ Merged df/doing-upload/20260202-1430 to main
  ✓ Cleaned up worktree and branch
  ✓ Spec complete: doing-upload → done-upload
+
+ Workflow complete! Ready for next feature: /df:spec <name>
  ```