deepflow 0.1.19 → 0.1.21

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "deepflow",
- "version": "0.1.19",
+ "version": "0.1.21",
  "description": "Stay in flow state - lightweight spec-driven task orchestration for Claude Code",
  "keywords": [
  "claude",
@@ -1,7 +1,7 @@
  # /df:plan — Generate Task Plan from Specs

  ## Purpose
- Compare specs against codebase, identify gaps, generate prioritized task list.
+ Compare specs against codebase and past experiments. Generate prioritized tasks.

  ## Usage
  ```
@@ -42,7 +42,23 @@ Determine source_dir from config or default to src/

  If no new specs: report counts, suggest `/df:execute`.

- ### 2. DETECT PROJECT CONTEXT
+ ### 2. CHECK PAST EXPERIMENTS
+
+ Extract domains from spec (perf, auth, api, etc.), then:
+
+ ```
+ Glob .deepflow/experiments/{domain}--*
+ ```
+
+ | Result | Action |
+ |--------|--------|
+ | `--failed.md` | Exclude approach, note why |
+ | `--success.md` | Reference as pattern |
+ | No matches | Continue (expected for new projects) |
+
+ **Naming:** `{domain}--{approach}--{result}.md`
+
+ ### 3. DETECT PROJECT CONTEXT

  For existing codebases, identify:
  - Code style/conventions
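The experiments lookup in step 2 can be sketched in shell — a minimal sketch assuming a POSIX shell, using the `.deepflow/experiments/` layout and `{domain}--{approach}--{result}.md` naming from the doc; the two seeded filenames are hypothetical samples:

```shell
# Seed two hypothetical experiment files (names follow {domain}--{approach}--{result}.md).
mkdir -p .deepflow/experiments
touch .deepflow/experiments/perf--buffer-upload--failed.md
touch .deepflow/experiments/perf--chunked-upload--success.md

# For a domain extracted from the spec, classify prior experiments
# per the Result/Action table: failed => exclude, success => reuse.
domain=perf
for f in .deepflow/experiments/"$domain"--*; do
  case "$f" in
    *--failed.md)  echo "exclude approach: $f" ;;
    *--success.md) echo "reuse pattern: $f" ;;
  esac
done
```

With no matches the glob expands to nothing useful and the loop prints nothing, which corresponds to the "Continue (expected for new projects)" row.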
@@ -51,7 +67,7 @@ For existing codebases, identify:

  Include patterns in task descriptions for agents to follow.

- ### 3. ANALYZE CODEBASE
+ ### 4. ANALYZE CODEBASE

  **Spawn Explore agents** (haiku, read-only) with dynamic count:

@@ -68,7 +84,7 @@ Include patterns in task descriptions for agents to follow.

  - Stub functions, placeholder returns
  - Skipped tests, incomplete coverage

- ### 4. COMPARE & PRIORITIZE
+ ### 5. COMPARE & PRIORITIZE

  **Spawn `reasoner` agent** (Opus) for analysis:

@@ -86,45 +102,34 @@ Include patterns in task descriptions for agents to follow.
  2. Impact — core features before enhancements
  3. Risk — unknowns early

- ### 5. VALIDATE HYPOTHESES
-
- Before finalizing the plan, identify and test risky assumptions:
+ ### 6. VALIDATE HYPOTHESES

- **When to validate:**
- - Unfamiliar APIs or libraries
- - Architectural decisions with multiple approaches
- - Integration with external systems
- - Performance-critical paths
+ Test risky assumptions before finalizing plan.

- **How to validate:**
- 1. Create minimal prototype (scratchpad, not committed)
- 2. Test the specific assumption
- 3. Document findings in task description
- 4. Adjust approach if hypothesis fails
+ **Validate when:** Unfamiliar APIs, multiple approaches possible, external integrations, performance-critical

- **Examples:**
- - "Does SessionStart hook run once per session?" → Test with simple log
- - "Can we use streaming for large files?" → Prototype with sample data
- - "Will this regex handle edge cases?" → Test against real samples
+ **Process:**
+ 1. Prototype in scratchpad (not committed)
+ 2. Test assumption
+ 3. If fails → Write `.deepflow/experiments/{domain}--{approach}--failed.md`
+ 4. Adjust approach, document in task

- **Skip validation when:**
- - Using well-known patterns
- - Simple CRUD operations
- - Clear documentation exists
+ **Skip:** Well-known patterns, simple CRUD, clear docs exist

- ### 6. OUTPUT PLAN.md
+ ### 7. OUTPUT PLAN.md

  Append tasks grouped by `### doing-{spec-name}`. Include spec gaps and validation findings.

- ### 7. RENAME SPECS
+ ### 8. RENAME SPECS

  `mv specs/feature.md specs/doing-feature.md`

- ### 8. REPORT
+ ### 9. REPORT

  `✓ Plan generated — {n} specs, {n} tasks. Run /df:execute`

  ## Rules
+ - **Learn from history** — Check past experiments before proposing approaches
  - **Plan only** — Do NOT implement anything (except quick validation prototypes)
  - **Validate before commit** — Test risky assumptions with minimal experiments
  - **Confirm before assume** — Search code before marking "missing"
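The validate-then-record flow of step 6 might look like this in shell — a sketch assuming a POSIX shell, with a hypothetical scratchpad prototype and sample field values; the failed-file body mirroring the success-file format is an assumption:

```shell
# Run a throwaway prototype; on failure, record the experiment so
# future /df:plan runs exclude this approach.
mkdir -p .deepflow/experiments scratchpad
printf 'exit 1\n' > scratchpad/prototype.sh   # hypothetical failing prototype

if ! sh scratchpad/prototype.sh; then
  cat > .deepflow/experiments/perf--buffer-upload--failed.md <<'EOF'
# Buffer upload [FAILED]
Objective: upload large files in one request
Approach: read whole file into memory, single PUT
Why it failed: memory spikes on large files
Files: src/services/storage.ts
EOF
fi
```

The scratchpad stays uncommitted; only the experiment file persists for later planning runs.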
@@ -150,7 +155,9 @@ Append tasks grouped by `### doing-{spec-name}`. Include spec gaps and validatio
  - Files: src/api/upload.ts
  - Blocked by: none

- - [ ] **T2**: Add S3 service
+ - [ ] **T2**: Add S3 service with streaming
  - Files: src/services/storage.ts
  - Blocked by: T1
+ - Note: Use streaming (see experiments/perf--chunked-upload--success.md)
+ - Avoid: Direct buffer upload failed for large files (experiments/perf--buffer-upload--failed.md)
  ```
@@ -46,6 +46,27 @@ Mark each: ✓ satisfied | ✗ missing | ⚠ partial
  Report per spec: requirements count, acceptance count, quality issues.
  If issues: suggest creating fix spec or reopening (`mv done-* doing-*`).

+ ### 4. CAPTURE LEARNINGS
+
+ On success, write significant learnings to `.deepflow/experiments/{domain}--{approach}--success.md`
+
+ **Write when:**
+ - Non-trivial approach used
+ - Alternatives rejected during planning
+ - Performance optimization made
+ - Integration pattern discovered
+
+ **Format:**
+ ```markdown
+ # {Approach} [SUCCESS]
+ Objective: ...
+ Approach: ...
+ Why it worked: ...
+ Files: ...
+ ```
+
+ **Skip:** Simple CRUD, standard patterns, user declines
+
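Capturing a learning per the format above can be a single heredoc — a minimal sketch assuming a POSIX shell; the filename matches the sample report later in this diff, and the field values are hypothetical:

```shell
# Write a success experiment in the documented format.
mkdir -p .deepflow/experiments
cat > .deepflow/experiments/perf--streaming-upload--success.md <<'EOF'
# Streaming upload [SUCCESS]
Objective: upload large files without memory spikes
Approach: chunked streaming to storage
Why it worked: constant memory use regardless of file size
Files: src/services/storage.ts
EOF
```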

  ## Verification Levels

  | Level | Check | Method |
@@ -62,6 +83,7 @@ Default: L1-L3 (L4 optional, can be slow)
  - Flag partial implementations
  - Report TODO/FIXME as quality issues
  - Don't auto-fix — report findings for `/df:plan`
+ - Capture learnings — Write experiments for significant approaches

  ## Agent Usage

@@ -76,4 +98,8 @@ done-upload.md: 4/4 reqs ✓, 5/5 acceptance ✓, clean
  done-auth.md: 2/2 reqs ✓, 3/3 acceptance ✓, clean

  ✓ All specs verified
+
+ Learnings captured:
+ → experiments/perf--streaming-upload--success.md
+ → experiments/auth--jwt-refresh-rotation--success.md
  ```