deepflow 0.1.82 → 0.1.83
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +14 -0
- package/package.json +1 -1
package/README.md
CHANGED
@@ -186,6 +186,20 @@ your-project/
 6. **Atomic commits** — One task = one commit
 7. **Context-aware** — Checkpoint before limits, resume seamlessly
 
+## Why This Architecture Works
+
+Deepflow's design isn't opinionated — it's a direct response to measured LLM limitations:
+
+**Focused tasks > giant context** — LLMs lose ~2% effectiveness per 100K additional tokens, even on trivial tasks ([Chroma "Context Rot", 2025](https://research.trychroma.com/context-rot), 18 models tested). Deepflow keeps each task's context minimal and focused instead of loading the entire codebase.
+
+**Tool use > context stuffing** — Information in the middle of context has up to 40% less recall than at the start/end ([Lost in the Middle, 2023](https://arxiv.org/abs/2307.03172)). Agents access code on-demand via LSP (`findReferences`, `incomingCalls`) and grep — always fresh, no attention dilution.
+
+**Model routing > one-size-fits-all** — Mechanical tasks with cheap models (haiku), complex tasks with powerful models (opus). Fewer tokens per task = less degradation = better results. Effort-aware context budgets strip unnecessary sections from prompts for simpler tasks.
+
+**Prompt order follows attention** — Execute prompts follow the attention U-curve: critical instructions (task definition, failure history, success criteria) at start and end, navigable data (impact analysis, dependency context) in the middle. Distractors eliminated by design.
+
+**LSP-powered impact analysis** — Plan-time uses `findReferences` and `incomingCalls` to map blast radius precisely. Execute-time runs a freshness check before implementing — catching callers added after planning. Grep as fallback when LSP is unavailable.
+
 ## Skills
 
 | Skill | Purpose |