deepflow 0.1.80 → 0.1.82

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "deepflow",
- "version": "0.1.80",
+ "version": "0.1.82",
  "description": "Doing reveals what thinking can't predict — spec-driven iterative development for Claude Code",
  "keywords": [
  "claude",
@@ -20,8 +20,8 @@ Each task = one background agent. Completion notifications drive the loop.
  3. On EACH notification:
  a. Run ratchet check (section 5.5)
  b. Passed → TaskUpdate(status: "completed"), update PLAN.md [x] + commit hash
- c. Failed → git revert HEAD --no-edit, TaskUpdate(status: "pending")
- d. Report ONE line: "✓ T1: ratchet passed (abc123)" or "✗ T1: ratchet failed, reverted"
+ c. Failed → run partial salvage protocol (section 5.5). If salvaged → treat as passed. If not → git revert, TaskUpdate(status: "pending")
+ d. Report ONE line: "✓ T1: ratchet passed (abc123)" or "⚕ T1: salvaged lint fix (abc124)" or "✗ T1: ratchet failed, reverted"
  e. NOT all done → end turn, wait | ALL done → next wave or finish
  4. Between waves: check context %. If ≥50% → checkpoint and exit.
  5. Repeat until: all done, all blocked, or context ≥50%.
@@ -57,6 +57,14 @@ Require clean HEAD (`git diff --quiet`). Derive SPEC_NAME from `specs/doing-*.md
  Create worktree: `.deepflow/worktrees/{spec}` on branch `df/{spec}`.
  Reuse if exists. `--fresh` deletes first.

+ If `worktree.sparse_paths` is non-empty in config, enable sparse checkout:
+ ```bash
+ git worktree add --no-checkout -b df/{spec} .deepflow/worktrees/{spec}
+ cd .deepflow/worktrees/{spec}
+ git sparse-checkout set {sparse_paths...}
+ git checkout df/{spec}
+ ```
+
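The sparse worktree sequence above can be exercised end to end in a throwaway repo. A minimal sketch, assuming a git new enough to provide `git sparse-checkout` (2.25+); `demo` and `src` are placeholder values standing in for `{spec}` and `{sparse_paths...}`:

```shell
# Throwaway-repo walkthrough of the sparse worktree flow above.
# "demo" stands in for {spec}; "src" stands in for {sparse_paths...}.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q repo && cd repo
git config user.email dev@example.com
git config user.name dev
mkdir -p src docs
echo 'ok' > src/index.js
echo 'notes' > docs/guide.md
git add -A && git commit -qm init

# Create the worktree without checking out, narrow it, then populate it
git worktree add --no-checkout -b df/demo ../wt
cd ../wt
git sparse-checkout set src
git checkout -q df/demo

ls   # only src is materialized; docs stays out of the worktree
```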
  ### 1.6. RATCHET SNAPSHOT

  Snapshot pre-existing test files in worktree — only these count for ratchet (agent-created tests excluded):
@@ -136,7 +144,15 @@ Run Build → Test → Typecheck → Lint (stop on first failure).
  Compare `git diff HEAD~1 --name-only` against Impact callers/duplicates list.
  File listed but not modified → **advisory warning**: "Impact gap: {file} listed as {caller|duplicate} but not modified — verify manually". Not auto-revert (callers sometimes don't need changes), but flags the risk.

- **Evaluate:** All pass + no violations → commit stands. Any failure → `git revert HEAD --no-edit`.
+ **Evaluate:** All pass + no violations → commit stands. Any failure → attempt partial salvage before reverting:
+
+ **Partial salvage protocol:**
+ 1. Run `git diff HEAD~1 --stat` to see what the agent changed
+ 2. If failure is lint-only or typecheck-only (build + tests passed):
+    - Spawn `Agent(model="haiku", subagent_type="general-purpose")` with prompt: `Fix the {lint|typecheck} errors in the worktree. Only fix what's broken, change nothing else. Files changed: {diff stat}. Error output: {error}`
+    - Run ratchet again on the fix commit
+    - If passes → both commits stand. If fails → `git revert --no-edit HEAD HEAD~1` (revert both; reverting HEAD twice in a row would only undo the first revert)
+ 3. If failure is build or test → `git revert HEAD --no-edit` (no salvage, too risky)

  Ratchet uses ONLY pre-existing test files from `.deepflow/auto-snapshot.txt`.

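The "revert both" step can be sanity-checked in a throwaway repo; the two commits below stand in for a task commit and a failed salvage commit, and a single `git revert` naming both commits undoes them in one pass:

```shell
# Throwaway-repo check of the "revert both" path: reverting HEAD and HEAD~1
# in one invocation undoes the salvage attempt and then the task commit.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q
git config user.email dev@example.com
git config user.name dev
echo base > file.txt && git add -A && git commit -qm base
echo task > file.txt && git commit -qam "feat(spec): task"
echo fix > file.txt && git commit -qam "fix(spec): salvage attempt"

git revert --no-edit HEAD HEAD~1 >/dev/null

cat file.txt   # → base
```

Note that running `git revert HEAD --no-edit` twice in sequence would not work here: the second invocation reverts the revert commit created by the first, restoring the failed commit.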
@@ -153,9 +169,17 @@ Trigger: ≥2 [SPIKE] tasks with same "Blocked by:" target or identical hypothes
  - Rank: fewer regressions > higher coverage_delta > fewer files_changed > first to complete
  - No passes → reset all to pending for retry with debugger
  6. **Preserve all worktrees.** Losers: rename branch + `-failed` suffix. Record in checkpoint.json under `"spike_probes"`
- 7. **Log failed probes** to `.deepflow/auto-memory.yaml` (main tree):
+ 7. **Log ALL probe outcomes** to `.deepflow/auto-memory.yaml` (main tree):
  ```yaml
  spike_insights:
+   - date: "YYYY-MM-DD"
+     spec: "{spec_name}"
+     spike_id: "SPIKE_A"
+     hypothesis: "{from PLAN.md}"
+     outcome: "winner"
+     approach: "{one-sentence summary of what the winning probe chose}"
+     ratchet_metrics: {regressions: N, coverage_delta: N, files_changed: N}
+     branch: "df/{spec}--probe-SPIKE_A"
    - date: "YYYY-MM-DD"
      spec: "{spec_name}"
      spike_id: "SPIKE_B"
@@ -165,52 +189,90 @@ Trigger: ≥2 [SPIKE] tasks with same "Blocked by:" target or identical hypothes
      ratchet_metrics: {regressions: N, coverage_delta: N, files_changed: N}
      worktree: ".deepflow/worktrees/{spec}/probe-SPIKE_B-failed"
      branch: "df/{spec}--probe-SPIKE_B-failed"
- probe_learnings: # read by /df:auto-cycle each start
+ probe_learnings: # read by /df:auto-cycle each start AND included in per-task preamble
+   - spike: "SPIKE_A"
+     probe: "probe-SPIKE_A"
+     insight: "{one-sentence summary of winning approach — e.g. 'Use Node.js over Bun for Playwright'}"
    - spike: "SPIKE_B"
      probe: "probe-SPIKE_B"
      insight: "{one-sentence summary from failure_reason}"
  ```
- Create file if missing. Preserve existing keys when merging.
+ Create file if missing. Preserve existing keys when merging. Log BOTH winners and losers — downstream tasks need to know what was chosen, not just what failed.
  8. **Promote winner:** Cherry-pick into shared worktree. Winner → `[x] [PROBE_WINNER]`, losers → `[~] [PROBE_FAILED]`. Resume standard loop.
 
  ---

  ### 6. PER-TASK (agent prompt)

+ > **Context engineering rationale:** Prompt order follows the attention U-curve (start/end = high attention, middle = low).
+ > Critical instructions go at start and end. Navigable data goes in the middle.
+ > See: Chroma "Context Rot" (2025) — performance degrades ~2%/100K tokens; distractors and semantic ambiguity compound degradation.
+
  **Common preamble (include in all agent prompts):**
  ```
  Working directory: {worktree_absolute_path}
  All file operations MUST use this absolute path as base. Do NOT write files to the main project directory.
  Commit format: {commit_type}({spec}): {description}
-
- STOP after committing. Do NOT merge branches, rename spec files, remove worktrees, or run git checkout on main.
  ```

- **Standard Task:**
+ **Standard Task** (spawn with `Agent(model="{Model from PLAN.md}", ...)`):
+
+ Prompt sections in order (START = high attention, MIDDLE = navigable data, END = high attention):
+
  ```
+ --- START (high attention zone) ---
+
  {task_id}: {description from PLAN.md}
  Files: {target files} Spec: {spec_name}
- {Impact block from PLAN.md — include verbatim if present}

  {Prior failure context — include ONLY if task was previously reverted. Read from .deepflow/auto-memory.yaml revert_history for this task_id:}
- Previous attempts (DO NOT repeat these approaches):
- - Cycle {N}: reverted — "{reason from revert_history}"
+ DO NOT repeat these approaches:
  - Cycle {N}: reverted — "{reason from revert_history}"
  {Omit this entire block if task has no revert history.}

- CRITICAL: If Impact lists duplicates or callers, you MUST verify each one is consistent with your changes.
- - [active] duplicates → consolidate into single source of truth (e.g., local generateYAML → use shared buildConfigData)
- - [dead] duplicates DELETE the dead code entirely. Dead code pollutes context and causes drift.
+ {Acceptance criteria excerpt — extract 2-3 key ACs from the spec file (specs/doing-*.md). Include only the criteria relevant to THIS task, not the full spec.}
+ Success criteria:
+ - {AC relevant to this task}
+ - {AC relevant to this task}
+ {Omit if spec has no structured ACs.}
+
+ --- MIDDLE (navigable data zone) ---
+
+ {Impact block from PLAN.md — include verbatim if present. Annotate each caller with WHY it's impacted:}
+ Impact:
+ - Callers: {file} ({why — e.g. "imports validateToken which you're changing"})
+ - Duplicates:
+   - {file} [active — consolidate]
+   - {file} [dead — DELETE]
+ - Data flow: {consumers}
+ {Omit if no Impact in PLAN.md.}
+
+ {Dependency context — for each completed blocker task, include a one-liner summary:}
+ Prior tasks:
+ - {dep_task_id}: {one-line summary of what changed — e.g. "refactored validateToken to async, changed signature (string) → (string, opts)"}
+ {Omit if task has no dependencies or all deps are bootstrap/spike tasks.}

  Steps:
  1. External APIs/SDKs → chub search "<library>" --json → chub get <id> --lang <lang> (skip if chub unavailable or internal code only)
- 2. Read ALL files in Impact before implementing understand the full picture
- 3. Implement the task, updating all impacted files
- 4. Commit as feat({spec}): {description}
+ 2. LSP freshness check: run `findReferences` on each function/type you're about to change. If callers exist beyond the Impact list, add them to your scope before implementing.
+ 3. Read ALL files in Impact (+ any new callers from step 2) before implementing — understand the full picture
+ 4. Implement the task, updating all impacted files
+ 5. Commit as feat({spec}): {description}

+ --- END (high attention zone) ---
+
+ {If .deepflow/auto-memory.yaml exists and has probe_learnings, include:}
+ Spike results (follow these approaches):
+ {each probe_learning with outcome "winner" → "- {insight}"}
+ {Omit this block if no probe_learnings exist.}
+
+ If Impact lists duplicates: [active] → consolidate into single source of truth. [dead] → DELETE entirely.
  Your ONLY job is to write code and commit. Orchestrator runs health checks after.
+ STOP after committing. Do NOT merge branches, rename spec files, remove worktrees, or run git checkout on main.
  ```

+ **Effort-aware context budget:** For `Effort: low` tasks, omit the MIDDLE section entirely (no Impact, no dependency context, no steps). For `Effort: medium`, include Impact but omit dependency context. For `Effort: high`, include everything.
+
  **Bootstrap Task:**
  ```
  BOOTSTRAP: Write tests for files in edit_scope
@@ -226,7 +288,7 @@ Commit as test({spec}): bootstrap tests for edit_scope
  Files: {target files} Spec: {spec_name}

  {Prior failure context — include ONLY if this spike was previously reverted. Read from .deepflow/auto-memory.yaml revert_history + spike_insights for this task_id:}
- Previous attempts (DO NOT repeat these approaches):
+ DO NOT repeat these approaches:
  - Cycle {N}: reverted — "{reason}"
  {Omit this entire block if no revert history.}

@@ -266,7 +328,19 @@ When all tasks done for a `doing-*` spec:
  | Implementation | `general-purpose` | Task implementation |
  | Debugger | `reasoner` | Debugging failures |

- **Model routing:** Use `model:` from command/agent/skill frontmatter. Default: `sonnet`.
+ **Model + effort routing:** Read `Model:` and `Effort:` fields from each task block in PLAN.md. Pass `model:` parameter when spawning the agent. Prepend effort instruction to the agent prompt. Defaults: `Model: sonnet`, `Effort: medium`.
+
+ | Task fields | Agent call | Prompt preamble |
+ |-------------|-----------|-----------------|
+ | `Model: haiku, Effort: low` | `Agent(model="haiku", ...)` | `You MUST be maximally efficient: skip explanations, minimize tool calls, go straight to implementation.` |
+ | `Model: sonnet, Effort: medium` | `Agent(model="sonnet", ...)` | `Be direct and efficient. Explain only when the logic is non-obvious.` |
+ | `Model: opus, Effort: high` | `Agent(model="opus", ...)` | _(no preamble — default behavior)_ |
+ | (missing) | `Agent(model="sonnet", ...)` | `Be direct and efficient. Explain only when the logic is non-obvious.` |
+
+ **Effort preamble rules:**
+ - `low` → Prepend efficiency instruction. Agent should make fewest possible tool calls.
+ - `medium` → Prepend balanced instruction. Agent skips preamble but explains non-obvious decisions.
+ - `high` → No preamble added. Agent uses full reasoning capabilities.

  **Checkpoint schema:** `.deepflow/checkpoint.json` in worktree:
  ```json
@@ -75,13 +75,13 @@ Use `code-completeness` skill to search for: implementations matching spec requi

  For each file in a task's "Files:" list, find the full blast radius.

- **Search for:**
+ **Search for (prefer LSP, fallback to grep):**

- 1. **Callers:** `grep -r "{exported_function}" --include="*.{ext}" -l` — files that import/call what's being changed
+ 1. **Callers:** Use LSP `findReferences` / `incomingCalls` on each exported function/type being changed. Annotate each caller with WHY it's impacted (e.g. "imports validateToken which this task changes"). Fallback: `grep -r "{exported_function}" --include="*.{ext}" -l`
  2. **Duplicates:** Files with similar logic (same function name, same transformation). Classify:
     - `[active]` — used in production → must consolidate
     - `[dead]` — bypassed/unreachable → must delete
- 3. **Data flow:** If file produces/transforms data, find ALL consumers of that shape across languages
+ 3. **Data flow:** If file produces/transforms data, use LSP `outgoingCalls` to trace consumers. Fallback: grep across languages

  **Embed as `Impact:` block in each task:**
  ```markdown
@@ -133,6 +133,47 @@ Spawn `Task(subagent_type="reasoner", model="opus")`. Map each requirement to DO

  Priority: Dependencies → Impact → Risk

+ ### 5.5. CLASSIFY MODEL + EFFORT PER TASK
+
+ For each task, assign `Model:` and `Effort:` based on the routing matrix:
+
+ #### Routing matrix
+
+ | Task type | Model | Effort | Rationale |
+ |-----------|-------|--------|-----------|
+ | Bootstrap (scaffold, config, rename) | `haiku` | `low` | Mechanical, pattern-following, zero ambiguity |
+ | browse-fetch (doc retrieval) | `haiku` | `low` | Just fetching and extracting, no reasoning |
+ | Single-file simple addition | `haiku` | `high` | Small scope but needs to get it right |
+ | Multi-file with clear specs | `sonnet` | `medium` | Standard work, specs remove need for deep thinking |
+ | Bug fix (clear repro) | `sonnet` | `medium` | Diagnosis done, just apply fix |
+ | Bug fix (unclear cause) | `sonnet` | `high` | Needs reasoning to find root cause |
+ | Spike / validation | `sonnet` | `high` | Scoped but needs reasoning to validate hypothesis |
+ | Feature work (well-specced) | `sonnet` | `medium` | Clear ACs reduce thinking overhead |
+ | Feature work (ambiguous ACs) | `opus` | `medium` | Needs intelligence but effort can be moderate with good specs |
+ | Refactor (>5 files, many callers) | `opus` | `medium` | Blast radius needs intelligence, patterns are repetitive |
+ | Architecture change | `opus` | `high` | High complexity + high ambiguity |
+ | Unfamiliar API integration | `opus` | `high` | Needs deep reasoning about unknown patterns |
+ | Retried after revert | _(raise one level)_ | `high` | Prior failure means harder than expected |
+
+ #### Decision inputs
+
+ 1. **File count** — 1 file → haiku/sonnet, 2-5 → sonnet, >5 → sonnet/opus
+ 2. **Impact blast radius** — many callers/duplicates → raise model
+ 3. **Spec clarity** — clear ACs → lower effort, ambiguous → raise effort
+ 4. **Type** — spikes → `sonnet high`, bootstrap → `haiku low`
+ 5. **Has prior failures** — raise model one level AND set effort to `high`
+ 6. **Repetitiveness** — repetitive pattern across files → lower effort even at higher model
+
+ #### Effort economics
+
+ Effort controls ALL token spend (text, tool calls, thinking). Lower effort = fewer tool calls, less preamble, shorter reasoning.
+
+ - `low` → ~60-70% token reduction vs high. Use when task is mechanical.
+ - `medium` → ~30-40% token reduction. Use when specs are clear.
+ - `high` → full spend (default). Use when ambiguity or risk is high.
+
+ Add `Model: haiku|sonnet|opus` and `Effort: low|medium|high` to each task block. Defaults: `Model: sonnet`, `Effort: medium`.
+
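A minimal sketch of the defaulting rule above. `route` is a hypothetical helper for illustration only, not part of deepflow; it shows how a missing `Model:` or `Effort:` field falls back to `sonnet` / `medium`:

```shell
# Hypothetical helper illustrating the Model/Effort defaults above.
# Usage: route [MODEL] [EFFORT]; missing fields fall back to sonnet/medium.
route() {
  model=${1:-sonnet}
  effort=${2:-medium}
  printf 'Agent(model="%s") effort=%s\n' "$model" "$effort"
}

route haiku low   # fields present in the task block
route             # fields missing, defaults apply
```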
  ### 6. GENERATE SPIKE TASKS (IF NEEDED)

  **Spike Task Format:**
@@ -228,6 +269,7 @@ Always use `Task` tool with explicit `subagent_type` and `model`.

  - [ ] **T2**: Create upload endpoint
    - Files: src/api/upload.ts
+   - Model: sonnet
    - Impact:
      - Callers: src/routes/index.ts:5
      - Duplicates: backend/legacy-upload.go [dead — DELETE]
@@ -235,5 +277,6 @@ Always use `Task` tool with explicit `subagent_type` and `model`.

  - [ ] **T3**: Add S3 service with streaming
    - Files: src/services/storage.ts
+   - Model: opus
    - Blocked by: T1, T2
  ```
@@ -29,8 +29,8 @@ This protocol is the reusable foundation for all browser-based skills (browse-fe
  Before launching, verify Playwright is available:

  ```bash
- # Prefer bun if available, fall back to node
- if which bun > /dev/null 2>&1; then RUNTIME=bun; else RUNTIME=node; fi
+ # Prefer Node.js; fall back to Bun
+ if which node > /dev/null 2>&1; then RUNTIME=node; elif which bun > /dev/null 2>&1; then RUNTIME=bun; else echo "Error: neither node nor bun found" && exit 1; fi

  $RUNTIME -e "require('playwright')" 2>/dev/null \
    || npx --yes playwright install chromium --with-deps 2>&1 | tail -5
@@ -41,8 +41,8 @@ If installation fails, fall back to WebFetch (see Fallback section below).
  ### 2. Launch Command

  ```bash
- # Detect runtime
- if which bun > /dev/null 2>&1; then RUNTIME=bun; else RUNTIME=node; fi
+ # Detect runtime — prefer Node.js per decision
+ if which node > /dev/null 2>&1; then RUNTIME=node; elif which bun > /dev/null 2>&1; then RUNTIME=bun; else echo "Error: neither node nor bun found" && exit 1; fi

  $RUNTIME -e "
  const { chromium } = require('playwright');
@@ -74,13 +74,100 @@ await page.waitForTimeout(1500);
  ### 4. Content Extraction

- Extract the main readable text, not raw HTML:
+ Extract content as **structured Markdown** optimized for LLM consumption (not raw HTML or flat text).

  ```js
- // Primary: semantic content containers
- let text = await page.innerText('main, article, [role="main"]').catch(() => '');
+ // Convert DOM to Markdown inside the browser context — zero dependencies
+ let text = await page.evaluate(() => {
+   // Remove noise elements
+   const noise = 'nav, footer, header, aside, script, style, noscript, svg, [role="navigation"], [role="banner"], [role="contentinfo"], .cookie-banner, #cookie-consent';
+   document.querySelectorAll(noise).forEach(el => el.remove());
+
+   // Pick main content container
+   const root = document.querySelector('main, article, [role="main"]') || document.body;
+
+   function md(node, listDepth = 0) {
+     if (node.nodeType === 3) return node.textContent;
+     if (node.nodeType !== 1) return '';
+     const tag = node.tagName.toLowerCase();
+     const children = () => Array.from(node.childNodes).map(c => md(c, listDepth)).join('');
+
+     // Skip hidden elements
+     if (node.getAttribute('aria-hidden') === 'true' || node.hidden) return '';
+
+     switch (tag) {
+       case 'h1': case 'h2': case 'h3': case 'h4': case 'h5': case 'h6': {
+         const level = '#'.repeat(parseInt(tag[1]));
+         const text = node.textContent.trim();
+         return text ? '\n\n' + level + ' ' + text + '\n\n' : '';
+       }
+       case 'p': return '\n\n' + children().trim() + '\n\n';
+       case 'br': return '\n';
+       case 'hr': return '\n\n---\n\n';
+       case 'strong': case 'b': { const t = children().trim(); return t ? '**' + t + '**' : ''; }
+       case 'em': case 'i': { const t = children().trim(); return t ? '*' + t + '*' : ''; }
+       case 'code': {
+         const t = node.textContent;
+         return node.parentElement && node.parentElement.tagName.toLowerCase() === 'pre' ? t : '`' + t + '`';
+       }
+       case 'pre': {
+         const code = node.querySelector('code');
+         const lang = code ? (code.className.match(/language-(\w+)/) || [])[1] || '' : '';
+         const t = (code || node).textContent.trim();
+         return '\n\n```' + lang + '\n' + t + '\n```\n\n';
+       }
+       case 'a': {
+         const href = node.getAttribute('href');
+         const t = children().trim();
+         return (href && t && !href.startsWith('#')) ? '[' + t + '](' + href + ')' : t;
+       }
+       case 'img': {
+         const alt = node.getAttribute('alt') || '';
+         return alt ? '[image: ' + alt + ']' : '';
+       }
+       case 'ul': case 'ol': return '\n\n' + children() + '\n';
+       case 'li': {
+         const indent = ' '.repeat(listDepth);
+         const bullet = node.parentElement && node.parentElement.tagName.toLowerCase() === 'ol'
+           ? (Array.from(node.parentElement.children).indexOf(node) + 1) + '. '
+           : '- ';
+         const content = Array.from(node.childNodes).map(c => {
+           const t = c.tagName && (c.tagName.toLowerCase() === 'ul' || c.tagName.toLowerCase() === 'ol')
+             ? md(c, listDepth + 1) : md(c, listDepth);
+           return t;
+         }).join('').trim();
+         return indent + bullet + content + '\n';
+       }
+       case 'table': {
+         const rows = Array.from(node.querySelectorAll('tr'));
+         if (!rows.length) return '';
+         const matrix = rows.map(r => Array.from(r.querySelectorAll('th, td')).map(c => c.textContent.trim()));
+         const cols = Math.max(...matrix.map(r => r.length));
+         const widths = Array.from({length: cols}, (_, i) => Math.max(...matrix.map(r => (r[i]||'').length), 3));
+         let out = '\n\n';
+         matrix.forEach((row, ri) => {
+           out += '| ' + Array.from({length: cols}, (_, i) => (row[i]||'').padEnd(widths[i])).join(' | ') + ' |\n';
+           if (ri === 0) out += '| ' + widths.map(w => '-'.repeat(w)).join(' | ') + ' |\n';
+         });
+         return out + '\n';
+       }
+       case 'blockquote': return '\n\n> ' + children().trim().replace(/\n/g, '\n> ') + '\n\n';
+       case 'dl': return '\n\n' + children() + '\n';
+       case 'dt': return '**' + children().trim() + '**\n';
+       case 'dd': return ': ' + children().trim() + '\n';
+       case 'div': case 'section': case 'span': case 'figure': case 'figcaption':
+         return children();
+       default: return children();
+     }
+   }
+
+   let result = md(root);
+   // Collapse excessive whitespace
+   result = result.replace(/\n{3,}/g, '\n\n').trim();
+   return result;
+ });

- // Fallback: full body text
+ // Fallback if extraction is too short
  if (!text || text.trim().length < 100) {
    text = await page.innerText('body').catch(() => '');
  }
@@ -134,11 +221,13 @@ await browser.close();

  ## Fetch Workflow

- **Goal:** retrieve and return the text content of a single URL.
+ **Goal:** retrieve and return structured Markdown content of a single URL.
+
+ The full inline script uses `page.evaluate()` to convert DOM → Markdown inside the browser (zero Node dependencies). Adapt the URL per query.

  ```bash
- # Full inline script — adapt URL and selector per query
- if which bun > /dev/null 2>&1; then RUNTIME=bun; else RUNTIME=node; fi
+ # Full inline script — adapt URL per query
+ if which node > /dev/null 2>&1; then RUNTIME=node; elif which bun > /dev/null 2>&1; then RUNTIME=bun; else echo "Error: neither node nor bun found" && exit 1; fi

  $RUNTIME -e "
  const { chromium } = require('playwright');
@@ -157,14 +246,83 @@ const { chromium } = require('playwright');
  await page.waitForTimeout(1500);

  const title = await page.title();
- const url = page.url();
+ const url = page.url();

  if (/sign.?in|log.?in|auth/i.test(title) || url.includes('/login')) {
    console.log('[browse-fetch] Blocked by login wall at ' + url);
    return;
  }

- let text = await page.innerText('main, article, [role=\"main\"]').catch(() => '');
+ let text = await page.evaluate(() => {
+   const noise = 'nav, footer, header, aside, script, style, noscript, svg, [role=\"navigation\"], [role=\"banner\"], [role=\"contentinfo\"], .cookie-banner, #cookie-consent';
+   document.querySelectorAll(noise).forEach(el => el.remove());
+   const root = document.querySelector('main, article, [role=\"main\"]') || document.body;
+
+   function md(node, listDepth) {
+     listDepth = listDepth || 0;
+     if (node.nodeType === 3) return node.textContent;
+     if (node.nodeType !== 1) return '';
+     var tag = node.tagName.toLowerCase();
+     var kids = function() { return Array.from(node.childNodes).map(function(c) { return md(c, listDepth); }).join(''); };
+     if (node.getAttribute('aria-hidden') === 'true' || node.hidden) return '';
+     switch (tag) {
+       case 'h1': case 'h2': case 'h3': case 'h4': case 'h5': case 'h6':
+         var level = '#'.repeat(parseInt(tag[1]));
+         var t = node.textContent.trim();
+         return t ? '\\n\\n' + level + ' ' + t + '\\n\\n' : '';
+       case 'p': return '\\n\\n' + kids().trim() + '\\n\\n';
+       case 'br': return '\\n';
+       case 'hr': return '\\n\\n---\\n\\n';
+       case 'strong': case 'b': var s = kids().trim(); return s ? '**' + s + '**' : '';
+       case 'em': case 'i': var e = kids().trim(); return e ? '*' + e + '*' : '';
+       case 'code':
+         var ct = node.textContent;
+         return node.parentElement && node.parentElement.tagName.toLowerCase() === 'pre' ? ct : '\`' + ct + '\`';
+       case 'pre':
+         var codeEl = node.querySelector('code');
+         var lang = codeEl ? ((codeEl.className.match(/language-(\\w+)/) || [])[1] || '') : '';
+         var pt = (codeEl || node).textContent.trim();
+         return '\\n\\n\`\`\`' + lang + '\\n' + pt + '\\n\`\`\`\\n\\n';
+       case 'a':
+         var href = node.getAttribute('href');
+         var at = kids().trim();
+         return (href && at && !href.startsWith('#')) ? '[' + at + '](' + href + ')' : at;
+       case 'img':
+         var alt = node.getAttribute('alt') || '';
+         return alt ? '[image: ' + alt + ']' : '';
+       case 'ul': case 'ol': return '\\n\\n' + kids() + '\\n';
+       case 'li':
+         var indent = ' '.repeat(listDepth);
+         var bullet = node.parentElement && node.parentElement.tagName.toLowerCase() === 'ol'
+           ? (Array.from(node.parentElement.children).indexOf(node) + 1) + '. ' : '- ';
+         var content = Array.from(node.childNodes).map(function(c) {
+           var tg = c.tagName && c.tagName.toLowerCase();
+           return (tg === 'ul' || tg === 'ol') ? md(c, listDepth + 1) : md(c, listDepth);
+         }).join('').trim();
+         return indent + bullet + content + '\\n';
+       case 'table':
+         var rows = Array.from(node.querySelectorAll('tr'));
+         if (!rows.length) return '';
+         var matrix = rows.map(function(r) { return Array.from(r.querySelectorAll('th, td')).map(function(c) { return c.textContent.trim(); }); });
+         var cols = Math.max.apply(null, matrix.map(function(r) { return r.length; }));
+         var widths = Array.from({length: cols}, function(_, i) { return Math.max.apply(null, matrix.map(function(r) { return (r[i]||'').length; }).concat([3])); });
+         var out = '\\n\\n';
+         matrix.forEach(function(row, ri) {
+           out += '| ' + Array.from({length: cols}, function(_, i) { return (row[i]||'').padEnd(widths[i]); }).join(' | ') + ' |\\n';
+           if (ri === 0) out += '| ' + widths.map(function(w) { return '-'.repeat(w); }).join(' | ') + ' |\\n';
+         });
+         return out + '\\n';
+       case 'blockquote': return '\\n\\n> ' + kids().trim().replace(/\\n/g, '\\n> ') + '\\n\\n';
+       case 'dt': return '**' + kids().trim() + '**\\n';
+       case 'dd': return ': ' + kids().trim() + '\\n';
+       default: return kids();
+     }
+   }
+
+   var result = md(root);
+   return result.replace(/\\n{3,}/g, '\\n\\n').trim();
+ });
+
  if (!text || text.trim().length < 100) {
    text = await page.innerText('body').catch(() => '');
  }
@@ -182,7 +340,7 @@ const { chromium } = require('playwright');
  "
  ```

- Adapt the URL and selector per query. The agent inlines the full script via `node -e` or `bun -e` so no temp files are needed for extractions under ~4000 tokens.
+ The agent inlines the full script via `node -e` or `bun -e` so no temp files are needed for extractions under ~4000 tokens.

  ---

@@ -250,7 +408,7 @@ If WebFetch also fails, return the URL with an explanation and continue the task
  ## Rules

  - Always run the install check before the first browser launch in a session.
- - Detect runtime with `which bun` first; use `node` if bun is absent.
+ - Detect runtime with `which node` first; fall back to `bun` if node is absent.
  - Never navigate to Google or DuckDuckGo with Playwright — use WebSearch tool or direct URLs.
  - Truncate output at ~4000 tokens (~16 000 chars) to protect context budget.
  - On login wall or CAPTCHA, log the block, skip, and continue — never retry infinitely.
@@ -62,6 +62,12 @@ worktree:
  # Keep worktree after failed execution for debugging
  cleanup_on_fail: false

+ # Sparse checkout paths (for large monorepos only)
+ # When set, worktrees checkout only these directories via git sparse-checkout
+ # Leave empty for full checkout (default, works for most repos)
+ # Example: ["src/", "tests/", "package.json", "tsconfig.json"]
+ sparse_paths: []
+
  # Quality gates for /df:verify
  quality:
  # Override auto-detected build command (e.g., "npm run build", "cargo build")