claude-code-workflow 7.0.4 → 7.0.5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (34)
  1. package/.claude/skills/workflow-wave-plan/SKILL.md +774 -0
  2. package/ccw/dist/core/a2ui/A2UIWebSocketHandler.d.ts.map +1 -1
  3. package/ccw/dist/core/a2ui/A2UIWebSocketHandler.js +64 -50
  4. package/ccw/dist/core/a2ui/A2UIWebSocketHandler.js.map +1 -1
  5. package/ccw/dist/core/routes/orchestrator-routes.d.ts.map +1 -1
  6. package/ccw/dist/core/routes/orchestrator-routes.js +17 -4
  7. package/ccw/dist/core/routes/orchestrator-routes.js.map +1 -1
  8. package/ccw/dist/tools/ask-question.d.ts.map +1 -1
  9. package/ccw/dist/tools/ask-question.js +66 -0
  10. package/ccw/dist/tools/ask-question.js.map +1 -1
  11. package/package.json +1 -1
  12. package/ccw-litellm/src/ccw_litellm/__pycache__/__init__.cpython-312.pyc +0 -0
  13. package/ccw-litellm/src/ccw_litellm/clients/__pycache__/__init__.cpython-312.pyc +0 -0
  14. package/ccw-litellm/src/ccw_litellm/clients/__pycache__/litellm_embedder.cpython-312.pyc +0 -0
  15. package/codex-lens/src/codexlens/cli/__pycache__/embedding_manager.cpython-312.pyc +0 -0
  16. package/codex-lens/src/codexlens/cli/__pycache__/output.cpython-312.pyc +0 -0
  17. package/codex-lens/src/codexlens/hybrid_search/__pycache__/__init__.cpython-312.pyc +0 -0
  18. package/codex-lens/src/codexlens/hybrid_search/__pycache__/data_structures.cpython-312.pyc +0 -0
  19. package/codex-lens/src/codexlens/lsp/__pycache__/lsp_graph_builder.cpython-312.pyc +0 -0
  20. package/codex-lens/src/codexlens/search/__pycache__/global_graph_expander.cpython-312.pyc +0 -0
  21. package/codex-lens/src/codexlens/search/clustering/__pycache__/__init__.cpython-312.pyc +0 -0
  22. package/codex-lens/src/codexlens/search/clustering/__pycache__/base.cpython-312.pyc +0 -0
  23. package/codex-lens/src/codexlens/search/clustering/__pycache__/factory.cpython-312.pyc +0 -0
  24. package/codex-lens/src/codexlens/search/clustering/__pycache__/frequency_strategy.cpython-312.pyc +0 -0
  25. package/codex-lens/src/codexlens/search/clustering/__pycache__/noop_strategy.cpython-312.pyc +0 -0
  26. package/codex-lens/src/codexlens/semantic/__pycache__/__init__.cpython-312.pyc +0 -0
  27. package/codex-lens/src/codexlens/semantic/__pycache__/base.cpython-312.pyc +0 -0
  28. package/codex-lens/src/codexlens/semantic/__pycache__/factory.cpython-312.pyc +0 -0
  29. package/codex-lens/src/codexlens/semantic/__pycache__/gpu_support.cpython-312.pyc +0 -0
  30. package/codex-lens/src/codexlens/watcher/__pycache__/__init__.cpython-312.pyc +0 -0
  31. package/codex-lens/src/codexlens/watcher/__pycache__/events.cpython-312.pyc +0 -0
  32. package/codex-lens/src/codexlens/watcher/__pycache__/file_watcher.cpython-312.pyc +0 -0
  33. package/codex-lens/src/codexlens/watcher/__pycache__/incremental_indexer.cpython-312.pyc +0 -0
  34. package/codex-lens/src/codexlens/watcher/__pycache__/manager.cpython-312.pyc +0 -0
@@ -0,0 +1,774 @@
---
name: workflow-wave-plan
description: CSV Wave planning and execution - explore via wave, resolve conflicts, execute from CSV with linked exploration context. Triggers on "workflow:wave-plan".
argument-hint: "<task description> [--yes|-y] [--concurrency|-c N]"
allowed-tools: Task, AskUserQuestion, Read, Write, Edit, Bash, Glob, Grep
---

# Workflow Wave Plan

CSV Wave-based planning and execution. Uses structured CSV state for both exploration and execution, with cross-phase context propagation via `context_from` linking.

## Architecture

```
Requirement

┌─ Phase 1: Decompose ────────────────────┐
│ Analyze requirement → explore.csv       │
│ (1 row per exploration angle)           │
└────────────────────┬────────────────────┘

┌─ Phase 2: Wave Explore ─────────────────┐
│ Wave loop: spawn Explore agents         │
│ → findings/key_files → explore.csv      │
└────────────────────┬────────────────────┘

┌─ Phase 3: Synthesize & Plan ────────────┐
│ Read explore findings → cross-reference │
│ → resolve conflicts → tasks.csv         │
│ (context_from links to E* explore rows) │
└────────────────────┬────────────────────┘

┌─ Phase 4: Wave Execute ─────────────────┐
│ Wave loop: build prev_context from CSV  │
│ → spawn code-developer agents per wave  │
│ → results → tasks.csv                   │
└────────────────────┬────────────────────┘

┌─ Phase 5: Aggregate ────────────────────┐
│ results.csv + context.md + summary      │
└─────────────────────────────────────────┘
```

## Context Flow

```
explore.csv            tasks.csv
┌──────────┐           ┌──────────┐
│ E1: arch │──────────→│ T1: setup│  context_from: E1;E2
│ findings │           │ prev_ctx │← E1+E2 findings
├──────────┤           ├──────────┤
│ E2: deps │──────────→│ T2: impl │  context_from: E1;T1
│ findings │           │ prev_ctx │← E1+T1 findings
├──────────┤           ├──────────┤
│ E3: test │──┐   ┌───→│ T3: test │  context_from: E3;T2
│ findings │  └───┘    │ prev_ctx │← E3+T2 findings
└──────────┘           └──────────┘

Two context channels:
1. Directed: context_from → prev_context (from CSV findings)
2. Broadcast: discoveries.ndjson (append-only shared board)
```

---

## CSV Schemas

### explore.csv

| Column | Type | Set By | Description |
|--------|------|--------|-------------|
| `id` | string | Decomposer | E1, E2, ... |
| `angle` | string | Decomposer | Exploration angle name |
| `description` | string | Decomposer | What to explore from this angle |
| `focus` | string | Decomposer | Keywords and focus areas |
| `deps` | string | Decomposer | Semicolon-separated dep IDs (usually empty) |
| `wave` | integer | Wave Engine | Wave number (usually 1) |
| `status` | enum | Agent | pending / completed / failed / skipped |
| `findings` | string | Agent | Discoveries (max 800 chars) |
| `key_files` | string | Agent | Relevant files (semicolon-separated) |
| `error` | string | Agent | Error message if failed |

### tasks.csv

| Column | Type | Set By | Description |
|--------|------|--------|-------------|
| `id` | string | Planner | T1, T2, ... |
| `title` | string | Planner | Task title |
| `description` | string | Planner | Self-contained task description |
| `deps` | string | Planner | Dependency task IDs: T1;T2 |
| `context_from` | string | Planner | Context source IDs: **E1;E2;T1** |
| `wave` | integer | Wave Engine | Wave number (computed from deps) |
| `status` | enum | Agent | pending / completed / failed / skipped |
| `findings` | string | Agent | Execution findings (max 500 chars) |
| `files_modified` | string | Agent | Files modified (semicolon-separated) |
| `error` | string | Agent | Error if failed |

**context_from prefix convention**: `E*` → explore.csv lookup, `T*` → tasks.csv lookup.

---

## Session Structure

```
.workflow/.wave-plan/{session-id}/
├── explore.csv          # Exploration state
├── tasks.csv            # Execution state
├── discoveries.ndjson   # Shared discovery board
├── explore-results/     # Detailed per-angle results
│   ├── E1.json
│   └── E2.json
├── task-results/        # Detailed per-task results
│   ├── T1.json
│   └── T2.json
├── results.csv          # Final results export
└── context.md           # Full context summary
```

---

## Session Initialization

```javascript
// UTC+8 wall-clock timestamp for naming (note: the resulting ISO string still ends in 'Z')
const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()

// Parse flags
const AUTO_YES = $ARGUMENTS.includes('--yes') || $ARGUMENTS.includes('-y')
const concurrencyMatch = $ARGUMENTS.match(/(?:--concurrency|-c)\s+(\d+)/)
const maxConcurrency = concurrencyMatch ? parseInt(concurrencyMatch[1]) : 4

const requirement = $ARGUMENTS
  .replace(/--yes|-y|--concurrency\s+\d+|-c\s+\d+/g, '')
  .trim()

const slug = requirement.toLowerCase()
  .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
  .substring(0, 40)
const dateStr = getUtc8ISOString().substring(0, 10).replace(/-/g, '')
const sessionId = `wp-${slug}-${dateStr}`
const sessionFolder = `.workflow/.wave-plan/${sessionId}`

Bash(`mkdir -p ${sessionFolder}/explore-results ${sessionFolder}/task-results`)
```
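For illustration, here is the id derivation applied to a hypothetical requirement; the flag-stripping and slug rules match the snippet above, and the date is hard-coded only because the real value comes from the clock:

```javascript
// Hypothetical input for the demo.
const args = 'Add user authentication --concurrency 3'

const requirement = args
  .replace(/--yes|-y|--concurrency\s+\d+|-c\s+\d+/g, '')
  .trim()                                    // "Add user authentication"

const slug = requirement.toLowerCase()
  .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')  // non-alphanumeric runs become "-"
  .substring(0, 40)

const sessionId = `wp-${slug}-20240115`      // dateStr placeholder for the demo
console.log(sessionId)                       // → "wp-add-user-authentication-20240115"
```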

---

## Phase 1: Decompose → explore.csv

### 1.1 Analyze Requirement

```javascript
const complexity = analyzeComplexity(requirement)
// Low: 1 angle | Medium: 3 angles | High: 4 angles (see angleCount below)

const ANGLE_PRESETS = {
  architecture: ['architecture', 'dependencies', 'integration-points', 'modularity'],
  security: ['security', 'auth-patterns', 'dataflow', 'validation'],
  performance: ['performance', 'bottlenecks', 'caching', 'data-access'],
  bugfix: ['error-handling', 'dataflow', 'state-management', 'edge-cases'],
  feature: ['patterns', 'integration-points', 'testing', 'dependencies']
}

function selectAngles(text, count) {
  let preset = 'feature'
  if (/refactor|architect|restructure|modular/.test(text)) preset = 'architecture'
  else if (/security|auth|permission|access/.test(text)) preset = 'security'
  else if (/performance|slow|optimi|cache/.test(text)) preset = 'performance'
  else if (/fix|bug|error|broken/.test(text)) preset = 'bugfix'
  return ANGLE_PRESETS[preset].slice(0, count)
}

const angleCount = complexity === 'High' ? 4 : complexity === 'Medium' ? 3 : 1
const angles = selectAngles(requirement, angleCount)
```
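`analyzeComplexity` is referenced above but not defined in this skill. A minimal heuristic sketch; the word/clause thresholds are illustrative assumptions, not part of the skill:

```javascript
// Heuristic: long requirements or many clause separators suggest higher complexity.
// Thresholds are illustrative only.
function analyzeComplexity(text) {
  const words = text.trim().split(/\s+/).length
  const clauses = (text.match(/\band\b|[,;]/g) || []).length
  if (words > 30 || clauses >= 4) return 'High'
  if (words > 12 || clauses >= 2) return 'Medium'
  return 'Low'
}

console.log(analyzeComplexity('Fix login typo'))
// → "Low"
```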

### 1.2 Generate explore.csv

```javascript
const header = 'id,angle,description,focus,deps,wave,status,findings,key_files,error'
const rows = angles.map((angle, i) => {
  const id = `E${i + 1}`
  const desc = `Explore codebase from ${angle} perspective for: ${requirement}`
  return `"${id}","${angle}","${escCSV(desc)}","${angle}","",1,"pending","","",""`
})

Write(`${sessionFolder}/explore.csv`, [header, ...rows].join('\n'))
```

All exploration rows default to wave 1 (independent, fully parallel). If dependencies between angles exist, compute waves from `deps` the same way as for tasks (see Utilities).

---

## Phase 2: Wave Explore

Execute exploration waves using `Task(Explore)` agents.

### 2.1 Wave Loop

```javascript
const exploreCSV = parseCSV(Read(`${sessionFolder}/explore.csv`))
const maxExploreWave = Math.max(...exploreCSV.map(r => parseInt(r.wave)))

for (let wave = 1; wave <= maxExploreWave; wave++) {
  const waveRows = exploreCSV.filter(r =>
    parseInt(r.wave) === wave && r.status === 'pending'
  )
  if (waveRows.length === 0) continue

  // Skip rows with failed dependencies
  const validRows = waveRows.filter(r => {
    if (!r.deps) return true
    return r.deps.split(';').filter(Boolean).every(depId => {
      const dep = exploreCSV.find(d => d.id === depId)
      return dep && dep.status === 'completed'
    })
  })

  waveRows.filter(r => !validRows.includes(r)).forEach(r => {
    r.status = 'skipped'
    r.error = 'Dependency failed/skipped'
  })

  // ★ Spawn ALL explore agents in a SINGLE message → parallel execution
  const results = validRows.map(row =>
    Task({
      subagent_type: "Explore",
      run_in_background: false,
      description: `Explore: ${row.angle}`,
      prompt: buildExplorePrompt(row, requirement, sessionFolder)
    })
  )

  // Collect results from JSON files → update explore.csv
  // (fileExists is not a built-in tool — check via Bash `test -f` or a Glob call)
  validRows.forEach((row, i) => {
    const resultPath = `${sessionFolder}/explore-results/${row.id}.json`
    if (fileExists(resultPath)) {
      const result = JSON.parse(Read(resultPath))
      row.status = result.status || 'completed'
      row.findings = truncate(result.findings, 800)
      row.key_files = Array.isArray(result.key_files)
        ? result.key_files.join(';')
        : (result.key_files || '')
      row.error = result.error || ''
    } else {
      // Fallback: parse from agent output text
      row.status = 'completed'
      row.findings = truncate(results[i], 800)
    }
  })

  writeCSV(`${sessionFolder}/explore.csv`, exploreCSV)
}
```

### 2.2 Explore Agent Prompt

```javascript
function buildExplorePrompt(row, requirement, sessionFolder) {
  return `## Exploration: ${row.angle}

**Requirement**: ${requirement}
**Focus**: ${row.focus}

## Instructions
Explore the codebase from the **${row.angle}** perspective:
1. Discover relevant files, modules, and patterns
2. Identify integration points and dependencies
3. Note constraints, risks, and conventions
4. Find existing patterns to follow

## Output
Write findings to: ${sessionFolder}/explore-results/${row.id}.json

JSON format:
{
  "status": "completed",
  "findings": "Concise summary of ${row.angle} discoveries (max 800 chars)",
  "key_files": ["relevant/file1.ts", "relevant/file2.ts"],
  "details": {
    "patterns": ["pattern descriptions"],
    "integration_points": [{"file": "path", "description": "..."}],
    "constraints": ["constraint descriptions"],
    "recommendations": ["recommendation descriptions"]
  }
}

Also provide a 2-3 sentence summary.`
}
```

---

## Phase 3: Synthesize & Plan → tasks.csv

Read exploration findings, cross-reference them, resolve conflicts, and generate execution tasks.

### 3.1 Load Explore Results

```javascript
const exploreCSV = parseCSV(Read(`${sessionFolder}/explore.csv`))
const completed = exploreCSV.filter(r => r.status === 'completed')

// Load detailed result JSONs where available
const detailedResults = {}
completed.forEach(r => {
  const path = `${sessionFolder}/explore-results/${r.id}.json`
  if (fileExists(path)) detailedResults[r.id] = JSON.parse(Read(path))
})
```

### 3.2 Conflict Resolution Protocol

Cross-reference findings across all exploration angles:

```javascript
// 1. Identify common files referenced by multiple angles
const fileRefs = {}
completed.forEach(r => {
  r.key_files.split(';').filter(Boolean).forEach(f => {
    if (!fileRefs[f]) fileRefs[f] = []
    fileRefs[f].push({ angle: r.angle, id: r.id })
  })
})
const sharedFiles = Object.entries(fileRefs).filter(([_, refs]) => refs.length > 1)

// 2. Detect conflicting recommendations
//    Compare recommendations from different angles for the same file/module.
//    Flag contradictions (angle A says "refactor X" vs angle B says "extend X").

// 3. Resolution rules:
//    a. Safety first — when approaches conflict, choose the safer option
//    b. Consistency — prefer approaches aligned with existing patterns
//    c. Scope — prefer minimal-change approaches
//    d. Document — note all resolved conflicts for transparency

const synthesis = {
  sharedFiles,
  conflicts: detectConflicts(completed, detailedResults),
  resolutions: [],
  allKeyFiles: [...new Set(completed.flatMap(r => r.key_files.split(';').filter(Boolean)))]
}
```
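`detectConflicts` is referenced above but not defined. One possible sketch, assuming a "conflict" means different angles attach different recommendations to the same key file; the naive string matching is an assumption for illustration:

```javascript
// Naive conflict detector: collect recommendations that mention a key file,
// then flag the file when distinct recommendations target it.
function detectConflicts(completedRows, detailedResults) {
  const byFile = {}
  completedRows.forEach(r => {
    const recs = (detailedResults[r.id] && detailedResults[r.id].details &&
                  detailedResults[r.id].details.recommendations) || []
    r.key_files.split(';').filter(Boolean).forEach(f => {
      recs.filter(rec => rec.includes(f)).forEach(rec => {
        (byFile[f] = byFile[f] || []).push({ id: r.id, rec })
      })
    })
  })
  return Object.entries(byFile)
    .filter(([, entries]) => new Set(entries.map(e => e.rec)).size > 1)
    .map(([file, entries]) => ({ file, entries }))
}

// Hypothetical data shaped like explore.csv rows + explore-results JSONs:
const rows = [
  { id: 'E1', angle: 'architecture', key_files: 'src/auth.ts' },
  { id: 'E2', angle: 'security', key_files: 'src/auth.ts' }
]
const details = {
  E1: { details: { recommendations: ['refactor src/auth.ts into services'] } },
  E2: { details: { recommendations: ['extend src/auth.ts with validation'] } }
}
console.log(detectConflicts(rows, details))
// → one conflict entry for src/auth.ts
```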

### 3.3 Generate tasks.csv

Decompose into execution tasks based on synthesized exploration:

```javascript
// Task decomposition rules:
// 1. Group by feature/module (not per-file)
// 2. Each description is self-contained (agent sees only its row + prev_context)
// 3. deps only when task B requires task A's output
// 4. context_from links relevant explore rows (E*) and predecessor tasks (T*)
// 5. Prefer parallel (minimize deps)
// 6. Use exploration findings: key_files → target files, patterns → references,
//    integration_points → dependency relationships, constraints → included in description

const tasks = []
// Claude decomposes requirement using exploration synthesis
// Example:
// tasks.push({ id: 'T1', title: 'Setup types', description: '...', deps: '', context_from: 'E1;E2' })
// tasks.push({ id: 'T2', title: 'Implement core', description: '...', deps: 'T1', context_from: 'E1;E2;T1' })
// tasks.push({ id: 'T3', title: 'Add tests', description: '...', deps: 'T2', context_from: 'E3;T2' })

// Compute waves
const waves = computeWaves(tasks)
tasks.forEach(t => { t.wave = waves[t.id] })

// Write tasks.csv
const header = 'id,title,description,deps,context_from,wave,status,findings,files_modified,error'
const rows = tasks.map(t =>
  `"${t.id}","${escCSV(t.title)}","${escCSV(t.description)}","${t.deps}","${t.context_from}",${t.wave},"pending","","",""`
)

Write(`${sessionFolder}/tasks.csv`, [header, ...rows].join('\n'))
```

### 3.4 User Confirmation

```javascript
if (!AUTO_YES) {
  const maxWave = Math.max(...tasks.map(t => t.wave))

  console.log(`
## Execution Plan

Explore: ${completed.length} angles completed
Conflicts resolved: ${synthesis.conflicts.length}
Tasks: ${tasks.length} across ${maxWave} waves

${Array.from({length: maxWave}, (_, i) => i + 1).map(w => {
  const wt = tasks.filter(t => t.wave === w)
  return `### Wave ${w} (${wt.length} tasks, concurrent)
${wt.map(t => ` - [${t.id}] ${t.title} (from: ${t.context_from})`).join('\n')}`
}).join('\n')}
`)

  AskUserQuestion({
    questions: [{
      question: `Proceed with ${tasks.length} tasks across ${maxWave} waves?`,
      header: "Confirm",
      multiSelect: false,
      options: [
        { label: "Execute", description: "Proceed with wave execution" },
        { label: "Modify", description: `Edit ${sessionFolder}/tasks.csv then re-run` },
        { label: "Cancel", description: "Abort" }
      ]
    }]
  })
}
```

---

## Phase 4: Wave Execute

Execute tasks from tasks.csv in wave order, with prev_context built from both explore.csv and tasks.csv.

### 4.1 Wave Loop

```javascript
const exploreCSV = parseCSV(Read(`${sessionFolder}/explore.csv`))
const failedIds = new Set()
const skippedIds = new Set()

let tasksCSV = parseCSV(Read(`${sessionFolder}/tasks.csv`))
const maxWave = Math.max(...tasksCSV.map(r => parseInt(r.wave)))

for (let wave = 1; wave <= maxWave; wave++) {
  // Re-read master CSV (updated by previous wave)
  tasksCSV = parseCSV(Read(`${sessionFolder}/tasks.csv`))

  const waveRows = tasksCSV.filter(r =>
    parseInt(r.wave) === wave && r.status === 'pending'
  )
  if (waveRows.length === 0) continue

  // Skip on failed dependencies (cascade)
  const validRows = []
  for (const row of waveRows) {
    const deps = (row.deps || '').split(';').filter(Boolean)
    if (deps.some(d => failedIds.has(d) || skippedIds.has(d))) {
      skippedIds.add(row.id)
      row.status = 'skipped'
      row.error = 'Dependency failed/skipped'
      continue
    }
    validRows.push(row)
  }

  if (validRows.length === 0) {
    writeCSV(`${sessionFolder}/tasks.csv`, tasksCSV)
    continue
  }

  // Build prev_context for each row from explore.csv + tasks.csv
  validRows.forEach(row => {
    row._prev_context = buildPrevContext(row.context_from, exploreCSV, tasksCSV)
  })

  // ★ Spawn ALL task agents in a SINGLE message → parallel execution
  const results = validRows.map(row =>
    Task({
      subagent_type: "code-developer",
      run_in_background: false,
      description: row.title,
      prompt: buildExecutePrompt(row, requirement, sessionFolder)
    })
  )

  // Collect results → update tasks.csv
  validRows.forEach((row, i) => {
    const resultPath = `${sessionFolder}/task-results/${row.id}.json`
    if (fileExists(resultPath)) {
      const result = JSON.parse(Read(resultPath))
      row.status = result.status || 'completed'
      row.findings = truncate(result.findings, 500)
      row.files_modified = Array.isArray(result.files_modified)
        ? result.files_modified.join(';')
        : (result.files_modified || '')
      row.error = result.error || ''
    } else {
      row.status = 'completed'
      row.findings = truncate(results[i], 500)
    }

    if (row.status === 'failed') failedIds.add(row.id)
    delete row._prev_context // runtime-only, don't persist
  })

  writeCSV(`${sessionFolder}/tasks.csv`, tasksCSV)
}
```

### 4.2 prev_context Builder

The key function linking exploration context to execution:

```javascript
function buildPrevContext(contextFrom, exploreCSV, tasksCSV) {
  if (!contextFrom) return 'No previous context available'

  const ids = contextFrom.split(';').filter(Boolean)
  const entries = []

  ids.forEach(id => {
    if (id.startsWith('E')) {
      // ← Look up in explore.csv (cross-phase link)
      const row = exploreCSV.find(r => r.id === id)
      if (row && row.status === 'completed' && row.findings) {
        entries.push(`[Explore ${row.angle}] ${row.findings}`)
        if (row.key_files) entries.push(`  Key files: ${row.key_files}`)
      }
    } else if (id.startsWith('T')) {
      // ← Look up in tasks.csv (same-phase link)
      const row = tasksCSV.find(r => r.id === id)
      if (row && row.status === 'completed' && row.findings) {
        entries.push(`[Task ${row.id}: ${row.title}] ${row.findings}`)
        if (row.files_modified) entries.push(`  Modified: ${row.files_modified}`)
      }
    }
  })

  return entries.length > 0 ? entries.join('\n') : 'No previous context available'
}
```
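A worked example of the resolution above, with `buildPrevContext` copied verbatim so the demo is self-contained; the row data is hypothetical:

```javascript
// Copied verbatim from the snippet above for a runnable demo.
function buildPrevContext(contextFrom, exploreCSV, tasksCSV) {
  if (!contextFrom) return 'No previous context available'
  const ids = contextFrom.split(';').filter(Boolean)
  const entries = []
  ids.forEach(id => {
    if (id.startsWith('E')) {
      const row = exploreCSV.find(r => r.id === id)
      if (row && row.status === 'completed' && row.findings) {
        entries.push(`[Explore ${row.angle}] ${row.findings}`)
        if (row.key_files) entries.push(`  Key files: ${row.key_files}`)
      }
    } else if (id.startsWith('T')) {
      const row = tasksCSV.find(r => r.id === id)
      if (row && row.status === 'completed' && row.findings) {
        entries.push(`[Task ${row.id}: ${row.title}] ${row.findings}`)
        if (row.files_modified) entries.push(`  Modified: ${row.files_modified}`)
      }
    }
  })
  return entries.length > 0 ? entries.join('\n') : 'No previous context available'
}

// Hypothetical CSV rows after Phase 2 and wave 1 of Phase 4:
const exploreCSV = [{ id: 'E1', angle: 'architecture', status: 'completed',
  findings: 'Services live under src/services', key_files: 'src/services/index.ts' }]
const tasksCSV = [{ id: 'T1', title: 'Setup types', status: 'completed',
  findings: 'Added shared types', files_modified: 'src/types.ts' }]

console.log(buildPrevContext('E1;T1', exploreCSV, tasksCSV))
// [Explore architecture] Services live under src/services
//   Key files: src/services/index.ts
// [Task T1: Setup types] Added shared types
//   Modified: src/types.ts
```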

### 4.3 Execute Agent Prompt

```javascript
function buildExecutePrompt(row, requirement, sessionFolder) {
  return `## Task: ${row.title}

**ID**: ${row.id}
**Goal**: ${requirement}

## Description
${row.description}

## Previous Context (from exploration and predecessor tasks)
${row._prev_context}

## Discovery Board
Read shared discoveries first: ${sessionFolder}/discoveries.ndjson (if exists)
After execution, append any discoveries:
echo '{"ts":"<ISO>","worker":"${row.id}","type":"<type>","data":{...}}' >> ${sessionFolder}/discoveries.ndjson

## Instructions
1. Read the relevant files identified in the context above
2. Implement changes described in the task description
3. Ensure changes are consistent with exploration findings
4. Test changes if applicable

## Output
Write results to: ${sessionFolder}/task-results/${row.id}.json

{
  "status": "completed",
  "findings": "What was done (max 500 chars)",
  "files_modified": ["file1.ts", "file2.ts"],
  "error": ""
}`
}
```

---

## Phase 5: Aggregate

### 5.1 Generate Results

```javascript
const finalTasks = parseCSV(Read(`${sessionFolder}/tasks.csv`))
const exploreCSV = parseCSV(Read(`${sessionFolder}/explore.csv`))

Bash(`cp "${sessionFolder}/tasks.csv" "${sessionFolder}/results.csv"`)

const completed = finalTasks.filter(r => r.status === 'completed')
const failed = finalTasks.filter(r => r.status === 'failed')
const skipped = finalTasks.filter(r => r.status === 'skipped')
const maxWave = Math.max(...finalTasks.map(r => parseInt(r.wave)))
```

### 5.2 Generate context.md

```javascript
const contextMd = `# Wave Plan Results

**Requirement**: ${requirement}
**Session**: ${sessionId}
**Timestamp**: ${getUtc8ISOString()}

## Summary

| Metric | Count |
|--------|-------|
| Explore Angles | ${exploreCSV.length} |
| Total Tasks | ${finalTasks.length} |
| Completed | ${completed.length} |
| Failed | ${failed.length} |
| Skipped | ${skipped.length} |
| Waves | ${maxWave} |

## Exploration Results

${exploreCSV.map(e => `### ${e.id}: ${e.angle} (${e.status})
${e.findings || 'N/A'}
Key files: ${e.key_files || 'none'}`).join('\n\n')}

## Task Results

${finalTasks.map(t => `### ${t.id}: ${t.title} (${t.status})
- Context from: ${t.context_from || 'none'}
- Wave: ${t.wave}
- Findings: ${t.findings || 'N/A'}
- Files: ${t.files_modified || 'none'}
${t.error ? `- Error: ${t.error}` : ''}`).join('\n\n')}

## All Modified Files

${[...new Set(finalTasks.flatMap(t =>
  (t.files_modified || '').split(';')
).filter(Boolean))].map(f => '- ' + f).join('\n') || 'None'}
`

Write(`${sessionFolder}/context.md`, contextMd)
```

### 5.3 Summary & Next Steps

```javascript
console.log(`
## Wave Plan Complete

Session: ${sessionFolder}
Explore: ${exploreCSV.filter(r => r.status === 'completed').length}/${exploreCSV.length} angles
Tasks: ${completed.length}/${finalTasks.length} completed, ${failed.length} failed, ${skipped.length} skipped
Waves: ${maxWave}

Files:
- explore.csv — exploration state
- tasks.csv — execution state
- results.csv — final results
- context.md — full report
- discoveries.ndjson — shared discoveries
`)

if (!AUTO_YES && failed.length > 0) {
  AskUserQuestion({
    questions: [{
      question: `${failed.length} tasks failed. Next action?`,
      header: "Next Step",
      multiSelect: false,
      options: [
        { label: "Retry Failed", description: "Reset failed + skipped, re-execute Phase 4" },
        { label: "View Report", description: "Display context.md" },
        { label: "Done", description: "Complete session" }
      ]
    }]
  })
  // If Retry: reset failed/skipped status to pending, re-run Phase 4
}
```

---

## Utilities

### Wave Computation (Kahn's BFS)

```javascript
function computeWaves(tasks) {
  const inDegree = {}, adj = {}, depth = {}
  tasks.forEach(t => { inDegree[t.id] = 0; adj[t.id] = []; depth[t.id] = 1 })

  tasks.forEach(t => {
    const deps = (t.deps || '').split(';').filter(Boolean)
    deps.forEach(dep => {
      if (adj[dep]) { adj[dep].push(t.id); inDegree[t.id]++ }
    })
  })

  const queue = Object.keys(inDegree).filter(id => inDegree[id] === 0)

  while (queue.length > 0) {
    const current = queue.shift()
    adj[current].forEach(next => {
      depth[next] = Math.max(depth[next], depth[current] + 1)
      inDegree[next]--
      if (inDegree[next] === 0) queue.push(next)
    })
  }

  if (Object.values(inDegree).some(d => d > 0)) {
    throw new Error('Circular dependency detected')
  }

  return depth // { taskId: waveNumber }
}
```
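For a plan like the Phase 3 example (T1 → T2 → T3, plus an independent T4), the wave computation assigns numbers as shown below; the function is condensed here only so the demo runs standalone:

```javascript
// Condensed copy of computeWaves above, behavior unchanged.
function computeWaves(tasks) {
  const inDegree = {}, adj = {}, depth = {}
  tasks.forEach(t => { inDegree[t.id] = 0; adj[t.id] = []; depth[t.id] = 1 })
  tasks.forEach(t => (t.deps || '').split(';').filter(Boolean).forEach(dep => {
    if (adj[dep]) { adj[dep].push(t.id); inDegree[t.id]++ }
  }))
  const queue = Object.keys(inDegree).filter(id => inDegree[id] === 0)
  while (queue.length > 0) {
    const current = queue.shift()
    adj[current].forEach(next => {
      depth[next] = Math.max(depth[next], depth[current] + 1)
      if (--inDegree[next] === 0) queue.push(next)
    })
  }
  if (Object.values(inDegree).some(d => d > 0)) throw new Error('Circular dependency detected')
  return depth
}

const waves = computeWaves([
  { id: 'T1', deps: '' },
  { id: 'T2', deps: 'T1' },
  { id: 'T3', deps: 'T2' },
  { id: 'T4', deps: '' }   // independent → runs in wave 1 alongside T1
])
console.log(waves)          // → { T1: 1, T2: 2, T3: 3, T4: 1 }
```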

### CSV Helpers

```javascript
function escCSV(s) { return String(s || '').replace(/"/g, '""') }

// Split one CSV line into raw field values, honoring quoted fields and "" escapes.
// (Referenced by parseCSV below; it was missing from the original helpers.)
function parseCSVLine(line) {
  const values = []
  let cur = '', inQuotes = false
  for (let i = 0; i < line.length; i++) {
    const ch = line[i]
    if (inQuotes) {
      if (ch === '"' && line[i + 1] === '"') { cur += '"'; i++ } // escaped quote
      else if (ch === '"') inQuotes = false
      else cur += ch
    } else if (ch === '"') inQuotes = true
    else if (ch === ',') { values.push(cur); cur = '' }
    else cur += ch
  }
  values.push(cur)
  return values
}

function parseCSV(content) {
  const lines = content.trim().split('\n')
  const header = lines[0].split(',').map(h => h.replace(/"/g, '').trim())
  return lines.slice(1).filter(l => l.trim()).map(line => {
    const values = parseCSVLine(line)
    const row = {}
    header.forEach((col, i) => { row[col] = values[i] || '' })
    return row
  })
}

function writeCSV(path, rows) {
  if (rows.length === 0) return
  // Exclude runtime-only columns (prefixed with _)
  const cols = Object.keys(rows[0]).filter(k => !k.startsWith('_'))
  const header = cols.join(',')
  const lines = rows.map(r =>
    cols.map(c => `"${escCSV(r[c])}"`).join(',')
  )
  Write(path, [header, ...lines].join('\n'))
}

function truncate(s, max) {
  s = String(s || '')
  return s.length > max ? s.substring(0, max - 3) + '...' : s
}
```

---

## Discovery Board Protocol

Shared `discoveries.ndjson` — append-only NDJSON accessible to all agents across all phases.

```jsonl
{"ts":"...","worker":"E1","type":"code_pattern","data":{"name":"repo-pattern","file":"src/repos/Base.ts"}}
{"ts":"...","worker":"T2","type":"integration_point","data":{"file":"src/auth/index.ts","exports":["auth"]}}
```

**Types**: `code_pattern`, `integration_point`, `convention`, `blocker`, `tech_stack`
**Rules**: Read first → write immediately → deduplicate → append-only
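The `echo` one-liner in the execute prompt can break when discovery data itself contains quotes; building the record with `JSON.stringify` avoids hand-escaping. A sketch (the helper name is illustrative, not part of the skill):

```javascript
// Build one NDJSON discovery record; JSON.stringify handles all quoting.
function discoveryLine(worker, type, data) {
  return JSON.stringify({ ts: new Date().toISOString(), worker, type, data })
}

const line = discoveryLine('T2', 'integration_point',
  { file: 'src/auth/index.ts', exports: ['auth'] })
// Append `line + '\n'` to discoveries.ndjson; each record stays on one line.
```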

---

## Error Handling

| Error | Resolution |
|-------|------------|
| Explore agent failure | Mark as failed in explore.csv, exclude from planning |
| Execute agent failure | Mark as failed, skip dependents (cascade) |
| Circular dependency | Abort wave computation, report the cycle |
| All explores failed | Fallback: plan directly from the requirement |
| CSV parse error | Re-validate the file format |
| discoveries.ndjson corrupt | Ignore malformed lines |

---

## Core Rules

1. **Wave Order is Sacred**: Never execute wave N before wave N-1 completes
2. **CSV is Source of Truth**: Read the master CSV before each wave, write it back after
3. **Context via CSV**: prev_context is built from CSV findings, not from memory
4. **E* ↔ T* Linking**: tasks.csv `context_from` references explore.csv rows for cross-phase context
5. **Skip on Failure**: Failed dep → skip dependent (cascade)
6. **Discovery Board Append-Only**: Never clear or modify discoveries.ndjson
7. **Explore Before Execute**: Phase 2 completes before Phase 4 starts
8. **DO NOT STOP**: Keep executing until all waves complete or all remaining tasks are skipped