claude-code-workflow 7.2.21 → 7.2.22
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.codex/skills/analyze-with-file/SKILL.md +235 -497
- package/.codex/skills/brainstorm-with-file/SKILL.md +661 -751
- package/.codex/skills/csv-wave-pipeline/SKILL.md +192 -198
- package/package.json +1 -1
- package/.codex/skills/collaborative-plan-with-file/SKILL.md +0 -830
- package/.codex/skills/unified-execute-with-file/SKILL.md +0 -797

package/.codex/skills/csv-wave-pipeline/SKILL.md
CHANGED

@@ -25,9 +25,6 @@ $csv-wave-pipeline --continue "auth-20260228"
 - `-c, --concurrency N`: Max concurrent agents within each wave (default: 4)
 - `--continue`: Resume existing session
 
-**Output Directory**: `.workflow/.csv-wave/{session-id}/`
-**Core Output**: `tasks.csv` (master state) + `results.csv` (final) + `discoveries.ndjson` (shared exploration) + `context.md` (human-readable report)
-
 ---
 
 ## Overview
@@ -37,35 +34,75 @@ Wave-based batch execution using `spawn_agents_on_csv` with **cross-wave context
 **Core workflow**: Decompose → Compute Waves → Execute Wave-by-Wave → Aggregate
 
 ```
-
-
-
-
-
-
-
-
-
-│
-│
-│
-│
-│
-
-
-
-
-
-
-
-
-
-
-
-
-
+Phase 1: Requirement → CSV
+├─ Parse requirement into subtasks (3-10 tasks)
+├─ Identify dependencies (deps column)
+├─ Compute dependency waves (topological sort → depth grouping)
+├─ Generate tasks.csv with wave column
+└─ User validates task breakdown (skip if -y)
+
+Phase 2: Wave Execution Engine
+├─ For each wave (1..N):
+│  ├─ Build wave CSV (filter rows for this wave)
+│  ├─ Inject previous wave findings into prev_context column
+│  ├─ spawn_agents_on_csv(wave CSV)
+│  ├─ Collect results, merge into master tasks.csv
+│  └─ Check: any failed? → skip dependents or retry
+└─ discoveries.ndjson shared across all waves (append-only)
+
+Phase 3: Results Aggregation
+├─ Export final results.csv
+├─ Generate context.md with all findings
+├─ Display summary: completed/failed/skipped per wave
+└─ Offer: view results | retry failed | done
+```
+
+### Context Propagation
+
+Two context channels flow across waves:
+
+1. **CSV findings** (structured): `context_from` column → `prev_context` injection — task-specific directed context
+2. **NDJSON discoveries** (broadcast): `discoveries.ndjson` — general exploration findings available to all
+
+```
+Wave 1 agents:
+├─ Execute tasks (no prev_context)
+├─ Write findings to report_agent_job_result
+└─ Append discoveries to discoveries.ndjson
+↓ merge results into master CSV
+Wave 2 agents:
+├─ Read discoveries.ndjson (exploration sharing)
+├─ Read prev_context column (wave 1 findings from context_from)
+├─ Execute tasks with full upstream context
+├─ Write findings to report_agent_job_result
+└─ Append new discoveries to discoveries.ndjson
+↓ merge results into master CSV
+Wave 3+ agents: same pattern, accumulated context from all prior waves
+```
+
+---
+
+## Session & Output Structure
+
+```
+.workflow/.csv-wave/{session-id}/
+├── tasks.csv             # Master state (updated per wave)
+├── results.csv           # Final results export (Phase 3)
+├── discoveries.ndjson    # Shared discovery board (all agents, append-only)
+├── context.md            # Human-readable report (Phase 3)
+├── wave-{N}.csv          # Temporary per-wave input (cleaned up after merge)
+└── wave-{N}-results.csv  # Temporary per-wave output (cleaned up after merge)
 ```
 
+| File | Purpose | Lifecycle |
+|------|---------|-----------|
+| `tasks.csv` | Master state — all tasks with status/findings | Updated after each wave |
+| `wave-{N}.csv` | Per-wave input with prev_context column | Created before wave, deleted after |
+| `wave-{N}-results.csv` | Per-wave output from spawn_agents_on_csv | Created during wave, deleted after merge |
+| `results.csv` | Final export of all task results | Created in Phase 3 |
+| `discoveries.ndjson` | Shared exploration board across all agents | Append-only, carries across waves |
+| `context.md` | Human-readable execution report | Created in Phase 3 |
+
 ---
 
 ## CSV Schema
@@ -104,7 +141,7 @@ id,title,description,test,acceptance_criteria,scope,hints,execution_directives,d
 
 ### Per-Wave CSV (Temporary)
 
-Each wave generates a temporary `wave-{N}.csv` with an extra `prev_context` column:
+Each wave generates a temporary `wave-{N}.csv` with an extra `prev_context` column built from `context_from` by looking up completed tasks' `findings` in the master CSV:
 
 ```csv
 id,title,description,test,acceptance_criteria,scope,hints,execution_directives,deps,context_from,wave,prev_context
@@ -112,32 +149,37 @@ id,title,description,test,acceptance_criteria,scope,hints,execution_directives,d
 "3","Add JWT tokens","Implement JWT","Unit test: sign/verify round-trip; Edge test: expired token returns 401","generateToken() returns valid JWT; verifyToken() rejects expired/tampered tokens","src/auth/jwt/**","Use jsonwebtoken library; Set default expiry 1h || src/config/auth.ts","Ensure tsc --noEmit passes","1","1","2","[Task 1] Created auth/ with index.ts and types.ts"
 ```
 
-The `prev_context` column is built from `context_from` by looking up completed tasks' `findings` in the master CSV.
-
 ---
 
-## 
+## Shared Discovery Board Protocol
 
-| File | Purpose | Lifecycle |
-|------|---------|-----------|
-| `tasks.csv` | Master state — all tasks with status/findings | Updated after each wave |
-| `wave-{N}.csv` | Per-wave input (temporary) | Created before wave, deleted after |
-| `results.csv` | Final export of all task results | Created in Phase 3 |
-| `discoveries.ndjson` | Shared exploration board across all agents | Append-only, carries across waves |
-| `context.md` | Human-readable execution report | Created in Phase 3 |
+All agents across all waves share `discoveries.ndjson`. This eliminates redundant codebase exploration.
 
-
+**Lifecycle**: Created by the first agent to write a discovery. Carries over across waves — never cleared. Agents append via `echo '...' >> discoveries.ndjson`.
 
-
+**Format**: NDJSON, each line is a self-contained JSON:
 
+```jsonl
+{"ts":"2026-02-28T10:00:00+08:00","worker":"1","type":"code_pattern","data":{"name":"repository-pattern","file":"src/repos/Base.ts","description":"Abstract CRUD repository"}}
+{"ts":"2026-02-28T10:01:00+08:00","worker":"2","type":"integration_point","data":{"file":"src/auth/index.ts","description":"Auth module entry","exports":["authenticate","authorize"]}}
 ```
-
-
-
-
-
-
-
+
+**Discovery Types**:
+
+| type | Dedup Key | Description |
+|------|-----------|-------------|
+| `code_pattern` | `data.name` | Reusable code pattern found |
+| `integration_point` | `data.file` | Module connection point |
+| `convention` | singleton | Code style conventions |
+| `blocker` | `data.issue` | Blocking issue encountered |
+| `tech_stack` | singleton | Project technology stack |
+| `test_command` | singleton | Test commands discovered |
+
+**Protocol Rules**:
+1. Read board before own exploration → skip covered areas
+2. Write discoveries immediately via `echo >>` → don't batch
+3. Deduplicate — check existing entries; skip if same type + dedup key exists
+4. Append-only — never modify or delete existing lines
 
 ---
 
@@ -154,17 +196,19 @@ const continueMode = $ARGUMENTS.includes('--continue')
 const concurrencyMatch = $ARGUMENTS.match(/(?:--concurrency|-c)\s+(\d+)/)
 const maxConcurrency = concurrencyMatch ? parseInt(concurrencyMatch[1]) : 4
 
-// Clean requirement text (remove flags)
+// Clean requirement text (remove flags — word-boundary safe)
 const requirement = $ARGUMENTS
-  .replace(/--yes
+  .replace(/--yes|(?:^|\s)-y(?=\s|$)|--continue|--concurrency\s+\d+|-c\s+\d+/g, '')
   .trim()
 
+let sessionId, sessionFolder
+
 const slug = requirement.toLowerCase()
   .replace(/[^a-z0-9\u4e00-\u9fa5]+/g, '-')
   .substring(0, 40)
 const dateStr = getUtc8ISOString().substring(0, 10).replace(/-/g, '')
-
-
+sessionId = `cwp-${slug}-${dateStr}`
+sessionFolder = `.workflow/.csv-wave/${sessionId}`
 
 // Continue mode: find existing session
 if (continueMode) {
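
The word-boundary-safe flag stripping added in this hunk can be exercised in isolation. The wrapper function `stripFlags` is hypothetical (the skill applies the regex inline to `$ARGUMENTS`); the regex itself is copied verbatim from the diff:

```javascript
// Strip CLI flags from the requirement text without mangling words
// that merely contain "-y" or "-c" (e.g. "my-yaml").
function stripFlags(args) {
  return args
    .replace(/--yes|(?:^|\s)-y(?=\s|$)|--continue|--concurrency\s+\d+|-c\s+\d+/g, '')
    .trim()
}

stripFlags('implement auth --concurrency 4 -y')  // → 'implement auth'
stripFlags('fix my-yaml parser')                 // → 'fix my-yaml parser' (untouched)
```

The `(?:^|\s)-y(?=\s|$)` branch is what makes the removal word-boundary safe: a bare `-y` is only stripped when it stands alone as a token.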
@@ -181,6 +225,60 @@ if (continueMode) {
 Bash(`mkdir -p ${sessionFolder}`)
 ```
 
+### CSV Utility Functions
+
+```javascript
+// Escape a value for CSV (double internal quotes; caller wraps the value in quotes)
+function csvEscape(value) {
+  const str = String(value ?? '')
+  return str.replace(/"/g, '""')
+}
+
+// Parse CSV string into array of objects (header row → keys)
+function parseCsv(csvString) {
+  const lines = csvString.trim().split('\n')
+  if (lines.length < 2) return []
+  const headers = parseCsvLine(lines[0]).map(h => h.replace(/^"|"$/g, ''))
+  return lines.slice(1).map(line => {
+    const cells = parseCsvLine(line).map(c => c.replace(/^"|"$/g, '').replace(/""/g, '"'))
+    const obj = {}
+    headers.forEach((h, i) => { obj[h] = cells[i] ?? '' })
+    return obj
+  })
+}
+
+// Parse a single CSV line respecting quoted fields that contain commas
+function parseCsvLine(line) {
+  const cells = []
+  let current = ''
+  let inQuotes = false
+  for (let i = 0; i < line.length; i++) {
+    const ch = line[i]
+    if (inQuotes) {
+      if (ch === '"' && line[i + 1] === '"') {
+        current += '"'
+        i++ // skip escaped quote
+      } else if (ch === '"') {
+        inQuotes = false
+      } else {
+        current += ch
+      }
+    } else {
+      if (ch === '"') {
+        inQuotes = true
+      } else if (ch === ',') {
+        cells.push(current)
+        current = ''
+      } else {
+        current += ch
+      }
+    }
+  }
+  cells.push(current)
+  return cells
+}
+```
+
 ---
 
 ### Phase 1: Requirement → CSV
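
A quick round-trip check of the CSV utilities added above. The functions are copied verbatim from the hunk so this snippet runs standalone; only the sample data is ours:

```javascript
function csvEscape(value) {
  const str = String(value ?? '')
  return str.replace(/"/g, '""')
}

function parseCsvLine(line) {
  const cells = []
  let current = ''
  let inQuotes = false
  for (let i = 0; i < line.length; i++) {
    const ch = line[i]
    if (inQuotes) {
      if (ch === '"' && line[i + 1] === '"') { current += '"'; i++ }
      else if (ch === '"') inQuotes = false
      else current += ch
    } else {
      if (ch === '"') inQuotes = true
      else if (ch === ',') { cells.push(current); current = '' }
      else current += ch
    }
  }
  cells.push(current)
  return cells
}

function parseCsv(csvString) {
  const lines = csvString.trim().split('\n')
  if (lines.length < 2) return []
  const headers = parseCsvLine(lines[0]).map(h => h.replace(/^"|"$/g, ''))
  return lines.slice(1).map(line => {
    const cells = parseCsvLine(line).map(c => c.replace(/^"|"$/g, '').replace(/""/g, '"'))
    const obj = {}
    headers.forEach((h, i) => { obj[h] = cells[i] ?? '' })
    return obj
  })
}

// A quoted field with an escaped quote survives the round trip:
const rows = parseCsv('id,title\n"1","Fix ""login"" bug"')
// rows[0] is { id: '1', title: 'Fix "login" bug' }
```

Note that `parseCsvLine` already unescapes `""` inside quoted fields, so it handles the `csvEscape` output when the caller wraps values in quotes, as the tasks.csv examples do.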
@@ -222,11 +320,28 @@ REQUIREMENT: ${requirement}" --tool gemini --mode analysis --rule planning-break
 // Parse JSON from CLI output → decomposedTasks[]
 ```
 
-2. **Compute Waves** (
+2. **Compute Waves** (Kahn's BFS topological sort with depth tracking)
 
 ```javascript
+// Algorithm:
+// 1. Build in-degree map and adjacency list from deps
+// 2. Enqueue all tasks with in-degree 0 at wave 1
+// 3. BFS: for each dequeued task at wave W, for each dependent D:
+//    - Decrement D's in-degree
+//    - D.wave = max(D.wave, W + 1)
+//    - If D's in-degree reaches 0, enqueue D
+// 4. Any task without wave assignment → circular dependency error
+//
+// Wave properties:
+//   Wave 1: no dependencies — fully independent
+//   Wave N: all deps in waves 1..(N-1) — guaranteed completed before start
+//   Within a wave: tasks are independent → safe for concurrent execution
+//
+// Example:
+//   A(no deps)→W1, B(no deps)→W1, C(deps:A)→W2, D(deps:A,B)→W2, E(deps:C,D)→W3
+//   Wave 1: [A,B] concurrent → Wave 2: [C,D] concurrent → Wave 3: [E]
+
 function computeWaves(tasks) {
-  // Build adjacency: task.deps → predecessors
   const taskMap = new Map(tasks.map(t => [t.id, t]))
   const inDegree = new Map(tasks.map(t => [t.id, 0]))
   const adjList = new Map(tasks.map(t => [t.id, []]))
@@ -267,7 +382,7 @@ REQUIREMENT: ${requirement}" --tool gemini --mode analysis --rule planning-break
     }
   }
 
-// Detect cycles
+  // Detect cycles
   for (const task of tasks) {
     if (!waveAssignment.has(task.id)) {
       throw new Error(`Circular dependency detected involving task ${task.id}`)
@@ -344,10 +459,7 @@ REQUIREMENT: ${requirement}" --tool gemini --mode analysis --rule planning-break
 }
 ```
 
-**Success Criteria**:
-- tasks.csv created with valid schema and wave assignments
-- No circular dependencies
-- User approved (or AUTO_YES)
+**Success Criteria**: tasks.csv created with valid schema and wave assignments, no circular dependencies, user approved (or AUTO_YES).
 
 ---
 
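
Since the diff shows `computeWaves` only in fragments, here is a self-contained sketch of the algorithm its comments describe. The shape `{id, deps: []}` is an assumption for illustration — the skill itself stores `deps` as a `;`-separated CSV string:

```javascript
// Kahn's BFS topological sort with depth (wave) tracking.
function computeWaves(tasks) {
  const inDegree = new Map(tasks.map(t => [t.id, t.deps.length]))
  const adjList = new Map(tasks.map(t => [t.id, []]))
  for (const t of tasks) for (const d of t.deps) adjList.get(d).push(t.id)

  const waveAssignment = new Map()
  let queue = tasks.filter(t => t.deps.length === 0).map(t => t.id)
  queue.forEach(id => waveAssignment.set(id, 1)) // in-degree 0 → wave 1

  while (queue.length) {
    const next = []
    for (const id of queue) {
      const wave = waveAssignment.get(id)
      for (const dep of adjList.get(id)) {
        // A dependent runs one wave after its latest-finishing dependency.
        waveAssignment.set(dep, Math.max(waveAssignment.get(dep) ?? 1, wave + 1))
        inDegree.set(dep, inDegree.get(dep) - 1)
        if (inDegree.get(dep) === 0) next.push(dep)
      }
    }
    queue = next
  }

  // Any task never dequeued sits on a cycle.
  for (const t of tasks) {
    if (!waveAssignment.has(t.id)) {
      throw new Error(`Circular dependency detected involving task ${t.id}`)
    }
  }
  return waveAssignment
}
```

On the A–E example from the comments this yields waves A:1, B:1, C:2, D:2, E:3, and a two-task cycle throws.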
@@ -378,7 +490,6 @@ REQUIREMENT: ${requirement}" --tool gemini --mode analysis --rule planning-break
   const deps = task.deps.split(';').filter(Boolean)
   if (deps.some(d => failedIds.has(d) || skippedIds.has(d))) {
     skippedIds.add(task.id)
-    // Update master CSV: mark as skipped
     updateMasterCsvRow(sessionFolder, task.id, {
       status: 'skipped',
       error: 'Dependency failed or skipped'
@@ -394,7 +505,7 @@ REQUIREMENT: ${requirement}" --tool gemini --mode analysis --rule planning-break
     continue
   }
 
-  // 4. Build prev_context for each task
+  // 4. Build prev_context for each task (from context_from → master CSV findings)
   for (const task of executableTasks) {
     const contextIds = task.context_from.split(';').filter(Boolean)
     const prevFindings = contextIds
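
Step 4's derivation can be illustrated with a hypothetical helper — `buildPrevContext` is our name, not the skill's, but the `[Task N] findings` output format matches the per-wave CSV example earlier in the diff:

```javascript
// Look up each context_from id in the master rows and join completed findings.
function buildPrevContext(task, masterRows) {
  const byId = new Map(masterRows.map(r => [r.id, r]))
  return task.context_from.split(';').filter(Boolean)
    .map(id => byId.get(id))
    .filter(r => r && r.status === 'completed' && r.findings)
    .map(r => `[Task ${r.id}] ${r.findings}`)
    .join('; ')
}

const master = [
  { id: '1', status: 'completed', findings: 'Created auth/ with index.ts and types.ts' },
  { id: '2', status: 'failed', findings: '' },
]
const example = buildPrevContext({ context_from: '1;2' }, master)
// → '[Task 1] Created auth/ with index.ts and types.ts' (failed task 2 contributes nothing)
```

Filtering on `status === 'completed'` is why prev_context is rebuilt from the master CSV each wave rather than carried in memory: skipped and failed predecessors silently drop out.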
@@ -465,8 +576,8 @@ REQUIREMENT: ${requirement}" --tool gemini --mode analysis --rule planning-break
     }
   }
 
-  // 8. Cleanup temporary wave
-  Bash(`rm -f "${sessionFolder}/wave-${wave}.csv"`)
+  // 8. Cleanup temporary wave CSVs
+  Bash(`rm -f "${sessionFolder}/wave-${wave}.csv" "${sessionFolder}/wave-${wave}-results.csv"`)
 
   console.log(` Wave ${wave} done: ${waveResults.filter(r => r.status === 'completed').length} completed, ${waveResults.filter(r => r.status === 'failed').length} failed`)
 }
@@ -535,6 +646,8 @@ REQUIREMENT: ${requirement}" --tool gemini --mode analysis --rule planning-break
 - \`integration_point\`: {file, description, exports[]} — module connection points
 - \`convention\`: {naming, imports, formatting} — code style conventions
 - \`blocker\`: {issue, severity, impact} — blocking issues encountered
+- \`tech_stack\`: {runtime, framework, language} — project technology stack
+- \`test_command\`: {command, scope, description} — test commands discovered
 
 ---
 
@@ -587,11 +700,7 @@ Otherwise set status to "failed" with details in error field.
 }
 ```
 
-**Success Criteria**:
-- All waves executed in order
-- Each wave's results merged into master CSV before next wave starts
-- Dependent tasks skipped when predecessor failed
-- discoveries.ndjson accumulated across all waves
+**Success Criteria**: All waves executed in order, each wave's results merged into master CSV before next wave starts, dependent tasks skipped when predecessor failed, discoveries.ndjson accumulated across all waves.
 
 ---
 
@@ -741,120 +850,7 @@ ${[...new Set(tasks.flatMap(t => (t.files_modified || '').split(';')).filter(Boo
 }
 ```
 
-**Success Criteria**:
-- results.csv exported
-- context.md generated
-- Summary displayed to user
-
----
-
-## Shared Discovery Board Protocol
-
-All agents across all waves share `discoveries.ndjson`. This eliminates redundant codebase exploration.
-
-**Lifecycle**:
-- Created by the first agent to write a discovery
-- Carries over across waves — never cleared
-- Agents append via `echo '...' >> discoveries.ndjson`
-
-**Format**: NDJSON, each line is a self-contained JSON:
-
-```jsonl
-{"ts":"2026-02-28T10:00:00+08:00","worker":"1","type":"code_pattern","data":{"name":"repository-pattern","file":"src/repos/Base.ts","description":"Abstract CRUD repository"}}
-{"ts":"2026-02-28T10:01:00+08:00","worker":"2","type":"integration_point","data":{"file":"src/auth/index.ts","description":"Auth module entry","exports":["authenticate","authorize"]}}
-```
-
-**Discovery Types**:
-
-| type | Dedup Key | Description |
-|------|-----------|-------------|
-| `code_pattern` | `data.name` | Reusable code pattern found |
-| `integration_point` | `data.file` | Module connection point |
-| `convention` | singleton | Code style conventions |
-| `blocker` | `data.issue` | Blocking issue encountered |
-| `tech_stack` | singleton | Project technology stack |
-| `test_command` | singleton | Test commands discovered |
-
-**Protocol Rules**:
-1. Read board before own exploration → skip covered areas
-2. Write discoveries immediately via `echo >>` → don't batch
-3. Deduplicate — check existing entries; skip if same type + dedup key exists
-4. Append-only — never modify or delete existing lines
-
----
-
-## Wave Computation Details
-
-### Algorithm
-
-Kahn's BFS topological sort with depth tracking:
-
-```
-Input: tasks[] with deps[]
-Output: waveAssignment (taskId → wave number)
-
-1. Build in-degree map and adjacency list from deps
-2. Enqueue all tasks with in-degree 0 at wave 1
-3. BFS: for each dequeued task at wave W:
-   - For each dependent task D:
-     - Decrement D's in-degree
-     - D.wave = max(D.wave, W + 1)
-     - If D's in-degree reaches 0, enqueue D
-4. Any task without wave assignment → circular dependency error
-```
-
-### Wave Properties
-
-- **Wave 1**: No dependencies — all tasks in wave 1 are fully independent
-- **Wave N**: All dependencies are in waves 1..(N-1) — guaranteed completed before wave N starts
-- **Within a wave**: Tasks are independent of each other → safe for concurrent execution
-
-### Example
-
-```
-Task A (no deps) → Wave 1
-Task B (no deps) → Wave 1
-Task C (deps: A) → Wave 2
-Task D (deps: A, B) → Wave 2
-Task E (deps: C, D) → Wave 3
-
-Execution:
-Wave 1: [A, B] ← concurrent
-Wave 2: [C, D] ← concurrent, sees A+B findings
-Wave 3: [E] ← sees A+B+C+D findings
-```
-
----
-
-## Context Propagation Flow
-
-```
-Wave 1 agents:
-├─ Execute tasks (no prev_context)
-├─ Write findings to report_agent_job_result
-└─ Append discoveries to discoveries.ndjson
-
-↓ merge results into master CSV
-
-Wave 2 agents:
-├─ Read discoveries.ndjson (exploration sharing)
-├─ Read prev_context column (wave 1 findings from context_from)
-├─ Execute tasks with full upstream context
-├─ Write findings to report_agent_job_result
-└─ Append new discoveries to discoveries.ndjson
-
-↓ merge results into master CSV
-
-Wave 3 agents:
-├─ Read discoveries.ndjson (accumulated from waves 1+2)
-├─ Read prev_context column (wave 1+2 findings from context_from)
-├─ Execute tasks
-└─ ...
-```
-
-**Two context channels**:
-1. **CSV findings** (structured): `context_from` column → `prev_context` injection — task-specific directed context
-2. **NDJSON discoveries** (broadcast): `discoveries.ndjson` — general exploration findings available to all
+**Success Criteria**: results.csv exported, context.md generated, summary displayed to user.
 
 ---
 
@@ -872,7 +868,9 @@ Wave 3 agents:
 
 ---
 
-## 
+## Rules & Best Practices
+
+### Core Rules
 
 1. **Start Immediately**: First action is session initialization, then Phase 1
 2. **Wave Order is Sacred**: Never execute wave N before wave N-1 completes and results are merged
@@ -880,22 +878,18 @@ Wave 3 agents:
 4. **Context Propagation**: prev_context built from master CSV, not from memory
 5. **Discovery Board is Append-Only**: Never clear, modify, or recreate discoveries.ndjson
 6. **Skip on Failure**: If a dependency failed, skip the dependent task (don't attempt)
-7. **Cleanup Temp Files**: Remove wave-{N}.csv after results are merged
+7. **Cleanup Temp Files**: Remove wave-{N}.csv and wave-{N}-results.csv after results are merged
 8. **DO NOT STOP**: Continuous execution until all waves complete or all remaining tasks are skipped
 
-
-
-## Best Practices
+### Task Design
 
-
-
-
-
-
-
----
+- **Granularity**: 3-10 tasks optimal; too many = overhead, too few = no parallelism benefit
+- **Minimize Cross-Wave Deps**: More tasks in wave 1 = more parallelism
+- **Specific Descriptions**: Agent sees only its CSV row + prev_context — make description self-contained
+- **Context From ≠ Deps**: `deps` = execution order constraint; `context_from` = information flow. A task can have `context_from` without `deps` (it just reads previous findings but doesn't require them to be done first in its wave)
+- **Concurrency Tuning**: `-c 1` for serial execution (maximum context sharing); `-c 8` for I/O-bound tasks
 
-
+### Scenario Recommendations
 
 | Scenario | Recommended Approach |
 |----------|---------------------|
@@ -903,4 +897,4 @@ Wave 3 agents:
 | Linear pipeline (A→B→C) | `$csv-wave-pipeline -c 1` — 3 waves, serial, full context |
 | Diamond dependency (A→B,C→D) | `$csv-wave-pipeline` — 3 waves, B+C concurrent in wave 2 |
 | Complex requirement, unclear tasks | Use `$roadmap-with-file` first for planning, then feed issues here |
-| Single complex task | Use `$workflow-lite-plan` instead |
+| Single complex task | Use `$workflow-lite-plan` instead |
package/package.json
CHANGED

@@ -1,6 +1,6 @@
 {
   "name": "claude-code-workflow",
-  "version": "7.2.21",
+  "version": "7.2.22",
   "description": "JSON-driven multi-agent development framework with intelligent CLI orchestration (Gemini/Qwen/Codex), context-first architecture, and automated workflow execution",
   "type": "module",
   "main": "ccw/dist/index.js",