claude-code-workflow 7.2.20 → 7.2.22

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (43)
  1. package/.claude/commands/workflow/analyze-with-file.md +25 -12
  2. package/.codex/skills/analyze-with-file/SKILL.md +235 -497
  3. package/.codex/skills/brainstorm-with-file/SKILL.md +661 -751
  4. package/.codex/skills/csv-wave-pipeline/SKILL.md +192 -198
  5. package/.codex/skills/team-arch-opt/SKILL.md +24 -0
  6. package/.codex/skills/team-arch-opt/roles/coordinator/role.md +22 -0
  7. package/.codex/skills/team-brainstorm/SKILL.md +24 -0
  8. package/.codex/skills/team-brainstorm/roles/coordinator/role.md +20 -0
  9. package/.codex/skills/team-coordinate/SKILL.md +24 -0
  10. package/.codex/skills/team-coordinate/roles/coordinator/role.md +40 -12
  11. package/.codex/skills/team-frontend/SKILL.md +24 -0
  12. package/.codex/skills/team-frontend/roles/coordinator/role.md +20 -0
  13. package/.codex/skills/team-frontend-debug/SKILL.md +24 -0
  14. package/.codex/skills/team-frontend-debug/roles/coordinator/role.md +21 -0
  15. package/.codex/skills/team-issue/SKILL.md +24 -0
  16. package/.codex/skills/team-issue/roles/coordinator/role.md +19 -0
  17. package/.codex/skills/team-iterdev/SKILL.md +24 -0
  18. package/.codex/skills/team-iterdev/roles/coordinator/role.md +20 -0
  19. package/.codex/skills/team-lifecycle-v4/SKILL.md +24 -0
  20. package/.codex/skills/team-lifecycle-v4/roles/coordinator/role.md +28 -2
  21. package/.codex/skills/team-perf-opt/SKILL.md +24 -0
  22. package/.codex/skills/team-perf-opt/roles/coordinator/role.md +20 -0
  23. package/.codex/skills/team-planex/SKILL.md +24 -0
  24. package/.codex/skills/team-planex/roles/coordinator/role.md +19 -0
  25. package/.codex/skills/team-quality-assurance/SKILL.md +24 -0
  26. package/.codex/skills/team-quality-assurance/roles/coordinator/role.md +21 -0
  27. package/.codex/skills/team-review/SKILL.md +24 -0
  28. package/.codex/skills/team-review/roles/coordinator/role.md +21 -0
  29. package/.codex/skills/team-roadmap-dev/SKILL.md +24 -0
  30. package/.codex/skills/team-roadmap-dev/roles/coordinator/role.md +19 -0
  31. package/.codex/skills/team-tech-debt/SKILL.md +24 -0
  32. package/.codex/skills/team-tech-debt/roles/coordinator/role.md +19 -0
  33. package/.codex/skills/team-testing/SKILL.md +24 -0
  34. package/.codex/skills/team-testing/roles/coordinator/role.md +21 -0
  35. package/.codex/skills/team-uidesign/SKILL.md +24 -0
  36. package/.codex/skills/team-uidesign/roles/coordinator/role.md +20 -0
  37. package/.codex/skills/team-ultra-analyze/SKILL.md +24 -0
  38. package/.codex/skills/team-ultra-analyze/roles/coordinator/role.md +20 -0
  39. package/.codex/skills/team-ux-improve/SKILL.md +24 -0
  40. package/.codex/skills/team-ux-improve/roles/coordinator/role.md +20 -0
  41. package/package.json +1 -1
  42. package/.codex/skills/collaborative-plan-with-file/SKILL.md +0 -830
  43. package/.codex/skills/unified-execute-with-file/SKILL.md +0 -797
@@ -1,797 +0,0 @@
- ---
- name: unified-execute-with-file
- description: Universal execution engine consuming .task/*.json directory format. Serial task execution with convergence verification, progress tracking via execution.md + execution-events.md.
- argument-hint: "PLAN=\"<path/to/.task/>\" [--auto-commit] [--dry-run]"
- ---
-
- # Unified-Execute-With-File Workflow
-
- ## Quick Start
-
- Universal execution engine consuming **`.task/*.json`** directory and executing tasks serially with convergence verification and progress tracking.
-
- ```bash
- # Execute from lite-plan output
- /codex:unified-execute-with-file PLAN=".workflow/.lite-plan/LPLAN-auth-2025-01-21/.task/"
-
- # Execute from workflow session output
- /codex:unified-execute-with-file PLAN=".workflow/active/WFS-xxx/.task/" --auto-commit
-
- # Execute a single task JSON file
- /codex:unified-execute-with-file PLAN=".workflow/active/WFS-xxx/.task/IMPL-001.json" --dry-run
-
- # Auto-detect from .workflow/ directories
- /codex:unified-execute-with-file
- ```
-
- **Core workflow**: Scan .task/*.json → Validate → Pre-Execution Analysis → Execute → Verify Convergence → Track Progress
-
- **Key features**:
- - **Directory-based**: Consumes `.task/` directory containing individual task JSON files
- - **Convergence-driven**: Verifies each task's convergence criteria after execution
- - **Serial execution**: Process tasks in topological order with dependency tracking
- - **Dual progress tracking**: `execution.md` (overview) + `execution-events.md` (event stream)
- - **Auto-commit**: Optional conventional commits per task
- - **Dry-run mode**: Simulate execution without changes
- - **Flexible input**: Accepts `.task/` directory path or a single `.json` file path
-
- **Input format**: Each task is a standalone JSON file in `.task/` directory (e.g., `IMPL-001.json`). Use `plan-converter` to convert other formats to `.task/*.json` first.
-
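For reference, a minimal task file that would satisfy the schema checks described in Step 1.2 might look like the following. The field values here are illustrative only, not taken from any real plan:

```json
{
  "id": "IMPL-001",
  "title": "Add input validation to login handler",
  "description": "Validate email and password fields before authentication",
  "depends_on": [],
  "type": "feature",
  "convergence": {
    "criteria": [
      "Invalid email returns 400 with error message",
      "Existing login tests still pass"
    ],
    "verification": "npm test",
    "definition_of_done": "All criteria verified and npm test passes"
  },
  "files": [
    { "path": "src/auth/login.ts", "action": "modify" }
  ]
}
```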
- ## Overview
-
- ```
- ┌─────────────────────────────────────────────────────────────┐
- │ UNIFIED EXECUTE WORKFLOW │
- ├─────────────────────────────────────────────────────────────┤
- │ │
- │ Phase 1: Load & Validate │
- │ ├─ Scan .task/*.json (one task per file) │
- │ ├─ Validate schema (id, title, depends_on, convergence) │
- │ ├─ Detect cycles, build topological order │
- │ └─ Initialize execution.md + execution-events.md │
- │ │
- │ Phase 2: Pre-Execution Analysis │
- │ ├─ Check file conflicts (multiple tasks → same file) │
- │ ├─ Verify file existence │
- │ ├─ Generate feasibility report │
- │ └─ User confirmation (unless dry-run) │
- │ │
- │ Phase 3: Serial Execution + Convergence Verification │
- │ For each task in topological order: │
- │ ├─ Check dependencies satisfied │
- │ ├─ Record START event │
- │ ├─ Execute directly (Read/Edit/Write/Grep/Glob/Bash) │
- │ ├─ Verify convergence.criteria[] │
- │ ├─ Run convergence.verification command │
- │ ├─ Record COMPLETE/FAIL event with verification results │
- │ ├─ Update _execution state in task JSON file │
- │ └─ Auto-commit if enabled │
- │ │
- │ Phase 4: Completion │
- │ ├─ Finalize execution.md with summary statistics │
- │ ├─ Finalize execution-events.md with session footer │
- │ ├─ Write back .task/*.json with _execution states │
- │ └─ Offer follow-up actions │
- │ │
- └─────────────────────────────────────────────────────────────┘
- ```
-
- ## Output Structure
-
- ```
- ${projectRoot}/.workflow/.execution/EXEC-{slug}-{date}-{random}/
- ├── execution.md # Plan overview + task table + summary
- └── execution-events.md # ⭐ Unified event log (single source of truth)
- ```
-
- Additionally, each source `.task/*.json` file is updated in-place with `_execution` states.
-
- ---
-
- ## Implementation Details
-
- ### Session Initialization
-
- #### Step 0: Initialize Session
-
- ```javascript
- const getUtc8ISOString = () => new Date(Date.now() + 8 * 60 * 60 * 1000).toISOString()
- const projectRoot = Bash(`git rev-parse --show-toplevel 2>/dev/null || pwd`).trim()
-
- // Parse arguments
- const autoCommit = $ARGUMENTS.includes('--auto-commit')
- const dryRun = $ARGUMENTS.includes('--dry-run')
- const planMatch = $ARGUMENTS.match(/PLAN="([^"]+)"/) || $ARGUMENTS.match(/PLAN=(\S+)/)
- let planPath = planMatch ? planMatch[1] : null
-
- // Auto-detect if no PLAN specified
- if (!planPath) {
-   // Search in order (most recent first):
-   //   .workflow/active/*/.task/
-   //   .workflow/.lite-plan/*/.task/
-   //   .workflow/.req-plan/*/.task/
-   //   .workflow/.planning/*/.task/
-   // Use most recently modified directory containing *.json files
- }
-
- // Resolve path
- planPath = path.isAbsolute(planPath) ? planPath : `${projectRoot}/${planPath}`
-
- // Generate session ID
- const slug = path.basename(path.dirname(planPath)).toLowerCase().substring(0, 30)
- const dateStr = getUtc8ISOString().substring(0, 10)
- const random = Math.random().toString(36).substring(2, 9)
- const sessionId = `EXEC-${slug}-${dateStr}-${random}`
- const sessionFolder = `${projectRoot}/.workflow/.execution/${sessionId}`
-
- Bash(`mkdir -p ${sessionFolder}`)
- ```
-
- ---
-
- ## Phase 1: Load & Validate
-
- **Objective**: Scan `.task/` directory, parse individual task JSON files, validate schema and dependencies, build execution order.
-
- ### Step 1.1: Scan .task/ Directory and Parse Task Files
-
- ```javascript
- // Determine if planPath is a directory or single file
- const isDirectory = planPath.endsWith('/') || Bash(`test -d "${planPath}" && echo dir || echo file`).trim() === 'dir'
-
- let taskFiles, tasks
-
- if (isDirectory) {
-   // Directory mode: scan for all *.json files
-   taskFiles = Glob('*.json', planPath)
-   if (taskFiles.length === 0) throw new Error(`No .json files found in ${planPath}`)
-
-   tasks = taskFiles.map(filePath => {
-     try {
-       const content = Read(filePath)
-       const task = JSON.parse(content)
-       task._source_file = filePath // Track source file for write-back
-       return task
-     } catch (e) {
-       throw new Error(`${path.basename(filePath)}: Invalid JSON - ${e.message}`)
-     }
-   })
- } else {
-   // Single file mode: parse one task JSON
-   try {
-     const content = Read(planPath)
-     const task = JSON.parse(content)
-     task._source_file = planPath
-     tasks = [task]
-   } catch (e) {
-     throw new Error(`${path.basename(planPath)}: Invalid JSON - ${e.message}`)
-   }
- }
-
- if (tasks.length === 0) throw new Error('No tasks found')
- ```
-
- ### Step 1.2: Validate Schema
-
- Validate against unified task schema: `~/.ccw/workflows/cli-templates/schemas/task-schema.json`
-
- ```javascript
- const errors = []
- tasks.forEach((task, i) => {
-   const src = task._source_file ? path.basename(task._source_file) : `Task ${i + 1}`
-
-   // Required fields (per task-schema.json)
-   if (!task.id) errors.push(`${src}: missing 'id'`)
-   if (!task.title) errors.push(`${src}: missing 'title'`)
-   if (!task.description) errors.push(`${src}: missing 'description'`)
-   if (!Array.isArray(task.depends_on)) errors.push(`${task.id || src}: missing 'depends_on' array`)
-
-   // Context block (optional but validated if present)
-   if (task.context) {
-     if (task.context.requirements && !Array.isArray(task.context.requirements))
-       errors.push(`${task.id}: context.requirements must be array`)
-     if (task.context.acceptance && !Array.isArray(task.context.acceptance))
-       errors.push(`${task.id}: context.acceptance must be array`)
-     if (task.context.focus_paths && !Array.isArray(task.context.focus_paths))
-       errors.push(`${task.id}: context.focus_paths must be array`)
-   }
-
-   // Convergence (required for execution verification)
-   if (!task.convergence) {
-     errors.push(`${task.id || src}: missing 'convergence'`)
-   } else {
-     if (!task.convergence.criteria?.length) errors.push(`${task.id}: empty convergence.criteria`)
-     if (!task.convergence.verification) errors.push(`${task.id}: missing convergence.verification`)
-     if (!task.convergence.definition_of_done) errors.push(`${task.id}: missing convergence.definition_of_done`)
-   }
-
-   // Flow control (optional but validated if present)
-   if (task.flow_control) {
-     if (task.flow_control.target_files && !Array.isArray(task.flow_control.target_files))
-       errors.push(`${task.id}: flow_control.target_files must be array`)
-   }
-
-   // New unified schema fields (backward compatible addition)
-   if (task.focus_paths && !Array.isArray(task.focus_paths))
-     errors.push(`${task.id}: focus_paths must be array`)
-   if (task.implementation && !Array.isArray(task.implementation))
-     errors.push(`${task.id}: implementation must be array`)
-   if (task.files && !Array.isArray(task.files))
-     errors.push(`${task.id}: files must be array`)
- })
-
- if (errors.length) {
-   // Report errors, stop execution
- }
- ```
-
- ### Step 1.3: Build Execution Order
-
- ```javascript
- // 1. Validate dependency references
- const taskIds = new Set(tasks.map(t => t.id))
- tasks.forEach(task => {
-   task.depends_on.forEach(dep => {
-     if (!taskIds.has(dep)) errors.push(`${task.id}: depends on unknown task '${dep}'`)
-   })
- })
-
- // 2. Detect cycles (DFS)
- function detectCycles(tasks) {
-   const graph = new Map(tasks.map(t => [t.id, t.depends_on || []]))
-   const visited = new Set(), inStack = new Set(), cycles = []
-   function dfs(node, path) {
-     if (inStack.has(node)) { cycles.push([...path, node].join(' → ')); return }
-     if (visited.has(node)) return
-     visited.add(node); inStack.add(node)
-     ;(graph.get(node) || []).forEach(dep => dfs(dep, [...path, node]))
-     inStack.delete(node)
-   }
-   tasks.forEach(t => { if (!visited.has(t.id)) dfs(t.id, []) })
-   return cycles
- }
- const cycles = detectCycles(tasks)
- if (cycles.length) errors.push(`Circular dependencies: ${cycles.join('; ')}`)
-
- // 3. Topological sort
- function topoSort(tasks) {
-   const inDegree = new Map(tasks.map(t => [t.id, 0]))
-   tasks.forEach(t => t.depends_on.forEach(dep => {
-     inDegree.set(t.id, (inDegree.get(t.id) || 0) + 1)
-   }))
-   const queue = tasks.filter(t => inDegree.get(t.id) === 0).map(t => t.id)
-   const order = []
-   while (queue.length) {
-     const id = queue.shift()
-     order.push(id)
-     tasks.forEach(t => {
-       if (t.depends_on.includes(id)) {
-         inDegree.set(t.id, inDegree.get(t.id) - 1)
-         if (inDegree.get(t.id) === 0) queue.push(t.id)
-       }
-     })
-   }
-   return order
- }
- const executionOrder = topoSort(tasks)
- ```
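As a standalone sanity check, the ordering logic above can be exercised on a toy task set. This is a self-contained Node.js reproduction of the two functions from the block above; the `IMPL-*` ids are illustrative:

```javascript
// Minimal reproduction of the cycle-detection and topological-sort logic above.
function detectCycles(tasks) {
  const graph = new Map(tasks.map(t => [t.id, t.depends_on || []]))
  const visited = new Set(), inStack = new Set(), cycles = []
  function dfs(node, path) {
    if (inStack.has(node)) { cycles.push([...path, node].join(' → ')); return }
    if (visited.has(node)) return
    visited.add(node); inStack.add(node)
    ;(graph.get(node) || []).forEach(dep => dfs(dep, [...path, node]))
    inStack.delete(node)
  }
  tasks.forEach(t => { if (!visited.has(t.id)) dfs(t.id, []) })
  return cycles
}

function topoSort(tasks) {
  // In-degree of a task = number of tasks it depends on.
  const inDegree = new Map(tasks.map(t => [t.id, t.depends_on.length]))
  const queue = tasks.filter(t => inDegree.get(t.id) === 0).map(t => t.id)
  const order = []
  while (queue.length) {
    const id = queue.shift()
    order.push(id)
    tasks.forEach(t => {
      if (t.depends_on.includes(id)) {
        inDegree.set(t.id, inDegree.get(t.id) - 1)
        if (inDegree.get(t.id) === 0) queue.push(t.id)
      }
    })
  }
  return order
}

// IMPL-002 depends on IMPL-001; IMPL-003 depends on both.
const demo = [
  { id: 'IMPL-003', depends_on: ['IMPL-001', 'IMPL-002'] },
  { id: 'IMPL-001', depends_on: [] },
  { id: 'IMPL-002', depends_on: ['IMPL-001'] },
]
const order = detectCycles(demo).length ? [] : topoSort(demo)
```

Regardless of file order on disk, dependencies always sort before their dependents.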
-
- ### Step 1.4: Initialize Execution Artifacts
-
- ```javascript
- // execution.md
- const executionMd = `# Execution Overview
-
- ## Session Info
- - **Session ID**: ${sessionId}
- - **Plan Source**: ${planPath}
- - **Started**: ${getUtc8ISOString()}
- - **Total Tasks**: ${tasks.length}
- - **Mode**: ${dryRun ? 'Dry-run (no changes)' : 'Direct inline execution'}
- - **Auto-Commit**: ${autoCommit ? 'Enabled' : 'Disabled'}
-
- ## Task Overview
-
- | # | ID | Title | Type | Priority | Effort | Dependencies | Status |
- |---|-----|-------|------|----------|--------|--------------|--------|
- ${tasks.map((t, i) => `| ${i+1} | ${t.id} | ${t.title} | ${t.type || '-'} | ${t.priority || '-'} | ${t.effort || '-'} | ${t.depends_on.join(', ') || '-'} | pending |`).join('\n')}
-
- ## Pre-Execution Analysis
- > Populated in Phase 2
-
- ## Execution Timeline
- > Updated as tasks complete
-
- ## Execution Summary
- > Updated after all tasks complete
- `
- Write(`${sessionFolder}/execution.md`, executionMd)
-
- // execution-events.md
- Write(`${sessionFolder}/execution-events.md`, `# Execution Events
-
- **Session**: ${sessionId}
- **Started**: ${getUtc8ISOString()}
- **Source**: ${planPath}
-
- ---
-
- `)
- ```
-
- ---
-
- ## Phase 2: Pre-Execution Analysis
-
- **Objective**: Validate feasibility and identify issues before execution.
-
- ### Step 2.1: Analyze File Conflicts
-
- ```javascript
- const fileTaskMap = new Map() // file → [taskIds]
- tasks.forEach(task => {
-   (task.files || []).forEach(f => {
-     const key = f.path
-     if (!fileTaskMap.has(key)) fileTaskMap.set(key, [])
-     fileTaskMap.get(key).push(task.id)
-   })
- })
-
- const conflicts = []
- fileTaskMap.forEach((taskIds, file) => {
-   if (taskIds.length > 1) {
-     conflicts.push({ file, tasks: taskIds, resolution: 'Execute in dependency order' })
-   }
- })
-
- // Check file existence
- const missingFiles = []
- tasks.forEach(task => {
-   (task.files || []).forEach(f => {
-     if (f.action !== 'create' && !file_exists(f.path)) {
-       missingFiles.push({ file: f.path, task: task.id })
-     }
-   })
- })
- ```
-
- ### Step 2.2: Append to execution.md
-
- ```javascript
- // Replace "Pre-Execution Analysis" section with:
- // - File Conflicts (list or "No conflicts")
- // - Missing Files (list or "All files exist")
- // - Dependency Validation (errors or "No issues")
- // - Execution Order (numbered list)
- ```
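One possible rendering of that section, sketched as a pure function over the `conflicts` and `missingFiles` arrays built in Step 2.1 (the exact wording and layout are up to the implementation):

```javascript
// Sketch: render the Pre-Execution Analysis markdown from the Phase 2 results.
// `conflicts`: [{ file, tasks, resolution }], `missingFiles`: [{ file, task }].
function renderAnalysis(conflicts, missingFiles, executionOrder) {
  const lines = ['## Pre-Execution Analysis', '']
  lines.push('### File Conflicts')
  lines.push(conflicts.length
    ? conflicts.map(c => `- ${c.file}: ${c.tasks.join(', ')} (${c.resolution})`).join('\n')
    : 'No conflicts')
  lines.push('', '### Missing Files')
  lines.push(missingFiles.length
    ? missingFiles.map(m => `- ${m.file} (required by ${m.task})`).join('\n')
    : 'All files exist')
  lines.push('', '### Execution Order')
  lines.push(executionOrder.map((id, i) => `${i + 1}. ${id}`).join('\n'))
  return lines.join('\n')
}

const analysis = renderAnalysis([], [], ['IMPL-001', 'IMPL-002'])
```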
-
- ### Step 2.3: User Confirmation
-
- ```javascript
- if (!dryRun) {
-   request_user_input({
-     questions: [{
-       header: "Confirm",
-       id: "confirm_execute",
-       question: `Execute ${tasks.length} tasks?`,
-       options: [
-         { label: "Execute (Recommended)", description: "Start serial execution" },
-         { label: "Dry Run", description: "Simulate without changes" },
-         { label: "Cancel", description: "Abort execution" }
-       ]
-     }]
-   })
-   // answer.answers.confirm_execute.answers[0] → selected label
- }
- ```
-
- ---
-
- ## Phase 3: Serial Execution + Convergence Verification
-
- **Objective**: Execute tasks sequentially, verify convergence after each task, track all state.
-
- **Execution Model**: Direct inline execution — main process reads, edits, writes files directly. No CLI delegation.
-
- ### Step 3.1: Execution Loop
-
- ```javascript
- const completedTasks = new Set()
- const failedTasks = new Set()
- const skippedTasks = new Set()
-
- for (const taskId of executionOrder) {
-   const task = tasks.find(t => t.id === taskId)
-   const startTime = getUtc8ISOString()
-
-   // 1. Check dependencies
-   const unmetDeps = task.depends_on.filter(dep => !completedTasks.has(dep))
-   if (unmetDeps.length) {
-     appendToEvents(task, 'BLOCKED', `Unmet dependencies: ${unmetDeps.join(', ')}`)
-     skippedTasks.add(task.id)
-     task._execution = { status: 'skipped', executed_at: startTime,
-       result: { success: false, error: `Blocked by: ${unmetDeps.join(', ')}` } }
-     continue
-   }
-
-   // 2. Record START event
-   appendToEvents(`## ${getUtc8ISOString()} — ${task.id}: ${task.title}
-
- **Type**: ${task.type || '-'} | **Priority**: ${task.priority || '-'} | **Effort**: ${task.effort || '-'}
- **Status**: ⏳ IN PROGRESS
- **Files**: ${(task.files || []).map(f => f.path).join(', ') || 'To be determined'}
- **Description**: ${task.description}
- **Convergence Criteria**:
- ${task.convergence.criteria.map(c => `- [ ] ${c}`).join('\n')}
-
- ### Execution Log
- `)
-
-   if (dryRun) {
-     // Simulate: mark as completed without changes
-     appendToEvents(`\n**Status**: ⏭ DRY RUN (no changes)\n\n---\n`)
-     task._execution = { status: 'completed', executed_at: startTime,
-       result: { success: true, summary: 'Dry run — no changes made' } }
-     completedTasks.add(task.id)
-     continue
-   }
-
-   // 3. Execute task directly
-   // - Read each file in task.files (if specified)
-   // - Analyze what changes satisfy task.description + task.convergence.criteria
-   // - If task.files has detailed changes, use them as guidance
-   // - Apply changes using Edit (preferred) or Write (for new files)
-   // - Use Grep/Glob/mcp__ace-tool for discovery if needed
-   // - Use Bash for build/test commands
-
-   // Dual-path field access (supports both unified and legacy 6-field schema)
-   // const targetFiles = task.files?.map(f => f.path) || task.flow_control?.target_files || []
-   // const acceptanceCriteria = task.convergence?.criteria || task.context?.acceptance || []
-   // const requirements = task.implementation || task.context?.requirements || []
-   // const focusPaths = task.focus_paths || task.context?.focus_paths || []
-
-   // 4. Verify convergence
-   const convergenceResults = verifyConvergence(task)
-
-   const endTime = getUtc8ISOString()
-   const filesModified = getModifiedFiles()
-
-   if (convergenceResults.allPassed) {
-     // 5a. Record SUCCESS
-     appendToEvents(`
- **Status**: ✅ COMPLETED
- **Duration**: ${calculateDuration(startTime, endTime)}
- **Files Modified**: ${filesModified.join(', ')}
-
- #### Changes Summary
- ${changeSummary}
-
- #### Convergence Verification
- ${task.convergence.criteria.map((c, i) => `- [${convergenceResults.verified[i] ? 'x' : ' '}] ${c}`).join('\n')}
- - **Verification**: ${convergenceResults.verificationOutput}
- - **Definition of Done**: ${task.convergence.definition_of_done}
-
- ---
- `)
-     task._execution = {
-       status: 'completed', executed_at: endTime,
-       result: {
-         success: true,
-         files_modified: filesModified,
-         summary: changeSummary,
-         convergence_verified: convergenceResults.verified
-       }
-     }
-     completedTasks.add(task.id)
-   } else {
-     // 5b. Record FAILURE
-     handleTaskFailure(task, convergenceResults, startTime, endTime)
-   }
-
-   // 6. Auto-commit if enabled
-   if (autoCommit && task._execution.status === 'completed') {
-     autoCommitTask(task, filesModified)
-   }
- }
- ```
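The loop above references a `calculateDuration` helper that is never defined in this document. A minimal sketch, assuming it takes the two ISO-8601 timestamps produced by `getUtc8ISOString` and formats the difference:

```javascript
// Sketch of the calculateDuration helper used in the event entries above.
// Assumed behavior: difference between two ISO timestamps as "Xm Ys" or "Ys".
function calculateDuration(startTime, endTime) {
  const seconds = Math.round((Date.parse(endTime) - Date.parse(startTime)) / 1000)
  const m = Math.floor(seconds / 60), s = seconds % 60
  return m > 0 ? `${m}m ${s}s` : `${s}s`
}
```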
-
- ### Step 3.2: Convergence Verification
-
- ```javascript
- function verifyConvergence(task) {
-   const results = {
-     verified: [], // boolean[] per criterion
-     verificationOutput: '', // output of verification command
-     allPassed: true
-   }
-
-   // 1. Check each criterion
-   // For each criterion in task.convergence.criteria:
-   // - If it references a testable condition, check it
-   // - If it's manual, mark as verified based on changes made
-   // - Record true/false per criterion
-   task.convergence.criteria.forEach(criterion => {
-     const passed = evaluateCriterion(criterion, task)
-     results.verified.push(passed)
-     if (!passed) results.allPassed = false
-   })
-
-   // 2. Run verification command (if executable)
-   const verification = task.convergence.verification
-   if (isExecutableCommand(verification)) {
-     try {
-       const output = Bash(verification, { timeout: 120000 })
-       results.verificationOutput = `${verification} → PASS`
-     } catch (e) {
-       results.verificationOutput = `${verification} → FAIL: ${e.message}`
-       results.allPassed = false
-     }
-   } else {
-     results.verificationOutput = `Manual: ${verification}`
-   }
-
-   return results
- }
-
- function isExecutableCommand(verification) {
-   // Detect executable patterns: npm, npx, jest, tsc, curl, pytest, go test, etc.
-   return /^(npm|npx|jest|tsc|eslint|pytest|go\s+test|cargo\s+test|curl|make)/.test(verification.trim())
- }
- ```
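A quick look at how the `isExecutableCommand` pattern classifies typical verification strings. Note that the regex is prefix-only, so unrelated commands that merely start with one of the keywords also match; the samples are illustrative:

```javascript
// The detection pattern from isExecutableCommand above, exercised on samples.
const EXEC_RE = /^(npm|npx|jest|tsc|eslint|pytest|go\s+test|cargo\s+test|curl|make)/
const isExecutableCommand = v => EXEC_RE.test(v.trim())

const samples = {
  'npm test': true,
  'go test ./...': true,
  'Manually confirm the login page renders': false, // falls back to Manual: ...
}
// Caveat: no trailing word boundary, so a hypothetical 'makefile-lint'
// would be misclassified as executable.
```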
-
- ### Step 3.3: Failure Handling
-
- ```javascript
- function handleTaskFailure(task, convergenceResults, startTime, endTime) {
-   appendToEvents(`
- **Status**: ❌ FAILED
- **Duration**: ${calculateDuration(startTime, endTime)}
- **Error**: Convergence verification failed
-
- #### Failed Criteria
- ${task.convergence.criteria.map((c, i) => `- [${convergenceResults.verified[i] ? 'x' : ' '}] ${c}`).join('\n')}
- - **Verification**: ${convergenceResults.verificationOutput}
-
- ---
- `)
-
-   task._execution = {
-     status: 'failed', executed_at: endTime,
-     result: {
-       success: false,
-       error: 'Convergence verification failed',
-       convergence_verified: convergenceResults.verified
-     }
-   }
-   failedTasks.add(task.id)
-
-   // Ask user
-   request_user_input({
-     questions: [{
-       header: "Failure",
-       id: "handle_failure",
-       question: `Task ${task.id} failed convergence verification. How to proceed?`,
-       options: [
-         { label: "Skip & Continue (Recommended)", description: "Skip this task, continue with next" },
-         { label: "Retry", description: "Retry this task" },
-         { label: "Abort", description: "Stop execution, keep progress" }
-       ]
-     }]
-   })
-   // answer.answers.handle_failure.answers[0] → selected label
- }
- ```
-
- ### Step 3.4: Auto-Commit
-
- ```javascript
- function autoCommitTask(task, filesModified) {
-   Bash(`git add ${filesModified.join(' ')}`)
-
-   const commitType = {
-     fix: 'fix', refactor: 'refactor', feature: 'feat',
-     enhancement: 'feat', testing: 'test', infrastructure: 'chore'
-   }[task.type] || 'chore'
-
-   const scope = inferScope(filesModified)
-
-   Bash(`git commit -m "$(cat <<'EOF'
- ${commitType}(${scope}): ${task.title}
-
- Task: ${task.id}
- Source: ${path.basename(planPath)}
- EOF
- )"`)
-
-   appendToEvents(`**Commit**: \`${commitType}(${scope}): ${task.title}\`\n`)
- }
- ```
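The task-type mapping above, extracted for illustration. Unknown task types fall back to `chore`; the `auth` scope and task title in the sample subject line are hypothetical:

```javascript
// Task-type → Conventional Commits type, as used by autoCommitTask above.
const COMMIT_TYPES = {
  fix: 'fix', refactor: 'refactor', feature: 'feat',
  enhancement: 'feat', testing: 'test', infrastructure: 'chore',
}
const commitTypeFor = taskType => COMMIT_TYPES[taskType] || 'chore'

// A commit subject for a hypothetical task (scope/title are made up):
const subject = `${commitTypeFor('enhancement')}(auth): Add refresh-token rotation`
```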
-
- ---
-
- ## Phase 4: Completion
-
- **Objective**: Finalize all artifacts, write back execution state, offer follow-up actions.
-
- ### Step 4.1: Finalize execution.md
-
- Append summary statistics to execution.md:
-
- ```javascript
- const summary = `
- ## Execution Summary
-
- - **Completed**: ${getUtc8ISOString()}
- - **Total Tasks**: ${tasks.length}
- - **Succeeded**: ${completedTasks.size}
- - **Failed**: ${failedTasks.size}
- - **Skipped**: ${skippedTasks.size}
- - **Success Rate**: ${Math.round(completedTasks.size / tasks.length * 100)}%
-
- ### Task Results
-
- | ID | Title | Status | Convergence | Files Modified |
- |----|-------|--------|-------------|----------------|
- ${tasks.map(t => {
-   const ex = t._execution || {}
-   const convergenceStatus = ex.result?.convergence_verified
-     ? `${ex.result.convergence_verified.filter(v => v).length}/${ex.result.convergence_verified.length}`
-     : '-'
-   return `| ${t.id} | ${t.title} | ${ex.status || 'pending'} | ${convergenceStatus} | ${(ex.result?.files_modified || []).join(', ') || '-'} |`
- }).join('\n')}
-
- ${failedTasks.size > 0 ? `### Failed Tasks
-
- ${[...failedTasks].map(id => {
-   const t = tasks.find(t => t.id === id)
-   return `- **${t.id}**: ${t.title} — ${t._execution?.result?.error || 'Unknown'}`
- }).join('\n')}
- ` : ''}
- ### Artifacts
- - **Plan Source**: ${planPath}
- - **Execution Overview**: ${sessionFolder}/execution.md
- - **Execution Events**: ${sessionFolder}/execution-events.md
- `
- // Append to execution.md
- ```
-
- ### Step 4.2: Finalize execution-events.md
-
- ```javascript
- appendToEvents(`
- ---
-
- # Session Summary
-
- - **Session**: ${sessionId}
- - **Completed**: ${getUtc8ISOString()}
- - **Tasks**: ${completedTasks.size} completed, ${failedTasks.size} failed, ${skippedTasks.size} skipped
- - **Total Events**: ${completedTasks.size + failedTasks.size + skippedTasks.size}
- `)
- ```
-
- ### Step 4.3: Write Back .task/*.json with _execution
-
- Update each source task JSON file with execution states:
-
- ```javascript
- tasks.forEach(task => {
-   const filePath = task._source_file
-   if (!filePath) return
-
-   // Read current file to preserve formatting and non-execution fields
-   const current = JSON.parse(Read(filePath))
-
-   // Update _execution status and result
-   current._execution = {
-     status: task._execution?.status || 'pending',
-     executed_at: task._execution?.executed_at || null,
-     result: task._execution?.result || null
-   }
-
-   // Write back individual task file
-   Write(filePath, JSON.stringify(current, null, 2))
- })
- // Each task JSON file now has _execution: { status, executed_at, result }
- ```
-
- ### Step 4.4: Post-Completion Options
-
- ```javascript
- request_user_input({
-   questions: [{
-     header: "Post Execute",
-     id: "post_execute",
-     question: `Execution complete: ${completedTasks.size}/${tasks.length} succeeded. Next step?`,
-     options: [
-       { label: "Done (Recommended)", description: "End workflow" },
-       { label: "Retry Failed", description: `Re-execute ${failedTasks.size} failed tasks` },
-       { label: "Create Issue", description: "Create issue from failed tasks" }
-     ]
-   }]
- })
- // answer.answers.post_execute.answers[0] → selected label
- ```
-
- | Selection | Action |
- |-----------|--------|
- | Retry Failed | Filter tasks with `_execution.status === 'failed'`, re-execute, append `[RETRY]` events |
- | View Events | Display execution-events.md content |
- | Create Issue | `Skill(skill="issue:new", args="...")` from failed task details |
- | Done | Display artifact paths, sync session state, end workflow |
-
- ### Step 4.5: Sync Session State
-
- After completion (regardless of user selection), unless `--dry-run`:
-
- ```bash
- $session-sync -y "Execution complete: {completed}/{total} tasks succeeded"
- ```
-
- Updates specs/*.md with execution learnings and project-tech.json with development index entry.
-
- ---
-
- ## Configuration
-
- | Flag | Default | Description |
- |------|---------|-------------|
- | `PLAN="..."` | auto-detect | Path to `.task/` directory or single task `.json` file |
- | `--auto-commit` | false | Commit changes after each successful task |
- | `--dry-run` | false | Simulate execution without making changes |
-
- ### Plan Auto-Detection Order
-
- When no `PLAN` specified, search for `.task/` directories in order (most recent first):
-
- 1. `.workflow/active/*/.task/`
- 2. `.workflow/.lite-plan/*/.task/`
- 3. `.workflow/.req-plan/*/.task/`
- 4. `.workflow/.planning/*/.task/`
-
- **If source is not `.task/*.json`**: Run `plan-converter` first to generate `.task/` directory.
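The detection rule above can be sketched as a pure function. This assumes candidates are pre-scanned into `{ dir, mtimeMs, jsonCount }` records and that recency wins outright among directories that contain task JSON files (the document does not spell out tie-breaking between search paths):

```javascript
// Sketch of plan auto-detection: keep candidate .task/ directories that
// actually contain *.json task files, then take the most recently modified.
function autoDetectPlan(candidates) {
  const usable = candidates.filter(c => c.jsonCount > 0)
  usable.sort((a, b) => b.mtimeMs - a.mtimeMs)
  return usable.length ? usable[0].dir : null
}

const picked = autoDetectPlan([
  { dir: '.workflow/active/WFS-xxx/.task/', mtimeMs: 1000, jsonCount: 3 },
  { dir: '.workflow/.lite-plan/LPLAN-auth-2025-01-21/.task/', mtimeMs: 2000, jsonCount: 2 },
  { dir: '.workflow/.planning/old/.task/', mtimeMs: 3000, jsonCount: 0 }, // empty, ignored
])
```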
-
- ---
-
- ## Error Handling & Recovery
-
- | Situation | Action | Recovery |
- |-----------|--------|----------|
- | .task/ directory not found | Report error with path | Check path, run plan-converter |
- | Invalid JSON in task file | Report filename and error | Fix task JSON file manually |
- | Missing convergence | Report validation error | Run plan-converter to add convergence |
- | Circular dependency | Stop, report cycle path | Fix dependencies in task JSON |
- | Task execution fails | Record in events, ask user | Retry, skip, accept, or abort |
- | Convergence verification fails | Mark task failed, ask user | Fix code and retry, or accept |
- | Verification command timeout | Mark as unverified | Manual verification needed |
- | File conflict during execution | Document in events | Resolve in dependency order |
- | All tasks fail | Report, suggest plan review | Re-analyze or manual intervention |
-
- ---
-
- ## Best Practices
-
- ### Before Execution
-
- 1. **Validate Plan**: Use `--dry-run` first to check plan feasibility
- 2. **Check Convergence**: Ensure all tasks have meaningful convergence criteria
- 3. **Review Dependencies**: Verify execution order makes sense
- 4. **Backup**: Commit pending changes before starting
- 5. **Convert First**: Use `plan-converter` for non-.task/ sources
-
- ### During Execution
-
- 1. **Monitor Events**: Check execution-events.md for real-time progress
- 2. **Handle Failures**: Review convergence failures carefully before deciding
- 3. **Check Commits**: Verify auto-commits are correct if enabled
-
- ### After Execution
-
- 1. **Review Summary**: Check execution.md statistics and failed tasks
- 2. **Verify Changes**: Confirm that modified files match expectations
- 3. **Check Task Files**: Review `_execution` states in `.task/*.json` files
- 4. **Next Steps**: Use completion options for follow-up
-
- ---
-
- **Now execute unified-execute-with-file for**: $PLAN