clawpowers 1.1.4 → 2.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (75)
  1. package/CHANGELOG.md +94 -0
  2. package/LICENSE +44 -0
  3. package/README.md +204 -228
  4. package/SECURITY.md +72 -0
  5. package/dist/index.d.ts +844 -0
  6. package/dist/index.js +2536 -0
  7. package/dist/index.js.map +1 -0
  8. package/package.json +50 -44
  9. package/.claude-plugin/manifest.json +0 -19
  10. package/.codex/INSTALL.md +0 -36
  11. package/.cursor-plugin/manifest.json +0 -21
  12. package/.opencode/INSTALL.md +0 -52
  13. package/ARCHITECTURE.md +0 -69
  14. package/bin/clawpowers.js +0 -625
  15. package/bin/clawpowers.sh +0 -91
  16. package/docs/demo/clawpowers-demo.cast +0 -197
  17. package/docs/demo/clawpowers-demo.gif +0 -0
  18. package/docs/launch-images/25-skills-breakdown.jpg +0 -0
  19. package/docs/launch-images/clawpowers-vs-superpowers.jpg +0 -0
  20. package/docs/launch-images/economic-code-optimization.jpg +0 -0
  21. package/docs/launch-images/native-vs-bridge-2.jpg +0 -0
  22. package/docs/launch-images/native-vs-bridge.jpg +0 -0
  23. package/docs/launch-images/post1-hero-lobster.jpg +0 -0
  24. package/docs/launch-images/post2-dashboard.jpg +0 -0
  25. package/docs/launch-images/post3-superpowers.jpg +0 -0
  26. package/docs/launch-images/post4-before-after.jpg +0 -0
  27. package/docs/launch-images/post5-install-now.jpg +0 -0
  28. package/docs/launch-images/ultimate-stack.jpg +0 -0
  29. package/docs/launch-posts.md +0 -76
  30. package/docs/quickstart-first-transaction.md +0 -204
  31. package/gemini-extension.json +0 -32
  32. package/hooks/session-start +0 -205
  33. package/hooks/session-start.cmd +0 -43
  34. package/hooks/session-start.js +0 -163
  35. package/runtime/demo/README.md +0 -78
  36. package/runtime/demo/x402-mock-server.js +0 -230
  37. package/runtime/feedback/analyze.js +0 -621
  38. package/runtime/feedback/analyze.sh +0 -546
  39. package/runtime/init.js +0 -210
  40. package/runtime/init.sh +0 -178
  41. package/runtime/metrics/collector.js +0 -361
  42. package/runtime/metrics/collector.sh +0 -308
  43. package/runtime/payments/ledger.js +0 -305
  44. package/runtime/payments/ledger.sh +0 -262
  45. package/runtime/payments/pipeline.js +0 -455
  46. package/runtime/persistence/store.js +0 -433
  47. package/runtime/persistence/store.sh +0 -303
  48. package/skill.json +0 -106
  49. package/skills/agent-bounties/SKILL.md +0 -553
  50. package/skills/agent-payments/SKILL.md +0 -479
  51. package/skills/brainstorming/SKILL.md +0 -233
  52. package/skills/content-pipeline/SKILL.md +0 -282
  53. package/skills/cross-project-knowledge/SKILL.md +0 -345
  54. package/skills/dispatching-parallel-agents/SKILL.md +0 -305
  55. package/skills/economic-code-optimization/SKILL.md +0 -265
  56. package/skills/executing-plans/SKILL.md +0 -255
  57. package/skills/finishing-a-development-branch/SKILL.md +0 -260
  58. package/skills/formal-verification-lite/SKILL.md +0 -441
  59. package/skills/learn-how-to-learn/SKILL.md +0 -235
  60. package/skills/market-intelligence/SKILL.md +0 -323
  61. package/skills/meta-skill-evolution/SKILL.md +0 -325
  62. package/skills/prospecting/SKILL.md +0 -454
  63. package/skills/receiving-code-review/SKILL.md +0 -225
  64. package/skills/requesting-code-review/SKILL.md +0 -206
  65. package/skills/security-audit/SKILL.md +0 -353
  66. package/skills/self-healing-code/SKILL.md +0 -369
  67. package/skills/subagent-driven-development/SKILL.md +0 -244
  68. package/skills/systematic-debugging/SKILL.md +0 -355
  69. package/skills/test-driven-development/SKILL.md +0 -416
  70. package/skills/using-clawpowers/SKILL.md +0 -160
  71. package/skills/using-git-worktrees/SKILL.md +0 -261
  72. package/skills/validator/SKILL.md +0 -281
  73. package/skills/verification-before-completion/SKILL.md +0 -254
  74. package/skills/writing-plans/SKILL.md +0 -276
  75. package/skills/writing-skills/SKILL.md +0 -260
@@ -1,345 +0,0 @@
- ---
- name: cross-project-knowledge
- description: Persistent knowledge base across all projects. Extract patterns after every fix or architecture decision; search before starting any task. Work on Project B benefits from everything learned on Projects A, C, D.
- version: 1.0.0
- requires:
-   tools: [bash, node]
-   runtime: true
- metrics:
-   tracks: [patterns_stored, patterns_retrieved, cross_project_hits, success_count_increments, search_latency_ms]
-   improves: [search_relevance, pattern_categorization_accuracy, retrieval_recall]
- ---
-
- # Cross-Project Knowledge
-
- ## When to Use
-
- **Store a pattern after:**
- - Successfully fixing a bug (store the root cause + fix)
- - Making a significant architecture decision (store the decision + rationale)
- - Discovering a performance optimization
- - Completing a security fix or identifying a security pattern
- - Writing a test strategy that proved effective
-
- **Search before:**
- - Starting any new task (30-second search to avoid re-solving known problems)
- - Encountering an error message you haven't seen in this project
- - Designing a new component or API
- - Choosing between two implementation approaches
-
- **Skip when:**
- - The task is purely mechanical (rename a file, update a config value)
- - The runtime directory `~/.clawpowers/` doesn't exist (no persistence available)
- - The pattern is too project-specific to generalize (e.g., a business rule for one client)
-
- **Decision tree:**
- ```
- Starting a new task?
- └── Yes → Search knowledge base first (30 seconds)
-     ├── Hit found → apply known solution, update success_count
-     └── No hit → proceed with fresh investigation
-         └── After solving: store the pattern
- ```
-
- ## Core Methodology
-
- ### Knowledge Base Structure
-
- All patterns live in `~/.clawpowers/memory/patterns.jsonl`. Each line is a JSON record:
-
- ```json
- {
-   "pattern_id": "bp-2024-auth-jwt-expiry",
-   "category": "bug-fix",
-   "description": "JWT tokens accepted after expiry when clock skew > 0",
-   "context": "Node.js + jsonwebtoken library, any project using JWT auth",
-   "solution": "Add clockTolerance: 0 to verify() options, or explicitly check exp claim",
-   "code_example": "jwt.verify(token, secret, { clockTolerance: 0 })",
-   "projects_used_in": ["auth-service", "api-gateway"],
-   "success_count": 3,
-   "tags": ["jwt", "auth", "expiry", "clock-skew"],
-   "created_at": "2024-03-15T10:00:00Z",
-   "last_used": "2024-11-02T14:30:00Z"
- }
- ```
-
- **Categories:**
- - `bug-fix` — root cause + fix for a recurring class of bug
- - `architecture` — structural patterns, component boundaries, integration decisions
- - `performance` — optimizations with measured impact
- - `security` — vulnerability patterns and mitigations
- - `testing` — test strategies, fixture patterns, effective test designs
-
- ### Step 1: Search Before Starting
-
- Before any non-trivial task, run a 30-second search:
-
- ```bash
- # Text search by keyword
- PATTERNS_FILE=~/.clawpowers/memory/patterns.jsonl
-
- search_patterns() {
-   local query="$1"
-   local category="$2"  # optional filter
-
-   if [[ ! -f "$PATTERNS_FILE" ]]; then
-     echo "No knowledge base found. Initialize with: mkdir -p ~/.clawpowers/memory && touch ~/.clawpowers/memory/patterns.jsonl"
-     return
-   fi
-
-   node - <<EOF
- const fs = require('fs');
- const query = '${query}'.toLowerCase();
- const category = '${category}';
- const lines = fs.readFileSync(process.env.HOME + '/.clawpowers/memory/patterns.jsonl', 'utf8')
-   .trim().split('\n').filter(Boolean).map(l => JSON.parse(l));
-
- const results = lines.filter(p => {
-   const matchCat = !category || p.category === category;
-   const text = [p.description, p.context, p.solution, ...(p.tags||[])].join(' ').toLowerCase();
-   const matchQuery = query.split(' ').every(word => text.includes(word));
-   return matchCat && matchQuery;
- }).sort((a, b) => b.success_count - a.success_count);
-
- if (results.length === 0) {
-   console.log('No matching patterns found.');
- } else {
-   results.slice(0, 5).forEach((p, i) => {
-     console.log(\`[\${i+1}] [\${p.category}] \${p.description}\`);
-     console.log(\`    Solution: \${p.solution}\`);
-     if (p.code_example) console.log(\`    Example: \${p.code_example}\`);
-     console.log(\`    Used in: \${(p.projects_used_in||[]).join(', ')} | Success count: \${p.success_count}\`);
-     console.log('');
-   });
- }
- EOF
- }
-
- # Usage examples:
- search_patterns "jwt expiry"
- search_patterns "connection pool" "bug-fix"
- search_patterns "react infinite render" "bug-fix"
- search_patterns "database index" "performance"
- ```
-
- **What to do with search results:**
- - **Hit with high success_count (≥3):** Apply the documented solution directly. Update `success_count` and `last_used`.
- - **Hit with low success_count (1-2):** Use as a starting hypothesis, not a guaranteed fix. Verify it applies.
- - **No hit:** Proceed with fresh investigation. After solving, store the pattern.
-
- ### Step 2: Store a Pattern After Solving
-
- After fixing a bug, making an architecture decision, or discovering a useful pattern:
-
- ```bash
- store_pattern() {
-   local category="$1"      # bug-fix|architecture|performance|security|testing
-   local description="$2"   # what problem this solves (1 sentence)
-   local context="$3"       # when/where this pattern applies
-   local solution="$4"      # the fix or decision
-   local code_example="$5"  # optional code snippet
-   local tags="$6"          # comma-separated keywords
-
-   local pattern_id="${category:0:2}-$(date +%Y%m%d)-$(echo "$description" | tr ' ' '-' | tr '[:upper:]' '[:lower:]' | cut -c1-30)"
-   local project=$(basename $(git rev-parse --show-toplevel 2>/dev/null) 2>/dev/null || echo "unknown")
-
-   PATTERNS_FILE="${PATTERNS_FILE:-$HOME/.clawpowers/memory/patterns.jsonl}"
-   mkdir -p ~/.clawpowers/memory
-
-   # Build JSON record
-   node - <<EOF >> "$PATTERNS_FILE"
- console.log(JSON.stringify({
-   pattern_id: '$pattern_id',
-   category: '$category',
-   description: '$description',
-   context: '$context',
-   solution: '$solution',
-   code_example: '$code_example',
-   projects_used_in: ['$project'],
-   success_count: 1,
-   tags: '$tags'.split(',').map(t=>t.trim()).filter(Boolean),
-   created_at: new Date().toISOString(),
-   last_used: new Date().toISOString()
- }));
- EOF
-   echo "Pattern stored: $pattern_id"
- }
- ```
-
- **Store after these events (mandatory):**
-
- | Event | Category | What to store |
- |-------|---------|--------------|
- | Bug fixed | `bug-fix` | Root cause + exact fix + how to detect this bug |
- | Architecture decision made | `architecture` | Decision + alternatives considered + rationale |
- | Performance improvement measured | `performance` | Optimization + measured delta (e.g., "50% latency reduction") |
- | Security issue found/fixed | `security` | Vulnerability pattern + mitigation |
- | Test strategy validated | `testing` | Test approach + what it caught that unit tests missed |
-
- **Example stores:**
-
- ```bash
- # After fixing a React infinite re-render
- store_pattern "bug-fix" \
-   "useEffect with object dependency causes infinite re-render" \
-   "React functional components, useEffect with object/array deps" \
-   "Memoize the object with useMemo or extract stable primitive values as deps" \
-   "const stableRef = useMemo(() => ({ id: user.id }), [user.id])" \
-   "react,useEffect,infinite-render,memoization"
-
- # After an architecture decision
- store_pattern "architecture" \
-   "Event sourcing for audit log instead of mutable records" \
-   "Any service requiring immutable audit trail, compliance requirements" \
-   "Append-only event log; derive current state by replaying events; never update in-place" \
-   "" \
-   "event-sourcing,audit-log,cqrs,immutable"
-
- # After a performance fix
- store_pattern "performance" \
-   "N+1 query on user.posts relation reduced latency from 800ms to 45ms" \
-   "ORM with lazy loading, list views fetching related records" \
-   "Use eager loading: User.includes(:posts) or SQL JOIN instead of per-row query" \
-   "User.includes(:posts).where(...)" \
-   "n+1,orm,eager-loading,sql,latency"
- ```
-
- ### Step 3: Update on Reuse
-
- When a retrieved pattern successfully solves a new problem, increment its signal:
-
- ```bash
- update_pattern_success() {
-   local pattern_id="$1"
-   local project=$(basename $(git rev-parse --show-toplevel 2>/dev/null) 2>/dev/null || echo "unknown")
-
-   node - <<EOF > /tmp/patterns-updated.jsonl
- const fs = require('fs');
- const lines = fs.readFileSync(process.env.HOME + '/.clawpowers/memory/patterns.jsonl', 'utf8')
-   .trim().split('\n').filter(Boolean).map(l => JSON.parse(l));
- const now = new Date().toISOString();
- const updated = lines.map(p => {
-   if (p.pattern_id === '$pattern_id') {
-     const projects = Array.from(new Set([...(p.projects_used_in||[]), '$project']));
-     return { ...p, success_count: (p.success_count||0) + 1, last_used: now, projects_used_in: projects };
-   }
-   return p;
- });
- updated.forEach(p => console.log(JSON.stringify(p)));
- EOF
-   mv /tmp/patterns-updated.jsonl "$PATTERNS_FILE"
-   echo "Updated success_count for $pattern_id"
- }
- ```
-
- ### Step 4: Periodic Knowledge Base Maintenance
-
- Every 100 patterns or monthly, prune and consolidate:
-
- ```bash
- # Knowledge base health report
- node - <<'EOF'
- const fs = require('fs');
- const lines = fs.readFileSync(process.env.HOME + '/.clawpowers/memory/patterns.jsonl', 'utf8')
-   .trim().split('\n').filter(Boolean).map(l => JSON.parse(l));
-
- // Count by category
- const byCat = {};
- lines.forEach(p => { byCat[p.category] = (byCat[p.category]||0) + 1; });
-
- // High-value patterns (success_count ≥ 3)
- const highValue = lines.filter(p => p.success_count >= 3).length;
-
- // Stale patterns (not used in 6 months)
- const sixMonthsAgo = new Date(Date.now() - 6*30*24*60*60*1000).toISOString();
- const stale = lines.filter(p => (p.last_used||p.created_at) < sixMonthsAgo).length;
-
- console.log('Knowledge Base Health:');
- console.log('  Total patterns:', lines.length);
- console.log('  By category:', JSON.stringify(byCat));
- console.log('  High-value (≥3 successes):', highValue);
- console.log('  Stale (>6 months unused):', stale);
- console.log('  Cross-project patterns (≥2 projects):', lines.filter(p => (p.projects_used_in||[]).length >= 2).length);
- EOF
- ```
-
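The report above only counts stale patterns; the monthly pass can also archive them rather than delete them. A minimal sketch under the `patterns.jsonl` layout shown earlier — the `archive.jsonl` destination and the `prune_stale_patterns` name are illustrative, not part of the package:

```bash
# Sketch: move patterns unused for 6+ months into an archive file.
# ARCHIVE_FILE is a hypothetical destination; PATTERNS_FILE layout as above.
PATTERNS_FILE="${PATTERNS_FILE:-$HOME/.clawpowers/memory/patterns.jsonl}"
ARCHIVE_FILE="${ARCHIVE_FILE:-$HOME/.clawpowers/memory/archive.jsonl}"

prune_stale_patterns() {
  local cutoff
  cutoff=$(node -e 'console.log(new Date(Date.now() - 6*30*24*60*60*1000).toISOString())')
  node -e '
    const fs = require("fs");
    const [file, archive, cutoff] = process.argv.slice(1);
    const lines = fs.readFileSync(file, "utf8").trim().split("\n")
      .filter(Boolean).map(l => JSON.parse(l));
    // ISO-8601 timestamps compare correctly as plain strings
    const stale = lines.filter(p => (p.last_used || p.created_at) < cutoff);
    const fresh = lines.filter(p => (p.last_used || p.created_at) >= cutoff);
    fs.appendFileSync(archive, stale.map(p => JSON.stringify(p) + "\n").join(""));
    fs.writeFileSync(file, fresh.map(p => JSON.stringify(p) + "\n").join(""));
    console.log(`Archived ${stale.length}, kept ${fresh.length}`);
  ' "$PATTERNS_FILE" "$ARCHIVE_FILE" "$cutoff"
}
```

Archiving instead of deleting keeps deprecated approaches out of search results while preserving them for later review.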
- ### Step 5: Cross-Project Knowledge Transfer
-
- When starting a project in a domain you've worked in before:
-
- ```bash
- # Before starting on a new auth service
- search_patterns "auth jwt token" "security"
- search_patterns "auth session" "architecture"
- search_patterns "auth rate limit" "security"
-
- # Before debugging a Node.js memory issue
- search_patterns "memory leak node" "bug-fix"
- search_patterns "garbage collection" "performance"
- search_patterns "heap snapshot" "bug-fix"
- ```
-
- **The power:** An agent working on `project-d` that has seen the JWT clock skew bug on `project-a` will solve it in seconds on `project-d` — not hours.
-
- ## ClawPowers Enhancement
-
- When `~/.clawpowers/` runtime is initialized:
-
- **Full pipeline integration:**
-
- ```bash
- # At the start of any task
- bash runtime/persistence/store.sh set "knowledge:current-task:search-done" "false"
-
- # Search step (always first)
- RESULTS=$(node -e "/* search logic above */" 2>/dev/null)
- bash runtime/persistence/store.sh set "knowledge:current-task:search-done" "true"
- bash runtime/persistence/store.sh set "knowledge:current-task:search-results" "$RESULTS"
-
- # After task completion — store if new pattern discovered
- if [[ "$NEW_PATTERN_FOUND" == "true" ]]; then
-   store_pattern "$CATEGORY" "$DESCRIPTION" "$CONTEXT" "$SOLUTION" "$CODE_EXAMPLE" "$TAGS"
- fi
-
- # Record metrics
- bash runtime/metrics/collector.sh record \
-   --skill cross-project-knowledge \
-   --outcome success \
-   --notes "search: $SEARCH_HITS hits, stored: $STORED_PATTERNS new patterns"
- ```
-
- **Analyze knowledge base effectiveness:**
-
- ```bash
- bash runtime/feedback/analyze.sh --filter cross-project-knowledge
- # Reports: search hit rate, most-used patterns, cross-project transfer count,
- # average time saved vs. fresh investigation
- ```
-
- **Export / import for team sharing:**
- ```bash
- # Export your knowledge base (redact sensitive data)
- cat ~/.clawpowers/memory/patterns.jsonl | \
-   node -e "
-     const lines = require('fs').readFileSync('/dev/stdin','utf8').trim().split('\n').map(JSON.parse);
-     // Keep only high-value, generic patterns
-     const shareable = lines.filter(p => p.success_count >= 2 && !p.tags?.includes('internal'));
-     shareable.forEach(p => console.log(JSON.stringify(p)));
-   " > shared-patterns.jsonl
-
- # Import from a teammate's export
- cat shared-patterns.jsonl >> ~/.clawpowers/memory/patterns.jsonl
- echo "Imported $(wc -l < shared-patterns.jsonl) patterns"
- ```
-
- ## Anti-Patterns
-
- | Anti-Pattern | Why It Fails | Correct Approach |
- |-------------|-------------|-----------------|
- | Skip pre-task search | Re-solve known problems; waste time | Always search first, even if you think you know the answer |
- | Store patterns too specifically | Pattern only matches one exact situation | Generalize: describe the class of problem, not the instance |
- | Store without code_example | Pattern is hard to apply without template | Always include a minimal code example |
- | Forget to update success_count | High-value patterns look the same as single-use | Update every time a pattern is successfully applied |
- | Store negative results ("this didn't work") | Pollutes the knowledge base with noise | Only store successful patterns; capture failures in debugging logs |
- | Never prune stale patterns | Old patterns may suggest deprecated approaches | Monthly maintenance pass; archive patterns unused for 6+ months |
- | Search with overly broad terms | Too many irrelevant hits; signal buried | Search with 2-3 specific keywords from the error or domain |
- | Treat cross-project patterns as gospel | Context differs; blind application fails | Use as a strong starting hypothesis, then verify it fits |
@@ -1,305 +0,0 @@
- ---
- name: dispatching-parallel-agents
- description: Fan out independent tasks to parallel agent processes with load balancing, failure isolation, and result aggregation. Activate when you have N independent tasks that can execute concurrently.
- version: 1.0.0
- requires:
-   tools: [bash, git]
-   runtime: false
- metrics:
-   tracks: [agents_dispatched, success_rate, parallel_efficiency, aggregation_errors]
-   improves: [task_partitioning, failure_isolation_strategy, aggregation_method]
- ---
-
- # Dispatching Parallel Agents
-
- ## When to Use
-
- Apply this skill when:
-
- - You have 3+ independent tasks with no shared dependencies
- - Each task can be described with a complete, self-contained spec
- - You have access to multiple agent processes or context windows
- - The tasks are roughly equal in complexity (or can be load-balanced)
- - A failure in one task should not abort others
-
- **Skip when:**
- - Tasks share state that would conflict under concurrent access
- - Tasks must execute in sequence (use `executing-plans` instead)
- - You have fewer than 3 tasks (overhead outweighs benefit)
- - You can't isolate failure — one bad result corrupts all results
-
- **Relationship to `subagent-driven-development`:**
- ```
- subagent-driven-development: full development methodology (spec, review, worktree, integrate)
- dispatching-parallel-agents: execution mechanism (fan-out, monitor, aggregate)
-
- Use dispatching-parallel-agents for runtime parallelism.
- Use subagent-driven-development for development task orchestration.
- They are complementary — subagent-driven-development USES dispatching-parallel-agents.
- ```
-
- ## Core Methodology
-
- ### Step 1: Task Decomposition for Parallelism
-
- Before dispatching, verify each task is:
-
- 1. **Self-contained** — has all inputs it needs, produces a defined output
- 2. **Isolated** — doesn't write to shared state other tasks read
- 3. **Specced** — has clear success criteria (you'll need these for aggregation)
- 4. **Sized appropriately** — not so small that dispatch overhead dominates
-
- **Task spec format for parallel dispatch:**
- ```markdown
- ## Task ID: [unique identifier]
-
- **Input:**
- - [File or data this task starts from]
- - [Any context this task needs]
-
- **Objective:** [Single sentence]
-
- **Output:**
- - [Exact artifact produced: file path, JSON structure, etc.]
-
- **Success criteria:**
- - [ ] [Verifiable criterion]
-
- **Failure behavior:**
- - [What this task does when it encounters an error]
- - [What it outputs on failure so the aggregator can detect it]
- ```
-
- ### Step 2: Failure Isolation Design
-
- Decide how failures propagate:
-
- | Strategy | When to Use | Implementation |
- |----------|-------------|----------------|
- | **Continue on failure** | Tasks are independent, partial results are valuable | Failed tasks return error object, aggregator handles |
- | **Fail fast** | Any failure invalidates all results | Use process groups; kill siblings on first failure |
- | **Retry on failure** | Tasks are idempotent and failures are transient | Retry N times with exponential backoff |
- | **Fallback on failure** | Alternative task exists for same output | Dispatch fallback when primary fails |
-
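The retry strategy named in the table above can be sketched as a generic wrapper. A minimal sketch: `retry_task` is an illustrative name, and the wrapped command must be idempotent for retries to be safe:

```bash
# Sketch: retry an idempotent command with exponential backoff.
# Usage: retry_task <max_attempts> <command> [args...]
retry_task() {
  local max_attempts="$1"; shift
  local delay=1
  local attempt
  for (( attempt = 1; attempt <= max_attempts; attempt++ )); do
    if "$@"; then
      return 0
    fi
    if (( attempt < max_attempts )); then
      echo "Attempt $attempt failed; retrying in ${delay}s" >&2
      sleep "$delay"
      delay=$(( delay * 2 ))  # backoff: 1s, 2s, 4s, ...
    fi
  done
  echo "All $max_attempts attempts failed" >&2
  return 1
}
```

For a dispatched task this would wrap the task runner, e.g. `retry_task 3 run_task "$spec_file"` (where `run_task` is the same placeholder used in the dispatch example below).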
- **Output envelope for failure isolation:**
- ```json
- {
-   "task_id": "auth-api",
-   "status": "success|failure|partial",
-   "output": { ... },
-   "error": null,
-   "duration_seconds": 47.3,
-   "checksum": "sha256_of_output"
- }
- ```
-
- Every task must produce this envelope — the aggregator depends on it.
-
- ### Step 3: Dispatch Mechanism
-
- **Option A: Process-level parallelism (Bash)**
-
- ```bash
- #!/usr/bin/env bash
- # Fan out tasks in background, wait for all, aggregate results
-
- RESULTS_DIR=$(mktemp -d)
- PIDS=()
-
- dispatch_task() {
-   local task_id="$1"
-   local spec_file="$2"
-
-   (
-     # Each task runs in a subshell with its own output file
-     output_file="$RESULTS_DIR/${task_id}.json"
-
-     if run_task "$spec_file" > "$output_file" 2>&1; then
-       # Wrap output in envelope
-       echo '{"task_id":"'"$task_id"'","status":"success","output":'"$(cat "$output_file")"'}'
-     else
-       exit_code=$?
-       echo '{"task_id":"'"$task_id"'","status":"failure","error":"exit code '"$exit_code"'","output":null}'
-     fi
-   ) > "$RESULTS_DIR/${task_id}_envelope.json" &
-
-   PIDS+=($!)
-   echo "Dispatched task $task_id (PID: ${PIDS[-1]})"
- }
-
- # Dispatch all tasks
- dispatch_task "auth-api" "specs/auth-api.md"
- dispatch_task "auth-db" "specs/auth-db.md"
- dispatch_task "auth-tests" "specs/auth-tests.md"
-
- # Wait for all tasks
- echo "Waiting for ${#PIDS[@]} tasks..."
- for pid in "${PIDS[@]}"; do
-   wait "$pid"
- done
-
- echo "All tasks complete. Results in $RESULTS_DIR/"
- ```
-
- **Option B: Agent-level parallelism (Multi-context)**
-
- When you have multiple agent contexts (e.g., multiple Claude Code sessions, multiple Cursor instances):
-
- 1. For each parallel task, open a new agent context
- 2. Inject the complete task spec
- 3. Each agent works independently in its assigned worktree
- 4. Orchestrator aggregates results after all agents complete
-
- **Worktree-per-agent setup:**
- ```bash
- TASKS=("auth-api" "auth-db" "auth-tests")
- for task in "${TASKS[@]}"; do
-   git worktree add "../project-${task}" -b "feature/${task}" main
-   echo "Worktree ready for ${task}: ../project-${task}"
- done
- ```
-
- ### Step 4: Monitoring
-
- Track task progress during execution:
-
- ```bash
- # Check which tasks are still running
- for pid in "${PIDS[@]}"; do
-   if kill -0 "$pid" 2>/dev/null; then
-     echo "Still running: PID $pid"
-   fi
- done
-
- # Or: watch output files for progress indicators
- watch -n 5 'ls -la '"$RESULTS_DIR"'/'
-
- # Timeout monitoring — kill tasks that run too long
- MAX_DURATION=600  # 10 minutes
- for i in "${!PIDS[@]}"; do
-   pid="${PIDS[$i]}"
-   task="${TASKS[$i]}"
-   if kill -0 "$pid" 2>/dev/null; then
-     elapsed=$(ps -p "$pid" -o etimes= 2>/dev/null | tr -d ' ')
-     if [[ ${elapsed:-0} -gt $MAX_DURATION ]]; then
-       kill "$pid"
-       echo "TIMEOUT: Task $task (PID $pid) exceeded ${MAX_DURATION}s"
-     fi
-   fi
- done
- ```
-
- ### Step 5: Result Aggregation
-
- After all tasks complete, aggregate results:
-
- ```bash
- #!/usr/bin/env bash
- # Aggregate results from all parallel tasks
-
- aggregate_results() {
-   local results_dir="$1"
-   local success_count=0
-   local failure_count=0
-   local failures=()
-
-   for envelope_file in "$results_dir"/*_envelope.json; do
-     status=$(python3 -c "import json,sys; d=json.load(open('$envelope_file')); print(d['status'])")
-     task_id=$(python3 -c "import json,sys; d=json.load(open('$envelope_file')); print(d['task_id'])")
-
-     if [[ "$status" == "success" ]]; then
-       ((success_count++))
-       echo "✓ $task_id"
-     else
-       ((failure_count++))
-       failures+=("$task_id")
-       error=$(python3 -c "import json,sys; d=json.load(open('$envelope_file')); print(d.get('error','unknown'))")
-       echo "✗ $task_id: $error"
-     fi
-   done
-
-   echo ""
-   echo "Results: $success_count succeeded, $failure_count failed"
-
-   if [[ $failure_count -gt 0 ]]; then
-     echo "Failed tasks: ${failures[*]}"
-     return 1
-   fi
-
-   return 0
- }
-
- aggregate_results "$RESULTS_DIR"
- ```
-
- **Aggregation decisions:**
- - All succeeded → proceed to integration
- - Some failed → re-dispatch only failed tasks (not successful ones)
- - Critical task failed → abort, fix, re-dispatch all dependent tasks
-
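The "re-dispatch only failed tasks" decision can be driven directly from the envelopes. A minimal sketch over the `*_envelope.json` files produced above; the actual re-dispatch call is left as a comment because the spec path convention (`specs/<task>.md`) is illustrative:

```bash
# Sketch: list tasks whose envelope status is not "success" so only they re-run.
redispatch_failed() {
  local results_dir="$1"
  local envelope status task_id
  for envelope in "$results_dir"/*_envelope.json; do
    status=$(python3 -c "import json; print(json.load(open('$envelope'))['status'])")
    if [[ "$status" != "success" ]]; then
      task_id=$(python3 -c "import json; print(json.load(open('$envelope'))['task_id'])")
      echo "Re-dispatching: $task_id"
      # dispatch_task "$task_id" "specs/${task_id}.md"  # path is illustrative
    fi
  done
}
```

Because succeeded envelopes are skipped, a retry pass never repeats completed work.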
- ### Step 6: Integration
-
- After successful aggregation:
-
- 1. Verify outputs are compatible (no conflicting interfaces)
- 2. Merge in dependency order (see `using-git-worktrees`)
- 3. Run integration tests across all task outputs
- 4. Clean up worktrees
-
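Steps 2 and 4 above can be sketched together, assuming the branch and worktree naming from the worktree-per-agent setup; `integrate_tasks` and the dependency order shown are illustrative:

```bash
# Sketch: merge task branches in dependency order, then drop their worktrees.
integrate_tasks() {
  local task
  for task in "$@"; do
    if ! git merge --no-ff -m "Integrate ${task}" "feature/${task}"; then
      echo "Merge failed on ${task}; resolve before continuing" >&2
      return 1
    fi
  done
  # run integration tests here, then clean up
  for task in "$@"; do
    git worktree remove "../project-${task}" 2>/dev/null || true
  done
}

# e.g. integrate_tasks auth-db auth-api auth-tests  # hypothetical dependency order
```

Stopping on the first failed merge keeps later branches from being merged on top of an unresolved conflict.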
- ## ClawPowers Enhancement
-
- When `~/.clawpowers/` runtime is initialized:
-
- **Execution Registry:**
-
- ```bash
- # Register dispatch batch
- BATCH_ID="auth-$(date +%s)"
- bash runtime/persistence/store.sh set "dispatch:${BATCH_ID}:tasks" "auth-api,auth-db,auth-tests"
- bash runtime/persistence/store.sh set "dispatch:${BATCH_ID}:started_at" "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
-
- # Update per-task status as they complete
- bash runtime/persistence/store.sh set "dispatch:${BATCH_ID}:auth-api:status" "success"
- bash runtime/persistence/store.sh set "dispatch:${BATCH_ID}:auth-db:status" "running"
-
- # On session interrupt: resume knows exactly which tasks to re-dispatch
- bash runtime/persistence/store.sh list "dispatch:${BATCH_ID}:*:status"
- ```
-
- **Load Balancing:**
-
- Track task execution times to balance future dispatches:
- ```bash
- bash runtime/persistence/store.sh set "task-timing:auth-api" "47"
- bash runtime/persistence/store.sh set "task-timing:auth-db" "23"
- ```
-
- Future dispatches group tasks to equalize total runtime across agents.
-
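The grouping step can use a greedy longest-task-first assignment over the recorded timings. A minimal sketch; `balance_tasks` and the `task:seconds` input format are illustrative, not part of the runtime:

```bash
# Sketch: greedy load balancing — assign each task, longest first,
# to the agent with the smallest total runtime so far.
# Usage: balance_tasks <n_agents> task:seconds [task:seconds ...]
balance_tasks() {
  local n_agents="$1"; shift
  local -a loads assignment
  local i pair task secs best
  for (( i = 0; i < n_agents; i++ )); do
    loads[i]=0
    assignment[i]=""
  done
  # Sort "task:seconds" pairs by seconds, descending
  while IFS= read -r pair; do
    task="${pair%%:*}"
    secs="${pair##*:}"
    best=0
    for (( i = 1; i < n_agents; i++ )); do
      if (( loads[i] < loads[best] )); then best=$i; fi
    done
    loads[best]=$(( loads[best] + secs ))
    assignment[best]+="$task "
  done < <(printf '%s\n' "$@" | sort -t: -k2,2 -rn)
  for (( i = 0; i < n_agents; i++ )); do
    echo "agent $i (${loads[i]}s): ${assignment[i]% }"
  done
}
```

With the timings recorded above, `balance_tasks 2 auth-api:47 auth-db:23 auth-tests:31` puts the long task alone on one agent and the two shorter ones together on the other.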
- **Failure Isolation Metrics:**
-
- ```bash
- bash runtime/metrics/collector.sh record \
-   --skill dispatching-parallel-agents \
-   --outcome success \
-   --notes "auth: 3 tasks, all succeeded, 47s wall time vs 117s sequential"
- ```
-
- Tracks parallel efficiency (wall time vs. theoretical serial time), helps tune batch sizes.
-
- ## Anti-Patterns
-
- | Anti-Pattern | Why It Fails | Correct Approach |
- |-------------|-------------|-----------------|
- | Dispatching tasks that share mutable state | Race conditions, data corruption | Verify isolation before dispatch |
- | No output envelope format | Aggregator can't distinguish success from failure | Every task must produce structured output |
- | Waiting for all tasks when partial results suffice | Slowest task blocks all results | Consider streaming aggregation for independent outputs |
- | No timeout on tasks | One hung task blocks aggregation forever | Always set timeouts |
- | Re-dispatching succeeded tasks on retry | Wastes time, may produce different results | Track task status, retry only failed tasks |
- | No result verification after aggregation | Corrupted output passes through | Verify each task output against spec before integration |
-
- ## Integration with Other Skills
-
- - Used by `subagent-driven-development` for task fan-out
- - Requires `using-git-worktrees` for file isolation
- - Outputs consumed by `verification-before-completion`