clawpowers 1.0.0

Files changed (42)
  1. package/.claude-plugin/manifest.json +19 -0
  2. package/.codex/INSTALL.md +36 -0
  3. package/.cursor-plugin/manifest.json +21 -0
  4. package/.opencode/INSTALL.md +52 -0
  5. package/ARCHITECTURE.md +69 -0
  6. package/README.md +381 -0
  7. package/bin/clawpowers.js +390 -0
  8. package/bin/clawpowers.sh +91 -0
  9. package/gemini-extension.json +32 -0
  10. package/hooks/session-start +205 -0
  11. package/hooks/session-start.cmd +43 -0
  12. package/hooks/session-start.js +163 -0
  13. package/package.json +54 -0
  14. package/runtime/feedback/analyze.js +621 -0
  15. package/runtime/feedback/analyze.sh +546 -0
  16. package/runtime/init.js +172 -0
  17. package/runtime/init.sh +145 -0
  18. package/runtime/metrics/collector.js +361 -0
  19. package/runtime/metrics/collector.sh +308 -0
  20. package/runtime/persistence/store.js +433 -0
  21. package/runtime/persistence/store.sh +303 -0
  22. package/skill.json +74 -0
  23. package/skills/agent-payments/SKILL.md +411 -0
  24. package/skills/brainstorming/SKILL.md +233 -0
  25. package/skills/content-pipeline/SKILL.md +282 -0
  26. package/skills/dispatching-parallel-agents/SKILL.md +305 -0
  27. package/skills/executing-plans/SKILL.md +255 -0
  28. package/skills/finishing-a-development-branch/SKILL.md +260 -0
  29. package/skills/learn-how-to-learn/SKILL.md +235 -0
  30. package/skills/market-intelligence/SKILL.md +288 -0
  31. package/skills/prospecting/SKILL.md +313 -0
  32. package/skills/receiving-code-review/SKILL.md +225 -0
  33. package/skills/requesting-code-review/SKILL.md +206 -0
  34. package/skills/security-audit/SKILL.md +308 -0
  35. package/skills/subagent-driven-development/SKILL.md +244 -0
  36. package/skills/systematic-debugging/SKILL.md +279 -0
  37. package/skills/test-driven-development/SKILL.md +299 -0
  38. package/skills/using-clawpowers/SKILL.md +137 -0
  39. package/skills/using-git-worktrees/SKILL.md +261 -0
  40. package/skills/verification-before-completion/SKILL.md +254 -0
  41. package/skills/writing-plans/SKILL.md +276 -0
  42. package/skills/writing-skills/SKILL.md +260 -0
@@ -0,0 +1,282 @@
---
name: content-pipeline
description: Write technical content, humanize it for natural voice, format for the target platform, and publish. Activate when creating blog posts, documentation, social media content, or newsletters.
version: 1.0.0
requires:
  tools: [bash, curl]
  runtime: false
metrics:
  tracks: [content_pieces_published, engagement_scores, revision_cycles, publish_time]
  improves: [humanization_quality, platform_formatting, tone_calibration]
---

# Content Pipeline

## When to Use

Apply this skill when:

- Writing technical blog posts or articles
- Creating documentation for public consumption
- Drafting social media content (Twitter/X, LinkedIn, Hacker News)
- Writing newsletters or announcements
- Creating README files for public repositories
- Producing technical tutorials or guides

**Skip when:**

- Writing internal docs (no humanization step needed)
- Pure code comments (different register entirely)
- Short Slack/Teams messages (too much overhead for too little output)

## Core Methodology

### Stage 1: Write (Technical Draft)

Write for accuracy first, voice second. The technical draft should be:

- **Complete** — all required information is present
- **Accurate** — facts, code samples, and commands are verified
- **Structured** — uses headers, lists, and code blocks appropriately
- **Dense** — every sentence carries information; no filler

**Technical draft goals:**

- Code examples compile and run
- Commands produce the described output
- Version numbers and API names are current and accurate
- Links work

**Structure template for a technical blog post:**

```markdown
# [Concrete, specific title — no clickbait]

## The Problem
[What pain does the reader have? Why does this matter?]

## The Solution
[What you built/discovered/solved — the payoff]

## How It Works
[Technical explanation with code examples]

## [Additional Implementation Sections]
[Step-by-step if it's a tutorial; depth if it's an analysis]

## Conclusion
[1-2 sentences: what the reader can do now that they couldn't before]
```

**Structure template for documentation:**

```markdown
# [Feature/Component Name]

## Overview
[One paragraph: what this is and when to use it]

## Quick Start
[Minimal working example — 5 lines max]

## Configuration
[All options, with types, defaults, and descriptions]

## Examples
[2-3 realistic use cases with full code]

## Reference
[Complete API/parameter reference]

## Troubleshooting
[Common errors and their solutions]
```

### Stage 2: Humanize

The technical draft sounds like documentation. Published content must sound like a person.

**The problem:** LLM-generated text has a recognizable voice: over-hedged, passive, verbose, and full of transition phrases that signal nothing.

**Banned patterns (remove every instance):**

```
"Delve into"
"It's worth noting that"
"In the realm of"
"Let's explore"
"Dive deep"
"In conclusion"
"In summary"
"Seamlessly"
"Leverage" (when "use" works)
"Game-changer"
"Groundbreaking"
"Revolutionary"
"Powerful" (unqualified)
"Robust" (unqualified)
"Ultimately"
"Furthermore"
"Moreover"
"That being said"
"At the end of the day"
"It's important to note"
```
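
This check can be automated. Below is a minimal sketch of a hypothetical `flag_banned` helper (not part of the package) that greps a draft for the unconditional banned phrases and returns how many were found:

```shell
# flag_banned FILE: print every banned-phrase hit in a draft, then return the
# number of distinct banned phrases found (0 = clean). Hypothetical helper.
flag_banned() {
  local draft="$1" hits=0 phrase
  local banned=(
    "delve into" "it's worth noting" "in the realm of" "let's explore"
    "dive deep" "in conclusion" "in summary" "seamlessly" "game-changer"
    "groundbreaking" "revolutionary" "ultimately" "furthermore" "moreover"
    "that being said" "at the end of the day" "it's important to note"
  )
  for phrase in "${banned[@]}"; do
    # -i: case-insensitive, -F: literal match, -n: show line numbers
    if grep -n -i -F "$phrase" "$draft"; then
      hits=$((hits + 1))
    fi
  done
  return "$hits"
}
```

Context-dependent offenders ("leverage", "powerful", "robust") still need a human pass, since they are banned only when unqualified.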

**Humanization checklist:**

- [ ] Active voice: "The function returns X" not "X is returned by the function"
- [ ] Specific claims: "37% faster" not "significantly faster"
- [ ] No filler intros: start with the substance, not "In this post, we will..."
- [ ] Conversational where appropriate: short sentences. Fragments when they land better.
- [ ] Concrete examples from real use, not "imagine a world where..."
- [ ] First person when sharing genuine perspective ("I spent 3 days debugging this")
- [ ] No over-qualified hedging: "This may potentially help some users" → "This solves X"

**Humanization transform examples:**

Before:
> "In this article, we will delve into the powerful features of the ClawPowers framework and explore how it can be leveraged to enhance your agent's capabilities in a seamless manner."

After:
> "ClawPowers gives your coding agent 20 skills. Here's how each one works and when to use it."

Before:
> "It's worth noting that the runtime layer provides significant performance improvements."

After:
> "The runtime layer cuts task time by 40% on average. Here's the data."

### Stage 3: Platform Formatting

Different platforms have different requirements:

**Technical blog (dev.to, Hashnode, personal blog):**
- Length: 1500-3000 words (comprehensive guides: up to 5000)
- Code blocks with language hints
- Headers for navigation (H2, H3 — not H4+)
- Images optional but useful for architecture diagrams
- Tags: 3-5, technical and specific

**Twitter/X thread:**
- Thread format: lead tweet → detail tweets → conclusion
- Lead: hook + value proposition in 280 chars
- Each tweet: one idea, can stand alone
- No jargon in the lead tweet (hook a broader audience)
- End with a CTA (link, follow, reply)
- Example thread structure:
  ```
  Tweet 1: Hook (the problem or the surprising result)
  Tweets 2-3: Setup/context
  Tweets 4-7: The substance (one idea per tweet)
  Tweet 8: The takeaway
  Tweet 9: CTA + link
  ```

**LinkedIn:**
- Length: 150-300 words (longer performs worse)
- Line breaks every 1-3 sentences (LinkedIn's UI favors scannable text)
- First 2 lines must hook (everything else is hidden behind "see more")
- Professional but human tone
- End with a question to drive comments

**Hacker News (Show HN / Ask HN):**
- Title: factual, specific, no marketing language
- Top comment: author context, what problem it solves, technical details
- Avoid superlatives — the community is allergic to hype
- "I built X to solve Y problem" not "Revolutionary new tool transforms..."

**GitHub README:**
- Badge line first (CI status, npm version, license)
- 3-sentence description: what, who, why
- Quick start must work with copy-paste
- Architecture diagram for complex projects
- License and contributing sections at the bottom

**Newsletter:**
- Subject line: specific, implies value ("How we cut our test suite from 8min to 47sec")
- Preheader: complements the subject, not a repeat
- Opening: straight to value — no "Hey, it's [name]!"
- Sections: use headers, keep scannable
- CTA: one primary action, at the bottom

### Stage 4: Pre-Publish Review

Before publishing:

- [ ] All code samples verified (copy-paste and run)
- [ ] All links work
- [ ] No confidential information (internal URLs, customer names, private configs)
- [ ] Humanization complete (banned phrases removed)
- [ ] Platform format applied
- [ ] Title is accurate and specific
- [ ] Tags/categories are correct
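
The "all links work" item can be scripted. A minimal sketch of a hypothetical `check_links` helper (it assumes `curl`, which this skill's frontmatter already requires):

```shell
# check_links FILE: HEAD-request every unique http(s) URL found in a draft,
# print the broken ones, and return how many failed. Hypothetical helper.
check_links() {
  local draft="$1" broken=0 url
  while read -r url; do
    # -I: HEAD only, -f: fail on HTTP errors, -L: follow redirects
    if ! curl -sfIL -o /dev/null --max-time 10 "$url"; then
      echo "BROKEN: $url"
      broken=$((broken + 1))
    fi
  done < <(grep -oE 'https?://[^ )"<>]+' "$draft" | sort -u)
  return "$broken"
}
```

Some servers reject HEAD requests, so treat a failure as "verify manually", not proof of breakage.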

### Stage 5: Publish

**Blog platforms (API publishing):**

```bash
# dev.to API. Build the payload with jq (1.6+ for --rawfile) so that quotes
# and newlines in the markdown are properly JSON-escaped; inlining the file
# with $(cat article.md) would produce invalid JSON for any real article.
jq -n \
  --arg title "Your Article Title" \
  --rawfile body article.md \
  '{article: {title: $title, body_markdown: $body, published: true, tags: ["programming", "ai", "tools"]}}' |
curl -X POST "https://dev.to/api/articles" \
  -H "api-key: $DEV_TO_API_KEY" \
  -H "Content-Type: application/json" \
  -d @-
```

**GitHub (documentation):**

```bash
# Update docs in repo
git add docs/new-feature.md
git commit -m "docs: add [feature] guide"
git push
```

## ClawPowers Enhancement

When `~/.clawpowers/` runtime is initialized:

**Publication Tracking:**

```bash
bash runtime/persistence/store.sh set "content:clawpowers-intro:platform" "dev.to"
bash runtime/persistence/store.sh set "content:clawpowers-intro:published_at" "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
bash runtime/persistence/store.sh set "content:clawpowers-intro:url" "https://dev.to/..."
```

**Engagement Tracking:**

After 24-48 hours, update with engagement metrics:

```bash
bash runtime/persistence/store.sh set "content:clawpowers-intro:views" "847"
bash runtime/persistence/store.sh set "content:clawpowers-intro:reactions" "34"
bash runtime/persistence/store.sh set "content:clawpowers-intro:comments" "7"
```

**Content Performance Analysis:**

`runtime/feedback/analyze.sh` identifies:

- Best-performing title patterns
- Optimal content length per platform
- Highest-engagement topic areas
- Time-of-publish correlation with reach

```bash
bash runtime/metrics/collector.sh record \
  --skill content-pipeline \
  --outcome success \
  --notes "clawpowers-intro: 1800 words, dev.to + twitter thread, published"
```

## Anti-Patterns

| Anti-Pattern | Why It Fails | Correct Approach |
|--------------|--------------|------------------|
| Publishing technical draft directly | Reads like documentation, not content | Always run humanization step |
| Same text on all platforms | Each platform has different format requirements | Platform-specific formatting per Stage 3 |
| Unverified code samples | Readers can't reproduce, damages credibility | Run every code sample before publishing |
| Superlative titles ("The BEST guide to...") | Algorithms deprioritize, readers distrust | Specific, factual titles |
| Buried lede | Readers don't reach the value | Lead with the most interesting thing |
| Publishing without review | Errors in published content are permanent | Pre-publish checklist, always |
| No CTA | Content doesn't drive the desired outcome | One clear CTA per piece |
@@ -0,0 +1,305 @@
---
name: dispatching-parallel-agents
description: Fan out independent tasks to parallel agent processes with load balancing, failure isolation, and result aggregation. Activate when you have N independent tasks that can execute concurrently.
version: 1.0.0
requires:
  tools: [bash, git]
  runtime: false
metrics:
  tracks: [agents_dispatched, success_rate, parallel_efficiency, aggregation_errors]
  improves: [task_partitioning, failure_isolation_strategy, aggregation_method]
---

# Dispatching Parallel Agents

## When to Use

Apply this skill when:

- You have 3+ independent tasks with no shared dependencies
- Each task can be described with a complete, self-contained spec
- You have access to multiple agent processes or context windows
- The tasks are roughly equal in complexity (or can be load-balanced)
- A failure in one task should not abort others

**Skip when:**

- Tasks share state that would conflict under concurrent access
- Tasks must execute in sequence (use `executing-plans` instead)
- You have fewer than 3 tasks (overhead outweighs benefit)
- You can't isolate failure — one bad result corrupts all results

**Relationship to `subagent-driven-development`:**

```
subagent-driven-development: full development methodology (spec, review, worktree, integrate)
dispatching-parallel-agents: execution mechanism (fan-out, monitor, aggregate)

Use dispatching-parallel-agents for runtime parallelism.
Use subagent-driven-development for development task orchestration.
They are complementary — subagent-driven-development USES dispatching-parallel-agents.
```

## Core Methodology

### Step 1: Task Decomposition for Parallelism

Before dispatching, verify each task is:

1. **Self-contained** — has all inputs it needs, produces a defined output
2. **Isolated** — doesn't write to shared state other tasks read
3. **Specced** — has clear success criteria (you'll need these for aggregation)
4. **Sized appropriately** — not so small that dispatch overhead dominates

**Task spec format for parallel dispatch:**

```markdown
## Task ID: [unique identifier]

**Input:**
- [File or data this task starts from]
- [Any context this task needs]

**Objective:** [Single sentence]

**Output:**
- [Exact artifact produced: file path, JSON structure, etc.]

**Success criteria:**
- [ ] [Verifiable criterion]

**Failure behavior:**
- [What this task does when it encounters an error]
- [What it outputs on failure so the aggregator can detect it]
```
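
Specs can be checked mechanically before dispatch. A sketch of a hypothetical `validate_spec` helper keyed to the template above:

```shell
# validate_spec FILE: verify a task spec contains every section the template
# requires, report what's missing, and return the count. Hypothetical helper.
validate_spec() {
  local spec="$1" missing=0 section
  local required=(
    "## Task ID" "**Input:**" "**Objective:**" "**Output:**"
    "**Success criteria:**" "**Failure behavior:**"
  )
  for section in "${required[@]}"; do
    # -q: quiet, -F: literal match (section markers contain * and #)
    if ! grep -qF "$section" "$spec"; then
      echo "missing section: $section" >&2
      missing=$((missing + 1))
    fi
  done
  return "$missing"
}
```

Run it over every spec file before fan-out; a non-zero return means the spec is not dispatch-ready.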

### Step 2: Failure Isolation Design

Decide how failures propagate:

| Strategy | When to Use | Implementation |
|----------|-------------|----------------|
| **Continue on failure** | Tasks are independent, partial results are valuable | Failed tasks return error object, aggregator handles |
| **Fail fast** | Any failure invalidates all results | Use process groups; kill siblings on first failure |
| **Retry on failure** | Tasks are idempotent and failures are transient | Retry N times with exponential backoff |
| **Fallback on failure** | Alternative task exists for same output | Dispatch fallback when primary fails |
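
The retry strategy from the table can be sketched as a small wrapper (hypothetical `retry` helper; only safe for idempotent tasks):

```shell
# retry MAX CMD [ARGS...]: run CMD up to MAX times, doubling the delay between
# attempts (1s, 2s, 4s, ...). Returns 0 on first success, 1 if every attempt
# fails. Hypothetical helper, not part of the package.
retry() {
  local max="$1" delay=1 attempt
  shift
  for ((attempt = 1; attempt <= max; attempt++)); do
    "$@" && return 0
    if ((attempt < max)); then
      echo "attempt $attempt/$max failed; retrying in ${delay}s" >&2
      sleep "$delay"
      delay=$((delay * 2))
    fi
  done
  return 1
}
```

Usage: `retry 3 curl -sf "$endpoint"` retries a transient network failure but gives up after three attempts.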

**Output envelope for failure isolation:**

```json
{
  "task_id": "auth-api",
  "status": "success|failure|partial",
  "output": { ... },
  "error": null,
  "duration_seconds": 47.3,
  "checksum": "sha256_of_output"
}
```

Every task must produce this envelope — the aggregator depends on it.
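
Given that dependency, validating envelopes at the boundary is cheap insurance. A sketch of a hypothetical `validate_envelope` helper (uses `python3` for the JSON parsing):

```shell
# validate_envelope FILE: exit 0 only if FILE is a well-formed result envelope,
# i.e. a JSON object with a string task_id, a known status, and output/error
# keys. Hypothetical helper; uses python3 for JSON parsing.
validate_envelope() {
  python3 - "$1" <<'PY'
import json, sys

try:
    with open(sys.argv[1]) as f:
        d = json.load(f)
except (OSError, ValueError):
    sys.exit(1)  # unreadable or not JSON at all

ok = (isinstance(d, dict)
      and isinstance(d.get("task_id"), str)
      and d.get("status") in ("success", "failure", "partial")
      and "output" in d and "error" in d)
sys.exit(0 if ok else 1)
PY
}
```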

### Step 3: Dispatch Mechanism

**Option A: Process-level parallelism (Bash)**

```bash
#!/usr/bin/env bash
# Fan out tasks in the background, wait for all, aggregate results.
# Assumes a run_task function (not defined here) that executes one task spec
# and prints the task's JSON output.

RESULTS_DIR=$(mktemp -d)
PIDS=()

dispatch_task() {
  local task_id="$1"
  local spec_file="$2"

  (
    # Each task runs in a subshell with its own output file
    output_file="$RESULTS_DIR/${task_id}.json"

    if run_task "$spec_file" > "$output_file" 2>&1; then
      # Wrap the task's JSON output in an envelope
      echo '{"task_id":"'"$task_id"'","status":"success","output":'"$(cat "$output_file")"'}'
    else
      exit_code=$?
      echo '{"task_id":"'"$task_id"'","status":"failure","error":"exit code '"$exit_code"'","output":null}'
    fi
  ) > "$RESULTS_DIR/${task_id}_envelope.json" &

  PIDS+=($!)
  echo "Dispatched task $task_id (PID: ${PIDS[-1]})"
}

# Dispatch all tasks
dispatch_task "auth-api" "specs/auth-api.md"
dispatch_task "auth-db" "specs/auth-db.md"
dispatch_task "auth-tests" "specs/auth-tests.md"

# Wait for all tasks
echo "Waiting for ${#PIDS[@]} tasks..."
for pid in "${PIDS[@]}"; do
  wait "$pid"
done

echo "All tasks complete. Results in $RESULTS_DIR/"
```

**Option B: Agent-level parallelism (Multi-context)**

When you have multiple agent contexts (e.g., multiple Claude Code sessions, multiple Cursor instances):

1. For each parallel task, open a new agent context
2. Inject the complete task spec
3. Each agent works independently in its assigned worktree
4. Orchestrator aggregates results after all agents complete

**Worktree-per-agent setup:**

```bash
TASKS=("auth-api" "auth-db" "auth-tests")
for task in "${TASKS[@]}"; do
  git worktree add "../project-${task}" -b "feature/${task}" main
  echo "Worktree ready for ${task}: ../project-${task}"
done
```

### Step 4: Monitoring

Track task progress during execution:

```bash
# Check which tasks are still running
for pid in "${PIDS[@]}"; do
  if kill -0 "$pid" 2>/dev/null; then
    echo "Still running: PID $pid"
  fi
done

# Or: watch output files for progress indicators
watch -n 5 'ls -la '"$RESULTS_DIR"'/'

# Timeout monitoring — kill tasks that run too long
# (assumes TASKS[i] corresponds to PIDS[i], i.e. tasks were dispatched in TASKS order)
MAX_DURATION=600  # 10 minutes
for i in "${!PIDS[@]}"; do
  pid="${PIDS[$i]}"
  task="${TASKS[$i]}"
  if kill -0 "$pid" 2>/dev/null; then
    elapsed=$(ps -p "$pid" -o etimes= 2>/dev/null | tr -d ' ')
    if [[ ${elapsed:-0} -gt $MAX_DURATION ]]; then
      kill "$pid"
      echo "TIMEOUT: Task $task (PID $pid) exceeded ${MAX_DURATION}s"
    fi
  fi
done
```

### Step 5: Result Aggregation

After all tasks complete, aggregate results:

```bash
#!/usr/bin/env bash
# Aggregate results from all parallel tasks

aggregate_results() {
  local results_dir="$1"
  local success_count=0
  local failure_count=0
  local failures=()

  for envelope_file in "$results_dir"/*_envelope.json; do
    # Pass the path as an argument so quoting in filenames can't break the snippet
    status=$(python3 -c "import json,sys; print(json.load(open(sys.argv[1]))['status'])" "$envelope_file")
    task_id=$(python3 -c "import json,sys; print(json.load(open(sys.argv[1]))['task_id'])" "$envelope_file")

    if [[ "$status" == "success" ]]; then
      ((success_count++))
      echo "✓ $task_id"
    else
      ((failure_count++))
      failures+=("$task_id")
      error=$(python3 -c "import json,sys; print(json.load(open(sys.argv[1])).get('error','unknown'))" "$envelope_file")
      echo "✗ $task_id: $error"
    fi
  done

  echo ""
  echo "Results: $success_count succeeded, $failure_count failed"

  if [[ $failure_count -gt 0 ]]; then
    echo "Failed tasks: ${failures[*]}"
    return 1
  fi

  return 0
}

aggregate_results "$RESULTS_DIR"
```

**Aggregation decisions:**

- All succeeded → proceed to integration
- Some failed → re-dispatch only failed tasks (not successful ones)
- Critical task failed → abort, fix, re-dispatch all dependent tasks
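
The "some failed" branch can reuse the envelopes directly. A sketch, assuming the `dispatch_task` helper and `RESULTS_DIR` layout from Step 3, Option A, with specs under `specs/`:

```shell
# redispatch_failed RESULTS_DIR: scan envelopes and re-dispatch every task whose
# status is not "success". Succeeded tasks are left alone. Hypothetical helper.
redispatch_failed() {
  local results_dir="$1" envelope status task_id
  for envelope in "$results_dir"/*_envelope.json; do
    status=$(python3 -c "import json,sys; print(json.load(open(sys.argv[1]))['status'])" "$envelope")
    if [[ "$status" != "success" ]]; then
      task_id=$(python3 -c "import json,sys; print(json.load(open(sys.argv[1]))['task_id'])" "$envelope")
      echo "re-dispatching $task_id"
      dispatch_task "$task_id" "specs/${task_id}.md"
    fi
  done
}
```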

### Step 6: Integration

After successful aggregation:

1. Verify outputs are compatible (no conflicting interfaces)
2. Merge in dependency order (see `using-git-worktrees`)
3. Run integration tests across all task outputs
4. Clean up worktrees

## ClawPowers Enhancement

When `~/.clawpowers/` runtime is initialized:

**Execution Registry:**

```bash
# Register dispatch batch
BATCH_ID="auth-$(date +%s)"
bash runtime/persistence/store.sh set "dispatch:${BATCH_ID}:tasks" "auth-api,auth-db,auth-tests"
bash runtime/persistence/store.sh set "dispatch:${BATCH_ID}:started_at" "$(date -u +%Y-%m-%dT%H:%M:%SZ)"

# Update per-task status as they complete
bash runtime/persistence/store.sh set "dispatch:${BATCH_ID}:auth-api:status" "success"
bash runtime/persistence/store.sh set "dispatch:${BATCH_ID}:auth-db:status" "running"

# On session interrupt: resume knows exactly which tasks to re-dispatch
bash runtime/persistence/store.sh list "dispatch:${BATCH_ID}:*:status"
```

**Load Balancing:**

Track task execution times to balance future dispatches:

```bash
bash runtime/persistence/store.sh set "task-timing:auth-api" "47"
bash runtime/persistence/store.sh set "task-timing:auth-db" "23"
```

Future dispatches group tasks to equalize total runtime across agents.
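
One way to use those timings is a greedy longest-first split: sort tasks by recorded duration, then always hand the next task to the least-loaded agent. A sketch of a hypothetical `balance_tasks` helper (it assigns agent indices; actual dispatch is up to the orchestrator):

```shell
# balance_tasks N: read "<task> <seconds>" lines on stdin and print
# "<agent_index> <task>" assignments that roughly equalize total runtime.
# Greedy longest-processing-time heuristic; hypothetical helper.
balance_tasks() {
  local n="$1" task secs best i
  local -a load
  for ((i = 0; i < n; i++)); do load[i]=0; done
  # Longest tasks first, then give each one to the least-loaded agent
  sort -k2,2 -rn | while read -r task secs; do
    best=0
    for ((i = 1; i < n; i++)); do
      ((load[i] < load[best])) && best=$i
    done
    load[best]=$((load[best] + secs))
    echo "$best $task"
  done
}
```

For example, `printf 'auth-api 47\nauth-db 23\nauth-tests 40\n' | balance_tasks 2` keeps the 47s task alone on one agent and pairs the 40s and 23s tasks on the other.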

**Failure Isolation Metrics:**

```bash
bash runtime/metrics/collector.sh record \
  --skill dispatching-parallel-agents \
  --outcome success \
  --notes "auth: 3 tasks, all succeeded, 47s wall time vs 117s sequential"
```

Tracks parallel efficiency (wall time vs. theoretical serial time) and helps tune batch sizes.

## Anti-Patterns

| Anti-Pattern | Why It Fails | Correct Approach |
|--------------|--------------|------------------|
| Dispatching tasks that share mutable state | Race conditions, data corruption | Verify isolation before dispatch |
| No output envelope format | Aggregator can't distinguish success from failure | Every task must produce structured output |
| Waiting for all tasks when partial results suffice | Slowest task blocks all results | Consider streaming aggregation for independent outputs |
| No timeout on tasks | One hung task blocks aggregation forever | Always set timeouts |
| Re-dispatching succeeded tasks on retry | Wastes time, may produce different results | Track task status, retry only failed tasks |
| No result verification after aggregation | Corrupted output passes through | Verify each task output against spec before integration |

## Integration with Other Skills

- Used by `subagent-driven-development` for task fan-out
- Requires `using-git-worktrees` for file isolation
- Outputs consumed by `verification-before-completion`