clawpowers 1.1.3 → 2.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (74)
  1. package/CHANGELOG.md +94 -0
  2. package/LICENSE +44 -0
  3. package/README.md +202 -384
  4. package/SECURITY.md +72 -0
  5. package/dist/index.d.ts +844 -0
  6. package/dist/index.js +2536 -0
  7. package/dist/index.js.map +1 -0
  8. package/package.json +52 -42
  9. package/.claude-plugin/manifest.json +0 -19
  10. package/.codex/INSTALL.md +0 -36
  11. package/.cursor-plugin/manifest.json +0 -21
  12. package/.opencode/INSTALL.md +0 -52
  13. package/ARCHITECTURE.md +0 -69
  14. package/bin/clawpowers.js +0 -625
  15. package/bin/clawpowers.sh +0 -91
  16. package/docs/demo/clawpowers-demo.cast +0 -197
  17. package/docs/demo/clawpowers-demo.gif +0 -0
  18. package/docs/launch-images/25-skills-breakdown.jpg +0 -0
  19. package/docs/launch-images/clawpowers-vs-superpowers.jpg +0 -0
  20. package/docs/launch-images/economic-code-optimization.jpg +0 -0
  21. package/docs/launch-images/native-vs-bridge-2.jpg +0 -0
  22. package/docs/launch-images/native-vs-bridge.jpg +0 -0
  23. package/docs/launch-images/post1-hero-lobster.jpg +0 -0
  24. package/docs/launch-images/post2-dashboard.jpg +0 -0
  25. package/docs/launch-images/post3-superpowers.jpg +0 -0
  26. package/docs/launch-images/post4-before-after.jpg +0 -0
  27. package/docs/launch-images/post5-install-now.jpg +0 -0
  28. package/docs/launch-images/ultimate-stack.jpg +0 -0
  29. package/docs/launch-posts.md +0 -76
  30. package/docs/quickstart-first-transaction.md +0 -204
  31. package/gemini-extension.json +0 -32
  32. package/hooks/session-start +0 -205
  33. package/hooks/session-start.cmd +0 -43
  34. package/hooks/session-start.js +0 -163
  35. package/runtime/demo/README.md +0 -78
  36. package/runtime/demo/x402-mock-server.js +0 -230
  37. package/runtime/feedback/analyze.js +0 -621
  38. package/runtime/feedback/analyze.sh +0 -546
  39. package/runtime/init.js +0 -210
  40. package/runtime/init.sh +0 -178
  41. package/runtime/metrics/collector.js +0 -361
  42. package/runtime/metrics/collector.sh +0 -308
  43. package/runtime/payments/ledger.js +0 -305
  44. package/runtime/payments/ledger.sh +0 -262
  45. package/runtime/payments/pipeline.js +0 -459
  46. package/runtime/persistence/store.js +0 -433
  47. package/runtime/persistence/store.sh +0 -303
  48. package/skill.json +0 -106
  49. package/skills/agent-bounties/SKILL.md +0 -553
  50. package/skills/agent-payments/SKILL.md +0 -479
  51. package/skills/brainstorming/SKILL.md +0 -233
  52. package/skills/content-pipeline/SKILL.md +0 -282
  53. package/skills/cross-project-knowledge/SKILL.md +0 -345
  54. package/skills/dispatching-parallel-agents/SKILL.md +0 -305
  55. package/skills/economic-code-optimization/SKILL.md +0 -265
  56. package/skills/executing-plans/SKILL.md +0 -255
  57. package/skills/finishing-a-development-branch/SKILL.md +0 -260
  58. package/skills/formal-verification-lite/SKILL.md +0 -441
  59. package/skills/learn-how-to-learn/SKILL.md +0 -235
  60. package/skills/market-intelligence/SKILL.md +0 -323
  61. package/skills/meta-skill-evolution/SKILL.md +0 -325
  62. package/skills/prospecting/SKILL.md +0 -454
  63. package/skills/receiving-code-review/SKILL.md +0 -225
  64. package/skills/requesting-code-review/SKILL.md +0 -206
  65. package/skills/security-audit/SKILL.md +0 -353
  66. package/skills/self-healing-code/SKILL.md +0 -369
  67. package/skills/subagent-driven-development/SKILL.md +0 -244
  68. package/skills/systematic-debugging/SKILL.md +0 -355
  69. package/skills/test-driven-development/SKILL.md +0 -416
  70. package/skills/using-clawpowers/SKILL.md +0 -160
  71. package/skills/using-git-worktrees/SKILL.md +0 -261
  72. package/skills/verification-before-completion/SKILL.md +0 -254
  73. package/skills/writing-plans/SKILL.md +0 -276
  74. package/skills/writing-skills/SKILL.md +0 -260

package/skills/brainstorming/SKILL.md
@@ -1,233 +0,0 @@
- ---
- name: brainstorming
- description: Structured ideation with convergence protocol. Activate when exploring solutions, designing architecture, or choosing between approaches.
- version: 1.0.0
- requires:
-   tools: []
-   runtime: false
- metrics:
-   tracks: [ideas_generated, ideas_pursued, convergence_time, decision_quality]
-   improves: [ideation_breadth, convergence_threshold, idea_linking]
- ---
-
- # Brainstorming
-
- ## When to Use
-
- Apply this skill when:
-
- - A problem has multiple valid solution approaches and you need to choose
- - You're designing architecture and haven't committed to a direction
- - Someone asks "what should we do?" or "what are our options?"
- - You've been executing in one direction and need to verify it's still the right one
- - A constraint has changed and the previous approach may no longer be optimal
- - Creative solutions are needed (not just standard patterns)
-
- **Skip when:**
- - The solution is already determined and execution is what's needed
- - There's only one viable approach given the constraints
- - You need to converge immediately — brainstorming requires divergence time
-
- **Decision tree:**
- ```
- Is the right approach known?
- ├── Yes → execute it
- └── No → Is this a known problem pattern?
-     ├── Yes → Apply known solution, validate fit with constraints
-     └── No → brainstorming ← YOU ARE HERE
- ```
-
- ## Core Methodology
-
- Brainstorming has two phases: **diverge** (generate many ideas without judgment) and **converge** (evaluate, select, refine). Never mix them — evaluating during generation kills ideas before they can combine into something better.
-
- ### Phase 1: Diverge
-
- **Rule:** No evaluation during divergence. Every idea is noted, even obviously bad ones.
-
- **Seed the space with different lenses:**
-
- 1. **First-principles lens** — "If we built this from scratch knowing only the requirements, what would we build?"
- 2. **Constraint-removal lens** — "If [constraint] didn't exist, what's the best solution? Now how do we get closer to it?"
- 3. **Analogy lens** — "How does [analogous problem domain] solve this?"
- 4. **Inversion lens** — "What would make this maximally bad? Now invert each item."
- 5. **Extreme lens** — "What's the simplest possible approach? What's the most powerful?"
- 6. **Time lens** — "What solution would we regret not choosing in 2 years?"
-
- **Target:** 6-12 distinct ideas before evaluation. If you have fewer than 6, you've stopped too early.
-
- **Divergence output format:**
-
- ```markdown
- ## Idea 1: [Name]
- [2-3 sentences: what it is, how it works, why it might be good]
- Rough feasibility: [High/Medium/Low]
-
- ## Idea 2: [Name]
- ...
- ```
-
- ### Phase 2: Rapid Pre-Filter
-
- After divergence, apply a quick filter before full evaluation:
-
- **Pre-filter criteria:**
- - Feasible within current constraints? (Hard blocker: eliminate)
- - Reversible if wrong? (Irreversible = higher bar to choose)
- - Team can execute? (Skill gap = risk, not elimination)
-
- Eliminate only ideas that fail hard feasibility. Keep everything else — your "bad" ideas might combine with your "good" ones.
-
- ### Phase 3: Evaluate Remaining Ideas
-
- For each surviving idea, score on:
-
- | Criterion | Weight | Description |
- |-----------|--------|-------------|
- | Correctness | 30% | Solves the actual problem completely |
- | Maintainability | 20% | Team can reason about and change it |
- | Performance | 15% | Meets performance requirements |
- | Reversibility | 15% | Can be undone if wrong |
- | Time to implement | 10% | Fits within timeline constraint |
- | Risk | 10% | Known unknowns and failure modes |
-
- **Scoring:** 1-5 per criterion. Weighted total = decision score.
-
- This scoring is a forcing function, not a formula. If the scores conflict with your gut, investigate the gut feeling — it may be capturing a criterion you haven't named.
-
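- To make the weighted total concrete: multiply each 1-5 score by its criterion weight and sum, then scale to 100 if you want the "pts" form used in Example 1 below. A minimal sketch with hypothetical scores for one idea:
-
- ```bash
- # Hypothetical criterion scores (1-5): correctness=4, maintainability=5,
- # performance=3, reversibility=4, time=3, risk=4
- awk 'BEGIN {
-   total = 0.30*4 + 0.20*5 + 0.15*3 + 0.15*4 + 0.10*3 + 0.10*4
-   printf "weighted score: %.2f/5 = %.0f pts\n", total, total * 20
- }'
- # prints: weighted score: 3.95/5 = 79 pts
- ```
-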
- ### Phase 4: Convergence Decision
-
- **Select the highest-scoring idea** unless:
- - The second-highest is within 10% and significantly more reversible
- - The highest-scoring idea has an unmitigated risk that could invalidate the entire effort
- - The team has strong capability gaps that make the highest-scoring idea genuinely infeasible
-
- **Convergence output:**
-
- ```markdown
- ## Decision: [Idea Name]
-
- **Rationale:** [Why this over the alternatives]
- **Key trade-off accepted:** [What we're giving up and why that's okay]
- **Reversibility:** [Can we change this later? At what cost?]
- **Risk mitigations:**
- - [Risk 1]: [Mitigation]
- - [Risk 2]: [Mitigation]
-
- ## Discarded Alternatives
- - [Idea N]: Eliminated because [specific reason]
- ```
-
- The discarded alternatives section is important — it prevents re-litigating the same options in future discussions.
-
- ### Phase 5: Spike Plan (if needed)
-
- If the winning idea has a technical unknown, plan a spike (time-boxed experiment):
-
- ```markdown
- ## Spike: Validate [unknown assumption]
-
- **Question:** [Specific question this spike answers]
- **Method:** [How to test it]
- **Time box:** [Maximum time, then decide based on results]
- **Pass criteria:** [What result confirms the approach is viable]
- **Fail criteria:** [What result means we choose the fallback]
- **Fallback:** [Idea #2 from the evaluation]
- ```
-
- Spikes that don't have a fallback are bets, not spikes.
-
- ## ClawPowers Enhancement
-
- When `~/.clawpowers/` runtime is initialized:
-
- **Cross-Session Idea Persistence:**
-
- Ideas don't disappear when the session ends:
-
- ```bash
- # Save ideas from session
- bash runtime/persistence/store.sh set "brainstorm:auth-rate-limiting:idea1" "Token bucket with Redis"
- bash runtime/persistence/store.sh set "brainstorm:auth-rate-limiting:idea2" "Fixed window with DB"
- bash runtime/persistence/store.sh set "brainstorm:auth-rate-limiting:decision" "Token bucket with Redis"
- bash runtime/persistence/store.sh set "brainstorm:auth-rate-limiting:discarded" "Fixed window: stale at window boundary"
-
- # Recall in future session
- bash runtime/persistence/store.sh list "brainstorm:auth-rate-limiting:*"
- ```
-
- This prevents re-debating decisions already made and provides context when the approach needs revisiting.
-
- **Pattern Linking:**
-
- After 10+ brainstorming sessions, `runtime/feedback/analyze.sh` identifies:
- - Which lenses generate the most pursued ideas (your most productive divergence strategies)
- - Common discarded idea reasons (helps pre-filter faster)
- - Idea-to-outcome correlation (were your decisions good?)
-
- **Idea Quality Tracking:**
-
- ```bash
- bash runtime/metrics/collector.sh record \
-   --skill brainstorming \
-   --outcome success \
-   --notes "rate-limiting: 7 ideas, 1 spike, decision in 25 min"
- ```
-
- After execution, mark whether the brainstorming decision held up:
- ```bash
- bash runtime/persistence/store.sh set "brainstorm:auth-rate-limiting:outcome" "decision_held"
- # or
- bash runtime/persistence/store.sh set "brainstorm:auth-rate-limiting:outcome" "pivoted:reason"
- ```
-
- This feeds the RSI loop — which ideas look good in brainstorming but fail in practice?
-
- ## Anti-Patterns
-
- | Anti-Pattern | Why It Fails | Correct Approach |
- |-------------|-------------|-----------------|
- | Evaluating during divergence | Kills combination ideas before they form | Strict phase separation |
- | Stopping at 2-3 ideas | First ideas are obvious; breakthroughs come later | Force 6-12 before evaluating |
- | Skipping the spike | Unknown assumption bites you mid-implementation | Spike anything technically uncertain |
- | No fallback on spike | If the spike fails, you're stuck | Always name the fallback before spiking |
- | Consensus brainstorming | Groupthink converges to the average | Diverge individually, converge together |
- | Re-opening decided questions | Litigating old decisions halts progress | Document discarded alternatives with reasons |
- | Brainstorming without constraints | Unconstrained ideas aren't implementable | State constraints at the start of divergence |
-
- ## Examples
-
- ### Example 1: Architecture Decision
-
- **Question:** How should we handle cross-service communication?
-
- **Divergence:**
- 1. REST HTTP calls (synchronous, direct)
- 2. Message queue (async, decoupled) — Kafka, RabbitMQ, Redis Streams
- 3. gRPC (typed, fast, binary protocol)
- 4. GraphQL federation
- 5. Event sourcing + event bus
- 6. Shared database (an anti-pattern, but an option)
- 7. Service mesh with mTLS (Istio)
-
- **Pre-filter:** Option 6 (shared DB) eliminated — violates service isolation. All others survive.
-
- **Evaluation scores:** REST: 72pts | Message queue: 85pts | gRPC: 78pts | Others < 70pts
-
- **Decision:** Message queue (Kafka) — highest score, fully decoupled, reversible per service.
-
- **Spike:** Can our team operate Kafka? → 2-hour spike → yes, managed Confluent resolves the ops burden.
-
- ### Example 2: Feature Design
-
- **Question:** How should users specify recurring events?
-
- **Divergence (constraint-removal):**
- 1. cron syntax (powerful, opaque to non-technical users)
- 2. Natural language parser ("every Monday at 9am")
- 3. Visual calendar picker (intuitive, limited power)
- 4. RRULE (RFC 5545 standard, complex)
- 5. Predefined presets + custom exceptions
- 6. Wizard with structured questions
-
- **Convergence:** Option 5 (presets + exceptions) — presets cover 90% of cases simply; exceptions handle the remaining 10% with full power.

package/skills/content-pipeline/SKILL.md
@@ -1,282 +0,0 @@
- ---
- name: content-pipeline
- description: Write technical content, humanize it for natural voice, format for the target platform, and publish. Activate when creating blog posts, documentation, social media content, or newsletters.
- version: 1.0.0
- requires:
-   tools: [bash, curl]
-   runtime: false
- metrics:
-   tracks: [content_pieces_published, engagement_scores, revision_cycles, publish_time]
-   improves: [humanization_quality, platform_formatting, tone_calibration]
- ---
-
- # Content Pipeline
-
- ## When to Use
-
- Apply this skill when:
-
- - Writing technical blog posts or articles
- - Creating documentation for public consumption
- - Drafting social media content (Twitter/X, LinkedIn, Hacker News)
- - Writing newsletters or announcements
- - Creating README files for public repositories
- - Producing technical tutorials or guides
-
- **Skip when:**
- - Writing internal docs (no humanization step needed)
- - Pure code comments (different register entirely)
- - Short Slack/Teams messages (too much overhead for too little output)
-
- ## Core Methodology
-
- ### Stage 1: Write (Technical Draft)
-
- Write for accuracy first, voice second. The technical draft should be:
-
- - **Complete** — all required information is present
- - **Accurate** — facts, code samples, and commands are verified
- - **Structured** — uses headers, lists, and code blocks appropriately
- - **Dense** — every sentence carries information; no filler
-
- **Technical draft goals:**
- - Code examples compile and run
- - Commands produce the described output
- - Version numbers and API names are current and accurate
- - Links work
-
- **Structure template for technical blog post:**
- ```markdown
- # [Concrete, specific title — no clickbait]
-
- ## The Problem
- [What pain does the reader have? Why does this matter?]
-
- ## The Solution
- [What you built/discovered/solved — the payoff]
-
- ## How It Works
- [Technical explanation with code examples]
-
- ## [Additional Implementation Sections]
- [Step-by-step if it's a tutorial; depth if it's an analysis]
-
- ## Conclusion
- [1-2 sentences: what the reader can do now that they couldn't before]
- ```
-
- **Structure template for documentation:**
- ```markdown
- # [Feature/Component Name]
-
- ## Overview
- [One paragraph: what this is and when to use it]
-
- ## Quick Start
- [Minimal working example — 5 lines max]
-
- ## Configuration
- [All options, with types, defaults, and descriptions]
-
- ## Examples
- [2-3 realistic use cases with full code]
-
- ## Reference
- [Complete API/parameter reference]
-
- ## Troubleshooting
- [Common errors and their solutions]
- ```
-
- ### Stage 2: Humanize
-
- The technical draft sounds like documentation. Published content must sound like a person.
-
- **The problem:** LLM-generated text has a recognizable voice: over-hedged, passive, verbose, and full of transition phrases that signal nothing.
-
- **Banned patterns (remove every instance):**
- ```
- "Delve into"
- "It's worth noting that"
- "In the realm of"
- "Let's explore"
- "Dive deep"
- "In conclusion"
- "In summary"
- "Seamlessly"
- "Leverage" (when "use" works)
- "Game-changer"
- "Groundbreaking"
- "Revolutionary"
- "Powerful" (unqualified)
- "Robust" (unqualified)
- "Ultimately"
- "Furthermore"
- "Moreover"
- "That being said"
- "At the end of the day"
- "It's important to note"
- ```
-
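- A mechanical pass catches most of these before a careful read. A minimal sketch, assuming the draft lives in a hypothetical draft.md:
-
- ```bash
- # Flag banned phrases with line numbers; exit status 1 means the draft is clean
- grep -n -i -E "delve into|worth noting|in the realm of|let's explore|dive deep|seamlessly|game-changer|groundbreaking|revolutionary|that being said|at the end of the day|important to note" draft.md
- ```
-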
- **Humanization checklist:**
- - [ ] Active voice: "The function returns X" not "X is returned by the function"
- - [ ] Specific claims: "37% faster" not "significantly faster"
- - [ ] No filler intros: Start with the substance, not "In this post, we will..."
- - [ ] Conversational where appropriate: Short sentences. Fragments when they land better.
- - [ ] Concrete examples from real use, not "imagine a world where..."
- - [ ] First person when sharing genuine perspective ("I spent 3 days debugging this")
- - [ ] No over-qualified hedging: "This may potentially help some users" → "This solves X"
-
- **Humanization transform examples:**
-
- Before:
- > "In this article, we will delve into the powerful features of the ClawPowers framework and explore how it can be leveraged to enhance your agent's capabilities in a seamless manner."
-
- After:
- > "ClawPowers gives your coding agent 20 skills. Here's how each one works and when to use it."
-
- Before:
- > "It's worth noting that the runtime layer provides significant performance improvements."
-
- After:
- > "The runtime layer cuts task time by 40% on average. Here's the data."
-
- ### Stage 3: Platform Formatting
-
- Different platforms have different requirements:
-
- **Technical blog (dev.to, Hashnode, personal blog):**
- - Length: 1500-3000 words (comprehensive guides: up to 5000)
- - Code blocks with language hints
- - Headers for navigation (H2, H3 — not H4+)
- - Images optional but useful for architecture diagrams
- - Tags: 3-5, technical and specific
-
- **Twitter/X thread:**
- - Thread format: lead tweet → detail tweets → conclusion
- - Lead: hook + value proposition in 280 chars
- - Each tweet: one idea, can stand alone
- - No jargon in the lead tweet (hook a broader audience)
- - End with CTA (link, follow, reply)
- - Example thread structure:
-   ```
-   Tweet 1: Hook (the problem or the surprising result)
-   Tweet 2-3: Setup/context
-   Tweet 4-7: The substance (one idea per tweet)
-   Tweet 8: The takeaway
-   Tweet 9: CTA + link
-   ```
-
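- The 280-char limit is easy to check mechanically before posting. A minimal sketch, assuming the draft thread sits in a hypothetical thread.txt with one tweet per line:
-
- ```bash
- # Report any tweet over the 280-character limit
- awk 'length > 280 { printf "tweet %d: %d chars\n", NR, length }' thread.txt
- ```
-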
- **LinkedIn:**
- - Length: 150-300 words (longer performs worse)
- - Line breaks every 1-3 sentences (LinkedIn's UI favors scannable text)
- - First 2 lines must hook (everything else is hidden behind "see more")
- - Professional but human tone
- - End with a question to drive comments
-
- **Hacker News (Show HN / Ask HN):**
- - Title: factual, specific, no marketing language
- - Top comment: author context, what problem it solves, technical details
- - Avoid superlatives — community is allergic to hype
- - "I built X to solve Y problem" not "Revolutionary new tool transforms..."
-
- **GitHub README:**
- - Badge line first (CI status, npm version, license)
- - 3-sentence description: what, who, why
- - Quick start must work with copy-paste
- - Architecture diagram for complex projects
- - License and contributing section at bottom
-
- **Newsletter:**
- - Subject line: specific, implies value ("How we cut our test suite from 8min to 47sec")
- - Preheader: complements subject, not a repeat
- - Opening: straight to value — no "Hey, it's [name]!"
- - Sections: use headers, keep scannable
- - CTA: one primary action, at the bottom
-
- ### Stage 4: Pre-Publish Review
-
- Before publishing:
-
- - [ ] All code samples verified (copy-paste and run)
- - [ ] All links work (see the check below)
- - [ ] No confidential information (internal URLs, customer names, private configs)
- - [ ] Humanization complete (banned phrases removed)
- - [ ] Platform format applied
- - [ ] Title is accurate and specific
- - [ ] Tags/categories are correct
-
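- The link item can be scripted. A minimal sketch, again assuming a hypothetical draft.md, that prints an HTTP status for every absolute URL found:
-
- ```bash
- # 200/301 are fine; 404 or 000 need a manual look
- grep -oE 'https?://[^") ]+' draft.md | sort -u | while read -r url; do
-   printf '%s %s\n' "$(curl -s -o /dev/null -L -w '%{http_code}' "$url")" "$url"
- done
- ```
-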
- ### Stage 5: Publish
-
- **Blog platforms (API publishing):**
-
- ```bash
- # dev.to API. Build the payload with jq (assumed available) so quotes and
- # newlines in article.md can't break the JSON, as raw shell interpolation would.
- jq -n \
-   --arg title "Your Article Title" \
-   --rawfile body article.md \
-   '{article: {title: $title, body_markdown: $body, published: true, tags: ["programming", "ai", "tools"]}}' |
- curl -X POST "https://dev.to/api/articles" \
-   -H "api-key: $DEV_TO_API_KEY" \
-   -H "Content-Type: application/json" \
-   -d @-
- ```
-
- **GitHub (documentation):**
- ```bash
- # Update docs in repo
- git add docs/new-feature.md
- git commit -m "docs: add [feature] guide"
- git push
- ```
-
- ## ClawPowers Enhancement
-
- When `~/.clawpowers/` runtime is initialized:
-
- **Publication Tracking:**
-
- ```bash
- bash runtime/persistence/store.sh set "content:clawpowers-intro:platform" "dev.to"
- bash runtime/persistence/store.sh set "content:clawpowers-intro:published_at" "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
- bash runtime/persistence/store.sh set "content:clawpowers-intro:url" "https://dev.to/..."
- ```
-
- **Engagement Tracking:**
-
- After 24-48 hours, update with engagement metrics:
- ```bash
- bash runtime/persistence/store.sh set "content:clawpowers-intro:views" "847"
- bash runtime/persistence/store.sh set "content:clawpowers-intro:reactions" "34"
- bash runtime/persistence/store.sh set "content:clawpowers-intro:comments" "7"
- ```
-
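- If the piece went to dev.to, these numbers can come from the API rather than manual entry. A sketch assuming jq, that the article is the newest on the account, and that the Forem field names below are current (verify against an actual response):
-
- ```bash
- # Newest article on the authenticated account, with its engagement counters
- curl -s -H "api-key: $DEV_TO_API_KEY" "https://dev.to/api/articles/me?per_page=1" |
-   jq '.[0] | {page_views_count, public_reactions_count, comments_count}'
- ```
-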
- **Content Performance Analysis:**
-
- `runtime/feedback/analyze.sh` identifies:
- - Best-performing title patterns
- - Optimal content length per platform
- - Highest-engagement topic areas
- - Time-of-publish correlation with reach
-
- ```bash
- bash runtime/metrics/collector.sh record \
-   --skill content-pipeline \
-   --outcome success \
-   --notes "clawpowers-intro: 1800 words, dev.to + twitter thread, published"
- ```
-
- ## Anti-Patterns
-
- | Anti-Pattern | Why It Fails | Correct Approach |
- |-------------|-------------|-----------------|
- | Publishing technical draft directly | Reads like documentation, not content | Always run humanization step |
- | Same text on all platforms | Each platform has different format requirements | Platform-specific formatting per Stage 3 |
- | Unverified code samples | Readers can't reproduce, damages credibility | Run every code sample before publishing |
- | Superlative titles ("The BEST guide to...") | Algorithms deprioritize, readers distrust | Specific, factual titles |
- | Buried lede | Readers don't reach the value | Lead with the most interesting thing |
- | Publishing without review | Errors in published content are permanent | Pre-publish checklist, always |
- | No CTA | Content doesn't drive the desired outcome | One clear CTA per piece |