clawpowers 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (42)
  1. package/.claude-plugin/manifest.json +19 -0
  2. package/.codex/INSTALL.md +36 -0
  3. package/.cursor-plugin/manifest.json +21 -0
  4. package/.opencode/INSTALL.md +52 -0
  5. package/ARCHITECTURE.md +69 -0
  6. package/README.md +381 -0
  7. package/bin/clawpowers.js +390 -0
  8. package/bin/clawpowers.sh +91 -0
  9. package/gemini-extension.json +32 -0
  10. package/hooks/session-start +205 -0
  11. package/hooks/session-start.cmd +43 -0
  12. package/hooks/session-start.js +163 -0
  13. package/package.json +54 -0
  14. package/runtime/feedback/analyze.js +621 -0
  15. package/runtime/feedback/analyze.sh +546 -0
  16. package/runtime/init.js +172 -0
  17. package/runtime/init.sh +145 -0
  18. package/runtime/metrics/collector.js +361 -0
  19. package/runtime/metrics/collector.sh +308 -0
  20. package/runtime/persistence/store.js +433 -0
  21. package/runtime/persistence/store.sh +303 -0
  22. package/skill.json +74 -0
  23. package/skills/agent-payments/SKILL.md +411 -0
  24. package/skills/brainstorming/SKILL.md +233 -0
  25. package/skills/content-pipeline/SKILL.md +282 -0
  26. package/skills/dispatching-parallel-agents/SKILL.md +305 -0
  27. package/skills/executing-plans/SKILL.md +255 -0
  28. package/skills/finishing-a-development-branch/SKILL.md +260 -0
  29. package/skills/learn-how-to-learn/SKILL.md +235 -0
  30. package/skills/market-intelligence/SKILL.md +288 -0
  31. package/skills/prospecting/SKILL.md +313 -0
  32. package/skills/receiving-code-review/SKILL.md +225 -0
  33. package/skills/requesting-code-review/SKILL.md +206 -0
  34. package/skills/security-audit/SKILL.md +308 -0
  35. package/skills/subagent-driven-development/SKILL.md +244 -0
  36. package/skills/systematic-debugging/SKILL.md +279 -0
  37. package/skills/test-driven-development/SKILL.md +299 -0
  38. package/skills/using-clawpowers/SKILL.md +137 -0
  39. package/skills/using-git-worktrees/SKILL.md +261 -0
  40. package/skills/verification-before-completion/SKILL.md +254 -0
  41. package/skills/writing-plans/SKILL.md +276 -0
  42. package/skills/writing-skills/SKILL.md +260 -0
@@ -0,0 +1,235 @@
---
name: learn-how-to-learn
description: Metacognitive learning protocol — 5-layer learning stack, 14 cognitive failure modes, confidence calibration, and common sense validation. Activate when approaching an unfamiliar domain, debugging a persistent misconception, or teaching a concept.
version: 1.0.0
requires:
  tools: []
  runtime: false
metrics:
  tracks: [concepts_learned, misconceptions_corrected, confidence_calibration_error, retention_rate]
  improves: [layer_selection, anti_pattern_detection_speed, confidence_threshold]
---

# Learn How to Learn

## When to Use

Apply this skill when:

- Encountering a domain, library, or concept for the first time
- A debugging session reveals a fundamental misunderstanding
- You've made the same type of error 3+ times
- You're about to teach or explain something to someone else
- Your confidence in a concept doesn't match your actual accuracy
- You've been stuck on the same problem for more than 30 minutes

**Skip when:**
- You genuinely know the domain well (Dunning-Kruger inverse — don't slow down real expertise)
- The task requires doing, not understanding (sometimes you learn by doing)
- The knowledge gap is trivial (syntax error, quick API lookup)

## Core Methodology: The 5-Layer Learning Stack

Learning has 5 layers. Most people stop at Layer 2 and wonder why they can't apply knowledge reliably. Work all 5 layers for durable understanding.

### Layer 1: Vocabulary (What are we talking about?)

Before anything else, establish precise definitions for key terms. Imprecise vocabulary causes compounding confusion — you can't think clearly about something you can't name accurately.

**Protocol:**
1. List the 5-10 most important terms in the domain
2. Find the authoritative definition (official docs, original paper, RFC)
3. In your own words: "A [X] is a [category] that [defining characteristic] and differs from [similar concept] because [key distinction]"
4. Find one concrete example and one counterexample

**Example — learning "idempotent":**
> Authoritative: "An operation is idempotent if applying it multiple times produces the same result as applying it once."
> Own words: "Idempotent means repeating the operation is safe — running it 10 times = running it 1 time."
> Example: HTTP PUT, DELETE. Setting a flag to true.
> Counterexample: HTTP POST (creates a new resource each time). Incrementing a counter.

If you can't produce the example and counterexample, you don't have Layer 1 yet.

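The example/counterexample test can itself be run as code. A minimal sketch (the `set_flag` and `increment` helpers are hypothetical, not part of this package) showing the idempotent example next to the non-idempotent counterexample:

```python
# Layer 1 check for "idempotent": one example, one counterexample.
def set_flag(state):
    # Idempotent: applying it twice leaves the same state as applying it once.
    state["flag"] = True
    return state

def increment(state):
    # Not idempotent: each application changes the result.
    state["count"] = state.get("count", 0) + 1
    return state

assert set_flag(set_flag({})) == set_flag({})        # repeating is safe
assert increment(increment({})) != increment({})     # repeating changes the outcome
```

If the two asserts don't both hold for your candidate example and counterexample, the definition isn't nailed down yet.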
### Layer 2: Facts (What is true?)

Acquire the key facts of the domain — what exists, what properties it has, what the system guarantees.

**Protocol:**
1. Read the official documentation (not a tutorial — the docs)
2. Note facts you're confident about vs. facts you're inferring
3. Verify inferred facts with a minimal test or authoritative source
4. Distinguish guaranteed behavior from typical behavior from implementation detail

**Common Layer 2 failures:**
- Treating a blog post as authoritative (verify against official source)
- Assuming behavior that's actually version-specific
- Confusing "common practice" with "guaranteed by spec"

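A sketch of step 3, verifying an inferred fact with a minimal test instead of assuming it (both facts here are documented guarantees in modern Python, which is exactly what the test confirms):

```python
# Promoting inferred facts to verified ones with minimal tests.

# Fact 1: dict preserves insertion order — guaranteed by the language since
# Python 3.7 (in 3.6 it was only a CPython implementation detail).
d = {}
for key in ["b", "a", "c"]:
    d[key] = True
assert list(d) == ["b", "a", "c"]

# Fact 2: sorted() is stable — guaranteed by the docs, not merely typical —
# so items with equal keys keep their original relative order.
pairs = [("x", 2), ("y", 1), ("z", 2)]
assert sorted(pairs, key=lambda p: p[1]) == [("y", 1), ("x", 2), ("z", 2)]
```

The version note is the point: the same one-line test distinguishes "guaranteed" from "happens to work on my interpreter."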
### Layer 3: Mental Model (How does it work?)

Build a conceptual model that explains WHY facts are true, not just WHAT they are.

**Protocol:**
1. Draw or describe the internal mechanism (what components, how they interact)
2. Use analogies — what familiar system behaves similarly?
3. Predict behavior from the model: "Given my model, if X, then Y should happen"
4. Test predictions with experiments: does Y actually happen?
5. When prediction fails, update the model (not the experiment result)

**Mental model quality test:**
- Can you predict the behavior of a new, unseen scenario?
- Can you explain WHY an error occurs from first principles?
- Can you explain it in plain English to someone with no domain knowledge?

If you can't predict, diagnose, and explain — you have facts but not a model.

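The predict-then-test loop can be made mechanical. A sketch (the `check_prediction` helper is illustrative, not part of the runtime) that forces the prediction to be stated before the experiment runs:

```python
# Layer 3 in code: state the prediction first, then let the result judge the model.
def check_prediction(claim, predicted, experiment):
    actual = experiment()
    status = "CONFIRMED" if actual == predicted else "MODEL WRONG"
    return f"{status}: {claim} (predicted {predicted!r}, got {actual!r})"

# A common wrong model: "out-of-range slicing raises, like out-of-range indexing."
verdict = check_prediction("out-of-range slice raises", "error",
                           lambda: [1, 2, 3][10:20])
assert verdict.startswith("MODEL WRONG")  # slicing quietly returns []
# The prediction failed, so the model gets updated — not the experiment result.
```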
### Layer 4: Application (Can I use it?)

Mental models only prove themselves under actual use. Deliberate practice builds the pattern recognition that makes expertise feel intuitive.

**Protocol:**
1. Apply the concept in a controlled context (tutorial problem, toy example)
2. Deliberately seek the edge cases that break naive applications
3. Identify the decision criteria: "When should I use X vs Y?"
4. Make mistakes intentionally — apply the wrong thing and observe what breaks
5. Work from the 80% of common cases toward the 20% of edge cases

**Application anti-patterns:**
- Only doing examples from tutorials (they're designed to work)
- Avoiding the edge cases (that's where the real knowledge is)
- Moving to the next concept before achieving reliable application of this one

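What deliberate edge-case probing looks like in practice, using `str.split` as a stand-in for any API you're learning (tutorials show the first line; the knowledge lives in the rest):

```python
# Happy path — what every tutorial shows.
assert "a,b".split(",") == ["a", "b"]

# Edge cases — what breaks naive applications.
assert "".split(",") == [""]                 # empty string: one empty field
assert "".split() == []                      # no-argument split: no fields at all
assert "a,,b".split(",") == ["a", "", "b"]   # empty fields are preserved
assert "  a  b  ".split() == ["a", "b"]      # whitespace runs collapsed, ends trimmed
```

The second and third lines behave differently for the same input, which is exactly the kind of distinction Layer 4 exists to surface.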
### Layer 5: Teaching (Do I really understand it?)

Teaching is the highest-fidelity test of understanding. Gaps that survive Layers 1-4 get exposed when you try to explain.

**Protocol:**
1. Explain the concept to someone unfamiliar with it (or write the explanation)
2. Identify where your explanation becomes hand-wavy or uses circular definitions
3. Where you hand-wave = where your model has holes → return to Layer 3

**The Feynman Technique:**
> "If you can't explain it simply, you don't understand it well enough."

When your explanation requires specialized vocabulary the listener doesn't have — translate. If you can't translate, you have vocabulary without understanding.

## The 14 Cognitive Failure Modes

These are the specific ways learning goes wrong. Recognize them in yourself:

1. **Premature closure** — Stopping when it "feels" understood rather than when it's verified
2. **Confirmation bias** — Seeking examples that confirm your mental model, ignoring contradictions
3. **Vocabulary mimicry** — Using terms correctly in sentences without understanding the concepts
4. **Tutorial tunnel** — Only learning happy paths, never the failure modes
5. **Abstraction aversion** — Refusing to engage with theory, only "practical" examples (theory predicts edge cases)
6. **Authority substitution** — Trusting a source because it's popular, not because it's accurate
7. **Analogy overextension** — Applying an analogy beyond where it holds, making wrong predictions
8. **Example generalization** — Concluding from 1-2 examples that a rule is universal
9. **Dunning-Kruger stall** — Maximum confidence at minimum knowledge; learning feels complete before it is
10. **Context collapse** — Applying knowledge correct in one context to a different context where it's wrong
11. **False equivalence** — "I know Python, so I know JavaScript" — mapping one domain's rules onto another
12. **Memorization substitution** — Memorizing procedures without understanding why they work
13. **Breadth inflation** — Treating breadth (knowing many things shallowly) as equivalent to depth
14. **Curse of knowledge** — Once you understand something, it's hard to remember what confusion felt like (prevents effective teaching)

## Confidence Calibration

Uncalibrated confidence is more dangerous than low confidence — you don't know what you don't know.

**Calibration protocol:**
1. Before testing your knowledge: estimate your confidence (0-100%)
2. Test your knowledge (take a quiz, apply the concept, explain it)
3. Measure actual accuracy
4. The gap between confidence and accuracy is your calibration error

**Calibration targets:**
- If you say 90% confident: you should be right 90% of the time
- Consistently overconfident (90% claimed → 60% accuracy): slow down, go deeper
- Consistently underconfident (40% claimed → 80% accuracy): apply more boldly

**Calibration exercise for technical concepts:**
```
"I'm [X]% sure that [specific claim about the concept]"
→ Test the claim
→ Was it correct?
→ Record: claimed confidence vs. actual result
→ Trend over N claims: are you calibrated?
```

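The exercise above reduces to a small ledger. A sketch (illustrative helper, not part of the runtime) that records (claimed confidence, was correct) pairs and reports the gap:

```python
# Minimal calibration ledger: positive error = overconfident,
# negative error = underconfident, near zero = well calibrated.
def calibration_error(records):
    """records: list of (claimed_confidence_percent, was_correct) pairs.
    Returns mean claimed confidence minus actual accuracy, in points."""
    claimed = sum(c for c, _ in records) / len(records)
    actual = 100 * sum(1 for _, ok in records if ok) / len(records)
    return claimed - actual

records = [(90, True), (90, False), (80, True), (70, True), (60, False)]
# mean claimed = 78%, actual accuracy = 60% -> overconfident by 18 points
assert round(calibration_error(records)) == 18
```

Trend the number over N claims per domain; a persistently positive value is the "slow down, go deeper" signal from the targets above.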
## Common Sense Checks

Before applying learned knowledge in production:

**The 5 Common Sense Gates:**
1. **Sanity check** — Does this result make intuitive sense? If not, verify before acting.
2. **Order of magnitude** — Is the magnitude of the output in a reasonable range?
3. **Edge case check** — What happens with empty input? Null? Maximum value? Zero?
4. **Error behavior** — What fails gracefully vs. catastrophically?
5. **Reversibility** — Can you undo this action if the knowledge was wrong?

**Applied to new code:**
```python
# Before using a new library function, ask:
result = library.process(data)

# Sanity check: is this shape/type what I expected?
assert isinstance(result, expected_type), f"Got {type(result)}, expected {expected_type}"

# Order of magnitude: is this result reasonable?
assert 0 < len(result) < MAX_EXPECTED_SIZE, f"Unexpected size: {len(result)}"

# Edge case: what if data is empty?
empty_result = library.process([])
# → Does it raise? Return empty? Return None?
```

## ClawPowers Enhancement

When the `~/.clawpowers/` runtime is initialized:

**Learning State Persistence:**

Track what you've learned, at what layer, with what confidence:

```bash
bash runtime/persistence/store.sh set "learning:jwt:layer" "3"   # Mental model stage
bash runtime/persistence/store.sh set "learning:jwt:confidence" "75"
bash runtime/persistence/store.sh set "learning:jwt:last_tested" "$(date +%Y-%m-%d)"
bash runtime/persistence/store.sh set "learning:jwt:misconceptions" "HS256 is not weaker than RS256 by default — depends on key management"
```

**Misconception Log:**

A persistent record of corrected misconceptions prevents rediscovery:
```bash
bash runtime/persistence/store.sh set "misconception:python-copy" \
  "list.copy() is shallow — nested objects are shared, not copied"
bash runtime/persistence/store.sh set "misconception:js-closure" \
  "Closure captures variable, not value — var in loop shares one variable"
```

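The stored `python-copy` entry is directly demonstrable, which is the point of logging it as a one-line claim:

```python
import copy

# list.copy() is shallow: the new list shares its nested objects.
a = [[1]]
shallow = a.copy()
shallow[0].append(2)      # mutates the inner list both lists share
assert a == [[1, 2]]      # the original changed too

# copy.deepcopy decouples the nested objects.
deep = copy.deepcopy(a)
deep[0].append(3)
assert a == [[1, 2]]      # the original is untouched this time
```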
**Confidence Calibration History:**

```bash
bash runtime/metrics/collector.sh record \
  --skill learn-how-to-learn \
  --outcome success \
  --notes "jwt: 5-layer complete, calibration 82% claimed / 78% actual — well calibrated"
```

After N records, `runtime/feedback/analyze.sh` identifies:
- Systematic overconfidence domains (where you reliably think you know more than you do)
- Fast-learning domains (fewer iterations to reach Layer 5)
- Misconception clusters (failure modes that tend to co-occur)

## Anti-Patterns

| Anti-Pattern | Why It Fails | Correct Approach |
|--------------|--------------|------------------|
| Stopping at Layer 1 (vocabulary) | Can use terms, can't apply concepts | Work all 5 layers |
| Skipping Layer 3 (mental model) | Can't predict behavior of unseen scenarios | Build the mechanism model first |
| Treating tutorials as authoritative | Tutorials skip edge cases by design | Use tutorials to start; read the docs to finish |
| High confidence without a calibration check | Makes confident wrong decisions | Explicit confidence estimation + testing |
| Learning breadth over depth | Surface knowledge fails under real conditions | Depth first in the critical domain |
| Never teaching / explaining | Gaps survive unchallenged | Explain it to verify it |
| Ignoring contradicting evidence | Mental model stays broken | Update the model when a prediction fails |
@@ -0,0 +1,288 @@
---
name: market-intelligence
description: Competitive analysis, trend detection, and opportunity scoring through systematic web research, source validation, and insight extraction. Activate when you need to understand a market, competitor, or emerging technology.
version: 1.0.0
requires:
  tools: [bash, curl]
  runtime: false
metrics:
  tracks: [signals_collected, opportunities_identified, accuracy_rate, decision_impact]
  improves: [source_selection, signal_filtering, opportunity_scoring]
---

# Market Intelligence

## When to Use

Apply this skill when:

- Evaluating whether to build a feature vs. buy/use an existing solution
- Competitive analysis before a launch or product decision
- Tracking emerging technologies that could impact your stack
- Identifying market opportunities in adjacent spaces
- Evaluating a new tool, framework, or vendor
- Pre-sale research on a prospect's technology environment

**Skip when:**
- You need operational data (use actual analytics, not research)
- The decision is already made (research → decision, not post-hoc justification)
- You need expert analysis (research surfaces raw intelligence; you still interpret it)

## Core Methodology

### Phase 1: Define Intelligence Requirements

Before researching, be specific about what you need to know:

```markdown
## Intelligence Brief

**Decision to be informed:** [What decision does this research support?]
**Key questions:** (3-5 maximum)
1. [Specific question that research can answer]
2. [Specific question that research can answer]

**Scope:**
- Industry/domain: [specific]
- Geography: [specific or global]
- Time horizon: [emerging now, 1yr, 3yr, 5yr]

**Sources to prioritize:**
- [ ] GitHub activity (stars, forks, contributors, issue velocity)
- [ ] HN / reddit / dev communities (practitioner opinions)
- [ ] Official announcements and changelogs
- [ ] Job postings (proxy for investment direction)
- [ ] Academic / research papers (early signals)
- [ ] Analyst reports (if accessible)

**What would change the decision:** [Pre-commit to decision criteria]
```

This prevents the most common intelligence failure: collecting interesting facts that don't answer the actual question.

### Phase 2: Competitor Mapping

For competitive analysis, build a structured competitor map:

**Tier Classification:**
- **Tier 1 (Direct):** Same problem, same target customer, similar approach
- **Tier 2 (Adjacent):** Same problem, different approach OR different customer segment
- **Tier 3 (Emerging):** Different problem today, but could expand into yours

**Data collection per competitor:**
```markdown
## [Competitor Name]

**Tier:** [1/2/3]
**Founded / launched:** [year]
**Team size (estimate):** [from LinkedIn/job postings]

### Product
- **Core offering:** [what problem it solves]
- **Technical approach:** [how it solves it]
- **Key differentiators:** [what they emphasize]
- **Limitations:** [what users complain about — from reviews, GitHub issues, reddit]

### Traction
- GitHub: [stars / forks / contributors / issue velocity]
- npm/PyPI downloads: [weekly download count if available]
- App store ratings: [if applicable]
- Pricing: [public pricing if available]

### Strategy signals
- Recent announcements: [product releases, partnerships, funding]
- Job postings: [what roles? hints at investment direction]
- Content: [what are they publishing? what's the message?]

### Comparison to us
**Advantages over us:** [be honest]
**Our advantages:** [be honest]
**Where they're headed:** [based on signals]
```

### Phase 3: Trend Detection

Identify trends before they become obvious:

**Signal Sources and How to Read Them:**

**GitHub activity:**
```bash
# Stars over time (use the GitHub API)
curl -s "https://api.github.com/repos/OWNER/REPO" | \
  python3 -c "import json,sys; d=json.load(sys.stdin); print(f'Stars: {d[\"stargazers_count\"]}, Forks: {d[\"forks_count\"]}, Open issues: {d[\"open_issues_count\"]}')"

# Repository count for a topic (total_count; sorted by stars to surface the leaders)
curl -s "https://api.github.com/search/repositories?q=topic:YOUR_TOPIC&sort=stars&order=desc&per_page=10" | \
  python3 -c "import json,sys; d=json.load(sys.stdin); print(f'Repos: {d[\"total_count\"]}')"
```

**npm/PyPI download trends:**
```bash
# npm weekly downloads
curl -s "https://api.npmjs.org/downloads/point/last-week/PACKAGE_NAME" | \
  python3 -c "import json,sys; d=json.load(sys.stdin); print(f'Downloads: {d[\"downloads\"]}')"

# Compare week-over-week
curl -s "https://api.npmjs.org/downloads/range/2026-02-01:2026-03-21/PACKAGE_NAME"
```

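The range endpoint returns one entry per day, shaped as `{"downloads": [{"day": "YYYY-MM-DD", "downloads": N}, ...]}`, so the week-over-week comparison is a pair of 7-day sums. A sketch (the `week_over_week` helper is illustrative):

```python
# Week-over-week change from daily download counts, e.g.
# counts = [d["downloads"] for d in response["downloads"]]
def week_over_week(daily_counts):
    """daily_counts: chronological daily download numbers, >= 14 days.
    Returns (last_week_total, prior_week_total, percent_change)."""
    last = sum(daily_counts[-7:])
    prior = sum(daily_counts[-14:-7])
    pct = 100 * (last - prior) / prior if prior else float("inf")
    return last, prior, pct

# Prior week flat at 100/day, last week at 120/day -> +20%.
last, prior, pct = week_over_week([100] * 7 + [120] * 7)
assert (last, prior, round(pct)) == (840, 700, 20)
```

Growth rate matters more than the absolute number here, for the same reason given in the source-validation table below: raw downloads include bots and CI.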
**Job posting proxy analysis:**
```
High-signal job posting queries (for LinkedIn/Indeed/Hacker News Who's Hiring):
- "Senior [technology]" → investment in scale, not exploration
- "[technology] engineer" count over 6 months → adoption curve position
- Startups posting [technology] roles → early adopter signal
```

**Hacker News trend analysis:**
```bash
# Search HN for discussion volume (last 30 days via a created_at_i filter)
SINCE=$(python3 -c "import time; print(int(time.time()) - 30*86400)")
curl -s "https://hn.algolia.com/api/v1/search?query=YOUR_TOPIC&numericFilters=created_at_i>$SINCE&hitsPerPage=100" | \
  python3 -c "import json,sys; d=json.load(sys.stdin); print(f'HN posts (30d): {d[\"nbHits\"]}')"
```

### Phase 4: Opportunity Scoring

Score identified opportunities on five dimensions:

```markdown
## Opportunity: [Name]

### Scoring (1-5 each)

| Dimension | Score | Evidence |
|-----------|-------|----------|
| **Market size** | [1-5] | [number of companies/users affected] |
| **Pain severity** | [1-5] | [how bad is the problem today?] |
| **Competitive gap** | [1-5] | [how poorly do existing solutions address this? higher = bigger gap] |
| **Feasibility** | [1-5] | [can we address this given current resources?] |
| **Strategic fit** | [1-5] | [alignment with current direction?] |

**Total score:** [X/25]

### Evidence
- [Source 1]: [what it signals]
- [Source 2]: [what it signals]

### Risk factors
- [Risk]: [probability × impact]

### Recommended action
- Score > 18: Prioritize for immediate investigation
- Score 12-18: Monitor, consider if resources available
- Score < 12: Log, revisit in 6 months
```

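The rubric and its thresholds translate directly into code. A sketch (dimension names come from the template above; the function itself is illustrative, not part of the runtime):

```python
# Opportunity scoring per the template: five 1-5 dimensions, total out of 25.
DIMENSIONS = ["market_size", "pain_severity", "competitive_gap",
              "feasibility", "strategic_fit"]

def score_opportunity(scores):
    """scores: dict mapping each dimension to 1..5.
    Returns (total, recommended action) per the template thresholds."""
    assert set(scores) == set(DIMENSIONS), "score every dimension exactly once"
    assert all(1 <= v <= 5 for v in scores.values()), "scores are 1-5"
    total = sum(scores.values())
    if total > 18:
        action = "Prioritize for immediate investigation"
    elif total >= 12:
        action = "Monitor, consider if resources available"
    else:
        action = "Log, revisit in 6 months"
    return total, action

total, action = score_opportunity({"market_size": 5, "pain_severity": 4,
    "competitive_gap": 4, "feasibility": 4, "strategic_fit": 4})
assert total == 21 and action.startswith("Prioritize")
```

Note the boundary choice baked into the thresholds: a score of exactly 18 lands in "Monitor," not "Prioritize."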
### Phase 5: Intelligence Report

```markdown
# Market Intelligence Report

**Date:** [timestamp]
**Analyst:** [agent]
**Subject:** [specific topic]

## Key Findings

1. **[Finding 1 — most important]**
   [Evidence + implication]

2. **[Finding 2]**
   [Evidence + implication]

3. **[Finding 3]**
   [Evidence + implication]

## Competitive Landscape

[Competitor map summary + tier table]

## Emerging Trends

| Trend | Confidence | Time horizon | Evidence |
|-------|------------|--------------|----------|
| [Trend 1] | High | 6-12 months | [source] |
| [Trend 2] | Medium | 12-18 months | [source] |

## Opportunities

| Opportunity | Score | Recommended Action |
|-------------|-------|--------------------|
| [Opp 1] | 21/25 | Investigate immediately |
| [Opp 2] | 14/25 | Monitor |

## Recommendations

1. [Concrete recommendation based on research]
2. [Concrete recommendation]

## Sources

[List all sources with access dates]
```

## ClawPowers Enhancement

When the `~/.clawpowers/` runtime is initialized:

**Persistent Competitive Tracking:**

```bash
# Store competitor snapshots over time
bash runtime/persistence/store.sh set "competitor:superpowers:stars:2026-03-21" "103000"
bash runtime/persistence/store.sh set "competitor:superpowers:npm_weekly:2026-03-21" "12400"

# Next week: compare
bash runtime/persistence/store.sh get "competitor:superpowers:stars:2026-03-14"
# → 102100 last week → 900 new stars (≈0.88% week-over-week growth)
```

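The snapshot comparison is simple arithmetic: growth relative to the prior week's count. A sketch (hypothetical helper, using the snapshot figures from the example above):

```python
# Weekly growth from two stored snapshots.
def weekly_growth(prev_count, curr_count):
    """Percent growth relative to the prior week's count."""
    return 100 * (curr_count - prev_count) / prev_count

g = weekly_growth(102100, 103000)   # last week's vs. this week's stars
assert 0.8 < g < 0.9                # ~0.88% week-over-week
```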
**Trend Tracking Over Time:**

`runtime/feedback/analyze.sh` computes:
- Week-over-week GitHub star velocity per tracked competitor
- npm download trends
- HN mention frequency trends

**Intelligence Accuracy Tracking:**

After predictions mature, record accuracy:
```bash
bash runtime/persistence/store.sh set "prediction:x402-adoption:made" "2026-03"
bash runtime/persistence/store.sh set "prediction:x402-adoption:forecast" "mainstream within 18 months"
# Later:
bash runtime/persistence/store.sh set "prediction:x402-adoption:actual" "2027-06:mainstream"
bash runtime/persistence/store.sh set "prediction:x402-adoption:accuracy" "hit"
```

```bash
bash runtime/metrics/collector.sh record \
  --skill market-intelligence \
  --outcome success \
  --notes "skills-framework: 3 competitors mapped, 2 opportunities scored >18, 4 trends identified"
```

## Source Validation

Not all sources are equal. Apply these quality checks:

| Source type | Strength | Weakness | How to use |
|-------------|----------|----------|------------|
| GitHub stats | Objective, real-time | Stars ≠ production use | Trend over time, not absolute |
| Download numbers | Objective, real | Includes bots/CI | Growth rate more than absolute |
| Community reviews (Reddit, HN) | Unfiltered practitioner opinion | Selection bias (frustrated users are more vocal) | Cluster analysis, not individual comments |
| Job postings | Investment proxy | Lags reality by 6-12 months | Directional signal only |
| Vendor blog posts | First-hand | Marketing material | Verify claims independently |
| Academic papers | Early signal | Abstracted from production reality | Note publication date, verify in practice |

## Anti-Patterns

| Anti-Pattern | Why It Fails | Correct Approach |
|--------------|--------------|------------------|
| Collecting facts without questions | Interesting but not actionable | Define intelligence requirements first |
| One-time snapshot | Markets move; snapshots go stale | Build tracking over time |
| Trusting vendor-written comparisons | Self-serving bias | Primary-source research + practitioner forums |
| Correlation as causation | "Company X grew as they adopted Y" ≠ Y caused growth | Control for confounders |
| Ignoring negative signals | Confirmation bias toward preferred outcome | Actively seek contradicting evidence |
| No scoring system | Opportunities can't be prioritized | Explicit multi-dimension scoring |