clawpowers 1.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (42)
  1. package/.claude-plugin/manifest.json +19 -0
  2. package/.codex/INSTALL.md +36 -0
  3. package/.cursor-plugin/manifest.json +21 -0
  4. package/.opencode/INSTALL.md +52 -0
  5. package/ARCHITECTURE.md +69 -0
  6. package/README.md +381 -0
  7. package/bin/clawpowers.js +390 -0
  8. package/bin/clawpowers.sh +91 -0
  9. package/gemini-extension.json +32 -0
  10. package/hooks/session-start +205 -0
  11. package/hooks/session-start.cmd +43 -0
  12. package/hooks/session-start.js +163 -0
  13. package/package.json +54 -0
  14. package/runtime/feedback/analyze.js +621 -0
  15. package/runtime/feedback/analyze.sh +546 -0
  16. package/runtime/init.js +172 -0
  17. package/runtime/init.sh +145 -0
  18. package/runtime/metrics/collector.js +361 -0
  19. package/runtime/metrics/collector.sh +308 -0
  20. package/runtime/persistence/store.js +433 -0
  21. package/runtime/persistence/store.sh +303 -0
  22. package/skill.json +74 -0
  23. package/skills/agent-payments/SKILL.md +411 -0
  24. package/skills/brainstorming/SKILL.md +233 -0
  25. package/skills/content-pipeline/SKILL.md +282 -0
  26. package/skills/dispatching-parallel-agents/SKILL.md +305 -0
  27. package/skills/executing-plans/SKILL.md +255 -0
  28. package/skills/finishing-a-development-branch/SKILL.md +260 -0
  29. package/skills/learn-how-to-learn/SKILL.md +235 -0
  30. package/skills/market-intelligence/SKILL.md +288 -0
  31. package/skills/prospecting/SKILL.md +313 -0
  32. package/skills/receiving-code-review/SKILL.md +225 -0
  33. package/skills/requesting-code-review/SKILL.md +206 -0
  34. package/skills/security-audit/SKILL.md +308 -0
  35. package/skills/subagent-driven-development/SKILL.md +244 -0
  36. package/skills/systematic-debugging/SKILL.md +279 -0
  37. package/skills/test-driven-development/SKILL.md +299 -0
  38. package/skills/using-clawpowers/SKILL.md +137 -0
  39. package/skills/using-git-worktrees/SKILL.md +261 -0
  40. package/skills/verification-before-completion/SKILL.md +254 -0
  41. package/skills/writing-plans/SKILL.md +276 -0
  42. package/skills/writing-skills/SKILL.md +260 -0
package/skills/prospecting/SKILL.md
@@ -0,0 +1,313 @@
---
name: prospecting
description: Lead generation workflow — define ICP, find companies, enrich contacts, and prepare outreach. Activate when you need to build a qualified prospect list.
version: 1.0.0
requires:
  tools: [bash, curl]
  runtime: true
metrics:
  tracks: [prospects_found, qualification_rate, enrichment_rate, outreach_response_rate]
  improves: [icp_precision, search_query_effectiveness, enrichment_source_selection]
---

# Prospecting

## When to Use

Apply this skill when:

- Building a list of potential customers for a product or service
- Researching companies before a sales call or outreach
- Finding decision-makers at target companies
- Validating that a market segment is large enough to pursue
- Preparing personalized outreach at scale

**Skip when:**
- You already have qualified prospects — move to outreach preparation
- The ICP (Ideal Customer Profile) isn't defined — define it first
- You need > 500 prospects (use dedicated sales intelligence platforms)

**Relationship to market-intelligence:**
```
market-intelligence: understand the market (TAM, trends, positioning)
prospecting: find specific companies and contacts within the market
```

## Core Methodology

### Phase 1: ICP Definition

Every prospecting campaign starts with a precise Ideal Customer Profile. Vague ICPs produce low-quality lists.

**ICP template:**

```markdown
## Ideal Customer Profile

### Company Attributes
- **Industry:** [specific vertical(s) — e.g., "fintech SaaS" not "software"]
- **Company size:** [employees: 50-500, or ARR: $5M-$50M]
- **Stage:** [Series A/B/C, bootstrapped, public — funding is a proxy for budget authority]
- **Tech stack:** [if relevant — e.g., uses Python, AWS, GitHub Actions]
- **Geography:** [US/EU/APAC — where procurement decisions are made]
- **Signals:** [hiring patterns, job postings, tech adoption signals]

### Contact Attributes
- **Title:** [VP Engineering, Director of DevOps, CTO — who FEELS the pain]
- **Department:** [Engineering, Product, Security]
- **Seniority:** [IC, Manager, Director, VP, C-Suite — who has BUDGET]
- **Pain signal:** [recent role change, company event, content they've published]

### Negative ICP (Disqualifiers)
- **Company:** [too small, regulated industry if compliance blocks you, bootstrapped with no budget]
- **Contact:** [non-technical roles, no budget authority, different pain set]

### Qualification Criteria (rank in this order)
1. [Must-have criterion — no exceptions]
2. [Must-have criterion — no exceptions]
3. [Strong signal — weight heavily]
4. [Good signal — weight moderately]
```

### Phase 2: Company Discovery

**Search Strategies:**

**GitHub-based discovery (for developer tools):**
```bash
# Find companies using a specific technology
curl -s "https://api.github.com/search/repositories?q=TECHNOLOGY+in:readme&sort=stars&per_page=30" | \
python3 -c "
import json, sys
data = json.load(sys.stdin)
for item in data['items']:
    if item['owner']['type'] == 'Organization':
        print(f\"{item['owner']['login']}: {item['full_name']} ({item['stargazers_count']} stars)\")
"
```

**LinkedIn company search:**

Build search queries using Boolean operators:
```
"(VP Engineering OR Director Engineering OR CTO) AND (Python OR TypeScript) AND (Series B OR Series C)"
Location: United States
Industry: Computer Software
Company size: 51-500 employees
```
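
As the ICP grows, hand-writing these Boolean strings gets error-prone; they can be assembled from the ICP lists instead (a hypothetical helper, not part of the package):

```python
# Hypothetical helper: build a LinkedIn-style Boolean query from ICP lists.
def boolean_query(*term_groups):
    # Each group becomes "(a OR b OR c)"; groups are joined with AND.
    groups = ["(" + " OR ".join(terms) + ")" for terms in term_groups]
    return " AND ".join(groups)

query = boolean_query(
    ["VP Engineering", "Director Engineering", "CTO"],
    ["Python", "TypeScript"],
    ["Series B", "Series C"],
)
print(query)
# (VP Engineering OR Director Engineering OR CTO) AND (Python OR TypeScript) AND (Series B OR Series C)
```

Regenerating the query from the ICP file keeps the search criteria and the profile in sync as either changes.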

**Exa/Perplexity for intent signals:**
```bash
# Find companies recently discussing a pain point you solve
curl -X POST "https://api.exa.ai/search" \
  -H "x-api-key: $EXA_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "query": "engineering team struggling with test coverage deployment velocity",
    "type": "neural",
    "numResults": 10,
    "includeDomains": ["linkedin.com", "dev.to", "medium.com"],
    "startPublishedDate": "2026-01-01"
  }'
```

**Apollo.io for direct prospecting:**
```bash
# Search for people matching ICP
curl -X POST "https://api.apollo.io/v1/mixed_people/search" \
  -H "Content-Type: application/json" \
  -H "x-api-key: $APOLLO_API_KEY" \
  -d '{
    "person_titles": ["VP Engineering", "Director of Engineering", "CTO"],
    "organization_num_employees_ranges": ["51,500"],
    "page": 1,
    "per_page": 25
  }'
```

### Phase 3: Qualification

For each company found, score it against your ICP:

```markdown
## Company: [Name]

**URL:** [company.com]
**Size:** [employees]
**Stage:** [funding round]
**Tech signals:** [GitHub org, job postings for relevant tech]

### ICP Score
| Criterion | Met? | Evidence |
|-----------|------|----------|
| [Must-have 1] | ✅/❌ | [source] |
| [Must-have 2] | ✅/❌ | [source] |
| [Signal 1] | ✅/❌/? | [source] |
| [Signal 2] | ✅/❌/? | [source] |

**Status:** Qualified / Disqualified / Research needed
**Reason for disqualification:** [if applicable]
```

**Qualification protocol:**
- Must-have criterion fail → immediate disqualification, no enrichment
- 2+ strong signals + no disqualifiers → high-priority prospect
- Unclear signals → brief additional research, then decide

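The protocol above is mechanical enough to sketch in code (the dict shape and criteria names here are illustrative, not part of the skill):

```python
# Sketch of the qualification protocol; the dict shape is illustrative.
def qualify(company):
    # Any failed must-have -> immediate disqualification, no enrichment.
    if not all(company["must_haves"].values()):
        return "disqualified"
    # 2+ strong signals -> high-priority (disqualifiers assumed screened already).
    if sum(company["strong_signals"].values()) >= 2:
        return "high-priority"
    # Otherwise: brief additional research, then decide.
    return "research-needed"

acme = {
    "must_haves": {"51-500 employees": True, "fintech SaaS": True},
    "strong_signals": {"hiring 3+ engineers": True, "Python in job postings": True},
}
print(qualify(acme))  # high-priority
```

Encoding the protocol this way makes the disqualify-before-enrich ordering explicit: enrichment never runs for a company that fails a must-have.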
### Phase 4: Contact Enrichment

For qualified companies, find the right contacts:

**LinkedIn-based enrichment:**

Use the company's leadership page and LinkedIn search to find decision-makers:
```
company.com/about/leadership → C-suite
LinkedIn search: [Company Name] + [Target Title]
```

**Hunter.io for email discovery:**
```bash
# Find email pattern for a domain
curl -s "https://api.hunter.io/v2/domain-search?domain=company.com&api_key=$HUNTER_API_KEY" | \
python3 -c "
import json, sys
data = json.load(sys.stdin)
print(f'Email pattern: {data[\"data\"][\"pattern\"]}')
for email in data['data']['emails'][:5]:
    print(f'{email[\"value\"]}: {email[\"first_name\"]} {email[\"last_name\"]} ({email[\"position\"]})')
"

# Verify a specific email
curl -s "https://api.hunter.io/v2/email-verifier?email=john@company.com&api_key=$HUNTER_API_KEY"
```

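The returned `pattern` field (e.g. `{first}.{last}`) can be expanded locally into candidate addresses for contacts the API didn't list. A sketch, assuming `{first}`/`{f}`-style tokens; guessed addresses should still go through the verifier above:

```python
# Expand a Hunter-style pattern string into a candidate email address.
# Tokens assumed here: {first}, {last}, {f}, {l}; always verify before sending.
def guess_email(pattern, first, last, domain):
    local = pattern.format(
        first=first.lower(), last=last.lower(),
        f=first[0].lower(), l=last[0].lower(),
    )
    return f"{local}@{domain}"

print(guess_email("{first}.{last}", "Jane", "Smith", "acme.com"))  # jane.smith@acme.com
print(guess_email("{f}{last}", "Jane", "Smith", "acme.com"))       # jsmith@acme.com
```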
**Enriched contact record:**
```markdown
## Contact: [First Last]

**Company:** [Company Name]
**Title:** [Exact current title]
**Email:** [email@company.com] (confidence: High/Medium)
**LinkedIn:** [URL]
**GitHub:** [username if found]

### Personalization hooks
- **Recent activity:** [blog post, conference talk, job change, company announcement]
- **Shared connections:** [mutual contacts]
- **Pain signals:** [job posting for their team, content they've published]
- **Tech interest:** [repos they've starred, tools they've written about]

### Outreach priority
[High / Medium / Low] — [reason]
```

### Phase 5: Outreach Preparation

Prepare personalized outreach for high-priority contacts:

**Outreach template principles:**
- Short: 3-4 sentences max (first touch)
- Specific: One concrete observation about their situation
- Relevant: Clear connection between their pain and your solution
- Easy: Lowest-friction next step ("5-minute call" not "let's do a demo")

**Template structure:**
```
[Personalized opener — specific observation about them or their company]
[What you do — one sentence, focused on the problem you solve]
[Why relevant to them — connect their signal to your solution]
[Low-friction CTA — open-ended question or easy next step]
```

**Example (developer tools):**
```
Hi [Name],

Saw [Company] is hiring 3 senior engineers right now — congrats on the growth.
We built ClawPowers to help engineering teams like yours ship faster by giving
AI coding agents persistent memory and self-improvement — instead of re-explaining
context every session.

Would it be worth 10 minutes to see if it fits your current workflow?
```

**What to avoid:**
- "I hope this finds you well"
- "I came across your profile"
- Feature lists in the first touch
- Asking for 30-60 minute meetings
- CC'ing multiple people without introduction

### Phase 6: CRM Output

Export qualified, enriched prospects to your CRM:

```bash
# Build CSV for CRM import
cat > prospects.csv << 'EOF'
company,first_name,last_name,title,email,linkedin,priority,notes
Acme Corp,Jane,Smith,VP Engineering,jane@acme.com,linkedin.com/in/jsmith,High,"Hiring 3 engineers; OSS contributor"
EOF

# Or output JSON for API import
python3 -c "
import json
prospects = [
    {
        'company': 'Acme Corp',
        'contact': {'first': 'Jane', 'last': 'Smith', 'title': 'VP Engineering'},
        'email': 'jane@acme.com',
        'priority': 'high',
        'personalization': 'Hiring 3 engineers; active OSS contributor'
    }
]
print(json.dumps(prospects, indent=2))
"
```

## ClawPowers Enhancement

When `~/.clawpowers/` runtime is initialized:

**Prospect Database:**

All prospects, enrichment, and outreach outcomes are stored:
```bash
bash runtime/persistence/store.sh set "prospect:acme-jane-smith:status" "outreach_sent"
bash runtime/persistence/store.sh set "prospect:acme-jane-smith:email_sent_at" "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
bash runtime/persistence/store.sh set "prospect:acme-jane-smith:response" "interested"
bash runtime/persistence/store.sh set "prospect:acme-jane-smith:response_at" "$(date -u +%Y-%m-%dT%H:%M:%SZ)"
```

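Keys like `prospect:acme-jane-smith:status` can be generated consistently from the company and contact names (a hypothetical helper; the store itself doesn't mandate a naming scheme):

```python
import re

# Hypothetical helper: derive store keys like "prospect:acme-jane-smith:status"
# from free-form company/contact names.
def prospect_key(company, contact, field):
    def slug(s):
        # Lowercase, collapse non-alphanumeric runs to "-", trim edges.
        return re.sub(r"[^a-z0-9]+", "-", s.lower()).strip("-")
    return f"prospect:{slug(company)}-{slug(contact)}:{field}"

print(prospect_key("Acme", "Jane Smith", "status"))  # prospect:acme-jane-smith:status
```

A deterministic key scheme matters here because later phases (response tracking, ICP refinement) look the same prospect up again.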
**ICP Refinement:**

Response rates feed back into ICP scoring:
```bash
bash runtime/feedback/analyze.sh --skill prospecting
# Output:
# ICP criterion effectiveness:
# - "hiring 3+ engineers" signal → 34% response rate (high signal)
# - "Series B" signal → 12% response rate (weak signal)
# - "Python OR TypeScript in job postings" → 28% response rate (medium signal)
# Recommendation: weight hiring signal 3x vs. funding stage
```

```bash
bash runtime/metrics/collector.sh record \
  --skill prospecting \
  --outcome success \
  --notes "devtools-campaign: 25 companies qualified, 18 contacts enriched, 18 outreach prepared"
```

## Anti-Patterns

| Anti-Pattern | Why It Fails | Correct Approach |
|--------------|--------------|------------------|
| Spray-and-pray outreach | Low response rate, damages brand | Qualify before enriching, enrich before outreach |
| Contacting every person at a company | Creates noise, damages company relationship | One contact per company (initial touch) |
| Generic outreach without personalization | Reads as spam | Specific, researched opener per contact |
| Not disqualifying early | Wasted enrichment and outreach effort | Score ICP criteria before enrichment |
| Storing prospects without follow-up system | Prospects go cold | CRM entry with follow-up date at enrichment time |
| Asking for a demo in first touch | Friction too high for cold contact | Low-friction first step (quick call, question) |
| Not tracking response rates per ICP signal | Can't improve ICP over time | Log which signals correlated with responses |
package/skills/receiving-code-review/SKILL.md
@@ -0,0 +1,225 @@
---
name: receiving-code-review
description: Process code review feedback constructively and systematically. Activate when you receive review comments on a PR.
version: 1.0.0
requires:
  tools: [git, bash]
  runtime: false
metrics:
  tracks: [feedback_items_addressed, response_time, clarification_requests, approved_on_cycle]
  improves: [feedback_categorization, response_quality, pattern_detection]
---

# Receiving Code Review

## When to Use

Apply this skill when:

- You've received review comments on a PR
- You need to process reviewer feedback before responding
- You want to make sure you're addressing feedback effectively, not just defensively

**The mindset:** Reviewers are helping you ship better code. Feedback is information, not judgment. The goal is to understand it, not to defend against it.

## Core Methodology

### Step 1: Read Everything Before Responding to Anything

Before replying to any comment:

1. Read all comments start to finish
2. Identify the reviewer's primary concerns (often one theme across many comments)
3. Distinguish between required changes and suggestions

**Don't respond in real-time as notifications arrive.** You'll respond to symptoms without seeing the root concern, leading to back-and-forth rather than resolution.

### Step 2: Categorize Feedback

For each comment, assign a category:

| Category | Definition | Action |
|----------|------------|--------|
| **Bug** | Reviewer found a real bug | Fix unconditionally |
| **Security** | Security concern identified | Fix unconditionally; escalate if the concern extends beyond this PR |
| **Clarity** | Code is correct but hard to understand | Rename / add comment / restructure |
| **Suggestion** | Optional improvement, not blocking | Evaluate on merit; implement or decline with reason |
| **Style** | Formatting, naming conventions | Align with team standard; if no standard, discuss |
| **Question** | Reviewer wants to understand, not change | Explain in response; add inline comment if confusion likely for others |
| **Out of scope** | Valid concern, but not for this PR | Acknowledge, create tracking issue, respond with issue number |
| **Nitpick** | Low-impact preference | Implement if quick; decline with "minor — deferred" if not |

**Never suppress a Bug or Security comment by arguing it's not a bug.** If you disagree, explain your reasoning and request explicit sign-off from the reviewer. Don't merge until that sign-off is given.
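
Once comments are categorized, a quick tally surfaces the reviewer's dominant theme from Step 1 (the comment list below is illustrative):

```python
from collections import Counter

# Tally categorized review comments; categories follow the table above.
comments = [
    {"line": 61, "category": "Bug"},
    {"line": 88, "category": "Security"},
    {"line": 44, "category": "Clarity"},
    {"line": 78, "category": "Suggestion"},
    {"line": 30, "category": "Out of scope"},
    {"line": 55, "category": "Question"},
    {"line": 102, "category": "Bug"},
]
counts = Counter(c["category"] for c in comments)
theme, n = counts.most_common(1)[0]
print(f"Dominant theme: {theme} ({n} of {len(comments)} comments)")
# Dominant theme: Bug (2 of 7 comments)
```

The tally also feeds the response plan in Step 3: required fixes (Bug, Security) are separated from discussion items before any code is touched.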

### Step 3: Create a Response Plan

Before touching code, plan your responses:

```markdown
## Review Response Plan

**PR:** feature/auth-service
**Reviewer:** Alice
**Total comments:** 12

### Required (must fix before merge)
1. Line 61 — Token expiry check order: [AGREED — will move expiry check before signature validation]
2. Line 88 — Error message leaks algorithm name: [AGREED — will generalize to "Invalid token"]

### Will Fix (optional but clearly right)
3. Line 44 — Rename `u` to `user_id`: [AGREED — will rename throughout]
4. Line 102 — Missing test for empty audience claim: [AGREED — will add test]

### Will Discuss (need alignment)
5. Line 78 — Algorithm enforcement via allowlist vs. blocklist: [DISAGREE — will explain RS256-only policy in response]

### Out of Scope (create issues)
6. Line 30 — Refresh token support: [Valid — creating issue #263, not in this PR's scope]

### Questions (need to understand before acting)
7. Line 55 — "Is this safe?" — need clarification: what specifically concerns you?
```

### Step 4: Implement Changes

Work through required and agreed changes:

1. Create a new commit for the review changes (don't squash yet — reviewer needs to see what changed)
2. Address each comment with a corresponding code change
3. For each change, reply to the comment in GitHub explaining what you did

**Commit message format:**
```
review: address auth service review feedback from Alice

- Move token expiry check before signature validation (line 61)
- Generalize error message to avoid algorithm disclosure (line 88)
- Rename u → user_id for clarity throughout auth.py
- Add test for empty audience claim
```

### Step 5: Respond to Every Comment

Every comment deserves a response, even if the response is "acknowledged" or "disagree — see explanation":

**For implemented changes:**
```
Fixed in commit a3f9b2c. Moved expiry check to line 47, now before signature
validation as you suggested. Tests updated to reflect new check order.
```

**For agreed suggestions:**
```
Good catch — added test for empty audience claim in test_auth.py:147.
```

**For disagreements:**
```
I understand the concern. The reason I chose an allowlist (RS256 only) rather than
a blocklist (exclude HS256) is that new algorithms get added periodically — an
allowlist stays secure as the JWT spec evolves, a blocklist can be bypassed by
a newly-added algorithm we haven't blocked yet. Happy to add a comment to the
code explaining this if it's not obvious.
```

**For out-of-scope items:**
```
Valid point — this is outside this PR's scope but worth addressing.
Created issue #263 for refresh token support. Added it to the next sprint backlog.
```

**Tone rules:**
- Never sarcastic or defensive
- Explain your reasoning when disagreeing
- Thank reviewers for catches that were genuine bugs or security issues
- Don't over-apologize — brief acknowledgment is sufficient

### Step 6: Request Re-review

After addressing all feedback:

```bash
# Push changes
git push origin feature/auth-service

# In GitHub: mark all addressed conversations as "Resolved"
# (only resolve conversations you've addressed — let reviewer resolve their own)
# Re-request review from reviewer
```

Notify the reviewer:
```
Hi Alice — addressed all your feedback. Main changes:
1. Moved expiry check before signature validation
2. Generalized error message
3. Added 3 new tests for edge cases you identified

Disagreed on one point (algorithm allowlist) and explained reasoning in the thread —
would appreciate your thoughts. PR is ready for re-review.
```

### Step 7: Iterate Until Approved

Repeat Steps 4-6 until all required changes are addressed and the reviewer approves.

**If review is dragging:**
- If reviewer hasn't responded in 2 business days after re-request: follow up in Slack/Teams
- If a comment thread is becoming a lengthy debate: move it to a real conversation, then update the PR based on the conclusion

## ClawPowers Enhancement

When `~/.clawpowers/` runtime is initialized:

**Feedback Pattern Database:**

Every piece of review feedback gets stored (with PR and reviewer context):

```bash
bash runtime/persistence/store.sh set "feedback:pattern:token-expiry-order" "expiry check must precede signature check"
bash runtime/persistence/store.sh set "feedback:pattern:error-message-leakage" "error messages must not disclose algorithm or implementation details"
```

Before writing code in the future:
```bash
bash runtime/persistence/store.sh list "feedback:pattern:*"
# → Shows common feedback patterns → prevents them from being submitted in the first place
```

**Common Issues Tracking:**

After 20+ PR cycles:
```bash
bash runtime/feedback/analyze.sh --skill receiving-code-review
# Output:
# Most common feedback category: Bug (38%) — improving test coverage recommended
# Second most common category: Security (22%) — consider security review checklist in verification step
# Average review cycles: 2.1 — target: 1.5
# Longest threads: algorithm selection, error handling, naming conventions
```

**Response Quality Metrics:**

```bash
bash runtime/metrics/collector.sh record \
  --skill receiving-code-review \
  --outcome success \
  --notes "auth-service: 12 comments, 10 fixed, 1 declined (explained), 1 deferred (#263), approved on cycle 2"
```

## Anti-Patterns

| Anti-Pattern | Why It Fails | Correct Approach |
|--------------|--------------|------------------|
| Responding to comments as they arrive | Miss the theme, respond to symptoms | Read all comments first, then respond |
| Defensive responses | Damages reviewer relationship, slows iteration | Assume good faith, explain reasoning when disagreeing |
| Ignoring comments | Reviewer marks "changes requested" forever | Respond to every comment |
| Resolving reviewer's conversations yourself | Reviewer loses track of what was addressed | Only resolve your own acknowledged items |
| "Fixed" with no explanation | Reviewer can't verify without re-reading diff | Explain what you changed and where |
| Silently closing out-of-scope items | Valid concerns get lost | Create issues for deferred items, reference in response |
| Merging without re-request | Reviewer never sees the updated code | Always re-request after addressing feedback |

## Integration with Other Skills

- Preceded by `requesting-code-review`
- Use `systematic-debugging` if review feedback reveals a deeper architectural issue
- Use `writing-plans` if review feedback requires substantial new work