@kodrunhq/opencode-autopilot 1.4.0 → 1.5.0

@@ -0,0 +1,7 @@
+ ---
+ description: Start a brainstorming session with Socratic design refinement
+ ---
+
+ Use the brainstorming skill to explore the topic through Socratic questioning. Ask clarifying questions, explore alternatives, generate at least 3 distinct approaches, and present a structured design recommendation.
+
+ $ARGUMENTS
@@ -0,0 +1,7 @@
+ ---
+ description: Audit all installed skills, commands, and agents with optional lint validation
+ ---
+
+ Invoke the `oc_stocktake` tool to audit all installed assets. Pass any arguments as the lint option.
+
+ $ARGUMENTS
@@ -0,0 +1,7 @@
+ ---
+ description: Implement a feature using strict RED-GREEN-REFACTOR TDD methodology
+ ---
+
+ Use the tdd-workflow skill to implement the feature following strict RED-GREEN-REFACTOR. Write the failing test first (RED), implement minimally to pass (GREEN), then clean up (REFACTOR).
+
+ $ARGUMENTS
@@ -0,0 +1,7 @@
+ ---
+ description: Detect documentation affected by recent code changes and suggest updates
+ ---
+
+ Invoke the `oc_update_docs` tool to analyze recent code changes and identify documentation that may need updating.
+
+ $ARGUMENTS
@@ -0,0 +1,7 @@
+ ---
+ description: Decompose a feature into a structured implementation plan with tasks and dependency waves
+ ---
+
+ Use the plan-writing skill to decompose the feature into bite-sized tasks with exact file paths, dependency waves, and verification criteria for each task.
+
+ $ARGUMENTS
@@ -0,0 +1,295 @@
+ ---
+ name: brainstorming
+ description: Socratic design refinement methodology for exploring ideas through structured divergent and convergent thinking phases
+ stacks: []
+ requires: []
+ ---
+
+ # Brainstorming
+
+ Structured Socratic design refinement for exploring ideas, challenging assumptions, and arriving at well-reasoned implementation plans. This skill guides you through a 5-phase process that moves from problem clarification through divergent exploration to convergent evaluation, synthesis, and actionable output.
+
+ Apply this skill whenever you need to explore design space before committing to an approach. The methodology prevents premature convergence (jumping to the first solution) and analysis paralysis (exploring forever without deciding).
+
+ ## When to Use
+
+ **Activate this skill when:**
+
+ - Designing a new feature with multiple possible approaches
+ - Making architectural decisions with significant trade-offs
+ - The user gives vague or open-ended requirements that need refinement
+ - Exploring creative solutions to a technical problem
+ - Evaluating whether to build, buy, or adapt an existing solution
+ - Refactoring a system with multiple viable restructuring strategies
+ - The team disagrees on the right approach and needs structured evaluation
+
+ **Do NOT use when:**
+
+ - The implementation path is obvious and well-defined
+ - The task is a straightforward bug fix with a clear root cause
+ - Requirements are precise and leave no design decisions
+ - Time pressure demands immediate action over exploration
+ - The change is trivial (rename, config tweak, dependency bump)
+
+ ## The Brainstorming Process
+
+ The process has five sequential phases. Each phase has a clear purpose and a defined output. Do not skip phases -- the discipline of following all five is what prevents the anti-patterns listed below.
+
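The artifacts the five phases hand to each other can be sketched as data shapes. A minimal TypeScript sketch -- all type and field names are hypothetical, for illustration only, not part of this package's API:

```typescript
// Hypothetical shapes for the artifacts each phase produces.
type Rating = "LOW" | "MEDIUM" | "HIGH";

interface Approach {
  name: string;        // Phase 2: short, memorable label (2-4 words)
  description: string; // one sentence explaining the core idea
  tradeOff: string;    // the main thing you give up with this approach
  effort: "small" | "medium" | "large";
}

interface Evaluation {
  approach: string;
  feasibility: Rating;
  alignment: Rating; // LOW alignment means the approach is eliminated
  risk: Rating;      // HIGH risk is flagged for deeper analysis in Phase 4
  notes: string;
}

// Phase 3 rule: any approach rated LOW on Alignment is dropped immediately.
function shortlist(evaluations: Evaluation[]): Evaluation[] {
  return evaluations.filter((e) => e.alignment !== "LOW");
}

const kept = shortlist([
  { approach: "Queue-based", feasibility: "HIGH", alignment: "HIGH", risk: "LOW", notes: "" },
  { approach: "Cron sweep", feasibility: "HIGH", alignment: "LOW", risk: "LOW", notes: "doesn't solve core problem" },
]);
// kept contains only the "Queue-based" evaluation
```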
+ ### Phase 1: Clarify the Problem
+
+ **Purpose:** Understand the real problem before generating solutions. Most bad designs trace back to solving the wrong problem.
+
+ **Process:**
+
+ 1. Ask 3-5 Socratic questions to challenge assumptions and surface the actual need
+ 2. Each question should probe a different dimension: user need, constraints, scope, success criteria, risk
+ 3. Do NOT propose solutions in this phase -- only ask questions
+ 4. Synthesize answers into a one-paragraph problem statement
+
+ **Socratic Questions to Ask:**
+
+ - "What problem does this solve for the user?" -- Forces focus on user value, not technical elegance
+ - "What happens if we don't build this?" -- Tests whether the problem is real or imagined
+ - "What's the simplest version that delivers value?" -- Prevents scope creep from the start
+ - "Who else has solved this problem?" -- Avoids reinventing the wheel
+ - "What are we assuming that might not be true?" -- Surfaces hidden assumptions
+
+ **Output:** A clear problem statement that the user confirms is accurate. If the user cannot confirm, ask more questions.
+
+ **Time box:** 5-10 minutes. If you cannot clarify the problem in 10 minutes, the scope is too large -- split it.
+
+ ### Phase 2: Divergent Exploration
+
+ **Purpose:** Generate multiple distinct approaches without evaluating them. Quantity over quality in this phase.
+
+ **Process:**
+
+ 1. Generate 3-5 distinct approaches to the clarified problem
+ 2. Each approach must be genuinely different -- not variations of the same idea
+ 3. For each approach, document:
+    - **Name:** A short, memorable label (2-4 words)
+    - **Description:** One sentence explaining the core idea
+    - **Key trade-off:** The main thing you give up with this approach
+    - **Effort estimate:** Small (hours), Medium (days), or Large (weeks)
+ 4. Include at least one "wild card" approach that challenges a fundamental assumption
+ 5. Do NOT evaluate or rank approaches in this phase
+
+ **Output:** A numbered list of 3-5 approaches with name, description, trade-off, and effort.
+
+ **Time box:** 10-15 minutes. If you cannot generate 3 approaches, the problem definition is too narrow -- go back to Phase 1 and broaden it.
+
+ **Forcing creativity:** If all approaches look similar, try these prompts:
+ - "What if we had unlimited time/budget?"
+ - "What if we had to ship in one hour?"
+ - "What if we used a completely different technology?"
+ - "What would a competitor build?"
+ - "What if we solved the opposite problem?"
+
+ ### Phase 3: Convergent Evaluation
+
+ **Purpose:** Systematically assess each approach against consistent criteria.
+
+ **Process:**
+
+ 1. For each approach from Phase 2, evaluate three dimensions:
+    - **Feasibility:** Can we build this with the current stack, team, and timeline? (LOW / MEDIUM / HIGH)
+    - **Alignment:** Does this solve the clarified problem from Phase 1? (LOW / MEDIUM / HIGH)
+    - **Risk:** What could go wrong? How likely is failure? (LOW / MEDIUM / HIGH)
+ 2. For each dimension, write one sentence justifying the rating
+ 3. Identify any approach that scores LOW on Alignment -- eliminate it immediately
+ 4. Flag any approach with HIGH Risk for deeper analysis in Phase 4
+
+ **Output:** A comparison table with ratings and justifications for each approach.
+
+ **Evaluation format:**
+
+ ```
+ | Approach | Feasibility | Alignment | Risk   | Notes |
+ |----------|-------------|-----------|--------|-------|
+ | Name 1   | HIGH        | HIGH      | LOW    | ...   |
+ | Name 2   | MEDIUM      | HIGH      | MEDIUM | ...   |
+ | Name 3   | HIGH        | LOW       | LOW    | Eliminated: doesn't solve core problem |
+ ```
+
+ **Time box:** 10-15 minutes. If evaluation takes longer, you have too many approaches -- drop the weakest two.
+
+ ### Phase 4: Synthesis
+
+ **Purpose:** Select the best approach and refine it. Combine strengths from multiple approaches if possible.
+
+ **Process:**
+
+ 1. Recommend the top approach based on the Phase 3 evaluation
+ 2. State the rationale in 2-3 sentences: why this approach, why not the alternatives
+ 3. If two approaches have complementary strengths, propose a hybrid that combines the best of both
+ 4. Document rejected approaches and why -- this becomes decision log material
+ 5. Identify risks from Phase 3 and propose specific mitigations
+
+ **Output:**
+
+ - Selected approach with rationale
+ - Risk mitigations (1-2 sentences each)
+ - Decision log entry: what was decided, what was rejected, why
+
+ **Decision log format:**
+
+ ```
+ ## Decision: [Topic]
+ **Date:** [date]
+ **Selected:** [approach name]
+ **Rationale:** [2-3 sentences]
+ **Rejected alternatives:**
+ - [Approach 2]: [reason for rejection]
+ - [Approach 3]: [reason for rejection]
+ **Risks and mitigations:**
+ - [Risk 1]: [mitigation]
+ ```
+
+ ### Phase 5: Action Items
+
+ **Purpose:** Convert the chosen approach into concrete, executable next steps.
+
+ **Process:**
+
+ 1. Break the selected approach into 3-7 discrete tasks
+ 2. Each task must be independently completable and verifiable
+ 3. For each task, specify:
+    - **What:** One sentence describing the deliverable
+    - **Files:** Which files to create or modify
+    - **Tests:** What test(s) verify this task is done
+    - **Dependencies:** Which other tasks must complete first
+ 4. Order tasks by dependency (independent tasks first)
+ 5. Identify the first task to start with -- it should be the smallest, lowest-risk task that validates the approach
+
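Step 4 above (order tasks by dependency) is a topological sort. A minimal TypeScript sketch using Kahn's algorithm -- the task names are hypothetical:

```typescript
// A task with the names of the tasks it depends on.
interface Task {
  name: string;
  dependsOn: string[];
}

// Kahn's algorithm: repeatedly emit tasks whose dependencies are all met.
function orderByDependency(tasks: Task[]): string[] {
  const indegree = new Map<string, number>();
  const dependents = new Map<string, string[]>();
  for (const t of tasks) {
    indegree.set(t.name, t.dependsOn.length);
    for (const d of t.dependsOn) {
      dependents.set(d, [...(dependents.get(d) ?? []), t.name]);
    }
  }
  // Start with independent tasks (no unmet dependencies).
  const queue = tasks.filter((t) => t.dependsOn.length === 0).map((t) => t.name);
  const order: string[] = [];
  while (queue.length > 0) {
    const name = queue.shift()!;
    order.push(name);
    for (const next of dependents.get(name) ?? []) {
      const remaining = indegree.get(next)! - 1;
      indegree.set(next, remaining);
      if (remaining === 0) queue.push(next);
    }
  }
  // If some task was never emitted, the dependency graph has a cycle.
  if (order.length !== tasks.length) throw new Error("dependency cycle detected");
  return order;
}
```

Tasks that come off the queue together form a "wave" that could run in parallel.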
+ **Output:** An ordered task list ready for execution.
+
+ **Task format:**
+
+ ```
+ 1. [Task name]
+    - What: [deliverable description]
+    - Files: [file paths]
+    - Tests: [verification approach]
+    - Depends on: [task numbers or "none"]
+ ```
+
+ ## Socratic Question Templates
+
+ Use these categorized questions when Phase 1 needs more depth.
+
+ ### Requirements Questions
+
+ - "Who is the primary user of this feature?"
+ - "What does success look like from the user's perspective?"
+ - "What is the user doing today without this feature?"
+ - "What is the minimum viable version that delivers value?"
+ - "Are there existing patterns in the codebase we should follow?"
+
+ ### Architecture Questions
+
+ - "What are the scaling constraints we need to respect?"
+ - "Where are the system boundaries?"
+ - "What data flows through this feature?"
+ - "Which existing modules does this touch?"
+ - "What's the deployment story -- does this need to be backwards compatible?"
+
+ ### Risk Questions
+
+ - "What's the worst failure mode?"
+ - "What do we not know yet?"
+ - "What assumptions are we making about the user's environment?"
+ - "What happens when this feature interacts with [adjacent system]?"
+ - "If this fails in production, how do we detect and recover?"
+
+ ### Scope Questions
+
+ - "Is this one feature or multiple features bundled together?"
+ - "What's explicitly out of scope?"
+ - "Can we ship this incrementally, or is it all-or-nothing?"
+ - "What's the maintenance cost after we ship?"
+
+ ## Anti-Pattern Catalog
+
+ ### Anti-Pattern: Premature Convergence
+
+ **What goes wrong:** Jumping to the first reasonable solution without exploring alternatives. The "obvious" approach often has hidden trade-offs that only surface during implementation.
+
+ **Signs:** Phase 2 produces only one approach. The "exploration" is really just elaborating on a pre-decided solution.
+
+ **Instead:** Force yourself to generate at least 3 genuinely different approaches before evaluating any of them. Use the creativity prompts from Phase 2 if stuck.
+
+ ### Anti-Pattern: Analysis Paralysis
+
+ **What goes wrong:** Exploring endlessly without converging. Every approach spawns more questions. The brainstorming session becomes a research project.
+
+ **Signs:** Phase 2 generates 8+ approaches. Phase 3 evaluation raises more questions than it answers. The session exceeds 45 minutes with no decision.
+
+ **Instead:** Time-box each phase strictly. After Phase 3, force a decision -- even if imperfect. A good decision now beats a perfect decision never.
+
+ ### Anti-Pattern: Groupthink
+
+ **What goes wrong:** All generated approaches are variations of the same fundamental idea. No real diversity in the solution space.
+
+ **Signs:** Every approach uses the same technology, the same architecture, the same data model. The "wild card" approach isn't actually wild.
+
+ **Instead:** Deliberately propose at least one approach that breaks a core assumption. Try "What if we used no database?" or "What if the user did this manually?" to force divergent thinking.
+
+ ### Anti-Pattern: Scope Creep
+
+ **What goes wrong:** Brainstorming expands beyond the original problem. New features, edge cases, and "nice to haves" accumulate until the scope is unrecognizable.
+
+ **Signs:** Phase 5 produces 15+ tasks. The action items address problems not mentioned in Phase 1. The effort estimate doubled during brainstorming.
+
+ **Instead:** Revisit the Phase 1 problem statement after each phase. If a new idea doesn't serve the clarified problem, log it for future consideration but exclude it from the current session.
+
+ ### Anti-Pattern: Solutioning Before Clarifying
+
+ **What goes wrong:** Skipping Phase 1 and jumping straight to generating approaches. Without a clear problem statement, the approaches solve different problems.
+
+ **Signs:** Phase 2 approaches are hard to compare because each one addresses a slightly different interpretation of the problem.
+
+ **Instead:** Never skip Phase 1. Invest the time to get a confirmed problem statement before generating any solutions.
+
+ ## Integration with Our Tools
+
+ After completing Phase 5 (Action Items), use the following tools to continue:
+
+ - **Plan writing skill:** Convert Phase 5 action items into a formal execution plan with file paths, verification steps, and dependency ordering
+ - **`oc_orchestrate`:** For autonomous execution of the chosen approach -- pass the Phase 5 task list as the plan input
+ - **`oc_review`:** After implementing any Phase 5 task, invoke a review to catch issues early
+ - **Decision log:** The Phase 4 synthesis output is a ready-made decision log entry -- save it for future reference
+
+ ## Failure Modes
+
+ ### Unclear Problem Definition
+
+ **Symptom:** Phase 1 ends but you still cannot explain the problem in one sentence.
+
+ **Recovery:** Ask the user directly: "Can you describe the problem without mentioning any solution?" If they cannot, the problem is not well-defined enough for brainstorming. Help them define the problem first.
+
+ ### All Approaches Are Variations of the Same Idea
+
+ **Symptom:** Phase 2 produces approaches that differ only in implementation details, not in fundamental strategy.
+
+ **Recovery:** Add artificial constraints to force creativity:
+ - "What if we couldn't use [the obvious technology]?"
+ - "What if the budget was 10x smaller / larger?"
+ - "What would a team with completely different expertise build?"
+
+ ### No Approach Feels Feasible
+
+ **Symptom:** Phase 3 evaluation rates all approaches LOW on Feasibility.
+
+ **Recovery:** The problem scope is too large. Go back to Phase 1 and split the problem into smaller sub-problems. Brainstorm each sub-problem independently.
+
+ ### Brainstorming Produces Consensus but the Wrong Direction
+
+ **Symptom:** The selected approach from Phase 4 fails during implementation.
+
+ **Recovery:** This is expected and normal. Return to Phase 3 with the failure as new information. Re-evaluate the remaining approaches. The rejected alternatives from the decision log become the starting point for the next round.
+
+ ### User Disengages During the Process
+
+ **Symptom:** The user stops responding to Socratic questions or says "just pick one."
+
+ **Recovery:** Respect the signal. Compress: state your top recommendation with a one-sentence rationale. Ask for a yes/no confirmation. Skip to Phase 5 with the recommended approach.
@@ -0,0 +1,241 @@
+ ---
+ name: code-review
+ description: Structured methodology for requesting and receiving code reviews -- what to check, how to provide feedback, and how to respond to review comments
+ stacks: []
+ requires:
+ - coding-standards
+ ---
+
+ # Code Review
+
+ A structured methodology for high-quality code reviews. Whether you are requesting a review, performing one, or responding to feedback, follow these guidelines to maximize the value of every review cycle.
+
+ ## When to Use
+
+ - Before merging any pull request
+ - After completing a feature or bug fix
+ - When reviewing someone else's code
+ - When `oc_review` flags issues that need human judgment
+ - After refactoring sessions to catch unintended behavior changes
+
+ ## Requesting a Review
+
+ A good review request sets the reviewer up for success. The less guessing a reviewer has to do, the better the feedback you get back.
+
+ ### Provide Context
+
+ Every review request should include:
+
+ - **What the change does** -- a one-sentence summary of the behavior change
+ - **Why it is needed** -- link to the issue, user story, or design decision
+ - **What alternatives were considered** -- and why this approach was chosen
+ - **Testing done** -- what was tested, how, and which edge cases were covered
+
+ ### Highlight Risky Areas
+
+ Call out areas where you are uncertain or where the change is particularly impactful:
+
+ - "I am unsure about the error handling in auth.ts lines 40-60"
+ - "The migration is irreversible -- please double-check the column drop"
+ - "This changes the public API surface -- backward compatibility impact"
+
+ ### Keep PRs Small
+
+ - Target under 300 lines of meaningful diff (exclude generated files, lockfiles, snapshots)
+ - If a change is larger, split it into stacked PRs or a feature branch with incremental commits
+ - Each PR should be independently reviewable and shippable
+ - One concern per PR -- do not mix refactoring with feature work
+
+ ### Self-Review First
+
+ Before requesting a review from others:
+
+ 1. Read through the entire diff yourself as if you were the reviewer
+ 2. Run `oc_review` for automated multi-agent analysis
+ 3. Check that tests pass and coverage is maintained
+ 4. Verify you have not left any TODO markers, debug logging, or commented-out code
+ 5. Use the coding-standards skill as a checklist for naming, structure, and error handling
+
+ ## Performing a Review
+
+ Review in this order for maximum value. Architecture issues found early save the most rework.
+
+ ### 1. Architecture
+
+ - Does the overall approach make sense for the problem being solved?
+ - Are responsibilities properly separated between modules?
+ - Does this introduce new patterns that conflict with existing conventions?
+ - Are the right abstractions being used (not too many, not too few)?
+ - Will this scale to handle the expected load or data volume?
+
+ ### 2. Correctness
+
+ - Does the code do what it claims to do?
+ - Are edge cases handled? (null inputs, empty collections, boundary values)
+ - Are error paths covered? (network failures, invalid data, timeouts)
+ - Is the logic correct for concurrent or async scenarios?
+ - Are state transitions valid and complete?
+
+ ### 3. Security
+
+ - Is all user input validated at the boundary? (Reference the coding-standards skill)
+ - Are authentication and authorization checks in place?
+ - Are secrets handled properly? (no hardcoding, no logging)
+ - Is output properly escaped to prevent XSS?
+ - Are SQL queries parameterized?
+ - Is CSRF protection enabled for state-changing endpoints?
+
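The parameterization check above can be illustrated with a minimal sketch. The query shapes are illustrative only; real drivers accept a statement and a separate values array:

```typescript
// Illustration only: why concatenation fails the "parameterized?" check.
const userInput = "x' OR '1'='1";

// BAD: user input is spliced into the SQL text and can change its meaning.
const concatenated = `SELECT * FROM users WHERE name = '${userInput}'`;
// The statement now contains an injected OR clause.

// GOOD (the shape most drivers use): the statement keeps a placeholder and
// the value travels separately, so the driver binds it as data, not syntax.
const parameterized = {
  text: "SELECT * FROM users WHERE name = $1",
  values: [userInput],
};
```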
+ ### 4. Performance
+
+ - Any N+1 query patterns? (fetching in a loop instead of batching)
+ - Unbounded loops or recursion? (missing limits, no pagination)
+ - Missing database indexes for frequent queries?
+ - Unnecessary memory allocations? (large objects created in hot paths)
+ - Could any expensive operations be cached or deferred?
+
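The N+1 check above can be sketched with a simulated repository that counts round trips. All names here are hypothetical, for illustration only:

```typescript
// Simulated data store: each lookup function counts as one round trip.
type Team = { id: number; name: string };
const teams = new Map<number, Team>([
  [1, { id: 1, name: "core" }],
  [2, { id: 2, name: "infra" }],
]);
let queryCount = 0;

// One query per id (the pattern the review should flag).
function findTeam(id: number): Team | undefined {
  queryCount++;
  return teams.get(id);
}

// One query for many ids (the batched alternative).
function findTeams(ids: number[]): Team[] {
  queryCount++;
  return ids.map((id) => teams.get(id)!);
}

const users = [
  { id: 10, teamId: 1 },
  { id: 11, teamId: 2 },
  { id: 12, teamId: 1 },
];

// N+1: fetching inside the loop issues one query per user.
queryCount = 0;
const slow = users.map((u) => findTeam(u.teamId)!.name);
const n1Queries = queryCount; // 3 queries for 3 users

// Batched: collect distinct ids, fetch once, join in memory.
queryCount = 0;
const distinct = [...new Set(users.map((u) => u.teamId))];
const byId = new Map(findTeams(distinct).map((t) => [t.id, t]));
const fast = users.map((u) => byId.get(u.teamId)!.name);
const batchedQueries = queryCount; // 1 query total
```

Both versions produce the same names; only the query count differs, which is exactly what the pattern costs at scale.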
+ ### 5. Readability
+
+ - Are names descriptive and intention-revealing?
+ - Are functions small and focused (under 50 lines)?
+ - Are files focused on a single concern (under 400 lines)?
+ - Is the nesting depth reasonable (4 levels or less)?
+ - Would a future developer understand this without asking the author?
+
+ ### 6. Testing
+
+ - Do tests exist for all new behavior?
+ - Do existing tests still pass?
+ - Are edge cases tested (not just the happy path)?
+ - Are tests independent and deterministic (no flaky tests)?
+ - Is the test structure clear? (arrange-act-assert)
+
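The arrange-act-assert structure mentioned above looks like this in practice (`applyDiscount` is a hypothetical function defined inline for illustration):

```typescript
// Hypothetical function under test.
function applyDiscount(price: number, percent: number): number {
  return Math.round(price * (1 - percent / 100) * 100) / 100;
}

// Arrange: set up inputs and any fixtures.
const price = 200;
const percent = 25;

// Act: invoke exactly one behavior under test.
const discounted = applyDiscount(price, percent);

// Assert: verify the outcome; one logical assertion keeps failures easy to read.
if (discounted !== 150) {
  throw new Error(`expected 150, got ${discounted}`);
}
```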
+ ## Providing Feedback
+
+ ### Use Severity Levels
+
+ Every review comment should be tagged with a severity so the author can prioritize:
+
+ - **CRITICAL** -- Must fix before merge. Bugs, security issues, data loss risks.
+ - **HIGH** -- Should fix before merge. Missing error handling, performance issues, incorrect behavior in edge cases.
+ - **MEDIUM** -- Consider fixing. Code quality improvements, better naming, minor refactoring opportunities.
+ - **LOW** -- Nit. Style preferences, optional improvements, suggestions for future work.
+
+ ### Be Specific
+
+ Bad: "This is confusing."
+
+ Good: "The variable `data` on line 42 of user-service.ts does not convey what it holds. Consider renaming to `activeUserRecords` to match the query filter on line 38."
+
+ Every comment should include:
+
+ - The file and line (or line range)
+ - What the issue is
+ - A suggested fix or alternative approach
+
+ ### Be Constructive
+
+ - Explain WHY something is a problem, not just WHAT is wrong
+ - Offer alternatives when pointing out issues
+ - Acknowledge good work -- positive feedback reinforces good patterns
+ - Use "we" language -- "We could improve this by..." not "You did this wrong"
+ - Ask questions when unsure -- "Is there a reason this is not using the existing helper?"
+
+ ## Responding to Review Comments
+
+ ### Address Every Comment
+
+ - Fix the issue, or explain why the current approach is intentional
+ - Never ignore a review comment without responding
+ - If you disagree, explain your reasoning -- the reviewer may have missed context
+ - If you agree but want to defer, create a follow-up issue and link it
+
+ ### Stay Professional
+
+ - Do not take feedback personally -- reviews are about code, not about you
+ - Ask for clarification if a comment is unclear
+ - Thank reviewers for catching issues -- they saved you from a production bug
+ - If a discussion gets long, move to a synchronous conversation (call, pair session)
+
+ ### Mark Resolved Comments
+
+ - After addressing a comment, mark it as resolved
+ - If the fix is in a follow-up commit, reference the commit hash
+ - Do not resolve comments that were not actually addressed
+
+ ## Integration with Our Tools
+
+ ### Automated Review with oc_review
+
+ Use `oc_review` for automated multi-agent code review. The review engine runs up to 21 specialist agents (universal + stack-gated) covering:
+
+ - Logic correctness and edge cases
+ - Security vulnerabilities and input validation
+ - Code quality and maintainability
+ - Testing completeness and test quality
+ - Performance and scalability concerns
+ - Documentation and naming
+
+ Automated review is a complement to human review, not a replacement. Use it for the mechanical checks so human reviewers can focus on architecture and design decisions.
+
+ ### Coding Standards Baseline
+
+ Use the coding-standards skill as the shared baseline for quality checks. This ensures all reviewers apply the same standards for naming, file organization, error handling, immutability, and input validation.
+
+ ### Review Workflow
+
+ The recommended workflow for any change:
+
+ 1. Self-review the diff
+ 2. Run `oc_review` for automated analysis
+ 3. Address any CRITICAL or HIGH findings from automated review
+ 4. Request human review with the context template above
+ 5. Address human review feedback
+ 6. Merge when all CRITICAL and HIGH items are resolved
+
+ ## Anti-Pattern Catalog
+
+ ### Anti-Pattern: Rubber-Stamp Reviews
+
+ **What it looks like:** Approving a PR after a cursory glance, or approving without reading the diff at all.
+
+ **Why it is harmful:** Defeats the entire purpose of code review. Bugs, security issues, and design problems ship to production uncaught.
+
+ **Instead:** Spend at least 10 minutes per 100 lines of meaningful diff. If you do not have time for a thorough review, say so and let someone else review.
+
+ ### Anti-Pattern: Style-Only Reviews
+
+ **What it looks like:** Only commenting on formatting, whitespace, and naming conventions while ignoring logic, architecture, and security.
+
+ **Why it is harmful:** Misallocates review effort. Style issues are the least impactful category and can often be caught by linters.
+
+ **Instead:** Focus on correctness and architecture first (items 1-4 in the review order). Save style comments for LOW severity nits at the end.
+
+ ### Anti-Pattern: Blocking on Nits
+
+ **What it looks like:** Requesting changes or withholding approval for trivial style preferences (single-line formatting, import order, comment wording).
+
+ **Why it is harmful:** Slows down delivery, creates frustration, and discourages submitting PRs. The cost of the delay exceeds the value of the nit fix.
+
+ **Instead:** Approve the PR with suggestions for LOW items. The author can address them in a follow-up or not -- it is their call.
+
+ ### Anti-Pattern: Drive-By Reviews
+
+ **What it looks like:** Leaving a single comment on a large PR without reviewing the rest, giving the impression the PR was reviewed.
+
+ **Why it is harmful:** Creates false confidence that the code was reviewed when it was not.
+
+ **Instead:** If you only have time for a partial review, say so explicitly: "I only reviewed the auth changes, not the database migration. Someone else should review that part."
+
+ ### Anti-Pattern: Review Ping-Pong
+
+ **What it looks like:** Reviewer leaves one comment, author fixes it, reviewer finds a new issue, author fixes that, ad infinitum.
+
+ **Why it is harmful:** Each round-trip adds latency. A thorough first review is faster than five rounds of incremental feedback.
+
+ **Instead:** Review the entire PR in one pass. Leave all comments at once. If you spot a pattern issue, note it once and add "same issue applies to lines X, Y, Z."
+
+ ## Failure Modes
+
+ - **Review takes too long:** The PR is too large. Split it into smaller PRs.
+ - **Reviewer and author disagree:** Escalate to a tech lead or use an ADR (Architecture Decision Record) for design disagreements.
+ - **Same issues keep appearing:** The team needs better shared standards. Update the coding-standards skill or add linter rules.
+ - **Reviews feel adversarial:** Revisit the team's review culture. Reviews should feel collaborative, not combative.