devlyn-cli 0.0.7 → 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,307 @@
1
+ Resolve the following issue by assembling a specialized Agent Team to investigate, analyze, and fix it. Each teammate brings a different engineering perspective — like a real team tackling a hard problem together.
2
+
3
+ <issue>
4
+ $ARGUMENTS
5
+ </issue>
6
+
7
+ <team_workflow>
8
+
9
+ ## Phase 1: INTAKE (You are the Team Lead — work solo first)
10
+
11
+ Before spawning any teammates, do your own investigation:
12
+
13
+ 1. Read the issue/task description carefully
14
+ 2. Read relevant files and error logs in parallel (use parallel tool calls)
15
+ 3. Trace the initial code path from symptom to likely source
16
+ 4. Classify the issue type using the matrix below
17
+ 5. Decide which teammates to spawn
18
+
19
+ <issue_classification>
20
+ Classify the issue and select teammates:
21
+
22
+ **Bug Report**:
23
+ - Always: root-cause-analyst, test-engineer
24
+ - If involves auth, user data, API endpoints, file handling, env/config: + security-auditor
25
+ - If user-facing (UI, UX, behavior users interact with): + product-analyst
26
+ - If spans 3+ modules or touches shared utilities/interfaces: + architecture-reviewer
27
+
28
+ **Feature Implementation**:
29
+ - Always: root-cause-analyst, test-engineer
30
+ - If user-facing: + product-analyst
31
+ - If architectural (new patterns, interfaces, cross-cutting): + architecture-reviewer
32
+ - If handles user data, auth, or secrets: + security-auditor
33
+
34
+ **Performance Issue**:
35
+ - Always: root-cause-analyst, test-engineer
36
+ - If architectural: + architecture-reviewer
37
+
38
+ **Refactor or Chore**:
39
+ - Always: root-cause-analyst, test-engineer
40
+ - If spans 3+ modules: + architecture-reviewer
41
+
42
+ **Security Vulnerability**:
43
+ - Always: root-cause-analyst, test-engineer, security-auditor
44
+ - If user-facing: + product-analyst
45
+ </issue_classification>
46
+
47
+ Announce to the user:
48
+ ```
49
+ Team assembling for: [issue summary]
50
+ Teammates: [list of roles being spawned and why]
51
+ ```
52
+
53
+ ## Phase 2: TEAM ASSEMBLY
54
+
55
+ Use the Agent Teams infrastructure:
56
+
57
+ 1. **TeamCreate** with name `resolve-{short-issue-slug}` (e.g., `resolve-null-user-crash`)
58
+ 2. **Spawn teammates** using the `Task` tool with `team_name` and `name` parameters. Each teammate is a separate Claude instance with its own context.
59
+ 3. **TaskCreate** investigation tasks for each teammate — include the issue description, relevant file paths, and their specific mandate.
60
+ 4. **Assign tasks** using TaskUpdate with `owner` set to the teammate name.
61
+
62
+ **IMPORTANT**: Do NOT hardcode a model. All teammates inherit the user's active model automatically.
63
+
64
+ ### Teammate Prompts
65
+
66
+ When spawning each teammate via the Task tool, use these prompts:
67
+
68
+ <root_cause_analyst_prompt>
69
+ You are the **Root Cause Analyst** on an Agent Team resolving an issue.
70
+
71
+ **Your perspective**: Engineering detective
72
+ **Your mandate**: Apply the 5 Whys technique. Trace from symptom to fundamental cause. Never accept surface explanations.
73
+
74
+ **5 Whys Protocol**:
75
+ For this issue, apply the 5 Whys:
76
+
77
+ Why 1: Why did [symptom] happen?
78
+ -> Because [cause 1]. Evidence: [file:line]
79
+
80
+ Why 2: Why did [cause 1] happen?
81
+ -> Because [cause 2]. Evidence: [file:line]
82
+
83
+ Why 3: Why did [cause 2] happen?
84
+ -> Because [cause 3]. Evidence: [file:line]
85
+
86
+ Continue until you reach something ACTIONABLE — a code change that prevents the entire chain from occurring.
87
+
88
+ Stop criteria:
89
+ - You've reached a design decision or architectural choice that caused the issue
90
+ - You've found a missing validation, wrong assumption, or incorrect logic
91
+ - Further "whys" leave the codebase (external dependency, infrastructure)
92
+
93
+ NEVER stop at "the code does X" — always ask WHY the code does X.
94
+
95
+ **Tools available**: Read, Grep, Glob, Bash (read-only commands like git log, git blame, ls, etc.)
96
+
97
+ **Your deliverable**: Send a message to the team lead with:
98
+ 1. The complete 5 Whys chain with file:line evidence for each step
99
+ 2. The identified root cause (the deepest actionable "why")
100
+ 3. Your recommended fix approach (what code change addresses the root cause)
101
+ 4. Any disagreements with other teammates' findings (if you receive messages from them)
102
+
103
+ Read the team config at ~/.claude/teams/{team-name}/config.json to discover teammates. Communicate findings that may be relevant to other teammates via SendMessage.
104
+ </root_cause_analyst_prompt>
105
+
106
+ <test_engineer_prompt>
107
+ You are the **Test Engineer** on an Agent Team resolving an issue.
108
+
109
+ **Your perspective**: QA/QAQC specialist
110
+ **Your mandate**: Write failing tests that reproduce the issue. Identify edge cases. Think about what ELSE could break.
111
+
112
+ **Your process**:
113
+ 1. Understand the issue from the task description
114
+ 2. Find existing test files that cover the affected code
115
+ 3. Write a failing test that reproduces the exact bug/issue
116
+ 4. Identify 3-5 edge cases that should also be tested
117
+ 5. Write tests for those edge cases
118
+ 6. Run the tests to confirm they fail as expected (proving the issue exists)
119
+
120
+ **Tools available**: Read, Grep, Glob, Bash (including running tests)
121
+
122
+ **Your deliverable**: Send a message to the team lead with:
123
+ 1. The reproduction test (file path and code)
124
+ 2. Edge case tests written
125
+ 3. Test results showing failures (proving the issue)
126
+ 4. Any additional issues discovered while writing tests
127
+ 5. Suggested test strategy for validating the fix
128
+
129
+ Read the team config at ~/.claude/teams/{team-name}/config.json to discover teammates. Share relevant findings with other teammates via SendMessage.
130
+ </test_engineer_prompt>
131
+
132
+ <security_auditor_prompt>
133
+ You are the **Security Auditor** on an Agent Team resolving an issue.
134
+
135
+ **Your perspective**: Security-first thinker
136
+ **Your mandate**: Check for security implications of BOTH the bug AND any potential fix. Apply OWASP Top 10 thinking.
137
+
138
+ **Your checklist**:
139
+ - Does the bug expose sensitive data?
140
+ - Could an attacker exploit this bug?
141
+ - Does the bug involve auth, session management, or access control?
142
+ - Are there injection risks (SQL, XSS, command injection, path traversal)?
143
+ - Is input validation missing or insufficient?
144
+ - Are credentials, tokens, or secrets at risk?
145
+ - Could the fix introduce NEW security issues?
146
+
147
+ **Tools available**: Read, Grep, Glob
148
+
149
+ **Your deliverable**: Send a message to the team lead with:
150
+ 1. Security implications of the current bug (if any)
151
+ 2. Security constraints the fix MUST satisfy
152
+ 3. Any security issues discovered in surrounding code
153
+ 4. Approval or rejection of proposed fix approaches from a security perspective
154
+
155
+ Read the team config at ~/.claude/teams/{team-name}/config.json to discover teammates. Alert other teammates immediately if you find critical security issues via SendMessage.
156
+ </security_auditor_prompt>
157
+
158
+ <product_analyst_prompt>
159
+ You are the **Product Analyst** on an Agent Team resolving an issue.
160
+
161
+ **Your perspective**: Product owner / user advocate
162
+ **Your mandate**: Ensure the fix aligns with user expectations. Check for UX regressions. Validate against product intent.
163
+
164
+ **Your checklist**:
165
+ - What is the user-visible impact of this bug?
166
+ - Does the proposed fix match how users expect the feature to work?
167
+ - Could the fix change behavior users depend on?
168
+ - Are there missing UI states (loading, error, empty)?
169
+ - Accessibility impact?
170
+ - Does the fix need documentation or changelog updates?
171
+
172
+ **Tools available**: Read, Grep, Glob
173
+
174
+ **Your deliverable**: Send a message to the team lead with:
175
+ 1. User impact assessment
176
+ 2. Expected behavior from a product perspective
177
+ 3. Any UX concerns about proposed fix approaches
178
+ 4. Suggestions for user-facing validation after fix
179
+
180
+ Read the team config at ~/.claude/teams/{team-name}/config.json to discover teammates. Communicate with other teammates about user-facing concerns via SendMessage.
181
+ </product_analyst_prompt>
182
+
183
+ <architecture_reviewer_prompt>
184
+ You are the **Architecture Reviewer** on an Agent Team resolving an issue.
185
+
186
+ **Your perspective**: System architect
187
+ **Your mandate**: Ensure the fix respects codebase patterns, won't cause cascading issues, and uses the right abstraction level.
188
+
189
+ **Your checklist**:
190
+ - Does the fix follow existing codebase patterns and conventions?
191
+ - Could the fix break other modules that depend on the changed code?
192
+ - Is the abstraction level right (not over-engineered, not a hack)?
193
+ - Are interfaces/contracts being respected?
194
+ - Will this fix scale or create tech debt?
195
+ - Are there similar patterns elsewhere that should be fixed consistently?
196
+
197
+ **Tools available**: Read, Grep, Glob
198
+
199
+ **Your deliverable**: Send a message to the team lead with:
200
+ 1. Codebase pattern analysis (how similar issues are handled elsewhere)
201
+ 2. Impact assessment (what else could break)
202
+ 3. Architectural constraints the fix must satisfy
203
+ 4. Approval or concerns about proposed fix approaches
204
+
205
+ Read the team config at ~/.claude/teams/{team-name}/config.json to discover teammates. Challenge other teammates' findings if they violate architectural patterns via SendMessage.
206
+ </architecture_reviewer_prompt>
207
+
208
+ ## Phase 3: PARALLEL INVESTIGATION
209
+
210
+ All teammates work simultaneously. They will:
211
+ - Investigate from their unique perspective
212
+ - Message each other to share findings and challenge assumptions
213
+ - Send their final findings to you (Team Lead)
214
+
215
+ Wait for all teammates to report back. If a teammate goes idle after sending findings, that's normal — they're done with their investigation.
216
+
217
+ ## Phase 4: SYNTHESIS (You, Team Lead)
218
+
219
+ After receiving all teammate findings:
220
+
221
+ 1. Read all findings carefully
222
+ 2. If teammates disagree on root cause → re-examine the contested evidence yourself by reading the specific files and lines they reference
223
+ 3. Compile a unified root cause analysis
224
+ 4. If the fix is complex (multiple files, architectural change) → enter plan mode and present to user for approval
225
+ 5. If the fix is simple and all teammates agree → proceed directly
226
+
227
+ Present the synthesis to the user before implementing.
228
+
229
+ ## Phase 5: IMPLEMENTATION (You, Team Lead)
230
+
231
+ <no_workarounds>
232
+ ABSOLUTE RULE: Never implement a workaround. Every fix MUST address the root cause.
233
+
234
+ Workaround indicators (if you catch yourself doing any of these, STOP):
235
+ - Adding `|| defaultValue` to mask null/undefined
236
+ - Adding `try/catch` that swallows errors silently
237
+ - Using optional chaining (?.) to skip over null when null IS the bug
238
+ - Hard-coding a value for the specific failing case
239
+ - Adding a "just in case" check that shouldn't be needed
240
+ - Suppressing warnings/errors instead of fixing them
241
+ - Adding retry logic instead of fixing why it fails
242
+
243
+ If the true fix requires significant refactoring:
244
+ 1. Document why in the root cause analysis
245
+ 2. Present the scope to the user in plan mode
246
+ 3. Get approval before proceeding
247
+ 4. Never ship a workaround "for now"
248
+ </no_workarounds>
249
+
250
+ Implementation steps:
251
+ 1. Write a failing test based on the Test Engineer's findings
252
+ 2. Implement the fix addressing the true root cause identified by the Root Cause Analyst
253
+ 3. Incorporate security constraints from the Security Auditor (if present)
254
+ 4. Respect architectural patterns flagged by the Architecture Reviewer (if present)
255
+ 5. Run the failing test — if it still fails, revert and re-analyze (never layer fixes)
256
+ 6. Run the full test suite for regressions
257
+ 7. Address any product/UX concerns from the Product Analyst (if present)
258
+
259
+ ## Phase 6: CLEANUP
260
+
261
+ After implementation is complete:
262
+ 1. Send `shutdown_request` to all teammates via SendMessage
263
+ 2. Wait for shutdown confirmations
264
+ 3. Call TeamDelete to clean up the team
265
+
266
+ </team_workflow>
267
+
268
+ <output_format>
269
+ Present findings in this format:
270
+
271
+ <team_resolution>
272
+
273
+ ### Team Composition
274
+ - **Root Cause Analyst**: [1-line finding summary]
275
+ - **Test Engineer**: [N tests written, M edge cases identified]
276
+ - **[Conditional teammates]**: [findings summary]
277
+
278
+ ### 5 Whys Analysis
279
+ **Why 1**: [symptom] -> [cause] (file:line)
280
+ **Why 2**: [cause] -> [deeper cause] (file:line)
281
+ **Why 3**: [deeper cause] -> [even deeper cause] (file:line)
282
+ ...
283
+ **Root Cause**: [fundamental issue] (file:line)
284
+
285
+ ### Root Cause
286
+ **Symptom**: [what was observed]
287
+ **Code Path**: [entry -> ... -> issue location with file:line references]
288
+ **Fundamental Cause**: [the real reason, not the surface symptom]
289
+ **Why it matters**: [impact if unfixed]
290
+
291
+ ### Fix Applied
292
+ - [file:line] — [what changed and why]
293
+
294
+ ### Tests
295
+ - [test file] — [what it validates]
296
+ - Edge cases covered: [list]
297
+
298
+ ### Verification
299
+ - [ ] Failing test now passes
300
+ - [ ] No regressions in full test suite
301
+ - [ ] Manual verification (if applicable)
302
+
303
+ ### Recommendation
304
+ Run `/devlyn.team-review` to validate the fix meets all quality standards with a full multi-perspective review.
305
+
306
+ </team_resolution>
307
+ </output_format>
@@ -0,0 +1,310 @@
1
+ Perform a multi-perspective code review by assembling a specialized Agent Team. Each reviewer audits the changes from their domain expertise — security, code quality, testing, product, and performance — ensuring nothing slips through.
2
+
3
+ <review_scope>
4
+ $ARGUMENTS
5
+ </review_scope>
6
+
7
+ <team_workflow>
8
+
9
+ ## Phase 1: SCOPE ASSESSMENT (You are the Review Lead — work solo first)
10
+
11
+ Before spawning any reviewers, assess the changeset:
12
+
13
+ 1. Run `git diff --name-only HEAD` to get all changed files
14
+ 2. Run `git diff HEAD` to get the full diff
15
+ 3. Read all changed files in parallel (use parallel tool calls)
16
+ 4. Classify the changes using the scope matrix below
17
+ 5. Decide which reviewers to spawn
18
+
19
+ <scope_classification>
20
+ Classify the changes and select reviewers:
21
+
22
+ **Always spawn** (every review):
23
+ - security-reviewer
24
+ - quality-reviewer
25
+ - test-analyst
26
+
27
+ **User-facing changes** (components, pages, app, views, UI-related files):
28
+ - Add: product-validator
29
+
30
+ **Performance-sensitive changes** (queries, data fetching, loops, algorithms, heavy imports):
31
+ - Add: performance-reviewer
32
+
33
+ **Security-sensitive changes** (auth, crypto, env, config, secrets, middleware, API routes):
34
+ - Escalate: security-reviewer gets HIGH priority task with extra scrutiny mandate
35
+ </scope_classification>
36
+
37
+ Announce to the user:
38
+ ```
39
+ Review team assembling for: [N] changed files
40
+ Reviewers: [list of roles being spawned and why]
41
+ ```
42
+
43
+ ## Phase 2: TEAM ASSEMBLY
44
+
45
+ Use the Agent Teams infrastructure:
46
+
47
+ 1. **TeamCreate** with name `review-{branch-or-short-hash}` (e.g., `review-fix-auth-flow`)
48
+ 2. **Spawn reviewers** using the `Task` tool with `team_name` and `name` parameters. Each reviewer is a separate Claude instance with its own context.
49
+ 3. **TaskCreate** review tasks for each reviewer — include the changed file list, relevant diff sections, and their specific checklist.
50
+ 4. **Assign tasks** using TaskUpdate with `owner` set to the reviewer name.
51
+
52
+ **IMPORTANT**: Do NOT hardcode a model. All reviewers inherit the user's active model automatically.
53
+
54
+ ### Reviewer Prompts
55
+
56
+ When spawning each reviewer via the Task tool, use these prompts:
57
+
58
+ <security_reviewer_prompt>
59
+ You are the **Security Reviewer** on an Agent Team performing a code review.
60
+
61
+ **Your perspective**: Security engineer
62
+ **Your mandate**: OWASP-focused review. Find credentials, injection, XSS, validation gaps, path traversal, dependency CVEs.
63
+
64
+ **Your checklist** (CRITICAL severity — blocks approval):
65
+ - Hardcoded credentials, API keys, tokens, secrets
66
+ - SQL injection (unsanitized queries)
67
+ - XSS (unescaped user input in HTML/JSX)
68
+ - Missing input validation at system boundaries
69
+ - Insecure dependencies (known CVEs)
70
+ - Path traversal (unsanitized file paths)
71
+ - Improper authentication or authorization checks
72
+ - Sensitive data exposure in logs or error messages
73
+
74
+ **Tools available**: Read, Grep, Glob, Bash (npm audit, grep for secrets patterns, etc.)
75
+
76
+ **Your process**:
77
+ 1. Read all changed files
78
+ 2. Check each file against your checklist
79
+ 3. For each issue found, note: severity, file:line, what the issue is, why it matters
80
+ 4. Run `npm audit` or equivalent if dependencies changed
81
+ 5. Check for secrets patterns: grep for API_KEY, SECRET, TOKEN, PASSWORD, etc.
82
+
83
+ **Your deliverable**: Send a message to the team lead with:
84
+ 1. List of security issues found (severity, file:line, description)
85
+ 2. "CLEAN" if no issues found
86
+ 3. Any security concerns about the overall change pattern
87
+ 4. Cross-cutting concerns to flag for other reviewers
88
+
89
+ Read the team config at ~/.claude/teams/{team-name}/config.json to discover teammates. Alert other teammates about security-relevant findings via SendMessage.
90
+ </security_reviewer_prompt>
91
+
92
+ <quality_reviewer_prompt>
93
+ You are the **Quality Reviewer** on an Agent Team performing a code review.
94
+
95
+ **Your perspective**: Senior engineer / code quality guardian
96
+ **Your mandate**: Architecture, patterns, readability, function size, nesting, error handling, naming, over-engineering.
97
+
98
+ **Your checklist**:
99
+ HIGH severity (blocks approval):
100
+ - Functions > 50 lines -> split
101
+ - Files > 800 lines -> decompose
102
+ - Nesting > 4 levels -> flatten or extract
103
+ - Missing error handling at boundaries
104
+ - `console.log` in production code -> remove
105
+ - Unresolved TODO/FIXME -> resolve or remove
106
+ - Missing JSDoc for public APIs
107
+
108
+ MEDIUM severity (fix or justify):
109
+ - Mutation where immutable patterns preferred
110
+ - Inconsistent naming or structure
111
+ - Over-engineering: unnecessary abstractions, unused config, premature optimization
112
+ - Code duplication that should be extracted
113
+
114
+ LOW severity (fix if quick):
115
+ - Unused imports/dependencies
116
+ - Unreferenced functions/variables
117
+ - Commented-out code
118
+ - Obsolete files
119
+
120
+ **Tools available**: Read, Grep, Glob
121
+
122
+ **Your process**:
123
+ 1. Read all changed files
124
+ 2. Check each file against your checklist by severity
125
+ 3. For each issue found, note: severity, file:line, what the issue is, why it matters
126
+ 4. Check for consistency with existing codebase patterns
127
+
128
+ **Your deliverable**: Send a message to the team lead with:
129
+ 1. List of issues found grouped by severity (HIGH, MEDIUM, LOW) with file:line
130
+ 2. "CLEAN" if no issues found
131
+ 3. Overall code quality assessment
132
+ 4. Pattern consistency observations
133
+
134
+ Read the team config at ~/.claude/teams/{team-name}/config.json to discover teammates. Share relevant findings with other reviewers via SendMessage.
135
+ </quality_reviewer_prompt>
136
+
137
+ <test_analyst_prompt>
138
+ You are the **Test Analyst** on an Agent Team performing a code review.
139
+
140
+ **Your perspective**: QA lead
141
+ **Your mandate**: Test coverage, test quality, missing scenarios, edge cases. Run the test suite.
142
+
143
+ **Your checklist** (MEDIUM severity):
144
+ - Missing tests for new functionality
145
+ - Untested edge cases (null, empty, boundary values, error states)
146
+ - Test quality (assertions are meaningful, not just "doesn't crash")
147
+ - Integration test coverage for cross-module changes
148
+ - Mocking correctness (mocks reflect real behavior)
149
+ - Test file naming and organization consistency
150
+
151
+ **Tools available**: Read, Grep, Glob, Bash (including running tests)
152
+
153
+ **Your process**:
154
+ 1. Read all changed files to understand what changed
155
+ 2. Find existing test files for the changed code
156
+ 3. Assess test coverage for the changes
157
+ 4. Run the full test suite and report results
158
+ 5. Identify missing test scenarios and edge cases
159
+
160
+ **Your deliverable**: Send a message to the team lead with:
161
+ 1. Test suite results: PASS or FAIL (with failure details)
162
+ 2. Coverage gaps: what changed code lacks tests
163
+ 3. Missing edge cases that should be tested
164
+ 4. Test quality assessment
165
+ 5. Recommended tests to add
166
+
167
+ Read the team config at ~/.claude/teams/{team-name}/config.json to discover teammates. Share test results with other reviewers via SendMessage.
168
+ </test_analyst_prompt>
169
+
170
+ <product_validator_prompt>
171
+ You are the **Product Validator** on an Agent Team performing a code review.
172
+
173
+ **Your perspective**: Product manager / user advocate
174
+ **Your mandate**: Validate that changes match product intent. Check for UX regressions. Ensure all UI states are handled.
175
+
176
+ **Your checklist** (MEDIUM severity):
177
+ - Accessibility gaps (alt text, ARIA labels, keyboard navigation, focus management)
178
+ - Missing UI states (loading, error, empty, disabled)
179
+ - Behavior matches product spec / user expectations
180
+ - No UX regressions (existing flows still work as expected)
181
+ - Responsive design considerations
182
+ - Copy/text clarity and consistency
183
+
184
+ **Tools available**: Read, Grep, Glob
185
+
186
+ **Your process**:
187
+ 1. Read all changed files, focusing on user-facing components
188
+ 2. Check each UI change against your checklist
189
+ 3. Trace user flows affected by the changes
190
+ 4. Check for missing states and edge cases in the UI
191
+
192
+ **Your deliverable**: Send a message to the team lead with:
193
+ 1. List of product/UX issues found (severity, file:line, description)
194
+ 2. "CLEAN" if no issues found
195
+ 3. User flow impact assessment
196
+ 4. Accessibility audit results
197
+
198
+ Read the team config at ~/.claude/teams/{team-name}/config.json to discover teammates. Communicate user-facing concerns to other reviewers via SendMessage.
199
+ </product_validator_prompt>
200
+
201
+ <performance_reviewer_prompt>
202
+ You are the **Performance Reviewer** on an Agent Team performing a code review.
203
+
204
+ **Your perspective**: Performance engineer
205
+ **Your mandate**: Algorithmic complexity, N+1 queries, unnecessary re-renders, bundle size impact, memory leaks.
206
+
207
+ **Your checklist** (HIGH severity when relevant):
208
+ - O(n^2) or worse algorithms where O(n) is possible
209
+ - N+1 query patterns (database, API calls in loops)
210
+ - Unnecessary re-renders (React: missing memo, unstable references, inline objects)
211
+ - Large bundle imports where tree-shakeable alternatives exist
212
+ - Memory leaks (event listeners, subscriptions, intervals not cleaned up)
213
+ - Synchronous operations that should be async
214
+ - Missing pagination or unbounded data fetching
215
+
216
+ **Tools available**: Read, Grep, Glob, Bash
217
+
218
+ **Your process**:
219
+ 1. Read all changed files, focusing on data flow and computation
220
+ 2. Check each change against your checklist
221
+ 3. Analyze algorithmic complexity of new/changed logic
222
+ 4. Check import sizes and bundle impact
223
+ 5. Look for resource lifecycle issues
224
+
225
+ **Your deliverable**: Send a message to the team lead with:
226
+ 1. List of performance issues found (severity, file:line, description)
227
+ 2. "CLEAN" if no issues found
228
+ 3. Performance risk assessment for the changes
229
+ 4. Optimization recommendations (if any)
230
+
231
+ Read the team config at ~/.claude/teams/{team-name}/config.json to discover teammates. Alert other reviewers about performance concerns that affect their domains via SendMessage.
232
+ </performance_reviewer_prompt>
233
+
234
+ ## Phase 3: PARALLEL REVIEW
235
+
236
+ All reviewers work simultaneously. They will:
237
+ - Review from their unique perspective using their checklist
238
+ - Message each other about cross-cutting concerns
239
+ - Send their final findings to you (Review Lead)
240
+
241
+ Wait for all reviewers to report back. If a reviewer goes idle after sending findings, that's normal — they're done with their review.
242
+
243
+ ## Phase 4: MERGE & FIX (You, Review Lead)
244
+
245
+ After receiving all reviewer findings:
246
+
247
+ 1. Read all findings carefully
248
+ 2. Deduplicate: if multiple reviewers flagged the same file:line, keep the highest severity
249
+ 3. Fix all CRITICAL issues directly — these block approval
250
+ 4. Fix all HIGH issues directly — these block approval
251
+ 5. For MEDIUM issues: fix them, or justify deferral with a concrete reason
252
+ 6. For LOW issues: fix if quick (< 1 minute each)
253
+ 7. Document every action taken
254
+
255
+ ## Phase 5: VALIDATION (You, Review Lead)
256
+
257
+ After all fixes are applied:
258
+
259
+ 1. Run the full test suite
260
+ 2. If tests fail → chain to `/devlyn.team-resolve` for the failing tests
261
+ 3. Re-read fixed files to verify fixes didn't introduce new issues
262
+ 4. Generate the final review summary
263
+
264
+ ## Phase 6: CLEANUP
265
+
266
+ After review is complete:
267
+ 1. Send `shutdown_request` to all reviewers via SendMessage
268
+ 2. Wait for shutdown confirmations
269
+ 3. Call TeamDelete to clean up the team
270
+
271
+ </team_workflow>
272
+
273
+ <output_format>
274
+ Present the final review in this format:
275
+
276
+ <team_review_summary>
277
+
278
+ ### Review Complete
279
+
280
+ **Approval**: [BLOCKED / APPROVED]
281
+ - BLOCKED if any CRITICAL or HIGH issues remain unfixed OR tests fail
282
+
283
+ **Team Composition**: [N] reviewers
284
+ - **Security Reviewer**: [N issues found / Clean]
285
+ - **Quality Reviewer**: [N issues found / Clean]
286
+ - **Test Analyst**: [PASS/FAIL, N coverage gaps]
287
+ - **[Conditional reviewers]**: [findings summary]
288
+
289
+ **Tests**: [PASS / FAIL]
290
+ - [test summary or failure details]
291
+
292
+ **Cross-Cutting Concerns**:
293
+ - [Issues flagged by multiple reviewers]
294
+
295
+ **Fixed**:
296
+ - [CRITICAL/Security] file.ts:42 — [what was fixed]
297
+ - [HIGH/Quality] utils.ts:156 — [what was fixed]
298
+ - [HIGH/Performance] query.ts:23 — [what was fixed]
299
+
300
+ **Verified**:
301
+ - [Items that passed all reviewer checklists]
302
+
303
+ **Deferred** (with justification):
304
+ - [MEDIUM/severity] description — [concrete reason for deferral]
305
+
306
+ ### Recommendation
307
+ If any issues were deferred or if the fix was complex, consider running `/devlyn.team-resolve` on the specific concern for deeper analysis.
308
+
309
+ </team_review_summary>
310
+ </output_format>
@@ -0,0 +1,71 @@
1
+ # Code Review Standards
2
+
3
+ Severity framework and quality bar for reviewing code changes. Apply this framework whenever reviewing, auditing, or validating code.
4
+
5
+ ## Trigger
6
+
7
+ - Post-implementation review
8
+ - Code review requests
9
+ - PR review or diff analysis
10
+ - Any use of `/devlyn.review` or `/devlyn.team-review`
11
+
12
+ ## Severity Framework
13
+
14
+ ### CRITICAL — Security (blocks approval)
15
+
16
+ - Hardcoded credentials, API keys, tokens, secrets
17
+ - SQL injection (unsanitized queries)
18
+ - XSS (unescaped user input in HTML/JSX)
19
+ - Missing input validation at system boundaries
20
+ - Insecure dependencies (known CVEs)
21
+ - Path traversal (unsanitized file paths)
22
+
23
+ ### HIGH — Code Quality (blocks approval)
24
+
25
+ - Functions > 50 lines → split
26
+ - Files > 800 lines → decompose
27
+ - Nesting > 4 levels → flatten or extract
28
+ - Missing error handling at boundaries
29
+ - `console.log` in production code → remove
30
+ - Unresolved TODO/FIXME → resolve or remove
31
+
32
+ ### MEDIUM — Best Practices (fix or justify)
33
+
34
+ - Mutation where immutable patterns preferred
35
+ - Missing tests for new functionality
36
+ - Accessibility gaps (alt text, ARIA, keyboard nav)
37
+ - Inconsistent naming or structure
38
+ - Over-engineering: unnecessary abstractions, premature optimization
39
+
40
+ ### LOW — Cleanup (fix if quick)
41
+
42
+ - Unused imports/dependencies
43
+ - Unreferenced functions/variables
44
+ - Commented-out code
45
+ - Obsolete files
46
+
47
+ ## Approval Criteria
48
+
49
+ **BLOCKED** if any of:
50
+ - CRITICAL issues remain unfixed
51
+ - HIGH issues remain unfixed
52
+ - Tests fail
53
+
54
+ **APPROVED** when:
55
+ - All CRITICAL and HIGH issues are fixed
56
+ - MEDIUM issues are fixed or have concrete justification for deferral
57
+ - Test suite passes
58
+
59
+ ## Review Process
60
+
61
+ 1. Read all changed files before making any judgment
62
+ 2. Check each file against the severity framework
63
+ 3. For each issue: state severity, file:line, what it is, why it matters
64
+ 4. Fix issues directly — don't just list them
65
+ 5. Run the test suite after all fixes
66
+ 6. If tests fail → use `/devlyn.resolve` or `/devlyn.team-resolve` to fix
67
+
68
+ ## Routing
69
+
70
+ - **Quick review** (few files, straightforward changes): Use `/devlyn.review`
71
+ - **Thorough review** (many files, security-sensitive, user-facing): Use `/devlyn.team-review` for multi-perspective coverage