maxsimcli 4.0.1 → 4.0.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -1,3 +1,10 @@
+ ## [4.0.1](https://github.com/maystudios/maxsimcli/compare/v4.0.0...v4.0.1) (2026-03-02)
+
+
+ ### Bug Fixes
+
+ * **mcp:** bundle MCP server dependencies and update README with skills/MCP docs ([7724966](https://github.com/maystudios/maxsimcli/commit/772496696167d8723873b6645826327472824560))
+
  # [4.0.0](https://github.com/maystudios/maxsimcli/compare/v3.12.0...v4.0.0) (2026-03-02)


@@ -1,123 +1,76 @@
  ---
  name: batch-worktree
- description: Orchestrate parallel work across isolated git worktrees with independent PRs
+ description: >-
+ Decomposes large tasks into independent units and executes each in an isolated
+ git worktree with its own branch and PR. Use when parallelizing work across
+ 5-30 independent units or orchestrating worktree-based parallel execution. Not
+ for sequential dependencies or fewer than 3 units.
  ---

  # Batch Worktree

  Decompose large tasks into independent units, execute each in an isolated worktree, and produce one PR per unit.

- **If units share overlapping files, you cannot parallelize them. Serialize or redesign.**
-
- ## When to Use
-
- - The task is decomposable into 5-30 independent units
- - Each unit can be implemented, tested, and merged independently
- - Units touch non-overlapping files (no merge conflicts between units)
- - You want parallel execution with isolated git state per unit
-
- Do NOT use this skill when:
- - Units have sequential dependencies (use SDD instead)
- - The task has fewer than 3 units (overhead is not worth it)
- - Units modify the same files (merge conflicts will block you)
-
- ## The Iron Law
-
- <HARD-GATE>
- EVERY UNIT MUST BE INDEPENDENTLY MERGEABLE.
- If merging unit A would break the build without unit B, they are not independent — combine them or serialize them.
- No exceptions. No "we'll merge them in order." No "it'll probably be fine."
- Violating this rule produces unmergeable PRs — wasted work.
- </HARD-GATE>
+ **HARD GATE: Every unit must be independently mergeable. If merging unit A would break the build without unit B, they are not independent. Combine them or serialize them. No exceptions.**

  ## Process

- ### 1. DECOMPOSE — Split Task into Independent Units
-
- - List all units with a clear one-line description each
- - For each unit, list the files it will create or modify
- - Verify NO file appears in more than one unit
- - If overlap exists: merge the overlapping units into one, or extract shared code into a prerequisite unit that runs first
-
- ```bash
- # Document the decomposition
- node .claude/maxsim/bin/maxsim-tools.cjs state-add-decision "Batch decomposition: N units identified, no file overlap confirmed"
- ```
+ ### 1. Research -- Analyze and Decompose

- ### 2. VALIDATE — Confirm Independence
+ List all units with a one-line description each. For each unit, list the files it will create or modify. Verify no file appears in more than one unit. If overlap exists, merge the overlapping units into one or extract shared code into a prerequisite unit that runs first.

- For each pair of units, verify:
+ For each pair of units, confirm:
  - No shared file modifications
- - No runtime dependency (unit A's output is not unit B's input)
+ - No runtime dependency (unit A output is not unit B input)
  - Each unit's tests pass without the other unit's changes

- If validation fails: redesign the decomposition. Do not proceed with overlapping units.
+ If validation fails, redesign the decomposition before proceeding.

- ### 3. SPAWN — Create Worktree Per Unit
+ ### 2. Plan -- Define Unit Specifications

- For each unit, create an isolated worktree and spawn an agent:
+ For each unit, prepare a specification containing:
+ - Unit description and acceptance criteria
+ - The list of files it owns (and only those files)
+ - The base branch to branch from
+ - Instructions to implement, test, commit, push, and create a PR

- ```bash
- # Create worktree branch for each unit
- git worktree add .claude/worktrees/unit-NN unit/NN-description -b unit/NN-description
- ```
+ Record the decomposition decision:

- Spawn one agent per unit with `isolation: "worktree"`. Each agent receives:
- - The unit description and acceptance criteria
- - The list of files it owns (and ONLY those files)
- - The base branch to branch from
- - Instructions to: implement, test, commit, push, create PR
+ ```
+ node .claude/maxsim/bin/maxsim-tools.cjs state-add-decision "Batch decomposition: N units identified, no file overlap confirmed"
+ ```

- ### 4. EXECUTE — Each Agent Works Independently
+ ### 3. Spawn -- Create Worktree Per Unit

- Each spawned agent follows this sequence:
- 1. Read the unit description and relevant source files
- 2. Implement the changes (apply TDD or simplify skills as configured)
- 3. Run tests — all must pass
- 4. Commit with a descriptive message referencing the unit
- 5. Push the branch
- 6. Create a PR with unit description as the body
+ For each unit, create an isolated worktree and spawn an agent with `isolation: "worktree"`. Each agent receives its unit specification and works independently through: read relevant source, implement changes, run tests, commit, push, create PR.

- ### 5. TRACK — Monitor Progress
+ ### 4. Track -- Monitor Progress

  Maintain a status table and update it as agents report back:

- ```markdown
  | # | Unit | Status | PR |
  |---|------|--------|----|
  | 1 | description | done | #123 |
- | 2 | description | in-progress | |
- | 3 | description | failed | |
- ```
+ | 2 | description | in-progress | -- |
+ | 3 | description | failed | -- |

  Statuses: `pending`, `in-progress`, `done`, `failed`, `needs-review`

- ### 6. REPORT — Collect Results
-
- When all units complete:
- - List all created PRs
- - Flag any failed units with error summaries
- - If any unit failed: spawn a fix agent for that unit only
-
- ## Handling Failures
+ Failure handling:
+ - Unit fails tests: spawn a fix agent in the same worktree
+ - Merge conflict: decomposition was wrong, fix overlap and re-run unit
+ - Agent times out: re-spawn with the same unit description
+ - 3+ failures on same unit: stop and escalate to user

- | Situation | Action |
- |-----------|--------|
- | Unit fails tests | Spawn a fix agent in the same worktree |
- | Unit has merge conflict | The decomposition was wrong — fix overlap, re-run unit |
- | Agent times out | Re-spawn with the same unit description |
- | 3+ failures on same unit | Stop and escalate to user — likely an architectural issue |
+ When all units complete, list all created PRs and flag any failed units with error summaries. If any unit failed, spawn a fix agent for that unit only.

- ## Common Rationalizations — REJECT THESE
+ ## Common Pitfalls

- | Excuse | Why It Violates the Rule |
- |--------|--------------------------|
- | "The overlap is minor" | Minor overlap = merge conflicts. Split the shared code into a prerequisite unit. |
- | "We'll merge in the right order" | Order-dependent merges are not independent. Serialize those units. |
- | "It's faster to do them all in one branch" | One branch means one context window. Worktrees give each unit fresh context. |
- | "Only 2 units, let's still use worktrees" | Worktree overhead is not worth it for <3 units. Use sequential execution. |
+ - "The overlap is minor" -- Minor overlap causes merge conflicts. Split shared code into a prerequisite unit.
+ - "We'll merge in the right order" -- Order-dependent merges are not independent. Serialize those units.
+ - "Only 2 units, let's still use worktrees" -- Worktree overhead is not worth it for fewer than 3 units. Use sequential execution.

- ## Verification Checklist
+ ## Verification

  Before reporting completion, confirm:

@@ -128,7 +81,7 @@ Before reporting completion, confirm:
  - [ ] No PR depends on another PR being merged first
  - [ ] Status table is complete with all PR links

- ## In MAXSIM Plan Execution
+ ## MAXSIM Integration

  When a plan specifies `skill: "batch-worktree"`:
  - The orchestrator decomposes the plan's tasks into independent units
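Editor's note: the worktree-per-unit flow this hunk describes can be sketched with plain git. This is a minimal illustration in a throwaway repository, not package content; the unit names and paths are hypothetical.

```shell
#!/usr/bin/env sh
# Sketch: one isolated worktree and branch per unit, in a throwaway repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email agent@example.com
git config user.name agent
git commit -q --allow-empty -m "baseline"

# Hypothetical unit names; in practice each comes from the decomposition step.
for unit in 01-auth 02-billing; do
  # -b creates the unit branch; the worktree gives it isolated git state.
  git worktree add -q -b "unit/$unit" ".claude/worktrees/unit-$unit" main
done

git worktree list   # main checkout plus one worktree per unit
```

After a unit's PR merges, `git worktree remove <path>` cleans up its checkout without touching the other units.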
@@ -1,159 +1,89 @@
  ---
  name: brainstorming
- description: Use before implementing any significant feature or design — requires exploring multiple approaches and getting explicit design approval before writing code
+ description: >-
+ Explores multiple implementation approaches with trade-off analysis before
+ committing to a design direction. Use when starting a significant feature,
+ making architectural decisions, or choosing between design alternatives.
  ---

  # Brainstorming

  The first idea is rarely the best idea. Explore the space before committing to a direction.

- **If you have not considered alternatives, you are building the first thing that came to mind.**
+ **HARD GATE** -- No implementation without design approval. If you have not presented approaches, discussed trade-offs, and received explicit user approval, you cannot write implementation code. This is not a judgment call.

- ## The Iron Law
+ ## Process

- <HARD-GATE>
- NO IMPLEMENTATION WITHOUT DESIGN APPROVAL.
- If you have not presented approaches, discussed trade-offs, and received explicit approval, you CANNOT write implementation code.
- "I already know the best approach" is an assumption, not a conclusion.
- Violating this rule is a violation — not a judgment call.
- </HARD-GATE>
+ ### 1. Frame the Problem

- ## The Gate Function
-
- Follow these steps IN ORDER before implementing any significant feature, architecture change, or design decision.
-
- ### 1. FRAME — Define the Problem
-
- Ask the user ONE question at a time to understand the problem space. Do not bundle multiple questions — each response informs the next question.
+ Ask the user ONE question at a time to understand the problem space. Each answer informs the next question.

  - What is the goal? What does success look like?
  - What are the constraints (performance, compatibility, timeline)?
- - What has already been tried or considered?
+ - What has been tried or considered already?
  - What are the non-negotiables vs. nice-to-haves?

- **Rule: ONE question at a time. Wait for the answer before asking the next.**
-
- ### 2. RESEARCH — Understand the Context
-
- Before proposing solutions, gather evidence:
+ ### 2. Research Context

- - Read the relevant code and understand current architecture
- - Check `.planning/` for existing decisions and constraints
- - Review ROADMAP.md for phase dependencies and scope
- - Identify related patterns already in the codebase
+ Before proposing solutions, gather evidence from the codebase and any existing planning artifacts. Read relevant code, check for prior decisions, and identify patterns already in use.

- ```bash
- # Check existing decisions
- node ~/.claude/maxsim/bin/maxsim-tools.cjs state read --raw
-
- # Check current roadmap context
- node ~/.claude/maxsim/bin/maxsim-tools.cjs roadmap read --raw
- ```
-
- ### 3. PROPOSE — Present 2-3 Approaches
+ ### 3. Present 2-3 Approaches

  For each approach, provide:

- | Aspect | What to Include |
- |--------|----------------|
- | **Summary** | One-sentence description of the approach |
- | **How it works** | Key implementation steps (3-5 bullets) |
- | **Pros** | Concrete advantages — not vague ("simpler" is vague, "200 fewer lines" is concrete) |
- | **Cons** | Honest drawbacks — do not hide weaknesses to sell a preferred option |
- | **Effort** | Relative complexity (low / medium / high) |
- | **Risk** | What could go wrong and how recoverable is it |
-
- **Present exactly 2-3 approaches.** One option is not brainstorming. Four or more creates decision paralysis.
-
- If one approach is clearly superior, say so — but still present alternatives so the user can validate your reasoning.
+ | Aspect | Content |
+ |--------|---------|
+ | **Summary** | One sentence |
+ | **How it works** | 3-5 implementation bullets |
+ | **Pros** | Concrete advantages (not vague -- "200 fewer lines" beats "simpler") |
+ | **Cons** | Honest drawbacks -- do not hide weaknesses |
+ | **Effort** | Low / Medium / High |
+ | **Risk** | What could go wrong and how recoverable |

- ### 4. DISCUSS — Refine with the User
+ Present exactly 2-3 approaches. If one is clearly superior, say so -- but still present alternatives so the user can validate your reasoning.

- - Ask the user which approach they prefer (or if they want a hybrid)
- - Answer follow-up questions honestly — do not advocate for a single approach
- - If the user raises concerns, address them specifically
- - If no approach fits, propose new ones informed by the discussion
+ ### 4. Discuss and Refine

- **Continue ONE question at a time. Do not assume consensus until stated.**
+ Ask the user which approach they prefer or whether they want a hybrid. Answer follow-up questions honestly. If no approach fits, propose new ones informed by the discussion. Continue one question at a time -- do not assume consensus.

- ### 5. DECIDE — Get Explicit Approval
+ ### 5. Get Explicit Approval

- The user must explicitly approve the chosen approach. Acceptable approvals:
+ The user must explicitly approve one approach (e.g., "Go with A", "Approved", "Ship it"). Vague responses like "Sounds good" or "Interesting" are not approval. If ambiguous, ask: "To confirm -- should I proceed with [specific approach]?"

- - "Go with approach A"
- - "Let's do option 2"
- - "Approved" / "LGTM" / "Ship it"
+ ### 6. Document the Decision

- Not acceptable as approval:
+ Record the chosen approach, rejected alternatives with reasons, key implementation decisions, and risks. Use MAXSIM state tooling if available.

- - "Sounds good" (too vague — clarify which approach)
- - "Interesting" (not a decision)
- - Silence (not consent)
+ ### 7. Implement the Approved Design

- **If approval is ambiguous, ask: "To confirm — should I proceed with [specific approach]?"**
+ Only after steps 1-6. Follow the approved design. If implementation reveals a design flaw, stop and return to step 4.

- ### 6. DOCUMENT — Record the Decision
+ ## Common Pitfalls

- After approval, write a design doc and record the decision:
-
- ```bash
- # Record the decision in STATE.md
- node ~/.claude/maxsim/bin/maxsim-tools.cjs add-decision \
- --phase "current-phase" \
- --summary "Chose approach X for [feature] because [reason]" \
- --rationale "Evaluated 3 approaches: A (rejected — too complex), B (rejected — performance risk), C (approved — best trade-off of simplicity and extensibility)"
- ```
-
- The design doc should include:
- - **Chosen approach** and why
- - **Rejected alternatives** and why they were rejected
- - **Key implementation decisions** that flow from the choice
- - **Risks** and mitigation strategies
-
- ### 7. IMPLEMENT — Build the Approved Design
-
- Only after steps 1-6 are complete:
- - Follow the approved design — do not deviate without re-discussion
- - If implementation reveals a flaw in the design, STOP and return to step 4
- - Reference the design doc in commit messages
-
- ## Common Rationalizations — REJECT THESE
-
- | Excuse | Why It Violates the Rule |
- |--------|--------------------------|
- | "I already know the best approach" | You know YOUR preferred approach. Alternatives may be better. |
- | "There's only one way to do this" | There is almost never only one way. You have not looked hard enough. |
- | "The user won't care about the design" | Users care about the outcome. Bad design leads to bad outcomes. |
+ | Excuse | Reality |
+ |--------|---------|
+ | "I already know the best approach" | You know your preferred approach. Alternatives may be better. |
+ | "There's only one way to do this" | There is almost never only one way. |
  | "Brainstorming slows us down" | Building the wrong thing is slower. 30 minutes of design saves days of rework. |
- | "I'll refactor if the first approach is wrong" | Refactoring is expensive. Choosing well upfront is cheaper. |
- | "The scope is too small for brainstorming" | If it touches architecture, it needs brainstorming regardless of size. |
-
- ## Red Flags — STOP If You Catch Yourself:

- - Writing implementation code before presenting approaches to the user
- - Presenting only one approach and calling it "brainstorming"
- - Asking multiple questions at once instead of one at a time
- - Assuming approval without an explicit statement
- - Skipping the documentation step because "we'll remember"
- - Deviating from the approved design without discussion
+ Stop immediately if you catch yourself: writing code before presenting approaches, presenting only one option, asking multiple questions at once, assuming approval without explicit confirmation, or skipping documentation.

- **If any red flag triggers: STOP. Return to the appropriate step.**
-
- ## Verification Checklist
+ ## Verification

  Before starting implementation, confirm:

  - [ ] Problem has been framed with user input (not assumptions)
  - [ ] Relevant code and context have been researched
- - [ ] 2-3 approaches have been presented with concrete trade-offs
+ - [ ] 2-3 approaches presented with concrete trade-offs
  - [ ] User has explicitly approved one specific approach
- - [ ] Decision has been recorded in STATE.md
+ - [ ] Decision has been recorded
  - [ ] Design doc captures chosen approach, rejected alternatives, and risks

- ## In MAXSIM Plan Execution
+ ## MAXSIM Integration
+
+ Brainstorming applies before significant implementation work within MAXSIM workflows:

- Brainstorming applies before significant implementation work:
  - Use during phase planning when design choices affect multiple tasks
- - Use before any task that introduces new architecture, patterns, or external dependencies
- - The decision record in STATE.md persists across sessions — future agents inherit context
- - If a brainstorming session spans multiple interactions, record partial progress in STATE.md blockers
+ - Use before any task introducing new architecture, patterns, or external dependencies
+ - Decision records in STATE.md persist across sessions -- future agents inherit context
+ - If a session spans multiple interactions, record partial progress in STATE.md blockers
@@ -1,40 +1,28 @@
  ---
  name: code-review
- description: Use after completing a phase or significant implementation — requires reviewing all changed code for critical issues before sign-off
+ description: >-
+ Reviews all changed code for security vulnerabilities, interface correctness,
+ error handling, test coverage, and quality before sign-off. Use when completing
+ a phase, reviewing implementation, or before approving changes for merge.
  ---

  # Code Review

  Shipping unreviewed code is shipping unknown risk. Review before sign-off.

- **If you have not reviewed every changed file, you cannot approve the phase.**
+ **HARD GATE: NO PHASE SIGN-OFF WITHOUT REVIEWING ALL CHANGED CODE.** If every diff introduced in this phase has not been read, the phase cannot be marked complete. Passing tests do not prove code quality.

- ## The Iron Law
+ ## Process

- <HARD-GATE>
- NO PHASE SIGN-OFF WITHOUT REVIEWING ALL CHANGED CODE.
- If you have not read every diff introduced in this phase, you CANNOT mark it complete.
- "It works" is not "it's correct." Passing tests do not prove code quality.
- Violating this rule is a violation — not a shortcut.
- </HARD-GATE>
+ Follow these steps in order before approving any phase or significant implementation.

- ## The Gate Function
+ ### 1. SCOPE -- Identify All Changes

- Follow these steps IN ORDER before approving any phase or significant implementation.
+ - Diff against the phase starting point to see every changed file
+ - List all new, modified, and deleted files
+ - Do not skip generated files, config changes, or minor edits

- ### 1. SCOPE — Identify All Changes
-
- - Run `git diff` against the phase's starting point to see every changed file
- - List all new files, modified files, and deleted files
- - Do NOT skip generated files, config changes, or "minor" edits
-
- ```bash
- # Example: see all changes since phase branch point
- git diff --stat main...HEAD
- git diff main...HEAD
- ```
-
- ### 2. SECURITY — Check for Vulnerabilities
+ ### 2. SECURITY -- Check for Vulnerabilities

  Review every changed file for:

@@ -48,75 +36,47 @@ Review every changed file for:

  **Any security issue is a blocking finding. No exceptions.**

- ### 3. INTERFACES — Verify API Contracts
+ ### 3. INTERFACES -- Verify API Contracts

  - Do public function signatures match their documentation?
  - Are return types accurate and complete?
  - Do error types cover all failure modes?
  - Are breaking changes documented and intentional?
- - Do exported interfaces maintain backward compatibility (or is the break intentional)?
+ - Do exported interfaces maintain backward compatibility?

- ### 4. ERROR HANDLING — Check Failure Paths
+ ### 4. ERROR HANDLING -- Check Failure Paths

- - Are all external calls (I/O, network, user input) wrapped in error handling?
+ - Are all external calls wrapped in error handling?
  - Do error messages provide enough context to diagnose the issue?
  - Are errors propagated correctly (not swallowed silently)?
  - Are edge cases handled (empty input, null values, boundary conditions)?

- ### 5. TESTS — Evaluate Coverage
+ ### 5. TESTS -- Evaluate Coverage

  - Does every new public function have corresponding tests?
  - Do tests cover both success and failure paths?
- - Are edge cases tested (empty, null, boundary, error conditions)?
+ - Are edge cases tested?
  - Do tests verify behavior, not implementation details?

- ### 6. QUALITY — Assess Maintainability
+ ### 6. QUALITY -- Assess Maintainability

- - Is naming consistent with the existing codebase conventions?
- - Are there code duplication opportunities that should be extracted?
+ - Is naming consistent with existing codebase conventions?
+ - Are there duplication opportunities that should be extracted?
  - Is the complexity justified by the requirements?
- - Are comments present where logic is non-obvious (and absent where code is self-evident)?
-
- ## Critical Issues — Block Phase Sign-Off
-
- These categories MUST be resolved before the phase can be marked complete:
-
- | Severity | Category | Example |
- |----------|----------|---------|
- | **Blocker** | Security vulnerability | SQL injection, XSS, hardcoded secrets |
- | **Blocker** | Broken interface | Public API returns wrong type, missing required field |
- | **Blocker** | Missing error handling | Unhandled promise rejection, swallowed exceptions on I/O |
- | **Blocker** | Data loss risk | Destructive operation without confirmation, missing transaction |
- | **High** | Performance regression | O(n^2) where O(n) is trivial, unbounded memory allocation |
- | **High** | Missing critical tests | No tests for error paths, no tests for new public API |
- | **Medium** | Naming inconsistency | Convention mismatch with existing codebase |
- | **Medium** | Dead code | Unreachable branches, unused imports, commented-out code |
-
- **Blocker and High severity issues block sign-off. Medium issues should be filed for follow-up.**
+ - Are comments present where logic is non-obvious?

- ## Common Rationalizations — REJECT THESE
+ ## Common Pitfalls

- | Excuse | Why It Violates the Rule |
- |--------|--------------------------|
+ | Issue | Reality |
+ |-------|---------|
  | "Tests pass, so the code is fine" | Tests verify behavior, not code quality. Review is separate. |
  | "I wrote it, so I know it's correct" | Author bias is real. Review as if someone else wrote it. |
- | "It's just a small change" | Small changes cause large outages. Review proportional effort, not zero effort. |
- | "We'll clean it up later" | "Later" accumulates. Fix blockers now, file medium issues. |
- | "The deadline is tight" | Shipping broken code costs more time than reviewing. |
+ | "It's just a small change" | Small changes cause large outages. |
  | "Generated code doesn't need review" | Generated code has the same bugs. Review it. |

- ## Red Flags — STOP If You Catch Yourself:
-
- - Skipping files because they "look fine" from the diff stat
- - Approving without reading the actual code changes
- - Ignoring a gut feeling that something is wrong
- - Rushing through review to meet a deadline
- - Assuming tests cover everything without checking
- - Skipping error handling review because "the happy path works"
-
- **If any red flag triggers: STOP. Go back to step 1 (SCOPE) and review properly.**
+ Stop if you catch yourself skipping files because they "look fine," approving without reading actual code, or rushing through review to meet a deadline.

- ## Verification Checklist
+ ## Verification

  Before signing off on a phase, confirm:

@@ -128,9 +88,21 @@ Before signing off on a phase, confirm:
  - [ ] Naming and style are consistent with codebase conventions
  - [ ] No blocker or high severity issues remain open

- ## Review Output Format
+ ### Severity Reference
+
+ | Severity | Category | Example |
+ |----------|----------|---------|
+ | Blocker | Security vulnerability | SQL injection, XSS, hardcoded secrets |
+ | Blocker | Broken interface | Public API returns wrong type |
+ | Blocker | Data loss risk | Destructive operation without confirmation |
+ | High | Performance regression | O(n^2) where O(n) is trivial |
+ | High | Missing critical tests | No tests for error paths or new public API |
+ | Medium | Naming inconsistency | Convention mismatch with existing codebase |
+ | Medium | Dead code | Unreachable branches, unused imports |
+
+ Blocker and High severity issues block sign-off. Medium issues should be filed for follow-up.

- Produce a review summary for phase documentation:
+ ### Review Output Format

  ```
  REVIEW SCOPE: [number] files changed, [number] additions, [number] deletions

@@ -142,7 +114,7 @@ QUALITY: PASS | ISSUES FOUND (list)
  VERDICT: APPROVED | BLOCKED (list blocking issues)
  ```

- ## In MAXSIM Plan Execution
+ ## MAXSIM Integration

  Code review applies at phase boundaries:
  - After all tasks in a phase are complete, run this review before marking the phase done
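Editor's note: the new SCOPE step drops the old `git diff` example. For reference, the scoping commands can be sketched in a throwaway repository; the branch and file names below are illustrative, not package content, and the sketch assumes the phase branched from `main`.

```shell
#!/usr/bin/env sh
# Minimal demo of the SCOPE step in a throwaway repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email review@example.com
git config user.name reviewer
echo "v1" > app.txt; git add app.txt; git commit -qm "baseline"
git checkout -qb phase-1
echo "v2" > app.txt; echo "new" > util.txt
git add .; git commit -qm "phase work"

# Every changed file since the phase branch point belongs in the review scope.
# The triple-dot form diffs HEAD against the merge base with main.
git diff --stat main...HEAD          # summary of the review scope
git diff --name-status main...HEAD   # M app.txt, A util.txt
```

`--name-status` gives the new/modified/deleted listing the SCOPE step asks for; the full `git diff main...HEAD` output is what the reviewer actually reads.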