codingbuddy-rules 5.3.0 → 5.4.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.ai-rules/adapters/antigravity.md +2 -0
- package/.ai-rules/adapters/claude-code.md +170 -0
- package/.ai-rules/adapters/codex.md +2 -0
- package/.ai-rules/adapters/cursor.md +2 -0
- package/.ai-rules/adapters/kiro.md +2 -0
- package/.ai-rules/adapters/opencode.md +14 -8
- package/.ai-rules/adapters/q.md +2 -0
- package/.ai-rules/adapters/windsurf.md +2 -0
- package/.ai-rules/agents/README.md +6 -4
- package/.ai-rules/agents/act-mode.json +1 -1
- package/.ai-rules/agents/auto-mode.json +13 -8
- package/.ai-rules/agents/plan-mode.json +13 -8
- package/.ai-rules/rules/core.md +34 -21
- package/.ai-rules/rules/parallel-execution.md +27 -0
- package/.ai-rules/rules/pr-review-cycle.md +272 -0
- package/.ai-rules/rules/severity-classification.md +214 -0
- package/.ai-rules/skills/incident-response/severity-classification.md +17 -141
- package/.ai-rules/skills/pr-review/SKILL.md +2 -0
- package/lib/init/scaffold.js +4 -0
- package/package.json +2 -1

@@ -92,6 +92,33 @@ Implementation tasks (code changes, file modifications, commits) MUST use tmux-b

**Why:** Background sub-agents lack proper git worktree isolation, cannot run pre-push checks reliably, and risk file conflicts when multiple agents write to the same workspace. taskMaestro provides each worker with an isolated worktree, proper shell environment, and full CI toolchain access.

### Conductor vs Worker Context (MANDATORY distinction)

The "Implementation → taskMaestro only" rule applies to the **conductor** (top-level orchestration session). Workers running inside a taskMaestro pane operate under different rules because they already own an isolated worktree.

| Layer | Context | Dispatch Tool | Rationale |
|-------|---------|---------------|-----------|
| Conductor → implementation | Outer orchestration session | **taskMaestro only** | Workers need worktree isolation + pre-push checks + visual monitoring |
| Conductor → read-only research | Outer orchestration session | SubAgent allowed | No file mutations, safe to background |
| **Worker → internal tasks** | Inside a taskMaestro pane | **SubAgent encouraged** | Worker owns its worktree; file-conflict rationale does not apply. Use Explore/Plan sub-agents for parallel research or context protection. |

**Why workers can use sub-agents freely:**

1. Each worker operates in an isolated git worktree — no file conflicts with other workers
2. Sub-agents within a worker inherit the worker's worktree, so they share the same isolation boundary
3. Workers often benefit from parallel read-only research (Explore) or context-protected work before committing

**Examples of good worker-internal sub-agent usage:**

- Dispatch an `Explore` sub-agent to survey existing patterns before writing code
- Dispatch a `Plan` sub-agent to draft an implementation approach, then act on the plan directly
- Dispatch multiple read-only sub-agents in parallel to gather context from different parts of the codebase

**What workers still must NOT do:**

- Dispatch sub-agents that modify files in **sibling** worktrees (impossible with proper isolation, but worth stating)
- Dispatch sub-agents to create PRs on their behalf (the worker owns its PR)

## Monorepo Path Safety

In monorepo environments, always use absolute paths or `git -C <path>` for git commands to prevent path doubling:

@@ -0,0 +1,272 @@

# PR Review Cycle (Canonical)

Canonical protocol for the PR review cycle in conductor/worker parallel workflows and solo workflows. This file is the single source of truth — local skills, adapter docs, and custom instructions MUST reference this document instead of duplicating the protocol.

Scope: the review loop that runs **after** a worker (or solo developer) has created a PR, until the PR is approved or explicitly failed.

## Quick Reference

| Step | Who | Output |
|------|-----|--------|
| 1. PR created | Worker (or solo dev) | `status: "success"` + PR URL |
| 2. CI gate | Reviewer | Pass/fail decision (BLOCKING) |
| 3. Review | Conductor or review agent | Structured comment on PR |
| 4. Response | Worker | Fixes pushed OR dispute posted |
| 5. Re-review | Reviewer | Approve or request more changes |
| 6. Approve | Reviewer | `status: "approved"` |

Approval criteria and severity definitions: see [`severity-classification.md`](./severity-classification.md). Commit hygiene during review fixes: see [Commit Hygiene](#commit-hygiene) below.

## Trigger

The review cycle begins when a PR is created. In conductor/worker workflows this is detected through `.taskmaestro/wt-N/RESULT.json`:

| RESULT.json `status` | Meaning | Reviewer Action |
|----------------------|---------|-----------------|
| `success` | Worker finished, PR created | Start review cycle |
| `failure` / `error` | Worker could not finish | Report to user, do not enter review |
| `review_pending` | Reviewer has posted comments, waiting on worker | Wait |
| `review_addressed` | Worker has pushed fixes | Re-review |
| `approved` | Final approval | Cycle complete ✅ |

In solo workflows the trigger is the developer opening the PR; the remaining steps are identical but run in a single session.

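
The trigger table above can be sketched as a small dispatch function. This is a minimal illustration, assuming the reviewer loop is scripted in Python; only the `status` values come from the table, and the action labels are hypothetical shorthand for the "Reviewer Action" column:

```python
import json
from pathlib import Path

# Reviewer actions keyed by RESULT.json "status" (mirrors the trigger table).
# The action labels are illustrative shorthand, not part of the protocol.
ACTIONS = {
    "success": "start_review_cycle",
    "failure": "report_to_user",
    "error": "report_to_user",
    "review_pending": "wait",
    "review_addressed": "re_review",
    "approved": "cycle_complete",
}

def reviewer_action(result_path: str) -> str:
    """Read a worker's RESULT.json and return the reviewer's next action."""
    result = json.loads(Path(result_path).read_text())
    status = result.get("status")
    if status not in ACTIONS:
        raise ValueError(f"unknown RESULT.json status: {status!r}")
    return ACTIONS[status]
```

An unknown status raises rather than defaulting, so a malformed `RESULT.json` surfaces immediately instead of silently stalling the watch cycle.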

## Review Routing

Two review strategies exist. Conductor Review is the **default and primary** method.

### Conductor Review (default)

The conductor runs the review directly. Use this whenever a dedicated review pane is not configured.

**Rationale:** the conductor already holds the PLAN context for the task, so its review is grounded in the original requirements. Worker-level reviewers do not carry that context.

### Review Agent (optional, `--review-pane`)

A dedicated review pane takes over the review. Use it only when the orchestrator was started with `--review-pane` explicitly and at least three panes are available (conductor + worker + reviewer).

The review agent follows the **same protocol** as the conductor (CI gate, code quality scan, spec compliance, test coverage, structured comment). The only differences are where the result is written and how approval is propagated back to the worker — see [Review Agent Result Handling](#review-agent-result-handling).

## The Review Protocol

Every reviewer — conductor or review agent — MUST perform these steps **in this order**.

### 1. CI Gate (BLOCKING)

```bash
gh pr checks <PR_NUMBER>
```

- ALL checks must pass before proceeding to code review.
- If ANY check fails → STOP. Report the failing job and log URL. **Do NOT approve.** **Do NOT start code review.** Return the PR to the worker as a failure.

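
The CI gate reduces to a pure pass/fail decision over the check results. A sketch, assuming the caller has already parsed `gh pr checks` output into (name, conclusion) pairs; the function name and conclusion strings are illustrative:

```python
from typing import List, Tuple

def ci_gate(checks: List[Tuple[str, str]]) -> Tuple[bool, List[str]]:
    """Blocking CI gate: every check must be 'pass' before code review starts.

    `checks` is a list of (check_name, conclusion) pairs, e.g. gathered by
    parsing `gh pr checks <PR_NUMBER>` output. Returns (proceed, failing_names).
    Pending checks count as not passing, so the gate also blocks on them.
    """
    failing = [name for name, conclusion in checks if conclusion != "pass"]
    return (len(failing) == 0, failing)
```

Note that anything other than an explicit pass blocks, which matches the rule: the reviewer never starts the code review on a partially red or still-running pipeline.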

### 2. Local Verification

Check out the PR branch and run the same lint / type / test commands CI runs. This catches issues local to the reviewer's environment and flaky CI blind spots.

```bash
git fetch origin
git checkout <branch>
yarn lint
yarn type-check
```

Any local error becomes a `critical` or `high` finding — not a CI retry.

### 3. Read the Diff

```bash
gh pr diff <PR_NUMBER>
```

Optionally call the `generate_checklist` MCP tool with the list of changed files to produce a domain-specific checklist (security, accessibility, performance, etc.) before reading the diff.

### 4. Code Quality Scan

Against the diff, look for:

- Unused imports / variables
- `any` types
- Missing error handling
- Dead code
- Layer boundary violations
- Obvious performance pitfalls

Use the Code Review Severity scale from [`severity-classification.md`](./severity-classification.md) to rate findings (`critical` / `high` / `medium` / `low`).

### 5. Spec Compliance

```bash
gh issue view <ISSUE_NUMBER>
```

Compare the issue's acceptance criteria with the implementation. List every gap as a finding.

### 6. Test Coverage

- Does new non-trivial logic have tests?
- Are edge cases and error paths covered?
- Do tests actually verify behavior, not implementation?

Missing tests on new logic are rated at least `high`.

### 7. Write the Review

```bash
gh pr review <PR_NUMBER> --comment --body "<structured review>"
```

Review body format:

```markdown
## Review: [APPROVE | CHANGES_REQUESTED]
### CI Status: [PASS | FAIL]
### Issues Found:
- [critical]: <description> — <file:line>
- [high]: <description> — <file:line>
- [medium]: <description> — <file:line>
### Recommendation: [APPROVE | REQUEST_CHANGES]
```

Follow the anti-sycophancy rules in `skills/pr-review/SKILL.md`: every finding must include a location (`file:line`) and an impact statement. No empty praise.

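
The review body format above can be rendered mechanically from a list of findings. A sketch, assuming that a red CI or any undeferred `critical`/`high` finding forces CHANGES_REQUESTED (per the approval criteria); the function signature is illustrative:

```python
from typing import List, Tuple

def format_review(ci_pass: bool, findings: List[Tuple[str, str, str]]) -> str:
    """Render the structured review body shown above.

    `findings` is a list of (severity, description, location) triples,
    e.g. ("high", "missing error handling", "src/api.ts:42").
    Assumes no finding has been explicitly accepted as deferred.
    """
    blocking = any(sev in ("critical", "high") for sev, _, _ in findings)
    verdict = "CHANGES_REQUESTED" if (blocking or not ci_pass) else "APPROVE"
    lines = [
        f"## Review: {verdict}",
        f"### CI Status: {'PASS' if ci_pass else 'FAIL'}",
        "### Issues Found:",
    ]
    # One line per finding, matching the "- [severity]: description — file:line" template.
    lines += [f"- [{sev}]: {desc} — {loc}" for sev, desc, loc in findings]
    rec = "REQUEST_CHANGES" if verdict == "CHANGES_REQUESTED" else "APPROVE"
    lines.append(f"### Recommendation: {rec}")
    return "\n".join(lines)
```

Generating the skeleton this way keeps the location requirement structural: a finding without a `file:line` simply cannot be constructed.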

## Worker Response (Review Fix)

When the reviewer has posted a review with changes requested, the worker MUST:

1. Read the comments: `gh pr view <PR_NUMBER> --comments`
2. For each comment:
   - **Accept** → fix the code.
   - **Reject** → post a rebuttal comment on the PR with reasoning. Do not silently ignore.
   - Reply with `Resolved: <what you did>` so the reviewer can verify.
3. Run the full local check battery **before pushing** (see [Commit Hygiene](#commit-hygiene)).
4. Push the fixes (see [Commit Hygiene](#commit-hygiene) for the amend + force-with-lease rule).
5. Update `RESULT.json`:

```json
{
  "status": "review_addressed",
  "review_cycle": <current cycle number>
}
```

Only after `RESULT.json` is updated does the conductor's watch cycle re-trigger a review.

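
Step 5 above is a small state write that must come last, after checks pass and the push succeeds, because the conductor's watch cycle triggers on it. A sketch, assuming Python and the field names shown; the atomic-rename detail is an implementation suggestion, not part of the protocol:

```python
import json
from pathlib import Path

def mark_review_addressed(result_path: str, cycle: int) -> None:
    """Update RESULT.json after pushing review fixes (step 5 above)."""
    path = Path(result_path)
    result = json.loads(path.read_text()) if path.exists() else {}
    result["status"] = "review_addressed"
    result["review_cycle"] = cycle
    # Write to a temp file and rename, so a concurrent watcher
    # never observes a partially written RESULT.json.
    tmp = path.with_suffix(".tmp")
    tmp.write_text(json.dumps(result, indent=2))
    tmp.replace(path)
```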

## Re-Review

On `status: "review_addressed"`, the reviewer repeats the protocol from step 1 (CI gate) on the new commit. If the comments are resolved and no new `critical`/`high` findings appear, proceed to approval.

## Approval

Approval is gated on the criteria in [`severity-classification.md`](./severity-classification.md#code-review-severity). Summarized here:

- CI fully green.
- Zero `critical` findings.
- Zero `high` findings that were not explicitly accepted as deferred.
- New code has adequate tests.
- Existing tests still pass.
- Code style consistent with the codebase.

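
The checklist above collapses into one predicate. A sketch; the parameter names are illustrative, and the style-consistency criterion is omitted because it is a judgement call rather than a boolean input:

```python
def approval_gate(ci_green: bool, critical: int, high_undeferred: int,
                  tests_adequate: bool, tests_pass: bool) -> bool:
    """Approval criteria from the list above, as a single predicate.

    `high_undeferred` counts only `high` findings that were NOT
    explicitly accepted as deferred.
    """
    return (ci_green
            and critical == 0
            and high_undeferred == 0
            and tests_adequate
            and tests_pass)
```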

Issue the approval:

```bash
# When the reviewer is not the PR author
gh pr review <PR_NUMBER> --approve --body "LGTM - all review comments addressed"

# When the reviewer IS the PR author (GitHub forbids self-approval)
gh pr review <PR_NUMBER> --comment --body "✅ Review complete - all comments addressed"
```

Then update `RESULT.json` to `status: "approved"`. Only `approved` counts as *done*. `success` means "PR exists"; `approved` means "PR is mergeable".

## Max Review Cycles

Hard cap of **three** review cycles per PR to prevent infinite loops.

When the third cycle still does not reach approval:

```
⚠️ Pane N: unresolved issues after 3 review cycles.
PR: #<PR_NUMBER>
Unresolved: [list]
User decision required.
```

The conductor stops reviewing that pane and waits for user instruction. Do not silently approve to exit the loop.

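
The cap and the escalation report above can be sketched together. Illustrative only; the function name and return convention are hypothetical:

```python
from typing import List, Optional

MAX_REVIEW_CYCLES = 3  # hard cap from the rule above

def check_cycle(pane: int, pr_number: int, cycle: int,
                unresolved: List[str]) -> Optional[str]:
    """Return the escalation report once the cap is reached, else None.

    A non-None result means: stop reviewing this pane and wait for the user.
    """
    if cycle < MAX_REVIEW_CYCLES:
        return None
    lines = [
        f"⚠️ Pane {pane}: unresolved issues after {MAX_REVIEW_CYCLES} review cycles.",
        f"PR: #{pr_number}",
        f"Unresolved: {unresolved}",
        "User decision required.",
    ]
    return "\n".join(lines)
```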

## Review Agent Result Handling

When the `--review-pane` strategy is used, the review agent writes its own `RESULT.json` in its worktree:

```json
{
  "status": "success",
  "review_result": "approve | changes_requested",
  "issues_found": 3,
  "critical_count": 0,
  "high_count": 1
}
```

The conductor reads the review agent's `RESULT.json` and:

1. **`review_result: "approve"`** → update the worker's `RESULT.json` to `approved`, post the final approval comment on the PR, report to user.
2. **`review_result: "changes_requested"`** → update the worker's `RESULT.json` to `review_pending`, increment `review_cycle`, dispatch the review-fix task to the worker.
3. **Always** → delete the review agent's `RESULT.json` so the next review can trigger.

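
The conductor's branching can be sketched over in-memory dicts. Illustrative only; persisting the worker state, posting the PR comment, and deleting the review agent's `RESULT.json` (step 3) are left to the caller:

```python
from typing import Tuple

def handle_review_result(review: dict, worker: dict) -> Tuple[dict, str]:
    """Apply conductor steps 1-2 above to parsed RESULT.json contents.

    `review` is the review agent's RESULT.json, `worker` the worker's.
    Returns (updated_worker, follow_up_action); the action labels are
    hypothetical shorthand for the prose.
    """
    outcome = review.get("review_result")
    if outcome == "approve":
        worker["status"] = "approved"
        action = "post_approval_comment"
    elif outcome == "changes_requested":
        worker["status"] = "review_pending"
        worker["review_cycle"] = worker.get("review_cycle", 0) + 1
        action = "dispatch_review_fix"
    else:
        raise ValueError(f"unexpected review_result: {outcome!r}")
    return worker, action
```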

## Commit Hygiene

**Rule:** During the review fix cycle, do not pile up additional fix commits on top of the original worker commit. Amend the existing commit and force-push with lease.

```bash
git add <changed files>
git commit --amend --no-edit
git push --force-with-lease
```

**Why:** A single clean commit per PR keeps `git log` reviewable, avoids "fix review 1 / fix review 2 / fix typo" noise, and makes the PR easier to rebase. `--force-with-lease` protects against accidental overwrite if the remote has moved.

**Exception:** When the review explicitly requests splitting the change into multiple commits (e.g., "please move this refactor to its own commit"), follow the review's direction. That is a *deliberate* split, not commit noise. Document the exception in the review reply so future reviewers understand why the PR has multiple commits.

**Pre-push check battery (MANDATORY before every push, including amends):**

```bash
yarn prettier --write .
yarn lint --fix
yarn type-check
yarn test
```

All four MUST pass. A failed pre-push check is not a reason to push anyway.

## State Representation

In multi-pane orchestration, each pane's state file carries the review cycle as structured fields:

```json
{
  "index": 1,
  "role": "worker",
  "status": "review_pending",
  "review_cycle": 1,
  "pr_number": 42
}
```

Valid `status` values during the review cycle:

| Status | Meaning |
|--------|---------|
| `working` | Worker is implementing |
| `reviewing` | Review agent is running (review-pane only) |
| `review_pending` | Review comments posted, awaiting worker response |
| `review_addressed` | Worker has pushed fixes, awaiting re-review |
| `approved` | Final approval — cycle complete |
| `done` | Completed without a review cycle (e.g., failure or error) |

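
The status table above implies a transition graph that a pane-state update can be validated against. A sketch; the exact set of legal transitions is inferred from the protocol text, not stated in it:

```python
# Legal transitions between the pane statuses in the table above.
# This graph is an illustration inferred from the protocol prose.
TRANSITIONS = {
    "working": {"reviewing", "review_pending", "done"},
    "reviewing": {"review_pending", "approved"},
    "review_pending": {"review_addressed"},
    "review_addressed": {"reviewing", "review_pending", "approved"},
    "approved": set(),   # terminal
    "done": set(),       # terminal
}

def advance(status: str, new_status: str) -> str:
    """Validate a pane status transition; raise on an illegal move."""
    if new_status not in TRANSITIONS.get(status, set()):
        raise ValueError(f"illegal transition {status!r} -> {new_status!r}")
    return new_status
```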

## Related

- [`severity-classification.md`](./severity-classification.md) — canonical severity taxonomy used for approval gates
- `skills/pr-review/SKILL.md` — manual PR review dimensions, anti-sycophancy guidance, feedback tone spectrum
- `.claude/skills/taskmaestro/SKILL.md` — conductor/worker orchestration that consumes this protocol
- `adapters/claude-code.md` — Claude Code execution model for parallel review

@@ -0,0 +1,214 @@

# Severity Classification (Canonical)

Canonical severity taxonomy used across the repo. Two distinct scales — **Code Review Severity** (Critical/High/Medium/Low) and **Production Incident Severity** (P1-P4) — serve different purposes and MUST NOT be conflated.

This file is the single source of truth. Other files (skills, adapters, task protocols) MUST reference this document instead of redefining severity levels.

## When to Use Which Scale

| Context | Use This Scale | Decides |
|---------|----------------|---------|
| PR review, code review, EVAL mode, AUTO exit criteria | **Code Review Severity** (Critical/High/Medium/Low) | Whether to approve / request changes |
| Production incident, on-call alert, SLO burn rate | **Production Incident Severity** (P1/P2/P3/P4) | Response time, pager, war room |

Code review and production incidents are **not** the same problem. A `critical` code review finding blocks a PR; a `P1` incident pages the on-call engineer. Do not map `critical` to `P1` on the PR side or vice versa — see [Mapping Table](#mapping-between-scales) below for the narrow cases where they correspond.

## Code Review Severity

Used in PR review cycles, EVAL mode, AUTO exit criteria, and worker review protocols.

### critical

**Meaning:** The change introduces a defect that blocks approval. A `critical` finding requires changes before merge — no exceptions.

**Criteria (ANY of these):**

- Hardcoded secrets / credentials in source
- SQL injection, XSS, CSRF, or other exploitable vulnerability
- Missing authentication on a protected route
- Missing authorization check on a resource access path
- Data loss, corruption, or irreversible state change risk
- Build or CI completely broken
- Runtime exception on happy path
- Production-breaking regression introduced

**Action:** Request changes. Block merge. Approval requires fix + re-review.

### high

**Meaning:** A defect that should be fixed before merge but does not block approval if the author has a documented reason to defer.

**Criteria (ANY of these):**

- Missing error handling on a code path that can realistically fail
- Missing test for new non-trivial logic
- Clear off-by-one, null dereference, or unhandled edge case
- `any` type used where a concrete type is available
- Layer boundary violation or dependency direction reversed
- Obvious performance pitfall (N+1 query, blocking the main thread)
- Significant code duplication that invites drift

**Action:** Request changes, or approve with an explicit follow-up ticket. Multiple `high` findings together should block merge.

### medium

**Meaning:** Quality concerns that are worth addressing but do not gate the merge.

**Criteria (ANY of these):**

- Complexity above the project's target (e.g., cyclomatic > 10, function > 20 lines)
- Inconsistent naming or minor API shape awkwardness
- Missing documentation on a public API
- Accessibility issue that does not block core interaction
- Non-critical test gap (e.g., edge case test missing for stable behavior)

**Action:** Approve with comments, or request changes if the author has time.

### low

**Meaning:** Style, polish, or future-facing suggestions with no correctness impact.

**Criteria (ANY of these):**

- Style nit beyond what lint enforces
- Suggested refactor for readability
- Minor comment wording
- Opportunistic cleanup that could be its own PR

**Action:** Leave as a `Noting` or `Suggesting` comment (per `skills/pr-review/SKILL.md` feedback tone spectrum). Do not block merge.

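
The four "Action" lines above can be kept as a lookup table so tooling and prose stay in sync. A sketch; the action labels are hypothetical shorthand for the prose:

```python
# Default reviewer action per Code Review Severity level, condensed from
# the "Action" lines above. Labels are illustrative shorthand.
DEFAULT_ACTION = {
    "critical": "request_changes",         # block merge, no exceptions
    "high": "request_changes_or_defer",    # deferral needs a documented ticket
    "medium": "approve_with_comments",
    "low": "comment_only",                 # Noting/Suggesting, never blocks
}

def action_for(severity: str) -> str:
    """Look up the default reviewer action for a finding severity."""
    try:
        return DEFAULT_ACTION[severity]
    except KeyError:
        raise ValueError(f"unknown severity: {severity!r}") from None
```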

## Production Incident Severity

Used when a production incident is declared. Based on SLO burn rate and user impact, not code quality.

### P1 - Critical

**SLO Burn Rate:** >14.4x (consuming >2% error budget per hour)

**Impact Criteria (ANY of these):**

- Complete service outage
- `>50%` of users affected
- Critical business function unavailable
- Data loss or corruption risk
- Active security breach
- Revenue-generating flow completely blocked
- Compliance/regulatory violation in progress

**Response Expectations:**

| Metric | Target |
|--------|--------|
| Acknowledge | Within 5 minutes |
| First update | Within 15 minutes |
| War room formed | Within 15 minutes |
| Executive notification | Within 30 minutes |
| Customer communication | Within 1 hour |
| Update cadence | Every 15 minutes |

**Escalation:** Immediate page to on-call, all hands if needed.

### P2 - High

**SLO Burn Rate:** >6x (consuming >5% error budget per 6 hours)

**Impact Criteria (ANY of these):**

- Major feature unavailable
- 10-50% of users affected
- Significant performance degradation (>5x latency)
- Secondary business function blocked
- Partial data integrity issues
- Key integration failing

**Response Expectations:**

| Metric | Target |
|--------|--------|
| Acknowledge | Within 15 minutes |
| First update | Within 30 minutes |
| Status page update | Within 30 minutes |
| Stakeholder notification | Within 1 hour |
| Update cadence | Every 30 minutes |

**Escalation:** Page on-call during business hours, notify team lead.

### P3 - Medium

**SLO Burn Rate:** >3x (consuming >10% error budget per 24 hours)

**Impact Criteria (ANY of these):**

- Minor feature impacted
- `<10%` of users affected
- Workaround available
- Non-critical function degraded
- Cosmetic issues affecting usability
- Performance slightly degraded

**Response Expectations:**

| Metric | Target |
|--------|--------|
| Acknowledge | Within 1 hour |
| First update | Within 2 hours |
| Resolution target | Within 8 business hours |
| Update cadence | At milestones |

**Escalation:** Create ticket, notify team channel.

### P4 - Low

**SLO Burn Rate:** >1x (projected budget exhaustion within SLO window)

**Impact Criteria (ALL of these):**

- Minimal or no user impact
- Edge case or rare scenario
- Cosmetic only
- Performance within acceptable range
- Workaround trivial

**Response Expectations:**

| Metric | Target |
|--------|--------|
| Acknowledge | Within 1 business day |
| Resolution target | Next sprint/release |
| Update cadence | On resolution |

**Escalation:** Backlog item, routine prioritization.

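
The burn-rate thresholds above are consistent with a 30-day SLO window: at 14.4x burn, 14.4/720 = 2% of the error budget is consumed per hour, matching the P1 figure (and 6x over 6 hours consumes 5%, 3x over 24 hours consumes 10%). A sketch of that arithmetic; the 30-day window is an assumption and the function names are illustrative:

```python
def burn_rate(error_rate: float, slo_target: float) -> float:
    """Burn rate = observed error rate / allowed error rate (1 - SLO target)."""
    return error_rate / (1.0 - slo_target)

def budget_consumed_per_hour(rate: float, window_hours: float = 30 * 24) -> float:
    """Fraction of the error budget consumed per hour at a given burn rate.

    Assumes a 30-day SLO window (720 hours) by default.
    """
    return rate / window_hours

def classify(rate: float) -> str:
    """Map a burn rate to P1-P4 using the thresholds above."""
    if rate > 14.4:
        return "P1"
    if rate > 6:
        return "P2"
    if rate > 3:
        return "P3"
    if rate > 1:
        return "P4"
    return "within budget"
```

For example, a 1.44% error rate against a 99.9% SLO is a 14.4x burn, right at the P1 boundary.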

### When Uncertain, Classify Higher

Between two levels, pick the more severe one. Over-response is cheaper than under-response; you can always downgrade. Communicate any severity change to stakeholders immediately.

### Incident-Specific Details

For SLO burn rate math, classification decision tree, and incident report templates, see `skills/incident-response/severity-classification.md`, which narrows this canonical scale to production-incident-specific operational guidance.

## Mapping Between Scales

The two scales answer different questions, but a handful of situations connect them. Use this table only when a PR review finding must influence — or be informed by — an incident.

| Code Review Severity | Rough Production Equivalent | When the Mapping Applies |
|----------------------|-----------------------------|--------------------------|
| `critical` | P1-P2 | A `critical` PR finding that already shipped to production and is causing user impact becomes an incident at P1 or P2 (impact decides, not the code severity) |
| `high` | P2-P3 | A `high` PR finding that shipped can become an incident once user impact is observed |
| `medium` | P3-P4 | A `medium` finding rarely becomes an incident on its own; if it does, it is usually P3 or P4 |
| `low` | (not applicable) | `low` findings are stylistic and do not map to production incidents |

**Direction of the mapping matters:**

- **PR → Production:** A `critical` code review finding does not automatically imply a P1 incident. Incident severity is decided by real user impact and SLO burn rate, not by the code reviewer's judgement.
- **Production → PR:** After an incident, the PR(s) that introduced the regression should be reviewed using the Code Review Severity scale during the postmortem. The incident severity does not set the PR severity directly.

## Usage in Other Documents

- `rules/pr-review-cycle.md` — uses Code Review Severity for approval gates
- `skills/pr-review/SKILL.md` — uses Code Review Severity for priority dimensions and decision matrix
- `skills/incident-response/severity-classification.md` — narrows Production Incident Severity to operational detail (decision tree, burn rate math, report template)
- `adapters/claude-code.md` — references Code Review Severity for EVAL / AUTO exit criteria
- `rules/core.md` EVAL section — prioritizes improvements by Code Review Severity

When adding a new document that mentions severity, link to this file instead of redefining the levels.