sdlc-framework 1.0.0

Files changed (53)
  1. package/LICENSE +21 -0
  2. package/README.md +321 -0
  3. package/bin/install.js +193 -0
  4. package/package.json +39 -0
  5. package/src/commands/close.md +200 -0
  6. package/src/commands/debug.md +124 -0
  7. package/src/commands/fast.md +149 -0
  8. package/src/commands/fix.md +104 -0
  9. package/src/commands/help.md +144 -0
  10. package/src/commands/hotfix.md +99 -0
  11. package/src/commands/impl.md +142 -0
  12. package/src/commands/init.md +93 -0
  13. package/src/commands/milestone.md +136 -0
  14. package/src/commands/pause.md +115 -0
  15. package/src/commands/research.md +136 -0
  16. package/src/commands/resume.md +103 -0
  17. package/src/commands/review.md +195 -0
  18. package/src/commands/spec.md +164 -0
  19. package/src/commands/status.md +118 -0
  20. package/src/commands/verify.md +153 -0
  21. package/src/references/clarification-strategy.md +352 -0
  22. package/src/references/engineering-laws.md +374 -0
  23. package/src/references/loop-phases.md +331 -0
  24. package/src/references/playwright-testing.md +298 -0
  25. package/src/references/prompt-detection.md +264 -0
  26. package/src/references/sub-agent-strategy.md +260 -0
  27. package/src/rules/commands.md +180 -0
  28. package/src/rules/style.md +354 -0
  29. package/src/rules/templates.md +238 -0
  30. package/src/rules/workflows.md +314 -0
  31. package/src/templates/HANDOFF.md +121 -0
  32. package/src/templates/LAWS.md +521 -0
  33. package/src/templates/PROJECT.md +112 -0
  34. package/src/templates/REVIEW.md +145 -0
  35. package/src/templates/ROADMAP.md +101 -0
  36. package/src/templates/SPEC.md +231 -0
  37. package/src/templates/STATE.md +106 -0
  38. package/src/templates/SUMMARY.md +126 -0
  39. package/src/workflows/close-phase.md +189 -0
  40. package/src/workflows/debug-flow.md +302 -0
  41. package/src/workflows/fast-forward.md +340 -0
  42. package/src/workflows/fix-findings.md +235 -0
  43. package/src/workflows/hotfix-flow.md +190 -0
  44. package/src/workflows/impl-phase.md +229 -0
  45. package/src/workflows/init-project.md +249 -0
  46. package/src/workflows/milestone-management.md +169 -0
  47. package/src/workflows/pause-work.md +153 -0
  48. package/src/workflows/research.md +219 -0
  49. package/src/workflows/resume-project.md +159 -0
  50. package/src/workflows/review-phase.md +337 -0
  51. package/src/workflows/spec-phase.md +379 -0
  52. package/src/workflows/transition-phase.md +203 -0
  53. package/src/workflows/verify-phase.md +280 -0
package/src/templates/SUMMARY.md
@@ -0,0 +1,126 @@
# Loop Closure Summary Template

This template defines the summary document produced during `/sdlc:close`. It is stored at `.sdlc/milestones/M{{N}}/phases/P{{N}}/summaries/SUMMARY-{{NNN}}.md`. Each summary corresponds to one completed spec and records what was built, whether it met acceptance criteria, and what was learned.

---

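The storage path can be resolved mechanically. A minimal sketch in JavaScript, assuming the `{{NNN}}` counter is zero-padded to three digits (an assumption for illustration; the template itself does not specify the padding):

```javascript
// Build the summary path for milestone m, phase p, summary number n.
// Zero-padding NNN to three digits is an assumption, not stated by the template.
function summaryPath(m, p, n) {
  const nnn = String(n).padStart(3, "0");
  return `.sdlc/milestones/M${m}/phases/P${p}/summaries/SUMMARY-${nnn}.md`;
}
```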
```markdown
# Summary: {{SPEC_TITLE}}

## Reference

- **Phase:** {{PHASE_NAME}}
- **Plan:** {{PLAN_NUMBER}}/{{TOTAL_PLANS}}
- **Spec:** {{SPEC_FILE_PATH}}
- **Type:** {{feature/bugfix/refactor/test/docs}}
- **Started:** {{DATE}}
- **Completed:** {{DATE}}

## Deliverables

What was built or changed in this loop iteration:

1. {{DELIVERABLE_DESCRIPTION — what it is and what it does}}
2. {{DELIVERABLE_DESCRIPTION}}
3. {{DELIVERABLE_DESCRIPTION}}

## Acceptance Criteria Results

| AC | Description | Status | Evidence |
|----|-------------|--------|----------|
| AC-1 | {{CRITERION_NAME}} | {{pass/fail}} | {{HOW_IT_WAS_VERIFIED — test name, screenshot path, manual check description}} |
| AC-2 | {{CRITERION_NAME}} | {{pass/fail}} | {{EVIDENCE}} |
| AC-3 | {{CRITERION_NAME}} | {{pass/fail}} | {{EVIDENCE}} |

### Failed Criteria

If any criteria failed, document what went wrong and what was done about it:

| AC | Failure Reason | Resolution |
|----|----------------|------------|
| {{AC_ID}} | {{WHY_IT_FAILED}} | {{WHAT_WAS_DONE — fixed and re-verified / deferred to next plan / accepted as known limitation}} |

## Deviations

Differences between the spec and what was actually built:

| # | Planned | Actual | Reason |
|---|---------|--------|--------|
| DEV-001 | {{WHAT_THE_SPEC_SAID}} | {{WHAT_WAS_ACTUALLY_DONE}} | {{WHY_THE_DEVIATION_WAS_NECESSARY}} |

If no deviations: "Implementation matched spec exactly."

## Decisions Made

Decisions made during implementation that were not covered by the spec:

| # | Decision | Context | Impact |
|---|----------|---------|--------|
| D-001 | {{WHAT_WAS_DECIDED}} | {{WHY_THIS_DECISION_CAME_UP}} | {{HOW_IT_AFFECTS_FUTURE_WORK}} |

## Engineering Laws Review

Summary of the code review against engineering laws:

| Law | Status | Findings |
|-----|--------|----------|
| SOLID | {{pass/warning/violation}} | {{BRIEF_SUMMARY or "No issues found"}} |
| DRY | {{pass/warning/violation}} | {{BRIEF_SUMMARY or "No issues found"}} |
| YAGNI | {{pass/warning/violation}} | {{BRIEF_SUMMARY or "No issues found"}} |
| CLEAN_CODE | {{pass/warning/violation}} | {{BRIEF_SUMMARY or "No issues found"}} |
| SECURITY | {{pass/warning/violation}} | {{BRIEF_SUMMARY or "No issues found"}} |
| TESTING | {{pass/warning/violation}} | {{BRIEF_SUMMARY or "No issues found"}} |
| NAMING | {{pass/warning/violation}} | {{BRIEF_SUMMARY or "No issues found"}} |
| ERROR_HANDLING | {{pass/warning/violation}} | {{BRIEF_SUMMARY or "No issues found"}} |

**Review file:** {{REVIEW_FILE_PATH}}

## Files Modified

| File | Action | Description |
|------|--------|-------------|
| {{FILE_PATH}} | {{created/modified/deleted}} | {{WHAT_CHANGED_AND_WHY}} |

## Lessons Learned

Observations for future iterations:

1. **{{LESSON_TITLE}}** — {{WHAT_WAS_LEARNED_AND_HOW_TO_APPLY_IT_NEXT_TIME}}
2. **{{LESSON_TITLE}}** — {{WHAT_WAS_LEARNED}}
3. **{{LESSON_TITLE}}** — {{WHAT_WAS_LEARNED}}
```

---

## Field Documentation

### Reference Section
Links the summary back to its spec and provides temporal context. The started/completed dates show how long the loop iteration took.

### Deliverables
Concrete outputs. Each deliverable should be something you can point to — a file, a feature, a test suite, a configuration. Not aspirational; factual.

### Acceptance Criteria Results
The most important section. Every AC from the spec must appear here with a pass/fail status and evidence. Evidence must be specific:
- For UI features: screenshot path or Playwright test name
- For API endpoints: curl command output or integration test name
- For business logic: unit test name and assertion
- For refactors: before/after comparison showing behavior preservation

### Failed Criteria
If any AC failed, this section explains why and what was done. Options:
- **Fixed and re-verified** — The issue was found, fixed, and the AC now passes
- **Deferred to next plan** — The issue requires work outside this spec's scope
- **Accepted as known limitation** — The stakeholder agreed this is acceptable for now

### Deviations
Reality rarely matches the plan exactly. Deviations are not failures — they are documented adjustments. The key is that each deviation has a clear reason. "I felt like it" is not a reason.

### Engineering Laws Review
Summary of the code review. Points to the full REVIEW.md file for details. If any law had a violation at `error` severity, the loop cannot close until it is resolved.

### Files Modified
Complete inventory of every file touched. Useful for post-implementation auditing and for understanding the blast radius of the change.

### Lessons Learned
The most undervalued section. Each lesson should be actionable — not "testing is important" but "mocking the database connection at the repository layer instead of the service layer cuts test setup time in half."
package/src/workflows/close-phase.md
@@ -0,0 +1,189 @@
<purpose>Close the current SPEC-IMPL-VERIFY-REVIEW loop by reconciling planned work against actual outcomes, creating a SUMMARY.md, and advancing the state machine to the next spec or phase transition.</purpose>
<when_to_use>Run after /sdlc:review passes with zero blockers. STATE.md must show loop_position = REVIEW ✓ and next_required_action = /sdlc:close.</when_to_use>
<required_reading>.sdlc/STATE.md, the current SPEC.md, REVIEW.md, VERIFY.md</required_reading>
<loop_context>
expected_phase: CLOSE (active)
prior_phase: REVIEW ✓
next_phase: SPEC (next loop) or transition-phase (if last plan)
</loop_context>
<process>

<step name="validate_state" priority="first">
Read .sdlc/STATE.md.

CHECK: loop_position must be "REVIEW ✓"
CHECK: next_required_action must be "/sdlc:close"

IF EITHER CHECK FAILS: STOP. Display: "Cannot close. State requires: {next_required_action}. Run that first."

Extract current_phase and current_plan from STATE.md.

WHY: Closing before review means shipping unreviewed code. The loop enforces quality gates.
</step>

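The two checks above can be mechanized. A minimal sketch, assuming STATE.md stores one `key: value` pair per line (the exact file format is not shown in this workflow, so the parser is illustrative):

```javascript
// Parse a minimal "key: value" STATE.md body into an object.
// The one-pair-per-line format is an assumption for illustration.
function parseState(text) {
  const state = {};
  for (const line of text.split("\n")) {
    const m = line.match(/^([a-z_]+):\s*(.+)$/);
    if (m) state[m[1]] = m[2].trim();
  }
  return state;
}

// Gate /sdlc:close on the two checks from validate_state.
function canClose(state) {
  return state.loop_position === "REVIEW ✓" &&
         state.next_required_action === "/sdlc:close";
}
```

If either check fails, the workflow stops and points the user at `{next_required_action}` instead.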
<step name="verify_no_blockers" priority="second">
Read .sdlc/phases/{current_phase}/{current_plan}-REVIEW.md.

CHECK: blocker count must be 0.

IF BLOCKERS REMAIN: STOP. Display: "Cannot close with {N} unresolved blockers. Fix them and re-run /sdlc:review."

WHY: This is a safety net. Even if the user tries to close manually, the blockers gate holds.
</step>

<step name="reconcile_plan_vs_actual" priority="third">
Read the SPEC.md. Extract:
- Planned tasks (name, action, done criteria)
- Planned acceptance criteria
- Planned files to modify/create

Read the VERIFY.md. Extract:
- AC results (pass/fail per criterion)

Read the REVIEW.md. Extract:
- Actual files reviewed (which include the files actually modified)
- Warnings (non-blocking issues that remain)

COMPARE planned vs actual:

TASKS:
- For each planned task: was it completed? (check if files were modified as expected)
- Were any unplanned tasks added? (files modified that were not in the spec)
- Were any planned tasks skipped?

ACCEPTANCE CRITERIA:
- For each AC: PASS or FAIL (from VERIFY.md)
- Were any ACs added or removed during implementation?

FILES:
- Compare spec files_modified vs actual files reviewed
- Were any files modified outside the boundary list?
- Were any planned files NOT modified?

Record deviations:
```
Deviation 1: {what differed}
Reason: {why it changed — ask user if unknown}
Impact: {what this means for the project}
```

IF DEVIATIONS EXIST: Present them to the user and ask for confirmation before proceeding.

WHY: Deviations are not inherently bad — scope often shifts during implementation. But they must be RECORDED. Unrecorded deviations become hidden technical debt and surprise future developers.
</step>

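The FILES comparison reduces to two set differences. A sketch, assuming plain arrays of file paths (the function and field names are invented for illustration):

```javascript
// Compare the spec's planned file list against the files actually touched.
// Returns the two kinds of file deviation the workflow asks about.
function reconcileFiles(planned, actual) {
  const plannedSet = new Set(planned);
  const actualSet = new Set(actual);
  return {
    unplanned: actual.filter(f => !plannedSet.has(f)), // modified outside the boundary list
    skipped: planned.filter(f => !actualSet.has(f)),   // planned but never modified
  };
}
```

Either list being non-empty is a deviation to present to the user.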
<step name="create_summary" priority="fourth">
Create .sdlc/phases/{current_phase}/{current_plan}-SUMMARY.md:

```markdown
# Summary: Plan {plan-number} — {title from spec}

## Deliverables
- {What was built — concrete, not vague}
- {Each deliverable on its own line}

## Acceptance Criteria Results
| AC | Description | Status | Evidence |
|----|-------------|--------|----------|
| AC-1 | {desc} | PASS | {evidence from VERIFY.md} |
| AC-2 | {desc} | PASS | {evidence} |

## Deviations from Spec
| # | Planned | Actual | Reason |
|---|---------|--------|--------|
| 1 | {what was planned} | {what actually happened} | {why} |

Or: "No deviations. Implementation matched spec exactly."

## Decisions Made During Implementation
- {Decision}: {rationale}
- {Decision}: {rationale}

Or: "No implementation-time decisions. Spec was followed as written."

## Lessons Learned
- {What went well}
- {What could improve}
- {Patterns discovered that should be reused}

## Files Modified
- {file-path}: {what changed}

## Files Created
- {file-path}: {purpose}

## Review Warnings (Non-Blocking)
- {warning from REVIEW.md, if any}

Or: "Review passed with zero warnings."
```

WHY: SUMMARY.md is the institutional memory. When the next spec runs, it reads prior summaries to maintain context continuity. Without summaries, each loop starts from scratch.
</step>

<step name="update_state" priority="fifth">
Determine what happens next:

OPTION A — More plans in this phase:
- Check ROADMAP.md: is this the last planned plan for the current phase?
- If there are more plans (or the phase is open-ended): set next to /sdlc:spec

OPTION B — Last plan in phase:
- If this was the last plan: trigger transition-phase workflow
- Set next to /sdlc:transition (which handles phase advancement)

Update .sdlc/STATE.md:
- loop_position: CLOSE ✓
- current_plan: cleared (no active plan)
- next_required_action: /sdlc:spec OR /sdlc:transition
- Add history entry: "{timestamp} | close | Plan {N} closed. {N} ACs passed. {N} deviations."

Update .sdlc/ROADMAP.md:
- Increment completed plan count for current phase
- If phase is complete: mark phase status as COMPLETE

WHY: The state machine must always have a clear next action. Ambiguity in state leads to confusion about what to do next.
</step>

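The branch above, together with the milestone case that appears in the display step, amounts to one decision function. A sketch, assuming the two booleans are derived from ROADMAP.md (names invented):

```javascript
// Decide the next_required_action after a loop closes.
// morePlans / morePhases would be derived from ROADMAP.md.
function nextAction(morePlans, morePhases) {
  if (morePlans) return "/sdlc:spec";        // Option A: next loop in this phase
  if (morePhases) return "/sdlc:transition"; // Option B: phase complete
  return "/sdlc:milestone";                  // last phase, last plan: milestone complete
}
```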
<step name="display_result" priority="sixth">
IF more plans in phase:
Display:
```
Loop closed: Plan {N} complete.

Summary: .sdlc/phases/{phase}/{plan}-SUMMARY.md

Deliverables: {count}
ACs passed: {count}/{count}
Deviations: {count}

NEXT ACTION REQUIRED: /sdlc:spec
Run /sdlc:spec to create the next specification.
```

IF phase complete:
Display:
```
Loop closed: Plan {N} complete.
Phase {phase} COMPLETE — all plans finished.

Summary: .sdlc/phases/{phase}/{plan}-SUMMARY.md

Transitioning to next phase...
NEXT ACTION REQUIRED: /sdlc:transition
```

IF milestone complete (last phase, last plan):
Display:
```
Loop closed: Plan {N} complete.
Phase {phase} COMPLETE.
MILESTONE {milestone} COMPLETE.

Run /sdlc:milestone to finalize and start next milestone.
```

WHY: The display tells the user exactly where they are in the bigger picture. Completing a phase or milestone is a significant moment — acknowledge it.
</step>

</process>
package/src/workflows/debug-flow.md
@@ -0,0 +1,302 @@
<purpose>Structured debugging workflow: reproduce the bug, isolate the root cause, fix it, and verify the fix. Prevents "shotgun debugging" where developers change random things until the error goes away.</purpose>
<when_to_use>Run when a bug is reported or discovered. Can be invoked directly via /sdlc:debug or routed from /sdlc:fast when task classification detects bug-related keywords.</when_to_use>
<required_reading>.sdlc/STATE.md, .sdlc/PROJECT.md, .sdlc/LAWS.md</required_reading>
<loop_context>
expected_phase: DEBUG (special flow, parallel to main loop)
prior_phase: any
next_phase: REVIEW (debug fixes must be reviewed)
</loop_context>
<process>

<step name="accept_bug_report" priority="first">
Gather information about the bug. Ask the user:

Question 1: "What is the bug? Describe the unexpected behavior."
Question 2: "What is the expected behavior?"
Question 3: "How do you trigger it? (steps to reproduce, if known)"
Question 4: "Any error messages or stack traces? (paste them)"
Question 5: "When did it start? (recent change, always been there, unknown)"

IF the user provides an error message or stack trace:
- Parse it immediately for file paths, line numbers, and function names
- These are the starting points for investigation

IF the user does not know how to reproduce:
- Note this — the reproduce step will need to discover the trigger

WHY: A structured bug report prevents thrashing. Without clear symptoms, debugging is guessing. The "when did it start" question is critical — if recent, git log shows the introducing change.
</step>

<step name="reproduce" priority="second">
The goal is to trigger the bug reliably, on demand.

A. SEARCH THE CODEBASE for relevant code:
- Use file paths from error messages/stack traces
- Search for function names mentioned in the bug description
- Search for keywords from the error message

B. CREATE A MINIMAL REPRODUCTION based on bug type:

FOR UI BUGS (visual, interaction, layout):
1. Use browser_navigate to go to the relevant page
2. Use browser_snapshot to see the current state
3. Follow the user's reproduction steps using Playwright MCP:
   - browser_click, browser_fill_form, browser_select_option, etc.
4. Use browser_snapshot after each step to observe behavior
5. Use browser_take_screenshot for evidence
6. Use browser_console_messages to check for JavaScript errors
7. Record: "Bug reproduced at step {N}. Expected: {X}. Actual: {Y}."

FOR API BUGS (wrong response, crash, timeout):
1. Construct the HTTP request from the bug report
2. Execute via Bash: curl -v {request}
3. Capture response status, body, headers
4. Record: "Bug reproduced. Endpoint {X} returns {status} with body {Y}. Expected: {Z}."

FOR LOGIC BUGS (wrong calculation, bad data, incorrect behavior):
1. Write a failing test case that exercises the buggy code path
2. Run the test — it should FAIL (proving the bug exists)
3. Record: "Bug reproduced in test. {function} returns {actual} when given {input}. Expected: {expected}."

FOR CLI BUGS (wrong output, crash, incorrect exit code):
1. Run the command that triggers the bug
2. Capture stdout, stderr, exit code
3. Record: "Bug reproduced. Command {X} outputs {Y}. Expected: {Z}."

C. CONFIRM REPRODUCTION:
The reproduction must be RELIABLE — run it twice to confirm the same result.

IF REPRODUCTION FAILS (bug does not appear):
- Check for race conditions (timing-dependent bugs)
- Check for environment differences (missing env var, different data)
- Ask user: "I could not reproduce the bug. Can you provide more details?"
- Do NOT proceed until the bug is reproducible

WHY: A bug you cannot reproduce is a bug you cannot verify as fixed. The reproduction becomes the regression test — run it before the fix (fails) and after (passes).
</step>

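For the LOGIC BUGS branch, the reproduction is itself code. A minimal sketch with a hypothetical buggy function (`sumTo` is invented for illustration): the test encodes the EXPECTED behavior, so it fails while the bug exists and becomes the regression test after the fix.

```javascript
// Hypothetical buggy function: intended to sum the integers 1..n inclusive.
function sumTo(n) {
  let total = 0;
  for (let i = 1; i < n; i++) total += i; // off-by-one: should be i <= n
  return total;
}

// Reproduction test: asserts the expected behavior, so it FAILS while the bug exists.
function reproduction() {
  return sumTo(3) === 6; // expected 1+2+3 = 6; the buggy version returns 3
}
```

Run it twice before fixing (both runs should fail the same way) and again after the fix (it should pass).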
<step name="isolate" priority="third">
Narrow down the bug to a specific module, file, function, and line.

A. START FROM THE REPRODUCTION:
- Which file is the entry point for the buggy behavior?
- Read that file. Trace the execution path from input to output.

B. USE BINARY SEARCH STRATEGY:
The goal is to find the EXACT line where behavior diverges from expectation.

1. Identify the input and the incorrect output
2. Find the midpoint of the execution path
3. Check: is the data correct at the midpoint?
   - If YES: the bug is AFTER the midpoint
   - If NO: the bug is BEFORE the midpoint
4. Repeat, halving the search space each time

Techniques for checking data at a point:
- Add temporary console.log/print statements (remove after debugging)
- Read the code and mentally trace the data flow
- Use the test to add assertions at intermediate points

C. TRACE DATA FLOW:
Follow the data from where it enters the system to where it exits wrong:
- Controller/handler → service → repository → database (and back)
- For each hop: what goes in? What comes out? Where does it change?

D. CHECK RELATED FILES:
- Does the buggy function call other functions? Read those too.
- Does the buggy module import shared utilities? Check if THOSE are wrong.
- Did a recent change modify a shared dependency? (git log on the file)

Record the isolation result:
```
Bug isolated to: {file}:{line}
Function: {function-name}
The value of {variable} is {actual} but should be {expected}
This happens because: {initial hypothesis}
```

WHY: Isolation prevents "shotgun debugging" — changing random things until the error disappears. Without isolation, you might fix the symptom while the root cause remains.
</step>

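The binary search in step B can be sketched concretely. Assuming the execution path can be modeled as a sequence of stages, and that once the data goes wrong it stays wrong, the first divergent stage is found in O(log n) checks (all names here are illustrative):

```javascript
// Find the first stage whose output diverges from expectation.
// stages: functions forming the execution path.
// isCorrect(i, value): the "check the data at the midpoint" probe,
// e.g. comparing against hand-traced expected values.
function firstBadStage(stages, input, isCorrect) {
  const outputs = [];
  let value = input;
  for (const stage of stages) {
    value = stage(value);
    outputs.push(value); // data observed after each hop
  }
  let lo = 0, hi = stages.length - 1; // invariant: divergence lies in [lo, hi]
  while (lo < hi) {
    const mid = (lo + hi) >> 1;
    if (isCorrect(mid, outputs[mid])) lo = mid + 1; // correct at midpoint: bug is AFTER
    else hi = mid;                                  // wrong at midpoint: bug is AT or BEFORE
  }
  return lo;
}
```

Each `isCorrect` probe corresponds to a temporary log statement or intermediate assertion from the techniques list above.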
<step name="root_cause_analysis" priority="fourth">
Explain WHY the bug exists, not just WHERE it is.

A. CLASSIFY THE ROOT CAUSE:

LOGIC ERROR: The code does the wrong thing by design
- Example: off-by-one in loop, wrong comparison operator, inverted condition
- Question: "What did the author think this code would do vs what it actually does?"

RACE CONDITION: Timing-dependent behavior
- Example: async operations completing in unexpected order, shared state mutation
- Question: "What happens if operation A completes before/after operation B?"

MISSING VALIDATION: Input not checked at boundary
- Example: null/undefined passed to function that assumes non-null, empty string treated as valid
- Question: "What inputs were not considered when this code was written?"

WRONG ASSUMPTION: Code assumes something that is not always true
- Example: assumes array is non-empty, assumes API always returns 200, assumes file exists
- Question: "What assumption does this code make that can be violated?"

STALE STATE: Data is cached or stored but not updated
- Example: reading from cache after the source changed, UI not re-rendering after state update
- Question: "Is the data being read the most current version?"

DEPENDENCY CHANGE: External library or service changed behavior
- Example: API response format changed, library function signature changed
- Question: "Did something outside this codebase change recently?"

B. CHECK FOR SYSTEMIC ISSUES:
- Does this same bug pattern exist elsewhere in the codebase?
- Search for the same anti-pattern in other files
- If found: this is not a single bug — it is a pattern bug. ALL instances must be fixed.

Record:
```
Root Cause: {classification}
Explanation: {WHY the bug exists, in plain language}
Systemic: {YES — found in N other locations | NO — isolated incident}
```

WHY: Understanding WHY prevents the same bug from reappearing. Fixing the WHAT without the WHY is like treating symptoms without diagnosing the disease.
</step>

<step name="fix" priority="fifth">
Apply the fix following engineering laws.

RULES FOR THE FIX:
1. Fix the ROOT CAUSE, not the symptom
   - Bad: add a null check before the crash line
   - Good: fix the function that returns null when it should not

2. If the same pattern exists elsewhere (systemic = YES), fix ALL instances
   - Search for every occurrence
   - Fix each one
   - This is DRY applied to bug fixes

3. Add a guard for the edge case that caused the bug
   - If missing validation: add validation
   - If wrong assumption: make the assumption explicit with a check
   - If race condition: add synchronization or ordering

4. Follow engineering laws:
   - Max 40 lines per function (the fix should not bloat the function)
   - If the function is already too long, extract a helper as part of the fix
   - Named types if the fix introduces complex parameters
   - No lint suppressions

5. Add or update tests:
   - The reproduction test case must now PASS
   - Add a regression test that specifically covers the bug scenario
   - If systemic: add tests for each fixed instance

Record:
```
Files modified: {list}
Lines changed: {count}
Tests added: {list}
Systemic fixes: {N instances fixed across N files}
```

WHY: A fix without a regression test will regress. A fix that only addresses one instance of a systemic issue leaves time bombs in the codebase.
</step>

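Rule 1 is easiest to see in code. A sketch with invented names: the symptom is a crash on `user.name`; a null check at the crash site hides it, while the root-cause fix makes the violated assumption explicit where the bad value originates.

```javascript
// Symptom patch (bad): null-check at the crash site; the silent bad lookup still lurks.
//   const name = user && user.name;

// Root-cause fix (good): the lookup no longer returns undefined silently.
function findUser(users, id) {
  const user = users.find(u => u.id === id);
  if (user === undefined) {
    throw new Error(`Unknown user id: ${id}`); // assumption made explicit, with a clear message
  }
  return user;
}
```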
<step name="verify_fix" priority="sixth">
A. RUN THE REPRODUCTION AGAIN:
Execute the exact same steps from the reproduce step.
The bug MUST NOT appear.

IF BUG STILL APPEARS: The fix is wrong. Return to isolate step.

B. RUN REGRESSION TESTS:
Run the full test suite (or at minimum, tests for modified files).
ALL tests must pass. Zero regressions.

IF REGRESSIONS: The fix broke something else. Return to isolate step — the fix introduced a new bug.

C. FOR UI BUGS:
Use Playwright MCP to verify:
- browser_navigate to the page
- Execute the reproduction steps
- browser_take_screenshot for evidence
- Confirm expected behavior in snapshot

D. FOR API BUGS:
Re-run the curl command
Confirm correct status and response body

Record:
```
Reproduction: FIXED (no longer reproduces)
Regression: CLEAN (all {N} tests pass)
Evidence: {screenshot, test output, curl response}
```

WHY: "I think it is fixed" is not verification. Run the reproduction. If it still fails, the fix is wrong.
</step>

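The two gates in this step compose into a single verdict. A trivial sketch (function names invented):

```javascript
// Combine the reproduction re-run and the regression suite into one verdict.
// reproduces(): true if the bug still appears; testsPass(): true if the suite is clean.
function verifyFix(reproduces, testsPass) {
  if (reproduces()) return "RETURN TO ISOLATE: bug still appears";
  if (!testsPass()) return "RETURN TO ISOLATE: fix introduced a regression";
  return "FIXED";
}
```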
<step name="create_debug_record" priority="seventh">
Create .sdlc/phases/{current_phase}/DEBUG-{issue-slug}-{timestamp}.md:

```markdown
# Debug: {bug title}

## Bug Report
- **Symptom**: {what was wrong}
- **Expected**: {what should happen}
- **Reported**: {timestamp}

## Reproduction
- **Type**: {UI|API|Logic|CLI}
- **Steps**: {reproduction steps}
- **Reliability**: {always|intermittent}

## Isolation
- **File**: {file}:{line}
- **Function**: {function-name}
- **Data flow**: {trace summary}

## Root Cause
- **Classification**: {logic error|race condition|missing validation|wrong assumption|stale state|dependency change}
- **Explanation**: {plain language WHY}
- **Systemic**: {yes/no — if yes, how many instances}

## Fix
- **Files modified**: {list}
- **Approach**: {what was changed and why}
- **Tests added**: {list}

## Verification
- **Reproduction**: FIXED
- **Regressions**: CLEAN
- **Evidence**: {summary}
```

WHY: The debug record is a knowledge base. When a similar bug appears, search debug records first. The root cause analysis saves hours of re-investigation.
</step>

<step name="update_state" priority="eighth">
Update .sdlc/STATE.md:
- Add history entry: "{timestamp} | debug | Fixed: {bug title}. Root cause: {classification}. {N} files modified."
- next_required_action: /sdlc:review

Display:
```
Bug fixed and verified.

Root cause: {classification}
Files modified: {N}
Tests added: {N}
Systemic fixes: {N} (if applicable)

Debug record: .sdlc/phases/{phase}/DEBUG-{slug}-{timestamp}.md

NEXT ACTION REQUIRED: /sdlc:review
Run /sdlc:review to check the fix against engineering laws.
```

WHY: Debug fixes still go through review. A hurried fix might violate engineering laws — empty catch blocks, missing types, function too long. The review catches that.
</step>

</process>