@brunosps00/dev-workflow 0.9.0 → 0.11.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +48 -20
- package/lib/constants.js +2 -10
- package/lib/init.js +20 -3
- package/lib/migrate-gsd.js +1 -1
- package/lib/utils.js +8 -3
- package/package.json +1 -1
- package/scaffold/en/commands/dw-analyze-project.md +61 -0
- package/scaffold/en/commands/dw-autopilot.md +6 -6
- package/scaffold/en/commands/dw-brainstorm.md +1 -1
- package/scaffold/en/commands/dw-bugfix.md +1 -0
- package/scaffold/en/commands/dw-code-review.md +29 -0
- package/scaffold/en/commands/dw-commit.md +6 -0
- package/scaffold/en/commands/dw-create-prd.md +16 -0
- package/scaffold/en/commands/dw-create-tasks.md +42 -0
- package/scaffold/en/commands/dw-create-techspec.md +19 -0
- package/scaffold/en/commands/dw-deep-research.md +6 -0
- package/scaffold/en/commands/dw-deps-audit.md +1 -0
- package/scaffold/en/commands/dw-find-skills.md +4 -4
- package/scaffold/en/commands/dw-fix-qa.md +1 -0
- package/scaffold/en/commands/dw-generate-pr.md +1 -0
- package/scaffold/en/commands/dw-help.md +9 -29
- package/scaffold/en/commands/dw-intel.md +1 -1
- package/scaffold/en/commands/dw-refactoring-analysis.md +2 -1
- package/scaffold/en/commands/dw-review-implementation.md +28 -2
- package/scaffold/en/commands/dw-run-plan.md +2 -2
- package/scaffold/en/templates/constitution-template.md +111 -0
- package/scaffold/en/templates/idea-onepager.md +1 -1
- package/scaffold/pt-br/commands/dw-analyze-project.md +61 -0
- package/scaffold/pt-br/commands/dw-autopilot.md +6 -6
- package/scaffold/pt-br/commands/dw-brainstorm.md +1 -1
- package/scaffold/pt-br/commands/dw-bugfix.md +1 -0
- package/scaffold/pt-br/commands/dw-code-review.md +29 -0
- package/scaffold/pt-br/commands/dw-commit.md +6 -0
- package/scaffold/pt-br/commands/dw-create-prd.md +16 -0
- package/scaffold/pt-br/commands/dw-create-tasks.md +42 -0
- package/scaffold/pt-br/commands/dw-create-techspec.md +19 -0
- package/scaffold/pt-br/commands/dw-deep-research.md +6 -0
- package/scaffold/pt-br/commands/dw-deps-audit.md +1 -0
- package/scaffold/pt-br/commands/dw-find-skills.md +4 -4
- package/scaffold/pt-br/commands/dw-fix-qa.md +1 -0
- package/scaffold/pt-br/commands/dw-generate-pr.md +1 -0
- package/scaffold/pt-br/commands/dw-help.md +9 -29
- package/scaffold/pt-br/commands/dw-intel.md +1 -1
- package/scaffold/pt-br/commands/dw-refactoring-analysis.md +2 -1
- package/scaffold/pt-br/commands/dw-review-implementation.md +21 -2
- package/scaffold/pt-br/commands/dw-run-plan.md +2 -2
- package/scaffold/pt-br/templates/constitution-template.md +111 -0
- package/scaffold/pt-br/templates/idea-onepager.md +1 -1
- package/scaffold/skills/dw-codebase-intel/SKILL.md +1 -0
- package/scaffold/skills/dw-codebase-intel/references/api-design-discipline.md +138 -0
- package/scaffold/skills/dw-debug-protocol/SKILL.md +106 -0
- package/scaffold/skills/dw-debug-protocol/references/error-categorization.md +127 -0
- package/scaffold/skills/dw-debug-protocol/references/non-reproducible-strategy.md +108 -0
- package/scaffold/skills/dw-debug-protocol/references/six-step-triage.md +139 -0
- package/scaffold/skills/dw-debug-protocol/references/stop-the-line.md +52 -0
- package/scaffold/skills/dw-git-discipline/SKILL.md +120 -0
- package/scaffold/skills/dw-git-discipline/references/atomic-commits-discipline.md +158 -0
- package/scaffold/skills/dw-git-discipline/references/branch-hygiene.md +150 -0
- package/scaffold/skills/dw-git-discipline/references/trunk-based-pattern.md +82 -0
- package/scaffold/skills/dw-memory/SKILL.md +1 -2
- package/scaffold/skills/dw-simplification/SKILL.md +142 -0
- package/scaffold/skills/dw-simplification/references/behavior-preserving.md +148 -0
- package/scaffold/skills/dw-simplification/references/chestertons-fence.md +152 -0
- package/scaffold/skills/dw-simplification/references/complexity-metrics.md +147 -0
- package/scaffold/skills/dw-source-grounding/SKILL.md +128 -0
- package/scaffold/skills/dw-source-grounding/references/citation-protocol.md +108 -0
- package/scaffold/skills/dw-source-grounding/references/freshness-check.md +108 -0
- package/scaffold/skills/dw-source-grounding/references/source-priority.md +146 -0
- package/scaffold/skills/dw-verify/SKILL.md +0 -1
- package/scaffold/skills/vercel-react-best-practices/SKILL.md +4 -0
- package/scaffold/skills/vercel-react-best-practices/references/perf-discipline.md +122 -0
- package/scaffold/skills/webapp-testing/SKILL.md +5 -0
- package/scaffold/skills/webapp-testing/references/security-boundary.md +115 -0
- package/scaffold/skills/webapp-testing/references/three-workflow-patterns.md +144 -0
- package/scaffold/templates-overrides-readme.md +75 -0
- package/scaffold/en/commands/dw-execute-phase.md +0 -149
- package/scaffold/en/commands/dw-plan-checker.md +0 -144
- package/scaffold/en/commands/dw-quick.md +0 -103
- package/scaffold/en/commands/dw-resume.md +0 -84
- package/scaffold/pt-br/commands/dw-execute-phase.md +0 -149
- package/scaffold/pt-br/commands/dw-plan-checker.md +0 -144
- package/scaffold/pt-br/commands/dw-quick.md +0 -103
- package/scaffold/pt-br/commands/dw-resume.md +0 -84
@@ -0,0 +1,106 @@ package/scaffold/skills/dw-debug-protocol/SKILL.md
---
name: dw-debug-protocol
description: Use when investigating a bug — applies stop-the-line discipline plus a six-step triage (reproduce → localize → reduce → fix root cause → guard → verify) so you fix what's actually broken instead of papering over symptoms.
---

# Debug Protocol

> **Inspired by** [`addyosmani/agent-skills/debugging-and-error-recovery`](https://github.com/addyosmani/agent-skills/tree/main/debugging-and-error-recovery) (MIT). Patterns and stop-the-line philosophy adapted from Addy Osmani's work; specific guidance and references rewritten to fit dev-workflow's bugfix and QA loops.

The fastest path through a bug is the disciplined one. This skill encodes the discipline.

## When this skill applies

- Any failing test, runtime error, or "it works locally but not in CI" report.
- Reproducing a user-reported issue.
- A regression after a refactor or dependency bump.
- An intermittent / non-reproducible bug (see `references/non-reproducible-strategy.md`).

If you're tempted to "just try a fix and see," stop. Run the protocol below first.

## The four core rules

### 1. Stop the line

The moment a bug is confirmed, stop adjacent work. Do not pile features on top of broken state. Branches stay frozen until the bug is reproduced and a fix path is identified — *or* the bug is explicitly downgraded ("not blocking, scheduled for later").

See `references/stop-the-line.md` for when this rule bends.

### 2. Reproduce before fixing

Without a deterministic reproduction, you cannot know whether your "fix" worked. The reproduction is the hypothesis test. Even a flaky bug needs a *reproducible-enough* test (e.g., "fails 8/10 runs") before fixing.

If you cannot reproduce: see `references/non-reproducible-strategy.md`. Don't fix on a guess.
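A flaky bug can still be quantified before any fix attempt. A minimal sketch of a repeat-runner (plain Node.js; `flakyOperation` is a hypothetical stand-in for your actual repro, made deterministic here so the numbers are stable):

```javascript
// Hypothetical stand-in for a flaky repro; in practice this would wrap
// the real failing operation.
let calls = 0;
function flakyOperation() {
  calls += 1;
  return calls % 10 < 8; // "passes" 8 of every 10 runs
}

// Run the repro N times and report the failure rate, so the bug report
// can say "fails 2/10 runs" instead of "happens sometimes".
function measureFailureRate(repro, runs = 100) {
  let failures = 0;
  for (let i = 0; i < runs; i++) {
    if (!repro()) failures += 1;
  }
  return failures / runs;
}

const rate = measureFailureRate(flakyOperation, 100);
console.log(`failure rate: ${(rate * 100).toFixed(0)}%`); // → 20%
```

The measured rate becomes the baseline: after the fix, the same runner must report 0% over the same number of runs.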
### 3. Find the root cause, not the nearest cause

The first place the stack trace lights up is rarely where the bug lives. The bug lives wherever the invariant was violated. Fixing the symptom (e.g., catching a `null` you should never have produced) hides the real issue and makes the next bug worse.

### 4. Guard against recurrence before declaring done

Every fix produces an artifact: a regression test that first proves the bug exists, then proves it's fixed. Without this artifact, the bug WILL come back during the next refactor.

## The six-step triage

Each bug runs through these six steps, in order. See `references/six-step-triage.md` for detail.

| Step | Question | Output |
|------|----------|--------|
| 1. Reproduce | Can I trigger this on demand? | A failing test or repro script |
| 2. Localize | Where does the invariant break? | A file:line where state goes wrong |
| 3. Reduce | What's the smallest input that triggers it? | Minimal repro (1-3 lines if possible) |
| 4. Fix root cause | Why is the invariant violated? Fix THAT. | Code change at the cause, not the symptom |
| 5. Guard | What test would have caught this? | Regression test added to the suite |
| 6. Verify end-to-end | Does the original report scenario now pass? | Manual or scripted E2E proof |

Do not skip steps. Skipping reduce → a bigger fix than needed. Skipping guard → the same bug again in 3 weeks.
## Error categorization

Bugs cluster into a small number of categories. Knowing the category narrows where to look — see `references/error-categorization.md`:

| Category | Typical symptom | Where to look first |
|----------|-----------------|---------------------|
| UI / render | Wrong pixels, missing element, click does nothing | Component tree, prop flow, conditional render |
| Network / I/O | Timeout, 500, partial data | Request payload, headers, error path, retry |
| State / data | Wrong value displayed, stale read | State management, cache invalidation, race |
| Concurrency | "Sometimes fails", deadlock | Async order, locks, await placement |
| Configuration | Works in dev, fails in prod | Env vars, secrets, build flags, infra config |
| Logic | Branch returns wrong result | Guard conditions, off-by-one, boolean polarity |

## Non-reproducible bugs

Some bugs happen only in production, only for certain users, or only at certain times. Don't fix them on intuition. The protocol in `references/non-reproducible-strategy.md` covers:

- Timing/race conditions: instrument with logs first, fix second.
- Environment-specific: bisect by environment delta.
- State-dependent: capture the user's state at the moment of failure.
- Frequency-dependent: deploy logging, wait for the next occurrence.

A "fix" for an unreproduced bug is a guess. Mark it as such in the commit message.

## Verification before declaring done

A bug is fixed when:

1. The repro from step 1 now passes.
2. The regression test from step 5 was committed.
3. Lint + tests + build are all GREEN.
4. The original report scenario (E2E) was verified — by you, or with a screenshot/log/run trace if not directly reproducible.

If any of these are missing, the bug is "fixed pending verification," not "fixed."

## Integration with dev-workflow commands

- `/dw-bugfix` runs this skill end-to-end. The bug report is decomposed into steps 1-6 and progressed atomically.
- `/dw-fix-qa` uses this skill when QA findings are bug-shaped (a failing scenario rather than a missing feature). Each finding becomes a six-step run.
- `/dw-code-review` flags fixes that skipped step 5 (no regression test) as REJECTED.

## When to escalate / pair

After 60 minutes stuck at step 2 (localize) or step 4 (root cause), surface the situation to the user:

- "I've exhausted hypotheses A/B/C; here's what I tried and what's left."
- Don't silently spin. Fresh eyes find what stuck eyes miss.

This is not failure — it's the protocol. Being stuck for more than an hour is a signal, not a flaw.
@@ -0,0 +1,127 @@ package/scaffold/skills/dw-debug-protocol/references/error-categorization.md
# Error categorization — symptom → cause matrix

Before debugging, classify the bug. The wrong category means wasted hours. Most bugs sit in one of six categories; each has a typical hunting ground.

## The six categories

### 1. UI / rendering

**Symptoms:** Wrong pixels, missing element, click does nothing, layout shift, hydration mismatch.

**Hunting ground:**
- Component tree: prop flow from root to leaf.
- Conditional render guards: `{cond && <Foo />}` where `cond` is unexpectedly false (or is `0`, which renders a literal `0`).
- CSS specificity / cascade collisions.
- Server vs client mismatch (Next.js, hydration).
- Event handler binding (lost `this`, stale closure capturing old state).

**First check:** Open browser devtools → Elements panel → confirm the element exists in the DOM. If yes, the problem is CSS/layout. If no, the problem is the render path.

**Often misclassified as:** State bug. Check the prop value at render time before chasing state.

### 2. Network / I/O

**Symptoms:** Timeout, 5xx response, partial data, "loading forever," CORS error.

**Hunting ground:**
- Request payload (DevTools Network tab → check what was actually sent).
- Auth headers / cookies (missing, expired, wrong domain).
- Server-side error (check the API logs, not just the client's view).
- Retry/timeout configuration (too short = flaky; too long = perceived hang).
- Idempotency: a request retried in a way the server treated as new.

**First check:** Reproduce in the DevTools Network tab. Copy as cURL → run from the terminal. If the terminal request works, the client config is wrong. If the terminal also fails, the server is wrong.

**Often misclassified as:** Logic bug. Verify the request actually went out before assuming the bug is in the response handler.

### 3. State / data

**Symptoms:** Wrong value displayed, stale read, "I updated it but it didn't change," off-by-one in collections.

**Hunting ground:**
- State management: where is the source of truth? How is it updated?
- Cache invalidation: did the cache layer return stale data?
- Race condition: did two updates arrive out of order?
- Mutation vs replacement: did a deep mutation fail to trigger a re-render?
- Derived state: is a computed value re-running when it should, or memoized too aggressively?

**First check:** Log the state at three points — when it's set, when it's read, when it's rendered. The discrepancy reveals the layer with the bug.

**Often misclassified as:** UI bug. The pixel is wrong because the data is wrong; fix the data.
### 4. Concurrency

**Symptoms:** "Sometimes fails," deadlock, race ("works locally, fails in CI"), out-of-order writes, lost update.

**Hunting ground:**
- Async boundaries: a missed `await`, a dangling promise, a fire-and-forget that needed completion.
- Locking: missing lock, lock held too long, wrong lock granularity.
- Order assumptions: code assumes A finishes before B without enforcing it.
- Test isolation: shared state between tests creating a false dependency.

**First check:** Add timing logs. If the bug rate changes when you add `setTimeout(0)` or `process.nextTick`, it's a concurrency bug.

**Diagnostic patterns:**
- Sleep-and-check: the bug disappears if you add a 100ms delay → ordering issue.
- Always fails on a fast machine: missed async wait.
- Always fails on a slow machine: timeout too tight.
- Fails 1/N runs: race condition; the rate gives clues to the size of the window.

**Often misclassified as:** "Just flaky, ignore." Concurrency bugs always have a cause. Re-running until green is hiding, not fixing.
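The most common JavaScript instance of the "order assumption" pattern is a missed `await`. A minimal sketch (`saveUser` and the register functions are hypothetical names):

```javascript
const db = { user: null };

// Hypothetical async write: completes on a later tick, like a real DB call.
function saveUser(name) {
  return new Promise((resolve) => {
    setTimeout(() => { db.user = name; resolve(); }, 0);
  });
}

// BUG: fire-and-forget — the read below races the write above.
async function registerBroken(name) {
  saveUser(name); // missing await
  return db.user; // still null: the write hasn't happened yet
}

// FIX: enforce the order the code was silently assuming.
async function registerFixed(name) {
  await saveUser(name);
  return db.user;
}

(async () => {
  console.log(await registerBroken("ada")); // → null
  console.log(await registerFixed("ada")); // → "ada"
})();
```

Note that the broken version is deterministic here; real races only look random because the write sometimes wins.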
### 5. Configuration

**Symptoms:** "Works in dev, fails in prod," "works on my machine," sudden behavior change after a deploy.

**Hunting ground:**
- Environment variables: present? right value? expected name? leading/trailing whitespace?
- Feature flags: toggled differently between environments?
- Build flags: dev mode strips assertions; prod might not include polyfills.
- Infra: different DB version, different Node version, different network topology.
- Secrets: rotated, expired, wrong key.

**First check:** Print the environment at startup (sans secrets) → diff dev vs prod. The diff often contains the bug.

**Diagnostic command:** `env | sort` (or equivalent) at app entry, in both environments. Compare.

**Often misclassified as:** Logic bug. The code is correct; the configuration is wrong.
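A startup dump in Node.js might look like this (a sketch; the secret-name pattern is an assumption — adjust it to your naming conventions):

```javascript
// Print the environment sorted and redacted, so the dev and prod outputs
// can be diffed line-by-line without leaking credentials.
function dumpEnv(env) {
  const SECRET = /(SECRET|TOKEN|PASSWORD|KEY)/i; // assumed naming pattern
  return Object.keys(env)
    .sort()
    .map((k) => `${k}=${SECRET.test(k) ? "<redacted>" : env[k]}`)
    .join("\n");
}

console.log(dumpEnv({
  NODE_ENV: "production",
  API_TOKEN: "abc123",
  DB_HOST: "db.internal",
}));
// API_TOKEN=<redacted>
// DB_HOST=db.internal
// NODE_ENV=production
```

Run it with `process.env` at app entry in both environments, then diff the two outputs.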
### 6. Logic

**Symptoms:** A branch returns the wrong result given correct input, incorrect calculation, off-by-one in iteration.

**Hunting ground:**
- Guard conditions: `>=` vs `>`, `&&` vs `||`, negation polarity.
- Edge cases: empty array, single element, very large input, exact boundary value.
- Type coercion: `"0"` is truthy in JS; `0 == false` but `0 !== false`.
- Default values: a missing field falls back to `undefined`, not zero.
- Floating point: `0.1 + 0.2 !== 0.3`.

**First check:** Manually trace the function with the failing input. Often the bug is visible at line 3 of the function.

**Often misclassified as:** State bug. The state is correct; the function uses it wrong.
## Multi-category bugs

Some bugs span categories. The clue: fixing only one category fixes the test but not production.

Example: "Prod login fails."
- Network: the auth API returns 401 (real).
- Configuration: prod has a rotated secret; staging didn't.
- State: the client retains a stale token in localStorage from before the rotation.

Fix all three or the bug returns at the next rotation.

## Diagnostic shortcut: where did it last work?

If the bug is new (it hasn't always existed):

1. `git log` the affected files. What changed in the last 7 days?
2. `git bisect` if the regression window is wide.
3. Look at infra / dependency / config changes (often the cause is outside the source code).

If the bug has always existed:

1. It's likely an edge case the original author didn't consider.
2. Check the test fixtures — they probably cover only the happy path.
3. Add the failing case to the test fixtures BEFORE fixing.
@@ -0,0 +1,108 @@ package/scaffold/skills/dw-debug-protocol/references/non-reproducible-strategy.md
# Non-reproducible bugs — strategy when you can't trigger it on demand

Some bugs fire only in production, only for some users, only some of the time. The protocol differs from normal debugging because step 1 (reproduce) is the open question. Don't fix what you haven't seen.

## Don't fix on a guess

The strongest urge with a non-reproducible bug is: "I bet I know what it is — let me push a fix and see if the reports stop." This fails because:

1. If the fix doesn't work, you wasted a deploy and learned nothing about the cause.
2. If reports stop (correlation), you can't tell whether your fix worked or the trigger condition stopped occurring.
3. You build a habit of guessing; on the next bug, you guess wrong.

Instead: instrument first, fix second.

## The four sub-strategies

### A. Timing / race conditions

**Signs:** "Fails 1 out of N runs," "worked locally, fails in CI," "started failing after we sped up X."

**Strategy:**
1. Add timestamped logs at every async boundary in the suspect path.
2. Push to a branch with the extra logging; run the suite multiple times in CI.
3. Compare the timestamps of failing runs vs passing runs. Look for events arriving in a different order, or gaps that exceed expectations.
4. Once you see the order anomaly, you have the reproduction (forced via a test that constrains the order).

**Tools:**
- `Date.now()` or `performance.now()` in logs.
- For network: response time logging.
- For tests: a repeat-run option in your test runner (e.g., Vitest's `repeats`) to amplify rare failures.
- Chaos: insert random `await sleep(rand)` at suspected boundaries to provoke ordering issues.
### B. Environment-specific

**Signs:** "Works in dev, fails in staging," "works for one user, fails for another," "worked yesterday, fails today, no code changes."

**Strategy:**
1. Diff the environments. Run a script that prints the config (env vars, dependencies, OS, locale, feature flags) in both.
2. Bisect by environment delta: change one variable at a time in the failing environment toward the working environment. The change that flips the bug is the cause.
3. If user-specific: capture the user's state (account flags, history, A/B variants, browser, region) — diff against a working user.

**Tools:**
- Per-user replay: instrument enough that you can re-run a single user's session.
- A/B comparison: a working/failing pair.
- Feature flag audit: every flag that differs is a suspect.

### C. State-dependent

**Signs:** "Happens to long-time users, never new ones," "only after they've done X then Y," cache-related, database-state-related.

**Strategy:**
1. Identify the state precondition. (Usually the failing user has a value or pattern in their data that the test fixtures don't.)
2. Capture the failing user's state (sanitized DB rows, sanitized client state).
3. Reproduce locally by seeding that exact state. Now you have a deterministic reproduction.
4. Once reproduced, run the standard six-step protocol.

**Tools:**
- DB dump / sanitize / load locally.
- Client state export (localStorage, IndexedDB, in-memory store snapshot).
- A replay system that takes a captured state and replays user actions.

**Privacy:** Sanitize before capturing. Production user data should never sit in dev environments unsanitized.
### D. Frequency-dependent (rare)

**Signs:** "We see this once a week," "maybe 0.1% of requests," "a customer complained but the logs don't show it."

**Strategy:**
1. Add specific logging or telemetry that would catch the bug the NEXT time it fires.
2. Deploy the logging.
3. Wait for the next occurrence (set up an alert if possible).
4. Use the captured event as your reproduction.

**Tools:**
- Structured logs filterable by error pattern.
- Sentry / Bugsnag / similar, with breadcrumbs (the action history before the failure).
- Custom telemetry on the suspect code path.

**Time discipline:**
- Don't sit waiting forever. Set a deadline ("if no event in 7 days, escalate or reprioritize").
- Continue other work in parallel; this bug is "instrumented and waiting," not "actively in progress."

## When to ship a guess

Sometimes the bug is high-impact and waiting for instrumentation isn't acceptable. In that case:

1. Acknowledge it's a guess in the commit and the PR. Don't pretend you reproduced it.
2. Make the fix MINIMAL. Don't rewrite a module to fix an unconfirmed bug — you'll cause new bugs.
3. Add monitoring/alerting that would catch the bug if it persists. The fix then has a verification path: "no recurrence reports in N days."
4. Plan a follow-up: if the bug returns, fall back to the instrument-first protocol.

A guess fix shipped with explicit acknowledgement is acceptable. A guess fix shipped pretending it was a real diagnosis is not — that pattern erodes the team's trust in fix quality.

## Anti-patterns

- "Just add a try/catch and log it" → hides the bug; the reports stop; the cause persists.
- "Restart the service / clear the cache" → an operational mitigation, not a fix. Document it as such.
- A fix shipped without monitoring → no way to know whether it worked.
- A fix shipped touching unrelated code → "while I was in there" expansion that turns one unverified change into many.
- Refusing to ship until reproduced while production is suffering → perfectionism that costs users. Use the "ship a minimal guess" path with clear acknowledgement.

## When to escalate

After 4 hours of instrumenting + waiting with no reproduction:

- Surface the situation: what's instrumented, what's expected, what window of time you're waiting for.
- Ask whether to (a) keep waiting, (b) ship a minimal guess fix with monitoring, or (c) reprioritize.
- The decision is the user/team's, not yours alone. Provide the trade-off.
@@ -0,0 +1,139 @@ package/scaffold/skills/dw-debug-protocol/references/six-step-triage.md
# Six-step debug triage

Six discrete steps. Each has a specific output. Don't move to the next without finishing the current one.

## Step 1 — Reproduce

**Question:** Can I trigger this on demand?

**Output:** A failing test, a script, or a sequence of UI clicks that produces the bug. Save it. The reproduction IS the hypothesis test for your fix.

**How:**
- From a user report → ask: what version, what environment, what input, what was expected, what happened. Get all five before guessing.
- From a stack trace → identify the entry point (HTTP route, CLI command, UI event) and the input that triggered it.
- From a log line → find the surrounding context (request ID, user ID, timestamp); reconstruct the scenario.
- Write a failing test FIRST when the bug is in pure logic. The test commits the bug to the record before you fix it.

**Common pitfall:** "It happens sometimes." This is not a reproduction. Either:
- Find a deterministic trigger (often there's a state precondition you missed), OR
- Quantify the rate (8/10 runs) and treat the test as flaky-but-progress.
- See `non-reproducible-strategy.md` for state-, timing-, or environment-dependent cases.

**Done when:** You have a one-line command (or a series of clicks) that produces the bug ≥80% of the time.

## Step 2 — Localize

**Question:** Where does the invariant break?

**Output:** A file:line reference where the state goes wrong. Not where the symptom appears — where the cause lives.

**How:**
- Stack trace → walk UPWARD from the failure point, asking at each frame: "is the input here valid?" The deepest frame with valid input is where the invariant breaks.
- Bisect the change history if a recent commit broke things: `git bisect start && git bisect bad HEAD && git bisect good <known-good-sha>`.
- Logs/instrumentation → add explicit log lines at suspected points; reproduce; narrow the window.
- Debugger → set breakpoints at boundaries (entry/exit of suspect functions); inspect state.

**Common pitfall:** Stopping at the first place that catches the error. The error often surfaces far from the cause.

**Done when:** You can point at one specific function (or 2-3 closely related lines) and say "the violation happens here."

## Step 3 — Reduce

**Question:** What's the smallest input that triggers this?

**Output:** A minimal reproduction — ideally 1-3 lines of input or a 5-10 line test case.

**How:**
- Start with your full reproduction.
- Remove one element at a time (a config flag, a request field, a piece of state). Check whether the bug still reproduces.
- Repeat until removing anything else makes the bug disappear.

**Why bother:**
- Smaller repros are faster to test (your fix loop runs in seconds, not minutes).
- Smaller repros surface the EXACT trigger; vague repros let you fool yourself.
- The minimal repro often makes the cause obvious: "oh, it only fails when the array is empty."

**Common pitfall:** Skipping this step because "I already know what's wrong." If you really know, the reduction takes 2 minutes. Do it.

**Done when:** Removing one more thing makes the bug disappear.
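The remove-one-element loop can be automated whenever the repro input is a list of parts. A minimal sketch (not full delta debugging, just the greedy one-at-a-time pass described above; `triggersBug` is a hypothetical predicate wrapping your repro):

```javascript
// Greedily drop one element at a time, keeping the drop whenever the bug
// still reproduces; stops when every remaining element is load-bearing.
function reduce(input, triggersBug) {
  let current = [...input];
  let shrunk = true;
  while (shrunk) {
    shrunk = false;
    for (let i = 0; i < current.length; i++) {
      const candidate = current.filter((_, j) => j !== i);
      if (triggersBug(candidate)) {
        current = candidate; // still fails without element i → drop it
        shrunk = true;
        break;
      }
    }
  }
  return current;
}

// Hypothetical bug: only fires when both "cache=off" and "retry=0" are set.
const triggersBug = (flags) =>
  flags.includes("cache=off") && flags.includes("retry=0");

const full = ["verbose", "cache=off", "locale=pt", "retry=0", "dark-mode"];
console.log(reduce(full, triggersBug)); // → ["cache=off", "retry=0"]
```

Each call to `triggersBug` is one run of your repro, so this only pays off when the repro is cheap — which is exactly what step 3 is trying to achieve.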
## Step 4 — Fix root cause

**Question:** WHY is the invariant violated? Fix THAT.

**Output:** A code change that addresses the cause, not the symptom.

**How:**
- Ask "why?" until you reach a design or assumption that's wrong, not just a bad value.
  - "X is null" — why?
  - "Because Y returned null" — why?
  - "Because the cache was empty when called from this path" — why was the cache empty?
  - "Because the warm-up runs after this path is reachable" — root cause: ordering.
- Fix at the deepest "why" you can change without scope creep.
- If the root cause is too deep ("this whole module assumes synchronous I/O"), fix at the shallowest level that keeps the bug from recurring, AND open a follow-up issue for the deeper fix.

**Symptom vs root cause — examples:**

| Symptom fix (don't) | Root cause fix (do) |
|---------------------|---------------------|
| Catch the null and return a default | Find why the null was produced and prevent it |
| Add a retry loop | Find why the first call failed |
| Increase the timeout | Find why the operation is slow |
| Add an `if (process.env.NODE_ENV !== 'production')` guard | Find why prod behaves differently |
| Suppress the warning | Address what the warning is warning about |

**Common pitfall:** Adding `try/catch` around the symptom and moving on. This is hiding the bug, not fixing it.

**Done when:** The reproduction now passes; lint + tests + build are all green.

## Step 5 — Guard against recurrence

**Question:** What test would have caught this before deploy?

**Output:** A regression test (or two) added to the suite that fails without your fix and passes with it.

**How:**
- Take the reproduction from step 1 → make it a test.
- The test name should describe the bug ("returns empty list when cache warm-up is delayed", not "fix bug").
- The test should fail clearly if the bug returns: an assertion error with a message naming the invariant.
- Place the test where future maintainers will find it (alongside the function with the cause, not in some catch-all "bugs.test.ts").
|
|
100
|
+
|
|
101
|
+
**When you can't add a test:**
|
|
102
|
+
- Bug requires production-only conditions: add a logging guard or a runtime invariant check that would fire on recurrence.
|
|
103
|
+
- Bug is in infra/config: add a CI check that validates the config matches expectations.
|
|
104
|
+
- Bug is in third-party library: open an upstream issue AND add a defensive wrapper with a test for your defense.
|
|
105
|
+
|
|
106
|
+
**Common pitfall:** "I'll add the test later." You won't. Add it now or the bug returns.
|
|
107
|
+
|
|
108
|
+
**Done when:** Test exists, was committed in same commit as fix, fails on `git revert <fix>`.
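
The whole "done when" check can be rehearsed in a disposable repo. A minimal sketch, assuming `git` and a POSIX shell; `app.txt` and `regression.sh` are hypothetical stand-ins for real source and a real test:

```shell
# Toy repo: the fix and its regression test land in the SAME commit,
# and we then prove the test fails when the fix is absent.
set -e
dir=$(mktemp -d); cd "$dir"
git init -q
git config user.email "dev@example.com"; git config user.name "Dev"

printf 'buggy\n' > app.txt
git add .; git commit -q -m "feat(app): initial version (bug present)"

printf 'fixed\n' > app.txt                          # the fix
printf 'grep -q fixed app.txt\n' > regression.sh    # stand-in regression test
git add .; git commit -q -m "fix(app): correct output, add regression test"

sh regression.sh && echo "PASS with fix"

# Restore only the buggy code (keep the test) to show the test guards the fix.
git checkout HEAD~1 -- app.txt
sh regression.sh || echo "FAIL without fix: the test catches the bug"
git checkout HEAD -- app.txt
```

In a real suite the stand-in script would be a named test file living next to the code it guards, as described above.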

## Step 6 — Verify end-to-end

**Question:** Does the original report scenario now pass?

**Output:** Proof — manual verification, screenshot, log trace, scripted E2E run.

**How:**

- Re-read the original bug report.
- Run through it as if you're the reporter — same inputs, same path, same environment if possible.
- Capture evidence (screenshot, terminal output, log line) that the issue no longer reproduces.

**Why this step matters:**

- Unit tests passing ≠ the user-facing scenario working. The user's path may differ from the test's path.
- It confirms you fixed the user's bug, not just A bug.
- It catches "fixed it for case A but broke case B" regressions.

**Common pitfall:** Stopping at "tests pass" and skipping the user-facing verification. Tests are necessary, not sufficient.

**Done when:** You can confidently say "the original reporter would now see the expected behavior."

## After all six steps

The fix is ready to commit and ship:

- Commit message: describes the bug, root cause, and fix in 1-3 sentences.
- The regression test goes in the same commit as the fix.
- If a deeper fix was deferred, the follow-up issue is linked from the commit.
- E2E verification evidence is in the PR description (screenshot, log, run output).

This is what "fixed" looks like. Anything less is "in progress."
|
|
@@ -0,0 +1,52 @@

# Stop the line — when to drop everything for a bug

The Toyota factory metaphor: any worker can pull the cord and halt production when they see a defect. The reason isn't dramatic — it's economic. Defects compound downstream. Fixing one now is cheaper than fixing ten later, on top of all the work that built on the broken state.

In code, this means: when a real bug surfaces, you stop building features on top of unknown-state code.

## What "stop the line" actually requires

1. **Don't keep coding around it.** If a test is red, you don't merge other PRs that touch the same module until it's green.
2. **Don't ignore "flaky" tests.** Flakiness is a bug — usually a race, sometimes a fixture. "Re-run the suite" is not a fix.
3. **Don't downgrade priority without evidence.** "It's an edge case" requires evidence that the case is rare AND the impact is low. Often only one of those is true.
4. **Document the pause.** A bug discovered → a tracking issue or task created → a clear "what's blocked by this" note.

## When the rule bends

Stop-the-line is not absolute. It bends when:

- **The bug is in a parallel system you don't own.** Report it; continue your work. (Don't sit blocked waiting on someone else.)
- **Production isn't on fire AND a release is shipping in <24h.** Sometimes "ship the working subset, fix the broken one in the next release" is correct. But this requires explicit acknowledgment, not silent slipping.
- **The bug is informational, not blocking.** A console warning that doesn't affect behavior, log noise, a deprecated-API notice — those go on a backlog; they don't stop the line.
- **Reproduction needs production data.** Continue feature work in parallel branches; don't merge them until the repro is achieved or the scope is isolated.

## When the rule does NOT bend

- Tests are red and you don't know why → stop.
- A previously green build went red after a merge → stop and bisect.
- A user report can be reproduced → stop.
- "Build passes locally but fails CI" → stop. CI is closer to the truth.
- Coverage drops below threshold without an obvious reason → stop. (Was a test silently removed?)

## The cost of not stopping

Three failure modes when teams ignore stop-the-line:

1. **Compounding defects.** Bug A masks bug B; you fix A and now B surfaces in production. The user reports it as a regression of A's fix.
2. **Lost reproduction.** A bug seen at commit N is hard to reproduce by commit N+5 because the surrounding code moved. The fix becomes "rewrite this whole area" because nobody can isolate the original issue anymore.
3. **Test debt.** Red tests get marked `.skip` "temporarily" — and never come back. Three months later half the suite is skipped and nobody trusts it.

## How to communicate "I'm stopping the line"

- "Found a bug in X — pausing my current task to investigate. Will report status in 30 minutes."
- "I can reproduce the issue. Estimated fix path: Y. Do you want me to push through, or hand off?"
- "Stuck on localizing the bug for 60 minutes. Tried: A, B, C. Need fresh eyes."

Brief, factual, no apology. Stopping is the right call; you're naming it.

## Anti-patterns to avoid

- "Quick fix and we'll come back to it" — teams almost never come back to it. Either fix the root cause now or formally defer with a tracking issue.
- "It might be flaky" — investigate, don't speculate. Flakiness has a cause.
- Silently rolling back instead of investigating — you lose the chance to learn what broke.
- "Worked around it" with a `// TODO: actual fix` — the workaround is load-bearing within 3 weeks.
@@ -0,0 +1,120 @@
---
name: dw-git-discipline
description: Use when committing or opening a PR — applies trunk-based development, atomic commit discipline (one intent per commit, refactor separate from feature), conventional commit messages, and branch hygiene so history is bisectable and reviewable.
---

# Git Discipline

> **Inspired by** [`addyosmani/agent-skills/git-workflow-and-versioning`](https://github.com/addyosmani/agent-skills/tree/main/git-workflow-and-versioning) (MIT). Trunk-based pattern, atomic commit principles, and branch-hygiene patterns adapted from Addy Osmani's work; specifics rewritten for dev-workflow's commit and PR commands.

History is documentation. Bad commit hygiene corrupts the documentation; good hygiene makes future debugging cheap. This skill encodes the rules.

## Three core principles

### 1. One intent per commit

A commit answers ONE question: "what did I change and why?" If your commit message has the word "and" connecting two unrelated changes — split it.

Common splits:

- Refactor + feature → two commits (refactor first, feature second).
- Fix + style cleanup → two commits.
- Type fix + behavior change → two commits.
- Multiple bug fixes → one commit per bug.
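
The split is mechanical when the intents touch different files. A disposable-repo sketch (file names invented; when both intents touch the same file, `git add -p` stages individual hunks instead):

```shell
# Toy repo: two unrelated intents in the working tree become two atomic commits.
set -e
dir=$(mktemp -d); cd "$dir"
git init -q
git config user.email "dev@example.com"; git config user.name "Dev"
git commit -q --allow-empty -m "chore: init"

echo "extracted helper" > util.js     # the refactor
echo "new endpoint" > routes.js       # the feature

git add util.js                       # stage ONLY the refactor
git commit -q -m "refactor(util): extract shared helper"
git add routes.js                     # then the feature, separately
git commit -q -m "feat(routes): add endpoint"

git log --format=%s                   # two intents, two commits
```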

**Why this matters:** When something breaks 6 weeks later, `git bisect` returns the commit that introduced it. If that commit also did a rename, a refactor, and a feature, you've gained nothing. If it did one thing, you know what to revert.

See `references/atomic-commits-discipline.md` for the discipline in detail.

### 2. Trunk-based, short-lived branches

Long-lived branches diverge from `main` and become merge nightmares. The discipline:

- Branches live 1-3 days, a week at most.
- Merge or rebase from `main` daily to stay close to trunk.
- Incomplete work goes behind feature flags, not behind a multi-week branch.
- Small PRs (under ~400 lines of diff). If bigger, ask: can this be split into independently mergeable pieces?

See `references/trunk-based-pattern.md` for when this bends and when it doesn't.

### 3. Branch + commit message hygiene

- **Branch names:** `feat/<scope>`, `fix/<scope>`, `refactor/<scope>`, `chore/<scope>`. Lowercase, kebab-case, ≤40 chars.
- **Commit messages:** Conventional Commits (`type(scope): subject`) — `feat`, `fix`, `refactor`, `docs`, `chore`, `test`, `style`, `build`, `ci`, `perf`.
- **Subject line:** ≤72 chars, imperative mood ("add" not "added"), no trailing period.
- **Body (when needed):** explain WHY, not WHAT. The diff already shows what.
- **Footer:** breaking-change marker, issue references, co-author lines.
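
Assembled, a hygienic branch plus commit might look like this sketch (the scope, file name, and issue number are invented for illustration):

```shell
# Toy repo: conventional branch name, then a commit with subject, body, footer.
set -e
dir=$(mktemp -d); cd "$dir"
git init -q
git config user.email "dev@example.com"; git config user.name "Dev"
git commit -q --allow-empty -m "chore: init"

git checkout -q -b fix/cache-warmup-order      # type/<scope>, kebab-case

echo "warm-up first" > startup.txt
git add startup.txt
git commit -q \
  -m "fix(cache): run warm-up before handlers attach" \
  -m "Requests arriving before warm-up completed saw an empty cache and
returned an empty list. Reorder startup so warm-up finishes first." \
  -m "Refs: #123"

git log -1 --format=%B    # subject, blank line, body, blank line, footer
```

Each `-m` becomes its own paragraph, which is exactly the subject/body/footer layout Conventional Commits expects.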

See `references/branch-hygiene.md` for naming conventions and rebase-vs-merge guidance.

## What this skill enforces

When wired into `/dw-commit`, every commit must:

1. Have a single logical intent (one feature, one fix, one refactor — not mixed).
2. Pass lint + tests + build BEFORE the commit is created.
3. Use Conventional Commits format with the correct type/scope.
4. Have a body that explains WHY for non-trivial changes.
5. NOT skip pre-commit hooks (`--no-verify` is forbidden unless the user explicitly authorizes it).
6. NOT amend an already-pushed commit (history rewrites on shared branches break collaborators).

When wired into `/dw-generate-pr`, every PR must:

1. Have an explanatory description with a summary + test plan, not just a commit-list dump.
2. Stay reasonably scoped (large PRs are flagged with a split suggestion).
3. Have a branch that follows the naming conventions.
4. Reference an issue / PRD / spec when applicable.

## Quick reference: when to do what

| Situation | Action |
|-----------|--------|
| Mixed refactor + feature in working dir | Stage refactor → commit → stage feature → commit |
| Pre-commit hook fails | Investigate. Fix the root cause. NEVER `--no-verify` |
| Want to "tidy up" before PR | `git rebase -i` BEFORE pushing — never after |
| Already pushed and need to fix a message | `git revert` + new commit. Don't force-push to shared branches |
| Branch is 5 days old, tons of conflicts | Rebase from main daily; don't let drift compound |
| PR has 2,000 lines | Split. Identify natural seams: schema, backend, frontend, tests |
| Stuck mid-merge with conflicts | Resolve; don't `git checkout --theirs` blindly. The conflict is information |

## What this skill does NOT do

- It does not push, force-push, or create branches without an explicit user request.
- It does not amend commits without an explicit user request.
- It does not collapse multiple commits via `git rebase -i` on already-pushed branches.
- It does not enforce a specific commit count per PR — only that each commit is atomic.

## Integration with dev-workflow commands

- `/dw-commit` runs this skill — verifies lint/tests/build are green, drafts a Conventional Commits message, and splits commits if multiple intents are detected.
- `/dw-generate-pr` uses this skill to validate branch naming, PR body structure, and scope.
- `/dw-run-task` and `/dw-run-plan` follow the atomic-commit discipline when their executor commits work — one task = one commit (or one logical sub-task).

## Anti-patterns this skill prevents

- "WIP" commits getting merged to `main` (pre-PR cleanup is expected).
- "Fix typo" follow-up commits that should have been amended on the feature branch.
- Commits with hooks bypassed (`--no-verify`) — the root cause should be fixed instead.
- Long-lived branches that drift from main.
- PR descriptions that are just `git log` dumps.
- Force-pushes to shared branches.
- Commits that mix unrelated changes.

## When discipline bends

Real-world software can't always be perfect:

- **Hotfix to production:** atomicity still applies; hygiene bends only on subject-line conciseness when clarity demands it.
- **Massive auto-generated change** (lockfile, codemod): one commit is fine; mark it `chore(deps)` or `refactor(codemod)` clearly.
- **Reverts:** use `git revert <sha>` (adds a new commit that undoes the change, preserving history) over `git reset --hard` (rewrites history). Only revert is safe on shared branches.
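
The difference is easy to see in a throwaway repo (file and messages invented): `revert` adds a new commit on top, so shared history is never rewritten.

```shell
# Toy repo: undo a bad commit with revert, leaving history intact.
set -e
dir=$(mktemp -d); cd "$dir"
git init -q
git config user.email "dev@example.com"; git config user.name "Dev"

echo "v1" > app.txt; git add .; git commit -q -m "feat: v1"
echo "v2" > app.txt; git add .; git commit -q -m "feat: v2 (turns out broken)"

git revert --no-edit HEAD >/dev/null   # new commit restoring v1
git log --format=%s                    # three commits: the revert sits on top
cat app.txt                            # content is back to v1
```

A `git reset --hard HEAD~1` here would leave two commits and, on a shared branch, force collaborators to recover the deleted one.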

## Verification before committing

- [ ] Lint passes
- [ ] Tests pass
- [ ] Build passes
- [ ] One logical intent
- [ ] Conventional Commits subject
- [ ] Body explains WHY (when non-trivial)
- [ ] No `--no-verify`
- [ ] No amend of a pushed commit
- [ ] Branch name follows convention