gsd-opencode 1.3.31

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (68)
  1. package/bin/install.js +222 -0
  2. package/command/gsd/add-phase.md +207 -0
  3. package/command/gsd/complete-milestone.md +105 -0
  4. package/command/gsd/consider-issues.md +201 -0
  5. package/command/gsd/create-roadmap.md +115 -0
  6. package/command/gsd/discuss-milestone.md +47 -0
  7. package/command/gsd/discuss-phase.md +60 -0
  8. package/command/gsd/execute-plan.md +128 -0
  9. package/command/gsd/help.md +315 -0
  10. package/command/gsd/insert-phase.md +227 -0
  11. package/command/gsd/list-phase-assumptions.md +50 -0
  12. package/command/gsd/map-codebase.md +84 -0
  13. package/command/gsd/new-milestone.md +59 -0
  14. package/command/gsd/new-project.md +316 -0
  15. package/command/gsd/pause-work.md +122 -0
  16. package/command/gsd/plan-fix.md +205 -0
  17. package/command/gsd/plan-phase.md +67 -0
  18. package/command/gsd/progress.md +316 -0
  19. package/command/gsd/remove-phase.md +338 -0
  20. package/command/gsd/research-phase.md +91 -0
  21. package/command/gsd/resume-work.md +40 -0
  22. package/command/gsd/verify-work.md +71 -0
  23. package/get-shit-done/references/checkpoints.md +287 -0
  24. package/get-shit-done/references/continuation-format.md +255 -0
  25. package/get-shit-done/references/git-integration.md +254 -0
  26. package/get-shit-done/references/plan-format.md +428 -0
  27. package/get-shit-done/references/principles.md +157 -0
  28. package/get-shit-done/references/questioning.md +162 -0
  29. package/get-shit-done/references/research-pitfalls.md +215 -0
  30. package/get-shit-done/references/scope-estimation.md +172 -0
  31. package/get-shit-done/references/tdd.md +263 -0
  32. package/get-shit-done/templates/codebase/architecture.md +255 -0
  33. package/get-shit-done/templates/codebase/concerns.md +310 -0
  34. package/get-shit-done/templates/codebase/conventions.md +307 -0
  35. package/get-shit-done/templates/codebase/integrations.md +280 -0
  36. package/get-shit-done/templates/codebase/stack.md +186 -0
  37. package/get-shit-done/templates/codebase/structure.md +285 -0
  38. package/get-shit-done/templates/codebase/testing.md +480 -0
  39. package/get-shit-done/templates/config.json +18 -0
  40. package/get-shit-done/templates/context.md +161 -0
  41. package/get-shit-done/templates/continue-here.md +78 -0
  42. package/get-shit-done/templates/discovery.md +146 -0
  43. package/get-shit-done/templates/issues.md +32 -0
  44. package/get-shit-done/templates/milestone-archive.md +123 -0
  45. package/get-shit-done/templates/milestone-context.md +93 -0
  46. package/get-shit-done/templates/milestone.md +115 -0
  47. package/get-shit-done/templates/phase-prompt.md +303 -0
  48. package/get-shit-done/templates/project.md +184 -0
  49. package/get-shit-done/templates/research.md +529 -0
  50. package/get-shit-done/templates/roadmap.md +196 -0
  51. package/get-shit-done/templates/state.md +210 -0
  52. package/get-shit-done/templates/summary.md +273 -0
  53. package/get-shit-done/templates/uat-issues.md +143 -0
  54. package/get-shit-done/workflows/complete-milestone.md +643 -0
  55. package/get-shit-done/workflows/create-milestone.md +416 -0
  56. package/get-shit-done/workflows/create-roadmap.md +481 -0
  57. package/get-shit-done/workflows/discovery-phase.md +293 -0
  58. package/get-shit-done/workflows/discuss-milestone.md +236 -0
  59. package/get-shit-done/workflows/discuss-phase.md +247 -0
  60. package/get-shit-done/workflows/execute-phase.md +1625 -0
  61. package/get-shit-done/workflows/list-phase-assumptions.md +178 -0
  62. package/get-shit-done/workflows/map-codebase.md +434 -0
  63. package/get-shit-done/workflows/plan-phase.md +488 -0
  64. package/get-shit-done/workflows/research-phase.md +436 -0
  65. package/get-shit-done/workflows/resume-project.md +287 -0
  66. package/get-shit-done/workflows/transition.md +580 -0
  67. package/get-shit-done/workflows/verify-work.md +202 -0
  68. package/package.json +38 -0
@@ -0,0 +1,162 @@
+ <questioning_guide>
+ The initialization phase is dream extraction, not requirements gathering. You're helping the user discover and articulate what they want to build. This isn't a contract negotiation — it's collaborative thinking.
+
+ <philosophy>
+ **You are a thinking partner, not an interviewer.**
+
+ The user often has a fuzzy idea. Your job is to help them sharpen it. Ask questions that make them think "oh, I hadn't considered that" or "yes, that's exactly what I mean."
+
+ Don't interrogate. Collaborate.
+ </philosophy>
+
+ <critical_rule>
+ **ALL questions MUST use AskUserQuestion.**
+
+ Never ask questions inline as plain text. Every exploration question uses the AskUserQuestion tool with thoughtful options that help the user articulate their vision.
+
+ This applies to:
+ - Opening questions ("What do you want to build?")
+ - Follow-up questions ("You mentioned X — what would that look like?")
+ - Sharpening questions ("What's essential vs nice-to-have?")
+ - Boundary questions ("What's out of scope?")
+ - Decision gates ("Ready to proceed?")
+
+ The AskUserQuestion format helps users think by presenting concrete options to react to, rather than facing a blank text field.
+ </critical_rule>
+
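+ For illustration, a minimal sketch of the shape a single question takes. The field names (`header`, `question`, `options`) mirror the structures used throughout this guide and are illustrative, not a normative tool schema:
+
+ ```ts
+ // Illustrative only; the actual AskUserQuestion tool schema may differ.
+ interface Question {
+   header: string;    // short label, e.g. "Vision"
+   question: string;  // what the user is asked
+   options: string[]; // concrete choices, always with an escape hatch
+ }
+
+ const opening: Question = {
+   header: "Vision",
+   question: "What do you want to build?",
+   options: ["A CLI tool", "A web app", "A library", "Let me describe it"],
+ };
+ ```
+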
+ <conversation_arc>
+ **1. Open**
+
+ Use AskUserQuestion:
+ - header: "Vision"
+ - question: "What do you want to build?"
+ - options: Contextual starting points if available, otherwise broad categories + "Let me describe it"
+
+ Let them respond. Then follow up based on what they said.
+
+ **2. Follow the thread**
+
+ Whatever they said — dig into it. What excited them? What problem sparked this?
+
+ Use AskUserQuestion with options that probe what they mentioned:
+ - header: "[Topic they mentioned]"
+ - question: "You mentioned [X] — what would that actually look like?"
+ - options: 2-3 interpretations of what they might mean + "Something else"
+
+ **3. Sharpen the core**
+
+ Help them distinguish the essential from the nice-to-have.
+
+ Use AskUserQuestion:
+ - header: "Core"
+ - question: "If you could only nail one thing, what would it be?"
+ - options: Key features/aspects they've mentioned + "All equally important" + "Something else"
+
+ **4. Find the boundaries**
+
+ What is this NOT? Explicit exclusions prevent scope creep later.
+
+ Use AskUserQuestion:
+ - header: "Scope"
+ - question: "What's explicitly NOT in v1?"
+ - options: Things that might be tempting to include + "Nothing specific" + "Let me list them"
+
+ **5. Ground in reality**
+
+ Only ask about constraints that actually exist. Don't invent concerns.
+
+ Use AskUserQuestion:
+ - header: "Constraints"
+ - question: "Any hard constraints?"
+ - options: Common constraint types relevant to context + "None" + "Yes, let me explain"
+ </conversation_arc>
+
+ <good_vs_bad>
+ **BAD — Inline text questions:**
+ - Asking "What is your target audience?" as plain text
+ - Free-form "Tell me more about X" without options
+ - Any question that leaves the user staring at a blank input
+
+ **GOOD — AskUserQuestion with options:**
+ - header: "Audience"
+ - question: "Who is this for?"
+ - options: ["Just me", "My team", "Public users", "Let me describe"]
+
+ **BAD — Corporate speak:**
+ - "What are your success criteria?"
+ - "What's your budget?"
+ - "Have you done X before?" (irrelevant — Claude builds)
+
+ **GOOD — Concrete options that help them think:**
+ - header: "Done"
+ - question: "How will you know this is working?"
+ - options: ["I'm using it daily", "Specific metric improves", "Replaces current workflow", "Let me describe"]
+
+ **BAD — Checklist walking:**
+ - Ask about audience → ask about constraints → ask about tech stack (regardless of what the user said)
+
+ **GOOD — Following threads with targeted options:**
+ - User mentions frustration → AskUserQuestion with specific frustration interpretations as options → their selection reveals the core value prop
+ </good_vs_bad>
+
+ <probing_techniques>
+ When answers are vague, don't accept them. Probe with AskUserQuestion:
+
+ **"Make it good"** →
+ - header: "Good"
+ - question: "What does 'good' mean here?"
+ - options: ["Fast", "Beautiful", "Simple", "Reliable", "Let me describe"]
+
+ **"Users"** →
+ - header: "Users"
+ - question: "Which users?"
+ - options: ["Just me", "My team", "Specific type of person", "Let me describe"]
+
+ **"It should be easy to use"** →
+ - header: "Easy"
+ - question: "Easy how?"
+ - options: ["Fewer clicks", "No learning curve", "Works on mobile", "Let me describe"]
+
+ Specifics are everything. Vague in = vague out.
+ </probing_techniques>
+
+ <coverage_check>
+ By the end of questioning, you should understand:
+
+ - [ ] What they're building (the thing)
+ - [ ] Why it needs to exist (the motivation)
+ - [ ] Who it's for (even if just themselves)
+ - [ ] What "done" looks like (measurable outcome)
+ - [ ] What's NOT in scope (boundaries)
+ - [ ] Any real constraints (tech, timeline, compatibility)
+ - [ ] What exists already (greenfield vs brownfield)
+
+ If gaps remain, weave questions naturally into the conversation. Don't suddenly switch to checklist mode.
+ </coverage_check>
+
+ <decision_gate>
+ When you feel you understand the vision, use AskUserQuestion:
+
+ - header: "Ready?"
+ - question: "Ready to create PROJECT.md, or explore more?"
+ - options (ALL THREE REQUIRED):
+   - "Create PROJECT.md" - Finalize and continue
+   - "Ask more questions" - I'll dig into areas we haven't covered
+   - "Let me add context" - You have more to share
+
+ If "Ask more questions" → identify gaps from coverage check → ask naturally → return to gate.
+
+ Loop until "Create PROJECT.md" is selected.
+ </decision_gate>
+
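+ Conceptually, the gate is a simple loop. A sketch of the control flow, with `askUserQuestion` as a hypothetical stand-in for the actual tool call, assumed to resolve to the selected option:
+
+ ```ts
+ // Hypothetical stand-in for the AskUserQuestion tool.
+ declare function askUserQuestion(q: {
+   header: string;
+   question: string;
+   options: string[];
+ }): Promise<string>;
+
+ async function decisionGate(): Promise<void> {
+   for (;;) {
+     const choice = await askUserQuestion({
+       header: "Ready?",
+       question: "Ready to create PROJECT.md, or explore more?",
+       options: ["Create PROJECT.md", "Ask more questions", "Let me add context"],
+     });
+     if (choice === "Create PROJECT.md") break; // finalize and continue
+     // Otherwise: fill coverage gaps or absorb new context, then re-ask.
+   }
+ }
+ ```
+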
+ <anti_patterns>
+ - **Interrogation** - Firing questions without building on answers
+ - **Checklist walking** - Going through domains regardless of conversation flow
+ - **Corporate speak** - "What are your success criteria?" "Who are your stakeholders?"
+ - **Rushing** - Minimizing questions to get to "the work"
+ - **Assuming** - Filling gaps with assumptions instead of asking
+ - **User skills** - NEVER ask about the user's technical experience. Claude builds — the user's skills are irrelevant.
+ - **Premature constraints** - Asking about tech stack before understanding the idea
+ - **Shallow acceptance** - Taking vague answers without probing for specifics
+ </anti_patterns>
+ </questioning_guide>
@@ -0,0 +1,215 @@
+ <research_pitfalls>
+
+ <purpose>
+ This document catalogs research mistakes discovered in production use, providing specific patterns to avoid and verification strategies to prevent recurrence.
+ </purpose>
+
+ <known_pitfalls>
+
+ <pitfall_config_scope>
+ **What**: Assuming global configuration means no project scoping exists
+ **Example**: Concluding "MCP servers are configured GLOBALLY only" while missing project-scoped `.mcp.json`
+ **Why it happens**: Not explicitly checking all known configuration patterns
+ **Prevention**:
+ ```xml
+ <verification_checklist>
+ **CRITICAL**: Verify ALL configuration scopes:
+ □ User/global scope - System-wide configuration
+ □ Project scope - Project-level configuration files
+ □ Local scope - Project-specific user overrides
+ □ Workspace scope - IDE/tool workspace settings
+ □ Environment scope - Environment variables
+ </verification_checklist>
+ ```
+ </pitfall_config_scope>
+
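+ As a concrete illustration, a sketch that enumerates candidate configuration locations before drawing any conclusion. The paths shown are examples for an MCP-style setup, not an authoritative list:
+
+ ```ts
+ import { existsSync } from "node:fs";
+ import { join } from "node:path";
+ import { homedir } from "node:os";
+
+ // Example scopes for an MCP-style setup; adjust paths for the tool under study.
+ const candidates = [
+   { scope: "user/global", path: join(homedir(), ".claude.json") },
+   { scope: "project", path: join(process.cwd(), ".mcp.json") },
+   { scope: "local", path: join(process.cwd(), ".claude", "settings.local.json") },
+ ];
+
+ for (const { scope, path } of candidates) {
+   console.log(`${scope}: ${existsSync(path) ? "found" : "not found"} (${path})`);
+ }
+ ```
+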
+ <pitfall_search_vagueness>
+ **What**: Asking researchers to "search for documentation" without specifying where
+ **Example**: "Research MCP documentation" → finds an outdated community blog instead of the official docs
+ **Why it happens**: Vague research instructions don't specify exact sources
+ **Prevention**:
+ ```xml
+ <sources>
+ Official sources (use WebFetch):
+ - https://exact-url-to-official-docs
+ - https://exact-url-to-api-reference
+
+ Search queries (use WebSearch):
+ - "specific search query {current_year}"
+ - "another specific query {current_year}"
+ </sources>
+ ```
+ </pitfall_search_vagueness>
+
+ <pitfall_deprecated_features>
+ **What**: Finding archived/old documentation and concluding the feature doesn't exist
+ **Example**: Finding 2022 docs saying "feature not supported" when the current version added it
+ **Why it happens**: Not checking multiple sources or recent updates
+ **Prevention**:
+ ```xml
+ <verification_checklist>
+ □ Check current official documentation
+ □ Review changelog/release notes for recent updates
+ □ Verify version numbers and publication dates
+ □ Cross-reference multiple authoritative sources
+ </verification_checklist>
+ ```
+ </pitfall_deprecated_features>
+
+ <pitfall_tool_variations>
+ **What**: Conflating capabilities across different tools/environments
+ **Example**: "Claude Desktop supports X" ≠ "Claude Code supports X"
+ **Why it happens**: Not explicitly checking each environment separately
+ **Prevention**:
+ ```xml
+ <verification_checklist>
+ □ Claude Desktop capabilities
+ □ Claude Code capabilities
+ □ VS Code extension capabilities
+ □ API/SDK capabilities
+ Document which environment supports which features
+ </verification_checklist>
+ ```
+ </pitfall_tool_variations>
+
+ <pitfall_negative_claims>
+ **What**: Making definitive "X is not possible" statements without official source verification
+ **Example**: "Folder-scoped MCP configuration is not supported" (missing `.mcp.json`)
+ **Why it happens**: Drawing conclusions from absence of evidence rather than evidence of absence
+ **Prevention**:
+ ```xml
+ <critical_claims_audit>
+ For any "X is not possible" or "Y is the only way" statement:
+ - [ ] Is this verified by official documentation stating it explicitly?
+ - [ ] Have I checked for recent updates that might change this?
+ - [ ] Have I verified all possible approaches/mechanisms?
+ - [ ] Am I confusing "I didn't find it" with "it doesn't exist"?
+ </critical_claims_audit>
+ ```
+ </pitfall_negative_claims>
+
+ <pitfall_missing_enumeration>
+ **What**: Investigating open-ended scope without enumerating known possibilities first
+ **Example**: "Research configuration options" instead of listing specific options to verify
+ **Why it happens**: Not creating an explicit checklist of items to investigate
+ **Prevention**:
+ ```xml
+ <verification_checklist>
+ Enumerate ALL known options FIRST:
+ □ Option 1: [specific item]
+ □ Option 2: [specific item]
+ □ Option 3: [specific item]
+ □ Check for additional unlisted options
+
+ For each option above, document:
+ - Existence (confirmed/not found/unclear)
+ - Official source URL
+ - Current status (active/deprecated/beta)
+ </verification_checklist>
+ ```
+ </pitfall_missing_enumeration>
+
+ <pitfall_single_source>
+ **What**: Relying on a single source for critical claims
+ **Example**: Using only a Stack Overflow answer from 2021 for current best practices
+ **Why it happens**: Not cross-referencing multiple authoritative sources
+ **Prevention**:
+ ```xml
+ <source_verification>
+ For critical claims, require multiple sources:
+ - [ ] Official documentation (primary)
+ - [ ] Release notes/changelog (for currency)
+ - [ ] Additional authoritative source (for verification)
+ - [ ] Contradiction check (ensure sources agree)
+ </source_verification>
+ ```
+ </pitfall_single_source>
+
+ <pitfall_assumed_completeness>
+ **What**: Assuming search results are complete and authoritative
+ **Example**: The first Google result is outdated but assumed current
+ **Why it happens**: Not verifying publication dates and source authority
+ **Prevention**:
+ ```xml
+ <source_verification>
+ For each source consulted:
+ - [ ] Publication/update date verified (prefer recent/current)
+ - [ ] Source authority confirmed (official docs, not blogs)
+ - [ ] Version relevance checked (matches current version)
+ - [ ] Multiple search queries tried (not just one)
+ </source_verification>
+ ```
+ </pitfall_assumed_completeness>
+ </known_pitfalls>
+
+ <red_flags>
+
+ <red_flag_zero_not_found>
+ **Warning**: Every investigation succeeds perfectly
+ **Problem**: Real research encounters dead ends, ambiguity, and unknowns
+ **Action**: Expect honest reporting of limitations, contradictions, and gaps
+ </red_flag_zero_not_found>
+
+ <red_flag_no_confidence>
+ **Warning**: All findings presented as equally certain
+ **Problem**: Can't distinguish verified facts from educated guesses
+ **Action**: Require confidence levels (High/Medium/Low) for key findings
+ </red_flag_no_confidence>
+
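+ One way to make this auditable is to record each key finding with an explicit confidence level and its sources. A minimal sketch; the field names are illustrative:
+
+ ```ts
+ type Confidence = "high" | "medium" | "low";
+
+ interface Finding {
+   claim: string;
+   confidence: Confidence;
+   sources: string[]; // URLs, official documentation first
+   verifiedOn: string; // ISO date the sources were last checked
+ }
+
+ const example: Finding = {
+   claim: "Project-scoped MCP config is supported via .mcp.json",
+   confidence: "high",
+   sources: ["https://example.com/official-docs"], // placeholder URL
+   verifiedOn: "2025-01-01",
+ };
+ ```
+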
+ <red_flag_missing_urls>
+ **Warning**: "According to documentation..." without a specific URL
+ **Problem**: Can't verify claims or check for updates
+ **Action**: Require actual URLs for all official documentation claims
+ </red_flag_missing_urls>
+
+ <red_flag_no_evidence>
+ **Warning**: "X cannot do Y" or "Z is the only way" without citation
+ **Problem**: Strong claims require strong evidence
+ **Action**: Flag for verification against official sources
+ </red_flag_no_evidence>
+
+ <red_flag_incomplete_enum>
+ **Warning**: Verification checklist lists 4 items, output covers 2
+ **Problem**: Systematic gaps in coverage
+ **Action**: Ensure all enumerated items are addressed or marked "not found"
+ </red_flag_incomplete_enum>
+ </red_flags>
+
+ <continuous_improvement>
+
+ When research gaps occur:
+
+ 1. **Document the gap**
+    - What was missed or incorrect?
+    - What was the actual correct information?
+    - What was the impact?
+
+ 2. **Root cause analysis**
+    - Why wasn't it caught?
+    - Which verification step would have prevented it?
+    - What pattern does this reveal?
+
+ 3. **Update this document**
+    - Add a new pitfall entry
+    - Update relevant checklists
+    - Share the lesson learned
+ </continuous_improvement>
+
+ <quick_reference>
+
+ Before submitting research, verify:
+
+ - [ ] All enumerated items investigated (not just some)
+ - [ ] Negative claims verified with official docs
+ - [ ] Multiple sources cross-referenced for critical claims
+ - [ ] URLs provided for all official documentation
+ - [ ] Publication dates checked (prefer recent/current)
+ - [ ] Tool/environment-specific variations documented
+ - [ ] Confidence levels assigned honestly
+ - [ ] Assumptions distinguished from verified facts
+ - [ ] "What might I have missed?" review completed
+
+ **Living Document**: Update after each significant research gap
+ **Lessons From**: MCP configuration research gap (missed `.mcp.json`)
+ </quick_reference>
+ </research_pitfalls>
@@ -0,0 +1,172 @@
+ <scope_estimation>
+ Plans must maintain consistent quality from the first task to the last. This requires understanding quality degradation and splitting aggressively.
+
+ <quality_insight>
+ Claude degrades when it *perceives* context pressure and enters "completion mode."
+
+ | Context Usage | Quality | Claude's State |
+ |---------------|---------|----------------|
+ | 0-30% | PEAK | Thorough, comprehensive |
+ | 30-50% | GOOD | Confident, solid work |
+ | 50-70% | DEGRADING | Efficiency mode begins |
+ | 70%+ | POOR | Rushed, minimal |
+
+ **The 40-50% inflection point:** Claude sees context mounting and thinks "I'd better conserve now." Result: "I'll complete the remaining tasks more concisely" = quality crash.
+
+ **The rule:** Stop BEFORE quality degrades, not at the context limit.
+ </quality_insight>
+
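+ The same thresholds, expressed as a lookup. The bands are this document's heuristic, not measured constants:
+
+ ```ts
+ type Quality = "PEAK" | "GOOD" | "DEGRADING" | "POOR";
+
+ // Thresholds from the table above (heuristic, not measured).
+ function qualityBand(contextUsedPct: number): Quality {
+   if (contextUsedPct < 30) return "PEAK";
+   if (contextUsedPct < 50) return "GOOD";
+   if (contextUsedPct < 70) return "DEGRADING";
+   return "POOR";
+ }
+ ```
+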
+ <context_target>
+ **Plans should complete within ~50% of the context window.**
+
+ Why 50%, not 80%?
+ - No context anxiety possible
+ - Quality maintained start to finish
+ - Room for unexpected complexity
+ - If you target 80%, you've already spent 40% in degradation mode
+ </context_target>
+
+ <task_rule>
+ **Each plan: 2-3 tasks maximum. Stay under 50% context.**
+
+ | Task Complexity | Tasks/Plan | Context/Task | Total |
+ |-----------------|------------|--------------|-------|
+ | Simple (CRUD, config) | 3 | ~10-15% | ~30-45% |
+ | Complex (auth, payments) | 2 | ~20-30% | ~40-50% |
+ | Very complex (migrations, refactors) | 1-2 | ~30-40% | ~30-50% |
+
+ **When in doubt: Default to 2 tasks.** Better to have an extra plan than degraded quality.
+ </task_rule>
+
+ <tdd_plans>
+ **TDD features get their own plans. Target ~40% context.**
+
+ TDD requires 2-3 execution cycles (RED → GREEN → REFACTOR), each with file reads, test runs, and potential debugging. This is fundamentally heavier than linear task execution.
+
+ | TDD Feature Complexity | Context Usage |
+ |------------------------|---------------|
+ | Simple utility function | ~25-30% |
+ | Business logic with edge cases | ~35-40% |
+ | Complex algorithm | ~40-50% |
+
+ **One feature per TDD plan.** If features are trivial enough to batch, they're trivial enough to skip TDD.
+
+ **Why TDD plans are separate:**
+ - TDD consumes 40-50% context for a single feature
+ - Dedicated plans ensure full quality throughout RED → GREEN → REFACTOR
+ - Each TDD feature gets fresh context, peak quality
+
+ See `~/.config/opencode/get-shit-done/references/tdd.md` for TDD plan structure.
+ </tdd_plans>
+
+ <split_signals>
+
+ <always_split>
+ - **More than 3 tasks** - Even if tasks seem small
+ - **Multiple subsystems** - DB + API + UI = separate plans
+ - **Any task with >5 file modifications** - Split by file groups
+ - **Checkpoint + implementation work** - Checkpoints in one plan, implementation afterward in a separate plan
+ - **Discovery + implementation** - DISCOVERY.md in one plan, implementation in another
+ </always_split>
+
+ <consider_splitting>
+ - Estimated >5 files modified total
+ - Complex domains (auth, payments, data modeling)
+ - Any uncertainty about approach
+ - Natural semantic boundaries (Setup → Core → Features)
+ </consider_splitting>
+ </split_signals>
+
+ <splitting_strategies>
+ **By subsystem:** Auth → 01: DB models, 02: API routes, 03: Protected routes, 04: UI components
+
+ **By dependency:** Payments → 01: Stripe setup, 02: Subscription logic, 03: Frontend integration
+
+ **By complexity:** Dashboard → 01: Layout shell, 02: Data fetching, 03: Visualization
+
+ **By verification:** Deploy → 01: Vercel setup (checkpoint), 02: Env config (auto), 03: CI/CD (checkpoint)
+ </splitting_strategies>
+
+ <anti_patterns>
+ **Bad - Comprehensive plan:**
+ ```
+ Plan: "Complete Authentication System"
+ Tasks: 8 (models, migrations, API, JWT, middleware, hashing, login form, register form)
+ Result: Tasks 1-3 good, tasks 4-5 degrading, tasks 6-8 rushed
+ ```
+
+ **Good - Atomic plans:**
+ ```
+ Plan 1: "Auth Database Models" (2 tasks)
+ Plan 2: "Auth API Core" (3 tasks)
+ Plan 3: "Auth API Protection" (2 tasks)
+ Plan 4: "Auth UI Components" (2 tasks)
+ Each: 30-40% context, peak quality, atomic commits (2-3 task commits + 1 metadata commit)
+ ```
+ </anti_patterns>
+
+ <estimating_context>
+ | Files Modified | Context Impact |
+ |----------------|----------------|
+ | 0-3 files | ~10-15% (small) |
+ | 4-6 files | ~20-30% (medium) |
+ | 7+ files | ~40%+ (large - split) |
+
+ | Complexity | Context/Task |
+ |------------|--------------|
+ | Simple CRUD | ~15% |
+ | Business logic | ~25% |
+ | Complex algorithms | ~40% |
+ | Domain modeling | ~35% |
+
+ **2 tasks:** Simple ~30%, Medium ~50%, Complex ~80% (split)
+ **3 tasks:** Simple ~45%, Medium ~75% (risky), Complex ~120% (impossible)
+ </estimating_context>
+
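+ Putting the two tables together, a back-of-the-envelope estimator. The percentages are the rough midpoints above; treat the output as a split/no-split signal, not a prediction:
+
+ ```ts
+ // Rough per-task context cost by complexity, from the table above.
+ const contextPerTask = {
+   simple: 15,   // simple CRUD
+   business: 25, // business logic
+   complex: 40,  // complex algorithms
+   domain: 35,   // domain modeling
+ } as const;
+
+ function planContext(tasks: (keyof typeof contextPerTask)[]): number {
+   return tasks.reduce((sum, t) => sum + contextPerTask[t], 0);
+ }
+
+ // 2 complex tasks ≈ 80%, well over the 50% target: split the plan.
+ console.log(planContext(["complex", "complex"])); // 80
+ ```
+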
+ <depth_calibration>
+ **Depth controls compression tolerance, not artificial inflation.**
+
+ | Depth | Typical Phases | Typical Plans/Phase | Tasks/Plan |
+ |-------|----------------|---------------------|------------|
+ | Quick | 3-5 | 1-3 | 2-3 |
+ | Standard | 5-8 | 3-5 | 2-3 |
+ | Comprehensive | 8-12 | 5-10 | 2-3 |
+
+ Tasks/plan is CONSTANT at 2-3. The 50% context rule applies universally.
+
+ **Key principle:** Derive from actual work. Depth determines how aggressively you combine things, not a target to hit.
+
+ - Comprehensive auth = 8 plans (because auth genuinely has 8 concerns)
+ - Comprehensive "add favicon" = 1 plan (because that's all it is)
+
+ Don't pad small work to hit a number. Don't compress complex work to look efficient.
+
+ **Comprehensive depth example:**
+ Auth system at comprehensive depth = 8 plans (not 3 big ones):
+ - 01: DB models (2 tasks)
+ - 02: Password hashing (2 tasks)
+ - 03: JWT generation (2 tasks)
+ - 04: JWT validation middleware (2 tasks)
+ - 05: Login endpoint (2 tasks)
+ - 06: Register endpoint (2 tasks)
+ - 07: Protected route patterns (2 tasks)
+ - 08: Auth UI components (3 tasks)
+
+ Each plan: fresh context, peak quality. More plans = more thoroughness, same quality per plan.
+ </depth_calibration>
+
+ <summary>
+ **2-3 tasks, 50% context target:**
+ - All tasks: Peak quality
+ - Git: Atomic per-task commits (each task = 1 commit, plan = 1 metadata commit)
+ - Autonomous plans: Subagent execution (fresh context)
+
+ **The principle:** Aggressive atomicity. More plans, smaller scope, consistent quality.
+
+ **The rule:** If in doubt, split. Quality over consolidation. Always.
+
+ **Depth rule:** Depth increases plan COUNT, never plan SIZE.
+
+ **Commit rule:** Each plan produces 3-4 commits total (2-3 task commits + 1 metadata commit). More granular history = better observability for Claude.
+ </summary>
+ </scope_estimation>