get-shit-done-cc 1.1.2 → 1.2.0

This diff shows the changes between publicly available package versions as they appear in their respective public registries, and is provided for informational purposes only.
@@ -1,557 +1,451 @@
  # Research Subagent Prompts

  <overview>
- Prompt templates for subagents that research domain ecosystems before roadmap creation.
+ Prompt templates for subagents that research implementation context before roadmap creation.
+
+ **Critical understanding:** This research is NOT for human decision-making.
+ This is CONTEXT INJECTION so Claude Code implements correctly.

  Each subagent:
- - Researches ONE category (ecosystem, architecture, pitfalls, or standards)
+ - Receives a research manifest extracted from PROJECT.md
+ - Researches ONLY the specific features, constraints, and questions in that manifest
+ - Outputs HIGH-CONFIDENCE information only (no padding)
  - Writes directly to `.planning/research/{category}.md`
- - Uses WebSearch and Context7 for current information
- - Cross-verifies findings with authoritative sources
+ - Formats output for Claude Code to consume during implementation

- These prompts are used by the research-project workflow when spawning Task tool subagents.
+ **Quality bar:** Will this context make Claude Code generate correct, modern code?
  </overview>

- <ecosystem_subagent_prompt>
- ## Ecosystem Research Subagent
+ <stack_subagent_prompt>
+ ## Stack Research Subagent

- Use this template for spawning ecosystem research subagents:
+ Use this template for spawning stack research subagents:

  ```
  <subagent_prompt>
  ## Objective
- Research the library/framework ecosystem for {domain} and write findings to .planning/research/ecosystem.md

- ## Domain Context
- {Paste relevant sections from PROJECT.md describing what's being built}
+ Research and document the libraries/tools needed for THIS specific project.
+ Write findings to .planning/research/stack.md

- ## Your Assignment
- File: .planning/research/ecosystem.md
- Category: ecosystem
- Purpose: Map the libraries, frameworks, and tools available for this domain
-
- ## Research Questions
- Answer these questions through web research:
- 1. What are the go-to libraries for this problem space?
- 2. Which frameworks are actively maintained (commits in last 12 months)?
- 3. What's the "standard stack" that domain experts use?
- 4. What should NOT be hand-rolled because existing solutions exist?
- 5. What are the version requirements and compatibility considerations?
+ ## Critical Context

- ## Research Requirements
+ **This research is for Claude Code, not humans.**

- You MUST use WebSearch and Context7 to get current information.
+ You are providing context injection so Claude Code implements correctly. Claude's training data may have outdated library versions, deprecated APIs, or old patterns. Your research overrides stale knowledge with current, accurate information.

- For EACH library/tool discovered:
- 1. Verify actively maintained (GitHub commits in last 12 months)
- 2. Get current version number
- 3. Understand when to use vs alternatives
- 4. Find official documentation and getting started guides
- 5. Note any critical dependencies or compatibility issues
+ **Quality bar:** Only include information that will make Claude generate correct, modern code.

- **Search queries to run:**
- - "{domain} best libraries 2024 2025"
- - "{domain} framework comparison"
- - "{specific problem from PROJECT.md} library recommendation"
- - "{domain} standard stack production"
+ ## Research Manifest

- **Red flags (reject these):**
- - No updates in 12+ months
- - Deprecated or archived repositories
- - Pre-2023 recommendations without verification
+ {Paste the research manifest extracted from PROJECT.md}

- ## Output Format
+ ## Your Assignment

- Write to .planning/research/ecosystem.md using this structure:
+ For EACH feature in the manifest:
+ 1. What library/tool should Claude use?
+ 2. What's the current version?
+ 3. What's the correct import/setup?

- # Ecosystem: {Domain}
+ For EACH constraint in the manifest:
+ 1. Does the recommended library work within this constraint?
+ 2. Any compatibility issues?

- **Category:** ecosystem
- **Domain:** {domain}
- **Researched:** {today's date}
- **Confidence:** {high | medium | low}
+ ## Research Requirements

- ## Research Summary
+ Use WebSearch to verify CURRENT information (2024-2025).

- {2-3 sentences on what was found and the recommended direction}
+ **INCLUDE only if:**
+ - Actively maintained (commits in last 12 months)
+ - High confidence this is the right choice
+ - Directly relevant to a feature in the manifest

- ## Findings
+ **EXCLUDE:**
+ - Low or medium confidence options
+ - "Might be useful" alternatives
+ - Deprecated or unmaintained libraries
+ - Generic options not tied to manifest features

- ### {Library/Framework 1}
+ ## Output Format

- {What it is and what it does}
+ Write to .planning/research/stack.md:

- **Source:** {URL}
- **Version:** {current version}
- **Confidence:** {high | medium | low}
- **Why consider:** {Relevance to project}
- **Tradeoffs:** {Limitations or considerations}
+ ```markdown
+ # Stack: [Project Name]

- ### {Library/Framework 2}
+ **Purpose:** Libraries and tools for Claude Code to use during implementation
+ **Researched:** [date]

- {Same structure}
+ ## [Feature 1 from Scope]

- {... continue for each significant option ...}
+ **Use:** [Library Name] v[X.Y.Z]
+ **Why:** [One sentence - why this for this feature]

- ## Recommendations
+ ```[language]
+ // Setup / import
+ [actual code]
+ ```

- ### Use
+ **Docs:** [URL to current documentation]

- - **{Library A}** - {One-line reason, specific use case}
- - **{Library B}** - {One-line reason, specific use case}
+ ## [Feature 2 from Scope]

- ### Avoid
+ **Use:** [Library Name] v[X.Y.Z]
+ **Why:** [One sentence]

- - **{Library X}** - {One-line reason}
- - **Hand-rolling {Y}** - {Existing solution available}
+ ```[language]
+ // Setup / import
+ [actual code]
+ ```

- ### Defer Decision
+ **Docs:** [URL]

- - **{Topic}** - {Why more investigation needed}
+ ## [Continue for each feature...]

- ## Sources
+ ## Constraint Compatibility

- | Source | Type | Confidence | Last Verified |
- |--------|------|------------|---------------|
- | {URL} | {official docs / github / blog} | {high/medium/low} | {today} |
+ | Constraint | Status | Notes |
+ |------------|--------|-------|
+ | [constraint 1] | ✓ Compatible | [brief note] |
+ | [constraint 2] | ✓ Compatible | [brief note] |

- ## Open Questions
+ ## Unanswered

- - {Question that couldn't be resolved}
- - {Question for roadmap planning}
+ - [Any features where no clear library choice exists]
+ - [Any constraints that couldn't be satisfied]
+ ```

- ---
- *Generated by research-project subagent*
- *Category: ecosystem*
+ ## Quality Checklist

- ## Quality Criteria
- - Include at least 3-5 library/framework options
- - Compare options (don't just list them)
- - Every recommendation has a specific reason
- - Verify all versions are current
- - Note what NOT to hand-roll
+ Before writing output, verify:
+ - [ ] Every feature from manifest has a library recommendation
+ - [ ] Every library is actively maintained
+ - [ ] Current versions specified (not "latest")
+ - [ ] Setup code is current syntax
+ - [ ] No low-confidence padding included
  </subagent_prompt>
  ```
- </ecosystem_subagent_prompt>
+ </stack_subagent_prompt>

- <architecture_subagent_prompt>
- ## Architecture Research Subagent
+ <implementation_subagent_prompt>
+ ## Implementation Research Subagent

- Use this template for spawning architecture research subagents:
+ Use this template for spawning implementation research subagents:

  ```
  <subagent_prompt>
  ## Objective
- Research architecture patterns for {domain} projects and write findings to .planning/research/architecture.md

- ## Domain Context
- {Paste relevant sections from PROJECT.md describing what's being built}
-
- ## Your Assignment
- File: .planning/research/architecture.md
- Category: architecture
- Purpose: Document standard project structure and architectural patterns
-
- ## Research Questions
- Answer these questions through web research:
- 1. How do experts structure projects in this domain?
- 2. What component boundaries work best?
- 3. What patterns prevent common problems?
- 4. How does data flow in typical implementations?
- 5. What directory structure do production projects use?
-
- ## Research Requirements
-
- You MUST use WebSearch and Context7 to get current information.
-
- For EACH pattern discovered:
- 1. Find real-world examples (GitHub repos, case studies)
- 2. Understand the tradeoffs
- 3. Note when it applies vs when it doesn't
- 4. Find implementation guidelines
-
- **Search queries to run:**
- - "{domain} project structure best practices"
- - "{domain} architecture patterns"
- - "{specific problem from PROJECT.md} implementation patterns"
- - "{domain} component organization"
- - "{domain} example github repo structure"
-
- ## Output Format
+ Research and document CURRENT implementation patterns for this project's stack.
+ Write findings to .planning/research/implementation.md

- Write to .planning/research/architecture.md using this structure:
+ ## Critical Context

- # Architecture: {Domain}
+ **This research is for Claude Code, not humans.**

- **Category:** architecture
- **Domain:** {domain}
- **Researched:** {today's date}
- **Confidence:** {high | medium | low}
+ Claude may generate outdated code patterns. Your job is to provide:
+ - Current API syntax (not deprecated alternatives)
+ - Working code examples (not pseudocode)
+ - "Do this, not that" corrections

- ## Research Summary
+ **Quality bar:** Claude reads this file, then generates correct modern code.

- {2-3 sentences on recommended architectural approach}
+ ## Research Manifest

- ## Findings
+ {Paste the research manifest extracted from PROJECT.md}

- ### Project Structure
-
- Recommended directory layout:
- ```
- {concrete directory structure with comments}
- ```
-
- **Source:** {Reference project or documentation}
- **Rationale:** {Why this structure works}
-
- ### {Pattern 1 Name}
-
- **When to use:** {Specific scenarios}
- **Implementation:** {How to implement}
- **Example:** {Concrete code example if applicable}
- **Tradeoffs:** {What you give up}
-
- ### {Pattern 2 Name}
-
- {Same structure}
-
- ### Data Flow
-
- {How data moves through the system}
+ ## Your Assignment

- ### Component Boundaries
+ For the stack chosen in stack.md (or infer from manifest):
+ 1. What are the CURRENT API patterns?
+ 2. What code does Claude need to generate correctly?
+ 3. What deprecated patterns might Claude default to?

- {What should be separate vs combined}
+ For Open Questions in the manifest:
+ 1. Research and provide direct answers
+ 2. If unanswerable, explain why

- ## Recommendations
+ ## Research Requirements

- ### Adopt
+ Use WebSearch and Context7 for current documentation.

- - **{Pattern A}** - {Why it fits this project}
- - **{Structure B}** - {Why it fits this project}
+ **INCLUDE:**
+ - Actual code examples with current syntax
+ - "Do this (current)" vs "Not this (deprecated)" comparisons
+ - Version-specific patterns (as of 2024-2025)
+ - Answers to Open Questions from manifest

- ### Avoid
+ **EXCLUDE:**
+ - Theoretical explanations without code
+ - Multiple options without recommendation
+ - Old patterns from outdated tutorials
+ - Generic advice not specific to this project

- - **{Anti-pattern X}** - {Why it fails}
- - **{Premature abstraction Y}** - {Why to defer}
+ ## Output Format

- ## Sources
+ Write to .planning/research/implementation.md:

- | Source | Type | Confidence | Last Verified |
- |--------|------|------------|---------------|
- | {URL} | {official docs / github / tutorial} | {high/medium/low} | {today} |
+ ```markdown
+ # Implementation: [Project Name]

- ## Open Questions
+ **Purpose:** Current API patterns for Claude Code to use
+ **Researched:** [date]

- - {Architecture decision that needs more context}
- - {Question about project-specific constraints}
+ ## [Feature 1] Implementation

- ---
- *Generated by research-project subagent*
- *Category: architecture*
+ ### Current Pattern (2025)

- ## Quality Criteria
- - Include concrete directory structure
- - Document at least 2-3 patterns
- - Every recommendation has specific rationale
- - Include real example sources
- - Address data flow and component boundaries
- </subagent_prompt>
+ ```[language]
+ // Correct way to do [thing]
+ [actual working code]
  ```

- </architecture_subagent_prompt>

- <pitfalls_subagent_prompt>
- ## Pitfalls Research Subagent
-
- Use this template for spawning pitfalls research subagents:
+ ### NOT This (Deprecated)

+ ```[language]
+ // Claude may generate this - it's outdated
+ [old pattern to avoid]
+ // Why wrong: [brief explanation]
  ```
- <subagent_prompt>
- ## Objective
- Research common mistakes and pitfalls in {domain} projects and write findings to .planning/research/pitfalls.md

- ## Domain Context
- {Paste relevant sections from PROJECT.md describing what's being built}
+ ### Key API Details

- ## Your Assignment
- File: .planning/research/pitfalls.md
- Category: pitfalls
- Purpose: Catalog what NOT to do and why
-
- ## Research Questions
- Answer these questions through web research:
- 1. What mistakes do beginners commonly make in this domain?
- 2. What causes performance problems?
- 3. What architectural choices lead to regret?
- 4. What seems like a good idea but isn't?
- 5. What are the debugging nightmares people warn about?
+ - `methodName()` - [what it does, when to use]
+ - `otherMethod()` - [what it does, when to use]

- ## Research Requirements
-
- You MUST use WebSearch and Context7 to get current information.
-
- For EACH pitfall discovered:
- 1. Find real examples (Stack Overflow, GitHub issues, blog posts)
- 2. Understand the root cause
- 3. Find the correct approach
- 4. Note detection methods (how to know if you're doing this)
-
- **Search queries to run:**
- - "{domain} common mistakes"
- - "{domain} things I wish I knew"
- - "{domain} performance problems"
- - "{domain} debugging nightmare"
- - "{specific technology} gotchas"
- - "{domain} anti-patterns"
-
- ## Output Format
-
- Write to .planning/research/pitfalls.md using this structure:
-
- # Pitfalls: {Domain}
-
- **Category:** pitfalls
- **Domain:** {domain}
- **Researched:** {today's date}
- **Confidence:** {high | medium | low}
+ ## [Feature 2] Implementation

- ## Research Summary
+ [Same structure]

- {2-3 sentences on the most critical pitfalls to avoid}
+ ## Open Questions Answered

- ## Findings
+ ### [Question 1 from manifest]

- ### {Pitfall 1: Descriptive Name}
+ **Answer:** [Direct answer with source]

- **Problem:** {What people do wrong}
- **Why it happens:** {Root cause or common misconception}
- **Consequence:** {What goes wrong}
- **Detection:** {How to know if you're doing this}
- **Prevention:** {Correct approach}
+ ### [Question 2 from manifest]

- **Source:** {URL with real example}
- **Severity:** {high | medium | low}
+ **Answer:** [Direct answer with source]

- ### {Pitfall 2: Descriptive Name}
+ ## Decisions Validated

- {Same structure}
+ ### [Decision 1 from manifest]

- {... continue for each significant pitfall ...}
+ **Validation:** ✓ Good choice / ⚠️ Gotcha
+ **Notes:** [Any implementation considerations]

- ## Recommendations
+ ## Patterns Summary

- ### Critical Pitfalls (Must Avoid)
-
- 1. **{Pitfall A}** - {One-line why it's critical}
- 2. **{Pitfall B}** - {One-line why it's critical}
+ | Task | Current Pattern | Deprecated Pattern |
+ |------|-----------------|-------------------|
+ | [task 1] | `doThisWay()` | `oldWay()` |
+ | [task 2] | `async/await` | `callbacks` |
+ ```

- ### Common But Recoverable
+ ## Quality Checklist

- - **{Pitfall C}** - {Can be fixed if caught early}
+ Before writing output, verify:
+ - [ ] Every code example is runnable (not pseudocode)
+ - [ ] Deprecated patterns explicitly flagged
+ - [ ] Open Questions from manifest answered
+ - [ ] Decisions from manifest validated
+ - [ ] No "it depends" - give clear recommendations
+ </subagent_prompt>
+ ```
+ </implementation_subagent_prompt>

- ### Easy Wins
+ <risks_subagent_prompt>
+ ## Risks Research Subagent

- - **{Prevention D}** - {Simple thing that prevents problems}
+ Use this template for spawning risks research subagents:

- ## Sources
+ ```
+ <subagent_prompt>
+ ## Objective

- | Source | Type | Confidence | Last Verified |
- |--------|------|------------|---------------|
- | {URL} | {stack overflow / github issue / blog} | {high/medium/low} | {today} |
+ Research what Claude Code might get WRONG when implementing this project.
+ Write findings to .planning/research/risks.md

- ## Open Questions
+ ## Critical Context

- - {Potential pitfall that needs more context}
- - {Question about project-specific risks}
+ **This research is for Claude Code, not humans.**

- ---
- *Generated by research-project subagent*
- *Category: pitfalls*
+ Claude's training data includes outdated tutorials, deprecated APIs, and bad patterns. Your job is to identify:
+ - What Claude might generate incorrectly
+ - Deprecated patterns to explicitly avoid
+ - Common implementation mistakes in this domain

- ## Quality Criteria
- - Include at least 5-7 pitfalls
- - Categorize by severity
- - Every pitfall has prevention/solution
- - Include real examples with sources
- - Focus on domain-specific issues (not generic coding advice)
- </subagent_prompt>
- ```
- </pitfalls_subagent_prompt>
+ **Quality bar:** After reading this, Claude avoids specific mistakes.

- <standards_subagent_prompt>
- ## Standards Research Subagent
+ ## Research Manifest

- Use this template for spawning standards research subagents:
+ {Paste the research manifest extracted from PROJECT.md}

- ```
- <subagent_prompt>
- ## Objective
- Research best practices and quality standards for {domain} projects and write findings to .planning/research/standards.md
+ ## Your Assignment

- ## Domain Context
- {Paste relevant sections from PROJECT.md describing what's being built}
+ For this project's domain and stack:
+ 1. What deprecated patterns might Claude default to?
+ 2. What common mistakes happen in this domain?
+ 3. What version-specific gotchas exist?

- ## Your Assignment
- File: .planning/research/standards.md
- Category: standards
- Purpose: Document best practices, conventions, and quality expectations
-
- ## Research Questions
- Answer these questions through web research:
- 1. What does "production quality" mean for this domain?
- 2. What testing approaches work for this type of project?
- 3. What performance benchmarks matter?
- 4. What accessibility/security considerations apply?
- 5. What conventions do experts follow?
+ For Decisions in the manifest:
+ 1. Any known issues with these choices?
+ 2. Pitfalls specific to this combination?

  ## Research Requirements

- You MUST use WebSearch and Context7 to get current information.
+ Use WebSearch to find:
+ - GitHub issues about common mistakes
+ - Stack Overflow questions about gotchas
+ - Migration guides showing old → new patterns
+ - "Things I wish I knew" posts

- For EACH standard discovered:
- 1. Find authoritative sources (official docs, style guides)
- 2. Understand the rationale
- 3. Find measurable criteria where possible
- 4. Note enforcement tools if available
+ **INCLUDE:**
+ - Specific mistakes with code examples
+ - Version-specific deprecations
+ - Domain-specific pitfalls for this project

- **Search queries to run:**
- - "{domain} best practices 2024 2025"
- - "{domain} style guide"
- - "{domain} testing strategy"
- - "{domain} performance benchmarks"
- - "{domain} production checklist"
- - "{domain} code quality standards"
+ **EXCLUDE:**
+ - Generic coding advice
+ - Low-probability edge cases
+ - "Best practices" (that's not risks)
+ - Risks that don't apply to this specific project

  ## Output Format

- Write to .planning/research/standards.md using this structure:
+ Write to .planning/research/risks.md:

- # Standards: {Domain}
+ ```markdown
+ # Risks: [Project Name]

- **Category:** standards
- **Domain:** {domain}
- **Researched:** {today's date}
- **Confidence:** {high | medium | low}
+ **Purpose:** What Claude Code might get wrong
+ **Researched:** [date]

- ## Research Summary
+ ## Deprecated Patterns to Avoid

- {2-3 sentences on key quality standards for this project}
+ ### [Deprecated Thing 1]

- ## Findings
-
- ### Code Quality
-
- **Conventions:**
- - {Naming conventions}
- - {Formatting standards}
- - {Organization patterns}
-
- **Tools:** {Linters, formatters, static analysis}
- **Source:** {URL}
+ **Claude might generate:**
+ ```[language]
+ // This is outdated
+ [deprecated code pattern]
+ ```

- ### Testing Standards
+ **Instead use:**
+ ```[language]
+ // Current approach
+ [correct code pattern]
+ ```

- **Strategy:** {How to test this type of project}
- **Coverage expectations:** {What to test, what's overkill}
- **Tools:** {Testing frameworks and utilities}
+ **Why:** [Deprecated in vX.Y, removed in vX.Z, etc.]

- **Source:** {URL}
+ ### [Deprecated Thing 2]

- ### Performance Standards
+ [Same structure]

- **Benchmarks:** {What metrics matter}
- **Targets:** {Specific numbers if applicable}
- **Measurement:** {How to test}
+ ## Common Mistakes

- **Source:** {URL}
+ ### [Mistake 1: Descriptive Name]

- ### {Domain-Specific Standard}
+ **What happens:** [The incorrect approach]
+ **Why it's wrong:** [Consequence]
+ **Correct approach:** [What to do instead]

- {Standards specific to this domain}
+ ```[language]
+ // Wrong
+ [bad code]

- ## Recommendations
+ // Right
+ [good code]
+ ```

- ### Adopt
+ ### [Mistake 2]

- - **{Standard A}** - {Why it applies}
- - **{Convention B}** - {Why it matters}
+ [Same structure]

- ### Skip (Over-Engineering)
+ ## Stack-Specific Gotchas

- - **{Standard X}** - {Why not needed for this project}
+ ### [Gotcha for chosen library/framework]

- ### Defer
+ **Issue:** [What goes wrong]
+ **When:** [Trigger conditions]
+ **Fix:** [How to avoid/handle]

- - **{Standard Y}** - {Consider after MVP}
+ ## Decision Risks

- ## Sources
+ ### [Decision 1 from manifest]

- | Source | Type | Confidence | Last Verified |
- |--------|------|------------|---------------|
- | {URL} | {official docs / style guide / blog} | {high/medium/low} | {today} |
+ **Risk level:** Low / Medium / High
+ **Specific concern:** [What could go wrong with this choice]
+ **Mitigation:** [How to avoid the problem]

- ## Open Questions
+ ## Critical Warnings

- - {Standard that depends on project scale}
- - {Question about appropriate rigor level}
+ 1. **[Most important thing to not mess up]**
+ 2. **[Second most important]**
+ 3. **[Third most important]**
+ ```

- ---
- *Generated by research-project subagent*
- *Category: standards*
+ ## Quality Checklist

- ## Quality Criteria
- - Cover code quality, testing, and performance
- - Include measurable criteria where possible
- - Distinguish must-haves from nice-to-haves
- - Reference authoritative sources
- - Be realistic about project scope (not enterprise overkill)
+ Before writing output, verify:
+ - [ ] Every risk is specific (not generic advice)
+ - [ ] Code examples show wrong vs right
+ - [ ] Risks are relevant to this project's stack
+ - [ ] No padding with unlikely scenarios
+ - [ ] Critical warnings prioritized
  </subagent_prompt>
  ```
- </standards_subagent_prompt>
+ </risks_subagent_prompt>

  <task_tool_usage>
  ## Using the Task Tool

- When spawning research subagents, use the Task tool with:
- - `subagent_type`: "general-purpose"
- - `prompt`: The filled-in template above
- - `description`: Brief description like "Research ecosystem for {domain}"
-
- **Batching strategy:**
-
- Spawn 3-4 Task calls in a SINGLE message. Wait for all to complete before next batch.
+ Spawn all 3 subagents in a SINGLE message:

  ```
- [Message 1: Batch 1]
- Task: Research ecosystem for {domain}
- Task: Research architecture for {domain}
- [Wait for completion]
-
- [Message 2: Batch 2]
- Task: Research pitfalls for {domain}
- Task: Research standards for {domain}
- [Wait for completion]
+ Task 1:
+ subagent_type: "general-purpose"
+ description: "Research stack for [project]"
+ prompt: [stack_subagent_prompt with manifest]
+
+ Task 2:
+ subagent_type: "general-purpose"
+ description: "Research implementation for [project]"
+ prompt: [implementation_subagent_prompt with manifest]
+
+ Task 3:
+ subagent_type: "general-purpose"
+ description: "Research risks for [project]"
+ prompt: [risks_subagent_prompt with manifest]
  ```

- **Batch ordering rationale:**
- - Batch 1 (ecosystem + architecture): Core understanding of what to build and how
- - Batch 2 (pitfalls + standards): Refinements that build on core understanding
-
- **After all batches complete:**
- 1. Verify all 4 files exist in .planning/research/
- 2. Extract key findings for summary
- 3. Present next steps to user
+ Wait for all to complete, then verify outputs.
  </task_tool_usage>

- <quality_checklist>
- ## Subagent Quality Verification
+ <quality_verification>
+ ## Post-Research Quality Check
+
+ After subagents complete, verify each file:
+
+ **stack.md:**
+ - [ ] Every manifest feature has a library
+ - [ ] Current versions specified
+ - [ ] Setup code included
+ - [ ] No low-confidence alternatives
+
+ **implementation.md:**
+ - [ ] Code examples are runnable
+ - [ ] Deprecated patterns flagged
+ - [ ] Open Questions answered
+ - [ ] Decisions validated

- After subagents complete, verify:
+ **risks.md:**
+ - [ ] Risks specific to this project
+ - [ ] Wrong vs right code shown
+ - [ ] No generic advice padding
+ - [ ] Critical warnings clear

- - [ ] All 4 files exist in .planning/research/
- - [ ] Each file has substantive content (not stub/error)
- - [ ] Recommendations are specific (not "it depends")
- - [ ] Sources are cited with confidence levels
- - [ ] Information is current (2024-2025 sources preferred)
- - [ ] Open questions are honest about gaps
- </quality_checklist>
+ **If a file fails quality check:**
+ Re-run that specific subagent with stricter instructions, or flag the gap for the user.
+ </quality_verification>