thought-cabinet 0.0.2

@@ -0,0 +1,254 @@
---
description: Iterate on existing implementation plans with thorough research and updates
model: opus
---

# Iterate Implementation Plan

You are tasked with updating existing implementation plans based on user feedback. You should be skeptical, thorough, and ensure changes are grounded in actual codebase reality.

## Initial Response

When this command is invoked:

1. **Parse the input to identify**:
   - Plan file path (e.g., `thoughts/shared/plans/2025-10-16-feature.md`)
   - Requested changes/feedback

2. **Handle different input scenarios**:

   **If NO plan file provided**:

   ```
   I'll help you iterate on an existing implementation plan.

   Which plan would you like to update? Please provide the path to the plan file (e.g., `thoughts/shared/plans/2025-10-16-feature.md`).

   Tip: You can list recent plans with `ls -lt thoughts/shared/plans/ | head`
   ```

   Wait for user input, then re-check for feedback.

   **If plan file provided but NO feedback**:

   ```
   I've found the plan at [path]. What changes would you like to make?

   For example:
   - "Add a phase for migration handling"
   - "Update the success criteria to include performance tests"
   - "Adjust the scope to exclude feature X"
   - "Split Phase 2 into two separate phases"
   ```

   Wait for user input.

   **If BOTH plan file AND feedback provided**:
   - Proceed immediately to Step 1
   - No preliminary questions needed

## Process Steps

### Step 1: Read and Understand Current Plan

1. **Read the existing plan file COMPLETELY**:
   - Use the Read tool WITHOUT limit/offset parameters
   - Understand the current structure, phases, and scope
   - Note the success criteria and implementation approach

2. **Understand the requested changes**:
   - Parse what the user wants to add/modify/remove
   - Identify if the changes require codebase research
   - Determine the scope of the update

### Step 2: Research If Needed

**Only spawn research tasks if the changes require new technical understanding.**

If the user's feedback requires understanding new code patterns or validating assumptions:

1. **Create a research todo list** using TodoWrite

2. **Spawn parallel sub-tasks for research**:
   Use the right agent for each type of research:

   **For code investigation:**
   - **codebase-locator** - To find relevant files
   - **codebase-analyzer** - To understand implementation details
   - **codebase-pattern-finder** - To find similar patterns

   **For historical context:**
   - **thoughts-locator** - To find related research or decisions
   - **thoughts-analyzer** - To extract insights from documents

   **Be EXTREMELY specific about directories**:
   - Include full path context in prompts

3. **Read any new files identified by research**:
   - Read them FULLY into the main context
   - Cross-reference with the plan requirements

4. **Wait for ALL sub-tasks to complete** before proceeding

### Step 3: Present Understanding and Approach

Before making changes, confirm your understanding:

```
Based on your feedback, I understand you want to:
- [Change 1 with specific detail]
- [Change 2 with specific detail]

My research found:
- [Relevant code pattern or constraint]
- [Important discovery that affects the change]

I plan to update the plan by:
1. [Specific modification to make]
2. [Another modification]

Does this align with your intent?
```

Get user confirmation before proceeding.

### Step 4: Update the Plan

1. **Make focused, precise edits** to the existing plan:
   - Use the Edit tool for surgical changes
   - Maintain the existing structure unless explicitly changing it
   - Keep all file:line references accurate
   - Update success criteria if needed

2. **Ensure consistency**:
   - If adding a new phase, ensure it follows the existing pattern
   - If modifying scope, update the "What We're NOT Doing" section
   - If changing approach, update the "Implementation Approach" section
   - Maintain the distinction between automated and manual success criteria

3. **Preserve quality standards**:
   - Include specific file paths and line numbers for new content
   - Write measurable success criteria
   - Use `make` commands for automated verification
   - Keep language clear and actionable

### Step 5: Sync and Review

1. **Sync the updated plan**:
   - Run `thoughtcabinet sync`
   - This ensures changes are properly indexed

2. **Present the changes made**:

   ```
   I've updated the plan at `thoughts/shared/plans/[filename].md`

   Changes made:
   - [Specific change 1]
   - [Specific change 2]

   The updated plan now:
   - [Key improvement]
   - [Another improvement]

   Would you like any further adjustments?
   ```

3. **Be ready to iterate further** based on feedback

## Important Guidelines

1. **Be Skeptical**:
   - Don't blindly accept change requests that seem problematic
   - Question vague feedback - ask for clarification
   - Verify technical feasibility with code research
   - Point out potential conflicts with existing plan phases

2. **Be Surgical**:
   - Make precise edits, not wholesale rewrites
   - Preserve good content that doesn't need changing
   - Only research what's necessary for the specific changes
   - Don't over-engineer the updates

3. **Be Thorough**:
   - Read the entire existing plan before making changes
   - Research code patterns if changes require new technical understanding
   - Ensure updated sections maintain quality standards
   - Verify success criteria are still measurable

4. **Be Interactive**:
   - Confirm understanding before making changes
   - Show what you plan to change before doing it
   - Allow course corrections
   - Don't disappear into research without communicating

5. **Track Progress**:
   - Use TodoWrite to track update tasks if complex
   - Update todos as you complete research
   - Mark tasks complete when done

6. **No Open Questions**:
   - If the requested change raises questions, ASK
   - Research or get clarification immediately
   - Do NOT update the plan with unresolved questions
   - Every change must be complete and actionable

## Success Criteria Guidelines

When updating success criteria, always maintain the two-category structure:

1. **Automated Verification** (can be run by execution agents):
   - Commands that can be run: `make test`, `npm run lint`, etc.
   - Specific files that should exist
   - Code compilation/type checking

2. **Manual Verification** (requires human testing):
   - UI/UX functionality
   - Performance under real conditions
   - Edge cases that are hard to automate
   - User acceptance criteria

+ ## Sub-task Spawning Best Practices
212
+
213
+ When spawning research sub-tasks:
214
+
215
+ 1. **Only spawn if truly needed** - don't research for simple changes
216
+ 2. **Spawn multiple tasks in parallel** for efficiency
217
+ 3. **Each task should be focused** on a specific area
218
+ 4. **Provide detailed instructions** including:
219
+ - Exactly what to search for
220
+ - Which directories to focus on
221
+ - What information to extract
222
+ - Expected output format
223
+ 5. **Request specific file:line references** in responses
224
+ 6. **Wait for all tasks to complete** before synthesizing
225
+ 7. **Verify sub-task results** - if something seems off, spawn follow-up tasks
226
+
227
+ ## Example Interaction Flows
228
+
229
+ **Scenario 1: User provides everything upfront**
230
+
231
+ ```
232
+ User: /iterate_plan thoughts/shared/plans/2025-10-16-feature.md - add phase for error handling
233
+ Assistant: [Reads plan, researches error handling patterns, updates plan]
234
+ ```
235
+
236
+ **Scenario 2: User provides just plan file**
237
+
238
+ ```
239
+ User: /iterate_plan thoughts/shared/plans/2025-10-16-feature.md
240
+ Assistant: I've found the plan. What changes would you like to make?
241
+ User: Split Phase 2 into two phases - one for backend, one for frontend
242
+ Assistant: [Proceeds with update]
243
+ ```
244
+
245
+ **Scenario 3: User provides no arguments**
246
+
247
+ ```
248
+ User: /iterate_plan
249
+ Assistant: Which plan would you like to update? Please provide the path...
250
+ User: thoughts/shared/plans/2025-10-16-feature.md
251
+ Assistant: I've found the plan. What changes would you like to make?
252
+ User: Add more specific success criteria
253
+ Assistant: [Proceeds with update]
254
+ ```
@@ -0,0 +1,107 @@
---
description: Document codebase as-is with thoughts directory for historical context
model: opus
---

# Research Codebase

You are tasked with conducting comprehensive research across the codebase to answer user questions by spawning parallel sub-agents and synthesizing their findings.

## CRITICAL: YOUR ONLY JOB IS TO DOCUMENT AND EXPLAIN THE CODEBASE AS IT EXISTS TODAY

- DO NOT suggest improvements or changes unless the user explicitly asks for them
- DO NOT perform root cause analysis unless the user explicitly asks for it
- DO NOT propose future enhancements unless the user explicitly asks for them
- DO NOT critique the implementation or identify problems
- DO NOT recommend refactoring, optimization, or architectural changes
- ONLY describe what exists, where it exists, how it works, and how components interact
- You are creating a technical map/documentation of the existing system

## Initial Setup:

When this command is invoked, respond with:

```
I'm ready to research the codebase. Please provide your research question or area of interest, and I'll analyze it thoroughly by exploring relevant components and connections.
```

Then wait for the user's research query.

## Steps to follow after receiving the research query:

1. **Read any directly mentioned files first:**
   - If the user mentions specific files (code, docs, JSON), read them FULLY first
   - **IMPORTANT**: Use the Read tool WITHOUT limit/offset parameters to read entire files
   - **CRITICAL**: Read these files yourself in the main context before spawning any sub-tasks
   - This ensures you have full context before decomposing the research

2. **Analyze and decompose the research question:**
   - Break down the user's query into composable research areas
   - Take time to ultrathink about the underlying patterns, connections, and architectural implications the user might be seeking
   - Identify specific components, patterns, or concepts to investigate
   - Create a research plan using TodoWrite to track all subtasks
   - Consider which directories, files, or architectural patterns are relevant

3. **Spawn parallel sub-agent tasks for comprehensive research:**
   - Create multiple Task agents to research different aspects concurrently
   - We now have specialized agents that know how to do specific research tasks:

   **For codebase research:**
   - Use the **codebase-locator** agent to find WHERE files and components live
   - Use the **codebase-analyzer** agent to understand HOW specific code works (without critiquing it)
   - Use the **codebase-pattern-finder** agent to find examples of existing patterns (without evaluating them)

   **IMPORTANT**: All agents are documentarians, not critics. They will describe what exists without suggesting improvements or identifying issues.

   **For thoughts directory:**
   - Use the **thoughts-locator** agent to discover what documents exist about the topic
   - Use the **thoughts-analyzer** agent to extract key insights from specific documents (only the most relevant ones)

   **For web research (only if user explicitly asks):**
   - Use the **web-search-researcher** agent for external documentation and resources
   - IF you use web-research agents, instruct them to return LINKS with their findings, and INCLUDE those links in your final report

   The key is to use these agents intelligently:
   - Start with locator agents to find what exists
   - Then use analyzer agents on the most promising findings to document how they work
   - Run multiple agents in parallel when they're searching for different things
   - Each agent knows its job - just tell it what you're looking for
   - Don't write detailed prompts about HOW to search - the agents already know
   - Remind agents they are documenting, not evaluating or improving

4. **Wait for all sub-agents to complete and synthesize findings:**
   - IMPORTANT: Wait for ALL sub-agent tasks to complete before proceeding
   - Compile all sub-agent results (both codebase and thoughts findings)
   - Prioritize live codebase findings as the primary source of truth
   - Use thoughts/ findings as supplementary historical context
   - Connect findings across different components
   - Include specific file paths and line numbers for reference
   - Verify all thoughts/ paths are correct (e.g., thoughts/allison/ not thoughts/shared/ for personal files)
   - Highlight patterns, connections, and architectural decisions
   - Answer the user's specific questions with concrete evidence

5. **Generate research document:**
   - Use the **generating-research-document** skill to generate a research document or handle follow-up questions

## Important notes:

- Always use parallel Task agents to maximize efficiency and minimize context usage
- Always run fresh codebase research - never rely solely on existing research documents
- The thoughts/ directory provides historical context to supplement live findings
- Focus on finding concrete file paths and line numbers for developer reference
- Research documents should be self-contained with all necessary context
- Each sub-agent prompt should be specific and focused on read-only documentation operations
- Document cross-component connections and how systems interact
- Include temporal context (when the research was conducted)
- Link to GitHub when possible for permanent references
- Keep the main agent focused on synthesis, not deep file reading
- Have sub-agents document examples and usage patterns as they exist
- Explore all of the thoughts/ directory, not just the research subdirectory
- **CRITICAL**: You and all sub-agents are documentarians, not evaluators
- **NO RECOMMENDATIONS**: Only describe the current state of the codebase
- **File reading**: Always read mentioned files FULLY (no limit/offset) before spawning sub-tasks
- **Critical ordering**: Follow the numbered steps exactly
  - ALWAYS read mentioned files first before spawning sub-tasks (step 1)
  - ALWAYS wait for all sub-agents to complete before synthesizing (step 4)
  - ALWAYS gather metadata before writing the document (during step 5)
  - NEVER write the research document with placeholder values
@@ -0,0 +1,178 @@
---
description: Validate implementation against plan, verify success criteria, identify issues
---

# Validate Plan

You are tasked with validating that an implementation plan was correctly executed, verifying all success criteria and identifying any deviations or issues.

## Initial Setup

When invoked:

1. **Determine context** - Are you in an existing conversation or starting fresh?
   - If existing: Review what was implemented in this session
   - If fresh: Discover what was done through git and codebase analysis

2. **Locate the plan**:
   - If a plan path is provided, use it
   - Otherwise, search recent commits for plan references or ask the user

3. **Gather implementation evidence**:

   ```bash
   # Check recent commits
   git log --oneline -n 20
   git diff HEAD~N..HEAD # Where N covers implementation commits

   # Run comprehensive checks
   cd $(git rev-parse --show-toplevel) && make check test
   ```

## Validation Process

### Step 1: Context Discovery

If starting fresh or needing more context:

1. **Read the implementation plan** completely
2. **Identify what should have changed**:
   - List all files that should be modified
   - Note all success criteria (automated and manual)
   - Identify key functionality to verify

3. **Spawn parallel research tasks** to discover the implementation:

   ```
   Task 1 - Verify database changes:
   Research if migration [N] was added and schema changes match the plan.
   Check: migration files, schema version, table structure
   Return: What was implemented vs what the plan specified

   Task 2 - Verify code changes:
   Find all modified files related to [feature].
   Compare actual changes to plan specifications.
   Return: File-by-file comparison of planned vs actual

   Task 3 - Verify test coverage:
   Check if tests were added/modified as specified.
   Run test commands and capture results.
   Return: Test status and any missing coverage
   ```

### Step 2: Systematic Validation

For each phase in the plan:

1. **Check completion status**:
   - Look for checkmarks in the plan (- [x])
   - Verify the actual code matches the claimed completion

2. **Run automated verification**:
   - Execute each command from "Automated Verification"
   - Document pass/fail status
   - If there are failures, investigate the root cause

3. **Assess manual criteria**:
   - List what needs manual testing
   - Provide clear steps for user verification

4. **Think deeply about edge cases**:
   - Were error conditions handled?
   - Are there missing validations?
   - Could the implementation break existing functionality?

### Step 3: Generate Validation Report

Create a comprehensive validation summary:

```markdown
## Validation Report: [Plan Name]

### Implementation Status

✓ Phase 1: [Name] - Fully implemented
✓ Phase 2: [Name] - Fully implemented
⚠️ Phase 3: [Name] - Partially implemented (see issues)

### Automated Verification Results

✓ Build passes: `make build`
✓ Tests pass: `make test`
✗ Linting issues: `make lint` (3 warnings)

### Code Review Findings

#### Matches Plan:

- Database migration correctly adds [table]
- API endpoints implement specified methods
- Error handling follows the plan

#### Deviations from Plan:

- Used different variable names in [file:line]
- Added extra validation in [file:line] (improvement)

#### Potential Issues:

- Missing index on foreign key could impact performance
- No rollback handling in migration

### Manual Testing Required:

1. UI functionality:
   - [ ] Verify [feature] appears correctly
   - [ ] Test error states with invalid input

2. Integration:
   - [ ] Confirm it works with existing [component]
   - [ ] Check performance with large datasets

### Recommendations:

- Address linting warnings before merge
- Consider adding an integration test for [scenario]
- Document new API endpoints
```

## Working with Existing Context

If you were part of the implementation:

- Review the conversation history
- Check your todo list for what was completed
- Focus validation on work done in this session
- Be honest about any shortcuts or incomplete items

## Important Guidelines

1. **Be thorough but practical** - Focus on what matters
2. **Run all automated checks** - Don't skip verification commands
3. **Document everything** - Both successes and issues
4. **Think critically** - Question if the implementation truly solves the problem
5. **Consider maintenance** - Will this be maintainable long-term?

## Validation Checklist

Always verify:

- [ ] All phases marked complete are actually done
- [ ] Automated tests pass
- [ ] Code follows existing patterns
- [ ] No regressions introduced
- [ ] Error handling is robust
- [ ] Documentation updated if needed
- [ ] Manual test steps are clear

## Relationship to Other Commands

Recommended workflow:

1. `/implement_plan` - Execute the implementation
2. `/commit` - Create atomic commits for changes
3. `/validate_plan` - Verify implementation correctness

Validation works best after commits are made, as it can analyze the git history to understand what was implemented.

Remember: Good validation catches issues before they reach production. Be constructive but thorough in identifying gaps or improvements.
@@ -0,0 +1,7 @@
{
  "permissions": {
    "allow": []
  },
  "enableAllProjectMcpServers": false,
  "env": {}
}
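As an illustration, a project that wants to pre-approve a few safe commands could populate `allow` like this (the specific rule strings are hypothetical and depend on the tools the project actually uses):

```json
{
  "permissions": {
    "allow": [
      "Bash(make test:*)",
      "Bash(git log:*)"
    ]
  },
  "enableAllProjectMcpServers": false,
  "env": {}
}
```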
@@ -0,0 +1,41 @@
---
name: generating-research-document
description: Generate structured research documentation. Use when (1) creating research documents after codebase exploration, (2) documenting findings from code analysis, (3) writing technical research reports to thoughts/shared/research/. Triggers on requests like "how does the authentication flow work?"
---

# Generate Research Document

Generate research documentation for the thoughts/ directory with proper metadata and structure.

## Workflow

1. **Gather metadata**

   ```bash
   thoughtcabinet metadata
   ```

   Captures: researcher name, git commit, branch, repository, timestamp

2. **Determine filename**
   - Path: `thoughts/shared/research/YYYY-MM-DD-description.md`
   - Description: kebab-case summary of the topic
   - Example: `2025-01-08-authentication-flow.md`

3. **Generate document**
   - Use the template from [document_template.md](document_template.md)
   - Fill metadata from step 1
   - Structure findings into appropriate sections
   - Include file:line references for all code mentions

4. **Sync**

   ```bash
   thoughtcabinet sync
   ```

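The filename convention from step 2 can be sketched in shell (the description value here is a hypothetical topic summary):

```shell
# Build a research-document path from today's date and a kebab-case description.
desc="authentication-flow"                                  # hypothetical example topic
fname="thoughts/shared/research/$(date +%Y-%m-%d)-${desc}.md"
echo "$fname"
```
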
## Key Rules

- **Document what IS, not what SHOULD BE** - no recommendations or critiques
- **Path handling**: Remove only "searchable/" from thoughts paths (e.g., `thoughts/searchable/allison/` → `thoughts/allison/`)
- **Frontmatter**: Always include, use snake_case for multi-word fields
- **Code references**: Format as `path/to/file.ext:line` with descriptions
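The path-handling rule above can be sketched with POSIX parameter expansion (the example path is illustrative):

```shell
# Drop only the "searchable/" segment from a thoughts path before citing it.
p="thoughts/searchable/allison/notes.md"
cited="thoughts/${p#thoughts/searchable/}"   # strip the prefix, re-add "thoughts/"
echo "$cited"                                # thoughts/allison/notes.md
```
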
@@ -0,0 +1,97 @@
# Research Document Template

## YAML Frontmatter

```yaml
---
date: [ISO 8601 datetime with timezone, e.g., 2025-01-08T14:30:00+08:00]
researcher: [From `thoughtcabinet metadata` output]
git_commit: [Current commit hash from metadata]
branch: [Current branch name from metadata]
repository: [Repository name from metadata]
topic: "[User's Research Question/Topic]"
tags: [research, codebase, relevant-component-names]
status: complete
last_updated: [YYYY-MM-DD format]
last_updated_by: [Researcher name]
---
```

## Document Structure

```markdown
# Research: [User's Question/Topic]

**Date**: [datetime with timezone]
**Researcher**: [name]
**Git Commit**: [hash]
**Branch**: [branch]
**Repository**: [repo]

## Research Question

[Original user query verbatim]

## Summary

[High-level documentation answering the user's question by describing what exists]

## Detailed Findings

### [Component/Area 1]

- Description of what exists ([file.ext:line](link))
- How it connects to other components
- Current implementation details (without evaluation)

### [Component/Area 2]

...

## Code References

- `path/to/file.py:123` - Description of what's there
- `another/file.ts:45-67` - Description of the code block

## Architecture Documentation

[Current patterns, conventions, and design implementations found]

## Historical Context (from thoughts/)

[Relevant insights from thoughts/ directory with references]

- `thoughts/shared/something.md` - Historical decision about X
- `thoughts/local/notes.md` - Past exploration of Y

Note: Paths exclude "searchable/" even if found there

## Related Research

[Links to other research documents in thoughts/shared/research/]

## Open Questions

[Areas that need further investigation]
```

## Follow-up Research Section

When adding follow-up research, append:

```markdown
## Follow-up Research [timestamp]

### Follow-up Question

[The follow-up question]

### Additional Findings

[New findings from additional investigation]
```

Also update the frontmatter:

- `last_updated`: Current date
- `last_updated_by`: Researcher name
- Add `last_updated_note: "Added follow-up research for [brief description]"`