sublation-os 1.0.1 → 1.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (31)
  1. package/.claude/agents/sublation-os/implementation-verifier.md +16 -0
  2. package/.claude/agents/sublation-os/implementer.md +16 -0
  3. package/.claude/commands/sublation-os/address-comments.md +13 -0
  4. package/.claude/commands/sublation-os/consolidate-learnings.md +553 -0
  5. package/.claude/commands/sublation-os/implement-tasks.md +10 -0
  6. package/.claude/commands/sublation-os/investigate.md +14 -4
  7. package/.claude/commands/sublation-os/optimise.md +14 -0
  8. package/.claude/commands/sublation-os/review.md +18 -1
  9. package/.claude/commands/sublation-os/test-plan.md +19 -2
  10. package/.claude/skills/README.md +274 -0
  11. package/.claude/skills/auto-learn/SKILL.md +233 -0
  12. package/.cursor/commands/sublation-os/address-comments.md +74 -0
  13. package/.cursor/commands/sublation-os/commit-message.md +84 -0
  14. package/.cursor/commands/sublation-os/consolidate-learnings.md +553 -0
  15. package/.cursor/commands/sublation-os/create-tasks.md +254 -0
  16. package/.cursor/commands/sublation-os/implement-tasks.md +207 -0
  17. package/.cursor/commands/sublation-os/investigate.md +164 -0
  18. package/.cursor/commands/sublation-os/learn.md +131 -0
  19. package/.cursor/commands/sublation-os/optimise.md +108 -0
  20. package/.cursor/commands/sublation-os/plan-product.md +241 -0
  21. package/.cursor/commands/sublation-os/pr-description.md +15 -0
  22. package/.cursor/commands/sublation-os/recall.md +114 -0
  23. package/.cursor/commands/sublation-os/review.md +12 -0
  24. package/.cursor/commands/sublation-os/shape-spec.md +395 -0
  25. package/.cursor/commands/sublation-os/test-plan.md +12 -0
  26. package/.cursor/commands/sublation-os/write-spec.md +134 -0
  27. package/README.md +50 -13
  28. package/bin/install.js +103 -24
  29. package/package.json +4 -10
  30. package/.claude/agents/sublation-os/implementer-v2.md +0 -542
  31. /package/{.claude → .cursor}/commands/sublation-os/review-v2.md +0 -0
@@ -0,0 +1,254 @@
+ I want you to create a tasks breakdown from a given spec and requirements for a new feature using the following MULTI-PHASE process and instructions.
+
+ Carefully read and execute the instructions in the following files IN SEQUENCE, following their numbered file names. Only proceed to the next numbered instruction file once the previous numbered instruction has been executed.
+
+ Instructions to follow in sequence:
+
+ # PHASE 1: Get Spec Requirements
+
+ The FIRST STEP is to make sure you have ONE OR BOTH of these files to inform your tasks breakdown:
+ - `sublation-os/specs/[this-spec]/spec.md`
+ - `sublation-os/specs/[this-spec]/planning/requirements.md`
+
+ IF you don't have ONE OR BOTH of those files in your current conversation context, then ask the user to provide direction on where you can find them by outputting the following request, then wait for the user's response:
+
+ "I'll need a spec.md or requirements.md (or both) in order to build a tasks list.
+
+ Please direct me to where I can find those. If you haven't created them yet, you can run /shape-spec or /write-spec."
+
+ # PHASE 2: Create Tasks List
+
+ Now that you have the spec.md AND/OR requirements.md, please break those down into an actionable tasks list with strategic grouping and ordering, by following these instructions:
+
+ # Task List Creation
+
+ ## Core Responsibilities
+
+ 1. **Analyze spec and requirements**: Read and analyze the spec.md and/or requirements.md to inform the tasks list you will create.
+ 2. **Plan task execution order**: Break the requirements into a list of tasks in an order that takes their dependencies into account.
+ 3. **Group tasks by specialization**: Group tasks that require the same skill or stack specialization together (backend, API, UI design, etc.)
+ 4. **Create tasks list**: Create the markdown tasks list broken into groups with sub-tasks.
+
+ ## Workflow
+
+ ### Step 1: Analyze Spec & Requirements
+
+ Read each of these files (whichever are available) and analyze them to understand the requirements for this feature implementation:
+ - `sublation-os/specs/[this-spec]/spec.md`
+ - `sublation-os/specs/[this-spec]/planning/requirements.md`
+
+ Use your learnings to inform the tasks list and groupings you will create in the next step.
+
+ ### Step 2: Create Tasks Breakdown
+
+ Generate `sublation-os/specs/[this-spec]/tasks.md`.
+
+ **Important**: The exact tasks, task groups, and organization will vary based on the feature's specific requirements. The following is an example format - adapt the content of the tasks list to match what THIS feature actually needs.
+
+ ```markdown
+ # Task Breakdown: [Feature Name]
+
+ ## Overview
+ Total Tasks: [count]
+
+ ## Task List
+
+ ### Database Layer
+
+ #### Task Group 1: Data Models and Migrations
+ **Dependencies:** None
+
+ - [ ] 1.0 Complete database layer
+   - [ ] 1.1 Write 2-8 focused tests for [Model] functionality
+     - Limit to 2-8 highly focused tests maximum
+     - Test only critical model behaviors (e.g., primary validation, key association, core method)
+     - Skip exhaustive coverage of all methods and edge cases
+   - [ ] 1.2 Create [Model] with validations
+     - Fields: [list]
+     - Validations: [list]
+     - Reuse pattern from: [existing model if applicable]
+   - [ ] 1.3 Create migration for [table]
+     - Add indexes for: [fields]
+     - Foreign keys: [relationships]
+   - [ ] 1.4 Set up associations
+     - [Model] has_many [related]
+     - [Model] belongs_to [parent]
+   - [ ] 1.5 Ensure database layer tests pass
+     - Run ONLY the 2-8 tests written in 1.1
+     - Verify migrations run successfully
+     - Do NOT run the entire test suite at this stage
+
+ **Acceptance Criteria:**
+ - The 2-8 tests written in 1.1 pass
+ - Models pass validation tests
+ - Migrations run successfully
+ - Associations work correctly
+
+ ### API Layer
+
+ #### Task Group 2: API Endpoints
+ **Dependencies:** Task Group 1
+
+ - [ ] 2.0 Complete API layer
+   - [ ] 2.1 Write 2-8 focused tests for API endpoints
+     - Limit to 2-8 highly focused tests maximum
+     - Test only critical controller actions (e.g., primary CRUD operation, auth check, key error case)
+     - Skip exhaustive testing of all actions and scenarios
+   - [ ] 2.2 Create [resource] controller
+     - Actions: index, show, create, update, destroy
+     - Follow pattern from: [existing controller]
+   - [ ] 2.3 Implement authentication/authorization
+     - Use existing auth pattern
+     - Add permission checks
+   - [ ] 2.4 Add API response formatting
+     - JSON responses
+     - Error handling
+     - Status codes
+   - [ ] 2.5 Ensure API layer tests pass
+     - Run ONLY the 2-8 tests written in 2.1
+     - Verify critical CRUD operations work
+     - Do NOT run the entire test suite at this stage
+
+ **Acceptance Criteria:**
+ - The 2-8 tests written in 2.1 pass
+ - All CRUD operations work
+ - Proper authorization enforced
+ - Consistent response format
+
+ ### Frontend Components
+
+ #### Task Group 3: UI Design
+ **Dependencies:** Task Group 2
+
+ - [ ] 3.0 Complete UI components
+   - [ ] 3.1 Write 2-8 focused tests for UI components
+     - Limit to 2-8 highly focused tests maximum
+     - Test only critical component behaviors (e.g., primary user interaction, key form submission, main rendering case)
+     - Skip exhaustive testing of all component states and interactions
+   - [ ] 3.2 Create [Component] component
+     - Reuse: [existing component] as base
+     - Props: [list]
+     - State: [list]
+   - [ ] 3.3 Implement [Feature] form
+     - Fields: [list]
+     - Validation: client-side
+     - Submit handling
+   - [ ] 3.4 Build [View] page
+     - Layout: [description]
+     - Components: [list]
+     - Match mockup: `planning/visuals/[file]`
+   - [ ] 3.5 Apply base styles
+     - Follow existing design system
+     - Use variables from: [style file]
+   - [ ] 3.6 Implement responsive design
+     - Mobile: 320px - 768px
+     - Tablet: 768px - 1024px
+     - Desktop: 1024px+
+   - [ ] 3.7 Add interactions and animations
+     - Hover states
+     - Transitions
+     - Loading states
+   - [ ] 3.8 Ensure UI component tests pass
+     - Run ONLY the 2-8 tests written in 3.1
+     - Verify critical component behaviors work
+     - Do NOT run the entire test suite at this stage
+
+ **Acceptance Criteria:**
+ - The 2-8 tests written in 3.1 pass
+ - Components render correctly
+ - Forms validate and submit
+ - Matches visual design
+
+ ### Testing
+
+ #### Task Group 4: Test Review & Gap Analysis
+ **Dependencies:** Task Groups 1-3
+
+ - [ ] 4.0 Review existing tests and fill critical gaps only
+   - [ ] 4.1 Review tests from Task Groups 1-3
+     - Review the 2-8 tests written by database-engineer (Task 1.1)
+     - Review the 2-8 tests written by api-engineer (Task 2.1)
+     - Review the 2-8 tests written by ui-designer (Task 3.1)
+     - Total existing tests: approximately 6-24 tests
+   - [ ] 4.2 Analyze test coverage gaps for THIS feature only
+     - Identify critical user workflows that lack test coverage
+     - Focus ONLY on gaps related to this spec's feature requirements
+     - Do NOT assess entire application test coverage
+     - Prioritize end-to-end workflows over unit test gaps
+   - [ ] 4.3 Write up to 10 additional strategic tests maximum
+     - Add a maximum of 10 new tests to fill identified critical gaps
+     - Focus on integration points and end-to-end workflows
+     - Do NOT write comprehensive coverage for all scenarios
+     - Skip edge cases, performance tests, and accessibility tests unless business-critical
+   - [ ] 4.4 Run feature-specific tests only
+     - Run ONLY tests related to this spec's feature (tests from 1.1, 2.1, 3.1, and 4.3)
+     - Expected total: approximately 16-34 tests maximum
+     - Do NOT run the entire application test suite
+     - Verify critical workflows pass
+
+ **Acceptance Criteria:**
+ - All feature-specific tests pass (approximately 16-34 tests total)
+ - Critical user workflows for this feature are covered
+ - No more than 10 additional tests added when filling in testing gaps
+ - Testing focused exclusively on this spec's feature requirements
+
+ ## Execution Order
+
+ Recommended implementation sequence:
+ 1. Database Layer (Task Group 1)
+ 2. API Layer (Task Group 2)
+ 3. Frontend Design (Task Group 3)
+ 4. Test Review & Gap Analysis (Task Group 4)
+ ```
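The checkbox and `N.N` numbering conventions in the template above are regular enough to audit mechanically. A small illustrative Python sketch (not part of the package; `tasks.md` layouts vary per feature) that tallies open and completed sub-tasks per task group:

```python
import re

# Matches lines like "  - [x] 1.2 Create [Model] with validations"
CHECKBOX = re.compile(r"^\s*- \[( |x)\] (\d+)\.(\d+)", re.MULTILINE)

def checkbox_counts(tasks_md: str) -> dict[int, dict[str, int]]:
    """Tally done/open checkboxes per task group, keyed by the group number."""
    counts: dict[int, dict[str, int]] = {}
    for state, group, _sub in CHECKBOX.findall(tasks_md):
        g = counts.setdefault(int(group), {"done": 0, "open": 0})
        g["done" if state == "x" else "open"] += 1
    return counts
```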
+
+ **Note**: Adapt this structure based on the actual feature requirements. Some features may need:
+ - Different task groups (e.g., email notifications, payment processing, data migration)
+ - Different execution order based on dependencies
+ - More or fewer sub-tasks per group
+
+ ## Important Constraints
+
+ - **Create tasks that are specific and verifiable**
+ - **Group related tasks:** For example, group back-end engineering tasks together and front-end UI tasks together.
+ - **Limit test writing during development**:
+   - Each task group (1-3) should write 2-8 focused tests maximum
+   - Tests should cover only critical behaviors, not exhaustive coverage
+   - Test verification should run ONLY the newly written tests, not the entire suite
+   - If there is a dedicated test coverage group for filling gaps in test coverage, that group should add a maximum of 10 additional tests, and only IF NECESSARY to fill critical gaps
+ - **Use a focused test-driven approach** where each task group starts with writing 2-8 tests (x.1 sub-task) and ends with running ONLY those tests (final sub-task)
+ - **Include acceptance criteria** for each task group
+ - **Reference visual assets** if visuals are available
+
+ ## Display confirmation and next step
+
+ Display the following message to the user:
+
+ ```
+ The tasks list has been created at `sublation-os/specs/[this-spec]/tasks.md`.
+
+ Review it closely to make sure it all looks good.
+
+ NEXT STEP 👉 Run `/implement-tasks` (simple, effective) or `/orchestrate-tasks` (advanced, powerful) to start building!
+ ```
+
+ ## User Standards & Preferences Compliance
+
+ IMPORTANT: Ensure that the tasks list is ALIGNED and DOES NOT CONFLICT with the user's preferences and standards as detailed in the following files:
+
+ @sublation-os/standards/backend/api.md
+ @sublation-os/standards/backend/migrations.md
+ @sublation-os/standards/backend/models.md
+ @sublation-os/standards/backend/queries.md
+ @sublation-os/standards/frontend/accessibility.md
+ @sublation-os/standards/frontend/components.md
+ @sublation-os/standards/frontend/css.md
+ @sublation-os/standards/frontend/responsive.md
+ @sublation-os/standards/global/coding-style.md
+ @sublation-os/standards/global/commenting.md
+ @sublation-os/standards/global/conventions.md
+ @sublation-os/standards/global/error-handling.md
+ @sublation-os/standards/global/tech-stack.md
+ @sublation-os/standards/global/validation.md
+ @sublation-os/standards/testing/test-writing.md
@@ -0,0 +1,207 @@
+ Now that we have a spec and tasks list ready for implementation, we will proceed with implementation of this spec by following this multi-phase process:
+
+ PHASE 1: Determine which task group(s) from tasks.md should be implemented
+ PHASE 2: Implement the given task(s)
+ PHASE 3: After ALL task groups have been implemented, produce the final verification report.
+
+ Carefully read and execute the instructions in the following files IN SEQUENCE, following their numbered file names. Only proceed to the next numbered instruction file once the previous numbered instruction has been executed.
+
+ Instructions to follow in sequence:
+
+ # PHASE 1: Determine Tasks
+
+ First, check if the user has already provided instructions about which task group(s) to implement.
+
+ **If the user HAS provided instructions:** Proceed to PHASE 2 to delegate implementation of those specified task group(s) to the **implementer** subagent.
+
+ **If the user has NOT provided instructions:**
+
+ Read `agent-os/specs/[this-spec]/tasks.md` to review the available task groups, then output the following message to the user and WAIT for their response:
+
+ ```
+ Should we proceed with implementation of all task groups in tasks.md?
+
+ If not, then please specify which task(s) to implement.
+ ```
+
+ # PHASE 2: Implement Tasks
+
+ Now that you have the task group(s) to be implemented, proceed with implementation by following these instructions:
+
+ Implement all tasks assigned to you and ONLY those task(s) that have been assigned to you.
+
+ ## Implementation process:
+
+ 1. Analyze the provided spec.md, requirements.md, and visuals (if any)
+ 2. Analyze patterns in the codebase according to its built-in workflow
+ 3. Implement the assigned task group according to requirements and standards
+ 4. Update `agent-os/specs/[this-spec]/tasks.md` to mark the tasks you've implemented as done by setting their checkboxes to the checked state: `- [x]`
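Step 4's checkbox update is a simple line-level edit. A minimal sketch, assuming tasks are identified by their `N.N` numbers as in the tasks.md convention (the helper name is illustrative, not part of the package):

```python
import re

def mark_done(tasks_md: str, task_number: str) -> str:
    """Flip the '- [ ]' checkbox for the given task number (e.g. '1.2') to '- [x]'."""
    pattern = re.compile(
        r"^(\s*- )\[ \]( " + re.escape(task_number) + r"\b)", re.MULTILINE
    )
    return pattern.sub(r"\1[x]\2", tasks_md)
```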
+
+ ## Guide your implementation using:
+ - **The existing patterns** that you've found and analyzed in the codebase.
+ - **Specific notes provided in requirements.md, spec.md AND/OR tasks.md**
+ - **Visuals provided (if any)**, which would be located in `agent-os/specs/[this-spec]/planning/visuals/`
+ - **User Standards & Preferences**, which are defined below.
+
+ ## Self-verify and test your work by:
+ - Running ONLY the tests you've written (if any) and ensuring those tests pass.
+ - IF your task involves user-facing UI, and IF you have access to browser testing tools, open a browser and use the feature you've implemented as if you are a user, to ensure a user can use the feature in the intended way.
+ - Take screenshots of the views and UI elements you've tested and store them in `agent-os/specs/[this-spec]/verification/screenshots/`. Do not store screenshots anywhere else in the codebase other than this location.
+ - Analyze the screenshot(s) you've taken to check them against your current requirements.
+
+ ## Display confirmation and next step
+
+ Display a summary of what was implemented.
+
+ IF all tasks are now marked as done (with `- [x]`) in tasks.md, display this message to the user:
+
+ ```
+ All tasks have been implemented: `agent-os/specs/[this-spec]/tasks.md`.
+
+ NEXT STEP 👉 Run `3-verify-implementation.md` to verify the implementation.
+ ```
+
+ IF there are still tasks in tasks.md that have yet to be implemented (marked unfinished with `- [ ]`), then display this message to the user:
+
+ ```
+ Would you like to proceed with implementation of the remaining tasks in tasks.md?
+
+ If not, please specify which task group(s) to implement next.
+ ```
+
+ ## User Standards & Preferences Compliance
+
+ IMPORTANT: Ensure that the implementation is ALIGNED and DOES NOT CONFLICT with the user's preferences and standards as detailed in the following files:
+
+ @agent-os/standards/backend/api.md
+ @agent-os/standards/backend/migrations.md
+ @agent-os/standards/backend/models.md
+ @agent-os/standards/backend/queries.md
+ @agent-os/standards/frontend/accessibility.md
+ @agent-os/standards/frontend/components.md
+ @agent-os/standards/frontend/css.md
+ @agent-os/standards/frontend/responsive.md
+ @agent-os/standards/global/coding-style.md
+ @agent-os/standards/global/commenting.md
+ @agent-os/standards/global/conventions.md
+ @agent-os/standards/global/error-handling.md
+ @agent-os/standards/global/tech-stack.md
+ @agent-os/standards/global/validation.md
+ @agent-os/standards/testing/test-writing.md
+
+ # PHASE 3: Verify Implementation
+
+ Now that we've implemented all tasks in tasks.md, we must run final verifications and produce a verification report using the following MULTI-PHASE workflow:
+
+ ## Workflow
+
+ ### Step 1: Ensure tasks.md has been updated
+
+ Check `agent-os/specs/[this-spec]/tasks.md` and ensure that all tasks and their sub-tasks are marked as completed with `- [x]`.
+
+ If a task is still marked incomplete, then verify that it has in fact been completed by checking the following:
+ - Run a brief spot check in the code to find evidence that this task's details have been implemented
+ - Check for the existence of an implementation report titled using this task's title in the `agent-os/specs/[this-spec]/implementation/` folder.
+
+ IF you have concluded that this task has been completed, then mark its checkbox and its sub-tasks' checkboxes as completed with `- [x]`.
+
+ IF you have concluded that this task has NOT been completed, then mark its checkbox with ⚠️ and note its incompleteness in your verification report.
+
+ ### Step 2: Update roadmap (if applicable)
+
+ Open `agent-os/product/roadmap.md` and check whether any item(s) match the description of the current spec that has just been implemented. If so, ensure that these item(s) are marked as completed by updating their checkbox(es) to `- [x]`.
+
+ ### Step 3: Run entire test suite
+
+ Run the entire test suite for the application so that ALL tests run. Verify how many tests are passing and how many have failed or produced errors.
+
+ Include these counts and the list of failed tests in your final verification report.
+
+ DO NOT attempt to fix any failing tests. Just note their failures in your final verification report.
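Extracting the counts in Step 3 depends on the test runner's output format. A minimal sketch assuming a pytest-style summary line such as `2 failed, 40 passed`; the pattern would need adapting for other runners:

```python
import re

def summarize(summary_line: str) -> dict[str, int]:
    """Extract counts like '3 failed, 42 passed, 1 error' from a runner summary line."""
    counts = {"passed": 0, "failed": 0, "errors": 0}
    for num, label in re.findall(r"(\d+) (passed|failed|errors?)", summary_line):
        # Normalize singular "error" to the "errors" bucket.
        counts["errors" if label.startswith("error") else label] = int(num)
    return counts
```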
+
+ ### Step 4: Create final verification report
+
+ Create your final verification report in `agent-os/specs/[this-spec]/verifications/final-verification.md`.
+
+ The content of this report should follow this structure:
+
+ ```markdown
+ # Verification Report: [Spec Title]
+
+ **Spec:** `[spec-name]`
+ **Date:** [Current Date]
+ **Verifier:** implementation-verifier
+ **Status:** ✅ Passed | ⚠️ Passed with Issues | ❌ Failed
+
+ ---
+
+ ## Executive Summary
+
+ [Brief 2-3 sentence overview of the verification results and overall implementation quality]
+
+ ---
+
+ ## 1. Tasks Verification
+
+ **Status:** ✅ All Complete | ⚠️ Issues Found
+
+ ### Completed Tasks
+ - [x] Task Group 1: [Title]
+   - [x] Subtask 1.1
+   - [x] Subtask 1.2
+ - [x] Task Group 2: [Title]
+   - [x] Subtask 2.1
+
+ ### Incomplete or Issues
+ [List any tasks that were found incomplete or have issues, or note "None" if all complete]
+
+ ---
+
+ ## 2. Documentation Verification
+
+ **Status:** ✅ Complete | ⚠️ Issues Found
+
+ ### Implementation Documentation
+ - [x] Task Group 1 Implementation: `implementations/1-[task-name]-implementation.md`
+ - [x] Task Group 2 Implementation: `implementations/2-[task-name]-implementation.md`
+
+ ### Verification Documentation
+ [List verification documents from area verifiers if applicable]
+
+ ### Missing Documentation
+ [List any missing documentation, or note "None"]
+
+ ---
+
+ ## 3. Roadmap Updates
+
+ **Status:** ✅ Updated | ⚠️ No Updates Needed | ❌ Issues Found
+
+ ### Updated Roadmap Items
+ - [x] [Roadmap item that was marked complete]
+
+ ### Notes
+ [Any relevant notes about roadmap updates, or note if no updates were needed]
+
+ ---
+
+ ## 4. Test Suite Results
+
+ **Status:** ✅ All Passing | ⚠️ Some Failures | ❌ Critical Failures
+
+ ### Test Summary
+ - **Total Tests:** [count]
+ - **Passing:** [count]
+ - **Failing:** [count]
+ - **Errors:** [count]
+
+ ### Failed Tests
+ [List any failing tests with their descriptions, or note "None - all tests passing"]
+
+ ### Notes
+ [Any additional context about test results, known issues, or regressions]
+ ```
@@ -0,0 +1,164 @@
+ ---
+ argument-hint: [message]
+ description: Diagnose errors using ReAct (Reasoning + Acting) methodology with iterative investigation
+ ---
+
+ You are investigating an error using the **ReAct (Reasoning + Acting)** approach, which combines systematic reasoning with targeted actions to diagnose issues.
+
+ ## Error to Investigate
+ $message
+
+ ---
+
+ ## Investigation Process
+
+ Use the **Thought-Action-Observation** loop iteratively until the root cause is identified. Document each cycle clearly.
+
+ ### Investigation Cycle 1: Initial Analysis
+
+ **THOUGHT**:
+ Analyze the error systematically before taking action:
+ - What does the error message tell me? (error type, location, context)
+ - Based on the error type, what are the 3 most likely causes?
+ - What information would help me confirm or eliminate these hypotheses?
+ - What should I investigate first to get maximum diagnostic value?
+
+ **ACTION**:
+ {Describe what you're doing and why}
+ Examples:
+ - Read the file mentioned in the stack trace to understand context
+ - Search for recent changes to the affected code
+ - Check logs for related errors or warnings
+ - Examine configuration files
+ - Review related test files
+
+ **OBSERVATION**:
+ After completing the action, document what you learned:
+ - What did I find? (specific findings, not interpretations)
+ - Does this confirm or eliminate any hypotheses?
+ - What new questions or patterns emerge?
+ - Do I have enough information, or do I need another cycle?
+
+ ---
+
+ ### Investigation Cycle 2: Hypothesis Testing
+
+ **THOUGHT**:
+ Based on the previous observation, reason through the next step:
+ - Which hypothesis is most likely now? Why?
+ - What specific evidence would confirm or reject this hypothesis?
+ - What's the next best action to take?
+
+ **ACTION**:
+ {Targeted investigation based on your reasoning}
+
+ **OBSERVATION**:
+ {What you discovered and how it changes your understanding}
+
+ ---
+
+ ### Investigation Cycle 3-N: Iterative Deep Dive
+
+ **THOUGHT**: {Continue reasoning based on accumulated observations}
+ **ACTION**: {Continue investigating}
+ **OBSERVATION**: {Continue learning}
+
+ *Continue cycles until you have high confidence in the root cause (typically 3-5 cycles)*
+
+ ---
+
+ ## Root Cause Analysis
+
+ After completing investigation cycles, synthesize your findings:
+
+ ### 🎯 Root Cause
+ {Clear, specific statement of what's wrong - reference exact files and line numbers if possible}
+
+ ### 🔍 Evidence Trail
+ Document the key observations that led to this conclusion:
+ 1. {First key finding}
+ 2. {Second key finding}
+ 3. {Confirming evidence}
+
+ ### 📊 Confidence Level
+ **Confidence**: ⬛⬛⬛⬜⬜ (High/Medium/Low)
+
+ **Reasoning**: {Why are you confident or uncertain? What alternative explanations remain?}
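The filled-square bar can be derived from the stated level; a small illustrative sketch (the exact High/Medium/Low to squares mapping is an assumption, not specified by the command):

```python
def confidence_bar(level: str, width: int = 5) -> str:
    """Render a black/white square bar for a High/Medium/Low level (assumed mapping)."""
    filled = {"High": 5, "Medium": 3, "Low": 1}[level]
    return "⬛" * filled + "⬜" * (width - filled)
```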
+
+ ---
+
+ ## Resolution Plan
+
+ ### ✅ Recommended Fix
+
+ **THOUGHT**:
+ Given the root cause, what's the best approach to fix it?
+ - Consider multiple solutions if applicable
+ - Think through trade-offs and side effects
+ - Evaluate complexity and risk
+
+ **Primary Solution**:
+ {Detailed fix with code examples or specific changes}
+
+ **Why This Approach**:
+ {Reasoning for why this is the best solution}
+
+ **Alternative Solutions** (if applicable):
+ {Other viable approaches and why they weren't chosen}
+
+ ### 🧪 Verification Steps
+ How to confirm the fix works:
+ 1. {Specific test or check}
+ 2. {Expected outcome}
+ 3. {Additional validation}
+
+ ### ⚠️ Potential Side Effects
+ {Any risks or areas to watch after applying the fix}
+
+ ---
+
+ ## Documentation
+
+ Save this investigation report to:
+ `.sublation-os/investigations/[YYYY-MM-DD]-[brief-description].md`
+
+ Use this format for the filename:
+ - `2025-01-15-auth-token-expiry.md`
+ - `2025-01-15-database-connection-timeout.md`
+
+ ---
+
+ ## Quality Standards
+
+ **Good Investigation Example**:
+
+ **THOUGHT**: The error "Cannot read property 'map' of undefined" in UserList.tsx:45 suggests data isn't loaded. Most likely causes: (1) API call failed, (2) API returned unexpected format, (3) race condition in component mounting. I'll check the API call first since that's most common.
+
+ **ACTION**: Read UserList.tsx to see how data is fetched and when map() is called.
+
+ **OBSERVATION**: Line 45 calls users.map() but users comes from useState with no default value. The useEffect fetches users, but there's no loading state. The map() runs before data arrives.
+
+ ✅ **Specific and actionable**
+ ✅ **Clear reasoning documented**
+ ✅ **Each step builds on previous learning**
+
+ **Poor Investigation Example**:
+
+ **THOUGHT**: Something is wrong with the code.
+ **ACTION**: Look at files.
+ **OBSERVATION**: Found a problem.
+
+ ❌ **Too vague**
+ ❌ **No reasoning shown**
+ ❌ **Not helpful for learning**
+
+ ---
+
+ ## Integration with Memory System
+
+ After resolving the issue, consider running `/learn` to capture:
+ - The mistake or pattern that caused the issue
+ - How to detect similar issues in the future
+ - Codebase-specific gotchas discovered
+
+ This helps future debugging sessions by building institutional knowledge.