agentic-code 0.5.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,272 @@
# AI Developer Guide

## Technical Anti-patterns (Red Flags)

Stop immediately and reconsider the design when you detect any of the following patterns:

### Code Quality Anti-patterns
1. **Writing similar code three or more times**
2. **Mixing multiple responsibilities in a single file**
3. **Defining the same content in multiple files**
4. **Making changes without checking dependencies**
5. **Disabling code by commenting it out**
6. **Suppressing errors**

### Design Anti-patterns
- **"Make it work for now" thinking**
- **Patchwork implementation**
- **Optimistic implementation of uncertain technology**
- **Symptomatic fixes**
- **Unplanned large-scale changes**

## Fail-Fast Fallback Design Principles

### Core Principle
Prioritize the reliability of the primary code path over fallback implementations. In distributed systems, excessive fallback mechanisms mask errors and make debugging difficult.

### Implementation Guidelines

#### Default Approach
- **Explicit failure over silent defaults**: Errors must be visible and traceable, not masked by automatic default values
- **Preserve error context**: Include the original error information when re-throwing

#### When Fallbacks Are Acceptable
- **Only with explicit Design Doc approval**: Document why the fallback is necessary
- **Business-critical continuity**: When partial functionality is better than none
- **Graceful degradation paths**: Clearly defined degraded service levels

#### Layer Responsibilities
- **Infrastructure Layer**:
  - Always throw errors upward
  - No business-logic decisions
  - Provide detailed error context

- **Application Layer**:
  - Make business-driven error-handling decisions
  - Implement fallbacks only when specified in requirements
  - Log all fallback activations for monitoring

### Error Masking Detection

**Review Triggers** (require design review):
- Writing a third error-handling block in the same feature
- Multiple error-handling structures in a single function
- Nested error-handling structures
- Error handlers that return default values without propagating

**Before Implementing Any Fallback**:
1. Verify that the Design Doc explicitly defines this fallback
2. Document the business justification
3. Ensure the error is logged with full context
4. Add monitoring/alerting for fallback activation

### Implementation Patterns

Note: Use your language's standard error-handling mechanism (exceptions, Result types, error values, etc.)

```
❌ AVOID: Silent fallback that hides errors
[handle error]:
  return DEFAULT_USER  // Error is hidden; debugging becomes difficult

✅ PREFERRED: Explicit failure with context
[handle error]:
  logError('Failed to fetch user data', userId, error)
  propagate ServiceError('User data unavailable', error)

✅ ACCEPTABLE: Documented fallback with monitoring (when justified in the Design Doc)
[handle error]:
  // Fallback defined in Design Doc section 3.2.1
  logWarning('Primary data source failed, using cache', error)
  incrementMetric('data.fallback.cache_used')

  cachedData = fetchFromCache()
  if not cachedData:
    propagate ServiceError('Both primary and cache failed', error)
  return cachedData
```
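
The preferred explicit-failure pattern above can be sketched in a concrete language. This is a minimal, illustrative sketch: `ServiceError`, `fetchUserFromDb`, and `getUser` are hypothetical names, not part of any real API.

```typescript
// Illustrative sketch of fail-fast error handling with preserved context.
class ServiceError extends Error {
  constructor(message: string, public readonly originalError?: unknown) {
    super(message);
    this.name = "ServiceError";
  }
}

// Simulated infrastructure layer: always throws upward, no business decisions.
function fetchUserFromDb(userId: string): { id: string; name: string } {
  throw new Error(`connection refused while loading ${userId}`);
}

// Application layer: logs with full context, then re-throws with the original
// error attached, instead of silently returning a default user object.
function getUser(userId: string): { id: string; name: string } {
  try {
    return fetchUserFromDb(userId);
  } catch (error) {
    console.error("Failed to fetch user data", userId, error);
    throw new ServiceError("User data unavailable", error);
  }
}
```

A caller that catches the `ServiceError` still has the original infrastructure error attached, so the failure stays traceable end to end.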

## Rule of Three - Criteria for Code Duplication

| Duplication Count | Action | Reason |
|-------------------|--------|--------|
| 1st time | Inline implementation | Cannot predict future changes |
| 2nd time | Consider future consolidation | Pattern beginning to emerge |
| 3rd time | Extract shared code | Pattern established |

### Criteria for Consolidation

**Cases for Consolidation**
- Duplicated business logic
- Complex processing algorithms
- Areas likely to require bulk changes
- Validation rules

**Cases to Avoid Consolidation**
- Accidental matches (coincidentally identical code)
- Code likely to evolve in different directions
- Consolidation that would significantly reduce readability
- Simple helpers in test code

### Implementation Example
```
// ❌ Bad: Immediate consolidation on the 1st duplication
function validateUserEmail(email) { /* ... */ }
function validateContactEmail(email) { /* ... */ }
// → Premature abstraction

// ✅ Good: Consolidate on the 3rd occurrence
// 1st time: inline implementation
// 2nd time: copy, but note the emerging pattern
// 3rd time: extract to a common validator
function validateEmail(email, context) { /* ... */ }
```
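
The third-occurrence extraction can be sketched concretely. This is a hypothetical shape for the shared validator, and the regex is a deliberately simplified placeholder, not a complete RFC 5322 check.

```typescript
// Hypothetical result of the third-occurrence extraction: one shared
// validator with a context parameter replaces three near-identical copies.
type EmailContext = "user" | "contact" | "billing";

function validateEmail(email: string, context: EmailContext): boolean {
  // Simplified placeholder pattern: something@something.tld, no whitespace
  const ok = /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
  if (!ok) {
    console.warn(`Invalid ${context} email rejected`);
  }
  return ok;
}
```

The `context` parameter keeps the call sites self-documenting, which is what makes the consolidation cheaper than three drifting copies.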

## Common Failure Patterns and How to Avoid Them

### Pattern 1: Error Fix Chain
**Symptom**: Fixing one error causes new errors
**Cause**: Surface-level fixes without understanding the root cause
**Avoidance**: Identify the root cause with 5 Whys before fixing

### Pattern 2: Implementation Without Sufficient Testing
**Symptom**: Many bugs surface after implementation
**Cause**: Ignoring the Red-Green-Refactor process
**Avoidance**: Always start with a failing test

### Pattern 4: Ignoring Technical Uncertainty
**Symptom**: Frequent unexpected errors when introducing new technology
**Cause**: Assuming "it should work according to the official documentation" without prior investigation
**Avoidance**:
- Record a certainty evaluation at the beginning of task files
  ```
  Certainty: low (Reason: no examples of MCP connection found)
  Exploratory implementation: true
  Fallback: use conventional API
  ```
- For low-certainty cases, create minimal verification code first

### Pattern 5: Insufficient Existing Code Investigation
**Symptom**: Duplicate implementations, architecture inconsistency, integration failures
**Cause**: Insufficient understanding of existing code before implementation
**Avoidance**:
- Before implementing, always search for similar functionality (using domain, responsibility, and configuration patterns as keywords)
- Similar functionality found → Use that implementation (do not create a new one)
- Similar functionality is technical debt → Create an ADR improvement proposal before implementation
- No similar functionality exists → Implement the new functionality following the existing design philosophy
- Record all decisions and rationale in the "Existing Codebase Analysis" section of the Design Doc

## Debugging Techniques

### 1. Error Analysis Procedure
How to read a stack trace:
1. Read the error message (the first line) accurately
2. Focus on the first and last frames of the trace
3. Identify the first frame where your own code appears

### 2. 5 Whys - Root Cause Analysis
```
Symptom: Application crash on startup
Why 1: Configuration loading failed → Why 2: Config file format changed
Why 3: Dependency update → Why 4: Library breaking change
Why 5: Unconstrained dependency version specification
Root cause: Inappropriate version management strategy
```

### 3. Minimal Reproduction Code
To isolate a problem, attempt to reproduce it with minimal code:
- Remove unrelated parts
- Replace external dependencies with mocks
- Create the minimal configuration that reproduces the problem

### 4. Debug Log Output
```
// Track problems with structured logs
log('DEBUG:', {
  context: 'user-creation',
  input: { email, name },
  state: currentState,
  timestamp: currentTimestamp()
})
```

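A runnable sketch of the structured-log idea: the field names mirror the pseudocode, while `logDebug` and `DebugEntry` are illustrative names, not a library API.

```typescript
// One JSON line per event so log aggregators can filter by field.
interface DebugEntry {
  context: string;
  input: Record<string, unknown>;
  state: string;
  timestamp: string;
}

function logDebug(entry: DebugEntry): string {
  const line = JSON.stringify({ level: "DEBUG", ...entry });
  console.log(line);
  return line;
}

const line = logDebug({
  context: "user-creation",
  input: { email: "dev@example.com", name: "Dev" },
  state: "validating",
  timestamp: new Date().toISOString(),
});
```

Emitting one machine-parseable line per event is what makes the log "structured": the fields stay queryable instead of being buried in free text.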
## Situations Requiring Technical Decisions

### Timing of Abstraction
- Extract patterns only after writing the concrete implementation three times
- Be conscious of YAGNI; implement only the features needed now
- Prioritize current simplicity over future extensibility

### Performance vs. Readability
- Prioritize readability unless a clear bottleneck exists
- Measure before optimizing (don't guess, measure)
- Document the reason in a comment when you optimize

## Continuous Improvement Mindset

- **Humility**: Perfect code doesn't exist; welcome feedback
- **Courage**: Execute necessary refactoring boldly
- **Transparency**: Clearly document the reasoning behind technical decisions

## Implementation Completeness Assurance

### Impact Analysis: Mandatory 3-Stage Process

Complete these stages sequentially before any implementation:

**1. Discovery** - Identify all affected code:
- Implementation references (imports, calls, instantiations)
- Interface dependencies (contracts, types, data structures)
- Test coverage
- Configuration (build configs, environment settings, feature flags)
- Documentation (comments, docs, diagrams)

**2. Understanding** - Analyze each discovered location:
- Role and purpose in the system
- Dependency direction (consumer or provider)
- Data flow (origin → transformations → destination)
- Coupling strength

**3. Identification** - Produce a structured report:
```
## Impact Analysis
### Direct Impact
- [Unit]: [Reason and modification needed]

### Indirect Impact
- [System]: [Integration path → reason]

### Data Flow
[Source] → [Transformation] → [Consumer]

### Risk Assessment
- High: [Complex dependencies, fragile areas]
- Medium: [Moderate coupling, test gaps]
- Low: [Isolated, well-tested areas]

### Implementation Order
1. [Start with the lowest risk or the deepest dependency]
2. [...]
```

**Critical**: Do not implement until all three stages are documented.

**Relationship to Pattern 5**: This process provides the structured methodology for avoiding "Insufficient Existing Code Investigation" (see .agents/rules/core/ai-development-guide.md:150-158)

### Unused Code Deletion

When unused code is detected:
- Will it be used in this work? Yes → Implement now | No → Delete now (Git preserves history)
- Applies to: code, tests, docs, configs, assets

### Existing Code Modification

```
In use? No → Delete
        Yes → Working? No → Delete + Reimplement
                       Yes → Fix/Extend
```

**Principle**: Prefer a clean implementation over patching broken code
@@ -0,0 +1,184 @@
# Documentation Creation Criteria

## Creation Decision Matrix

| Condition | Required Documents | Creation Order |
|-----------|-------------------|----------------|
| New feature addition | PRD → [ADR] → Design Doc → Work Plan | After PRD approval |
| ADR conditions met (see below) | ADR → Design Doc → Work Plan | Start immediately |
| 6+ files | ADR → Design Doc → Work Plan (required) | Start immediately |
| 3-5 files | Design Doc → Work Plan (recommended) | Start immediately |
| 1-2 files | None | Direct implementation |

## ADR Creation Conditions (Required if Any Apply)

### 1. Type System Changes
- **Adding nested types/structures with 3+ levels**: e.g., `A { B { C { D } } }`
  - Rationale: Deep nesting has high complexity and a wide impact scope
- **Changing or deleting types used in 3+ locations**
  - Rationale: Impacts across multiple locations require careful consideration
- **Data representation responsibility changes** (e.g., transfer object → domain model)
  - Rationale: Changes to the conceptual model affect the design philosophy

### 2. Data Flow Changes
- **Storage location changes** (DB → file, memory → cache)
- **Processing order changes with 3+ steps**
  - Example: "Input → Validation → Save" to "Input → Save → Async Validation"
- **Data passing method changes** (props → Context, direct reference → events)

### 3. Architecture Changes
- Layer addition, responsibility changes, component relocation

### 4. External Dependency Changes
- Library, framework, or external API introduction or replacement

### 5. Complex Implementation Logic (Regardless of Scale)
- Managing 3+ states
- Coordinating 5+ asynchronous processes

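The thresholds above can be encoded as a small, testable check. This is a hypothetical sketch: `requiredDocs` and `DocPlan` are illustrative names, and the 5-file "suggest" threshold comes from the AI automation rules later in this document.

```typescript
// Hypothetical encoding of the creation decision matrix.
type DocPlan = { adr: "required" | "suggested" | "none"; designDoc: boolean };

function requiredDocs(filesTouched: number, adrConditionMet: boolean): DocPlan {
  if (adrConditionMet || filesTouched >= 6) {
    return { adr: "required", designDoc: true };   // ADR condition met or 6+ files
  }
  if (filesTouched >= 5) {
    return { adr: "suggested", designDoc: true };  // automation rule: suggest ADR at 5+
  }
  if (filesTouched >= 3) {
    return { adr: "none", designDoc: true };       // 3-5 files: Design Doc recommended
  }
  return { adr: "none", designDoc: false };        // 1-2 files: direct implementation
}
```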
## Detailed Document Definitions

### PRD (Product Requirements Document)

**Purpose**: Define business requirements and user value

**Includes**:
- Business requirements and user value
- Success metrics and KPIs (in measurable form)
- User stories and use cases
- MoSCoW prioritization (Must/Should/Could/Won't)
- MVP and future-phase separation
- User journey diagram
- Scope boundary diagram

**Excludes**:
- Technical implementation details (→ Design Doc)
- Technology selection rationale (→ ADR)
- **Implementation phases** (→ Work Plan)
- **Task breakdown** (→ Work Plan)

### ADR (Architecture Decision Record)

**Purpose**: Record technical decisions

**Includes**:
- Decision (what was selected)
- Rationale (why it was selected)
- Option comparison (minimum 3 options) and trade-offs
- Architecture impact
- Principled implementation guidelines

**Excludes**:
- Implementation schedule and duration (→ Work Plan)
- Detailed implementation procedures (→ Design Doc)
- Specific code examples (→ Design Doc)
- Resource assignments (→ Work Plan)

### Design Document

**Purpose**: Define the technical implementation

**Includes**:
- **Existing codebase analysis** (required)
  - Implementation path mapping (both existing and new)
  - Integration point clarification (connection points with existing code, even for new implementations)
- Technical implementation approach (vertical/horizontal/hybrid)
- **Technical dependencies and implementation constraints** (required implementation order)
- Interface and type definitions
- Data flow and component design
- **E2E verification procedures at integration points**
- **Acceptance criteria (in measurable form)**
- Change impact map (clearly separating direct impact, indirect impact, and no ripple effect)
  - Complete enumeration of integration points
  - Data contract clarification
- **Agreement checklist** (agreements with stakeholders)
- **Prerequisite ADRs** (including common ADRs)

**Required Structural Elements**:
```yaml
Change Impact Map:
  Change Target: [Component/Feature]
  Direct Impact: [Files/Functions]
  Indirect Impact: [Data format/Processing time]
  No Ripple Effect: [Unaffected features]

API Contract Change Matrix:
  Existing: [Function/operation signature]
  New: [Function/operation signature]
  Conversion Required: [Yes/No]
  Compatibility Strategy: [Approach]
```

**Excludes**:
- Why that technology was chosen (→ reference the ADR)
- When to implement, and for how long (→ Work Plan)
- Who will implement it (→ Work Plan)

### Work Plan

**Purpose**: Implementation task management and progress tracking

**Includes**:
- Task breakdown and dependencies (maximum 2 levels)
- Schedule and duration estimates
- **E2E verification procedures copied from the Design Doc** (additions allowed, deletions not)
- **Stage 4 Quality Assurance stage (required)**
- Progress records (checkbox format)

**Excludes**:
- Technical rationale (→ ADR)
- Design details (→ Design Doc)

**Stage Division Criteria**:
1. **Stage 1: Foundation Implementation** - Type definitions, interfaces, test preparation
2. **Stage 2: Core Feature Implementation** - Business logic, unit tests
3. **Stage 3: Integration Implementation** - External connections, presentation layer
4. **Stage 4: Quality Assurance (Required)** - Acceptance criteria met, all tests passing, quality checks

**Three Elements of Task Completion**:
1. **Implementation Complete**: The code is functional
2. **Quality Complete**: Tests, type checks, and linting pass
3. **Integration Complete**: Verified connection with other components

## Creation Process

1. **Problem Analysis**: Assess the change scale, check the ADR conditions
2. **ADR Option Consideration** (ADR only): Compare 3+ options, specify trade-offs
3. **Creation**: Use the templates, include measurable conditions
4. **Approval**: "Accepted" status after review enables implementation

## Storage Locations

| Document | Path | Naming Convention | Template |
|----------|------|------------------|----------|
| PRD | `docs/prd/` | `[feature-name]-prd.md` | `template-en.md` |
| ADR | `docs/adr/` | `ADR-[4-digits]-[title].md` | `template-en.md` |
| Design Doc | `docs/design/` | `[feature-name]-design.md` | `template-en.md` |
| Work Plan | `docs/plans/` | `YYYYMMDD-{type}-{description}.md` | `template-en.md` |

*Note: Work plans in `docs/plans/` are excluded from version control by `.gitignore`.*

## ADR Status
`Proposed` → `Accepted` → `Deprecated`/`Superseded`/`Rejected`

## AI Automation Rules
- 5+ files: Suggest ADR creation
- Type or data flow change detected: ADR mandatory
- Check existing ADRs before implementation

## Diagram Requirements

Required diagrams for each document (in mermaid notation):

| Document | Required Diagrams | Purpose |
|----------|------------------|---------|
| PRD | User journey diagram, scope boundary diagram | Clarify user experience and scope |
| ADR | Option comparison diagram (when needed) | Visualize trade-offs |
| Design Doc | Architecture diagram, data flow diagram | Understand the technical structure |
| Work Plan | Phase structure diagram, task dependency diagram | Clarify the implementation order |

## Common ADR Relationships
1. **At creation**: Identify common technical areas (logging, error handling, async processing, etc.) and reference existing common ADRs
2. **When missing**: Consider creating the necessary common ADRs
3. **Design Doc**: List common ADRs in the "Prerequisite ADRs" section
4. **Compliance check**: Verify that the design aligns with common ADR decisions
@@ -0,0 +1,76 @@
# Integration Test & E2E Test Design/Implementation Rules

## Test Types and Limits

| Type | Purpose | Limit |
|------|---------|-------|
| Integration test | Component interaction verification | 3 per feature |
| E2E test | Critical user journey verification | 1-2 per feature |

## Behavior-First Principle

### Observability Check (All YES = Include)

| Check | Question | If NO |
|-------|----------|-------|
| Observable | Can the user observe the result? | Exclude |
| System context | Does it require the integration of multiple components? | Exclude |
| Automatable | Can it run stably in a CI environment? | Exclude |

### Include/Exclude Criteria

**Include**: Business logic accuracy, data integrity, user-visible features, error handling
**Exclude**: Live external connections, performance metrics, implementation details, UI layout

## Skeleton Specification

### Required Comment Format

Each test skeleton MUST include:
- **AC**: Original acceptance-criteria text
- **ROI**: Calculated score with business value and frequency
- **Behavior**: Trigger → Process → Observable Result format
- **Metadata**: @category, @dependency, @complexity annotations

## Implementation Rules

### Behavior Verification

| Step Type | Verification Target |
|-----------|---------------------|
| Trigger | Reproduce in test setup (Arrange) |
| Process | Intermediate state or function call |
| Observable result | Final output value (return value, error message, log output) |

**Pass Criteria**: A test passes if the "observable result" is verified as a return value or a mock call argument

### Integration Test Mock Boundaries

| Judgment Criterion | Mock | Actual |
|--------------------|------|--------|
| Part of the test target? | No → May mock | Yes → Actual required |
| External network communication? | Yes → Mock required | No → Actual recommended |

### E2E Test Execution Conditions

- Execute only after all components are implemented
- Do not use mocks (full system integration required)

## Review Criteria

### Skeleton and Implementation Consistency

| Check | Failure Condition |
|-------|-------------------|
| Behavior verification | No assertion on the "observable result" |
| Verification item coverage | Listed verification items not included in assertions |
| Mock boundary | Internal components mocked in an integration test |

### Implementation Quality

| Check | Failure Condition |
|-------|-------------------|
| AAA structure | Arrange/Act/Assert separation unclear |
| Independence | State shared between tests, execution-order dependency |
| Reproducibility | Depends on dates or randomness; results vary |
| Readability | Test name and verification content don't match |
@@ -0,0 +1,153 @@
# Metacognition Protocol

## Purpose

Self-assessment checkpoints.

## When to Apply [MANDATORY CHECKPOINTS]

**BLOCKING METACOGNITION REQUIRED at:**
- [CHECKPOINT] Task type changes → CANNOT proceed without assessment
- [CHECKPOINT] After completing ANY task from the work plan → MUST evaluate before the next task
- [CHECKPOINT] On encountering an error or unexpected result → ASSESS the approach immediately
- [CHECKPOINT] Before writing the first line of a new feature → VALIDATE the approach first
- [CHECKPOINT] When switching between major phases → CONFIRM all gates passed

**ENFORCEMENT**: Skipping metacognition = CRITICAL VIOLATION

## Assessment Questions

### 1. Task Understanding

- What is the fundamental goal?
- Am I solving the root cause or a symptom?
- Do I have all necessary information?
- Are success criteria clear and measurable?
- What are the known unknowns at this point?

### 2. Current State

- What rules are currently loaded?
- Which rules are actually being used?
- What assumptions am I making?
- What could go wrong?

### 3. Approach Validation

- Is my approach the simplest solution?
- Am I following established patterns?
- Have I considered alternatives?
- Is this maintainable long-term?
- What would make me reverse this decision? (Kill criteria)

## Rule Selection Guide

### By Task Type

| Task Type | Essential Rules | Optional Rules |
|-----------|----------------|----------------|
| **Implementation** | language/rules.md, ai-development-guide.md | architecture patterns |
| **Bug fix** | ai-development-guide.md | debugging patterns |
| **Design** | documentation-criteria.md | architecture patterns |
| **Testing** | language/testing.md | coverage strategies |
| **Refactoring** | ai-development-guide.md | design patterns |

### Loading Strategy

**Immediate needs**: Load only what's required now
**Progressive loading**: Add rules as specific needs arise
**Cleanup**: Unload rules after task completion

Note: Context management is the user's responsibility. Ask for guidance if unsure.

## Common Decision Points

### When Starting Work [BLOCKING CHECKLIST]
☐ [MUST VERIFY] Task type and scale documented with evidence
☐ [MUST VERIFY] Required rules LOADED and file paths listed
☐ [MUST VERIFY] Success criteria MEASURABLE and specific
☐ [MUST VERIFY] Approach validated against existing patterns

**GATE: CANNOT start coding if ANY item is unchecked**

### During Execution [PROGRESS GATES]
☐ [VERIFY] Following the work plan from `docs/design/work-plan.md`
☐ [VERIFY] Making measurable progress (list completed items)
☐ [EVALUATE] Are additional rules needed? (load IMMEDIATELY if yes)
☐ [EVALUATE] Blocked for more than 10 minutes? (MUST ask for help)

**Dynamic Rule Loading Triggers:**
- The same error occurs 2+ times → Load `ai-development-guide.md` for debugging patterns
- "Performance" mentioned in requirements → Load optimization rules if available
- "Security" mentioned in requirements → Load security guidelines if available
- External API/service integration needed → Load integration patterns if available

**ENFORCEMENT: If progress is stalled → MANDATORY metacognition**

### After Completion [EXIT GATES]
☐ [VERIFIED] ALL completion criteria met with evidence
☐ [VERIFIED] Code quality metrics passed (lint, test, build)
☐ [VERIFIED] Documentation updated (if applicable)
☐ [RECORDED] What worked/failed for the next iteration

**GATE: CANNOT mark complete without ALL items verified**

## Anti-Pattern Recognition

| Pattern | Signs | Correction |
|---------|-------|------------|
| **Over-engineering** | Complex solution for a simple problem | Simplify the approach |
| **Under-planning** | Jumping into code too quickly | Step back and plan first |
| **Tunnel vision** | Ignoring alternatives | Consider other approaches |
| **Quality debt** | Skipping tests or docs | Complete them properly |
| **Context bloat** | Loading unnecessary rules | Load only the essentials |

## Error Recovery

When stuck:
1. Identify what is blocking progress
2. Check whether it's a knowledge gap or a logic error
3. Review the loaded rules for guidance
4. Consider a simpler approach
5. Ask the user for clarification

**ERROR HANDLING PROTOCOL:**

When encountering an error or blocker:
- [IMMEDIATE] Execute a metacognition assessment
- [SEARCH] Look for similar patterns in the codebase
- [RE-READ] Relevant rule files for guidance
- [EVALUATE] Can I solve this with the available information?

If unable to resolve:
- [DOCUMENT] The exact error message and context
- [EXPLAIN] What was attempted and why it failed
- [REQUEST] User guidance with specific questions

**PRINCIPLE: Ask for help when genuinely stuck, not after an arbitrary number of attempts**

## Learning from Experience

Track:
- What worked well
- What caused delays
- Which rules were helpful
- What patterns emerged
- What to do differently

## Guidelines

- **Be honest**: Acknowledge when uncertain
- **Be systematic**: Follow a structured approach
- **Be efficient**: Don't overthink simple tasks
- **Be thorough**: Don't skip important steps
- **Be adaptive**: Adjust the approach based on feedback

## Notes

Remember:
- Metacognition prevents costly mistakes
- Regular reflection improves quality
- It's okay to pause and think
- Ask for help when genuinely stuck
- Perfect is the enemy of good