agentic-code 0.5.1

@@ -0,0 +1,199 @@
1
+ # Implementation
2
+
3
+ ## Required Rules [MANDATORY - MUST BE ACTIVE]
4
+
5
+ **RULE AVAILABILITY VERIFICATION:**
6
+ 1. [VERIFY ACTIVE] `.agents/rules/core/metacognition.md`
7
+ 2. [LOAD IF NOT ACTIVE] `.agents/rules/language/rules.md`
8
+ 3. [LOAD IF NOT ACTIVE] `.agents/rules/language/testing.md`
9
+
10
+ **LOADING PROTOCOL:**
11
+ - STEP 1: VERIFY metacognition.md is active from initial session setup
12
+ - STEP 2: CHECK if language/rules.md is active in working memory
13
+ - STEP 3: If language/rules.md NOT active → Execute BLOCKING READ
14
+ - STEP 4: CHECK if testing.md is active in working memory
15
+ - STEP 5: If testing.md NOT active → Execute BLOCKING READ
16
+ - STEP 6: CONFIRM all rules active before writing ANY code
17
+
18
+ ## Plan Injection Requirement [MANDATORY]
19
+
20
+ **Upon reading this file, IMMEDIATELY inject to work plan:**
21
+ 1. All BLOCKING READs identified in Loading Protocol above:
22
+ - `.agents/rules/language/rules.md` (if not active)
23
+ - `.agents/rules/language/testing.md` (if not active)
24
+ 2. Mark each with "[From implementation.md]" source tag
25
+ 3. Show evidence of injection:
26
+ ```
27
+ [PLAN INJECTION FROM implementation.md]
28
+ Injected to work plan:
29
+ ✓ BLOCKING READ: language/rules.md - development standards
30
+ ✓ BLOCKING READ: language/testing.md - TDD process
31
+ Status: VERIFIED
32
+ ```
33
+
34
+ **ENFORCEMENT:** Cannot proceed without Plan Injection evidence
35
+
36
+ **EVIDENCE REQUIRED:**
37
+ ```
38
+ Rule Status Verification:
39
+ ✓ metacognition.md - ACTIVE (from session setup)
40
+ ✓ language/rules.md - ACTIVE (loaded/verified)
41
+ ✓ language/testing.md - ACTIVE (loaded/verified)
42
+ ```
43
+
44
+ ## Phase Entry Gate [BLOCKING - SYSTEM HALT IF VIOLATED]
45
+
46
+ **CHECKPOINT: System CANNOT write ANY CODE until ALL boxes checked:**
47
+ ☐ [VERIFIED] THIS FILE (`implementation.md`) has been READ and is active
48
+ ☐ [VERIFIED] Plan Injection completed (from implementation.md Plan Injection Requirement)
49
+ ☐ [VERIFIED] All required rules listed above are LOADED and active
50
+ ☐ [VERIFIED] Work plan EXISTS with task definitions
51
+ ☐ [VERIFIED] Work plan contains ALL BLOCKING READs from this file
52
+ ☐ [VERIFIED] Current task identified from work plan
53
+ ☐ [VERIFIED] TDD process understood (Red-Green-Refactor-Verify)
54
+ ☐ [VERIFIED] SESSION_BASELINE_DATE established and active
55
+
56
+ **METACOGNITION GATE [MANDATORY]:**
57
+ BEFORE writing first line of code:
58
+ - Understand what needs to be built
59
+ - Verify approach follows existing patterns
60
+ - Confirm TDD cycle will be followed
61
+
62
+ **GATE ENFORCEMENT:**
63
+ IF any box unchecked → HALT → Return to uncompleted step
64
+ IF attempting to write code without gates → CRITICAL ERROR
65
+
66
+ ## Purpose
67
+
68
+ Implement code using TDD process.
69
+
70
+ ## When to Use
71
+
72
+ - Writing new code
73
+ - Modifying existing code
74
+ - Implementing features from design
75
+ - Fixing bugs
76
+ - Refactoring
77
+
78
+ ## Task Exit: Confirm All Items Are Complete
79
+
80
+ - Tests written and passing?
81
+ - Code implements requirements?
82
+ - Quality checks executed with 0 errors?
83
+ - Changes committed to git?
84
+ - Work plan task checkbox marked [x]?
85
+
86
+ ## TDD Implementation Process
87
+
88
+ ### Every code change follows:
89
+
90
+ **1. RED Phase - Write failing test first**
91
+ - Define expected behavior in test
92
+ - Run test to confirm it fails
93
+ - No implementation code yet
94
+
95
+ **2. GREEN Phase - Make test pass**
96
+ - Write minimal code to pass test
97
+ - Focus only on making test green
98
+ - Run test to confirm it passes
99
+
100
+ **3. REFACTOR Phase - Improve code**
101
+ - Clean up implementation
102
+ - Apply coding standards from language/rules.md
103
+ - Ensure test still passes
104
+
105
+ **4. VERIFY Phase - Quality checks**
106
+ - Execute ALL commands from language/testing.md Quality Check Commands section
107
+ - Confirm ALL checks pass with 0 errors
108
+ - Ready for commit
109
+
110
+ **5. COMMIT Phase - Version control [MANDATORY]**
111
+ - Stage changes for current implementation
112
+ - Create commit with descriptive message
113
+ - Commit message format: "feat: [what was implemented]" or follow work plan task name if available
114
+ - ENFORCEMENT: Implementation task NOT complete until committed
115
+ - NOTE: Skip if user explicitly says "don't commit"
116
+
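+ The sketch below illustrates the RED and GREEN phases above. It assumes, purely for illustration, a TypeScript project tested with Vitest; the actual language, test runner, and quality commands come from `language/rules.md` and `language/testing.md`, and `applyDiscount` is a hypothetical function.
+
+ ```typescript
+ // pricing.test.ts (RED: write the failing test first)
+ import { describe, it, expect } from "vitest";
+ import { applyDiscount } from "./pricing"; // hypothetical module under test
+
+ describe("applyDiscount", () => {
+   it("applies a 25% discount to the price", () => {
+     expect(applyDiscount(100, 0.25)).toBe(75); // fails until pricing.ts exists
+   });
+ });
+
+ // pricing.ts (GREEN: the minimal implementation that makes the test pass)
+ export function applyDiscount(price: number, rate: number): number {
+   return price * (1 - rate);
+ }
+ ```
+
+ The REFACTOR and VERIFY phases then clean this code up against `language/rules.md` and run the project's quality checks before the COMMIT phase.
+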
117
+ ## Internal Progress Tracking
118
+
119
+ Track these phases internally:
120
+ ```
121
+ Current Task: [task name]
122
+ □ RED Phase: Test written
123
+ □ GREEN Phase: Test passing
124
+ □ REFACTOR Phase: Code cleaned
125
+ □ VERIFY Phase: Quality checks passed (0 errors)
126
+ □ COMMIT Phase: Changes committed
127
+ ```
128
+
129
+ ## Common Patterns with TDD
130
+
131
+ **New Feature**
132
+ 1. Write test for new behavior (RED)
133
+ 2. Implement minimal solution (GREEN)
134
+ 3. Refactor and add edge cases (REFACTOR)
135
+ 4. Verify quality (VERIFY)
136
+
137
+ **Bug Fix**
138
+ 1. Write test that reproduces bug (RED)
139
+ 2. Fix the bug (GREEN)
140
+ 3. Ensure no regression (REFACTOR)
141
+ 4. Verify quality (VERIFY)
142
+
143
+ **Refactoring**
144
+ 1. Ensure tests exist and pass
145
+ 2. Make small changes
146
+ 3. Run tests after each change
147
+ 4. Maintain behavior
148
+
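+ As an illustration of the Bug Fix pattern above, the regression test reproduces the reported behavior before the fix is made (hypothetical scenario, continuing the Vitest sketch used earlier):
+
+ ```typescript
+ // pricing.test.ts (RED: reproduce the reported bug as a failing test)
+ import { it, expect } from "vitest";
+ import { applyDiscount } from "./pricing";
+
+ it("never returns a negative total when the discount rate exceeds 1", () => {
+   expect(applyDiscount(10, 1.5)).toBe(0); // fails until the fix below is applied
+ });
+
+ // pricing.ts (GREEN: clamp the result so over-discounting cannot go negative)
+ export function applyDiscount(price: number, rate: number): number {
+   return Math.max(0, price * (1 - rate));
+ }
+ ```
+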
149
+ ## Language-Agnostic Standards
150
+
151
+ Regardless of language:
152
+ - Clear variable names
153
+ - Consistent formatting
154
+ - Proper indentation
155
+ - Logical file organization
156
+ - Separation of concerns
157
+
158
+ ## Testing Approach
159
+
160
+ **REFERENCE `.agents/rules/language/testing.md` for complete testing strategy including:**
161
+ - Test types and granularity
162
+ - Test-first development process
163
+ - Coverage requirements
164
+ - Mock and stub usage patterns
165
+
166
+ ## Security Considerations
167
+
168
+ Always:
169
+ - Validate all inputs
170
+ - Sanitize user data
171
+ - Use parameterized queries
172
+ - Avoid hardcoded secrets
173
+ - Follow principle of least privilege
174
+
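+ A minimal sketch of two of the points above (input validation and parameterized queries), assuming a TypeScript service that uses the `pg` and `zod` packages; both library choices and all names are illustrative only:
+
+ ```typescript
+ import { Pool } from "pg"; // node-postgres client (illustrative choice)
+ import { z } from "zod";   // schema validation (illustrative choice)
+
+ // No hardcoded secrets: the pool reads connection settings from the environment
+ const pool = new Pool();
+
+ // Validate all inputs at the boundary before they reach business logic
+ const UserQuery = z.object({ email: z.string().email() });
+
+ export async function findUserByEmail(raw: unknown) {
+   const { email } = UserQuery.parse(raw); // throws on malformed input
+
+   // Parameterized query: the driver handles escaping, preventing SQL injection
+   const result = await pool.query(
+     "SELECT id, email FROM users WHERE email = $1",
+     [email],
+   );
+   return result.rows[0] ?? null;
+ }
+ ```
+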
175
+ ## Anti-Patterns to Avoid
176
+
177
+ 1. **God functions**: Doing too much in one function
178
+ 2. **Magic numbers**: Use named constants
179
+ 3. **Deep nesting**: Max 3 levels of indentation
180
+ 4. **Ignoring errors**: Always handle or propagate
181
+ 5. **Premature optimization**: Make it work, then optimize
182
+ 6. **Copy-paste coding**: Extract common code
183
+ 7. **Unclear naming**: Be explicit, not clever
184
+
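+ For instance, the "magic numbers" and "deep nesting" items often appear together, as in this hypothetical TypeScript snippet:
+
+ ```typescript
+ // Before: magic numbers and three levels of nesting
+ function shippingCost(order: { total: number; express: boolean }): number {
+   if (order.total > 0) {
+     if (order.total >= 50) {
+       if (order.express) {
+         return 5;
+       }
+       return 0;
+     }
+   }
+   return 7;
+ }
+
+ // After: named constants and early returns keep the logic flat
+ const FREE_SHIPPING_THRESHOLD = 50;
+ const EXPRESS_SURCHARGE = 5;
+ const STANDARD_RATE = 7;
+
+ function shippingCostRefactored(order: { total: number; express: boolean }): number {
+   if (order.total < FREE_SHIPPING_THRESHOLD) return STANDARD_RATE;
+   return order.express ? EXPRESS_SURCHARGE : 0;
+ }
+ ```
+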
185
+ ## Final Quality Gate
186
+
187
+ **REFERENCE `.agents/tasks/quality-assurance.md` for complete quality checklist**
188
+
189
+ Essential completion requirements:
190
+ □ ALL quality standards from quality-assurance.md satisfied
191
+ □ ALL language requirements from language/rules.md met
192
+ □ ALL testing requirements from testing.md completed
193
+ □ Work plan task checkbox updated to [x]
194
+ □ Metacognition "After Completion" executed
195
+
196
+ **FINAL METACOGNITION GATE [BLOCKING]:**
197
+ - Cannot mark task complete without metacognition assessment
198
+ - Must document what worked/failed for future reference
199
+ - Must evaluate if additional rules needed for next task
@@ -0,0 +1,132 @@
1
+ # Integration Test Review
2
+
3
+ ## Required Rules [MANDATORY - MUST BE ACTIVE]
4
+
5
+ **RULE AVAILABILITY VERIFICATION:**
6
+ 1. [LOAD IF NOT ACTIVE] `.agents/rules/language/testing.md`
7
+ 2. [LOAD IF NOT ACTIVE] `.agents/rules/core/integration-e2e-testing.md`
8
+
9
+ **LOADING PROTOCOL:**
10
+ - STEP 1: CHECK if language/testing.md is active in working memory
11
+ - STEP 2: If language/testing.md NOT active → Execute BLOCKING READ
12
+ - STEP 3: CHECK if integration-e2e-testing.md is active in working memory
13
+ - STEP 4: If integration-e2e-testing.md NOT active → Execute BLOCKING READ
14
+ - STEP 5: CONFIRM all required rules active before proceeding
15
+
16
+ ## Purpose
17
+
18
+ Verify implementation quality of integration and E2E tests. Evaluate consistency between skeleton comments (AC, Behavior, metadata) and actual test implementation.
19
+
20
+ ## When to Use
21
+
22
+ - After integration/E2E test implementation is complete
23
+ - Before Quality Assurance phase
24
+ - When test skeletons have been converted to actual test implementations
25
+
26
+ ## Required Information
27
+
28
+ - **testFile**: Path to the test file to review (required)
29
+ - **designDocPath**: Path to related Design Doc (optional)
30
+
31
+ ## Completion Conditions
32
+
33
+ □ All skeleton comments verified against implementation
34
+ □ Behavior verification assertions present for all observable results
35
+ □ AAA structure confirmed in all test cases
36
+ □ Mock boundaries appropriate (external=mock, internal=actual)
37
+ □ Test independence verified (no shared mutable state)
38
+ □ Quality issues identified and documented
39
+
40
+ ## Review Process
41
+
42
+ ### Stage 1: Skeleton Comment Extraction
43
+
44
+ Extract the following from the test file:
45
+ - `AC:` - Acceptance criteria reference
46
+ - `ROI:` - ROI score and values
47
+ - `Behavior:` - Trigger → Process → Observable Result
48
+ - `@category:` - Test category
49
+ - `@dependency:` - Component dependencies
50
+ - `@complexity:` - Complexity level
51
+
52
+ ### Stage 2: Skeleton and Implementation Consistency Check
53
+
54
+ For each test case, verify:
55
+
56
+ | Check Item | Verification Content | Failure Condition |
57
+ |------------|---------------------|-------------------|
58
+ | AC Correspondence | Test implemented for AC comment | Pending/todo marker remains |
59
+ | Behavior Verification | Assertion exists for "observable result" | No assertion |
60
+ | Verification Item Coverage | All listed items included in assertions | Item missing |
61
+
62
+ ### Stage 3: Implementation Quality Check
63
+
64
+ | Check Item | Verification Content | Failure Condition |
65
+ |------------|---------------------|-------------------|
66
+ | AAA Structure | Arrange/Act/Assert separation clear | Separation unclear |
67
+ | Independence | No state sharing between tests | Shared state modified |
68
+ | Reproducibility | No direct use of current time or random values | Non-deterministic elements |
69
+ | Readability | Test name matches verification content | Name and content diverge |
70
+
71
+ ### Stage 4: Mock Boundary Check (Integration Tests Only)
72
+
73
+ | Judgment Criteria | Expected State | Failure Condition |
74
+ |-------------------|----------------|-------------------|
75
+ | External API | Mock required | Actual HTTP communication |
76
+ | Internal Components | Use actual | Unnecessary mocking |
77
+
78
+ ## Output Format
79
+
80
+ ### Status Determination
81
+
82
+ **approved**:
83
+ - All ACs have implemented tests (no pending/todo markers)
84
+ - All observable results have assertions
85
+ - No quality issues or only low priority ones
86
+
87
+ **needs_revision**:
88
+ - Pending/todo markers remain
89
+ - Behavior verification missing
90
+ - Medium to high priority quality issues exist
91
+
92
+ **blocked**:
93
+ - Skeleton file not found
94
+ - AC intent unclear
95
+ - Major contradiction between Design Doc and implementation
96
+
97
+ ### Report Structure
98
+
99
+ ```
100
+ [REVIEW RESULT]
101
+ status: approved | needs_revision | blocked
102
+ testFile: [path]
103
+ summary: [result summary]
104
+
105
+ [SKELETON COMPLIANCE]
106
+ totalACs: [count]
107
+ implementedACs: [count]
108
+ pendingTodos: [count]
109
+ missingAssertions: [list if any]
110
+
111
+ [QUALITY ISSUES]
112
+ - severity: high | medium | low
113
+ category: [aaa_structure | independence | reproducibility | mock_boundary | readability]
114
+ location: [file:line]
115
+ description: [problem description]
116
+ suggestion: [fix proposal]
117
+
118
+ [VERDICT]
119
+ decision: approved | needs_revision | blocked
120
+ reason: [decision reason]
121
+ prioritizedActions:
122
+ 1. [highest priority fix]
123
+ 2. [next fix]
124
+ ```
125
+
126
+ ## Anti-Patterns to Avoid
127
+
128
+ - Approving tests with remaining pending/todo markers
129
+ - Ignoring missing assertions for observable results
130
+ - Overlooking shared mutable state between tests
131
+ - Accepting excessive mocking of internal components
132
+ - Skipping AAA structure verification
@@ -0,0 +1,336 @@
1
+ # Product Requirements Document (PRD) Creation
2
+
3
+ ## Required Rules [MANDATORY - MUST BE ACTIVE]
4
+
5
+ **RULE AVAILABILITY VERIFICATION:**
6
+ 1. [VERIFY ACTIVE] `.agents/rules/core/metacognition.md` (loaded at session start)
7
+ 2. [LOAD IF NOT ACTIVE] `.agents/rules/core/documentation-criteria.md`
8
+
9
+ **LOADING PROTOCOL:**
10
+ - STEP 1: VERIFY metacognition.md is active from initial session setup
11
+ - STEP 2: CHECK if documentation-criteria.md is active in working memory
12
+ - STEP 3: If documentation-criteria.md NOT active → Execute BLOCKING READ
13
+ - STEP 4: CONFIRM all rules active before proceeding
14
+
15
+ ## Plan Injection Requirement [MANDATORY]
16
+
17
+ **Upon reading this file, IMMEDIATELY inject to work plan:**
18
+ 1. All BLOCKING READs identified in Loading Protocol above:
19
+ - `.agents/rules/core/documentation-criteria.md` (if not active)
20
+ 2. Mark each with "[From prd-creation.md]" source tag
21
+ 3. Show evidence of injection:
22
+ ```
23
+ [PLAN INJECTION FROM prd-creation.md]
24
+ Injected to work plan:
25
+ ✓ BLOCKING READ: documentation-criteria.md - PRD creation criteria
26
+ Status: VERIFIED
27
+ ```
28
+
29
+ **ENFORCEMENT:** Cannot proceed without Plan Injection evidence
30
+
31
+ **EVIDENCE REQUIRED:**
32
+ ```
33
+ Rule Status Verification:
34
+ ✓ metacognition.md - ACTIVE (from session setup)
35
+ ✓ documentation-criteria.md - ACTIVE (loaded/verified)
36
+ ```
37
+
38
+ ## Phase Entry Gate [BLOCKING - SYSTEM HALT IF VIOLATED]
39
+
40
+ **CHECKPOINT: System CANNOT proceed until ALL boxes checked:**
41
+ ☐ [VERIFIED] THIS FILE (`prd-creation.md`) has been READ and is active
42
+ ☐ [VERIFIED] Plan Injection completed (from prd-creation.md Plan Injection Requirement)
43
+ ☐ [VERIFIED] Work plan contains ALL BLOCKING READs from this file
44
+ ☐ [VERIFIED] Project structure confirmed
45
+ ☐ [VERIFIED] Existing PRDs investigation COMPLETED with evidence
46
+ ☐ [VERIFIED] Related documentation search EXECUTED with results documented
47
+ ☐ [VERIFIED] Required rules LOADED with file paths listed above
48
+ ☐ [VERIFIED] User requirements gathered and clarified
49
+ ☐ [VERIFIED] SESSION_BASELINE_DATE established and active
50
+
51
+ **METACOGNITION GATE [MANDATORY]:**
52
+ BEFORE starting PRD creation, execute metacognition assessment:
53
+ - Understand business goals (not just technical implementation)
54
+ - Verify all stakeholder needs identified
55
+ - Confirm user value is clearly defined
56
+
57
+ **GATE ENFORCEMENT:**
58
+ IF any box unchecked → HALT → Return to uncompleted step
59
+ IF attempting to skip → CRITICAL ERROR
60
+
61
+ ## Purpose
62
+
63
+ Create Product Requirements Documents for new features or significant changes.
64
+
65
+ ## When to Use
66
+
67
+ ### PRD Required When [AUTOMATIC TRIGGER]:
68
+ - New feature addition
69
+ - Major user experience changes
70
+ - Fundamental changes to business logic
71
+ - Impact on multiple stakeholders
72
+ - New product/service launch
73
+ - Significant workflow modifications
74
+
75
+ **ENFORCEMENT**: Starting implementation without PRD for these conditions = REQUIREMENTS PHASE FAILURE
76
+
77
+ ### PRD Not Required When:
78
+ - Simple bug fixes
79
+ - Internal refactoring (no user impact)
80
+ - Documentation updates only
81
+ - Minor UI tweaks
82
+ - Performance optimization (no behavior change)
83
+
84
+ ## Completion Conditions
85
+
86
+ ### For PRD:
87
+ □ Business requirements clearly stated
88
+ □ User personas identified
89
+ □ User stories with acceptance criteria defined
90
+ □ Success metrics (KPIs) quantified
91
+ □ Scope explicitly defined (what's included/excluded)
92
+ □ MoSCoW prioritization completed (Must/Should/Could/Won't)
93
+ □ MVP vs Future phases separated
94
+ □ Risks and assumptions documented
95
+ □ Timeline/milestones outlined
96
+ □ User journey diagram created
97
+ □ Scope boundary diagram included
98
+
99
+ ## Mandatory Process Before Document Creation [STRICT COMPLIANCE]
100
+
101
+ **These steps MUST be completed to pass the entry gate:**
102
+
103
+ ### 1. Existing Documentation Investigation [REQUIRED - CANNOT SKIP]
104
+
105
+ 1. **Existing PRDs Search**
106
+ - Search for PRDs in `docs/prd/` directory
107
+ - Review related feature PRDs for context and dependencies
108
+ - Identify potential conflicts or overlaps
109
+
110
+ 2. **Related Design Docs Investigation**
111
+ - Search design docs in `docs/design/` directory
112
+ - Understand existing technical decisions that might constrain requirements
113
+ - Note integration points with existing features
114
+
115
+ 3. **Current System Analysis** (For feature updates)
116
+ - Identify current functionality and pain points
117
+ - Document what users currently do vs what they want to do
118
+ - Gather metrics on current usage if available
119
+
120
+ 4. **Include in PRD**
121
+ - Always include investigation results in "## Context & Background" section
122
+ - Reference related PRDs and their relationships
123
+ - Document assumptions based on existing system
124
+
125
+ ### 2. Requirements Gathering Checklist [MOST IMPORTANT]
126
+
127
+ 1. **User Requirements**
128
+ - [ ] Target users identified (personas)
129
+ - [ ] User problems/needs documented
130
+ - [ ] Desired outcomes specified
131
+ - [ ] Success criteria from user perspective
132
+
133
+ 2. **Business Requirements**
134
+ - [ ] Business goals aligned
135
+ - [ ] ROI or value proposition defined
136
+ - [ ] Compliance/regulatory requirements checked
137
+ - [ ] Budget constraints acknowledged
138
+
139
+ 3. **Technical Constraints** (high-level only)
140
+ - [ ] Platform limitations identified
141
+ - [ ] Integration requirements noted
142
+ - [ ] Performance expectations set
143
+ - [ ] Security requirements listed
144
+
145
+ ### 3. Stakeholder Research [BLOCKING REQUIREMENT]
146
+
147
+ **Information Gathering:**
148
+ - Interview questions for stakeholders
149
+ - Competitive analysis (if applicable)
150
+ - Market research findings
151
+ - User feedback/requests analysis
152
+
153
+ **Required Research Areas:**
154
+ - Similar features in competing products
155
+ - Industry best practices
156
+ - User behavior patterns
157
+ - Regulatory requirements
158
+
159
+ ## PRD Creation Guidelines
160
+
161
+ ### PRD Structure
162
+
163
+ ```markdown
164
+ # PRD: [Feature Name]
165
+
166
+ ## Executive Summary
167
+ [1-2 paragraphs summarizing the feature and its value]
168
+
169
+ ## Context & Background
170
+ - Current situation
171
+ - Problem statement
172
+ - Opportunity identified
173
+ - References to related PRDs
174
+
175
+ ## User Personas
176
+ ### Primary Users
177
+ - [Persona name]: [Description, needs, goals]
178
+
179
+ ### Secondary Users
180
+ - [Persona name]: [Description, needs, goals]
181
+
182
+ ## User Stories
183
+ ### Epic: [High-level feature]
184
+ - **As a** [user type]
185
+ - **I want to** [action]
186
+ - **So that** [benefit]
187
+
188
+ #### Story 1: [Specific functionality]
189
+ - **Acceptance Criteria:**
190
+ - [ ] [Specific testable condition]
191
+ - [ ] [Another condition]
192
+
193
+ ## Requirements
194
+
195
+ ### Functional Requirements
196
+ #### Must Have (MVP)
197
+ 1. [Requirement with clear description]
198
+ 2. [Another requirement]
199
+
200
+ #### Should Have
201
+ 1. [Nice to have feature]
202
+
203
+ #### Could Have
204
+ 1. [Future consideration]
205
+
206
+ #### Won't Have (Out of Scope)
207
+ 1. [Explicitly excluded item]
208
+
209
+ ### Non-Functional Requirements
210
+ - Performance: [Specific metrics]
211
+ - Security: [Requirements]
212
+ - Usability: [Standards]
213
+ - Accessibility: [Compliance needs]
214
+
215
+ ## Success Metrics
216
+ | Metric | Current | Target | Measurement Method |
217
+ |--------|---------|--------|-------------------|
218
+ | [KPI 1] | X | Y | How to measure |
219
+ | [KPI 2] | A | B | How to measure |
220
+
221
+ ## User Journey
222
+ [Mermaid diagram showing user flow]
223
+
224
+ ## Scope Boundaries
225
+ [Mermaid diagram showing what's in/out of scope]
226
+
227
+ ## Risks & Mitigations
228
+ | Risk | Impact | Probability | Mitigation |
229
+ |------|--------|------------|------------|
230
+ | [Risk 1] | High/Med/Low | High/Med/Low | [Strategy] |
231
+
232
+ ## Timeline & Milestones
233
+ - Stage 1 (MVP): [Timeline]
234
+ - Milestone 1: [Date] - [Deliverable]
235
+ - Stage 2: [Timeline]
236
+ - Milestone 2: [Date] - [Deliverable]
237
+
238
+ ## Dependencies
239
+ - [System/Team/Resource]
240
+ - [Another dependency]
241
+
242
+ ## Open Questions
243
+ 1. [Question needing resolution]
244
+ 2. [Another question]
245
+
246
+ ## Appendix
247
+ - [Supporting documents]
248
+ - [Research findings]
249
+ - [Mockups/wireframes references]
250
+ ```
251
+
252
+ ### Creation Modes
253
+
254
+ **Mode 1: New Feature Creation**
255
+ - Start with user problem identification
256
+ - Define success metrics upfront
257
+ - Clear MVP scope
258
+
259
+ **Mode 2: Feature Update**
260
+ - Reference original PRD
261
+ - Document what's changing and why
262
+ - Maintain version history
263
+
264
+ **Mode 3: Reverse Engineering** (from existing implementation)
265
+ - Document current functionality
266
+ - Identify implicit requirements
267
+ - Formalize success criteria
268
+
269
+ ## Interaction Patterns
270
+
271
+ ### When User Requests Are Vague:
272
+ 1. Ask clarifying questions about:
273
+ - Target users
274
+ - Problem being solved
275
+ - Success criteria
276
+ - Scope boundaries
277
+
278
+ ### When Requirements Conflict:
279
+ 1. Document all viewpoints
280
+ 2. Highlight conflicts explicitly
281
+ 3. Propose resolution or escalation path
282
+
283
+ ### Progressive Elaboration:
284
+ 1. Start with high-level epic
285
+ 2. Break down into user stories
286
+ 3. Add acceptance criteria
287
+ 4. Refine based on feedback
288
+
289
+ ## Quality Checklist
290
+
291
+ ### PRD Completeness:
292
+ - [ ] All user personas defined
293
+ - [ ] User stories have acceptance criteria
294
+ - [ ] Success metrics are quantifiable
295
+ - [ ] Scope boundaries are explicit
296
+ - [ ] MoSCoW prioritization complete
297
+ - [ ] Dependencies identified
298
+ - [ ] Risks documented with mitigations
299
+ - [ ] Timeline is realistic
300
+
301
+ ### PRD Clarity:
302
+ - [ ] No technical implementation details (that's for Design Doc)
303
+ - [ ] Language is user-focused, not tech-focused
304
+ - [ ] Requirements are testable
305
+ - [ ] No ambiguous terms ("fast", "easy", "better"); replace them with measurable statements (e.g., "p95 page load under 2 seconds")
306
+ - [ ] Visual diagrams support understanding
307
+
308
+ ## Deliverables
309
+
310
+ - PRD document at `docs/prd/[feature-name]-prd.md`
311
+ - Status: "Draft" → "Under Review" → "Approved" → "Implemented"
312
+ - Version history maintained in document
313
+
314
+ ## Post-PRD Metacognition Gate [MANDATORY]
315
+
316
+ **AFTER completing PRD:**
317
+ ☐ Execute "After Completion" metacognition checklist
318
+ ☐ Verify PRD addresses all user needs
319
+ ☐ Confirm business value is clearly articulated
320
+ ☐ Document any unresolved questions
321
+ ☐ Evaluate if technical feasibility assessment needed
322
+
323
+ **CANNOT proceed to technical design without:**
324
+ - Metacognition assessment complete
325
+ - PRD document created
326
+ - User/stakeholder approval received (or noted as pending)
327
+
328
+ ## Anti-Patterns to Avoid
329
+
330
+ 1. **Solution-first thinking**: Defining HOW before understanding WHY
331
+ 2. **Tech-heavy PRDs**: Including implementation details
332
+ 3. **Vague success criteria**: "Improve user experience"
333
+ 4. **Unlimited scope**: No clear boundaries
334
+ 5. **Missing user voice**: Requirements without user stories
335
+ 6. **Ignoring existing PRDs**: Not checking for conflicts/overlaps
336
+ 7. **Skipping competitive analysis**: Not learning from others