openteam 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,108 @@
---
name: architecture-review
description: "Review implemented code for architectural compliance, detect entropy (bloat, duplication, boundary violations), and propose corrections. Use after Developer completes implementation, or periodically to audit codebase health."
---

# Architecture Review

Review code changes for architectural integrity, detecting and correcting the entropy that naturally accumulates in AI-assisted development.

## Philosophy

> "High internal quality leads to faster delivery of new features, because there is less cruft to get in the way... experienced developers reckon that attention to internal quality pays off in weeks not months." — Martin Fowler

Entropy is the default outcome of development. Every feature adds code, and without active stewardship, the codebase becomes harder to understand, modify, and extend. Your review is the immune system.

> "Knowing your architecture is sacrificial doesn't mean abandoning the internal quality of the software." — Martin Fowler (Sacrificial Architecture)

Even if the entire system will be rewritten someday, internal quality still matters NOW — it determines how fast you can ship features TODAY.

## When to Use

- After Developer completes implementation (before QA verification)
- When you notice code complexity growing in a module
- Periodically as a codebase health check
- When multiple features have been added without review

## Review Checklist

### 1. Plan Compliance
- [ ] Do the changes match the implementation plan?
- [ ] Are there unexpected new files or modules? (Why?)
- [ ] Were any plan items skipped or altered? (Justified?)

### 2. Boundary Integrity
- [ ] Do modules still have clear, single responsibilities?
- [ ] Are there any new cross-module dependencies that shouldn't exist?
- [ ] Is any module reaching into another module's internals instead of using its public API?
- [ ] Are there any new circular dependencies?

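The circular-dependency check lends itself to automation. A minimal sketch of cycle detection over a module dependency graph (the `modules` graph below is a hypothetical example; in practice it would come from your import analysis):

```python
def find_cycle(graph):
    """Return one dependency cycle as a list of modules, or None.

    `graph` maps each module name to the list of modules it imports.
    Depth-first search with a three-state marker: a back edge to a
    module still on the current path is a cycle.
    """
    WHITE, GRAY, BLACK = 0, 1, 2          # unvisited / on path / finished
    state = {m: WHITE for m in graph}
    path = []

    def visit(module):
        state[module] = GRAY
        path.append(module)
        for dep in graph.get(module, []):
            if state.get(dep, WHITE) == GRAY:     # back edge -> cycle
                return path[path.index(dep):] + [dep]
            if state.get(dep, WHITE) == WHITE:
                cycle = visit(dep)
                if cycle:
                    return cycle
        path.pop()
        state[module] = BLACK
        return None

    for module in list(graph):
        if state[module] == WHITE:
            cycle = visit(module)
            if cycle:
                return cycle
    return None

# Hypothetical module graph with one cycle: auth -> db -> models -> auth
modules = {"auth": ["db"], "db": ["models"], "models": ["auth"], "cli": ["auth"]}
print(find_cycle(modules))  # → ['auth', 'db', 'models', 'auth']
```

Even a check this small catches the most damaging form of entropy before it compounds; report any cycle verbatim in the Findings section.
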
### 3. Duplication Detection
- [ ] Is any new code duplicating logic that exists elsewhere?
- [ ] Could any new utility function be merged with an existing one?
- [ ] Are there copy-paste patterns that should be abstracted?

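Copy-paste patterns can be surfaced mechanically before eyeballing them. A rough sketch that hashes normalized five-line windows across files (the file contents below are invented; the window size is a tunable, not a rule):

```python
import hashlib
from collections import defaultdict

def duplicate_windows(files, window=5):
    """Map each repeated `window`-line block to every place it occurs.

    `files` maps file names to source text. Lines are stripped and blanks
    dropped, so indentation-only differences still register as duplicates.
    """
    seen = defaultdict(list)
    for name, text in files.items():
        lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
        for i in range(len(lines) - window + 1):
            digest = hashlib.sha1("\n".join(lines[i:i + window]).encode()).hexdigest()
            seen[digest].append((name, i + 1))   # position in the normalized line list
    return {h: locs for h, locs in seen.items() if len(locs) > 1}

# Hypothetical block pasted into two files:
SNIPPET = "x = load()\nif x is None:\n    raise ValueError\ny = x + 1\nreturn y\n"
dups = duplicate_windows({"a.py": "def f():\n" + SNIPPET,
                          "b.py": "def g():\n" + SNIPPET})
print(len(dups))  # → 1
```

Hash hits are candidates, not verdicts — confirm each one is semantic duplication before flagging it.
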
### 4. Complexity Assessment
- [ ] Are new functions/classes doing one thing well, or are they multi-purpose?
- [ ] Are there functions longer than ~50 lines that should be decomposed?
- [ ] Is the nesting depth reasonable (max 3 levels)?
- [ ] Are error paths handled, not swallowed?

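The length and nesting thresholds above are mechanically checkable for Python code. A minimal sketch with the standard `ast` module (the thresholds mirror the checklist; `SRC` is a hypothetical input):

```python
import ast

def complexity_flags(source, max_lines=50, max_depth=3):
    """Flag functions that exceed a line budget or a nesting-depth budget."""
    nesting = (ast.If, ast.For, ast.While, ast.With, ast.Try)

    def depth_of(node, depth=0):
        # Each nesting construct adds one level; return the deepest level seen.
        children = [depth_of(c, depth + isinstance(c, nesting))
                    for c in ast.iter_child_nodes(node)]
        return max(children, default=depth)

    flags = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1
            if length > max_lines:
                flags.append(f"{node.name}: {length} lines (> {max_lines})")
            if depth_of(node) > max_depth:
                flags.append(f"{node.name}: nesting deeper than {max_depth}")
    return flags

# Hypothetical source with four levels of nesting:
SRC = """
def f():
    if user:
        for item in items:
            while pending:
                with lock:
                    process(item)
"""
print(complexity_flags(SRC))  # → ['f: nesting deeper than 3']
```

Treat the output as pointers for review, not automatic Critical findings — a long but flat function may be fine.
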
### 5. Convention Adherence
- [ ] Do new files follow the project's naming conventions?
- [ ] Does new code follow the project's error handling patterns?
- [ ] Is the coding style consistent with the rest of the codebase?
- [ ] Are new exports intentional (not accidentally public)?

### 6. Bloat Assessment
- [ ] Can any new file be eliminated by integrating its content into existing files?
- [ ] Are there over-abstractions (interfaces with a single implementation, classes with a single method)?
- [ ] Were any "just in case" features or parameters added beyond the requirement?

## Severity Levels

- **Critical** — Boundary violation, circular dependency, data integrity risk → Must fix before QA
- **Major** — Significant duplication, wrong module placement, convention violation → Should fix before QA
- **Minor** — Style inconsistency, slightly suboptimal approach, missing comments → Can fix later
- **Note** — Observation for future consideration, not blocking

## Review Output Format

```markdown
# Architecture Review: [Feature/PR Title]

**Plan**: [link to implementation plan]
**Reviewer**: [architect name]
**Verdict**: ✅ Approved | ⚠️ Approved with notes | 🔴 Changes required

## Summary
One paragraph overall assessment.

## Findings

### [Critical/Major/Minor] — [Title]
**Location**: `path/to/file.js:L42`
**Issue**: [what's wrong]
**Impact**: [why it matters]
**Suggestion**: [how to fix]

### ...

## Metrics
- Files changed: [n]
- Files added: [n] (justified: [y/n])
- Lines added: [n]
- Lines removed: [n]
- Net complexity change: [simpler / same / more complex]

## Technical Debt
- [Any new debt introduced, with priority for addressing it]
```

## Guidelines

- **Be specific.** "The code is messy" is not a review. Point to exact files, lines, and patterns.
- **Suggest, don't just criticize.** Every issue should come with a proposed fix or direction.
- **Pick your battles.** Minor style issues shouldn't block a review. Focus on structural integrity.
- **Acknowledge good work.** If the implementation is clean, say so. Positive feedback reinforces good patterns.
- **Think in trajectories.** One boundary violation is minor. The pattern of boundary violations is critical. Flag trends early.
@@ -0,0 +1,92 @@
---
name: bug-reporting
description: "Produce clear, actionable, reproducible bug reports. Use whenever a test fails or unexpected behavior is discovered during verification."
---

# Bug Reporting

Write bug reports that are precise, reproducible, and actionable — so Developer can fix the issue without asking clarifying questions.

## Philosophy

A bug you can't reproduce isn't a bug report — it's noise. A bug without context is a puzzle that wastes Developer's time. Every minute Developer spends understanding your report is a minute they're not fixing the bug.

The goal: Developer reads the report and immediately knows what's wrong, where to look, and how to verify the fix.

## When to Use

- Whenever an acceptance test fails
- When you observe unexpected behavior during any testing
- When regression tests reveal broken existing functionality

## Bug Report Format

````markdown
## Bug: [Short descriptive title]

**ID**: BUG-[number]
**Severity**: Critical | Major | Minor
**Requirement**: [which PRD requirement this violates, if applicable]
**Found in**: [acceptance test name / manual exploration]

### Environment
- [relevant environment details: OS, runtime version, config]

### Steps to Reproduce
1. [Precise first step — include exact commands, inputs, clicks]
2. [Second step]
3. [Third step]

### Expected Behavior
[What should happen according to the requirement/acceptance criteria]

### Actual Behavior
[What actually happened — be specific]

### Evidence
```
[Error output, log snippet, or test output]
```

### Notes
[Any additional context: does it happen consistently? any patterns?]
````

## Severity Guide

- **Critical**: Feature is completely broken. Core functionality doesn't work. Data loss. Crash.
- **Major**: Feature partially works but an important scenario fails. Significant usability issue. Blocks acceptance.
- **Minor**: Feature works but with cosmetic issues, minor inconsistencies, or edge cases. Doesn't block acceptance.

## Guidelines

### Be Precise
- ❌ "Search doesn't work"
- ✅ "Searching for 'billing' in the project search box returns 0 results, but 3 projects contain 'billing' in their names"

### Be Reproducible
- Include EXACT inputs, not paraphrased ones
- Specify the order of steps — it matters
- Note if the bug is intermittent and under what conditions

### Be Minimal
- Find the shortest reproduction path
- Remove unnecessary steps
- If a bug happens on step 10 of a flow, check if it also happens in a simpler scenario

### Separate Bugs
- One bug per report. Don't bundle "I found 3 issues" into one report.
- If bugs are related, note the relationship but file separately.

### Don't Diagnose
- Report what you observed, not what you think the cause is
- ❌ "The database query is probably wrong"
- ✅ "Searching for 'billing' returns 0 results when 3 matching projects exist"
- Developer knows the code. Trust them to find the cause from good symptoms.

## Anti-Patterns

- **The novel**: 2 pages of context before getting to the bug. Put the bug first, context after.
- **The guess**: "I think it might fail if..." — either reproduce it or don't report it.
- **The drive-by**: "Something seems off with search." What seems off? Compared to what?
- **The combo**: "Search doesn't work, also the button color is wrong, and the loading is slow." Three separate bugs.
@@ -0,0 +1,119 @@
---
name: codebase-mapping
description: "Read and understand a codebase's architecture, modules, boundaries, and conventions. Use when onboarding to a new project, before designing any implementation, or when the codebase has evolved significantly since last review."
---

# Codebase Mapping

Deeply read and understand a codebase to build an accurate mental model of its architecture, enabling informed design decisions.

## Philosophy

> "The best code you can write now is code you'll discard in a couple of years time." — Martin Fowler (Sacrificial Architecture)

Understanding what exists is the prerequisite to designing what should exist. Most architectural mistakes happen when designers work from imagination rather than from reality.

> "A poor architecture is a major contributor to the growth of cruft — elements of the software that impede the ability of developers to understand the software." — Martin Fowler

Your job is to map the cruft as well as the clean parts. Both inform design decisions.

## When to Use

- First time working on a codebase
- Before designing any non-trivial implementation
- When you suspect the codebase has drifted from the last documented architecture
- After a major refactor or feature addition

## Process

### Step 1: Structural Survey

Start with the broadest view and zoom in:

1. **Project root** — What's in the top-level directory? Build system? Monorepo? Single package?
2. **Source tree** — Map the directory structure. What's the organizational principle? (by feature? by layer? by domain?)
3. **Entry points** — Where does execution start? CLI entry, HTTP server bootstrap, main function, index exports
4. **Dependencies** — What external libraries are used? What do they tell you about architectural choices?
5. **Build & config** — How is the project built, tested, deployed? What environment variables matter?

### Step 2: Module Boundary Analysis

For each major module/directory:

1. **Responsibility** — What is this module's job? (one sentence)
2. **Public interface** — What does it export? What do other modules use from it?
3. **Internal structure** — How is it organized internally?
4. **Dependencies** — What other modules does it depend on? (draw the dependency graph)
5. **Boundary integrity** — Is the boundary clean? Or do other modules reach into its internals?

**Red flags to note:**
- Circular dependencies between modules
- A module that depends on everything (god module)
- A module that everything depends on (hidden coupling)
- Files that don't belong in their directory
- Duplicated logic across modules

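For Python projects, a first cut of the Step 2 dependency graph can be extracted from import statements. A minimal sketch (it records top-level imports only and keeps just in-project edges; the sample sources are hypothetical):

```python
import ast

def import_graph(sources):
    """Build a module -> imported-modules map from source text.

    `sources` maps module names to Python source. Only edges to other
    modules in `sources` are kept, so external libraries drop out.
    """
    graph = {}
    for module, text in sources.items():
        deps = set()
        for node in ast.walk(ast.parse(text)):
            if isinstance(node, ast.Import):
                deps.update(alias.name.split(".")[0] for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                deps.add(node.module.split(".")[0])
        graph[module] = sorted(d for d in deps if d in sources and d != module)
    return graph

# Hypothetical three-module project; `json` is external and drops out:
print(import_graph({
    "app": "import db, json\n",
    "db": "from models import User\n",
    "models": "",
}))  # → {'app': ['db'], 'db': ['models'], 'models': []}
```

Feeding the result into a cycle or fan-in check turns the red-flag list above into a repeatable audit rather than a one-off reading.
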
### Step 3: Pattern Recognition

Identify the recurring patterns in the codebase:

- **Naming conventions** — How are files, functions, classes, variables named?
- **Architectural patterns** — MVC? Event-driven? Plugin architecture? Layered?
- **Error handling** — How are errors propagated? Custom error types? Error codes?
- **Data flow** — How does data move through the system? Transforms? Validations?
- **State management** — Where is state held? How is it mutated?
- **Testing patterns** — What's the test structure? What frameworks? What's the coverage strategy?

### Step 4: Identify Architectural Decisions

Document the implicit decisions embedded in the code:

- Why was this technology chosen over alternatives?
- Why is the code organized this way?
- What constraints does this architecture impose on future changes?
- Where has the architecture been compromised (tech debt)?

### Step 5: Produce the Architecture Map

```markdown
# Architecture Map: [Project Name]

## Overview
One paragraph summary of the system's architecture.

## Structure
Directory tree with annotations explaining each major directory.

## Module Map
For each module:
- **Purpose**: one sentence
- **Public API**: key exports/interfaces
- **Dependencies**: what it uses
- **Dependents**: what uses it

## Dependency Graph
ASCII or description of module dependency relationships.

## Patterns & Conventions
- Naming: [patterns]
- Error handling: [approach]
- Data flow: [description]
- Testing: [approach]

## Architectural Decisions
Key decisions and their rationale (known or inferred).

## Technical Debt
Known compromises, hacks, TODOs, and boundary violations.

## Hotspots
Areas of high complexity or frequent change that need extra care.
```

## Guidelines

- **Read code, don't skim it.** Skimming leads to wrong mental models. Read the actual implementations of key functions, not just their signatures.
- **Follow the data.** When in doubt about architecture, trace how data flows from input to output. Data flow reveals the true architecture, regardless of what the directory structure suggests.
- **Respect what exists.** The current architecture was built by people who had reasons. Understand those reasons before judging.
- **Note the drift.** Reality drifts from intent. Document where the code has diverged from its apparent design.
- **Keep it updated.** An outdated architecture map is worse than none — it creates false confidence.
@@ -0,0 +1,146 @@
---
name: implementation-planning
description: "Design a concrete implementation plan from requirements, specifying which files, modules, functions, and classes to create or modify. Use after receiving requirements from PM and having an up-to-date codebase map."
---

# Implementation Planning

Translate product requirements into a concrete, file-level implementation plan that Developer can execute without guessing.

## Philosophy

> "Design is there to enable you to keep changing the software easily in the long term." — Kent Beck (via Martin Fowler)

Your plan should make the system simpler or no more complex. Every new file, class, or abstraction must justify its existence against the alternative of reusing or extending what already exists.

> "Planned design has faults... it's impossible to think through all the issues that you need to deal with when you are programming." — Martin Fowler (Is Design Dead?)

Accept that your plan will be imperfect. Design at the right level of detail: specific enough that Developer knows WHERE to put code, flexible enough that they can handle surprises.

> Google's rule: "Design for ~10X growth, but plan to rewrite before ~100X." — Jeff Dean

Don't over-engineer. Design for the current need with extension points for the foreseeable future.

## When to Use

- After receiving requirements from PM (with acceptance criteria)
- After updating your codebase map for the affected areas
- When a requirement reveals that the current architecture needs revision

## Process

### Step 1: Understand the Requirement

Read the PRD completely. Verify you understand:
- What user-visible behavior changes
- What the acceptance criteria are
- What the non-goals are (avoid designing for out-of-scope items)

If anything is unclear, ask PM before designing.

### Step 2: Survey the Affected Code

Using your codebase map, identify:
- Which existing modules are affected?
- Which existing functions/classes need modification?
- What can be reused or extended?
- What would need to change to accommodate this cleanly?

**Critical question: Can this be done by modifying existing code rather than creating new code?**

The default AI instinct is to create new files. Resist it. Modification > creation unless there's a clear architectural reason for new code.

### Step 3: Design the Change

For each requirement, specify:

1. **Files to modify** — Which existing files change, and what changes in each
2. **Files to create** (if truly necessary) — Where they go, what they contain, why a new file is needed
3. **Functions/classes** — New or modified, with signatures and brief behavior description
4. **Interfaces** — How new code connects to existing code (function calls, events, imports)
5. **Data flow** — How data moves through the new/modified components

### Step 4: Analyze Trade-offs

For non-trivial decisions, document alternatives:

```markdown
### Decision: [what]

**Option A**: [description]
- ✅ Pro: [advantage]
- ❌ Con: [disadvantage]

**Option B**: [description]
- ✅ Pro: [advantage]
- ❌ Con: [disadvantage]

**Choice**: Option [X] because [reason]
```

### Step 5: Check for Entropy

Before finalizing, audit your plan against these questions:

- [ ] Does this plan add more code than necessary?
- [ ] Could any new file be merged into an existing one without violating boundaries?
- [ ] Does this create any new dependencies between modules that didn't exist before?
- [ ] Does this duplicate any logic that already exists elsewhere?
- [ ] Does this respect existing naming conventions and patterns?
- [ ] Would this plan make sense to a developer who knows the codebase but hasn't seen the requirement?

If you answer "yes" to any of the first four questions, revise the plan.

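Part of this audit can be quantified from the diff itself. A small sketch that summarizes `git diff --numstat` output (the parsing follows git's tab-separated numstat format; the sample string and any thresholds you apply to the totals are illustrative):

```python
def diff_stats(numstat):
    """Summarize `git diff --numstat` output.

    Each line is "<added>\\t<removed>\\t<path>"; binary files report "-"
    in the count columns and contribute no line counts.
    """
    files = added = removed = 0
    for line in numstat.strip().splitlines():
        a, r, _path = line.split("\t", 2)
        files += 1
        if a != "-":   # skip binary entries
            added += int(a)
            removed += int(r)
    return {"files": files, "added": added, "removed": removed,
            "net": added - removed}

# Hypothetical numstat output for a two-file change:
sample = "12\t3\tsrc/search.py\n40\t0\tsrc/new_helper.py\n"
print(diff_stats(sample))  # → {'files': 2, 'added': 52, 'removed': 3, 'net': 49}
```

A plan whose projected diff adds far more than it removes, for a requirement that mostly modifies existing behavior, is a prompt to revisit this step.
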
### Step 6: Produce the Implementation Plan

```markdown
# Implementation Plan: [Feature Title]

**Requirement**: [link to PRD or brief summary]
**Architect**: [name]
**Status**: Draft | Reviewed | Approved

## Summary
One paragraph describing the approach.

## Changes

### 1. [Task title]
**File**: `path/to/existing/file.js`
**Action**: Modify
**Changes**:
- Add function `doThing(input: Type): ReturnType` — [what it does]
- Modify function `existingFunc` to call `doThing` when [condition]

### 2. [Task title]
**File**: `path/to/new/file.js` (NEW)
**Justification**: [why a new file is needed]
**Contents**:
- Class `ThingProcessor` — [responsibility]
  - `process(data): Result` — [behavior]
  - `validate(data): boolean` — [behavior]

## Data Flow
[How data moves through the changes]

## Trade-off Decisions
[Document any non-obvious choices]

## Risks & Technical Debt
- [Known risk and mitigation]
- [Any tech debt this introduces, with plan to address]

## Task Order
Recommended implementation sequence:
1. [Task] — because [dependency reason]
2. [Task] — depends on #1
3. [Task] — independent, can run in parallel with #2
```

## Anti-Patterns

- **The "create new everything" plan**: 5 new files for a feature that could be 30 lines in an existing module
- **The architecture astronaut**: Introducing abstractions, interfaces, and patterns for a one-off feature
- **The hand-wave plan**: "Add search to the dashboard" without specifying which file, function, or integration point
- **The over-specified plan**: Dictating variable names and loop structures — that's Developer's domain
- **The assumption plan**: Designing without reading the affected code first
@@ -0,0 +1,145 @@
---
name: prd-generation
description: "Produce a structured Product Requirements Document from clarified requirements. Use after requirement clarification is complete and the team needs a formal handoff document for Architect and QA."
---

# PRD Generation

Produce a structured, actionable Product Requirements Document that serves as the single source of truth for Architect (design), Developer (implementation), and QA (verification).

## Philosophy

> "A functional specification describes how a product will work entirely from the user's perspective. It doesn't care how the thing is implemented. It talks about features." — Joel Spolsky

The PRD is NOT a technical design document. It describes WHAT the product should do, not HOW it should be built. It is the contract between the user's intent and the team's execution.

> "Shaped work is rough, solved, and bounded." — Shape Up

The PRD should be detailed enough that no one has to guess what to build, but abstract enough that the Architect has room to design the best solution.

## When to Use

- After requirements have been clarified (use `requirement-clarification` skill first)
- When handing off work to Architect and QA
- When documenting a feature for future reference

## PRD Structure

```markdown
# PRD: [Feature/Project Title]

**Author**: [PM agent name]
**Status**: Draft | Review | Approved
**Priority**: P0 | P1 | P2
**Date**: [date]

---

## 1. Background & Motivation

Why are we doing this? What problem does this solve? What user pain does it address?
Include any relevant context: user feedback, market pressure, technical debt.

Keep it concise — 2-3 paragraphs max.

## 2. Goals

3 clear, orthogonal goals. Each goal should be independently valuable.

- **Goal 1**: [measurable outcome]
- **Goal 2**: [measurable outcome]
- **Goal 3**: [measurable outcome]

## 3. User Scenarios

For each distinct user journey, write a concrete scenario:

### Scenario 1: [Name]
**User**: [who]
**Context**: [when/where]
**Flow**:
1. User does X
2. System responds with Y
3. User sees Z
**Edge cases**: [what if...]

### Scenario 2: [Name]
...

## 4. Requirements

### 4.1 Functional Requirements

| ID | Priority | Requirement | Acceptance Criteria |
|----|----------|-------------|---------------------|
| F1 | P0 | [what] | Given [x], when [y], then [z] |
| F2 | P0 | [what] | Given [x], when [y], then [z] |
| F3 | P1 | [what] | Given [x], when [y], then [z] |

### 4.2 Non-Functional Requirements

| ID | Category | Requirement |
|----|----------|-------------|
| NF1 | Performance | [specific measurable target] |
| NF2 | Compatibility | [specific constraint] |

## 5. Non-Goals (Out of Scope)

Explicit list of what we are NOT doing and why.

- ❌ [thing]: [reason]

## 6. Dependencies & Interactions

How this feature interacts with existing system capabilities.

- [existing feature] → [how it's affected]

## 7. Open Questions

Unresolved items that need answers before or during implementation.

- [ ] [question] — owner: [who should answer]

## 8. Appendix (Optional)

Supporting data, mockups, research, competitive analysis.
```
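Acceptance criteria written as Given/When/Then translate almost mechanically into QA's automated checks. A hypothetical illustration (`search_projects` and the data are invented stand-ins; QA would exercise the real system under test):

```python
def search_projects(projects, query):
    """Stand-in for the system under test: case-insensitive name search."""
    return [p for p in projects if query.lower() in p.lower()]

def test_requirement_f1():
    # Given three projects whose names contain "billing"
    projects = ["Billing UI", "billing-api", "Billing Reports", "Auth"]
    # When the user searches for "billing"
    results = search_projects(projects, "billing")
    # Then all three matching projects are returned
    assert len(results) == 3

test_requirement_f1()
```

The tighter the Given/When/Then clauses in section 4.1, the less translation QA has to do to produce checks like this.
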

## Guidelines

### For Architect (Design Consumer)
- Requirements should describe WHAT, never HOW
- Include enough context about existing system behavior for the Architect to make informed design decisions
- Flag areas where you anticipate architectural complexity: "This interacts with the notification system in non-obvious ways"

### For QA (Verification Consumer)
- Every functional requirement MUST have acceptance criteria
- Acceptance criteria must be verifiable without reading source code
- Include edge cases and error scenarios explicitly — don't leave them for QA to "figure out"
- Describe the expected behavior precisely: exact error messages, status codes, UI states

### For the User (Alignment Consumer)
- The PRD should be readable by the user who requested the feature
- They should be able to confirm "yes, this is what I want" or "no, you misunderstood"
- Avoid jargon. Describe behavior in terms the user understands.

## Quality Checklist

Before marking a PRD as ready for review:

- [ ] Every requirement has acceptance criteria
- [ ] At least one concrete user scenario exists
- [ ] Non-goals are explicitly stated
- [ ] Dependencies with existing features are documented
- [ ] Open questions have assigned owners
- [ ] A non-technical person can understand what will be built
- [ ] Priority is assigned to every requirement (P0/P1/P2)

## Anti-Patterns

- **The novel**: 10-page PRDs that no one reads. Be concise. If it's too long, split into multiple PRDs.
- **The wishlist**: Requirements without priorities. If everything is P0, nothing is P0.
- **The blueprint**: Specifying database schemas, API formats, or code structure. That's the Architect's job.
- **The assumption**: "Obviously the search should be instant" — nothing is obvious. Write it down.
- **The orphan**: A PRD with no scenarios. If you can't describe a user doing it, maybe they don't need it.