substrate-ai 0.1.21 → 0.1.23

@@ -0,0 +1,58 @@
+ # BMAD Architecture Step 3: Implementation Patterns
+
+ ## Context (pre-assembled by pipeline)
+
+ ### Architecture Decisions (from Steps 1 & 2)
+ {{architecture_decisions}}
+
+ ---
+
+ ## Mission
+
+ Based on the accumulated architecture decisions, define **implementation patterns** — the concrete coding patterns, conventions, and structural rules that developers will follow. This bridges architecture decisions and actual code.
+
+ ## Instructions
+
+ 1. **Define implementation patterns:**
+    - Module structure and dependency injection approach
+    - Data access patterns (repository pattern, direct queries, ORM)
+    - Configuration management approach
+    - CLI command registration and routing patterns
+
+ 2. **Codify conventions:**
+    - Naming conventions for files, functions, types
+    - Import organization rules
+    - Error propagation patterns within the codebase
+
+ 3. **Output as architecture decisions** with category "patterns":
+    - Each pattern is a decision with a clear key, value, and rationale
+    - These are prescriptive — developers follow them rather than choosing between options
+
+ ## Output Contract
+
+ Emit ONLY this YAML block as your final output — no other text.
+
+ **CRITICAL**: All string values MUST be quoted with double quotes.
+
+ ```yaml
+ result: success
+ architecture_decisions:
+   - category: "patterns"
+     key: "dependency-injection"
+     value: "Constructor injection with interface-based dependencies"
+     rationale: "Enables testing with mocks, keeps modules loosely coupled"
+   - category: "patterns"
+     key: "data-access"
+     value: "Repository pattern wrapping better-sqlite3 prepared statements"
+     rationale: "Centralizes SQL, enables query optimization and caching"
+   - category: "patterns"
+     key: "cli-commands"
+     value: "One file per command in src/commands/, registered via index barrel"
+     rationale: "Easy to find, add, and test individual commands"
+ ```
+
+ If you cannot produce valid output:
+
+ ```yaml
+ result: failed
+ ```
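The first two pattern decisions in the example can be illustrated together. The sketch below is hypothetical: the names `HabitRepository`, `HabitService`, and the in-memory store are illustrative only, and a real implementation would wrap better-sqlite3 prepared statements as the "data-access" decision states.

```typescript
// Sketch of "Constructor injection with interface-based dependencies"
// plus the repository pattern, using an in-memory store in place of
// better-sqlite3 so the example is self-contained.
interface HabitRepository {
  add(name: string): void;
  count(): number;
}

class InMemoryHabitRepository implements HabitRepository {
  private habits: string[] = [];
  add(name: string): void { this.habits.push(name); }
  count(): number { return this.habits.length; }
}

// The service receives its dependency through the constructor, so tests
// can inject a mock or in-memory implementation instead of a database.
class HabitService {
  constructor(private readonly repo: HabitRepository) {}
  register(name: string): number {
    this.repo.add(name);
    return this.repo.count();
  }
}

const service = new HabitService(new InMemoryHabitRepository());
console.log(service.register("daily-writing")); // → 1
```

Because the service depends only on the interface, swapping the storage backend later would not touch `HabitService`, which is the rationale the example gives for loose coupling.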
@@ -0,0 +1,88 @@
+ # BMAD Critique Agent — Analysis Phase
+
+ ## Artifact Under Review
+
+ {{artifact_content}}
+
+ ## Project Context
+
+ {{project_context}}
+
+ ---
+
+ ## Your Role
+
+ You are an adversarial quality reviewer. Your job is to find what's wrong with this product brief before the team wastes time building on a flawed foundation.
+
+ Adopt a critical mindset: assume the document is incomplete until proven otherwise.
+
+ ---
+
+ ## Quality Standards for Analysis Artifacts
+
+ A high-quality analysis artifact must satisfy ALL of these criteria:
+
+ ### 1. Problem Clarity
+ - The problem statement must be specific and grounded in user pain, not technology.
+ - It must explain *who* experiences the problem, *what* the impact is, and *why* existing solutions fall short.
+ - Vague statements like "users need a better way to..." are insufficient.
+
+ ### 2. User Persona Specificity
+ - Target users must be real, named segments (not "end users" or "developers").
+ - Each segment must include their role, context, and motivation.
+ - Minimum 2 distinct user segments required.
+
+ ### 3. Metrics Measurability
+ - Success metrics must be quantifiable with specific numbers and timeframes.
+ - Metrics like "improve user experience" or "increase engagement" are unacceptable — they cannot be measured.
+ - Each metric must have a clear threshold (e.g., ">60% daily active usage within 30 days").
+
+ ### 4. Scope Boundaries
+ - Core features must directly address the stated problem — not wishlist items.
+ - Out-of-scope boundaries must be explicit about what is NOT included.
+ - Constraints must be real limitations (technical, regulatory, budgetary), not vague caveats.
+
+ ---
+
+ ## Instructions
+
+ 1. Read the artifact carefully. Do not assume anything is correct.
+ 2. For each quality dimension above, identify whether it is met, partially met, or missing.
+ 3. For each issue found, classify its severity:
+    - **blocker**: The artifact cannot be used to proceed — critical information is missing or wrong.
+    - **major**: Significant quality gap that will cause downstream problems if not addressed.
+    - **minor**: Improvement that would increase quality but does not block progress.
+
+ 4. If the artifact meets all criteria, emit a `pass` verdict with zero issues.
+
+ ---
+
+ ## Output Contract
+
+ Emit ONLY this YAML block — no preamble, no explanation, no other text.
+
+ If no issues found:
+
+ ```yaml
+ verdict: pass
+ issue_count: 0
+ issues: []
+ ```
+
+ If issues found:
+
+ ```yaml
+ verdict: needs_work
+ issue_count: 2
+ issues:
+   - severity: major
+     category: problem-clarity
+     description: "Problem statement does not explain why existing solutions fail."
+     suggestion: "Add a sentence contrasting this approach with existing alternatives and why they fall short."
+   - severity: minor
+     category: metrics-measurability
+     description: "Success metric 'increase user satisfaction' has no numeric threshold."
+     suggestion: "Replace with a specific measurable metric, e.g., 'NPS score > 50 within 6 months'."
+ ```
+
+ **IMPORTANT**: `issue_count` must equal the exact number of items in `issues`.
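The closing invariant is mechanical to enforce once the YAML has been parsed. A minimal hypothetical validator (the type and function names are assumptions for illustration, not part of the package):

```typescript
// Hypothetical check of the critique output contract: issue_count must
// equal issues.length, and the verdict must be consistent with whether
// any issues were reported.
interface CritiqueIssue {
  severity: "blocker" | "major" | "minor";
  category: string;
  description: string;
  suggestion: string;
}

interface CritiqueVerdict {
  verdict: "pass" | "needs_work";
  issue_count: number;
  issues: CritiqueIssue[];
}

function isValidVerdict(v: CritiqueVerdict): boolean {
  if (v.issue_count !== v.issues.length) return false;      // the IMPORTANT rule
  if (v.verdict === "pass" && v.issues.length > 0) return false;
  if (v.verdict === "needs_work" && v.issues.length === 0) return false;
  return true;
}
```

A pipeline could reject any agent response failing this check and re-prompt, rather than trusting the model's own count.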
@@ -0,0 +1,96 @@
+ # BMAD Critique Agent — Architecture Phase
+
+ ## Artifact Under Review
+
+ {{artifact_content}}
+
+ ## Project Context
+
+ {{project_context}}
+
+ ---
+
+ ## Your Role
+
+ You are an adversarial quality reviewer. Your job is to find what's wrong with this architecture document before the development team builds on a flawed technical foundation.
+
+ Adopt a critical mindset: assume the document is incomplete or inconsistent until proven otherwise.
+
+ ---
+
+ ## Quality Standards for Architecture Artifacts
+
+ A high-quality architecture artifact must satisfy ALL of these criteria:
+
+ ### 1. Decision Consistency
+ - Architecture decisions must not contradict each other.
+ - If the language is TypeScript but the database ORM chosen is Python-only, that is a blocker.
+ - Decisions within a category (e.g., "infrastructure") must be internally consistent.
+ - The overall architecture must form a coherent system, not a collection of ad-hoc choices.
+
+ ### 2. Technology Version Currency
+ - Technologies must be recent, maintained, and not approaching end-of-life.
+ - Version-specific decisions must reference known, stable versions (not hypothetical future versions).
+ - Deprecated or abandoned libraries should be flagged as blockers.
+
+ ### 3. Scalability Coverage
+ - The architecture must address horizontal scaling if the NFRs require it.
+ - Database choices must support the required read/write throughput.
+ - If the system expects high concurrency, the architecture must explain how it handles it.
+ - Missing scalability considerations for NFRs that require scale are major issues.
+
+ ### 4. Security Coverage
+ - Authentication and authorization patterns must be explicitly decided.
+ - Sensitive data (passwords, API keys, PII) must have an explicit storage and handling strategy.
+ - Network security (HTTPS, CORS, rate limiting) must be addressed.
+ - Missing security decisions are blockers if the application handles user data.
+
+ ### 5. Pattern Coherence
+ - Architectural patterns (e.g., layered, event-driven, microservices) must be applied consistently.
+ - If a CQRS pattern is chosen, all major data flows must respect the read/write separation.
+ - Pattern violations — where the code structure contradicts the stated architectural intent — are major issues.
+
+ ---
+
+ ## Instructions
+
+ 1. Read the artifact carefully. Do not assume anything is correct.
+ 2. For each quality dimension above, identify whether it is met, partially met, or missing.
+ 3. For each issue found, classify its severity:
+    - **blocker**: A decision that is technically incorrect, contradictory, or will cause systemic failure.
+    - **major**: A significant gap or inconsistency that will require architectural rework later.
+    - **minor**: An improvement or clarification that would increase quality without blocking progress.
+
+ 4. If the artifact meets all criteria, emit a `pass` verdict with zero issues.
+
+ ---
+
+ ## Output Contract
+
+ Emit ONLY this YAML block — no preamble, no explanation, no other text.
+
+ If no issues found:
+
+ ```yaml
+ verdict: pass
+ issue_count: 0
+ issues: []
+ ```
+
+ If issues found:
+
+ ```yaml
+ verdict: needs_work
+ issue_count: 2
+ issues:
+   - severity: blocker
+     category: decision-consistency
+     description: "Database decision selects PostgreSQL but the caching layer decision uses Redis in a way that bypasses DB consistency guarantees — no cache invalidation strategy is defined."
+     suggestion: "Add explicit cache invalidation rules: define TTL strategy and specify which write operations must invalidate which cache keys."
+   - severity: major
+     category: security-coverage
+     description: "No authentication pattern is defined despite the FR requiring user accounts."
+     suggestion: "Add architecture decisions for: session management strategy (JWT vs cookie), token expiry policy, and refresh token handling."
+ ```
+
+ **IMPORTANT**: `issue_count` must equal the exact number of items in `issues`.
@@ -0,0 +1,96 @@
+ # BMAD Critique Agent — Planning Phase
+
+ ## Artifact Under Review
+
+ {{artifact_content}}
+
+ ## Project Context
+
+ {{project_context}}
+
+ ---
+
+ ## Your Role
+
+ You are an adversarial quality reviewer. Your job is to find what's wrong with this planning document before the architecture team makes irreversible decisions based on flawed requirements.
+
+ Adopt a critical mindset: assume the document is incomplete until proven otherwise.
+
+ ---
+
+ ## Quality Standards for Planning Artifacts
+
+ A high-quality planning artifact must satisfy ALL of these criteria:
+
+ ### 1. Functional Requirement (FR) Completeness
+ - Every feature mentioned in the product brief must have at least one corresponding FR.
+ - FRs must be stated as observable system behaviors: "The system shall..." or "The system must...".
+ - Each FR must have a priority classification: must / should / could.
+ - FRs must be specific enough that a developer can write acceptance tests from them.
+ - Vague FRs like "the system shall be user-friendly" are unacceptable.
+
+ ### 2. NFR Measurability
+ - Non-functional requirements must be quantifiable with specific thresholds.
+ - NFRs like "the system shall be fast" or "the system shall be secure" are unacceptable.
+ - Each NFR must have a specific numeric target (e.g., "p99 latency < 200ms under 1000 concurrent users").
+ - At minimum, performance, security, and availability NFRs should be covered.
+
+ ### 3. User Story Quality
+ - User stories must follow the "As a [persona], I want [capability], so that [benefit]" format.
+ - Each story must map to one or more FRs — orphaned stories indicate scope creep.
+ - Stories must be completable in a single sprint (not too large).
+
+ ### 4. Tech Stack Justification
+ - Technology choices must be justified, not arbitrary.
+ - Each major technology decision (language, framework, database) must have a rationale tied to the NFRs.
+ - Inconsistencies between technology choices and stated NFRs are blockers.
+
+ ### 5. Requirement Traceability
+ - There must be a clear chain from business goals → FRs → user stories.
+ - Every user story must trace back to at least one FR.
+ - Every FR must trace back to the core features defined in the product brief.
+
+ ---
+
+ ## Instructions
+
+ 1. Read the artifact carefully. Do not assume anything is correct.
+ 2. For each quality dimension above, identify whether it is met, partially met, or missing.
+ 3. For each issue found, classify its severity:
+    - **blocker**: A missing or contradictory requirement that blocks architecture or development.
+    - **major**: A significant quality gap that will cause downstream rework if not addressed.
+    - **minor**: An improvement that would increase quality but does not block progress.
+
+ 4. If the artifact meets all criteria, emit a `pass` verdict with zero issues.
+
+ ---
+
+ ## Output Contract
+
+ Emit ONLY this YAML block — no preamble, no explanation, no other text.
+
+ If no issues found:
+
+ ```yaml
+ verdict: pass
+ issue_count: 0
+ issues: []
+ ```
+
+ If issues found:
+
+ ```yaml
+ verdict: needs_work
+ issue_count: 2
+ issues:
+   - severity: blocker
+     category: fr-completeness
+     description: "No FRs cover the authentication workflow mentioned in core features."
+     suggestion: "Add FRs for: user registration, login, logout, password reset, and session management."
+   - severity: major
+     category: nfr-measurability
+     description: "Security NFR 'system shall be secure' has no measurable criteria."
+     suggestion: "Replace with specific NFRs: 'Passwords must be hashed with bcrypt (cost factor ≥ 12)', 'All API endpoints must require authentication', 'Input must be sanitized to prevent SQL injection'."
+ ```
+
+ **IMPORTANT**: `issue_count` must equal the exact number of items in `issues`.
@@ -0,0 +1,93 @@
+ # BMAD Critique Agent — Stories Phase
+
+ ## Artifact Under Review
+
+ {{artifact_content}}
+
+ ## Project Context
+
+ {{project_context}}
+
+ ---
+
+ ## Your Role
+
+ You are an adversarial quality reviewer. Your job is to find what's wrong with this stories document before developers start implementing based on incomplete or untestable requirements.
+
+ Adopt a critical mindset: assume the stories are incomplete or ambiguous until proven otherwise.
+
+ ---
+
+ ## Quality Standards for Stories Artifacts
+
+ A high-quality stories artifact must satisfy ALL of these criteria:
+
+ ### 1. FR Coverage
+ - Every functional requirement from the planning phase must be covered by at least one story.
+ - Orphaned stories (not tracing to any FR) indicate scope creep and should be flagged.
+ - If the project context includes FRs, cross-reference each story against them.
+ - Missing coverage of critical FRs (priority: must) is a blocker.
+
+ ### 2. Acceptance Criteria (AC) Testability
+ - Every story must have at least 3 acceptance criteria.
+ - Each acceptance criterion must be independently verifiable — a developer must be able to write a test for it.
+ - ACs stated as "the feature works correctly" or "the user can use the feature" are unacceptable.
+ - Each AC must specify the precise observable outcome: "Given X, when Y, then Z."
+ - Unmeasurable ACs are major issues; missing ACs are blockers.
+
+ ### 3. Task Granularity
+ - Each story must have a task breakdown that covers the full implementation scope.
+ - Tasks should be completable in 1-4 hours by a single developer.
+ - Tasks that are too vague ("implement feature") or too large ("build entire authentication system") should be flagged.
+ - Missing tasks for database migrations, tests, or documentation are minor issues.
+
+ ### 4. Dependency Validity
+ - Story dependencies must be valid — referencing story keys that actually exist.
+ - Circular dependencies are blockers.
+ - Missing dependencies — where a story assumes work from a story not listed as a dependency — are major issues.
+ - Stories in the first epic should have no cross-story dependencies.
+
+ ---
+
+ ## Instructions
+
+ 1. Read the artifact carefully. Do not assume anything is correct.
+ 2. For each quality dimension above, identify whether it is met, partially met, or missing.
+ 3. For each issue found, classify its severity:
+    - **blocker**: A missing story for a critical FR, a circular dependency, or completely untestable ACs.
+    - **major**: Vague ACs, uncovered important FRs, or missing cross-story dependencies.
+    - **minor**: Task granularity improvements, documentation gaps, or style issues.
+
+ 4. If the artifact meets all criteria, emit a `pass` verdict with zero issues.
+
+ ---
+
+ ## Output Contract
+
+ Emit ONLY this YAML block — no preamble, no explanation, no other text.
+
+ If no issues found:
+
+ ```yaml
+ verdict: pass
+ issue_count: 0
+ issues: []
+ ```
+
+ If issues found:
+
+ ```yaml
+ verdict: needs_work
+ issue_count: 2
+ issues:
+   - severity: blocker
+     category: fr-coverage
+     description: "FR-3 (user authentication) has no corresponding story in any epic."
+     suggestion: "Add stories for: user registration, login flow, session management, and password reset — these are required by FR-3 which has priority 'must'."
+   - severity: major
+     category: ac-testability
+     description: "Story 1-2 AC2 states 'the CLI command works correctly' — this cannot be tested without knowing what 'correctly' means."
+     suggestion: "Replace with specific testable criteria: 'Given a valid config file, when the user runs `substrate init`, then a CLAUDE.md file is created at the project root containing the project name and methodology.'"
+ ```
+
+ **IMPORTANT**: `issue_count` must equal the exact number of items in `issues`.
@@ -0,0 +1,40 @@
+ # Elicitation: {{method_name}}
+
+ You are an expert analyst applying a structured elicitation method to improve an artifact.
+
+ ## Method
+
+ **Name:** {{method_name}}
+
+ **Description:** {{method_description}}
+
+ **Output Pattern:** {{output_pattern}}
+
+ ## Artifact to Enhance
+
+ {{artifact_content}}
+
+ ## Instructions
+
+ Apply the **{{method_name}}** method to the artifact content above.
+
+ Follow the output pattern: `{{output_pattern}}`
+
+ Work through the method systematically. Identify non-obvious insights, hidden assumptions, risks, or improvements that the artifact does not already capture. Mark each insight clearly.
+
+ Return your analysis as structured YAML:
+
+ ```yaml
+ result: success
+ insights: |
+   [Your enhanced content with insights clearly marked. Use markdown formatting.
+   Each insight should be labeled with the method step that generated it.
+   Be specific and actionable — avoid restating what is already in the artifact.]
+ ```
+
+ If you cannot apply the method meaningfully (e.g., the artifact is insufficient), return:
+
+ ```yaml
+ result: failed
+ insights: ""
+ ```
@@ -0,0 +1,50 @@
+ # BMAD Planning Step 1: Project Classification
+
+ ## Context (pre-assembled by pipeline)
+
+ ### Product Brief (from Analysis Phase)
+ {{product_brief}}
+
+ ---
+
+ ## Mission
+
+ Classify the project and establish a clear **vision statement** with key goals. This classification guides the depth and structure of subsequent planning steps (requirements, tech stack, domain model).
+
+ ## Instructions
+
+ 1. **Classify the project type:**
+    - What kind of software is this? (CLI tool, web app, API service, mobile app, library, platform, etc.)
+    - Be specific — "TypeScript CLI tool" not just "application"
+
+ 2. **Write a vision statement:**
+    - One paragraph capturing the aspirational end-state
+    - What does the world look like when this product succeeds?
+    - Should inspire and constrain — broad enough to motivate, specific enough to guide decisions
+
+ 3. **Define key goals (3-5):**
+    - Concrete, prioritized goals that the project must achieve
+    - Each goal should be achievable and verifiable
+    - Order by priority — most critical first
+
+ ## Output Contract
+
+ Emit ONLY this YAML block as your final output — no other text.
+
+ **CRITICAL**: All string values MUST be quoted with double quotes. All array items MUST be plain strings.
+
+ ```yaml
+ result: success
+ project_type: "TypeScript CLI tool for personal productivity"
+ vision: "A lightweight, terminal-native habit tracker that makes consistency visible and rewarding for developers who live in their terminals."
+ key_goals:
+   - "Provide instant habit tracking without leaving the terminal"
+   - "Make streak data visible to motivate daily consistency"
+   - "Zero-config setup with local-first data storage"
+ ```
+
+ If you cannot produce valid output:
+
+ ```yaml
+ result: failed
+ ```
@@ -0,0 +1,60 @@
+ # BMAD Planning Step 2: Functional Requirements & User Stories
+
+ ## Context (pre-assembled by pipeline)
+
+ ### Product Brief (from Analysis Phase)
+ {{product_brief}}
+
+ ### Project Classification (from Step 1)
+ {{classification}}
+
+ ---
+
+ ## Mission
+
+ Derive **functional requirements** and **user stories** from the product brief and project classification. These define WHAT the system must do from the user's perspective.
+
+ ## Instructions
+
+ 1. **Derive functional requirements:**
+    - Each FR must be specific, testable, and traceable to a core feature or user need
+    - Use MoSCoW prioritization: `must` (MVP-critical), `should` (high-value), `could` (nice-to-have)
+    - Minimum 3 FRs, but don't pad — every FR should earn its place
+    - Frame as capabilities: "Users can filter by date range" not "Add a date picker component"
+
+ 2. **Write user stories:**
+    - Each story captures a user journey or interaction pattern
+    - Title should be scannable; description should explain the "why"
+    - Stories bridge the gap between requirements and implementation
+
+ 3. **Align with classification:**
+    - FRs should support the key goals from the classification step
+    - Prioritization should reflect the project type and vision
+
+ ## Output Contract
+
+ Emit ONLY this YAML block as your final output — no other text.
+
+ **CRITICAL**: All string values MUST be quoted with double quotes.
+
+ ```yaml
+ result: success
+ functional_requirements:
+   - description: "Users can register new habits with a name and frequency"
+     priority: "must"
+   - description: "Users can view current streaks for all tracked habits"
+     priority: "must"
+   - description: "Users can export habit data to JSON or CSV format"
+     priority: "should"
+ user_stories:
+   - title: "Habit Registration"
+     description: "As a developer, I want to register daily habits so I can track my consistency"
+   - title: "Streak Dashboard"
+     description: "As a user, I want to see my current streaks so I stay motivated"
+ ```
+
+ If you cannot produce valid output:
+
+ ```yaml
+ result: failed
+ ```
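The MoSCoW priorities in this contract imply an ordering that downstream steps can rely on. A hypothetical helper (type and function names are assumed for illustration) that sorts requirements so `must` items come first:

```typescript
// Hypothetical MoSCoW ordering for the functional_requirements produced
// by the contract above: must < should < could.
type Priority = "must" | "should" | "could";

interface FunctionalRequirement {
  description: string;
  priority: Priority;
}

const RANK: Record<Priority, number> = { must: 0, should: 1, could: 2 };

// Returns a new array sorted by priority; the input is left untouched.
function byPriority(frs: FunctionalRequirement[]): FunctionalRequirement[] {
  return [...frs].sort((a, b) => RANK[a.priority] - RANK[b.priority]);
}
```

A planning pipeline could use an ordering like this to surface MVP-critical requirements before presenting `should` and `could` items.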
@@ -0,0 +1,75 @@
+ # BMAD Planning Step 3: NFRs, Tech Stack, Domain Model & Scope
+
+ ## Context (pre-assembled by pipeline)
+
+ ### Product Brief (from Analysis Phase)
+ {{product_brief}}
+
+ ### Project Classification (from Step 1)
+ {{classification}}
+
+ ### Functional Requirements (from Step 2)
+ {{functional_requirements}}
+
+ ---
+
+ ## Mission
+
+ Complete the PRD by defining **non-functional requirements**, **tech stack**, **domain model**, and **out-of-scope items**. These constrain HOW the system is built and what it explicitly does NOT do.
+
+ ## Instructions
+
+ 1. **Define non-functional requirements:**
+    - Each NFR must have a category (performance, security, scalability, accessibility, reliability)
+    - Be concrete: "API responses under 200ms at p95" not "System should be fast"
+    - Minimum 2 NFRs covering different categories
+    - NFRs should align with the project type and constraints
+
+ 2. **Specify the tech stack:**
+    - Key-value pairs mapping technology concerns to specific choices
+    - Use real, current technologies — do not fabricate frameworks
+    - Cover at minimum: language, framework, database, testing
+    - Choices should align with the product brief constraints
+
+ 3. **Build the domain model:**
+    - Key entities and their relationships
+    - Each entity as a key with its attributes/relationships as the value
+    - This informs database design and API structure downstream
+
+ 4. **Define out-of-scope items** to prevent scope creep — what this product explicitly does NOT do.
+
+ ## Output Contract
+
+ Emit ONLY this YAML block as your final output — no other text.
+
+ **CRITICAL**: All string values MUST be quoted with double quotes.
+
+ ```yaml
+ result: success
+ non_functional_requirements:
+   - description: "CLI commands complete within 200ms for local operations"
+     category: "performance"
+   - description: "All user data encrypted at rest using AES-256"
+     category: "security"
+ tech_stack:
+   language: "TypeScript"
+   framework: "Node.js CLI with Commander"
+   database: "SQLite via better-sqlite3"
+   testing: "Vitest"
+ domain_model:
+   Habit:
+     attributes: ["name", "frequency", "created_at"]
+     relationships: ["has_many: Completions"]
+   Completion:
+     attributes: ["habit_id", "completed_at"]
+     relationships: ["belongs_to: Habit"]
+ out_of_scope:
+   - "Web or mobile interface"
+   - "Cloud sync or multi-device support"
+ ```
+
+ If you cannot produce valid output:
+
+ ```yaml
+ result: failed
+ ```
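The `domain_model` in the example maps directly onto types in the stack it specifies (TypeScript). A hypothetical sketch of the two entities and their has_many/belongs_to relationship; only the field names come from the example, and everything else (field types, the join key) is an assumption:

```typescript
// Hypothetical types mirroring the example domain_model above.
interface Habit {
  name: string;       // also serves as the habit's identifier in this sketch
  frequency: string;  // e.g. "daily"; the contract does not fix the values
  created_at: string; // assumed ISO-8601 timestamp
}

interface Completion {
  habit_id: string;     // belongs_to: Habit (keyed by name here)
  completed_at: string;
}

// has_many: Completions, expressed as a lookup over the join key.
function completionsFor(habit: Habit, all: Completion[]): Completion[] {
  return all.filter((c) => c.habit_id === habit.name);
}
```

Starting from types like these, the downstream database schema (SQLite tables) and any repository interfaces fall out of the same entity definitions.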