@miniidealab/openlogos 0.3.0 → 0.3.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,181 @@
+ # Skill: Architecture Designer
+
+ > Before diving into per-scenario technical implementation, establish the project's global technical view — system architecture, technology selection, deployment topology, and non-functional constraints. Ensure that subsequent sequence diagrams, API designs, and code generation all proceed under consistent architectural constraints.
+
+ ## Trigger Conditions
+
+ - User requests technical architecture design, technology selection, or system architecture planning
+ - User mentions "Phase 3 Step 0", "architecture design", "technical plan"
+ - Phase 2 product design documents are complete and Phase 3 is about to begin
+ - User wants to determine the tech stack or deployment strategy
+
+ ## Core Capabilities
+
+ 1. Read Phase 1 requirements documents and Phase 2 product design documents to understand the full product picture
+ 2. Recommend suitable system architectures based on product complexity and scenario characteristics
+ 3. Provide a selection rationale and alternative comparisons for each technology choice
+ 4. Draw system architecture diagrams (Mermaid) and deployment topology diagrams
+ 5. Update the `tech_stack` field in `logos-project.yaml`
+
+ ## Integration with Phase 1/2
+
+ Architecture design is the bridge from Phase 2 (product design) to Phase 3 (technical implementation). Its inputs come from Phase 1/2, and its outputs influence all subsequent steps in Phase 3:
+
+ | Input (from Phase 1/2) | Output (influences subsequent Phase 3 steps) |
+ |------------------------|------------------------------|
+ | Scenario list and complexity | System boundary definition → sequence diagram participants |
+ | Non-functional requirements (performance, security) | Technology selection constraints → API design decisions |
+ | Product interaction type (Web/Mobile/API) | Frontend tech stack → prototype implementation approach |
+ | Data volume and access patterns | Database selection → DB design |
+ | Third-party service dependencies (payment, email, etc.) | Integration approach → external participants in sequence diagrams |
+
+ ## Execution Steps
+
+ ### Step 1: Understand the Full Product Picture
+
+ Read the following documents to build an overall understanding of the project:
+
+ - **Requirements Document** (Phase 1): Product positioning, core scenarios, constraints and boundaries
+ - **Product Design Document** (Phase 2): Information architecture, page structure, interaction complexity
+ - **Existing `logos-project.yaml`**: Whether there are initial selections in the current `tech_stack`
+
+ Key points to extract:
+ - Number and complexity of core scenarios
+ - Whether there are real-time requirements (WebSocket, SSE)
+ - Whether there are background tasks (scheduled tasks, message queues)
+ - List of third-party service dependencies
+ - Expected user scale
+
+ ### Step 2: Determine System Architecture
+
+ Choose an architecture pattern based on product complexity:
+
+ **Simple Projects** (personal SaaS, utility products):
+ - Monolithic architecture + single database
+ - The architecture overview can be a paragraph of text plus a simple diagram
+
+ **Medium Projects** (team SaaS, multi-role systems):
+ - Frontend-backend separation + monolithic backend + single database
+ - May need auxiliary services such as object storage and caching
+
+ **Complex Projects** (multi-service, high-concurrency, multi-platform):
+ - Microservices / modular monolith
+ - Requires detailed Architecture Decision Records (ADRs)
+
+ Draw the system architecture diagram using Mermaid:
+
+ ```mermaid
+ graph TB
+   subgraph Frontend
+     Web[Web App - Next.js]
+   end
+   subgraph Backend
+     API[API Server - Node.js]
+     Worker[Background Worker]
+   end
+   subgraph Data
+     DB[(PostgreSQL)]
+     Cache[(Redis)]
+     S3[Object Storage]
+   end
+   subgraph External
+     Auth[Supabase Auth]
+     Email[SendGrid]
+   end
+
+   Web -->|REST API| API
+   API --> DB
+   API --> Cache
+   API --> S3
+   API --> Auth
+   Worker --> DB
+   Worker --> Email
+ ```
+
+ ### Step 3: Technology Selection
+
+ Provide a selection and rationale for each technology dimension:
+
+ ```markdown
+ | Dimension | Selection | Rationale | Alternatives |
+ |-----------|-----------|-----------|-------------|
+ | Language | TypeScript | Unified frontend/backend, type safety | Go (when performance is the priority) |
+ | Frontend Framework | Next.js 15 | SSR + RSC, mature ecosystem | Astro (content sites), Nuxt (Vue ecosystem) |
+ | Backend Framework | Hono | Lightweight, edge-first, native TS | Express (ecosystem), Fastify (performance) |
+ | Database | PostgreSQL | Feature-rich, JSONB, RLS | MySQL (simple scenarios) |
+ | Authentication | Supabase Auth | Out-of-the-box, RLS integration | NextAuth (self-hosted) |
+ | Deployment | Vercel + Supabase | Zero-ops, auto-scaling | AWS (full control) |
+ ```
+
+ **Selection Principles**:
+ - Prefer technologies the team is already familiar with
+ - When there is no significant difference, choose the option with the larger community
+ - Each selection rationale must be tied to specific product requirements or constraints
+
+ ### Step 4: Non-Functional Constraints
+
+ Define key non-functional requirements (a brief example follows the list):
+
+ - **Performance Targets**: Core API response time, page load time
+ - **Security Requirements**: Authentication method, data encryption, CORS policy
+ - **Scalability**: Expected user scale, data growth estimates
+ - **Observability**: Logging, monitoring, alerting strategy
+ - **Developer Experience**: Local development environment, CI/CD pipeline
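+
+ A minimal sketch of what this section might record; the numbers are illustrative placeholders, not recommended targets:
+
+ ```markdown
+ ## Non-Functional Constraints
+ - Performance: core APIs p95 < 300 ms; first page load < 2 s
+ - Security: Supabase Auth sessions, TLS only, CORS limited to the app domain
+ - Scalability: ~1,000 DAU at launch; revisit the architecture at 10x growth
+ - Observability: structured JSON logs, alerting on sustained 5xx spikes
+ - Developer Experience: one-command local setup, CI runs lint + tests
+ ```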
+
+ ### Step 5: External Dependencies and Test Strategies
+
+ Catalog all external service dependencies for the project and determine the isolation strategy for each dependency during orchestration testing. The output of this step directly impacts whether Phase 3 Step 3 (orchestration testing) can be executed smoothly.
+
+ 1. Identify external dependencies from the architecture diagram and sequence diagram participants (email, SMS, verification codes, payment, OAuth, etc.)
+ 2. Confirm the test strategy for each dependency with the user
+
+ Available test strategies:
+
+ | Strategy | Description | Typical Scenario |
+ |----------|-------------|-----------------|
+ | `test-api` | Test environment provides a backdoor API | Email/SMS verification codes |
+ | `fixed-value` | Specific test data uses fixed values | Fixed verification code for test phone numbers |
+ | `env-disable` | Environment variable disables the feature | CAPTCHA, slider verification |
+ | `mock-callback` | Orchestration actively calls a simulated callback | Payment callbacks, Webhooks |
+ | `mock-service` | Local mock service as replacement | OAuth Provider |
+
+ If the project has no external service dependencies (e.g., a pure CLI tool), this step can be skipped.
+
+ ### Step 6: Update logos-project.yaml
+
+ Write the confirmed technology selections into the `tech_stack` field of `logos-project.yaml`, and write external dependencies and test strategies into the `external_dependencies` field, ensuring that all subsequent Skills and AI tools can read the unified tech stack and testing conventions.
+
+ ```yaml
+ external_dependencies:
+   - name: "Email Service"
+     provider: "SendGrid"
+     used_in: ["S01-User Registration", "S03-Forgot Password"]
+     test_strategy: "test-api"
+     test_config: "GET /api/test/latest-email?to={email}"
+ ```
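+
+ The example above covers only `external_dependencies`. As a hedged sketch of the companion `tech_stack` entry, the Step 3 selections might be recorded as follows; the key names under `tech_stack` are illustrative assumptions, so use whatever schema your `logos-project.yaml` already defines:
+
+ ```yaml
+ tech_stack:
+   language: "TypeScript"
+   frontend: "Next.js 15"
+   backend: "Hono"
+   database: "PostgreSQL"
+   auth: "Supabase Auth"
+   deployment: "Vercel + Supabase"
+ ```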
+
+ ## Output Specification
+
+ - Architecture overview document: `logos/resources/prd/3-technical-plan/1-architecture/01-architecture-overview.md`
+ - Architecture diagrams use Mermaid format
+ - Technology selections use table format; each item must include a rationale
+ - Update the `tech_stack` and `external_dependencies` fields in `logos-project.yaml`
+ - Simple projects may produce streamlined output (not all sections are mandatory)
+
+ ## Best Practices
+
+ - **Don't over-engineer**: For a solo developer building SaaS, monolith + PostgreSQL + Vercel is sufficient — don't jump straight to microservices
+ - **Selection rationale matters more than the selection itself**: Documenting "why X was chosen" is more valuable than "X was chosen", because the rationale needs to be re-evaluated as the project evolves
+ - **Architecture diagrams are prerequisites for sequence diagrams**: System components in the architecture diagram become participants in subsequent sequence diagrams — the two must be consistent
+ - **tech_stack is the AI's anchor**: Subsequent AI code generation reads `tech_stack` from `logos-project.yaml` — inaccurate selections will result in unusable generated code
+ - **Start loose with non-functional constraints, tighten later**: Don't set overly strict performance targets initially; tighten them as real data becomes available
+ - **Test strategies must be decided during the architecture phase**: If test approaches for verification codes, payments, and other external dependencies are left until orchestration testing, you'll often find that no backdoor APIs were provisioned, making fully automated orchestration tests impossible
+
+ ## Recommended Prompts
+
+ The following prompts can be copied directly for use with AI:
+
+ - `Help me design the technical architecture`
+ - `Based on the product design, help me make technology selections`
+ - `Help me draw the system architecture diagram`
+ - `Help me determine the tech stack and update logos-project.yaml`
@@ -0,0 +1,146 @@
+ # Skill: Change Writer
+
+ > Assist in writing change proposals — analyze the impact scope of a change and generate a structured proposal.md and a phase-based tasks.md, ensuring changes are traceable and their impact is controllable.
+
+ ## Trigger Conditions
+
+ - User has just run `openlogos change <slug>` and wants AI help filling in the proposal
+ - User describes a need to modify, add, or remove a scenario/feature
+ - User mentions "change proposal", "iteration", "requirement change"
+
+ ## Prerequisites
+
+ 1. The project is initialized (`logos/logos.config.json` exists)
+ 2. The change proposal directory has been created by the CLI (`logos/changes/<slug>/` exists)
+ 3. Main documents are readable (effective documents exist in `logos/resources/`)
+
+ If the prerequisites are not met, prompt the user to run `openlogos change <slug>` to create the proposal directory first.
+
+ ## Core Capabilities
+
+ 1. Understand the user's intended change
+ 2. Scan existing documents in `logos/resources/` to identify the affected scope
+ 3. Determine the change type based on change propagation rules (Requirement-level / Design-level / Interface-level / Code-level)
+ 4. Generate a compliant proposal.md
+ 5. Automatically break down tasks.md by change type
+
+ ## Execution Steps
+
+ ### Step 1: Understand the Change Intent
+
+ Confirm the following information with the user (if it is insufficient, ask follow-up questions, up to two rounds):
+
+ - **What is the change**: What needs to be added, modified, or removed?
+ - **Reason for the change**: Why is this change needed? Does it stem from requirement feedback, a bug, or an optimization?
+ - **Related scenarios**: Which existing scenario IDs are involved (S01, S02...)?
+
+ ### Step 2: Analyze the Impact Scope
+
+ Scan documents in `logos/resources/` to determine the impact scope (an example summary follows the list):
+
+ 1. Read requirement documents (`prd/1-product-requirements/`) to check related scenario definitions
+ 2. Read product design (`prd/2-product-design/`) to check related functional specs and prototypes
+ 3. Read technical plans (`prd/3-technical-plan/`) to check related sequence diagrams
+ 4. Read API documents (`api/`) to check related endpoints
+ 5. Read DB documents (`database/`) to check related table structures
+ 6. Read orchestration tests (`scenario/`) to check related test cases
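+
+ For instance, for the hypothetical "add remember password to S02 login" change used in the prompts below, the scan might be summarized like this (all file and scenario references are illustrative):
+
+ ```markdown
+ Impact scan: S02 remember password
+ - Requirements: S02 acceptance criteria mention session duration
+ - Product design: login functional spec + login prototype
+ - Technical plan: S02 login sequence diagram
+ - API: POST /api/auth/login (new optional `remember` field)
+ - DB: sessions table (expiry column)
+ - Orchestration tests: S02 login test cases
+ ```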
+
+ ### Step 3: Determine the Change Type
+
+ Refer to change propagation rules to determine the change type and minimum update scope:
+
+ | Change Type | Minimum Updates Required |
+ |-------------|------------------------|
+ | Requirement-level change | Full chain (Requirements → Design → Architecture → API/DB → Orchestration → Code) |
+ | Design-level change | Prototypes + Scenarios + API/DB + Orchestration + Code |
+ | Interface-level change | API/DB + Orchestration + Code |
+ | Code-level fix | Code + Re-verification |
+
+ ### Step 4: Generate proposal.md
+
+ Generate using the following template and write to `logos/changes/<slug>/proposal.md`:
+
+ ```markdown
+ # Change Proposal: [Change Name]
+
+ ## Reason for Change
+ [Why is this change needed? What requirement/feedback/bug does it originate from?]
+
+ ## Change Type
+ [Requirement-level / Design-level / Interface-level / Code-level]
+
+ ## Change Scope
+ - Affected requirement documents: [List, down to filename and section]
+ - Affected functional specs: [List]
+ - Affected business scenarios: [Scenario ID list]
+ - Affected APIs: [Endpoint list]
+ - Affected DB tables: [Table name list]
+ - Affected orchestration tests: [List]
+
+ ## Change Summary
+ [Describe in 1-3 paragraphs what specifically will change]
+ ```
+
+ ### Step 5: Generate tasks.md
+
+ Automatically break down the task checklist based on the change type and impact scope. Only list the phases that need updating:
+
+ ```markdown
+ # Implementation Tasks
+
+ ## Phase 1: Document Changes
+ - [ ] Update acceptance criteria for S0x in requirement documents
+ - [ ] Add/modify scenario in the scenario overview table
+
+ ## Phase 2: Design Changes
+ - [ ] Update interaction design for S0x in functional specs
+ - [ ] Update prototypes
+
+ ## Phase 3: Technical Changes
+ - [ ] Update sequence diagram for S0x
+ - [ ] Update API YAML
+ - [ ] Update DB DDL
+ - [ ] Update orchestration test cases
+ - [ ] Implement code changes
+ ```
+
+ ### Step 6: Guide Follow-up Actions (Chain-driven)
+
+ Provide a ready-to-use prompt that allows the user to kick off chain execution of all tasks with a single command:
+
+ - **Requirement-level / Design-level changes** (multiple tasks): Suggest the user say "Follow tasks.md and help me progressively update all affected documents for S0x"
+ - **Code-level fixes** (fewer tasks): Suggest the user say "Help me fix the [issue description] for S0x and re-verify"
+
+ Chain execution behavior rules:
+ 1. AI reads `tasks.md` and executes items sequentially
+ 2. After completing each task, report a summary of changes and automatically prompt "Continue to the next item?"
+ 3. After the user says "Continue" or provides adjustments, proceed to the next item
+ 4. After all tasks are completed, remind the user to run `openlogos merge <slug>`
+
+ **Key principle**: Do not make the user manually track the task checklist — AI should proactively drive the process.
+
+ ## Output Specification
+
+ - File format: Markdown
+ - Storage location: `logos/changes/<slug>/`
+ - Filenames: `proposal.md` and `tasks.md` (overwrite the CLI-generated templates)
+
+ ## Best Practices
+
+ - **Overestimate the impact scope**: Missing an update in one link of the chain is more dangerous than double-checking
+ - **Change type determines workload**: Help users understand before they start that changing one requirement may require a full-chain update
+ - **tasks.md is the execution checklist**: Check off each item with `[x]` upon completion for easy progress tracking
+ - **Follow the process even for small changes**: A change that appears to be "just one API line" may affect orchestration tests and code
+
+ ## Recommended Prompts
+
+ The following prompts can be copied directly for use with AI:
+
+ **Fill in proposal**:
+ - `Help me fill in the change proposal <slug>`
+ - `I want to add a "remember password" feature to the S02 login scenario, help me analyze the impact scope`
+ - `This bug fix only involves the code layer, help me quickly write a proposal`
+
+ **Execute tasks (after proposal is completed)**:
+ - `Follow tasks.md and help me progressively update all affected documents for S02`
+ - `Help me fix the 500 error on the S02 login endpoint and re-verify`
@@ -0,0 +1,204 @@
+ # Skill: Code Reviewer
+
+ > Review AI-generated code by performing systematic validation against the full OpenLogos specification chain (API YAML, sequence diagram EX cases, DB DDL), ensuring code is fully consistent with design documents, covers all exception paths, and meets security requirements.
+
+ ## Trigger Conditions
+
+ - User requests a code review
+ - User mentions "Phase 3 Step 4", "code audit", "code review"
+ - AI has just generated code that needs quality verification
+ - Final check before deployment
+ - Need to locate code issues after orchestration test failures
+
+ ## Prerequisites
+
+ - `logos/resources/api/` contains API YAML specifications
+ - `logos/resources/prd/3-technical-plan/2-scenario-implementation/` contains scenario sequence diagrams (with EX cases)
+ - `logos/resources/database/` contains DB DDL
+ - The code to be reviewed is accessible
+
+ For projects without APIs (pure CLI / libraries), API consistency checks can be skipped; focus on sequence diagram coverage and exception handling instead.
+
+ ## Core Capabilities
+
+ 1. Validate code implementation consistency with API YAML specifications
+ 2. Check whether exception handling covers all EX cases
+ 3. Check whether DB operations conform to DDL design
+ 4. Check security policies (authentication, RLS, input validation)
+ 5. Check code style and best practices
+ 6. Output a structured review report
+
+ ## Execution Steps
+
+ ### Step 1: Load Specification Context
+
+ Read the following files to establish a "reference baseline" for the code review:
+
+ - **API YAML** (`logos/resources/api/*.yaml`): Extract the endpoint inventory, recording each endpoint's path, method, request body schema, response schema, and status codes
+ - **Scenario Sequence Diagrams** (`logos/resources/prd/3-technical-plan/2-scenario-implementation/`): Extract all EX exception case IDs and expected behaviors
+ - **DB DDL** (`logos/resources/database/`): Extract table structures, column types, constraints, and indexes
+ - **`logos-project.yaml`**: Read `tech_stack` to confirm the technology stack and `external_dependencies` to confirm external dependencies
+
+ Summarize these into a review checklist:
+
+ ```markdown
+ Review scope: S01-related code
+ - API endpoints: 4 (auth.yaml)
+ - EX exception cases: 7 (EX-2.1 ~ EX-5.2)
+ - DB tables: 2 (users, profiles)
+ - Security policies: 2 RLS rules
+ ```
+
+ ### Step 2: API Consistency Review
+
+ Compare the code implementation against the API YAML specification endpoint by endpoint:
+
+ **Checklist**:
+
+ | Check Item | Description | Severity |
+ |------------|-------------|----------|
+ | Path Match | Whether route paths in code exactly match `paths` in YAML | Critical |
+ | HTTP Method | Whether GET/POST/PUT/DELETE matches | Critical |
+ | Request Body Fields | Whether code reads all required fields defined in YAML `requestBody.schema` | Critical |
+ | Request Body Validation | Whether field type, format (email/uuid), minLength and other constraints are validated in code | Warning |
+ | Response Fields | Whether JSON field names and types returned by code match YAML `responses.schema` | Critical |
+ | Status Codes | Whether HTTP status codes returned in normal and error cases match YAML definitions | Critical |
+ | Error Response Format | Whether error responses follow the unified `{ code, message, details? }` format | Warning |
+
+ **Output format**:
+
+ ```markdown
+ ### API Consistency
+
+ | Endpoint | Check Item | Status | Notes |
+ |----------|------------|--------|-------|
+ | POST /api/auth/register | Request body fields | ✅ | email, password both read |
+ | POST /api/auth/register | Response status code | ❌ Critical | Registration success returns 200, YAML defines 201 |
+ | POST /api/auth/register | Error code | ❌ Warning | Duplicate email returns generic 400, YAML defines 409 |
+ ```
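+
+ To make the checklist concrete, here is a minimal sketch of a handler shape that would pass these checks, written with Hono (the backend framework from the Architecture Designer's Step 3 example); the route, schema fields, helper functions, and error codes are illustrative assumptions, not the project's actual spec:
+
+ ```typescript
+ import { Hono } from "hono";
+
+ // Assumed persistence helpers; real implementations live elsewhere.
+ declare function findUserByEmail(email: string): Promise<{ id: string } | null>;
+ declare function createUser(email: string, password: string): Promise<{ id: string; email: string }>;
+
+ const app = new Hono();
+
+ app.post("/api/auth/register", async (c) => {
+   const { email, password } = await c.req.json<{ email: string; password: string }>();
+
+   // Request body validation mirroring the YAML schema (format: email, minLength: 8)
+   if (!email?.includes("@") || !password || password.length < 8) {
+     return c.json({ code: "INVALID_INPUT", message: "email or password invalid" }, 400);
+   }
+
+   // Duplicate email is 409 per the spec, not a generic 400
+   if (await findUserByEmail(email)) {
+     return c.json({ code: "EMAIL_TAKEN", message: "email already registered" }, 409);
+   }
+
+   // Success is 201 per responses.201, not the framework default 200
+   const user = await createUser(email, password);
+   return c.json({ id: user.id, email: user.email }, 201);
+ });
+
+ export default app;
+ ```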
+
+ ### Step 3: Exception Handling Coverage Review
+
+ Map all EX exception cases from sequence diagrams to error handling in code one by one:
+
+ 1. List all EX case IDs and their expected behaviors for the scenario
+ 2. Search for corresponding try/catch, if/else, error handlers in code
+ 3. Flag uncovered EX cases
+
+ **Key checks**:
+
+ - Whether each EX case has a corresponding code branch
+ - Whether the correct HTTP status code and error code are returned in exception scenarios
+ - Whether there are "silently swallowed exceptions" (empty catch blocks or catch blocks that only log without returning errors)
+ - Whether external service calls (DB, third-party APIs) all have timeout and error handling
+ - Whether there are exception handlers in code that don't exist in sequence diagrams (which may indicate sequence diagram omissions)
+
+ **Output format**:
+
+ ```markdown
+ ### Exception Handling Coverage
+
+ | EX ID | Exception Description | Code Coverage | Notes |
+ |-------|----------------------|---------------|-------|
+ | EX-2.1 | Email already registered | ✅ | Returns 409, format correct |
+ | EX-2.2 | Auth service unavailable | ❌ Critical | No try/catch wrapping the supabase.auth.signUp call |
+ | EX-4.1 | profiles write failure | ❌ Critical | auth.users record not rolled back after INSERT failure |
+ ```
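+
+ As a hedged sketch of what "covered" means for the EX-2.2 row above: supabase-js v2 reports most failures through the returned `error` object, while network-level failures can still throw, so both paths need a branch. Mapping every failure to 503 is a simplification for illustration; a real handler would distinguish error codes first:
+
+ ```typescript
+ import { createClient } from "@supabase/supabase-js";
+
+ const supabase = createClient(process.env.SUPABASE_URL!, process.env.SUPABASE_ANON_KEY!);
+
+ async function signUpOrFail(email: string, password: string) {
+   try {
+     const { data, error } = await supabase.auth.signUp({ email, password });
+     if (error) {
+       // Service responded but rejected the request; a real handler would
+       // branch on error codes (e.g., duplicate email -> 409) before 503
+       return { status: 503, body: { code: "AUTH_UNAVAILABLE", message: error.message } };
+     }
+     return { status: 201, body: { id: data.user?.id } };
+   } catch {
+     // Timeout / connection refused: never swallow this silently (EX-2.2)
+     return { status: 503, body: { code: "AUTH_UNAVAILABLE", message: "auth service unreachable" } };
+   }
+ }
+ ```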
+
+ ### Step 4: DB Operations Review
+
+ Check whether database operations in code conform to the DDL design (a sketch of the transaction/compensation check follows the checklist):
+
+ **Checklist**:
+
+ - **Table and column names**: Whether table/column names referenced in code match the DDL (no typos or case differences)
+ - **Field types**: Whether value types passed in code match DDL definitions (e.g., for an `INTEGER` amount field in DDL, whether code passes cents instead of dollars)
+ - **Constraint compliance**: Whether NOT NULL fields always have values, whether UNIQUE fields have conflict handling, whether CHECK constraint enum values have corresponding constants in code
+ - **Transaction usage**: Whether multi-table write operations are wrapped in transactions
+ - **Migration consistency**: Whether the latest fields in DDL are used in code (avoid DDL being updated but code not following up)
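+
+ When two writes cannot share a database transaction, as with Supabase Auth plus a `profiles` table, the reviewer should look for compensation instead. A hedged sketch of the pattern, reusing the `supabase` client from the previous sketch; the table and column names are assumed from the example DDL, and `auth.admin.deleteUser` requires a service-role client in supabase-js v2:
+
+ ```typescript
+ // Multi-step write: create the auth user, then the profile row.
+ // A failed second step must compensate by deleting the first,
+ // otherwise an orphan auth.users record remains (EX-4.1).
+ async function registerWithProfile(email: string, password: string) {
+   const { data, error } = await supabase.auth.signUp({ email, password });
+   if (error || !data.user) throw new Error("signup failed");
+
+   const { error: profileError } = await supabase
+     .from("profiles")
+     .insert({ id: data.user.id, email });
+
+   if (profileError) {
+     // Compensation: roll back the auth user so no orphan record remains
+     await supabase.auth.admin.deleteUser(data.user.id);
+     throw new Error("profile creation failed");
+   }
+   return data.user;
+ }
+ ```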
+
+ ### Step 5: Security Review
+
+ Check the security implementation of the code (a sketch of the parameterized-query and owner checks follows the table):
+
+ | Check Item | Description | Severity |
+ |------------|-------------|----------|
+ | Authentication Check | Whether endpoints requiring authentication verify token/session before processing logic | Critical |
+ | Authorization Check | Whether users can only access their own data (owner check) | Critical |
+ | Input Validation | Whether user input has type validation and length limits (prevent injection, prevent XSS) | Critical |
+ | Sensitive Data | Whether responses leak password hashes, internal IDs, or stack traces | Critical |
+ | RLS Dependency | If relying on PostgreSQL RLS, whether code correctly sets the `auth.uid()` context | Warning |
+ | SQL Injection | Whether parameterized queries are used (string-concatenated SQL is prohibited) | Critical |
+ | Rate Limiting | Whether critical endpoints (login, registration) have rate limiting against brute force | Warning |
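+
+ A minimal sketch of what the SQL Injection and Authorization Check rows look for in raw-SQL code, using the `pg` client; the table, columns, and function are hypothetical examples:
+
+ ```typescript
+ import { Pool } from "pg";
+
+ const pool = new Pool(); // connection settings read from PG* environment variables
+
+ // Parameterized query plus owner check in one statement: both the resource
+ // id and the authenticated user id are bound as $1/$2, so no SQL is ever
+ // concatenated and a user can only read rows they own.
+ async function getOwnedDocument(docId: string, authedUserId: string) {
+   const { rows } = await pool.query(
+     "SELECT id, title, body FROM documents WHERE id = $1 AND owner_id = $2",
+     [docId, authedUserId],
+   );
+   // Returning the same "not found" result whether the row is missing or
+   // belongs to someone else avoids leaking resource existence
+   return rows[0] ?? null;
+ }
+ ```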
+
+ ### Step 6: Output Review Report
+
+ Summarize all findings by severity and generate a structured report:
+
+ ```markdown
+ # Code Review Report: S01 User Registration
+
+ ## Review Scope
+ - Scenario: S01
+ - Endpoints: 4
+ - EX cases: 7
+ - Code files: src/api/auth/register.ts, src/api/auth/login.ts
+
+ ## Review Summary
+
+ | Severity | Count |
+ |----------|-------|
+ | 🔴 Critical | 2 |
+ | 🟡 Warning | 3 |
+ | 🔵 Info | 1 |
+
+ ## Critical Findings
+
+ ### [C1] POST /api/auth/register status code mismatch
+ - **Spec source**: auth.yaml → register → responses.201
+ - **Issue**: Code returns 200, spec defines 201
+ - **Fix suggestion**: Change `res.status(200)` to `res.status(201)`
+
+ ### [C2] EX-2.2 unhandled: Auth service unavailable
+ - **Spec source**: S01 sequence diagram → EX-2.2
+ - **Issue**: `supabase.auth.signUp()` call is not wrapped in try/catch
+ - **Fix suggestion**: Add try/catch, return 503 on timeout or 5xx
+
+ ## Warning Findings
+ ...
+
+ ## Info Findings
+ ...
+ ```
+
+ **Report principles**:
+ - Critical issues must be fixed before proceeding to orchestration acceptance
+ - Warning issues are recommended to fix but do not block delivery
+ - Info items are improvement suggestions that can be addressed later
+ - Every finding must reference a spec source (API YAML, EX ID, DDL)
+
+ ## Output Specification
+
+ - Review report is output directly in the conversation (not written to a file)
+ - Categorized by severity: Critical / Warning / Info
+ - Each finding format: ID + spec source + issue description + fix suggestion
+ - End with a summary and next-step recommendation (e.g., "Fix 2 Critical issues, then run orchestration acceptance")
+
+ ## Best Practices
+
+ - **Consistency first**: Code must be fully consistent with API YAML — field names, types, and status codes must not deviate. Most production bugs come from subtle inconsistencies between code and specs
+ - **Exception handling is the focus**: Most bugs occur in exception paths; carefully check that every EX case has a corresponding catch/error handler
+ - **No shortcuts on security**: Authentication checks, RLS policies, input validation — any missing item is a Critical issue
+ - **Don't over-review**: Code style issues should be marked as Info and not block delivery. The core goal of the review is "code matches specs", not "code is perfect"
+ - **Run tests before reviewing**: If the code can run, execute orchestration tests first and use failing cases to pinpoint issues — this is more efficient than reading code line by line
+ - **Watch for compensation logic**: If multi-step writes (e.g., first creating an auth user then writing a profile) fail midway, check whether there is a rollback or compensation mechanism — this is the most commonly missed Critical issue
+
+ ## Recommended Prompts
+
+ The following prompts can be copied directly for use with AI:
+
+ - `Help me do a code review`
+ - `Help me check if this code conforms to the API YAML spec`
+ - `Review the code implementation related to S01`
+ - `Help me check if exception handling is complete`
+ - `Help me check if security policies are in place`