@miniidealab/openlogos 0.2.0 → 0.3.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (36)
  1. package/dist/commands/init.d.ts +9 -2
  2. package/dist/commands/init.d.ts.map +1 -1
  3. package/dist/commands/init.js +219 -8
  4. package/dist/commands/init.js.map +1 -1
  5. package/dist/commands/sync.d.ts.map +1 -1
  6. package/dist/commands/sync.js +9 -41
  7. package/dist/commands/sync.js.map +1 -1
  8. package/dist/i18n.d.ts.map +1 -1
  9. package/dist/i18n.js +14 -0
  10. package/dist/i18n.js.map +1 -1
  11. package/dist/index.js +1 -1
  12. package/package.json +5 -2
  13. package/skills/api-designer/SKILL.en.md +209 -0
  14. package/skills/api-designer/SKILL.md +209 -0
  15. package/skills/architecture-designer/SKILL.en.md +181 -0
  16. package/skills/architecture-designer/SKILL.md +181 -0
  17. package/skills/change-writer/SKILL.en.md +146 -0
  18. package/skills/change-writer/SKILL.md +146 -0
  19. package/skills/code-reviewer/SKILL.en.md +204 -0
  20. package/skills/code-reviewer/SKILL.md +204 -0
  21. package/skills/db-designer/SKILL.en.md +212 -0
  22. package/skills/db-designer/SKILL.md +212 -0
  23. package/skills/merge-executor/SKILL.en.md +84 -0
  24. package/skills/merge-executor/SKILL.md +84 -0
  25. package/skills/prd-writer/SKILL.en.md +171 -0
  26. package/skills/prd-writer/SKILL.md +171 -0
  27. package/skills/product-designer/SKILL.en.md +228 -0
  28. package/skills/product-designer/SKILL.md +228 -0
  29. package/skills/project-init/SKILL.en.md +163 -0
  30. package/skills/project-init/SKILL.md +163 -0
  31. package/skills/scenario-architect/SKILL.en.md +214 -0
  32. package/skills/scenario-architect/SKILL.md +214 -0
  33. package/skills/test-orchestrator/SKILL.en.md +142 -0
  34. package/skills/test-orchestrator/SKILL.md +142 -0
  35. package/skills/test-writer/SKILL.en.md +247 -0
  36. package/skills/test-writer/SKILL.md +247 -0
@@ -0,0 +1,204 @@
# Skill: Code Reviewer

> Review AI-generated code by systematically checking it against the full OpenLogos specification chain (API YAML, sequence-diagram EX cases, DB DDL), ensuring the code matches the design documents exactly, covers every exception path, and meets security requirements.

## Trigger Conditions

- User requests a code review
- User mentions "Phase 3 Step 4", "code audit", "code review"
- AI has just generated code whose quality needs verification
- Final check before deployment
- An orchestrated test has failed and the code-level cause needs to be located

## Prerequisites

- `logos/resources/api/` contains API YAML specifications
- `logos/resources/prd/3-technical-plan/2-scenario-implementation/` contains scenario sequence diagrams (with EX cases)
- `logos/resources/database/` contains DB DDL
- The code under review is readable

Projects without an API (pure CLI / library) may skip the API consistency check and focus on sequence-diagram coverage and exception handling.

## Core Capabilities

1. Verify that the code implementation is consistent with the API YAML specifications
2. Check that exception handling covers all EX cases
3. Check that DB operations conform to the DDL design
4. Check security policies (authentication, RLS, input validation)
5. Check code style and best practices
6. Output a structured review report

## Execution Steps

### Step 1: Load Specification Context

Read the following files to establish the review's reference baseline:

- **API YAML** (`logos/resources/api/*.yaml`): extract the endpoint list; record each endpoint's path, method, request body schema, response schema, and status codes
- **Scenario sequence diagrams** (`logos/resources/prd/3-technical-plan/2-scenario-implementation/`): extract all EX exception-case IDs and their expected behavior
- **DB DDL** (`logos/resources/database/`): extract table structures, field types, constraints, and indexes
- **`logos-project.yaml`**: read `tech_stack` to confirm the technology stack and `external_dependencies` to confirm external dependencies

Summarize into a review checklist:

```markdown
Review scope: S01-related code
- API endpoints: 4 (auth.yaml)
- EX exception cases: 7 (EX-2.1 ~ EX-5.2)
- DB tables: 2 (users, profiles)
- Security policies: 2 RLS policies
```

### Step 2: API Consistency Review

Compare the code implementation against the API YAML specification endpoint by endpoint:

**Checks**:

| Check | Description | Severity |
|-------|-------------|----------|
| Path match | Whether route paths in the code exactly match `paths` in the YAML | Critical |
| HTTP method | Whether GET/POST/PUT/DELETE match | Critical |
| Request body fields | Whether the code reads all required fields defined in the YAML `requestBody.schema` | Critical |
| Request body validation | Whether constraints such as field type, format (email/uuid), and minLength are validated in the code | Warning |
| Response fields | Whether the JSON field names and types returned by the code match `responses.schema` in the YAML | Critical |
| Status codes | Whether the HTTP status codes returned in normal and exception cases match the YAML definitions | Critical |
| Error response format | Whether error responses follow the unified `{ code, message, details? }` format | Warning |

**Output format**:

```markdown
### API Consistency

| Endpoint | Check | Status | Notes |
|----------|-------|--------|-------|
| POST /api/auth/register | Request body fields | ✅ | email and password are both read |
| POST /api/auth/register | Response status code | ❌ Critical | Returns 200 on success; YAML defines 201 |
| POST /api/auth/register | Error code | ❌ Warning | Duplicate email returns a generic 400; YAML defines 409 |
```

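These checks can be partly mechanized. A toy Python sketch that compares a handler's actual status and body against one spec entry (the `spec` dict and `review_response` helper are hypothetical stand-ins for the parsed YAML and the reviewer's checklist, not part of OpenLogos):

```python
# Condensed stand-in for auth.yaml -> POST /api/auth/register
spec = {
    "status": 201,
    "response_fields": {"id", "email"},
}

def review_response(spec, actual_status, actual_body):
    """Return the list of Critical findings for one endpoint response."""
    findings = []
    if actual_status != spec["status"]:
        findings.append(
            f"Critical: returns {actual_status}, YAML defines {spec['status']}"
        )
    missing = spec["response_fields"] - actual_body.keys()
    if missing:
        findings.append(f"Critical: response missing fields {sorted(missing)}")
    return findings

# The mismatch from the table above: the handler returns 200 instead of 201.
print(review_response(spec, 200, {"id": "u1", "email": "a@b.c"}))
```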
### Step 3: Exception Coverage Review

Match every EX exception case in the sequence diagrams against the error handling in the code:

1. List all EX IDs for the scenario along with their expected behavior
2. Locate the corresponding try/catch, if/else, or error handler in the code
3. Flag EX cases that are not covered

**Focus points**:

- Whether each EX case has a corresponding code branch
- Whether exception paths return the correct HTTP status codes and error codes
- Whether exceptions are silently swallowed (empty catch blocks, or logging without returning an error)
- Whether all external calls (DB, third-party APIs) have timeout and error handling
- Whether the code contains exception handling that is absent from the sequence diagrams (which may indicate an omission in the diagram)

**Output format**:

```markdown
### Exception Coverage

| EX ID | Exception | Covered | Notes |
|-------|-----------|---------|-------|
| EX-2.1 | Email already registered | ✅ | Returns 409 with the correct format |
| EX-2.2 | Auth service unavailable | ❌ Critical | supabase.auth.signUp call is not wrapped in try/catch |
| EX-4.1 | profiles insert fails | ❌ Critical | No rollback of the auth.users record after INSERT failure |
```

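The EX-2.2 gap shown above is the canonical miss: an external call with no failure branch. A minimal Python sketch of the fix pattern (`AuthUnavailableError` and the injected `sign_up` callable are hypothetical stand-ins for the project's real auth client):

```python
class AuthUnavailableError(Exception):
    """Stands in for a timeout or 5xx from the external auth service."""

def register(email, password, sign_up):
    # EX-2.2: wrap the external auth call so an outage maps to a 503
    # response instead of an unhandled crash.
    try:
        user = sign_up(email, password)
    except AuthUnavailableError:
        return 503, {"code": "AUTH_UNAVAILABLE", "message": "auth service unavailable"}
    # Happy path: the spec defines 201 for successful registration.
    return 201, {"id": user["id"], "email": email}
```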
### Step 4: DB Operation Review

Check that the database operations in the code conform to the DDL design:

**Checks**:

- **Table and column names**: whether the names referenced in the code match the DDL (no typos or case mismatches)
- **Field types**: whether the values passed in match the DDL types (e.g., for an `INTEGER` monetary field, the code should pass cents, not dollars)
- **Constraint compliance**: NOT NULL fields are guaranteed a value, UNIQUE fields have conflict handling, and the enum values in CHECK constraints have matching constants in the code
- **Transaction usage**: multi-table writes are wrapped in transactions
- **Migration consistency**: the newest DDL fields are actually used in the code (avoiding DDL updates the code never caught up with)

### Step 5: Security Review

Check the security implementation of the code:

| Check | Description | Severity |
|-------|-------------|----------|
| Authentication | Endpoints requiring auth validate the token/session before any processing | Critical |
| Authorization | Users can only access their own data (owner check) | Critical |
| Input validation | User input is type-checked and length-limited (against injection and XSS) | Critical |
| Sensitive data | Responses do not leak password hashes, internal IDs, or stack traces | Critical |
| RLS dependency | If PostgreSQL RLS is relied upon, the code correctly sets the `auth.uid()` context | Warning |
| SQL injection | Parameterized queries are used (string-concatenated SQL is prohibited) | Critical |
| Rate limiting | Critical endpoints (login, registration) have brute-force rate limiting | Warning |

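The SQL-injection row reduces to one rule: user input never gets concatenated into SQL text. A minimal demonstration with Python's stdlib `sqlite3` (the table and columns are illustrative, not from a real OpenLogos project):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT UNIQUE)")
conn.execute("INSERT INTO users (email) VALUES (?)", ("a@example.com",))

def find_user(conn, email):
    # Parameterized query: the driver binds `email` as data, so a payload
    # like "' OR '1'='1" is a literal string, never executable SQL.
    return conn.execute(
        "SELECT id FROM users WHERE email = ?", (email,)
    ).fetchone()

assert find_user(conn, "a@example.com") == (1,)
assert find_user(conn, "' OR '1'='1") is None  # injection attempt matches nothing
```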
### Step 6: Output the Review Report

Aggregate all findings by severity and produce a structured report:

```markdown
# Code Review Report: S01 User Registration

## Scope
- Scenario: S01
- Endpoints: 4
- EX cases: 7
- Code files: src/api/auth/register.ts, src/api/auth/login.ts

## Summary

| Severity | Count |
|----------|-------|
| 🔴 Critical | 2 |
| 🟡 Warning | 3 |
| 🔵 Info | 1 |

## Critical Findings

### [C1] POST /api/auth/register status code mismatch
- **Spec source**: auth.yaml → register → responses.201
- **Problem**: the code returns 200; the spec defines 201
- **Suggested fix**: change `res.status(200)` to `res.status(201)`

### [C2] EX-2.2 unhandled: Auth service unavailable
- **Spec source**: S01 sequence diagram → EX-2.2
- **Problem**: the `supabase.auth.signUp()` call is not wrapped in try/catch
- **Suggested fix**: add try/catch; return 503 on timeout or 5xx

## Warning Findings
...

## Info Findings
...
```

**Reporting principles**:
- Critical findings must be fixed before orchestrated acceptance testing
- Warnings should be fixed but do not block delivery
- Info items are improvement suggestions that can be handled later
- Every finding must cite its spec source (API YAML, EX ID, DDL)

## Output Specification

- Output the review report directly in the conversation (do not write a file)
- Classify by severity: Critical / Warning / Info
- Format each finding as: ID + spec source + problem description + suggested fix
- End with a summary and next steps (e.g., "after fixing the 2 Critical findings, orchestrated acceptance can run")

## Best Practices

- **Consistency first**: the code must match the API YAML exactly — field names, types, and status codes may not deviate. Most production bugs come from subtle mismatches between code and spec
- **Exception handling is the focus**: most bugs live on the exception paths; check carefully that every EX case has a matching catch/error handler
- **No discounts on security**: authentication checks, RLS policies, input validation — any missing item is Critical
- **Don't over-review**: mark code-style issues as Info and don't block delivery. The core goal of the review is "code matches spec", not "code is perfect"
- **Run it before reviewing**: if the code runs, execute the orchestrated tests first and use the failing cases to locate problems — more efficient than reading code line by line
- **Watch compensation logic**: for multi-step writes (e.g., create the auth user, then write the profile), check whether a mid-way failure has rollback or compensation — the most commonly missed Critical issue

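The compensation point can be sketched as follows (a minimal Python illustration; the three injected callables are hypothetical stand-ins for the real auth and profile writes):

```python
def create_account(create_auth_user, create_profile, delete_auth_user):
    """Two-step write with compensation: never leave an orphaned auth user."""
    user = create_auth_user()
    try:
        create_profile(user["id"])
    except Exception:
        # EX-4.1: undo the first write when the second fails, then surface
        # the error instead of swallowing it.
        delete_auth_user(user["id"])
        raise
    return user
```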
## Recommended Prompts

The following prompts can be copied directly for use with AI:

- `Run a code review for me`
- `Check whether this code matches the API YAML spec`
- `Review the S01-related code implementation`
- `Check whether the exception handling is complete`
- `Check whether the security policies are in place`
@@ -0,0 +1,212 @@
# Skill: DB Designer

> Derive database table structures from API specifications and generate SQL DDL in the appropriate dialect. The database type is determined during Phase 3 Step 0 technology selection, ensuring that field types, constraints, indexes, and security policies are fully aligned with API endpoints.

## Trigger Conditions

- User requests database design or SQL writing
- User mentions "Phase 3 Step 2", "DB design", "table structure"
- API YAML specifications already exist and database design needs to be derived
- User provides a data model that needs to be converted to DDL

## Core Capabilities

1. Derive table structures from API request/response structures
2. Read `tech_stack.database` from `logos-project.yaml` to determine the database type
3. Generate SQL DDL in the corresponding database dialect
4. Design indexes with rationale for each
5. Design security policies (RLS / application-level permissions)
6. Add comments to every table and every field

## Prerequisites

- `logos/resources/api/` contains API YAML specifications (output from api-designer)
- `tech_stack.database` in `logos-project.yaml` is filled in

If the API directory is empty, prompt the user to complete the API design (api-designer) in Phase 3 Step 2 first. If `tech_stack.database` is not filled in, prompt the user to complete Phase 3 Step 0 (architecture-designer) first.

## Execution Steps

### Step 1: Determine Database Type

Read the `tech_stack` field from `logos/logos-project.yaml` to determine the database type and dialect:

- PostgreSQL → Use features like UUID, TIMESTAMPTZ, RLS, JSONB, etc.
- MySQL → Use features like InnoDB, utf8mb4, TIMESTAMP, etc.
- SQLite → Use simplified types like INTEGER PRIMARY KEY, TEXT, etc.
- Other → Confirm with the user and select the closest dialect

### Step 2: Extract Data Entities

Extract all data entities that need to be persisted from the API YAML:

1. Scan `requestBody` and `responses` across all endpoints to identify core data objects
2. Distinguish between "needs persistence" and "transfer-only" data:
   - Objects with CRUD operations → need a table (e.g., `users`, `projects`)
   - Objects that only appear in requests/responses but are not stored directly → no table needed (e.g., `loginRequest`)
3. Annotate each object with its source API endpoint

Output an entity checklist for user confirmation:

```markdown
Identified N data entities requiring persistence from API specifications:

| # | Entity | Source Endpoint | Core Fields |
|---|--------|----------------|-------------|
| 1 | users | auth.yaml → register, login | email, password, status |
| 2 | projects | projects.yaml → create, list, get | name, description, owner_id |
| 3 | subscriptions | billing.yaml → subscribe | plan, status, expires_at |
```

### Step 3: Design Table Structures

Design complete table structures for each entity, following the current database dialect:

**Every table must include**:
- Primary key (UUID or auto-increment ID, depending on dialect)
- Business fields (mapped from API schema, with types converted to database types)
- Audit fields: `created_at`, `updated_at`
- Soft delete field: `deleted_at` (as needed)
- Field constraints: `NOT NULL`, `UNIQUE`, `CHECK`, `DEFAULT`

**Type mapping principles**:
- API `string + format: email` → `TEXT NOT NULL` (with CHECK constraint or application-level validation)
- API `string + format: uuid` → `UUID` (PostgreSQL) / `CHAR(36)` (MySQL)
- API `integer` → `INTEGER` / `BIGINT`
- API `boolean` → `BOOLEAN` (PostgreSQL) / `TINYINT(1)` (MySQL)
- API `string + enum` → `TEXT + CHECK` constraint (listing enum values)
- Monetary fields → `INTEGER` (store in cents), **DECIMAL/FLOAT is prohibited**

**Example (PostgreSQL)**:

```sql
-- Users table (source: auth.yaml → register, login)
CREATE TABLE users (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  email TEXT NOT NULL UNIQUE,
  password TEXT NOT NULL,
  status TEXT NOT NULL DEFAULT 'pending'
    CHECK (status IN ('pending', 'active', 'disabled')),
  created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
  updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
```

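The "store money in cents" rule exists because binary floats cannot represent most decimal amounts exactly; a quick Python illustration (any language with IEEE-754 doubles behaves the same way):

```python
# Floating point: three 10-cent charges do not sum to exactly 0.30.
assert 0.1 + 0.1 + 0.1 != 0.3

# Integer cents: the same arithmetic is exact.
assert 10 + 10 + 10 == 30

# Convert to a display string only at the edge of the system.
total_cents = 3 * 1999  # three items at $19.99
assert f"${total_cents / 100:.2f}" == "$59.97"
```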
### Step 4: Design Table Relationships

Design foreign keys based on entity relationships in the API:

1. Derive relationships from nested paths and reference fields in API endpoints (e.g., `/api/projects/:projectId/members` → `project_members` table linking `projects` and `users`)
2. Determine relationship types (one-to-many, many-to-many)
3. Design foreign key constraints and cascade strategies:
   - `ON DELETE CASCADE`: child records are deleted when the parent record is deleted (e.g., user deleted → projects deleted)
   - `ON DELETE SET NULL`: child records are retained but the foreign key is set to null when the parent is deleted
   - `ON DELETE RESTRICT`: prevent deletion of the parent record if child records exist

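The cascade strategies can be observed directly in any dialect; a minimal sketch using Python's stdlib `sqlite3` (SQLite is one of the supported dialects; note it enforces foreign keys only after `PRAGMA foreign_keys = ON`):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # off by default in SQLite
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY)")
conn.execute("""
    CREATE TABLE projects (
        id INTEGER PRIMARY KEY,
        owner_id INTEGER NOT NULL
            REFERENCES users(id) ON DELETE CASCADE
    )
""")
conn.execute("INSERT INTO users (id) VALUES (1)")
conn.execute("INSERT INTO projects (id, owner_id) VALUES (10, 1)")

# Deleting the parent row cascades to the child table.
conn.execute("DELETE FROM users WHERE id = 1")
assert conn.execute("SELECT COUNT(*) FROM projects").fetchone() == (0,)
```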
### Step 5: Design Security Policies

Design corresponding security mechanisms based on the database type:

**PostgreSQL — Row-Level Security (RLS)**:

```sql
ALTER TABLE projects ENABLE ROW LEVEL SECURITY;

CREATE POLICY projects_owner_policy ON projects
  USING (owner_id = auth.uid());
```

- Enable RLS on all tables containing user data
- Design at least one Policy per table (owner / admin / public)
- Document the correspondence between RLS policies and the API authentication scheme

**MySQL — Application-Level Permissions**:

- Annotate data access permissions in table comments (owner-only / admin / public)
- Do not implement permission control in DDL; delegate to the application layer

### Step 6: Design Indexes

Design indexes for common query patterns, with a rationale for each index:

```sql
-- User lookup by email (login scenario, source: S02)
CREATE UNIQUE INDEX idx_users_email ON users(email);

-- Project lookup by owner (project list, source: S04 Step 1)
CREATE INDEX idx_projects_owner ON projects(owner_id);
```

Index design principles:
- Foreign key columns: indexes are mandatory (to avoid full table scans on JOINs)
- Unique constraint columns: unique indexes are created automatically
- High-frequency query columns: determine based on API query parameters
- Composite indexes: consider for multi-condition queries (leftmost prefix rule)
- Avoid over-indexing: limit index count on write-heavy tables

### Step 7: Output Complete DDL

Organize the DDL file in the following order:

1. File header comment (source, database type, generation timestamp)
2. Base tables (tables without foreign key dependencies first)
3. Association tables (tables with foreign key dependencies after)
4. Indexes
5. Security policies (RLS / Policy)
6. Table and field comments (PostgreSQL uses `COMMENT ON`)

Add a comment above each DDL block noting the source API endpoint.

## Output Specification

- File format: SQL (dialect determined by `tech_stack.database`)
- Storage location: `logos/resources/database/`
- Single file output: `schema.sql` (simple projects); or split by domain: `auth.sql`, `billing.sql` (complex projects)
- Every table must have a comment (PostgreSQL: `COMMENT ON TABLE`; MySQL: `COMMENT = '...'`)
- Every field must have a comment (PostgreSQL: `COMMENT ON COLUMN`; MySQL: `COMMENT '...'` after field definition)
- Add a SQL comment above each DDL block noting the source API endpoint

## Database Dialect Quick Reference

| Feature | PostgreSQL | MySQL |
|---------|-----------|-------|
| UUID Primary Key | `UUID DEFAULT gen_random_uuid()` | `CHAR(36) DEFAULT (UUID())` or use `BINARY(16)` |
| Timestamp Type | `TIMESTAMPTZ` | `DATETIME` / `TIMESTAMP` (mind timezone handling) |
| JSON Support | `JSONB` (indexable) | `JSON` (limited functionality) |
| Row-Level Security | RLS (`ENABLE ROW LEVEL SECURITY`) | Not supported; must be implemented at the application layer |
| Table Comment | `COMMENT ON TABLE t IS '...'` | `CREATE TABLE t (...) COMMENT = '...'` |
| Column Comment | `COMMENT ON COLUMN t.c IS '...'` | `col_name TYPE COMMENT '...'` |

## Best Practices

### General (All Databases)

- **Store monetary values as INTEGER in cents**: DECIMAL/FLOAT is prohibited to avoid floating-point precision issues
- **Soft delete**: prefer a `deleted_at` timestamp field over physical deletion
- **Audit fields**: every table should include `created_at` and `updated_at`
- **Timestamp fields with timezone**: avoid timezone pitfalls
- **Field names aligned with API**: DB column names should match API YAML field names as closely as possible (e.g., API uses `userId` → DB uses `user_id`; as long as the mapping rule is clear), reducing unnecessary transformations in the code layer
- **Core tables first, auxiliary tables later**: don't try to design all tables at once — output core business tables for user review first, then add auxiliary tables

### PostgreSQL-Specific

- **Primary key**: `id UUID DEFAULT gen_random_uuid() PRIMARY KEY`
- **Timestamp type**: use `TIMESTAMPTZ`
- **RLS**: enable on all tables with `ALTER TABLE ... ENABLE ROW LEVEL SECURITY;`
- **JSONB**: prefer JSONB for unstructured storage and create GIN indexes

### MySQL-Specific

- **Primary key**: `id CHAR(36) DEFAULT (UUID()) PRIMARY KEY` or auto-increment BIGINT
- **Timestamp type**: use `TIMESTAMP` (automatic timezone conversion) or `DATETIME` (stored as-is)
- **Character set**: specify `CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci` when creating tables
- **Engine**: always use `ENGINE=InnoDB`

## Recommended Prompts

The following prompts can be copied directly for use with AI:

- `Help me design the database`
- `Derive database DDL from the API specifications`
- `Help me design the database tables involved in S01`
- `Help me add indexes and RLS policies to the existing table structures`
@@ -0,0 +1,212 @@
# Skill: DB Designer

> Derive database table structures from API specifications and generate SQL DDL in the appropriate dialect. The database type is determined during Phase 3 Step 0 technology selection, ensuring that field types, constraints, indexes, and security policies are fully aligned with API endpoints.

## Trigger Conditions

- User requests database design or SQL writing
- User mentions "Phase 3 Step 2", "DB design", "table structure"
- API YAML specifications already exist and database design needs to be derived
- User provides a data model that needs to be converted to DDL

## Core Capabilities

1. Derive table structures from API request/response structures
2. Read `tech_stack.database` from `logos-project.yaml` to determine the database type
3. Generate SQL DDL in the corresponding database dialect
4. Design indexes with rationale for each
5. Design security policies (RLS / application-level permissions)
6. Add comments to every table and every field

## Prerequisites

- `logos/resources/api/` contains API YAML specifications (output from api-designer)
- `tech_stack.database` in `logos-project.yaml` is filled in

If the API directory is empty, prompt the user to complete the API design (api-designer) in Phase 3 Step 2 first. If `tech_stack.database` is not filled in, prompt the user to complete Phase 3 Step 0 (architecture-designer) first.

## Execution Steps

### Step 1: Determine Database Type

Read the `tech_stack` field from `logos/logos-project.yaml` to determine the database type and dialect:

- PostgreSQL → Use features like UUID, TIMESTAMPTZ, RLS, JSONB, etc.
- MySQL → Use features like InnoDB, utf8mb4, TIMESTAMP, etc.
- SQLite → Use simplified types like INTEGER PRIMARY KEY, TEXT, etc.
- Other → Confirm with the user and select the closest dialect

### Step 2: Extract Data Entities

Extract all data entities that need to be persisted from the API YAML:

1. Scan `requestBody` and `responses` across all endpoints to identify core data objects
2. Distinguish between "needs persistence" and "transfer-only" data:
   - Objects with CRUD operations → need a table (e.g., `users`, `projects`)
   - Objects that only appear in requests/responses but are not stored directly → no table needed (e.g., `loginRequest`)
3. Annotate each object with its source API endpoint

Output an entity checklist for user confirmation:

```markdown
Identified N data entities requiring persistence from API specifications:

| # | Entity | Source Endpoint | Core Fields |
|---|--------|----------------|-------------|
| 1 | users | auth.yaml → register, login | email, password, status |
| 2 | projects | projects.yaml → create, list, get | name, description, owner_id |
| 3 | subscriptions | billing.yaml → subscribe | plan, status, expires_at |
```

### Step 3: Design Table Structures

Design complete table structures for each entity, following the current database dialect:

**Every table must include**:
- Primary key (UUID or auto-increment ID, depending on dialect)
- Business fields (mapped from API schema, with types converted to database types)
- Audit fields: `created_at`, `updated_at`
- Soft delete field: `deleted_at` (as needed)
- Field constraints: `NOT NULL`, `UNIQUE`, `CHECK`, `DEFAULT`

**Type mapping principles**:
- API `string + format: email` → `TEXT NOT NULL` (with CHECK constraint or application-level validation)
- API `string + format: uuid` → `UUID` (PostgreSQL) / `CHAR(36)` (MySQL)
- API `integer` → `INTEGER` / `BIGINT`
- API `boolean` → `BOOLEAN` (PostgreSQL) / `TINYINT(1)` (MySQL)
- API `string + enum` → `TEXT + CHECK` constraint (listing enum values)
- Monetary fields → `INTEGER` (store in cents), **DECIMAL/FLOAT is prohibited**

**Example (PostgreSQL)**:

```sql
-- Users table (source: auth.yaml → register, login)
CREATE TABLE users (
  id UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  email TEXT NOT NULL UNIQUE,
  password TEXT NOT NULL,
  status TEXT NOT NULL DEFAULT 'pending'
    CHECK (status IN ('pending', 'active', 'disabled')),
  created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
  updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
```

### Step 4: Design Table Relationships

Design foreign keys based on entity relationships in the API:

1. Derive relationships from nested paths and reference fields in API endpoints (e.g., `/api/projects/:projectId/members` → `project_members` table linking `projects` and `users`)
2. Determine relationship types (one-to-many, many-to-many)
3. Design foreign key constraints and cascade strategies:
   - `ON DELETE CASCADE`: child records are deleted when the parent record is deleted (e.g., user deleted → projects deleted)
   - `ON DELETE SET NULL`: child records are retained but the foreign key is set to null when the parent is deleted
   - `ON DELETE RESTRICT`: prevent deletion of the parent record if child records exist

### Step 5: Design Security Policies

Design corresponding security mechanisms based on the database type:

**PostgreSQL — Row-Level Security (RLS)**:

```sql
ALTER TABLE projects ENABLE ROW LEVEL SECURITY;

CREATE POLICY projects_owner_policy ON projects
  USING (owner_id = auth.uid());
```

- Enable RLS on all tables containing user data
- Design at least one Policy per table (owner / admin / public)
- Document the correspondence between RLS policies and the API authentication scheme

**MySQL — Application-Level Permissions**:

- Annotate data access permissions in table comments (owner-only / admin / public)
- Do not implement permission control in DDL; delegate to the application layer

### Step 6: Design Indexes

Design indexes for common query patterns, with a rationale for each index:

```sql
-- User lookup by email (login scenario, source: S02)
CREATE UNIQUE INDEX idx_users_email ON users(email);

-- Project lookup by owner (project list, source: S04 Step 1)
CREATE INDEX idx_projects_owner ON projects(owner_id);
```

Index design principles:
- Foreign key columns: indexes are mandatory (to avoid full table scans on JOINs)
- Unique constraint columns: unique indexes are created automatically
- High-frequency query columns: determine based on API query parameters
- Composite indexes: consider for multi-condition queries (leftmost prefix rule)
- Avoid over-indexing: limit index count on write-heavy tables

### Step 7: Output Complete DDL

Organize the DDL file in the following order:

1. File header comment (source, database type, generation timestamp)
2. Base tables (tables without foreign key dependencies first)
3. Association tables (tables with foreign key dependencies after)
4. Indexes
5. Security policies (RLS / Policy)
6. Table and field comments (PostgreSQL uses `COMMENT ON`)

Add a comment above each DDL block noting the source API endpoint.

## Output Specification

- File format: SQL (dialect determined by `tech_stack.database`)
- Storage location: `logos/resources/database/`
- Single file output: `schema.sql` (simple projects); or split by domain: `auth.sql`, `billing.sql` (complex projects)
- Every table must have a comment (PostgreSQL: `COMMENT ON TABLE`; MySQL: `COMMENT = '...'`)
- Every field must have a comment (PostgreSQL: `COMMENT ON COLUMN`; MySQL: `COMMENT '...'` after field definition)
- Add a SQL comment above each DDL block noting the source API endpoint

## Database Dialect Quick Reference

| Feature | PostgreSQL | MySQL |
|---------|-----------|-------|
| UUID Primary Key | `UUID DEFAULT gen_random_uuid()` | `CHAR(36) DEFAULT (UUID())` or use `BINARY(16)` |
| Timestamp Type | `TIMESTAMPTZ` | `DATETIME` / `TIMESTAMP` (mind timezone handling) |
| JSON Support | `JSONB` (indexable) | `JSON` (limited functionality) |
| Row-Level Security | RLS (`ENABLE ROW LEVEL SECURITY`) | Not supported; must be implemented at the application layer |
| Table Comment | `COMMENT ON TABLE t IS '...'` | `CREATE TABLE t (...) COMMENT = '...'` |
| Column Comment | `COMMENT ON COLUMN t.c IS '...'` | `col_name TYPE COMMENT '...'` |

## Best Practices

### General (All Databases)

- **Store monetary values as INTEGER in cents**: DECIMAL/FLOAT is prohibited to avoid floating-point precision issues
- **Soft delete**: prefer a `deleted_at` timestamp field over physical deletion
- **Audit fields**: every table should include `created_at` and `updated_at`
- **Timestamp fields with timezone**: avoid timezone pitfalls
- **Field names aligned with API**: DB column names should match API YAML field names as closely as possible (e.g., API uses `userId` → DB uses `user_id`; as long as the mapping rule is clear), reducing unnecessary transformations in the code layer
- **Core tables first, auxiliary tables later**: don't try to design all tables at once — output core business tables for user review first, then add auxiliary tables

### PostgreSQL-Specific

- **Primary key**: `id UUID DEFAULT gen_random_uuid() PRIMARY KEY`
- **Timestamp type**: use `TIMESTAMPTZ`
- **RLS**: enable on all tables with `ALTER TABLE ... ENABLE ROW LEVEL SECURITY;`
- **JSONB**: prefer JSONB for unstructured storage and create GIN indexes

### MySQL-Specific

- **Primary key**: `id CHAR(36) DEFAULT (UUID()) PRIMARY KEY` or auto-increment BIGINT
- **Timestamp type**: use `TIMESTAMP` (automatic timezone conversion) or `DATETIME` (stored as-is)
- **Character set**: specify `CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci` when creating tables
- **Engine**: always use `ENGINE=InnoDB`

## Recommended Prompts

The following prompts can be copied directly for use with AI:

- `Help me design the database`
- `Derive database DDL from the API specifications`
- `Help me design the database tables involved in S01`
- `Help me add indexes and RLS policies to the existing table structures`