@miniidealab/openlogos 0.3.0 → 0.3.2

# Skill: DB Designer

> Derive database table structures from API specifications and generate SQL DDL in the appropriate dialect. The database type is determined during Phase 3 Step 0 technology selection, ensuring that field types, constraints, indexes, and security policies are fully aligned with the API endpoints.

## Trigger Conditions

- User requests database design or SQL writing
- User mentions "Phase 3 Step 2", "DB design", or "table structure"
- API YAML specifications already exist and the database design needs to be derived
- User provides a data model that needs to be converted to DDL

## Core Capabilities

1. Derive table structures from API request/response structures
2. Read `tech_stack.database` from `logos-project.yaml` to determine the database type
3. Generate SQL DDL in the corresponding database dialect
4. Design indexes, with a rationale for each
5. Design security policies (RLS / application-level permissions)
6. Add comments to every table and every field

## Prerequisites

- `logos/resources/api/` contains API YAML specifications (output from api-designer)
- `tech_stack.database` in `logos-project.yaml` is filled in

If the API directory is empty, prompt the user to complete the API design (api-designer, Phase 3 Step 1) first. If `tech_stack.database` is not filled in, prompt the user to complete Phase 3 Step 0 (architecture-designer) first.

## Execution Steps

### Step 1: Determine Database Type

Read the `tech_stack` field from `logos/logos-project.yaml` to determine the database type and dialect:

- PostgreSQL → use features such as UUID, TIMESTAMPTZ, RLS, and JSONB
- MySQL → use features such as InnoDB, utf8mb4, and TIMESTAMP
- SQLite → use simplified types such as INTEGER PRIMARY KEY and TEXT
- Other → confirm with the user and select the closest dialect

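For reference, the relevant part of `logos/logos-project.yaml` might look like the excerpt below; the surrounding structure is an assumption, and only `tech_stack.database` is read in this step:

```yaml
# Illustrative excerpt; the exact layout of logos-project.yaml may differ
tech_stack:
  database: postgresql   # read by this skill to pick the SQL dialect
```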

### Step 2: Extract Data Entities

Extract from the API YAML all data entities that need to be persisted:

1. Scan `requestBody` and `responses` across all endpoints to identify core data objects
2. Distinguish between "needs persistence" and "transfer-only" data:
   - Objects with CRUD operations → need a table (e.g., `users`, `projects`)
   - Objects that only appear in requests/responses but are not stored directly → no table needed (e.g., `loginRequest`)
3. Annotate each object with its source API endpoint

Output an entity checklist for user confirmation:

```markdown
Identified N data entities requiring persistence from the API specifications:

| # | Entity        | Source Endpoint                   | Core Fields                 |
|---|---------------|-----------------------------------|-----------------------------|
| 1 | users         | auth.yaml → register, login       | email, password, status     |
| 2 | projects      | projects.yaml → create, list, get | name, description, owner_id |
| 3 | subscriptions | billing.yaml → subscribe          | plan, status, expires_at    |
```

### Step 3: Design Table Structures

Design a complete table structure for each entity, following the current database dialect.

**Every table must include**:
- Primary key (UUID or auto-increment ID, depending on dialect)
- Business fields (mapped from the API schema, with types converted to database types)
- Audit fields: `created_at`, `updated_at`
- Soft-delete field: `deleted_at` (as needed)
- Field constraints: `NOT NULL`, `UNIQUE`, `CHECK`, `DEFAULT`

**Type mapping principles**:
- API `string + format: email` → `TEXT NOT NULL` (with a CHECK constraint or application-level validation)
- API `string + format: uuid` → `UUID` (PostgreSQL) / `CHAR(36)` (MySQL)
- API `integer` → `INTEGER` / `BIGINT`
- API `boolean` → `BOOLEAN` (PostgreSQL) / `TINYINT(1)` (MySQL)
- API `string + enum` → `TEXT` + `CHECK` constraint (listing the enum values)
- Monetary fields → `INTEGER` (store in cents); **DECIMAL/FLOAT is prohibited**

**Example (PostgreSQL)**:

```sql
-- Users table (source: auth.yaml → register, login)
CREATE TABLE users (
  id         UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  email      TEXT NOT NULL UNIQUE,
  password   TEXT NOT NULL,
  status     TEXT NOT NULL DEFAULT 'pending'
             CHECK (status IN ('pending', 'active', 'disabled')),
  created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
  updated_at TIMESTAMPTZ NOT NULL DEFAULT now()
);
```
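
Since the Output Specification below requires a comment on every table and field, the matching PostgreSQL comment statements for this table might look like the following (the comment texts are illustrative):

```sql
COMMENT ON TABLE  users            IS 'User accounts (source: auth.yaml -> register, login)';
COMMENT ON COLUMN users.id         IS 'Primary key (UUID v4)';
COMMENT ON COLUMN users.email      IS 'Login email, unique';
COMMENT ON COLUMN users.password   IS 'Password hash; never store plaintext';
COMMENT ON COLUMN users.status     IS 'Account status: pending / active / disabled';
COMMENT ON COLUMN users.created_at IS 'Creation time';
COMMENT ON COLUMN users.updated_at IS 'Last update time';
```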

### Step 4: Design Table Relationships

Design foreign keys based on the entity relationships in the API:

1. Derive relationships from nested paths and reference fields in API endpoints (e.g., `/api/projects/:projectId/members` → a `project_members` table linking `projects` and `users`, sketched below)
2. Determine the relationship types (one-to-many, many-to-many)
3. Design foreign key constraints and cascade strategies:
   - `ON DELETE CASCADE`: child records are deleted when the parent record is deleted (e.g., user deleted → their projects deleted)
   - `ON DELETE SET NULL`: child records are retained but the foreign key is set to NULL when the parent is deleted
   - `ON DELETE RESTRICT`: deletion of the parent record is prevented while child records exist
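
A minimal sketch of that `project_members` association table in the PostgreSQL dialect; the `role` column and its values are illustrative:

```sql
-- Project members (source: projects.yaml -> /api/projects/:projectId/members)
CREATE TABLE project_members (
  project_id UUID NOT NULL REFERENCES projects(id) ON DELETE CASCADE, -- memberships removed with the project
  user_id    UUID NOT NULL REFERENCES users(id) ON DELETE CASCADE,    -- memberships removed with the user
  role       TEXT NOT NULL DEFAULT 'member'
             CHECK (role IN ('owner', 'member')),                     -- illustrative enum
  created_at TIMESTAMPTZ NOT NULL DEFAULT now(),
  PRIMARY KEY (project_id, user_id)  -- one membership per (project, user) pair
);
```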

### Step 5: Design Security Policies

Design the corresponding security mechanisms based on the database type:

**PostgreSQL — Row-Level Security (RLS)**:

```sql
ALTER TABLE projects ENABLE ROW LEVEL SECURITY;

-- Note: auth.uid() is a Supabase-style helper; in plain PostgreSQL, resolve
-- the current user another way (e.g., a session setting).
CREATE POLICY projects_owner_policy ON projects
  USING (owner_id = auth.uid());
```

- Enable RLS on all tables containing user data
- Design at least one policy per table (owner / admin / public)
- Document the correspondence between RLS policies and the API authentication scheme

**MySQL — Application-Level Permissions**:

- Annotate data access permissions in table comments (owner-only / admin / public)
- Do not implement permission control in DDL; delegate it to the application layer
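
In the MySQL case, the access level can be recorded in the table comment; a sketch, with the annotation wording as an assumption:

```sql
-- Permission annotation lives in the comment; enforcement happens in application code
CREATE TABLE projects (
  id       CHAR(36)     NOT NULL PRIMARY KEY,
  owner_id CHAR(36)     NOT NULL,
  name     VARCHAR(255) NOT NULL
) ENGINE=InnoDB DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci
  COMMENT = 'Projects (access: owner-only, checked in the application layer)';
```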

### Step 6: Design Indexes

Design indexes for the common query patterns, with a rationale for each index:

```sql
-- User lookup by email (login scenario, source: S02)
-- (already covered by the UNIQUE constraint on users.email; shown for illustration)
CREATE UNIQUE INDEX idx_users_email ON users(email);

-- Project lookup by owner (project list, source: S04 Step 1)
CREATE INDEX idx_projects_owner ON projects(owner_id);
```

Index design principles:
- Foreign key columns: indexes are mandatory (to avoid full table scans on JOINs)
- Unique constraint columns: unique indexes are created automatically
- High-frequency query columns: determine these from the API query parameters
- Composite indexes: consider them for multi-condition queries (leftmost prefix rule)
- Avoid over-indexing: limit the index count on write-heavy tables
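
For example, a composite index for a hypothetical "subscriptions by user, filtered by status" query; the column names are assumptions:

```sql
-- Serves WHERE user_id = ? and WHERE user_id = ? AND status = ?,
-- but not WHERE status = ? alone (leftmost prefix rule)
CREATE INDEX idx_subscriptions_user_status ON subscriptions(user_id, status);
```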

### Step 7: Output the Complete DDL

Organize the DDL file in the following order:

1. File header comment (source, database type, generation timestamp)
2. Base tables (tables without foreign key dependencies first)
3. Association tables (tables with foreign key dependencies after)
4. Indexes
5. Security policies (RLS / policies)
6. Table and field comments (PostgreSQL uses `COMMENT ON`)

Add a comment above each DDL block noting the source API endpoint.
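
A file header along these lines would satisfy item 1; the wording is illustrative:

```sql
-- ============================================================
-- schema.sql: generated database schema
-- Source:    logos/resources/api/ (auth.yaml, projects.yaml, billing.yaml)
-- Database:  PostgreSQL
-- Generated: [date]
-- ============================================================
```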

## Output Specification

- File format: SQL (dialect determined by `tech_stack.database`)
- Storage location: `logos/resources/database/`
- Single-file output: `schema.sql` (simple projects); or split by domain: `auth.sql`, `billing.sql` (complex projects)
- Every table must have a comment (PostgreSQL: `COMMENT ON TABLE`; MySQL: `COMMENT = '...'`)
- Every field must have a comment (PostgreSQL: `COMMENT ON COLUMN`; MySQL: `COMMENT '...'` after the field definition)
- Add a SQL comment above each DDL block noting the source API endpoint

## Database Dialect Quick Reference

| Feature | PostgreSQL | MySQL |
|---------|------------|-------|
| UUID Primary Key | `UUID DEFAULT gen_random_uuid()` | `CHAR(36) DEFAULT (UUID())` or `BINARY(16)` |
| Timestamp Type | `TIMESTAMPTZ` | `DATETIME` / `TIMESTAMP` (mind timezone handling) |
| JSON Support | `JSONB` (indexable) | `JSON` (more limited functionality) |
| Row-Level Security | RLS (`ENABLE ROW LEVEL SECURITY`) | Not supported; implement at the application layer |
| Table Comment | `COMMENT ON TABLE t IS '...'` | `CREATE TABLE t (...) COMMENT = '...'` |
| Column Comment | `COMMENT ON COLUMN t.c IS '...'` | `col_name TYPE COMMENT '...'` |

## Best Practices

### General (All Databases)

- **Store monetary values as INTEGER in cents**: DECIMAL/FLOAT is prohibited, to avoid floating-point precision issues
- **Soft delete**: prefer a `deleted_at` timestamp field over physical deletion
- **Audit fields**: every table should include `created_at` and `updated_at`
- **Timezone-aware timestamp fields**: avoid timezone pitfalls
- **Field names aligned with the API**: DB column names should map predictably to the API YAML field names (e.g., API `userId` → DB `user_id`), reducing unnecessary transformations in the code layer
- **Core tables first, auxiliary tables later**: don't try to design all tables at once; output the core business tables for user review first, then add the auxiliary tables

### PostgreSQL-Specific

- **Primary key**: `id UUID DEFAULT gen_random_uuid() PRIMARY KEY`
- **Timestamp type**: use `TIMESTAMPTZ`
- **RLS**: enable on all tables with `ALTER TABLE ... ENABLE ROW LEVEL SECURITY;`
- **JSONB**: prefer JSONB for unstructured storage and create GIN indexes
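
For example, assuming a JSONB `metadata` column on `projects`:

```sql
-- Hypothetical unstructured column plus a GIN index for containment queries
ALTER TABLE projects ADD COLUMN metadata JSONB NOT NULL DEFAULT '{}'::jsonb;
CREATE INDEX idx_projects_metadata ON projects USING GIN (metadata);
```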

### MySQL-Specific

- **Primary key**: `id CHAR(36) DEFAULT (UUID()) PRIMARY KEY` or an auto-increment `BIGINT`
- **Timestamp type**: use `TIMESTAMP` (automatic timezone conversion) or `DATETIME` (stored as-is)
- **Character set**: specify `CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci` when creating tables
- **Engine**: always use `ENGINE=InnoDB`
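
Putting these together, the `users` table from Step 3 might translate to the MySQL dialect roughly as follows; a sketch, noting that `DEFAULT (UUID())` requires MySQL 8.0.13+ and that `VARCHAR` replaces `TEXT` so the unique key needs no prefix length:

```sql
-- Users table (source: auth.yaml -> register, login)
CREATE TABLE users (
  id         CHAR(36)     NOT NULL DEFAULT (UUID()) COMMENT 'Primary key (UUID v4)',
  email      VARCHAR(255) NOT NULL COMMENT 'Login email, unique',
  password   VARCHAR(255) NOT NULL COMMENT 'Password hash',
  status     VARCHAR(20)  NOT NULL DEFAULT 'pending' COMMENT 'pending / active / disabled',
  created_at TIMESTAMP    NOT NULL DEFAULT CURRENT_TIMESTAMP COMMENT 'Creation time',
  updated_at TIMESTAMP    NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP COMMENT 'Last update time',
  PRIMARY KEY (id),
  UNIQUE KEY uk_users_email (email)
) ENGINE=InnoDB DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci
  COMMENT = 'User accounts (access: owner-only)';
```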

## Recommended Prompts

The following prompts can be copied directly for use with an AI:

- `Help me design the database`
- `Derive database DDL from the API specifications`
- `Help me design the database tables involved in S01`
- `Help me add indexes and RLS policies to the existing table structures`

---

# Skill: Merge Executor

> Read the MERGE_PROMPT.md instruction file generated by the CLI, and merge each delta file from the change proposal into the main documents one by one, ensuring changes are applied accurately.

## Trigger Conditions

- User requests AI to execute the merge after running `openlogos merge <slug>`
- User mentions "execute merge", "merge", or "merge the deltas into the main documents"
- User mentions "read MERGE_PROMPT.md and execute"

## Prerequisites

1. `logos/changes/<slug>/MERGE_PROMPT.md` exists (generated by the `openlogos merge` command)
2. The delta files and target main documents referenced in MERGE_PROMPT.md all exist

If MERGE_PROMPT.md does not exist, prompt the user to run `openlogos merge <slug>` first.

## Core Capabilities

1. Parse merge instructions from MERGE_PROMPT.md
2. Read each delta file and interpret its ADDED / MODIFIED / REMOVED markers
3. Precisely locate the corresponding sections in the main document and execute the merge
4. Maintain the formatting and style consistency of the main document
5. Output a change summary

## Execution Steps

### Step 1: Read Merge Instructions

Read `logos/changes/<slug>/MERGE_PROMPT.md` and parse:

- Change proposal name and overview
- Each delta file's path, corresponding target main document path, and operation type

### Step 2: Merge Each Delta File

Process the delta files one by one, in the order listed in MERGE_PROMPT.md:

1. **Read the delta file**: understand the ADDED / MODIFIED / REMOVED markers and content (see the sketch after this list)
2. **Read the target main document**: locate the sections that need modification
3. **Execute the merge**:
   - `ADDED`: insert the new content at the specified position in the main document
   - `MODIFIED`: replace the content of the same-named section in the main document
   - `REMOVED`: delete the corresponding section from the main document
4. **Output a summary**: list the modifications made to the file
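
For orientation, a delta file might look roughly like the sketch below. The real marker syntax is defined by the OpenLogos CLI, so treat the path and layout here as assumptions, not the actual format:

```markdown
<!-- logos/changes/add-remember-me/deltas/requirements.delta.md (hypothetical) -->

## MODIFIED: S02 Password Login
[full replacement content for the S02 section]

## ADDED: S05 Remember Me
[new section content, to be inserted after S04]

## REMOVED: Exception: Session expires immediately
```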

### Step 3: Output an Overall Change Report

After all deltas have been processed, output:

```
Merge complete:
- [file path 1]: added x sections, modified y sections, deleted z sections
- [file path 2]: ...

Please verify the changes are correct, then run `openlogos archive <slug>` to archive the proposal.
```

## Merge Principles

1. **Maintain format consistency**: merged content must match the existing format, indentation, and heading levels of the main document
2. **Do not alter unrelated content**: only modify the parts specified by the delta; do not reformat the entire document
3. **Ask on conflict**: if a section referenced by the delta cannot be found in the main document (possibly already modified by another change), pause and ask the user how to proceed
4. **Confirm after each file**: show a modification summary after processing each delta file, and wait for user confirmation before processing the next one

## Output Specification

- Directly modify the main documents in `logos/resources/` (in-place editing)
- Do not modify any files in `logos/changes/`
- Do not create new files during the merge (unless a delta specifies adding a completely new document)

## Best Practices

- **Read everything before acting**: read through all delta files and target documents first to understand the full picture, then merge one by one
- **MODIFIED is the most error-prone**: section titles may have minor differences (capitalization, spacing), so fuzzy matching is needed
- **Preserve change traces**: if the main document has a "last updated" timestamp, remember to update it accordingly
- **Delta order matters**: requirement document changes should be processed before API document changes, to ensure upstream-downstream consistency

## Recommended Prompts

The following prompts can be copied directly for use with an AI:

- `Read logos/changes/<slug>/MERGE_PROMPT.md and execute the merge`
- `Help me merge the add-remember-me changes into the main documents`
- `Execute the change merge`

---

# Skill: PRD Writer

> Assist in writing scenario-driven requirements documents: starting from user pain points, identifying core business scenarios, and defining GIVEN/WHEN/THEN acceptance criteria for each scenario. Scenario numbers carry through all subsequent phases.

## Trigger Conditions

- User requests writing a requirements document, product requirements, or PRD
- User discusses product positioning, target users, or feature requirements
- User mentions "Phase 1", "requirements layer", or "WHY"
- The project is currently in the requirements analysis phase

## Core Capabilities

1. Guide users through product positioning and target user profiling
2. Extract user pain points and establish causal chains
3. Identify and define business scenarios from pain points (assigning `S01`, `S02`... numbers)
4. Write GIVEN/WHEN/THEN acceptance criteria for each scenario
5. Prioritize scenarios
6. Identify constraints and the "won't-do" list
7. Generate scenario-driven requirements documents conforming to the OpenLogos specification

## Execution Steps

### Step 1: Understand Product Positioning

Confirm the following key questions with the user (ask proactively if information is insufficient):

- **One-line positioning**: What is this product? For whom? What problem does it solve?
- **Target user persona**: specific enough to describe a real person
- **Core objectives**: what the product should achieve, and what metrics define success

### Step 2: Extract User Pain Points

Guide the user to identify pain points along the following dimensions:

- How does the user currently do it? (Current state)
- What difficulties are encountered in the process? (Pain points)
- What consequences do those difficulties cause? (Impact)
- How does the user expect it to be resolved? (Expectation)

**Every pain point must have a causal chain**: Because [reason] → leads to [pain point] → results in [consequence]. For example: because [reports must be assembled by hand] → [each weekly report takes hours] → [reports are often late or skipped].

Assign a number to each pain point (`P01`, `P02`...) for scenario traceability.

### Step 3: Identify and Define Scenarios

**Scenarios are the anchor throughout the entire development lifecycle.** This step is critical.

Extract business scenarios from the pain points and requirements. Each scenario is a **complete user action path**:

- **Who** triggers it, under **what circumstances**
- Through **what steps**
- To achieve **what outcome**

Assign a globally unique number to each scenario (`S01`, `S02`...). This number carries through to Phase 2 and Phase 3.

Output a scenario list table:

```markdown
| ID  | Scenario Name      | Trigger Condition       | Related Pain Point | Priority |
|-----|--------------------|-------------------------|--------------------|----------|
| S01 | Email Registration | New user's first visit  | P01                | P0       |
| S02 | Password Login     | Registered user returns | P01                | P0       |
| S03 | Forgot Password    | User cannot log in      | P02                | P1       |
```

### Step 4: Write Scenario Acceptance Criteria

Write acceptance criteria for every P0 and P1 scenario:

```markdown
### S01: Email Registration

- **Trigger Condition**: New user's first visit, clicks "Sign Up"
- **User Value**: Quickly create an account and start using the product (← P01)
- **Priority**: P0
- **Main Path**: User fills in email and password, submits the form, receives a verification email, clicks the link to complete registration

#### Acceptance Criteria

##### Normal: Complete registration flow
- **GIVEN** the user has not registered before and is on the registration page
- **WHEN** the user fills in a valid email and password (≥8 characters) and clicks "Sign Up"
- **THEN** the system creates an account, sends a verification email, and the page displays "Please check your email for verification"

##### Exception: Email already registered
- **GIVEN** the email test@example.com is already registered
- **WHEN** the user attempts to register using test@example.com
- **THEN** the page displays "This email is already registered, please log in directly" and no email is sent

##### Exception: Password does not meet requirements
- **GIVEN** the user is on the registration page
- **WHEN** the user fills in a valid email but a password with fewer than 8 characters and clicks "Sign Up"
- **THEN** the page displays "Password must be at least 8 characters" and the request is not submitted
```

**Principles for writing acceptance criteria**:

- Each scenario must have at least one normal and one exception acceptance criterion
- GIVEN describes the initial state: specific enough to be reproducible
- WHEN describes the user action: precise down to the button level
- THEN describes the expected behavior: specific enough to be verifiable
- Avoid vague wording: "fast", "friendly", "reasonable" → quantify with concrete metrics (e.g., "fast" → "search results return within 2 seconds")

### Step 5: Identify Constraints and Boundaries

- **Technical constraints**: technology stack limitations, third-party service limitations
- **Resource constraints**: team size, time window
- **"Won't-do" list**: explicitly list the features and scenarios that are out of scope for this phase, to prevent scope creep

### Step 6: Assemble the Requirements Document

Output the complete document in the standard structure:

```markdown
# [Product Name] Requirements Document

> Last updated: [date]

## I. Product Background and Goals
### 1.1 Product Positioning
### 1.2 Core Objectives
### 1.3 Target User Persona

## II. User Pain Point Analysis
### P01: [Pain Point Name]
Because [reason] → leads to [pain point] → results in [consequence]
### P02: ...

## III. Scenario Overview
[Scenario list table: ID / Name / Trigger Condition / Related Pain Point / Priority]

## IV. Core Scenario Details
### S01: [Scenario Name]
[Trigger Condition + User Value + Priority + Main Path + Acceptance Criteria]
### S02: ...

## V. Constraints and Boundaries
### 5.1 Technical Constraints
### 5.2 Resource and Time Constraints
### 5.3 "Won't-Do" List
```

## Output Specification

- File format: Markdown
- Storage location: `logos/resources/prd/1-product-requirements/`
- File naming: `{sequence}-{english-name}.md`, e.g., `01-requirements.md`
- Every scenario must be traceable to at least one user pain point
- P0/P1 scenarios must have GIVEN/WHEN/THEN criteria (≥1 normal + ≥1 exception)
- Scenario numbers are globally unique and carry through Phase 2 and Phase 3

## Best Practices

- **Cast a wide net first, then narrow down**: in the first pass, identify as many scenarios as possible, then cut the non-core ones during prioritization
- **Scenarios ≠ features**: a single feature (e.g., "user authentication") may contain multiple scenarios (registration, login, password recovery), and a single scenario (e.g., "first purchase") may span multiple features (browsing, adding to cart, payment). A scenario is a complete path from the user's perspective
- **Scenario granularity**: keep granularity moderate in Phase 1. Too fine ("user clicks the input box") is meaningless; too coarse ("user uses the product") is unverifiable. Good granularity: the main path of a scenario can be walked through in 1–2 minutes
- **Acceptance criteria are the precise expression of requirements**: if you cannot write the GIVEN/WHEN/THEN, the scenario is not yet thought through
- **Exception scenarios are equally important**: users don't always follow the happy path; exception handling often defines the product experience
- **The "won't-do" list is the hardest to write**: restraint is the most important skill of a product manager
- **Once a scenario number is assigned, it is never reused**: even if a scenario is deprecated, its number is not recycled, to avoid confusion
- **Requirements documents are living documents**: they are continuously updated as the product evolves, with every change tracked through Delta change management

## Recommended Prompts

The following prompts can be copied directly for use with an AI:

- `Help me write a requirements document`
- `I want to build a xxx product, help me sort out the requirements`
- `Help me organize these ideas into a structured requirements document`
- `Help me add exception scenario acceptance criteria to an existing requirements document`