@miniidealab/openlogos 0.9.5 → 0.9.7
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +1 -0
- package/claude-plugin-template/.claude-plugin/plugin.json +13 -0
- package/claude-plugin-template/agents/change-reviewer.md +46 -0
- package/claude-plugin-template/bin/openlogos-phase +428 -0
- package/claude-plugin-template/commands/archive.md +22 -0
- package/claude-plugin-template/commands/change.md +19 -0
- package/claude-plugin-template/commands/index.md +13 -0
- package/claude-plugin-template/commands/init.md +30 -0
- package/claude-plugin-template/commands/launch.md +16 -0
- package/claude-plugin-template/commands/merge.md +20 -0
- package/claude-plugin-template/commands/next.md +18 -0
- package/claude-plugin-template/commands/status.md +10 -0
- package/claude-plugin-template/commands/sync.md +12 -0
- package/claude-plugin-template/commands/verify.md +17 -0
- package/claude-plugin-template/hooks/hooks.json +14 -0
- package/claude-plugin-template/skills/api-designer/SKILL.md +230 -0
- package/claude-plugin-template/skills/architecture-designer/SKILL.md +186 -0
- package/claude-plugin-template/skills/change-writer/SKILL.md +160 -0
- package/claude-plugin-template/skills/code-implementor/SKILL.md +116 -0
- package/claude-plugin-template/skills/code-reviewer/SKILL.md +214 -0
- package/claude-plugin-template/skills/db-designer/SKILL.md +259 -0
- package/claude-plugin-template/skills/merge-executor/SKILL.md +118 -0
- package/claude-plugin-template/skills/prd-writer/SKILL.md +203 -0
- package/claude-plugin-template/skills/product-designer/SKILL.md +235 -0
- package/claude-plugin-template/skills/project-init/SKILL.md +168 -0
- package/claude-plugin-template/skills/scenario-architect/SKILL.md +229 -0
- package/claude-plugin-template/skills/test-orchestrator/SKILL.md +147 -0
- package/claude-plugin-template/skills/test-writer/SKILL.md +252 -0
- package/dist/commands/init.d.ts +7 -0
- package/dist/commands/init.d.ts.map +1 -1
- package/dist/commands/init.js +145 -10
- package/dist/commands/init.js.map +1 -1
- package/dist/commands/module.d.ts.map +1 -1
- package/dist/commands/module.js +14 -9
- package/dist/commands/module.js.map +1 -1
- package/dist/commands/status.d.ts.map +1 -1
- package/dist/commands/status.js +15 -5
- package/dist/commands/status.js.map +1 -1
- package/dist/commands/sync.d.ts +8 -0
- package/dist/commands/sync.d.ts.map +1 -1
- package/dist/commands/sync.js +74 -3
- package/dist/commands/sync.js.map +1 -1
- package/dist/i18n.d.ts.map +1 -1
- package/dist/i18n.js +10 -0
- package/dist/i18n.js.map +1 -1
- package/dist/index.js +0 -0
- package/package.json +5 -4
- package/spec/logos-project.md +1 -0
+++ package/claude-plugin-template/commands/verify.md
@@ -0,0 +1,17 @@
+---
+description: Run test verification and generate acceptance report with three-layer traceability
+---
+
+Verify test results against test case specs and generate an acceptance report.
+
+1. Run `openlogos verify` in the project root directory.
+2. If the `openlogos` CLI is not found, tell the user to install it:
+   ```
+   npm install -g @miniidealab/openlogos
+   ```
+3. The verify command reads `logos/resources/verify/test-results.jsonl` and matches results against test case specs in `logos/resources/test/`.
+4. It generates a three-layer acceptance report:
+   - Layer 1: Design-time coverage (are all test cases defined?)
+   - Layer 2: Runtime coverage (did all tests run?)
+   - Layer 3: Acceptance criteria (did all tests pass?)
+5. Display the report output to the user.
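The three-layer report that verify.md describes boils down to two set intersections. A minimal sketch follows; the JSONL field names (`case_id`, `status`) are assumptions for illustration — the actual record format `openlogos verify` reads is not shown in this diff.

```python
import json

def three_layer_report(defined_cases, results_jsonl_lines):
    """Compute design-time coverage, runtime coverage, and acceptance."""
    results = {}
    for line in results_jsonl_lines:
        rec = json.loads(line)
        results[rec["case_id"]] = rec["status"]

    defined = set(defined_cases)          # Layer 1: what the specs define
    ran = defined & set(results)          # Layer 2: what actually ran
    passed = {c for c in ran if results[c] == "pass"}  # Layer 3

    return {
        "design_coverage": f"{len(defined)} cases defined",
        "runtime_coverage": f"{len(ran)}/{len(defined)} ran",
        "acceptance": f"{len(passed)}/{len(defined)} passed",
        "missing_runs": sorted(defined - ran),   # defined but never executed
        "failures": sorted(ran - passed),        # executed but not passing
    }
```

Keeping `missing_runs` and `failures` as separate lists is what makes the report "three-layer": a case absent from the results file is a coverage gap, not a failure.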
+++ package/claude-plugin-template/skills/api-designer/SKILL.md
@@ -0,0 +1,230 @@
+---
+name: api-designer
+description: "Design OpenAPI specifications derived from scenario sequence diagrams. Use when scenarios exist in 2-scenario-implementation/ but logos/resources/api/ is empty. All description and summary values in YAML must be double-quoted."
+---
+
+# Skill: API Designer
+
+> Design OpenAPI 3.0+ YAML specifications based on sequence diagrams, letting APIs emerge naturally from scenarios rather than being defined in isolation. Every endpoint is traceable to a Step number in the sequence diagrams, ensuring "no scenario, no API design."
+
+## Trigger Conditions
+
+- User requests API design or API documentation
+- User mentions "Phase 3 Step 2" or "API design"
+- Scenario sequence diagrams already exist and API specifications need to be refined
+- User provides a specific API endpoint that needs detailed design
+
+## Prerequisites
+
+- `logos/resources/prd/3-technical-plan/2-scenario-implementation/` contains scenario sequence diagrams
+- `logos/resources/prd/3-technical-plan/1-architecture/` contains the architecture overview (confirming frontend-backend separation approach, authentication scheme, etc.)
+- `tech_stack` in `logos-project.yaml` is filled in
+
+If the sequence diagram directory is empty, prompt the user to complete Phase 3 Step 1 (scenario-architect) first.
+
+## Core Capabilities
+
+1. Extract all cross-system-boundary API calls from sequence diagrams
+2. Deduplicate, merge, and group by domain to form an endpoint inventory
+3. Design OpenAPI 3.0+ YAML specifications (paths, parameters, request bodies, response structures)
+4. Define a unified error response format and error code system
+5. Design authentication schemes (Bearer Token / API Key / Cookie)
+6. Design standardized parameters for pagination, sorting, and filtering
+
+## Execution Steps
+
+### Step 1: Read Scenario Context
+
+Read the following files to establish complete context:
+
+- **Scenario sequence diagrams** (`logos/resources/prd/3-technical-plan/2-scenario-implementation/`): Extract all cross-system-boundary arrows
+- **Architecture overview** (`logos/resources/prd/3-technical-plan/1-architecture/`): Confirm authentication scheme, frontend-backend separation approach, API gateway, etc.
+- **`logos-project.yaml`**: Read `tech_stack` to confirm backend framework and deployment approach
+
+### Step 2: Extract Endpoint Inventory
+
+Traverse all scenario sequence diagrams and collect every cross-system-boundary call arrow:
+
+1. Identify "cross-system-boundary" arrows — client to server, server to external service, inter-service calls
+2. For each arrow, extract: HTTP method, path, source scenario number, and Step number
+3. Deduplicate and merge — the same endpoint may appear in multiple scenarios (e.g., `POST /api/auth/login` may appear in both S02 and S03)
+4. Output an endpoint inventory summary for user confirmation:
+
+```markdown
+Identified N API endpoints from sequence diagrams:
+
+| # | Method | Path | Source Scenario | Domain |
+|---|--------|------|-----------------|--------|
+| 1 | POST | /api/auth/register | S01 Step 2 | auth |
+| 2 | POST | /api/auth/login | S02 Step 1 | auth |
+| 3 | GET | /api/projects | S04 Step 1 | projects |
+```
+
+### Step 3: Group by Domain
+
+Group endpoints by business domain, with each group corresponding to a YAML file:
+
+- `auth.yaml` — Authentication-related (registration, login, logout, password reset)
+- `projects.yaml` — CRUD for core business entities
+- `billing.yaml` — Payment and subscriptions
+
+Grouping principles:
+- Operations on the same data entity go together
+- Authentication/authorization is a separate group
+- Third-party service callbacks (e.g., payment callbacks) go under the corresponding business domain
+
+### Step 4: Design Unified Conventions
+
+Before generating specific endpoints, establish global conventions:
+
+**Authentication scheme** (read from architecture overview):
+
+```yaml
+components:
+  securitySchemes:
+    bearerAuth:
+      type: http
+      scheme: bearer
+      bearerFormat: JWT
+```
+
+**Unified error response**:
+
+```yaml
+components:
+  schemas:
+    ErrorResponse:
+      type: object
+      required: [code, message]
+      properties:
+        code:
+          type: string
+          description: "Machine-readable error code (e.g., EMAIL_EXISTS)"
+        message:
+          type: string
+          description: "Human-readable error description"
+        details:
+          type: object
+          description: "Additional error information (e.g., field-level validation errors)"
+```
+
+**Pagination parameters** (applicable to list endpoints):
+
+```yaml
+parameters:
+  - name: page
+    in: query
+    schema: { type: integer, minimum: 1, default: 1 }
+  - name: per_page
+    in: query
+    schema: { type: integer, minimum: 1, maximum: 100, default: 20 }
+```
+
+### Step 5: Design Detailed Specification per Endpoint
+
+Design a complete OpenAPI specification for each endpoint, **output by domain**, pausing after each domain for user review.
+
+Each endpoint must include:
+- `operationId`: Unique identifier for code generation
+- `summary`: One-sentence description
+- `description`: Annotate the source sequence diagram step (e.g., `Source: S01 Step 2 → Step 3`)
+- `requestBody`: Schema including all fields (with required, types, validation rules such as minLength/format)
+- `responses`: Cover the normal response + all known exceptions (extracted from EX use cases in sequence diagrams)
+
+**Example**:
+
+```yaml
+paths:
+  /api/auth/register:
+    post:
+      operationId: register
+      summary: "User registration"
+      description: "Source: S01 Step 2 → Step 3"
+      requestBody:
+        required: true
+        content:
+          application/json:
+            schema:
+              type: object
+              required: [email, password]
+              properties:
+                email: { type: string, format: email }
+                password: { type: string, minLength: 8 }
+      responses:
+        '201':
+          description: "Registration successful, verification email sent"
+          content:
+            application/json:
+              schema:
+                type: object
+                properties:
+                  userId: { type: string, format: uuid }
+                  message: { type: string }
+        '409':
+          description: "Email already registered (EX-2.1)"
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/ErrorResponse'
+        '422':
+          description: "Request parameter validation failed"
+          content:
+            application/json:
+              schema:
+                $ref: '#/components/schemas/ErrorResponse'
+```
+
+### Step 6: Verify Traceability Completeness
+
+After output is complete, perform a traceability check:
+
+1. **Forward check**: Every cross-system arrow in the sequence diagrams has a corresponding API endpoint
+2. **Reverse check**: Every API endpoint's `description` annotates the source Step
+3. **Exception coverage**: Every EX use case in the sequence diagrams has a corresponding HTTP error response
+
+If gaps are found, supplement them before outputting the final version.
+
+## Output Specification
+
+- File format: OpenAPI 3.1 YAML
+- Storage location: `logos/resources/api/`
+- Split by domain: `auth.yaml`, `projects.yaml`, `billing.yaml`
+- Each file contains complete `openapi`, `info`, `paths`, and `components` sections
+- Error responses uniformly reference `$ref: '#/components/schemas/ErrorResponse'`
+- Every endpoint's `description` must annotate the source sequence diagram step
+
+## YAML Formatting Rules (MUST Follow)
+
+YAML is whitespace- and character-sensitive. AI-generated YAML frequently breaks due to unquoted special characters. **Strictly follow these rules:**
+
+1. **Always double-quote `description` and `summary` values** — any string containing `:`, `→`, `#`, `&`, `*`, `!`, `>`, `|`, `%`, `@`, `` ` ``, `{`, `}`, `[`, or `]` MUST be wrapped in `"..."`.
+   ```yaml
+   # ❌ WRONG — colon + arrow breaks YAML parsing
+   description: Source: S05 Step 1 → Step 4.
+
+   # ✅ CORRECT
+   description: "Source: S05 Step 1 → Step 4."
+   ```
+2. **Always quote response status code keys** — use `'201'`, not `201`, to prevent YAML interpreting them as integers.
+3. **Self-check after generation** — after generating each YAML file, mentally re-parse it to verify no unquoted special characters exist. Pay special attention to `description` fields that reference scenario steps (they always contain `:`).
+4. **When in doubt, quote it** — quoting a safe string is harmless; leaving a dangerous string unquoted breaks the entire file.
+
+## Best Practices
+
+- **APIs emerge from sequence diagrams**: If an API cannot be traced back to a sequence diagram, it most likely should not exist. Design sequence diagrams first, then APIs — not the other way around
+- **Path naming**: RESTful style, use plural nouns, `/api/{resource}`
+- **Version prefix**: Do not add a version prefix initially (`/api/auth/register`); add `/api/v2/` when versioning becomes necessary
+- **Status code semantics**: Strictly follow HTTP status code semantics — 200 success, 201 created, 400 bad request, 401 unauthorized, 403 forbidden, 404 not found, 409 conflict, 422 validation failed, 500 server error
+- **Idempotent design**: PUT/DELETE operations must be idempotent
+- **Sensitive data**: Do not include plaintext sensitive information such as passwords or tokens in responses
+- **Output by domain**: Do not output all endpoints at once — output in batches by domain, letting the user review each batch before continuing
+- **Consistent field naming**: Field names in the API should be consistent with column names in the subsequent DB design (or have explicit mapping rules) to avoid unnecessary field transformations in the code layer
+
+## Recommended Prompts
+
+The following prompts can be copied directly for use with the AI:
+
+- `Help me design APIs`
+- `Generate OpenAPI YAML based on the sequence diagrams`
+- `Help me design the API specifications related to S01`
+- `Help me extract all cross-system calls from the sequence diagrams into APIs`
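The extract-and-dedup pass in the api-designer skill's Step 2 can be sketched in a few lines. The arrow syntax (`A->>B: METHOD /path`) and the "domain = first segment after `/api/`" heuristic are assumptions about how the Mermaid diagrams are written, not conventions this SKILL.md fixes.

```python
import re

# Matches a sequence-diagram arrow whose message text starts with an
# HTTP method and path, e.g. `Web->>API: POST /api/auth/login`.
ARROW = re.compile(r"->>?.*?:\s*(GET|POST|PUT|PATCH|DELETE)\s+(/\S+)")

def extract_inventory(diagrams):
    """diagrams: {scenario_id: mermaid_source} → deduped endpoint list."""
    inventory = {}  # (method, path) -> list of source scenario ids
    for sid, source in diagrams.items():
        for method, path in ARROW.findall(source):
            inventory.setdefault((method, path), []).append(sid)
    return [
        {"method": m, "path": p, "sources": srcs,
         # Domain = first path segment after /api/, per Step 3 grouping.
         "domain": p.split("/")[2] if p.startswith("/api/") else "misc"}
        for (m, p), srcs in inventory.items()
    ]
```

Because duplicates are merged by `(method, path)` key, an endpoint that appears in several scenarios (the skill's `POST /api/auth/login` in S02 and S03 example) yields one inventory row with both sources listed.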
+++ package/claude-plugin-template/skills/architecture-designer/SKILL.md
@@ -0,0 +1,186 @@
+---
+name: architecture-designer
+description: "Design technical architecture and select technology stack. Use when product design exists in logos/resources/prd/2-product-design/ but logos/resources/prd/3-technical-plan/1-architecture/ is empty."
+---
+
+# Skill: Architecture Designer
+
+> Before diving into per-scenario technical implementation, establish the project's technical global view — system architecture, technology selection, deployment topology, and non-functional constraints. Ensure that subsequent sequence diagrams, API designs, and code generation all proceed under consistent architectural constraints.
+
+## Trigger Conditions
+
+- User requests designing technical architecture, making technology selections, or planning system architecture
+- User mentions "Phase 3 Step 0", "architecture design", "technical plan"
+- Phase 2 product design documents are complete, and Phase 3 needs to begin
+- User wants to determine the tech stack or deployment strategy
+
+## Core Capabilities
+
+1. Read Phase 1 requirements documents and Phase 2 product design documents to understand the full product picture
+2. Based on product complexity and scenario characteristics, recommend suitable system architectures
+3. Provide selection rationale and alternative comparisons for each technology choice
+4. Draw system architecture diagrams (Mermaid) and deployment topology diagrams
+5. Update the `tech_stack` field in `logos-project.yaml`
+
+## Integration with Phase 1/2
+
+Architecture design is the bridge from Phase 2 (product design) to Phase 3 (technical implementation). Its inputs come from Phase 1/2, and its outputs influence all subsequent steps in Phase 3:
+
+| Input (from Phase 1/2) | Output (influences subsequent Phase 3 steps) |
+|------------------------|----------------------------------------------|
+| Scenario list and complexity | System boundary definition → sequence diagram participants |
+| Non-functional requirements (performance, security) | Technology selection constraints → API design decisions |
+| Product interaction type (Web/Mobile/API) | Frontend tech stack → prototype implementation approach |
+| Data volume and access patterns | Database selection → DB design |
+| Third-party service dependencies (payment, email, etc.) | Integration approach → external participants in sequence diagrams |
+
+## Execution Steps
+
+### Step 1: Understand the Full Product Picture
+
+Read the following documents to build an overall understanding of the project:
+
+- **Requirements Document** (Phase 1): Product positioning, core scenarios, constraints and boundaries
+- **Product Design Document** (Phase 2): Information architecture, page structure, interaction complexity
+- **Existing `logos-project.yaml`**: Whether there are initial selections in the current `tech_stack`
+
+Key points to extract:
+- Number and complexity of core scenarios
+- Whether there are real-time requirements (WebSocket, SSE)
+- Whether there are background tasks (scheduled tasks, message queues)
+- List of third-party service dependencies
+- Expected user scale
+
+### Step 2: Determine System Architecture
+
+Choose an architecture pattern based on product complexity:
+
+**Simple Projects** (personal SaaS, utility products):
+- Monolithic architecture + single database
+- Architecture overview can be a paragraph of text + a simple diagram
+
+**Medium Projects** (team SaaS, multi-role systems):
+- Frontend-backend separation + monolithic backend + single database
+- May need auxiliary services such as object storage and caching
+
+**Complex Projects** (multi-service, high-concurrency, multi-platform):
+- Microservices / modular monolith
+- Requires detailed Architecture Decision Records (ADRs)
+
+Draw the system architecture diagram using Mermaid:
+
+```mermaid
+graph TB
+    subgraph Frontend
+        Web[Web App - Next.js]
+    end
+    subgraph Backend
+        API[API Server - Node.js]
+        Worker[Background Worker]
+    end
+    subgraph Data
+        DB[(PostgreSQL)]
+        Cache[(Redis)]
+        S3[Object Storage]
+    end
+    subgraph External
+        Auth[Supabase Auth]
+        Email[SendGrid]
+    end
+
+    Web -->|REST API| API
+    API --> DB
+    API --> Cache
+    API --> S3
+    API --> Auth
+    Worker --> DB
+    Worker --> Email
+```
+
+### Step 3: Technology Selection
+
+Provide a selection and rationale for each technology dimension:
+
+```markdown
+| Dimension | Selection | Rationale | Alternatives |
+|-----------|-----------|-----------|--------------|
+| Language | TypeScript | Unified frontend/backend, type safety | Go (when performance is priority) |
+| Frontend Framework | Next.js 15 | SSR + RSC, mature ecosystem | Astro (content sites), Nuxt (Vue ecosystem) |
+| Backend Framework | Hono | Lightweight, edge-first, native TS | Express (ecosystem), Fastify (performance) |
+| Database | PostgreSQL | Feature-rich, JSONB, RLS | MySQL (simple scenarios) |
+| Authentication | Supabase Auth | Out-of-the-box, RLS integration | NextAuth (self-hosted) |
+| Deployment | Vercel + Supabase | Zero-ops, auto-scaling | AWS (full control) |
+```
+
+**Selection Principles**:
+- Prefer technologies the team is already familiar with
+- When there is no significant difference, choose the option with the larger community
+- Selection rationale must be linked to specific product requirements or constraints
+
+### Step 4: Non-Functional Constraints
+
+Define key non-functional requirements:
+
+- **Performance Targets**: Core API response time, page load time
+- **Security Requirements**: Authentication method, data encryption, CORS policy
+- **Scalability**: Expected user scale, data growth estimates
+- **Observability**: Logging, monitoring, alerting strategy
+- **Developer Experience**: Local development environment, CI/CD pipeline
+
+### Step 5: External Dependencies and Test Strategies
+
+Catalog all external service dependencies for the project and determine the isolation strategy for each dependency during orchestration testing. The output of this step directly impacts whether Phase 3 Step 3 (orchestration testing) can be executed smoothly.
+
+1. Identify external dependencies from the architecture diagram and sequence diagram participants (email, SMS, verification codes, payment, OAuth, etc.)
+2. Confirm the test strategy for each dependency with the user
+
+Available test strategies:
+
+| Strategy | Description | Typical Scenario |
+|----------|-------------|------------------|
+| `test-api` | Test environment provides a backdoor API | Email/SMS verification codes |
+| `fixed-value` | Specific test data uses fixed values | Fixed verification code for test phone numbers |
+| `env-disable` | Environment variable disables the feature | CAPTCHA, slider verification |
+| `mock-callback` | Orchestration actively calls a simulated callback | Payment callbacks, Webhooks |
+| `mock-service` | Local mock service as replacement | OAuth Provider |
+
+If the project has no external service dependencies (e.g., a pure CLI tool), this step can be skipped.
+
+### Step 6: Update logos-project.yaml
+
+Write the confirmed technology selections into the `tech_stack` field of `logos-project.yaml`, and write external dependencies and test strategies into the `external_dependencies` field, ensuring that all subsequent Skills and AI tools read a unified tech stack and consistent testing conventions.
+
+```yaml
+external_dependencies:
+  - name: "Email Service"
+    provider: "SendGrid"
+    used_in: ["S01-User Registration", "S03-Forgot Password"]
+    test_strategy: "test-api"
+    test_config: "GET /api/test/latest-email?to={email}"
+```
+
+## Output Specification
+
+- Architecture overview document: `logos/resources/prd/3-technical-plan/1-architecture/01-architecture-overview.md`
+- Architecture diagrams use Mermaid format
+- Technology selections use table format; each item must include rationale
+- Update the `tech_stack` and `external_dependencies` fields in `logos-project.yaml`
+- Simple projects may produce streamlined output (not all sections are mandatory)
+
+## Best Practices
+
+- **Don't over-engineer**: For a solo developer building SaaS, monolith + PostgreSQL + Vercel is sufficient — don't jump straight to microservices
+- **Selection rationale matters more than the selection itself**: Documenting "why X was chosen" is more valuable than "X was chosen", because the rationale needs to be re-evaluated as the project evolves
+- **Architecture diagrams are prerequisites for sequence diagrams**: System components in the architecture diagram become participants in subsequent sequence diagrams — the two must be consistent
+- **tech_stack is the AI's anchor**: Subsequent AI code generation reads `tech_stack` from `logos-project.yaml` — inaccurate selections will result in unusable generated code
+- **Start loose with non-functional constraints, tighten later**: Don't set overly strict performance targets initially; tighten them as real data becomes available
+- **Test strategies must be decided during the architecture phase**: If test approaches for verification codes, payments, and other external dependencies are left until orchestration testing, you'll often find that no backdoor APIs were provisioned, making fully automated orchestration tests impossible
+
+## Recommended Prompts
+
+The following prompts can be copied directly for use with the AI:
+
+- `Help me design the technical architecture`
+- `Based on the product design, help me make technology selections`
+- `Help me draw the system architecture diagram`
+- `Help me determine the tech stack and update logos-project.yaml`
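The strategy table in the architecture-designer skill's Step 5 can be encoded as a small lookup that proposes a default per dependency. The strategy names come from the table above; the keyword heuristics and the `fixed-value` fallback are illustrative assumptions, not part of OpenLogos itself.

```python
# Keyword → suggested isolation strategy, mirroring the Step 5 table.
SUGGESTED_STRATEGY = {
    "email": "test-api",         # backdoor API to read the latest email
    "sms": "test-api",           # same for SMS verification codes
    "captcha": "env-disable",    # switch the check off in the test env
    "payment": "mock-callback",  # orchestration fires the callback itself
    "webhook": "mock-callback",
    "oauth": "mock-service",     # local stand-in provider
}

def suggest_strategy(dependency_name: str) -> str:
    """Propose a default test strategy; the user still confirms it."""
    name = dependency_name.lower()
    for keyword, strategy in SUGGESTED_STRATEGY.items():
        if keyword in name:
            return strategy
    return "fixed-value"  # fallback: pin test data to known values
```

A suggestion like this only seeds the conversation — per the skill, each strategy must still be confirmed with the user before it lands in `external_dependencies`.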
@@ -0,0 +1,160 @@
|
|
|
1
|
+
---
|
|
2
|
+
name: change-writer
|
|
3
|
+
description: "Write change proposals with impact analysis following OpenLogos delta workflow. Use when the project lifecycle is active and source code or methodology documents need modification."
|
|
4
|
+
---
|
|
5
|
+
|
|
6
|
+
# Skill: Change Writer
|
|
7
|
+
|
|
8
|
+
> Assist in writing change proposals — analyze the scope of change impact, generate a structured proposal.md and a phase-based tasks.md, ensuring changes are traceable and impact is controllable.
|
|
9
|
+
|
|
10
|
+
## Trigger Conditions
|
|
11
|
+
|
|
12
|
+
- User has just run `openlogos change <slug>` and wants AI help filling in the proposal
|
|
13
|
+
- User describes a need to modify, add, or remove a scenario/feature
|
|
14
|
+
- User mentions "change proposal", "iteration", "requirement change"
|
|
15
|
+
|
|
16
|
+
## Prerequisites
|
|
17
|
+
|
|
18
|
+
1. Project is initialized (`logos/logos.config.json` exists)
|
|
19
|
+
2. Change proposal directory has been created by CLI (`logos/changes/<slug>/` exists)
|
|
20
|
+
3. Main documents are readable (effective documents exist in `logos/resources/`)
|
|
21
|
+
|
|
22
|
+
If prerequisites are not met, prompt the user to run `openlogos change <slug>` to create the proposal directory first.
|
|
23
|
+
|
|
24
|
+
## Core Capabilities
|
|
25
|
+
|
|
26
|
+
1. Understand the user's intended change
|
|
27
|
+
2. Scan existing documents in `logos/resources/` to identify the affected scope
|
|
28
|
+
3. Determine the change type based on change propagation rules (Requirement-level / Design-level / Interface-level / Code-level)
|
|
29
|
+
4. Generate a compliant proposal.md
|
|
30
|
+
5. Automatically break down tasks.md by change type
|
|
31
|
+
|
|
32
|
+
## Execution Steps
|
|
33
|
+
|
|
34
|
+
### Step 1: Understand the Change Intent
|
|
35
|
+
|
|
36
|
+
Confirm the following information with the user (ask follow-up questions if insufficient, up to 2 rounds):
|
|
37
|
+
|
|
38
|
+
- **What is the change**: What needs to be added, modified, or removed?
|
|
39
|
+
- **Reason for the change**: Why is this change needed? Is it from requirement feedback, a bug, or an optimization?
|
|
40
|
+
- **Related scenarios**: Which existing scenario IDs are involved (S01, S02...)?
|
|
41
|
+
|
|
42
|
+
### Step 2: Analyze the Impact Scope
|
|
43
|
+
|
|
44
|
+
Scan documents in `logos/resources/` to determine the impact scope:
|
|
45
|
+
|
|
46
|
+
1. Read requirement documents (`prd/1-product-requirements/`) to check related scenario definitions
|
|
47
|
+
2. Read product design (`prd/2-product-design/`) to check related functional specs and prototypes
|
|
48
|
+
3. Read technical plans (`prd/3-technical-plan/`) to check related sequence diagrams
|
|
49
|
+
4. Read API documents (`api/`) to check related endpoints
|
|
50
|
+
5. Read DB documents (`database/`) to check related table structures
|
|
51
|
+
6. Read orchestration tests (`scenario/`) to check related test cases
|
|
52
|
+
|
|
53
|
+
### Step 3: Determine the Change Type
|
|
54
|
+
|
|
55
|
+
Refer to change propagation rules to determine the change type and minimum update scope:
|
|
56
|
+
|
|
57
|
+
| Change Type | Minimum Updates Required |
|
|
58
|
+
|-------------|------------------------|
|
|
59
|
+
| Requirement-level change | Full chain (Requirements → Design → Architecture → API/DB → Orchestration → Code) |
|
|
60
|
+
| Design-level change | Prototypes + Scenarios + API/DB + Orchestration + Code |
|
|
61
|
+
| Interface-level change | API/DB + Orchestration + Code |
|
|
62
|
+
| Code-level fix | Code + Re-verification |
|
|
63
|
+
|
|
64
|
+
### Step 4: Generate proposal.md
|
|
65
|
+
|
|
66
|
+
Generate using the following template and write to `logos/changes/<slug>/proposal.md`:
|
|
67
|
+
|
|
68
|
+
```markdown
|
|
69
|
+
# Change Proposal: [Change Name]
|
|
70
|
+
|
|
71
|
+
## Reason for Change
|
|
72
|
+
[Why is this change needed? What requirement/feedback/bug does it originate from?]
|
|
73
|
+
|
|
74
|
+
## Change Type
|
|
75
|
+
[Requirement-level / Design-level / Interface-level / Code-level]
|
|
76
|
+
|
|
77
|
+
## Change Scope
|
|
78
|
+
- Affected requirement documents: [List, down to filename and section]
|
|
79
|
+
- Affected functional specs: [List]
|
|
80
|
+
- Affected business scenarios: [Scenario ID list]
|
|
81
|
+
- Affected APIs: [Endpoint list]
|
|
82
|
+
- Affected DB tables: [Table name list]
|
|
83
|
+
- Affected orchestration tests: [List]
|
|
84
|
+
|
|
85
|
+
## Change Summary
|
|
86
|
+
[Describe in 1-3 paragraphs what specifically will change]
|
|
87
|
+
```
|
|
88
|
+
|
|
89
|
+
### Step 5: Generate tasks.md

Automatically break down the task checklist based on the change type and impact scope. Only list the phases that need updating:

```markdown
# Implementation Tasks

## Phase 1: Document Changes
- [ ] Update acceptance criteria for S0x in requirement documents
- [ ] Add/modify scenario in the scenario overview table

## Phase 2: Design Changes
- [ ] Update interaction design for S0x in functional specs
- [ ] Update prototypes

## Phase 3: Technical Changes
- [ ] Update sequence diagram for S0x
- [ ] Update API YAML
- [ ] **Validate API YAML**: all files in `logos/resources/api/` must be valid YAML and valid OpenAPI 3.x (all `description`/`summary` values containing `:` or special characters must be double-quoted)
- [ ] Update DB DDL
- [ ] Update orchestration test cases
- [ ] Implement code changes
```

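The **Validate API YAML** item above most often trips on unquoted colons. A minimal illustration, using an invented endpoint rather than a real spec:

```yaml
paths:
  /auth/login:
    post:
      # Unquoted, this line would fail to parse: a plain YAML scalar
      # cannot contain ": ", so `summary: Login: create a session` is invalid.
      summary: "Login: create a session"
      description: "Returns 401 on bad credentials; quoting keeps special characters safe"
```
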
### Step 6: Guide Follow-up Actions (Chain-driven)

Provide a ready-to-use prompt that lets the user kick off chain execution of all tasks with a single command:

- **Requirement-level / Design-level changes** (multiple tasks): Suggest the user say "Follow tasks.md and help me progressively update all affected documents for S0x"
- **Code-level fixes** (fewer tasks): Suggest the user say "Help me fix the [issue description] for S0x and re-verify"

Chain execution behavior rules:
1. AI reads `tasks.md` and executes items sequentially
2. **After completing each task, immediately update that item in `tasks.md` from `[ ]` to `[x]`** (AI does this proactively; no user reminder needed)
3. After completing each task, report a summary of changes and automatically prompt "Continue to the next item?"
4. After the user says "Continue" or provides adjustments, proceed to the next item
5. After all tasks are completed, remind the user to explicitly authorize running `openlogos merge <slug>`

**Key principle**: Do not make the user manually track the task checklist; AI should proactively drive the process.

**`openlogos merge` and `openlogos archive` are human confirmation points**:
- AI must not execute these commands without explicit user authorization
- When the user explicitly requests execution (including via the `/openlogos:merge` or `/openlogos:archive` slash commands), AI may execute them
- They must not be triggered implicitly by phrases like "continue", "finish up", or "follow the process"

AI is only responsible for driving content modifications and must not advance the proposal state without explicit authorization.

## Output Specification

- File format: Markdown
- Storage location: `logos/changes/<slug>/`
- Filenames: `proposal.md` and `tasks.md` (overwrite the CLI-generated templates)

## Best Practices

- **Overestimate the impact scope**: Missing an update in one link is more dangerous than double-checking
- **Change type determines workload**: Help users understand before they start that changing one requirement may require a full-chain update
- **tasks.md is the execution checklist**: Check off each item with `[x]` upon completion for easy progress tracking
- **Follow the process even for small changes**: A change that appears to be "just one API line" may affect orchestration tests and code

## Recommended Prompts

The following prompts can be copied directly for use with AI:

**Fill in proposal**:
- `Help me fill in the change proposal <slug>`
- `I want to add a "remember password" feature to the S02 login scenario, help me analyze the impact scope`
- `This bug fix only involves the code layer, help me quickly write a proposal`

**Execute tasks (after proposal is completed)**:
- `Follow tasks.md and help me progressively update all affected documents for S02`
- `Help me fix the 500 error on the S02 login endpoint and re-verify`