@miniidealab/openlogos 0.9.4 → 0.9.6

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (50)
  1. package/README.md +1 -0
  2. package/claude-plugin-template/.claude-plugin/plugin.json +13 -0
  3. package/claude-plugin-template/agents/change-reviewer.md +46 -0
  4. package/claude-plugin-template/bin/openlogos-phase +428 -0
  5. package/claude-plugin-template/commands/archive.md +22 -0
  6. package/claude-plugin-template/commands/change.md +19 -0
  7. package/claude-plugin-template/commands/index.md +13 -0
  8. package/claude-plugin-template/commands/init.md +30 -0
  9. package/claude-plugin-template/commands/launch.md +16 -0
  10. package/claude-plugin-template/commands/merge.md +20 -0
  11. package/claude-plugin-template/commands/next.md +18 -0
  12. package/claude-plugin-template/commands/status.md +10 -0
  13. package/claude-plugin-template/commands/sync.md +12 -0
  14. package/claude-plugin-template/commands/verify.md +17 -0
  15. package/claude-plugin-template/hooks/hooks.json +14 -0
  16. package/claude-plugin-template/skills/api-designer/SKILL.md +230 -0
  17. package/claude-plugin-template/skills/architecture-designer/SKILL.md +186 -0
  18. package/claude-plugin-template/skills/change-writer/SKILL.md +160 -0
  19. package/claude-plugin-template/skills/code-implementor/SKILL.md +116 -0
  20. package/claude-plugin-template/skills/code-reviewer/SKILL.md +214 -0
  21. package/claude-plugin-template/skills/db-designer/SKILL.md +259 -0
  22. package/claude-plugin-template/skills/merge-executor/SKILL.md +118 -0
  23. package/claude-plugin-template/skills/prd-writer/SKILL.md +203 -0
  24. package/claude-plugin-template/skills/product-designer/SKILL.md +235 -0
  25. package/claude-plugin-template/skills/project-init/SKILL.md +168 -0
  26. package/claude-plugin-template/skills/scenario-architect/SKILL.md +229 -0
  27. package/claude-plugin-template/skills/test-orchestrator/SKILL.md +147 -0
  28. package/claude-plugin-template/skills/test-writer/SKILL.md +252 -0
  29. package/dist/commands/init.d.ts +7 -0
  30. package/dist/commands/init.d.ts.map +1 -1
  31. package/dist/commands/init.js +145 -10
  32. package/dist/commands/init.js.map +1 -1
  33. package/dist/commands/module.d.ts.map +1 -1
  34. package/dist/commands/module.js +14 -9
  35. package/dist/commands/module.js.map +1 -1
  36. package/dist/commands/sync.d.ts.map +1 -1
  37. package/dist/commands/sync.js +15 -1
  38. package/dist/commands/sync.js.map +1 -1
  39. package/dist/commands/verify.d.ts +3 -1
  40. package/dist/commands/verify.d.ts.map +1 -1
  41. package/dist/commands/verify.js +60 -28
  42. package/dist/commands/verify.js.map +1 -1
  43. package/dist/i18n.d.ts.map +1 -1
  44. package/dist/i18n.js +10 -4
  45. package/dist/i18n.js.map +1 -1
  46. package/dist/index.js +0 -0
  47. package/package.json +5 -4
  48. package/skills/test-writer/SKILL.md +21 -1
  49. package/spec/cli-json-output.md +2 -1
  50. package/spec/test-results.md +16 -0
@@ -0,0 +1,168 @@
1
+ ---
2
+ name: project-init
3
+ description: "Initialize OpenLogos project structure and directory layout. Use when the user wants to set up a new project with openlogos init or needs help with project initialization."
4
+ ---
5
+
6
+ # Skill: Project Init
7
+
8
+ > Initialize a project structure following the OpenLogos methodology, generating configuration files, AI instruction files, and standard directories.
9
+
10
+ ## Trigger Conditions
11
+
12
+ - User requests creating a new project or initializing a project structure
13
+ - User mentions "openlogos init" or "initialize project"
14
+ - No `logos/logos.config.json` exists in the current directory
15
+
16
+ ## Core Capabilities
17
+
18
+ 1. Create the `logos/` directory and its standard substructure
19
+ 2. Generate the `logos/logos.config.json` configuration file
20
+ 3. Generate the `logos/logos-project.yaml` AI collaboration index
21
+ 4. Generate `AGENTS.md` / `CLAUDE.md` AI instruction files (in the root directory)
22
+ 5. Create the `logos/changes/` change management directory
23
+
24
+ ## Execution Steps
25
+
26
+ ### Step 1: Gather Project Information
27
+
28
+ Confirm the following information with the user:
29
+
30
+ - **Project name**: Used for the `name` field in `logos/logos.config.json`
31
+ - **Project description**: A one-sentence description
32
+ - **Tech stack**: Main framework, language, database, deployment platform
33
+ - **Document modules**: Whether additional modules are needed beyond the default prd/api/scenario/database
34
+
35
+ If the user does not provide these, use reasonable defaults.
36
+
37
+ ### Step 2: Create Directory Structure
38
+
39
+ ```
40
+ project-root/
41
+ └── logos/
42
+ ├── resources/
43
+ │ ├── prd/
44
+ │ │ ├── 1-product-requirements/
45
+ │ │ ├── 2-product-design/
46
+ │ │ │ ├── 1-feature-specs/
47
+ │ │ │ └── 2-page-design/
48
+ │ │ └── 3-technical-plan/
49
+ │ │ ├── 1-architecture/
50
+ │ │ └── 2-scenario-implementation/
51
+ │ ├── api/
52
+ │ ├── database/
53
+ │ └── scenario/
54
+ └── changes/
55
+ ```
56
+
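The layout above can be scaffolded in a few lines of Node code. A minimal sketch (the `scaffold` helper and the directory list are illustrative, not the actual `openlogos init` implementation):

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Standard substructure created during init; every leaf directory gets a
// .gitkeep so that empty directories survive in git.
const LOGOS_DIRS = [
  "logos/resources/prd/1-product-requirements",
  "logos/resources/prd/2-product-design/1-feature-specs",
  "logos/resources/prd/2-product-design/2-page-design",
  "logos/resources/prd/3-technical-plan/1-architecture",
  "logos/resources/prd/3-technical-plan/2-scenario-implementation",
  "logos/resources/api",
  "logos/resources/database",
  "logos/resources/scenario",
  "logos/changes",
];

function scaffold(root: string): string[] {
  return LOGOS_DIRS.map((dir) => {
    const abs = path.join(root, dir);
    fs.mkdirSync(abs, { recursive: true });
    fs.writeFileSync(path.join(abs, ".gitkeep"), "");
    return abs;
  });
}
```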
57
+ ### Step 3: Generate logos/logos.config.json
58
+
59
+ ```json
60
+ {
61
+ "name": "{project name}",
62
+ "description": "{project description}",
63
+ "documents": {
64
+ "prd": {
65
+ "label": { "en": "Product Docs", "zh": "产品文档" },
66
+ "path": "./resources/prd",
67
+ "pattern": "**/*.{md,html,htm,pdf}"
68
+ },
69
+ "api": {
70
+ "label": { "en": "API Docs", "zh": "API 文档" },
71
+ "path": "./resources/api",
72
+ "pattern": "**/*.{yaml,yml,json}"
73
+ },
74
+ "scenario": {
75
+ "label": { "en": "Scenarios", "zh": "业务场景" },
76
+ "path": "./resources/scenario",
77
+ "pattern": "**/*.json"
78
+ },
79
+ "database": {
80
+ "label": { "en": "Database", "zh": "数据库" },
81
+ "path": "./resources/database",
82
+ "pattern": "**/*.sql"
83
+ }
84
+ }
85
+ }
86
+ ```
87
+
88
+ > The `path` field is relative to the directory where `logos.config.json` itself resides (i.e., `logos/`), so `./resources/prd` points to `logos/resources/prd`.
89
+
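This resolution rule can be expressed directly with Node's `path` module (a sketch; `resolveDocPath` is an illustrative name, not a function exported by the package):

```typescript
import * as path from "node:path";

// Resolve a document module's `path` against the directory that contains
// logos.config.json, per the rule in the note above.
function resolveDocPath(configFile: string, docPath: string): string {
  return path.resolve(path.dirname(configFile), docPath);
}

// On POSIX: resolveDocPath("/repo/logos/logos.config.json", "./resources/prd")
// returns "/repo/logos/resources/prd".
```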
90
+ ### Step 4: Generate logos/logos-project.yaml
91
+
92
+ ```yaml
93
+ project:
94
+ name: "{project name}"
95
+ description: "{project description}"
96
+ methodology: "OpenLogos"
97
+
98
+ tech_stack:
99
+ framework: "{framework provided by user}"
100
+ language: "{language provided by user}"
101
+ # ... populate based on user-provided information
102
+
103
+ resource_index: []
104
+ # Initially empty; entries are added incrementally as documents are produced
105
+
106
+ conventions:
107
+ - "Follow the OpenLogos three-layer progression model (Why → What → How)"
108
+ - "Every change must first create a change proposal in logos/changes/"
109
+ ```
110
+
111
+ ### Step 5: Generate AGENTS.md (Root Directory)
112
+
113
+ Generate AGENTS.md based on the content from Step 3 and Step 4, placed in the **project root directory**:
114
+
115
+ ```markdown
116
+ # AI Assistant Instructions
117
+
118
+ This project follows the **OpenLogos** methodology.
119
+ Read `logos/logos-project.yaml` first to understand the project resource index.
120
+
121
+ ## Project Context
122
+ - Config: `logos/logos.config.json`
123
+ - Resource Index: `logos/logos-project.yaml`
124
+ - Tech Stack: {extracted from logos-project.yaml}
125
+
126
+ ## Methodology Rules
127
+ 1. Never write code without first completing the design documents
128
+ 2. Follow the Why → What → How progression
129
+ 3. All API designs must originate from scenario sequence diagrams
130
+ 4. All code changes must have corresponding API orchestration tests
131
+ 5. Use the Delta change workflow for iterations (see logos/changes/ directory)
132
+
133
+ ## Conventions
134
+ {extracted from conventions in logos-project.yaml}
135
+ ```
136
+
137
+ Also generate `CLAUDE.md` with identical content to AGENTS.md.
138
+
139
+ ### Step 6: Output Initialization Report
140
+
141
+ Report to the user which files and directories were created, and provide next-step suggestions:
142
+
143
+ 1. Edit `logos/logos.config.json` to refine the project configuration
144
+ 2. Start with Phase 1: Write the requirements document
145
+ 3. Use the `prd-writer` Skill to assist with writing
146
+
147
+ ## Output Specification
148
+
149
+ - `logos/logos.config.json`: Valid JSON, conforming to `logos/spec/logos.config.schema.json`
150
+ - `logos/logos-project.yaml`: Valid YAML
151
+ - `AGENTS.md` / `CLAUDE.md`: Markdown format, located in the project root directory
152
+ - All directories under `logos/` are created; empty directories contain `.gitkeep`
153
+
154
+ ## Best Practices
155
+
156
+ - **Don't over-configure**: Keep configuration minimal during initialization; let users refine it gradually during use
157
+ - **resource_index starts empty**: Add entries as documents are produced, avoiding meaningless placeholder content
158
+ - **Keep AGENTS.md concise**: Only include project-specific information; use fixed templates for universal methodology rules
159
+ - **Prioritize creating the directory structure**: The `logos/` directory structure is the first step in adopting the methodology — more important than any document
160
+ - **Low intrusiveness**: All methodology assets are contained within `logos/`, keeping the project's own structure clean
161
+
162
+ ## Recommended Prompts
163
+
164
+ The following prompts can be copied directly for use with AI:
165
+
166
+ - `Help me initialize an OpenLogos project`
167
+ - `Initialize this project with OpenLogos, the project name is xxx`
168
+ - `Help me integrate OpenLogos into an existing project`
@@ -0,0 +1,229 @@
1
+ ---
2
+ name: scenario-architect
3
+ description: "Model business scenarios as detailed sequence diagrams with API calls. Use when architecture exists in 3-technical-plan/1-architecture/ but 3-technical-plan/2-scenario-implementation/ is empty. Scenarios must be complete user action paths, NOT single API calls."
4
+ ---
5
+
6
+ # Skill: Scenario Architect
7
+
8
+ > Expand business scenarios defined in Phase 1/2 into technical sequence diagrams, letting API design emerge naturally, and design comprehensive exception cases. Scenario numbering follows Phase 1 definitions.
9
+
10
+ ## Trigger Conditions
11
+
12
+ - User requests drawing sequence diagrams, designing business scenarios, or performing scenario modeling
13
+ - User mentions "Phase 3 Step 1", "scenario-driven", "technical plan design"
14
+ - Requirements documents and product design documents (with scenario definitions) already exist, ready to begin technical implementation
15
+ - User specifies a particular scenario number (e.g., S01) to be expanded into a sequence diagram
16
+
17
+ ## Core Capabilities
18
+
19
+ 1. Read scenario definitions and acceptance criteria from Phase 1/2 as input for sequence diagrams
20
+ 2. Draw Mermaid sequence diagrams for each scenario (strictly following numbering conventions)
21
+ 3. Write explanatory notes for key steps (explaining "why" rather than "what")
22
+ 4. Identify exception conditions for each step and design structured exception cases
23
+ 5. Generate scenario overview documents (scenario map + scenario index)
24
+
25
+ ## Linking with Phase 1/2
26
+
27
+ **Phase 3 does not identify scenarios from scratch.** Scenarios are defined in Phase 1 (`S01`, `S02`...), interaction flows are refined in Phase 2, and technical architecture and technology choices are established in Step 0. The job of Phase 3 Step 1 is to expand the same scenario from an "interaction perspective" into a "technical perspective". The participants in the sequence diagram should align with the system components in the architecture diagram:
28
+
29
+ | Input (from Phase 1/2) | Output (Phase 3) |
30
+ |------------------------|----------------|
31
+ | Scenario number and name | Sequence diagram title retains the number |
32
+ | Trigger conditions | Starting arrow of the sequence diagram |
33
+ | Main path description | Step sequence in the sequence diagram |
34
+ | GIVEN/WHEN/THEN (business level) | Behavior description of sequence diagram steps |
35
+ | Exception acceptance criteria | EX exception cases (technical level) |
36
+ | Pages and interactions involved (Phase 2) | Participant identification |
37
+
38
+ ## Execution Steps
39
+
40
+ ### Step 1: Load Scenario Context
41
+
42
+ Read scenario definitions from the Phase 1 requirements document, Phase 2 product design document, and Phase 3 Step 0 technical architecture summary. **Do not reinvent scenarios**—directly reuse existing numbers and descriptions. Participant naming should be consistent with the system components in the architecture diagram.
43
+
44
+ **Scenario Granularity Pre-Check (before drawing any sequence diagram):**
45
+
46
+ Before proceeding, verify that the scenario definitions from Phase 1 have proper granularity. Check for these anti-patterns:
47
+
48
+ - **One-API scenarios**: If a scenario's main path only has 1-2 steps (e.g., "Create Task" = just `POST /api/tasks`), it is too fine-grained
49
+ - **CRUD fragmentation**: If the scenario list contains separate "Create X", "Read X", "Update X", "Delete X" scenarios for the same entity, they should be merged into goal-driven scenarios
50
+ - **No business goal**: If a scenario's outcome is just "data was written/read", it lacks a real user goal
51
+
52
+ **If any anti-pattern is detected**: Stop and recommend returning to Phase 1 to re-organize scenarios by business goals before drawing sequence diagrams. Drawing sequence diagrams for overly fine-grained scenarios will propagate the problem to API design, test cases, and code.
53
+
54
+ Confirm the following for each scenario:
55
+ - **Scenario number**: Reuse Phase 1's `S01`, `S02`... (or Phase 2 sub-scenarios `S01.1`)
56
+ - **Participants**: Identify which system components are involved from Phase 2's interaction flows
57
+ - **Main path**: Extract the normal flow from Phase 1/2 acceptance criteria
58
+ - **Known exceptions**: Extract from Phase 1/2 exception acceptance criteria
59
+
60
+ ### Step 2: Draw Sequence Diagrams
61
+
62
+ Draw Mermaid sequence diagrams for each scenario, **strictly following these conventions**:
63
+
64
+ **Numbering conventions**:
65
+ - Every arrow must have a `Step N:` number prefix
66
+ - Numbering starts at 1 and increments consecutively
67
+ - Each arrow includes a one-line behavior description: `HTTP_METHOD /api/path — brief explanation`
68
+
69
+ **Participant conventions**:
70
+ - Use short aliases: `U` (User/Browser), `W` (Web/Frontend), `SB` (Supabase), `DB` (Database)
71
+ - Full name for each participant is noted in the `participant` declaration
72
+
73
+ **Scenario numbering conventions**:
74
+ - Document title format: `S01: Email Registration — Sequence Diagram`
75
+ - One scenario per file, numbered to correspond with Phase 1
76
+
77
+ **Format example**:
78
+
79
+ ```mermaid
80
+ sequenceDiagram
81
+ participant U as User/Browser
82
+ participant W as Web (Astro)
83
+ participant SB as Supabase Auth
84
+ participant DB as Supabase DB
85
+
86
+ U->>W: Step 1: POST /api/auth/register — Submit {email, password, referral_code?}
87
+ W->>SB: Step 2: supabase.auth.signUp(email, password, metadata) — Initiate user creation
88
+ SB-->>W: Step 3: Return {user, session} or error
89
+ W->>DB: Step 4: INSERT INTO profiles — Write user extended info
90
+ DB-->>W: Step 5: Return write result
91
+ W-->>U: Step 6: Return registration result + prompt to verify email
92
+ ```
93
+
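The numbering convention above is mechanical enough to lint. A hypothetical checker (not part of the OpenLogos CLI) that verifies the `Step N:` labels on a diagram's arrows start at 1 and increment consecutively:

```typescript
// Returns true when the diagram's Step labels are 1, 2, 3, ... with no gaps.
function checkStepNumbering(mermaid: string): boolean {
  const steps = [...mermaid.matchAll(/Step (\d+):/g)].map((m) => Number(m[1]));
  return steps.length > 0 && steps.every((n, i) => n === i + 1);
}
```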
94
+ ### Step 3: Write Step Narratives
95
+
96
+ After the sequence diagram, use a **consecutively numbered list** to write out all steps one by one, forming a linear narrative that humans can read fluently from start to finish.
97
+
98
+ **Format conventions**:
99
+
100
+ 1. **Every step must be written out**—no skipping, no omitting. Simple steps can be covered in one line; complex steps are elaborated below using `>` blockquote
101
+ 2. **Every step must have a clear subject**—the reader should never have to guess "who is doing this". Use the alias or full name from the participant table as the subject
102
+ 3. **Numbering strictly corresponds to the Step N in the sequence diagram**
103
+ 4. **Normal flow and exception cases are written separately**—normal flow comes first, exception cases follow. In the normal flow, only add `→ see EX-N.M` references after steps that trigger exceptions; do not expand exception content inline
104
+
105
+ **Normal flow format example**:
106
+
107
+ ````markdown
108
+ ## Step Descriptions
109
+
110
+ 1. **Developer** enters `openlogos init my-project` in the terminal.
111
+ 2. **CLI** checks whether `logos/logos.config.json` already exists. If it exists → see EX-2.1.
112
+ 3. **CLI** displays a language selection menu in the terminal (1. English / 2. 中文). If the terminal is non-TTY → see EX-3.1.
113
+
114
+ > Language selection is placed in the `init` phase (rather than global configuration) because this is the user's first interaction with OpenLogos, making it the most natural moment to confirm language preference.
115
+
116
+ 4. **Developer** selects a language (enters 1 or 2).
117
+ 5. **CLI** detects the project name from `package.json` / `Cargo.toml` / `pyproject.toml` / directory name. If the user-provided name conflicts with the config file name → see EX-5.1.
118
+
119
+ > Priority chain: CLI argument > package.json > Cargo.toml > pyproject.toml > directory name. Scoped names automatically strip the `@org/` prefix.
120
+
121
+ 6. **CLI** creates 11 directories in sequence (`logos/resources/prd/...` etc.), writing `.gitkeep` to each.
122
+ 7. **CLI** writes `logos/logos.config.json` (containing locale + 5 document module definitions).
123
+ 8. **CLI** writes `logos/logos-project.yaml` (containing empty tech_stack + conventions).
124
+ 9. **CLI** writes `AGENTS.md` and `CLAUDE.md` (containing Phase detection logic).
125
+ 10. **CLI** outputs the list of created files and next-step suggestions in the terminal.
126
+ ````
127
+
128
+ **Exception case format example**:
129
+
130
+ ````markdown
131
+ ## Exception Cases
132
+
133
+ ### EX-2.1: Project Already Initialized
134
+
135
+ - **Trigger condition**: Step 2 detects that `logos/logos.config.json` already exists
136
+ - **Expected response**: stderr outputs `Error: logos/logos.config.json already exists in current directory.`, exit(1)
137
+ - **Side effects**: No files created, existing configuration not overwritten
138
+
139
+ ### EX-3.1: Non-TTY Environment
140
+
141
+ - **Trigger condition**: Step 3 detects that `process.stdin.isTTY` is false (CI pipeline / piped input)
142
+ - **Expected response**: Skip language selection interaction, default to `locale = 'en'`
143
+ - **Side effects**: None, flow proceeds directly to Step 5
144
+
145
+ ### EX-5.1: Project Name Conflict
146
+
147
+ - **Trigger condition**: In Step 5, the user-provided `name` differs from the name in `package.json` (or other config files)
148
+ - **Expected response**: Display two options for the user to choose from; in non-TTY environments, automatically use the user-provided name
149
+ - **Side effects**: None, flow continues to Step 6 after selection
150
+ ````
151
+
152
+ **Narrative principles**:
153
+ - **No skipping steps**: Even if a step is worth only one line (e.g., "CLI writes file"), it must still be written out to maintain consecutive numbering
154
+ - **Subject first**: Each step begins with a bold subject, so the reader can immediately see "who is acting"
155
+ - **Use blockquotes for supplementary notes**: When you need to explain "why" or document a design decision, expand below the step using `>` blockquote without disrupting reading flow
156
+ - **Exception cases as separate sections**: Normal flow only contains `→ see EX-N.M` references; the trigger conditions, expected responses, and side effects of exceptions are expanded in the "Exception Cases" section at the bottom of the document
157
+
158
+ ### Step 4: Design Exception Cases
159
+
160
+ Expand exception acceptance criteria identified in Phase 1/2 into technical-level exception cases, and supplement with technical exceptions not covered in Phase 1/2 (e.g., service unavailable, database write failure):
161
+
162
+ ```markdown
163
+ #### Exception Cases
164
+
165
+ ##### EX-2.1: Email Already Registered (← Phase 1 S01 exception acceptance criteria)
166
+ - **Trigger condition**: Submitted email already exists in the auth.users table
167
+ - **Expected response**: HTTP 409 `{ code: "EMAIL_EXISTS", message: "Email already registered" }`
168
+ - **Side effects**: No records created, no emails sent
169
+
170
+ ##### EX-2.2: Supabase Auth Service Unavailable (technical exception, not covered in Phase 1)
171
+ - **Trigger condition**: Supabase Auth service times out or returns 5xx
172
+ - **Expected response**: HTTP 503 `{ code: "AUTH_SERVICE_UNAVAILABLE", message: "Authentication service temporarily unavailable" }`
173
+ - **Side effects**: Error logged, alert triggered
174
+
175
+ ##### EX-4.1: Profile Write Failure (technical exception, not covered in Phase 1)
176
+ - **Trigger condition**: INSERT INTO profiles violates unique constraint or RLS denies access
177
+ - **Expected response**: HTTP 500 `{ code: "PROFILE_CREATE_FAILED", message: "User profile creation failed" }`
178
+ - **Side effects**: Record in auth.users already created but profiles not created (compensation mechanism needed)
179
+ ```
180
+
181
+ **Exception case numbering rule**: `EX-{step number}.{sequence number}`
182
+
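Because the rule ties each EX id to a step number, a hypothetical consistency check (illustrative, not shipped with the package) can flag exception cases that reference a step that does not exist in the same document:

```typescript
// Collect Step numbers, then report every EX-{step}.{seq} id whose step
// number has no matching "Step N:" in the document.
function danglingExceptionRefs(doc: string): string[] {
  const steps = new Set(
    [...doc.matchAll(/Step (\d+):/g)].map((m) => Number(m[1])),
  );
  return [...doc.matchAll(/EX-(\d+)\.\d+/g)]
    .filter((m) => !steps.has(Number(m[1])))
    .map((m) => m[0]);
}
```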
183
+ ### Step 5: Generate Scenario Overview Document
184
+
185
+ Summarize the technical implementation status of all scenarios:
186
+
187
+ ```markdown
188
+ # Business Scenario Overview (Technical Implementation)
189
+
190
+ ## Scenario Map
191
+ | Number | Scenario Name | Phase 1 | Phase 2 | Phase 3 Sequence Diagram | API | Orchestration | Status |
192
+ |--------|--------------|---------|---------|--------------------------|-----|---------------|--------|
193
+ | S01 | Email Registration | ✅ | ✅ | ✅ | ✅ | 🔲 | In Progress |
194
+ | S02 | Password Login | ✅ | ✅ | 🔲 | 🔲 | 🔲 | Not Started |
195
+
196
+ ## Scenario Dependencies
197
+ [Describe prerequisite/follow-up relationships between scenarios]
198
+
199
+ ## Scenario Index
200
+ [File links for each scenario, spanning all three Phases]
201
+ ```
202
+
203
+ ## Output Specification
204
+
205
+ - **Scenario overview**: `logos/resources/prd/3-technical-plan/2-scenario-implementation/00-scenario-overview.md`
206
+ - **Scenario documents**: `logos/resources/prd/3-technical-plan/2-scenario-implementation/{scenario-number}-{scenario-name}.md`
207
+ - Sequence diagrams use Mermaid format (renderable directly in Markdown)
208
+ - Exception cases use `EX-N.M` numbering, globally unique
209
+ - Each scenario document contains: sequence diagram + step descriptions + exception cases
210
+ - **Scenario numbers must be consistent with Phase 1/2**
211
+
212
+ ## Best Practices
213
+
214
+ - **Do not identify scenarios from scratch**: Phase 3 scenarios come from Phase 1 requirements documents. If a scenario not present in Phase 1 is discovered, go back to Phase 1 to add it
215
+ - **Phase 1/2 exceptions are inputs**: Phase 1's "exception: email already registered" should be expanded in Phase 3 into a technical specification with HTTP status codes and response bodies
216
+ - **Draw the main path first, then add exceptions**: Do not try to draw all branches in the first pass; get the main path clear first
217
+ - **Exception case coverage strategy**: Every step involving an external call (database, third-party service) should have at least 1 exception case
218
+ - **Step numbering maintenance**: When inserting a step in the middle, renumber all subsequent steps and update all EX references accordingly
219
+ - **Participant granularity**: In microservice architectures, each service is a participant; in monolithic applications, divide by logical layers (Web, Auth, DB)
220
+ - **Sequence diagrams are the source of APIs**: Cross-system-boundary arrows in sequence diagrams are the APIs that need to be designed—if an API cannot be traced back to a sequence diagram, it probably should not exist
221
+
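The renumbering chore described under "Step numbering maintenance" is also mechanical. A sketch (the name and approach are illustrative) that shifts every `Step N` label and `EX-N.M` reference at or after the insertion point up by one:

```typescript
// After inserting a new step at position `at`, bump every Step number and
// the step part of every EX reference that is >= at.
function shiftSteps(doc: string, at: number): string {
  return doc
    .replace(/Step (\d+)/g, (s, n) => (Number(n) >= at ? `Step ${Number(n) + 1}` : s))
    .replace(/EX-(\d+)\./g, (s, n) => (Number(n) >= at ? `EX-${Number(n) + 1}.` : s));
}
```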
222
+ ## Recommended Prompts
223
+
224
+ The following prompts can be copied directly for AI use:
225
+
226
+ - `Help me draw the sequence diagram for S01`
227
+ - `Help me do scenario modeling for all P0 scenarios`
228
+ - `Help me add exception case sequence diagrams for S03`
229
+ - `Based on the product design, help me do technical scenario modeling`
@@ -0,0 +1,147 @@
1
+ ---
2
+ name: test-orchestrator
3
+ description: "Design API orchestration test scenarios as executable JSON. Use when test cases exist in logos/resources/test/ but logos/resources/scenario/ is empty. For API projects only."
4
+ ---
5
+
6
+ # Skill: Test Orchestrator
7
+
8
+ > Design **API orchestration test** cases based on business scenarios and sequence diagrams (Phase 3 Step 3b), covering normal, exception, and boundary scenarios. Automatically identify external dependencies and apply their test strategies; the resulting orchestrations serve as end-to-end API acceptance criteria. **Only applicable to projects involving APIs.**
9
+
10
+ ## Relationship with test-writer
11
+
12
+ This Skill is responsible for the **top layer** of the test pyramid — API orchestration tests (HTTP request level), executed in Phase 3 Step 3b.
13
+
14
+ The lower-level unit tests and scenario tests (function call level) are completed by the `test-writer` Skill in Step 3a. Step 3a is a mandatory step for all projects; Step 3b (this Skill) is only executed when the project involves APIs.
15
+
16
+ ## Trigger Conditions
17
+
18
+ - User requests API orchestration test design
19
+ - User mentions "Phase 3 Step 3b", "API orchestration", or "orchestration tests"
20
+ - After Step 3a (test-writer) is complete, AI guides the user to proceed to Step 3b
21
+ - User needs to validate deployed API code
22
+
23
+ ## Prerequisites
24
+
25
+ - `logos/resources/test/` contains test case specification documents (Step 3a completed)
26
+ - `logos/resources/prd/3-technical-plan/2-scenario-implementation/` contains scenario sequence diagrams
27
+ - `logos/resources/api/` contains API specifications (OpenAPI YAML)
28
+ - `logos-project.yaml` contains `external_dependencies` (if applicable)
29
+
30
+ If the project does not involve APIs (pure CLI tools, pure frontend, etc.), skip this Skill.
31
+
32
+ ## Core Capabilities
33
+
34
+ 1. Design normal flow orchestration from sequence diagrams and API YAML
35
+ 2. Design exception flow orchestration based on exception cases (EX-N.M)
36
+ 3. Design boundary cases (valid but non-happy-path variations)
37
+ 4. Define variable extraction and passing mechanisms
38
+ 5. **Identify external dependencies and apply test strategies**: Read `external_dependencies` from `logos-project.yaml` and automatically insert `mock` fields in steps involving external services
39
+ 6. Execute orchestration and verify results
40
+
41
+ ## Execution Steps
42
+
43
+ ### Step 1: Read Scenario Context
44
+
45
+ Read the following files to establish complete context:
46
+
47
+ - Scenario sequence diagrams (`logos/resources/prd/3-technical-plan/2-scenario-implementation/`)
48
+ - API YAML (`logos/resources/api/`)
49
+ - `logos-project.yaml` — focus on reading the `external_dependencies` field
50
+
51
+ ### Step 2: Identify External Dependencies
52
+
53
+ Match `used_in` from `external_dependencies` with the current scenario number. If the current scenario involves external dependencies:
54
+
55
+ - Record the dependency's `test_strategy` and `test_config`
56
+ - If a dependency declares `used_in` but is missing `test_strategy`, **proactively ask the user** for the test strategy
57
+
58
+ If there is no `external_dependencies` field in `logos-project.yaml`, but the sequence diagrams contain calls to external services (e.g., sending emails, payment requests, etc.), proactively remind the user to add them.
59
+
60
+ ### Step 3: Design Normal Flow Orchestration
61
+
62
+ Design the API call chain step by step following the sequence diagram's Step numbers:
63
+
64
+ - Each step includes method, url, headers, body, expected_status
65
+ - For steps involving external dependencies, insert the `mock` field (see Output Specification)
66
+ - For variables that need to be passed from the previous step's response, use `extract` to define extraction rules
67
+
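The `extract` rules referenced above are dotted paths evaluated against the previous step's response. A minimal sketch of that lookup (the function name is an assumption, not the package's API):

```typescript
// Walk a dotted path such as "body.code" through a response object,
// returning undefined if any segment is missing.
function extractVar(response: Record<string, unknown>, pathExpr: string): unknown {
  return pathExpr.split(".").reduce<unknown>(
    (cur, key) => (cur as Record<string, unknown> | undefined)?.[key],
    response,
  );
}
```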
68
+ ### Step 4: Design Exception Flow Orchestration
69
+
70
+ Design independent orchestrations for each EX exception case, ensuring:
71
+
72
+ - Exception scenarios also cover external dependency failure cases
73
+ - Use the `mock` field to simulate external service failures (e.g., timeouts, error responses, etc.)
74
+
75
+ ### Step 5: Design Boundary Case Orchestration
76
+
77
+ Identify valid but non-happy-path variations (e.g., password length exactly at the boundary value, empty fields, etc.) and add supplementary orchestrations.
78
+
79
+ ### Step 6: Output Orchestration JSON
80
+
81
+ Output executable orchestration JSON files per scenario.
82
+
83
+ ## Output Specification
84
+
85
+ - File format: JSON
86
+ - Storage location: `logos/resources/scenario/`
87
+ - Separate files per scenario: `user-auth.json`, `payment-flow.json`
88
+ - Each step in the orchestration corresponds to a Step number in the sequence diagram
89
+
90
+ ### mock Field Structure
91
+
92
+ When a step involves an external dependency, add a `mock` field to that step:
93
+
94
+ ```json
95
+ {
96
+ "step": "Step 2: Get email verification code",
97
+ "mock": {
98
+ "dependency": "Email Service",
99
+ "strategy": "test-api",
100
+ "config": "GET /api/test/latest-email?to={email}",
101
+ "extract": { "code": "response.body.code" }
102
+ },
103
+ "method": "GET",
104
+ "url": "/api/test/latest-email?to={{email}}",
105
+ "expected_status": 200,
106
+ "extract": {
107
+ "verification_code": "body.code"
108
+ }
109
+ }
110
+ ```
111
+
112
+ `mock` field description:
113
+
114
+ | Field | Type | Description |
115
+ |------|------|------|
116
+ | `dependency` | string | Corresponds to `name` in `external_dependencies` |
117
+ | `strategy` | string | Test strategy (`test-api` / `fixed-value` / `env-disable` / `mock-callback` / `mock-service`) |
118
+ | `config` | string | Specific configuration for the test strategy, from `test_config` |
119
+ | `extract` | object | Extract variables from mock response (optional) |
120
+
121
+ Orchestration behavior for different strategies:
122
+
123
+ - **`test-api`**: The step's url is replaced with the backdoor API address
124
+ - **`fixed-value`**: The step does not make an actual request; fixed values are injected directly via `extract`
125
+ - **`env-disable`**: The step is marked as skipped, with a comment explaining the precondition
126
+ - **`mock-callback`**: An additional mock callback request is inserted after the previous step completes
127
+ - **`mock-service`**: The step's url is replaced with the local mock service address
128
+
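The five behaviors above can be pictured as a dispatch inside an orchestration runner. A sketch under assumed types (`Step` and `applyMockStrategy` are illustrative, not the real executor):

```typescript
type Strategy = "test-api" | "fixed-value" | "env-disable" | "mock-callback" | "mock-service";

interface Step {
  url: string;
  mock?: { strategy: Strategy; config: string };
  skipped?: boolean;
}

// Transform a step according to its declared mock strategy before the
// runner issues (or skips) the request.
function applyMockStrategy(step: Step): Step {
  if (!step.mock) return step;
  switch (step.mock.strategy) {
    case "test-api":
    case "mock-service":
      // Redirect the request to the backdoor API / local mock service.
      return { ...step, url: step.mock.config };
    case "fixed-value":
    case "env-disable":
      // No real request is made: fixed values are injected via extract,
      // or the step is marked as skipped with its precondition noted.
      return { ...step, skipped: true };
    case "mock-callback":
      // The step runs unchanged; a mock callback request is inserted
      // separately after the previous step completes.
      return step;
  }
}
```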
129
+ ## Best Practices
130
+
131
+ - **Normal orchestration is the skeleton**: Complete the normal flow orchestration first to ensure the happy path works end-to-end
132
+ - **Exception orchestration is the safety net**: At least 1 exception orchestration per external call
133
+ - **Variable passing**: Extract variables from the previous step's response (e.g., token, user_id) and pass them to subsequent steps
134
+ - **Test data**: Prepare test data before orchestration begins and clean up afterwards to ensure idempotency
135
+ - **Concurrency testing**: Key scenarios should account for concurrent situations (e.g., two users registering with the same email simultaneously)
136
+ - **Check the external dependency list first**: Before starting orchestration design, read `external_dependencies` from `logos-project.yaml`; proactively remind the user to add any undeclared external calls
137
+ - **Do not decide mock strategies on your own**: Test strategies are determined during S12 technical architecture design (Phase 3 Step 0, architecture-designer); the orchestration test phase only consumes them — do not modify them unilaterally
138
+ - **Relationship with `openlogos verify`**: API orchestration tests produce JSONL results in the format defined by `logos/spec/test-results.md`. After orchestration tests run, results are written to `logos/resources/verify/test-results.jsonl`, and `openlogos verify` reads them uniformly to determine acceptance
139
+
140
+ ## Recommended Prompts
141
+
142
+ The following prompts can be copied directly for AI use:
143
+
144
+ - `Help me design orchestration tests`
145
+ - `Generate orchestration tests for S01 based on the API spec`
146
+ - `Help me orchestrate all normal paths for every scenario`
147
+ - `Help me add exception path orchestration tests for S02`