@miniidealab/openlogos 0.3.0 → 0.3.2
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/dist/commands/init.d.ts +2 -1
- package/dist/commands/init.d.ts.map +1 -1
- package/dist/commands/init.js +112 -8
- package/dist/commands/init.js.map +1 -1
- package/dist/commands/sync.js +1 -1
- package/dist/commands/sync.js.map +1 -1
- package/dist/index.js +1 -1
- package/package.json +1 -1
- package/skills/api-designer/SKILL.en.md +209 -0
- package/skills/architecture-designer/SKILL.en.md +181 -0
- package/skills/change-writer/SKILL.en.md +146 -0
- package/skills/code-reviewer/SKILL.en.md +204 -0
- package/skills/db-designer/SKILL.en.md +212 -0
- package/skills/merge-executor/SKILL.en.md +84 -0
- package/skills/prd-writer/SKILL.en.md +171 -0
- package/skills/product-designer/SKILL.en.md +228 -0
- package/skills/project-init/SKILL.en.md +163 -0
- package/skills/scenario-architect/SKILL.en.md +214 -0
- package/skills/test-orchestrator/SKILL.en.md +142 -0
- package/skills/test-writer/SKILL.en.md +247 -0
@@ -0,0 +1,142 @@
# Skill: Test Orchestrator

> Design **API orchestration test** cases from business scenarios and sequence diagrams (Phase 3 Step 3b), covering normal, exception, and boundary scenarios. Automatically identify external dependencies and apply their test strategies, so that the orchestrations serve as end-to-end API acceptance criteria. **Only applicable to projects involving APIs.**

## Relationship with test-writer

This Skill is responsible for the **top layer** of the test pyramid — API orchestration tests (HTTP request level), executed in Phase 3 Step 3b.

The lower-level unit tests and scenario tests (function call level) are produced by the `test-writer` Skill in Step 3a. Step 3a is mandatory for all projects; Step 3b (this Skill) runs only when the project involves APIs.

## Trigger Conditions

- User requests API orchestration test design
- User mentions "Phase 3 Step 3b", "API orchestration", or "orchestration tests"
- After Step 3a (test-writer) is complete, the AI guides the user to proceed to Step 3b
- User needs to validate deployed API code

## Prerequisites

- `logos/resources/test/` contains test case specification documents (Step 3a completed)
- `logos/resources/prd/3-technical-plan/2-scenario-implementation/` contains scenario sequence diagrams
- `logos/resources/api/` contains API specifications (OpenAPI YAML)
- `logos-project.yaml` contains `external_dependencies` (if applicable)

If the project does not involve APIs (pure CLI tools, pure frontend, etc.), skip this Skill.

## Core Capabilities

1. Design normal flow orchestration from sequence diagrams and API YAML
2. Design exception flow orchestration based on exception cases (EX-N.M)
3. Design boundary cases (valid but non-happy-path variations)
4. Define variable extraction and passing mechanisms
5. **Identify external dependencies and apply test strategies**: Read `external_dependencies` from `logos-project.yaml` and automatically insert `mock` fields into steps involving external services
6. Execute orchestration and verify results

## Execution Steps

### Step 1: Read Scenario Context

Read the following files to establish complete context:

- Scenario sequence diagrams (`logos/resources/prd/3-technical-plan/2-scenario-implementation/`)
- API YAML (`logos/resources/api/`)
- `logos-project.yaml` — focus on the `external_dependencies` field

### Step 2: Identify External Dependencies

Match the `used_in` entries in `external_dependencies` against the current scenario number. If the current scenario involves external dependencies:

- Record each dependency's `test_strategy` and `test_config`
- If a dependency declares `used_in` but is missing `test_strategy`, **proactively ask the user** for the test strategy

If `logos-project.yaml` has no `external_dependencies` field but the sequence diagrams contain calls to external services (e.g., sending emails, payment requests), proactively remind the user to add them.

### Step 3: Design Normal Flow Orchestration

Design the API call chain step by step, following the sequence diagram's Step numbers:

- Each step includes method, url, headers, body, and expected_status
- For steps involving external dependencies, insert the `mock` field (see Output Specification)
- For variables that must be passed from the previous step's response, define extraction rules with `extract`
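
For illustration, a minimal sketch of two chained steps. The endpoints, payload values, and extraction path are hypothetical; the field names (`step`, `method`, `url`, `body`, `expected_status`, `extract`) and the `{{variable}}` substitution follow the conventions in this document:

```json
[
  {
    "step": "Step 1: Register user",
    "method": "POST",
    "url": "/api/auth/register",
    "body": { "email": "u1@example.com", "password": "Passw0rd!" },
    "expected_status": 201,
    "extract": { "user_id": "body.id" }
  },
  {
    "step": "Step 2: Query the new user's profile",
    "method": "GET",
    "url": "/api/users/{{user_id}}",
    "expected_status": 200
  }
]
```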

### Step 4: Design Exception Flow Orchestration

Design an independent orchestration for each EX exception case, ensuring that:

- Exception scenarios also cover external dependency failure cases
- The `mock` field is used to simulate external service failures (e.g., timeouts, error responses)
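
A sketch of one such failure orchestration, reusing the `mock` field structure defined under Output Specification below. The EX number, dependency config wording, and expected status are illustrative assumptions:

```json
{
  "step": "EX-2.1: Email service timeout during registration",
  "mock": {
    "dependency": "Email Service",
    "strategy": "mock-service",
    "config": "mock service delays its response to force a timeout"
  },
  "method": "POST",
  "url": "/api/auth/register",
  "body": { "email": "u2@example.com", "password": "Passw0rd!" },
  "expected_status": 502
}
```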

### Step 5: Design Boundary Case Orchestration

Identify valid but non-happy-path variations (e.g., a password exactly at the length boundary, empty fields) and add supplementary orchestrations.

### Step 6: Output Orchestration JSON

Output an executable orchestration JSON file per scenario.
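
A hypothetical skeleton for such a file (e.g., `user-auth.json`). The top-level `scenario` and `steps` keys are assumptions for illustration; the per-step fields follow the structures defined below:

```json
{
  "scenario": "S01 user-auth",
  "steps": [
    {
      "step": "Step 1: Register user",
      "method": "POST",
      "url": "/api/auth/register",
      "body": { "email": "u1@example.com", "password": "Passw0rd!" },
      "expected_status": 201
    }
  ]
}
```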

## Output Specification

- File format: JSON
- Storage location: `logos/resources/scenario/`
- Separate files per scenario: `user-auth.json`, `payment-flow.json`
- Each step in the orchestration corresponds to a Step number in the sequence diagram

### mock Field Structure

When a step involves an external dependency, add a `mock` field to that step:

```json
{
  "step": "Step 2: Get email verification code",
  "mock": {
    "dependency": "Email Service",
    "strategy": "test-api",
    "config": "GET /api/test/latest-email?to={email}",
    "extract": { "code": "response.body.code" }
  },
  "method": "GET",
  "url": "/api/test/latest-email?to={{email}}",
  "expected_status": 200,
  "extract": {
    "verification_code": "body.code"
  }
}
```

`mock` field description:

| Field | Type | Description |
|-------|------|-------------|
| `dependency` | string | Corresponds to `name` in `external_dependencies` |
| `strategy` | string | Test strategy (`test-api` / `fixed-value` / `env-disable` / `mock-callback` / `mock-service`) |
| `config` | string | Strategy-specific configuration, taken from `test_config` |
| `extract` | object | Extract variables from the mock response (optional) |

Orchestration behavior for each strategy:

- **`test-api`**: The step's url is replaced with the backdoor API address
- **`fixed-value`**: The step makes no actual request; fixed values are injected directly via `extract`
- **`env-disable`**: The step is marked as skipped, with a comment explaining the precondition
- **`mock-callback`**: An additional mock callback request is inserted after the previous step completes
- **`mock-service`**: The step's url is replaced with the local mock service address
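
As one plausible reading of the `fixed-value` behavior above: the step would carry no `method`/`url` (no request is made), and `extract` would map variable names directly to literal values. A sketch, with all values assumed:

```json
{
  "step": "Step 2: Get email verification code",
  "mock": {
    "dependency": "Email Service",
    "strategy": "fixed-value",
    "config": "123456",
    "extract": { "verification_code": "123456" }
  }
}
```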

## Best Practices

- **Normal orchestration is the skeleton**: Complete the normal flow orchestration first to ensure the happy path works end-to-end
- **Exception orchestration is the safety net**: Design at least 1 exception orchestration per external call
- **Variable passing**: Extract variables from the previous step's response (e.g., token, user_id) and pass them to subsequent steps
- **Test data**: Prepare test data before orchestration begins and clean up afterwards to ensure idempotency
- **Concurrency testing**: Key scenarios should account for concurrent situations (e.g., two users registering with the same email simultaneously)
- **Check the external dependency list first**: Before starting orchestration design, read `external_dependencies` from `logos-project.yaml`; proactively remind the user to declare any undeclared external calls
- **Do not decide mock strategies on your own**: Test strategies are determined during S12 technical architecture design (Phase 3 Step 0, architecture-designer); the orchestration test phase only consumes them — do not modify them unilaterally
- **Relationship with `openlogos verify`**: API orchestration tests also produce JSONL results, in the format defined in `spec/test-results.md`. After orchestration tests run, their results are likewise written to `logos/resources/verify/test-results.jsonl`, and `openlogos verify` reads them uniformly to determine acceptance

## Recommended Prompts

The following prompts can be copied directly for AI use:

- `Help me design orchestration tests`
- `Generate orchestration tests for S01 based on the API spec`
- `Help me orchestrate all normal paths for every scenario`
- `Help me add exception path orchestration tests for S02`
@@ -0,0 +1,247 @@
# Skill: Test Writer

> Design unit test cases and scenario test cases for each business scenario, based on sequence diagrams, API specifications, and DB constraints. Applicable to all project types (API services, CLI tools, frontend applications, libraries, etc.), it is a mandatory prerequisite step before code generation.

## Trigger Conditions

- User requests test case or test plan design
- User mentions "Phase 3 Step 3", "Step 3a", "test-first", or "test design"
- Sequence diagrams already exist, and tests need to be designed before writing code
- User specifies a scenario number (e.g., S01) that needs test design

## Prerequisites

- `logos/resources/prd/3-technical-plan/2-scenario-implementation/` contains sequence diagrams (**required**)
- `logos/resources/api/` contains API specifications (read if present, skip if absent — non-API projects may not have these)
- `logos/resources/database/` contains DB DDL (read if present, skip if absent)
- `logos/resources/prd/1-product-requirements/` contains requirements documents (for tracing acceptance criteria)

**Cannot be skipped**: Regardless of project type, Step 3a (this Skill) must be executed.

## Core Capabilities

1. Extract unit test cases from API field constraints (type, format, length, enum)
2. Extract unit test cases from DB constraints (UNIQUE, CHECK, NOT NULL, FK)
3. Extract unit test cases from business rules and single-point error handling in EX exception cases
4. Extract scenario test cases from sequence diagram Step sequences (happy path)
5. Extract scenario test cases from EX exception cases (exception paths)
6. Reverse-validate test coverage completeness against Phase 1/2 acceptance criteria

## Execution Steps

### Step 1: Load Scenario Context

Read the following files to establish complete context:

- Sequence diagrams (`logos/resources/prd/3-technical-plan/2-scenario-implementation/`)
- API YAML (`logos/resources/api/`) — if present
- DB DDL (`logos/resources/database/`) — if present
- Phase 1 requirements documents (acceptance criteria)
- Phase 2 product design documents (interaction-level acceptance criteria)

Confirm the following for the current scenario:

- **Step count**: How many Steps are in the sequence diagram
- **EX count**: How many exception cases exist
- **API endpoints**: Which endpoints are involved and their field constraints
- **DB tables**: Which tables are involved and their constraints

### Step 2: Design Unit Test Cases

Extract unit test cases from three categories of sources:

#### 2a: API Field Constraints

Inspect `requestBody` and `parameters` for each API endpoint:

- `type` → Type error cases (passing incorrect types)
- `format` (email, uuid, date-time) → Format validation cases
- `minLength` / `maxLength` → Boundary value cases (exactly at the limit, exceeding it by 1)
- `required` → Required field missing cases
- `enum` → Enumeration value cases (valid values + invalid values)
- `minimum` / `maximum` → Numeric range cases
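
For example, a password field declared with `minLength: 8` and `maxLength: 64` (hypothetical limits) yields at least four boundary cases: 7 characters (rejected), 8 characters (accepted), 64 characters (accepted), and 65 characters (rejected).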

#### 2b: DB Constraints

Inspect the constraints of each related table:

- `UNIQUE` → Duplicate insertion cases
- `NOT NULL` → Null value insertion cases
- `CHECK` → Constraint violation cases
- `FOREIGN KEY` → Cases referencing non-existent records
- `DEFAULT` → Default value verification when no value is provided
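
For example, a hypothetical `UNIQUE` constraint on `users.email` yields a case that inserts the same email twice and expects the second insert to fail with a duplicate-key error while the first row remains intact.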

#### 2c: Business Rules

Extract single-point business logic from sequence diagram Step descriptions and EX exception cases:

- Permission checks (not logged in, insufficient permissions)
- State machine transitions (only specific states allow certain operations)
- Rate limiting / throttling rules
- Data computation logic (amount calculations, discount rules)

**Format for each unit test case**:

| Field | Description |
|-------|-------------|
| ID | `UT-{scenario-number}-{sequence}`, e.g., `UT-S01-01` |
| Description | What behavior is being tested |
| Source | Constraint origin (e.g., `auth.yaml → register → email: format:email`) |
| Preconditions | State required before the test |
| Input | Specific input values |
| Expected Output | Expected return value or error message |
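
A hypothetical filled-in row, reusing the Source example above:

| ID | Description | Source | Preconditions | Input | Expected Output |
|----|-------------|--------|---------------|-------|-----------------|
| UT-S01-01 | Reject a malformed email address | `auth.yaml → register → email: format:email` | None | `email: "not-an-email"` | Validation error identifying the `email` field |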

### Step 3: Design Scenario Test Cases

Extract scenario test cases from two categories of sources:

#### 3a: Happy Path (Sequence Diagram Step Sequence)

Treat the complete Step 1 → Step N sequence from the sequence diagram as an end-to-end code call chain:

- Determine the scenario's entry and exit points
- Annotate data passing between Steps (the previous step's output becomes the next step's input)
- Verify the final state (database records, return values)

#### 3b: Exception Paths (EX Exception Cases)

Expand each EX exception case into a scenario test case:

- Annotate which Step triggers the exception
- Verify the handling logic after the exception is triggered (error response, compensation/rollback)
- Verify that the exception did not compromise the integrity of other data

**Format for each scenario test case**:

| Field | Description |
|-------|-------------|
| ID | `ST-{scenario-number}-{sequence}`, e.g., `ST-S01-01` |
| Description | What scenario flow is being tested |
| Covered Steps | Which sequence diagram Steps are covered (e.g., `Step 1→6`) or which EX (e.g., `EX-2.1`) |
| Preconditions | State and data required before the test |
| Operation Sequence | Ordered list of operations following the Step sequence |
| Expected Result | Final state (return value + database state + side effects) |
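
A hypothetical filled-in row for an exception path:

| ID | Description | Covered Steps | Preconditions | Operation Sequence | Expected Result |
|----|-------------|---------------|---------------|--------------------|-----------------|
| ST-S01-02 | Registration fails when the email is already taken | EX-2.1 | A user with `u1@example.com` exists | Submit registration with the same email | Error response; no new user row is created |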

### Step 4: Coverage Validation

Reverse-validate whether the test cases cover all critical constraints:

- [ ] Each normal acceptance criterion from Phase 1 maps to at least 1 ST case
- [ ] Each exception acceptance criterion from Phase 1 maps to at least 1 ST or UT case
- [ ] Each EX exception case maps to at least 1 ST case
- [ ] Each `required` field in the API has at least 1 UT case
- [ ] Each `UNIQUE` / `CHECK` constraint in the DB has at least 1 UT case

If any item is uncovered, add supplementary cases or explain the reason to the user.

### Step 5: Acceptance Criteria Traceability

Extract each GIVEN/WHEN/THEN acceptance criterion from the Phase 1 requirements document, assign it a traceability ID, and link it to the test case IDs that cover it.

#### Acceptance Criteria ID Rules

- Format: `{scenario-number}-AC-{two-digit-sequence}`, e.g., `S01-AC-01`, `S01-AC-02`
- Numbered in the order they appear in the requirements document; normal and exception criteria share a single numbering sequence
- AC IDs within the same scenario must be consecutive and unique

#### Traceability Table Rules

1. Read all acceptance criteria (normal + exception) for the current scenario from the requirements document
2. Assign an AC ID to each acceptance criterion
3. Find the test case IDs that cover each criterion (UT or ST), and fill in the "Covered By" column
4. Each AC must be linked to at least 1 test case; if it cannot be covered, note the reason in the "Covered By" column

`openlogos verify` parses this traceability table and links AC → test case ID → execution result across three layers to generate a complete acceptance traceability report.

### Step 6: Output Test Case Specification Document

Output the test case specification document in Markdown format, organized by scenario.

### Step 7: Guide Next Steps

Guide the user to the next step based on project type:

- **Involves APIs** → "Continue to Step 3b to design API orchestration tests?"
- **Does not involve APIs** → "Test design is complete. Recommend proceeding to code generation: say 'Implement S01 for me based on its specification'"

## Output Specification

- **File format**: Markdown
- **Location**: `logos/resources/test/`
- **Naming convention**: `{scenario-number}-test-cases.md` (e.g., `S01-test-cases.md`)
- Each file contains: unit test cases (grouped by source) + scenario test cases (happy path + exception paths)
- Case IDs are globally unique: `UT-{scenario-number}-{sequence}` / `ST-{scenario-number}-{sequence}`

### Document Structure Template

```markdown
# {scenario-number}: {scenario-name} — Test Cases

## 1. Unit Test Cases

### 1.1 {group-name} (Source: {constraint-origin})

| ID | Description | Source | Preconditions | Input | Expected Output |
|----|-------------|--------|---------------|-------|-----------------|
| UT-S01-01 | ... | ... | ... | ... | ... |

## 2. Scenario Test Cases

### 2.1 Happy Path: {scenario-name}

| ID | Description | Covered Steps | Preconditions | Operation Sequence | Expected Result |
|----|-------------|---------------|---------------|--------------------|-----------------|
| ST-S01-01 | ... | Step 1→6 | ... | ... | ... |

### 2.2 Exception Paths

| ID | Description | Covered EX | Preconditions | Trigger Condition | Expected Result |
|----|-------------|------------|---------------|-------------------|-----------------|
| ST-S01-02 | ... | EX-2.1 | ... | ... | ... |

## 3. Coverage Validation

- [x] Phase 1 normal acceptance criteria: fully covered
- [x] Phase 1 exception acceptance criteria: fully covered
- [x] EX exception cases: fully covered
- [x] API required fields: fully covered
- [x] DB UNIQUE/CHECK constraints: fully covered

## 4. Acceptance Criteria Traceability

| AC ID | Acceptance Criterion | Covered By |
|-------|----------------------|------------|
| S01-AC-01 | Normal: Fresh project initialization — create complete directory structure | ST-S01-01 |
| S01-AC-02 | Normal: Confirm when explicit project name differs from config file | ST-S01-02 |
| S01-AC-03 | Exception: Project already initialized — display error message | ST-S01-03, UT-S01-05 |
```

## Test Case ID Contract

Test case IDs (`UT-S01-01`, `ST-S01-01`) serve as a **binding contract** between design documents and runtime:

- IDs defined in test-cases.md must be used as-is in the generated test code
- The test code reporter writes each case's ID and execution result to a JSONL file
- `openlogos verify` maps execution results back to the test case specifications via these IDs, automatically determining acceptance
- When a case ID is modified, the corresponding ID in the test code must be updated at the same time

See `spec/test-results.md` for the detailed JSONL format definition and the reporter code templates for each language.
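
As a rough illustration of what one reporter line might look like (the field names here are assumptions; `spec/test-results.md` is the authoritative definition):

```json
{"id": "UT-S01-01", "status": "pass", "duration_ms": 12}
```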

## Best Practices

- **Test cases are design documents, not code**: This Skill produces test case specifications in Markdown; the actual test code is implemented by the AI during Step 4 code generation, based on these specifications
- **Unit first, then scenario**: Unit test cases cover the correctness of individual functions; scenario tests cover cross-module integration — first ensure the building blocks are correct, then verify they fit together
- **Don't overlook DB constraints**: Many bugs originate from database-level constraint violations; DB constraints are an important source of unit test cases
- **Scenario tests focus on data passing**: Data passing between Steps (previous step's output → next step's input) is where errors most commonly occur
- **EX exception cases must have corresponding scenario tests**: Every EX annotated in the sequence diagram should be reflected in the scenario tests
- **Boundary values first**: Unit test cases should prioritize boundary values (just valid, just invalid) over random values
- **Complementary with test-orchestrator**: This Skill designs code-level tests (function call level); test-orchestrator designs API-level tests (HTTP request level). Together they cover different layers of the testing pyramid
- **Case IDs are cross-phase contracts**: IDs span test-cases.md → test code → test-results.jsonl → acceptance-report.md; any inconsistency will cause `openlogos verify` to report incomplete results

## Recommended Prompts

The following prompts can be copied directly for AI use:

- `Design test cases for me`
- `Design unit tests and scenario tests for S01`
- `Design test cases for all P0 scenarios`
- `Check the test coverage for S01`