@miniidealab/openlogos 0.2.0 → 0.3.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/dist/commands/init.d.ts +9 -2
- package/dist/commands/init.d.ts.map +1 -1
- package/dist/commands/init.js +219 -8
- package/dist/commands/init.js.map +1 -1
- package/dist/commands/sync.d.ts.map +1 -1
- package/dist/commands/sync.js +9 -41
- package/dist/commands/sync.js.map +1 -1
- package/dist/i18n.d.ts.map +1 -1
- package/dist/i18n.js +14 -0
- package/dist/i18n.js.map +1 -1
- package/dist/index.js +1 -1
- package/package.json +5 -2
- package/skills/api-designer/SKILL.en.md +209 -0
- package/skills/api-designer/SKILL.md +209 -0
- package/skills/architecture-designer/SKILL.en.md +181 -0
- package/skills/architecture-designer/SKILL.md +181 -0
- package/skills/change-writer/SKILL.en.md +146 -0
- package/skills/change-writer/SKILL.md +146 -0
- package/skills/code-reviewer/SKILL.en.md +204 -0
- package/skills/code-reviewer/SKILL.md +204 -0
- package/skills/db-designer/SKILL.en.md +212 -0
- package/skills/db-designer/SKILL.md +212 -0
- package/skills/merge-executor/SKILL.en.md +84 -0
- package/skills/merge-executor/SKILL.md +84 -0
- package/skills/prd-writer/SKILL.en.md +171 -0
- package/skills/prd-writer/SKILL.md +171 -0
- package/skills/product-designer/SKILL.en.md +228 -0
- package/skills/product-designer/SKILL.md +228 -0
- package/skills/project-init/SKILL.en.md +163 -0
- package/skills/project-init/SKILL.md +163 -0
- package/skills/scenario-architect/SKILL.en.md +214 -0
- package/skills/scenario-architect/SKILL.md +214 -0
- package/skills/test-orchestrator/SKILL.en.md +142 -0
- package/skills/test-orchestrator/SKILL.md +142 -0
- package/skills/test-writer/SKILL.en.md +247 -0
- package/skills/test-writer/SKILL.md +247 -0
@@ -0,0 +1,142 @@
# Skill: Test Orchestrator

> Design **API orchestration test** cases based on business scenarios and sequence diagrams (Phase 3 Step 3b), covering normal, exception, and boundary scenarios. The Skill automatically identifies external dependencies, applies test strategies, and serves as the end-to-end API acceptance standard. **Only applicable to projects involving APIs.**

## Relationship with test-writer

This Skill is responsible for the **top layer** of the test pyramid — API orchestration tests (HTTP request level), executed in Phase 3 Step 3b.

The lower-level unit tests and scenario tests (function call level) are completed by the `test-writer` Skill in Step 3a. Step 3a is a mandatory step for all projects; Step 3b (this Skill) is only executed when the project involves APIs.

## Trigger Conditions

- User requests API orchestration test design
- User mentions "Phase 3 Step 3b", "API orchestration", or "orchestration tests"
- After Step 3a (test-writer) is complete, AI guides the user to proceed to Step 3b
- User needs to validate deployed API code

## Prerequisites

- `logos/resources/test/` contains test case specification documents (Step 3a completed)
- `logos/resources/prd/3-technical-plan/2-scenario-implementation/` contains scenario sequence diagrams
- `logos/resources/api/` contains API specifications (OpenAPI YAML)
- `logos-project.yaml` contains `external_dependencies` (if applicable)

If the project does not involve APIs (pure CLI tools, pure frontend, etc.), skip this Skill.

## Core Capabilities

1. Design normal flow orchestration from sequence diagrams and API YAML
2. Design exception flow orchestration based on exception cases (EX-N.M)
3. Design boundary cases (valid but non-happy-path variations)
4. Define variable extraction and passing mechanisms
5. **Identify external dependencies and apply test strategies**: Read `external_dependencies` from `logos-project.yaml` and automatically insert `mock` fields in steps involving external services
6. Execute orchestration and verify results

## Execution Steps

### Step 1: Read Scenario Context

Read the following files to establish complete context:

- Scenario sequence diagrams (`logos/resources/prd/3-technical-plan/2-scenario-implementation/`)
- API YAML (`logos/resources/api/`)
- `logos-project.yaml` — focus on reading the `external_dependencies` field

### Step 2: Identify External Dependencies

Match `used_in` from `external_dependencies` with the current scenario number. If the current scenario involves external dependencies:

- Record the dependency's `test_strategy` and `test_config`
- If a dependency declares `used_in` but is missing `test_strategy`, **proactively ask the user** for the test strategy

If there is no `external_dependencies` field in `logos-project.yaml`, but the sequence diagrams contain calls to external services (e.g., sending emails, payment requests, etc.), proactively remind the user to add them.
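As a sketch of what such a declaration might look like, the following entry uses only the field names referenced above (`name`, `used_in`, `test_strategy`, `test_config`); the service name and values are hypothetical:

```yaml
# Hypothetical entry in logos-project.yaml (illustrative only)
external_dependencies:
  - name: Email Service
    used_in: [S01, S03]          # scenario numbers that call this service
    test_strategy: test-api      # one of the strategies listed under "mock Field Structure"
    test_config: "GET /api/test/latest-email?to={email}"
```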
### Step 3: Design Normal Flow Orchestration

Design the API call chain step by step following the sequence diagram's Step numbers:

- Each step includes method, url, headers, body, expected_status
- For steps involving external dependencies, insert the `mock` field (see Output Specification)
- For variables that need to be passed from the previous step's response, use `extract` to define extraction rules

### Step 4: Design Exception Flow Orchestration

Design independent orchestrations for each EX exception case, ensuring:

- Exception scenarios also cover external dependency failure cases
- Use the `mock` field to simulate external service failures (e.g., timeouts, error responses, etc.)

### Step 5: Design Boundary Case Orchestration

Identify valid but non-happy-path variations (e.g., password length exactly at the boundary value, empty fields, etc.) and add supplementary orchestrations.

### Step 6: Output Orchestration JSON

Output executable orchestration JSON files per scenario.

## Output Specification

- File format: JSON
- Storage location: `logos/resources/scenario/`
- Separate files per scenario: `user-auth.json`, `payment-flow.json`
- Each step in the orchestration corresponds to a Step number in the sequence diagram

### mock Field Structure

When a step involves an external dependency, add a `mock` field to that step:

```json
{
  "step": "Step 2: Get email verification code",
  "mock": {
    "dependency": "Email Service",
    "strategy": "test-api",
    "config": "GET /api/test/latest-email?to={email}",
    "extract": { "code": "response.body.code" }
  },
  "method": "GET",
  "url": "/api/test/latest-email?to={{email}}",
  "expected_status": 200,
  "extract": {
    "verification_code": "body.code"
  }
}
```
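The `extract` rules in the step above are dotted paths into the response. A minimal sketch of how a runner could resolve them follows; the function name and the response shape are assumptions for illustration, not part of the specification:

```python
def extract_vars(response: dict, rules: dict) -> dict:
    """Resolve dotted-path extract rules (e.g. "body.code") against a response dict."""
    extracted = {}
    for var_name, path in rules.items():
        value = response
        for key in path.split("."):
            value = value[key]  # descend one level per path segment
        extracted[var_name] = value
    return extracted

# e.g. the step above defines {"verification_code": "body.code"}
variables = extract_vars(
    {"status": 200, "body": {"code": "482913"}},
    {"verification_code": "body.code"},
)
```

The extracted variables would then be substituted into later steps via the `{{email}}`-style placeholders shown in the url.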
`mock` field description:

| Field | Type | Description |
|-------|------|-------------|
| `dependency` | string | Corresponds to `name` in `external_dependencies` |
| `strategy` | string | Test strategy (`test-api` / `fixed-value` / `env-disable` / `mock-callback` / `mock-service`) |
| `config` | string | Specific configuration for the test strategy, from `test_config` |
| `extract` | object | Extract variables from mock response (optional) |

Orchestration behavior for different strategies:

- **`test-api`**: The step's url is replaced with the backdoor API address
- **`fixed-value`**: The step does not make an actual request; fixed values are injected directly via `extract`
- **`env-disable`**: The step is marked as skipped, with a comment explaining the precondition
- **`mock-callback`**: An additional mock callback request is inserted after the previous step completes
- **`mock-service`**: The step's url is replaced with the local mock service address
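Read literally, those five behaviors amount to a per-step rewrite before execution. The sketch below shows one way a runner might dispatch on them; the function and the extra fields it writes (`skip_request`, `skipped`, etc.) are assumptions, and only the five strategy strings come from the table above:

```python
def apply_mock_strategy(step: dict) -> dict:
    """Rewrite one orchestration step according to its mock strategy (sketch)."""
    mock = step.get("mock")
    if not mock:
        return step  # no external dependency involved; run the step as-is
    strategy = mock["strategy"]
    if strategy == "test-api":
        # replace the url with the backdoor API address taken from test_config
        step["url"] = mock["config"].split(" ", 1)[-1]
    elif strategy == "fixed-value":
        # no real request; inject the configured values directly via extract
        step["skip_request"] = True
        step["injected"] = mock.get("extract", {})
    elif strategy == "env-disable":
        # mark the step skipped, keeping the precondition as a comment
        step["skipped"] = True
        step["comment"] = mock["config"]
    elif strategy == "mock-callback":
        # queue an extra mock callback request after the previous step
        step["callback_request"] = mock["config"]
    elif strategy == "mock-service":
        # point the step at the local mock service address
        step["url"] = mock["config"]
    return step
```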
## Best Practices

- **Normal orchestration is the skeleton**: Complete the normal flow orchestration first to ensure the happy path works end-to-end
- **Exception orchestration is the safety net**: At least 1 exception orchestration per external call
- **Variable passing**: Extract variables from the previous step's response (e.g., token, user_id) and pass them to subsequent steps
- **Test data**: Prepare test data before orchestration begins and clean up afterwards to ensure idempotency
- **Concurrency testing**: Key scenarios should account for concurrent situations (e.g., two users registering with the same email simultaneously)
- **Check the external dependency list first**: Before starting orchestration design, read `external_dependencies` from `logos-project.yaml`; proactively remind the user to add any undeclared external calls
- **Do not decide mock strategies on your own**: Test strategies are determined during S12 technical architecture design (Phase 3 Step 0, architecture-designer); the orchestration test phase only consumes them — do not modify them unilaterally
- **Relationship with `openlogos verify`**: API orchestration tests can also produce JSONL results in the same format as `spec/test-results.md`. After orchestration tests run, results are also written to `logos/resources/verify/test-results.jsonl`, and `openlogos verify` reads them uniformly to determine acceptance

## Recommended Prompts

The following prompts can be copied directly for AI use:

- `Help me design orchestration tests`
- `Generate orchestration tests for S01 based on the API spec`
- `Help me orchestrate all normal paths for every scenario`
- `Help me add exception path orchestration tests for S02`
@@ -0,0 +1,142 @@
# Skill: Test Orchestrator

> Design **API orchestration test** cases based on business scenarios and sequence diagrams (Phase 3 Step 3b), covering normal, exception, and boundary scenarios. The Skill automatically identifies external dependencies, applies test strategies, and serves as the end-to-end API acceptance standard. **Only applicable to projects involving APIs.**

## Relationship with test-writer

This Skill is responsible for the **top layer** of the test pyramid — API orchestration tests (HTTP request level), executed in Phase 3 Step 3b.

The lower-level unit tests and scenario tests (function call level) are completed by the `test-writer` Skill in Step 3a. Step 3a is a mandatory step for all projects; Step 3b (this Skill) is only executed when the project involves APIs.

## Trigger Conditions

- User requests API orchestration test design
- User mentions "Phase 3 Step 3b", "API orchestration", or "orchestration tests"
- After Step 3a (test-writer) is complete, AI guides the user to proceed to Step 3b
- User needs to validate deployed API code

## Prerequisites

- `logos/resources/test/` contains test case specification documents (Step 3a completed)
- `logos/resources/prd/3-technical-plan/2-scenario-implementation/` contains scenario sequence diagrams
- `logos/resources/api/` contains API specifications (OpenAPI YAML)
- `logos-project.yaml` contains `external_dependencies` (if applicable)

If the project does not involve APIs (pure CLI tools, pure frontend, etc.), skip this Skill.

## Core Capabilities

1. Design normal flow orchestration from sequence diagrams and API YAML
2. Design exception flow orchestration based on exception cases (EX-N.M)
3. Design boundary cases (valid but non-happy-path variations)
4. Define variable extraction and passing mechanisms
5. **Identify external dependencies and apply test strategies**: Read `external_dependencies` from `logos-project.yaml` and automatically insert `mock` fields in steps involving external services
6. Execute orchestration and verify results

## Execution Steps

### Step 1: Read Scenario Context

Read the following files to establish complete context:

- Scenario sequence diagrams (`logos/resources/prd/3-technical-plan/2-scenario-implementation/`)
- API YAML (`logos/resources/api/`)
- `logos-project.yaml` — focus on reading the `external_dependencies` field

### Step 2: Identify External Dependencies

Match `used_in` from `external_dependencies` with the current scenario number. If the current scenario involves external dependencies:

- Record the dependency's `test_strategy` and `test_config`
- If a dependency declares `used_in` but is missing `test_strategy`, **proactively ask the user** for the test strategy

If there is no `external_dependencies` field in `logos-project.yaml`, but the sequence diagrams contain calls to external services (e.g., sending emails, payment requests, etc.), proactively remind the user to add them.

### Step 3: Design Normal Flow Orchestration

Design the API call chain step by step following the sequence diagram's Step numbers:

- Each step includes method, url, headers, body, expected_status
- For steps involving external dependencies, insert the `mock` field (see Output Specification)
- For variables that need to be passed from the previous step's response, use `extract` to define extraction rules

### Step 4: Design Exception Flow Orchestration

Design independent orchestrations for each EX exception case, ensuring:

- Exception scenarios also cover external dependency failure cases
- Use the `mock` field to simulate external service failures (e.g., timeouts, error responses, etc.)

### Step 5: Design Boundary Case Orchestration

Identify valid but non-happy-path variations (e.g., password length exactly at the boundary value, empty fields, etc.) and add supplementary orchestrations.

### Step 6: Output Orchestration JSON

Output executable orchestration JSON files per scenario.

## Output Specification

- File format: JSON
- Storage location: `logos/resources/scenario/`
- Separate files per scenario: `user-auth.json`, `payment-flow.json`
- Each step in the orchestration corresponds to a Step number in the sequence diagram

### mock Field Structure

When a step involves an external dependency, add a `mock` field to that step:

```json
{
  "step": "Step 2: Get email verification code",
  "mock": {
    "dependency": "Email Service",
    "strategy": "test-api",
    "config": "GET /api/test/latest-email?to={email}",
    "extract": { "code": "response.body.code" }
  },
  "method": "GET",
  "url": "/api/test/latest-email?to={{email}}",
  "expected_status": 200,
  "extract": {
    "verification_code": "body.code"
  }
}
```

`mock` field description:

| Field | Type | Description |
|-------|------|-------------|
| `dependency` | string | Corresponds to `name` in `external_dependencies` |
| `strategy` | string | Test strategy (`test-api` / `fixed-value` / `env-disable` / `mock-callback` / `mock-service`) |
| `config` | string | Specific configuration for the test strategy, from `test_config` |
| `extract` | object | Extract variables from mock response (optional) |

Orchestration behavior for different strategies:

- **`test-api`**: The step's url is replaced with the backdoor API address
- **`fixed-value`**: The step does not make an actual request; fixed values are injected directly via `extract`
- **`env-disable`**: The step is marked as skipped, with a comment explaining the precondition
- **`mock-callback`**: An additional mock callback request is inserted after the previous step completes
- **`mock-service`**: The step's url is replaced with the local mock service address

## Best Practices

- **Normal orchestration is the skeleton**: Complete the normal flow orchestration first to ensure the happy path works end-to-end
- **Exception orchestration is the safety net**: At least 1 exception orchestration per external call
- **Variable passing**: Extract variables from the previous step's response (e.g., token, user_id) and pass them to subsequent steps
- **Test data**: Prepare test data before orchestration begins and clean up afterwards to ensure idempotency
- **Concurrency testing**: Key scenarios should account for concurrent situations (e.g., two users registering with the same email simultaneously)
- **Check the external dependency list first**: Before starting orchestration design, read `external_dependencies` from `logos-project.yaml`; proactively remind the user to add any undeclared external calls
- **Do not decide mock strategies on your own**: Test strategies are determined during S12 technical architecture design (Phase 3 Step 0, architecture-designer); the orchestration test phase only consumes them — do not modify them unilaterally
- **Relationship with `openlogos verify`**: API orchestration tests can also produce JSONL results in the same format as `spec/test-results.md`. After orchestration tests run, results are also written to `logos/resources/verify/test-results.jsonl`, and `openlogos verify` reads them uniformly to determine acceptance

## Recommended Prompts

The following prompts can be copied directly for AI use:

- `Help me design orchestration tests`
- `Generate orchestration tests for S01 based on the API spec`
- `Help me orchestrate all normal paths for every scenario`
- `Help me add exception path orchestration tests for S02`
@@ -0,0 +1,247 @@
# Skill: Test Writer

> Based on sequence diagrams, API specifications, and DB constraints, design unit test cases and scenario test cases for each business scenario. Applicable to all project types (API services, CLI tools, frontend applications, libraries, etc.); a mandatory prerequisite step before code generation.

## Trigger Conditions

- User requests test case or test plan design
- User mentions "Phase 3 Step 3", "Step 3a", "test-first", "test design"
- Sequence diagrams already exist, and tests need to be designed before writing code
- User specifies a scenario number (e.g., S01) that needs test design

## Prerequisites

- `logos/resources/prd/3-technical-plan/2-scenario-implementation/` contains sequence diagrams (**required**)
- `logos/resources/api/` contains API specifications (read if present, skip if absent — non-API projects may not have these)
- `logos/resources/database/` contains DB DDL (read if present, skip if absent)
- `logos/resources/prd/1-product-requirements/` contains requirements documents (for tracing acceptance criteria)

**Cannot be skipped**: Regardless of project type, Step 3a (this Skill) must be executed.

## Core Capabilities

1. Extract unit test cases from API field constraints (type, format, length, enum)
2. Extract unit test cases from DB constraints (UNIQUE, CHECK, NOT NULL, FK)
3. Extract unit test cases from business rules and single-point error handling in EX exception cases
4. Extract scenario test cases from sequence diagram Step sequences (happy path)
5. Extract scenario test cases from EX exception cases (exception paths)
6. Reverse-validate test coverage completeness against Phase 1/2 acceptance criteria

## Execution Steps

### Step 1: Load Scenario Context

Read the following files to establish complete context:

- Sequence diagrams (`logos/resources/prd/3-technical-plan/2-scenario-implementation/`)
- API YAML (`logos/resources/api/`) — if present
- DB DDL (`logos/resources/database/`) — if present
- Phase 1 requirements documents (acceptance criteria)
- Phase 2 product design documents (interaction-level acceptance criteria)

Confirm the following for the current scenario:

- **Step count**: How many Steps are in the sequence diagram
- **EX count**: How many exception cases exist
- **API endpoints**: Which endpoints are involved and their field constraints
- **DB tables**: Which tables are involved and their constraints

### Step 2: Design Unit Test Cases

Extract unit test cases from three categories of sources:

#### 2a: API Field Constraints

Inspect `requestBody` and `parameters` for each API endpoint:

- `type` → Type error cases (passing incorrect types)
- `format` (email, uuid, date-time) → Format validation cases
- `minLength` / `maxLength` → Boundary value cases (exactly at limit, exceeding by 1)
- `required` → Required field missing cases
- `enum` → Enumeration value cases (valid values + invalid values)
- `minimum` / `maximum` → Numeric range cases
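The `minLength` / `maxLength` rule above, for instance, implies four boundary inputs per string field. A hypothetical helper enumerating them (an illustration of the boundary-value idea, not part of the Skill):

```python
def boundary_strings(min_len: int, max_len: int) -> dict:
    """Boundary-value inputs for a string field: exactly at each limit, and off by one."""
    return {
        "at_min": "a" * min_len,           # valid: exactly minLength
        "below_min": "a" * (min_len - 1),  # invalid: one character short
        "at_max": "a" * max_len,           # valid: exactly maxLength
        "above_max": "a" * (max_len + 1),  # invalid: one character over
    }

# e.g. a password field declared with minLength: 8, maxLength: 64
cases = boundary_strings(8, 64)
```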
#### 2b: DB Constraints

Inspect constraints for each related table:

- `UNIQUE` → Duplicate insertion cases
- `NOT NULL` → Null value insertion cases
- `CHECK` → Constraint violation cases
- `FOREIGN KEY` → Referencing non-existent record cases
- `DEFAULT` → Default value verification when no value is provided

#### 2c: Business Rules

Extract single-point business logic from sequence diagram Step descriptions and EX exception cases:

- Permission checks (not logged in, insufficient permissions)
- State machine transitions (only specific states allow certain operations)
- Rate limiting / throttling rules
- Data computation logic (amount calculations, discount rules)

**Format for each unit test case**:

| Field | Description |
|-------|-------------|
| ID | `UT-{scenario-number}-{sequence}`, e.g., `UT-S01-01` |
| Description | What behavior is being tested |
| Source | Constraint origin (e.g., `auth.yaml → register → email: format:email`) |
| Preconditions | State required before the test |
| Input | Specific input values |
| Expected Output | Expected return value or error message |

### Step 3: Design Scenario Test Cases

Extract scenario test cases from two categories of sources:

#### 3a: Happy Path (Sequence Diagram Step Sequence)

Treat the complete Step 1 → Step N sequence from the sequence diagram as an end-to-end code call chain:

- Determine the scenario's entry and exit points
- Annotate data passing between each Step (previous step's output as next step's input)
- Verify the final state (database records, return values)

#### 3b: Exception Paths (EX Exception Cases)

Expand each EX exception case into a scenario test case:

- Annotate which Step triggers the exception
- Verify the handling logic after the exception is triggered (error response, compensation/rollback)
- Verify the exception did not compromise the integrity of other data

**Format for each scenario test case**:

| Field | Description |
|-------|-------------|
| ID | `ST-{scenario-number}-{sequence}`, e.g., `ST-S01-01` |
| Description | What scenario flow is being tested |
| Covered Steps | Which sequence diagram Steps are covered (e.g., `Step 1→6`) or which EX (e.g., `EX-2.1`) |
| Preconditions | State and data required before the test |
| Operation Sequence | Ordered list of operations following Step sequence |
| Expected Result | Final state (return value + database state + side effects) |

### Step 4: Coverage Validation

Reverse-validate whether test cases cover all critical constraints:

- [ ] Each normal acceptance criterion from Phase 1 maps to at least 1 ST case
- [ ] Each exception acceptance criterion from Phase 1 maps to at least 1 ST or UT case
- [ ] Each EX exception case maps to at least 1 ST case
- [ ] Each `required` field in the API has at least 1 UT case
- [ ] Each `UNIQUE` / `CHECK` constraint in the DB has at least 1 UT case

If any items are uncovered, add supplementary cases or explain the reason to the user.
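The checklist reduces to a mapping check: every criterion must map to at least one case ID. A small sketch of such a reverse validation, where the data shapes are assumptions chosen for illustration:

```python
def uncovered(criteria: list[str], coverage: dict[str, list[str]]) -> list[str]:
    """Return the criterion IDs that map to no test case (neither UT nor ST)."""
    return [c for c in criteria if not coverage.get(c)]

# e.g. S01-AC-03 has no mapped case and must be supplemented or explained
missing = uncovered(
    ["S01-AC-01", "S01-AC-02", "S01-AC-03"],
    {"S01-AC-01": ["ST-S01-01"], "S01-AC-02": ["ST-S01-02"]},
)
```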
### Step 5: Acceptance Criteria Traceability

Extract each GIVEN/WHEN/THEN acceptance criterion from the Phase 1 requirements document, assign a traceability ID to each, and link it to the test case IDs that cover that criterion.

#### Acceptance Criteria ID Rules

- Format: `{scenario-number}-AC-{two-digit-sequence}`, e.g., `S01-AC-01`, `S01-AC-02`
- Numbered in the order they appear in the requirements document; normal and exception criteria use a unified numbering sequence
- AC IDs within the same scenario must be consecutive and unique

#### Traceability Table Rules

1. Read all acceptance criteria (normal + exception) for the current scenario from the requirements document
2. Assign an AC ID to each acceptance criterion
3. Find the test case IDs that cover each criterion (can be UT or ST), and fill in the "Covered By" column
4. Each AC must be linked to at least 1 test case; if it cannot be covered, note the reason in the "Covered By" column

`openlogos verify` parses this traceability table and links AC → test case ID → execution result across three layers to generate a complete acceptance traceability report.
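As a sketch of what parsing one traceability row involves, the helper below splits a three-column markdown row into its AC ID and covering case IDs. The helper itself is hypothetical (the actual `openlogos verify` implementation is not shown in this package diff); only the three-column layout comes from the traceability table this Skill outputs:

```python
def parse_trace_row(row: str) -> tuple[str, list[str]]:
    """Parse '| S01-AC-03 | ... | ST-S01-03, UT-S01-05 |' into (AC ID, case IDs)."""
    cells = [c.strip() for c in row.strip().strip("|").split("|")]
    ac_id, _criterion, covered_by = cells  # AC ID | Acceptance Criterion | Covered By
    case_ids = [c.strip() for c in covered_by.split(",") if c.strip()]
    return ac_id, case_ids

ac, cases = parse_trace_row("| S01-AC-03 | Exception: already initialized | ST-S01-03, UT-S01-05 |")
```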
### Step 6: Output Test Case Specification Document

Output the test case specification document in Markdown format, organized by scenario.

### Step 7: Guide Next Steps

Guide the user to the next step based on project type:

- **Involves API** → "Continue to Step 3b to design API orchestration tests?"
- **Does not involve API** → "Test design is complete. Recommend proceeding to code generation: say 'Implement based on the S01 specification for me'"

## Output Specification

- **File format**: Markdown
- **Location**: `logos/resources/test/`
- **Naming convention**: `{scenario-number}-test-cases.md` (e.g., `S01-test-cases.md`)
- Each file contains: Unit test cases (grouped by source) + Scenario test cases (happy path + exception paths)
- Case IDs are globally unique: `UT-{scenario-number}-{sequence}` / `ST-{scenario-number}-{sequence}`

### Document Structure Template

```markdown
# {scenario-number}: {scenario-name} — Test Cases

## 1. Unit Test Cases

### 1.1 {group-name} (Source: {constraint-origin})

| ID | Description | Source | Preconditions | Input | Expected Output |
|----|-------------|--------|---------------|-------|-----------------|
| UT-S01-01 | ... | ... | ... | ... | ... |

## 2. Scenario Test Cases

### 2.1 Happy Path: {scenario-name}

| ID | Description | Covered Steps | Preconditions | Operation Sequence | Expected Result |
|----|-------------|---------------|---------------|--------------------|-----------------|
| ST-S01-01 | ... | Step 1→6 | ... | ... | ... |

### 2.2 Exception Paths

| ID | Description | Covered EX | Preconditions | Trigger Condition | Expected Result |
|----|-------------|------------|---------------|-------------------|-----------------|
| ST-S01-02 | ... | EX-2.1 | ... | ... | ... |

## 3. Coverage Validation

- [x] Phase 1 normal acceptance criteria: fully covered
- [x] Phase 1 exception acceptance criteria: fully covered
- [x] EX exception cases: fully covered
- [x] API required fields: fully covered
- [x] DB UNIQUE/CHECK constraints: fully covered

## 4. Acceptance Criteria Traceability

| AC ID | Acceptance Criterion | Covered By |
|-------|----------------------|------------|
| S01-AC-01 | Normal: Fresh project initialization — create complete directory structure | ST-S01-01 |
| S01-AC-02 | Normal: Confirm when explicit project name differs from config file | ST-S01-02 |
| S01-AC-03 | Exception: Project already initialized — display error message | ST-S01-03, UT-S01-05 |
```

## Test Case ID Contract

Test case IDs (`UT-S01-01`, `ST-S01-01`) serve as a **binding contract** between design documents and runtime:

- IDs defined in test-cases.md must be used as-is in the generated test code
- The test code reporter writes each case's ID and execution result to a JSONL file
- `openlogos verify` maps execution results back to test case specifications via IDs, automatically determining acceptance
- When modifying case IDs, the corresponding IDs in the test code must be updated simultaneously

See `spec/test-results.md` for the detailed JSONL format definition and reporter code templates for each language.
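The authoritative JSONL schema lives in `spec/test-results.md` and is not reproduced in this diff; as a sketch of the contract, a reporter might append one JSON object per executed case, keyed by the spec's case ID. The field names here (`id`, `status`) are illustrative assumptions, not the defined schema:

```python
import json

def report_result(path: str, case_id: str, passed: bool) -> None:
    """Append one JSONL line per executed case; field names are illustrative only."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps({"id": case_id, "status": "pass" if passed else "fail"}) + "\n")
```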
## Best Practices

- **Test cases are design documents, not code**: This Skill produces test case specifications in Markdown format; the actual test code is implemented by AI during Step 4 code generation based on these specifications
- **Unit first, then scenario**: Unit test cases cover the correctness of individual functions; scenario tests cover cross-module integration — first ensure the building blocks are correct, then verify they fit together
- **Don't overlook DB constraints**: Many bugs originate from database-level constraint violations; DB constraints are an important source of unit test cases
- **Scenario tests focus on data passing**: Data passing between Steps (previous step's output → next step's input) is where errors most commonly occur
- **EX exception cases must have corresponding scenario tests**: Every EX annotated in the sequence diagram should be reflected in scenario tests
- **Boundary values first**: Unit test cases should prioritize boundary values (just valid, just invalid) over random values
- **Complementary with test-orchestrator**: This Skill designs code-level tests (function call level); test-orchestrator designs API-level tests (HTTP request level). Together they cover different layers of the "testing pyramid"
- **Case IDs are cross-phase contracts**: IDs span test-cases.md → test code → test-results.jsonl → acceptance-report.md; any inconsistency will cause `openlogos verify` to report incomplete results

## Recommended Prompts

The following prompts can be copied directly for AI use:

- `Design test cases for me`
- `Design unit tests and scenario tests for S01`
- `Design test cases for all P0 scenarios`
- `Check the test coverage for S01`