@sun-asterisk/sungen 2.4.2 → 2.4.5
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/dist/cli/commands/add.d.ts.map +1 -1
- package/dist/cli/commands/add.js +7 -1
- package/dist/cli/commands/add.js.map +1 -1
- package/dist/cli/index.js +1 -1
- package/dist/generators/test-generator/code-generator.d.ts.map +1 -1
- package/dist/generators/test-generator/code-generator.js +27 -4
- package/dist/generators/test-generator/code-generator.js.map +1 -1
- package/dist/orchestrator/ai-rules-updater.d.ts.map +1 -1
- package/dist/orchestrator/ai-rules-updater.js +2 -0
- package/dist/orchestrator/ai-rules-updater.js.map +1 -1
- package/dist/orchestrator/project-initializer.d.ts +4 -0
- package/dist/orchestrator/project-initializer.d.ts.map +1 -1
- package/dist/orchestrator/project-initializer.js +20 -3
- package/dist/orchestrator/project-initializer.js.map +1 -1
- package/dist/orchestrator/screen-manager.d.ts +9 -0
- package/dist/orchestrator/screen-manager.d.ts.map +1 -1
- package/dist/orchestrator/screen-manager.js +120 -0
- package/dist/orchestrator/screen-manager.js.map +1 -1
- package/dist/orchestrator/templates/ai-instructions/claude-cmd-add-screen.md +22 -19
- package/dist/orchestrator/templates/ai-instructions/claude-cmd-create-test.md +10 -2
- package/dist/orchestrator/templates/ai-instructions/claude-cmd-review.md +5 -0
- package/dist/orchestrator/templates/ai-instructions/claude-cmd-run-test.md +25 -16
- package/dist/orchestrator/templates/ai-instructions/claude-config.md +4 -97
- package/dist/orchestrator/templates/ai-instructions/claude-skill-gherkin-syntax.md +48 -122
- package/dist/orchestrator/templates/ai-instructions/claude-skill-selector-fix.md +172 -25
- package/dist/orchestrator/templates/ai-instructions/claude-skill-tc-generation.md +62 -34
- package/dist/orchestrator/templates/ai-instructions/claude-skill-tc-review.md +19 -14
- package/dist/orchestrator/templates/ai-instructions/claude-skill-test-design-techniques.md +99 -0
- package/dist/orchestrator/templates/ai-instructions/claude-skill-viewpoint.md +151 -64
- package/dist/orchestrator/templates/ai-instructions/copilot-cmd-add-screen.md +21 -15
- package/dist/orchestrator/templates/ai-instructions/copilot-cmd-create-test.md +10 -3
- package/dist/orchestrator/templates/ai-instructions/copilot-cmd-review.md +5 -0
- package/dist/orchestrator/templates/ai-instructions/copilot-cmd-run-test.md +24 -15
- package/dist/orchestrator/templates/ai-instructions/copilot-config.md +4 -97
- package/dist/orchestrator/templates/ai-instructions/github-skill-sungen-gherkin-syntax.md +48 -122
- package/dist/orchestrator/templates/ai-instructions/github-skill-sungen-selector-fix.md +172 -25
- package/dist/orchestrator/templates/ai-instructions/github-skill-sungen-tc-generation.md +63 -29
- package/dist/orchestrator/templates/ai-instructions/github-skill-sungen-tc-review.md +19 -14
- package/dist/orchestrator/templates/ai-instructions/github-skill-sungen-test-design-techniques.md +99 -0
- package/dist/orchestrator/templates/ai-instructions/github-skill-sungen-viewpoint.md +151 -64
- package/dist/orchestrator/templates/readme.md +1 -1
- package/dist/orchestrator/templates/tsconfig.json +21 -0
- package/package.json +1 -1
- package/src/cli/commands/add.ts +8 -1
- package/src/cli/index.ts +1 -1
- package/src/generators/test-generator/code-generator.ts +29 -4
- package/src/orchestrator/ai-rules-updater.ts +2 -0
- package/src/orchestrator/project-initializer.ts +24 -3
- package/src/orchestrator/screen-manager.ts +125 -0
- package/src/orchestrator/templates/ai-instructions/claude-cmd-add-screen.md +22 -19
- package/src/orchestrator/templates/ai-instructions/claude-cmd-create-test.md +10 -2
- package/src/orchestrator/templates/ai-instructions/claude-cmd-review.md +5 -0
- package/src/orchestrator/templates/ai-instructions/claude-cmd-run-test.md +25 -16
- package/src/orchestrator/templates/ai-instructions/claude-config.md +4 -97
- package/src/orchestrator/templates/ai-instructions/claude-skill-gherkin-syntax.md +48 -122
- package/src/orchestrator/templates/ai-instructions/claude-skill-selector-fix.md +172 -25
- package/src/orchestrator/templates/ai-instructions/claude-skill-tc-generation.md +62 -34
- package/src/orchestrator/templates/ai-instructions/claude-skill-tc-review.md +19 -14
- package/src/orchestrator/templates/ai-instructions/claude-skill-test-design-techniques.md +99 -0
- package/src/orchestrator/templates/ai-instructions/claude-skill-viewpoint.md +151 -64
- package/src/orchestrator/templates/ai-instructions/copilot-cmd-add-screen.md +21 -15
- package/src/orchestrator/templates/ai-instructions/copilot-cmd-create-test.md +10 -3
- package/src/orchestrator/templates/ai-instructions/copilot-cmd-review.md +5 -0
- package/src/orchestrator/templates/ai-instructions/copilot-cmd-run-test.md +24 -15
- package/src/orchestrator/templates/ai-instructions/copilot-config.md +4 -97
- package/src/orchestrator/templates/ai-instructions/github-skill-sungen-gherkin-syntax.md +48 -122
- package/src/orchestrator/templates/ai-instructions/github-skill-sungen-selector-fix.md +172 -25
- package/src/orchestrator/templates/ai-instructions/github-skill-sungen-tc-generation.md +63 -29
- package/src/orchestrator/templates/ai-instructions/github-skill-sungen-tc-review.md +19 -14
- package/src/orchestrator/templates/ai-instructions/github-skill-sungen-test-design-techniques.md +99 -0
- package/src/orchestrator/templates/ai-instructions/github-skill-sungen-viewpoint.md +151 -64
- package/src/orchestrator/templates/readme.md +1 -1
- package/src/orchestrator/templates/tsconfig.json +21 -0
@@ -1,20 +1,161 @@
 ---
 name: sungen-selector-fix
-description: 'Selector fixing strategy —
+description: 'Selector fixing strategy — phased execution, priority-first diagnosis, targeted MCP exploration. Auto-loaded by run-test command.'
 user-invocable: false
 ---
 
-## Strategy:
+## Strategy: Phased Execution
 
-
+Run tests in priority waves — catch fundamental issues early, fix critical paths first, let shared fixes cascade to lower-priority tests.
 
-**
+**Never run all tests blindly.** Always start with selector pre-generation, then a smoke check.
 
 ---
 
-##
+## Phase 0: Pre-run Selector Generation (Playwright MCP)
 
-
+**Before any `sungen generate` or test run**, populate `selectors.yaml` from the live page so tests don't fail on missing keys in Phase 1.
+
+### When to run Phase 0
+
+- `selectors.yaml` missing, empty, or contains only the page selector
+- The `.feature` file has `[Reference]` keys without corresponding YAML entries and the referenced element can't be auto-inferred (see `sungen-selector-keys` § Auto-Infer)
+- User explicitly re-scans after UI changes
+
+If existing selectors already cover the feature file, **skip Phase 0** and go straight to compile + Phase 1.
+
+### Steps
+
+1. **Confirm with the user** via `AskUserQuestion`: *"Generate selectors from the live page via Playwright MCP now?"* — offer **Yes, scan live page** / **Skip (use existing selectors.yaml)** / **Cancel**.
+2. **Collect references**: parse the `.feature` file for every `[Reference]` element + its type (e.g. `[Submit] button`, `[Email] field`). Deduplicate.
+3. **Ensure page selector**: if missing, ask user for URL path and write it first.
+4. **Navigate**:
+   - Read `baseURL` from `playwright.config.ts`.
+   - `browser_navigate` to the page URL.
+   - If redirected to login → run **Phase 0.5: Auth Persistence** first (see below), then re-navigate to the target page.
+5. **Snapshot**: take **ONE** `browser_snapshot`. All Phase 0 selectors come from this single snapshot.
+6. **Generate YAML entries**:
+   - Keys: follow `sungen-selector-keys` (lowercase, Unicode preserved, `--type` / `--N` suffixes).
+   - Selector priority: follow the table in **Diagnosis & Fix § Step 3** (`testid` > `role`+name > `placeholder` > `label` > `locator` > `text`).
+   - Copy names **character-for-character** from the snapshot. Never infer from the Gherkin label.
+   - If an element is auto-inferable per `sungen-selector-keys` § Auto-Infer, **omit it** from YAML — keep the file minimal.
+7. **Merge, don't overwrite**: preserve the page selector and any user-authored entries in `selectors.yaml`. Only add missing keys.
+8. **Show summary + confirm**: list the keys that will be added, ask the user to approve, then write the file.
+9. **Compile**: `sungen generate --screen <screen>` — then proceed to Phase 1.
+
+### Common Phase 0 pitfalls
+
+- Writing keys inferred from the Gherkin label instead of the snapshot name → Phase 1 will fail with "no element found".
+- Skipping Phase 0.5 when an auth redirect happened → snapshot captures the login page, all selectors wrong.
+- Using `browser_evaluate` alone to scrape cookies → misses httpOnly session cookies. Always use `browser_storage_state` (or the `browser_run_code` fallback).
+- Overwriting user-authored selectors → always merge.
+
+---
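Step 2 above (collect `[Reference]` elements from the `.feature` file) can be sketched as a small scan. This is an illustrative sketch only: `collectReferences` and the element-type list are hypothetical names, not sungen's actual parser.

```typescript
// Hypothetical sketch of Phase 0 step 2: collect deduplicated [Reference]
// elements and their declared types from feature text. The element-type
// vocabulary here is an assumption, not sungen's real grammar.
const ELEMENT_TYPES = ["button", "field", "dialog", "table", "heading", "column", "message"];

function collectReferences(feature: string): Map<string, string> {
  const refs = new Map<string, string>();
  const pattern = new RegExp(`\\[([^\\]]+)\\]\\s*(${ELEMENT_TYPES.join("|")})?`, "g");
  for (const line of feature.split("\n")) {
    for (const m of line.matchAll(pattern)) {
      const key = m[1].trim();
      // First occurrence wins; repeated references are deduplicated.
      if (!refs.has(key)) refs.set(key, m[2] ?? "unknown");
    }
  }
  return refs;
}
```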
+
+## Phase 0.5: Auth Persistence (MCP alternative to `sungen makeauth`)
+
+Capture an authenticated session from the MCP browser into `specs/.auth/<role>.json` — the same path `sungen makeauth` writes to, which compiled tests already reference via `test.use({ storageState })` based on `@auth:<role>` tags. No `playwright.config.ts` edits needed. Run once per auth lifetime, not on every selector fix.
+
+### When to run Phase 0.5
+
+- Phase 0 navigation hit a login redirect and `specs/.auth/<role>.json` is missing or expired
+- A scenario tagged `@auth:<role>` is about to run and its auth file is absent
+- User asks to refresh auth
+
+Skip if `specs/.auth/<role>.json` already exists and a probe navigation reaches an authenticated page without redirecting to login.
+
+### Steps
+
+1. **Resolve the role**:
+   - Look at the `.feature` file for `@auth:<role>` tags (feature-level or scenario-level). Pick the role for the scenario being run. If no tag exists, default to `user`.
+   - Target file: `specs/.auth/<role>.json`. Create `specs/.auth/` if missing.
+   - If the file already exists → use `AskUserQuestion` to confirm overwrite (mirrors the `(y/N)` prompt in `sungen makeauth`).
+2. **Navigate to login**:
+   - Read `baseURL` from `playwright.config.ts` (fall back to `APP_BASE_URL` env, then `http://localhost:3000` — same resolution order as `sungen makeauth`).
+   - `browser_navigate` to `<baseURL>/login`. If the app uses a different login path, ask the user.
+   - If the URL doesn't stay on `/login` after load → user is already signed in. Skip step 3.
+3. **Ask the user to log in manually** in the MCP browser (username, password, MFA, SSO — whatever the app needs). Never type credentials via `browser_type` or script the login. Wait for the user to confirm in chat that they're signed in.
+4. **Verify login** — check the current URL or take a `browser_snapshot`; confirm the page is no longer on `/login`.
+5. **Export storage state** (preferred → fallback):
+   - **Preferred** — `browser_storage_state` with `filename: "specs/.auth/<role>.json"` (native Playwright MCP tool; captures cookies including httpOnly + localStorage + sessionStorage via the Playwright context — same output format as `context.storageState({ path })` used by `sungen makeauth`).
+   - **Fallback** — if `browser_storage_state` isn't available in this MCP version, use `browser_run_code` to execute `await context.storageState({ path: 'specs/.auth/<role>.json' })`.
+   - **Do NOT** use `browser_evaluate` for auth export — it misses httpOnly cookies and session auth will fail silently.
+6. **Gitignore** — ensure `specs/.auth/` (or `specs/.auth/*.json`) is in `.gitignore`. Add it if missing.
+7. **Return to Phase 0 step 4** — re-`browser_navigate` to the target page; the session is now active.
+
+### Phase 0.5 pitfalls
+
+- Writing to a path other than `specs/.auth/<role>.json` → compiled tests won't find the file. Always match `sungen makeauth`'s convention.
+- Committing `specs/.auth/*.json` → leaks a live session. Always gitignore.
+- Scripting the login with `browser_type` → bypasses MFA/CAPTCHA and risks account lockout. Always manual.
+- Running Phase 0.5 on every `run-test` invocation → unnecessary; reuse the file until tests start redirecting to login.
+- Mismatch between `<role>` in the auth file and `@auth:<role>` tag → compiled tests reference a nonexistent file.
+
+---
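The "missing or expired" check on the auth file can be approximated as pure logic over the storage-state JSON that Playwright's `context.storageState()` writes (a `cookies` array plus `origins`). A minimal sketch, assuming that format; `authStateIsFresh` is an illustrative name, not a sungen API:

```typescript
// Illustrative freshness probe for specs/.auth/<role>.json.
// Shape follows Playwright's storage-state JSON; `expires` is unix
// seconds, with -1 meaning a session cookie.
interface StorageState {
  cookies: { name: string; expires: number }[];
  origins: unknown[];
}

function authStateIsFresh(state: StorageState, nowSeconds: number): boolean {
  if (state.cookies.length === 0) return false;
  // Session cookies (-1) can't be judged from the file alone, so treat
  // them as valid here; a probe navigation is the real test.
  return state.cookies.every((c) => c.expires === -1 || c.expires > nowSeconds);
}
```

A probe navigation (does an authenticated page load without a `/login` redirect?) remains the authoritative check; this only avoids obviously stale files.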
+
+## Phase 1: Smoke Check (catch fundamentals)
+
+Run **up to 5 scenarios** — pick the first `@critical` or `@high` scenarios in the feature file.
+
+```bash
+npx playwright test --grep "VP-.*-001|VP-.*-002|VP-.*-003|VP-.*-004|VP-.*-005" --reporter=line
+```
+
+**Purpose:** Detect broken fundamentals before running 50+ tests:
+- Page selector wrong → ALL tests would fail (1 fix, not 50 diagnoses)
+- Auth redirect → need `@no-auth` or user login
+- Base `@steps:` scenario broken → all `@extend:` scenarios would fail
+
+**If all 5 pass** → skip to Phase 2.
+**If failures** → diagnose and fix (see Diagnosis & Fix below), then re-run smoke. Max 2 attempts here.
+
+---
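The five-ID grep pattern in the smoke command can be generated rather than hand-typed. A minimal sketch; `smokeGrepPattern` is a hypothetical helper, not part of sungen:

```typescript
// Build the alternation "VP-.*-001|...|VP-.*-00N" used by the smoke-check
// grep above, for the first `count` scenario IDs.
function smokeGrepPattern(count: number): string {
  const ids: string[] = [];
  for (let i = 1; i <= count; i++) {
    ids.push(`VP-.*-${String(i).padStart(3, "0")}`);
  }
  return ids.join("|");
}
```

The result is passed verbatim to `npx playwright test --grep "<pattern>"`.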
+
+## Phase 2: Priority Wave (@critical + @high)
+
+Run all `@critical` and `@high` scenarios:
+
+```bash
+npx playwright test --grep "@critical|@high" --reporter=line
+```
+
+If your Playwright config doesn't support tag grep, use scenario name grep from the feature file — collect VP IDs of `@critical` and `@high` scenarios.
+
+**Fix only failures from this wave.** Most shared selectors (buttons, headings, navigation) get fixed here because critical/high scenarios exercise them.
+
+Max 2 fix attempts in this phase.
+
+---
+
+## Phase 3: Full Run (@normal + @low)
+
+Run remaining scenarios:
+
+```bash
+npx playwright test --reporter=line
+```
+
+Many selectors already fixed from Phase 2 (shared elements). Only diagnose **new** failures — selectors that only appear in lower-priority scenarios.
+
+Max 1 fix attempt. If `@low` scenarios still fail after fix → **report and move on**, don't loop.
+
+---
+
+## Phase 4: Regression
+
+One final full run to confirm all phases together:
+
+```bash
+npx playwright test --reporter=line
+```
+
+Report results. Do NOT enter another fix loop here.
+
+---
+
+## Diagnosis & Fix (used in each phase)
+
+### Step 1: Parse Failures
 
 | Error pattern | Root cause | Fix target |
 |---|---|---|
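The Phase 2 fallback ("collect VP IDs of `@critical` and `@high` scenarios") can be sketched as a scan over the feature text. A hypothetical sketch, assuming tags sit on the line(s) directly above their `Scenario:` line; `priorityVpIds` is not a sungen API:

```typescript
// Illustrative sketch of the tag-grep fallback: find VP IDs whose tag
// block contains @critical or @high, for use with scenario-name grep.
function priorityVpIds(feature: string): string[] {
  const lines = feature.split("\n").map((l) => l.trim());
  const ids: string[] = [];
  for (let i = 0; i < lines.length; i++) {
    if (!/(^|\s)@(critical|high)(\s|$)/.test(lines[i])) continue;
    // Walk down through the tag block to the Scenario line it annotates.
    for (let j = i + 1; j < lines.length; j++) {
      const m = lines[j].match(/^Scenario: (VP-[A-Z]+-\d{3})/);
      if (m) { ids.push(m[1]); break; }
      if (!lines[j].startsWith("@")) break; // ran past the tag block
    }
  }
  return [...new Set(ids)];
}
```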
@@ -28,24 +169,18 @@ Parse Playwright error output to categorize failures:
 
 **Check `test-results/` first** — Playwright captures failure screenshots automatically. Use these to diagnose before any MCP exploration.
 
-
-
-## Step 2: Targeted MCP Exploration
+### Step 2: Targeted MCP Exploration
 
 Only when `test-results/` screenshots are insufficient:
 
 1. Read `baseURL` from `playwright.config.ts`
 2. `browser_navigate` to target page
-3. If redirected to login →
+3. If redirected to login → run **Phase 0.5: Auth Persistence**, then re-navigate
 4. Take **ONE** `browser_snapshot` — fix all broken selectors from this single snapshot
 
-
+Never use `browser_evaluate` to inject or read cookies (misses httpOnly). For auth, use Phase 0.5 or `sungen makeauth`.
 
-
-
-## Step 3: Fix Broken Selectors
-
-For each failed selector, find the correct locator from the snapshot:
+### Step 3: Fix Broken Selectors
 
 Selector priority (use first applicable):
 
@@ -73,9 +208,18 @@ Common fixes:
 - Element in iframe → add `frame` field
 - Dynamic content → use `testid` or structural `role` + `nth`
 
+### Step 4: Recompile After Fix
+
+Always recompile before re-running:
+```bash
+sungen generate --screen <screen>
+```
+
+Then re-run only the current phase's failing tests, not all tests.
+
 ---
 
-##
+## Table Selectors
 
 For table patterns, add table selectors with `columns` config:
 
@@ -100,7 +244,7 @@ users:
 
 ---
 
-##
+## Detail Screens with Dynamic IDs
 
 For screens like `/admin/users/:id`:
 1. Navigate to list page via MCP to find a real record ID
@@ -114,12 +258,15 @@ user detail:
 
 ---
 
-##
+## Attempt Budget Summary
 
-
-
-
-
-
+| Phase | What runs | Max fix attempts | On failure after max |
+|---|---|---|---|
+| 0. Pre-gen | Playwright MCP snapshot → write selectors.yaml | 1 snapshot | Ask user — skip or retry navigation |
+| 0.5. Auth | Manual login in MCP browser → `browser_storage_state` → `specs/.auth/<role>.json` | 1 login | Ask user — retry login or fall back to `sungen makeauth` |
+| 1. Smoke | First 5 @critical/@high | 2 | Ask user — fundamentals broken |
+| 2. Priority | All @critical + @high | 2 | Report failures, continue to Phase 3 |
+| 3. Full | All tests | 1 | Report @low/@normal failures, continue |
+| 4. Regression | All tests | 0 | Report final results |
 
-
+**Total worst case: 5 fix attempts** (2+2+1), not unbounded loops. Phases 0 and 0.5 don't count toward fix budget.
@@ -22,7 +22,7 @@ For append: read highest `VP-<CAT>-<NNN>`, continue from next number. Never modi
 When `qa/screens/<screen>/requirements/` exists:
 - **`spec.md`** — primary: sections, field constraints, validation messages, business rules, states
 - **`ui/`** — supplementary: screenshots for layout/visual context
-- **`
+- **`test-viewpoint.md`** — supplementary: edge cases, known issues
 
 Requirements improve every viewpoint: exact error messages for VAL, business rules for LOGIC, role permissions for SEC.
 
@@ -47,16 +47,64 @@ When exploring live page or reading Figma designs, actively collect to hardcode
 
 ## Section Identification
 
-Identify sections from page patterns. Use `sungen-viewpoint` skill for the
+Identify sections from page patterns. Use `sungen-viewpoint` skill for the 10 pattern types (Form & Inputs, Data Table, Create/Add, Update/Edit, Delete, Search, Filter, Pagination, Modal/Dialog, List/Card). Present sections and ask user which to focus on.
 
-##
+## Test Generation Strategy
+
+### Step 1 — Spec-first extraction (always do this first)
+
+Before applying any checklist, extract test conditions from `spec.md` (and `test-viewpoint.md` if present):
+- **Validation rules**: field constraints, error messages, required/optional
+- **Business rules**: eligibility, calculation logic, permission-based behavior
+- **State lifecycle**: allowed transitions, blocked transitions
+- **Edge cases**: boundary values, empty states, concurrent conditions
+
+These spec-extracted conditions drive **which scenarios exist** — `sungen-viewpoint` only supplements with generic web UI coverage that spec doesn't explicitly state.
+
+### Step 2 — Apply test design techniques
+
+Apply `sungen-test-design-techniques` to spec-extracted conditions:
+
+| Technique | Apply when spec mentions |
+|---|---|
+| EP | Valid/invalid ranges, categories → **one** scenario per class, not per value |
+| BVA | Numeric range, string length → `min-1`, `min`, `max`, `max+1` (compact 4-point default) |
+| Decision Table | 2+ dependent conditions → one scenario per combination (cap at distinct outcomes if >3 conditions) |
+| State Transition | Entity lifecycle → one scenario per valid transition + key invalid transitions |
+
+### Step 3 — Fill coverage gaps with viewpoint checklists
 
 Use `sungen-viewpoint` skill for per-pattern checklists across 4 viewpoints: UI/UX, Data & Validate, Logic, Security.
 
-
+Add scenarios for generic UI coverage that spec didn't explicitly state (empty states, loading states, keyboard nav, hover effects). Skip viewpoints truly N/A.
 
 **Validation rule**: capture actual error messages from live page or spec.md. Use `User see {{error_var}}` — never assert just "is visible".
 
+## Priority Tags (auto-assign)
+
+Every scenario **MUST** have exactly one priority tag. Add it before the scenario line (after `@extend:` if present).
+
+| Tag | When to use |
+|---|---|
+| `@critical` | System unusable if fails — login/logout, authentication redirect, main create/submit/delete, permission denied |
+| `@high` | Major feature broken — required field validation, core business rules, data displays correctly, key navigation |
+| `@normal` | Degraded experience — UI layout/element presence, secondary flows, optional field validation, search/filter |
+| `@low` | Minor/cosmetic — tooltips, hover states, empty states, default sort, placeholder text |
+
+### Auto-assign heuristics
+
+| Viewpoint + Pattern | Default priority |
+|---|---|
+| VP-SEC-* (all security scenarios) | `@critical` |
+| VP-LOGIC-* with create/submit/delete/login | `@critical` |
+| VP-LOGIC-* other state changes | `@high` |
+| VP-VAL-* required field / submit empty | `@high` |
+| VP-VAL-* format, boundary, optional fields | `@normal` |
+| VP-UI-* form fields present, table columns | `@normal` |
+| VP-UI-* hover, tooltip, empty state, placeholder | `@low` |
+
+**`@steps:` scenarios** do NOT get a priority tag (they are setup blocks, not test cases).
+
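The auto-assign heuristics table reads naturally as a small decision function. A sketch under stated assumptions: `defaultPriority` and its free-text `hints` argument are illustrative, not how the create-test command actually classifies scenarios.

```typescript
// Illustrative encoding of the auto-assign heuristics table. `hints` is
// assumed to be the scenario name/steps text; the real command's inputs
// may differ.
type Priority = "@critical" | "@high" | "@normal" | "@low";

function defaultPriority(vpId: string, hints: string): Priority {
  const text = hints.toLowerCase();
  if (vpId.startsWith("VP-SEC-")) return "@critical";
  if (vpId.startsWith("VP-LOGIC-")) {
    return /create|submit|delete|login/.test(text) ? "@critical" : "@high";
  }
  if (vpId.startsWith("VP-VAL-")) {
    return /required|empty/.test(text) ? "@high" : "@normal";
  }
  // VP-UI-*: cosmetic checks drop to @low, layout/presence stays @normal.
  return /hover|tooltip|empty state|placeholder/.test(text) ? "@low" : "@normal";
}
```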
 ## SPA Wait-For Steps
 
 ```gherkin
@@ -74,58 +122,38 @@ And User wait for [Page Title] heading is visible
 @auth:role
 Feature: <Screen> Screen
 
-# Shared setup —
+# Shared setup — NO priority tag on @steps
 @steps:open_form
 Scenario: Open form
 Given User is on [Screen] page
-And User wait for [Screen Title] heading is visible
 When User click [Create] button
 Then User see [Form] dialog
 
-#
-# Section: Create User Form
-# ============================================================
-
-# --- UI/UX ---
+# --- Section: Create User Form ---
 
-@extend:open_form
+@normal @extend:open_form
 Scenario: VP-UI-001 Form displays all fields with correct defaults
 Given User is on [Form] dialog
 Then User see [Name] field
-And User see [Email] field
 And User see [Submit] button is disabled
 
-
-
-@extend:open_form
+@high @extend:open_form
 Scenario: VP-VAL-001 Submit with all empty fields shows errors
 Given User is on [Form] dialog
 When User click [Submit] button
 Then User see [Name error] message with {{name_required_error}}
 
-#
-# Section: User Table
-# ============================================================
-
-# --- UI/UX ---
+# --- Section: User Table ---
 
+@normal
 Scenario: VP-UI-010 Table displays all columns
 Then User see [Name] column in [Users] table
-And User see [Email] column in [Users] table
-And User see [Status] column in [Users] table
-
-# --- Data & Validate ---
 
+@high
 Scenario: VP-VAL-010 Table displays correct data
 Then User see [Users] table match data:
-| Name
-| {{name_1}}
-| {{name_2}} | {{email_2}} | {{status_2}} |
-
-Scenario: VP-VAL-011 Edit button targets correct row
-Given User see [Target] row in [Users] table with {{name_1}}
-When User click [Edit] button in [Users] table with {{name_1}}
-Then User see [Name] field with {{name_1}}
+| Name | Email |
+| {{name_1}} | {{email_1}} |
 ```
 
 ### When to use DataTable vs Row Scope
@@ -135,7 +163,7 @@ Feature: <Screen> Screen
 | `table match data:` + DataTable | Verifying **multiple rows** exist with expected values |
 | `row in [Table] table with {{v}}` + `column with {{v}}` | Checking **single row** details or **acting** on a row (click, edit) |
 
-**Naming**: `VP-<CATEGORY>-<NNN>` prefix.
+**Naming**: `VP-<CATEGORY>-<NNN>` prefix. Scenario name must use the **same element type** as the steps — e.g., if the step uses `dialog`, write "dialog opens" not "modal opens".
 
 **Test data** — `qa/screens/<screen>/test-data/<screen>.yaml`, grouped by section.
 
@@ -20,16 +20,16 @@ user-invocable: false
 
 ### Coverage (40 pts)
 
-| Dimension | Pts |
-
-| Happy paths | 8 |
-| Negative cases | 8 |
-| Edge cases |
-| Boundary values |
-| State transitions | 5 |
-
+| Dimension | Technique | Pts | What to check |
+|---|---|---|---|
+| Happy paths | — | 8 | Core success flows exist |
+| Negative cases | EP | 8 | One scenario per invalid class, no redundant same-class scenarios |
+| Edge cases | EP | 6 | Empty, null, whitespace, special chars covered |
+| Boundary values | BVA | 8 | `min-1`, `min`, `max`, `max+1` for each spec range |
+| State transitions | ST | 5 | Valid transitions + key blocked paths from spec |
+| Condition combos | DT | 5 | Dependent conditions covered, distinct outcomes tested |
 
-Score: `(dimensions_covered / 6) * 40`. Per-pattern checklists → `sungen-viewpoint` skill.
+Score: `(dimensions_covered / 6) * 40`. Validate technique application with `sungen-test-design-techniques`. Per-pattern checklists → `sungen-viewpoint` skill.
 
 ### Viewpoint (30 pts)
 
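The coverage score formula `(dimensions_covered / 6) * 40` is small enough to state as code. A sketch only; `coverageScore` and the dimension keys are illustrative names:

```typescript
// The coverage score formula: fraction of the six dimensions covered,
// scaled to the 40-point Coverage budget.
function coverageScore(covered: Set<string>): number {
  const dimensions = ["happy", "negative", "edge", "boundary", "state", "combos"];
  const hit = dimensions.filter((d) => covered.has(d)).length;
  return (hit / dimensions.length) * 40;
}
```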
@@ -37,7 +37,8 @@ Score: `(dimensions_covered / 6) * 40`. Per-pattern checklists → `sungen-viewp
 |---|---|
 | All applicable VP present (UI/VAL/LOGIC/SEC) | 10 |
 | Correct classification | 8 |
-| `VP-<CAT>-<NNN>` naming + section grouping |
+| `VP-<CAT>-<NNN>` naming + section grouping | 4 |
+| Priority tag present and correct (`@critical`/`@high`/`@normal`/`@low`) | 4 |
 | Assertion quality (see rules below) | 4 |
 
 **Classification**: UI = static/always-same appearance. VAL = input validation/errors. LOGIC = behavior/state changes (includes persisted state without When). SEC = auth/permissions.
@@ -58,6 +59,7 @@ Score: `(dimensions_covered / 6) * 40`. Per-pattern checklists → `sungen-viewp
 2. **`When fill [X]`** → Then must assert the **visible result** (search results, validation error). Don't re-assert the field value.
 3. **UI-only scenarios** (no action needed) → use Given + Then without When.
 4. **Scenario name must match the assertion**, not the action.
+5. **Scenario name must use the same element type as the steps** — e.g., "dialog opens" + `[X] dialog`, never "modal opens" + `[X] dialog`.
 
 ### @manual Rules
 
@@ -72,13 +74,16 @@ Do NOT mark `@manual` when data is visible in snapshot or documented in spec —
 
 ## Checklist (auto-fix on detection)
 
-1. **Redundant scenarios** —
+1. **Redundant scenarios (EP violation)** — multiple scenarios testing same equivalence class? Keep one representative, remove rest
 2. **Misclassified VP** — UI tests behavior? Move to LOGIC. Logic tests appearance? Move to UI
 3. **Dynamic content** — exact match on counters/timestamps? Use `contains` instead
 4. **Duplicate across sections** — SEC scenario identical to UI? Remove duplicate
-5. **
-6. **
-7. **
+5. **Missing/wrong priority tag** — every non-`@steps` scenario needs exactly one of `@critical`/`@high`/`@normal`/`@low`. SEC→`@critical`, VAL required→`@high`, UI layout→`@normal`, hover/tooltip→`@low`
+6. **Always-enabled elements** — `is enabled` on never-disabled element? Remove
+7. **Test-data completeness** — every `{{var}}` must exist in test-data.yaml
+8. **Missing BVA boundaries** — spec defines min/max range but scenarios only test midpoint? Add `min-1`, `min`, `max`, `max+1`
+9. **Missing state transitions** — spec defines lifecycle states but only happy path tested? Add blocked transitions
+10. **Uncovered condition combos** — spec has 2+ dependent conditions but only partial combinations tested? Build decision table
 
 ---
 
@@ -0,0 +1,99 @@
|
|
|
1
|
+
---
|
|
2
|
+
name: sungen-test-design-techniques
|
|
3
|
+
description: 'Test design techniques (EP, BVA, Decision Table, State Transition) for systematic scenario generation from spec constraints. Auto-loaded by create-test command.'
|
|
4
|
+
user-invocable: false
|
|
5
|
+
---
|
|
6
|
+
|
|
7
|
+
## When to Apply
|
|
8
|
+
|
|
9
|
+
| Technique | Apply when spec mentions |
|
|
10
|
+
|---|---|
|
|
11
|
+
| EP (Equivalence Partitioning) | Input types, categories, roles, valid/invalid ranges |
|
|
12
|
+
| BVA (Boundary Value Analysis) | Numeric range, string length, date range, count limit |
|
|
13
|
+
| Decision Table | 2+ mutually dependent conditions with different outcomes |
|
|
14
|
+
| State Transition | Entity lifecycle, workflow states, status changes |
|
|
15
|
+
|
|
16
|
+
**Rule:** These techniques determine **how many** and **which** scenarios to generate. `sungen-viewpoint` determines **which viewpoints** to cover.
|
|
17
|
+
|
|
18
|
+
---
|
|
19
|
+
|
|
20
|
+
## 1. Equivalence Partitioning (EP)
|
|
21
|
+
|
|
22
|
+
**Goal:** One representative per input class. If one value in a partition passes, all values in that partition pass.
|
|
23
|
+
|
|
24
|
+
**How to apply:**
|
|
25
|
+
1. Extract partitions from `spec.md` constraints (e.g., field accepts 1-100)
|
|
26
|
+
2. Valid class: 1 <= value <= 100
|
|
27
|
+
3. Invalid class (below): value < 1
|
|
28
|
+
4. Invalid class (above): value > 100
|
|
29
|
+
5. Write **one** scenario per class
|
|
30
|
+
|
|
31
|
+
**Anti-pattern:**
|
|
32
|
+
```gherkin
|
|
33
|
+
# BAD — 3 scenarios, same class, same result:
|
|
34
|
+
Scenario: VP-VAL-001 Enter value 10
|
|
35
|
+
Scenario: VP-VAL-002 Enter value 50
|
|
36
|
+
Scenario: VP-VAL-003 Enter value 80
|
|
37
|
+
```
|
|
38
|
+
```gherkin
|
|
39
|
+
# GOOD — one representative per class:
|
|
40
|
+
Scenario: VP-VAL-001 Valid range value is accepted # value = 50
|
|
41
|
+
Scenario: VP-VAL-002 Below minimum is rejected # value = 0
|
|
42
|
+
Scenario: VP-VAL-003 Above maximum is rejected # value = 101
|
|
43
|
+
```
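The partition-derivation steps above can be sketched as a small TypeScript helper. The names here are hypothetical illustrations, not part of sungen's API:

```typescript
// Hypothetical sketch (not sungen API): derive one representative value per
// equivalence class from a numeric constraint such as "field accepts 1-100".
type Partition = { cls: string; representative: number; valid: boolean };

function equivalencePartitions(min: number, max: number): Partition[] {
  return [
    // Valid class: any value in [min, max] stands for the whole class.
    { cls: "valid", representative: Math.floor((min + max) / 2), valid: true },
    // Invalid class (below): value < min
    { cls: "invalid-below", representative: min - 1, valid: false },
    // Invalid class (above): value > max
    { cls: "invalid-above", representative: max + 1, valid: false },
  ];
}

// "field accepts 1-100" → one scenario per class: 50, 0, 101
const parts = equivalencePartitions(1, 100);
```

Each returned entry maps to exactly one generated scenario, as in the GOOD example above.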
---

## 2. Boundary Value Analysis (BVA)

**Goal:** Test exact edges where off-by-one errors occur (`>` vs `>=`, `<` vs `<=`).

### Two modes

| Mode | Values | Use when |
|---|---|---|
| **Compact (default)** | `min-1`, `min`, `max`, `max+1` | Most fields |
| **Full 6-point** | `min-1`, `min`, `min+1`, `max-1`, `max`, `max+1` | Critical fields with `@critical`/`@high` priority |

**How to apply** (example: "quantity must be 1-10"):
- `min-1` = 0 → invalid
- `min` = 1 → valid (lower boundary)
- `max` = 10 → valid (upper boundary)
- `max+1` = 11 → invalid
- Midpoint (e.g., 5) already covered by EP valid class

**BVA scenarios** (example: quantity 1-10):
- `@high VP-VAL-010 Below minimum (0) is rejected`
- `@high VP-VAL-011 Minimum boundary (1) is accepted`
- `@high VP-VAL-012 Maximum boundary (10) is accepted`
- `@high VP-VAL-013 Above maximum (11) is rejected`
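As a minimal sketch of the two modes, assuming an integer range (hypothetical helper, not part of sungen):

```typescript
// Hypothetical sketch (not sungen API): boundary values for a numeric range.
// Compact mode hits the four off-by-one edges; full 6-point mode adds the
// values just inside each boundary for critical fields.
function boundaryValues(min: number, max: number, full = false): number[] {
  const compact = [min - 1, min, max, max + 1];
  const sixPoint = [min - 1, min, min + 1, max - 1, max, max + 1];
  return full ? sixPoint : compact;
}

// "quantity must be 1-10"
const compactVals = boundaryValues(1, 10);     // [0, 1, 10, 11]
const fullVals = boundaryValues(1, 10, true);  // [0, 1, 2, 9, 10, 11]
```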
---

## 3. Decision Table

**Goal:** Cover all condition combinations when 2+ conditions constrain each other.

**How to apply:** List conditions from `spec.md` → build combination→outcome table → one scenario per row.

**Cap:** When >3 boolean conditions (>8 rows), prioritize rows with **distinct outcomes** and add `@manual` for exhaustive combos.

**Example** — Submit requires valid form AND permission → 4 combos, 2 distinct outcomes:
- `@normal` Form invalid + no permission → disabled
- `@normal` Form valid + no permission → disabled
- `@normal` Form invalid + has permission → disabled
- `@critical` Form valid + has permission → succeeds
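The row-enumeration step can be sketched as follows (hypothetical helper, not part of sungen); the submit example above becomes a 4-row table with one succeeding row:

```typescript
// Hypothetical sketch (not sungen API): enumerate every combination of
// boolean conditions, one table row per combination.
function decisionTable(conditions: string[]): boolean[][] {
  const rows: boolean[][] = [];
  for (let i = 0; i < 2 ** conditions.length; i++) {
    rows.push(conditions.map((_, j) => Boolean(i & (1 << j))));
  }
  return rows;
}

// Submit requires valid form AND permission → 4 rows, 2 distinct outcomes.
const rows = decisionTable(["formValid", "hasPermission"]);
const outcomes = rows.map(([formValid, hasPermission]) =>
  formValid && hasPermission ? "succeeds" : "disabled"
);
// Three rows share the "disabled" outcome; only one row succeeds.
```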
---

## 4. State Transition

**Goal:** Verify every valid transition AND block invalid ones.

**How to apply:** Extract state diagram from `spec.md` → one scenario per valid transition + key invalid transitions.

**Example** — Order lifecycle (Draft→Pending→Approved→Completed):
- `@high` Valid: Draft → Pending, Pending → Approved, Approved → Completed
- `@normal` Invalid: Completed → Draft (blocked), Pending → Completed (skip approval)

**test-data:** Use named state keys (`order_in_draft`, `order_in_pending`).
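The lifecycle above can be sketched as a transition map where anything not listed is a blocked transition (hypothetical helper, not part of sungen):

```typescript
// Hypothetical sketch (not sungen API): the order lifecycle as an adjacency
// map; canTransition answers "is this edge in the state diagram?".
const transitions: Record<string, string[]> = {
  Draft: ["Pending"],
  Pending: ["Approved"],
  Approved: ["Completed"],
  Completed: [], // terminal state: no outgoing transitions
};

function canTransition(from: string, to: string): boolean {
  return (transitions[from] ?? []).includes(to);
}

canTransition("Draft", "Pending");     // valid
canTransition("Completed", "Draft");   // blocked
canTransition("Pending", "Completed"); // blocked (skips approval)
```

One scenario per `true` edge plus one per key `false` edge gives the coverage described above.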