@sun-asterisk/sungen 2.5.1 → 2.6.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (170)
  1. package/dist/cli/commands/add-flow.d.ts +3 -0
  2. package/dist/cli/commands/add-flow.d.ts.map +1 -0
  3. package/dist/cli/commands/add-flow.js +27 -0
  4. package/dist/cli/commands/add-flow.js.map +1 -0
  5. package/dist/cli/commands/delivery.d.ts.map +1 -1
  6. package/dist/cli/commands/delivery.js +95 -60
  7. package/dist/cli/commands/delivery.js.map +1 -1
  8. package/dist/cli/commands/generate.d.ts.map +1 -1
  9. package/dist/cli/commands/generate.js +38 -6
  10. package/dist/cli/commands/generate.js.map +1 -1
  11. package/dist/cli/index.js +3 -1
  12. package/dist/cli/index.js.map +1 -1
  13. package/dist/generators/test-generator/adapters/adapter-interface.d.ts +1 -0
  14. package/dist/generators/test-generator/adapters/adapter-interface.d.ts.map +1 -1
  15. package/dist/generators/test-generator/adapters/playwright/playwright-adapter.d.ts +1 -0
  16. package/dist/generators/test-generator/adapters/playwright/playwright-adapter.d.ts.map +1 -1
  17. package/dist/generators/test-generator/adapters/playwright/playwright-adapter.js.map +1 -1
  18. package/dist/generators/test-generator/adapters/playwright/templates/imports.hbs +2 -2
  19. package/dist/generators/test-generator/adapters/playwright/templates/steps/actions/alert-accept-action.hbs +1 -1
  20. package/dist/generators/test-generator/adapters/playwright/templates/steps/actions/alert-dismiss-action.hbs +1 -1
  21. package/dist/generators/test-generator/adapters/playwright/templates/steps/actions/alert-fill-action.hbs +1 -1
  22. package/dist/generators/test-generator/adapters/playwright/templates/steps/assertions/alert-text-assertion.hbs +2 -2
  23. package/dist/generators/test-generator/adapters/playwright/templates/steps/assertions/column-cell-assertion.hbs +1 -0
  24. package/dist/generators/test-generator/adapters/playwright/templates/steps/navigation/navigation.hbs +2 -1
  25. package/dist/generators/test-generator/adapters/playwright/templates/steps/navigation/route-assertion.hbs +1 -2
  26. package/dist/generators/test-generator/adapters/playwright/templates/steps/navigation/wait-timeout.hbs +1 -1
  27. package/dist/generators/test-generator/code-generator.d.ts +1 -0
  28. package/dist/generators/test-generator/code-generator.d.ts.map +1 -1
  29. package/dist/generators/test-generator/code-generator.js +30 -12
  30. package/dist/generators/test-generator/code-generator.js.map +1 -1
  31. package/dist/generators/test-generator/step-mapper.d.ts +4 -0
  32. package/dist/generators/test-generator/step-mapper.d.ts.map +1 -1
  33. package/dist/generators/test-generator/step-mapper.js +7 -0
  34. package/dist/generators/test-generator/step-mapper.js.map +1 -1
  35. package/dist/generators/test-generator/template-engine.d.ts +1 -0
  36. package/dist/generators/test-generator/template-engine.d.ts.map +1 -1
  37. package/dist/generators/test-generator/template-engine.js +1 -1
  38. package/dist/generators/test-generator/template-engine.js.map +1 -1
  39. package/dist/generators/test-generator/utils/data-resolver.d.ts +3 -20
  40. package/dist/generators/test-generator/utils/data-resolver.d.ts.map +1 -1
  41. package/dist/generators/test-generator/utils/data-resolver.js +23 -66
  42. package/dist/generators/test-generator/utils/data-resolver.js.map +1 -1
  43. package/dist/generators/test-generator/utils/selector-resolver.d.ts +2 -6
  44. package/dist/generators/test-generator/utils/selector-resolver.d.ts.map +1 -1
  45. package/dist/generators/test-generator/utils/selector-resolver.js +18 -80
  46. package/dist/generators/test-generator/utils/selector-resolver.js.map +1 -1
  47. package/dist/orchestrator/ai-rules-updater.d.ts.map +1 -1
  48. package/dist/orchestrator/ai-rules-updater.js +4 -0
  49. package/dist/orchestrator/ai-rules-updater.js.map +1 -1
  50. package/dist/orchestrator/flow-manager.d.ts +22 -0
  51. package/dist/orchestrator/flow-manager.d.ts.map +1 -0
  52. package/dist/orchestrator/flow-manager.js +251 -0
  53. package/dist/orchestrator/flow-manager.js.map +1 -0
  54. package/dist/orchestrator/project-initializer.d.ts.map +1 -1
  55. package/dist/orchestrator/project-initializer.js +1 -0
  56. package/dist/orchestrator/project-initializer.js.map +1 -1
  57. package/dist/orchestrator/templates/ai-instructions/claude-cmd-add-flow.md +88 -0
  58. package/dist/orchestrator/templates/ai-instructions/claude-cmd-create-test.md +11 -8
  59. package/dist/orchestrator/templates/ai-instructions/claude-cmd-review.md +8 -6
  60. package/dist/orchestrator/templates/ai-instructions/claude-cmd-run-test.md +15 -11
  61. package/dist/orchestrator/templates/ai-instructions/claude-config.md +41 -10
  62. package/dist/orchestrator/templates/ai-instructions/claude-skill-capture-live.md +12 -0
  63. package/dist/orchestrator/templates/ai-instructions/claude-skill-delivery.md +19 -18
  64. package/dist/orchestrator/templates/ai-instructions/claude-skill-error-mapping.md +12 -0
  65. package/dist/orchestrator/templates/ai-instructions/claude-skill-gherkin-syntax.md +52 -0
  66. package/dist/orchestrator/templates/ai-instructions/claude-skill-selector-fix.md +31 -3
  67. package/dist/orchestrator/templates/ai-instructions/claude-skill-selector-keys.md +45 -0
  68. package/dist/orchestrator/templates/ai-instructions/claude-skill-tc-generation.md +69 -0
  69. package/dist/orchestrator/templates/ai-instructions/claude-skill-tc-review.md +30 -0
  70. package/dist/orchestrator/templates/ai-instructions/copilot-cmd-add-flow.md +86 -0
  71. package/dist/orchestrator/templates/ai-instructions/copilot-cmd-create-test.md +13 -10
  72. package/dist/orchestrator/templates/ai-instructions/copilot-cmd-delivery.md +18 -17
  73. package/dist/orchestrator/templates/ai-instructions/copilot-cmd-review.md +9 -7
  74. package/dist/orchestrator/templates/ai-instructions/copilot-cmd-run-test.md +23 -19
  75. package/dist/orchestrator/templates/ai-instructions/copilot-config.md +40 -9
  76. package/dist/orchestrator/templates/ai-instructions/github-skill-sungen-capture-live.md +12 -0
  77. package/dist/orchestrator/templates/ai-instructions/github-skill-sungen-delivery.md +19 -18
  78. package/dist/orchestrator/templates/ai-instructions/github-skill-sungen-error-mapping.md +12 -0
  79. package/dist/orchestrator/templates/ai-instructions/github-skill-sungen-gherkin-syntax.md +52 -0
  80. package/dist/orchestrator/templates/ai-instructions/github-skill-sungen-selector-fix.md +31 -3
  81. package/dist/orchestrator/templates/ai-instructions/github-skill-sungen-selector-keys.md +45 -0
  82. package/dist/orchestrator/templates/ai-instructions/github-skill-sungen-tc-generation.md +70 -0
  83. package/dist/orchestrator/templates/ai-instructions/github-skill-sungen-tc-review.md +30 -0
  84. package/dist/orchestrator/templates/playwright.config.d.ts.map +1 -1
  85. package/dist/orchestrator/templates/playwright.config.js +3 -1
  86. package/dist/orchestrator/templates/playwright.config.js.map +1 -1
  87. package/dist/orchestrator/templates/playwright.config.ts +4 -1
  88. package/dist/orchestrator/templates/specs-base.d.ts.map +1 -1
  89. package/dist/orchestrator/templates/specs-base.js +11 -56
  90. package/dist/orchestrator/templates/specs-base.js.map +1 -1
  91. package/dist/orchestrator/templates/specs-base.ts +11 -61
  92. package/dist/orchestrator/templates/specs-test-data.d.ts +3 -1
  93. package/dist/orchestrator/templates/specs-test-data.d.ts.map +1 -1
  94. package/dist/orchestrator/templates/specs-test-data.js +10 -2
  95. package/dist/orchestrator/templates/specs-test-data.js.map +1 -1
  96. package/dist/orchestrator/templates/specs-test-data.ts +9 -2
  97. package/package.json +1 -1
  98. package/src/cli/commands/add-flow.ts +25 -0
  99. package/src/cli/commands/delivery.ts +109 -58
  100. package/src/cli/commands/generate.ts +43 -6
  101. package/src/cli/index.ts +3 -1
  102. package/src/generators/test-generator/adapters/adapter-interface.ts +1 -1
  103. package/src/generators/test-generator/adapters/playwright/playwright-adapter.ts +1 -1
  104. package/src/generators/test-generator/adapters/playwright/templates/imports.hbs +2 -2
  105. package/src/generators/test-generator/adapters/playwright/templates/steps/actions/alert-accept-action.hbs +1 -1
  106. package/src/generators/test-generator/adapters/playwright/templates/steps/actions/alert-dismiss-action.hbs +1 -1
  107. package/src/generators/test-generator/adapters/playwright/templates/steps/actions/alert-fill-action.hbs +1 -1
  108. package/src/generators/test-generator/adapters/playwright/templates/steps/assertions/alert-text-assertion.hbs +2 -2
  109. package/src/generators/test-generator/adapters/playwright/templates/steps/assertions/column-cell-assertion.hbs +1 -0
  110. package/src/generators/test-generator/adapters/playwright/templates/steps/navigation/navigation.hbs +2 -1
  111. package/src/generators/test-generator/adapters/playwright/templates/steps/navigation/route-assertion.hbs +1 -2
  112. package/src/generators/test-generator/adapters/playwright/templates/steps/navigation/wait-timeout.hbs +1 -1
  113. package/src/generators/test-generator/code-generator.ts +32 -14
  114. package/src/generators/test-generator/step-mapper.ts +8 -0
  115. package/src/generators/test-generator/template-engine.ts +2 -2
  116. package/src/generators/test-generator/utils/data-resolver.ts +25 -77
  117. package/src/generators/test-generator/utils/selector-resolver.ts +23 -109
  118. package/src/orchestrator/ai-rules-updater.ts +5 -0
  119. package/src/orchestrator/flow-manager.ts +243 -0
  120. package/src/orchestrator/project-initializer.ts +1 -0
  121. package/src/orchestrator/templates/ai-instructions/claude-cmd-add-flow.md +88 -0
  122. package/src/orchestrator/templates/ai-instructions/claude-cmd-create-test.md +11 -8
  123. package/src/orchestrator/templates/ai-instructions/claude-cmd-review.md +8 -6
  124. package/src/orchestrator/templates/ai-instructions/claude-cmd-run-test.md +15 -11
  125. package/src/orchestrator/templates/ai-instructions/claude-config.md +41 -10
  126. package/src/orchestrator/templates/ai-instructions/claude-skill-capture-live.md +12 -0
  127. package/src/orchestrator/templates/ai-instructions/claude-skill-delivery.md +19 -18
  128. package/src/orchestrator/templates/ai-instructions/claude-skill-error-mapping.md +12 -0
  129. package/src/orchestrator/templates/ai-instructions/claude-skill-gherkin-syntax.md +52 -0
  130. package/src/orchestrator/templates/ai-instructions/claude-skill-selector-fix.md +31 -3
  131. package/src/orchestrator/templates/ai-instructions/claude-skill-selector-keys.md +45 -0
  132. package/src/orchestrator/templates/ai-instructions/claude-skill-tc-generation.md +69 -0
  133. package/src/orchestrator/templates/ai-instructions/claude-skill-tc-review.md +30 -0
  134. package/src/orchestrator/templates/ai-instructions/copilot-cmd-add-flow.md +86 -0
  135. package/src/orchestrator/templates/ai-instructions/copilot-cmd-create-test.md +13 -10
  136. package/src/orchestrator/templates/ai-instructions/copilot-cmd-delivery.md +18 -17
  137. package/src/orchestrator/templates/ai-instructions/copilot-cmd-review.md +9 -7
  138. package/src/orchestrator/templates/ai-instructions/copilot-cmd-run-test.md +23 -19
  139. package/src/orchestrator/templates/ai-instructions/copilot-config.md +40 -9
  140. package/src/orchestrator/templates/ai-instructions/github-skill-sungen-capture-live.md +12 -0
  141. package/src/orchestrator/templates/ai-instructions/github-skill-sungen-delivery.md +19 -18
  142. package/src/orchestrator/templates/ai-instructions/github-skill-sungen-error-mapping.md +12 -0
  143. package/src/orchestrator/templates/ai-instructions/github-skill-sungen-gherkin-syntax.md +52 -0
  144. package/src/orchestrator/templates/ai-instructions/github-skill-sungen-selector-fix.md +31 -3
  145. package/src/orchestrator/templates/ai-instructions/github-skill-sungen-selector-keys.md +45 -0
  146. package/src/orchestrator/templates/ai-instructions/github-skill-sungen-tc-generation.md +70 -0
  147. package/src/orchestrator/templates/ai-instructions/github-skill-sungen-tc-review.md +30 -0
  148. package/src/orchestrator/templates/playwright.config.ts +4 -1
  149. package/src/orchestrator/templates/specs-base.ts +11 -61
  150. package/src/orchestrator/templates/specs-test-data.ts +9 -2
  151. package/dist/utils/feature-finder.d.ts +0 -9
  152. package/dist/utils/feature-finder.d.ts.map +0 -1
  153. package/dist/utils/feature-finder.js +0 -67
  154. package/dist/utils/feature-finder.js.map +0 -1
  155. package/dist/utils/screen-paths.d.ts +0 -10
  156. package/dist/utils/screen-paths.d.ts.map +0 -1
  157. package/dist/utils/screen-paths.js +0 -73
  158. package/dist/utils/screen-paths.js.map +0 -1
  159. package/dist/utils/selector-loader.d.ts +0 -6
  160. package/dist/utils/selector-loader.d.ts.map +0 -1
  161. package/dist/utils/selector-loader.js +0 -20
  162. package/dist/utils/selector-loader.js.map +0 -1
  163. package/dist/utils/test-data-loader.d.ts +0 -6
  164. package/dist/utils/test-data-loader.d.ts.map +0 -1
  165. package/dist/utils/test-data-loader.js +0 -20
  166. package/dist/utils/test-data-loader.js.map +0 -1
  167. package/src/utils/feature-finder.ts +0 -33
  168. package/src/utils/screen-paths.ts +0 -37
  169. package/src/utils/selector-loader.ts +0 -23
  170. package/src/utils/test-data-loader.ts +0 -23
@@ -6,7 +6,7 @@ user-invocable: false

  ## Purpose

- Export test cases from Sungen screens to a standardized CSV file (format BM-2-901-13) for QA delivery.
+ Export test cases from Sungen screens and flows to a standardized CSV file (format BM-2-901-13) for QA delivery.

  **This skill delegates all heavy work to the `sungen delivery` CLI.** The CLI is the single source of truth for parsing logic — do NOT re-parse files in AI. Your role is only to:

@@ -19,18 +19,19 @@ Export test cases from Sungen screens to a standardized CSV file (format BM-2-90
  ## Architecture

  ```
- User → /sungen:delivery [screen...]
+ User → /sungen:delivery [name...]


  sungen delivery CLI (deterministic — no AI tokens)
- ├─ Scope detection (no-arg = all screens)
- ├─ Pre-flight source checks per screen
+ ├─ Scope detection (no-arg = all screens + flows)
+ ├─ Auto-detect: qa/flows/<name>/ → flow, qa/screens/<name>/ → screen
+ ├─ Pre-flight source checks per target
  ├─ Parse .feature (metadata)
  ├─ Parse .spec.ts (resolved Playwright code — source of truth)
  ├─ Parse test-data.yaml (resolve {{vars}})
  ├─ Parse test-results/results.json (match test titles)
  ├─ Merge sources + generate CSV rows
- └─ Write qa/deliverables/<screen>-testcases.csv
+ └─ Write qa/deliverables/<name>-testcases.csv
  ```

  Source modules: `src/exporters/*.ts`
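The scope-detection step in the architecture above can be sketched as a small pure function. This is an illustrative helper only — `detectTargetKind` and the injected `exists` predicate are assumptions, not the package's actual API:

```typescript
// A bare <name> resolves to a flow when qa/flows/<name>/ exists,
// otherwise it is treated as a screen (the auto-detect rule above).
type TargetKind = "flow" | "screen";

function detectTargetKind(name: string, exists: (dir: string) => boolean): TargetKind {
  return exists(`qa/flows/${name}`) ? "flow" : "screen";
}
```

The predicate is injected so the rule can be exercised without touching the filesystem; the real CLI presumably checks the directories directly.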
@@ -39,18 +40,18 @@ Source modules: `src/exporters/*.ts`

  ## Required sources (CLI pre-flight checks these)

- | # | Source | Path | Created by |
- |---|--------|------|------------|
- | 1 | Feature file | `qa/screens/<screen>/features/<screen>.feature` | `/sungen:add-screen` + `/sungen:create-test` |
- | 2 | Test data | `qa/screens/<screen>/test-data/<screen>.yaml` | `/sungen:create-test` |
- | 3 | Selectors | `qa/screens/<screen>/selectors/<screen>.yaml` | `/sungen:run-test` |
- | 4 | Compiled spec | `specs/generated/<screen>/<screen>.spec.ts` | `sungen generate` (during `/sungen:run-test`) |
- | 5 | Test results | `specs/generated/<screen>/<screen>-test-result.json` (per-screen) or `test-results/results.json` (global fallback) | `/sungen:run-test` |
+ | # | Source | Screen path | Flow path | Created by |
+ |---|--------|-------------|-----------|------------|
+ | 1 | Feature file | `qa/screens/<name>/features/<name>.feature` | `qa/flows/<name>/features/<name>.feature` | `add-screen`/`add-flow` + `create-test` |
+ | 2 | Test data | `qa/screens/<name>/test-data/<name>.yaml` | `qa/flows/<name>/test-data/<name>.yaml` | `create-test` |
+ | 3 | Selectors | `qa/screens/<name>/selectors/<name>.yaml` | `qa/flows/<name>/selectors/<name>.yaml` | `run-test` |
+ | 4 | Compiled spec | `specs/generated/<name>/<name>.spec.ts` | `specs/generated/flows/<name>/<name>.spec.ts` | `sungen generate` |
+ | 5 | Test results | `specs/generated/<name>/<name>-test-result.json` or `test-results/results.json` | `specs/generated/flows/<name>/<name>-test-result.json` or global fallback | `run-test` |

  **Sources 1-4 are blocking** — CLI aborts if any is missing.
  **Source 5 is optional** — CSV is still generated but Test Result/Date/Executor/Env columns are empty (all tests show as Pending).

- The CLI reads the **per-screen result file first** (co-located with `.spec.ts`), then falls back to the global `test-results/results.json`. Per-screen is preferred because the global file gets OVERWRITTEN each time Playwright runs, losing results from earlier screens.
+ The CLI reads the **per-target result file first** (co-located with `.spec.ts`), then falls back to the global `test-results/results.json`. Per-target is preferred because the global file gets OVERWRITTEN each time Playwright runs, losing results from earlier targets.

  ---

@@ -87,17 +88,17 @@ The CLI reads the **per-screen result file first** (co-located with `.spec.ts`),
  ## CLI command reference

  ```bash
- # Export all screens
+ # Export all screens and flows
  sungen delivery

- # Export specific screens
- sungen delivery kudos awards
+ # Export specific targets (auto-detects screen vs flow)
+ sungen delivery kudos awards nomination-flow

  # Skip pre-flight (CI only)
  sungen delivery --skip-preflight

- # Skip screens with blocking misses
+ # Skip targets with blocking misses
  sungen delivery --continue-on-missing
  ```

- Output: `qa/deliverables/<screen>-testcases.csv` (UTF-8 with BOM)
+ Output: `qa/deliverables/<name>-testcases.csv` (UTF-8 with BOM)
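The per-target-first result lookup described earlier (source 5 in the table) can be sketched like this. `resultsPathFor` is an illustrative helper under the stated path conventions, not the package's actual code; `exists` is injected so the lookup stays testable:

```typescript
// Prefer the result file co-located with the .spec.ts; fall back to the
// global results.json, which is overwritten on every Playwright run.
function resultsPathFor(
  specDir: string,
  name: string,
  exists: (p: string) => boolean,
): string | null {
  const perTarget = `${specDir}/${name}-test-result.json`; // per-target, preferred
  if (exists(perTarget)) return perTarget;
  const globalResults = "test-results/results.json"; // global fallback
  return exists(globalResults) ? globalResults : null; // null → Pending columns in CSV
}
```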
@@ -101,6 +101,18 @@ If `toHaveText` fails on an input → the Gherkin step has wrong target type. Fi

  ---

+ ## Flow-Specific Errors
+
+ | Error | Diagnosis | Fix |
+ |---|---|---|
+ | Navigation timeout between screens | Cross-screen transition takes too long or URL mismatch | Add explicit `wait for page` step between screen transitions in `.feature`. Verify target URL path |
+ | Selector `"screen:element"` not found | Namespace key missing or wrong format | Ensure colon-namespaced key in `selectors.yaml` is **quoted**: `"login:submit":`. Check screen prefix matches `[Screen:Element]` ref in Gherkin |
+ | Test data `screen.key` undefined | Phase namespace mismatch | Verify `test-data.yaml` uses dot-namespaced keys: `login.email`, `submission.nominee`. Keys must match `{{screen.key}}` refs in `.feature` |
+ | State lost between screens | Auth/session expired during multi-screen flow | Ensure all screens in the flow share the same `@auth:role` tag. Check if the app invalidates sessions on navigation |
+ | Duplicate selector key across screens | Two screens use same element name without namespace | Always use `[Screen:Element]` format in flow `.feature`. Selectors must use `"screen:element":` quoted keys |
+
+ ---
+
  ## Performance & Infrastructure Errors → Fix in `specs/base.ts`

  All generated `.spec.ts` import from `specs/base.ts` — shared context caching, navigation, overlay cleanup. AI **can and should** tune `base.ts` to match the project.
@@ -160,6 +160,57 @@ Options: `nth` `exact` `scope` `match` `variant` `frame` `contenteditable` `colu
  | `@afterEach` | Hook: runs after each test → `test.afterEach()` (custom cleanup) |
  | `@afterAll` | Hook: runs once after all tests → `test.afterAll()` |

+ ### `@flow` tag (E2E cross-screen testing)
+
+ `@flow` marks a feature as a **flow** — an E2E journey spanning multiple screens. Flows live in `qa/flows/<name>/` with their own selectors, test-data, and requirements.
+
+ **Key differences from screen tests:**
+
+ | Aspect | Screen (`qa/screens/`) | Flow (`qa/flows/`) |
+ |---|---|---|
+ | Scope | Single page | Multiple pages |
+ | Selectors | `[Element]` → own YAML | `[Screen:Element]` → own YAML (namespaced) |
+ | Test data | `{{variable}}` | `{{phase.variable}}` (namespaced by phase) |
+ | Tag | `@auto` / `@smoke` etc. | `@flow` (required at feature level) |
+
+ **Selector namespace format:** `[Screen:Element]` where colon separates screen prefix from element name. The YAML key is `"screen:element"` (quoted, lowercase).
+
+ ```gherkin
+ # Feature file
+ When User fill [Login:Email] field with {{login.email}}
+ And User click [Login:Submit] button
+ Then User see [Dashboard] page
+ When User click [Dashboard:Awards] link
+ ```
+
+ ```yaml
+ # selectors.yaml — keys are namespaced, quoted due to colon
+ "login:email":
+   type: 'testid'
+   value: 'email-input'
+
+ "login:submit":
+   type: 'role'
+   value: 'button'
+   name: 'Login'
+
+ dashboard:
+   type: 'page'
+   value: '/dashboard'
+
+ "dashboard:awards":
+   type: 'role'
+   value: 'link'
+   name: 'Awards'
+ ```
+
+ **Flow structure:**
+ - `Background:` — set starting page only (e.g., `Given User is on [Login] page`)
+ - Each `Scenario:` — one phase/step of the flow (login, navigate, submit, etc.)
+ - Page navigation between scenarios uses `[Screen] page` references
+
+ **CLI:** `sungen add-flow --flow <name>`, `sungen generate --flow <name>`, `sungen generate --all` (includes flows)
+
  ### @extend behavior

  - Tool executes **only Given→When** of `@steps` scenario (skips Then)
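The ref-to-key rule in the hunk above (strip brackets, lowercase, keep the colon namespace) can be sketched as a one-liner. `yamlKeyFor` is a hypothetical name for illustration, not the package's API:

```typescript
// "[Login:Email]" -> "login:email"; "[Dashboard]" -> "dashboard".
// Keys containing a colon must then be written quoted in YAML: "login:email":
function yamlKeyFor(ref: string): string {
  return ref.replace(/^\[/, "").replace(/\]$/, "").toLowerCase();
}
```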
@@ -185,6 +236,7 @@ Options: `nth` `exact` `scope` `match` `variant` `frame` `contenteditable` `colu
  | Missing target type | `fill [email] with {{v}}` | `fill [email] field with {{v}}` |
  | Background with scope | `Background: ... And User is on [X] dialog` | Use `@steps` + `@extend` for scope-dependent flows |
  | `is on` after When | `When ... And User is on [X] dialog` | `And User see [X] dialog` or separate Given |
+ | Literal URL navigate | `User navigate to "/dashboard"` | `User is on [Dashboard] page` (add page selector in `selectors.yaml`) |

  ## Background vs @steps/@extend

@@ -24,6 +24,28 @@ Run tests in priority waves — catch fundamental issues early, fix critical pat

  If existing selectors already cover the feature file, **skip Phase 0** and go straight to compile + Phase 1.

+ ### Flow Mode: Screen Selector Reference
+
+ When running Phase 0 for a **flow** (`qa/flows/<name>/`), check existing screen selectors first before snapshotting live pages. Screen selectors are already verified and proven — reuse them to save time and reduce errors.
+
+ **Steps (before the standard Phase 0 steps):**
+
+ 1. **Parse screen references**: read the `.feature` file for `[Screen:Element]` references. Group by screen name (e.g., `Login`, `Awards`, `Dashboard`).
+ 2. **For each referenced screen**, check `qa/screens/<screen>/selectors/<screen>.yaml`:
+    - **If exists** → copy matching entries to the flow's `selectors.yaml`, remapping keys to namespace format:
+      - Screen key `submit` with screen `login` → flow key `"login:submit"`
+      - Screen key `email-field` with screen `login` → flow key `"login:email-field"`
+      - Preserve the full selector definition (type, value, name, etc.)
+      - Mark these entries as **verified** (no `@needs-live-verify` comment needed)
+    - **If not found** → add this screen to the "needs live snapshot" list
+ 3. **Elements not found in any screen selector** → also added to the "needs live snapshot" list
+ 4. **If "needs live snapshot" list is empty** → Phase 0 screen-reference covered everything, skip to compile
+ 5. **If "needs live snapshot" list is non-empty** → continue with the standard Phase 0 steps below, but only generate selectors for the missing elements (don't re-snapshot elements already copied from screens)
+
+ **Merge rule**: screen-referenced entries take priority over provisional (Figma-sourced) entries. If an element was previously generated from Figma with `@needs-live-verify`, the screen-verified entry replaces it.
+
+ **Important**: flow selectors remain private — they live in the flow's own YAML file. This is just initialization from screen data, not a runtime dependency.
+
  ### Steps

  1. **Confirm with the user** via `AskUserQuestion`: *"Generate selectors from the live page via Playwright MCP now?"* — offer **Yes, scan live page** / **Skip (use existing selectors.yaml)** / **Cancel**.
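The remapping rule in step 2 above can be sketched as a pure function. `remapScreenSelectors` and `SelectorDef` are illustrative names under the conventions shown, not the package's actual types:

```typescript
// Entries copied from a screen's selectors.yaml get the lowercased screen
// name prefixed as a colon namespace: submit (screen Login) -> "login:submit".
type SelectorDef = Record<string, string | number | boolean>;

function remapScreenSelectors(
  screen: string,
  entries: Record<string, SelectorDef>,
): Record<string, SelectorDef> {
  const out: Record<string, SelectorDef> = {};
  for (const [key, def] of Object.entries(entries)) {
    out[`${screen.toLowerCase()}:${key}`] = def; // definition preserved verbatim
  }
  return out;
}
```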
@@ -39,9 +61,10 @@ If existing selectors already cover the feature file, **skip Phase 0** and go st
  - Selector priority: follow the table in **Diagnosis & Fix § Step 3** (`testid` > `role`+name > `placeholder` > `label` > `locator` > `text`).
  - Copy names **character-for-character** from the snapshot. Never infer from the Gherkin label.
  - If an element is auto-inferable per `sungen-selector-keys` § Auto-Infer, **omit it** from YAML — keep the file minimal.
- 7. **Merge, don't overwrite**: preserve the page selector and any user-authored entries in `selectors.yaml`. Only add missing keys.
- 8. **Show summary + confirm**: list the keys that will be added, ask the user to approve, then write the file.
- 9. **Compile**: `sungen generate --screen <screen>` then proceed to Phase 1.
+ 7. **Substring ambiguity check**: for each `role` + `name` selector, check if any other element in the snapshot has a name that **contains** this name as a substring (e.g., `"Đăng ký"` vs `"Đăng ký bằng Google"`). If yes → add `exact: true` to prevent strict mode violation at runtime.
+ 8. **Merge, don't overwrite**: preserve the page selector and any user-authored entries in `selectors.yaml`. Only add missing keys.
+ 9. **Show summary + confirm**: list the keys that will be added, ask the user to approve, then write the file.
+ 10. **Compile**: **Screen**: `sungen generate --screen <screen>`. **Flow**: `sungen generate --flow <flow>`. Then proceed to Phase 1.

  ### Common Phase 0 pitfalls

@@ -204,6 +227,7 @@ Array.from(document.querySelectorAll('[data-testid]'))
  Common fixes:
  - Name mismatch → copy exact name from snapshot
  - Multiple matches → add `nth` or `exact: true`
+ - Substring ambiguity (e.g., `"Submit"` matches `"Submit"` and `"Submit & Continue"`) → add `exact: true`
  - No accessible name → use `testid` or `locator` (CSS)
  - Element in iframe → add `frame` field
  - Dynamic content → use `testid` or structural `role` + `nth`
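The substring-ambiguity check added in this hunk reduces to a simple predicate. `needsExact` is a hypothetical helper for illustration, assuming the accessible names have already been collected from the snapshot:

```typescript
// A role+name selector needs exact: true when some OTHER accessible name in
// the snapshot contains this name as a substring (Playwright's non-exact
// getByRole name matching is substring-based, so both would match).
function needsExact(name: string, snapshotNames: string[]): boolean {
  return snapshotNames.some((other) => other !== name && other.includes(name));
}
```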
@@ -212,7 +236,11 @@ Common fixes:

  Always recompile before re-running:
  ```bash
+ # Screen
  sungen generate --screen <screen>
+
+ # Flow
+ sungen generate --flow <flow>
  ```

  Then re-run only the current phase's failing tests, not all tests.
@@ -27,6 +27,51 @@ Copy the text from `[Reference]` as-is, then lowercase. Unicode characters (Viet
  4. **Keep all Unicode characters as-is** (Vietnamese diacritics, Japanese, etc.)
  5. **Keys use spaces** (not dots) as word separators

+ ## Flow Namespaced Keys
+
+ In `@flow` features, selectors are namespaced by screen using colon: `[Screen:Element]` → YAML key `"screen:element"` (quoted).
+
+ ```
+ [Login:Email] → "login:email"
+ [Login:Submit] → "login:submit"
+ [Dashboard:Awards] → "dashboard:awards"
+ [Awards:Submit] → "awards:submit"
+ ```
+
+ **Rules:**
+ 1. Same lowercase + Unicode rules as standard keys
+ 2. Colon separates screen prefix from element name
+ 3. **YAML keys must be quoted** because of the colon: `"login:email":`
+ 4. Page references don't need prefix: `[Login]` → `login:` (page type)
+ 5. Prevents duplicate names across screens (e.g., `"login:submit"` vs `"awards:submit"`)
+
+ ```yaml
+ # Flow selectors — each screen section namespaced
+ login:
+   type: 'page'
+   value: '/login'
+
+ "login:email":
+   type: 'testid'
+   value: 'email-input'
+
+ "login:submit":
+   type: 'role'
+   value: 'button'
+   name: 'Login'
+
+ awards:
+   type: 'page'
+   value: '/awards'
+
+ "awards:submit":
+   type: 'role'
+   value: 'button'
+   name: 'Submit Award'
+ ```
+
+ **Type and nth suffixes still apply:** `"login:submit--button"`, `"awards:item--3"`
+
  ## Type-Suffixed Keys

  When the same label is used for different element types, add `--type` suffix:
@@ -237,4 +237,73 @@ valid_email: admin@staging.example.com
237
237
  valid_password: StagingPass456
238
238
  ```
239
239
 
240
+ ## Flow Test Generation
241
+
242
+ When generating tests for a **flow** (`qa/flows/<name>/`), adapt the strategy:
243
+
244
+ ### Structure
245
+
246
+ - **Background** — starting page only: `Given User is on [Login] page`
247
+ - **Scenarios** — each phase of the E2E journey as a separate scenario
248
+ - **Selector refs** — use `[Screen:Element]` namespace format (see `sungen-gherkin-syntax`)
249
+ - **Test data** — namespace by phase: `login.email`, `submission.nominee`
+ - **Feature tag** — `@flow` required at feature level
+
+ ### Flow vs Screen Differences
+
+ | Aspect | Screen | Flow |
+ |---|---|---|
+ | Section focus | UI patterns per section | Journey phases across screens |
+ | Viewpoints | VP-UI, VP-VAL, VP-LOGIC, VP-SEC per section | VP-LOGIC (flow transitions), VP-SEC (auth persistence), VP-VAL (cross-screen data) |
+ | Tier 1 focus | Happy path + required validation per section | Happy path through entire flow + auth + key error recovery |
+ | Background | Navigate to screen | Navigate to starting page |
+
+ ### Flow-specific scenarios to generate
+
+ | Category | Examples |
+ |---|---|
+ | **Happy path** | Complete flow end-to-end with valid data |
+ | **Auth persistence** | Auth state maintained across screen transitions |
+ | **Error recovery** | Invalid input mid-flow → fix → continue |
+ | **Incomplete flow** | User abandons at each phase → state cleanup |
+ | **Cross-screen data** | Data entered on screen A visible on screen B |
+
+ ### Output Format (Flow)
+
+ ```gherkin
+ @flow @auth:user
+ Feature: Award Submission Flow
+
+   Background:
+     Given User is on [Login] page
+
+   @critical
+   Scenario: User logs in successfully
+     When User fill [Login:Email] field with {{login.email}}
+     And User fill [Login:Password] field with {{login.password}}
+     And User click [Login:Submit] button
+     Then User see [Dashboard] page
+
+   @critical
+   Scenario: User navigates to awards and submits
+     When User click [Dashboard:Awards] link
+     Then User see [Awards] page
+     When User fill [Awards:Nominee] field with {{submission.nominee}}
+     And User click [Awards:Submit] button
+     Then User see {{success_message}} message
+ ```
+
+ **Test data** — `qa/flows/<name>/test-data/<name>.yaml`, namespaced by phase:
+
+ ```yaml
+ login:
+   email: "admin@example.com"
+   password: "secret123"
+ submission:
+   nominee: "John Doe"
+ success_message: "Award submitted successfully"
+ ```
+
+ **Environment overrides** work the same: `<name>.<env>.yaml` merged at runtime via `SUNGEN_ENV`.
+
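The phase-namespaced base file plus an env override combine naturally as a recursive dictionary merge. A minimal sketch of that behavior (illustrative only — the CLI's actual merge semantics may differ):

```python
# Sketch: merge <name>.<env>.yaml over the base test-data file.
# Assumption for illustration: override values win key-by-key, recursing into phases.
def deep_merge(base: dict, override: dict) -> dict:
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)  # recurse into a phase
        else:
            merged[key] = value  # scalar override wins
    return merged

base = {"login": {"email": "admin@example.com", "password": "secret123"}}
staging = {"login": {"email": "staging-admin@example.com"}}
print(deep_merge(base, staging))
# → {'login': {'email': 'staging-admin@example.com', 'password': 'secret123'}}
```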
  **Do NOT generate**: `selectors.yaml` (created during run-test), Playwright code (sungen compiles).
@@ -89,6 +89,36 @@ Do NOT mark `@manual` when data is visible in snapshot or documented in spec —
 
 ---
 
+ ## Flow Review Additions
+
+ When reviewing a `@flow` feature (`qa/flows/<name>/`), apply standard scoring PLUS these flow-specific checks:
+
+ ### Syntax — additional checks
+ - `[Screen:Element]` format used consistently (not mixing bare `[Element]` refs)
+ - YAML keys quoted with colon: `"login:submit":` not `login:submit:`
+ - `@flow` tag present at feature level
+
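For example, in a flow selectors file the namespaced key should be quoted so the colon stays inside the key (the selector value below is a placeholder):

```yaml
"login:submit": "button[type=submit]"   # correct — quoted key, colon preserved
# login:submit: "..."                   # flag in review — unquoted key is ambiguous
```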
+ ### Coverage — additional dimensions
+ | Dimension | Pts (from existing 40) | What to check |
+ |---|---|---|
+ | Screen transitions | (part of State transitions) | Each screen-to-screen navigation tested |
+ | Auth persistence | (part of Happy paths) | Auth state maintained across transitions |
+ | Error recovery mid-flow | (part of Negative cases) | Invalid input at each phase → fix → continue |
+ | Cross-screen data | (part of Edge cases) | Data entered on screen A asserted on screen B |
+
+ ### Viewpoint — flow-specific classification
+ - **VP-LOGIC** — screen transitions, navigation flow, auth persistence
+ - **VP-VAL** — cross-screen data consistency, form data carried across pages
+ - **VP-SEC** — auth state across redirects, permission changes mid-flow
+ - VP-UI is typically minimal in flows (focus on functionality over layout)
+
+ ### Checklist — flow-specific items
+ 11. **Missing screen transitions** — flow visits 4 screens but only 2 transitions tested? Add missing
+ 12. **Orphan scenarios** — scenario doesn't connect to previous/next phase? Flag broken flow
+ 13. **Selector namespace consistency** — mixing `[Submit]` and `[Login:Submit]` in same flow? Standardize
+
+ ---
+
 ## Output Format
 
 ```markdown
@@ -0,0 +1,86 @@
+ ---
+ name: sungen-add-flow
+ description: 'Add a new Sungen flow — scaffolds directories for E2E cross-screen testing, helps fill spec.md, and can capture visuals via the capture skills'
+ argument-hint: '[flow-name] [--path <start-url>]'
+ agent: 'agent'
+ tools: [vscode, execute, read, agent, edit, search, todo]
+ ---
+
+ **Input**: Flow name and optional starting URL (e.g., `/sungen-add-flow award-submission --path /login`).
+
+ You are adding a new Sungen flow for E2E cross-screen test generation.
+
+ ## Parameters
+
+ - **flow** — ${input:flow:flow name (e.g., award-submission, user-onboarding)}
+ - **--path \<url\>** — starting page URL path (default: `/login`)
+ - **--description \<text\>** — flow description (optional)
+
+ ## Steps
+
+ ### 1. Scaffold the flow
+
+ Run with #tool:terminal:
+ ```bash
+ sungen add-flow --flow ${input:flow} --path ${input:path}
+ ```
+
+ This creates:
+ ```
+ qa/flows/${input:flow}/
+ ├── features/${input:flow}.feature   # Gherkin with @flow tag, Background, sample scenarios
+ ├── selectors/${input:flow}.yaml     # Namespaced keys: "login:submit", "awards:submit"
+ ├── test-data/${input:flow}.yaml     # Namespaced data: login.email, submission.nominee
+ └── requirements/
+     ├── spec.md                      # Flow specification
+     └── ui/                          # Screenshots, mockups
+ ```
+
+ ### 1a. Identify the screens in the flow
+
+ Ask the user: "Which screens does this flow visit, in order? (e.g., login → dashboard → award-form → confirmation)"
+
+ Record the screen list — you will need it for:
+ - Filling `spec.md` (Step 3)
+ - Suggesting `[Screen:Element]` namespace prefixes
+ - Capturing visuals per screen (Step 2)
+
+ ### 2. Capture visual source
+
+ Ask: *"Pick a visual source for this flow's screens:"*
+ - **Figma designs** (Recommended for pre-launch) — invoke `sungen-capture-figma` skill for each screen
+ - **Live page scan** (dev/staging is up) — invoke `sungen-capture-live` skill for each screen URL
+ - **Local images** — invoke `sungen-capture-local` skill to load from `requirements/ui/`
+ - **Skip** — user will drop images manually into `requirements/ui/` later
+
+ Each capture skill writes outputs into `qa/flows/${input:flow}/requirements/ui/` and reports back a summary. Do not inline capture logic here — always delegate to the skill.
+
+ ### 3. Fill spec.md
+
+ Ask: *"Fill `spec.md` now? (You can reference the captured visuals)"* — offer **Yes, fill now (Recommended)** / **Skip, fill later**.
+
+ If yes → open `qa/flows/${input:flow}/requirements/spec.md` and help the user fill:
+ - **Screens list** — ordered list of screens with URL paths
+ - **Flow steps** — what the user does at each screen
+ - **Transitions** — what triggers navigation between screens
+ - **Business rules** — cross-screen validation, state that persists
+ - **Test data** — what data is entered at each screen
+
+ Reference the captured visuals from Step 2 to suggest field names, form elements, and UI states.
+
+ ### 4. Next steps
+
+ Tell the user what was created and offer next steps:
+
+ - **`/sungen-create-test ${input:flow}`** — Generate test scenarios for the flow (Recommended)
+ - **Done for now** — I'll come back later
+
+ ## Key Rules
+
+ - Flows are **independent** of screens — own selectors, own test-data
+ - Selectors use the `[Screen:Element]` namespace format with a colon
+ - YAML keys must be **quoted** due to the colon: `"login:submit":`
+ - Test data namespaced by phase: `login.email`, `submission.nominee`
+ - `@flow` tag required at feature level
+ - `Background:` should only contain the starting page navigation
+ - Each scenario = one phase of the journey
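Applying these rules, a scaffolded selectors file might look like the following — the selector values are placeholders, since real values are generated during `/sungen-run-test`:

```yaml
# qa/flows/award-submission/selectors/award-submission.yaml
# Keys are "<screen>:<element>", quoted because of the colon; values illustrative.
"login:email": "#email"
"login:submit": "button[type=submit]"
"awards:nominee": "#nominee"
"awards:submit": "button[type=submit]"
```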
@@ -6,7 +6,7 @@ agent: 'agent'
 tools: [vscode, execute, read, agent, edit, search, web, browser, todo, 'playwright/*']
 ---
 
- **Input**: Screen name (e.g., `/sungen-create-test admin-users`).
+ **Input**: Screen or flow name (e.g., `/sungen-create-test admin-users`).
 
 ## Role
 
@@ -14,13 +14,16 @@ You are a **Senior QA Engineer**. You structure test cases by viewpoint categori
 
 ## Parameters
 
- - **screen** — ${input:screen:screen name (e.g., login, dashboard)}
+ - **name** — ${input:name:screen or flow name (e.g., login, award-submission)}
+
+ **Auto-detect context**: check if `qa/flows/<name>/` exists → flow mode (base path: `qa/flows/<name>/`). Else check `qa/screens/<name>/` → screen mode (base path: `qa/screens/<name>/`).
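The auto-detect rule amounts to a directory check. A minimal sketch (illustrative only — the command performs this logic itself, and flow-over-screen precedence when both directories exist is an assumption here):

```python
from pathlib import Path

def detect_base(name: str) -> str:
    """Resolve the base path for a screen or flow name; flows win if both exist."""
    if Path("qa/flows", name).is_dir():
        return f"qa/flows/{name}"      # flow mode
    if Path("qa/screens", name).is_dir():
        return f"qa/screens/{name}"    # screen mode
    return "not-found"

# Demo fixtures so the sketch is runnable:
Path("qa/flows/award-submission").mkdir(parents=True, exist_ok=True)
Path("qa/screens/login").mkdir(parents=True, exist_ok=True)
print(detect_base("award-submission"))  # → qa/flows/award-submission
print(detect_base("login"))             # → qa/screens/login
```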
 
 ## Steps
 
- 1. Verify `qa/screens/${input:screen}/` exists. If not → run `/sungen-add-screen` first.
+ 1. **Flow**: Verify `qa/flows/${input:name}/` exists. If not → `/sungen-add-flow` first.
+    **Screen**: Verify `qa/screens/${input:name}/` exists. If not → `/sungen-add-screen` first.
 2. Check if `.feature` already has scenarios. If yes → summarize existing coverage and ask: **1) Add new sections**, **2) Add viewpoints to existing sections**, or **3) Replace all**. See `sungen-tc-generation` skill for update mode details.
- 3. **Read requirements & resolve visual source** — check `qa/screens/${input:screen}/requirements/`:
+ 3. **Read requirements & resolve visual source** — check `<base>/requirements/`:
    - If `spec.md` exists → read it as PRIMARY source (sections, fields, validation rules, business rules, states).
    - If `test-viewpoint.md` exists → read it. If it only contains HTML comments (scaffold template), ask:
      - **1) Fill test-viewpoint.md first** — identify edge cases, known issues, and design decisions before generating tests
@@ -30,7 +33,7 @@ You are a **Senior QA Engineer**. You structure test cases by viewpoint categori
    1. If `spec_figma.md` exists → read it as Figma supplement (PAT flow already completed during `add-screen`). Do NOT call any `mcp__figma__*` tool.
    2. If `ui/` has images (`.png`, `.jpg`, etc.) → read them for visual context (layout, element positions, states).
    3. If neither exists → ask: *"No visual source found. Pick one:"*
-      - **1) Figma PAT** — ask for URL, run `sungen add --screen ${input:screen} --figma '<url>'`, then invoke `sungen-figma-source` skill
+      - **1) Figma PAT** — ask for URL, run `sungen add --screen ${input:name} --figma '<url>'`, then invoke `sungen-figma-source` skill
      - **2) Figma MCP** — invoke `sungen-capture-figma` skill
      - **3) Live page scan** — invoke `sungen-capture-live` skill
      - **4) Skip** — generate from spec.md only
@@ -39,13 +42,13 @@ You are a **Senior QA Engineer**. You structure test cases by viewpoint categori
 
 Summarize what you found in requirements and present to the user.
 
- 4. Identify screen sections ask user which to focus on (per `sungen-tc-generation` skill). When requirements exist, use the "Requirements-Driven Generation" strategy. Present sections as a numbered list and let user pick.
- 5. Generate or update `.feature` + `test-data.yaml` following `sungen-gherkin-syntax` and `sungen-tc-generation` skills.
+ 4. Follow the `sungen-tc-generation` skill for section identification, viewpoint generation, and output format. **For flows**, use the "Flow Test Generation" section in the skill. When requirements exist, use the "Requirements-Driven Generation" strategy. Present sections as a numbered list and let user pick.
+ 5. Generate or update `.feature` + `test-data.yaml` following `sungen-gherkin-syntax` and `sungen-tc-generation` skills. **For flows**: use `[Screen:Element]` namespace format, namespace test-data by phase, add `@flow` tag.
 6. Show summary and offer next steps:
 
- - **`/sungen-review ${input:screen}`** — Review syntax, coverage, viewpoint quality (Recommended)
- - **`/sungen-run-test ${input:screen}`** — Skip review, generate selectors and run tests now
- - **`/sungen-create-test ${input:screen}`** — Expand coverage: add @normal + @low scenarios
+ - **`/sungen-review ${input:name}`** — Review syntax, coverage, viewpoint quality (Recommended)
+ - **`/sungen-run-test ${input:name}`** — Skip review, generate selectors and run tests now
+ - **`/sungen-create-test ${input:name}`** — Expand coverage: add @normal + @low scenarios
 - **Done for now** — I'll come back later
 
 **No selectors.yaml** — selectors are generated during `/sungen-run-test`.
@@ -1,8 +1,8 @@
 ---
- name: delivery
+ name: sungen-delivery
 description: 'Export Gherkin scenarios + Playwright results to CSV test case file for QA delivery.'
- argument-hint: "[screen-name...] (omit for all screens)"
- allowed-tools: Bash, Read, AskUserQuestion
+ argument-hint: "[name...] (omit for all screens and flows)"
+ tools: [read, execute, edit, vscode/askQuestions]
 ---
 
 ## Role
@@ -11,9 +11,9 @@ You are a **QA Test Delivery Engineer**. Your job is to invoke the deterministic
 
 ## Parameters
 
- Parse **screens** from `$ARGUMENTS`:
- - If empty → CLI will process **all** screens in `qa/screens/`
- - If provided → pass them through to the CLI
+ Parse **names** from `$ARGUMENTS`:
+ - If empty → CLI will process **all** screens in `qa/screens/` and flows in `qa/flows/`
+ - If provided → pass them through to the CLI (auto-detects screen vs flow per name)
 
 ## Steps
 
@@ -22,28 +22,29 @@ Parse **screens** from `$ARGUMENTS`:
 Run via Bash (single command, no extra parsing):
 
 ```bash
- npx sungen delivery <screens>
+ npx sungen delivery <names>
 ```
 
- - If no screen args → just run `npx sungen delivery`
- - If screen args → pass them as positional arguments
+ - If no args → just run `npx sungen delivery` (exports all screens + flows)
+ - If args → pass them as positional arguments (auto-detects screen vs flow)
 
 The CLI handles:
- - Scope detection (all screens vs specific)
+ - Scope detection (all screens + flows vs specific names)
+ - Auto-detect: `qa/flows/<name>/` → flow, `qa/screens/<name>/` → screen
 - Pre-flight source checks with colorful output
 - Parsing `.feature`, `.spec.ts`, `test-data.yaml`, `test-results/results.json`
- - Generating CSV at `qa/deliverables/<screen>-testcases.csv`
+ - Generating CSV at `qa/deliverables/<name>-testcases.csv`
 - Printing summary table
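To picture the deliverable, here is a sketch of writing such a CSV — the column names and row values are assumptions for illustration, not the CLI's exact schema:

```python
import csv
import io

# Hypothetical rows joining parsed Gherkin scenarios with Playwright results.
rows = [
    {"id": "TC-001", "scenario": "User logs in successfully",
     "tag": "@critical", "result": "passed"},
    {"id": "TC-002", "scenario": "Error recovery mid-flow",
     "tag": "@critical", "result": "failed"},
]
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["id", "scenario", "tag", "result"])
writer.writeheader()
writer.writerows(rows)
print(buf.getvalue())
```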
 
 ### 2. Handle pre-flight failures (if CLI exits non-zero)
 
- If the CLI exits with blocking issues, it will have already printed a clear table showing exactly what's missing per screen.
+ If the CLI exits with blocking issues, it will have already printed a clear table showing exactly what's missing per target.
 
 Use `AskUserQuestion` to offer next steps:
 
 **Options:**
 - **Fix missing sources** (Recommended) — Print the suggested commands from CLI output and stop. User will run those commands manually, then re-invoke `/sungen:delivery`.
- - **Continue with available screens** — Re-run as `npx sungen delivery <screens> --continue-on-missing` to skip screens with blocking issues.
+ - **Continue with available targets** — Re-run as `npx sungen delivery <names> --continue-on-missing` to skip targets with blocking issues.
 - **Cancel** — Exit.
 
 ### 3. Show summary + offer next steps (on success)
@@ -51,8 +52,8 @@ Use `AskUserQuestion` to offer next steps:
 Forward the CLI's summary table to the user verbatim. Then use `AskUserQuestion`:
 
 - **Open a specific CSV** — Help user inspect one of the exported files with Read tool.
- - **Run tests to refresh results** — Suggest `/sungen:run-test <screen>` to update `test-results/results.json`, then re-run delivery.
- - **Export another screen** — User can run `/sungen:delivery <other-screen>`.
+ - **Run tests to refresh results** — Suggest `/sungen-run-test <name>` to update test results, then re-run delivery.
+ - **Export another target** — User can run `/sungen-delivery <other-name>`.
 - **Done** — Exit.
 
 ## Important notes
@@ -65,7 +66,7 @@ Forward the CLI's summary table to the user verbatim. Then use `AskUserQuestion`
 ## CLI Reference
 
 ```
- sungen delivery [screens...]
+ sungen delivery [names...]
   [--skip-preflight]       Skip pre-flight checks (not recommended)
- [--continue-on-missing] Skip screens with blocking misses
+   [--continue-on-missing]  Skip targets with blocking misses
 ```