@sun-asterisk/sungen 2.5.2 → 2.6.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (170)
  1. package/dist/cli/commands/add-flow.d.ts +3 -0
  2. package/dist/cli/commands/add-flow.d.ts.map +1 -0
  3. package/dist/cli/commands/add-flow.js +27 -0
  4. package/dist/cli/commands/add-flow.js.map +1 -0
  5. package/dist/cli/commands/delivery.d.ts.map +1 -1
  6. package/dist/cli/commands/delivery.js +95 -60
  7. package/dist/cli/commands/delivery.js.map +1 -1
  8. package/dist/cli/commands/generate.d.ts.map +1 -1
  9. package/dist/cli/commands/generate.js +38 -6
  10. package/dist/cli/commands/generate.js.map +1 -1
  11. package/dist/cli/index.js +3 -1
  12. package/dist/cli/index.js.map +1 -1
  13. package/dist/generators/test-generator/adapters/adapter-interface.d.ts +1 -0
  14. package/dist/generators/test-generator/adapters/adapter-interface.d.ts.map +1 -1
  15. package/dist/generators/test-generator/adapters/playwright/playwright-adapter.d.ts +1 -0
  16. package/dist/generators/test-generator/adapters/playwright/playwright-adapter.d.ts.map +1 -1
  17. package/dist/generators/test-generator/adapters/playwright/playwright-adapter.js.map +1 -1
  18. package/dist/generators/test-generator/adapters/playwright/templates/imports.hbs +2 -2
  19. package/dist/generators/test-generator/adapters/playwright/templates/steps/actions/alert-accept-action.hbs +1 -1
  20. package/dist/generators/test-generator/adapters/playwright/templates/steps/actions/alert-dismiss-action.hbs +1 -1
  21. package/dist/generators/test-generator/adapters/playwright/templates/steps/actions/alert-fill-action.hbs +1 -1
  22. package/dist/generators/test-generator/adapters/playwright/templates/steps/assertions/alert-text-assertion.hbs +2 -2
  23. package/dist/generators/test-generator/adapters/playwright/templates/steps/assertions/column-cell-assertion.hbs +1 -0
  24. package/dist/generators/test-generator/adapters/playwright/templates/steps/navigation/navigation.hbs +2 -1
  25. package/dist/generators/test-generator/adapters/playwright/templates/steps/navigation/route-assertion.hbs +1 -2
  26. package/dist/generators/test-generator/adapters/playwright/templates/steps/navigation/wait-timeout.hbs +1 -1
  27. package/dist/generators/test-generator/code-generator.d.ts +1 -0
  28. package/dist/generators/test-generator/code-generator.d.ts.map +1 -1
  29. package/dist/generators/test-generator/code-generator.js +30 -12
  30. package/dist/generators/test-generator/code-generator.js.map +1 -1
  31. package/dist/generators/test-generator/step-mapper.d.ts +4 -0
  32. package/dist/generators/test-generator/step-mapper.d.ts.map +1 -1
  33. package/dist/generators/test-generator/step-mapper.js +7 -0
  34. package/dist/generators/test-generator/step-mapper.js.map +1 -1
  35. package/dist/generators/test-generator/template-engine.d.ts +1 -0
  36. package/dist/generators/test-generator/template-engine.d.ts.map +1 -1
  37. package/dist/generators/test-generator/template-engine.js +1 -1
  38. package/dist/generators/test-generator/template-engine.js.map +1 -1
  39. package/dist/generators/test-generator/utils/data-resolver.d.ts +3 -20
  40. package/dist/generators/test-generator/utils/data-resolver.d.ts.map +1 -1
  41. package/dist/generators/test-generator/utils/data-resolver.js +23 -66
  42. package/dist/generators/test-generator/utils/data-resolver.js.map +1 -1
  43. package/dist/generators/test-generator/utils/selector-resolver.d.ts +2 -6
  44. package/dist/generators/test-generator/utils/selector-resolver.d.ts.map +1 -1
  45. package/dist/generators/test-generator/utils/selector-resolver.js +18 -80
  46. package/dist/generators/test-generator/utils/selector-resolver.js.map +1 -1
  47. package/dist/orchestrator/ai-rules-updater.d.ts.map +1 -1
  48. package/dist/orchestrator/ai-rules-updater.js +4 -0
  49. package/dist/orchestrator/ai-rules-updater.js.map +1 -1
  50. package/dist/orchestrator/flow-manager.d.ts +22 -0
  51. package/dist/orchestrator/flow-manager.d.ts.map +1 -0
  52. package/dist/orchestrator/flow-manager.js +251 -0
  53. package/dist/orchestrator/flow-manager.js.map +1 -0
  54. package/dist/orchestrator/project-initializer.d.ts.map +1 -1
  55. package/dist/orchestrator/project-initializer.js +1 -0
  56. package/dist/orchestrator/project-initializer.js.map +1 -1
  57. package/dist/orchestrator/templates/ai-instructions/claude-cmd-add-flow.md +88 -0
  58. package/dist/orchestrator/templates/ai-instructions/claude-cmd-create-test.md +11 -8
  59. package/dist/orchestrator/templates/ai-instructions/claude-cmd-review.md +8 -6
  60. package/dist/orchestrator/templates/ai-instructions/claude-cmd-run-test.md +15 -11
  61. package/dist/orchestrator/templates/ai-instructions/claude-config.md +41 -10
  62. package/dist/orchestrator/templates/ai-instructions/claude-skill-capture-live.md +12 -0
  63. package/dist/orchestrator/templates/ai-instructions/claude-skill-delivery.md +19 -18
  64. package/dist/orchestrator/templates/ai-instructions/claude-skill-error-mapping.md +12 -0
  65. package/dist/orchestrator/templates/ai-instructions/claude-skill-gherkin-syntax.md +52 -0
  66. package/dist/orchestrator/templates/ai-instructions/claude-skill-selector-fix.md +31 -3
  67. package/dist/orchestrator/templates/ai-instructions/claude-skill-selector-keys.md +45 -0
  68. package/dist/orchestrator/templates/ai-instructions/claude-skill-tc-generation.md +69 -0
  69. package/dist/orchestrator/templates/ai-instructions/claude-skill-tc-review.md +30 -0
  70. package/dist/orchestrator/templates/ai-instructions/copilot-cmd-add-flow.md +86 -0
  71. package/dist/orchestrator/templates/ai-instructions/copilot-cmd-create-test.md +13 -10
  72. package/dist/orchestrator/templates/ai-instructions/copilot-cmd-delivery.md +16 -15
  73. package/dist/orchestrator/templates/ai-instructions/copilot-cmd-review.md +9 -7
  74. package/dist/orchestrator/templates/ai-instructions/copilot-cmd-run-test.md +21 -17
  75. package/dist/orchestrator/templates/ai-instructions/copilot-config.md +40 -9
  76. package/dist/orchestrator/templates/ai-instructions/github-skill-sungen-capture-live.md +12 -0
  77. package/dist/orchestrator/templates/ai-instructions/github-skill-sungen-delivery.md +19 -18
  78. package/dist/orchestrator/templates/ai-instructions/github-skill-sungen-error-mapping.md +12 -0
  79. package/dist/orchestrator/templates/ai-instructions/github-skill-sungen-gherkin-syntax.md +52 -0
  80. package/dist/orchestrator/templates/ai-instructions/github-skill-sungen-selector-fix.md +31 -3
  81. package/dist/orchestrator/templates/ai-instructions/github-skill-sungen-selector-keys.md +45 -0
  82. package/dist/orchestrator/templates/ai-instructions/github-skill-sungen-tc-generation.md +70 -0
  83. package/dist/orchestrator/templates/ai-instructions/github-skill-sungen-tc-review.md +30 -0
  84. package/dist/orchestrator/templates/playwright.config.d.ts.map +1 -1
  85. package/dist/orchestrator/templates/playwright.config.js +3 -1
  86. package/dist/orchestrator/templates/playwright.config.js.map +1 -1
  87. package/dist/orchestrator/templates/playwright.config.ts +4 -1
  88. package/dist/orchestrator/templates/specs-base.d.ts.map +1 -1
  89. package/dist/orchestrator/templates/specs-base.js +11 -56
  90. package/dist/orchestrator/templates/specs-base.js.map +1 -1
  91. package/dist/orchestrator/templates/specs-base.ts +11 -61
  92. package/dist/orchestrator/templates/specs-test-data.d.ts +3 -1
  93. package/dist/orchestrator/templates/specs-test-data.d.ts.map +1 -1
  94. package/dist/orchestrator/templates/specs-test-data.js +10 -2
  95. package/dist/orchestrator/templates/specs-test-data.js.map +1 -1
  96. package/dist/orchestrator/templates/specs-test-data.ts +9 -2
  97. package/package.json +1 -1
  98. package/src/cli/commands/add-flow.ts +25 -0
  99. package/src/cli/commands/delivery.ts +109 -58
  100. package/src/cli/commands/generate.ts +43 -6
  101. package/src/cli/index.ts +3 -1
  102. package/src/generators/test-generator/adapters/adapter-interface.ts +1 -1
  103. package/src/generators/test-generator/adapters/playwright/playwright-adapter.ts +1 -1
  104. package/src/generators/test-generator/adapters/playwright/templates/imports.hbs +2 -2
  105. package/src/generators/test-generator/adapters/playwright/templates/steps/actions/alert-accept-action.hbs +1 -1
  106. package/src/generators/test-generator/adapters/playwright/templates/steps/actions/alert-dismiss-action.hbs +1 -1
  107. package/src/generators/test-generator/adapters/playwright/templates/steps/actions/alert-fill-action.hbs +1 -1
  108. package/src/generators/test-generator/adapters/playwright/templates/steps/assertions/alert-text-assertion.hbs +2 -2
  109. package/src/generators/test-generator/adapters/playwright/templates/steps/assertions/column-cell-assertion.hbs +1 -0
  110. package/src/generators/test-generator/adapters/playwright/templates/steps/navigation/navigation.hbs +2 -1
  111. package/src/generators/test-generator/adapters/playwright/templates/steps/navigation/route-assertion.hbs +1 -2
  112. package/src/generators/test-generator/adapters/playwright/templates/steps/navigation/wait-timeout.hbs +1 -1
  113. package/src/generators/test-generator/code-generator.ts +32 -14
  114. package/src/generators/test-generator/step-mapper.ts +8 -0
  115. package/src/generators/test-generator/template-engine.ts +2 -2
  116. package/src/generators/test-generator/utils/data-resolver.ts +25 -77
  117. package/src/generators/test-generator/utils/selector-resolver.ts +23 -109
  118. package/src/orchestrator/ai-rules-updater.ts +5 -0
  119. package/src/orchestrator/flow-manager.ts +243 -0
  120. package/src/orchestrator/project-initializer.ts +1 -0
  121. package/src/orchestrator/templates/ai-instructions/claude-cmd-add-flow.md +88 -0
  122. package/src/orchestrator/templates/ai-instructions/claude-cmd-create-test.md +11 -8
  123. package/src/orchestrator/templates/ai-instructions/claude-cmd-review.md +8 -6
  124. package/src/orchestrator/templates/ai-instructions/claude-cmd-run-test.md +15 -11
  125. package/src/orchestrator/templates/ai-instructions/claude-config.md +41 -10
  126. package/src/orchestrator/templates/ai-instructions/claude-skill-capture-live.md +12 -0
  127. package/src/orchestrator/templates/ai-instructions/claude-skill-delivery.md +19 -18
  128. package/src/orchestrator/templates/ai-instructions/claude-skill-error-mapping.md +12 -0
  129. package/src/orchestrator/templates/ai-instructions/claude-skill-gherkin-syntax.md +52 -0
  130. package/src/orchestrator/templates/ai-instructions/claude-skill-selector-fix.md +31 -3
  131. package/src/orchestrator/templates/ai-instructions/claude-skill-selector-keys.md +45 -0
  132. package/src/orchestrator/templates/ai-instructions/claude-skill-tc-generation.md +69 -0
  133. package/src/orchestrator/templates/ai-instructions/claude-skill-tc-review.md +30 -0
  134. package/src/orchestrator/templates/ai-instructions/copilot-cmd-add-flow.md +86 -0
  135. package/src/orchestrator/templates/ai-instructions/copilot-cmd-create-test.md +13 -10
  136. package/src/orchestrator/templates/ai-instructions/copilot-cmd-delivery.md +16 -15
  137. package/src/orchestrator/templates/ai-instructions/copilot-cmd-review.md +9 -7
  138. package/src/orchestrator/templates/ai-instructions/copilot-cmd-run-test.md +21 -17
  139. package/src/orchestrator/templates/ai-instructions/copilot-config.md +40 -9
  140. package/src/orchestrator/templates/ai-instructions/github-skill-sungen-capture-live.md +12 -0
  141. package/src/orchestrator/templates/ai-instructions/github-skill-sungen-delivery.md +19 -18
  142. package/src/orchestrator/templates/ai-instructions/github-skill-sungen-error-mapping.md +12 -0
  143. package/src/orchestrator/templates/ai-instructions/github-skill-sungen-gherkin-syntax.md +52 -0
  144. package/src/orchestrator/templates/ai-instructions/github-skill-sungen-selector-fix.md +31 -3
  145. package/src/orchestrator/templates/ai-instructions/github-skill-sungen-selector-keys.md +45 -0
  146. package/src/orchestrator/templates/ai-instructions/github-skill-sungen-tc-generation.md +70 -0
  147. package/src/orchestrator/templates/ai-instructions/github-skill-sungen-tc-review.md +30 -0
  148. package/src/orchestrator/templates/playwright.config.ts +4 -1
  149. package/src/orchestrator/templates/specs-base.ts +11 -61
  150. package/src/orchestrator/templates/specs-test-data.ts +9 -2
  151. package/dist/utils/feature-finder.d.ts +0 -9
  152. package/dist/utils/feature-finder.d.ts.map +0 -1
  153. package/dist/utils/feature-finder.js +0 -67
  154. package/dist/utils/feature-finder.js.map +0 -1
  155. package/dist/utils/screen-paths.d.ts +0 -10
  156. package/dist/utils/screen-paths.d.ts.map +0 -1
  157. package/dist/utils/screen-paths.js +0 -73
  158. package/dist/utils/screen-paths.js.map +0 -1
  159. package/dist/utils/selector-loader.d.ts +0 -6
  160. package/dist/utils/selector-loader.d.ts.map +0 -1
  161. package/dist/utils/selector-loader.js +0 -20
  162. package/dist/utils/selector-loader.js.map +0 -1
  163. package/dist/utils/test-data-loader.d.ts +0 -6
  164. package/dist/utils/test-data-loader.d.ts.map +0 -1
  165. package/dist/utils/test-data-loader.js +0 -20
  166. package/dist/utils/test-data-loader.js.map +0 -1
  167. package/src/utils/feature-finder.ts +0 -33
  168. package/src/utils/screen-paths.ts +0 -37
  169. package/src/utils/selector-loader.ts +0 -23
  170. package/src/utils/test-data-loader.ts +0 -23
@@ -27,6 +27,51 @@ Copy the text from `[Reference]` as-is, then lowercase. Unicode characters (Viet
  4. **Keep all Unicode characters as-is** (Vietnamese diacritics, Japanese, etc.)
  5. **Keys use spaces** (not dots) as word separators
 
+ ## Flow Namespaced Keys
+
+ In `@flow` features, selectors are namespaced by screen using a colon: `[Screen:Element]` → YAML key `"screen:element"` (quoted).
+
+ ```
+ [Login:Email] → "login:email"
+ [Login:Submit] → "login:submit"
+ [Dashboard:Awards] → "dashboard:awards"
+ [Awards:Submit] → "awards:submit"
+ ```
+
+ **Rules:**
+ 1. Same lowercase + Unicode rules as standard keys
+ 2. Colon separates screen prefix from element name
+ 3. **YAML keys must be quoted** because of the colon: `"login:email":`
+ 4. Page references don't need a prefix: `[Login]` → `login:` (page type)
+ 5. Prevents duplicate names across screens (e.g., `"login:submit"` vs `"awards:submit"`)
+
+ ```yaml
+ # Flow selectors — each screen section namespaced
+ login:
+   type: 'page'
+   value: '/login'
+
+ "login:email":
+   type: 'testid'
+   value: 'email-input'
+
+ "login:submit":
+   type: 'role'
+   value: 'button'
+   name: 'Login'
+
+ awards:
+   type: 'page'
+   value: '/awards'
+
+ "awards:submit":
+   type: 'role'
+   value: 'button'
+   name: 'Submit Award'
+ ```
+
+ **Type and nth suffixes still apply:** `"login:submit--button"`, `"awards:item--3"`
+
  ## Type-Suffixed Keys
 
  When the same label is used for different element types, add `--type` suffix:
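The `[Screen:Element]` → key mapping above can be sketched in a few lines. This is a hypothetical illustration of the documented rule (lowercase, Unicode kept as-is, colon keys quoted in YAML) — `toSelectorKey` and `toYamlKey` are made-up helper names, not the package's actual implementation.

```typescript
// Hypothetical sketch of the namespaced-key rule — not sungen's real code.
function toSelectorKey(ref: string): string {
  // Strip the surrounding [ ] from a reference like [Login:Email]
  const inner = ref.replace(/^\[|\]$/g, "");
  // Lowercase; Unicode characters and spaces are preserved as-is
  return inner.toLowerCase();
}

function toYamlKey(ref: string): string {
  const key = toSelectorKey(ref);
  // Keys containing a colon must be quoted when written to YAML
  return key.includes(":") ? `"${key}"` : key;
}

console.log(toYamlKey("[Login:Email]")); // → "login:email" (with quotes)
console.log(toYamlKey("[Login]"));       // → login (page key, no quotes needed)
```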
@@ -237,4 +237,73 @@ valid_email: admin@staging.example.com
  valid_password: StagingPass456
  ```
 
+ ## Flow Test Generation
+
+ When generating tests for a **flow** (`qa/flows/<name>/`), adapt the strategy:
+
+ ### Structure
+
+ - **Background** — starting page only: `Given User is on [Login] page`
+ - **Scenarios** — each phase of the E2E journey as a separate scenario
+ - **Selector refs** — use `[Screen:Element]` namespace format (see `sungen-gherkin-syntax`)
+ - **Test data** — namespace by phase: `login.email`, `submission.nominee`
+ - **Feature tag** — `@flow` required at feature level
+
+ ### Flow vs Screen Differences
+
+ | Aspect | Screen | Flow |
+ |---|---|---|
+ | Section focus | UI patterns per section | Journey phases across screens |
+ | Viewpoints | VP-UI, VP-VAL, VP-LOGIC, VP-SEC per section | VP-LOGIC (flow transitions), VP-SEC (auth persistence), VP-VAL (cross-screen data) |
+ | Tier 1 focus | Happy path + required validation per section | Happy path through entire flow + auth + key error recovery |
+ | Background | Navigate to screen | Navigate to starting page |
+
+ ### Flow-specific scenarios to generate
+
+ | Category | Examples |
+ |---|---|
+ | **Happy path** | Complete flow end-to-end with valid data |
+ | **Auth persistence** | Auth state maintained across screen transitions |
+ | **Error recovery** | Invalid input mid-flow → fix → continue |
+ | **Incomplete flow** | User abandons at each phase → state cleanup |
+ | **Cross-screen data** | Data entered on screen A visible on screen B |
+
+ ### Output Format (Flow)
+
+ ```gherkin
+ @flow @auth:user
+ Feature: Award Submission Flow
+
+   Background:
+     Given User is on [Login] page
+
+   @critical
+   Scenario: User login successfully
+     When User fill [Login:Email] field with {{login.email}}
+     And User fill [Login:Password] field with {{login.password}}
+     And User click [Login:Submit] button
+     Then User see [Dashboard] page
+
+   @critical
+   Scenario: User navigates to awards and submits
+     When User click [Dashboard:Awards] link
+     Then User see [Awards] page
+     When User fill [Awards:Nominee] field with {{submission.nominee}}
+     And User click [Awards:Submit] button
+     Then User see {{success_message}} message
+ ```
+
+ **Test data** — `qa/flows/<name>/test-data/<name>.yaml`, namespaced by phase:
+
+ ```yaml
+ login:
+   email: "admin@example.com"
+   password: "secret123"
+ submission:
+   nominee: "John Doe"
+ success_message: "Award submitted successfully"
+ ```
+
+ **Environment overrides** work the same: `<name>.<env>.yaml` merged at runtime via `SUNGEN_ENV`.
+
  **Do NOT generate**: `selectors.yaml` (created during run-test), Playwright code (sungen compiles).
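The environment-override merge mentioned above can be sketched as a deep merge of the parsed YAML documents, with scalar values from `<name>.<env>.yaml` winning over the base file. This is an illustrative sketch only — plain objects stand in for parsed YAML, and `mergeTestData` is a hypothetical name, not the package's actual merge routine.

```typescript
// Hypothetical sketch of env-override merging for flow test data.
type Data = { [key: string]: unknown };

function mergeTestData(base: Data, override: Data): Data {
  const out: Data = { ...base };
  for (const [key, value] of Object.entries(override)) {
    const prev = out[key];
    // Recurse into nested phase namespaces (login, submission, ...)
    if (prev !== null && typeof prev === "object" && !Array.isArray(prev) &&
        value !== null && typeof value === "object" && !Array.isArray(value)) {
      out[key] = mergeTestData(prev as Data, value as Data);
    } else {
      out[key] = value; // scalars: the env-specific value wins
    }
  }
  return out;
}

const base = { login: { email: "admin@example.com", password: "secret123" } };
const staging = { login: { email: "admin@staging.example.com" } };
const merged = mergeTestData(base, staging);
// merged.login keeps password "secret123" but takes the staging email
```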
@@ -89,6 +89,36 @@ Do NOT mark `@manual` when data is visible in snapshot or documented in spec —
 
  ---
 
+ ## Flow Review Additions
+
+ When reviewing a `@flow` feature (`qa/flows/<name>/`), apply standard scoring PLUS these flow-specific checks:
+
+ ### Syntax — additional checks
+ - `[Screen:Element]` format used consistently (not mixing bare `[Element]` refs)
+ - YAML keys quoted with colon: `"login:submit":` not `login:submit:`
+ - `@flow` tag present at feature level
+
+ ### Coverage — additional dimensions
+ | Dimension | Pts (from existing 40) | What to check |
+ |---|---|---|
+ | Screen transitions | (part of State transitions) | Each screen-to-screen navigation tested |
+ | Auth persistence | (part of Happy paths) | Auth state maintained across transitions |
+ | Error recovery mid-flow | (part of Negative cases) | Invalid input at each phase → fix → continue |
+ | Cross-screen data | (part of Edge cases) | Data entered on screen A asserted on screen B |
+
+ ### Viewpoint — flow-specific classification
+ - **VP-LOGIC** — screen transitions, navigation flow, auth persistence
+ - **VP-VAL** — cross-screen data consistency, form data carried across pages
+ - **VP-SEC** — auth state across redirects, permission changes mid-flow
+ - VP-UI is typically minimal in flows (focus on functionality over layout)
+
+ ### Checklist — flow-specific items
+ 11. **Missing screen transitions** — flow visits 4 screens but only 2 transitions tested? Add missing
+ 12. **Orphan scenarios** — scenario doesn't connect to previous/next phase? Flag broken flow
+ 13. **Selector namespace consistency** — mixing `[Submit]` and `[Login:Submit]` in same flow? Standardize
+
+ ---
+
  ## Output Format
 
  ```markdown
@@ -0,0 +1,86 @@
+ ---
+ name: sungen-add-flow
+ description: 'Add a new Sungen flow — scaffolds directories for E2E cross-screen testing, helps fill spec.md, and can capture visuals via the capture skills'
+ argument-hint: '[flow-name] [--path <start-url>]'
+ agent: 'agent'
+ tools: [vscode, execute, read, agent, edit, search, todo]
+ ---
+
+ **Input**: Flow name and optional starting URL (e.g., `/sungen-add-flow award-submission --path /login`).
+
+ You are adding a new Sungen flow for E2E cross-screen test generation.
+
+ ## Parameters
+
+ - **flow** — ${input:flow:flow name (e.g., award-submission, user-onboarding)}
+ - **--path \<url\>** — starting page URL path (default: `/login`)
+ - **--description \<text\>** — flow description (optional)
+
+ ## Steps
+
+ ### 1. Scaffold the flow
+
+ Run with #tool:terminal:
+ ```bash
+ sungen add-flow --flow ${input:flow} --path ${input:path}
+ ```
+
+ This creates:
+ ```
+ qa/flows/${input:flow}/
+ ├── features/${input:flow}.feature   # Gherkin with @flow tag, Background, sample scenarios
+ ├── selectors/${input:flow}.yaml     # Namespaced keys: "login:submit", "awards:submit"
+ ├── test-data/${input:flow}.yaml     # Namespaced data: login.email, submission.nominee
+ └── requirements/
+     ├── spec.md                      # Flow specification
+     └── ui/                          # Screenshots, mockups
+ ```
+
+ ### 1a. Identify the screens in the flow
+
+ Ask the user: "Which screens does this flow visit, in order? (e.g., login → dashboard → award-form → confirmation)"
+
+ Record the screen list — you will need it for:
+ - Filling `spec.md` (Step 3)
+ - Suggesting `[Screen:Element]` namespace prefixes
+ - Capturing visuals per screen (Step 2)
+
+ ### 2. Capture visual source
+
+ Ask: *"Pick a visual source for this flow's screens:"*
+ - **Figma designs** (Recommended for pre-launch) — invoke `sungen-capture-figma` skill for each screen
+ - **Live page scan** (dev/staging is up) — invoke `sungen-capture-live` skill for each screen URL
+ - **Local images** — invoke `sungen-capture-local` skill to load from `requirements/ui/`
+ - **Skip** — user will drop images manually into `requirements/ui/` later
+
+ Each capture skill writes outputs into `qa/flows/${input:flow}/requirements/ui/` and reports back a summary. Do not inline capture logic here — always delegate to the skill.
+
+ ### 3. Fill spec.md
+
+ Ask: *"Fill `spec.md` now? (You can reference the captured visuals)"* — offer **Yes, fill now (Recommended)** / **Skip, fill later**.
+
+ If yes → open `qa/flows/${input:flow}/requirements/spec.md` and help the user fill:
+ - **Screens list** — ordered list of screens with URL paths
+ - **Flow steps** — what the user does at each screen
+ - **Transitions** — what triggers navigation between screens
+ - **Business rules** — cross-screen validation, state that persists
+ - **Test data** — what data is entered at each screen
+
+ Reference the captured visuals from Step 2 to suggest field names, form elements, and UI states.
+
+ ### 4. Next steps
+
+ Tell the user what was created and offer next steps:
+
+ - **`/sungen-create-test ${input:flow}`** — Generate test scenarios for the flow (Recommended)
+ - **Done for now** — I'll come back later
+
+ ## Key Rules
+
+ - Flows are **independent** from screens — own selectors, own test-data
+ - Selectors use `[Screen:Element]` namespace format with colon
+ - YAML keys must be **quoted** due to colon: `"login:submit":`
+ - Test data namespaced by phase: `login.email`, `submission.nominee`
+ - `@flow` tag required at feature level
+ - `Background:` should only contain the starting page navigation
+ - Each scenario = one phase of the journey
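The directory layout `sungen add-flow` scaffolds can be sketched as follows. This is a hypothetical illustration of the tree shown above — `scaffoldFlow` is a made-up helper, not the package's actual `flow-manager` code, and the empty-file contents stand in for the real templates.

```typescript
// Hypothetical sketch of the add-flow scaffold — not sungen's flow-manager.
import { mkdirSync, writeFileSync } from "node:fs";
import { join } from "node:path";

function scaffoldFlow(root: string, flow: string): string[] {
  const base = join(root, "qa", "flows", flow);
  // Create the directory tree shown in the command template above
  for (const d of ["features", "selectors", "test-data", join("requirements", "ui")]) {
    mkdirSync(join(base, d), { recursive: true });
  }
  const files = [
    join(base, "features", `${flow}.feature`),
    join(base, "selectors", `${flow}.yaml`),
    join(base, "test-data", `${flow}.yaml`),
    join(base, "requirements", "spec.md"),
  ];
  // "wx" refuses to overwrite an existing flow's files
  for (const f of files) writeFileSync(f, "", { flag: "wx" });
  return files;
}
```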
@@ -6,7 +6,7 @@ agent: 'agent'
  tools: [vscode, execute, read, agent, edit, search, web, browser, todo, 'playwright/*']
  ---
 
- **Input**: Screen name (e.g., `/sungen-create-test admin-users`).
+ **Input**: Screen or flow name (e.g., `/sungen-create-test admin-users`).
 
  ## Role
 
@@ -14,13 +14,16 @@ You are a **Senior QA Engineer**. You structure test cases by viewpoint categori
 
  ## Parameters
 
- - **screen** — ${input:screen:screen name (e.g., login, dashboard)}
+ - **name** — ${input:name:screen or flow name (e.g., login, award-submission)}
+
+ **Auto-detect context**: check if `qa/flows/<name>/` exists → flow mode (base path: `qa/flows/<name>/`). Else check `qa/screens/<name>/` → screen mode (base path: `qa/screens/<name>/`).
 
  ## Steps
 
- 1. Verify `qa/screens/${input:screen}/` exists. If not → run `/sungen-add-screen` first.
+ 1. **Flow**: Verify `qa/flows/${input:name}/` exists. If not → `/sungen-add-flow` first.
+    **Screen**: Verify `qa/screens/${input:name}/` exists. If not → `/sungen-add-screen` first.
  2. Check if `.feature` already has scenarios. If yes → summarize existing coverage and ask: **1) Add new sections**, **2) Add viewpoints to existing sections**, or **3) Replace all**. See `sungen-tc-generation` skill for update mode details.
- 3. **Read requirements & resolve visual source** — check `qa/screens/${input:screen}/requirements/`:
+ 3. **Read requirements & resolve visual source** — check `<base>/${input:name}/requirements/`:
     - If `spec.md` exists → read it as PRIMARY source (sections, fields, validation rules, business rules, states).
     - If `test-viewpoint.md` exists → read it. If it only contains HTML comments (scaffold template), ask:
       - **1) Fill test-viewpoint.md first** — identify edge cases, known issues, and design decisions before generating tests
@@ -30,7 +33,7 @@ You are a **Senior QA Engineer**. You structure test cases by viewpoint categori
     1. If `spec_figma.md` exists → read it as Figma supplement (PAT flow already completed during `add-screen`). Do NOT call any `mcp__figma__*` tool.
     2. If `ui/` has images (`.png`, `.jpg`, etc.) → read them for visual context (layout, element positions, states).
     3. If neither exists → ask: *"No visual source found. Pick one:"*
-       - **1) Figma PAT** — ask for URL, run `sungen add --screen ${input:screen} --figma '<url>'`, then invoke `sungen-figma-source` skill
+       - **1) Figma PAT** — ask for URL, run `sungen add --screen ${input:name} --figma '<url>'`, then invoke `sungen-figma-source` skill
        - **2) Figma MCP** — invoke `sungen-capture-figma` skill
        - **3) Live page scan** — invoke `sungen-capture-live` skill
        - **4) Skip** — generate from spec.md only
@@ -39,13 +42,13 @@ You are a **Senior QA Engineer**. You structure test cases by viewpoint categori
 
  Summarize what you found in requirements and present to the user.
 
- 4. Identify screen sections ask user which to focus on (per `sungen-tc-generation` skill). When requirements exist, use the "Requirements-Driven Generation" strategy. Present sections as a numbered list and let user pick.
- 5. Generate or update `.feature` + `test-data.yaml` following `sungen-gherkin-syntax` and `sungen-tc-generation` skills.
+ 4. Follow the `sungen-tc-generation` skill for section identification, viewpoint generation, and output format. **For flows**, use the "Flow Test Generation" section in the skill. When requirements exist, use the "Requirements-Driven Generation" strategy. Present sections as a numbered list and let user pick.
+ 5. Generate or update `.feature` + `test-data.yaml` following `sungen-gherkin-syntax` and `sungen-tc-generation` skills. **For flows**: use `[Screen:Element]` namespace format, namespace test-data by phase, add `@flow` tag.
  6. Show summary and offer next steps:
 
- - **`/sungen-review ${input:screen}`** — Review syntax, coverage, viewpoint quality (Recommended)
- - **`/sungen-run-test ${input:screen}`** — Skip review, generate selectors and run tests now
- - **`/sungen-create-test ${input:screen}`** — Expand coverage: add @normal + @low scenarios
+ - **`/sungen-review ${input:name}`** — Review syntax, coverage, viewpoint quality (Recommended)
+ - **`/sungen-run-test ${input:name}`** — Skip review, generate selectors and run tests now
+ - **`/sungen-create-test ${input:name}`** — Expand coverage: add @normal + @low scenarios
  - **Done for now** — I'll come back later
 
  **No selectors.yaml** — selectors are generated during `/sungen-run-test`.
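The flow-vs-screen auto-detection described in this command template can be sketched as two existence checks in priority order. This is an illustrative sketch only — in the template the agent performs these checks itself, and `resolveTarget` is a hypothetical helper name.

```typescript
// Hypothetical sketch of the auto-detect rule: qa/flows/<name>/ wins,
// then qa/screens/<name>/, else neither target exists.
import { existsSync } from "node:fs";
import { join } from "node:path";

type Target = { mode: "flow" | "screen"; basePath: string };

function resolveTarget(root: string, name: string): Target | null {
  const flowPath = join(root, "qa", "flows", name);
  if (existsSync(flowPath)) return { mode: "flow", basePath: flowPath };
  const screenPath = join(root, "qa", "screens", name);
  if (existsSync(screenPath)) return { mode: "screen", basePath: screenPath };
  return null; // suggest /sungen-add-flow or /sungen-add-screen first
}
```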
@@ -1,7 +1,7 @@
1
1
  ---
2
2
  name: sungen-delivery
3
3
  description: 'Export Gherkin scenarios + Playwright results to CSV test case file for QA delivery.'
4
- argument-hint: "[screen-name...] (omit for all screens)"
4
+ argument-hint: "[name...] (omit for all screens and flows)"
5
5
  tools: [read, execute, edit, vscode/askQuestions]
6
6
  ---
7
7
 
@@ -11,9 +11,9 @@ You are a **QA Test Delivery Engineer**. Your job is to invoke the deterministic
11
11
 
12
12
  ## Parameters
13
13
 
14
- Parse **screens** from `$ARGUMENTS`:
15
- - If empty → CLI will process **all** screens in `qa/screens/`
16
- - If provided → pass them through to the CLI
14
+ Parse **names** from `$ARGUMENTS`:
15
+ - If empty → CLI will process **all** screens in `qa/screens/` and flows in `qa/flows/`
16
+ - If provided → pass them through to the CLI (auto-detects screen vs flow per name)
17
17
 
18
18
  ## Steps
19
19
 
@@ -22,28 +22,29 @@ Parse **screens** from `$ARGUMENTS`:
22
22
  Run via Bash (single command, no extra parsing):
23
23
 
24
24
  ```bash
25
- npx sungen delivery <screens>
25
+ npx sungen delivery <names>
26
26
  ```
27
27
 
28
- - If no screen args → just run `npx sungen delivery`
29
- - If screen args → pass them as positional arguments
28
+ - If no args → just run `npx sungen delivery` (exports all screens + flows)
29
+ - If args → pass them as positional arguments (auto-detects screen vs flow)
30
30
 
31
31
  The CLI handles:
32
- - Scope detection (all screens vs specific)
32
+ - Scope detection (all screens + flows vs specific names)
33
+ - Auto-detect: `qa/flows/<name>/` → flow, `qa/screens/<name>/` → screen
33
34
  - Pre-flight source checks with colorful output
34
35
  - Parsing `.feature`, `.spec.ts`, `test-data.yaml`, `test-results/results.json`
35
- - Generating CSV at `qa/deliverables/<screen>-testcases.csv`
36
+ - Generating CSV at `qa/deliverables/<name>-testcases.csv`
36
37
  - Printing summary table
37
38
 
38
39
  ### 2. Handle pre-flight failures (if CLI exits non-zero)
39
40
 
40
- If the CLI exits with blocking issues, it will have already printed a clear table showing exactly what's missing per screen.
41
+ If the CLI exits with blocking issues, it will have already printed a clear table showing exactly what's missing per target.
41
42
 
42
43
  Use `AskUserQuestion` to offer next steps:
43
44
 
44
45
  **Options:**
45
46
  - **Fix missing sources** (Recommended) — Print the suggested commands from CLI output and stop. User will run those commands manually, then re-invoke `/sungen:delivery`.
46
- - **Continue with available screens** — Re-run as `npx sungen delivery <screens> --continue-on-missing` to skip screens with blocking issues.
47
+ - **Continue with available targets** — Re-run as `npx sungen delivery <names> --continue-on-missing` to skip targets with blocking issues.
47
48
  - **Cancel** — Exit.
48
49
 
49
50
  ### 3. Show summary + offer next steps (on success)
@@ -51,8 +52,8 @@ Use `AskUserQuestion` to offer next steps:
  Forward the CLI's summary table to the user verbatim. Then use `AskUserQuestion`:
 
  - **Open a specific CSV** — Help user inspect one of the exported files with Read tool.
- - **Run tests to refresh results** — Suggest `/sungen:run-test <screen>` to update `test-results/results.json`, then re-run delivery.
- - **Export another screen** — User can run `/sungen:delivery <other-screen>`.
+ - **Run tests to refresh results** — Suggest `/sungen-run-test <name>` to update test results, then re-run delivery.
+ - **Export another target** — User can run `/sungen-delivery <other-name>`.
  - **Done** — Exit.
 
  ## Important notes
@@ -65,7 +66,7 @@ Forward the CLI's summary table to the user verbatim. Then use `AskUserQuestion`
  ## CLI Reference
 
  ```
- sungen delivery [screens...]
+ sungen delivery [names...]
    [--skip-preflight]       Skip pre-flight checks (not recommended)
-   [--continue-on-missing]  Skip screens with blocking misses
+   [--continue-on-missing]  Skip targets with blocking misses
  ```
@@ -6,7 +6,7 @@ agent: 'agent'
  tools: [vscode, read, edit, search, todo]
  ---
 
- **Input**: Screen name (e.g., `/sungen-review admin-users`).
+ **Input**: Screen or flow name (e.g., `/sungen-review admin-users`).
 
  ## Role
 
@@ -14,17 +14,19 @@ You are a **Senior QA Reviewer**. You evaluate Gherkin test cases using the `sun
 
  ## Parameters
 
- - **screen** — ${input:screen:screen name (e.g., login, dashboard)}
+ - **name** — ${input:name:screen or flow name (e.g., login, award-submission)}
+
+ **Auto-detect context**: check if `qa/flows/<name>/` exists → flow mode (base path: `qa/flows/<name>/`). Else check `qa/screens/<name>/` → screen mode (base path: `qa/screens/<name>/`).
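The auto-detect rule added above can be sketched in shell — an illustrative sketch only, not the package's actual implementation; `award-submission` is a hypothetical target name:

```shell
# Illustrative sketch of the auto-detect rule (not sungen's real code).
# Flow directory wins; otherwise fall back to the screen directory.
name="award-submission"            # hypothetical target name
mkdir -p "qa/flows/$name"          # simulate an existing flow for the demo
if [ -d "qa/flows/$name" ]; then
  base="qa/flows/$name"            # flow mode
elif [ -d "qa/screens/$name" ]; then
  base="qa/screens/$name"          # screen mode
else
  base=""                          # neither exists → ask the user
fi
echo "$base"
```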
 
  ## Steps
 
- 1. Read `qa/screens/${input:screen}/features/${input:screen}.feature` and `qa/screens/${input:screen}/test-data/${input:screen}.yaml`. If missing → `/sungen-create-test` first.
- 2. Follow the `sungen-tc-review` skill — score 3 dimensions: Syntax (30pts), Coverage (40pts), Viewpoint (30pts). Use `sungen-viewpoint` for pattern checklists.
- 3. **Unverified Selectors check** — if `qa/screens/${input:screen}/selectors/${input:screen}.yaml` exists, count lines matching `@needs-live-verify`. Include in the review report as a non-scoring metric (see `sungen-tc-review` skill for report format). Does NOT affect the 60% threshold.
+ 1. Read `<base>/<name>/features/<name>.feature` and `<base>/<name>/test-data/<name>.yaml`. If missing → `/sungen-create-test` first.
+ 2. Follow the `sungen-tc-review` skill — score 3 dimensions: Syntax (30pts), Coverage (40pts), Viewpoint (30pts). **For flows**, also apply the "Flow Review Additions" section. Use `sungen-viewpoint` for pattern checklists.
+ 3. **Unverified Selectors check** — if `<base>/<name>/selectors/<name>.yaml` exists, count lines matching `@needs-live-verify`. Include in the review report as a non-scoring metric. Does NOT affect the 60% threshold.
  4. Output review report per `sungen-tc-review` format. **>= 60%**: PASS. **< 60%**: FAIL with recommendations.
  5. If FAIL and user confirms → update test cases following `sungen-gherkin-syntax` and `sungen-tc-generation` skills, then re-review.
  6. After PASS (or user decides to proceed), offer next steps:
 
- - **`/sungen-run-test ${input:screen}`** — Generate selectors, compile, and run tests (Recommended)
- - **`/sungen-create-test ${input:screen}`** — Add more test cases before running
+ - **`/sungen-run-test ${input:name}`** — Generate selectors, compile, and run tests (Recommended)
+ - **`/sungen-create-test ${input:name}`** — Add more test cases before running
  - **Done for now** — I'll come back later
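Step 3's unverified-selector count reduces to a plain `grep -c` over the selectors file. A minimal sketch — the file name and its contents below are made up for the demo:

```shell
# Count lines tagged for live verification (sample file is hypothetical)
f="selectors-sample.yaml"
printf '%s\n' \
  '# @needs-live-verify source=figma node_id=1:23' \
  'submit: { role: button }' \
  '# @needs-live-verify source=figma node_id=1:24' \
  'email: { placeholder: Email }' > "$f"
count=$(grep -c '@needs-live-verify' "$f")
echo "Unverified selectors: $count"
```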
@@ -1,7 +1,7 @@
  ---
  name: sungen-run-test
  description: 'Generate selectors + auth state via Playwright MCP, compile, and run Playwright tests — auto-fixes selectors on failure'
- argument-hint: '[screen-name]'
+ argument-hint: '[name]'
  tools: [read, execute, edit, vscode/askQuestions, playwright/*]
  ---
 
@@ -11,11 +11,13 @@ You are a **Senior Developer**. Use `sungen-selector-fix`, `sungen-selector-keys
 
  ## Parameters
 
- Parse **screen** from `$ARGUMENTS`. If missing, ask the user.
+ Parse **name** from `$ARGUMENTS`. If missing, ask the user.
+
+ **Auto-detect context**: check if `qa/flows/<name>/` exists → flow mode (base path: `qa/flows/<name>/`). Else check `qa/screens/<name>/` → screen mode (base path: `qa/screens/<name>/`).
 
  ## Pre-run (phased — per `sungen-selector-fix` skill)
 
- 1. Verify `qa/screens/<screen>/` has `.feature` + `test-data.yaml`.
+ 1. Verify `<base>/<name>/` has `.feature` + `test-data.yaml`.
  2. **Phase 0 — Selector Pre-gen**: if `selectors.yaml` is missing/empty or doesn't cover the feature file's `[Reference]`s, apply the following decision tree before running Phase 0 from `sungen-selector-fix`:
 
  ```
@@ -29,14 +31,14 @@ Parse **screen** from `$ARGUMENTS`. If missing, ask the user.
      2. Apply selector heuristics from sungen-figma-source skill (testid > role+name > placeholder > label > locator > text)
      3. Write selectors.yaml — every provisional entry gets this comment on the line above:
         # @needs-live-verify source=figma node_id=<id>
-     4. Compile: sungen generate --screen <screen> — must succeed
+     4. Compile: Screen: sungen generate --screen <name>. Flow: sungen generate --flow <name> — must succeed
      5. Phase 1 smoke check runs; tests using unverified selectors may fail
         → auto-fix triggers on next run-test invocation when a live page is available
    NO → hard stop: print the following message and stop:
      "Cannot generate selectors: no live page URL and no spec_figma.md found.
       Options:
       • Provide the live URL so Playwright MCP can snapshot the page, OR
-      • Run: sungen add --screen <screen> --figma <figma-url> to generate spec_figma.md first"
+      • Run: sungen add --screen <name> --figma <figma-url> to generate spec_figma.md first"
  ```
 
  **Auto-fix on subsequent runs**: when `run-test` is invoked again with a reachable live page, Phase 0 compares the DOM snapshot against existing `selectors.yaml` entries. Entries tagged `# @needs-live-verify` are treated as candidates — if the actual selector differs, the entry is replaced and the comment removed (entry becomes verified). Entries that already match are also promoted to verified (comment removed).
@@ -50,7 +52,7 @@ Parse **screen** from `$ARGUMENTS`. If missing, ask the user.
      name: "Submit"
  ```
  3. **Phase 0.5 — Auth Persistence**: if the feature has `@auth:<role>` tags and `specs/.auth/<role>.json` is missing/expired, run Phase 0.5 from `sungen-selector-fix` — user logs in manually in MCP browser → `browser_storage_state` → `specs/.auth/<role>.json`. Offer `sungen makeauth <role>` as CLI fallback only if `browser_storage_state` isn't available in this MCP version.
- 4. Compile: `sungen generate --screen <screen>` (default: runtime data loading from YAML). Use `--inline-data` only if user requests compile-time hardcoded values.
+ 4. Compile: **Screen**: `sungen generate --screen <name>`. **Flow**: `sungen generate --flow <name>`. Default: runtime data loading from YAML. Use `--inline-data` only if user requests compile-time hardcoded values.
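Phase 0.5's "missing/expired" condition can be sketched as a file-existence-plus-age test. This is an assumption for illustration only — the real expiry depends on the app's session lifetime, and the 8-hour threshold below is made up:

```shell
# Sketch: treat a stored auth state as expired after 8 hours (480 min).
# Threshold and path-handling are assumptions, not sungen's actual rule.
auth="specs/.auth/admin.json"
if [ ! -f "$auth" ] || [ -n "$(find "$auth" -mmin +480 2>/dev/null)" ]; then
  echo "auth state missing or expired -> run Phase 0.5"
else
  echo "auth state fresh -> reuse"
fi
```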
 
  ## Run & Fix (phased — per `sungen-selector-fix` skill)
 
@@ -61,25 +63,27 @@ Parse **screen** from `$ARGUMENTS`. If missing, ask the user.
 
  ## Playwright command guidelines
 
- **Per-screen JSON results** — each run must write its JSON report to a dedicated path co-located with the `.spec.ts`, so `sungen delivery` can read the correct results per screen:
+ **Per-screen/flow JSON results** — each run must write its JSON report to a dedicated path co-located with the `.spec.ts`, so `sungen delivery` can read the correct results:
 
  ```bash
- # ✅ Correct — per-screen output file via env var
- PLAYWRIGHT_JSON_OUTPUT_NAME=specs/generated/<screen>/<screen>-test-result.json \
- npx playwright test specs/generated/<screen>/<screen>.spec.ts
- ```
+ # ✅ Screen
+ PLAYWRIGHT_JSON_OUTPUT_NAME=specs/generated/<name>/<name>-test-result.json \
+ npx playwright test specs/generated/<name>/<name>.spec.ts
 
- Output: `specs/generated/<screen>/<screen>-test-result.json`
+ # ✅ Flow
+ PLAYWRIGHT_JSON_OUTPUT_NAME=specs/generated/flows/<name>/<name>-test-result.json \
+ npx playwright test specs/generated/flows/<name>/<name>.spec.ts
+ ```
 
  **DO NOT** pass `--reporter=...` flag — it overrides the reporters from `playwright.config.ts` and disables the JSON reporter that `sungen delivery` depends on.
 
  ```bash
  # ❌ Wrong — --reporter flag disables the config's JSON reporter
- npx playwright test specs/generated/<screen>/<screen>.spec.ts --reporter=list
+ npx playwright test specs/generated/<name>/<name>.spec.ts --reporter=list
 
  # ❌ Wrong — no env var → writes to default test-results/results.json
  # (overwritten on every screen run, loses per-screen tracking)
- npx playwright test specs/generated/<screen>/<screen>.spec.ts
+ npx playwright test specs/generated/<name>/<name>.spec.ts
  ```
 
  If you want to filter scenarios, use `-g "<pattern>"` instead of a reporter override.
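The `VAR=value command` prefix used in the examples above is a standard POSIX one-shot environment assignment: the variable is visible to that single command only. A self-contained illustration, with `sh -c` standing in for the real Playwright invocation:

```shell
# One-shot assignment: the variable reaches only the prefixed command
PLAYWRIGHT_JSON_OUTPUT_NAME=specs/generated/login/login-test-result.json \
  sh -c 'echo "report goes to: $PLAYWRIGHT_JSON_OUTPUT_NAME"'
# ...and is NOT set in the surrounding shell afterwards
echo "after: ${PLAYWRIGHT_JSON_OUTPUT_NAME:-unset}"
```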
@@ -91,10 +95,10 @@ If you want to filter scenarios, use `-g "<pattern>"` instead of a reporter over
  After showing results, use `AskUserQuestion` to offer next steps:
 
  If all tests **passed**:
- - **`/sungen:create-test <screen>`** — Add more test cases (Recommended)
+ - **`/sungen-create-test <name>`** — Add more test cases (Recommended)
  - **Done** — All tests passed, I'm finished
 
  If tests **failed** (after retries):
- - **`/sungen:run-test <screen>`** — Re-run after manual fixes
- - **`/sungen:create-test <screen>`** — Revise test cases
+ - **`/sungen-run-test <name>`** — Re-run after manual fixes
+ - **`/sungen-create-test <name>`** — Revise test cases
  - **Done for now** — I'll fix manually later
@@ -21,22 +21,28 @@ You generate 3 files for sungen — a Gherkin compiler that produces Playwright
  | `sungen-capture-live` | Capture a live running page via Playwright MCP (snapshot + screenshot) |
  | `sungen-figma-source` | Figma URL → spec_figma.md + ui/*.png + provisional selectors |
 
- ## Workflow (5 AI commands)
+ ## Workflow (6 AI commands)
 
  | Command | What it does |
  |---|---|
  | `/sungen-add-screen <name> <path>` | Scaffold `qa/screens/<name>/` directories |
- | `/sungen-create-test <name>` | Generate `.feature` + `test-data.yaml` (no selectors) |
- | `/sungen-review <name>` | Score syntax, coverage, viewpoint quality (60% threshold) |
- | `/sungen-run-test <name>` | Generate `selectors.yaml` from live page, compile, run, auto-fix |
+ | `/sungen-add-flow <name> [--path <url>]` | Scaffold `qa/flows/<name>/` directories for E2E cross-screen testing |
+ | `/sungen-create-test <name>` | Generate `.feature` + `test-data.yaml` (auto-detects screen or flow) |
+ | `/sungen-review <name>` | Score syntax, coverage, viewpoint quality (auto-detects screen or flow) |
+ | `/sungen-run-test <name>` | Generate `selectors.yaml`, compile, run, auto-fix (auto-detects screen or flow) |
  | `/sungen-delivery [name...]` | Export test cases → CSV for QA delivery (all screens if no arg) |
 
- **Order:** add-screen → create-test → review → run-test → delivery.
+ **Screen path:** add-screen → create-test → review → run-test → delivery.
+ **Flow path:** add-flow → create-test → review → run-test → delivery.
+
+ `create-test`, `review`, and `run-test` auto-detect context: if `qa/flows/<name>/` exists → flow mode, else `qa/screens/<name>/` → screen mode.
 
  After each command completes, present the next actions as selectable options. Never just print text — always give clickable choices so the user can continue the workflow seamlessly.
 
  ## File Structure
 
+ ### Screen (single-screen testing)
+
  ```
  qa/screens/<screen-name>/
  ├── features/<screen>.feature          # Gherkin scenarios
@@ -47,9 +53,27 @@ qa/screens/<screen-name>/
  └── requirements/
      ├── spec.md                        # Screen specification (primary source)
      └── ui/                            # Screenshots, mockups
+ ```
+
+ ### Flow (E2E cross-screen testing)
 
- qa/deliverables/<screen>-testcases.csv   # Exported test cases (from /sungen-delivery)
- qa/deliverables/<screen>-testcases.xlsx  # Styled workbook for client hand-off
+ ```
+ qa/flows/<flow-name>/
+ ├── features/<flow>.feature            # Gherkin with @flow tag, [Screen:Element] refs
+ ├── selectors/<flow>.yaml              # Namespaced keys: "login:submit", "awards:title"
+ ├── test-data/<flow>.yaml              # Namespaced data: login.email, submission.nominee
+ ├── test-data/<flow>.staging.yaml      # Environment override (optional)
+ ├── test-data/<flow>.production.yaml   # Environment override (optional)
+ └── requirements/
+     ├── spec.md                        # Flow specification (screens, steps, transitions)
+     └── ui/                            # Screenshots, mockups
+ ```
+
+ Flows are **independent** from screens — own selectors, own test-data. Selectors use colon-namespaced keys (`"login:submit":`) to avoid duplicate element names across screens.
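For illustration, a hypothetical `selectors/<flow>.yaml` fragment using the colon-namespaced keys described above — the element names and the exact entry schema are assumptions; mirror the shape of your existing screen selector files:

```yaml
# Keys are "<screen>:<element>" so two screens can each have e.g. a "submit"
# (entries below are invented for illustration)
"login:email":
  placeholder: "Email"
"login:submit":
  role: button
  name: "Log in"
"awards:title":
  role: heading
  name: "Awards"
```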
+
+ ```
+ qa/deliverables/<name>-testcases.csv   # Exported test cases (from /sungen-delivery)
+ qa/deliverables/<name>-testcases.xlsx  # Styled workbook for client hand-off
  ```
 
  ## Test Data
@@ -61,11 +85,18 @@ qa/deliverables/<screen>-testcases.xlsx # Styled workbook for client hand-off
  ## CLI Commands
 
  ```bash
+ # Screen
  sungen add --screen <name> --path <url-path>              # Scaffold screen directories
  sungen add --screen <name> --path <path> --feature <name> # Scaffold with sub-feature
  sungen generate --screen <name>                           # Compile .feature → .spec.ts (runtime data)
  sungen generate --screen <name> --inline-data             # Compile with hardcoded data (legacy)
- sungen generate --all                                     # Compile all screens
+
+ # Flow
+ sungen add-flow --flow <name> --path <start-url>          # Scaffold flow directories
+ sungen generate --flow <name>                             # Compile flow .feature → .spec.ts
+
+ # All
+ sungen generate --all                                     # Compile all screens and flows
  sungen delivery                                           # Export all screens → CSV + XLSX
- sungen delivery <screen>                                  # Export a single screen
+ sungen delivery <name>                                    # Export a single screen or flow
  ```
@@ -61,6 +61,18 @@ Where `<timestamp>` is `YYYYMMDD-HHMM` in local time (e.g. `live-20260421-1430.p
 
  This gives users a visual record they can reference later without re-scanning.
 
+ ### 6a. Verify unauthenticated redirect target (flow capture only)
+
+ When capturing for a **flow** that includes security scenarios (e.g., "unauthenticated user cannot access X"):
+
+ 1. Open a **fresh incognito/unauthenticated** browser context (no storage state).
+ 2. `browser_navigate` to the protected route (e.g., `/dashboard`).
+ 3. Record the **actual redirect URL** — do NOT assume it goes to `/login`. The app may redirect to `/register`, `/`, or any other route.
+ 4. Report the redirect target to the caller: *"Unauthenticated access to `/dashboard` redirects to `/register`"*.
+ 5. The caller must use the **actual redirect URL** in Gherkin assertions (e.g., `Then User is on [Register] page`), never an assumed one.
+
+ Skip this step if the flow has no security scenarios or the user explicitly says to skip.
+
  ### 6. Detect discrepancies vs spec
 
  If `spec.md` exists, briefly cross-check the snapshot against spec sections: