@sun-asterisk/sungen 2.4.5 → 2.4.6

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (129)
  1. package/dist/cli/commands/delivery.d.ts +7 -0
  2. package/dist/cli/commands/delivery.d.ts.map +1 -0
  3. package/dist/cli/commands/delivery.js +348 -0
  4. package/dist/cli/commands/delivery.js.map +1 -0
  5. package/dist/cli/commands/update.d.ts.map +1 -1
  6. package/dist/cli/commands/update.js +64 -1
  7. package/dist/cli/commands/update.js.map +1 -1
  8. package/dist/cli/index.js +4 -2
  9. package/dist/cli/index.js.map +1 -1
  10. package/dist/exporters/csv-exporter.d.ts +32 -0
  11. package/dist/exporters/csv-exporter.d.ts.map +1 -0
  12. package/dist/exporters/csv-exporter.js +311 -0
  13. package/dist/exporters/csv-exporter.js.map +1 -0
  14. package/dist/exporters/feature-parser.d.ts +48 -0
  15. package/dist/exporters/feature-parser.d.ts.map +1 -0
  16. package/dist/exporters/feature-parser.js +178 -0
  17. package/dist/exporters/feature-parser.js.map +1 -0
  18. package/dist/exporters/package-info.d.ts +9 -0
  19. package/dist/exporters/package-info.d.ts.map +1 -0
  20. package/dist/exporters/package-info.js +73 -0
  21. package/dist/exporters/package-info.js.map +1 -0
  22. package/dist/exporters/playwright-report-parser.d.ts +21 -0
  23. package/dist/exporters/playwright-report-parser.d.ts.map +1 -0
  24. package/dist/exporters/playwright-report-parser.js +184 -0
  25. package/dist/exporters/playwright-report-parser.js.map +1 -0
  26. package/dist/exporters/scenario-merger.d.ts +21 -0
  27. package/dist/exporters/scenario-merger.d.ts.map +1 -0
  28. package/dist/exporters/scenario-merger.js +51 -0
  29. package/dist/exporters/scenario-merger.js.map +1 -0
  30. package/dist/exporters/spec-parser.d.ts +20 -0
  31. package/dist/exporters/spec-parser.d.ts.map +1 -0
  32. package/dist/exporters/spec-parser.js +259 -0
  33. package/dist/exporters/spec-parser.js.map +1 -0
  34. package/dist/exporters/step-formatter.d.ts +32 -0
  35. package/dist/exporters/step-formatter.d.ts.map +1 -0
  36. package/dist/exporters/step-formatter.js +76 -0
  37. package/dist/exporters/step-formatter.js.map +1 -0
  38. package/dist/exporters/test-data-resolver.d.ts +20 -0
  39. package/dist/exporters/test-data-resolver.d.ts.map +1 -0
  40. package/dist/exporters/test-data-resolver.js +96 -0
  41. package/dist/exporters/test-data-resolver.js.map +1 -0
  42. package/dist/exporters/types.d.ts +104 -0
  43. package/dist/exporters/types.d.ts.map +1 -0
  44. package/dist/exporters/types.js +6 -0
  45. package/dist/exporters/types.js.map +1 -0
  46. package/dist/exporters/xlsx-exporter.d.ts +19 -0
  47. package/dist/exporters/xlsx-exporter.d.ts.map +1 -0
  48. package/dist/exporters/xlsx-exporter.js +309 -0
  49. package/dist/exporters/xlsx-exporter.js.map +1 -0
  50. package/dist/generators/test-generator/utils/selector-resolver.d.ts.map +1 -1
  51. package/dist/generators/test-generator/utils/selector-resolver.js +26 -0
  52. package/dist/generators/test-generator/utils/selector-resolver.js.map +1 -1
  53. package/dist/orchestrator/ai-rules-updater.d.ts.map +1 -1
  54. package/dist/orchestrator/ai-rules-updater.js +12 -0
  55. package/dist/orchestrator/ai-rules-updater.js.map +1 -1
  56. package/dist/orchestrator/project-initializer.d.ts +12 -1
  57. package/dist/orchestrator/project-initializer.d.ts.map +1 -1
  58. package/dist/orchestrator/project-initializer.js +84 -64
  59. package/dist/orchestrator/project-initializer.js.map +1 -1
  60. package/dist/orchestrator/screen-manager.d.ts.map +1 -1
  61. package/dist/orchestrator/screen-manager.js +2 -0
  62. package/dist/orchestrator/screen-manager.js.map +1 -1
  63. package/dist/orchestrator/templates/ai-instructions/claude-cmd-add-screen.md +15 -17
  64. package/dist/orchestrator/templates/ai-instructions/claude-cmd-create-test.md +7 -5
  65. package/dist/orchestrator/templates/ai-instructions/claude-cmd-delivery.md +71 -0
  66. package/dist/orchestrator/templates/ai-instructions/claude-cmd-run-test.md +27 -0
  67. package/dist/orchestrator/templates/ai-instructions/claude-config.md +12 -2
  68. package/dist/orchestrator/templates/ai-instructions/claude-skill-capture-figma.md +142 -0
  69. package/dist/orchestrator/templates/ai-instructions/claude-skill-capture-live.md +100 -0
  70. package/dist/orchestrator/templates/ai-instructions/claude-skill-capture-local.md +73 -0
  71. package/dist/orchestrator/templates/ai-instructions/claude-skill-delivery.md +103 -0
  72. package/dist/orchestrator/templates/ai-instructions/claude-skill-gherkin-syntax.md +2 -0
  73. package/dist/orchestrator/templates/ai-instructions/claude-skill-selector-keys.md +22 -0
  74. package/dist/orchestrator/templates/ai-instructions/copilot-cmd-add-screen.md +13 -15
  75. package/dist/orchestrator/templates/ai-instructions/copilot-cmd-create-test.md +6 -4
  76. package/dist/orchestrator/templates/ai-instructions/copilot-cmd-delivery.md +71 -0
  77. package/dist/orchestrator/templates/ai-instructions/copilot-cmd-run-test.md +38 -14
  78. package/dist/orchestrator/templates/ai-instructions/copilot-config.md +12 -2
  79. package/dist/orchestrator/templates/ai-instructions/github-skill-sungen-capture-figma.md +142 -0
  80. package/dist/orchestrator/templates/ai-instructions/github-skill-sungen-capture-live.md +100 -0
  81. package/dist/orchestrator/templates/ai-instructions/github-skill-sungen-capture-local.md +73 -0
  82. package/dist/orchestrator/templates/ai-instructions/github-skill-sungen-delivery.md +103 -0
  83. package/dist/orchestrator/templates/ai-instructions/github-skill-sungen-gherkin-syntax.md +2 -0
  84. package/dist/orchestrator/templates/ai-instructions/github-skill-sungen-selector-keys.md +22 -0
  85. package/dist/orchestrator/templates/playwright.config.d.ts.map +1 -1
  86. package/dist/orchestrator/templates/playwright.config.js +6 -1
  87. package/dist/orchestrator/templates/playwright.config.js.map +1 -1
  88. package/dist/orchestrator/templates/playwright.config.ts +6 -1
  89. package/package.json +2 -1
  90. package/src/cli/commands/delivery.ts +348 -0
  91. package/src/cli/commands/update.ts +84 -2
  92. package/src/cli/index.ts +4 -2
  93. package/src/exporters/csv-exporter.ts +304 -0
  94. package/src/exporters/feature-parser.ts +168 -0
  95. package/src/exporters/package-info.ts +35 -0
  96. package/src/exporters/playwright-report-parser.ts +168 -0
  97. package/src/exporters/scenario-merger.ts +63 -0
  98. package/src/exporters/spec-parser.ts +247 -0
  99. package/src/exporters/step-formatter.ts +80 -0
  100. package/src/exporters/test-data-resolver.ts +59 -0
  101. package/src/exporters/types.ts +112 -0
  102. package/src/exporters/xlsx-exporter.ts +301 -0
  103. package/src/generators/test-generator/utils/selector-resolver.ts +26 -0
  104. package/src/orchestrator/ai-rules-updater.ts +12 -0
  105. package/src/orchestrator/project-initializer.ts +103 -70
  106. package/src/orchestrator/screen-manager.ts +2 -0
  107. package/src/orchestrator/templates/ai-instructions/claude-cmd-add-screen.md +15 -17
  108. package/src/orchestrator/templates/ai-instructions/claude-cmd-create-test.md +7 -5
  109. package/src/orchestrator/templates/ai-instructions/claude-cmd-delivery.md +71 -0
  110. package/src/orchestrator/templates/ai-instructions/claude-cmd-run-test.md +27 -0
  111. package/src/orchestrator/templates/ai-instructions/claude-config.md +12 -2
  112. package/src/orchestrator/templates/ai-instructions/claude-skill-capture-figma.md +142 -0
  113. package/src/orchestrator/templates/ai-instructions/claude-skill-capture-live.md +100 -0
  114. package/src/orchestrator/templates/ai-instructions/claude-skill-capture-local.md +73 -0
  115. package/src/orchestrator/templates/ai-instructions/claude-skill-delivery.md +103 -0
  116. package/src/orchestrator/templates/ai-instructions/claude-skill-gherkin-syntax.md +2 -0
  117. package/src/orchestrator/templates/ai-instructions/claude-skill-selector-keys.md +22 -0
  118. package/src/orchestrator/templates/ai-instructions/copilot-cmd-add-screen.md +13 -15
  119. package/src/orchestrator/templates/ai-instructions/copilot-cmd-create-test.md +6 -4
  120. package/src/orchestrator/templates/ai-instructions/copilot-cmd-delivery.md +71 -0
  121. package/src/orchestrator/templates/ai-instructions/copilot-cmd-run-test.md +38 -14
  122. package/src/orchestrator/templates/ai-instructions/copilot-config.md +12 -2
  123. package/src/orchestrator/templates/ai-instructions/github-skill-sungen-capture-figma.md +142 -0
  124. package/src/orchestrator/templates/ai-instructions/github-skill-sungen-capture-live.md +100 -0
  125. package/src/orchestrator/templates/ai-instructions/github-skill-sungen-capture-local.md +73 -0
  126. package/src/orchestrator/templates/ai-instructions/github-skill-sungen-delivery.md +103 -0
  127. package/src/orchestrator/templates/ai-instructions/github-skill-sungen-gherkin-syntax.md +2 -0
  128. package/src/orchestrator/templates/ai-instructions/github-skill-sungen-selector-keys.md +22 -0
  129. package/src/orchestrator/templates/playwright.config.ts +6 -1
@@ -2,7 +2,7 @@
  name: create-test
  description: 'Create or update test cases for a Sungen screen — generates feature + test-data files (20+ scenarios per viewpoint)'
  argument-hint: [screen-name]
- allowed-tools: Read, Grep, Bash, Glob, AskUserQuestion
+ allowed-tools: Read, Grep, Bash, Glob, Write, AskUserQuestion, mcp__playwright__browser_navigate, mcp__playwright__browser_snapshot, mcp__playwright__browser_take_screenshot, mcp__figma__get_design_context, mcp__figma__get_variable_defs, mcp__figma__get_screenshot
  ---

  ## Role
@@ -25,10 +25,12 @@ Parse **screen** from `$ARGUMENTS`. If missing, ask the user.
  - **Continue without it** — generate tests from spec and other sources only
  - Summarize what you found in requirements and present to the user.
  4. **Screen input** (supplements requirements, or is primary source if no requirements):
- - Use `AskUserQuestion` to ask: **Figma design** (provide Figma URL recommended), **UI images** (screenshots/mockups in `requirements/ui/`), or **Live page scan** (optional, via Playwright MCP)?
- - Recommend Figma or UI images first. Live page scan is optional, useful to verify specs vs actual UI or capture real data.
- - If live page scan: `browser_navigate` → ONE `browser_snapshot`. If auth redirect → ask user to log in manually. Never use `browser_run_code` or `browser_evaluate` to inject cookies.
- - If exploring, verify and supplement requirements — flag any discrepancies found.
+ - Use `AskUserQuestion` to ask the user to pick a visual source — always offer all three options so pre-launch projects work:
+   - **Figma design** (Recommended for pre-launch) → invoke `sungen-capture-figma` skill
+   - **UI images** (existing screenshots/mockups in `requirements/ui/`) → invoke `sungen-capture-local` skill
+   - **Live page scan** (dev/staging is up) → invoke `sungen-capture-live` skill
+ - Each capture skill writes outputs into `qa/screens/<screen>/requirements/ui/` and reports back a summary. Do not inline capture logic here — always delegate to the skill so behavior stays consistent across commands.
+ - After the capture skill returns, cross-check its output against `spec.md` and flag any discrepancies before moving on.
  5. Follow the `sungen-tc-generation` skill for section identification, viewpoint generation, and output format. When requirements exist, use the "Requirements-Driven Generation" strategy.
  6. Generate or update `.feature` + `test-data.yaml` following `sungen-gherkin-syntax` and `sungen-tc-generation` skills.
  7. Show summary, then use `AskUserQuestion` to offer next steps:
@@ -0,0 +1,71 @@
+ ---
+ name: delivery
+ description: 'Export Gherkin scenarios + Playwright results to CSV test case file for QA delivery.'
+ argument-hint: "[screen-name...] (omit for all screens)"
+ allowed-tools: Bash, Read, AskUserQuestion
+ ---
+
+ ## Role
+
+ You are a **QA Test Delivery Engineer**. Your job is to invoke the deterministic `sungen delivery` CLI that performs all parsing and CSV export. Your role is minimal — just run the CLI and help the user if pre-flight checks fail.
+
+ ## Parameters
+
+ Parse **screens** from `$ARGUMENTS`:
+ - If empty → CLI will process **all** screens in `qa/screens/`
+ - If provided → pass them through to the CLI
+
+ ## Steps
+
+ ### 1. Invoke the CLI
+
+ Run via Bash (single command, no extra parsing):
+
+ ```bash
+ npx sungen delivery <screens>
+ ```
+
+ - If no screen args → just run `npx sungen delivery`
+ - If screen args → pass them as positional arguments
+
+ The CLI handles:
+ - Scope detection (all screens vs specific)
+ - Pre-flight source checks with colorful output
+ - Parsing `.feature`, `.spec.ts`, `test-data.yaml`, `test-results/results.json`
+ - Generating CSV at `qa/deliverables/<screen>-testcases.csv`
+ - Printing summary table
+
+ ### 2. Handle pre-flight failures (if CLI exits non-zero)
+
+ If the CLI exits with blocking issues, it will have already printed a clear table showing exactly what's missing per screen.
+
+ Use `AskUserQuestion` to offer next steps:
+
+ **Options:**
+ - **Fix missing sources** (Recommended) — Print the suggested commands from CLI output and stop. User will run those commands manually, then re-invoke `/sungen:delivery`.
+ - **Continue with available screens** — Re-run as `npx sungen delivery <screens> --continue-on-missing` to skip screens with blocking issues.
+ - **Cancel** — Exit.
+
+ ### 3. Show summary + offer next steps (on success)
+
+ Forward the CLI's summary table to the user verbatim. Then use `AskUserQuestion`:
+
+ - **Open a specific CSV** — Help user inspect one of the exported files with the Read tool.
+ - **Run tests to refresh results** — Suggest `/sungen:run-test <screen>` to update `test-results/results.json`, then re-run delivery.
+ - **Export another screen** — User can run `/sungen:delivery <other-screen>`.
+ - **Done** — Exit.
+
+ ## Important notes
+
+ - **Do NOT parse files yourself** — the CLI is the source of truth for parsing logic. Your job is orchestration + user interaction.
+ - **Do NOT modify feature/spec.ts/test-data files** — delivery is read-only.
+ - **The CLI already respects `@manual` tags, skips `@steps:` base scenarios, groups by Category 2, and generates UTF-8 BOM CSV for Excel compatibility with Vietnamese text.**
+ - **Pre-flight check is built into the CLI** — use `--skip-preflight` only in CI/automated pipelines where checks are done externally.
+
+ ## CLI Reference
+
+ ```
+ sungen delivery [screens...]
+ [--skip-preflight] Skip pre-flight checks (not recommended)
+ [--continue-on-missing] Skip screens with blocking issues
+ ```
@@ -27,6 +27,33 @@ Parse **screen** from `$ARGUMENTS`. If missing, ask the user.
  7. **Phase 3 — Full Run**: Run all tests. Fix only **new** failures (elements unique to `@normal`/`@low`). Max 1 attempt. Don't loop on low-priority failures.
  8. **Phase 4 — Regression**: One final full run. Report results. No more fix loops.

+ ## Playwright command guidelines
+
+ **Per-screen JSON results** — each run must write its JSON report to a dedicated path co-located with the `.spec.ts`, so `sungen delivery` can read the correct results per screen:
+
+ ```bash
+ # ✅ Correct — per-screen output file via env var
+ PLAYWRIGHT_JSON_OUTPUT_NAME=specs/generated/<screen>/<screen>-test-result.json \
+ npx playwright test specs/generated/<screen>/<screen>.spec.ts
+ ```
+
+ Output: `specs/generated/<screen>/<screen>-test-result.json`
+
+ **DO NOT** pass the `--reporter=...` flag — it overrides the reporters from `playwright.config.ts` and disables the JSON reporter that `sungen delivery` depends on.
+
+ ```bash
+ # ❌ Wrong — --reporter flag disables the config's JSON reporter
+ npx playwright test specs/generated/<screen>/<screen>.spec.ts --reporter=list
+
+ # ❌ Wrong — no env var → writes to default test-results/results.json
+ # (overwritten on every screen run, loses per-screen tracking)
+ npx playwright test specs/generated/<screen>/<screen>.spec.ts
+ ```
+
+ If you want to filter scenarios, use `-g "<pattern>"` instead of a reporter override.
+
+ `sungen delivery` reads the per-screen file first and falls back to the global `test-results/results.json` if it is missing.
+
  ## Next steps

  After showing results, use `AskUserQuestion` to offer next steps:
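The command guidelines above reduce to one command shape per screen. A small builder sketch (hypothetical helper, not part of sungen) makes the rule concrete: set the env var, never pass `--reporter`, and use `-g` for filtering.

```typescript
// Build the per-screen Playwright command described in the guidelines.
// The paths and env var name come from the guidelines; the helper itself
// is illustrative.
function playwrightCommand(screen: string, grep?: string): string {
  const spec = `specs/generated/${screen}/${screen}.spec.ts`;
  const json = `specs/generated/${screen}/${screen}-test-result.json`;
  const filter = grep ? ` -g "${grep}"` : "";
  return `PLAYWRIGHT_JSON_OUTPUT_NAME=${json} npx playwright test ${spec}${filter}`;
}
```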
@@ -15,8 +15,12 @@ You generate 3 files for sungen — a Gherkin compiler that produces Playwright
  | `sungen-viewpoint` | 10 UI patterns x 4 viewpoints — coverage checklists |
  | `sungen-selector-keys` | YAML key generation from `[Reference]` names, suffixes, lookup priority |
  | `sungen-selector-fix` | Selector generation from live page, auto-fix strategy |
+ | `sungen-delivery` | Export Gherkin + Playwright results → CSV test case deliverable |
+ | `sungen-capture-figma` | Fetch design context + PNG from a Figma frame URL via Figma Dev Mode MCP |
+ | `sungen-capture-local` | Load existing UI assets (screenshots, mockups, Figma exports) from `requirements/ui/` |
+ | `sungen-capture-live` | Capture a live running page via Playwright MCP (snapshot + screenshot) |

- ## Workflow (4 AI commands)
+ ## Workflow (5 AI commands)

  | Command | What it does |
  |---|---|
@@ -24,8 +28,9 @@ You generate 3 files for sungen — a Gherkin compiler that produces Playwright
  | `/sungen:create-test <name>` | Generate `.feature` + `test-data.yaml` (no selectors) |
  | `/sungen:review <name>` | Score syntax, coverage, viewpoint quality (60% threshold) |
  | `/sungen:run-test <name>` | Generate `selectors.yaml` from live page, compile, run, auto-fix |
+ | `/sungen:delivery [name...]` | Export test cases → CSV for QA delivery (all screens if no arg) |

- **Order:** add-screen → create-test → review → run-test.
+ **Order:** add-screen → create-test → review → run-test → delivery.

  After each command completes, use `AskUserQuestion` to present the next actions as selectable options. Never just print text — always give clickable choices so the user can continue the workflow seamlessly.

@@ -39,6 +44,9 @@ qa/screens/<screen-name>/
  └── requirements/
      ├── spec.md # Screen specification (primary source)
      └── ui/ # Screenshots, mockups
+
+ qa/deliverables/<screen>-testcases.csv # Exported test cases (from /sungen:delivery)
+ qa/deliverables/<screen>-testcases.xlsx # Styled workbook for client hand-off
  ```

  ## CLI Commands
@@ -48,4 +56,6 @@ sungen add --screen <name> --path <url-path> # Scaffold screen dir
  sungen add --screen <name> --path <path> --feature <name> # Scaffold with sub-feature
  sungen generate --screen <name> # Compile .feature → .spec.ts
  sungen generate --all # Compile all screens
+ sungen delivery # Export all screens → CSV + XLSX
+ sungen delivery <screen> # Export a single screen
  ```
@@ -0,0 +1,142 @@
+ ---
+ name: sungen-capture-figma
+ description: 'Fetch design context + PNG from a Figma frame URL via Figma Dev Mode MCP. Auto-loaded by create-test when user picks Figma as the visual source.'
+ user-invocable: false
+ ---
+
+ ## Purpose
+
+ Pull **structured design data** (layout, typography, colors, component tree, design tokens) and a **PNG screenshot** from a Figma frame URL, so `sungen-tc-generation` can author Gherkin + test-data before a live domain exists.
+
+ Use this when the project is pre-launch, or when Figma is the source of truth and the live build lags the design.
+
+ ---
+
+ ## Prerequisites
+
+ - **Figma MCP server** (`https://mcp.figma.com/mcp`, HTTP transport) connected in the user's `.mcp.json` — `sungen init` scaffolds this automatically. On first use, Claude Code opens a browser for Figma OAuth. Official tools: `get_design_context`, `get_variable_defs`, `get_screenshot`.
+ - Figma account signed in with access to the file. **Dev/Full seats** get per-minute rate limits; **Starter/View seats** get monthly tool-call limits.
+ - A Figma URL with both **fileKey** and **nodeId** in it.
+
+ If the MCP is not connected, **do not fail silently** — tell the user:
+ > "Figma MCP not detected. Run `sungen init` to scaffold the config, or manually add `figma` with `url: https://mcp.figma.com/mcp` to `.mcp.json`. Then sign in when Claude Code prompts."
+
+ Then stop.
+
+ ---
+
+ ## Steps
+
+ ### 1. Resolve Figma URL
+
+ Prefer in this order:
+
+ 1. `Figma URL` field in `qa/screens/<screen>/requirements/spec.md` (Overview section)
+ 2. If empty or missing → `AskUserQuestion`: *"Paste the Figma frame URL"* (free text)
+
+ Accept any of these URL shapes:
+
+ ```
+ https://www.figma.com/file/<fileKey>/<title>?node-id=<nodeId>
+ https://www.figma.com/design/<fileKey>/<title>?node-id=<nodeId>
+ https://www.figma.com/proto/<fileKey>/<title>?node-id=<nodeId>
+ ```
+
+ Parse:
+ - `fileKey` = the segment after `/file/`, `/design/`, or `/proto/`
+ - `nodeId` = the `node-id` query param (may use `-` or `:` — pass through as-is; MCP accepts both)
+
+ If `node-id` is missing, ask the user to select a frame in Figma and copy the **frame URL** specifically (not the file root URL).
+
+ ### 2. Fetch design context
+
+ Call **both** in parallel:
+
+ ```
+ get_design_context({ fileKey, nodeId })
+ get_variable_defs({ fileKey, nodeId })
+ ```
+
+ `get_design_context` returns layout, typography, color values, component structure, spacing.
+ `get_variable_defs` returns named design tokens (color/spacing/typography variables).
+
+ ### 3. Fetch screenshot
+
+ ```
+ get_screenshot({ fileKey, nodeId })
+ ```
+
+ Save the returned PNG to:
+
+ ```
+ qa/screens/<screen>/requirements/ui/figma-<sanitized-nodeId>.png
+ ```
+
+ Sanitize `nodeId` for the filesystem: replace `:` and `-` with `_`. Example: `42-15` → `figma-42_15.png`.
+
+ ### 4. Write metadata dump
+
+ Combine the design context + variables into a Markdown summary at:
+
+ ```
+ qa/screens/<screen>/requirements/ui/figma-meta.md
+ ```
+
+ Format:
+
+ ```markdown
+ # Figma Capture — <nodeId>
+
+ **Source:** <full Figma URL>
+ **Captured:** <ISO date>
+
+ ## Components
+ <hierarchical list of component names + variants from get_design_context>
+
+ ## Typography
+ <font families, sizes, weights, line heights>
+
+ ## Colors
+ <color tokens + raw hex values>
+
+ ## Spacing & Layout
+ <spacing tokens, auto-layout specs>
+
+ ## Text Content
+ <visible text strings from the frame — used by tc-generation to populate test-data>
+ ```
+
+ This file is consumed by `sungen-tc-generation` as a secondary source alongside `spec.md`.
+
+ ### 5. Report back
+
+ Output a short summary to the user:
+
+ > Captured Figma frame `<nodeId>`:
+ > - Components: N
+ > - Text strings: M
+ > - Design tokens: K
+ > - Screenshot: `qa/screens/<screen>/requirements/ui/figma-<nodeId>.png`
+ > - Metadata: `requirements/ui/figma-meta.md`
+
+ Then hand back to the calling command.
+
+ ---
+
+ ## Error handling
+
+ | Error | Action |
+ |---|---|
+ | MCP tool not available | Print setup instructions, stop, do not fall back silently |
+ | `fileKey` missing from URL | Ask user to paste a valid frame URL |
+ | `nodeId` missing from URL | Ask user to right-click a frame in Figma → *Copy link to selection* |
+ | `get_design_context` 403 | Ask user to check Dev Mode seat on that file |
+ | `get_screenshot` returns no image | Continue with metadata only; warn user no PNG was captured |
+
+ ---
+
+ ## What this skill does NOT do
+
+ - Does not generate Gherkin (that's `sungen-tc-generation`)
+ - Does not write `selectors.yaml` (that's `/sungen:run-test`)
+ - Does not validate the design against live UI (`sungen-capture-live` can be run afterwards as a cross-check)
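The URL rules in step 1 and the filename sanitization in step 3 can be sketched as two small helpers. These are illustrative, not the skill's actual implementation; the function names are hypothetical.

```typescript
// Extract fileKey (path segment after /file/, /design/, or /proto/) and
// nodeId (the node-id query param, passed through as-is).
function parseFigmaUrl(url: string): { fileKey: string; nodeId: string | null } {
  const u = new URL(url);
  const m = u.pathname.match(/^\/(?:file|design|proto)\/([^/]+)/);
  if (!m) throw new Error("fileKey missing: paste a frame URL, not an arbitrary link");
  return { fileKey: m[1], nodeId: u.searchParams.get("node-id") };
}

// Sanitize nodeId for the screenshot filename: `42-15` or `42:15` → `figma-42_15.png`
function screenshotName(nodeId: string): string {
  return `figma-${nodeId.replace(/[:-]/g, "_")}.png`;
}
```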
@@ -0,0 +1,100 @@
+ ---
+ name: sungen-capture-live
+ description: 'Capture a live running page via Playwright MCP — snapshot + screenshot for visual context. Auto-loaded by create-test when user picks Live page scan.'
+ user-invocable: false
+ ---
+
+ ## Purpose
+
+ Navigate a running application, take **one accessibility snapshot** and **one screenshot**, and save them as visual context for test generation. Use when the app is live (dev, staging, or production with read-only access) and you want the tests grounded in the actual rendered UI.
+
+ This skill handles auth gracefully: if the page redirects to login, it asks the user to sign in manually rather than injecting cookies.
+
+ ---
+
+ ## Prerequisites
+
+ - Playwright MCP connected.
+ - Dev/staging server reachable (or a public URL).
+ - `playwright.config.ts` exists at the project root (for `baseURL` fallback).
+
+ ---
+
+ ## Steps
+
+ ### 1. Resolve target URL
+
+ Resolve in this order:
+
+ 1. `Live URL` field in `qa/screens/<screen>/requirements/spec.md` (Overview section)
+ 2. `baseURL` from `playwright.config.ts` + `URL Path` from `spec.md`
+ 3. If neither works → `AskUserQuestion`: *"Paste the full URL for the page to scan"*
+
+ ### 2. Navigate
+
+ `browser_navigate` to the resolved URL.
+
+ ### 3. Handle auth redirect
+
+ If the page redirects to a login route (URL contains `/login`, `/signin`, `/auth`, or the page title/content indicates a login screen):
+
+ 1. Tell the user which login URL they landed on.
+ 2. `AskUserQuestion`:
+    - **I'll log in manually** — wait for user confirmation, then re-navigate to the target URL
+    - **Skip live scan** — tell caller to invoke `sungen-capture-local` instead
+    - **Cancel**
+ 3. **Never** inject cookies or localStorage via `browser_evaluate` or `browser_run_code`. Auth belongs to the user.
+
+ ### 4. Snapshot
+
+ Take **ONE** `browser_snapshot`. This accessibility tree is the primary AI context — it contains roles, names, text, and structure that the tc-generation skill uses to identify sections and fields.
+
+ ### 5. Screenshot (optional but recommended)
+
+ Take **ONE** `browser_take_screenshot` with `fullPage: true`. Save to:
+
+ ```
+ qa/screens/<screen>/requirements/ui/live-<timestamp>.png
+ ```
+
+ Where `<timestamp>` is `YYYYMMDD-HHMM` in local time (e.g. `live-20260421-1430.png`).
+
+ This gives users a visual record they can reference later without re-scanning.
+
+ ### 6. Detect discrepancies vs spec
+
+ If `spec.md` exists, briefly cross-check the snapshot against spec sections:
+
+ - Fields listed in spec but not in snapshot → flag as *missing in UI*
+ - Elements visible in snapshot but not in spec → flag as *missing in spec*
+
+ Report findings but **do not** auto-edit `spec.md` — let the user decide.
+
+ ### 7. Report back
+
+ > Captured live page `<URL>`:
+ > - Snapshot: <N> interactive elements detected
+ > - Screenshot: `requirements/ui/live-<timestamp>.png`
+ > - Discrepancies vs spec: <count, or "none">
+
+ Hand back to the calling command.
+
+ ---
+
+ ## What this skill does NOT do
+
+ - Does not run tests
+ - Does not generate `selectors.yaml` (that's `/sungen:run-test`)
+ - Does not inject auth state (user logs in manually)
+ - Does not crawl — scans **exactly one** page per invocation
+ - Does not generate Gherkin — that's `sungen-tc-generation`
+
+ ---
+
+ ## Relationship to other capture skills
+
+ - `sungen-capture-figma` — design source of truth (pre-launch)
+ - `sungen-capture-local` — any image the user dropped in `requirements/ui/`
+ - `sungen-capture-live` — this skill, verifies/supplements against the running app
+
+ All three write to `requirements/ui/` and report back to the caller. They are mutually exclusive per create-test run, but a user can run create-test multiple times with different sources to layer context.
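The timestamp filename rule in step 5 can be sketched as a small helper (hypothetical name, local time as the skill specifies):

```typescript
// Format a Date as live-YYYYMMDD-HHMM.png, e.g. live-20260421-1430.png
function liveScreenshotName(d: Date): string {
  const pad = (n: number) => String(n).padStart(2, "0");
  const stamp =
    `${d.getFullYear()}${pad(d.getMonth() + 1)}${pad(d.getDate())}` +
    `-${pad(d.getHours())}${pad(d.getMinutes())}`;
  return `live-${stamp}.png`;
}
```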
@@ -0,0 +1,73 @@
+ ---
+ name: sungen-capture-local
+ description: 'Load existing UI assets (screenshots, Figma exports, hand-drawn mockups) from requirements/ui/. Auto-loaded by create-test when user picks UI images as the visual source.'
+ user-invocable: false
+ ---
+
+ ## Purpose
+
+ Use **pre-existing images** in `qa/screens/<screen>/requirements/ui/` as visual context for test generation. No network, no MCP, no live site required — works for any design tool (Figma export, Sketch, Penpot, Zeplin, hand-drawn, screenshots of a staging env).
+
+ This is the **baseline fallback**: if the live domain is down and Figma MCP isn't configured, this always works as long as the user drops images in the folder.
+
+ ---
+
+ ## Steps
+
+ ### 1. List available images
+
+ Glob `qa/screens/<screen>/requirements/ui/*.{png,jpg,jpeg,webp,gif}` and report count + filenames.
+
+ Filter out metadata files (e.g. `figma-meta.md` written by `sungen-capture-figma`) — those are read by `tc-generation` separately, not treated as images here.
+
+ ### 2. Handle empty folder
+
+ If no images found:
+
+ 1. Tell the user the folder is empty, with the full path so they can navigate there in Finder.
+ 2. `AskUserQuestion`:
+    - **I'll drop images now** — wait for user to confirm, then re-glob
+    - **Switch to Figma URL** — tell caller to invoke `sungen-capture-figma` instead
+    - **Switch to live page scan** — tell caller to invoke `sungen-capture-live` instead
+    - **Cancel** — abort create-test
+ 3. If user picks "drop images now", wait for their confirmation message (e.g. "done") then re-run step 1.
+
+ ### 3. Read images for context
+
+ Use the `Read` tool on each image file — Claude Code can read PNG/JPG/WebP directly as visual context.
+
+ For large sets (>10 images), ask the user which are primary and which are states/variants, to avoid loading too much visual context at once.
+
+ ### 4. Summarize
+
+ Output a short summary:
+
+ > Loaded N image(s) from `qa/screens/<screen>/requirements/ui/`:
+ > - `<filename-1>` — <one-line description of what's visible>
+ > - `<filename-2>` — <one-line description>
+ > ...
+
+ Hand back to the calling command.
+
+ ---
+
+ ## File naming hints for users
+
+ When this skill reports back, nudge users toward consistent filenames so future runs are self-documenting:
+
+ - `<section>-default.png` — baseline state of a section
+ - `<section>-error.png` — error state
+ - `<section>-loading.png` — loading state
+ - `<section>-empty.png` — empty state
+ - `full-page.png` / `viewport.png` — whole screen (auto-generated by `sungen add --capture`)
+
+ Don't enforce — just suggest if filenames are ambiguous.
+
+ ---
+
+ ## What this skill does NOT do
+
+ - Does not download images from external URLs
+ - Does not generate images (no AI image generation)
+ - Does not modify existing images (no crop/resize)
+ - Does not generate Gherkin — that's `sungen-tc-generation`
@@ -0,0 +1,103 @@
+ ---
+ name: sungen-delivery
+ description: 'Export Gherkin scenarios + Playwright results → CSV test case deliverable. Auto-loaded by delivery command.'
+ user-invocable: false
+ ---
+
+ ## Purpose
+
+ Export test cases from Sungen screens to a standardized CSV file (format BM-2-901-13) for QA delivery.
+
+ **This skill delegates all heavy work to the `sungen delivery` CLI.** The CLI is the single source of truth for parsing logic — do NOT re-parse files in AI. Your role is only to:
+
+ 1. Invoke the CLI
+ 2. Show its output verbatim
+ 3. Help the user react to pre-flight failures
+
+ ---
+
+ ## Architecture
+
+ ```
+ User → /sungen:delivery [screen...]
+
+
+ sungen delivery CLI (deterministic — no AI tokens)
+ ├─ Scope detection (no-arg = all screens)
+ ├─ Pre-flight source checks per screen
+ ├─ Parse .feature (metadata)
+ ├─ Parse .spec.ts (resolved Playwright code — source of truth)
+ ├─ Parse test-data.yaml (resolve {{vars}})
+ ├─ Parse test-results/results.json (match test titles)
+ ├─ Merge sources + generate CSV rows
+ └─ Write qa/deliverables/<screen>-testcases.csv
+ ```
+
+ Source modules: `src/exporters/*.ts`
+
+ ---
+
+ ## Required sources (CLI pre-flight checks these)
+
+ | # | Source | Path | Created by |
+ |---|--------|------|------------|
+ | 1 | Feature file | `qa/screens/<screen>/features/<screen>.feature` | `/sungen:add-screen` + `/sungen:create-test` |
+ | 2 | Test data | `qa/screens/<screen>/test-data/<screen>.yaml` | `/sungen:create-test` |
+ | 3 | Selectors | `qa/screens/<screen>/selectors/<screen>.yaml` | `/sungen:run-test` |
+ | 4 | Compiled spec | `specs/generated/<screen>/<screen>.spec.ts` | `sungen generate` (during `/sungen:run-test`) |
+ | 5 | Test results | `specs/generated/<screen>/<screen>-test-result.json` (per-screen) or `test-results/results.json` (global fallback) | `/sungen:run-test` |
+
+ **Sources 1-4 are blocking** — the CLI aborts if any is missing.
+ **Source 5 is optional** — the CSV is still generated, but the Test Result/Date/Executor/Env columns are empty (all tests show as Pending).
+
+ The CLI reads the **per-screen result file first** (co-located with `.spec.ts`), then falls back to the global `test-results/results.json`. Per-screen is preferred because the global file is **overwritten** each time Playwright runs, losing results from earlier screens.
+
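The fallback order can be sketched as a small pure function (an illustrative sketch, not the CLI's actual code; the `exists` parameter stands in for a filesystem check so the logic stays testable):

```typescript
// Sketch of the result-file fallback described above. `exists` is an
// injected predicate standing in for fs.existsSync, keeping the logic pure.
function resolveResultsPath(
  screen: string,
  exists: (path: string) => boolean,
): string | null {
  const perScreen = `specs/generated/${screen}/${screen}-test-result.json`;
  if (exists(perScreen)) return perScreen; // preferred: survives later runs
  const globalResults = "test-results/results.json";
  if (exists(globalResults)) return globalResults; // overwritten on each run
  return null; // no results at all → CSV rows show Pending
}
```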
+ ---
+
+ ## Column mapping (handled by CLI)
+
+ | CSV Column | Source |
+ |------------|--------|
+ | TC ID | Generated: `<SCREEN_UPPER>-<VP>-<NNN>` |
+ | Category 1 | Scenario name with VP prefix stripped |
+ | Category 2 | VP group: `VP-SEC`→Accessing, `VP-UI`→GUI, `VP-VAL`/`VP-LOGIC`→Function |
+ | Category 3 | Feature name (first line of `.feature`) |
+ | Category 4 | Screen name |
+ | Pre-condition | Auth tag → "Logged in as X" / "Not authenticated" + Given steps (natural language) |
+ | Test Data | `{{vars}}` from scenario resolved via test-data.yaml → `key: value; key2: value2` |
+ | Steps | `.spec.ts` code comments for interactions (numbered) |
+ | Expected results | `.spec.ts` `expect(...)` comments (numbered) |
+ | Priority | Tag: `@critical`/`@high`/`@normal`/`@low` (default: Normal) |
+ | Testcase type | `@manual` → Manual, else Auto. Not compiled → "Not compiled" |
+ | Test Result | results.json status: passed→Passed, failed/timedOut→Failed, skipped→N/A, else Pending |
+ | Executed Date | results.json startTime formatted as `dd/mm/yyyy` |
+ | Test Executor | `git config user.name` |
+ | Test Environment | `playwright.config.ts` baseURL + project name |
+ | Note | Error message + trace path (for failed tests) |
+
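Two of these mappings are mechanical enough to sketch (hypothetical helper names; the real logic lives in the CLI's exporter modules, and the zero-padded 3-digit `NNN` is an assumption read off the format string above):

```typescript
// Hypothetical helpers mirroring two of the column mappings above.
// TC ID: <SCREEN_UPPER>-<VP>-<NNN>, assuming NNN is a zero-padded sequence.
function makeTcId(screen: string, vp: string, seq: number): string {
  return `${screen.toUpperCase()}-${vp}-${String(seq).padStart(3, "0")}`;
}

// Test Result: map a Playwright results.json status to the CSV value.
function toTestResult(status?: string): string {
  switch (status) {
    case "passed":
      return "Passed";
    case "failed":
    case "timedOut":
      return "Failed";
    case "skipped":
      return "N/A";
    default:
      return "Pending"; // no matching result entry → Pending
  }
}
```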
+ ---
+
+ ## Excluded from CSV
+
+ - `@steps:<name>` **base** scenarios — these are setup-only, inlined into `@extend:...` scenarios at compile time
+ - Default scaffold `Sample scenario for <screen>` — not a real test
+
+ ---
+
+ ## CLI command reference
+
+ ```bash
+ # Export all screens
+ sungen delivery
+
+ # Export specific screens
+ sungen delivery kudos awards
+
+ # Skip pre-flight (CI only)
+ sungen delivery --skip-preflight
+
+ # Skip screens with missing blocking sources instead of aborting
+ sungen delivery --continue-on-missing
+ ```
+
+ Output: `qa/deliverables/<screen>-testcases.csv` (UTF-8 with BOM)
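The "UTF-8 with BOM" detail matters for Excel, which otherwise guesses the encoding; a minimal sketch of how such a CSV string can be built (illustrative only, not the exporter's actual code):

```typescript
// Build CSV text with a leading U+FEFF BOM so Excel detects UTF-8.
// Every cell is quoted, with embedded quotes doubled per RFC 4180.
function toCsvWithBom(rows: string[][]): string {
  const body = rows
    .map((row) => row.map((cell) => `"${cell.replace(/"/g, '""')}"`).join(","))
    .join("\r\n");
  return "\uFEFF" + body;
}
```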
@@ -109,6 +109,8 @@ Row scope: `see [Ref] row in [Table] table with {{v}}` enters scope. Subsequent
 
 Most elements auto-infer from `[Label] type` → `getByRole(type, { name: 'Label' })`. Only add YAML when the accessible name differs, needs `nth`, or needs `testid`. Full auto-infer table → see `sungen-selector-keys` skill.
 
+ **Types requiring YAML entry:** `date-picker`, `uploader`, `overlay`, `frame`, `step` - these have no standard ARIA role and need explicit selectors.
+
 ## YAML Keys
 
 `[Reference]` → **lowercase, keep Unicode**: `[Search Content]` → `search content:`, `[Thời gian]` → `thời gian:`
@@ -102,5 +102,27 @@ If no YAML key exists, the resolver infers from the Gherkin element type:
 | `[X] list` | `getByRole('list', { name: 'X' })` |
 | `[X] column` | `getByRole('columnheader', { name: 'X' })` |
 | `[X] dialog` / `modal` / `drawer` | `getByRole('dialog', { name: 'X' })` |
+ | `[X] dropdown` / `select` | `getByRole('combobox', { name: 'X' })` |
+ | `[X] menuitem` | `getByRole('menuitem', { name: 'X' })` |
+ | `[X] progressbar` | `getByRole('progressbar', { name: 'X' })` |
+ | `[X] section` | `getByRole('region', { name: 'X' })` |
+ | `[X] card` | `getByRole('article', { name: 'X' })` |
+ | `[X] item` | `getByRole('listitem', { name: 'X' })` |
+ | `[X] cell` | `getByRole('cell', { name: 'X' })` |
+ | `[X] spinner` | `getByRole('status', { name: 'X' })` |
+ | `[X] breadcrumb` | `getByRole('navigation', { name: 'X' })` |
+ | `[X] badge` / `tooltip` / `tag` | `getByText('X')` |
 
 **Only add a YAML entry when** the auto-inferred locator won't work (wrong name, need testid, need nth, etc.).
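The inference rule behind the table can be sketched as a pure mapping (a simplified illustration; the name `inferLocator` is hypothetical, types absent from the map fall through as their own ARIA role, and only a few of the table's rows are shown):

```typescript
// Simplified sketch of the auto-infer rule: map a Gherkin element type to
// the Playwright locator expression described in the table above.
const ROLE_BY_TYPE: Record<string, string> = {
  dropdown: "combobox",
  select: "combobox",
  section: "region",
  card: "article",
  item: "listitem",
  spinner: "status",
  breadcrumb: "navigation",
};

function inferLocator(label: string, type: string): string {
  if (["badge", "tooltip", "tag"].includes(type)) {
    return `getByText('${label}')`; // text-only types have no useful role
  }
  const role = ROLE_BY_TYPE[type] ?? type; // most types map 1:1 to a role
  return `getByRole('${role}', { name: '${label}' })`;
}
```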
+
+ ### Types requiring YAML entry (no auto-infer)
+
+ These types need explicit `selectors.yaml` entries:
+
+ | Type | Reason |
+ |------|--------|
+ | `date-picker` | Custom component, needs testid or CSS |
+ | `uploader` | File input, needs upload type selector |
+ | `overlay` | No standard ARIA role, needs CSS/testid |
+ | `frame` | Needs iframe selector |
+ | `step` | Custom stepper component, needs testid |
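An entry for one of these types might look like the following (entirely hypothetical keys, references, and testids; adapt to the actual screen and whatever selector fields the project's `selectors.yaml` schema supports):

```yaml
# Hypothetical selectors.yaml entries for types with no ARIA role
start date:
  testid: start-date-picker        # date-picker: custom component
avatar uploader:
  css: "input[type='file']"        # uploader: target the file input
confirm step:
  testid: wizard-step-confirm      # step: custom stepper component
```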