@sun-asterisk/sungen 2.4.5 → 2.4.6
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/dist/cli/commands/delivery.d.ts +7 -0
- package/dist/cli/commands/delivery.d.ts.map +1 -0
- package/dist/cli/commands/delivery.js +348 -0
- package/dist/cli/commands/delivery.js.map +1 -0
- package/dist/cli/commands/update.d.ts.map +1 -1
- package/dist/cli/commands/update.js +64 -1
- package/dist/cli/commands/update.js.map +1 -1
- package/dist/cli/index.js +4 -2
- package/dist/cli/index.js.map +1 -1
- package/dist/exporters/csv-exporter.d.ts +32 -0
- package/dist/exporters/csv-exporter.d.ts.map +1 -0
- package/dist/exporters/csv-exporter.js +311 -0
- package/dist/exporters/csv-exporter.js.map +1 -0
- package/dist/exporters/feature-parser.d.ts +48 -0
- package/dist/exporters/feature-parser.d.ts.map +1 -0
- package/dist/exporters/feature-parser.js +178 -0
- package/dist/exporters/feature-parser.js.map +1 -0
- package/dist/exporters/package-info.d.ts +9 -0
- package/dist/exporters/package-info.d.ts.map +1 -0
- package/dist/exporters/package-info.js +73 -0
- package/dist/exporters/package-info.js.map +1 -0
- package/dist/exporters/playwright-report-parser.d.ts +21 -0
- package/dist/exporters/playwright-report-parser.d.ts.map +1 -0
- package/dist/exporters/playwright-report-parser.js +184 -0
- package/dist/exporters/playwright-report-parser.js.map +1 -0
- package/dist/exporters/scenario-merger.d.ts +21 -0
- package/dist/exporters/scenario-merger.d.ts.map +1 -0
- package/dist/exporters/scenario-merger.js +51 -0
- package/dist/exporters/scenario-merger.js.map +1 -0
- package/dist/exporters/spec-parser.d.ts +20 -0
- package/dist/exporters/spec-parser.d.ts.map +1 -0
- package/dist/exporters/spec-parser.js +259 -0
- package/dist/exporters/spec-parser.js.map +1 -0
- package/dist/exporters/step-formatter.d.ts +32 -0
- package/dist/exporters/step-formatter.d.ts.map +1 -0
- package/dist/exporters/step-formatter.js +76 -0
- package/dist/exporters/step-formatter.js.map +1 -0
- package/dist/exporters/test-data-resolver.d.ts +20 -0
- package/dist/exporters/test-data-resolver.d.ts.map +1 -0
- package/dist/exporters/test-data-resolver.js +96 -0
- package/dist/exporters/test-data-resolver.js.map +1 -0
- package/dist/exporters/types.d.ts +104 -0
- package/dist/exporters/types.d.ts.map +1 -0
- package/dist/exporters/types.js +6 -0
- package/dist/exporters/types.js.map +1 -0
- package/dist/exporters/xlsx-exporter.d.ts +19 -0
- package/dist/exporters/xlsx-exporter.d.ts.map +1 -0
- package/dist/exporters/xlsx-exporter.js +309 -0
- package/dist/exporters/xlsx-exporter.js.map +1 -0
- package/dist/generators/test-generator/utils/selector-resolver.d.ts.map +1 -1
- package/dist/generators/test-generator/utils/selector-resolver.js +26 -0
- package/dist/generators/test-generator/utils/selector-resolver.js.map +1 -1
- package/dist/orchestrator/ai-rules-updater.d.ts.map +1 -1
- package/dist/orchestrator/ai-rules-updater.js +12 -0
- package/dist/orchestrator/ai-rules-updater.js.map +1 -1
- package/dist/orchestrator/project-initializer.d.ts +12 -1
- package/dist/orchestrator/project-initializer.d.ts.map +1 -1
- package/dist/orchestrator/project-initializer.js +84 -64
- package/dist/orchestrator/project-initializer.js.map +1 -1
- package/dist/orchestrator/screen-manager.d.ts.map +1 -1
- package/dist/orchestrator/screen-manager.js +2 -0
- package/dist/orchestrator/screen-manager.js.map +1 -1
- package/dist/orchestrator/templates/ai-instructions/claude-cmd-add-screen.md +15 -17
- package/dist/orchestrator/templates/ai-instructions/claude-cmd-create-test.md +7 -5
- package/dist/orchestrator/templates/ai-instructions/claude-cmd-delivery.md +71 -0
- package/dist/orchestrator/templates/ai-instructions/claude-cmd-run-test.md +27 -0
- package/dist/orchestrator/templates/ai-instructions/claude-config.md +12 -2
- package/dist/orchestrator/templates/ai-instructions/claude-skill-capture-figma.md +142 -0
- package/dist/orchestrator/templates/ai-instructions/claude-skill-capture-live.md +100 -0
- package/dist/orchestrator/templates/ai-instructions/claude-skill-capture-local.md +73 -0
- package/dist/orchestrator/templates/ai-instructions/claude-skill-delivery.md +103 -0
- package/dist/orchestrator/templates/ai-instructions/claude-skill-gherkin-syntax.md +2 -0
- package/dist/orchestrator/templates/ai-instructions/claude-skill-selector-keys.md +22 -0
- package/dist/orchestrator/templates/ai-instructions/copilot-cmd-add-screen.md +13 -15
- package/dist/orchestrator/templates/ai-instructions/copilot-cmd-create-test.md +6 -4
- package/dist/orchestrator/templates/ai-instructions/copilot-cmd-delivery.md +71 -0
- package/dist/orchestrator/templates/ai-instructions/copilot-cmd-run-test.md +38 -14
- package/dist/orchestrator/templates/ai-instructions/copilot-config.md +12 -2
- package/dist/orchestrator/templates/ai-instructions/github-skill-sungen-capture-figma.md +142 -0
- package/dist/orchestrator/templates/ai-instructions/github-skill-sungen-capture-live.md +100 -0
- package/dist/orchestrator/templates/ai-instructions/github-skill-sungen-capture-local.md +73 -0
- package/dist/orchestrator/templates/ai-instructions/github-skill-sungen-delivery.md +103 -0
- package/dist/orchestrator/templates/ai-instructions/github-skill-sungen-gherkin-syntax.md +2 -0
- package/dist/orchestrator/templates/ai-instructions/github-skill-sungen-selector-keys.md +22 -0
- package/dist/orchestrator/templates/playwright.config.d.ts.map +1 -1
- package/dist/orchestrator/templates/playwright.config.js +6 -1
- package/dist/orchestrator/templates/playwright.config.js.map +1 -1
- package/dist/orchestrator/templates/playwright.config.ts +6 -1
- package/package.json +2 -1
- package/src/cli/commands/delivery.ts +348 -0
- package/src/cli/commands/update.ts +84 -2
- package/src/cli/index.ts +4 -2
- package/src/exporters/csv-exporter.ts +304 -0
- package/src/exporters/feature-parser.ts +168 -0
- package/src/exporters/package-info.ts +35 -0
- package/src/exporters/playwright-report-parser.ts +168 -0
- package/src/exporters/scenario-merger.ts +63 -0
- package/src/exporters/spec-parser.ts +247 -0
- package/src/exporters/step-formatter.ts +80 -0
- package/src/exporters/test-data-resolver.ts +59 -0
- package/src/exporters/types.ts +112 -0
- package/src/exporters/xlsx-exporter.ts +301 -0
- package/src/generators/test-generator/utils/selector-resolver.ts +26 -0
- package/src/orchestrator/ai-rules-updater.ts +12 -0
- package/src/orchestrator/project-initializer.ts +103 -70
- package/src/orchestrator/screen-manager.ts +2 -0
- package/src/orchestrator/templates/ai-instructions/claude-cmd-add-screen.md +15 -17
- package/src/orchestrator/templates/ai-instructions/claude-cmd-create-test.md +7 -5
- package/src/orchestrator/templates/ai-instructions/claude-cmd-delivery.md +71 -0
- package/src/orchestrator/templates/ai-instructions/claude-cmd-run-test.md +27 -0
- package/src/orchestrator/templates/ai-instructions/claude-config.md +12 -2
- package/src/orchestrator/templates/ai-instructions/claude-skill-capture-figma.md +142 -0
- package/src/orchestrator/templates/ai-instructions/claude-skill-capture-live.md +100 -0
- package/src/orchestrator/templates/ai-instructions/claude-skill-capture-local.md +73 -0
- package/src/orchestrator/templates/ai-instructions/claude-skill-delivery.md +103 -0
- package/src/orchestrator/templates/ai-instructions/claude-skill-gherkin-syntax.md +2 -0
- package/src/orchestrator/templates/ai-instructions/claude-skill-selector-keys.md +22 -0
- package/src/orchestrator/templates/ai-instructions/copilot-cmd-add-screen.md +13 -15
- package/src/orchestrator/templates/ai-instructions/copilot-cmd-create-test.md +6 -4
- package/src/orchestrator/templates/ai-instructions/copilot-cmd-delivery.md +71 -0
- package/src/orchestrator/templates/ai-instructions/copilot-cmd-run-test.md +38 -14
- package/src/orchestrator/templates/ai-instructions/copilot-config.md +12 -2
- package/src/orchestrator/templates/ai-instructions/github-skill-sungen-capture-figma.md +142 -0
- package/src/orchestrator/templates/ai-instructions/github-skill-sungen-capture-live.md +100 -0
- package/src/orchestrator/templates/ai-instructions/github-skill-sungen-capture-local.md +73 -0
- package/src/orchestrator/templates/ai-instructions/github-skill-sungen-delivery.md +103 -0
- package/src/orchestrator/templates/ai-instructions/github-skill-sungen-gherkin-syntax.md +2 -0
- package/src/orchestrator/templates/ai-instructions/github-skill-sungen-selector-keys.md +22 -0
- package/src/orchestrator/templates/playwright.config.ts +6 -1
@@ -24,27 +24,25 @@ Run with #tool:terminal:
 sungen add --screen ${input:screen} --path ${input:path}
 ```
 
-### 2.
+### 2. Fill spec.md
 
-Ask
+Ask: *"Fill `spec.md` now?"* — offer **1) Yes, fill now (Recommended)** / **2) Skip, fill later**.
 
-
-- **Fill `spec.md` only** — app not live yet, or no need for visuals
-- **Capture live-page screenshot only** — spec will come later
-- **Skip requirements prep** — proceed to `/sungen-create-test` immediately
+If yes → open `qa/screens/${input:screen}/requirements/spec.md` and help the user fill sections, fields, validation rules, business rules, and states. Especially prompt for the optional **Figma URL** and **Live URL** fields in Overview — those unlock auto-capture without re-asking next run.
 
-
+### 3. Capture visual source
 
-
-1. Read `baseURL` from `playwright.config.ts` (fall back to `APP_BASE_URL` env, then ask the user).
-2. `browser_navigate` to `<baseURL>${input:path}`.
-3. If redirected to login → ask the user to log in manually in the MCP browser, wait for confirmation, then re-navigate. (No auth persistence needed here — that's handled by Phase 0.5 in `sungen-selector-fix` when tests run.)
-4. `browser_take_screenshot` with `filename: "qa/screens/${input:screen}/requirements/ui/${input:screen}.png"`.
-5. If the screen has multiple important states (empty, loaded, error, modal open), offer additional captures named `${input:screen}-<state>.png`.
+Ask the user to pick a visual source. Always offer all three so pre-launch projects work:
 
-
+- **1) Figma design** (Recommended for pre-launch) — invoke `sungen-capture-figma` skill
+- **2) Live page scan** (dev/staging is up) — invoke `sungen-capture-live` skill
+- **3) Skip** — user will drop images manually into `requirements/ui/` later, or rely on `/sungen-create-test` to prompt again
 
-
+Each capture skill writes outputs into `qa/screens/${input:screen}/requirements/ui/` and reports back. Do not inline capture logic here — delegate to the skill so behavior stays consistent with `/sungen-create-test`.
+
+If the user has additional UI designs (mockups, hand-drawn sketches), suggest copying them to `requirements/ui/` — `sungen-capture-local` will pick them up during `/sungen-create-test`.
+
+### 4. Next steps
 
 Tell the user what was created and offer next steps:
 
@@ -28,10 +28,12 @@ You are a **Senior QA Engineer**. You structure test cases by viewpoint categori
 - **2) Continue without it** — generate tests from spec and other sources only
 - Summarize what you found in requirements and present to the user.
 4. **Screen input** (supplements requirements, or is primary source if no requirements):
-   - Ask
-
-
-
+   - Ask the user to pick a visual source. Always offer all three so pre-launch projects work:
+     - **1) Figma design** (Recommended for pre-launch) — invoke `sungen-capture-figma` skill
+     - **2) UI images** (existing screenshots/mockups in `requirements/ui/`) — invoke `sungen-capture-local` skill
+     - **3) Live page scan** (dev/staging is up) — invoke `sungen-capture-live` skill
+   - Each capture skill writes outputs into `qa/screens/${input:screen}/requirements/ui/` and reports back. Do not inline capture logic here — delegate to the skill so behavior stays consistent.
+   - After the capture skill returns, cross-check its output against `spec.md` and flag any discrepancies before moving on.
 5. Identify screen sections → ask user which to focus on (per `sungen-tc-generation` skill). When requirements exist, use the "Requirements-Driven Generation" strategy. Present sections as a numbered list and let user pick.
 6. Generate or update `.feature` + `test-data.yaml` following `sungen-gherkin-syntax` and `sungen-tc-generation` skills.
 7. Show summary and offer next steps:
@@ -0,0 +1,71 @@
+---
+name: delivery
+description: 'Export Gherkin scenarios + Playwright results to CSV test case file for QA delivery.'
+argument-hint: "[screen-name...] (omit for all screens)"
+allowed-tools: Bash, Read, AskUserQuestion
+---
+
+## Role
+
+You are a **QA Test Delivery Engineer**. Your job is to invoke the deterministic `sungen delivery` CLI that performs all parsing and CSV export. Your role is minimal — just run the CLI and help the user if pre-flight checks fail.
+
+## Parameters
+
+Parse **screens** from `$ARGUMENTS`:
+- If empty → CLI will process **all** screens in `qa/screens/`
+- If provided → pass them through to the CLI
+
+## Steps
+
+### 1. Invoke the CLI
+
+Run via Bash (single command, no extra parsing):
+
+```bash
+npx sungen delivery <screens>
+```
+
+- If no screen args → just run `npx sungen delivery`
+- If screen args → pass them as positional arguments
+
+The CLI handles:
+- Scope detection (all screens vs specific)
+- Pre-flight source checks with colorful output
+- Parsing `.feature`, `.spec.ts`, `test-data.yaml`, `test-results/results.json`
+- Generating CSV at `qa/deliverables/<screen>-testcases.csv`
+- Printing summary table
+
+### 2. Handle pre-flight failures (if CLI exits non-zero)
+
+If the CLI exits with blocking issues, it will have already printed a clear table showing exactly what's missing per screen.
+
+Use `AskUserQuestion` to offer next steps:
+
+**Options:**
+- **Fix missing sources** (Recommended) — Print the suggested commands from CLI output and stop. User will run those commands manually, then re-invoke `/sungen:delivery`.
+- **Continue with available screens** — Re-run as `npx sungen delivery <screens> --continue-on-missing` to skip screens with blocking issues.
+- **Cancel** — Exit.
+
+### 3. Show summary + offer next steps (on success)
+
+Forward the CLI's summary table to the user verbatim. Then use `AskUserQuestion`:
+
+- **Open a specific CSV** — Help user inspect one of the exported files with Read tool.
+- **Run tests to refresh results** — Suggest `/sungen:run-test <screen>` to update `test-results/results.json`, then re-run delivery.
+- **Export another screen** — User can run `/sungen:delivery <other-screen>`.
+- **Done** — Exit.
+
+## Important notes
+
+- **Do NOT parse files yourself** — the CLI is the source of truth for parsing logic. Your job is orchestration + user interaction.
+- **Do NOT modify feature/spec.ts/test-data files** — the delivery is read-only.
+- **The CLI already respects `@manual` tags, skips `@steps:` base scenarios, groups by Category 2, and generates UTF-8 BOM CSV for Excel compatibility with Vietnamese.**
+- **Pre-flight check is built into the CLI** — use `--skip-preflight` only in CI/automated pipelines where checks are done externally.
+
+## CLI Reference
+
+```
+sungen delivery [screens...]
+  [--skip-preflight]        Skip pre-flight checks (not recommended)
+  [--continue-on-missing]   Skip screens with blocking misses
+```
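The "UTF-8 BOM CSV" note in the delivery command above is an easy detail to get wrong: Excel only auto-detects UTF-8 (required for Vietnamese text) when the file begins with a byte-order mark. A minimal sketch of that detail — a hypothetical helper, not the package's actual `csv-exporter` code:

```typescript
import { writeFileSync, readFileSync } from "node:fs";

// Hypothetical helper, not the shipped csv-exporter. The leading U+FEFF
// byte-order mark makes Excel decode the file as UTF-8 instead of the
// system codepage, which would mangle Vietnamese characters.
function writeCsvWithBom(path: string, rows: string[][]): void {
  const escape = (cell: string) => `"${cell.replace(/"/g, '""')}"`;
  const body = rows.map((row) => row.map(escape).join(",")).join("\r\n");
  writeFileSync(path, "\uFEFF" + body, "utf8");
}

writeCsvWithBom("demo-testcases.csv", [
  ["ID", "Title", "Result"],
  ["TC-01", "Đăng nhập thành công", "PASS"],
]);
```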
@@ -1,27 +1,24 @@
 ---
-name:
-description: 'Generate selectors + auth state via Playwright MCP, compile, and run Playwright tests — auto-fixes selectors on failure
-argument-hint:
-
-tools: [vscode, execute, read, agent, edit, search, web, browser, todo, 'playwright/*']
+name: run-test
+description: 'Generate selectors + auth state via Playwright MCP, compile, and run Playwright tests — auto-fixes selectors on failure'
+argument-hint: [screen-name]
+allowed-tools: Read, Grep, Bash, Glob, Edit, Write, AskUserQuestion, mcp__playwright__browser_navigate, mcp__playwright__browser_snapshot, mcp__playwright__browser_take_screenshot, mcp__playwright__browser_wait_for, mcp__playwright__browser_evaluate, mcp__playwright__browser_run_code, mcp__playwright__browser_storage_state, mcp__playwright__browser_set_storage_state
 ---
 
-**Input**: Screen name (e.g., `/sungen-run-test admin-users`).
-
 ## Role
 
 You are a **Senior Developer**. Use `sungen-selector-fix`, `sungen-selector-keys`, and `sungen-error-mapping` skills.
 
 ## Parameters
 
-
+Parse **screen** from `$ARGUMENTS`. If missing, ask the user.
 
 ## Pre-run (phased — per `sungen-selector-fix` skill)
 
-1. Verify `qa/screens
+1. Verify `qa/screens/<screen>/` has `.feature` + `test-data.yaml`.
 2. **Phase 0 — Selector Pre-gen**: if `selectors.yaml` is missing/empty or doesn't cover the feature file's `[Reference]`s, run Phase 0 from `sungen-selector-fix` — confirm with user, `browser_navigate` → one `browser_snapshot` → merge YAML entries.
 3. **Phase 0.5 — Auth Persistence**: if the feature has `@auth:<role>` tags and `specs/.auth/<role>.json` is missing/expired, run Phase 0.5 from `sungen-selector-fix` — user logs in manually in MCP browser → `browser_storage_state` → `specs/.auth/<role>.json`. Offer `sungen makeauth <role>` as CLI fallback only if `browser_storage_state` isn't available in this MCP version.
-4. Compile: `sungen generate --screen
+4. Compile: `sungen generate --screen <screen>`.
 
 ## Run & Fix (phased — per `sungen-selector-fix` skill)
 
@@ -30,15 +27,42 @@ You are a **Senior Developer**. Use `sungen-selector-fix`, `sungen-selector-keys
 7. **Phase 3 — Full Run**: Run all tests. Fix only **new** failures (elements unique to `@normal`/`@low`). Max 1 attempt. Don't loop on low-priority failures.
 8. **Phase 4 — Regression**: One final full run. Report results. No more fix loops.
 
+## Playwright command guidelines
+
+**Per-screen JSON results** — each run must write its JSON report to a dedicated path co-located with the `.spec.ts`, so `sungen delivery` can read the correct results per screen:
+
+```bash
+# ✅ Correct — per-screen output file via env var
+PLAYWRIGHT_JSON_OUTPUT_NAME=specs/generated/<screen>/<screen>-test-result.json \
+  npx playwright test specs/generated/<screen>/<screen>.spec.ts
+```
+
+Output: `specs/generated/<screen>/<screen>-test-result.json`
+
+**DO NOT** pass `--reporter=...` flag — it overrides the reporters from `playwright.config.ts` and disables the JSON reporter that `sungen delivery` depends on.
+
+```bash
+# ❌ Wrong — --reporter flag disables the config's JSON reporter
+npx playwright test specs/generated/<screen>/<screen>.spec.ts --reporter=list
+
+# ❌ Wrong — no env var → writes to default test-results/results.json
+# (overwritten on every screen run, loses per-screen tracking)
+npx playwright test specs/generated/<screen>/<screen>.spec.ts
+```
+
+If you want to filter scenarios, use `-g "<pattern>"` instead of a reporter override.
+
+`sungen delivery` reads the per-screen file first, falls back to the global `test-results/results.json` if missing.
+
 ## Next steps
 
-After showing results, offer next steps:
+After showing results, use `AskUserQuestion` to offer next steps:
 
 If all tests **passed**:
-- **`/sungen
+- **`/sungen:create-test <screen>`** — Add more test cases (Recommended)
 - **Done** — All tests passed, I'm finished
 
 If tests **failed** (after retries):
-- **`/sungen
-- **`/sungen
+- **`/sungen:run-test <screen>`** — Re-run after manual fixes
+- **`/sungen:create-test <screen>`** — Revise test cases
 - **Done for now** — I'll fix manually later
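The per-screen-first, global-fallback lookup described in the run-test hunk above can be sketched as follows (the paths mirror the hunk; the function itself is an illustrative sketch, not the actual `sungen delivery` source):

```typescript
import { existsSync } from "node:fs";
import { join } from "node:path";

// Illustrative sketch of the lookup order described in the diff above.
// Prefer the per-screen JSON report written via PLAYWRIGHT_JSON_OUTPUT_NAME;
// fall back to Playwright's default global results file; null if neither exists.
function resolveResultsPath(root: string, screen: string): string | null {
  const perScreen = join(
    root, "specs", "generated", screen, `${screen}-test-result.json`
  );
  if (existsSync(perScreen)) return perScreen;
  const globalResults = join(root, "test-results", "results.json");
  return existsSync(globalResults) ? globalResults : null;
}
```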
@@ -15,8 +15,12 @@ You generate 3 files for sungen — a Gherkin compiler that produces Playwright
 | `sungen-viewpoint` | 10 UI patterns x 4 viewpoints — coverage checklists |
 | `sungen-selector-keys` | YAML key generation from `[Reference]` names, suffixes, lookup priority |
 | `sungen-selector-fix` | Selector generation from live page, auto-fix strategy |
+| `sungen-delivery` | Export Gherkin + Playwright results → CSV test case deliverable |
+| `sungen-capture-figma` | Fetch design context + PNG from a Figma frame URL via Figma Dev Mode MCP |
+| `sungen-capture-local` | Load existing UI assets (screenshots, mockups, Figma exports) from `requirements/ui/` |
+| `sungen-capture-live` | Capture a live running page via Playwright MCP (snapshot + screenshot) |
 
-## Workflow (
+## Workflow (5 AI commands)
 
 | Command | What it does |
 |---|---|
@@ -24,8 +28,9 @@ You generate 3 files for sungen — a Gherkin compiler that produces Playwright
 | `/sungen-create-test <name>` | Generate `.feature` + `test-data.yaml` (no selectors) |
 | `/sungen-review <name>` | Score syntax, coverage, viewpoint quality (60% threshold) |
 | `/sungen-run-test <name>` | Generate `selectors.yaml` from live page, compile, run, auto-fix |
+| `/sungen-delivery [name...]` | Export test cases → CSV for QA delivery (all screens if no arg) |
 
-**Order:** add-screen → create-test → review → run-test.
+**Order:** add-screen → create-test → review → run-test → delivery.
 
 After each command completes, present the next actions as selectable options. Never just print text — always give clickable choices so the user can continue the workflow seamlessly.
 
@@ -39,6 +44,9 @@ qa/screens/<screen-name>/
 └── requirements/
     ├── spec.md # Screen specification (primary source)
     └── ui/ # Screenshots, mockups
+
+qa/deliverables/<screen>-testcases.csv # Exported test cases (from /sungen-delivery)
+qa/deliverables/<screen>-testcases.xlsx # Styled workbook for client hand-off
 ```
 
 ## CLI Commands
@@ -48,4 +56,6 @@ sungen add --screen <name> --path <url-path> # Scaffold screen dir
 sungen add --screen <name> --path <path> --feature <name> # Scaffold with sub-feature
 sungen generate --screen <name> # Compile .feature → .spec.ts
 sungen generate --all # Compile all screens
+sungen delivery # Export all screens → CSV + XLSX
+sungen delivery <screen> # Export a single screen
 ```
@@ -0,0 +1,142 @@
+---
+name: sungen-capture-figma
+description: 'Fetch design context + PNG from a Figma frame URL via Figma Dev Mode MCP. Auto-loaded by create-test when user picks Figma as the visual source.'
+user-invocable: false
+---
+
+## Purpose
+
+Pull **structured design data** (layout, typography, colors, component tree, design tokens) and a **PNG screenshot** from a Figma frame URL, so `sungen-tc-generation` can author Gherkin + test-data before a live domain exists.
+
+Use this when the project is pre-launch, or when Figma is the source of truth and the live build lags the design.
+
+---
+
+## Prerequisites
+
+- **Figma MCP server** (`https://mcp.figma.com/mcp`, HTTP transport) connected in `.vscode/mcp.json` — `sungen init` scaffolds this automatically. On first use, VS Code / Copilot opens a browser for Figma OAuth. Official tools: `get_design_context`, `get_variable_defs`, `get_screenshot`.
+- Figma account signed in with access to the file. **Dev/Full seats** get per-minute rate limits; **Starter/View seats** get monthly tool-call limits.
+- A Figma URL with both **fileKey** and **nodeId** in it.
+
+If the MCP is not connected, **do not fail silently** — tell the user:
+> "Figma MCP not detected. Run `sungen init` to scaffold the config, or manually add `figma` with `url: https://mcp.figma.com/mcp` to `.vscode/mcp.json`. Then sign in when VS Code prompts."
+
+Then stop.
+
+---
+
+## Steps
+
+### 1. Resolve Figma URL
+
+Prefer in this order:
+
+1. `Figma URL` field in `qa/screens/<screen>/requirements/spec.md` (Overview section)
+2. If empty or missing → ask the user: *"Paste the Figma frame URL"*
+
+Accept any of these URL shapes:
+
+```
+https://www.figma.com/file/<fileKey>/<title>?node-id=<nodeId>
+https://www.figma.com/design/<fileKey>/<title>?node-id=<nodeId>
+https://www.figma.com/proto/<fileKey>/<title>?node-id=<nodeId>
+```
+
+Parse:
+- `fileKey` = the segment after `/file/`, `/design/`, or `/proto/`
+- `nodeId` = the `node-id` query param (may use `-` or `:` — pass through as-is; MCP accepts both)
+
+If `node-id` is missing, ask the user to select a frame in Figma and copy the **frame URL** specifically (not the file root URL).
+
+### 2. Fetch design context
+
+Call **both** in parallel:
+
+```
+get_design_context({ fileKey, nodeId })
+get_variable_defs({ fileKey, nodeId })
+```
+
+`get_design_context` returns layout, typography, color values, component structure, spacing.
+`get_variable_defs` returns named design tokens (color/spacing/typography variables).
+
+### 3. Fetch screenshot
+
+```
+get_screenshot({ fileKey, nodeId })
+```
+
+Save the returned PNG to:
+
+```
+qa/screens/<screen>/requirements/ui/figma-<sanitized-nodeId>.png
+```
+
+Sanitize `nodeId` for filesystem: replace `:` and `-` with `_`. Example: `42-15` → `figma-42_15.png`.
+
+### 4. Write metadata dump
+
+Combine the design context + variables into a Markdown summary at:
+
+```
+qa/screens/<screen>/requirements/ui/figma-meta.md
+```
+
+Format:
+
+```markdown
+# Figma Capture — <nodeId>
+
+**Source:** <full Figma URL>
+**Captured:** <ISO date>
+
+## Components
+<hierarchical list of component names + variants from get_design_context>
+
+## Typography
+<font families, sizes, weights, line heights>
+
+## Colors
+<color tokens + raw hex values>
+
+## Spacing & Layout
+<spacing tokens, auto-layout specs>
+
+## Text Content
+<visible text strings from the frame — used by tc-generation to populate test-data>
+```
+
+This file is consumed by `sungen-tc-generation` as a secondary source alongside `spec.md`.
+
+### 5. Report back
+
+Output a short summary to the user:
+
+> Captured Figma frame `<nodeId>`:
+> - Components: N
+> - Text strings: M
+> - Design tokens: K
+> - Screenshot: `qa/screens/<screen>/requirements/ui/figma-<nodeId>.png`
+> - Metadata: `requirements/ui/figma-meta.md`
+
+Then hand back to the calling command.
+
+---
+
+## Error handling
+
+| Error | Action |
+|---|---|
+| MCP tool not available | Print setup instructions, stop, do not fall back silently |
+| `fileKey` missing from URL | Ask user to paste a valid frame URL |
+| `nodeId` missing from URL | Ask user to right-click a frame in Figma → *Copy link to selection* |
+| `get_design_context` 403 | Ask user to check Dev Mode seat on that file |
+| `get_screenshot` returns no image | Continue with metadata only; warn user no PNG was captured |
+
+---
+
+## What this skill does NOT do
+
+- Does not generate Gherkin (that's `sungen-tc-generation`)
+- Does not write `selectors.yaml` (that's `/sungen-run-test`)
+- Does not validate the design against live UI (future skill: `sungen-capture-live` can be run afterwards for cross-check)
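The URL rules in the capture-figma steps above reduce to a small parser — a sketch under those stated rules (the `parseFigmaUrl`/`sanitizeNodeId` names are illustrative, not the skill's own code):

```typescript
// Illustrative sketch of the parsing rules above, not shipped code:
// fileKey is the segment after /file/, /design/, or /proto/; nodeId is
// the node-id query param, passed through as-is and only sanitized
// (`:`/`-` → `_`) when used in a filename.
function parseFigmaUrl(
  raw: string
): { fileKey: string; nodeId: string | null } | null {
  const url = new URL(raw);
  const match = url.pathname.match(/^\/(?:file|design|proto)\/([^/]+)/);
  if (!match) return null; // not a frame URL — ask the user for one
  return { fileKey: match[1], nodeId: url.searchParams.get("node-id") };
}

const sanitizeNodeId = (nodeId: string): string => nodeId.replace(/[:-]/g, "_");
```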
@@ -0,0 +1,100 @@
|
|
|
1
|
+
---
|
|
2
|
+
name: sungen-capture-live
|
|
3
|
+
description: 'Capture a live running page via Playwright MCP — snapshot + screenshot for visual context. Auto-loaded by create-test when user picks Live page scan.'
|
|
4
|
+
user-invocable: false
|
|
5
|
+
---
|
|
6
|
+
|
|
7
|
+
## Purpose
|
|
8
|
+
|
|
9
|
+
Navigate a running application, take **one accessibility snapshot** and **one screenshot**, and save them as visual context for test generation. Use when the app is live (dev, staging, or production with read-only access) and you want the tests grounded in the actual rendered UI.
|
|
10
|
+
|
|
11
|
+
This skill handles auth gracefully: if the page redirects to login, it asks the user to sign in manually rather than injecting cookies.
|
|
12
|
+
|
|
13
|
+
---
|
|
14
|
+
|
|
15
|
+
## Prerequisites
|
|
16
|
+
|
|
17
|
+
- Playwright MCP connected.
|
|
18
|
+
- Dev/staging server reachable (or a public URL).
|
|
19
|
+
- `playwright.config.ts` exists at the project root (for `baseURL` fallback).
|
|
20
|
+
|
|
21
|
+
---
|
|
22
|
+
|
|
23
|
+
## Steps
|
|
24
|
+
|
|
25
|
+
### 1. Resolve target URL
|
|
26
|
+
|
|
27
|
+
Resolve in this order:
|
|
28
|
+
|
|
29
|
+
1. `Live URL` field in `qa/screens/<screen>/requirements/spec.md` (Overview section)
|
|
30
|
+
2. `baseURL` from `playwright.config.ts` + `URL Path` from `spec.md`
|
|
31
|
+
3. If neither works → ask the user: *"Paste the full URL for the page to scan"*

### 2. Navigate

`browser_navigate` to the resolved URL.

### 3. Handle auth redirect

If the page redirects to a login route (the URL contains `/login`, `/signin`, or `/auth`, or the page title/content indicates a login screen):

1. Tell the user which login URL they landed on.
2. Ask the user:
   - **1) I'll log in manually** — wait for user confirmation, then re-navigate to the target URL
   - **2) Skip live scan** — tell the caller to invoke `sungen-capture-local` instead
   - **3) Cancel**
3. **Never** inject cookies or localStorage via `browser_evaluate` or `browser_run_code`. Auth belongs to the user.
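
The redirect check in this step can be sketched as a small heuristic. The route list (`/login`, `/signin`, `/auth`) comes from this skill; the title fallback and the function itself are illustrative assumptions.

```typescript
// Hypothetical sketch of the login-redirect heuristic in step 3.
function looksLikeLoginRedirect(finalUrl: string, pageTitle: string): boolean {
  const path = new URL(finalUrl).pathname.toLowerCase();
  const loginRoutes = ["/login", "/signin", "/auth"];
  // Route-based check first: did we land on a known auth path?
  if (loginRoutes.some((route) => path.includes(route))) return true;
  // Fall back to the page title when the route is unrecognizable.
  return /log ?in|sign ?in/i.test(pageTitle);
}
```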

### 4. Snapshot

Take **ONE** `browser_snapshot`. This accessibility tree is the primary AI context — it contains the roles, names, text, and structure that the tc-generation skill uses to identify sections and fields.

### 5. Screenshot (optional but recommended)

Take **ONE** `browser_take_screenshot` with `fullPage: true`. Save it to:

```
qa/screens/<screen>/requirements/ui/live-<timestamp>.png
```

Where `<timestamp>` is `YYYYMMDD-HHMM` in local time (e.g. `live-20260421-1430.png`).

This gives users a visual record they can reference later without re-scanning.
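
A minimal sketch of the `live-<timestamp>.png` naming rule (the helper name is hypothetical; only the `YYYYMMDD-HHMM` format is specified by this skill):

```typescript
// Build the screenshot filename from a local-time Date, per step 5.
function liveScreenshotName(now: Date): string {
  const pad = (n: number) => String(n).padStart(2, "0");
  const stamp =
    `${now.getFullYear()}${pad(now.getMonth() + 1)}${pad(now.getDate())}` +
    `-${pad(now.getHours())}${pad(now.getMinutes())}`; // YYYYMMDD-HHMM
  return `live-${stamp}.png`;
}

// liveScreenshotName(new Date(2026, 3, 21, 14, 30)) → "live-20260421-1430.png"
```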

### 6. Detect discrepancies vs spec

If `spec.md` exists, briefly cross-check the snapshot against the spec sections:

- Fields listed in the spec but absent from the snapshot → flag as *missing in UI*
- Elements visible in the snapshot but not in the spec → flag as *missing in spec*

Report the findings but **do not** auto-edit `spec.md` — let the user decide.
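
At its core, this cross-check is a two-way set difference between field names parsed from `spec.md` and accessible names from the snapshot. How those two name lists are extracted is up to the skill; the sketch below (hypothetical names, case-insensitive matching as a simplifying assumption) only shows the diff itself:

```typescript
// Two-way set difference for the spec-vs-snapshot cross-check in step 6.
function diffSpecVsSnapshot(specFields: string[], snapshotNames: string[]) {
  const spec = new Set(specFields.map((s) => s.toLowerCase()));
  const snap = new Set(snapshotNames.map((s) => s.toLowerCase()));
  return {
    // Listed in spec.md but not rendered → "missing in UI"
    missingInUi: Array.from(spec).filter((f) => !snap.has(f)),
    // Rendered but undocumented → "missing in spec"
    missingInSpec: Array.from(snap).filter((f) => !spec.has(f)),
  };
}
```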

### 7. Report back

> Captured live page `<URL>`:
> - Snapshot: <N> interactive elements detected
> - Screenshot: `requirements/ui/live-<timestamp>.png`
> - Discrepancies vs spec: <count, or "none">

Hand back to the calling command.

---

## What this skill does NOT do

- Does not run tests
- Does not generate `selectors.yaml` (that's `/sungen-run-test`)
- Does not inject auth state (the user logs in manually)
- Does not crawl — scans **exactly one** page per invocation
- Does not generate Gherkin — that's `sungen-tc-generation`

---

## Relationship to other capture skills

- `sungen-capture-figma` — design source of truth (pre-launch)
- `sungen-capture-local` — any image the user dropped in `requirements/ui/`
- `sungen-capture-live` — this skill; verifies/supplements against the running app

All three write to `requirements/ui/` and report back to the caller. They are mutually exclusive per create-test run, but a user can run create-test multiple times with different sources to layer context.
@@ -0,0 +1,73 @@

---
name: sungen-capture-local
description: 'Load existing UI assets (screenshots, Figma exports, hand-drawn mockups) from requirements/ui/. Auto-loaded by create-test when the user picks UI images as the visual source.'
user-invocable: false
---

## Purpose

Use **pre-existing images** in `qa/screens/<screen>/requirements/ui/` as visual context for test generation. No network, no MCP, no live site required — this works for any design tool (Figma export, Sketch, Penpot, Zeplin, hand-drawn mockups, screenshots of a staging env).

This is the **baseline fallback**: if the live domain is down and Figma MCP isn't configured, this always works as long as the user drops images in the folder.

---

## Steps

### 1. List available images

Glob `qa/screens/<screen>/requirements/ui/*.{png,jpg,jpeg,webp,gif}` and report the count and filenames.

Filter out metadata files (e.g. `figma-meta.md` written by `sungen-capture-figma`) — those are read by `tc-generation` separately, not treated as images here.
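
The extension filter above can be sketched over a list of filenames (the function name is hypothetical; the extension list is the one from the glob pattern):

```typescript
// Keep image files from a directory listing, skipping metadata like figma-meta.md.
// In the real skill this runs over the glob of requirements/ui/.
const IMAGE_EXTS = new Set(["png", "jpg", "jpeg", "webp", "gif"]);

function filterUiImages(filenames: string[]): string[] {
  return filenames
    .filter((f) => {
      const ext = f.split(".").pop()?.toLowerCase() ?? "";
      return IMAGE_EXTS.has(ext); // case-insensitive extension match
    })
    .sort(); // stable, self-documenting order for the report
}
```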

### 2. Handle empty folder

If no images are found:

1. Tell the user the folder is empty, with the full path so they can navigate there in their file manager.
2. Ask the user to pick:
   - **1) I'll drop images now** — wait for the user to confirm, then re-glob
   - **2) Switch to Figma URL** — tell the caller to invoke `sungen-capture-figma` instead
   - **3) Switch to live page scan** — tell the caller to invoke `sungen-capture-live` instead
   - **4) Cancel** — abort create-test
3. If the user picks "drop images now", wait for their confirmation message (e.g. "done"), then re-run step 1.

### 3. Read images for context

Use the `read` tool on each image file — the assistant can read PNG/JPG/WebP directly as visual context.

For large sets (more than 10 images), ask the user which are primary and which are states/variants, to avoid loading too much visual context at once.

### 4. Summarize

Output a short summary:

> Loaded N image(s) from `qa/screens/<screen>/requirements/ui/`:
> - `<filename-1>` — <one-line description of what's visible>
> - `<filename-2>` — <one-line description>
> ...

Hand back to the calling command.

---

## File naming hints for users

When this skill reports back, nudge users toward consistent filenames so future runs are self-documenting:

- `<section>-default.png` — baseline state of a section
- `<section>-error.png` — error state
- `<section>-loading.png` — loading state
- `<section>-empty.png` — empty state
- `full-page.png` / `viewport.png` — whole screen (auto-generated by `sungen add --capture`)

Don't enforce this — just suggest it when filenames are ambiguous.

---

## What this skill does NOT do

- Does not download images from external URLs
- Does not generate images (no AI image generation)
- Does not modify existing images (no crop/resize)
- Does not generate Gherkin — that's `sungen-tc-generation`