@muggleai/works 4.2.2 → 4.4.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (33)
  1. package/README.md +45 -37
  2. package/dist/{chunk-BZJXQZ5Q.js → chunk-PMI2DI3V.js} +524 -173
  3. package/dist/cli.js +1 -1
  4. package/dist/index.js +1 -1
  5. package/dist/plugin/.claude-plugin/plugin.json +4 -4
  6. package/dist/plugin/.cursor-plugin/plugin.json +3 -3
  7. package/dist/plugin/README.md +7 -5
  8. package/dist/plugin/scripts/ensure-electron-app.sh +3 -3
  9. package/dist/plugin/skills/do/e2e-acceptance.md +161 -0
  10. package/dist/plugin/skills/do/open-prs.md +86 -16
  11. package/dist/plugin/skills/muggle/SKILL.md +15 -13
  12. package/dist/plugin/skills/muggle-do/SKILL.md +6 -6
  13. package/dist/plugin/skills/muggle-test/SKILL.md +380 -0
  14. package/dist/plugin/skills/muggle-test-feature-local/SKILL.md +44 -27
  15. package/dist/plugin/skills/muggle-test-import/SKILL.md +272 -0
  16. package/dist/plugin/skills/muggle-upgrade/SKILL.md +1 -1
  17. package/dist/plugin/skills/optimize-descriptions/SKILL.md +8 -8
  18. package/package.json +15 -12
  19. package/plugin/.claude-plugin/plugin.json +4 -4
  20. package/plugin/.cursor-plugin/plugin.json +3 -3
  21. package/plugin/README.md +7 -5
  22. package/plugin/scripts/ensure-electron-app.sh +3 -3
  23. package/plugin/skills/do/e2e-acceptance.md +161 -0
  24. package/plugin/skills/do/open-prs.md +86 -16
  25. package/plugin/skills/muggle/SKILL.md +15 -13
  26. package/plugin/skills/muggle-do/SKILL.md +6 -6
  27. package/plugin/skills/muggle-test/SKILL.md +380 -0
  28. package/plugin/skills/muggle-test-feature-local/SKILL.md +44 -27
  29. package/plugin/skills/muggle-test-import/SKILL.md +272 -0
  30. package/plugin/skills/muggle-upgrade/SKILL.md +1 -1
  31. package/plugin/skills/optimize-descriptions/SKILL.md +8 -8
  32. package/dist/plugin/skills/do/qa.md +0 -89
  33. package/plugin/skills/do/qa.md +0 -89
@@ -0,0 +1,161 @@
+ # E2E / acceptance agent
+
+ You are running **end-to-end (E2E) acceptance** test cases against code changes using Muggle AI's local testing infrastructure. These tests simulate real users in a browser — they are not unit tests.
+
+ ## Design
+
+ E2E acceptance testing runs **locally** using the `test-feature-local` approach:
+
+ | Scope | MCP tools |
+ | :---- | :-------- |
+ | Cloud (projects, cases, scripts, auth) | `muggle-remote-*` |
+ | Local (Electron run, publish, results) | `muggle-local-*` |
+
+ This guarantees E2E acceptance tests always run — no dependency on cloud replay service availability.
+
+ ## Input
+
+ You receive:
+ - The Muggle project ID
+ - The list of changed repos, files, and a summary of changes
+ - The requirements goal
+ - `localUrl` per repo (from `muggle-repos.json`) — the locally running dev server URL
+
+ ## Your Job
+
+ ### Step 0: Resolve Local URL
+
+ Read `localUrl` for each repo from the context. If it is not provided, ask the user:
+ > "E2E acceptance testing requires a running local server. What URL is the `<repo>` app running on? (e.g. `http://localhost:3000`)"
+
+ **Do not skip E2E acceptance tests.** Wait for the user to provide the URL before proceeding.
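For illustration, a minimal `muggle-repos.json` entry could look like the sketch below. The surrounding schema (`repos`, `name`) is an assumption for this example; only the `localUrl` field is relied on here:

```json
{
  "repos": [
    {
      "name": "web-app",
      "localUrl": "http://localhost:3000"
    }
  ]
}
```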
+
+ ### Step 1: Check Authentication
+
+ - `muggle-remote-auth-status`
+ - If not signed in: `muggle-remote-auth-login` then `muggle-remote-auth-poll`
+
+ Do not skip or assume auth.
+
+ ### Step 2: Get Test Cases
+
+ Use `muggle-remote-test-case-list` with the project ID to fetch all test cases.
+
+ ### Step 3: Filter Relevant Test Cases
+
+ Based on the changed files and the requirements goal, determine which test cases are relevant:
+ - Test cases whose use cases directly relate to the changed functionality
+ - Test cases that cover areas potentially affected by the changes
+ - When in doubt, include the test case (better to over-test than miss a regression)
+
+ ### Step 4: Execute Tests Locally
+
+ For each relevant test case:
+
+ 1. Call `muggle-remote-test-script-list` filtered by `testCaseId` to check for an existing script.
+
+ 2. **If a script exists** (replay path):
+ - `muggle-remote-test-script-get` with `testScriptId` → note `actionScriptId`
+ - `muggle-remote-action-script-get` with that id → full `actionScript`
+ - **Use the API response as-is.** Do not edit, shorten, or rebuild `actionScript`; replay needs full `label` paths for element lookup.
+ - `muggle-local-execute-replay` with:
+ - `testScript`: the full script object
+ - `actionScript`: the full action script object (from `muggle-remote-action-script-get`)
+ - `localUrl`: the resolved local URL
+ - `approveElectronAppLaunch`: `true` *(pipeline context — user starting `muggle-do` is implicit approval)*
+ - `timeoutMs`: `600000` (10 min) or `900000` (15 min) for complex flows
+
+ 3. **If no script exists** (generation path):
+ - `muggle-remote-test-case-get` with `testCaseId` to fetch the full test case object.
+ - `muggle-local-execute-test-generation` with:
+ - `testCase`: the full test case object
+ - `localUrl`: the resolved local URL
+ - `approveElectronAppLaunch`: `true`
+ - `timeoutMs`: `600000` (10 min) or `900000` (15 min) for complex flows
+
+ 4. When execution completes, call `muggle-local-run-result-get` with the `runId` returned by the execute call.
+
+ 5. **Retain per test case:** `testCaseId`, `testScriptId` (if present), `runId`, `status` (passed/failed), `artifactsDir`.
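Put together, a replay invocation sketch (placeholder values; the `testScript` and `actionScript` objects must be the unmodified tool responses, not hand-built):

```json
{
  "testScript": "<full object from muggle-remote-test-script-get>",
  "actionScript": "<full object from muggle-remote-action-script-get>",
  "localUrl": "http://localhost:3000",
  "approveElectronAppLaunch": true,
  "timeoutMs": 600000
}
```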
+
+ ### Local Execution Timeout (`timeoutMs`)
+
+ The MCP client often uses a **default wait of 300000 ms (5 minutes)**. **Exploratory script generation** (Auth0 login, dashboards, multi-step wizards, many LLM iterations) routinely **runs longer than 5 minutes** while Electron is still healthy.
+
+ - **Always pass `timeoutMs`** — `600000` (10 min) or `900000` (15 min) — unless the test case is known to be simple.
+ - If the tool reports **`Electron execution timed out after 300000ms`** but Electron logs show the run still progressing (steps, screenshots, LLM calls), treat it as an **orchestration timeout**, not an Electron app defect: **increase `timeoutMs` and retry**.
+
+ ### Interpreting Failures
+
+ - **`Electron execution timed out after 300000ms`:** Orchestration wait too short — see `timeoutMs` above.
+ - **Exit code 26** (and messages like **LLM failed to generate / replay action script**): often corresponds to a completed exploration whose outcome was **goal not achievable** (`goal_not_achievable`, summary with `halt`). Use `muggle-local-run-result-get` and read the **summary / structured summary**; do not assume an Electron crash.
+ - **Fix for precondition failures:** choose a project/account that already has the needed state, or narrow the test goal so generation does not try to create resources from scratch unless that is intentional.
+
+ ### Step 5: Publish Test Scripts
+
+ After each test execution completes (whether it passed or failed):
+
+ 1. Call `muggle-local-publish-test-script` with:
+ - `runId`: the run ID from execution
+ - `cloudTestCaseId`: the test case ID
+
+ 2. **Retain from the publish response:**
+ - `testScriptId`: the cloud test script ID
+ - `viewUrl`: the URL to view the run on muggle-ai.com
+
+ This ensures all screenshots are uploaded to the cloud and accessible via URLs for PR comments.
+
+ ### Step 6: Fetch Screenshot URLs
+
+ For each published test script:
+
+ 1. Call `muggle-remote-test-script-get` with the `testScriptId` from the publish response.
+
+ 2. Extract from the response:
+ - `steps[].operation.screenshotUrl`: cloud URL for each step's screenshot
+ - `steps[].operation.action`: the action description for each step
+
+ 3. **Retain per test case:** array of `{ stepIndex, action, screenshotUrl }`.
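As a sketch of that extraction, assuming the response was saved to a file and `jq` is available (the sample shape below is reduced to the two fields this step reads):

```bash
# Assumed minimal shape of a muggle-remote-test-script-get response.
cat > /tmp/test-script.json <<'EOF'
{
  "steps": [
    { "operation": { "action": "Open login page", "screenshotUrl": "https://example.com/step0.png" } },
    { "operation": { "action": "Submit credentials", "screenshotUrl": "https://example.com/step1.png" } }
  ]
}
EOF

# Build the { stepIndex, action, screenshotUrl } array retained per test case.
jq '[.steps | to_entries[] | {stepIndex: .key, action: .value.operation.action, screenshotUrl: .value.operation.screenshotUrl}]' /tmp/test-script.json
```

`to_entries` over an array yields `{key, value}` pairs keyed by index, which gives `stepIndex` for free.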
+
+ ### Step 7: Collect Results
+
+ For each test case:
+ - Record pass or fail from the run result
+ - If failed, capture the error message, failure step index, and `artifactsDir` for local debugging
+ - Every test case must be executed — generate a new script if none exists (no skips)
+
+ ## Output
+
+ **E2E acceptance report:**
+
+ **Passed:** (count)
+ - (test case name):
+ - testCaseId: `<id>`
+ - testScriptId: `<id>`
+ - runId: `<id>`
+ - viewUrl: `<url>`
+ - steps: `[{ stepIndex, action, screenshotUrl }, ...]`
+
+ **Failed:** (count)
+ - (test case name):
+ - testCaseId: `<id>`
+ - testScriptId: `<id>`
+ - runId: `<id>`
+ - viewUrl: `<url>`
+ - failureStepIndex: `<index>`
+ - error: `<message>`
+ - steps: `[{ stepIndex, action, screenshotUrl }, ...]`
+ - artifactsDir: `<path>` (for local debugging)
+
+ **Metadata:**
+ - projectId: `<projectId>`
+
+ **Overall:** ALL PASSED | FAILURES DETECTED
+
+ ## Non-negotiables
+
+ - No silent auth skip; always verify with `muggle-remote-auth-status` first.
+ - Replay: never hand-build or simplify `actionScript` — only use the full response from `muggle-remote-action-script-get`.
+ - Always pass `timeoutMs` for execution calls; do not rely on the default 5-minute timeout.
+ - No hiding failures: surface errors, exit codes, and artifact paths.
+ - Every test case must be executed — generate a new script if none exists (no skips).
+ - Always publish after execution to ensure screenshots are cloud-accessible for PR comments.
@@ -7,7 +7,12 @@ You are creating pull requests for each repository that has changes after a succ
  You receive:
  - Per-repo: repo name, path, branch name
  - Requirements: goal, acceptance criteria
- - QA report: passed/failed test cases, each with testCaseId, testScriptId, runId, artifactsDir, and projectId
+ - E2E acceptance report: passed/failed test cases, each with:
+ - `testCaseId`, `testScriptId`, `runId`, `projectId`
+ - `viewUrl`: link to view the run on muggle-ai.com
+ - `steps`: array of `{ stepIndex, action, screenshotUrl }`
+ - `failureStepIndex` and `error` (if failed)
+ - `artifactsDir` (for local debugging)

  ## Your Job

@@ -15,38 +20,103 @@ For each repo with changes:

  1. **Push the branch** to origin: `git push -u origin <branch-name>` in the repo directory.
  2. **Build the PR title:**
- - If QA has failures: `[QA FAILING] <goal>`
+ - If E2E acceptance tests have failures: `[E2E FAILING] <goal>`
  - Otherwise: `<goal>`
  - Keep under 70 characters
  3. **Build the PR body** with these sections:
  - `## Goal` — the requirements goal
  - `## Acceptance Criteria` — bulleted list (omit section if empty)
  - `## Changes` — summary of what changed in this repo
- - `## QA Results` — full test case breakdown (see format below)
+ - E2E acceptance evidence block from `muggle build-pr-section` (see "Rendering the E2E acceptance results block" below)
  4. **Create the PR** using `gh pr create --title "..." --body "..." --head <branch>` in the repo directory.
- 5. **Capture the PR URL** from the output.
+ 5. **Capture the PR URL** and extract the PR number.
+ 6. **Post the overflow comment only if `muggle build-pr-section` emitted one** (see "Rendering the E2E acceptance results block" below). In the common case, no comment is posted.

- ## QA Results Section Format
+ ## Rendering the E2E acceptance results block

+ Do **not** hand-write the `## E2E Acceptance Results` markdown. Use the `muggle build-pr-section` CLI, which renders a deterministic block and decides whether the evidence fits in the PR description or needs to spill into an overflow comment.
+
+ ### Step A: Build the report JSON
+
+ Assemble the e2e-acceptance report you collected in `e2e-acceptance.md` into a JSON object with this shape:
+
+ ```json
+ {
+   "projectId": "<project UUID>",
+   "tests": [
+     {
+       "name": "<test case name>",
+       "testCaseId": "<UUID>",
+       "testScriptId": "<UUID or omitted>",
+       "runId": "<UUID>",
+       "viewUrl": "<muggle-ai.com run URL>",
+       "status": "passed",
+       "steps": [
+         { "stepIndex": 0, "action": "<action>", "screenshotUrl": "<URL>" }
+       ]
+     },
+     {
+       "name": "<test case name>",
+       "testCaseId": "<UUID>",
+       "runId": "<UUID>",
+       "viewUrl": "<muggle-ai.com run URL>",
+       "status": "failed",
+       "failureStepIndex": 2,
+       "error": "<error message>",
+       "artifactsDir": "<path, optional>",
+       "steps": [
+         { "stepIndex": 0, "action": "<action>", "screenshotUrl": "<URL>" }
+       ]
+     }
+   ]
+ }
  ```
- ## QA Results

- **X passed / Y failed**
+ ### Step B: Render the evidence block
+
+ Pipe the JSON into `muggle build-pr-section`. It writes `{ "body": "...", "comment": "..." | null }` to stdout:

- | Test Case | Status | Details |
- |-----------|--------|---------|
- | [Name](https://www.muggle-ai.com/muggleTestV0/dashboard/projects/{projectId}/scripts?modal=details&testCaseId={testCaseId}) | ✅ PASSED | — |
- | [Name](https://www.muggle-ai.com/muggleTestV0/dashboard/projects/{projectId}/scripts?modal=details&testCaseId={testCaseId}) | ❌ FAILED | {error} — artifacts: `{artifactsDir}` |
+ ```bash
+ echo "$REPORT_JSON" | muggle build-pr-section > /tmp/muggle-pr-section.json
  ```

- Rules:
- - Link each test case name to its details page on www.muggle-ai.com using the URL pattern above (requires `testCaseId` and `projectId` from the QA report).
- - For failed tests, include the error message and the local `artifactsDir` path so the developer can inspect screenshots.
- - Screenshots are in `{artifactsDir}/screenshots/` and viewable locally.
+ The command exits nonzero on malformed input and writes a descriptive error to stderr — do not swallow that error; surface it to the user.
+
+ ### Step C: Build the PR body
+
+ Build the PR body by concatenating, in order:
+
+ - `## Goal` — the requirements goal
+ - `## Acceptance Criteria` — bulleted list (omit section if empty)
+ - `## Changes` — summary of what changed in this repo
+ - The `body` field from the CLI output (it already contains its own `## E2E Acceptance Results` header)
+
+ ### Step D: Create the PR, then post the overflow comment only if present
+
+ 1. Create the PR with `gh pr create --title "..." --body "..." --head <branch>`.
+ 2. Capture the PR URL and extract the PR number.
+ 3. If the CLI output's `comment` field is `null`, **do not post a comment** — everything is already in the PR description.
+ 4. If the CLI output's `comment` field is a non-null string, post it as a follow-up comment:
+
+ ```bash
+ gh pr comment <PR#> --body "$(cat <<'EOF'
+ <comment field contents>
+ EOF
+ )"
+ ```
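Steps 3–4 can be sketched as a shell check on the file written in Step B (the `jq` usage is illustrative; the null-vs-string test on `comment` is the important part):

```bash
# Example CLI output as written by Step B (here: the common fit case).
cat > /tmp/muggle-pr-section.json <<'EOF'
{ "body": "## E2E Acceptance Results\n…", "comment": null }
EOF

# `.comment // empty` yields an empty string when comment is null.
comment=$(jq -r '.comment // empty' /tmp/muggle-pr-section.json)
if [ -n "$comment" ]; then
  gh pr comment "$PR_NUMBER" --body "$comment"   # overflow case only
else
  echo "fit case: no overflow comment posted"
fi
```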
+
+ ### Notes on fit vs. overflow
+
+ - **The common case is fit**: the full evidence (summary, per-test rows, collapsible failure details) lives in the PR description, and no comment is posted.
+ - **The overflow case** is triggered automatically when the full inline body would exceed the CLI's budget. In that case the PR description contains the summary, the per-test rows, and a pointer line; the full step-by-step failure details live in the follow-up comment.
+ - You do not make the fit-vs-overflow decision — the CLI does. Never post the comment speculatively.

  ## Output

  **PRs Created:**
  - (repo name): (PR URL)

- **Errors:** (any repos where PR creation failed, with the error message)
+ **E2E acceptance overflow comments posted:** (only include repos where an overflow comment was actually posted)
+ - (repo name): comment posted to PR #(number)
+
+ **Errors:** (any repos where PR creation or comment posting failed, with the error message)
@@ -9,22 +9,24 @@ Use this as the top-level Muggle command router.

  ## Menu

- When user asks for "muggle" with no specific subcommand, show this command set:
+ When the user asks for "muggle" with no specific subcommand, use `AskQuestion` to present the command set as clickable options:

- - `/muggle:muggle-do` — autonomous dev pipeline
- - `/muggle:muggle-test-feature-local` — local feature QA
- - `/muggle:muggle-status` — health check
- - `/muggle:muggle-repair` — repair broken installation
- - `/muggle:muggle-upgrade` — upgrade local installation
+ - "Test my changes — change-driven E2E acceptance testing (local or remote)" → `muggle-test`
+ - "Test a feature on localhost — run a single E2E test locally" → `muggle-test-feature-local`
+ - "Autonomous dev pipeline — requirements to PR" → `muggle-do`
+ - "Health check — verify installation status" → `muggle-status`
+ - "Repair — fix broken installation" → `muggle-repair`
+ - "Upgrade — update to latest version" → `muggle-upgrade`

  ## Routing

- If the user intent clearly matches one command, route to that command behavior:
+ If the user intent clearly matches one command, route directly — no menu needed:

- - status/health/check -> `muggle-status`
- - repair/fix/install broken -> `muggle-repair`
- - upgrade/update latest -> `muggle-upgrade`
- - test localhost/validate feature -> `muggle-test-feature-local`
- - build/implement from request -> `muggle-do`
+ - status/health/check → `muggle-status`
+ - repair/fix/install broken → `muggle-repair`
+ - upgrade/update latest → `muggle-upgrade`
+ - test my changes/acceptance test my work/test before push/post E2E acceptance results to PR/test on staging/test on preview → `muggle-test`
+ - test localhost/validate single feature → `muggle-test-feature-local`
+ - build/implement from request → `muggle-do`

- If intent is ambiguous, ask one concise clarification question.
+ If intent is ambiguous, use `AskQuestion` with the most likely options rather than asking the user to type a clarification.
@@ -6,9 +6,9 @@ disable-model-invocation: true

  # Muggle Do

- Muggle Do is the top-level command for the Muggle AI development workflow.
+ Muggle Do is the command for the Muggle AI development workflow.

- It runs the autonomous dev cycle: requirements -> impact analysis -> validate code -> coding -> unit tests -> QA -> open PRs.
+ It runs a battle-tested autonomous dev cycle: requirements -> impact analysis -> validate code -> coding -> unit tests -> E2E acceptance tests -> open PRs.

  For maintenance tasks, use the dedicated skills:

@@ -42,12 +42,12 @@ Use the supporting files in the `../do/` directory as stage-specific instruction
  - [impact-analysis.md](../do/impact-analysis.md)
  - [validate-code.md](../do/validate-code.md)
  - [unit-tests.md](../do/unit-tests.md)
- - [qa.md](../do/qa.md)
+ - [e2e-acceptance.md](../do/e2e-acceptance.md)
  - [open-prs.md](../do/open-prs.md)

  ## Guardrails

- - Do not skip unit tests before QA.
- - Do not skip QA due to missing scripts; generate when needed.
+ - Do not skip unit tests before E2E acceptance tests.
+ - Do not skip E2E acceptance tests due to missing scripts; generate when needed.
  - If the same stage fails 3 times in a row, escalate with details.
- - If total iterations reach 3 and QA still fails, continue to PR creation with `[QA FAILING]`.
+ - If total iterations reach 3 and E2E acceptance tests still fail, continue to PR creation with `[E2E FAILING]`.