@muggleai/works 4.4.0 → 4.6.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +31 -13
- package/dist/{chunk-PMI2DI3V.js → chunk-TP4T4T2Z.js} +348 -105
- package/dist/cli.js +1 -1
- package/dist/index.js +1 -1
- package/dist/plugin/.claude-plugin/plugin.json +1 -1
- package/dist/plugin/.cursor-plugin/plugin.json +1 -1
- package/dist/plugin/README.md +1 -0
- package/dist/plugin/skills/do/e2e-acceptance.md +6 -3
- package/dist/plugin/skills/do/open-prs.md +35 -74
- package/dist/plugin/skills/muggle-pr-visual-walkthrough/SKILL.md +181 -0
- package/dist/plugin/skills/muggle-test/SKILL.md +146 -121
- package/dist/plugin/skills/muggle-test-feature-local/SKILL.md +66 -16
- package/dist/plugin/skills/muggle-test-import/SKILL.md +127 -25
- package/dist/plugin/skills/muggle-test-regenerate-missing/SKILL.md +201 -0
- package/dist/plugin/skills/muggle-test-regenerate-missing/evals/evals.json +58 -0
- package/dist/plugin/skills/muggle-test-regenerate-missing/evals/trigger-eval.json +22 -0
- package/dist/release-manifest.json +7 -0
- package/package.json +7 -7
- package/plugin/.claude-plugin/plugin.json +1 -1
- package/plugin/.cursor-plugin/plugin.json +1 -1
- package/plugin/README.md +1 -0
- package/plugin/skills/do/e2e-acceptance.md +6 -3
- package/plugin/skills/do/open-prs.md +35 -74
- package/plugin/skills/muggle-pr-visual-walkthrough/SKILL.md +181 -0
- package/plugin/skills/muggle-test/SKILL.md +146 -121
- package/plugin/skills/muggle-test-feature-local/SKILL.md +66 -16
- package/plugin/skills/muggle-test-import/SKILL.md +127 -25
- package/plugin/skills/muggle-test-regenerate-missing/SKILL.md +201 -0
- package/plugin/skills/muggle-test-regenerate-missing/evals/evals.json +58 -0
- package/plugin/skills/muggle-test-regenerate-missing/evals/trigger-eval.json +22 -0
@@ -15,6 +15,22 @@ A router skill that detects code changes, resolves impacted test cases, executes
 - **Multi-select** (use cases, test cases): Use `AskQuestion` with `allow_multiple: true`.
 - **Free-text inputs** (URLs, descriptions): Only use plain text prompts when there is no finite set of options. Even then, offer a detected/default value when possible.
 - **Batch related questions**: If two questions are independent, present them together in a single `AskQuestion` call rather than asking sequentially.
+- **Parallelize job-creation calls**: Whenever you're kicking off N independent cloud jobs — creating multiple use cases, generating/creating multiple test cases, fetching details for multiple test cases, starting multiple remote workflows, publishing multiple local runs, or fetching per-step screenshots for multiple runs — issue all N tool calls in a single message so they run in parallel. Never loop them sequentially unless there is a real ordering constraint (e.g. a single local Electron browser that can only run one test at a time).
+
+## Test Case Design: One Atomic Behavior Per Test Case
+
+Every test case verifies exactly **one** user-observable behavior. Never bundle multiple concerns, sequential flows, or bootstrap/setup into a single test case — even if you think it would be "cleaner" or "more efficient."
+
+**Ordering, dependencies, and bootstrap are Muggle's service responsibility, not yours.** Muggle's cloud handles test case dependencies, prerequisite state, and execution ordering. Your job is to describe the *atomic behavior to verify* — never the flow that gets there.
+
+- ❌ Wrong: one test case that "signs up, logs in, navigates to the detail modal, verifies icon stacking, verifies tab order, verifies history format, and verifies reference layout."
+- ✅ Right: four separate test cases — one per verifiable behavior — each with instruction text like "Verify the detail modal shows stacked pair of icons per card" with **no** signup / login / navigation / setup language.
+
+**Never bake bootstrap into a test case description.** Signup, login, seed data, prerequisite navigation, tear-down — none of these belong inside the test case body. Write only the verification itself. The service will prepend whatever setup is needed based on its own dependency graph.
+
+**Never consolidate the generator's output.** When `muggle-remote-test-case-generate-from-prompt` returns N micro-tests from a single prompt, that decomposition is the authoritative one. Do not "merge them into 1 for simplicity," do not "rewrite them to share bootstrap," do not "collapse them to match a 4 UC / 4 TC plan." Accept what the generator gave you.
+
+**Never skip the generate→review cycle.** Even when you are 100% confident about the right shape, always present the generated test cases to the user before calling `muggle-remote-test-case-create`. "I'll skip the generate→review cycle and create directly" is a sign you're about to get it wrong.
 
 ## Step 1: Confirm Scope of Work (Always First)
 
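The "parallelize job-creation calls" rule added above has the same shape as a standard async fan-out. A minimal sketch for contrast — the function names here are illustrative stand-ins, not real muggle-remote tools:

```typescript
// Illustrative stand-in for a remote job-creation call (NOT a real tool name);
// resolves with a fake id derived from the input.
async function createUseCase(description: string): Promise<string> {
  return `uc-${description.length}`;
}

// Parallel shape: all N calls are started before any of them is awaited.
async function fanOut(descriptions: string[]): Promise<string[]> {
  return Promise.all(descriptions.map((d) => createUseCase(d)));
}

// Sequential anti-pattern, shown only for contrast — each call waits for the last.
async function loopSequentially(descriptions: string[]): Promise<string[]> {
  const ids: string[] = [];
  for (const d of descriptions) ids.push(await createUseCase(d));
  return ids;
}
```

Both return the same ids; only the parallel form lets the N independent cloud jobs overlap in time.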
@@ -41,8 +57,8 @@ If the user's intent is clear, state back what you understood and use `AskQuesti
 - Option 2: "Switch to [the other mode]"
 
 If ambiguous, use `AskQuestion` to let the user choose:
-- Option 1: "
-- Option 2: "
+- Option 1: "On my computer — test your localhost dev server in a browser on your machine"
+- Option 2: "In the cloud — test remotely targeting your deployed preview/staging URL"
 
 Only proceed after the user selects an option.
 
@@ -66,8 +82,12 @@ If no changes detected (clean tree), tell the user and ask what they want to tes
 ## Step 3: Authenticate
 
 1. Call `muggle-remote-auth-status`
-2. If authenticated and not expired →
-
+2. If **authenticated and not expired** → print the logged-in email and ask via `AskQuestion`:
+> "You're logged in as **{email}**. Continue with this account?"
+- Option 1: "Yes, continue"
+- Option 2: "No, switch account"
+If the user picks "switch account", call `muggle-remote-auth-login` with `forceNewSession: true`, then `muggle-remote-auth-poll`.
+3. If **not authenticated or expired** → call `muggle-remote-auth-login`
 4. If login pending → call `muggle-remote-auth-poll`
 
 If auth fails repeatedly, suggest: `muggle logout && muggle login` from terminal.
@@ -93,56 +113,68 @@ A **project** is where all your test results, use cases, and test scripts are gr
 
 Store the `projectId` only after user confirms.
 
-## Step 5: Select Use Case (
+## Step 5: Select Use Case (Best-Effort Shortlist)
 
 ### 5a: List existing use cases
 Call `muggle-remote-use-case-list` with the project ID.
 
-### 5b:
+### 5b: Best-effort match against the change summary
 
-
+Using the change summary from Step 2, pick the use cases whose title/description most plausibly relate to the impacted areas. Produce a **short shortlist** (typically 1–5) — don't try to be exhaustive, and don't dump the full project list on the user. A confident best-effort match is the goal.
 
-
+If nothing looks like a confident match, fall back to asking the user which use case(s) they have in mind.
 
-### 5c:
+### 5c: Present the shortlist for confirmation
 
-
-- Git changes analysis
-- Use case title/description matching
-- Any heuristic or inference
+Use `AskQuestion` with `allow_multiple: true`:
 
-
+Prompt: "These use cases look most relevant to your changes — confirm which to test:"
 
-
-
-
+- Pre-check the shortlisted items so the user can accept with one click
+- Include "Pick a different use case" to reveal the full project list
+- Include "Create new use case" at the end
+
+### 5d: If user picks "Pick a different use case"
+Re-present the full list from 5a via `AskQuestion` with `allow_multiple: true`, then continue.
+
+### 5e: If user chooses "Create new use case"
+1. Ask the user to describe the use case(s) in plain English — they may want more than one
+2. Call `muggle-remote-use-case-create-from-prompts` **once** with **all** descriptions batched into the `instructions` array (this endpoint natively fans out the jobs server-side — do NOT make one call per use case):
 - `projectId`: The project ID
-- `
-3. Present the created use
+- `instructions`: A plain array of strings, one per use case — e.g. `["<description 1>", "<description 2>", ...]`
+3. Present the created use cases and confirm they're correct
 
-## Step 6: Select Test Case (
+## Step 6: Select Test Case (Best-Effort Shortlist)
 
 For the selected use case(s):
 
 ### 6a: List existing test cases
 Call `muggle-remote-test-case-list-by-use-case` with each use case ID.
 
-### 6b:
+### 6b: Best-effort match against the change summary
+
+Using the change summary from Step 2, pick the test cases that look most relevant to the impacted areas. Keep the shortlist small and confident — don't enumerate every test case attached to the use case(s).
 
-
+If nothing looks like a confident match, fall back to offering to run all test cases for the selected use case(s), or ask the user what they had in mind.
 
-
+### 6c: Present the shortlist for confirmation
 
-
+Use `AskQuestion` with `allow_multiple: true`:
 
-
+Prompt: "These test cases look most relevant — confirm which to run:"
+
+- Pre-check the shortlisted items so the user can accept with one click
+- Include "Show all test cases" to reveal the full list
+- Include "Generate new test case" at the end
 
 ### 6d: If user chooses "Generate new test case"
-1. Ask the user to describe what they want to test in plain English
-2.
-- `projectId`, `useCaseId`, `instruction` (
-
-
+1. Ask the user to describe what they want to test in plain English — they may want more than one test case
+2. For N descriptions, issue N `muggle-remote-test-case-generate-from-prompt` calls **in parallel** (single message, multiple tool calls — never loop sequentially):
+- `projectId`, `useCaseId`, `instruction` (one description per call)
+- Each `instruction` must describe **exactly one atomic behavior to verify**. No signup, no login, no "first navigate to X, then click Y, then verify Z" chains, no seed data, no cleanup. Just the verification. See **Test Case Design** above.
+3. **Accept the generator's decomposition as-is.** If the generator returns 4 micro-tests from a single prompt, that's 4 correct test cases — never merge, consolidate, or rewrite them to bundle bootstrap.
+4. Present the generated test case(s) for user review — **always do this review cycle**, even when you think you already know the right shape. Skipping straight to creation is the anti-pattern this skill most frequently gets wrong.
+5. For the ones the user approves, issue `muggle-remote-test-case-create` calls **in parallel**
 
 ### 6e: Confirm final selection
 
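The batched call in Step 5e can be pictured as a single payload. A sketch under the assumption that `projectId` and `instructions` are the only required fields (the two example descriptions are invented for illustration):

```json
{
  "projectId": "<projectId>",
  "instructions": [
    "User can filter search results by date range",
    "User can export a report as CSV"
  ]
}
```

One call, N server-side jobs — versus N parallel calls, which this endpoint makes unnecessary.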
@@ -154,9 +186,7 @@ Wait for user confirmation before moving to execution.
 
 ## Step 7A: Execute — Local Mode
 
-### Pre-flight
-
-**Question 1 — Local URL:**
+### Pre-flight question — Local URL
 
 Try to auto-detect the dev server URL by checking running terminals or common ports (e.g., `lsof -iTCP -sTCP:LISTEN -nP | grep -E ':(3000|3001|4200|5173|8080)'`). If a likely URL is found, present it as a clickable default via `AskQuestion`:
 - Option 1: "http://localhost:3000" (or whatever was detected)
@@ -164,38 +194,31 @@ Try to auto-detect the dev server URL by checking running terminals or common po
 
 If nothing detected, ask as free text: "Your local app should be running. What's the URL? (e.g., http://localhost:3000)"
 
-**
-
-After getting the URL, use a single `AskQuestion` call with two questions:
+**No separate approval or visibility question.** The user picking Local mode in Step 1 *is* the approval — do not ask "ready to launch Electron?" before every run. The Electron browser defaults to visible; if the user wants headless, they will say so, otherwise let it run visible.
 
-
-- "Yes, launch it (visible — I want to watch)"
-- "Yes, launch it (headless — run in background)"
-- "No, cancel"
+### Fetch test case details (in parallel)
 
-
+Before execution, fetch full test case details for all selected test cases by issuing **all** `muggle-remote-test-case-get` calls in parallel (single message, multiple tool calls).
 
-### Run sequentially
+### Run sequentially (Electron constraint)
 
-For each test case:
+Execution itself **must** be sequential because there is only one local Electron browser. For each test case, in order:
 
-1. Call `muggle-
-
-- `
-- `
-
-- `showUi`: `true` if user chose "visible", `false` if "headless" (from Question 2)
-3. Store the returned `runId`
+1. Call `muggle-local-execute-test-generation`:
+- `testCase`: Full test case object from the parallel fetch above
+- `localUrl`: User's local URL from the pre-flight question
+- `showUi`: omit (default visible) unless the user explicitly asked for headless, then pass `false`
+2. Store the returned `runId`
 
 If a generation fails, log it and continue to the next. Do not abort the batch.
 
-### Collect results
+### Collect results (in parallel)
 
-For
+For every `runId`, issue all `muggle-local-run-result-get` calls in parallel. Extract: status, duration, step count, `artifactsDir`.
 
-### Publish each run to cloud
+### Publish each run to cloud (in parallel)
 
-For
+For every completed run, issue all `muggle-local-publish-test-script` calls in parallel (single message, multiple tool calls):
 - `runId`: The local run ID
 - `cloudTestCaseId`: The cloud test case ID
 
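The local-mode shape above — sequential execution, then parallel collection — can be sketched as follows; the function names are illustrative placeholders, not the real tool calls:

```typescript
// Placeholder for the one-at-a-time local execution (single Electron browser).
async function executeOne(testCaseId: string): Promise<string> {
  return `run-${testCaseId}`;
}

// Placeholder for the independent per-run result fetch.
async function getResult(runId: string): Promise<string> {
  return `${runId}:done`;
}

async function runBatch(testCaseIds: string[]): Promise<string[]> {
  const runIds: string[] = [];
  for (const id of testCaseIds) {
    // Sequential on purpose: only one local browser can run a test at a time.
    runIds.push(await executeOne(id));
  }
  // Result collection has no ordering constraint, so it fans out in parallel.
  return Promise.all(runIds.map((r) => getResult(r)));
}
```

The real ordering constraint lives only in the execute loop; everything downstream of the `runId`s is independent.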
@@ -225,26 +248,29 @@ For failures: show which step failed, the local screenshot path, and a suggestio
 
 > "What's the preview/staging URL to test against?"
 
-###
+### Fetch test case details (in parallel)
 
-
+Issue all `muggle-remote-test-case-get` calls in parallel (single message, multiple tool calls) to hydrate the test case bodies.
 
-
-
-
-
-
-
-
-
-
-
-
-
+### Trigger remote workflows (in parallel)
+
+Once details are in hand, issue all `muggle-remote-workflow-start-test-script-generation` calls in parallel — never loop them sequentially. For each test case:
+
+- `projectId`: The project ID
+- `useCaseId`: The use case ID
+- `testCaseId`: The test case ID
+- `name`: `"muggle-test: {test case title}"`
+- `url`: The preview/staging URL
+- `goal`: From the test case
+- `precondition`: From the test case (use `"None"` if empty)
+- `instructions`: From the test case
+- `expectedResult`: From the test case
+
+Store each returned workflow runtime ID.
 
-### Monitor and report
+### Monitor and report (in parallel)
 
-
+Issue all `muggle-remote-wf-get-ts-gen-latest-run` calls in parallel, one per runtime ID.
 
 ```
 Test Case    Workflow Status    Runtime ID
@@ -279,66 +305,61 @@ open "https://www.muggle-ai.com/muggleTestV0/dashboard/projects/{projectId}/runs
 Tell the user:
 > "I've opened the Muggle AI dashboard in your browser — you can see the test results, step-by-step screenshots, and action scripts there."
 
-## Step 9: Post
+## Step 9: Offer to Post Visual Walkthrough to PR
 
-After reporting results,
+After reporting results, ask the user if they want to attach a **visual walkthrough** — a markdown block with per-test-case dashboard links and step-by-step screenshots — to the current branch's open PR. The rendering and posting workflow lives in the shared `muggle-pr-visual-walkthrough` skill; this step gathers the required input and hands off.
 
-### 9a:
-
-```bash
-gh pr view --json number,url,title 2>/dev/null
-```
+### 9a: Gather per-step screenshots (required input for the shared skill)
 
--
-- If no PR exists → use `AskQuestion`:
-- "Create PR with E2E acceptance results"
-- "Skip posting to PR"
+The shared skill takes an **`E2eReport` JSON** that includes per-step screenshot URLs. You already have `projectId`, `testCaseId`, `runId`, `viewUrl`, and `status` from earlier steps — you still need the step-level data.
 
-
+For the published runs from Step 7A, issue **all** `muggle-remote-test-script-get` calls in parallel (single message, multiple tool calls) — one per `testScriptId` returned by `muggle-local-publish-test-script`. Then, for each response:
 
-
+1. Extract `steps[].operation.action` (description) and `steps[].operation.screenshotUrl` (cloud URL).
+2. Build a `steps` array: `[{ stepIndex: 0, action: "...", screenshotUrl: "..." }, ...]`.
+3. If the run failed, also capture `failureStepIndex`, `error`, and the local `artifactsDir` from `muggle-local-run-result-get`.
 
-
-## 🧪 Muggle AI — E2E Acceptance Results
+Assemble the report:
 
-
-
-
-
-
-
-
+```json
+{
+  "projectId": "<projectId>",
+  "tests": [
+    {
+      "name": "<test case title>",
+      "testCaseId": "<id>",
+      "testScriptId": "<id>",
+      "runId": "<id>",
+      "viewUrl": "<publish response viewUrl>",
+      "status": "passed",
+      "steps": [{ "stepIndex": 0, "action": "...", "screenshotUrl": "..." }]
+    }
+  ]
+}
+```
 
-
-<summary>Failed test details</summary>
+See the shared skill for the full schema (including the failed-test shape with `failureStepIndex` and `error`).
 
-###
-- **Failed at**: Step 7 — "Click checkout button"
-- **Error**: Element not found
-- **Local artifacts**: `~/.muggle-ai/sessions/{runId}/`
-- **Screenshots**: `~/.muggle-ai/sessions/{runId}/screenshots/`
+### 9b: Ask the user
 
-
+Use `AskQuestion`:
 
-
-*Generated by [Muggle AI](https://www.muggle-ai.com) — change-driven E2E acceptance testing*
-```
+> "Post a visual walkthrough of these results to the PR? Reviewers can click each test case to see step-by-step screenshots on the Muggle AI dashboard."
 
-
+- Option 1: "Yes, post to PR"
+- Option 2: "Skip"
 
-
-```bash
-gh pr comment {pr-number} --body "$(cat <<'EOF'
-{the E2E acceptance comment body from 9b}
-EOF
-)"
-```
+### 9c: Invoke the shared skill in Mode A
 
-If
+If the user chooses "Yes, post to PR", invoke the `muggle-pr-visual-walkthrough` skill via the `Skill` tool. With the `E2eReport` already in context, the skill will:
 
-
+1. Call `muggle build-pr-section` to render the markdown block (fit-vs-overflow automatic)
+2. Find the PR via `gh pr view`
+3. Post `body` as a `gh pr comment`
+4. Post the overflow `comment` as a second comment (only if the CLI emitted one)
+5. Confirm the PR URL to the user
 
-
+This skill always uses **Mode A** (post to an existing PR); `muggle-do` is the only caller that uses Mode B. Do not attempt to render the walkthrough markdown yourself — delegate to the shared skill.
 
 ## Tool Reference
 
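For failed runs, the per-test entry presumably carries the failure fields named in 9a. An assumed sketch only — the shared skill's schema is authoritative, and `failureStepIndex: 3` is an invented example value:

```json
{
  "name": "<test case title>",
  "testCaseId": "<id>",
  "runId": "<id>",
  "status": "failed",
  "failureStepIndex": 3,
  "error": "<error message>",
  "steps": [{ "stepIndex": 0, "action": "...", "screenshotUrl": "..." }]
}
```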
@@ -360,21 +381,25 @@ If creating a new PR — include the E2E acceptance section in the PR body along
 | Results | `muggle-local-run-result-get` | Local |
 | Results | `muggle-remote-wf-get-ts-gen-latest-run` | Remote |
 | Publish | `muggle-local-publish-test-script` | Local |
+| Per-step screenshots (for walkthrough) | `muggle-remote-test-script-get` | Both |
 | Browser | `open` (shell command) | Both |
-| PR | `
+| PR walkthrough | `muggle-pr-visual-walkthrough` (shared skill) | Both |
 
 ## Guardrails
 
 - **Always confirm intent first** — never assume local vs remote without asking
 - **User MUST select project** — present clickable options via `AskQuestion`, wait for explicit choice, never auto-select
-- **
-- **
+- **Best-effort shortlist use cases** — use the change summary to narrow the list to the most relevant 1–5 use cases and pre-check them; never dump every use case in the project on the user. Always leave an escape hatch to reveal the full list.
+- **Best-effort shortlist test cases** — same idea: pre-check the test cases most relevant to the change summary; never enumerate every test case attached to a use case. Always leave an escape hatch to reveal the full list.
 - **Use `AskQuestion` for every selection** — never ask the user to type a number; always present clickable options
-- **
-- **
+- **Auto-detect localhost URL when possible**; only fall back to free-text when nothing is listening on a common port
+- **Parallelize independent cloud jobs** — when creating N use cases, generating/creating N test cases, fetching N test case details, starting N remote workflows, polling N workflow runtimes, publishing N local runs, or fetching N per-step test scripts, issue all N calls in a single message so they fan out in parallel. The only tolerated sequential loop is local Electron execution (one browser, one test at a time). For use case creation specifically, use the native batch form of `muggle-remote-use-case-create-from-prompts` (all descriptions in one `instructions` array) instead of parallel calls.
+- **One atomic behavior per test case** — every test case verifies exactly one user-observable behavior. Never bundle signup/login/navigation/bootstrap/teardown into a test case body. Ordering and dependencies are Muggle's service responsibility, not the skill's.
+- **Never consolidate the generator's output** — if `muggle-remote-test-case-generate-from-prompt` returns N micro-tests, accept all N; never merge them into fewer test cases, even if "the plan" says 4 UC / 4 TC.
+- **Never skip the generate→review cycle** — always present generated test cases to the user before calling `muggle-remote-test-case-create`, even when you're confident. "I'll skip the review and create directly" is always wrong.
+- **Never ask for Electron launch approval before each run** — the user picking Local mode is the approval. Don't prompt "Ready to launch Electron?" before execution; just run.
 - **Never silently drop test cases** — log failures and continue, then report them
 - **Never guess the URL** — always ask the user for localhost or preview URL
 - **Always publish before opening browser** — the dashboard needs the published data to show results
-- **
-- **Always check for PR before posting** — don't create a PR comment if there's no PR (ask user first)
+- **Delegate PR posting to `muggle-pr-visual-walkthrough`** — never inline the walkthrough markdown or call `gh pr comment` directly from this skill; ask the user and hand off
 - **Can be invoked at any state** — if the user already has a project or use cases set up, skip to the relevant step rather than re-doing everything
@@ -19,7 +19,7 @@ The local URL only changes where the browser opens; it does not change the remot
 
 **Every selection-based question MUST use the `AskQuestion` tool** (or the platform's equivalent structured selection tool). Never ask the user to "reply with a number" in a plain text message — always present clickable options.
 
-- **Selections** (project, use case, test case, script
+- **Selections** (project, use case, test case, script): Use `AskQuestion` with labeled options the user can click.
 - **Free-text inputs** (URLs, descriptions): Only use plain text prompts when there is no finite set of options. Even then, offer a detected/default value when possible.
 
 ## Workflow
@@ -27,7 +27,12 @@ The local URL only changes where the browser opens; it does not change the remot
 ### 1. Auth
 
 - `muggle-remote-auth-status`
-- If
+- If **authenticated**: print the logged-in email and ask via `AskQuestion`:
+> "You're logged in as **{email}**. Continue with this account?"
+- Option 1: "Yes, continue"
+- Option 2: "No, switch account"
+If the user picks "switch account", call `muggle-remote-auth-login` with `forceNewSession: true` then `muggle-remote-auth-poll`.
+- If **not signed in or expired**: call `muggle-remote-auth-login` then `muggle-remote-auth-poll`.
 Do not skip or assume auth.
 
 ### 2. Targets (user must confirm)
@@ -57,7 +62,7 @@ Prompt for projects: "Pick the project to group this test into:"
 - **Project — Create new project:** Collect `projectName`, `description`, and `url` (may be the local app URL, e.g. `http://localhost:3999`). Call `muggle-remote-project-create`. Use the returned `projectId` and continue.
 - **Use case — Create new use case:** User provides a natural-language instruction (or you reuse their testing goal).
 1. `muggle-remote-use-case-prompt-preview` with `projectId`, `instruction` — show preview; get confirmation via `AskQuestion`.
-2. `muggle-remote-use-case-create-from-prompts` with `projectId
+2. `muggle-remote-use-case-create-from-prompts` with `projectId` and `instructions: ["<the user's natural-language instruction>"]` — persist. Use the created use case id and continue to test-case selection.
 - **Test case — Create new test case** (requires a chosen `useCaseId`): User provides an instruction describing what to test.
 1. `muggle-remote-test-case-generate-from-prompt` with `projectId`, `useCaseId`, `instruction` — **preview only** (server test-case prompt preview); show the returned draft(s); get confirmation via `AskQuestion`.
 2. Persist the accepted draft with `muggle-remote-test-case-create`, mapping preview fields into the required properties (`title`, `description`, `goal`, `expectedResult`, `url`, etc.). Then continue from **section 4** with that `testCaseId`.
@@ -84,21 +89,21 @@ Remind them: local URL is only the execution target, not tied to cloud project c
 **Generate**
 
 1. `muggle-remote-test-case-get`
-2. `muggle-local-execute-test-generation`
+2. `muggle-local-execute-test-generation` with that test case + `localUrl` (optional: `showUi: false` for headless — defaults to visible; **`timeoutMs`** — see below)
 
 **Replay**
 
 1. `muggle-remote-test-script-get` — note `actionScriptId`
 2. `muggle-remote-action-script-get` with that id — full `actionScript`
 **Use the API response as-is.** Do not edit, shorten, or rebuild `actionScript`; replay needs full `label` paths for element lookup.
-3. `muggle-local-execute-replay`
+3. `muggle-local-execute-replay` with `testScript`, `actionScript`, `localUrl` (optional: `showUi: false` for headless — defaults to visible; **`timeoutMs`** — see below)
 
 ### Local execution timeout (`timeoutMs`)
 
 The MCP client often uses a **default wait of 300000 ms (5 minutes)** for `muggle-local-execute-test-generation` and `muggle-local-execute-replay`. **Exploratory script generation** (Auth0 login, dashboards, multi-step wizards, many LLM iterations) routinely **runs longer than 5 minutes** while Electron is still healthy.
 
 - **Always pass `timeoutMs`** for flows that may be long — for example **`600000` (10 min)** or **`900000` (15 min)** — unless the user explicitly wants a short cap.
-- If the tool reports **`Electron execution timed out after 300000ms`** (or similar) **but** Electron logs show the run still progressing (steps, screenshots, LLM calls), treat it as **orchestration timeout**, not an Electron app defect: **increase `timeoutMs` and retry
+- If the tool reports **`Electron execution timed out after 300000ms`** (or similar) **but** Electron logs show the run still progressing (steps, screenshots, LLM calls), treat it as **orchestration timeout**, not an Electron app defect: **increase `timeoutMs` and retry**.
 - **Test case design:** Preconditions like "a test run has already completed" on an **empty account** can force many steps (sign-up, new project, crawl). Prefer an account/project that **already has** the needed state, or narrow the test goal so generation does not try to create a full project from scratch unless that is intentional.
 
 ### Interpreting `failed` / non-zero Electron exit
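The increase-`timeoutMs`-and-retry advice is an escalation loop. A minimal sketch, assuming the caller can re-issue the execute call with a larger cap; the `execute` callback stands in for the real local-execute tool:

```typescript
// Retry an execute call with progressively larger timeout caps.
// The default schedule mirrors the values suggested above: 10 min, then 15 min.
async function executeWithEscalation(
  execute: (timeoutMs: number) => Promise<string>,
  caps: number[] = [600000, 900000],
): Promise<string> {
  let lastError: unknown;
  for (const timeoutMs of caps) {
    try {
      return await execute(timeoutMs);
    } catch (err) {
      // Treat the failure as an orchestration timeout, not an Electron
      // defect, and escalate to the next (larger) cap.
      lastError = err;
    }
  }
  throw lastError;
}
```

In practice the "retry" is simply re-calling the same tool with a bigger `timeoutMs`, after checking the Electron logs still show progress.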
@@ -106,15 +111,9 @@ The MCP client often uses a **default wait of 300000 ms (5 minutes)** for `muggl
|
|
|
106
111
|
- **`Electron execution timed out after 300000ms`:** Orchestration wait too short — see **`timeoutMs`** above.
|
|
107
112
|
- **Exit code 26** (and messages like **LLM failed to generate / replay action script**): Often corresponds to a completed exploration whose **outcome was goal not achievable** (`goal_not_achievable`, summary with `halt`) — e.g. verifying "view script after a successful run" when **no run or script exists yet** in the UI. Use `muggle-local-run-result-get` and read the **summary / structured summary**; do not assume an Electron crash. **Fix:** choose a **project that already has** completed runs and scripts, or **change the test case** so preconditions match what localhost can satisfy (e.g. include steps to create and run a test first, or assert only empty-state UI when no runs exist).
 
-### 6.
+### 6. Execute (no approval prompt)
 
-
-
-- "Yes, launch Electron (visible — I want to watch)"
-- "Yes, launch Electron (headless — run in background)"
-- "No, cancel"
-
-Only call local execute tools with `approveElectronAppLaunch: true` after the user selects a "Yes" option. Map visible to `showUi: true`, headless to `showUi: false`.
+Call `muggle-local-execute-test-generation` or `muggle-local-execute-replay` directly. **Do not** ask the user to re-approve the Electron launch — the user choosing this skill in the first place is the approval. The browser defaults to visible; only pass `showUi: false` if the user explicitly asked for headless.
 
 ### 7. After successful generation only
 
@@ -126,11 +125,62 @@ Only call local execute tools with `approveElectronAppLaunch: true` after the us
 - `muggle-local-run-result-get` with the run id from execute.
 - Include: status, duration, pass/fail summary, per-step summary, artifact/screenshot paths, errors if failed, and script view URL when publishing ran.
 
+### 9. Offer to post a visual walkthrough to the PR
+
+After reporting results, gather the required input and hand off to the shared **`muggle-pr-visual-walkthrough`** skill, which renders the walkthrough via `muggle build-pr-section` and posts it to the current branch's open PR.
+
+#### 9a: Gather per-step screenshots
+
+The shared skill takes an **`E2eReport` JSON** that includes per-step screenshot URLs. After step 7 has called `muggle-local-publish-test-script` and you have the `testScriptId`:
+
+1. Call `muggle-remote-test-script-get` with the `testScriptId`.
+2. Extract per step: `steps[].operation.action` and `steps[].operation.screenshotUrl`.
+3. Build the `steps` array: `[{ stepIndex: 0, action: "...", screenshotUrl: "..." }, ...]`.
+4. If the run failed, capture `failureStepIndex`, `error`, and the local `artifactsDir` from the run result in step 8.
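Steps 1–3 amount to a simple mapping over the script response. A minimal sketch, assuming only the response shape described above (`{ steps: [{ operation: { action, screenshotUrl } }] }`); the function name is illustrative:

```javascript
// Map a muggle-remote-test-script-get response (shape as described above)
// into the `steps` array the E2eReport expects. Illustrative sketch only.
function buildWalkthroughSteps(script) {
  return script.steps.map((step, stepIndex) => ({
    stepIndex,
    action: step.operation.action,
    screenshotUrl: step.operation.screenshotUrl,
  }));
}
```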
+
+Assemble the `E2eReport`:
+
+```json
+{
+  "projectId": "<projectId from step 2>",
+  "tests": [
+    {
+      "name": "<test case title>",
+      "testCaseId": "<id>",
+      "testScriptId": "<id from publish>",
+      "runId": "<runId from execute>",
+      "viewUrl": "<viewUrl from publish>",
+      "status": "passed",
+      "steps": [{ "stepIndex": 0, "action": "...", "screenshotUrl": "..." }]
+    }
+  ]
+}
+```
+
+See the `muggle-pr-visual-walkthrough` skill for the full schema including the failed-test shape.
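The assembly can be sketched as a tiny helper. This is illustrative only: the exact schema (including the failed-test shape) lives in the `muggle-pr-visual-walkthrough` skill, and the failure fields shown here are the ones step 9a names (`failureStepIndex`, `error`, `artifactsDir`).

```javascript
// Assemble an E2eReport matching the JSON shape above. Illustrative sketch:
// the authoritative schema is defined by the muggle-pr-visual-walkthrough skill.
function buildE2eReport(projectId, test, failure) {
  const entry = failure
    ? { ...test, status: "failed", ...failure } // failureStepIndex, error, artifactsDir
    : { ...test, status: "passed" };
  return { projectId, tests: [entry] };
}
```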
+
+#### 9b: Ask the user
+
+Use `AskQuestion`:
+
+> "Post a visual walkthrough of this run to the PR? Reviewers can click the test case to see step-by-step screenshots on the Muggle AI dashboard."
+
+- Option 1: "Yes, post to PR"
+- Option 2: "Skip"
+
+#### 9c: Invoke the shared skill in Mode A
+
+If the user chooses "Yes, post to PR", invoke the `muggle-pr-visual-walkthrough` skill via the `Skill` tool. With the `E2eReport` in context, the skill renders the markdown block via the CLI, finds the PR via `gh pr view`, posts `body` as a comment, posts the overflow `comment` only if the CLI emitted one, and confirms the PR URL to the user.
+
+Always use **Mode A** (post to existing PR) from this skill. Never hand-write the walkthrough markdown or call `gh pr comment` directly — delegate to `muggle-pr-visual-walkthrough`.
+
 ## Non-negotiables
 
-- No silent auth skip
+- No silent auth skip.
+- **Never prompt for Electron launch approval** before execution — invoking this skill is the approval. Just run.
 - If replayable scripts exist, do not default to generation without user choice.
 - No hiding failures: surface errors and artifact paths.
 - Replay: never hand-built or simplified `actionScript` — only from `muggle-remote-action-script-get`.
-- Use `AskQuestion` for every selection — project, use case, test case, script
+- Use `AskQuestion` for every selection — project, use case, test case, script. Never ask the user to type a number.
 - Project, use case, and test case selection lists must always include "Create new ...". Include "Show full list" whenever the API returned at least one row for that step; omit "Show full list" when the list is empty (offer "Create new ..." only). For creates, use preview tools (`muggle-remote-use-case-prompt-preview`, `muggle-remote-test-case-generate-from-prompt`) before persisting.
+- PR posting is always optional and always delegated to the `muggle-pr-visual-walkthrough` skill — never inline the walkthrough markdown or call `gh pr comment` directly from this skill.