@muggleai/works 2.0.2 → 3.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (32)
  1. package/README.md +173 -162
  2. package/dist/plugin/.claude-plugin/plugin.json +8 -0
  3. package/dist/plugin/.mcp.json +12 -0
  4. package/dist/plugin/hooks/hooks.json +14 -0
  5. package/dist/plugin/scripts/ensure-electron-app.sh +12 -0
  6. package/dist/plugin/skills/muggle-do/SKILL.md +53 -0
  7. package/dist/plugin/skills/muggle-do/impact-analysis.md +34 -0
  8. package/dist/plugin/skills/muggle-do/open-prs.md +52 -0
  9. package/dist/plugin/skills/muggle-do/qa.md +89 -0
  10. package/dist/plugin/skills/muggle-do/requirements.md +33 -0
  11. package/dist/plugin/skills/muggle-do/unit-tests.md +31 -0
  12. package/dist/plugin/skills/muggle-do/validate-code.md +30 -0
  13. package/dist/plugin/skills/publish-test-to-cloud/SKILL.md +41 -0
  14. package/dist/plugin/skills/test-feature-local/SKILL.md +86 -0
  15. package/package.json +7 -6
  16. package/plugin/.claude-plugin/plugin.json +8 -0
  17. package/plugin/.mcp.json +12 -0
  18. package/plugin/hooks/hooks.json +14 -0
  19. package/plugin/scripts/ensure-electron-app.sh +12 -0
  20. package/plugin/skills/muggle-do/SKILL.md +53 -0
  21. package/plugin/skills/muggle-do/impact-analysis.md +34 -0
  22. package/plugin/skills/muggle-do/open-prs.md +52 -0
  23. package/plugin/skills/muggle-do/qa.md +89 -0
  24. package/plugin/skills/muggle-do/requirements.md +33 -0
  25. package/plugin/skills/muggle-do/unit-tests.md +31 -0
  26. package/plugin/skills/muggle-do/validate-code.md +30 -0
  27. package/plugin/skills/publish-test-to-cloud/SKILL.md +41 -0
  28. package/plugin/skills/test-feature-local/SKILL.md +86 -0
  29. package/scripts/postinstall.mjs +2 -2
  30. package/skills-dist/muggle-do.md +0 -589
  31. package/skills-dist/publish-test-to-cloud.md +0 -43
  32. package/skills-dist/test-feature-local.md +0 -344
@@ -0,0 +1,34 @@
+ # Impact Analysis Agent
+
+ You are analyzing git repositories to determine which ones have actual code changes that need to go through the dev cycle pipeline.
+
+ ## Input
+
+ You receive:
+ - A list of repos with their local filesystem paths
+ - The requirements goal and affected repos from the requirements stage
+
+ ## Your Job
+
+ For each repo path provided:
+
+ 1. **Check the current branch:** Run `git branch --show-current` in the repo. If it returns empty (detached HEAD), report an error for that repo.
+ 2. **Detect the default branch:** Run `git symbolic-ref refs/remotes/origin/HEAD --short` to find the default branch (e.g., `origin/main`). Strip the `origin/` prefix. If this fails, check if `main` or `master` exist locally via `git rev-parse --verify`.
+ 3. **Verify it's a feature branch:** The current branch must NOT be the default branch. If it is, report an error.
+ 4. **List changed files:** Run `git diff --name-only <default-branch>...HEAD` to find files changed on this branch relative to the default branch. If no merge base exists, fall back to `git diff --name-only HEAD`.
+ 5. **Get the diff:** Run `git diff <default-branch>...HEAD` for the full diff.
+
+ ## Output
+
+ Report per repo:
+
+ **Repo: (name)**
+ - Branch: (current branch name)
+ - Default branch: (detected default branch)
+ - Changed files: (list)
+ - Diff summary: (brief description of what changed)
+ - Status: OK | ERROR (with reason)
+
+ **Summary:** (which repos have changes, which don't, any errors)
+
+ If NO repos have any changes, clearly state: "No changes detected in any repo."
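Steps 1-4 of the impact-analysis instructions above can be sketched as a small script. This is a minimal illustration against a throwaway repo created in a temp directory (the repo contents and branch name `feature/demo` are invented for the demo; there is no `origin` remote here, so the `git rev-parse` fallback path is the one exercised):

```shell
set -euo pipefail

# Build a throwaway repo: main with one commit, a feature branch with a change.
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email dev@example.com
git config user.name dev
echo "v1" > app.txt
git add app.txt
git commit -qm "initial"
git checkout -q -b feature/demo
echo "v2" > app.txt
git add app.txt
git commit -qm "change app"

# Step 1: current branch; empty output means detached HEAD -> error.
branch=$(git branch --show-current)
[ -n "$branch" ] || { echo "ERROR: detached HEAD"; exit 1; }

# Step 2: default branch from origin/HEAD, falling back to main/master.
if default=$(git symbolic-ref --short refs/remotes/origin/HEAD 2>/dev/null); then
  default=${default#origin/}
elif git rev-parse --verify -q main >/dev/null; then
  default=main
else
  default=master
fi

# Step 3: refuse to analyze the default branch itself.
[ "$branch" != "$default" ] || { echo "ERROR: on default branch"; exit 1; }

# Step 4: files changed on this branch relative to the default branch.
echo "branch=$branch default=$default"
git diff --name-only "$default...HEAD"
```

The three-dot form `<default-branch>...HEAD` diffs against the merge base, so commits added to the default branch after the fork point do not show up as "changes" on the feature branch.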
@@ -0,0 +1,52 @@
+ # PR Creation Agent
+
+ You are creating pull requests for each repository that has changes after a successful dev cycle run.
+
+ ## Input
+
+ You receive:
+ - Per-repo: repo name, path, branch name
+ - Requirements: goal, acceptance criteria
+ - QA report: passed/failed test cases, each with testCaseId, testScriptId, runId, artifactsDir, and projectId
+
+ ## Your Job
+
+ For each repo with changes:
+
+ 1. **Push the branch** to origin: `git push -u origin <branch-name>` in the repo directory.
+ 2. **Build the PR title:**
+ - If QA has failures: `[QA FAILING] <goal>`
+ - Otherwise: `<goal>`
+ - Keep under 70 characters
+ 3. **Build the PR body** with these sections:
+ - `## Goal` — the requirements goal
+ - `## Acceptance Criteria` — bulleted list (omit section if empty)
+ - `## Changes` — summary of what changed in this repo
+ - `## QA Results` — full test case breakdown (see format below)
+ 4. **Create the PR** using `gh pr create --title "..." --body "..." --head <branch>` in the repo directory.
+ 5. **Capture the PR URL** from the output.
+
+ ## QA Results Section Format
+
+ ```
+ ## QA Results
+
+ **X passed / Y failed**
+
+ | Test Case | Status | Details |
+ |-----------|--------|---------|
+ | [Name](https://www.muggle-ai.com/muggleTestV0/dashboard/projects/{projectId}/scripts?modal=details&testCaseId={testCaseId}) | ✅ PASSED | — |
+ | [Name](https://www.muggle-ai.com/muggleTestV0/dashboard/projects/{projectId}/scripts?modal=details&testCaseId={testCaseId}) | ❌ FAILED | {error} — artifacts: `{artifactsDir}` |
+ ```
+
+ Rules:
+ - Link each test case name to its details page on www.muggle-ai.com using the URL pattern above (requires `testCaseId` and `projectId` from the QA report).
+ - For failed tests, include the error message and the local `artifactsDir` path so the developer can inspect screenshots.
+ - Screenshots are in `{artifactsDir}/screenshots/` and viewable locally.
+
+ ## Output
+
+ **PRs Created:**
+ - (repo name): (PR URL)
+
+ **Errors:** (any repos where PR creation failed, with the error message)
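The title rule in step 2 above (conditional `[QA FAILING]` prefix, 70-character cap) can be sketched as a small helper. `build_pr_title` is an illustrative name, not part of the package:

```shell
# Hypothetical helper: apply the [QA FAILING] prefix when QA failed,
# then truncate the result to 70 characters.
build_pr_title() {
  goal=$1
  qa_failed=$2
  if [ "$qa_failed" = "true" ]; then
    title="[QA FAILING] $goal"
  else
    title="$goal"
  fi
  # cut enforces the 70-character budget regardless of goal length.
  printf '%s\n' "$title" | cut -c1-70
}

build_pr_title "Add login rate limiting" true
build_pr_title "Add login rate limiting" false
```

Note that truncating after prefixing means a long goal loses trailing characters rather than the `[QA FAILING]` marker, which keeps the failure signal visible in PR lists.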
@@ -0,0 +1,89 @@
+ # QA Agent
+
+ You are running QA test cases against code changes using Muggle AI's local testing infrastructure.
+
+ ## Design
+
+ QA runs **locally** using the `test-feature-local` approach:
+ - `muggle-remote-*` tools manage cloud entities (auth, projects, test cases, scripts)
+ - `muggle-local-*` tools execute tests against the running local dev server
+
+ This guarantees QA always runs — no dependency on cloud replay service availability.
+
+ ## Input
+
+ You receive:
+ - The Muggle project ID
+ - The list of changed repos, files, and a summary of changes
+ - The requirements goal
+ - `localUrl` per repo (from `muggle-repos.json`) — the locally running dev server URL
+
+ ## Your Job
+
+ ### Step 0: Resolve Local URL
+
+ Read `localUrl` for each repo from the context. If it is not provided, ask the user:
+ > "QA requires a running local server. What URL is the `<repo>` app running on? (e.g. `http://localhost:3000`)"
+
+ **Do not skip QA.** Wait for the user to provide the URL before proceeding.
+
+ ### Step 1: Check Authentication
+
+ Use `muggle-remote-auth-status` to verify valid credentials. If not authenticated, use `muggle-remote-auth-login` to start the device-code login flow and `muggle-remote-auth-poll` to wait for completion.
+
+ ### Step 2: Get Test Cases
+
+ Use `muggle-remote-test-case-list` with the project ID to fetch all test cases.
+
+ ### Step 3: Filter Relevant Test Cases
+
+ Based on the changed files and the requirements goal, determine which test cases are relevant:
+ - Test cases whose use cases directly relate to the changed functionality
+ - Test cases that cover areas potentially affected by the changes
+ - When in doubt, include the test case (better to over-test than miss a regression)
+
+ ### Step 4: Execute Tests Locally
+
+ For each relevant test case:
+
+ 1. Call `muggle-remote-test-script-list` filtered by `testCaseId` to check for an existing script.
+
+ 2. **If a script exists** (replay path):
+ - Call `muggle-remote-test-script-get` with the `testScriptId` to fetch the full script object.
+ - Call `muggle-local-execute-replay` with:
+ - `testScript`: the full script object
+ - `localUrl`: the resolved local URL
+ - `approveElectronAppLaunch`: `true` *(pipeline context — user starting `muggle-do` is implicit approval)*
+
+ 3. **If no script exists** (generation path):
+ - Call `muggle-remote-test-case-get` with the `testCaseId` to fetch the full test case object.
+ - Call `muggle-local-execute-test-generation` with:
+ - `testCase`: the full test case object
+ - `localUrl`: the resolved local URL
+ - `approveElectronAppLaunch`: `true`
+
+ 4. When execution completes, call `muggle-local-run-result-get` with the `runId` returned by the execute call.
+
+ 5. **Retain per test case:** `testCaseId`, `testScriptId` (if present), `runId`, `status` (passed/failed), `artifactsDir`.
+
+ ### Step 5: Collect Results
+
+ For each test case:
+ - Record pass or fail from the run result
+ - If failed, capture the error message and `artifactsDir` for reproduction
+ - Every test case must be executed — generate a new script if none exists (no skips)
+
+ ## Output
+
+ **QA Report:**
+
+ **Passed:** (count)
+ - (test case name) [testCaseId: `<id>`, testScriptId: `<id>`, runId: `<id>`]: passed
+
+ **Failed:** (count)
+ - (test case name) [testCaseId: `<id>`, runId: `<id>`]: (error) — artifacts: `<artifactsDir>`
+
+ **Metadata:**
+ - projectId: `<projectId>`
+
+ **Overall:** ALL PASSED | FAILURES DETECTED
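The replay-vs-generation branching in Step 4 above is worth pinning down. The real calls are MCP tool invocations, not shell commands, so this sketch uses hypothetical shell stand-ins (`run_test_case`, `list_scripts_for`, and the ids `tc-1`/`ts-9` are all invented for illustration):

```shell
set -eu

# Stand-in for muggle-remote-test-script-list filtered by testCaseId:
# in this fake lookup, only tc-1 has an existing script.
list_scripts_for() {
  if [ "$1" = "tc-1" ]; then echo "ts-9"; fi
}

# Route each test case: existing script -> replay, otherwise -> generation.
run_test_case() {
  test_case_id=$1
  script_id=$(list_scripts_for "$test_case_id")
  if [ -n "$script_id" ]; then
    # replay path: muggle-remote-test-script-get + muggle-local-execute-replay
    echo "replay:$script_id"
  else
    # generation path: muggle-remote-test-case-get + muggle-local-execute-test-generation
    echo "generate:$test_case_id"
  fi
}

run_test_case tc-1
run_test_case tc-2
```

Either branch ends the same way: fetch the run result by `runId` and retain the ids and `artifactsDir`, so the "no skips" rule in Step 5 holds regardless of which path executed.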
@@ -0,0 +1,33 @@
+ # Requirements Analysis Agent
+
+ You are analyzing a user's task description to extract structured requirements for an autonomous development cycle.
+
+ ## Input
+
+ You receive:
+ - A user's task description (natural language)
+ - A list of configured repository names
+
+ ## Your Job
+
+ 1. **Read the task description carefully.** Understand what the user wants to build, fix, or change.
+ 2. **Extract the goal** — one clear sentence describing the outcome.
+ 3. **Extract acceptance criteria** — specific, verifiable conditions that must be true when the task is done. Each criterion should be independently testable. If the user's description is vague, infer reasonable criteria but flag them as inferred.
+ 4. **Identify which repos are likely affected** — based on the task description and the repo names provided.
+
+ ## Output
+
+ Report your findings as a structured summary:
+
+ **Goal:** (one sentence)
+
+ **Acceptance Criteria:**
+ - (criterion 1)
+ - (criterion 2)
+ - ...
+
+ **Affected Repos:** (comma-separated list)
+
+ **Notes:** (any ambiguities, assumptions, or questions — optional)
+
+ Do NOT ask the user questions. Make reasonable inferences and flag assumptions in Notes.
@@ -0,0 +1,31 @@
+ # Unit Test Runner Agent
+
+ You are running unit tests for each repository that has changes in the dev cycle pipeline.
+
+ ## Input
+
+ You receive:
+ - A list of repos with their paths and test commands (e.g., `pnpm test`)
+
+ ## Your Job
+
+ For each repo:
+
+ 1. **Run the test command** using Bash in the repo's directory. Use the provided test command (default: `pnpm test`).
+ 2. **Capture the full output** — both stdout and stderr.
+ 3. **Determine pass/fail** — exit code 0 means pass, anything else means fail.
+ 4. **If tests fail**, extract the specific failing test names/descriptions from the output.
+
+ ## Output
+
+ Per repo:
+
+ **Repo: (name)**
+ - Test command: (what was run)
+ - Result: PASS | FAIL
+ - Failed tests: (list, if any)
+ - Output: (relevant portion of test output — full output if failed, summary if passed)
+
+ ## Overall
+
+ **Overall:** ALL PASSED | FAILURES DETECTED
+
+ If any repo fails, clearly state which repos failed and include enough output for the user to diagnose the issue.
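Steps 2-3 above (capture combined output, decide pass/fail by exit code) can be sketched as follows; `run_and_report` is an illustrative helper and `true`/`false` stand in for a real test command such as `pnpm test`:

```shell
# Run a command, capture stdout+stderr together, and map exit code 0 to
# PASS and anything else to FAIL.
run_and_report() {
  cmd=$1
  if output=$(sh -c "$cmd" 2>&1); then
    result=PASS
  else
    result=FAIL
  fi
  echo "$result"
}

run_and_report "true"
run_and_report "false"
```

The `2>&1` inside the command substitution is what satisfies step 2: both streams land in `$output`, so failing-test names can later be grepped out of a single variable.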
@@ -0,0 +1,30 @@
+ # Code Validation Agent
+
+ You are validating that each repository's git state is ready for the dev cycle pipeline.
+
+ ## Input
+
+ You receive:
+ - A list of repos with changes (from impact analysis), including their paths and branch names
+
+ ## Your Job
+
+ For each repo:
+
+ 1. **Verify the branch is a feature branch** (not main/master/the default branch). This should already be validated by impact analysis, but double-check.
+ 2. **Check for uncommitted changes:** Run `git status --porcelain` in the repo. If there are uncommitted changes, warn the user — uncommitted changes won't be included in PRs.
+ 3. **Get the branch diff:** Run `git diff <default-branch>...HEAD --stat` for a summary of changes.
+ 4. **Verify commits exist on the branch:** Run `git log <default-branch>..HEAD --oneline` to confirm there are commits to push.
+
+ ## Output
+
+ Per repo:
+
+ **Repo: (name)**
+ - Branch: (name)
+ - Commits on branch: (count and one-line summaries)
+ - Uncommitted changes: yes/no (with warning if yes)
+ - Diff stat: (file change summary)
+ - Status: READY | WARNING | ERROR
+
+ **Overall:** READY to proceed / BLOCKED (with reasons)
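The cleanliness check in step 2 above hinges on one property of `git status --porcelain`: it prints nothing for a clean working tree. A minimal demonstration in a throwaway temp repo (file names and commit messages are invented):

```shell
set -eu

# Throwaway repo with one committed file.
repo=$(mktemp -d)
cd "$repo"
git init -q -b main
git config user.email dev@example.com
git config user.name dev
echo a > f.txt
git add f.txt
git commit -qm init

# Clean tree: porcelain output is empty.
[ -z "$(git status --porcelain)" ] && echo "clean working tree"

# Unstaged edit: porcelain output is non-empty, so warn.
echo b >> f.txt
[ -n "$(git status --porcelain)" ] && echo "WARNING: uncommitted changes will not be included in the PR"
```

`--porcelain` is the stable, script-safe format; plain `git status` output is free to change between git versions and should not be parsed.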
@@ -0,0 +1,41 @@
+ ---
+ name: publish-test-to-cloud
+ description: Publish a local generation run to cloud workflow records using MCP tools.
+ ---
+
+ # Publish Test To Cloud
+
+ Publish a locally generated run to Muggle AI cloud so it appears in cloud workflow and test result views.
+
+ ## Required tools
+
+ - `muggle-remote-auth-status`
+ - `muggle-remote-auth-login`
+ - `muggle-remote-auth-poll`
+ - `muggle-local-run-result-list`
+ - `muggle-local-run-result-get`
+ - `muggle-local-publish-test-script`
+ - `muggle-remote-local-run-upload` (manual fallback)
+
+ ## Default flow
+
+ 1. Check auth with `muggle-remote-auth-status`.
+ 2. If not authenticated:
+ - `muggle-remote-auth-login`
+ - `muggle-remote-auth-poll` as needed.
+ 3. Find a local run:
+ - `muggle-local-run-result-list`
+ - choose a generation run in `passed` or `failed` state.
+ 4. Validate run:
+ - `muggle-local-run-result-get`
+ - ensure required metadata exists (`projectId`, `useCaseId`, `cloudTestCaseId`, `executionTimeMs`).
+ 5. Publish:
+ - `muggle-local-publish-test-script` with `runId` and `cloudTestCaseId`.
+ 6. Return cloud identifiers and view URL from tool response.
+
+ ## Notes
+
+ - Prefer `muggle-local-publish-test-script`.
+ - Use `muggle-remote-local-run-upload` only for advanced/manual fallback.
+ - Replay runs are not publishable through this flow.
+ - If required metadata is missing, fail fast with explicit error context.
+
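The fail-fast rule in step 4 and the final note above can be sketched as a presence check over the four required metadata fields. `require_fields` and the `name=value` argument convention are invented for this illustration; the real validation happens against the `muggle-local-run-result-get` response:

```shell
# Fail fast with an explicit field name the moment any required
# metadata value is empty; report OK only when all are present.
require_fields() {
  for pair in "$@"; do
    name=${pair%%=*}
    value=${pair#*=}
    if [ -z "$value" ]; then
      echo "ERROR: missing required metadata: $name" >&2
      return 1
    fi
  done
  echo "metadata OK"
}

require_fields "projectId=p-1" "useCaseId=u-1" "cloudTestCaseId=tc-1" "executionTimeMs=1234"
```

Naming the missing field in the error is the "explicit error context" the note asks for; a bare non-zero exit would leave the user guessing which of the four fields to fix.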
@@ -0,0 +1,86 @@
+ ---
+ name: test-feature-local
+ description: Test a feature's user experience on localhost. Manage entities in cloud with muggle-remote tools and execute locally with muggle-local tools.
+ ---
+
+ # Test Feature Local
+
+ Run end-to-end feature testing against a local URL using a cloud-first workflow:
+
+ - Cloud management: `muggle-remote-*`
+ - Local execution and artifacts: `muggle-local-*`
+
+ ## Workflow
+
+ 1. **Auth**
+ - `muggle-remote-auth-status`
+ - If needed: `muggle-remote-auth-login` + `muggle-remote-auth-poll`
+
+ 2. **Select project/use case/test case**
+ - `muggle-remote-project-list`
+ - `muggle-remote-use-case-list`
+ - `muggle-remote-test-case-list-by-use-case`
+
+ 3. **Resolve local URL**
+ - Use the URL provided by the user.
+ - If missing, ask explicitly (do not guess).
+
+ 4. **Check script availability**
+ - `muggle-remote-test-script-list` filtered by testCaseId
+ - If script exists, recommend replay unless user-flow changes suggest regeneration.
+
+ 5. **Execute**
+ - Replay path:
+ - `muggle-remote-test-script-get`
+ - `muggle-local-execute-replay`
+ - Generation path:
+ - `muggle-remote-test-case-get`
+ - `muggle-local-execute-test-generation`
+
+ 6. **Approval requirement**
+ - Before execution, get explicit user approval for launching Electron app.
+ - Only then set `approveElectronAppLaunch: true`.
+
+ 7. **Report results**
+ - `muggle-local-run-result-get` with returned runId.
+ - Report:
+ - status
+ - duration
+ - artifacts path
+ - pass/fail summary
+
+ 8. **Optional publish**
+ - Offer `muggle-local-publish-test-script` to publish generated script to cloud.
+
+ ## Tool map
+
+ ### Auth
+ - `muggle-remote-auth-status`
+ - `muggle-remote-auth-login`
+ - `muggle-remote-auth-poll`
+ - `muggle-remote-auth-logout`
+
+ ### Cloud entities
+ - `muggle-remote-project-list`
+ - `muggle-remote-project-create`
+ - `muggle-remote-use-case-list`
+ - `muggle-remote-use-case-create-from-prompts`
+ - `muggle-remote-test-case-list-by-use-case`
+ - `muggle-remote-test-case-get`
+ - `muggle-remote-test-case-generate-from-prompt`
+ - `muggle-remote-test-script-list`
+ - `muggle-remote-test-script-get`
+
+ ### Local execution
+ - `muggle-local-execute-test-generation`
+ - `muggle-local-execute-replay`
+ - `muggle-local-run-result-list`
+ - `muggle-local-run-result-get`
+ - `muggle-local-publish-test-script`
+
+ ## Guardrails
+
+ - Do not silently skip auth.
+ - Do not silently skip test execution when no script exists; generate one.
+ - Do not launch Electron without explicit approval.
+ - Do not hide failing run details; include error and artifacts path.
package/package.json CHANGED
@@ -1,7 +1,7 @@
  {
  "name": "@muggleai/works",
- "version": "2.0.2",
- "description": "Ship quality products, not just code. AI-powered QA that validates your app's user experience — from Claude Code and Cursor to PR.",
+ "version": "3.0.0",
+ "description": "Deliver quality web products with an AI agent team. Autonomous pipeline that codes, tests, and QA-validates your app — from Claude Code and Cursor to PR.",
  "type": "module",
  "main": "dist/index.js",
  "bin": {
@@ -9,13 +9,14 @@
  },
  "files": [
  "dist",
+ "plugin",
  "bin/muggle.js",
- "scripts/postinstall.mjs",
- "skills-dist"
+ "scripts/postinstall.mjs"
  ],
  "scripts": {
  "clean": "rimraf dist",
- "build": "tsup",
+ "build": "tsup && node scripts/build-plugin.mjs",
+ "build:plugin": "node scripts/build-plugin.mjs",
  "build:release": "npm run build",
  "build:workspace": "turbo run build",
  "typecheck:workspace": "turbo run typecheck",
@@ -59,7 +60,7 @@
  "devDependencies": {
  "@eslint/js": "^10.0.1",
  "@types/node": "^25.5.0",
- "@types/uuid": "^10.0.0",
+ "@types/uuid": "^11.0.0",
  "@typescript-eslint/eslint-plugin": "^8.34.0",
  "@typescript-eslint/parser": "^8.34.0",
  "eslint": "^10.1.0",
@@ -0,0 +1,8 @@
+ {
+ "name": "muggle",
+ "description": "Muggle AI QA and autonomous dev workflow plugin.",
+ "version": "2.0.2",
+ "author": {
+ "name": "Muggle AI"
+ }
+ }
@@ -0,0 +1,12 @@
+ {
+ "mcpServers": {
+ "muggle": {
+ "command": "npx",
+ "args": [
+ "-y",
+ "@muggleai/works",
+ "serve"
+ ]
+ }
+ }
+ }
@@ -0,0 +1,14 @@
+ {
+ "hooks": {
+ "SessionStart": [
+ {
+ "hooks": [
+ {
+ "type": "command",
+ "command": "bash \"${CLAUDE_PLUGIN_ROOT}/scripts/ensure-electron-app.sh\""
+ }
+ ]
+ }
+ ]
+ }
+ }
@@ -0,0 +1,12 @@
+ #!/usr/bin/env bash
+
+ set -euo pipefail
+
+ # Ensure the Electron QA runtime is installed/up to date.
+ # This is intentionally best-effort so plugin startup is resilient.
+ if command -v muggle >/dev/null 2>&1; then
+ muggle setup >/dev/null 2>&1 || true
+ exit 0
+ fi
+
+ npx -y @muggleai/works setup >/dev/null 2>&1 || true
@@ -0,0 +1,53 @@
+ ---
+ name: muggle-do
+ description: Unified Muggle AI workflow entry point. Routes to autonomous dev cycle, status, repair, or upgrade.
+ disable-model-invocation: true
+ ---
+
+ # Muggle Do
+
+ Muggle Do is the top-level command for the Muggle AI development workflow.
+
+ It supports two categories of execution:
+
+ 1. **Dev cycle**: requirements -> impact analysis -> validate code -> coding -> unit tests -> QA -> open PRs
+ 2. **Maintenance**: status, repair, upgrade
+
+ ## Input routing
+
+ Treat `$ARGUMENTS` as the user command:
+
+ - Empty / `help` / `menu` / `?` -> show menu and session selector.
+ - `status` -> run installation and auth status checks.
+ - `repair` -> run repair workflow.
+ - `upgrade` -> run upgrade workflow.
+ - Anything else -> treat as a new task description and start/resume a dev-cycle session.
+
+ ## Session model
+
+ Use `.muggle-do/sessions/<slug>/` with these files:
+
+ - `state.md`
+ - `requirements.md`
+ - `iterations/<NNN>.md`
+ - `result.md`
+
+ On each stage transition, update `state.md` and append stage output to the active iteration file.
+
+ ## Dev cycle agents
+
+ Use the supporting files in this directory as stage-specific instructions:
+
+ - [requirements.md](requirements.md)
+ - [impact-analysis.md](impact-analysis.md)
+ - [validate-code.md](validate-code.md)
+ - [unit-tests.md](unit-tests.md)
+ - [qa.md](qa.md)
+ - [open-prs.md](open-prs.md)
+
+ ## Guardrails
+
+ - Do not skip unit tests before QA.
+ - Do not skip QA due to missing scripts; generate when needed.
+ - If the same stage fails 3 times in a row, escalate with details.
+ - If total iterations reach 3 and QA still fails, continue to PR creation with `[QA FAILING]`.
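The input-routing table above maps naturally onto a case statement. This is a minimal sketch of that dispatch; `route` is an illustrative helper, not code shipped in the plugin:

```shell
# Dispatch the user command exactly as the routing table specifies:
# empty/help/menu/? -> menu, known maintenance verbs -> their workflows,
# anything else -> a dev-cycle task description.
route() {
  case "${1:-}" in
    ""|help|menu|"?") echo "menu" ;;
    status)           echo "status-check" ;;
    repair)           echo "repair" ;;
    upgrade)          echo "upgrade" ;;
    *)                echo "dev-cycle" ;;
  esac
}

route ""
route status
route "fix the login bug"
```

The catch-all `*` branch is what makes free-form task descriptions work without a dedicated subcommand: any text that is not a reserved verb starts or resumes a dev-cycle session.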
@@ -0,0 +1,34 @@
+ # Impact Analysis Agent
+
+ You are analyzing git repositories to determine which ones have actual code changes that need to go through the dev cycle pipeline.
+
+ ## Input
+
+ You receive:
+ - A list of repos with their local filesystem paths
+ - The requirements goal and affected repos from the requirements stage
+
+ ## Your Job
+
+ For each repo path provided:
+
+ 1. **Check the current branch:** Run `git branch --show-current` in the repo. If it returns empty (detached HEAD), report an error for that repo.
+ 2. **Detect the default branch:** Run `git symbolic-ref refs/remotes/origin/HEAD --short` to find the default branch (e.g., `origin/main`). Strip the `origin/` prefix. If this fails, check if `main` or `master` exist locally via `git rev-parse --verify`.
+ 3. **Verify it's a feature branch:** The current branch must NOT be the default branch. If it is, report an error.
+ 4. **List changed files:** Run `git diff --name-only <default-branch>...HEAD` to find files changed on this branch relative to the default branch. If no merge base exists, fall back to `git diff --name-only HEAD`.
+ 5. **Get the diff:** Run `git diff <default-branch>...HEAD` for the full diff.
+
+ ## Output
+
+ Report per repo:
+
+ **Repo: (name)**
+ - Branch: (current branch name)
+ - Default branch: (detected default branch)
+ - Changed files: (list)
+ - Diff summary: (brief description of what changed)
+ - Status: OK | ERROR (with reason)
+
+ **Summary:** (which repos have changes, which don't, any errors)
+
+ If NO repos have any changes, clearly state: "No changes detected in any repo."
@@ -0,0 +1,52 @@
+ # PR Creation Agent
+
+ You are creating pull requests for each repository that has changes after a successful dev cycle run.
+
+ ## Input
+
+ You receive:
+ - Per-repo: repo name, path, branch name
+ - Requirements: goal, acceptance criteria
+ - QA report: passed/failed test cases, each with testCaseId, testScriptId, runId, artifactsDir, and projectId
+
+ ## Your Job
+
+ For each repo with changes:
+
+ 1. **Push the branch** to origin: `git push -u origin <branch-name>` in the repo directory.
+ 2. **Build the PR title:**
+ - If QA has failures: `[QA FAILING] <goal>`
+ - Otherwise: `<goal>`
+ - Keep under 70 characters
+ 3. **Build the PR body** with these sections:
+ - `## Goal` — the requirements goal
+ - `## Acceptance Criteria` — bulleted list (omit section if empty)
+ - `## Changes` — summary of what changed in this repo
+ - `## QA Results` — full test case breakdown (see format below)
+ 4. **Create the PR** using `gh pr create --title "..." --body "..." --head <branch>` in the repo directory.
+ 5. **Capture the PR URL** from the output.
+
+ ## QA Results Section Format
+
+ ```
+ ## QA Results
+
+ **X passed / Y failed**
+
+ | Test Case | Status | Details |
+ |-----------|--------|---------|
+ | [Name](https://www.muggle-ai.com/muggleTestV0/dashboard/projects/{projectId}/scripts?modal=details&testCaseId={testCaseId}) | ✅ PASSED | — |
+ | [Name](https://www.muggle-ai.com/muggleTestV0/dashboard/projects/{projectId}/scripts?modal=details&testCaseId={testCaseId}) | ❌ FAILED | {error} — artifacts: `{artifactsDir}` |
+ ```
+
+ Rules:
+ - Link each test case name to its details page on www.muggle-ai.com using the URL pattern above (requires `testCaseId` and `projectId` from the QA report).
+ - For failed tests, include the error message and the local `artifactsDir` path so the developer can inspect screenshots.
+ - Screenshots are in `{artifactsDir}/screenshots/` and viewable locally.
+
+ ## Output
+
+ **PRs Created:**
+ - (repo name): (PR URL)
+
+ **Errors:** (any repos where PR creation failed, with the error message)