@fro.bot/systematic 2.0.2 → 2.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (68)
  1. package/agents/design/figma-design-sync.md +1 -1
  2. package/agents/document-review/coherence-reviewer.md +40 -0
  3. package/agents/document-review/design-lens-reviewer.md +46 -0
  4. package/agents/document-review/feasibility-reviewer.md +42 -0
  5. package/agents/document-review/product-lens-reviewer.md +50 -0
  6. package/agents/document-review/scope-guardian-reviewer.md +54 -0
  7. package/agents/document-review/security-lens-reviewer.md +38 -0
  8. package/agents/research/best-practices-researcher.md +2 -1
  9. package/agents/research/git-history-analyzer.md +1 -1
  10. package/agents/research/learnings-researcher.md +27 -26
  11. package/agents/research/repo-research-analyst.md +164 -9
  12. package/agents/review/api-contract-reviewer.md +49 -0
  13. package/agents/review/correctness-reviewer.md +49 -0
  14. package/agents/review/data-migrations-reviewer.md +53 -0
  15. package/agents/review/dhh-rails-reviewer.md +31 -52
  16. package/agents/review/julik-frontend-races-reviewer.md +27 -200
  17. package/agents/review/kieran-python-reviewer.md +29 -116
  18. package/agents/review/kieran-rails-reviewer.md +29 -98
  19. package/agents/review/kieran-typescript-reviewer.md +29 -107
  20. package/agents/review/maintainability-reviewer.md +49 -0
  21. package/agents/review/pattern-recognition-specialist.md +2 -1
  22. package/agents/review/performance-reviewer.md +51 -0
  23. package/agents/review/reliability-reviewer.md +49 -0
  24. package/agents/review/schema-drift-detector.md +12 -10
  25. package/agents/review/security-reviewer.md +51 -0
  26. package/agents/review/testing-reviewer.md +48 -0
  27. package/agents/workflow/pr-comment-resolver.md +99 -50
  28. package/agents/workflow/spec-flow-analyzer.md +60 -89
  29. package/dist/index.js +9 -0
  30. package/dist/lib/config-handler.d.ts +2 -0
  31. package/package.json +1 -1
  32. package/skills/agent-browser/SKILL.md +69 -48
  33. package/skills/ce-brainstorm/SKILL.md +2 -1
  34. package/skills/ce-compound/SKILL.md +126 -28
  35. package/skills/ce-compound-refresh/SKILL.md +181 -73
  36. package/skills/ce-ideate/SKILL.md +2 -1
  37. package/skills/ce-plan/SKILL.md +424 -414
  38. package/skills/ce-review/SKILL.md +379 -419
  39. package/skills/ce-review-beta/SKILL.md +506 -0
  40. package/skills/ce-review-beta/references/diff-scope.md +31 -0
  41. package/skills/ce-review-beta/references/findings-schema.json +128 -0
  42. package/skills/ce-review-beta/references/persona-catalog.md +50 -0
  43. package/skills/ce-review-beta/references/review-output-template.md +115 -0
  44. package/skills/ce-review-beta/references/subagent-template.md +56 -0
  45. package/skills/ce-work/SKILL.md +17 -8
  46. package/skills/ce-work-beta/SKILL.md +16 -9
  47. package/skills/claude-permissions-optimizer/SKILL.md +15 -14
  48. package/skills/claude-permissions-optimizer/scripts/extract-commands.mjs +9 -159
  49. package/skills/claude-permissions-optimizer/scripts/normalize.mjs +151 -0
  50. package/skills/deepen-plan/SKILL.md +348 -483
  51. package/skills/document-review/SKILL.md +160 -52
  52. package/skills/feature-video/SKILL.md +209 -178
  53. package/skills/file-todos/SKILL.md +72 -94
  54. package/skills/frontend-design/SKILL.md +243 -27
  55. package/skills/git-worktree/SKILL.md +37 -28
  56. package/skills/git-worktree/scripts/worktree-manager.sh +163 -0
  57. package/skills/lfg/SKILL.md +7 -7
  58. package/skills/orchestrating-swarms/SKILL.md +1 -1
  59. package/skills/reproduce-bug/SKILL.md +154 -60
  60. package/skills/resolve-pr-parallel/SKILL.md +19 -12
  61. package/skills/resolve-todo-parallel/SKILL.md +9 -6
  62. package/skills/setup/SKILL.md +8 -160
  63. package/skills/slfg/SKILL.md +11 -7
  64. package/skills/test-browser/SKILL.md +69 -145
  65. package/skills/test-xcode/SKILL.md +61 -183
  66. package/skills/triage/SKILL.md +10 -10
  67. package/skills/ce-plan-beta/SKILL.md +0 -571
  68. package/skills/deepen-plan-beta/SKILL.md +0 -323
@@ -1,101 +1,195 @@
  ---
  name: reproduce-bug
- description: Reproduce and investigate a bug using logs, console inspection, and browser screenshots
- argument-hint: '[GitHub issue number]'
- disable-model-invocation: true
+ description: Systematically reproduce and investigate a bug from a GitHub issue. Use when the user provides a GitHub issue number or URL for a bug they want reproduced or investigated.
+ argument-hint: '[GitHub issue number or URL]'
  ---
 
- # Reproduce Bug Command
+ # Reproduce Bug
 
- Look at github issue #$ARGUMENTS and read the issue description and comments.
+ A framework-agnostic, hypothesis-driven workflow for reproducing and investigating bugs from issue reports. Works across any language, framework, or project type.
 
- ## Phase 1: Log Investigation
+ ## Phase 1: Understand the Issue
 
- Run the following agents in parallel to investigate the bug:
+ Fetch and analyze the bug report to extract structured information before touching the codebase.
 
- 1. task rails-console-explorer(issue_description)
- 2. task appsignal-log-investigator(issue_description)
+ ### Fetch the issue
 
- Think about the places it could go wrong looking at the codebase. Look for logging output we can look for.
+ If no issue number or URL was provided as an argument, ask the user for one before proceeding (using the platform's question tool -- e.g., `question` in OpenCode, `request_user_input` in Codex, `ask_user` in Gemini -- or present a prompt and wait for a reply).
 
- Run the agents again to find any logs that could help us reproduce the bug.
+ ```bash
+ gh issue view $ARGUMENTS --json title,body,comments,labels,assignees
+ ```
 
- Keep running these agents until you have a good idea of what is going on.
+ If the argument is a URL rather than a number, extract the issue number or pass the URL directly to `gh`.
 
- ## Phase 2: Visual Reproduction with Playwright
+ ### Extract key details
 
- If the bug is UI-related or involves user flows, use Playwright to visually reproduce it:
+ Read the issue and comments, then identify:
 
- ### Step 1: Verify Server is Running
+ - **Reported symptoms** -- what the user observed (error message, wrong output, visual glitch, crash)
+ - **Expected behavior** -- what should have happened instead
+ - **Reproduction steps** -- any steps the reporter provided
+ - **Environment clues** -- browser, OS, version, user role, data conditions
+ - **Frequency** -- always reproducible, intermittent, or one-time
 
- ```
- mcp__plugin_compound-engineering_pw__browser_navigate({ url: "http://localhost:3000" })
- mcp__plugin_compound-engineering_pw__browser_snapshot({})
- ```
+ If the issue lacks reproduction steps or is ambiguous, note what is missing -- this shapes the investigation strategy.
 
- If server not running, inform user to start `bin/dev`.
+ ## Phase 2: Hypothesize
 
- ### Step 2: Navigate to Affected Area
+ Before running anything, form theories about the root cause. This focuses the investigation and prevents aimless exploration.
 
- Based on the issue description, navigate to the relevant page:
+ ### Search for relevant code
 
- ```
- mcp__plugin_compound-engineering_pw__browser_navigate({ url: "http://localhost:3000/[affected_route]" })
- mcp__plugin_compound-engineering_pw__browser_snapshot({})
- ```
+ Use the native content-search tool (e.g., Grep in OpenCode) to find code paths related to the reported symptoms. Search for:
+
+ - Error messages or strings mentioned in the issue
+ - Feature names, route paths, or UI labels described in the report
+ - Related model/service/controller names
+
+ ### Form hypotheses
+
+ Based on the issue details and code search results, write down 2-3 plausible hypotheses. Each should identify:
+
+ - **What** might be wrong (e.g., "race condition in session refresh", "nil check missing on optional field")
+ - **Where** in the codebase (specific files and line ranges)
+ - **Why** it would produce the reported symptoms
+
+ Rank hypotheses by likelihood. Start investigating the most likely one first.
+
+ ## Phase 3: Reproduce
 
- ### Step 3: Capture Screenshots
+ Attempt to trigger the bug. The reproduction strategy depends on the bug type.
 
- Take screenshots at each step of reproducing the bug:
+ ### Route A: Test-based reproduction (backend, logic, data bugs)
 
+ Write or find an existing test that exercises the suspected code path:
+
+ 1. Search for existing test files covering the affected code using the native file-search tool (e.g., Glob in OpenCode)
+ 2. Run existing tests to see if any already fail
+ 3. If no test covers the scenario, write a minimal failing test that demonstrates the reported behavior
+ 4. A failing test that matches the reported symptoms confirms the bug
+
+ ### Route B: Browser-based reproduction (UI, visual, interaction bugs)
+
+ Use the `agent-browser` CLI for browser automation. Do not use any alternative browser MCP integration or built-in browser-control tool. See the `agent-browser` skill for setup and detailed CLI usage.
+
+ #### Verify server is running
+
+ ```bash
+ agent-browser open http://localhost:${PORT:-3000}
+ agent-browser snapshot -i
  ```
- mcp__plugin_compound-engineering_pw__browser_take_screenshot({ filename: "bug-[issue]-step-1.png" })
+
+ If the server is not running, ask the user to start their development server and provide the correct port.
+
+ To detect the correct port, check project instruction files (`AGENTS.md`) for port references, then `package.json` dev scripts, then `.env` files, falling back to `3000`.
+
+ #### Follow reproduction steps
+
+ Navigate to the affected area and execute the steps from the issue:
+
+ ```bash
+ agent-browser open "http://localhost:${PORT}/[affected_route]"
+ agent-browser snapshot -i
  ```
 
- ### Step 4: Follow User Flow
+ Use `agent-browser` commands to interact with the page:
+ - `agent-browser click @ref` -- click elements
+ - `agent-browser fill @ref "text"` -- fill form fields
+ - `agent-browser snapshot -i` -- capture current state
+ - `agent-browser screenshot bug-evidence.png` -- save visual evidence
 
- Reproduce the exact steps from the issue:
+ #### Capture the bug state
 
- 1. **Read the issue's reproduction steps**
- 2. **Execute each step using Playwright:**
- - `browser_click` for clicking elements
- - `browser_type` for filling forms
- - `browser_snapshot` to see the current state
- - `browser_take_screenshot` to capture evidence
+ When the bug is reproduced:
+ 1. Take a screenshot of the error state
+ 2. Check for console errors: look at browser output and any visible error messages
+ 3. Record the exact sequence of steps that triggered it
 
- 3. **Check for console errors:**
- ```
- mcp__plugin_compound-engineering_pw__browser_console_messages({ level: "error" })
- ```
+ ### Route C: Manual / environment-specific reproduction
 
- ### Step 5: Capture Bug State
+ For bugs that require specific data conditions, user roles, external service state, or cannot be automated:
 
- When you reproduce the bug:
+ 1. Document what conditions are needed
+ 2. Ask the user (using the platform's question tool -- e.g., `question` in OpenCode, `request_user_input` in Codex, `ask_user` in Gemini -- or present options and wait for a reply) whether they can set up the required conditions
+ 3. Guide them through manual reproduction steps if needed
 
- 1. Take a screenshot of the bug state
- 2. Capture console errors
- 3. Document the exact steps that triggered it
+ ### If reproduction fails
+
+ If the bug cannot be reproduced after trying the most likely hypotheses:
+
+ 1. Revisit the remaining hypotheses
+ 2. Check if the bug is environment-specific (version, OS, browser, data-dependent)
+ 3. Search the codebase for recent changes to the affected area: `git log --oneline -20 -- [affected_files]`
+ 4. Document what was tried and what conditions might be missing
+
+ ## Phase 4: Investigate
+
+ Dig deeper into the root cause using whatever observability the project offers.
+
+ ### Check logs and traces
+
+ Search for errors, warnings, or unexpected behavior around the time of reproduction. What to check depends on the bug and what the project has available:
+
+ - **Application logs** -- search local log output (dev server stdout, log files) for error patterns, stack traces, or warnings using the native content-search tool
+ - **Error tracking** -- check for related exceptions in the project's error tracker (Sentry, AppSignal, Bugsnag, Datadog, etc.)
+ - **Browser console** -- for UI bugs, check developer console output for JavaScript errors, failed network requests, or CORS issues
+ - **Database state** -- if the bug involves data, inspect relevant records for unexpected values, missing associations, or constraint violations
+ - **Request/response cycle** -- check server logs for the specific request: status codes, params, timing, middleware behavior
+
+ ### Trace the code path
+
+ Starting from the entry point identified in Phase 2, trace the execution path:
+
+ 1. Read the relevant source files using the native file-read tool
+ 2. Identify where the behavior diverges from expectations
+ 3. Check edge cases: nil/null values, empty collections, boundary conditions, race conditions
+ 4. Look for recent changes that may have introduced the bug: `git log --oneline -10 -- [file]`
+
+ ## Phase 5: Document Findings
+
+ Summarize everything discovered during the investigation.
+
+ ### Compile the report
+
+ Organize findings into:
+
+ 1. **Root cause** -- what is actually wrong and where (with file paths and line numbers, e.g., `app/services/example_service.rb:42`)
+ 2. **Reproduction steps** -- verified steps to trigger the bug (mark as confirmed or unconfirmed)
+ 3. **Evidence** -- screenshots, test output, log excerpts, console errors
+ 4. **Suggested fix** -- if a fix is apparent, describe it with the specific code changes needed
+ 5. **Open questions** -- anything still unclear or needing further investigation
+
+ ### Present to user before any external action
+
+ Present the full report to the user. Do not post comments to the GitHub issue or take any external action without explicit confirmation.
+
+ Ask the user (using the platform's question tool, or present options and wait):
 
  ```
- mcp__plugin_compound-engineering_pw__browser_take_screenshot({ filename: "bug-[issue]-reproduced.png" })
+ Investigation complete. How to proceed?
+
+ 1. Post findings to the issue as a comment
+ 2. Start working on a fix
+ 3. Just review the findings (no external action)
  ```
 
- ## Phase 3: Document Findings
+ If the user chooses to post to the issue:
 
- **Reference Collection:**
+ ```bash
+ gh issue comment $ARGUMENTS --body "$(cat <<'EOF'
+ ## Bug Investigation
 
- - [ ] Document all research findings with specific file paths (e.g., `app/services/example_service.rb:42`)
- - [ ] Include screenshots showing the bug reproduction
- - [ ] List console errors if any
- - [ ] Document the exact reproduction steps
+ **Root Cause:** [summary]
 
- ## Phase 4: Report Back
+ **Reproduction Steps (verified):**
+ 1. [step]
+ 2. [step]
 
- Add a comment to the issue with:
+ **Relevant Code:** [file:line references]
 
- 1. **Findings** - What you discovered about the cause
- 2. **Reproduction Steps** - Exact steps to reproduce (verified)
- 3. **Screenshots** - Visual evidence of the bug (upload captured screenshots)
- 4. **Relevant Code** - File paths and line numbers
- 5. **Suggested Fix** - If you have one
+ **Suggested Fix:** [description if applicable]
+ EOF
+ )"
+ ```
 
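The port-detection fallback described in the updated skill (instruction files, then `package.json` dev scripts, then `.env`, then `3000`) can be sketched as a small shell helper. This is an illustrative sketch, not part of the package; the file names scanned are assumptions:

```shell
# Sketch of the skill's port-detection fallback chain.
# Scans candidate files for a "localhost:<port>" reference and
# falls back to 3000 when nothing is found.
detect_port() {
  for f in AGENTS.md package.json .env; do
    [ -f "$f" ] || continue
    port=$(grep -oE 'localhost:[0-9]+' "$f" | head -n1 | cut -d: -f2)
    if [ -n "$port" ]; then
      echo "$port"
      return 0
    fi
  done
  echo 3000
}

# Prints the detected port (3000 if no reference is found).
detect_port
```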
@@ -1,7 +1,7 @@
  ---
- name: resolve_pr_parallel
+ name: resolve-pr-parallel
  description: Resolve all PR comments using parallel processing. Use when addressing PR review feedback, resolving review threads, or batch-fixing PR comments.
- argument-hint: "[optional: PR number or current PR]"
+ argument-hint: '[optional: PR number or current PR]'
  disable-model-invocation: true
  allowed-tools: Bash(gh *), Bash(git *), Read
  ---
@@ -12,7 +12,7 @@ Resolve all unresolved PR review comments by spawning parallel agents for each t
 
  ## Context Detection
 
- OpenCode automatically detects git context:
+ Detect git context from the current working directory:
  - Current branch and associated PR
  - All PR comments and review threads
  - Works with any PR by specifying the number
@@ -21,7 +21,7 @@ OpenCode automatically detects git context:
 
  ### 1. Analyze
 
- Fetch unresolved review threads using the GraphQL script:
+ Fetch unresolved review threads using the GraphQL script at [scripts/get-pr-comments](scripts/get-pr-comments):
 
  ```bash
  bash scripts/get-pr-comments PR_NUMBER
@@ -37,7 +37,7 @@ gh api repos/{owner}/{repo}/pulls/PR_NUMBER/comments
 
  ### 2. Plan
 
- Create a todowrite list of all unresolved items grouped by type:
+ Create a task list of all unresolved items grouped by type (e.g., `todowrite` in OpenCode, `update_plan` in Codex):
  - Code changes requested
  - Questions to answer
  - Style/convention fixes
@@ -45,20 +45,24 @@ Create a todowrite list of all unresolved items grouped by type:
 
  ### 3. Implement (PARALLEL)
 
- Spawn a `pr-comment-resolver` agent for each unresolved item in parallel.
+ Spawn a `systematic:workflow:pr-comment-resolver` agent for each unresolved item.
 
- If there are 3 comments, spawn 3 agents:
+ If there are 3 comments, spawn 3 agents — one per comment. Prefer running all agents in parallel; if the platform does not support parallel dispatch, run them sequentially.
 
- 1. task pr-comment-resolver(comment1)
- 2. task pr-comment-resolver(comment2)
- 3. task pr-comment-resolver(comment3)
+ Keep parent-context pressure bounded:
+ - If there are 1-4 unresolved items, direct parallel returns are fine
+ - If there are 5+ unresolved items, launch in batches of at most 4 agents at a time
+ - Require each resolver agent to return a short status summary to the parent: comment/thread handled, files changed, tests run or skipped, any blocker that still needs human attention, and for question-only threads the substantive reply text so the parent can post or verify it
 
- Always run all in parallel subagents/Tasks for each Todo item.
+ If the PR is large enough that even batched short returns are likely to get noisy, use a per-run scratch directory such as `.context/systematic/resolve-pr-parallel/<run-id>/`:
+ - Have each resolver write a compact artifact for its thread there
+ - Return only a completion summary to the parent
+ - Re-read only the artifacts that are needed to resolve threads, answer reviewer questions, or summarize the batch
 
  ### 4. Commit & Resolve
 
  - Commit changes with a clear message referencing the PR feedback
- - Resolve each thread programmatically:
+ - Resolve each thread programmatically using [scripts/resolve-pr-thread](scripts/resolve-pr-thread):
 
  ```bash
  bash scripts/resolve-pr-thread THREAD_ID
@@ -76,6 +80,8 @@ bash scripts/get-pr-comments PR_NUMBER
 
  Should return an empty array `[]`. If threads remain, repeat from step 1.
 
+ If a scratch directory was used and the user did not ask to inspect it, clean it up after verification succeeds.
+
 
  ## Scripts
  - [scripts/get-pr-comments](scripts/get-pr-comments) - GraphQL query for unresolved review threads
@@ -87,3 +93,4 @@ Should return an empty array `[]`. If threads remain, repeat from step 1.
  - Changes committed and pushed
  - Threads resolved via GraphQL (marked as resolved on GitHub)
  - Empty result from get-pr-comments on verify
+
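The batching rule above (at most 4 resolver agents in flight at once) has the same shape as bounded parallelism in shell. A sketch with `xargs -P`, where `echo` is a stand-in for dispatching a resolver agent to one thread:

```shell
# Feed thread IDs to a worker, running at most 4 at a time.
# `echo` stands in for spawning a pr-comment-resolver agent;
# the thread IDs are illustrative.
printf '%s\n' thread-1 thread-2 thread-3 thread-4 thread-5 thread-6 |
  xargs -n1 -P4 sh -c 'echo "resolving $0"'
```

With `-n1 -P4`, each invocation handles one ID and no more than four run concurrently, so output order is not guaranteed.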
@@ -1,6 +1,6 @@
  ---
  name: resolve-todo-parallel
- description: Use this skill when there are multiple unresolved CLI todo files and you want to resolve them in parallel, document learnings, and clean up completed entries in one disciplined pass.
+ description: Resolve all pending CLI todos using parallel processing, compound on lessons learned, then clean up completed todos.
  argument-hint: '[optional: specific todo ID or pattern]'
  ---
 
@@ -10,9 +10,11 @@ Resolve all TODO comments using parallel processing, document lessons learned, t
 
  ### 1. Analyze
 
- Get all unresolved TODOs from the /todos/*.md directory
+ Get all unresolved TODOs from `.context/systematic/todos/*.md` and legacy `todos/*.md`
 
- If any todo recommends deleting, removing, or gitignoring files in `docs/brainstorms/`, `docs/plans/`, or `docs/solutions/`, skip it and mark it as `wont_fix`. These are Systematic workflow artifacts that are intentional and permanent.
+ Residual actionable work may come from `ce:review-beta mode:autonomous` after its in-skill `safe_auto` pass. Treat those todos as normal unresolved work items; the review skill has already decided they should not be auto-fixed inline.
+
+ If any todo recommends deleting, removing, or gitignoring files in `docs/brainstorms/`, `docs/plans/`, or `docs/solutions/`, skip it and mark it as `wont_fix`. These are systematic pipeline artifacts that are intentional and permanent.
 
  ### 2. Plan
 
@@ -20,7 +22,7 @@ Create a task list of all unresolved items grouped by type (e.g., `todowrite` in
 
  ### 3. Implement (PARALLEL)
 
- Spawn a `pr-comment-resolver` agent for each unresolved item.
+ Spawn a `systematic:workflow:pr-comment-resolver` agent for each unresolved item.
 
  If there are 3 items, spawn 3 agents — one per item. Prefer running all agents in parallel; if the platform does not support parallel dispatch, run them sequentially respecting the dependency order from step 2.
 
@@ -52,9 +54,9 @@ GATE: STOP. Verify that the compound skill produced a solution document in `docs
 
  ### 6. Clean Up Completed Todos
 
- List all todos and identify those with `done` or `resolved` status, then delete them to keep the todo list clean and actionable.
+ Search both `.context/systematic/todos/` and legacy `todos/` for files with `done`, `resolved`, or `complete` status, then delete them to keep the todo list clean and actionable.
 
- If a scratch directory was used and the user did not ask to inspect it, clean it up after todo cleanup succeeds.
+ If a per-run scratch directory was created at `.context/systematic/resolve-todo-parallel/<run-id>/`, and the user did not ask to inspect it, delete that specific `<run-id>/` directory after todo cleanup succeeds. Do not delete any other `.context/` subdirectories.
 
  After cleanup, output a summary:
 
@@ -63,3 +65,4 @@ Todos resolved: [count]
  Lessons documented: [path to solution doc, or "skipped"]
  Todos cleaned up: [count deleted]
  ```
+
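The cleanup step above can be sketched as a small shell helper. This is a sketch under an assumption the diff does not spell out: that each todo file carries a frontmatter-style line like `status: done`:

```shell
# Delete todo files whose status line marks them finished.
# Assumes each todo file contains a line like "status: done";
# checks the new .context path first, then the legacy path.
cleanup_done_todos() {
  for dir in .context/systematic/todos todos; do
    [ -d "$dir" ] || continue
    grep -lE '^status: (done|resolved|complete)$' "$dir"/*.md 2>/dev/null |
      while read -r f; do rm -- "$f"; done
  done
}

cleanup_done_todos
```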
@@ -1,174 +1,22 @@
  ---
  name: setup
- description: Configure which review agents run for your project. Auto-detects stack and writes systematic.local.md.
+ description: Configure project-level settings for systematic workflows. Currently a placeholder; review agent selection is handled automatically by ce:review.
  disable-model-invocation: true
  ---
 
  # Systematic Setup
 
- ## Interaction Method
+ Project-level configuration for systematic workflows.
 
- If `question` is available, use it for all prompts below.
+ ## Current State
 
- If not, present each question as a numbered list and wait for a reply before proceeding to the next step. For multiSelect questions, accept comma-separated numbers (e.g. `1, 3`). Never skip or auto-configure.
+ Review agent selection is handled automatically by the `ce:review` skill, which uses intelligent tiered selection based on diff content. No per-project configuration is needed for code reviews.
 
- Interactive setup for `systematic.local.md` configures which agents run during `/ce:review` and `/ce:work`.
+ If this skill is invoked, inform the user:
 
- ## Step 1: Check Existing Config
+ > Review agent configuration is no longer needed — `ce:review` automatically selects the right reviewers based on your diff. Project-specific review context (e.g., "we serve 10k req/s" or "watch for N+1 queries") belongs in your project's AGENTS.md, where all agents already read it.
 
- Read `systematic.local.md` in the project root. If it exists, display current settings summary and use question:
+ ## Future Use
 
- ```
- question: "Settings file already exists. What would you like to do?"
- header: "Config"
- options:
- - label: "Reconfigure"
- description: "Run the interactive setup again from scratch"
- - label: "View current"
- description: "Show the file contents, then stop"
- - label: "Cancel"
- description: "Keep current settings"
- ```
+ This skill is reserved for future project-level configuration needs beyond review agent selection.
 
- If "View current": read and display the file, then stop.
- If "Cancel": stop.
-
- ## Step 2: Detect and Ask
-
- Auto-detect the project stack:
-
- ```bash
- test -f Gemfile && test -f config/routes.rb && echo "rails" || \
- test -f Gemfile && echo "ruby" || \
- test -f tsconfig.json && echo "typescript" || \
- test -f package.json && echo "javascript" || \
- test -f pyproject.toml && echo "python" || \
- test -f requirements.txt && echo "python" || \
- echo "general"
- ```
-
- Use question:
-
- ```
- question: "Detected {type} project. How would you like to configure?"
- header: "Setup"
- options:
- - label: "Auto-configure (Recommended)"
- description: "Use smart defaults for {type}. Done in one click."
- - label: "Customize"
- description: "Choose stack, focus areas, and review depth."
- ```
-
- ### If Auto-configure → Skip to Step 4 with defaults:
-
- - **Rails:** `[kieran-rails-reviewer, dhh-rails-reviewer, code-simplicity-reviewer, security-sentinel, performance-oracle]`
- - **Python:** `[kieran-python-reviewer, code-simplicity-reviewer, security-sentinel, performance-oracle]`
- - **TypeScript:** `[kieran-typescript-reviewer, code-simplicity-reviewer, security-sentinel, performance-oracle]`
- - **General:** `[code-simplicity-reviewer, security-sentinel, performance-oracle, architecture-strategist]`
-
- ### If Customize → Step 3
-
- ## Step 3: Customize (3 questions)
-
- **a. Stack** — confirm or override:
-
- ```
- question: "Which stack should we optimize for?"
- header: "Stack"
- options:
- - label: "{detected_type} (Recommended)"
- description: "Auto-detected from project files"
- - label: "Rails"
- description: "Ruby on Rails — adds DHH-style and Rails-specific reviewers"
- - label: "Python"
- description: "Python — adds Pythonic pattern reviewer"
- - label: "TypeScript"
- description: "TypeScript — adds type safety reviewer"
- ```
-
- Only show options that differ from the detected type.
-
- **b. Focus areas** — multiSelect:
-
- ```
- question: "Which review areas matter most?"
- header: "Focus"
- multiSelect: true
- options:
- - label: "Security"
- description: "Vulnerability scanning, auth, input validation (security-sentinel)"
- - label: "Performance"
- description: "N+1 queries, memory leaks, complexity (performance-oracle)"
- - label: "Architecture"
- description: "Design patterns, SOLID, separation of concerns (architecture-strategist)"
- - label: "Code simplicity"
- description: "Over-engineering, YAGNI violations (code-simplicity-reviewer)"
- ```
-
- **c. Depth:**
-
- ```
- question: "How thorough should reviews be?"
- header: "Depth"
- options:
- - label: "Thorough (Recommended)"
- description: "Stack reviewers + all selected focus agents."
- - label: "Fast"
- description: "Stack reviewers + code simplicity only. Less context, quicker."
- - label: "Comprehensive"
- description: "All above + git history, data integrity, agent-native checks."
- ```
-
- ## Step 4: Build Agent List and Write File
-
- **Stack-specific agents:**
- - Rails → `kieran-rails-reviewer, dhh-rails-reviewer`
- - Python → `kieran-python-reviewer`
- - TypeScript → `kieran-typescript-reviewer`
- - General → (none)
-
- **Focus area agents:**
- - Security → `security-sentinel`
- - Performance → `performance-oracle`
- - Architecture → `architecture-strategist`
- - Code simplicity → `code-simplicity-reviewer`
-
- **Depth:**
- - Thorough: stack + selected focus areas
- - Fast: stack + `code-simplicity-reviewer` only
- - Comprehensive: all above + `git-history-analyzer, data-integrity-guardian, agent-native-reviewer`
-
- **Plan review agents:** stack-specific reviewer + `code-simplicity-reviewer`.
-
- Write `systematic.local.md`:
-
- ```markdown
- ---
- review_agents: [{computed agent list}]
- plan_review_agents: [{computed plan agent list}]
- ---
-
- # Review Context
-
- Add project-specific review instructions here.
- These notes are passed to all review agents during /ce:review and /ce:work.
-
- Examples:
- - "We use Turbo Frames heavily — check for frame-busting issues"
- - "Our API is public — extra scrutiny on input validation"
- - "Performance-critical: we serve 10k req/s on this endpoint"
- ```
-
- ## Step 5: Confirm
-
- ```
- Saved to systematic.local.md
-
- Stack: {type}
- Review depth: {depth}
- Agents: {count} configured
- {agent list, one per line}
-
- Tip: Edit the "Review Context" section to add project-specific instructions.
- Re-run this setup anytime to reconfigure.
- ```
@@ -9,28 +9,32 @@ Swarm-enabled LFG. Run these steps in order, parallelizing where indicated. Do n
 
  ## Sequential Phase
 
- 1. **Optional:** If the `ralph-wiggum` skill is available, run `/ralph-wiggum:ralph-loop "finish all slash commands" --completion-promise "DONE"`. If not available or it fails, skip and continue to step 2 immediately.
- 2. `/systematic:ce-plan $ARGUMENTS`
+ 1. **Optional:** If the `ralph-loop` skill is available, run `/ralph-loop:ralph-loop "finish all slash commands" --completion-promise "DONE"`. If not available or it fails, skip and continue to step 2 immediately.
+ 2. `/ce:plan $ARGUMENTS`
  3. **Conditionally** run `/systematic:deepen-plan`
  - Run the `deepen-plan` workflow only if the plan is `Standard` or `Deep`, touches a high-risk area (auth, security, payments, migrations, external APIs, significant rollout concerns), or still has obvious confidence gaps in decisions, sequencing, system-wide impact, risks, or verification
  - If you run the `deepen-plan` workflow, confirm the plan was deepened or explicitly judged sufficiently grounded before moving on
  - If you skip it, note why and continue to step 4
- 4. `/systematic:ce-work` — **Use swarm mode**: Make a Task list and launch an army of agent swarm subagents to build the plan
+ 4. `/ce:work` — **Use swarm mode**: Make a Task list and launch an army of agent swarm subagents to build the plan
 
  ## Parallel Phase
 
  After work completes, launch steps 5 and 6 as **parallel swarm agents** (both only need code to be written):
 
- 5. `/systematic:ce-review` — spawn as background Task agent
+ 5. `/ce:review mode:report-only` — spawn as background Task agent
  6. `/systematic:test-browser` — spawn as background Task agent
 
  Wait for both to complete before continuing.
 
+ ## Autofix Phase
+
+ 7. `/ce:review mode:autofix` — run sequentially after the parallel phase so it can safely mutate the checkout, apply `safe_auto` fixes, and emit residual todos for step 8
+
  ## Finalize Phase
 
- 7. `/systematic:resolve_todo_parallel` — resolve any findings from the review
- 8. `/systematic:feature-video` — record the final walkthrough and add to PR
- 9. Output `<promise>DONE</promise>` when video is in PR
+ 8. `/systematic:todo-resolve` — resolve findings, compound on learnings, clean up completed todos
+ 9. `/systematic:feature-video` — record the final walkthrough and add to PR
+ 10. Output `<promise>DONE</promise>` when video is in PR
 
  Start with step 1 now.