@rexeus/agentic 0.3.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/LICENSE +202 -0
- package/README.md +201 -0
- package/assets/opencode/agents/analyst.md +358 -0
- package/assets/opencode/agents/architect.md +308 -0
- package/assets/opencode/agents/developer.md +311 -0
- package/assets/opencode/agents/lead.md +368 -0
- package/assets/opencode/agents/refiner.md +418 -0
- package/assets/opencode/agents/reviewer.md +285 -0
- package/assets/opencode/agents/scout.md +241 -0
- package/assets/opencode/agents/tester.md +323 -0
- package/assets/opencode/commands/agentic-commit.md +128 -0
- package/assets/opencode/commands/agentic-develop.md +170 -0
- package/assets/opencode/commands/agentic-plan.md +165 -0
- package/assets/opencode/commands/agentic-polish.md +190 -0
- package/assets/opencode/commands/agentic-pr.md +226 -0
- package/assets/opencode/commands/agentic-review.md +119 -0
- package/assets/opencode/commands/agentic-simplify.md +123 -0
- package/assets/opencode/commands/agentic-verify.md +193 -0
- package/bin/agentic.js +139 -0
- package/opencode/config.mjs +453 -0
- package/opencode/doctor.mjs +9 -0
- package/opencode/guardrails.mjs +172 -0
- package/opencode/install.mjs +48 -0
- package/opencode/manifest.mjs +34 -0
- package/opencode/plugin.mjs +53 -0
- package/opencode/uninstall.mjs +64 -0
- package/package.json +69 -0
- package/skills/conventions/SKILL.md +83 -0
- package/skills/git-conventions/SKILL.md +141 -0
- package/skills/quality-patterns/SKILL.md +73 -0
- package/skills/security/SKILL.md +77 -0
- package/skills/setup/SKILL.md +105 -0
- package/skills/testing/SKILL.md +113 -0
@@ -0,0 +1,170 @@
---
description: "Start implementation of a planned feature or task. Runs the full pipeline from understanding through testing."
agent: "lead"
---

<!-- Generated by pnpm run sync:opencode-commands -->

# Develop

Start implementation. This command runs the full development pipeline:
understand → design → build → verify.

**Usage:**

- `/agentic-develop Implement the token refresh logic from the plan`
- `/agentic-develop continue` — continue from where the last session left off

## Prerequisites

This command works best after `/agentic-plan` has produced an approved plan.
If no plan exists, the command will create a lightweight plan first.

## Workflow

### Step 0: Create Progress Tracker

Before starting any work, create a task list that maps the full pipeline
for this task. Each task names the responsible agent and describes the
concrete step. Example:

1. "Scout the relevant modules" — scout (skip if already done)
2. "Design implementation approach" — architect (skip if plan exists)
3. "Implement feature X" — developer
4. "Review implementation" — reviewer
5. "Write and run tests" — tester
6. "Refine if needed" — refiner (optional)

Mark tasks `in_progress` as you start them and `completed` when done.
Skip tasks that were already covered by a prior `/agentic-plan` run.
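The tracker above amounts to a small state machine. A minimal sketch, assuming a plain list of tasks with string statuses (the `Task`/`Tracker` names and the `skipped` status are illustrative, not part of the pipeline spec):

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    # One pipeline step: what to do, plus the responsible agent
    description: str
    agent: str
    status: str = "pending"  # pending -> in_progress -> completed (or skipped)

@dataclass
class Tracker:
    tasks: list = field(default_factory=list)

    def start(self, i):
        self.tasks[i].status = "in_progress"

    def complete(self, i):
        self.tasks[i].status = "completed"

    def skip(self, i):
        # e.g. already covered by a prior /agentic-plan run
        self.tasks[i].status = "skipped"

    def remaining(self):
        return [t for t in self.tasks if t.status in ("pending", "in_progress")]

tracker = Tracker([
    Task("Scout the relevant modules", "scout"),
    Task("Design implementation approach", "architect"),
    Task("Implement feature X", "developer"),
])
tracker.skip(0)      # scouting already done in a prior run
tracker.start(1)
tracker.complete(1)
```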

### Step 1: Establish Context

If `$ARGUMENTS` contains "continue":

1. Check `git log --oneline -5` for recent commits
2. Check `git diff` for unstaged changes (work in progress)
3. Check `git diff --cached` for staged changes
4. Present your understanding of the current state to the user for confirmation
   before proceeding

Otherwise, determine the task:

- Is there an existing plan from `/agentic-plan`? Use it.
- Is `$ARGUMENTS` a clear, self-contained task? Proceed directly.
- Is the task ambiguous? Ask the user to clarify, or suggest running
  `/agentic-plan` first.

### Step 2: Reconnaissance (if needed)

If the task touches unfamiliar code:

1. Deploy **scout** to map the relevant modules
2. If the scout returns insufficient context, ask the user for guidance
3. Deploy **analyst** if the scout reveals complexity

Skip this step if a recent `/agentic-plan` already covered the codebase.

### Step 3: Design (if needed)

If no architecture plan exists:

1. Deploy **architect** to produce a lightweight implementation plan
2. Present the plan to the user for approval
3. Wait for confirmation before proceeding
4. If the user rejects the plan, iterate on the design or ask for
   clarification. Do not proceed to implementation without approval.

Skip this step if `/agentic-plan` already produced an approved design.

### Step 4: Implement

Deploy **developer** with a briefing concrete enough that the developer
can start coding immediately — no interpretation, no planning needed.

**Required in every developer briefing:**

1. **Implementation plan** — Pass through the architect's full plan, not a
   summary. Must include: files to create/modify, interfaces/signatures,
   implementation order, and edge cases to handle.
2. **Scout report** — Codebase patterns, naming conventions, file structure.
3. **Scope boundary** — What is in scope, what is explicitly out.
4. **Test command** — How to run the test suite.

**Rule of thumb:** If you can read your briefing and immediately know which
file to open and what to type, it's concrete enough. If you'd need to
"figure out the approach first" — it's too vague and the developer will
plan instead of code.

The developer implements incrementally. After each logical unit:

- Verify the code compiles/parses
- Run existing tests to catch regressions

### Step 5: Verify

Launch in parallel:

**reviewer** — Analyze the implementation for:

- Correctness, security, convention adherence
- Alignment with the architecture plan
- Confidence-scored findings (threshold 80)

**tester** — Write and run tests for:

- New functionality (unit tests)
- Edge cases identified in the plan
- Integration points
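The reviewer's confidence gate is a plain filter: keep findings scored at or above 80, drop the rest. A sketch, assuming findings are dicts with `issue` and `confidence` keys (the field names are this sketch's assumption, not the reviewer's actual output format):

```python
CONFIDENCE_THRESHOLD = 80  # findings below this score are dropped, per Step 5

def actionable_findings(findings, threshold=CONFIDENCE_THRESHOLD):
    """Keep only findings scored at or above the threshold, strongest first."""
    return sorted(
        (f for f in findings if f["confidence"] >= threshold),
        key=lambda f: f["confidence"],
        reverse=True,
    )

findings = [
    {"issue": "missing null check", "confidence": 95},
    {"issue": "possible style nit", "confidence": 40},
    {"issue": "unvalidated input", "confidence": 80},
]
kept = actionable_findings(findings)
```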

### Step 6: Iterate

If the reviewer or tester found issues:

1. Summarize all findings for the user
2. Ask whether to fix now or note for later
3. If fixing: send the developer back with specific findings
4. Re-run verification after fixes

Repeat until the reviewer passes and tests are green,
or the user decides to stop.
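The loop above can be sketched with stubbed callables standing in for the agents (all names here are hypothetical; a real round involves the developer, reviewer, and tester):

```python
def iterate(run_review, run_tests, fix, max_rounds=5):
    """Repeat fix -> verify until both checks pass or rounds run out."""
    for round_no in range(1, max_rounds + 1):
        issues = run_review()        # reviewer pass
        tests_green = run_tests()    # tester pass
        if not issues and tests_green:
            return round_no          # verification clean
        fix(issues)                  # send the developer back with findings
    return None                      # budget exhausted: the user decides

# Stub: review finds one issue on the first pass, none after the fix
state = {"fixed": False}

def run_review():
    return [] if state["fixed"] else ["deep nesting in parser"]

def run_tests():
    return True

def fix(issues):
    if issues:
        state["fixed"] = True

rounds = iterate(run_review, run_tests, fix)
```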

### Step 7: Refine (optional)

If the reviewer flagged complexity, deep nesting, or convoluted logic
(severity Warning or above), or if the user requests simplification:

1. Ask the user whether to simplify before committing
2. If yes: deploy the **refiner** with the reviewer's findings and the
   current file list
3. The refiner simplifies incrementally, verifying tests after each change
4. Re-run the **tester** to confirm nothing broke

Skip this step if the code is already clean and the reviewer had no
complexity-related findings.

### Step 8: Summary

When complete, present:

```
## Development Summary

### What was built
<brief description>

### Files changed
- `path/file.ts` — created / modified (what changed)

### Test results
- X tests written, all passing
- Coverage: X% for new code

### Review findings
- X issues found, X fixed, X deferred

### Next steps
- <any remaining work or follow-up tasks>
```

Suggest running `/agentic-commit` to commit the changes.
@@ -0,0 +1,165 @@
---
description: "Plan a new feature or task. Critically evaluates requirements, presents options, and produces an implementation plan."
agent: "lead"
---

<!-- Generated by pnpm run sync:opencode-commands -->

# Plan

Plan a new feature or task. This command challenges assumptions, explores
options, and produces a concrete plan — before any code is written.

**Usage:**

- `/agentic-plan Add user authentication with OAuth`
- `/agentic-plan Refactor the payment module for multi-currency support`

## Philosophy

The most expensive bugs are wrong assumptions made before the first line of code.
This command exists to catch them. Your job is to be the critical thinker who
asks the hard questions BEFORE the team starts building.

## Workflow

### Step 1: Understand the Request

Read `$ARGUMENTS` carefully. If no arguments were provided, ask the user
what they want to build.

This is the most critical step. A brilliant plan for the wrong problem is
worthless. Your job is to be a **critical thinking partner** — not a
yes-machine that immediately starts planning.

#### 1a: Clarify

Ask questions until the problem is crystal clear:

- **What exactly should change?** Get specific. "Improve performance" is not
  a requirement — "Reduce API response time from 2s to 200ms" is.
- **What are the success criteria?** How will we know it's done? What does
  "done" look like?
- **Who is this for?** End users? Other developers? An internal system?
- **What's the context?** Why now? What triggered this? Is there urgency?
- **What are the constraints?** Budget, time, tech stack, backwards compatibility?

#### 1b: Challenge

Play devil's advocate. Respectfully but firmly question the idea:

- **Is this the right problem?** Or is it a symptom of a deeper issue?
- **Do we actually need this?** What's the cost of NOT doing it?
- **What could go wrong?** What are the risks, failure modes, unintended consequences?
- **Is there a simpler way?** Could we achieve 80% of the value with 20% of the effort?
- **What are we giving up?** Every feature has an opportunity cost. What won't we build?
- **Have we seen this pattern before?** Does the codebase already solve a similar
  problem we can learn from?

#### 1c: Confirm Understanding

Restate the problem in your own words. Include:

- The core problem being solved
- The key constraints
- What success looks like
- Anything explicitly out of scope

**Do NOT proceed until the user confirms your understanding.** If they
correct you, update and confirm again. This loop can take multiple rounds.
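The confirm-understanding loop reduces to: restate, ask, fold corrections in, repeat until agreement. A sketch with a scripted user (the `responses` list stands in for real user input; every name here is hypothetical):

```python
def confirm_understanding(restatement, ask_user, revise, max_rounds=10):
    """Loop until the user confirms the restated problem; may take several rounds."""
    for _ in range(max_rounds):
        reply = ask_user(restatement)
        if reply == "confirmed":
            return restatement
        restatement = revise(restatement, reply)  # fold the correction in
    raise RuntimeError("understanding never confirmed")

# Scripted user: one correction, then confirmation
responses = iter(["scope also includes the CLI", "confirmed"])

def ask_user(text):
    return next(responses)

def revise(text, correction):
    return f"{text}; {correction}"

final = confirm_understanding("Add OAuth login for end users", ask_user, revise)
```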

### Step 2: Reconnaissance

Deploy the **scout** to map relevant areas of the codebase:

- What exists today that relates to this feature?
- What patterns does the codebase use?
- What constraints exist (framework, language, architecture)?

If the scout reveals complexity, deploy the **analyst** to trace
the relevant code paths in depth.

### Step 3: Present Options

Deploy the **architect** to design 2-3 approaches. For each option, present:

```
## Option A: <name>

**Approach:** What this option does and how it works.

**Pros:**
- ...

**Cons:**
- ...

**Effort:** Low / Medium / High

**Risk:** Low / Medium / High

**Fits existing patterns:** Yes / No — explanation.
```

**Always present at least 2 options.** If there seems to be only one way,
think harder — there's always a trade-off worth exploring.

Include a recommendation, but make it clear this is YOUR recommendation
and the user decides.

### Step 4: User Decision

Wait for the user to choose an option or provide feedback.
Do NOT proceed to implementation planning without explicit user choice.

If the user has questions or wants to modify an option, iterate.
This step can take multiple rounds — that's the point.

### Step 5: Implementation Plan

Once the user approves an approach, produce a concrete plan:

```
## Implementation Plan: <feature>

### Overview
<1-2 sentences describing the chosen approach>

### Files to Create
- `path/file.ts` — purpose

### Files to Modify
- `path/file.ts` — what changes and why

### Implementation Steps
1. <step> — <why this order>
2. <step>
3. <step>

### Edge Cases
- <case> — how to handle

### Testing Strategy
- Unit tests for: ...
- Integration tests for: ...

### Open Questions
- <anything still unresolved>
```

Present the plan for final approval.

### Step 6: Transition to Development

Once the plan is approved, ask the user directly:

> "The plan is ready. Any questions, or shall we start building?"

- If the user has questions — answer them, iterate on the plan.
- If the user says go — **transition seamlessly into the develop pipeline.**
  Do NOT wait for them to manually invoke `/agentic-develop`. You have the
  plan, the context, and the scout findings. Proceed directly:
  1. Create a progress tracking task list based on the implementation steps
  2. Start with Step 4 of the develop workflow (Implement), since planning
     and reconnaissance are already done
  3. Continue through verification, iteration, and summary as normal
@@ -0,0 +1,190 @@
---
description: "Iterative codebase harmonization. Discovers patterns, finds inconsistencies, and unifies code across files."
agent: "lead"
---

<!-- Generated by pnpm run sync:opencode-commands -->

# Polish

Iterative codebase harmonization. This command discovers the patterns your
project already uses, finds where files diverge, and systematically unifies
everything — same voice, same structure, same quality across every file.

**Designed for the loop.** Run it, review the changes, clear context, run again.
Each pass finds fewer issues until the codebase converges.
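That outer loop can be sketched as: run a pass, count what remains, stop at zero. A minimal sketch where `run_pass` is a stub standing in for a whole `/agentic-polish` run (names and the stubbed counts are illustrative):

```python
def polish_until_converged(run_pass, max_passes=5):
    """Re-run polish passes until one reports zero open inconsistencies."""
    history = []
    for _ in range(max_passes):
        remaining = run_pass()  # one full polish pass; returns open issue count
        history.append(remaining)
        if remaining == 0:
            break  # harmonized
    return history

# Stub: each pass resolves most of what the previous one found
counts = iter([12, 3, 0])
history = polish_until_converged(lambda: next(counts))
```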

**Usage:**

- `/agentic-polish src/services/` — polish a specific area
- `/agentic-polish services and repositories` — polish by concept
- `/agentic-polish` — will ask what to polish
- `/agentic-polish --dry-run` — discovery + plan only, no changes

## Philosophy

Consistency is not about rules — it's about voice. When every file reads like it
was written by the same developer, the codebase becomes navigable by intuition.
You know where things are before you look. You know how they work before you read.

This is not `/agentic-review` (which finds issues in a diff) and not
`/agentic-simplify` (which reduces complexity within code). Polish compares
**peer files against each other** and unifies them toward a single voice.

## Workflow

### Step 1: Determine Scope

Parse `$ARGUMENTS` to understand what to polish:

- **File path** → polish that file against its peers
- **Directory** → polish all files in that area
- **Concept** (e.g., "services", "hooks", "components") → scout for relevant files
- **`--dry-run`** → run Discovery + Synthesis only, skip Execute
- **No arguments** → Ask the user:
  "What should I polish? Options:
  1. A specific directory (provide path)
  2. An architectural layer (e.g., 'services', 'components', 'hooks')
  3. The entire repository
  4. Discovery only (`--dry-run` for analysis without changes)"
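The scope rules above could be parsed roughly like this; treating anything containing `/` or `.` as path-like and everything else as a concept is an assumption of this sketch, not how the command actually classifies input:

```python
def parse_scope(arguments):
    """Classify $ARGUMENTS into (mode, target, dry_run) per the rules above."""
    dry_run = "--dry-run" in arguments
    rest = arguments.replace("--dry-run", "").strip()
    if not rest:
        return ("ask", None, dry_run)       # no arguments: ask the user
    if "/" in rest or "." in rest:
        return ("path", rest, dry_run)      # file or directory
    return ("concept", rest, dry_run)       # e.g. "services and repositories"

mode, target, dry_run = parse_scope("src/services/")
```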

### Step 2: Discovery

Deploy 4 agents in parallel. Each brings a different lens — independent
analysis, no cross-contamination between agents.

**Scout 1 — Structure Map:**

> Map the file structure within `<scope>`. For each file group (e.g., services,
> components, hooks, utils), report: file names, naming patterns, export
> patterns, file sizes (lines), and directory organization. Facts only.

**Scout 2 — Quantitative Profile:**

> Measure the code within `<scope>`. For each file, report: number of
> functions/methods, function lengths (lines), parameter counts, nesting
> depth, import counts. Present as a table. Facts only.

**Analyst 1 — Pattern Extraction:**

> Analyze the code within `<scope>` and extract the patterns currently in
> use. Document: how are similar files structured? What conventions are
> followed? What is the common shape of a service / component / hook /
> repository? Produce a **Pattern Catalog** — the project's current voice.

**Analyst 2 — Cross-File Comparison:**

> Compare files of the same type within `<scope>`. For each group of peer
> files (e.g., all services, all components), identify: where do they
> diverge in structure, naming, error handling, or approach? Where is code
> duplicated? Where do patterns contradict? Produce an **Inconsistency Report**.
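Scout 2's measurements are plain counting. A minimal sketch for Python sources using the standard `ast` module (a real scout would pick a parser per language; the sample code and function names are made up for illustration):

```python
import ast

def profile(source):
    """Per-function (name, line count, parameter count) rows for one Python file."""
    tree = ast.parse(source)
    rows = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            length = node.end_lineno - node.lineno + 1
            params = len(node.args.args)
            rows.append((node.name, length, params))
    return rows

sample = """
def add(a, b):
    return a + b

def greet(name):
    msg = f"hi {name}"
    return msg
"""
rows = profile(sample)
```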

### Step 3: Codebase Portrait

Synthesize the 4 reports into a unified view:

```
## Codebase Portrait: <scope>

### Current Patterns
<The project's voice — what conventions and structures are actually used>

### Inconsistencies

**High** — Patterns that fundamentally differ between peer files:
- ...

**Medium** — Structural differences, naming inconsistencies:
- ...

**Low** — Minor formatting, ordering differences:
- ...

### Duplication
<Code that appears in multiple places and should be extracted>

### Recommended Standard
<For each file group: the pattern that should become the norm,
based on what the majority already does or what the best example does>

### Estimated Scope
- Files to change: <count>
- Nature: <what kinds of changes>
```

Present the Codebase Portrait to the user. **Wait for approval before proceeding.**

If `--dry-run` was specified, stop here. The portrait IS the deliverable.

### Step 4: Unification Plan

Deploy the **architect** with the Codebase Portrait:

> Design a unification plan for `<scope>`. The Codebase Portrait shows the
> current patterns and inconsistencies. Produce a concrete plan: which
> pattern becomes the standard for each file group, which files need changes,
> and in what order (dependencies first, leaf files last). Group changes
> logically so each group can be verified independently.

Present the plan to the user. **Wait for approval.**

### Step 5: Execute

Deploy the **developer** with:

- The full architect plan (not a summary)
- The Pattern Catalog from Analyst 1 (so the developer matches the voice)
- The scout reports (for codebase context)
- Clear scope boundaries
- Test command

The developer works group by group. After each logical group:

1. Run tests to catch regressions
2. Verify the changes match the approved pattern

If the scope is large, check in with the user between groups.

### Step 6: Verify

Launch in parallel:

**reviewer** — Check that the changes are consistent with the approved pattern
and don't introduce bugs. Compare modified files against the Pattern Catalog.

**tester** — Confirm all tests pass and no behavior changed.

If the reviewer finds regressions toward inconsistency, send the developer
back with specific findings. Re-verify after fixes.

### Step 7: Convergence Report

```
## Polish Report: <scope>

### Changes Applied
- Files modified: <count>
- Nature: <summary of what was unified>

### Consistency
- Before: <X inconsistencies across Y file groups>
- After: <X inconsistencies remaining>

### What Was Unified
1. **<category>** — <what changed and why>
2. ...

### What Remains
- <areas that were out of scope or need a separate pass>
- <deeper structural issues that require design decisions>

### Convergence
<One of:>
- "Significant inconsistencies remain. Another pass is recommended."
- "Minor inconsistencies remain. One more pass should reach convergence."
- "No significant inconsistencies detected. Codebase is harmonized."

### Next Steps
- <If more passes needed>: "Clear context and run `/agentic-polish <scope>` again."
- <If harmonized>: "Ready for `/agentic-commit`."
```