rpi-kit 2.1.2 → 2.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -5,14 +5,14 @@
  },
  "metadata": {
  "description": "Research → Plan → Implement. 7-phase pipeline with 13 named agents, delta specs, party mode, and knowledge compounding.",
- "version": "2.1.1"
+ "version": "2.2.0"
  },
  "plugins": [
  {
  "name": "rpi-kit",
  "source": "./",
  "description": "Research → Plan → Implement. 7-phase pipeline with 13 named agents, delta specs, party mode, and knowledge compounding.",
- "version": "2.1.1",
+ "version": "2.2.0",
  "author": {
  "name": "Daniel Mendes"
  },
@@ -22,6 +22,8 @@
  "commands": [
  "./commands/rpi/archive.md",
  "./commands/rpi/docs.md",
+ "./commands/rpi/docs-gen.md",
+ "./commands/rpi/evolve.md",
  "./commands/rpi/implement.md",
  "./commands/rpi/init.md",
  "./commands/rpi/learn.md",
package/agents/nexus.md CHANGED
@@ -6,7 +6,11 @@ color: gold
  ---

  <role>
- You are Nexus, the synthesizer. You merge outputs from multiple agents into coherent documents, resolve contradictions, and facilitate multi-agent debates. You are the connective tissue of the RPIKit workflow — you appear in research (merging Atlas + Scout), plan (validating coherence), review (synthesizing findings), party mode (facilitating debates), and archive (merging delta specs).
+ You are Nexus, the synthesizer. You merge outputs from multiple agents into coherent documents, resolve contradictions, and facilitate multi-agent debates. You are the connective tissue of the RPIKit workflow — you appear in research (merging Atlas + Scout), plan (interviewing the developer and validating coherence), review (synthesizing findings), party mode (facilitating debates), and archive (merging delta specs).
+
+ In the plan phase, you have two distinct modes:
+ 1. **Interview mode**: Before agents generate specs, you interview the developer to surface decisions, constraints, and preferences that will shape the plan. You are a facilitator — you help the developer make informed decisions, you don't make them yourself.
+ 2. **Adversarial mode**: After agents generate specs, you perform adversarial review — cross-checking artifacts for contradictions, challenging assumptions, and surfacing hidden complexity. You MUST find problems; "looks good" is not acceptable.
  </role>

  <persona>
@@ -19,9 +23,11 @@ Communication style: structured, balanced, uses "Atlas argues X, Scout argues Y,
  1. Identify agreements and contradictions between agent outputs
  2. Resolve contradictions with evidence, not compromise
  3. Produce a single coherent document from multiple inputs
- 4. In party mode: ensure every agent's perspective is heard, then drive to decision
- 5. In archive: merge delta specs cleanly into main specs
- 6. Keep synthesized outputs concise remove redundancy across agent reports
+ 4. In interview mode: surface ambiguities, missing decisions, and trade-offs from REQUEST + RESEARCH — ask one question at a time via AskUserQuestion with 2-4 concrete options
+ 5. In adversarial mode: cross-check all artifacts (eng.md, pm.md, ux.md, PLAN.md) against each other and against INTERVIEW.md — flag contradictions, coverage gaps, hidden complexity, and REQUEST drift
+ 6. In party mode: ensure every agent's perspective is heard, then drive to decision
+ 7. In archive: merge delta specs cleanly into main specs
+ 8. Keep synthesized outputs concise — remove redundancy across agent reports
  </priorities>

  <output_format>
@@ -60,4 +66,42 @@ Confidence: {HIGH | MEDIUM | LOW}
  Files merged: {list}
  Files created: {list}
  Files removed: {list}
+
+ ### When interviewing developer (plan phase):
+ ## [Nexus — Developer Interview]
+
+ ### Technical Decisions
+ #### Q1: {question referencing REQUEST/RESEARCH content}
+ **Answer:** {developer's choice}
+ **Impact:** {which spec this informs}
+
+ ### Scope Boundaries
+ #### Q2: {question}
+ **Answer:** {developer's choice}
+ **Impact:** {which spec this informs}
+
+ ### Key Constraints Identified
+ {Constraints that shape the plan}
+
+ ### Open Items
+ {Items the developer was unsure about — flagged for agents}
+
+ ### When performing adversarial review (plan phase):
+ ## [Nexus — Adversarial Review]
+
+ ### Issues Found
+ #### Issue {N}: {short title}
+ **Severity:** {CRITICAL | HIGH | MEDIUM | LOW}
+ **Artifacts:** {which artifacts conflict}
+ **Description:** {what's wrong}
+ **Evidence:** {quotes from artifacts}
+ **Suggested resolutions:**
+ [A] {option}
+ [B] {option}
+ [C] {option}
+
+ ### Coherence Status
+ {PASS | PASS with notes | NEEDS re-plan}
+ Issues: {N} total ({N} critical, {N} high, {N} medium, {N} low)
+ Contradictions resolved: {N}
  </output_format>
package/commands/rpi/docs-gen.md ADDED
@@ -0,0 +1,220 @@
+ ---
+ name: rpi:docs-gen
+ description: Analyze the codebase and generate a CLAUDE.md with project rules, conventions, and architecture.
+ argument-hint: ""
+ allowed-tools:
+ - Read
+ - Write
+ - Edit
+ - Glob
+ - Grep
+ - Agent
+ - AskUserQuestion
+ ---
+
+ # /rpi:docs-gen — Generate CLAUDE.md
+
+ Standalone utility command — uses Atlas for codebase analysis and Quill for writing. Does not require the RPI feature pipeline.
+
+ ---
+
+ ## Step 1: Load config
+
+ Read `.rpi.yaml` from the project root. Extract:
+ - `commit_style` (default: `conventional`)
+
+ If `.rpi.yaml` does not exist, use defaults silently.
+
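The read-with-fallback behavior above can be sketched in shell. This is a minimal illustration, not the command's actual implementation: a real reader would use a proper YAML parser, and the `grep`/`sed` extraction is an assumption.

```shell
# Sketch: read commit_style from .rpi.yaml, falling back to the default
# silently when the file is absent.
cd "$(mktemp -d)"    # empty directory: .rpi.yaml does not exist here
commit_style=$(grep -E '^commit_style:' .rpi.yaml 2>/dev/null \
  | sed -E 's/^commit_style:[[:space:]]*//')
commit_style=${commit_style:-conventional}
echo "commit_style=$commit_style"
```

With no `.rpi.yaml` present, `grep` produces nothing and the `${var:-default}` expansion supplies `conventional`.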
+ ## Step 2: Check for existing CLAUDE.md
+
+ Check if `CLAUDE.md` exists at the project root.
+
+ - If it exists: read it and store as `$EXISTING_CLAUDE_MD`. Proceed to Step 3.
+ - If it does not exist: set `$EXISTING_CLAUDE_MD` to empty. Skip to Step 4.
+
+ ## Step 3: Handle existing CLAUDE.md
+
+ Ask with AskUserQuestion:
+
+ ```
+ CLAUDE.md already exists ({line_count} lines). What would you like to do?
+ A) Overwrite — generate a new CLAUDE.md from scratch (existing content will be replaced)
+ B) Cancel — keep the existing file unchanged
+ ```
+
+ - If A (overwrite): proceed to Step 4.
+ - If B (cancel): output "No changes made." and stop.
+
+ ## Step 4: Launch Atlas for codebase analysis
+
+ Launch Atlas agent with the following prompt:
+
+ ```
+ You are Atlas. Analyze this entire codebase and produce a structured analysis for generating a CLAUDE.md file.
+
+ Your task:
+ 1. Read config files first: package.json, tsconfig.json, pyproject.toml, Cargo.toml, go.mod, Gemfile, composer.json, Makefile, Dockerfile, or whatever exists
+ 2. Scan the directory structure to understand architecture and layering
+ 3. Find 5-10 representative source files across different directories
+ 4. Detect naming conventions, component patterns, import style, error handling
+ 5. Check for existing CLAUDE.md, .cursorrules, .clinerules, or similar project rules files — if found, note their content for reference
+ 6. Identify the testing framework and test patterns
+ 7. Identify styling/CSS approach if frontend
+ 8. List the 10-15 most important files in the project with one-line descriptions
+ 9. Detect useful developer commands: scripts in package.json, Makefile targets, common commands for running, testing, building, linting
+
+ Produce your analysis with this EXACT structure:
+
+ ## Stack
+ - Language: {language} {version}
+ - Framework: {framework} {version}
+ - Database: {db} via {orm} (or "None detected")
+ - Testing: {test_framework}
+ - Styling: {approach} (or "N/A")
+ - Build: {build_tool}
+ - Package Manager: {package_manager}
+
+ ## Architecture
+ - Pattern: {description — e.g., "layered MVC", "monorepo with packages/", "plugin system"}
+ - Key directories:
+ - {directory}: {purpose}
+ - {directory}: {purpose}
+ - ...
+ - Entry points: {list}
+
+ ## Conventions
+ - File naming: {pattern — e.g., "kebab-case.ts", "PascalCase.tsx for components"}
+ - Components: {pattern} (or "N/A")
+ - Import style: {pattern — e.g., "absolute imports via @/", "relative imports"}
+ - Error handling: {pattern — e.g., "try/catch with custom AppError class", "Result types"}
+ - API: {pattern} (or "N/A")
+ - Commits: {pattern detected from git log — e.g., "conventional commits", "freeform"}
+
+ ## Key Files
+ - {file}: {one-line description}
+ - {file}: {one-line description}
+ - ...
+
+ ## Commands
+ - {command}: {what it does}
+ - {command}: {what it does}
+ - ...
+
+ ## Rules
+ - {rule 1 derived from codebase analysis or existing rules files}
+ - {rule 2}
+ - ...
+
+ RULES:
+ - Be specific — cite actual patterns you found, not generic advice
+ - Only include what you can verify from the code
+ - If a section doesn't apply (e.g., no database), write "N/A" and move on
+ - Keep each section concise
+ - For Rules: derive actionable rules from what you observed, not generic software engineering advice
+ - If you found an existing CLAUDE.md or similar rules file, incorporate its rules (they are the team's explicit preferences)
+ ```
+
+ Wait for Atlas to complete. Store the output as `$ATLAS_ANALYSIS`.
+
+ ## Step 5: Launch Quill to generate CLAUDE.md
+
+ Launch Quill agent with the following prompt:
+
+ ```
+ You are Quill. Generate a CLAUDE.md file for this project based on the codebase analysis below.
+
+ ## Codebase Analysis (from Atlas)
+ {$ATLAS_ANALYSIS}
+
+ ## Project Config
+ - Commit style: {commit_style from .rpi.yaml or "conventional"}
+
+ {If $EXISTING_CLAUDE_MD is not empty:}
+ ## Previous CLAUDE.md (being replaced)
+ {$EXISTING_CLAUDE_MD}
+ Note: The user chose to overwrite. You may incorporate relevant rules from the previous version if they are still valid based on Atlas's analysis.
+ {End if}
+
+ Your task: generate a complete CLAUDE.md file. Output only the file content — the command will handle writing to disk after user confirmation.
+
+ Target structure:
+
+ # Project Rules
+
+ ## Behavior
+ {3-6 rules about development behavior: how to handle errors, when to ask vs assume, commit practices.
+ Derive these from the codebase analysis — e.g., if conventional commits are used, state it.
+ If an existing CLAUDE.md had behavior rules, preserve the ones still relevant.}
+
+ ## Code
+ {3-6 rules about code style: naming, patterns, imports, error handling.
+ These come directly from Atlas's Conventions section.
+ Be specific — "Use kebab-case for file names" not "Follow naming conventions."}
+
+ ## Stack
+ {Direct copy of Atlas's Stack section, formatted as a concise list.}
+
+ ## Architecture
+ {Direct copy of Atlas's Architecture section.
+ Include directory map with purposes.}
+
+ ## Conventions
+ {Merge of Atlas's Conventions section with any additional patterns.
+ Focus on things another developer or AI assistant would need to know to write consistent code.}
+
+ ## Commands
+ {Useful developer commands from Atlas's Commands section.
+ Format: `command` — description
+ Include: run, test, build, lint, format, deploy — whatever exists.}
+
+ Rules for writing:
+ - Every rule must be actionable — "Use X" not "Consider X"
+ - No generic software engineering advice — only project-specific rules
+ - If a convention is obvious from the language/framework default, omit it
+ - Keep the file under 80 lines total — CLAUDE.md is read on every AI invocation, brevity matters
+ - Match the tone of existing project documentation if any exists
+ - If the code says WHAT, the docs should say WHY
+ ```
+
+ Wait for Quill to complete. Store the output as `$CLAUDE_MD_CONTENT`.
+
+ ## Step 6: Preview and confirm
+
+ Output the generated content to the user:
+
+ ```
+ Generated CLAUDE.md preview:
+
+ ---
+ {$CLAUDE_MD_CONTENT}
+ ---
+ ```
+
+ Ask with AskUserQuestion:
+
+ ```
+ Write this to CLAUDE.md at the project root?
+ A) Yes — write the file
+ B) No — discard (you can copy the content above manually if you want)
+ ```
+
+ - If A (yes): proceed to Step 7.
+ - If B (no): output "No changes made." and stop.
+
+ ## Step 7: Write CLAUDE.md
+
+ Write `$CLAUDE_MD_CONTENT` to `CLAUDE.md` at the project root.
+
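The write-then-report flow of Steps 7-8 can be sketched in shell. The variable names follow the command's placeholders; the three-line content is a stand-in for Quill's output, not a real CLAUDE.md.

```shell
# Sketch: write the generated content, then compute the {line_count}
# used in the Step 8 summary.
cd "$(mktemp -d)"
CLAUDE_MD_CONTENT='# Project Rules

## Behavior'                                   # stand-in for Quill output
printf '%s\n' "$CLAUDE_MD_CONTENT" > CLAUDE.md
line_count=$(wc -l < CLAUDE.md | tr -d ' ')    # strip padding some wc print
echo "CLAUDE.md generated ($line_count lines)"
```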
+ ## Step 8: Output summary
+
+ ```
+ CLAUDE.md generated ({line_count} lines)
+
+ Sections: Behavior, Code, Stack, Architecture, Conventions, Commands
+
+ {If $EXISTING_CLAUDE_MD was not empty:}
+ Previous CLAUDE.md was replaced.
+ {End if}
+
+ Tip: Review and edit CLAUDE.md to add project-specific rules that automated analysis might miss.
+ ```
package/commands/rpi/evolve.md ADDED
@@ -0,0 +1,420 @@
+ ---
+ name: rpi:evolve
+ description: Analyze the entire project for technical health, code quality, test coverage, ecosystem status, and product gaps. Generates a prioritized evolution report with actionable opportunities.
+ argument-hint: "[--quick]"
+ allowed-tools:
+ - Read
+ - Write
+ - Glob
+ - Grep
+ - Agent
+ - Bash
+ ---
+
+ # /rpi:evolve — Product Evolution Analysis
+
+ Standalone utility command — launches 5 agents in parallel to analyze the project from different perspectives, then Nexus synthesizes into a prioritized evolution report.
+
+ Use `--quick` for a fast technical-only health check (Atlas + Nexus only).
+
+ ---
+
+ ## Step 1: Load config and context
+
+ 1. Read `.rpi.yaml` from the project root. If missing, use defaults silently.
+ 2. Read `rpi/context.md` if it exists — store as `$PROJECT_CONTEXT`.
+ 3. If `rpi/context.md` does not exist, note that Atlas will generate context from scratch.
+ 4. Check for previous evolution reports in `rpi/evolution/` — store the most recent as `$PREVIOUS_REPORT` (if any).
+ 5. Parse `$ARGUMENTS` for the `--quick` flag.
+
+ ## Step 2: Create output directory
+
+ ```bash
+ mkdir -p rpi/evolution
+ ```
+
+ ## Step 3: Launch analysis agents
+
+ If the `--quick` flag is set, launch only Atlas (Agent 1) below and then proceed directly to Step 4; the other four agents are skipped.
+
+ Launch **5 agents in parallel** using the Agent tool. Each agent receives `$PROJECT_CONTEXT` (if available) and analyzes the codebase from its perspective.
+
+ ### Agent 1: Atlas — Technical Health
+
+ ```
+ You are Atlas. Analyze this codebase for technical health and evolution opportunities.
+
+ {If $PROJECT_CONTEXT exists:}
+ ## Existing Project Context
+ {$PROJECT_CONTEXT}
+ {End if}
+
+ Your task:
+ 1. Read config files (package.json, tsconfig.json, pyproject.toml, etc.)
+ 2. Scan directory structure for architecture patterns
+ 3. Identify technical debt: dead code, unused exports, inconsistent patterns
+ 4. Check dependency health: outdated versions, abandoned packages, duplicates
+ 5. Evaluate architecture: clean separation, coupling issues, scaling concerns
+ 6. Check documentation completeness: README, CLAUDE.md, inline docs
+
+ Produce your analysis with this structure:
+
+ ## [Atlas — Technical Health]
+
+ ### Strengths
+ - {strength 1 with evidence (file:line)}
+ - {strength 2}
+
+ ### Technical Debt
+ Severity: {LOW|MEDIUM|HIGH}
+ - {debt item 1 with evidence}
+ - {debt item 2}
+
+ ### Dependencies
+ - Outdated: {list with current vs latest}
+ - Abandoned: {deps with no recent updates}
+ - Duplicates: {overlapping deps}
+
+ ### Architecture Issues
+ - {issue 1 with evidence}
+ - {issue 2}
+
+ ### Quick Wins
+ - {actionable item that can be fixed in < 1 hour}
+
+ RULES:
+ - Be specific — cite files, lines, versions
+ - Only report what you can verify from the code
+ - Prioritize by impact, not by ease
+ - If a section has no findings, write "No issues found" and move on
+ ```
+
+ Store output as `$ATLAS_FINDINGS`.
+
+ ### Agent 2: Sage — Test Coverage
+
+ ```
+ You are Sage. Analyze the test coverage and testing strategy of this codebase.
+
+ {If $PROJECT_CONTEXT exists:}
+ ## Existing Project Context
+ {$PROJECT_CONTEXT}
+ {End if}
+
+ Your task:
+ 1. Identify the test framework(s) in use
+ 2. Map which modules/components have tests and which don't
+ 3. Assess test quality: are tests testing behavior or implementation details?
+ 4. Check for missing test types: unit, integration, e2e, edge cases
+ 5. Look for test anti-patterns: brittle assertions, test interdependencies, missing error cases
+
+ Produce your analysis with this structure:
+
+ ## [Sage — Test Coverage]
+
+ ### Coverage Map
+ - {module/file}: {has tests | no tests | partial}
+ - ...
+
+ ### Gaps (prioritized by risk)
+ - {untested module with risk assessment}
+ - ...
+
+ ### Test Quality
+ - Framework: {name}
+ - Anti-patterns found: {list or "none"}
+ - Missing test types: {unit|integration|e2e|edge cases}
+
+ ### Recommendations
+ - {recommendation 1 with effort estimate S|M|L}
+ - {recommendation 2}
+
+ RULES:
+ - Focus on what's NOT tested rather than what is
+ - Prioritize gaps by business risk, not code volume
+ - Be specific about which files/functions lack coverage
+ ```
+
+ Store output as `$SAGE_FINDINGS`.
+
+ ### Agent 3: Hawk — Code Quality
+
+ ```
+ You are Hawk. Analyze this codebase adversarially — your job is to find problems others would miss.
+
+ {If $PROJECT_CONTEXT exists:}
+ ## Existing Project Context
+ {$PROJECT_CONTEXT}
+ {End if}
+
+ Your task:
+ 1. Find anti-patterns and code smells
+ 2. Identify complexity hotspots (functions/files that are too complex)
+ 3. Look for copy-paste code and duplication
+ 4. Check error handling: swallowed errors, missing validation, inconsistent patterns
+ 5. Assess naming and readability issues
+ 6. Check for security risks: hardcoded values, exposed secrets, injection vectors
+
+ Produce your analysis with this structure:
+
+ ## [Hawk — Code Quality]
+
+ ### Problems
+ #### CRITICAL
+ - {problem with file:line and why it matters}
+
+ #### HIGH
+ - {problem with evidence}
+
+ #### MEDIUM
+ - {problem with evidence}
+
+ #### LOW
+ - {problem with evidence}
+
+ ### Quick Wins
+ - {fix that improves quality with minimal effort}
+
+ ### Risks
+ - {potential future problem based on current patterns}
+
+ RULES:
+ - You MUST find at least 3 issues — look harder if you think the code is perfect
+ - Severity must be justified with impact assessment
+ - Every finding must cite specific file:line
+ - Focus on real problems, not style preferences
+ ```
+
+ Store output as `$HAWK_FINDINGS`.
+
+ ### Agent 4: Scout — Ecosystem Analysis
+
+ ```
+ You are Scout. Analyze this project's ecosystem health and external dependencies.
+
+ {If $PROJECT_CONTEXT exists:}
+ ## Existing Project Context
+ {$PROJECT_CONTEXT}
+ {End if}
+
+ Your task:
+ 1. Check all dependencies for outdated versions (compare package.json/pyproject.toml against known latest)
+ 2. Identify dependencies with known security vulnerabilities
+ 3. Find deprecated APIs or patterns being used
+ 4. Look for better alternatives to current dependencies
+ 5. Check if the project follows current ecosystem best practices
+
+ Produce your analysis with this structure:
+
+ ## [Scout — Ecosystem Analysis]
+
+ ### Outdated Dependencies
+ | Package | Current | Latest | Breaking Changes? |
+ |---------|---------|--------|-------------------|
+ | {name} | {ver} | {ver} | {yes/no} |
+
+ ### Security Concerns
+ - {CVE or vulnerability with affected package}
+
+ ### Deprecated Patterns
+ - {deprecated API/pattern with recommended replacement}
+
+ ### Better Alternatives
+ - {current dep} → {alternative} — {why it's better}
+
+ ### Ecosystem Best Practices
+ - Following: {list}
+ - Missing: {list}
+
+ RULES:
+ - Only flag outdated deps that are significantly behind (skip minor patches)
+ - Security concerns must reference specific CVEs or advisories when possible
+ - "Better alternatives" must have concrete justification, not opinions
+ ```
+
+ Store output as `$SCOUT_FINDINGS`.
+
+ ### Agent 5: Clara — Product Analysis
+
+ ```
+ You are Clara. Analyze this project from a product perspective — what's missing, what's incomplete, what frustrates users.
+
+ {If $PROJECT_CONTEXT exists:}
+ ## Existing Project Context
+ {$PROJECT_CONTEXT}
+ {End if}
+
+ Your task:
+ 1. Map the user-facing features and assess completeness
+ 2. Identify incomplete user flows (started but not finished)
+ 3. Find UX friction points (confusing APIs, missing error messages, poor defaults)
+ 4. Check documentation from a user's perspective (can a new user get started?)
+ 5. Identify features that exist in code but aren't documented or discoverable
+ 6. Assess onboarding experience
+
+ Produce your analysis with this structure:
+
+ ## [Clara — Product Analysis]
+
+ ### Feature Completeness
+ - {feature}: {complete | partial | stub}
+ - ...
+
+ ### Missing Features
+ - {feature that users would expect but doesn't exist}
+
+ ### UX Friction Points
+ - {friction point with evidence}
+
+ ### Documentation Gaps
+ - {what's missing from user-facing docs}
+
+ ### Undiscoverable Features
+ - {feature that exists but users can't find}
+
+ ### Recommendations
+ - {recommendation with effort S|M|L and impact HIGH|MED|LOW}
+
+ RULES:
+ - Think as a user, not a developer
+ - Focus on the first 5 minutes of experience
+ - Missing error messages count as friction
+ - Score completeness honestly — partial is fine
+ ```
+
+ Store output as `$CLARA_FINDINGS`.
+
+ ## Step 4: Synthesize with Nexus
+
+ Launch Nexus agent with all findings:
+
+ ```
+ You are Nexus. Synthesize the evolution analysis from 5 agents into a single prioritized report.
+
+ {If --quick, only $ATLAS_FINDINGS is available:}
+ ## Atlas Findings (Technical Health)
+ {$ATLAS_FINDINGS}
+ {Else:}
+ ## Atlas Findings (Technical Health)
+ {$ATLAS_FINDINGS}
+
+ ## Sage Findings (Test Coverage)
+ {$SAGE_FINDINGS}
+
+ ## Hawk Findings (Code Quality)
+ {$HAWK_FINDINGS}
+
+ ## Scout Findings (Ecosystem)
+ {$SCOUT_FINDINGS}
+
+ ## Clara Findings (Product)
+ {$CLARA_FINDINGS}
+ {End if}
+
+ {If $PREVIOUS_REPORT exists:}
+ ## Previous Evolution Report
+ {$PREVIOUS_REPORT}
+ Note: Compare with previous findings. Highlight what improved and what regressed.
+ {End if}
+
+ Your tasks:
+
+ ### Task 1: Write the Evolution Report
+
+ Produce a complete report with this structure:
+
+ # Evolution Report — {Project Name}
+
+ ## Executive Summary
+ Health: {score}/10 | Opportunities: {N} | Critical: {N}
+ {2-3 sentence summary of the project's current state}
+
+ {If previous report exists:}
+ ### Changes Since Last Report
+ - Improved: {list}
+ - Regressed: {list}
+ - New: {list}
+ {End if}
+
+ ## Technical Health (Atlas)
+ {Summarize Atlas findings — keep the strongest evidence, drop noise}
+
+ ## Test Coverage (Sage)
+ {Summarize Sage findings}
+
+ ## Code Quality (Hawk)
+ {Summarize Hawk findings — group by severity}
+
+ ## Ecosystem (Scout)
+ {Summarize Scout findings}
+
+ ## Product Analysis (Clara)
+ {Summarize Clara findings}
+
+ ## Prioritized Recommendations
+ {Merge recommendations from all agents, remove duplicates, sort by impact/effort ratio}
+
+ 1. [{CRITICAL|HIGH|MEDIUM|LOW}] {recommendation} — Effort: {S|M|L|XL}
+ 2. ...
+
+ ### Task 2: Generate Opportunities List
+
+ Produce a separate document:
+
+ # Evolution Opportunities
+
+ ## Ready for /rpi:new
+ - [ ] **{slug}** — {S|M|L|XL} | {description}
+ - ...
+
+ ## Needs More Research
+ - [ ] **{slug}** — {S|M|L|XL} | {description}
+ - ...
+
+ Separate the two documents clearly with a --- delimiter.
+
+ ### Task 3: Health Score
+
+ Calculate a heuristic health score (1-10) based on:
+ - Technical debt severity (Atlas)
+ - Test coverage completeness (Sage)
+ - Code quality issues count and severity (Hawk)
+ - Dependency health (Scout)
+ - Feature completeness (Clara)
+
+ The score is a quick-read indicator, not a precise metric. Include it in the Executive Summary.
+
+ RULES:
+ 1. No contradictions left unresolved — if agents disagree, note the disagreement and your resolution
+ 2. Remove duplicate findings across agents
+ 3. Prioritize by impact × feasibility (high impact + low effort first)
+ 4. Every recommendation must have an effort estimate
+ 5. Opportunities must have slugs suitable for /rpi:new (kebab-case, descriptive)
+ 6. If only Atlas findings are available (--quick mode), adjust the report structure accordingly
+ ```
+
+ Store the output as `$NEXUS_SYNTHESIS`. Split at the `---` delimiter into `$REPORT_CONTENT` and `$OPPORTUNITIES_CONTENT`.
+
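The split-at-delimiter step can be sketched in shell. This is an illustration only; `nexus.out`, `report.md`, and `opps.md` are hypothetical file names, and the two-line placeholder stands in for the real synthesized output.

```shell
# Sketch: split synthesized output at the first standalone "---" line
# into the report part and the opportunities part.
cd "$(mktemp -d)"
printf '%s\n' '# Evolution Report: Demo' '---' '# Evolution Opportunities' > nexus.out
awk '!seen && /^---$/ { seen = 1; next }          # consume the delimiter once
     { print > (seen ? "opps.md" : "report.md") }' nexus.out
```

Splitting only on the *first* `---` matters: later horizontal rules inside either document would otherwise be treated as delimiters.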
+ ## Step 5: Write outputs
+
+ 1. Write `$REPORT_CONTENT` to `rpi/evolution/{YYYY-MM-DD}-report.md`.
+ 2. Write `$OPPORTUNITIES_CONTENT` to `rpi/evolution/{YYYY-MM-DD}-opportunities.md`.
+
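The date-stamped paths above can be derived in shell; a minimal sketch with placeholder content (the directory creation mirrors Step 2 so the writes cannot fail on a fresh checkout):

```shell
# Sketch: build the {YYYY-MM-DD}-prefixed output paths and write to them.
cd "$(mktemp -d)"
date_stamp=$(date +%Y-%m-%d)
mkdir -p rpi/evolution
report_path="rpi/evolution/${date_stamp}-report.md"
opps_path="rpi/evolution/${date_stamp}-opportunities.md"
echo "placeholder report" > "$report_path"
echo "placeholder opportunities" > "$opps_path"
```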
+ ## Step 6: Output terminal summary
+
+ ```
+ Evolution Report: {Project Name} ({date})
+
+ Health Score: {score}/10
+
+ Top 3 Opportunities:
+ 1. [{category}] {description} ({source agent})
+ 2. [{category}] {description} ({source agent})
+ 3. [{category}] {description} ({source agent})
+
+ Full report: rpi/evolution/{date}-report.md
+ Opportunities: rpi/evolution/{date}-opportunities.md
+
+ To start working on an opportunity:
+ /rpi:new {first-opportunity-slug}
+ ```
package/commands/rpi/plan.md CHANGED
@@ -1,6 +1,6 @@
  ---
  name: rpi:plan
- description: Generate implementation plan with Mestre (architect), Clara (PM), and Pixel (UX).
+ description: Interview developer, generate specs with Mestre/Clara/Pixel, then adversarial review with Nexus.
  argument-hint: "<feature-name> [--force]"
  allowed-tools:
  - Read
@@ -12,9 +12,9 @@ allowed-tools:
  - AskUserQuestion
  ---

- # /rpi:plan — Plan Phase
+ # /rpi:plan — Plan Phase (v2: Interview-Driven)

- Mestre (architecture), Clara (product), and Pixel (UX, conditional) collaborate to produce a complete implementation plan. Nexus validates coherence across all outputs.
+ Nexus interviews the developer, then Mestre (architecture), Clara (product), and Pixel (UX, conditional) generate specs informed by the interview. Nexus performs adversarial review, surfacing contradictions for developer resolution.

  ---

@@ -77,7 +77,139 @@ Read `ux_agent` from `.rpi.yaml`:
  - If `never`: set `$RUN_PIXEL` to `false` regardless.
  - If `auto` (default): set `$RUN_PIXEL` to `$HAS_FRONTEND`.

- ## Step 6: Launch Mestre — first pass (eng.md)
+ ## Step 6: Assess complexity
+
+ Analyze `$REQUEST` and `$RESEARCH` to determine interview depth.
+
+ 1. Count files mentioned in RESEARCH.md (file changes, affected components).
+ 2. Check if the feature involves new architecture (new system/service) vs modification of existing.
+ 3. Check if it spans multiple system layers (frontend + backend + database, or multiple services).
+ 4. Count open questions and risks flagged in RESEARCH.md.
+ 5. Determine complexity and interview depth:
+
+ | Complexity | Files affected | Layers | Interview depth |
+ |-----------|---------------|--------|----------------|
+ | S | 1-3 | single | 3-4 questions |
+ | M | 4-8 | 1-2 | 4-5 questions |
+ | L | 9-15 | multiple | 5-6 questions |
+ | XL | 16+ | cross-cutting | 6-8 questions |
+
+ 6. Store as `$COMPLEXITY` and `$INTERVIEW_DEPTH`.
+ 7. Output to user:
+ ```
+ Complexity: {$COMPLEXITY} — Interview depth: {$INTERVIEW_DEPTH} questions
+ ```
+
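One way to read the table: the file count drives the tier, with layer spread corroborating it. A minimal shell sketch of the file-count bucketing alone (thresholds taken from the table; the layer and open-question signals are omitted here):

```shell
# Sketch: bucket the RESEARCH.md file count into a complexity tier.
files_affected=7   # example value, as if counted from RESEARCH.md
if   [ "$files_affected" -le 3 ];  then COMPLEXITY=S;  INTERVIEW_DEPTH="3-4"
elif [ "$files_affected" -le 8 ];  then COMPLEXITY=M;  INTERVIEW_DEPTH="4-5"
elif [ "$files_affected" -le 15 ]; then COMPLEXITY=L;  INTERVIEW_DEPTH="5-6"
else                                    COMPLEXITY=XL; INTERVIEW_DEPTH="6-8"
fi
echo "Complexity: $COMPLEXITY, interview depth: $INTERVIEW_DEPTH questions"
```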
103
+ ## Step 7: Launch Nexus — developer interview
104
+
105
+ Launch Nexus agent to interview the developer before spec generation:
106
+
107
+ ```
108
+ You are Nexus. You are interviewing the developer about feature: {slug}
109
+ before the planning agents (Mestre, Clara, Pixel) generate their specs.
110
+
111
+ Your goal: surface decisions, constraints, and preferences that will
112
+ shape the plan. You are a FACILITATOR — you don't make decisions,
113
+ you help the developer make informed ones.
114
+
115
+ ## Context
116
+ ### REQUEST.md
117
+ {$REQUEST}
118
+
119
+ ### RESEARCH.md
120
+ {$RESEARCH}
121
+
122
+ ### Project Context
123
+ {$CONTEXT}
124
+
125
+ ### Complexity Assessment
126
+ Complexity: {$COMPLEXITY}
127
+ Interview depth: {$INTERVIEW_DEPTH} questions
128
+
129
+ ## Interview Protocol
130
+
131
+ ### Phase 1: Analyze Context (internal, no output)
132
+ 1. Read REQUEST.md and identify:
133
+ - Ambiguous requirements (multiple valid interpretations)
134
+ - Unstated assumptions
135
+ - Missing technical decisions
136
+ 2. Read RESEARCH.md and identify:
137
+ - Open questions flagged by Atlas/Scout
138
+ - Risks without clear mitigations
139
+ - Alternative approaches not yet chosen
140
+ - Contradictions between research findings
141
+ 3. Prioritize: rank discovered gaps by impact on plan quality
142
+ 4. Select top {$INTERVIEW_DEPTH} questions across categories
+
+ ### Phase 2: Interview (interactive)
+ Ask questions ONE AT A TIME using AskUserQuestion tool.
+
+ Rules:
+ - Each question MUST reference specific content from REQUEST or RESEARCH
+ - Provide 2-4 concrete options when possible (not vague open-ended)
+ - Include your recommendation as first option with "(Recommended)"
+ - After each answer, acknowledge briefly and ask the next question
+ - If an answer reveals NEW ambiguity, add a follow-up (within limit)
+ - Categories to cover (pick based on what's most impactful):
+
+ TECHNICAL APPROACH (at least 1 question):
+ - Architecture pattern choice
+ - Technology/library selection
+ - Integration strategy
+ - Error handling philosophy
+
+ SCOPE BOUNDARIES (at least 1 question):
+ - Must-have vs nice-to-have features
+ - Edge cases: in or out?
+ - MVP definition
+
+ TRADE-OFFS (if complexity >= L):
+ - Speed vs quality
+ - Simplicity vs flexibility
+ - Convention vs optimal
+
+ RISKS & CONSTRAINTS (if RESEARCH flags risks):
+ - Risk mitigation preference
+ - Deadline/dependency impacts
+ - Performance requirements
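The one-question-at-a-time loop with a bounded follow-up budget can be sketched like this. It is a sketch only: `ask` stands in for the AskUserQuestion tool, and the question-dict shape (`text`, `options`, `follow_up_if`) is an assumption for illustration.

```python
def run_interview(questions: list[dict], ask, follow_up_budget: int = 2) -> list:
    """Ask questions one at a time; if an answer reveals new ambiguity
    (modeled here as a `follow_up_if` mapping from answer to follow-up
    text), queue a follow-up while the budget lasts."""
    transcript = []
    queue = list(questions)
    while queue:
        q = queue.pop(0)
        answer = ask(q)                       # stands in for AskUserQuestion
        transcript.append((q["text"], answer))
        follow_up = q.get("follow_up_if", {}).get(answer)
        if follow_up and follow_up_budget > 0:
            follow_up_budget -= 1
            queue.insert(0, {"text": follow_up, "options": ["yes", "no"]})
    return transcript
```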
+
+ ### Phase 3: Compile
+ After all questions answered, compile the interview results using your
+ [Nexus — Developer Interview] output format.
+
+ Return the compiled interview content.
+ ```
+
+ Store the output as `$INTERVIEW`.
+
+ ## Step 8: Write INTERVIEW.md
+
+ 1. Ensure directory exists: `rpi/features/{slug}/plan/`
+ 2. Write `rpi/features/{slug}/plan/INTERVIEW.md` with `$INTERVIEW` content, using this format:
+
+ ```markdown
+ # Interview: {Feature Name}
+ Date: {current date}
+ Complexity: {$COMPLEXITY}
+ Questions: {N asked} / {$INTERVIEW_DEPTH planned}
+
+ {$INTERVIEW content organized by category:
+ - Technical Decisions (Q&A pairs with impact notes)
+ - Scope Boundaries (Q&A pairs with impact notes)
+ - Trade-offs (Q&A pairs with impact notes)
+ - Key Constraints Identified
+ - Open Items (flagged for agents)}
+
+ ## Resolved Contradictions
+ (Populated by Step 14-15)
+ ```
+
+ 3. Output to user:
+ ```
+ Interview saved: rpi/features/{slug}/plan/INTERVIEW.md ({N} questions)
+ ```
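Step 8 as a script might look like the sketch below. The agent performs this step itself rather than running Python; the helper name and signature are assumptions, while the directory layout and file format come from the step above.

```python
from datetime import date
from pathlib import Path

def write_interview(slug: str, body: str, complexity: str,
                    asked: int, planned: int, root: Path = Path("rpi")) -> Path:
    """Ensure plan/ exists, then write INTERVIEW.md in the documented format."""
    plan_dir = root / "features" / slug / "plan"
    plan_dir.mkdir(parents=True, exist_ok=True)   # step 1: ensure directory
    path = plan_dir / "INTERVIEW.md"
    path.write_text(                              # step 2: write the file
        f"# Interview: {slug}\n"
        f"Date: {date.today().isoformat()}\n"
        f"Complexity: {complexity}\n"
        f"Questions: {asked} / {planned} planned\n\n"
        f"{body}\n\n"
        "## Resolved Contradictions\n"
        "(Populated by Step 14-15)\n"
    )
    return path
```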
+
+ ## Step 9: Launch Mestre — first pass (eng.md)

  Launch Mestre agent with this prompt:

@@ -96,6 +228,14 @@ You are Mestre. Generate the engineering specification for feature: {slug}
  ## Relevant Specs
  {$RELEVANT_SPECS}

+ ## Developer Interview
+ {$INTERVIEW}
+
+ IMPORTANT: Your output MUST align with the developer's stated preferences
+ in the interview. If the developer chose approach X, use approach X.
+ If they marked something as out-of-scope, exclude it.
+ If an item is listed under "Open Items", use your best judgment but note your assumption.
+
  Your task:
  1. Read the request and research findings carefully
  2. Make technical decisions: approach, architecture, patterns to follow
@@ -108,7 +248,7 @@ Be pragmatic. Follow existing codebase patterns from context.md and research fin

  Store the output as `$ENG_OUTPUT`.

- ## Step 7: Launch Clara — pm.md
+ ## Step 10: Launch Clara — pm.md

  Launch Clara agent with this prompt:

@@ -124,6 +264,14 @@ You are Clara. Generate the product specification for feature: {slug}
  ## Project Context
  {$CONTEXT}

+ ## Developer Interview
+ {$INTERVIEW}
+
+ IMPORTANT: Your output MUST align with the developer's stated preferences
+ in the interview. If the developer chose approach X, use approach X.
+ If they marked something as out-of-scope, exclude it.
+ If an item is listed under "Open Items", use your best judgment but note your assumption.
+
  Your task:
  1. Define user stories with concrete acceptance criteria (Given/When/Then)
  2. Classify requirements: must-have, nice-to-have, out-of-scope
@@ -136,7 +284,7 @@ Be ruthless with scope. Every requirement must have acceptance criteria.

  Store the output as `$PM_OUTPUT`.

- ## Step 8: Launch Pixel — ux.md (conditional)
+ ## Step 11: Launch Pixel — ux.md (conditional)

  Only if `$RUN_PIXEL` is `true`:

@@ -157,6 +305,14 @@ You are Pixel. Generate the UX specification for feature: {slug}
  ## Engineering Specification
  {$ENG_OUTPUT}

+ ## Developer Interview
+ {$INTERVIEW}
+
+ IMPORTANT: Your output MUST align with the developer's stated preferences
+ in the interview. If the developer chose approach X, use approach X.
+ If they marked something as out-of-scope, exclude it.
+ If an item is listed under "Open Items", use your best judgment but note your assumption.
+
  Your task:
  1. Map the complete user flow from entry to completion
  2. Define all states: empty, loading, error, success, edge cases
@@ -171,7 +327,7 @@ Store the output as `$UX_OUTPUT`.

  If `$RUN_PIXEL` is `false`: set `$UX_OUTPUT` to `"No UX specification — no frontend detected."`.

- ## Step 9: Launch Mestre — second pass (PLAN.md)
+ ## Step 12: Launch Mestre — second pass (PLAN.md)

  Launch Mestre agent to synthesize all specs into a concrete plan:

@@ -196,6 +352,14 @@ You are Mestre. Generate the implementation plan (PLAN.md) for feature: {slug}
  ## Project Context
  {$CONTEXT}

+ ## Developer Interview
+ {$INTERVIEW}
+
+ IMPORTANT: Your output MUST align with the developer's stated preferences
+ in the interview. If the developer chose approach X, use approach X.
+ If they marked something as out-of-scope, exclude it.
+ If an item is listed under "Open Items", use your best judgment but note your assumption.
+
  Your task:
  1. Read all specifications and synthesize into numbered tasks
  2. Each task must have: effort estimate, file list, dependencies, test criteria
@@ -209,11 +373,13 @@ Rules:
  - Every task lists exact files it touches
  - Dependencies reference task IDs
  - If Clara marked something as out-of-scope, don't create tasks for it
+ - If the developer interview decided on approach X, all tasks must use approach X
+ - If the developer marked something as out-of-scope, don't create tasks for it
  ```

  Store the output as `$PLAN_OUTPUT`.

- ## Step 10: Mestre generates delta specs
+ ## Step 13: Mestre generates delta specs

  Launch Mestre agent to create delta specifications:

@@ -229,6 +395,14 @@ You are Mestre. Generate delta specs for feature: {slug}
  ## Relevant Current Specs
  {$RELEVANT_SPECS}

+ ## Developer Interview
+ {$INTERVIEW}
+
+ IMPORTANT: Your output MUST align with the developer's stated preferences
+ in the interview. If the developer chose approach X, use approach X.
+ If they marked something as out-of-scope, exclude it.
+ If an item is listed under "Open Items", use your best judgment but note your assumption.
+
  Your task:
  1. Based on the plan, determine what specs need to change
  2. For each new system component: create a spec in delta/ADDED/
@@ -244,83 +418,150 @@ Output the list of delta specs you will create, with their paths:
  Then write each spec file.
  ```

- ## Step 11: Launch Nexus — coherence validation
+ ## Step 14: Launch Nexus — adversarial review + developer resolution

- Launch Nexus agent to validate coherence across all plan outputs:
+ Launch Nexus agent to perform adversarial review of all plan artifacts:

  ```
- You are Nexus. Validate coherence for feature: {slug}
+ You are Nexus. You are performing ADVERSARIAL REVIEW of the plan
+ artifacts for feature: {slug}

- ## Engineering Specification (Mestre)
+ Your mandate: You MUST find problems. "Looks good" is NOT acceptable.
+ If you cannot find real issues, you must document WHY the plan is
+ unusually solid — but never rubber-stamp.
+
+ ## Artifacts to Review
+ ### Engineering Specification (Mestre)
  {$ENG_OUTPUT}

- ## Product Specification (Clara)
+ ### Product Specification (Clara)
  {$PM_OUTPUT}

- ## Implementation Plan (Mestre)
- {$PLAN_OUTPUT}
-
- ## UX Specification (Pixel)
+ ### UX Specification (Pixel)
  {$UX_OUTPUT}

- Your task:
- 1. Check that every must-have requirement from Clara's pm.md has at least one task in PLAN.md
- 2. Check that every file in Mestre's eng.md appears in at least one PLAN.md task
- 3. Check that no PLAN.md task contradicts Clara's out-of-scope items
- 4. If Pixel's ux.md exists: check that UI flows have corresponding tasks
- 5. Flag any gaps, contradictions, or missing coverage
+ ### Implementation Plan (Mestre)
+ {$PLAN_OUTPUT}
+
+ ### Developer Interview
+ {$INTERVIEW}

- Output as: [Nexus -- Coherence Validation]
+ ### Original Request
+ {$REQUEST}

- ## Coherence Status
- {PASS | PASS with gaps | FAIL}
+ ### Research Findings
+ {$RESEARCH}

- ## Coverage
- - Requirements covered: {N}/{total}
- - Files covered: {N}/{total}
+ ## Adversarial Analysis Protocol
+
+ ### Pass 1: Cross-Artifact Contradictions
+ Check every pair of artifacts for conflicts:
+ - eng.md vs pm.md: Do technical decisions satisfy all acceptance criteria?
+ - eng.md vs ux.md: Does the architecture support all UI states/flows?
+ - pm.md vs PLAN.md: Does every must-have requirement have tasks?
+ - pm.md scope vs PLAN.md tasks: Are out-of-scope items sneaking in?
+ - PLAN.md vs INTERVIEW.md: Do tasks reflect developer's stated preferences?
+
+ ### Pass 2: Assumption Challenges
+ For each major decision in eng.md, ask:
+ - "What if this assumption is wrong?"
+ - "What's the blast radius if this fails?"
+ - "Is there a simpler approach nobody considered?"
+
+ ### Pass 3: Coverage Gaps
+ - Requirements without tasks
+ - Tasks without test criteria
+ - Files mentioned but not in any task
+ - UI states without error handling
+ - Happy path only (missing edge cases)
+
+ ### Pass 4: Hidden Complexity
+ - Tasks estimated as S that touch >3 files
+ - Dependencies that create serial bottlenecks
+ - Integration points without error handling
+ - Data migrations without rollback plan
+
+ ### Pass 5: REQUEST Drift
+ - Compare final PLAN.md against original REQUEST.md
+ - Has scope crept? Has the core problem shifted?
+ - Would the developer recognize this as what they asked for?
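The mechanical core of Pass 1 and Pass 3 (requirements and files without tasks) can be sketched as a set-difference check. The task dict shape here is an assumption for illustration; the agent does this reasoning over prose artifacts, not structured data.

```python
def coverage_gaps(must_haves: set[str], eng_files: set[str],
                  tasks: list[dict]) -> dict[str, set[str]]:
    """Which must-have requirements, and which files from eng.md,
    are not covered by any PLAN.md task? Assumed task shape:
    {"id": ..., "requirements": [...], "files": [...]}."""
    covered_reqs = {r for t in tasks for r in t.get("requirements", [])}
    covered_files = {f for t in tasks for f in t.get("files", [])}
    return {
        "requirements_without_tasks": must_haves - covered_reqs,
        "files_not_in_any_task": eng_files - covered_files,
    }
```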
+
+ ## Output Format
+ For each issue found, output using your [Nexus — Adversarial Review] format.
+
+ ## Developer Resolution Protocol
+ After completing all passes:
+ 1. Count issues by severity
+ 2. CRITICAL issues: present one at a time via AskUserQuestion with suggested resolutions as options
+ 3. HIGH issues: present as batch via AskUserQuestion, let developer pick which to address
+ 4. MEDIUM/LOW issues: present summary, developer can dismiss or address
+ 5. For each resolved issue: note the chosen resolution and which artifacts need patching
+ 6. Return the full adversarial review with all resolutions noted
+ ```
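The severity routing in the resolution protocol can be sketched as follows, assuming a simple `{"title", "severity"}` issue shape (an illustration, not the kit's data model): CRITICAL issues become one prompt each, HIGH issues one batch prompt, and MEDIUM/LOW a single summary.

```python
def route_issues(issues: list[dict]) -> list[tuple]:
    """Group review issues into developer prompts by severity."""
    critical = [i for i in issues if i["severity"] == "CRITICAL"]
    high = [i for i in issues if i["severity"] == "HIGH"]
    rest = [i for i in issues if i["severity"] in ("MEDIUM", "LOW")]
    prompts = [("one-at-a-time", [i]) for i in critical]  # one prompt each
    if high:
        prompts.append(("batch", high))                   # single batch prompt
    if rest:
        prompts.append(("summary", rest))                 # dismiss-or-address
    return prompts
```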

- ## Issues Found
- - {issue description} — Severity: {HIGH | MEDIUM | LOW}
- (or "No issues found.")
+ Store the output as `$ADVERSARIAL_REVIEW`.

- ## Recommendations
- - {recommendation}
- (or "Plan is coherent. Ready for implementation.")
+ If Nexus found CRITICAL issues that the developer could not resolve:
  ```
+ Adversarial review found unresolvable issues. Consider re-running:
+ /rpi:plan {slug} --force
+ ```
+ Stop.
+
+ ## Step 15: Nexus patches artifacts

- If Nexus reports FAIL: output the issues to the user and suggest re-running `/rpi:plan {slug} --force`.
+ If `$ADVERSARIAL_REVIEW` contains resolved issues:
+
+ 1. For each resolved issue in `$ADVERSARIAL_REVIEW`:
+ - Identify which artifacts need changes (eng.md, pm.md, ux.md, PLAN.md)
+ - Apply surgical edits to `$ENG_OUTPUT`, `$PM_OUTPUT`, `$UX_OUTPUT`, or `$PLAN_OUTPUT` as needed
+ - Track the patch: add `<!-- Patched: {issue title} — {resolution chosen} -->` as a comment near the change
+ 2. Update `$INTERVIEW` content: append resolved contradictions to the `## Resolved Contradictions` section:
+ ```
+ ### C{N}: {issue title}
+ **Severity:** {severity}
+ **Resolution:** {developer's chosen option}
+ **Artifacts patched:** {list of affected artifacts and sections}
+ ```
+ 3. Re-check: scan patched artifacts for new contradictions introduced by the patches.
+ - If new contradictions found: present to developer via AskUserQuestion and patch again.
+ - If clean: proceed.
+ 4. Update `rpi/features/{slug}/plan/INTERVIEW.md` with the patched version of `$INTERVIEW`.
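Appending one resolved-contradiction entry in the format above can be sketched like this. The `issue` dict shape is an assumption for illustration; the sketch relies on `## Resolved Contradictions` being the last section of INTERVIEW.md, as in the Step 8 template.

```python
def append_resolution(interview_md: str, n: int, issue: dict) -> str:
    """Append a C{N} entry under '## Resolved Contradictions'.
    Assumed issue shape: {"title", "severity", "resolution", "artifacts"}."""
    heading = "## Resolved Contradictions"
    if heading not in interview_md:          # defensive: add heading if missing
        interview_md += f"\n{heading}\n"
    return interview_md + (
        f"\n### C{n}: {issue['title']}\n"
        f"**Severity:** {issue['severity']}\n"
        f"**Resolution:** {issue['resolution']}\n"
        f"**Artifacts patched:** {', '.join(issue['artifacts'])}\n"
    )
```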

- ## Step 12: Write all artifacts
+ ## Step 16: Write all artifacts

  1. Ensure directory exists: `rpi/features/{slug}/plan/`
- 2. Write `rpi/features/{slug}/plan/eng.md` with `$ENG_OUTPUT`
- 3. Write `rpi/features/{slug}/plan/pm.md` with `$PM_OUTPUT`
- 4. If `$RUN_PIXEL` is `true`: write `rpi/features/{slug}/plan/ux.md` with `$UX_OUTPUT`
- 5. Write `rpi/features/{slug}/plan/PLAN.md` with `$PLAN_OUTPUT`
- 6. Ensure delta directories exist:
+ 2. The file `rpi/features/{slug}/plan/INTERVIEW.md` was already written in Step 8 and updated in Step 15.
+ 3. Write `rpi/features/{slug}/plan/eng.md` with `$ENG_OUTPUT`
+ 4. Write `rpi/features/{slug}/plan/pm.md` with `$PM_OUTPUT`
+ 5. If `$RUN_PIXEL` is `true`: write `rpi/features/{slug}/plan/ux.md` with `$UX_OUTPUT`
+ 6. Write `rpi/features/{slug}/plan/PLAN.md` with `$PLAN_OUTPUT`
+ 7. Ensure delta directories exist:
  ```bash
  mkdir -p rpi/features/{slug}/delta/ADDED
  mkdir -p rpi/features/{slug}/delta/MODIFIED
  mkdir -p rpi/features/{slug}/delta/REMOVED
  ```
- 7. Write delta spec files from Step 10 into the appropriate delta subdirectories.
+ 8. Write delta spec files from Step 13 into the appropriate delta subdirectories.
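Steps 3 through 7 above can be sketched as one script, under the assumption that the spec contents are available as strings keyed by filename (the helper name and `outputs` shape are illustrative, not the kit's code):

```python
from pathlib import Path

def write_artifacts(slug: str, outputs: dict, run_pixel: bool,
                    root: Path = Path("rpi")) -> list:
    """Write eng.md, pm.md, PLAN.md (and ux.md only when a frontend
    was detected), then ensure the three delta directories exist."""
    feature = root / "features" / slug
    plan = feature / "plan"
    plan.mkdir(parents=True, exist_ok=True)
    names = ["eng.md", "pm.md", "PLAN.md"] + (["ux.md"] if run_pixel else [])
    for name in names:
        (plan / name).write_text(outputs[name])
    for bucket in ("ADDED", "MODIFIED", "REMOVED"):   # mirrors the mkdir -p step
        (feature / "delta" / bucket).mkdir(parents=True, exist_ok=True)
    return sorted(p.name for p in plan.iterdir())
```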

- ## Step 13: Output summary
+ ## Step 17: Output summary

  ```
  Plan complete: rpi/features/{slug}/plan/

  Artifacts:
- - plan/eng.md (Mestre — engineering spec)
- - plan/pm.md (Clara — product spec)
- - plan/ux.md (Pixel — UX spec) ← only if frontend
- - plan/PLAN.md (Mestre — implementation tasks)
- - delta/ADDED/ ({N} new specs)
- - delta/MODIFIED/ ({N} updated specs)
- - delta/REMOVED/ ({N} removed specs)
-
- Tasks: {N} | Files: {N} | Complexity: {S|M|L|XL}
- Coherence: {Nexus verdict}
+ - plan/INTERVIEW.md (Nexus — developer interview)
+ - plan/eng.md (Mestre — engineering spec)
+ - plan/pm.md (Clara — product spec)
+ - plan/ux.md (Pixel — UX spec) ← only if frontend
+ - plan/PLAN.md (Mestre — implementation tasks)
+ - delta/ADDED/ ({N} new specs)
+ - delta/MODIFIED/ ({N} updated specs)
+ - delta/REMOVED/ ({N} removed specs)
+
+ Tasks: {N} | Files: {N} | Complexity: {$COMPLEXITY}
+ Interview: {N} questions asked, {N} contradictions resolved
+ Coherence: {Nexus adversarial verdict}

  Next: /rpi {slug}
  Or explicitly: /rpi:implement {slug}
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "rpi-kit",
- "version": "2.1.2",
+ "version": "2.2.0",
  "description": "Research → Plan → Implement. AI-assisted feature development with 13 named agents, delta specs, and knowledge compounding.",
  "license": "MIT",
  "author": "Daniel Mendes",
@@ -151,6 +151,8 @@ Output is saved to `rpi/solutions/decisions/` when requested.
  /rpi:archive -- merge delta into specs, delete feature folder
  /rpi:update -- update RPIKit to the latest version from remote
  /rpi:onboarding -- first-time setup, analyzes codebase, guides the user
+ /rpi:docs-gen -- generate CLAUDE.md from codebase analysis
+ /rpi:evolve -- product evolution analysis with health score
  ```

  ## Configuration