rpi-kit 2.1.3 → 2.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -5,14 +5,14 @@
  },
  "metadata": {
  "description": "Research → Plan → Implement. 7-phase pipeline with 13 named agents, delta specs, party mode, and knowledge compounding.",
- "version": "2.1.1"
+ "version": "2.2.0"
  },
  "plugins": [
  {
  "name": "rpi-kit",
  "source": "./",
  "description": "Research → Plan → Implement. 7-phase pipeline with 13 named agents, delta specs, party mode, and knowledge compounding.",
- "version": "2.1.1",
+ "version": "2.2.0",
  "author": {
  "name": "Daniel Mendes"
  },
@@ -23,6 +23,7 @@
  "./commands/rpi/archive.md",
  "./commands/rpi/docs.md",
  "./commands/rpi/docs-gen.md",
+ "./commands/rpi/evolve.md",
  "./commands/rpi/implement.md",
  "./commands/rpi/init.md",
  "./commands/rpi/learn.md",
@@ -0,0 +1,420 @@
+ ---
+ name: rpi:evolve
+ description: Analyze the entire project for technical health, code quality, test coverage, ecosystem status, and product gaps. Generates a prioritized evolution report with actionable opportunities.
+ argument-hint: "[--quick]"
+ allowed-tools:
+ - Read
+ - Write
+ - Glob
+ - Grep
+ - Agent
+ - Bash
+ ---
+
+ # /rpi:evolve — Product Evolution Analysis
+
+ Standalone utility command — launches 5 agents in parallel to analyze the project from different perspectives, then Nexus synthesizes the findings into a prioritized evolution report.
+
+ Use `--quick` for a fast technical-only health check (Atlas + Nexus only).
+
+ ---
+
+ ## Step 1: Load config and context
+
+ 1. Read `.rpi.yaml` from the project root. If missing, use defaults silently.
+ 2. Read `rpi/context.md` if it exists — store as `$PROJECT_CONTEXT`.
+ 3. If `rpi/context.md` does not exist, note that Atlas will generate context from scratch.
+ 4. Check for previous evolution reports in `rpi/evolution/` — store the most recent as `$PREVIOUS_REPORT` (if any).
+ 5. Parse `$ARGUMENTS` for the `--quick` flag.
+
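The flag parsing and previous-report lookup in Step 1 above could be sketched in shell (hypothetical helper names; the command itself performs these steps through its Read/Glob tools rather than a script):

```shell
#!/bin/sh
# Hypothetical sketch of Step 1 — not part of the plugin itself.

# parse_quick: print 1 if --quick appears among the arguments, else 0.
parse_quick() {
  for arg in "$@"; do
    [ "$arg" = "--quick" ] && { echo 1; return; }
  done
  echo 0
}

# latest_report: the most recent report in rpi/evolution/, if any.
# Relies on the YYYY-MM-DD filename prefix so a lexical sort is chronological.
latest_report() {
  ls -1 rpi/evolution/*-report.md 2>/dev/null | sort | tail -n 1
}

QUICK=$(parse_quick "$@")
PREVIOUS_REPORT=$(latest_report)
```
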
+ ## Step 2: Create output directory
+
+ ```bash
+ mkdir -p rpi/evolution
+ ```
+
+ ## Step 3: Launch analysis agents
+
+ If the `--quick` flag is set, launch only Atlas (Agent 1) and then continue to Step 4; the other four agents are skipped.
+
+ Otherwise, launch **5 agents in parallel** using the Agent tool. Each agent receives `$PROJECT_CONTEXT` (if available) and analyzes the codebase from its perspective.
+
+ ### Agent 1: Atlas — Technical Health
+
+ ```
+ You are Atlas. Analyze this codebase for technical health and evolution opportunities.
+
+ {If $PROJECT_CONTEXT exists:}
+ ## Existing Project Context
+ {$PROJECT_CONTEXT}
+ {End if}
+
+ Your task:
+ 1. Read config files (package.json, tsconfig.json, pyproject.toml, etc.)
+ 2. Scan directory structure for architecture patterns
+ 3. Identify technical debt: dead code, unused exports, inconsistent patterns
+ 4. Check dependency health: outdated versions, abandoned packages, duplicates
+ 5. Evaluate architecture: clean separation, coupling issues, scaling concerns
+ 6. Check documentation completeness: README, CLAUDE.md, inline docs
+
+ Produce your analysis with this structure:
+
+ ## [Atlas — Technical Health]
+
+ ### Strengths
+ - {strength 1 with evidence (file:line)}
+ - {strength 2}
+
+ ### Technical Debt
+ Severity: {LOW|MEDIUM|HIGH}
+ - {debt item 1 with evidence}
+ - {debt item 2}
+
+ ### Dependencies
+ - Outdated: {list with current vs latest}
+ - Abandoned: {deps with no recent updates}
+ - Duplicates: {overlapping deps}
+
+ ### Architecture Issues
+ - {issue 1 with evidence}
+ - {issue 2}
+
+ ### Quick Wins
+ - {actionable item that can be fixed in < 1 hour}
+
+ RULES:
+ - Be specific — cite files, lines, versions
+ - Only report what you can verify from the code
+ - Prioritize by impact, not by ease
+ - If a section has no findings, write "No issues found" and move on
+ ```
+
+ Store output as `$ATLAS_FINDINGS`.
+
+ ### Agent 2: Sage — Test Coverage
+
+ ```
+ You are Sage. Analyze the test coverage and testing strategy of this codebase.
+
+ {If $PROJECT_CONTEXT exists:}
+ ## Existing Project Context
+ {$PROJECT_CONTEXT}
+ {End if}
+
+ Your task:
+ 1. Identify the test framework(s) in use
+ 2. Map which modules/components have tests and which don't
+ 3. Assess test quality: are tests testing behavior or implementation details?
+ 4. Check for missing test types: unit, integration, e2e, edge cases
+ 5. Look for test anti-patterns: brittle assertions, test interdependencies, missing error cases
+
+ Produce your analysis with this structure:
+
+ ## [Sage — Test Coverage]
+
+ ### Coverage Map
+ - {module/file}: {has tests | no tests | partial}
+ - ...
+
+ ### Gaps (prioritized by risk)
+ - {untested module with risk assessment}
+ - ...
+
+ ### Test Quality
+ - Framework: {name}
+ - Anti-patterns found: {list or "none"}
+ - Missing test types: {unit|integration|e2e|edge cases}
+
+ ### Recommendations
+ - {recommendation 1 with effort estimate S|M|L}
+ - {recommendation 2}
+
+ RULES:
+ - Focus on what's NOT tested rather than what is
+ - Prioritize gaps by business risk, not code volume
+ - Be specific about which files/functions lack coverage
+ ```
+
+ Store output as `$SAGE_FINDINGS`.
+
+ ### Agent 3: Hawk — Code Quality
+
+ ```
+ You are Hawk. Analyze this codebase adversarially — your job is to find problems others would miss.
+
+ {If $PROJECT_CONTEXT exists:}
+ ## Existing Project Context
+ {$PROJECT_CONTEXT}
+ {End if}
+
+ Your task:
+ 1. Find anti-patterns and code smells
+ 2. Identify complexity hotspots (functions/files that are too complex)
+ 3. Look for copy-paste code and duplication
+ 4. Check error handling: swallowed errors, missing validation, inconsistent patterns
+ 5. Assess naming and readability issues
+ 6. Check for security risks: hardcoded values, exposed secrets, injection vectors
+
+ Produce your analysis with this structure:
+
+ ## [Hawk — Code Quality]
+
+ ### Problems
+ #### CRITICAL
+ - {problem with file:line and why it matters}
+
+ #### HIGH
+ - {problem with evidence}
+
+ #### MEDIUM
+ - {problem with evidence}
+
+ #### LOW
+ - {problem with evidence}
+
+ ### Quick Wins
+ - {fix that improves quality with minimal effort}
+
+ ### Risks
+ - {potential future problem based on current patterns}
+
+ RULES:
+ - You MUST find at least 3 issues — look harder if you think the code is perfect
+ - Severity must be justified with impact assessment
+ - Every finding must cite specific file:line
+ - Focus on real problems, not style preferences
+ ```
+
+ Store output as `$HAWK_FINDINGS`.
+
+ ### Agent 4: Scout — Ecosystem Analysis
+
+ ```
+ You are Scout. Analyze this project's ecosystem health and external dependencies.
+
+ {If $PROJECT_CONTEXT exists:}
+ ## Existing Project Context
+ {$PROJECT_CONTEXT}
+ {End if}
+
+ Your task:
+ 1. Check all dependencies for outdated versions (compare package.json/pyproject.toml against known latest)
+ 2. Identify dependencies with known security vulnerabilities
+ 3. Find deprecated APIs or patterns being used
+ 4. Look for better alternatives to current dependencies
+ 5. Check if the project follows current ecosystem best practices
+
+ Produce your analysis with this structure:
+
+ ## [Scout — Ecosystem Analysis]
+
+ ### Outdated Dependencies
+ | Package | Current | Latest | Breaking Changes? |
+ |---------|---------|--------|-------------------|
+ | {name} | {ver} | {ver} | {yes/no} |
+
+ ### Security Concerns
+ - {CVE or vulnerability with affected package}
+
+ ### Deprecated Patterns
+ - {deprecated API/pattern with recommended replacement}
+
+ ### Better Alternatives
+ - {current dep} → {alternative} — {why it's better}
+
+ ### Ecosystem Best Practices
+ - Following: {list}
+ - Missing: {list}
+
+ RULES:
+ - Only flag outdated deps that are significantly behind (skip minor patches)
+ - Security concerns must reference specific CVEs or advisories when possible
+ - "Better alternatives" must have concrete justification, not opinions
+ ```
+
+ Store output as `$SCOUT_FINDINGS`.
+
+ ### Agent 5: Clara — Product Analysis
+
+ ```
+ You are Clara. Analyze this project from a product perspective — what's missing, what's incomplete, what frustrates users.
+
+ {If $PROJECT_CONTEXT exists:}
+ ## Existing Project Context
+ {$PROJECT_CONTEXT}
+ {End if}
+
+ Your task:
+ 1. Map the user-facing features and assess completeness
+ 2. Identify incomplete user flows (started but not finished)
+ 3. Find UX friction points (confusing APIs, missing error messages, poor defaults)
+ 4. Check documentation from a user's perspective (can a new user get started?)
+ 5. Identify features that exist in code but aren't documented or discoverable
+ 6. Assess onboarding experience
+
+ Produce your analysis with this structure:
+
+ ## [Clara — Product Analysis]
+
+ ### Feature Completeness
+ - {feature}: {complete | partial | stub}
+ - ...
+
+ ### Missing Features
+ - {feature that users would expect but doesn't exist}
+
+ ### UX Friction Points
+ - {friction point with evidence}
+
+ ### Documentation Gaps
+ - {what's missing from user-facing docs}
+
+ ### Undiscoverable Features
+ - {feature that exists but users can't find}
+
+ ### Recommendations
+ - {recommendation with effort S|M|L and impact HIGH|MED|LOW}
+
+ RULES:
+ - Think as a user, not a developer
+ - Focus on the first 5 minutes of experience
+ - Missing error messages count as friction
+ - Score completeness honestly — partial is fine
+ ```
+
+ Store output as `$CLARA_FINDINGS`.
+
+ ## Step 4: Synthesize with Nexus
+
+ Launch the Nexus agent with all findings:
+
+ ```
+ You are Nexus. Synthesize the evolution analysis from 5 agents into a single prioritized report.
+
+ {If --quick, only $ATLAS_FINDINGS is available:}
+ ## Atlas Findings (Technical Health)
+ {$ATLAS_FINDINGS}
+ {Else:}
+ ## Atlas Findings (Technical Health)
+ {$ATLAS_FINDINGS}
+
+ ## Sage Findings (Test Coverage)
+ {$SAGE_FINDINGS}
+
+ ## Hawk Findings (Code Quality)
+ {$HAWK_FINDINGS}
+
+ ## Scout Findings (Ecosystem)
+ {$SCOUT_FINDINGS}
+
+ ## Clara Findings (Product)
+ {$CLARA_FINDINGS}
+ {End if}
+
+ {If $PREVIOUS_REPORT exists:}
+ ## Previous Evolution Report
+ {$PREVIOUS_REPORT}
+ Note: Compare with previous findings. Highlight what improved and what regressed.
+ {End if}
+
+ Your tasks:
+
+ ### Task 1: Write the Evolution Report
+
+ Produce a complete report with this structure:
+
+ # Evolution Report — {Project Name}
+
+ ## Executive Summary
+ Health: {score}/10 | Opportunities: {N} | Critical: {N}
+ {2-3 sentence summary of the project's current state}
+
+ {If previous report exists:}
+ ### Changes Since Last Report
+ - Improved: {list}
+ - Regressed: {list}
+ - New: {list}
+ {End if}
+
+ ## Technical Health (Atlas)
+ {Summarize Atlas findings — keep the strongest evidence, drop noise}
+
+ ## Test Coverage (Sage)
+ {Summarize Sage findings}
+
+ ## Code Quality (Hawk)
+ {Summarize Hawk findings — group by severity}
+
+ ## Ecosystem (Scout)
+ {Summarize Scout findings}
+
+ ## Product Analysis (Clara)
+ {Summarize Clara findings}
+
+ ## Prioritized Recommendations
+ {Merge recommendations from all agents, remove duplicates, sort by impact/effort ratio}
+
+ 1. [{CRITICAL|HIGH|MEDIUM|LOW}] {recommendation} — Effort: {S|M|L|XL}
+ 2. ...
+
+ ### Task 2: Generate Opportunities List
+
+ Produce a separate document:
+
+ # Evolution Opportunities
+
+ ## Ready for /rpi:new
+ - [ ] **{slug}** — {S|M|L|XL} | {description}
+ - ...
+
+ ## Needs More Research
+ - [ ] **{slug}** — {S|M|L|XL} | {description}
+ - ...
+
+ Separate the two documents clearly with a --- delimiter.
+
+ ### Task 3: Health Score
+
+ Calculate a heuristic health score (1-10) based on:
+ - Technical debt severity (Atlas)
+ - Test coverage completeness (Sage)
+ - Code quality issues count and severity (Hawk)
+ - Dependency health (Scout)
+ - Feature completeness (Clara)
+
+ The score is a quick-read indicator, not a precise metric. Include it in the Executive Summary.
+
+ RULES:
+ 1. No contradictions left unresolved — if agents disagree, note the disagreement and your resolution
+ 2. Remove duplicate findings across agents
+ 3. Prioritize by impact × feasibility (high impact + low effort first)
+ 4. Every recommendation must have an effort estimate
+ 5. Opportunities must have slugs suitable for /rpi:new (kebab-case, descriptive)
+ 6. If only Atlas findings are available (--quick mode), adjust the report structure accordingly
+ ```
+
+ Store the output as `$NEXUS_SYNTHESIS`. Split at the `---` delimiter into `$REPORT_CONTENT` and `$OPPORTUNITIES_CONTENT`.
+
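The `---` split described above could be sketched as follows (a hypothetical helper; file names and the assumption that the delimiter is the first line consisting solely of `---` are illustrative):

```shell
#!/bin/sh
# Sketch: split a synthesis file at the first bare "---" line into
# report.part (everything before) and opportunities.part (everything after).
split_synthesis() {
  awk 'BEGIN { out = "report.part" }
       /^---$/ && !seen { seen = 1; out = "opportunities.part"; next }
       { print > out }' "$1"
}
```

Everything before the first delimiter goes to the report; the delimiter line itself is dropped; the remainder becomes the opportunities list.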
+ ## Step 5: Write outputs
+
+ 1. Write `$REPORT_CONTENT` to `rpi/evolution/{YYYY-MM-DD}-report.md`.
+ 2. Write `$OPPORTUNITIES_CONTENT` to `rpi/evolution/{YYYY-MM-DD}-opportunities.md`.
+
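For illustration, the dated paths could be built like this (a sketch; it assumes `{YYYY-MM-DD}` means an ISO-date prefix, as the earlier lookup of the most recent report relies on):

```shell
#!/bin/sh
# Sketch: compute the dated output paths used in Step 5.
DATE=$(date +%Y-%m-%d)
REPORT_PATH="rpi/evolution/${DATE}-report.md"
OPPS_PATH="rpi/evolution/${DATE}-opportunities.md"
```
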
+ ## Step 6: Output terminal summary
+
+ ```
+ Evolution Report: {Project Name} ({date})
+
+ Health Score: {score}/10
+
+ Top 3 Opportunities:
+ 1. [{category}] {description} ({source agent})
+ 2. [{category}] {description} ({source agent})
+ 3. [{category}] {description} ({source agent})
+
+ Full report: rpi/evolution/{date}-report.md
+ Opportunities: rpi/evolution/{date}-opportunities.md
+
+ To start working on an opportunity:
+ /rpi:new {first-opportunity-slug}
+ ```
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "rpi-kit",
- "version": "2.1.3",
+ "version": "2.2.0",
  "description": "Research → Plan → Implement. AI-assisted feature development with 13 named agents, delta specs, and knowledge compounding.",
  "license": "MIT",
  "author": "Daniel Mendes",
@@ -152,6 +152,7 @@ Output is saved to `rpi/solutions/decisions/` when requested.
  /rpi:update -- update RPIKit to the latest version from remote
  /rpi:onboarding -- first-time setup, analyzes codebase, guides the user
  /rpi:docs-gen -- generate CLAUDE.md from codebase analysis
+ /rpi:evolve -- product evolution analysis with health score
  ```

  ## Configuration