rpi-kit 1.4.1 → 2.1.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.claude-plugin/marketplace.json +9 -6
- package/.claude-plugin/plugin.json +4 -4
- package/AGENTS.md +2004 -109
- package/CHANGELOG.md +83 -0
- package/README.md +117 -169
- package/agents/atlas.md +61 -0
- package/agents/clara.md +49 -0
- package/agents/forge.md +38 -0
- package/agents/hawk.md +54 -0
- package/agents/luna.md +50 -0
- package/agents/mestre.md +61 -0
- package/agents/nexus.md +63 -0
- package/agents/pixel.md +48 -0
- package/agents/quill.md +40 -0
- package/agents/razor.md +41 -0
- package/agents/sage.md +52 -0
- package/agents/scout.md +49 -0
- package/agents/shield.md +51 -0
- package/bin/cli.js +134 -10
- package/bin/onboarding.js +46 -28
- package/commands/rpi/archive.md +149 -0
- package/commands/rpi/docs.md +106 -168
- package/commands/rpi/implement.md +163 -401
- package/commands/rpi/init.md +150 -67
- package/commands/rpi/learn.md +114 -0
- package/commands/rpi/new.md +85 -155
- package/commands/rpi/onboarding.md +157 -336
- package/commands/rpi/party.md +212 -0
- package/commands/rpi/plan.md +241 -205
- package/commands/rpi/research.md +162 -104
- package/commands/rpi/review.md +350 -104
- package/commands/rpi/rpi.md +125 -0
- package/commands/rpi/simplify.md +156 -93
- package/commands/rpi/status.md +91 -114
- package/commands/rpi/update.md +113 -0
- package/package.json +3 -3
- package/skills/rpi-agents/SKILL.md +63 -39
- package/skills/rpi-workflow/SKILL.md +161 -186
- package/agents/code-reviewer.md +0 -40
- package/agents/code-simplifier.md +0 -35
- package/agents/cto-advisor.md +0 -51
- package/agents/doc-synthesizer.md +0 -53
- package/agents/doc-writer.md +0 -36
- package/agents/explore-codebase.md +0 -50
- package/agents/plan-executor.md +0 -48
- package/agents/product-manager.md +0 -52
- package/agents/requirement-parser.md +0 -42
- package/agents/senior-engineer.md +0 -52
- package/agents/test-engineer.md +0 -28
- package/agents/ux-designer.md +0 -47
- package/codex.md +0 -72
- package/commands/rpi/add-todo.md +0 -83
- package/commands/rpi/set-profile.md +0 -124
- package/commands/rpi/test.md +0 -198
package/commands/rpi/review.md
CHANGED
@@ -1,159 +1,405 @@
 ---
 name: rpi:review
-description:
-argument-hint: "<feature-
+description: Adversarial review with Hawk + Shield + Sage in parallel. Nexus synthesizes.
+argument-hint: "<feature-name>"
 allowed-tools:
 - Read
+- Write
+- Bash
 - Glob
 - Grep
-- Bash
 - Agent
-- Write
 ---
 
-
-
-
+# /rpi:review — Review Phase
+
+Adversarial review with three parallel agents: Hawk (code review), Shield (security audit), Sage (test coverage). Nexus synthesizes findings into a final verdict.
+
+---
+
+## Step 1: Load config and validate
+
+1. Read `.rpi.yaml` for config. Apply defaults if missing:
+   - `folder`: `rpi/features`
+   - `context_file`: `rpi/context.md`
+   - `solutions_dir`: `rpi/solutions`
+   - `auto_learn`: `true`
+2. Parse `$ARGUMENTS` to extract `{slug}`.
+3. Validate `rpi/features/{slug}/implement/IMPLEMENT.md` exists. If not:
+```
+IMPLEMENT.md not found for '{slug}'. Run /rpi:implement {slug} first.
+```
+Stop.
+
+## Step 2: Gather all artifacts
+
+1. Read `rpi/features/{slug}/REQUEST.md` — store as `$REQUEST`.
+2. Read `rpi/features/{slug}/plan/PLAN.md` — store as `$PLAN`.
+3. Read `rpi/features/{slug}/plan/eng.md` if it exists — store as `$ENG`.
+4. Read `rpi/features/{slug}/implement/IMPLEMENT.md` — store as `$IMPLEMENT`.
+5. Read `rpi/context.md` (project context) if it exists — store as `$CONTEXT`.
+
+## Step 3: Get implementation diff
+
+1. Read `$IMPLEMENT` to extract all commit hashes from the Execution Log (including simplify commit if present).
+2. Use git to get the combined diff:
+```bash
+git diff {first_commit}^..{last_commit}
+```
+3. Store the diff as `$IMPL_DIFF`.
+4. Collect the list of all files changed — store as `$CHANGED_FILES`.
+
+## Step 4: Launch Hawk, Shield, and Sage in parallel
+
+Use the Agent tool to launch all three agents simultaneously.
+
+### Hawk (adversarial review)
+
+Launch Hawk agent with this prompt:
 
-
+```
+You are Hawk. Perform an adversarial code review for feature: {slug}
+
+## Implementation Diff
+{$IMPL_DIFF}
+
+## Changed Files
+{$CHANGED_FILES}
 
-##
+## Engineering Spec
+{$ENG}
 
-
+## Implementation Plan
+{$PLAN}
 
-
+## Project Context
+{$CONTEXT}
 
-
-1. Check if `{folder}/{feature-slug}/` exists → type = "feature", path = `{folder}/{feature-slug}`
-2. If not, Glob `{folder}/*/changes/{feature-slug}/` → if found, type = "change", path = matched path, parent_path = parent directory
-3. If multiple matches → AskUserQuestion listing all matches with full paths
-4. If no match → error: `Feature not found: {feature-slug}`
+Your task — ultra-thinking deep dive from 5 perspectives:
 
-
-
-
+1. **Developer**: Code quality, maintainability, readability, patterns
+2. **Ops**: Deployability, monitoring, logging, failure modes
+3. **User**: Edge cases in user-facing behavior, error messages, UX
+4. **Security**: Input validation, auth checks, data exposure
+5. **Business**: Does it solve the stated problem? Missed requirements?
 
-
-
-
-
+CRITICAL RULES:
+1. You MUST find problems. Zero findings is not acceptable — re-analyse.
+2. Read ALL changed files thoroughly before writing findings.
+3. Each finding must reference specific file and line.
+4. Classify every finding:
+   - P1 (blocker): Must fix before merge. Bugs, data loss, security holes, broken contracts.
+   - P2 (should fix): Important but not blocking. Performance, naming, missing validation.
+   - P3 (nice-to-have): Suggestions, style, minor improvements.
 
-
+Output format:
+## Findings
 
-
+### P1 — Blockers
+- [{file}:{line}] {description} — Impact: {impact}
+(or "None found.")
 
-
--
-- RESEARCH.md (research findings)
-- PLAN.md (task checklist)
-- eng.md (technical spec)
-- pm.md (if exists — acceptance criteria)
-- ux.md (if exists — UX requirements)
-- IMPLEMENT.md (implementation record)
+### P2 — Should Fix
+- [{file}:{line}] {description} — Impact: {impact}
 
-
+### P3 — Nice to Have
+- [{file}:{line}] {description} — Suggestion: {suggestion}
+
+## Summary
+- P1: {N} | P2: {N} | P3: {N}
+- Overall: {assessment}
+```
 
-
+Store the output as `$HAWK_OUTPUT`.
 
-
+### Shield (security audit)
 
-
+Launch Shield agent with this prompt:
 
 ```
-You are
+You are Shield. Perform a security audit for feature: {slug}
+
+## Implementation Diff
+{$IMPL_DIFF}
+
+## Changed Files
+{$CHANGED_FILES}
+
+## Engineering Spec
+{$ENG}
+
+## Project Context
+{$CONTEXT}
+
+Your task — systematic security audit:
+
+### OWASP Top 10 Check
+For each applicable category, check the implementation:
+1. Injection (SQL, NoSQL, OS command, LDAP)
+2. Broken Authentication
+3. Sensitive Data Exposure
+4. XML External Entities (XXE)
+5. Broken Access Control
+6. Security Misconfiguration
+7. Cross-Site Scripting (XSS)
+8. Insecure Deserialization
+9. Using Components with Known Vulnerabilities
+10. Insufficient Logging & Monitoring
+
+### Additional Checks
+- Hardcoded secrets, API keys, tokens
+- Missing input validation or sanitization
+- Auth bypass possibilities
+- Race conditions
+- Edge cases and boundary conditions (overflow, empty input, null)
+- Error messages leaking internal details
+
+RULES:
+1. Read ALL changed files before auditing
+2. Each finding must reference specific file and line
+3. Classify: P1 (blocker) | P2 (should fix) | P3 (nice-to-have)
+4. If no security issues found, explicitly state which checks passed
+
+Output format:
+## Security Findings
+
+### P1 — Critical
+- [{file}:{line}] {vulnerability} — Risk: {risk description}
+(or "None found.")
+
+### P2 — Important
+- [{file}:{line}] {vulnerability} — Risk: {risk description}
+
+### P3 — Hardening
+- [{file}:{line}] {suggestion} — Benefit: {benefit}
+
+## OWASP Coverage
+- {category}: PASS | FAIL | N/A — {notes}
+
+## Summary
+- P1: {N} | P2: {N} | P3: {N}
+```
 
-
-- {folder}/{feature-slug}/REQUEST.md
-- {folder}/{feature-slug}/research/RESEARCH.md
-- {folder}/{feature-slug}/plan/PLAN.md
-- {folder}/{feature-slug}/plan/eng.md
-{- {folder}/{feature-slug}/plan/pm.md (if exists)}
-{- {folder}/{feature-slug}/plan/ux.md (if exists)}
-- {folder}/{feature-slug}/implement/IMPLEMENT.md
+Store the output as `$SHIELD_OUTPUT`.
 
-
+### Sage (coverage check)
 
-
-- {parent_path}/REQUEST.md
-- {parent_path}/research/RESEARCH.md (if exists)
-- {parent_path}/plan/eng.md (if exists)
+Launch Sage agent with this prompt:
 
-
-
-- Whether breaking changes listed in the change REQUEST.md are properly handled
+```
+You are Sage. Verify test coverage for feature: {slug}
 
-
+## Implementation Diff
+{$IMPL_DIFF}
 
-
-
-- Are all files from eng.md created/modified as specified?
+## Changed Files
+{$CHANGED_FILES}
 
-
-
-- If pm.md exists: are acceptance criteria met?
-- If ux.md exists: are user flows implemented correctly?
+## Engineering Spec
+{$ENG}
 
-
-
-- Are listed deviations justified?
-- Are there unlisted deviations (implementation differs from plan but not recorded)?
+## Implementation Plan
+{$PLAN}
 
-
-
-- Do tests exercise real code through public interfaces (no mocks unless external dependency)?
-- Do test names describe behavior clearly?
-- Are edge cases from eng.md covered?
-- If TDD was enabled: verify tests were written before implementation (check git log order)
+## Project Context
+{$CONTEXT}
 
-
-- Any obvious bugs or logic errors?
-- Security concerns (injection, auth bypass, data exposure)?
+Your task — check what is tested and what is not:
 
-
+1. For each changed file, find the corresponding test file(s)
+2. Identify modules/functions with NO tests at all
+3. Identify tested modules with MISSING edge cases:
+   - Error paths not tested
+   - Boundary values not tested
+   - Null/empty/invalid inputs not tested
+   - Concurrent/race condition scenarios not tested
+4. Check that acceptance criteria from the plan have test coverage
+5. Suggest specific tests that should be added
 
-
+RULES:
+1. Read ALL changed files and their test files before reporting
+2. Be specific — name the function/module and the missing test case
+3. Classify: P1 (no tests at all) | P2 (missing critical paths) | P3 (missing edge cases)
 
-
+Output format:
+## Coverage Analysis
 
-###
-- {
+### Untested Modules (P1)
+- {file}:{function/class} — No tests found
+(or "All modules have tests.")
 
-###
-- {
+### Missing Critical Paths (P2)
+- {file}:{function} — Missing: {description of untested path}
 
-###
-- {
+### Missing Edge Cases (P3)
+- {file}:{function} — Missing: {description of edge case}
 
-
-
+## Suggested Tests
+1. {test description} — covers {what it covers}
+2. ...
 
-
--
--
--
+## Summary
+- Modules without tests: {N}
+- Missing critical paths: {N}
+- Missing edge cases: {N}
 ```
 
-
+Store the output as `$SAGE_OUTPUT`.
+
+## Step 5: Wait for completion
+
+Wait for all three agents (Hawk, Shield, Sage) to complete.
 
-
+## Step 6: Launch Nexus — synthesize findings
 
-
+Launch Nexus agent to produce the final review report:
 
-If PASS:
 ```
-
-
-
+You are Nexus. Synthesize the review findings for feature: {slug}
+
+## Hawk Output (Code Review)
+{$HAWK_OUTPUT}
+
+## Shield Output (Security Audit)
+{$SHIELD_OUTPUT}
+
+## Sage Output (Coverage)
+{$SAGE_OUTPUT}
+
+## Request
+{$REQUEST}
+
+Your task:
+1. Merge all findings from Hawk, Shield, and Sage
+2. Deduplicate — if multiple agents flagged the same issue, combine into one finding
+3. Classify every finding: P1 (blocker) | P2 (should fix) | P3 (nice-to-have)
+4. Determine verdict based on findings
+
+Verdict rules:
+- Any P1 finding → FAIL
+- No P1 but has P2/P3 → PASS with concerns
+- No findings → PASS
+
+Output format:
+## Review Report: {slug}
+
+### Verdict: {PASS | PASS with concerns | FAIL}
+
+### P1 — Blockers (must fix)
+- [{source}] [{file}:{line}] {description}
+(or "None.")
+
+### P2 — Should Fix
+- [{source}] [{file}:{line}] {description}
+
+### P3 — Nice to Have
+- [{source}] [{file}:{line}] {description}
+
+### Coverage Summary (Sage)
+- {summary of test coverage status}
+
+### Totals
+- P1: {N} | P2: {N} | P3: {N}
+- Sources: Hawk {N} | Shield {N} | Sage {N}
 ```
 
-
+Store the output as `$NEXUS_OUTPUT`.
+
+## Step 7: Handle verdict
+
+### If FAIL (P1 findings exist):
+
+1. Output to the user:
+```
+Review FAILED for '{slug}'. {N} P1 blockers must be fixed.
+
+{list P1 findings with file:line and description}
+
+Fix all P1 issues and re-run: /rpi:review {slug}
+```
+2. Do NOT proceed to docs phase.
+
+### If PASS with concerns (P2/P3 only):
+
+1. Output to the user:
+```
+Review PASSED with concerns for '{slug}'.
+P2: {N} | P3: {N}
+
+{list P2 findings}
+
+These are non-blocking but should be addressed.
+```
+2. Proceed to Step 8.
+
+### If PASS (no findings):
+
+1. Output to the user:
+```
+Review PASSED for '{slug}'. No issues found.
+```
+2. Proceed to Step 8.
+
+## Step 8: Auto-learn to solutions
+
+If `auto_learn` is `true` in config (default):
+
+1. Review all P1 and P2 findings that were particularly insightful or represent reusable knowledge.
+2. For each solution worth saving, write to `rpi/solutions/{category}/{slug}.md` using this format:
+```markdown
+# {Title}
+
+## Problem
+{symptoms, how it manifests}
+
+## Solution
+{code, approach, what worked}
+
+## Prevention
+{how to avoid in the future}
+
+## Context
+Feature: {slug} | Date: {YYYY-MM-DD}
+Files: {list}
+```
+3. Categories are auto-detected: `performance/`, `security/`, `database/`, `testing/`, `architecture/`, `patterns/`
+4. If no findings are worth saving, skip this step.
+
+## Step 9: Update IMPLEMENT.md
+
+Append a review section to `rpi/features/{slug}/implement/IMPLEMENT.md`:
+
+```markdown
+## Review
+
+Date: {YYYY-MM-DD}
+Agents: Hawk + Shield + Sage → Nexus
+Verdict: {PASS | PASS with concerns | FAIL}
+
+### Findings
+- P1: {N} | P2: {N} | P3: {N}
+
+### Details
+{$NEXUS_OUTPUT summary}
+
+### Solutions Saved
+- {path to solution file}: {title}
+(or "No solutions saved.")
 ```
-Review: FAIL
-{list specific gaps}
 
-
-
-- Accept as-is: mark complete manually in IMPLEMENT.md
+## Step 10: Output summary
+
 ```
+Review complete: {slug}
+
+Verdict: {PASS | PASS with concerns | FAIL}
+Findings: P1={N} P2={N} P3={N}
+Agents: Hawk({N}) Shield({N}) Sage({N})
 
-
+{If PASS or PASS with concerns:}
+Next: /rpi {slug}
+Or explicitly: /rpi:docs {slug}
+
+{If FAIL:}
+Fix P1 blockers and re-run: /rpi:review {slug}
+```
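For reference, the combined-diff step in the new review.md (Step 3) amounts to diffing from the parent of the first logged commit to the last one. The shell sketch below illustrates that under assumptions: the throwaway repository, file names, and the two demo commits stand in for a real project's Execution Log and are not part of the released command.

```shell
#!/bin/sh
# Sketch of review.md Step 3: squash all implementation commits into one diff.
# A scratch repo stands in for a real project; two commits play the roles of
# the first logged commit and the final simplify commit.
set -e
cd "$(mktemp -d)"
git init -q
git -c user.email=rpi@example.com -c user.name=rpi commit -q --allow-empty -m "base"
echo "one" > feature.txt
git add feature.txt
git -c user.email=rpi@example.com -c user.name=rpi commit -q -m "feat: first task"
echo "two" >> feature.txt
git add feature.txt
git -c user.email=rpi@example.com -c user.name=rpi commit -q -m "refactor: simplify"

first_commit=$(git rev-list --reverse HEAD | sed -n '2p')  # first Execution Log commit
last_commit=$(git rev-parse HEAD)                          # last Execution Log commit

# $IMPL_DIFF: everything the feature changed, across all of its commits.
IMPL_DIFF=$(git diff "${first_commit}^..${last_commit}")
# $CHANGED_FILES: file names only, for the agent prompts.
CHANGED_FILES=$(git diff --name-only "${first_commit}^..${last_commit}")
echo "$CHANGED_FILES"
```

The `^` suffix matters: diffing from the first commit itself would drop that commit's own changes from the review input.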
package/commands/rpi/rpi.md
ADDED
@@ -0,0 +1,125 @@
+---
+name: rpi
+description: Auto-progress a feature to its next phase. Detects current state and runs the appropriate step.
+argument-hint: "<feature-name> [--skip=phase] [--from=phase] [--force]"
+allowed-tools:
+- Read
+- Write
+- Edit
+- Bash
+- Glob
+- Grep
+- Agent
+- AskUserQuestion
+---
+
+# /rpi — Auto-Flow
+
+Detects the current phase of a feature and runs the next step automatically.
+
+---
+
+## Step 1: Load config and parse arguments
+
+1. Read `.rpi.yaml` for config. Apply defaults if missing:
+   - `folder`: `rpi/features`
+2. Parse `$ARGUMENTS` to extract:
+   - `{slug}` — the feature name (required)
+   - `--skip=phase` — skip a specific phase and detect the next one
+   - `--from=phase` — override detection, start from this phase
+   - `--force` — pass through to the delegated command
+
+If `{slug}` is not provided, ask with AskUserQuestion: "Which feature? Provide the slug (e.g. 'oauth', 'dark-mode')."
+
+## Step 2: Validate feature exists
+
+Check if `rpi/features/{slug}/` exists and contains `REQUEST.md`.
+
+If the directory does not exist:
+```
+Feature '{slug}' not found. Run /rpi:new {slug} to start.
+```
+Stop.
+
+If the directory exists but `REQUEST.md` is missing:
+```
+Feature '{slug}' has no REQUEST.md. This shouldn't happen. Run /rpi:new {slug} to recreate it.
+```
+Stop.
+
+## Step 3: Detect current phase
+
+Check which artifacts exist to determine the next phase:
+
+1. Has `REQUEST.md`, no `research/RESEARCH.md` → next = **research**
+2. Has `research/RESEARCH.md`, no `plan/PLAN.md` → next = **plan**
+3. Has `plan/PLAN.md`, no `implement/IMPLEMENT.md` → next = **implement**
+4. Has `implement/IMPLEMENT.md` but NOT all tasks checked (`- [x]`) → next = **implement** (with `--resume`)
+5. Has `implement/IMPLEMENT.md` with all tasks complete, no "## Simplify" section in IMPLEMENT.md → next = **simplify**
+6. Has simplify done, no "## Review Verdict" section in IMPLEMENT.md → next = **review**
+7. Has "## Review Verdict" with PASS, no `docs/` output generated → next = **docs**
+8. Everything done → feature is complete
+
+### Detection details
+
+For step 4: Read `implement/IMPLEMENT.md` and check if any `- [ ]` (unchecked tasks) remain. If all tasks are `- [x]`, the implementation is complete.
+
+For step 5: Check if IMPLEMENT.md contains a `## Simplify` section. If not, simplify has not been run yet.
+
+For step 6: Check if IMPLEMENT.md contains a `## Review Verdict` section. If not, review has not been run yet.
+
+For step 7: Check if IMPLEMENT.md contains a `## Review Verdict` section with "PASS". Then check if docs have been generated (look for mention of docs completion in IMPLEMENT.md or a generated docs artifact).
+
+## Step 4: Apply --skip flag
+
+If `--skip=phase` was provided:
+- If the detected next phase matches the skipped phase, advance to the phase after it using the same detection logic.
+- Valid phase names: `research`, `plan`, `implement`, `simplify`, `review`, `docs`.
+- If the skip target is invalid, inform the user and stop.
+
+## Step 5: Apply --from flag
+
+If `--from=phase` was provided:
+- Override the detected phase. Set next = the specified phase.
+- Valid phase names: `research`, `plan`, `implement`, `simplify`, `review`, `docs`.
+- If the from target is invalid, inform the user and stop.
+- This is useful for re-running a phase (e.g., after fixing issues).
+
+## Step 6: Handle completion
+
+If no next phase was detected (everything is done):
+```
+{slug} is complete! All phases done.
+
+To archive: /rpi:archive {slug}
+To re-run a phase: /rpi {slug} --from=phase
+```
+Stop.
+
+## Step 7: Announce and delegate
+
+Output what is about to happen:
+
+```
+{slug} -> next: {phase}
+Starting {phase} phase...
+```
+
+Then delegate to the appropriate command:
+
+1. Read `commands/rpi/{phase}.md`
+2. Follow its process section exactly, passing through the `{slug}` and any relevant flags (like `--force`, `--resume`)
+
+The auto-flow command does NOT duplicate phase logic. It detects the state, announces the next step, and then executes the full process defined in the corresponding command file.
+
+### Phase-to-command mapping
+
+| Phase | Command file | Key artifacts |
+|------------|-----------------------|----------------------------------|
+| REQUEST | `commands/rpi/new.md` | `REQUEST.md` |
+| RESEARCH | `commands/rpi/research.md` | `research/RESEARCH.md` |
+| PLAN | `commands/rpi/plan.md` | `plan/PLAN.md` |
+| IMPLEMENT | `commands/rpi/implement.md`| `implement/IMPLEMENT.md` |
+| simplify | `commands/rpi/simplify.md` | Simplify section in IMPLEMENT.md |
+| review | `commands/rpi/review.md` | Review Verdict in IMPLEMENT.md |
+| docs | `commands/rpi/docs.md` | Generated documentation |