rpi-kit 2.2.2 → 2.5.0
This diff shows the changes between publicly released versions of the package, as published to one of the supported registries. It is provided for informational purposes only.
- package/.claude-plugin/marketplace.json +3 -2
- package/.claude-plugin/plugin.json +1 -1
- package/.gemini/commands/opsx/apply.toml +149 -0
- package/.gemini/commands/opsx/archive.toml +154 -0
- package/.gemini/commands/opsx/bulk-archive.toml +239 -0
- package/.gemini/commands/opsx/continue.toml +111 -0
- package/.gemini/commands/opsx/explore.toml +170 -0
- package/.gemini/commands/opsx/ff.toml +94 -0
- package/.gemini/commands/opsx/new.toml +66 -0
- package/.gemini/commands/opsx/onboard.toml +547 -0
- package/.gemini/commands/opsx/propose.toml +103 -0
- package/.gemini/commands/opsx/sync.toml +131 -0
- package/.gemini/commands/opsx/verify.toml +161 -0
- package/.gemini/commands/rpi/archive.toml +140 -0
- package/.gemini/commands/rpi/docs-gen.toml +210 -0
- package/.gemini/commands/rpi/docs.toml +153 -0
- package/.gemini/commands/rpi/evolve.toml +411 -0
- package/.gemini/commands/rpi/fix.toml +290 -0
- package/.gemini/commands/rpi/implement.toml +272 -0
- package/.gemini/commands/rpi/init.toml +180 -0
- package/.gemini/commands/rpi/learn.toml +105 -0
- package/.gemini/commands/rpi/new.toml +158 -0
- package/.gemini/commands/rpi/onboarding.toml +236 -0
- package/.gemini/commands/rpi/party.toml +204 -0
- package/.gemini/commands/rpi/plan.toml +623 -0
- package/.gemini/commands/rpi/research.toml +265 -0
- package/.gemini/commands/rpi/review.toml +443 -0
- package/.gemini/commands/rpi/rpi.toml +114 -0
- package/.gemini/commands/rpi/simplify.toml +214 -0
- package/.gemini/commands/rpi/status.toml +194 -0
- package/.gemini/commands/rpi/update.toml +107 -0
- package/.gemini/skills/openspec-apply-change/SKILL.md +156 -0
- package/.gemini/skills/openspec-archive-change/SKILL.md +114 -0
- package/.gemini/skills/openspec-bulk-archive-change/SKILL.md +246 -0
- package/.gemini/skills/openspec-continue-change/SKILL.md +118 -0
- package/.gemini/skills/openspec-explore/SKILL.md +288 -0
- package/.gemini/skills/openspec-ff-change/SKILL.md +101 -0
- package/.gemini/skills/openspec-new-change/SKILL.md +74 -0
- package/.gemini/skills/openspec-onboard/SKILL.md +554 -0
- package/.gemini/skills/openspec-propose/SKILL.md +110 -0
- package/.gemini/skills/openspec-sync-specs/SKILL.md +138 -0
- package/.gemini/skills/openspec-verify-change/SKILL.md +168 -0
- package/CHANGELOG.md +15 -0
- package/README.md +6 -6
- package/agents/atlas.md +40 -0
- package/agents/clara.md +40 -0
- package/agents/forge.md +40 -0
- package/agents/hawk.md +40 -0
- package/agents/luna.md +40 -0
- package/agents/mestre.md +46 -0
- package/agents/nexus.md +52 -0
- package/agents/pixel.md +40 -0
- package/agents/quill.md +40 -0
- package/agents/razor.md +40 -0
- package/agents/sage.md +46 -0
- package/agents/scout.md +40 -0
- package/agents/shield.md +40 -0
- package/bin/cli.js +60 -18
- package/commands/rpi/docs.md +29 -1
- package/commands/rpi/fix.md +301 -0
- package/commands/rpi/implement.md +37 -0
- package/commands/rpi/plan.md +66 -1
- package/commands/rpi/research.md +48 -1
- package/commands/rpi/review.md +48 -1
- package/commands/rpi/rpi.md +1 -1
- package/commands/rpi/simplify.md +31 -1
- package/commands/rpi/status.md +69 -0
- package/marketplace.json +3 -2
- package/package.json +2 -1
--- /dev/null
+++ package/.gemini/commands/rpi/research.toml
@@ -0,0 +1,265 @@
+description = "Analyze feasibility with Atlas (codebase) and Scout (external). Nexus synthesizes."
+
+prompt = """
+# /rpi:research — Research Phase
+
+Run Atlas (codebase analysis) and Scout (external research) in parallel. Nexus synthesizes their outputs into RESEARCH.md with a GO / GO with concerns / NO-GO verdict.
+
+---
+
+## Step 1: Load config and validate
+
+1. Read `.rpi.yaml` for config. Apply defaults if missing:
+   - `specs_dir`: `rpi/specs`
+   - `solutions_dir`: `rpi/solutions`
+   - `context_file`: `rpi/context.md`
+2. Parse `$ARGUMENTS` to extract `{slug}` and optional `--force` flag.
+3. Validate `rpi/features/{slug}/REQUEST.md` exists. If not:
+   ```
+   Feature '{slug}' not found. Run /rpi:new {slug} to start.
+   ```
+   Stop.
+
+## Step 2: Check existing research
+
+1. Check if `rpi/features/{slug}/research/RESEARCH.md` already exists.
+2. If it exists and `--force` was NOT passed:
+   - Ask the user: "RESEARCH.md already exists for '{slug}'. Overwrite? (yes/no)"
+   - If no: stop.
+3. If `--force` was passed or user confirms: proceed (will overwrite).
+
+## Step 3: Gather context
+
+1. Read `rpi/features/{slug}/REQUEST.md` — store as `$REQUEST`.
+2. Read `rpi/context.md` (project context) if it exists — store as `$CONTEXT`.
+3. Scan `rpi/specs/` for any specs relevant to the feature described in REQUEST.md — store as `$RELEVANT_SPECS`.
+4. Scan `rpi/solutions/` for any past solutions relevant to this feature — store as `$RELEVANT_SOLUTIONS`.
+
+## Step 4: Launch Atlas and Scout in parallel
+
+Use the Agent tool to launch both agents simultaneously.
+
+### Atlas (codebase analysis)
+
+Launch Atlas agent with this prompt:
+
+```
+You are Atlas. Analyze the codebase for feature: {slug}
+
+## Request
+{$REQUEST}
+
+## Project Context
+{$CONTEXT}
+
+## Relevant Specs
+{$RELEVANT_SPECS}
+
+## Relevant Past Solutions
+{$RELEVANT_SOLUTIONS}
+
+Your task:
+1. Analyze the codebase for patterns, conventions, and architecture relevant to this feature
+2. Check rpi/specs/ for existing specifications that overlap or relate
+3. Check rpi/solutions/ for past solutions that could be reused
+4. Identify files likely affected, patterns to follow, and risks
+5. Output using your standard format: [Atlas -- Codebase Analysis]
+
+6. After your analysis, append your activity to rpi/features/{slug}/ACTIVITY.md:
+
+### {current_date} — Atlas (Research)
+- **Action:** Codebase analysis for {slug}
+- **Scope:** {list files you actually read}
+- **Key decisions:** {for each <decision> tag you emitted: "summary (rationale)", separated by semicolons. If none: "No decisions in this phase."}
+- **Patterns found:** {count and summary}
+- **Quality:** {your quality gate result}
+```
+
+### Scout (external research)
+
+Launch Scout agent with this prompt:
+
+```
+You are Scout. Research technical feasibility for feature: {slug}
+
+## Request
+{$REQUEST}
+
+## Project Context
+{$CONTEXT}
+
+## Relevant Past Solutions
+{$RELEVANT_SOLUTIONS}
+
+Your task:
+1. FIRST check rpi/solutions/ for relevant past solutions before any external research
+2. Research technical feasibility of the proposed approach
+3. Evaluate alternative libraries/tools with trade-off comparison
+4. Identify risks: breaking changes, security issues, maintenance status
+5. Find relevant benchmarks, examples, or case studies
+6. Output using your standard format: [Scout -- Technical Investigation]
+
+7. After your investigation, append your activity to rpi/features/{slug}/ACTIVITY.md:
+
+### {current_date} — Scout (Research)
+- **Action:** External research for {slug}
+- **Key decisions:** {for each <decision> tag you emitted: "summary (rationale)", separated by semicolons. If none: "No decisions in this phase."}
+- **Sources consulted:** {count and list}
+- **Recommendations:** {count and summary}
+- **Quality:** {your quality gate result}
+```
+
+## Step 5: Wait for completion
+
+Wait for both Atlas and Scout agents to complete. Store their outputs:
+- `$ATLAS_OUTPUT` — Atlas's codebase analysis
+- `$SCOUT_OUTPUT` — Scout's technical investigation
+
+## Step 6: Detect disagreements
+
+Compare Atlas and Scout outputs for contradictions:
+- Atlas says feasible but Scout says risky (or vice versa)
+- Different recommendations on approach, libraries, or architecture
+- Conflicting risk assessments
+
+If disagreements are detected, launch Nexus for a mini-debate:
+
+```
+You are Nexus. Atlas and Scout disagree on key points for feature: {slug}
+
+## Atlas Output
+{$ATLAS_OUTPUT}
+
+## Scout Output
+{$SCOUT_OUTPUT}
+
+Identify the specific disagreements. For each one:
+1. State what Atlas argues
+2. State what Scout argues
+3. Evaluate the evidence for each position
+4. Declare the stronger position with reasoning
+
+Output as: [Nexus -- Debate Summary]
+```
+
+Store the debate result as `$DEBATE_OUTPUT`.
+
+## Step 7: Nexus synthesis
+
+Launch Nexus agent to produce the final RESEARCH.md:
+
+```
+You are Nexus. Synthesize research for feature: {slug}
+
+## Request
+{$REQUEST}
+
+## Atlas Output
+{$ATLAS_OUTPUT}
+
+## Scout Output
+{$SCOUT_OUTPUT}
+
+## Debate Results (if any)
+{$DEBATE_OUTPUT or "No disagreements detected."}
+
+Produce a single RESEARCH.md with this structure:
+
+# Research: {Feature Title}
+
+## Summary
+5 lines: verdict, complexity, risk, recommendation, key finding.
+
+## Atlas Findings
+{Key findings from Atlas's codebase analysis — preserve strongest evidence}
+
+## Scout Findings
+{Key findings from Scout's technical investigation — preserve strongest evidence}
+
+## Consensus
+{Points where Atlas and Scout agree}
+
+## Resolved Disagreements
+{For each disagreement: what Atlas said, what Scout said, resolution with reasoning}
+(or "No disagreements detected.")
+
+## Risks and Mitigations
+{Combined risk assessment from both agents}
+
+## Relevant Solutions
+{Past solutions from rpi/solutions/ that apply — for knowledge reuse}
+(or "No relevant past solutions found.")
+
+## Open Questions
+{Unresolved items that need user input}
+
+## Verdict
+{GO | GO with concerns | NO-GO}
+Confidence: {HIGH | MEDIUM | LOW}
+
+Rules for verdict:
+- Any BLOCK finding = NO-GO
+- No BLOCK + 2 or more CONCERN findings = GO with concerns
+- Otherwise = GO
+- NO-GO requires an Alternatives section
+
+After synthesis, append your activity to rpi/features/{slug}/ACTIVITY.md:
+
+### {current_date} — Nexus (Research Synthesis)
+- **Action:** Synthesized Atlas + Scout findings for {slug}
+- **Key decisions:** {for each <decision> tag you emitted: "summary (rationale)", separated by semicolons. If none: "No decisions in this phase."}
+- **Consensus points:** {count}
+- **Disagreements resolved:** {count}
+- **Quality:** {your quality gate result}
+```
+
+## Step 8: Write RESEARCH.md and populate delta baselines
+
+1. Ensure directory exists: `rpi/features/{slug}/research/`
+2. Write the Nexus output to `rpi/features/{slug}/research/RESEARCH.md`
+3. If Nexus identified relevant existing specs in `rpi/specs/`:
+   - Ensure `rpi/features/{slug}/delta/` directory structure exists (ADDED/, MODIFIED/, REMOVED/)
+   - Copy relevant spec baselines into `delta/MODIFIED/` so the plan phase has reference copies
+   - This gives Mestre (plan phase) the current state of specs that will be changed
+
+## Step 9: Consolidate decisions to DECISIONS.md
+
+1. Read `rpi/features/{slug}/ACTIVITY.md`.
+2. Extract all `<decision>` tags from entries belonging to the Research phase (Atlas, Scout, Nexus entries from this run).
+3. If no decisions found, skip this step.
+4. Write `rpi/features/{slug}/DECISIONS.md`:
+
+   ```markdown
+   # Decision Log — {slug}
+
+   ## Research Phase
+   _Generated: {current_date}_
+
+   | # | Type | Decision | Alternatives | Rationale | Impact |
+   |---|------|----------|-------------|-----------|--------|
+   | {N} | {type} | {summary} | {alternatives} | {rationale} | {impact} |
+   ```
+
+5. Number decisions sequentially starting from 1.
+
+## Step 10: Output summary
+
+```
+Research complete: rpi/features/{slug}/research/RESEARCH.md
+
+Verdict: {GO | GO with concerns | NO-GO}
+
+Next: /rpi {slug}
+Or explicitly: /rpi:plan {slug}
+```
+
+If NO-GO:
+```
+Research complete: rpi/features/{slug}/research/RESEARCH.md
+
+Verdict: NO-GO
+
+Review the RESEARCH.md for details and alternatives.
+To override: /rpi:plan {slug} --force
+```
+"""
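Step 1 of research.toml (apply `.rpi.yaml` defaults, then pull `{slug}` and the optional `--force` flag out of `$ARGUMENTS`) can be sketched roughly as below. This is an illustrative sketch under stated assumptions, not rpi-kit source; `load_config` and `parse_arguments` are hypothetical names.

```python
# Hypothetical sketch of research.toml Step 1 (not rpi-kit code):
# merge user config over defaults, then parse "$ARGUMENTS".

DEFAULTS = {
    "specs_dir": "rpi/specs",
    "solutions_dir": "rpi/solutions",
    "context_file": "rpi/context.md",
}

def load_config(user_config=None):
    """Apply defaults for any key missing from .rpi.yaml."""
    merged = dict(DEFAULTS)        # start from the documented defaults
    merged.update(user_config or {})  # user values win when present
    return merged

def parse_arguments(arguments):
    """Extract the {slug} and the optional --force flag."""
    tokens = arguments.split()
    force = "--force" in tokens
    positional = [t for t in tokens if not t.startswith("--")]
    if not positional:
        raise ValueError("usage: /rpi:research {slug} [--force]")
    return positional[0], force

print(load_config({"specs_dir": "custom/specs"})["specs_dir"])
print(parse_arguments("user-auth --force"))
```

A config file that sets only `specs_dir` still gets the default `solutions_dir` and `context_file`, which matches the "apply defaults if missing" wording above.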
--- /dev/null
+++ package/.gemini/commands/rpi/review.toml
@@ -0,0 +1,443 @@
+description = "Adversarial review with Hawk + Shield + Sage in parallel. Nexus synthesizes."
+
+prompt = """
+# /rpi:review — Review Phase
+
+Adversarial review with three parallel agents: Hawk (code review), Shield (security audit), Sage (test coverage). Nexus synthesizes findings into a final verdict.
+
+---
+
+## Step 1: Load config and validate
+
+1. Read `.rpi.yaml` for config. Apply defaults if missing:
+   - `folder`: `rpi/features`
+   - `context_file`: `rpi/context.md`
+   - `solutions_dir`: `rpi/solutions`
+   - `auto_learn`: `true`
+2. Parse `$ARGUMENTS` to extract `{slug}`.
+3. Validate `rpi/features/{slug}/implement/IMPLEMENT.md` exists. If not:
+   ```
+   IMPLEMENT.md not found for '{slug}'. Run /rpi:implement {slug} first.
+   ```
+   Stop.
+
+## Step 2: Gather all artifacts
+
+1. Read `rpi/features/{slug}/REQUEST.md` — store as `$REQUEST`.
+2. Read `rpi/features/{slug}/plan/PLAN.md` — store as `$PLAN`.
+3. Read `rpi/features/{slug}/plan/eng.md` if it exists — store as `$ENG`.
+4. Read `rpi/features/{slug}/implement/IMPLEMENT.md` — store as `$IMPLEMENT`.
+5. Read `rpi/context.md` (project context) if it exists — store as `$CONTEXT`.
+
+## Step 3: Get implementation diff
+
+1. Read `$IMPLEMENT` to extract all commit hashes from the Execution Log (including simplify commit if present).
+2. Use git to get the combined diff:
+   ```bash
+   git diff {first_commit}^..{last_commit}
+   ```
+3. Store the diff as `$IMPL_DIFF`.
+4. Collect the list of all files changed — store as `$CHANGED_FILES`.
+
+## Step 4: Launch Hawk, Shield, and Sage in parallel
+
+Use the Agent tool to launch all three agents simultaneously.
+
+### Hawk (adversarial review)
+
+Launch Hawk agent with this prompt:
+
+```
+You are Hawk. Perform an adversarial code review for feature: {slug}
+
+## Implementation Diff
+{$IMPL_DIFF}
+
+## Changed Files
+{$CHANGED_FILES}
+
+## Engineering Spec
+{$ENG}
+
+## Implementation Plan
+{$PLAN}
+
+## Project Context
+{$CONTEXT}
+
+Your task — ultra-thinking deep dive from 5 perspectives:
+
+1. **Developer**: Code quality, maintainability, readability, patterns
+2. **Ops**: Deployability, monitoring, logging, failure modes
+3. **User**: Edge cases in user-facing behavior, error messages, UX
+4. **Security**: Input validation, auth checks, data exposure
+5. **Business**: Does it solve the stated problem? Missed requirements?
+
+CRITICAL RULES:
+1. You MUST find problems. Zero findings is not acceptable — re-analyse.
+2. Read ALL changed files thoroughly before writing findings.
+3. Each finding must reference specific file and line.
+4. Classify every finding:
+   - P1 (blocker): Must fix before merge. Bugs, data loss, security holes, broken contracts.
+   - P2 (should fix): Important but not blocking. Performance, naming, missing validation.
+   - P3 (nice-to-have): Suggestions, style, minor improvements.
+
+Output format:
+## Findings
+
+### P1 — Blockers
+- [{file}:{line}] {description} — Impact: {impact}
+(or "None found.")
+
+### P2 — Should Fix
+- [{file}:{line}] {description} — Impact: {impact}
+
+### P3 — Nice to Have
+- [{file}:{line}] {description} — Suggestion: {suggestion}
+
+## Summary
+- P1: {N} | P2: {N} | P3: {N}
+- Overall: {assessment}
+
+After your review, append your activity to rpi/features/{slug}/ACTIVITY.md:
+
+### {current_date} — Hawk (Review)
+- **Action:** Adversarial code review for {slug}
+- **Key decisions:** {for each <decision> tag you emitted: "summary (rationale)", separated by semicolons. If none: "No decisions in this phase."}
+- **Findings:** P1={count} P2={count} P3={count}
+- **Perspectives covered:** {list of 5 perspectives}
+- **Quality:** {your quality gate result}
+```
+
+Store the output as `$HAWK_OUTPUT`.
+
+### Shield (security audit)
+
+Launch Shield agent with this prompt:
+
+```
+You are Shield. Perform a security audit for feature: {slug}
+
+## Implementation Diff
+{$IMPL_DIFF}
+
+## Changed Files
+{$CHANGED_FILES}
+
+## Engineering Spec
+{$ENG}
+
+## Project Context
+{$CONTEXT}
+
+Your task — systematic security audit:
+
+### OWASP Top 10 Check
+For each applicable category, check the implementation:
+1. Injection (SQL, NoSQL, OS command, LDAP)
+2. Broken Authentication
+3. Sensitive Data Exposure
+4. XML External Entities (XXE)
+5. Broken Access Control
+6. Security Misconfiguration
+7. Cross-Site Scripting (XSS)
+8. Insecure Deserialization
+9. Using Components with Known Vulnerabilities
+10. Insufficient Logging & Monitoring
+
+### Additional Checks
+- Hardcoded secrets, API keys, tokens
+- Missing input validation or sanitization
+- Auth bypass possibilities
+- Race conditions
+- Edge cases and boundary conditions (overflow, empty input, null)
+- Error messages leaking internal details
+
+RULES:
+1. Read ALL changed files before auditing
+2. Each finding must reference specific file and line
+3. Classify: P1 (blocker) | P2 (should fix) | P3 (nice-to-have)
+4. If no security issues found, explicitly state which checks passed
+
+Output format:
+## Security Findings
+
+### P1 — Critical
+- [{file}:{line}] {vulnerability} — Risk: {risk description}
+(or "None found.")
+
+### P2 — Important
+- [{file}:{line}] {vulnerability} — Risk: {risk description}
+
+### P3 — Hardening
+- [{file}:{line}] {suggestion} — Benefit: {benefit}
+
+## OWASP Coverage
+- {category}: PASS | FAIL | N/A — {notes}
+
+## Summary
+- P1: {N} | P2: {N} | P3: {N}
+
+After your audit, append your activity to rpi/features/{slug}/ACTIVITY.md:
+
+### {current_date} — Shield (Review)
+- **Action:** Security audit for {slug}
+- **Key decisions:** {for each <decision> tag you emitted: "summary (rationale)", separated by semicolons. If none: "No decisions in this phase."}
+- **Findings:** P1={count} P2={count} P3={count}
+- **OWASP categories checked:** {count}
+- **Quality:** {your quality gate result}
+```
+
+Store the output as `$SHIELD_OUTPUT`.
+
+### Sage (coverage check)
+
+Launch Sage agent with this prompt:
+
+```
+You are Sage. Verify test coverage for feature: {slug}
+
+## Implementation Diff
+{$IMPL_DIFF}
+
+## Changed Files
+{$CHANGED_FILES}
+
+## Engineering Spec
+{$ENG}
+
+## Implementation Plan
+{$PLAN}
+
+## Project Context
+{$CONTEXT}
+
+Your task — check what is tested and what is not:
+
+1. For each changed file, find the corresponding test file(s)
+2. Identify modules/functions with NO tests at all
+3. Identify tested modules with MISSING edge cases:
+   - Error paths not tested
+   - Boundary values not tested
+   - Null/empty/invalid inputs not tested
+   - Concurrent/race condition scenarios not tested
+4. Check that acceptance criteria from the plan have test coverage
+5. Suggest specific tests that should be added
+
+RULES:
+1. Read ALL changed files and their test files before reporting
+2. Be specific — name the function/module and the missing test case
+3. Classify: P1 (no tests at all) | P2 (missing critical paths) | P3 (missing edge cases)
+
+Output format:
+## Coverage Analysis
+
+### Untested Modules (P1)
+- {file}:{function/class} — No tests found
+(or "All modules have tests.")
+
+### Missing Critical Paths (P2)
+- {file}:{function} — Missing: {description of untested path}
+
+### Missing Edge Cases (P3)
+- {file}:{function} — Missing: {description of edge case}
+
+## Suggested Tests
+1. {test description} — covers {what it covers}
+2. ...
+
+## Summary
+- Modules without tests: {N}
+- Missing critical paths: {N}
+- Missing edge cases: {N}
+
+After your analysis, append your activity to rpi/features/{slug}/ACTIVITY.md:
+
+### {current_date} — Sage (Review)
+- **Action:** Test coverage analysis for {slug}
+- **Key decisions:** {for each <decision> tag you emitted: "summary (rationale)", separated by semicolons. If none: "No decisions in this phase."}
+- **Untested modules:** {count}
+- **Missing critical paths:** {count}
+- **Missing edge cases:** {count}
+- **Quality:** {your quality gate result}
+```
+
+Store the output as `$SAGE_OUTPUT`.
+
+## Step 5: Wait for completion
+
+Wait for all three agents (Hawk, Shield, Sage) to complete.
+
+## Step 6: Launch Nexus — synthesize findings
+
+Launch Nexus agent to produce the final review report:
+
+```
+You are Nexus. Synthesize the review findings for feature: {slug}
+
+## Hawk Output (Code Review)
+{$HAWK_OUTPUT}
+
+## Shield Output (Security Audit)
+{$SHIELD_OUTPUT}
+
+## Sage Output (Coverage)
+{$SAGE_OUTPUT}
+
+## Request
+{$REQUEST}
+
+Your task:
+1. Merge all findings from Hawk, Shield, and Sage
+2. Deduplicate — if multiple agents flagged the same issue, combine into one finding
+3. Classify every finding: P1 (blocker) | P2 (should fix) | P3 (nice-to-have)
+4. Determine verdict based on findings
+
+Verdict rules:
+- Any P1 finding → FAIL
+- No P1 but has P2/P3 → PASS with concerns
+- No findings → PASS
+
+Output format:
+## Review Report: {slug}
+
+### Verdict: {PASS | PASS with concerns | FAIL}
+
+### P1 — Blockers (must fix)
+- [{source}] [{file}:{line}] {description}
+(or "None.")
+
+### P2 — Should Fix
+- [{source}] [{file}:{line}] {description}
+
+### P3 — Nice to Have
+- [{source}] [{file}:{line}] {description}
+
+### Coverage Summary (Sage)
+- {summary of test coverage status}
+
+### Totals
+- P1: {N} | P2: {N} | P3: {N}
+- Sources: Hawk {N} | Shield {N} | Sage {N}
+```
+
+Store the output as `$NEXUS_OUTPUT`.
+
+## Step 7: Handle verdict
+
+### If FAIL (P1 findings exist):
+
+1. Output to the user:
+   ```
+   Review FAILED for '{slug}'. {N} P1 blockers must be fixed.
+
+   {list P1 findings with file:line and description}
+
+   Fix all P1 issues and re-run: /rpi:review {slug}
+   ```
+2. Do NOT proceed to docs phase.
+
+### If PASS with concerns (P2/P3 only):
+
+1. Output to the user:
+   ```
+   Review PASSED with concerns for '{slug}'.
+   P2: {N} | P3: {N}
+
+   {list P2 findings}
+
+   These are non-blocking but should be addressed.
+   ```
+2. Proceed to Step 8.
+
+### If PASS (no findings):
+
+1. Output to the user:
+   ```
+   Review PASSED for '{slug}'. No issues found.
+   ```
+2. Proceed to Step 8.
+
+## Step 8: Auto-learn to solutions
+
+If `auto_learn` is `true` in config (default):
+
+1. Review all P1 and P2 findings that were particularly insightful or represent reusable knowledge.
+2. For each solution worth saving, write to `rpi/solutions/{category}/{slug}.md` using this format:
+   ```markdown
+   # {Title}
+
+   ## Problem
+   {symptoms, how it manifests}
+
+   ## Solution
+   {code, approach, what worked}
+
+   ## Prevention
+   {how to avoid in the future}
+
+   ## Context
+   Feature: {slug} | Date: {YYYY-MM-DD}
+   Files: {list}
+   ```
+3. Categories are auto-detected: `performance/`, `security/`, `database/`, `testing/`, `architecture/`, `patterns/`
+4. If no findings are worth saving, skip this step.
+
+## Step 9: Update IMPLEMENT.md
+
+Append a review section to `rpi/features/{slug}/implement/IMPLEMENT.md`:
+
+```markdown
+## Review
+
+Date: {YYYY-MM-DD}
+Agents: Hawk + Shield + Sage → Nexus
+Verdict: {PASS | PASS with concerns | FAIL}
+
+### Findings
+- P1: {N} | P2: {N} | P3: {N}
+
+### Details
+{$NEXUS_OUTPUT summary}
+
+### Solutions Saved
+- {path to solution file}: {title}
+(or "No solutions saved.")
+```
+
+## Step 10: Consolidate decisions to DECISIONS.md
+
+1. Read `rpi/features/{slug}/ACTIVITY.md`.
+2. Extract all `<decision>` tags from entries belonging to the Review phase (Hawk, Shield, Sage entries from this run).
+3. If no decisions found, skip this step.
+4. Read `rpi/features/{slug}/DECISIONS.md` if it exists (to get the last decision number for sequential numbering).
+5. Append a new section to `rpi/features/{slug}/DECISIONS.md`:
+
+   ```markdown
+   ## Review Phase
+   _Generated: {current_date}_
+
+   | # | Type | Decision | Alternatives | Rationale | Impact |
+   |---|------|----------|-------------|-----------|--------|
+   | {N} | {type} | {summary} | {alternatives} | {rationale} | {impact} |
+   ```
+
+6. Number decisions sequentially, continuing from the last number in DECISIONS.md.
+
+## Step 11: Output summary
+
+```
+Review complete: {slug}
+
+Verdict: {PASS | PASS with concerns | FAIL}
+Findings: P1={N} P2={N} P3={N}
+Agents: Hawk({N}) Shield({N}) Sage({N})
+
+{If PASS or PASS with concerns:}
+Next: /rpi {slug}
+Or explicitly: /rpi:docs {slug}
+
+{If FAIL:}
+Fix P1 blockers and re-run: /rpi:review {slug}
+```
+"""
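The verdict rules Nexus applies in review.toml Step 6 reduce to a small decision function. A minimal sketch follows; `compute_verdict` is a hypothetical name for illustration, not part of the rpi-kit API.

```python
# Minimal sketch of the Nexus verdict rules from review.toml Step 6
# (hypothetical helper, not rpi-kit code).

def compute_verdict(p1, p2, p3):
    """Map finding counts by severity to the review verdict."""
    if p1 > 0:
        return "FAIL"                # any P1 finding blocks the merge
    if p2 > 0 or p3 > 0:
        return "PASS with concerns"  # non-blocking findings only
    return "PASS"                    # clean review

print(compute_verdict(1, 0, 0))  # FAIL
print(compute_verdict(0, 2, 1))  # PASS with concerns
print(compute_verdict(0, 0, 0))  # PASS
```

Note the asymmetry with the research phase, whose rules use a threshold (two or more CONCERN findings) before downgrading to "GO with concerns", whereas here a single P2 or P3 is enough to qualify the PASS.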