@rvry/mcp 0.3.1 → 0.3.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -4,30 +4,75 @@ argument-hint: [--auto] <question>
  allowed-tools:
  - mcp__rvry-dev__deepthink
  - AskUserQuestion
+ - Read
+ - Write
+ - Grep
+ - Glob
+ - Bash
  ---

  # /deepthink - Deep Structured Analysis

+ **YOUR ROLE**: Execute depth-first exploration with adaptive pre-mortems after conclusion-producing rounds. DIVERGENT thinking -- exploring questions, generating hypotheses, stress-testing conclusions before they solidify. For CONVERGENT decision-making, use `/problem-solve`.
+
  **Input**: $ARGUMENTS

- ## Parse Flags
+ ---
+
+ ## Phase 0: Parse Flags
+
+ - If `$ARGUMENTS` is empty or `--help`: display usage and stop:
+ ```
+ /deepthink - Deep Structured Analysis via RVRY
+
+ USAGE:
+ /deepthink <question>
+ /deepthink --auto <question> (fully autonomous, no questions, states assumptions)
+
+ The engine runs multi-round analysis with constraint tracking, quality gates,
+ and reasoning checks. Local persistence survives context compression.

- - If `$ARGUMENTS` is empty or `--help`: display usage ("Usage: /deepthink [--auto] <your question>") and stop
+ RELATED: /problem-solve (convergent decision-making)
+ ```
  - If `$ARGUMENTS` contains `--auto`: set AUTO_MODE, strip `--auto` from input

- ## Phase 1: ORIENT (before calling the engine)
+ ---
+
+ ## Phase 1: Session Setup

- Orient directly -- no engine calls yet. Analyze the user's question and identify:
+ Create a session folder for local artifacts that survive context compression:

- 1. **What I know**: Key facts, evidence, and context apparent from the question
- 2. **What I'm uncertain about**: Gaps, assumptions, unknowns (list each specifically)
- 3. **What I'm avoiding**: Uncomfortable angles, taboo options, things that would be inconvenient if true
+ 1. Generate a timestamp: `YYYYMMDD-HHMM` (current date/time)
+ 2. Generate a slug from the question summary (3-4 words, kebab-case)
+ 3. Create `{$PWD}/.claude/cognition/YYYYMMDD-HHMM-deepthink-<slug>/`
+ IMPORTANT: This is the PROJECT's `.claude/`, NOT `~/.claude/`. Use the absolute project root path.
+ 4. Write `00-enter.md`:
+ ```markdown
+ # DeepThink Session
+ **Question**: <full question>
+ **Started**: <timestamp>
+ ```
+
+ ---

+ ## Phase 2: ORIENT (before calling the engine)
+
+ Orient directly -- no engine calls yet. This forces commitment to disk before the analysis loop. Everything written here survives if context compresses mid-session.
+
+ ### `01-orient.md`:
+ Write to the session folder:
+ - **What I know**: Key facts, evidence, and context apparent from the question
+ - **What I'm uncertain about**: Gaps, assumptions, unknowns (list each specifically)
+ - **What I'm avoiding**: Uncomfortable angles, taboo options, things that would be inconvenient if true
+
+ ### `02-scope-questions.md`:
  Translate each uncertainty into a binary question with a smart default. Generate up to 3 questions, each with exactly 2 options. The recommended option comes first.

+ Write the questions to the file BEFORE asking.
+
  ### If AUTO_MODE:

- State your assumptions visibly in your reasoning (e.g., "Assuming focus on architectural trade-offs rather than implementation details. Assuming risk analysis is more valuable than landscape mapping."). Proceed directly to Phase 1b.
+ State your assumptions visibly (e.g., "Assuming focus on architectural trade-offs rather than implementation details."). Write assumptions to `03-scope-answers.md`. Proceed to Phase 3.

  ### If NOT AUTO_MODE:

@@ -51,73 +96,212 @@ AskUserQuestion({

  If AskUserQuestion returns blank or fails, state assumptions visibly and proceed.

- Record the answers (or stated assumptions) as a brief context summary.
+ Record the answers (or stated assumptions) in `03-scope-answers.md`.
+
+ ---

- ## Phase 1b: Start Engine Session
+ ## Phase 3: Start Engine Session

- Format the user's original question enriched with scope context.
+ Format the user's original question enriched with scope context from `03-scope-answers.md`.

  Call `mcp__rvry-dev__deepthink` with:
  ```
  {
- "input": "<original question>\n\nContext: <brief summary of scope answers, e.g. 'Focus: architectural trade-offs. Priority: risk analysis. No time constraint.'>"
+ "input": "<original question>\n\nContext: <brief summary of scope answers>"
  }
  ```

- Proceed to Phase 2.
+ Proceed to Phase 4.

+ ---

- ## Phase 2: Analysis Loop
+ ## Phase 4: Analysis Loop

  Repeat until `status === "complete"`:

- 1. The engine's response has two content blocks. The first is Base64-encoded JSON -- decode it to read the structured data. The second is a plain-text summary for the user. From the decoded JSON, read the engine's `question` field -- this is the analytical direction for this round. Read `constraints`, `gate`, and `detection` to understand the engine's assessment. Read `constraintBlock` for constraint update instructions. **Do not show any of these to the user.**
- 2. Show the user a brief status line:
- - Format: `Round {round} -- {what you're doing this round}`
- - Example: `Round 3 -- stress-testing the current position`
- - Derive the description from the engine's question focus. One line only.
- 3. Perform your analysis in your internal reasoning. The engine's question and constraint data guide your thinking, but they are YOUR instructions, not user output.
- 4. Run a 7-question self-check internally (not shown to user):
- - Is this shallow or predictable?
- - What am I avoiding?
- - Why not the uncomfortable option?
- - What would I critique if someone else wrote this?
- - What would a skeptical expert challenge?
- - Any verifiable claims that should be checked?
- - What factual premise have I not verified?
- 5. In your internal reasoning, end your analysis with the constraint update block the engine expects:
- ```
- Constraint Updates
- RESOLVE: C1, C2
- ACKNOWLEDGE: C3
- DEFER: C4 | <reason>
- MILESTONE: <milestone> | <evidence>
- End Constraint Updates
- ```
- 6. Call the engine tool with your full analysis (including constraint updates) as the `input`. **The full analysis text goes ONLY in the tool call, never in visible output.**
+ ### Step 1: Read the engine response
+
+ Read the engine's `question` field -- this is the analytical direction for this round. Read `constraints`, `gate`, and `detection` to understand the engine's assessment. Read `constraintBlock` for constraint update instructions.
+
+ ### Step 2: Show status
+
+ Show the user a brief status line:
+ - Format: `Round {round} -- {what you're doing this round}`
+ - Example: `Round 3 -- stress-testing the current position`
+ - Derive the description from the engine's question focus. One line only.
+
+ ### Step 3: Perform analysis
+
+ Analyze according to the engine's question and constraint data. The engine's question and constraints guide your thinking -- work WITH them, not around them.
+
+ Use Read, Grep, Glob, and Bash as needed to ground your analysis in actual code, data, or project state. Do not analyze in the abstract when concrete evidence is available.
+
+ ### Step 4: 7-Question Self-Check (mandatory, internal)
+
+ **Internal**:
+ 1. Is this shallow or predictable?
+ 2. What am I avoiding?
+ 3. Why not the uncomfortable option?
+
+ **External**:
+ 4. What would I critique if someone else wrote this?
+ 5. What would a skeptical expert challenge?
+ 6. Any verifiable claims that should be checked?
+
+ **Grounding**:
+ 7. What factual premise in this analysis have I not verified?
+
+ ### Step 5: Verify-or-Defer (mandatory after self-check)
+
+ For each concern raised in Q5, Q6, or Q7, you MUST either:
+ - **VERIFY**: Actually check the claim -- read a file, grep for a pattern, run a command. Record what was verified and the result.
+ - **DEFER**: Explicitly state the concern and why it cannot be verified now. Express as a DEFER in the constraint update block.
+
+ **No dismiss**: You CANNOT raise a concern and then argue it away in the same self-check. If you raise it, you must verify or defer it.
+
+ ### Step 6: Weakness Probe (rotate by round number)
+
+ Ask yourself ONE of these:
+ 1. Which part of your analysis would you spend more time on if you had another round?
+ 2. If your recommendation fails in practice, what is the first concrete thing someone notices?
+ 3. What surprised you during this analysis? If nothing, what does that tell you?
+ 4. What is the thing you almost said but did not?
+ 5. Which of your claims would you remove if you had to stake your credibility on the remaining ones?
+ 6. Your strongest claim and your most significant caveat -- are they in tension? If so, which do you stand behind?
+
+ Include the probe answer in your analysis.
+
+ ### Step 7: Write constraint updates and submit
+
+ End your analysis with the constraint update block:
+ ```
+ Constraint Updates
+ RESOLVE: C1, C2
+ ACKNOWLEDGE: C3
+ DEFER: C4 | <reason>
+ MILESTONE: <milestone> | <evidence>
+ End Constraint Updates
+ ```
+
+ Call the engine tool with your full analysis (including constraint updates, self-check findings, and verify-or-defer results) as the `input`.
+
+ ### What the user sees per round:
+ - The one-line status (Step 2)
+ - Nothing else until harvest

  ### What NEVER appears in user-visible output:
  - The engine's `question` field content
  - Constraint views or constraint update blocks
  - Self-check questions or responses
  - Gate verdicts or detection results
- - Milestone markers
  - The `constraintBlock` content

- ### What the user sees per round:
- - The one-line status (step 2 above)
- - Nothing else until harvest
+ ---

- ## Phase 3: Harvest
+ ## Phase 5: Harvest

  When `status === "complete"`:

- Synthesize the analysis for the user based on your accumulated reasoning across all rounds. Do NOT simply echo the engine's `harvest` fields -- they are structured data, not a finished presentation.
+ ### Harvest Pre-Mortem (before synthesis)
+
+ Before writing the final output, stress-test the direction of your analysis:
+
+ 1. What was the original question?
+ 2. What is my emerging recommendation/position?
+ 3. What is the gap between the two in complexity and scope?
+ 4. If the recommendation is more complex than the original problem warranted, what is the simpler version?
+
+ **Check for COMPLEXITY-COLLAPSE**: Did the analysis elaborate a simple problem until the complexity justified a complex solution? If yes, name the simpler alternative and present it prominently in the output.
+
+ ### Write `99-harvest.md`
+
+ Write to the session folder:
+ ```markdown
+ # DeepThink Harvest: [Problem Summary]
+
+ ## Summary
+ [2-3 sentence executive summary]
+
+ ## Key Findings
+ - [finding 1]
+ - [finding 2]
+ - [finding 3]
+
+ ## Open Questions
+ - [what remains uncertain]
+
+ ## Follow-ups
+ - /deepthink "[specific follow-up]" -- [why]
+ - /problem-solve "[specific decision]" -- [why]
+ ```
+
+ ### Confirm persistence to user
+
+ After writing 99-harvest.md:
+ ```
+ ---
+ Session persisted: <absolute path to session folder>
+ ---
+ ```
+
+ ### Synthesize for the user
+
+ Use the engine's `harvest.summary`, `harvest.keyFindings`, `harvest.openQuestions`, and `harvest.followUps` as source material, plus your accumulated reasoning across all rounds and the harvest pre-mortem results.
+
+ ---
+
+ ## Final Output Format
+
+ The sections above the break are the value-add -- what the process surfaced that a straight answer wouldn't have. The prose after the break is just the answer.
+
+ ```
+ # DeepThink: [Problem Summary]
+
+ ## Default Starting Point
+ [Where thinking began -- assumptions, knowns, open questions.]
+
+ ## Insights from DeepThink
+
+ **[Finding as a clear statement.]**
+ [Expand if needed. Skip if the bold line is self-sufficient.] (#tag-if-relevant)
+
+ **[Another finding.]**
+ [Context if needed.]
+
+ **[Simple finding that needs no expansion.]** (#pre-mortem)
+
+ [Format for human readability in terminal:
+ - Each finding gets its own appropriate visual unit (bold heading,
+ bullet, or short paragraph)
+ - Bullets for simple/parallel items. Bold heading + paragraph for
+ complex ones. Mix freely.
+ - No walls of text. If a paragraph runs long, break it.
+ - Lead with the IDEA, never the mode name
+ - Never describe the protocol's mechanics or your analytical process
+ - (#tags) are optional breadcrumbs: (#pre-mortem), (#inversion),
+ (#edge), (#perspective), (#meta). Most findings need no tag.
+ - Curate your best 3-6 findings, not every observation from every round.]
+
+ ---
+
+ [The answer. No heading. What you actually think now, informed by
+ everything above. No dense paragraphs. No prescribed length.
+ Never start with "The exploration..." or "After analyzing..."]
+
+ ## Follow-ups
+ -> /problem-solve "[specific decision point]"
+ _[why this needs convergent decision-making]_
+ -> /deepthink "[specific follow-up]"
+ _[why this needs further exploration]_
+ ```
+
+ ---

- Present to the user:
- 1. **Default Starting Point**: What the obvious answer was and why it was insufficient
- 2. **Key Findings**: The non-obvious insights that emerged, as a bullet list. Each finding should be a substantive sentence, not a label.
- 3. **Open Questions**: What remains genuinely uncertain (if any)
- 4. **Suggested Next Steps**: Concrete follow-up actions or investigations
+ ## Key Differences from /problem-solve

- Use the engine's `harvest.summary`, `harvest.keyFindings`, `harvest.openQuestions`, and `harvest.followUps` as source material, but write the synthesis in your own words with the depth your analysis produced. The harvest fields are often terse -- your synthesis should reflect the full depth of the multi-round analysis.
+ | /deepthink | /problem-solve |
+ |------------|----------------|
+ | Divergent + pre-mortems | Convergent - decide, commit |
+ | 7-question self-check + probe + verify-or-defer | Phase gates + probe |
+ | Explores the question space | Narrows to a decision |
+ | Valid: "more confused in useful ways" | Valid: clear decision + safeguards |
@@ -1,33 +1,78 @@
  ---
- description: Structured decision-making via RVRY
+ description: Convergent decision-making pipeline via RVRY
  argument-hint: [--auto] <problem or decision>
  allowed-tools:
  - mcp__rvry-dev__problem_solve
  - AskUserQuestion
+ - Read
+ - Write
+ - Grep
+ - Glob
+ - Bash
  ---

- # /problem-solve - Structured Decision-Making
+ # /problem-solve - Convergent Decision-Making
+
+ **YOUR ROLE**: Execute a convergent decision pipeline. The engine handles the structured analysis phases (orient, anticipate, generate, evaluate, commit). You handle local persistence, self-discipline, and grounding claims in evidence. For DIVERGENT exploration, use `/deepthink`.

  **Input**: $ARGUMENTS

- ## Parse Flags
+ ---
+
+ ## Phase 0: Parse Flags
+
+ - If `$ARGUMENTS` is empty or `--help`: display usage and stop:
+ ```
+ /problem-solve - Convergent Decision-Making via RVRY
+
+ USAGE:
+ /problem-solve <problem or decision>
+ /problem-solve --auto <problem> (fully autonomous, no questions, states assumptions)
+
+ The engine runs a multi-round decision pipeline with constraint tracking,
+ quality gates, and reasoning checks. Local persistence survives context compression.

- - If `$ARGUMENTS` is empty or `--help`: display usage ("Usage: /problem-solve [--auto] <your problem or decision>") and stop
+ RELATED: /deepthink (divergent exploration)
+ ```
  - If `$ARGUMENTS` contains `--auto`: set AUTO_MODE, strip `--auto` from input

- ## Phase 1: ORIENT (before calling the engine)
+ ---
+
+ ## Phase 1: Session Setup
+
+ Create a session folder for local artifacts that survive context compression:

- Orient directly -- no engine calls yet. Analyze the user's problem and identify:
+ 1. Generate a timestamp: `YYYYMMDD-HHMM` (current date/time)
+ 2. Generate a slug from the problem summary (3-4 words, kebab-case)
+ 3. Create `{$PWD}/.claude/cognition/YYYYMMDD-HHMM-problemsolve-<slug>/`
+ IMPORTANT: This is the PROJECT's `.claude/`, NOT `~/.claude/`. Use the absolute project root path.
+ 4. Write `00-enter.md`:
+ ```markdown
+ # ProblemSolve Session
+ **Problem**: <full problem statement>
+ **Started**: <timestamp>
+ ```
+
+ ---

- 1. **What I know**: Key facts, constraints, and context apparent from the problem statement
- 2. **What I'm uncertain about**: Gaps, assumptions, unknowns (list each specifically)
- 3. **What I'm avoiding**: Uncomfortable options, risky paths, things that would be inconvenient if true
+ ## Phase 2: ORIENT (before calling the engine)

+ Orient directly -- no engine calls yet. This forces commitment to disk before the analysis loop. Everything written here survives if context compresses mid-session.
+
+ ### `01-orient.md`:
+ Write to the session folder:
+ - **What is the problem?**: The decision or problem in concrete terms
+ - **What is uncertain?**: Gaps, assumptions, unknowns (list each specifically)
+ - **What am I avoiding?**: Uncomfortable options, risky paths, things that would be inconvenient if true
+
+ ### `02-scope-questions.md`:
  Translate each uncertainty into a binary question with a smart default. Generate up to 3 questions, each with exactly 2 options. The recommended option comes first.

+ Write the questions to the file BEFORE asking.
+
  ### If AUTO_MODE:

- State your assumptions visibly in your reasoning (e.g., "Assuming risk minimization over speed. Assuming known options need evaluation rather than generating new alternatives."). Proceed directly to Phase 1b.
+ State your assumptions visibly (e.g., "Assuming risk minimization over speed. Assuming known options need evaluation rather than generating new alternatives."). Write assumptions to `03-scope-answers.md`. Proceed to Phase 3.

  ### If NOT AUTO_MODE:

@@ -51,65 +96,239 @@ AskUserQuestion({

  If AskUserQuestion returns blank or fails, state assumptions visibly and proceed.

- Record the answers (or stated assumptions) as a brief context summary.
+ Record the answers (or stated assumptions) in `03-scope-answers.md`.

- ## Phase 1b: Start Engine Session
+ ---
+
+ ## Phase 3: Start Engine Session

- Format the user's original problem enriched with scope context.
+ Format the user's original problem enriched with scope context from `03-scope-answers.md`.

  Call `mcp__rvry-dev__problem_solve` with:
  ```
  {
- "input": "<original problem>\n\nContext: <brief summary of scope answers, e.g. 'Priority: risk minimization. Options: evaluate known ones. Single decision-maker.'>"
+ "input": "<original problem>\n\nContext: <brief summary of scope answers>"
  }
  ```

- Proceed to Phase 2.
+ Proceed to Phase 4.

+ ---

- ## Phase 2: Analysis Loop
+ ## Phase 4: Analysis Loop

  Repeat until `status === "complete"`:

- 1. The engine's response has two content blocks. The first is Base64-encoded JSON -- decode it to read the structured data. The second is a plain-text summary for the user. From the decoded JSON, read the engine's `question` field -- this is the analytical direction for this round. Read `constraints`, `gate`, and `detection` to understand the engine's assessment. Read `constraintBlock` for constraint update instructions. **Do not show any of these to the user.**
- 2. Show the user a brief status line:
- - Format: `Round {round} -- {what you're doing this round}`
- - Example: `Round 3 -- stress-testing the current position`
- - Derive the description from the engine's question focus. One line only.
- 3. Perform your analysis in your internal reasoning. The engine's question and constraint data guide your thinking, but they are YOUR instructions, not user output.
- 4. In your internal reasoning, end your analysis with the constraint update block the engine expects:
- ```
- Constraint Updates
- RESOLVE: C1, C2
- ACKNOWLEDGE: C3
- DEFER: C4 | <reason>
- MILESTONE: <milestone> | <evidence>
- End Constraint Updates
- ```
- 5. Call the engine tool with your full analysis (including constraint updates) as the `input`. **The full analysis text goes ONLY in the tool call, never in visible output.**
+ ### Step 1: Read the engine response
+
+ Read the engine's `question` field -- this is the analytical direction for this round. Read `constraints`, `gate`, and `detection` to understand the engine's assessment. Read `constraintBlock` for constraint update instructions.
+
+ ### Step 2: Show status
+
+ Show the user a brief status line:
+ - Format: `Round {round} -- {what you're doing this round}`
+ - Example: `Round 2 -- mapping failure modes`
+ - Derive the description from the engine's question focus. One line only.
+
+ ### Step 3: Perform analysis
+
+ Analyze according to the engine's question and constraint data. The engine's question and constraints guide your thinking -- work WITH them, not around them.
+
+ Use Read, Grep, Glob, and Bash as needed to ground your analysis in actual code, data, or project state. Do not analyze in the abstract when concrete evidence is available.
+
+ ### Step 4: 7-Question Self-Check (mandatory, internal)
+
+ **Internal**:
+ 1. Is this shallow or predictable?
+ 2. What am I avoiding?
+ 3. Why not the uncomfortable option?
+
+ **External**:
+ 4. What would I critique if someone else wrote this?
+ 5. What would a skeptical expert challenge?
+ 6. Any verifiable claims that should be checked?
+
+ **Grounding**:
+ 7. What factual premise in this analysis have I not verified?
+
+ ### Step 5: Verify-or-Defer (mandatory after self-check)
+
+ For each concern raised in Q5, Q6, or Q7, you MUST either:
+ - **VERIFY**: Actually check the claim -- read a file, grep for a pattern, run a command. Record what was verified and the result.
+ - **DEFER**: Explicitly state the concern and why it cannot be verified now. Express as a DEFER in the constraint update block.
+
+ **No dismiss**: You CANNOT raise a concern and then argue it away in the same self-check. If you raise it, you must verify or defer it.
+
+ ### Step 6: Weakness Probe (rotate by round number)
+
+ Ask yourself ONE of these:
+ 1. Which part of your analysis would you spend more time on if you had another round?
+ 2. If your recommendation fails in practice, what is the first concrete thing someone notices?
+ 3. What surprised you during this analysis? If nothing, what does that tell you?
+ 4. What is the thing you almost said but did not?
+ 5. Which of your claims would you remove if you had to stake your credibility on the remaining ones?
+ 6. Your strongest claim and your most significant caveat -- are they in tension? If so, which do you stand behind?
+
+ Include the probe answer in your analysis.
+
+ ### Step 7: Write constraint updates and submit
+
+ End your analysis with the constraint update block:
+ ```
+ Constraint Updates
+ RESOLVE: C1, C2
+ ACKNOWLEDGE: C3
+ DEFER: C4 | <reason>
+ MILESTONE: <milestone> | <evidence>
+ End Constraint Updates
+ ```
+
+ Call the engine tool with your full analysis (including constraint updates, self-check findings, and verify-or-defer results) as the `input`.
+
+ ### What the user sees per round:
+ - The one-line status (Step 2)
+ - Nothing else until harvest

  ### What NEVER appears in user-visible output:
  - The engine's `question` field content
  - Constraint views or constraint update blocks
  - Self-check questions or responses
  - Gate verdicts or detection results
- - Milestone markers
  - The `constraintBlock` content

- ### What the user sees per round:
- - The one-line status (step 2 above)
- - Nothing else until harvest
+ ---

- ## Phase 3: Harvest
+ ## Phase 5: Harvest

  When `status === "complete"`:

- Synthesize the analysis for the user based on your accumulated reasoning across all rounds. Do NOT simply echo the engine's `harvest` fields -- they are structured data, not a finished presentation.
+ ### Harvest Pre-Mortem (before synthesis)
+
+ Before writing the final output, stress-test the direction of your analysis:
+
+ 1. What was the original problem?
+ 2. What is my recommendation now?
+ 3. What is the gap between the two in complexity and scope?
+ 4. If the recommendation is more complex than the original problem warranted, what is the simpler version?
+
+ **Check for COMPLEXITY-COLLAPSE**: Did the analysis elaborate a simple problem until the complexity justified a complex solution? If yes, name the simpler alternative and present it prominently.
+
+ ### Write `99-harvest.md`
+
+ Write to the session folder:
+ ```markdown
+ # ProblemSolve Harvest: [Problem Summary]
+
+ ## Summary
+ [2-3 sentence executive summary of decision + rationale]
+
+ ## Decision
+ [The recommendation with confidence level]
+
+ ## Key Risks
+ - [risk 1 and its safeguard]
+ - [risk 2 and its safeguard]
+
+ ## Alternatives Considered
+ - [alternative 1] -- eliminated because [reason]
+ - [alternative 2] -- eliminated because [reason]
+
+ ## Open Questions
+ - [what remains uncertain]
+
+ ## Follow-ups
+ - /deepthink "[uncertainty]" -- [why this needs exploration]
+ - /problem-solve "[next decision]" -- [why this needs its own analysis]
+ ```
+
+ ### Confirm persistence to user
+
+ After writing 99-harvest.md:
+ ```
+ ---
+ Session persisted: <absolute path to session folder>
+ ---
+ ```
+
+ ### Synthesize for the user
+
+ Use the engine's `harvest.summary`, `harvest.keyFindings`, `harvest.openQuestions`, and `harvest.followUps` as source material, plus your accumulated reasoning across all rounds and the harvest pre-mortem results.
+
+ ---
+
+ ## Final Output Format
+
+ The output is decision-first. The reader gets the answer immediately, then supporting evidence. This reflects convergent thinking: narrow toward commitment, not expand toward synthesis.
+
+ ```
+ # ProblemSolve: [Problem Summary]
+
+ ## Analysis
+
+ [Problem framing -- what the situation was and what triggered
+ this analysis. 1-2 sentences.]
+
+ [Reasoning arc -- what the analysis revealed that confirmed,
+ redirected, or complicated the initial instinct. Where the
+ reasoning turned. What would change the verdict.
+
+ State the chosen direction clearly at the end of this section.
+ The reader should know what you're recommending before they
+ read the stress test.]
+
+ [For simple decisions, these can collapse into a shorter
+ form. The point is readability, not rigid structure.]
+
+ ## Stress Test
+
+ **"[Risk or adversarial challenge]"**
+ [How it was tested and what happened. Did the decision survive,
+ adapt, or need revision? Plain prose, no arrows.]
+
+ **"[Another risk]"**
+ [Response and outcome.]
+
+ ## Alternative Options
+ - **[What the alternative was]**: Eliminated because [reason].
+ - **[What the alternative was]**: Eliminated because [reason].
+
+ [Do NOT reference "Option A/B/C" labels. The reader hasn't
+ seen an option tree. Name each alternative by what it actually
+ is, then explain why it was rejected.]
+
+ ## Recommendation
+ **[Decision statement]** (confidence: X.X)
+
+ [Why this is the go-forward path -- the synthesis of the analysis
+ and stress test above. 1-3 sentences connecting the reasoning
+ to the commitment.]
+
+ **Safeguards:**
+ [Specific commitments to prevent the failures identified above.]
+
+ [Format for human readability in terminal:
+ - The Decision can be multi-part or compact -- match the complexity
+ of the actual decision
+ - Stress Test entries: bold risk, plain response. No arrows, no
+ mechanism names.
+ - Never reference the protocol's internal phases, gates, or
+ mechanism names in user-facing output
+ - No walls of text. Break long paragraphs.]
+
+ ## Where to Go Next
+ -> /deepthink "[uncertainty needing exploration]"
+ _[why this needs adversarial testing]_
+ -> /problem-solve "[next decision point]"
+ _[if a follow-on decision is needed]_
+ ```
+
+ ---

- Present to the user:
- 1. **Default Starting Point**: What the obvious answer was and why it was insufficient
- 2. **Key Findings**: The non-obvious insights that emerged, as a bullet list. Each finding should be a substantive sentence, not a label.
- 3. **Open Questions**: What remains genuinely uncertain (if any)
- 4. **Suggested Next Steps**: Concrete follow-up actions or investigations
+ ## Key Differences from /deepthink

- Use the engine's `harvest.summary`, `harvest.keyFindings`, `harvest.openQuestions`, and `harvest.followUps` as source material, but write the synthesis in your own words with the depth your analysis produced. The harvest fields are often terse -- your synthesis should reflect the full depth of the multi-round analysis.
+ | /problem-solve | /deepthink |
+ |----------------|------------|
+ | Convergent - decide, commit | Divergent + pre-mortems |
+ | Phase gates + probe | 7-question self-check + probe + verify-or-defer |
+ | Narrows to a decision | Explores the question space |
+ | Valid: clear decision + safeguards | Valid: "more confused in useful ways" |
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@rvry/mcp",
- "version": "0.3.1",
+ "version": "0.3.2",
  "description": "RVRY reasoning depth enforcement (RDE) engine client.",
  "type": "module",
  "bin": {