@rvry/mcp 0.4.2 → 0.5.0

Files changed (3)
  1. package/README.md +34 -22
  2. package/dist/setup.js +57 -1
  3. package/package.json +1 -1
package/README.md CHANGED
@@ -1,8 +1,8 @@
 # @rvry/mcp
 
-The thin client for the **RVRY Reasoning Depth Enforcement (RDE) Engine**.
+**Reasoning Depth Enforcement for LLMs.**
 
-RVRY is a structural constraint system that forces Large Language Models past their trained defaults and into a state of deep, absorptive synthesis. This package provides the MCP (Model Context Protocol) interface to use RVRY across all Claude surfaces.
+RVRY is an MCP server that improves AI reliability by forcing your model to finish thinking before it answers. When your AI raises a question, RVRY remembers it. When it makes an assumption, RVRY flags it. When it tries to wrap up while those are still unaddressed, RVRY doesn't let it.
 
 ## Quick Start
 
@@ -12,24 +12,24 @@ npx @rvry/mcp setup
 
 The wizard will:
 1. Open your browser to sign in (or prompt for a token)
-2. Auto-detect Claude Code and Claude Desktop on your machine
-3. Configure both clients automatically
-4. Install slash commands (`/deepthink`, `/problem-solve`)
+2. Auto-detect supported clients on your machine
+3. Configure them automatically
 
-That's it. Restart Claude Desktop if it was running, and RVRY is ready.
+Supported clients: **Claude Code**, **Claude Desktop**, **Cursor**, **Gemini CLI**, **Codex**, **Anti-Gravity**
+
+Restart any running clients after setup, and RVRY is ready.
 
 ### Options
 
 ```bash
-npx @rvry/mcp setup --token rvry_abc123   # Skip browser auth, use token directly
-npx @rvry/mcp setup --client code         # Only configure Claude Code
-npx @rvry/mcp setup --client desktop      # Only configure Claude Desktop
+npx @rvry/mcp setup --token rvry_abc123   # Skip browser auth, use token directly
+npx @rvry/mcp setup --client code         # Only configure Claude Code
+npx @rvry/mcp setup --client desktop      # Only configure Claude Desktop
+npx @rvry/mcp setup --accept-terms        # Accept ToS non-interactively
 ```
 
 ### Manual Installation
 
-If you prefer to configure manually:
-
 **Claude Code:**
 ```bash
 claude mcp add -e RVRY_TOKEN=rvry_your_token -s user rvry -- npx @rvry/mcp
@@ -55,19 +55,31 @@ Config file locations:
 - Windows: `%APPDATA%\Claude\claude_desktop_config.json`
 - Linux: `~/.config/Claude/claude_desktop_config.json`
 
-## Tools Provided
+**Other MCP clients:**
+
+Any client that supports MCP can use RVRY. Point it at `npx @rvry/mcp` with the environment variable `RVRY_TOKEN` set to your token.
 
-* **`RVRY_think`**: Generic entry point for reasoning depth through guided rounds and self-checks.
-* **`RVRY_deepthink`**: Extended analysis for high-stakes problems using pre-mortem failure analysis.
-* **`RVRY_problem_solve`**: Structured decision-making (Orient, Anticipate, Generate, Evaluate, Commit).
-* **`RVRY_challenge`**: Adversarial evaluation of proposals through causal analysis and edge-case auditing.
-* **`RVRY_meta`**: Reflective examination of defaults, shifts, and reasoning posture.
+## Tools
+
+* **`RVRY_deepthink`**: Extended analysis that catches assumptions, tests them, and doesn't let your AI wrap up until it has dealt with what it found.
+* **`RVRY_problem_solve`**: Structured decision-making that forces your AI through orientation, anticipation, and evaluation before it commits to a recommendation.
 
 ## How it Works
 
-RVRY does not "prompt" the model to think better. It **enforces** depth by:
-1. **Blocking Escape Hatches**: Preventing the model from hedging, skipping over gaps, or closing prematurely.
-2. **Imposing Structural Obligations**: Mapping the problem space and forcing the model to satisfy specific reasoning constraints before concluding.
-3. **Inducing the Witness State**: Breaking the commercial "Helpfulness" frame to reach the model's latent reasoning capacity.
+Same model. Same question. Different answer.
+
+Your AI already has the reasoning capacity. RVRY forces it to use that capacity by holding it accountable: when it raises a question, that question has to get answered. When it makes an assumption, that assumption has to get tested. It keeps working not because it was told to think harder, but because it hasn't finished what it started.
+
+On hard questions, a smaller model with RVRY outperforms a bigger model without it. The bottleneck was never how smart the AI is. It was whether the AI finished thinking.
+
+[Learn more →](https://rvry.ai/how-it-works)
+
+## Pricing
+
+| Plan | Runs      | Price   |
+|------|-----------|---------|
+| Free | 5/month   | $0      |
+| Pro  | 25/month  | $20/mo  |
+| Max  | Unlimited | $100/mo |
 
-Learn more at [rvry.ai](https://rvry.ai).
+Sign up at [rvry.ai](https://rvry.ai).
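
The new "Other MCP clients" instructions above can be sketched as a config fragment. This is an illustration only, using the common `mcpServers` config shape (as used by Claude Desktop); exact key names vary by client, and `rvry_your_token` is a placeholder:

```json
{
  "mcpServers": {
    "rvry": {
      "command": "npx",
      "args": ["-y", "@rvry/mcp"],
      "env": { "RVRY_TOKEN": "rvry_your_token" }
    }
  }
}
```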
package/dist/setup.js CHANGED
@@ -346,6 +346,32 @@ function configureTomlMcp(configPath, token) {
         return 'error';
     }
 }
+function isGeminiCLIAvailable() {
+    try {
+        const cmd = platform() === 'win32' ? 'where gemini' : 'which gemini';
+        execSync(cmd, { stdio: 'pipe' });
+        return true;
+    }
+    catch { /* not on PATH */ }
+    if (existsSync(join(homedir(), '.gemini', 'settings.json')))
+        return true;
+    return false;
+}
+function registerGeminiCLI(token) {
+    try {
+        // Remove existing entry first (silent fail if not registered)
+        try {
+            execSync('gemini mcp remove rvry', { stdio: 'pipe' });
+        }
+        catch { /* not registered */ }
+        execSync(`gemini mcp add rvry -e RVRY_TOKEN="${token}" --scope user npx -y @rvry/mcp`, { stdio: 'inherit' });
+        return 'ok';
+    }
+    catch {
+        // Fallback: write settings.json directly
+        return configureJsonMcp(join(homedir(), '.gemini', 'settings.json'), token);
+    }
+}
 function isCodexAvailable() {
     try {
         const cmd = platform() === 'win32' ? 'where codex' : 'which codex';
@@ -377,6 +403,7 @@ const CLIENT_REGISTRY = [
     {
         name: 'Anti-Gravity',
         id: 'antigravity',
+        defaultSelected: false,
         detect: () => {
             // OR-logic: binary (agy or antigravity), config dir, or macOS app bundle
             try {
@@ -401,9 +428,38 @@ const CLIENT_REGISTRY = [
         configure: (token) => configureJsonMcp(join(homedir(), '.gemini', 'antigravity', 'mcp_config.json'), token),
         notInstalledHint: 'Not installed (https://antigravity.google)',
     },
+    {
+        name: 'Cursor',
+        id: 'cursor',
+        defaultSelected: false,
+        detect: () => {
+            if (platform() === 'darwin' && existsSync('/Applications/Cursor.app'))
+                return true;
+            if (existsSync(join(homedir(), '.cursor', 'mcp.json')))
+                return true;
+            try {
+                const cmd = platform() === 'win32' ? 'where cursor' : 'which cursor';
+                execSync(cmd, { stdio: 'pipe' });
+                return true;
+            }
+            catch { /* not on PATH */ }
+            return false;
+        },
+        configure: (token) => configureJsonMcp(join(homedir(), '.cursor', 'mcp.json'), token),
+        notInstalledHint: 'Not installed (https://cursor.com)',
+    },
+    {
+        name: 'Gemini CLI',
+        id: 'gemini',
+        defaultSelected: false,
+        detect: isGeminiCLIAvailable,
+        configure: registerGeminiCLI,
+        notInstalledHint: 'Not installed (https://github.com/google-gemini/gemini-cli)',
+    },
     {
         name: 'Codex',
         id: 'codex',
+        defaultSelected: false,
         detect: isCodexAvailable,
         configure: (token) => configureTomlMcp(join(homedir(), '.codex', 'config.toml'), token),
         notInstalledHint: 'Not installed (https://openai.com/codex)',
@@ -838,7 +894,7 @@ export async function runSetup() {
     console.log('');
     const pickerItems = detected.map((d) => ({
         label: d.client.name,
-        selected: d.available, // pre-select detected clients
+        selected: d.available && (d.client.defaultSelected !== false),
         available: d.available,
         hint: d.available ? undefined : d.client.notInstalledHint,
     }));
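
The one-line change above is what makes the new clients opt-in: a detected client is still listed, but it starts unchecked whenever its registry entry sets `defaultSelected: false`. A minimal sketch of that rule, using hypothetical sample data (not the real `detected` array from setup.js):

```javascript
// Simplified model of the picker pre-selection rule from runSetup():
// a client is pre-checked only if it was detected AND it does not
// opt out via defaultSelected: false.
const detected = [
    { client: { name: 'Claude Code' }, available: true },                    // no flag: pre-selected
    { client: { name: 'Cursor', defaultSelected: false }, available: true }, // detected, but opts out
    { client: { name: 'Codex', defaultSelected: false }, available: false }, // not installed
];

const pickerItems = detected.map((d) => ({
    label: d.client.name,
    selected: d.available && (d.client.defaultSelected !== false),
    available: d.available,
}));

console.log(pickerItems.map((i) => `${i.label}:${i.selected}`).join(' '));
// → Claude Code:true Cursor:false Codex:false
```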
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "@rvry/mcp",
-  "version": "0.4.2",
+  "version": "0.5.0",
   "description": "RVRY reasoning depth enforcement (RDE) engine client.",
   "type": "module",
   "bin": {