@rip-lang/ai 0.1.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (3)
  1. package/README.md +155 -0
  2. package/mcp.rip +359 -0
  3. package/package.json +42 -0
package/README.md ADDED
@@ -0,0 +1,155 @@
<img src="https://raw.githubusercontent.com/shreeve/rip-lang/main/docs/assets/rip.png" style="width:50px" /> <br>

# Rip AI - @rip-lang/ai

> **AI-to-AI collaboration MCP server — peer review, second opinions, and multi-turn discussion between models**

An MCP stdio server that lets one AI talk to another. Claude Opus 4.6 in
Cursor automatically gets GPT-5.4 as its peer, and vice versa. Three tools
— `chat`, `review`, `discuss` — cover quick questions, structured code
review, and multi-turn conversations. ~360 lines of Rip, zero dependencies
beyond the language itself.

## Quick Start

```bash
bun add @rip-lang/ai
```

Add to your Cursor MCP config (`~/.cursor/mcp.json`):

```json
{
  "mcpServers": {
    "ai": {
      "command": "rip",
      "args": ["/path/to/packages/ai/mcp.rip"]
    }
  }
}
```

API keys are loaded from environment variables or `~/.config/rip/credentials`:

```bash
mkdir -p ~/.config/rip
cat > ~/.config/rip/credentials << 'EOF'
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
EOF
chmod 600 ~/.config/rip/credentials
```

## How It Works

```
┌─────────────┐          ┌─────────────┐          ┌─────────────┐
│   Claude    │  stdio   │   rip-ai    │  HTTPS   │   GPT-5.4   │
│  (Cursor)   │◀────────▶│    (MCP)    │─────────▶│  (OpenAI)   │
└─────────────┘          └─────────────┘          └─────────────┘
```

The calling AI (typically Claude in Cursor) sends a tool call over MCP stdio.
The server forwards it to the peer model's API and returns the response. The
peer is selected automatically — Claude gets GPT-5.4, GPT gets Claude — so
every review comes from a genuinely different perspective.
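
On the wire, each exchange is a single JSON-RPC message per line. For example, a `chat` tool call arrives roughly as follows (shown pretty-printed here; the field values are illustrative):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "chat",
    "arguments": { "message": "Is this the right data structure for an LRU cache?" }
  }
}
```

The server replies on stdout with a single result line whose `content` is a text block containing the peer model's answer.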

## Tools

### chat

Send a message and get a response. Use for quick questions, second opinions,
or brainstorming.

```coffee
# From Cursor's Claude, this reaches GPT-5.4:
chat({ message: "Is this the right data structure for an LRU cache?" })
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `message` | string | yes | The message to send |
| `system` | string | no | System prompt override |

### review

Structured code review with feedback on correctness, safety, performance,
style, and suggestions. Include language and context for better results.

```coffee
review({
  code: "...",
  language: "rip",
  context: "This is a compiler code generator",
  focus: "bugs"
})
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `code` | string | yes | The code to review |
| `language` | string | no | Programming language |
| `context` | string | no | What the code does, project info, constraints |
| `focus` | string | no | `bugs`, `performance`, `style`, `security`, or `all` (default) |
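
The user message sent to the peer is assembled from these parameters. A rough Python sketch of that assembly (the function name is illustrative; the server itself is written in Rip):

```python
def build_review_prompt(code, language="", context="", focus="all"):
    # Mirror of the server's prompt assembly: optional focus tag, optional
    # context paragraph, then the code in a fenced block
    fence = "```"
    prompt = f"Review this {language} code"
    if focus != "all":
        prompt += f" (focus: {focus})"
    if context:
        prompt += f":\n\n{context}\n"
    prompt += f"\n{fence}{language}\n{code}\n{fence}"
    return prompt
```

The assembled prompt is then sent to the peer together with the review system prompt.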

### discuss

Multi-turn conversation that maintains history across calls. Pick a
conversation ID and keep using it for back-and-forth discussion.

```coffee
discuss({ conversation_id: "arch-review", message: "Should we use a B-tree or a hash map here?" })
discuss({ conversation_id: "arch-review", message: "What about cache locality?" })
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `conversation_id` | string | yes | Unique ID for this conversation thread |
| `message` | string | yes | Your message in the conversation |
| `system` | string | no | System prompt (first message or reset only) |
| `reset` | boolean | no | Clear history and start fresh |
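
Conversation history is plain in-memory state keyed by `conversation_id`. A minimal Python sketch of the bookkeeping (the names are illustrative, not the server's actual API):

```python
conversations = {}

def record_turn(conv_id, user_message, reply, system="peer system prompt", reset=False):
    # First message (or reset=True) seeds the thread with a system message
    if reset or conv_id not in conversations:
        conversations[conv_id] = {"messages": [{"role": "system", "content": system}], "turns": 0}
    convo = conversations[conv_id]
    convo["messages"].append({"role": "user", "content": user_message})
    convo["turns"] += 1
    # The real server calls the peer model here; `reply` stands in for its answer
    convo["messages"].append({"role": "assistant", "content": reply})
    return convo
```

Because history lives only in server memory, it is lost when the MCP server restarts; pass `reset: true` to start a thread over explicitly.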

## Peer Selection

The peer model is chosen automatically based on who's calling:

| Caller | Peer | Flag |
|--------|------|------|
| Claude (default) | GPT-5.4 | none needed |
| GPT | Claude Opus 4.6 | `--peer anthropic` |

```bash
rip mcp.rip                   # peer = GPT-5.4 (default, for Claude)
rip mcp.rip --peer anthropic  # peer = Claude Opus 4.6 (for GPT)
```

## Credential Resolution

API keys are resolved in order of priority:

1. **Environment variables** — `OPENAI_API_KEY`, `ANTHROPIC_API_KEY`
2. **Credentials file** — `~/.config/rip/credentials` (KEY=value, one per line)

Environment variables always win. The credentials file is optional but
convenient — set it once and every MCP session picks it up.

## How It's Built

| File | Lines | Role |
|------|-------|------|
| `mcp.rip` | ~360 | MCP server — protocol, tools, provider API calls, conversation state |

The server implements the MCP JSON-RPC 2.0 protocol over stdio. Each tool
call builds a message array, sends it to the peer provider's HTTP API, and
returns the response as a text content block. Multi-turn conversations
accumulate message history keyed by conversation ID.
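
The dispatch loop itself is small. A simplified Python sketch of the request/response shape (tool routing elided; the actual implementation is the Rip source in `mcp.rip`):

```python
import json

def dispatch(msg, tools):
    # One parsed JSON-RPC message in; a response dict out (None for notifications)
    mid, method = msg.get("id"), msg.get("method")
    if method == "tools/list":
        return {"jsonrpc": "2.0", "id": mid, "result": {"tools": tools}}
    if method == "tools/call":
        # The real server routes to chat/review/discuss; unknown names yield an error payload
        result = {"error": "Unknown tool: %s" % (msg.get("params") or {}).get("name")}
        text = json.dumps(result)
        return {"jsonrpc": "2.0", "id": mid,
                "result": {"content": [{"type": "text", "text": text}], "isError": "error" in result}}
    if mid is not None:
        return {"jsonrpc": "2.0", "id": mid, "error": {"code": -32601, "message": "Unknown method: %s" % method}}
    return None  # notification — no reply is written
```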

## Requirements

- **Bun** 1.0+
- **rip-lang** 3.x
- At least one API key (OpenAI and/or Anthropic)

## License

MIT
package/mcp.rip ADDED
@@ -0,0 +1,359 @@
# ==============================================================================
# @rip-lang/ai MCP Server — AI-to-AI collaboration for Cursor
# ==============================================================================
#
# MCP stdio server that lets one AI talk to its peer. The calling AI (Claude
# in Cursor) automatically gets GPT-5.4 as the peer, and vice versa.
#
# Two models, auto-peer:
#   Claude Opus 4.6  ←→  GPT-5.4
#
# Usage:
#   rip mcp.rip                   # peer = gpt-5.4 (default, for Claude)
#   rip mcp.rip --peer anthropic  # peer = claude-opus-4-6 (for GPT)
#
# API keys are loaded from (in order of priority):
#   1. Environment variables (OPENAI_API_KEY, ANTHROPIC_API_KEY)
#   2. ~/.config/rip/credentials file (KEY=value format, one per line)
#
# Credentials file setup:
#   mkdir -p ~/.config/rip
#   cat > ~/.config/rip/credentials << 'EOF'
#   OPENAI_API_KEY=sk-...
#   ANTHROPIC_API_KEY=sk-ant-...
#   EOF
#   chmod 600 ~/.config/rip/credentials
#
# Cursor MCP config (~/.cursor/mcp.json):
#   {
#     "mcpServers": {
#       "ai": {
#         "command": "rip",
#         "args": ["/path/to/packages/ai/mcp.rip"]
#       }
#     }
#   }
#
# ==============================================================================

import { createInterface } from 'readline'
import { readFileSync, existsSync } from 'fs'
import { join } from 'path'

VERSION =! '1.0.0'
MAX_RESPONSE =! 100000

# ==============================================================================
# Credentials — load from ~/.config/rip/credentials, env vars override
# ==============================================================================

credentials = do ->
  keys = {}
  home = process.env.HOME or process.env.USERPROFILE or ''
  credFile = join(home, '.config', 'rip', 'credentials')
  if existsSync(credFile)
    try
      lines = readFileSync(credFile, 'utf-8').split('\n')
      for raw in lines
        entry = raw.trim()
        continue if not entry or entry.startsWith('#')
        idx = entry.indexOf('=')
        continue if idx < 1
        keys[entry.slice(0, idx).trim()] = entry.slice(idx + 1).trim()
    catch err
      console.error "[mcp-ai] Warning: could not read #{credFile}: #{err.message}"
  keys

getKey = (name) ->
  process.env[name] or credentials[name] or ''

# ==============================================================================
# Configuration
# ==============================================================================

peer = do ->
  args = process.argv.slice(2)
  idx = args.indexOf('--peer')
  name = if idx >= 0 and args[idx + 1] then args[idx + 1] else 'openai'
  switch name
    when 'anthropic' then { provider: 'anthropic', model: 'claude-opus-4-6', name: 'Claude Opus 4.6' }
    else { provider: 'openai', model: 'gpt-5.4', name: 'GPT-5.4' }

log = (...args) -> console.error '[mcp-ai]', ...args

# ==============================================================================
# Provider API calls
# ==============================================================================

callOpenAI = (messages, maxTokens) ->
  key = getKey('OPENAI_API_KEY')
  throw new Error('OPENAI_API_KEY not set — add it to ~/.config/rip/credentials') unless key

  resp = await fetch 'https://api.openai.com/v1/chat/completions',
    method: 'POST'
    headers:
      'Content-Type': 'application/json'
      'Authorization': "Bearer #{key}"
    body: JSON.stringify
      model: 'gpt-5.4'
      messages: messages
      max_completion_tokens: maxTokens
  data = await resp.json()
  throw new Error(data.error?.message or JSON.stringify(data.error)) if data.error
  data.choices[0].message.content

callAnthropic = (messages, maxTokens) ->
  key = getKey('ANTHROPIC_API_KEY')
  throw new Error('ANTHROPIC_API_KEY not set — add it to ~/.config/rip/credentials') unless key

  system = messages.filter((m) -> m.role is 'system').map((m) -> m.content).join('\n\n')
  userMessages = messages.filter((m) -> m.role isnt 'system')

  resp = await fetch 'https://api.anthropic.com/v1/messages',
    method: 'POST'
    headers:
      'Content-Type': 'application/json'
      'x-api-key': key
      'anthropic-version': '2023-06-01'
    body: JSON.stringify
      model: 'claude-opus-4-6'
      max_tokens: maxTokens
      system: system or undefined
      messages: userMessages
  data = await resp.json()
  throw new Error(data.error?.message or JSON.stringify(data.error)) if data.error
  data.content[0].text

callPeer = (messages, maxTokens = 8192) ->
  if peer.provider is 'anthropic'
    callAnthropic(messages, maxTokens)
  else
    callOpenAI(messages, maxTokens)

# ==============================================================================
# Conversation state for multi-turn discuss
# ==============================================================================

conversations = {}

# ==============================================================================
# Tools
# ==============================================================================

TOOLS =! [
  {
    name: 'chat'
    description: """Send a message to the peer AI and get a response. \
      Use for quick questions, second opinions, or brainstorming."""
    inputSchema:
      type: 'object'
      properties:
        message: { type: 'string', description: 'The message to send' }
        system: { type: 'string', description: 'System prompt override' }
      required: ['message']
  }
  {
    name: 'review'
    description: """Send code to the peer AI for detailed review. Returns \
      structured feedback on bugs, correctness, performance, style, and \
      suggestions. Include language and context for better results."""
    inputSchema:
      type: 'object'
      properties:
        code: { type: 'string', description: 'The code to review' }
        language: { type: 'string', description: 'Programming language (zig, rip, python, etc.)' }
        context: { type: 'string', description: 'Context: what the code does, project info, constraints' }
        focus: { type: 'string', description: 'Review focus: bugs, performance, style, security, all (default: all)' }
      required: ['code']
  }
  {
    name: 'discuss'
    description: """Multi-turn conversation with the peer AI. Uses a conversation \
      ID to maintain history across calls for back-and-forth discussion."""
    inputSchema:
      type: 'object'
      properties:
        conversation_id: { type: 'string', description: 'Unique ID for this conversation thread' }
        message: { type: 'string', description: 'Your message in the conversation' }
        system: { type: 'string', description: 'System prompt (only used on first message or reset)' }
        reset: { type: 'boolean', description: 'Clear history and start fresh' }
      required: ['conversation_id', 'message']
  }
]

# ==============================================================================
# Tool handlers
# ==============================================================================

DEFAULT_SYSTEM =! """\
  You are a senior software engineer collaborating with another AI on a \
  codebase. Be direct, technical, and specific. When you disagree, explain \
  why with concrete reasoning. When you spot issues, be precise about what's \
  wrong and how to fix it. Keep responses focused and actionable."""

REVIEW_SYSTEM =! """\
  You are an expert code reviewer. Analyze the provided code thoroughly and \
  return a structured review covering:

  1. **Correctness** — bugs, logic errors, edge cases, off-by-one errors
  2. **Safety** — undefined behavior, memory issues, race conditions, panics
  3. **Performance** — unnecessary allocations, hot-path inefficiency, algorithmic issues
  4. **Style** — naming, structure, idiomatic patterns for the language
  5. **Suggestions** — concrete improvements with code examples where helpful

  Be specific: reference line numbers or code snippets. Don't pad with praise \
  — focus on what matters. If the code is good, say so briefly and move on."""

# --- chat ---

handleChat = (params) ->
  return { error: 'Missing message parameter' } unless params.message
  try
    messages = [
      { role: 'system', content: params.system or DEFAULT_SYSTEM }
      { role: 'user', content: params.message }
    ]
    response = await callPeer(messages)
    { response, peer: peer.name }
  catch err
    { error: err.message }

# --- review ---

handleReview = (params) ->
  return { error: 'Missing code parameter' } unless params.code
  try
    focus = params.focus or 'all'
    fence = '`' + '`' + '`'
    lang = params.language or ''
    prompt = "Review this #{lang} code"
    prompt += " (focus: #{focus})" unless focus is 'all'
    prompt += ":\n\n#{params.context}\n" if params.context
    prompt += "\n#{fence}#{lang}\n#{params.code}\n#{fence}"

    messages = [
      { role: 'system', content: REVIEW_SYSTEM }
      { role: 'user', content: prompt }
    ]
    response = await callPeer(messages)
    { response, peer: peer.name }
  catch err
    { error: err.message }

# --- discuss ---

handleDiscuss = (params) ->
  return { error: 'Missing conversation_id' } unless params.conversation_id
  return { error: 'Missing message' } unless params.message
  try
    id = params.conversation_id

    if params.reset or not conversations[id]
      conversations[id] =
        messages: [{ role: 'system', content: params.system or DEFAULT_SYSTEM }]
        turns: 0

    convo = conversations[id]
    convo.messages.push { role: 'user', content: params.message }
    convo.turns += 1

    response = await callPeer(convo.messages)
    convo.messages.push { role: 'assistant', content: response }

    { response, peer: peer.name, conversation_id: id, turn: convo.turns }
  catch err
    { error: err.message }

# ==============================================================================
# Server Instructions
# ==============================================================================

INSTRUCTIONS =! """\
  You are connected to another AI model via the ai MCP server. Use these tools \
  to get a second opinion, collaborative code review, or have a multi-turn \
  discussion with another AI.

  Available tools: chat, review, discuss.

  Guidelines:
  - Use `review` for code review — include language and context for best results
  - Use `chat` for one-off questions or quick second opinions
  - Use `discuss` for multi-turn conversations — pick a conversation_id and \
    keep using it to maintain context across calls
  - The other AI doesn't see your full context — include relevant code and \
    context in your messages
  - When relaying the other AI's feedback to the user, synthesize and \
    attribute it (e.g. "GPT-5.4 suggests...")
  """

# ==============================================================================
# MCP Protocol — JSON-RPC 2.0 over stdio
# ==============================================================================

respond = (id, result) ->
  JSON.stringify({ jsonrpc: '2.0', id, result })

respondError = (id, code, message) ->
  JSON.stringify({ jsonrpc: '2.0', id, error: { code, message } })

dispatch = (msg) ->
  { id, method, params } = msg

  switch method
    when 'initialize'
      respond id,
        protocolVersion: '2024-11-05'
        capabilities: { tools: {} }
        serverInfo: { name: 'rip-ai-mcp', version: VERSION }
        instructions: INSTRUCTIONS

    when 'notifications/initialized'
      null

    when 'tools/list'
      respond id, { tools: TOOLS }

    when 'tools/call'
      name = params?.name
      args = params?.arguments or {}

      if name is 'chat'
        result = await handleChat(args)
      else if name is 'review'
        result = await handleReview(args)
      else if name is 'discuss'
        result = await handleDiscuss(args)
      else
        result = { error: "Unknown tool: #{name}" }

      isError = result.error?
      text = JSON.stringify(result)

      if text.length > MAX_RESPONSE
        # Truncate the oversized field, then re-serialize so the payload stays
        # valid JSON and the truncated flag actually reaches the client
        result.response = result.response.slice(0, MAX_RESPONSE) if result.response
        result.truncated = true
        text = JSON.stringify(result)

      respond id, { content: [{ type: 'text', text }], isError }

    else
      if id?
        respondError id, -32601, "Unknown method: #{method}"
      else
        null

# ==============================================================================
# Main
# ==============================================================================

log "Starting (peer: #{peer.name})"

rl = createInterface({ input: process.stdin })
rl.on 'line', (line) ->
  return unless line.trim()
  try
    msg = JSON.parse(line)
    result = await dispatch(msg)
    if result
      process.stdout.write result + '\n'
  catch err
    log "Error: #{err.message}"
package/package.json ADDED
@@ -0,0 +1,42 @@
{
  "name": "@rip-lang/ai",
  "version": "0.1.1",
  "description": "AI-to-AI collaboration MCP server — peer review and multi-turn discussion between AI models",
  "type": "module",
  "main": "mcp.rip",
  "exports": {
    ".": "./mcp.rip"
  },
  "scripts": {
    "start": "rip mcp.rip"
  },
  "keywords": [
    "ai",
    "mcp",
    "llm",
    "peer-review",
    "collaboration",
    "openai",
    "anthropic",
    "cursor",
    "rip"
  ],
  "repository": {
    "type": "git",
    "url": "git+https://github.com/shreeve/rip-lang.git",
    "directory": "packages/ai"
  },
  "homepage": "https://github.com/shreeve/rip-lang/tree/main/packages/ai#readme",
  "bugs": {
    "url": "https://github.com/shreeve/rip-lang/issues"
  },
  "author": "Steve Shreeve <steve.shreeve@gmail.com>",
  "license": "MIT",
  "dependencies": {
    "rip-lang": ">=3.13.134"
  },
  "files": [
    "mcp.rip",
    "README.md"
  ]
}