converse-mcp-server 2.3.0 → 2.4.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (42)
  1. package/README.md +771 -738
  2. package/docs/API.md +10 -1
  3. package/docs/PROVIDERS.md +8 -4
  4. package/package.json +12 -12
  5. package/src/async/asyncJobStore.js +82 -52
  6. package/src/async/eventBus.js +25 -20
  7. package/src/async/fileCache.js +121 -40
  8. package/src/async/jobRunner.js +65 -39
  9. package/src/async/providerStreamNormalizer.js +203 -117
  10. package/src/config.js +374 -102
  11. package/src/continuationStore.js +32 -24
  12. package/src/index.js +45 -25
  13. package/src/prompts/helpPrompt.js +328 -305
  14. package/src/providers/anthropic.js +303 -119
  15. package/src/providers/codex.js +103 -45
  16. package/src/providers/deepseek.js +24 -8
  17. package/src/providers/google.js +323 -93
  18. package/src/providers/index.js +1 -1
  19. package/src/providers/interface.js +16 -11
  20. package/src/providers/mistral.js +179 -69
  21. package/src/providers/openai-compatible.js +231 -94
  22. package/src/providers/openai.js +1094 -912
  23. package/src/providers/openrouter-endpoints-client.js +220 -216
  24. package/src/providers/openrouter.js +426 -381
  25. package/src/providers/xai.js +153 -56
  26. package/src/resources/helpResource.js +70 -67
  27. package/src/router.js +95 -67
  28. package/src/services/summarizationService.js +51 -24
  29. package/src/systemPrompts.js +89 -89
  30. package/src/tools/cancelJob.js +31 -19
  31. package/src/tools/chat.js +997 -883
  32. package/src/tools/checkStatus.js +86 -65
  33. package/src/tools/consensus.js +401 -235
  34. package/src/tools/index.js +39 -16
  35. package/src/transport/httpTransport.js +82 -55
  36. package/src/utils/contextProcessor.js +54 -37
  37. package/src/utils/errorHandler.js +95 -45
  38. package/src/utils/fileValidator.js +107 -98
  39. package/src/utils/formatStatus.js +122 -64
  40. package/src/utils/logger.js +459 -449
  41. package/src/utils/pathUtils.js +2 -2
  42. package/src/utils/tokenLimiter.js +216 -216
package/README.md CHANGED

# Converse MCP Server

[![npm version](https://img.shields.io/npm/v/converse-mcp-server.svg)](https://www.npmjs.com/package/converse-mcp-server)

An MCP (Model Context Protocol) server that lets Claude talk to other AI models. Use it to chat with models from OpenAI, Google, Anthropic, X.AI, Mistral, DeepSeek, or OpenRouter. You can either talk to one model at a time or get multiple models to weigh in on complex decisions.

## 📋 Requirements

- **Node.js**: Version 20 or higher
- **Package Manager**: npm (or pnpm/yarn)
- **API Keys**: At least one from any supported provider

## 🚀 Quick Start

### Step 1: Get Your API Keys

You need at least one API key from these providers:

| Provider          | Where to Get                                                                  | Example Format          |
| ----------------- | ----------------------------------------------------------------------------- | ----------------------- |
| **OpenAI**        | [platform.openai.com/api-keys](https://platform.openai.com/api-keys)          | `sk-proj-...`           |
| **Google/Gemini** | [makersuite.google.com/app/apikey](https://makersuite.google.com/app/apikey)  | `AIzaSy...`             |
| **X.AI**          | [console.x.ai](https://console.x.ai/)                                         | `xai-...`               |
| **Anthropic**     | [console.anthropic.com](https://console.anthropic.com/)                       | `sk-ant-...`            |
| **Mistral**       | [console.mistral.ai](https://console.mistral.ai/)                             | `wfBMkWL0...`           |
| **DeepSeek**      | [platform.deepseek.com](https://platform.deepseek.com/)                       | `sk-...`                |
| **OpenRouter**    | [openrouter.ai/keys](https://openrouter.ai/keys)                              | `sk-or-...`             |
| **Codex**         | ChatGPT login (system-wide)                                                   | Local agentic assistant |

**Note:** Codex uses your ChatGPT login (not an API key). If you have an active ChatGPT session, Codex will work automatically. For headless/server deployments, set `CODEX_API_KEY` in your environment.

### Step 2: Add to Claude Code or Claude Desktop

#### For Claude Code (Recommended)

```bash
# Add the server with your API keys
claude mcp add converse \
  -e OPENAI_API_KEY=your_key_here \
  -e GEMINI_API_KEY=your_key_here \
  -e XAI_API_KEY=your_key_here \
  -e ANTHROPIC_API_KEY=your_key_here \
  -e MISTRAL_API_KEY=your_key_here \
  -e DEEPSEEK_API_KEY=your_key_here \
  -e OPENROUTER_API_KEY=your_key_here \
  -e ENABLE_RESPONSE_SUMMARIZATION=true \
  -e SUMMARIZATION_MODEL=gpt-5 \
  -s user \
  npx converse-mcp-server
```

#### For Claude Desktop

Add this configuration to your Claude Desktop settings:

```json
{
  "mcpServers": {
    "converse": {
      "command": "npx",
      "args": ["converse-mcp-server"],
      "env": {
        "OPENAI_API_KEY": "your_key_here",
        "GEMINI_API_KEY": "your_key_here",
        "XAI_API_KEY": "your_key_here",
        "ANTHROPIC_API_KEY": "your_key_here",
        "MISTRAL_API_KEY": "your_key_here",
        "DEEPSEEK_API_KEY": "your_key_here",
        "OPENROUTER_API_KEY": "your_key_here",
        "ENABLE_RESPONSE_SUMMARIZATION": "true",
        "SUMMARIZATION_MODEL": "gpt-5"
      }
    }
  }
}
```

**Windows Troubleshooting**: If `npx converse-mcp-server` doesn't work on Windows, try:

```json
{
  "command": "cmd",
  "args": ["/c", "npx", "converse-mcp-server"],
  "env": {
    "ENABLE_RESPONSE_SUMMARIZATION": "true",
    "SUMMARIZATION_MODEL": "gpt-5"
    // ... add your API keys here
  }
}
```

### Step 3: Start Using Converse

Once installed, you can:

- **Chat with a specific model**: Ask Claude to use the chat tool with your preferred model
- **Get consensus**: Ask Claude to use the consensus tool when you need multiple perspectives
- **Run tasks in background**: Use `async: true` for long-running operations that you can check later
- **Monitor progress**: Use the check_status tool to monitor async operations with AI-generated summaries
- **Cancel jobs**: Use the cancel_job tool to stop running operations
- **Smart summaries**: Get auto-generated titles and summaries for better context understanding
- **Get help**: Type `/converse:help` in Claude

## 🛠️ Available Tools

### 1. Chat Tool

Talk to any AI model with support for files, images, and conversation history. The tool automatically routes your request to the right provider based on the model name. When AI summarization is enabled, it also generates smart titles and summaries for better context understanding.

```javascript
// Synchronous execution (default)
{
  "prompt": "How should I structure the authentication module for this Express.js API?",
  "model": "gemini-2.5-flash", // Routes to Google
  "files": ["/path/to/src/auth.js", "/path/to/config.json"],
  "images": ["/path/to/architecture.png"],
  "temperature": 0.5,
  "reasoning_effort": "medium",
  "use_websearch": false
}

// Asynchronous execution (for long-running tasks)
{
  "prompt": "Analyze this large codebase and provide optimization recommendations",
  "model": "gpt-5",
  "files": ["/path/to/large-project"],
  "async": true, // Enables background processing
  "continuation_id": "my-analysis-task" // Optional: custom ID for tracking
}

// Codex - Agentic coding assistant with local file access
{
  "prompt": "Analyze this codebase and suggest improvements",
  "model": "codex",
  "files": ["/path/to/your/project"],
  "async": true // Recommended for Codex (responses take 6-20+ seconds)
}
```

**Codex Notes:**

- Uses thread-based sessions (context persists with `continuation_id`; see the sketch below)
- Responses typically take 6-20 seconds (complex tasks may take minutes)
- Accesses files directly from your working directory
- Configure sandbox mode via `CODEX_SANDBOX_MODE` environment variable
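
Because Codex sessions are thread-based, a follow-up call can reuse the same `continuation_id` to keep the conversation context. A minimal sketch, following the payload shape of the chat examples above (the ID and prompts here are hypothetical):

```javascript
// First call: start a Codex session under a caller-chosen continuation ID
const firstCall = {
  prompt: "Map out the module structure of this project",
  model: "codex",
  files: ["/path/to/your/project"],
  async: true,
  continuation_id: "codex-review", // hypothetical ID
};

// Follow-up call: the same continuation_id resumes the Codex thread
const followUp = {
  prompt: "Now suggest refactorings for the two largest modules",
  model: "codex",
  continuation_id: "codex-review",
};
```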

### 2. Consensus Tool

Get multiple AI models to analyze the same question simultaneously. Each model can see and respond to the others' answers, creating a rich discussion.

```javascript
// Synchronous consensus (default)
{
  "prompt": "Should we use microservices or monolith architecture for our e-commerce platform?",
  "models": ["gpt-5", "gemini-2.5-flash", "grok-4"],
  "files": ["/path/to/requirements.md"],
  "enable_cross_feedback": true,
  "temperature": 0.2
}

// Asynchronous consensus (for complex analysis)
{
  "prompt": "Review our system architecture and provide comprehensive recommendations",
  "models": ["gpt-5", "gemini-2.5-pro", "claude-sonnet-4"],
  "files": ["/path/to/architecture-docs"],
  "async": true, // Run in background
  "enable_cross_feedback": true
}
```
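
Conceptually, cross-feedback runs in two phases: every model first answers independently, then each model is shown the other answers and may revise. A rough illustration of that flow, not the package's actual implementation (all names invented):

```javascript
// Rough illustration of enable_cross_feedback; invented identifiers.
async function consensus(prompt, models, askModel) {
  // Phase 1: each model answers independently
  const firstRound = await Promise.all(models.map((m) => askModel(m, prompt)));

  // Phase 2: each model sees the others' answers and may revise
  return Promise.all(
    models.map((m, i) => {
      const others = firstRound.filter((_, j) => j !== i).join("\n---\n");
      return askModel(m, `${prompt}\n\nOther models answered:\n${others}\n\nRevise or defend your answer.`);
    })
  );
}
```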

### 3. Check Status Tool

Monitor progress and retrieve results from asynchronous operations. When AI summarization is enabled, the tool provides intelligent summaries of ongoing and completed tasks.

```javascript
// Check status of a specific job
{
  "continuation_id": "my-analysis-task"
}

// List recent jobs (shows last 10)
// With summarization enabled, displays titles and final summaries
{}

// Get full conversation history for a completed job
{
  "continuation_id": "my-analysis-task",
  "full_history": true
}
```

### 4. Cancel Job Tool

Cancel running asynchronous operations when needed.

```javascript
// Cancel a running job
{
  "continuation_id": "my-analysis-task"
}
```
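
Putting the async tools together: start a job with `async: true`, poll it with check_status, and stop it with cancel_job if it is no longer needed. A sketch of one full lifecycle, using the parameter shapes documented above (the job ID is hypothetical):

```javascript
// 1. chat: start a long-running job in the background
const start = {
  prompt: "Audit this repository for performance bottlenecks",
  model: "gpt-5",
  files: ["/path/to/repo"],
  async: true,
  continuation_id: "perf-audit",
};

// 2. check_status: poll progress (streaming summaries appear here
//    when ENABLE_RESPONSE_SUMMARIZATION=true)
const poll = { continuation_id: "perf-audit" };

// 3. check_status: fetch the full transcript once the job completes
const result = { continuation_id: "perf-audit", full_history: true };

// 4. cancel_job: abort early if the answer is no longer needed
const cancel = { continuation_id: "perf-audit" };
```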

## 🤖 AI Summarization Feature

When enabled, the server automatically generates intelligent titles and summaries for better context understanding:

- **Automatic Title Generation**: Creates descriptive titles (up to 60 chars) for each request
- **Streaming Summaries**: Status checks return an up-to-date progress summary based on the partially streamed response
- **Final Summaries**: Concise 1-2 sentence summaries of completed responses
- **Smart Status Display**: Enhanced check_status tool shows titles and summaries in job listings
- **Persistent Context**: Summaries are stored with async jobs for better progress tracking

**Configuration**:

```bash
# Enable in your environment
ENABLE_RESPONSE_SUMMARIZATION=true # Default: false
SUMMARIZATION_MODEL=gpt-5-nano     # Default: gpt-5-nano
```

**Benefits**:

- Quickly understand what each async job is doing without reading full responses
- Better context when reviewing multiple ongoing operations
- Improved job management with at-a-glance understanding of task progress
- Graceful fallback to text snippets when summarization is disabled or fails

## 📊 Supported Models

### OpenAI Models

- **gpt-5**: Latest flagship model (400K context, 128K output) - Superior reasoning, code generation, and analysis
- **gpt-5-mini**: Faster, cost-efficient GPT-5 (400K context, 128K output) - Well-defined tasks, precise prompts
- **gpt-5-nano**: Fastest, most cost-efficient GPT-5 (400K context, 128K output) - Summarization, classification
- **gpt-5-pro**: Most advanced reasoning model (400K context, 272K output) - Hardest problems, extended compute time (EXPENSIVE)
- **o3**: Strong reasoning (200K context)
- **o3-mini**: Fast O3 variant (200K context)
- **o3-pro**: Professional-grade reasoning (200K context) - EXTREMELY EXPENSIVE
- **o3-deep-research**: Deep research model (200K context) - 30-90 min runtime
- **o4-mini**: Latest reasoning model (200K context)
- **o4-mini-deep-research**: Fast deep research model (200K context) - 15-60 min runtime
- **gpt-4.1**: Advanced reasoning (1M context)
- **gpt-4o**: Multimodal flagship (128K context)
- **gpt-4o-mini**: Fast multimodal (128K context)

### Google/Gemini Models

**API Key Options**:

- **GEMINI_API_KEY**: For Gemini Developer API (recommended)
- **GOOGLE_API_KEY**: Alternative name (GEMINI_API_KEY takes priority)
- **Vertex AI**: Use `GOOGLE_GENAI_USE_VERTEXAI=true` with project/location settings

**Supported Models**:

- **gemini-3-pro-preview** (aliases: `pro`, `gemini`): Enhanced reasoning with thinking levels (1M context, 64K output)
- **gemini-2.5-flash** (alias: `flash`): Ultra-fast (1M context, 65K output)
- **gemini-2.5-pro** (alias: `pro 2.5`): Deep reasoning with thinking budget (1M context, 65K output)
- **gemini-2.0-flash**: Latest with experimental thinking (1M context, 65K output)
- **gemini-2.0-flash-lite**: Lightweight fast model, text-only (1M context, 65K output)

**Note**: Default aliases (`gemini`, `pro`) now point to Gemini 3.0 Pro. Use `gemini-2.5-pro` explicitly if you need version 2.5.

### X.AI/Grok Models

- **grok-4-0709** (aliases: `grok`, `grok-4`): Latest advanced model (256K context)
- **grok-code-fast-1**: Speedy and economical reasoning model that excels at agentic coding (256K context)

### Anthropic Models

- **claude-opus-4.1**: Highest intelligence with extended thinking (200K context)
- **claude-sonnet-4**: Balanced performance with extended thinking (200K context)
- **claude-3.7-sonnet**: Enhanced 3.x generation with thinking (200K context)
- **claude-3.5-sonnet**: Fast and intelligent (200K context)
- **claude-3.5-haiku**: Fastest model for simple queries (200K context)

### Mistral Models

- **magistral-medium**: Frontier-class reasoning model (40K context)
- **magistral-small**: Small reasoning model (40K context)
- **mistral-medium-3**: Frontier-class multimodal model (128K context)

### DeepSeek Models

- **deepseek-chat**: Strong MoE model with 671B/37B parameters (64K context)
- **deepseek-reasoner**: Advanced reasoning model with CoT (64K context)

### OpenRouter Models

- **qwen3-235b-thinking**: Qwen3 with enhanced reasoning (32K context)
- **qwen3-coder**: Specialized for programming tasks (32K context)
- **kimi-k2**: Moonshot AI Kimi K2 with extended context (200K context)

### Codex Models

- **codex**: OpenAI Codex agentic coding assistant
  - Thread-based sessions with persistent context
  - Direct filesystem access from working directory
  - Typical response time: 6-20 seconds (longer for complex tasks)
  - Requires ChatGPT login or CODEX_API_KEY
  - See [Configuration](#configuration) for sandbox and approval settings

## 📚 Help & Documentation

### Built-in Help

Type these commands directly in Claude:

- `/converse:help` - Full documentation
- `/converse:help tools` - Tool-specific help (includes async features)
- `/converse:help models` - Model information
- `/converse:help parameters` - Configuration details
- `/converse:help examples` - Usage examples (sync and async)
- `/converse:help async` - Async execution guide

### Additional Resources

- **API Reference**: [docs/API.md](docs/API.md)
- **Architecture Guide**: [docs/ARCHITECTURE.md](docs/ARCHITECTURE.md)
- **Integration Examples**: [docs/EXAMPLES.md](docs/EXAMPLES.md)

## ⚙️ Configuration

### Environment Variables

Create a `.env` file in your project root:

```bash
# Required: At least one API key
OPENAI_API_KEY=sk-proj-your_openai_key_here
GEMINI_API_KEY=your_gemini_api_key_here # Or GOOGLE_API_KEY (GEMINI_API_KEY takes priority)
XAI_API_KEY=xai-your_xai_key_here
ANTHROPIC_API_KEY=sk-ant-your_anthropic_key_here
MISTRAL_API_KEY=your_mistral_key_here
DEEPSEEK_API_KEY=your_deepseek_key_here
OPENROUTER_API_KEY=sk-or-your_openrouter_key_here

# Optional: Server configuration
PORT=3157
LOG_LEVEL=info

# Optional: AI Summarization (Enhanced async status display)
ENABLE_RESPONSE_SUMMARIZATION=true # Enable AI-generated titles and summaries
SUMMARIZATION_MODEL=gpt-5-nano     # Model to use for summarization (default: gpt-5-nano)

# Optional: OpenRouter configuration
OPENROUTER_REFERER=https://github.com/FallDownTheSystem/converse
OPENROUTER_TITLE=Converse
OPENROUTER_DYNAMIC_MODELS=true

# Optional: Codex configuration
CODEX_API_KEY=your_codex_api_key_here # Optional if ChatGPT login available
CODEX_SANDBOX_MODE=read-only          # read-only (default), workspace-write, danger-full-access
CODEX_SKIP_GIT_CHECK=true             # true (default), false
CODEX_APPROVAL_POLICY=never           # never (default), untrusted, on-failure, on-request
CODEX_DEFAULT_MODEL=gpt-5-codex       # Default: gpt-5-codex
```

### Configuration Options

#### Server Environment Variables (.env file)

| Variable    | Description   | Default | Example                  |
| ----------- | ------------- | ------- | ------------------------ |
| `PORT`      | Server port   | `3157`  | `3157`                   |
| `LOG_LEVEL` | Logging level | `info`  | `debug`, `info`, `error` |

#### Claude Code Environment Variables (System/Global)

These must be set in your system environment or when launching Claude Code, NOT in the project .env file:

| Variable                | Description                 | Default  | Example                              |
| ----------------------- | --------------------------- | -------- | ------------------------------------ |
| `MAX_MCP_OUTPUT_TOKENS` | Token response limit        | `25000`  | `200000`                             |
| `MCP_TOOL_TIMEOUT`      | Tool execution timeout (ms) | `120000` | `5400000` (90 min for deep research) |

```bash
# Example: Set globally before starting Claude Code
export MAX_MCP_OUTPUT_TOKENS=200000
export MCP_TOOL_TIMEOUT=5400000 # 90 minutes for deep research models
claude # Then start Claude Code
```

### Model Selection

Use `"auto"` for automatic model selection, or specify exact models:

```javascript
// Auto-selection (recommended)
"auto";

// Specific models
"gemini-2.5-flash";
"gpt-5";
"grok-4-0709";

// Using aliases
"flash"; // -> gemini-2.5-flash
"pro"; // -> gemini-3-pro-preview
"grok"; // -> grok-4-0709
"grok-4"; // -> grok-4-0709
```

**Auto Model Behavior:**

- **Chat Tool**: Selects the first available provider and uses its default model
- **Consensus Tool**: When using `["auto"]`, automatically expands to the first 3 available providers

Provider priority order (requires corresponding API key):

1. OpenAI (`gpt-5`)
2. Google (`gemini-2.5-pro`)
3. XAI (`grok-4`)
4. Anthropic (`claude-sonnet-4-20250514`)
5. Mistral (`magistral-medium-2506`)
6. DeepSeek (`deepseek-reasoner`)
7. OpenRouter (`qwen/qwen3-coder`)

The system will use the first 3 providers that have valid API keys configured. This enables automatic multi-model consensus without manually specifying models, as sketched below.
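
The auto behavior can be pictured as a simple filter over that priority list. The sketch below only illustrates the documented behavior; it is not the server's actual code, and all identifiers are invented:

```javascript
// Illustrative only; none of these identifiers exist in the package.
const PROVIDER_PRIORITY = [
  { envKey: "OPENAI_API_KEY", defaultModel: "gpt-5" },
  { envKey: "GEMINI_API_KEY", defaultModel: "gemini-2.5-pro" },
  { envKey: "XAI_API_KEY", defaultModel: "grok-4" },
  { envKey: "ANTHROPIC_API_KEY", defaultModel: "claude-sonnet-4-20250514" },
  { envKey: "MISTRAL_API_KEY", defaultModel: "magistral-medium-2506" },
  { envKey: "DEEPSEEK_API_KEY", defaultModel: "deepseek-reasoner" },
  { envKey: "OPENROUTER_API_KEY", defaultModel: "qwen/qwen3-coder" },
];

const available = PROVIDER_PRIORITY.filter((p) => process.env[p.envKey]);

// Chat "auto": default model of the first configured provider
const chatModel = available[0]?.defaultModel;

// Consensus ["auto"]: default models of the first three configured providers
const consensusModels = available.slice(0, 3).map((p) => p.defaultModel);
```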

### Advanced Configuration

#### Manual Installation Options

##### Option A: Direct Node.js execution

If you've cloned the repository locally:

```json
{
  "mcpServers": {
    "converse": {
      "command": "node",
      "args": [
        "C:\\Users\\YourUsername\\Documents\\Projects\\converse\\src\\index.js"
      ],
      "env": {
        "OPENAI_API_KEY": "your_key_here",
        "GEMINI_API_KEY": "your_key_here",
        "XAI_API_KEY": "your_key_here",
        "ANTHROPIC_API_KEY": "your_key_here",
        "MISTRAL_API_KEY": "your_key_here",
        "DEEPSEEK_API_KEY": "your_key_here",
        "OPENROUTER_API_KEY": "your_key_here"
      }
    }
  }
}
```

##### Option B: Local HTTP Development (Advanced)

For local development with HTTP transport (optional, for debugging):

1. **First, start the server manually with HTTP transport**:

   ```bash
   # In a terminal, navigate to the project directory
   cd converse
   MCP_TRANSPORT=http npm run dev # Starts server on http://localhost:3157/mcp
   ```

2. **Then configure Claude to connect to it**:

   ```json
   {
     "mcpServers": {
       "converse-local": {
         "url": "http://localhost:3157/mcp"
       }
     }
   }
   ```

**Important**: HTTP transport requires the server to be running before Claude can connect to it. Keep the terminal with the server open while using Claude.

### Configuration File Locations

The Claude configuration file is typically located at:

- Windows: `%APPDATA%\Claude\claude_desktop_config.json`
- macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
- Linux: `~/.config/Claude/claude_desktop_config.json`

For more detailed instructions, see the [official MCP configuration guide](https://docs.anthropic.com/en/docs/claude-code/mcp#configure-mcp-servers).

## 💻 Running Standalone (Without Claude)

You can run the server directly without Claude for testing or development:

```bash
# Quick run (no installation needed)
npx converse-mcp-server

# Alternative package managers
pnpm dlx converse-mcp-server
yarn dlx converse-mcp-server
```

For development setup, see the [Development](#-development) section below.

## 🐛 Troubleshooting

### Common Issues

**Server won't start:**

- Check Node.js version: `node --version` (needs v20+)
- Try a different port: `PORT=3001 npm start`

**API key errors:**

- Verify your .env file has the correct format
- Test with: `npm run test:real-api`

**Module import errors:**

- Clear cache and reinstall: `npm run clean`

### Debug Mode

```bash
# Enable debug logging
LOG_LEVEL=debug npm run dev

# Start with debugger
npm run debug

# Trace all operations
LOG_LEVEL=trace npm run dev
```

## 🔧 Development

### Getting Started

```bash
# Clone the repository
git clone https://github.com/FallDownTheSystem/converse.git
cd converse
npm install

# Copy environment file and add your API keys
cp .env.example .env

# Start development server
npm run dev
```

### Scripts Available

```bash
# Server management
npm start                            # Start server (auto-kills existing server on port 3157)
npm run start:clean                  # Start server without killing existing processes
npm run start:port                   # Start server on port 3001 (avoids port conflicts)
npm run dev                          # Development with hot reload (auto-kills existing server)
npm run dev:clean                    # Development without killing existing processes
npm run dev:port                     # Development on port 3001 (avoids port conflicts)
npm run dev:quiet                    # Development with minimal logging
npm run kill-server                  # Kill any server running on port 3157

# Testing
npm test                             # Run all tests
npm run test:unit                    # Unit tests only
npm run test:integration             # Integration tests
npm run test:e2e                     # End-to-end tests (requires API keys)

# Integration test subcategories
npm run test:integration:mcp         # MCP protocol tests
npm run test:integration:tools       # Tool integration tests
npm run test:integration:providers   # Provider integration tests
npm run test:integration:performance # Performance tests
npm run test:integration:general     # General integration tests

# Other test categories
npm run test:mcp-client              # MCP client tests (HTTP-based)
npm run test:providers               # Provider unit tests
npm run test:tools                   # Tool tests
npm run test:coverage                # Coverage report
npm run test:watch                   # Run tests in watch mode

# Code quality
npm run lint                         # Check code style
npm run lint:fix                     # Fix code style issues
npm run format                       # Format code with Prettier
npm run validate                     # Full validation (lint + test)

# Utilities
npm run build                        # Build for production
npm run debug                        # Start with debugger
npm run check-deps                   # Check for outdated dependencies
npm run kill-server                  # Kill any server running on port 3157
```

### Development Notes

**Port conflicts**: The server uses port 3157 by default. If you get an "EADDRINUSE" error:

- Run `npm run kill-server` to free the port
- Or use a different port: `PORT=3001 npm start`

**Transport Modes**:

- **Stdio** (default): Works automatically with Claude
- **HTTP**: Better for debugging, requires manual start (`MCP_TRANSPORT=http npm run dev`); see the probe sketch below
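
With the HTTP transport running, you can probe the endpoint directly using Node 20's built-in `fetch`. The sketch below sends an MCP `initialize` request; the JSON-RPC field values are assumptions based on the MCP specification, not taken from this package's docs:

```javascript
// Hypothetical probe of the local HTTP transport (e.g. run with `node probe.mjs`)
const response = await fetch("http://localhost:3157/mcp", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    // Streamable HTTP servers may answer with JSON or an SSE stream
    Accept: "application/json, text/event-stream",
  },
  body: JSON.stringify({
    jsonrpc: "2.0",
    id: 1,
    method: "initialize",
    params: {
      protocolVersion: "2025-03-26", // assumed; use the revision your client speaks
      capabilities: {},
      clientInfo: { name: "probe", version: "0.0.1" },
    },
  }),
});

console.log(response.status, await response.text());
```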

### Testing with Real APIs

After setting up your API keys in `.env`:

```bash
# Run end-to-end tests
npm run test:e2e

# Test specific providers
npm run test:integration:providers

# Full validation
npm run validate
```

### Validation Steps

After installation, run these tests to verify everything works:

```bash
npm start        # Should show startup message
npm test         # Should pass all unit tests
npm run validate # Full validation suite
```

### Project Structure

```
converse/
├── src/
│   ├── index.js                 # Main server entry point
│   ├── config.js                # Configuration management
│   ├── router.js                # Central request dispatcher
│   ├── continuationStore.js     # State management
│   ├── systemPrompts.js         # Tool system prompts
│   ├── providers/               # AI provider implementations
│   │   ├── index.js             # Provider registry
│   │   ├── interface.js         # Unified provider interface
│   │   ├── openai.js            # OpenAI provider
│   │   ├── xai.js               # XAI provider
│   │   ├── google.js            # Google provider
│   │   ├── anthropic.js         # Anthropic provider
│   │   ├── mistral.js           # Mistral AI provider
│   │   ├── deepseek.js          # DeepSeek provider
│   │   ├── openrouter.js        # OpenRouter provider
│   │   └── openai-compatible.js # Base for OpenAI-compatible APIs
│   ├── tools/                   # MCP tool implementations
│   │   ├── index.js             # Tool registry
│   │   ├── chat.js              # Chat tool
│   │   └── consensus.js         # Consensus tool
│   └── utils/                   # Utility modules
│       ├── contextProcessor.js  # File/image processing
│       ├── errorHandler.js      # Error handling
│       └── logger.js            # Logging utilities
├── tests/                       # Comprehensive test suite
├── docs/                        # API and architecture docs
└── package.json                 # Dependencies and scripts
```

## 📦 Publishing to NPM

> **Note**: This section is for maintainers. The package is already published as `converse-mcp-server`.

### Quick Publishing Checklist

```bash
# 1. Ensure clean working directory
git status

# 2. Run full validation
npm run validate

# 3. Test package contents
npm pack --dry-run

# 4. Test bin script
node bin/converse.js --help

# 5. Bump version (choose one)
npm version patch # Bug fixes: 1.0.1 → 1.0.2
npm version minor # New features: 1.0.1 → 1.1.0
npm version major # Breaking changes: 1.0.1 → 2.0.0

# 6. Test publish (dry run)
npm publish --dry-run

# 7. Publish to npm
npm publish

# 8. Verify publication
npm view converse-mcp-server
npx converse-mcp-server --help
```

### Version Guidelines

- **Patch** (`npm version patch`): Bug fixes, documentation updates, minor improvements
- **Minor** (`npm version minor`): New features, new model support, new tool capabilities
- **Major** (`npm version major`): Breaking API changes, major architecture changes

### Post-Publication

After publishing, update installation instructions if needed and verify:

```bash
# Test direct execution
npx converse-mcp-server
npx converse

# Test MCP client integration
# Update Claude Desktop config to use: "npx converse-mcp-server"
```

### Troubleshooting Publication

- **Git not clean**: Commit all changes first
- **Tests failing**: Fix issues before publishing
- **Version conflicts**: Check existing versions with `npm view converse-mcp-server versions`
- **Permission issues**: Ensure you're logged in with `npm whoami`

## 🤝 Contributing

1. Fork the repository
2. Create a feature branch: `git checkout -b feature/amazing-feature`
3. Make your changes
4. Run tests: `npm run validate`
5. Commit changes: `git commit -m 'Add amazing feature'`
6. Push to branch: `git push origin feature/amazing-feature`
7. Open a Pull Request

### Development Setup

```bash
# Fork and clone your fork
git clone https://github.com/yourusername/converse.git
cd converse

# Install dependencies
npm install

# Create feature branch
git checkout -b feature/your-feature

# Make changes and test
npm run validate

# Commit and push
git add .
git commit -m "Description of changes"
git push origin feature/your-feature
```

## 🙏 Acknowledgments

This MCP Server was inspired by and builds upon the excellent work from [BeehiveInnovations/zen-mcp-server](https://github.com/BeehiveInnovations/zen-mcp-server).

## 📄 License

MIT License - see [LICENSE](LICENSE) file for details.

## 🔗 Links

- **GitHub**: https://github.com/FallDownTheSystem/converse
- **Issues**: https://github.com/FallDownTheSystem/converse/issues
- **NPM Package**: https://www.npmjs.com/package/converse-mcp-server