converse-mcp-server 2.4.2 → 2.5.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/bin/converse.js CHANGED
File without changes
package/docs/API.md CHANGED
@@ -331,16 +331,16 @@ MCP_TRANSPORT=stdio npm start
  | `gpt-4o` | 128K | 16K | Multimodal | Vision, general chat |
  | `gpt-4o-mini` | 128K | 16K | Fast multimodal | Quick responses, images |
 
- ### Google/Gemini Models
+ ### Google/Gemini Models (API-based)
 
  | Model | Alias | Context | Tokens | Features | Use Cases |
  |-------|-------|---------|--------|----------|-----------|
- | `gemini-3-pro-preview` | `pro`, `gemini` | 1M | 64K | Thinking levels, enhanced reasoning | Complex problems, deep analysis |
+ | `gemini-3-pro-preview` | `pro` | 1M | 64K | Thinking levels, enhanced reasoning | Complex problems, deep analysis |
  | `gemini-2.5-flash` | `flash` | 1M | 65K | Ultra-fast | Quick analysis, simple queries |
  | `gemini-2.5-pro` | `pro 2.5` | 1M | 65K | Thinking mode | Deep reasoning, architecture |
  | `gemini-2.0-flash` | `flash2` | 1M | 65K | Latest | Experimental thinking |
 
- **Note:** Default aliases `gemini`, `pro`, and `gemini-pro` now point to Gemini 3.0 Pro. Use `gemini-2.5-pro` explicitly if you need the 2.5 version.
+ **Note:** The short model name `gemini` now routes to **Gemini CLI** (OAuth-based). For Google API access, use specific model names like `gemini-2.5-pro` or `gemini-2.0-flash`.
 
  ### X.AI/Grok Models
 
@@ -398,6 +398,42 @@ MCP_TRANSPORT=stdio npm start
  - **Response times**: 6-20 seconds typical (complex tasks may take minutes)
  - **Authentication**: Requires ChatGPT login OR `CODEX_API_KEY` environment variable
 
+ ### Gemini CLI Models (OAuth-based)
+
+ **Gemini CLI** provides subscription-based access to Gemini models through OAuth:
+
+ - **Model**: `gemini` (routes to gemini-3-pro-preview)
+ - **Authentication**: OAuth via Gemini CLI (requires one-time setup)
+ - **Setup**: Install `@google/gemini-cli` globally and run `gemini` to authenticate
+ - **Billing**: Uses Google subscription (Google One AI Premium or Gemini Advanced) instead of API credits
+ - **Credentials**: Stored in `~/.gemini/oauth_creds.json`
+ - **Features**: Access to enhanced agentic features available through CLI
+ - **Context**: 1M tokens (inherited from gemini-3-pro-preview)
+ - **Output**: 64K tokens
+
+ **Authentication Setup:**
+ ```bash
+ # Install Gemini CLI globally
+ npm install -g @google/gemini-cli
+
+ # Run interactive authentication
+ gemini
+
+ # Follow prompts to authenticate via browser
+ # Credentials are saved to ~/.gemini/oauth_creds.json
+ ```
+
+ **Usage Example:**
+ ```json
+ {
+   "name": "chat",
+   "arguments": {
+     "prompt": "Explain the event loop in JavaScript",
+     "model": "gemini"
+   }
+ }
+ ```
+
  **Codex-Specific Behavior:**
  - `continuation_id` - Required for thread continuation (maintains full conversation history)
  - `files` parameter - Files accessed directly from working directory, not passed as message content
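The split documented above — bare `gemini` goes to the OAuth CLI provider, explicit model names go to the Google API provider — can be illustrated with a small standalone sketch. `routeModel` and its return labels are hypothetical for illustration; they are not part of the package's exported API, though they mirror the routing the docs describe.

```javascript
// Hypothetical sketch of the documented routing rule: the short name
// "gemini" selects the OAuth-based CLI provider; explicit API model
// names (gemini-2.5-pro, gemini-2.0-flash, ...) select the API provider.
function routeModel(model) {
  const m = model.toLowerCase();
  if (m === 'gemini' || m === 'gemini-cli') return 'gemini-cli'; // OAuth via Gemini CLI
  if (m.startsWith('gemini-')) return 'google'; // API key via GOOGLE_API_KEY
  return 'other';
}

console.log(routeModel('gemini'));         // "gemini-cli"
console.log(routeModel('gemini-2.5-pro')); // "google"
```

Note the ordering: the exact-match CLI check must come before the `gemini-` prefix check, otherwise `gemini-cli` itself would be caught by the prefix.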
package/docs/PROVIDERS.md CHANGED
@@ -20,11 +20,11 @@ This guide documents all supported AI providers in the Converse MCP Server and t
  - **Get Key**: [makersuite.google.com/app/apikey](https://makersuite.google.com/app/apikey)
  - **Environment Variable**: `GOOGLE_API_KEY`
  - **Supported Models**:
- - `gemini-3-pro-preview` (aliases: `pro`, `gemini`) - Enhanced reasoning with thinking levels (1M context, 64K output)
+ - `gemini-3-pro-preview` (alias: `pro`) - Enhanced reasoning with thinking levels (1M context, 64K output)
  - `gemini-2.5-pro` (alias: `pro 2.5`) - Deep reasoning with thinking budget (1M context, 65K output)
  - `gemini-2.5-flash` (alias: `flash`) - Ultra-fast model with thinking budget (1M context, 65K output)
  - `gemini-2.0-flash`, `gemini-2.0-flash-lite` - Latest generation (1M context, 65K output)
- - **Note**: Default aliases (`gemini`, `pro`, `gemini-pro`) now point to Gemini 3.0 Pro. Use `gemini-2.5-pro` explicitly if you need version 2.5.
+ - **Note**: The short model name `gemini` now routes to **Gemini CLI** (OAuth-based access). For Google API access, use specific model names like `gemini-2.5-pro` or `gemini-2.0-flash`.
 
  ### X.AI (Grok)
  - **API Key Format**: `xai-...` (starts with `xai-`)
@@ -114,6 +114,73 @@ This guide documents all supported AI providers in the Converse MCP Server and t
  - Use `CODEX_APPROVAL_POLICY=never` for headless server deployments
  - Always use `continuation_id` for thread continuation
 
+ ### Gemini CLI
+ - **Authentication**: OAuth via Gemini CLI (no API key needed)
+ - **Setup Required**:
+   1. Install Gemini CLI globally: `npm install -g @google/gemini-cli`
+   2. Authenticate: run the `gemini` command and follow the interactive prompts
+   3. Credentials are stored in `~/.gemini/oauth_creds.json`
+ - **Environment Variables**: None (uses the OAuth credentials file)
+ - **Supported Models**:
+   - `gemini` - Routes to gemini-3-pro-preview via the CLI
+   - Provides access to Gemini 3.0 Pro Preview through a Google subscription (Google One AI Premium or Gemini Advanced)
+
+ **Key Features:**
+ - **OAuth Authentication**: Uses Google account login instead of API keys
+ - **Subscription Access**: Leverages a Google subscription instead of paying per API call
+ - **Enhanced Features**: Access to agentic features available through the CLI that aren't in the standard API
+ - **Model Support**: Currently supports gemini-3-pro-preview only
+
+ **Authentication Setup:**
+ ```bash
+ # Install Gemini CLI globally
+ npm install -g @google/gemini-cli
+
+ # Run interactive authentication (one-time setup)
+ gemini
+
+ # Follow prompts to:
+ # 1. Select authentication method (Personal OAuth recommended)
+ # 2. Authorize via browser
+ # 3. Credentials are saved to ~/.gemini/oauth_creds.json
+ ```
+
+ **Usage Examples:**
+
+ *Chat Tool:*
+ ```json
+ {
+   "name": "chat",
+   "arguments": {
+     "prompt": "Explain async/await in JavaScript",
+     "model": "gemini"
+   }
+ }
+ ```
+
+ *Consensus Tool:*
+ ```json
+ {
+   "name": "consensus",
+   "arguments": {
+     "prompt": "Should we use TypeScript for this component?",
+     "models": ["gemini", "gpt-5", "claude-sonnet-4"]
+   }
+ }
+ ```
+
+ **Best Practices:**
+ - Authenticate before first use (run the `gemini` CLI command)
+ - Use specific model names for Google API access (e.g., `gemini-2.5-pro`)
+ - The model name `gemini` is reserved for CLI-based access
+ - If authentication fails, check that the credentials file exists at `~/.gemini/oauth_creds.json`
+
+ **Differences from Google API Provider:**
+ - **Authentication**: OAuth (CLI) vs API key (Google API)
+ - **Billing**: Google subscription vs pay-per-use API
+ - **Model Routing**: `gemini` → CLI provider, specific names (e.g., `gemini-2.5-pro`) → API provider
+ - **Models**: Only gemini-3-pro-preview vs the full Gemini model family
+
  ## Configuration Examples
 
  ### Basic Configuration (.env file)
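The "check that the credentials file exists" best practice above can be turned into a small preflight check. This is a hypothetical helper sketch, not part of the package's exported API; the only assumption taken from the docs is the `~/.gemini/oauth_creds.json` location.

```javascript
import { existsSync } from 'node:fs';
import { homedir } from 'node:os';
import { join } from 'node:path';

// Preflight sketch: prefer the OAuth-backed "gemini" model when the
// Gemini CLI credentials file is present, otherwise fall back to an
// API-key-based model name.
function pickGeminiModel() {
  const credsPath = join(homedir(), '.gemini', 'oauth_creds.json');
  return existsSync(credsPath) ? 'gemini' : 'gemini-2.5-pro';
}

const model = pickGeminiModel();
console.log(model); // "gemini" if authenticated via the CLI, else "gemini-2.5-pro"
```

The result depends on the machine's authentication state, so a client should treat both outcomes as valid.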
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "converse-mcp-server",
-   "version": "2.4.2",
+   "version": "2.5.1",
    "description": "Converse MCP Server - Converse with other LLMs with chat and consensus tools",
    "type": "module",
    "main": "src/index.js",
@@ -11,56 +11,6 @@
    "engines": {
      "node": ">=20.0.0"
    },
-   "keywords": [
-     "mcp",
-     "server",
-     "ai",
-     "chat",
-     "consensus",
-     "openai",
-     "google",
-     "gemini",
-     "grok"
-   ],
-   "author": "Converse MCP Server",
-   "license": "MIT",
-   "homepage": "https://github.com/FallDownTheSystem/converse#readme",
-   "repository": {
-     "type": "git",
-     "url": "git+https://github.com/FallDownTheSystem/converse.git"
-   },
-   "bugs": {
-     "url": "https://github.com/FallDownTheSystem/converse/issues"
-   },
-   "files": [
-     "src/",
-     "bin/",
-     "docs/",
-     "README.md",
-     ".env.example"
-   ],
-   "dependencies": {
-     "@anthropic-ai/sdk": "^0.70.0",
-     "@google/genai": "^1.30.0",
-     "@mistralai/mistralai": "^1.10.0",
-     "@modelcontextprotocol/sdk": "^1.22.0",
-     "@openai/codex-sdk": "^0.58.0",
-     "cors": "^2.8.5",
-     "dotenv": "^17.2.3",
-     "express": "^5.1.0",
-     "lru-cache": "^11.2.2",
-     "openai": "^6.9.1",
-     "p-limit": "^7.2.0",
-     "vite": "^7.2.2"
-   },
-   "devDependencies": {
-     "@vitest/coverage-v8": "^4.0.10",
-     "cross-env": "^10.1.0",
-     "eslint": "^9.39.1",
-     "prettier": "^3.6.2",
-     "rimraf": "^6.1.0",
-     "vitest": "^4.0.10"
-   },
    "scripts": {
      "kill-server": "node scripts/kill-server.js",
      "start": "npm run kill-server && node src/index.js",
@@ -114,5 +64,57 @@
      "validate:fix": "node scripts/validate.js --fix",
      "validate:fast": "node scripts/validate.js --skip-tests --skip-lint",
      "precommit": "npm run validate"
+   },
+   "keywords": [
+     "mcp",
+     "server",
+     "ai",
+     "chat",
+     "consensus",
+     "openai",
+     "google",
+     "gemini",
+     "grok"
+   ],
+   "author": "Converse MCP Server",
+   "license": "MIT",
+   "homepage": "https://github.com/FallDownTheSystem/converse#readme",
+   "repository": {
+     "type": "git",
+     "url": "git+https://github.com/FallDownTheSystem/converse.git"
+   },
+   "bugs": {
+     "url": "https://github.com/FallDownTheSystem/converse/issues"
+   },
+   "files": [
+     "src/",
+     "bin/",
+     "docs/",
+     "README.md",
+     ".env.example"
+   ],
+   "dependencies": {
+     "@anthropic-ai/sdk": "^0.70.0",
+     "@google/genai": "^1.30.0",
+     "@mistralai/mistralai": "^1.10.0",
+     "@modelcontextprotocol/sdk": "^1.22.0",
+     "@openai/codex-sdk": "^0.58.0",
+     "ai": "^5.0.101",
+     "ai-sdk-provider-gemini-cli": "^1.4.0",
+     "cors": "^2.8.5",
+     "dotenv": "^17.2.3",
+     "express": "^5.1.0",
+     "lru-cache": "^11.2.2",
+     "openai": "^6.9.1",
+     "p-limit": "^7.2.0",
+     "vite": "^7.2.2"
+   },
+   "devDependencies": {
+     "@vitest/coverage-v8": "^4.0.10",
+     "cross-env": "^10.1.0",
+     "eslint": "^9.39.1",
+     "prettier": "^3.6.2",
+     "rimraf": "^6.1.0",
+     "vitest": "^4.0.10"
    }
- }
+ }
@@ -0,0 +1,422 @@
+ /**
+  * Gemini CLI Provider
+  *
+  * Provider implementation for Google's Gemini models using the ai-sdk-provider-gemini-cli package.
+  * Implements the unified interface: async invoke(messages, options) => { content, stop_reason, rawResponse }
+  *
+  * Key features:
+  * - Uses OAuth authentication from Gemini CLI (no API keys needed)
+  * - Supports gemini-3-pro-preview model via Google Cloud Code endpoints
+  * - Uses AI SDK v5 standard interfaces (generateText/streamText)
+  * - Compatible with both chat and consensus tools
+  *
+  * Authentication:
+  * - Requires global Gemini CLI installation: npm install -g @google/gemini-cli
+  * - User must authenticate once via: gemini (interactive CLI)
+  * - Credentials stored in ~/.gemini/oauth_creds.json
+  */
+
+ import { existsSync } from 'node:fs';
+ import { homedir } from 'node:os';
+ import { join } from 'node:path';
+ import { debugLog, debugError } from '../utils/console.js';
+ import { ProviderError, ErrorCodes, StopReasons } from './interface.js';
+
+ // Supported Gemini CLI models with their configurations
+ const SUPPORTED_MODELS = {
+   gemini: {
+     modelName: 'gemini',
+     friendlyName: 'Gemini 3.0 Pro Preview (via CLI)',
+     contextWindow: 1048576, // 1M tokens
+     maxOutputTokens: 64000,
+     supportsStreaming: true,
+     supportsImages: true, // Base64 only (no URLs)
+     supportsTemperature: true,
+     supportsThinking: true,
+     supportsWebSearch: true,
+     timeout: 300000, // 5 minutes
+     description:
+       'Gemini 3.0 Pro Preview via OAuth - requires Gemini CLI authentication',
+     aliases: ['gemini-cli'],
+     // Internal SDK model name (user-facing "gemini" maps to SDK's "gemini-3-pro-preview")
+     sdkModelName: 'gemini-3-pro-preview',
+   },
+ };
+
+ /**
+  * Custom error class for Gemini CLI provider errors
+  */
+ class GeminiCliProviderError extends ProviderError {
+   constructor(message, code, originalError = null) {
+     super(message, code, originalError);
+     this.name = 'GeminiCliProviderError';
+   }
+ }
+
+ /**
+  * Check if OAuth credentials file exists
+  * @returns {boolean} True if credentials file exists
+  */
+ function hasOAuthCredentials() {
+   try {
+     const credsPath = join(homedir(), '.gemini', 'oauth_creds.json');
+     return existsSync(credsPath);
+   } catch (error) {
+     debugError('[Gemini CLI] Error checking OAuth credentials', error);
+     return false;
+   }
+ }
+
+ /**
+  * Dynamically import Gemini CLI SDK (lazy loading)
+  * This keeps the SDK as an optional dependency
+  */
+ async function getGeminiCliSDK() {
+   try {
+     // Use dynamic import to load SDK only when needed
+     const { createGeminiProvider } = await import('ai-sdk-provider-gemini-cli');
+     return createGeminiProvider;
+   } catch (error) {
+     throw new GeminiCliProviderError(
+       'Gemini CLI SDK not installed. Install with: npm install ai-sdk-provider-gemini-cli',
+       'GEMINI_CLI_NOT_INSTALLED',
+       error,
+     );
+   }
+ }
+
+ /**
+  * Dynamically import AI SDK (lazy loading)
+  */
+ async function getAISDK() {
+   try {
+     const { generateText, streamText } = await import('ai');
+     return { generateText, streamText };
+   } catch (error) {
+     throw new GeminiCliProviderError(
+       'AI SDK not installed. Install with: npm install ai',
+       'AI_SDK_NOT_INSTALLED',
+       error,
+     );
+   }
+ }
+
+ /**
+  * Create stream generator for Gemini CLI streaming responses
+  * Yields normalized events compatible with ProviderStreamNormalizer
+  */
+ async function* createStreamingGenerator(
+   modelInstance,
+   messages,
+   options,
+   signal,
+   userFacingModelName = 'gemini',
+ ) {
+   const { streamText } = await getAISDK();
+
+   try {
+     const streamOptions = {
+       model: modelInstance,
+       messages,
+       ...options,
+     };
+
+     if (signal) {
+       streamOptions.abortSignal = signal;
+     }
+
+     const result = await streamText(streamOptions);
+
+     // Yield start event
+     yield {
+       type: 'start',
+       provider: 'gemini-cli',
+       model: userFacingModelName,
+     };
+
+     // Stream text chunks
+     for await (const chunk of result.textStream) {
+       // Check for cancellation
+       if (signal?.aborted) {
+         throw new GeminiCliProviderError('Request cancelled', 'CANCELLED');
+       }
+
+       // Yield delta event with content chunk (normalized format)
+       yield {
+         type: 'delta',
+         data: {
+           textDelta: chunk,
+         },
+       };
+     }
+
+     // Get final usage stats and metadata
+     const usage = await result.usage;
+     const finishReason = await result.finishReason;
+
+     // Yield usage event
+     if (usage) {
+       yield {
+         type: 'usage',
+         usage: {
+           input_tokens: usage.promptTokens || 0,
+           output_tokens: usage.completionTokens || 0,
+           total_tokens: usage.totalTokens || 0,
+           cached_input_tokens: 0,
+         },
+       };
+     }
+
+     // Yield end event
+     yield {
+       type: 'end',
+       stop_reason: mapFinishReason(finishReason),
+       finish_reason: finishReason,
+     };
+   } catch (error) {
+     if (signal?.aborted) {
+       throw new GeminiCliProviderError('Request cancelled', 'CANCELLED');
+     }
+     throw error;
+   }
+ }
+
+ /**
+  * Map AI SDK finish reasons to our StopReasons enum
+  */
+ function mapFinishReason(finishReason) {
+   switch (finishReason) {
+     case 'stop':
+       return StopReasons.STOP;
+     case 'length':
+     case 'max-tokens':
+       return StopReasons.LENGTH;
+     case 'content-filter':
+       return StopReasons.CONTENT_FILTER;
+     case 'tool-calls':
+       return StopReasons.TOOL_USE;
+     case 'error':
+       return StopReasons.ERROR;
+     default:
+       return StopReasons.OTHER;
+   }
+ }
+
+ /**
+  * Gemini CLI Provider Implementation
+  */
+ export const geminiCliProvider = {
+   /**
+    * Invoke Gemini CLI with messages and options
+    * @param {Array} messages - Message array (Converse format)
+    * @param {Object} options - Invocation options
+    * @returns {Promise<Object>|AsyncGenerator} Response or stream generator
+    */
+   async invoke(messages, options = {}) {
+     const {
+       model = 'gemini',
+       config,
+       stream = false,
+       signal,
+       reasoning_effort,
+       temperature,
+       use_websearch,
+     } = options;
+
+     // Validate configuration
+     if (!config) {
+       throw new GeminiCliProviderError(
+         'Configuration is required',
+         ErrorCodes.MISSING_API_KEY,
+       );
+     }
+
+     // Check OAuth credentials
+     if (!hasOAuthCredentials()) {
+       throw new GeminiCliProviderError(
+         'Gemini CLI authentication required. Run: gemini (interactive CLI) to authenticate',
+         ErrorCodes.INVALID_API_KEY,
+       );
+     }
+
+     try {
+       // Get model configuration to map user-facing name to SDK model name
+       const modelConfig = this.getModelConfig(model);
+       if (!modelConfig) {
+         throw new GeminiCliProviderError(
+           `Model ${model} not supported by Gemini CLI provider`,
+           ErrorCodes.MODEL_NOT_FOUND,
+         );
+       }
+
+       // Get the SDK model name (e.g., "gemini" -> "gemini-3-pro-preview")
+       const sdkModelName = modelConfig.sdkModelName || model;
+
+       // Get SDKs
+       const createGeminiProvider = await getGeminiCliSDK();
+       const { generateText } = await getAISDK();
+
+       // Create provider instance with OAuth authentication
+       const gemini = createGeminiProvider({
+         authType: 'oauth-personal',
+       });
+
+       // Create model instance with SDK model name
+       const modelInstance = gemini(sdkModelName);
+
+       // Build AI SDK options
+       const aiOptions = {
+         messages,
+       };
+
+       // Add optional parameters
+       if (temperature !== undefined) {
+         aiOptions.temperature = temperature;
+       }
+
+       // Note: reasoning_effort and use_websearch are not directly supported by AI SDK
+       // These would need to be handled at the API level if the provider supports them
+       if (reasoning_effort !== undefined) {
+         debugLog(
+           '[Gemini CLI] Parameter "reasoning_effort" not directly supported (ignored)',
+         );
+       }
+       if (use_websearch) {
+         debugLog(
+           '[Gemini CLI] Parameter "use_websearch" not directly supported (ignored)',
+         );
+       }
+
+       // Streaming mode
+       if (stream) {
+         return createStreamingGenerator(
+           modelInstance,
+           messages,
+           aiOptions,
+           signal,
+           model, // Pass user-facing model name for metadata
+         );
+       }
+
+       // Synchronous mode
+       const startTime = Date.now();
+
+       const result = await generateText({
+         model: modelInstance,
+         ...aiOptions,
+         ...(signal && { abortSignal: signal }),
+       });
+
+       const responseTime = Date.now() - startTime;
+
+       // Extract content from AI SDK v5 response format
+       const content = result.content?.[0]?.text || result.text || '';
+
+       return {
+         content,
+         stop_reason: mapFinishReason(result.finishReason),
+         rawResponse: result,
+         metadata: {
+           provider: 'gemini-cli',
+           model,
+           usage: result.usage
+             ? {
+                 input_tokens: result.usage.promptTokens || 0,
+                 output_tokens: result.usage.completionTokens || 0,
+                 total_tokens: result.usage.totalTokens || 0,
+                 cached_input_tokens: 0,
+               }
+             : null,
+           response_time_ms: responseTime,
+           finish_reason: result.finishReason || 'stop',
+         },
+       };
+     } catch (error) {
+       debugError('[Gemini CLI] Execution error', error);
+
+       // Map common errors to standard error codes
+       if (
+         error.message?.includes('authentication') ||
+         error.message?.includes('oauth') ||
+         error.message?.includes('credentials')
+       ) {
+         throw new GeminiCliProviderError(
+           'Gemini CLI authentication failed. Run: gemini (interactive CLI) to authenticate',
+           ErrorCodes.INVALID_API_KEY,
+           error,
+         );
+       }
+
+       if (error.message?.includes('rate limit')) {
+         throw new GeminiCliProviderError(
+           'Rate limit exceeded',
+           ErrorCodes.RATE_LIMIT_EXCEEDED,
+           error,
+         );
+       }
+
+       if (error.message?.includes('timeout')) {
+         throw new GeminiCliProviderError(
+           'Request timeout',
+           ErrorCodes.TIMEOUT_ERROR,
+           error,
+         );
+       }
+
+       // Re-throw as Gemini CLI error
+       throw new GeminiCliProviderError(
+         error.message || 'Gemini CLI execution failed',
+         ErrorCodes.API_ERROR,
+         error,
+       );
+     }
+   },
+
+   /**
+    * Validate Gemini CLI configuration
+    * Gemini CLI uses OAuth authentication (no API keys needed)
+    */
+   validateConfig(_config) {
+     // Check if OAuth credentials file exists
+     return hasOAuthCredentials();
+   },
+
+   /**
+    * Check if Gemini CLI provider is available
+    */
+   isAvailable(config) {
+     return this.validateConfig(config);
+   },
+
+   /**
+    * Get supported Gemini CLI models
+    */
+   getSupportedModels() {
+     return SUPPORTED_MODELS;
+   },
+
+   /**
+    * Get model configuration for specific model
+    */
+   getModelConfig(modelName) {
+     const modelNameLower = modelName.toLowerCase();
+
+     // Check exact match
+     if (SUPPORTED_MODELS[modelNameLower]) {
+       return SUPPORTED_MODELS[modelNameLower];
+     }
+
+     // Check aliases
+     for (const [supportedModel, config] of Object.entries(SUPPORTED_MODELS)) {
+       if (config.aliases) {
+         for (const alias of config.aliases) {
+           if (alias.toLowerCase() === modelNameLower) {
+             return config;
+           }
+         }
+       }
+     }
+
+     return null;
+   },
+ };
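The new provider's `mapFinishReason` logic can be exercised in isolation. The sketch below re-declares a stand-in `StopReasons` object, since the real enum lives in `./interface.js`, which is not shown in this diff — the string values here are assumptions for illustration; only the switch logic is taken from the new file.

```javascript
// Stand-in for the StopReasons enum imported from './interface.js'.
// The values are illustrative assumptions; the mapping below mirrors
// the switch statement in the new provider file.
const StopReasons = {
  STOP: 'stop',
  LENGTH: 'length',
  CONTENT_FILTER: 'content_filter',
  TOOL_USE: 'tool_use',
  ERROR: 'error',
  OTHER: 'other',
};

// AI SDK finish reason -> normalized stop reason
function mapFinishReason(finishReason) {
  switch (finishReason) {
    case 'stop':
      return StopReasons.STOP;
    case 'length':
    case 'max-tokens':
      return StopReasons.LENGTH;
    case 'content-filter':
      return StopReasons.CONTENT_FILTER;
    case 'tool-calls':
      return StopReasons.TOOL_USE;
    case 'error':
      return StopReasons.ERROR;
    default:
      return StopReasons.OTHER;
  }
}

console.log(mapFinishReason('max-tokens')); // "length"
console.log(mapFinishReason(undefined));    // "other" (e.g. no finish reason reported)
```

Collapsing both `'length'` and `'max-tokens'` into one case keeps the provider tolerant of naming differences across AI SDK versions.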
@@ -144,7 +144,6 @@ const SUPPORTED_MODELS = {
      'gemini3',
      'gemini-3-pro',
      '3-pro',
-     'gemini', // Moving from 2.5 Pro
      'gemini pro', // Moving from 2.5 Pro
      'gemini-pro', // Moving from 2.5 Pro
      'pro', // Moving from 2.5 Pro
@@ -14,6 +14,7 @@ import { mistralProvider } from './mistral.js';
  import { deepseekProvider } from './deepseek.js';
  import { openrouterProvider } from './openrouter.js';
  import { codexProvider } from './codex.js';
+ import { geminiCliProvider } from './gemini-cli.js';
 
  /**
   * Provider registry map
@@ -25,6 +26,7 @@ import { codexProvider } from './codex.js';
  const providers = {
    openai: openaiProvider,
    xai: xaiProvider,
+   'gemini-cli': geminiCliProvider,
    google: googleProvider,
    anthropic: anthropicProvider,
    mistral: mistralProvider,
package/src/tools/chat.js CHANGED
@@ -416,6 +416,8 @@ function resolveAutoModel(model, providerName) {
  }
 
  const defaults = {
+   codex: 'codex',
+   'gemini-cli': 'gemini',
    openai: 'gpt-5',
    xai: 'grok-4-0709',
    google: 'gemini-pro',
@@ -431,8 +433,15 @@ function resolveAutoModel(model, providerName) {
  export function mapModelToProvider(model, providers) {
    const modelLower = model.toLowerCase();
 
-   // Handle "auto" - default to OpenAI
+   // Handle "auto" - prioritize: codex > gemini-cli > openai
    if (modelLower === 'auto') {
+     // Check availability in priority order
+     if (providers['codex']) {
+       return 'codex';
+     }
+     if (providers['gemini-cli']) {
+       return 'gemini-cli';
+     }
      return 'openai';
    }
 
@@ -441,6 +450,11 @@ export function mapModelToProvider(model, providers) {
      return 'codex';
    }
 
+   // Check Gemini CLI (exact match only - routes to CLI provider instead of Google API)
+   if (modelLower === 'gemini' || modelLower === 'gemini-cli') {
+     return 'gemini-cli';
+   }
+
    // Check OpenRouter-specific patterns first
    if (
      modelLower === 'openrouter auto' ||
@@ -490,7 +504,6 @@ export function mapModelToProvider(model, providers) {
 
    // Google models
    if (
-     modelLower.includes('gemini') ||
      modelLower.includes('flash') ||
      modelLower.includes('pro') ||
      modelLower === 'google'
@@ -934,7 +947,7 @@ chatTool.inputSchema = {
    model: {
      type: 'string',
      description:
-       'AI model to use. Examples: "auto" (recommended), "gpt-5", "gemini-pro", "grok-4-0709". Defaults to auto-selection.',
+       'AI model to use. Examples: "auto" (recommended), "codex", "gemini", "gpt-5", "grok-4-0709". Defaults to auto-selection.',
    },
    files: {
      type: 'array',
@@ -653,6 +653,8 @@ Please provide your refined response:`;
   */
  function getDefaultModelForProvider(providerName) {
    const defaults = {
+     codex: 'codex',
+     'gemini-cli': 'gemini',
      openai: 'gpt-5',
      xai: 'grok-4-0709',
      google: 'gemini-pro',
@@ -679,11 +681,28 @@ function resolveAutoModel(model, providerName) {
  function mapModelToProvider(model, providers) {
    const modelLower = model.toLowerCase();
 
-   // Handle "auto" - default to OpenAI
+   // Handle "auto" - prioritize: codex > gemini-cli > openai
    if (modelLower === 'auto') {
+     // Check availability in priority order
+     if (providers['codex']) {
+       return 'codex';
+     }
+     if (providers['gemini-cli']) {
+       return 'gemini-cli';
+     }
      return 'openai';
    }
 
+   // Check Codex (exact match only - don't route "gpt-5-codex" etc to Codex provider)
+   if (modelLower === 'codex') {
+     return 'codex';
+   }
+
+   // Check Gemini CLI (exact match only - routes to CLI provider instead of Google API)
+   if (modelLower === 'gemini' || modelLower === 'gemini-cli') {
+     return 'gemini-cli';
+   }
+
    // Check OpenRouter-specific patterns first
    if (
      modelLower === 'openrouter auto' ||
@@ -733,7 +752,6 @@ function mapModelToProvider(model, providers) {
 
    // Google models
    if (
-     modelLower.includes('gemini') ||
      modelLower.includes('flash') ||
      modelLower.includes('pro') ||
      modelLower === 'google'
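The new `auto` resolution added in both tools (prefer codex, then gemini-cli, then openai) can be checked with a standalone sketch. This re-states the diffed fallback logic as its own function for illustration; it is not imported from the package.

```javascript
// Availability-ordered fallback for model "auto", as introduced in this
// release: codex > gemini-cli > openai. `providers` is the registry map;
// a key being present means that provider is configured and available.
function resolveAuto(providers) {
  if (providers['codex']) return 'codex';
  if (providers['gemini-cli']) return 'gemini-cli';
  return 'openai';
}

console.log(resolveAuto({ codex: {}, 'gemini-cli': {} })); // "codex"
console.log(resolveAuto({ 'gemini-cli': {} }));            // "gemini-cli"
console.log(resolveAuto({}));                              // "openai"
```

Note that `openai` is the unconditional last resort here, matching the diff: the previous behavior (`auto` always meaning OpenAI) is preserved when neither CLI-based provider is configured.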