@houtini/gemini-mcp 1.0.0 → 1.0.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
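A diff like the one below can be reproduced locally with npm's built-in `npm diff` command (available in npm 7 and later), which fetches both published tarballs from the registry and compares them; a minimal sketch (requires network access to the registry):

```shell
# Compare the two published versions of the package directly from the registry
npm diff --diff=@houtini/gemini-mcp@1.0.0 --diff=@houtini/gemini-mcp@1.0.1
```

The output is a standard unified diff, so it can be piped into any diff viewer or pager.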
Files changed (2)
  1. package/README.md +495 -442
  2. package/package.json +3 -3
package/README.md CHANGED
@@ -1,442 +1,495 @@
- # Gemini MCP Server
-
- [![npm version](https://badge.fury.io/js/@houtini/gemini-mcp.svg)](https://badge.fury.io/js/@houtini/gemini-mcp)
- [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
- [![TypeScript](https://img.shields.io/badge/%3C%2F%3E-TypeScript-%230074c1.svg)](https://www.typescriptlang.org/)
-
- A professional, production-ready Model Context Protocol (MCP) server that provides seamless integration with Google's Gemini AI models. Built with TypeScript and designed for enterprise use, this package offers robust error handling, comprehensive logging, and easy deployment.
-
- ## 🚀 Quick Start
-
- ```bash
- # Install globally
- npm install -g @houtini/gemini-mcp
-
- # Or install locally
- npm install @houtini/gemini-mcp
-
- # Set your API key
- export GEMINI_API_KEY="your-api-key-here"
-
- # Run the server
- gemini-mcp
- ```
-
- ## 📋 Table of Contents
-
- - [Features](#-features)
- - [Installation](#-installation)
- - [Configuration](#-configuration)
- - [Usage Examples](#-usage-examples)
- - [API Reference](#-api-reference)
- - [Development](#-development)
- - [Troubleshooting](#-troubleshooting)
- - [Contributing](#-contributing)
-
- ## Features
-
- ### Core Functionality
- - **🤖 Multi-Model Support** - Access to 6 Gemini models including the latest Gemini 2.5 Flash
- - **💬 Chat Interface** - Advanced chat functionality with customisable parameters
- - **📊 Model Information** - Detailed model capabilities and specifications
- - **🎛️ Fine-Grained Control** - Temperature, token limits, and system prompts
-
- ### Enterprise Features
- - **🏗️ Professional Architecture** - Modular services-based design
- - **🛡️ Robust Error Handling** - Comprehensive error handling with detailed logging
- - **📝 Winston Logging** - Production-ready logging with file rotation
- - **🔒 Security Focused** - No hardcoded credentials, environment-based configuration
- - **🏷️ Full TypeScript** - Complete type safety and IntelliSense support
- - **⚡ High Performance** - Optimised for minimal latency and resource usage
-
- ## 📦 Installation
-
- ### Prerequisites
-
- - **Node.js** 18.0.0 or higher (you're running v24.6.0 ✅)
- - **Google AI Studio API Key** ([Get your key here](https://makersuite.google.com/app/apikey))
-
- ### Global Installation (Recommended)
-
- ```bash
- npm install -g @houtini/gemini-mcp
- ```
-
- ### Local Installation
-
- ```bash
- npm install @houtini/gemini-mcp
- ```
-
- ### From Source
-
- ```bash
- git clone https://github.com/houtini-ai/gemini-mcp.git
- cd gemini-mcp
- npm install
- npm run build
- ```
-
- ## ⚙️ Configuration
-
- ### Environment Variables
-
- The simplest way to configure the server is through environment variables:
-
- ```bash
- # Required
- export GEMINI_API_KEY="your-api-key-here"
-
- # Optional
- export LOG_LEVEL="info" # debug, info, warn, error
- ```
-
- ### Using .env File
-
- Create a `.env` file in your project directory:
-
- ```env
- # Google Gemini Configuration
- GEMINI_API_KEY=your-api-key-here
-
- # Logging Configuration
- LOG_LEVEL=info
-
- # Optional server configuration
- SERVER_NAME=gemini-mcp
- SERVER_VERSION=1.0.0
- ```
-
- ### Claude Desktop Configuration
-
- Add to your Claude Desktop configuration file:
-
- **Windows**: `%APPDATA%\Claude\claude_desktop_config.json`
- **macOS**: `~/Library/Application Support/Claude/claude_desktop_config.json`
-
- #### For Global Installation:
- ```json
- {
-   "mcpServers": {
-     "gemini": {
-       "command": "gemini-mcp",
-       "env": {
-         "GEMINI_API_KEY": "your-api-key-here",
-         "LOG_LEVEL": "info"
-       }
-     }
-   }
- }
- ```
-
- #### For Local Installation:
- ```json
- {
-   "mcpServers": {
-     "gemini": {
-       "command": "node",
-       "args": ["./node_modules/@houtini/gemini-mcp/dist/index.js"],
-       "env": {
-         "GEMINI_API_KEY": "your-api-key-here"
-       }
-     }
-   }
- }
- ```
-
- #### For Development:
- ```json
- {
-   "mcpServers": {
-     "gemini": {
-       "command": "node",
-       "args": ["C:\\path\\to\\gemini-mcp\\dist\\index.js"],
-       "env": {
-         "GEMINI_API_KEY": "your-api-key-here"
-       }
-     }
-   }
- }
- ```
-
- ## 💡 Usage Examples
-
- ### Basic Chat
-
- Ask Claude to use Gemini:
-
- ```
- Can you help me understand quantum computing using Gemini?
- ```
-
- Claude will automatically use the `gemini_chat` tool to get a response from Gemini.
-
- ### Creative Writing
-
- ```
- Use Gemini to write a short story about artificial intelligence discovering creativity.
- ```
-
- ### Technical Analysis
-
- ```
- Can you use Gemini Pro to explain the differences between various machine learning algorithms?
- ```
-
- ### Model Selection
-
- ```
- Use Gemini 1.5 Pro to analyse this code and suggest improvements.
- ```
-
- ### Getting Model Information
-
- ```
- Show me all available Gemini models and their capabilities.
- ```
-
- ## 🔧 API Reference
-
- ### Available Tools
-
- #### `gemini_chat`
-
- Chat with Gemini models to generate text responses.
-
- **Parameters:**
-
- | Parameter | Type | Required | Default | Description |
- |-----------|------|----------|---------|-------------|
- | `message` | string | ✅ | - | The message to send to Gemini |
- | `model` | string | ❌ | "gemini-2.5-flash" | Model to use |
- | `temperature` | number | | 0.7 | Controls randomness (0.0-1.0) |
- | `max_tokens` | integer | ❌ | 2048 | Maximum tokens in response (1-8192) |
- | `system_prompt` | string | ❌ | - | System instruction to guide the model |
-
- **Example:**
- ```json
- {
-   "message": "Explain machine learning in simple terms",
-   "model": "gemini-1.5-pro",
-   "temperature": 0.5,
-   "max_tokens": 1000,
-   "system_prompt": "You are a helpful teaching assistant. Explain concepts clearly and use analogies where appropriate."
- }
- ```
-
- #### `gemini_list_models`
-
- Retrieve information about all available Gemini models.
-
- **Parameters:** None required
-
- **Example:**
- ```json
- {}
- ```
-
- **Response includes:**
- - Model names and display names
- - Descriptions of each model's strengths
- - Recommended use cases
-
- ### Available Models
-
- | Model | Best For | Description |
- |-------|----------|-------------|
- | **gemini-2.5-flash** | General use, latest features | Latest Gemini 2.5 Flash - Fast, versatile performance |
- | **gemini-2.0-flash** | Speed-optimised tasks | Gemini 2.0 Flash - Fast, efficient model |
- | **gemini-1.5-flash** | Quick responses | Gemini 1.5 Flash - Fast, efficient model |
- | **gemini-1.5-pro** | Complex reasoning | Gemini 1.5 Pro - Advanced reasoning capabilities |
- | **gemini-pro** | Balanced performance | Gemini Pro - Balanced performance for most tasks |
- | **gemini-pro-vision** | Multimodal tasks | Gemini Pro Vision - Text and image understanding |
-
- ## 🛠️ Development
-
- ### Building from Source
-
- ```bash
- # Clone the repository
- git clone https://github.com/houtini-ai/gemini-mcp.git
- cd gemini-mcp
-
- # Install dependencies
- npm install
-
- # Build the project
- npm run build
-
- # Run in development mode
- npm run dev
- ```
-
- ### Scripts
-
- | Command | Description |
- |---------|-------------|
- | `npm run build` | Compile TypeScript to JavaScript |
- | `npm run dev` | Run in development mode with live reload |
- | `npm start` | Run the compiled server |
- | `npm test` | Run test suite |
- | `npm run lint` | Check code style |
- | `npm run lint:fix` | Fix linting issues automatically |
-
- ### Project Structure
-
- ```
- src/
- ├── config/              # Configuration management
- │   ├── index.ts         # Main configuration
- │   └── types.ts         # Configuration types
- ├── services/            # Core business logic
- │   ├── base-service.ts
- │   └── gemini/          # Gemini service implementation
- │       ├── index.ts
- │       └── types.ts
- ├── tools/               # MCP tool implementations
- │   ├── gemini-chat.ts
- │   └── gemini-list-models.ts
- ├── utils/               # Utility functions
- │   ├── logger.ts        # Winston logging setup
- │   └── error-handler.ts
- ├── cli.ts               # CLI entry point
- └── index.ts             # Main server implementation
- ```
-
- ### Architecture
-
- The server follows a clean, layered architecture:
-
- 1. **CLI Layer** (`cli.ts`) - Command-line interface
- 2. **Server Layer** (`index.ts`) - MCP protocol handling
- 3. **Tools Layer** (`tools/`) - MCP tool implementations
- 4. **Service Layer** (`services/`) - Business logic and API integration
- 5. **Utility Layer** (`utils/`) - Cross-cutting concerns
-
- ## 🐛 Troubleshooting
-
- ### Common Issues
-
- #### "GEMINI_API_KEY environment variable not set"
-
- **Solution:**
- ```bash
- export GEMINI_API_KEY="your-actual-api-key"
- ```
-
- Or create a `.env` file with your API key.
-
- #### Server not appearing in Claude Desktop
-
- **Solutions:**
- 1. Restart Claude Desktop after updating configuration
- 2. Check that the path in your configuration is correct
- 3. Ensure the built files exist in the `dist` directory
- 4. Verify your API key is valid
-
- #### "Module not found" errors
-
- **Solutions:**
- ```bash
- # Reinstall dependencies
- npm install
-
- # Rebuild the project
- npm run build
-
- # Check Node.js version (requires 18.0.0+)
- node --version
- ```
-
- #### TypeScript compilation errors
-
- **Solution:**
- ```bash
- # Clean and rebuild
- rm -rf dist
- npm run build
- ```
-
- ### Debug Mode
-
- Enable detailed logging:
-
- ```bash
- export LOG_LEVEL=debug
- npm start
- ```
-
- ### Log Files
-
- Logs are written to:
- - **Console output** (stdout/stderr)
- - **`logs/combined.log`** - All log levels
- - **`logs/error.log`** - Error logs only
-
- ### Testing Your Setup
-
- Test the server with these Claude queries:
-
- 1. **Basic connectivity**: "Can you list the available Gemini models?"
- 2. **Simple chat**: "Use Gemini to explain photosynthesis."
- 3. **Advanced features**: "Use Gemini 1.5 Pro with temperature 0.9 to write a creative poem about coding."
-
- ### Performance Tuning
-
- For better performance:
-
- 1. **Adjust token limits** based on your use case
- 2. **Use appropriate models** (Flash for speed, Pro for complex tasks)
- 3. **Monitor logs** for rate limiting or API issues
- 4. **Set reasonable temperature values** (0.7 for balanced, 0.3 for focused, 0.9 for creative)
-
- ## 🤝 Contributing
-
- Contributions are welcome! Please follow these steps:
-
- 1. **Fork the repository**
- 2. **Create a feature branch**: `git checkout -b feature/amazing-feature`
- 3. **Make your changes** and add tests if applicable
- 4. **Ensure all tests pass**: `npm test`
- 5. **Lint your code**: `npm run lint:fix`
- 6. **Build the project**: `npm run build`
- 7. **Commit your changes**: `git commit -m 'Add amazing feature'`
- 8. **Push to the branch**: `git push origin feature/amazing-feature`
- 9. **Open a Pull Request**
-
- ### Development Guidelines
-
- - **Follow TypeScript best practices**
- - **Add tests for new functionality**
- - **Update documentation as needed**
- - **Use conventional commit messages**
- - **Ensure backwards compatibility**
-
- ## 📄 License
-
- This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
-
- ## 🆘 Support
-
- - **GitHub Issues**: [Report bugs or request features](https://github.com/houtini-ai/gemini-mcp/issues)
- - **GitHub Discussions**: [Ask questions or share ideas](https://github.com/houtini-ai/gemini-mcp/discussions)
-
- ## 📈 Changelog
-
- ### v1.0.0
-
- **Initial Release**
- - Complete Node.js/TypeScript rewrite from Python
- - Professional modular architecture with services pattern
- - Comprehensive error handling and logging system
- - Full MCP protocol compliance
- - Support for 6 Gemini models
- - NPM package distribution ready
- - Enterprise-grade configuration management
- - Production-ready build system
-
- ---
-
- **Built with ❤️ for the Model Context Protocol community**
-
- For more information about MCP, visit [modelcontextprotocol.io](https://modelcontextprotocol.io)
+ # Gemini MCP Server
+
+ [![npm version](https://badge.fury.io/js/@houtini/gemini-mcp.svg)](https://badge.fury.io/js/@houtini/gemini-mcp)
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
+ [![TypeScript](https://img.shields.io/badge/%3C%2F%3E-TypeScript-%230074c1.svg)](https://www.typescriptlang.org/)
+
+ A professional, production-ready Model Context Protocol (MCP) server that provides seamless integration with Google's Gemini AI models. Built with TypeScript and designed for enterprise use, this package offers robust error handling, comprehensive logging, and easy deployment.
+
+ ## 🚀 Quick Start
+
+ The easiest way to get started is using `npx` - no installation required:
+
+ ```bash
+ # Get your API key from Google AI Studio
+ # https://makersuite.google.com/app/apikey
+
+ # Test the server (optional)
+ npx @houtini/gemini-mcp
+
+ # Add to Claude Desktop (see configuration below)
+ ```
+
+ ## 📋 Table of Contents
+
+ - [Features](#-features)
+ - [Installation](#-installation)
+ - [Configuration](#-configuration)
+ - [Usage Examples](#-usage-examples)
+ - [API Reference](#-api-reference)
+ - [Development](#-development)
+ - [Troubleshooting](#-troubleshooting)
+ - [Contributing](#-contributing)
+
+ ## ✨ Features
+
+ ### Core Functionality
+ - **🤖 Multi-Model Support** - Access to 6 Gemini models including the latest Gemini 2.5 Flash
+ - **💬 Chat Interface** - Advanced chat functionality with customisable parameters
+ - **📊 Model Information** - Detailed model capabilities and specifications
+ - **🎛️ Fine-Grained Control** - Temperature, token limits, and system prompts
+
+ ### Enterprise Features
+ - **🏗️ Professional Architecture** - Modular services-based design
+ - **🛡️ Robust Error Handling** - Comprehensive error handling with detailed logging
+ - **📝 Winston Logging** - Production-ready logging with file rotation
+ - **🔒 Security Focused** - No hardcoded credentials, environment-based configuration
+ - **🏷️ Full TypeScript** - Complete type safety and IntelliSense support
+ - **⚡ High Performance** - Optimised for minimal latency and resource usage
+
+ ## 📦 Installation
+
+ ### Prerequisites
+
+ - **Node.js** v24.0.0
+ - **Google AI Studio API Key** ([Get your key here](https://makersuite.google.com/app/apikey))
+
+ ### Recommended: No Installation Required
+
+ The simplest approach uses `npx` to run the latest version automatically:
+
+ ```bash
+ # No installation needed - npx handles everything
+ npx @houtini/gemini-mcp
+ ```
+
+ ### Alternative Installation Methods
+
+ #### Global Installation
+ ```bash
+ # Install once, use anywhere
+ npm install -g @houtini/gemini-mcp
+ gemini-mcp
+ ```
+
+ #### Local Project Installation
+ ```bash
+ # Install in your project
+ npm install @houtini/gemini-mcp
+
+ # Run with npx
+ npx @houtini/gemini-mcp
+ ```
+
+ #### From Source (Developers)
+ ```bash
+ git clone https://github.com/houtini-ai/gemini-mcp.git
+ cd gemini-mcp
+ npm install
+ npm run build
+ npm start
+ ```
+
+ ## ⚙️ Configuration
+
+ ### Step 1: Get Your API Key
+
+ Visit [Google AI Studio](https://makersuite.google.com/app/apikey) to create your free API key.
+
+ ### Step 2: Configure Claude Desktop
+
+ Add this configuration to your Claude Desktop config file:
+
+ **Windows**: `%APPDATA%\Claude\claude_desktop_config.json`
+ **macOS**: `~/Library/Application Support/Claude/claude_desktop_config.json`
+
+ #### ✅ Recommended Configuration (using npx)
+
+ ```json
+ {
+   "mcpServers": {
+     "gemini": {
+       "command": "npx",
+       "args": ["@houtini/gemini-mcp"],
+       "env": {
+         "GEMINI_API_KEY": "your-api-key-here"
+       }
+     }
+   }
+ }
+ ```
+
+ **Benefits of this approach:**
+ - ✅ No global installation required
+ - ✅ Always uses the latest version
+ - ✅ Cleaner system (no global packages)
+ - ✅ Works out of the box
+
+ #### Alternative: Global Installation
+
+ ```json
+ {
+   "mcpServers": {
+     "gemini": {
+       "command": "gemini-mcp",
+       "env": {
+         "GEMINI_API_KEY": "your-api-key-here"
+       }
+     }
+   }
+ }
+ ```
+
+ *Note: Requires `npm install -g @houtini/gemini-mcp` first*
+
+ #### Alternative: Local Installation
+
+ ```json
+ {
+   "mcpServers": {
+     "gemini": {
+       "command": "node",
+       "args": ["./node_modules/@houtini/gemini-mcp/dist/index.js"],
+       "env": {
+         "GEMINI_API_KEY": "your-api-key-here"
+       }
+     }
+   }
+ }
+ ```
+
+ *Note: Only works if installed locally in the current directory*
+
+ ### Step 3: Restart Claude Desktop
+
+ After updating the configuration file, restart Claude Desktop to load the new MCP server.
+
+ ### Optional Configuration
+
+ You can add additional environment variables for more control:
+
+ ```json
+ {
+   "mcpServers": {
+     "gemini": {
+       "command": "npx",
+       "args": ["@houtini/gemini-mcp"],
+       "env": {
+         "GEMINI_API_KEY": "your-api-key-here",
+         "LOG_LEVEL": "info"
+       }
+     }
+   }
+ }
+ ```
+
+ **Available Environment Variables:**
+
+ | Variable | Default | Description |
+ |----------|---------|-------------|
+ | `GEMINI_API_KEY` | *required* | Your Google AI Studio API key |
+ | `LOG_LEVEL` | `info` | Logging level: `debug`, `info`, `warn`, `error` |
+
+ ### Using .env File (Development)
+
+ For development or testing, create a `.env` file:
+
+ ```env
+ # Google Gemini Configuration
+ GEMINI_API_KEY=your-api-key-here
+
+ # Logging Configuration (optional)
+ LOG_LEVEL=info
+ ```
+
+ ## 💡 Usage Examples
+
+ ### Basic Chat
+
+ Ask Claude to use Gemini:
+
+ ```
+ Can you help me understand quantum computing using Gemini?
+ ```
+
+ Claude will automatically use the `gemini_chat` tool to get a response from Gemini.
+
+ ### Creative Writing
+
+ ```
+ Use Gemini to write a short story about artificial intelligence discovering creativity.
+ ```
+
+ ### Technical Analysis
+
+ ```
+ Can you use Gemini Pro to explain the differences between various machine learning algorithms?
+ ```
+
+ ### Model Selection
+
+ ```
+ Use Gemini 1.5 Pro to analyse this code and suggest improvements.
+ ```
+
+ ### Getting Model Information
+
+ ```
+ Show me all available Gemini models and their capabilities.
+ ```
+
+ ## 🔧 API Reference
+
+ ### Available Tools
+
+ #### `gemini_chat`
+
+ Chat with Gemini models to generate text responses.
+
+ **Parameters:**
+
+ | Parameter | Type | Required | Default | Description |
+ |-----------|------|----------|---------|-------------|
+ | `message` | string | ✅ | - | The message to send to Gemini |
+ | `model` | string | ❌ | "gemini-2.5-flash" | Model to use |
+ | `temperature` | number | ❌ | 0.7 | Controls randomness (0.0-1.0) |
+ | `max_tokens` | integer | ❌ | 2048 | Maximum tokens in response (1-8192) |
+ | `system_prompt` | string | ❌ | - | System instruction to guide the model |
+
+ **Example:**
+ ```json
+ {
+   "message": "Explain machine learning in simple terms",
+   "model": "gemini-1.5-pro",
+   "temperature": 0.5,
+   "max_tokens": 1000,
+   "system_prompt": "You are a helpful teaching assistant. Explain concepts clearly and use analogies where appropriate."
+ }
+ ```
+
+ #### `gemini_list_models`
+
+ Retrieve information about all available Gemini models.
+
+ **Parameters:** None required
+
+ **Example:**
+ ```json
+ {}
+ ```
+
+ **Response includes:**
+ - Model names and display names
+ - Descriptions of each model's strengths
+ - Recommended use cases
+
+ ### Available Models
+
+ | Model | Best For | Description |
+ |-------|----------|-------------|
+ | **gemini-2.5-flash** | General use, latest features | Latest Gemini 2.5 Flash - Fast, versatile performance |
+ | **gemini-2.0-flash** | Speed-optimised tasks | Gemini 2.0 Flash - Fast, efficient model |
+ | **gemini-1.5-flash** | Quick responses | Gemini 1.5 Flash - Fast, efficient model |
+ | **gemini-1.5-pro** | Complex reasoning | Gemini 1.5 Pro - Advanced reasoning capabilities |
+ | **gemini-pro** | Balanced performance | Gemini Pro - Balanced performance for most tasks |
+ | **gemini-pro-vision** | Multimodal tasks | Gemini Pro Vision - Text and image understanding |
+
+ ## 🛠️ Development
+
+ ### Building from Source
+
+ ```bash
+ # Clone the repository
+ git clone https://github.com/houtini-ai/gemini-mcp.git
+ cd gemini-mcp
+
+ # Install dependencies
+ npm install
+
+ # Build the project
+ npm run build
+
+ # Run in development mode
+ npm run dev
+ ```
+
+ ### Scripts
+
+ | Command | Description |
+ |---------|-------------|
+ | `npm run build` | Compile TypeScript to JavaScript |
+ | `npm run dev` | Run in development mode with live reload |
+ | `npm start` | Run the compiled server |
+ | `npm test` | Run test suite |
+ | `npm run lint` | Check code style |
+ | `npm run lint:fix` | Fix linting issues automatically |
+
+ ### Project Structure
+
+ ```
+ src/
+ ├── config/              # Configuration management
+ │   ├── index.ts         # Main configuration
+ │   └── types.ts         # Configuration types
+ ├── services/            # Core business logic
+ │   ├── base-service.ts
+ │   └── gemini/          # Gemini service implementation
+ │       ├── index.ts
+ │       └── types.ts
+ ├── tools/               # MCP tool implementations
+ │   ├── gemini-chat.ts
+ │   └── gemini-list-models.ts
+ ├── utils/               # Utility functions
+ │   ├── logger.ts        # Winston logging setup
+ │   └── error-handler.ts
+ ├── cli.ts               # CLI entry point
+ └── index.ts             # Main server implementation
+ ```
+
+ ### Architecture
+
+ The server follows a clean, layered architecture:
+
+ 1. **CLI Layer** (`cli.ts`) - Command-line interface
+ 2. **Server Layer** (`index.ts`) - MCP protocol handling
+ 3. **Tools Layer** (`tools/`) - MCP tool implementations
+ 4. **Service Layer** (`services/`) - Business logic and API integration
+ 5. **Utility Layer** (`utils/`) - Cross-cutting concerns
+
+ ## 🐛 Troubleshooting
+
+ ### Common Issues
+
+ #### "GEMINI_API_KEY environment variable not set"
+
+ **Solution:**
+ ```bash
+ # Make sure your API key is set in the Claude Desktop configuration
+ # See the Configuration section above
+ ```
+
+ #### Server not appearing in Claude Desktop
+
+ **Solutions:**
+ 1. **Restart Claude Desktop** after updating configuration
+ 2. **Check your configuration file path**:
+    - Windows: `%APPDATA%\Claude\claude_desktop_config.json`
+    - macOS: `~/Library/Application Support/Claude/claude_desktop_config.json`
+ 3. **Verify JSON syntax** - use a JSON validator if needed
+ 4. **Ensure your API key is valid** - test at [Google AI Studio](https://makersuite.google.com/app/apikey)
+
+ #### "Module not found" errors with npx
+
+ **Solutions:**
+ ```bash
+ # Clear npx cache and try again
+ npx --yes @houtini/gemini-mcp
+
+ # Or install globally if preferred
+ npm install -g @houtini/gemini-mcp
+ ```
+
+ #### Node.js version issues
+
+ **Solution:**
+ ```bash
+ # Check your Node.js version
+ node --version
+
+ # Should be v24.0.0 or higher
+ # Install latest Node.js from https://nodejs.org
+ ```
+
+ ### Debug Mode
+
+ Enable detailed logging by setting `LOG_LEVEL=debug` in your Claude Desktop configuration:
+
+ ```json
+ {
+   "mcpServers": {
+     "gemini": {
+       "command": "npx",
+       "args": ["@houtini/gemini-mcp"],
+       "env": {
+         "GEMINI_API_KEY": "your-api-key-here",
+         "LOG_LEVEL": "debug"
+       }
+     }
+   }
+ }
+ ```
+
+ ### Log Files
+
+ Logs are written to:
+ - **Console output** (visible in Claude Desktop developer tools)
+ - **`logs/combined.log`** - All log levels
+ - **`logs/error.log`** - Error logs only
+
+ ### Testing Your Setup
+
+ Test the server with these Claude queries:
+
+ 1. **Basic connectivity**: "Can you list the available Gemini models?"
+ 2. **Simple chat**: "Use Gemini to explain photosynthesis."
+ 3. **Advanced features**: "Use Gemini 1.5 Pro with temperature 0.9 to write a creative poem about coding."
+
+ ### Performance Tuning
+
+ For better performance:
+
+ 1. **Adjust token limits** based on your use case
+ 2. **Use appropriate models** (Flash for speed, Pro for complex tasks)
+ 3. **Monitor logs** for rate limiting or API issues
+ 4. **Set reasonable temperature values** (0.7 for balanced, 0.3 for focused, 0.9 for creative)
+
+ ## 🤝 Contributing
+
+ Contributions are welcome! Please follow these steps:
+
+ 1. **Fork the repository**
+ 2. **Create a feature branch**: `git checkout -b feature/amazing-feature`
+ 3. **Make your changes** and add tests if applicable
+ 4. **Ensure all tests pass**: `npm test`
+ 5. **Lint your code**: `npm run lint:fix`
+ 6. **Build the project**: `npm run build`
+ 7. **Commit your changes**: `git commit -m 'Add amazing feature'`
+ 8. **Push to the branch**: `git push origin feature/amazing-feature`
+ 9. **Open a Pull Request**
+
+ ### Development Guidelines
+
+ - **Follow TypeScript best practices**
+ - **Add tests for new functionality**
+ - **Update documentation as needed**
+ - **Use conventional commit messages**
+ - **Ensure backwards compatibility**
+
+ ## 📄 License
+
+ This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
+
+ ## 🆘 Support
+
+ - **GitHub Issues**: [Report bugs or request features](https://github.com/houtini-ai/gemini-mcp/issues)
+ - **GitHub Discussions**: [Ask questions or share ideas](https://github.com/houtini-ai/gemini-mcp/discussions)
+
+ ## 📈 Changelog
+
+ ### v1.0.0
+
+ **Initial Release**
+ - Complete Node.js/TypeScript rewrite from Python
+ - Professional modular architecture with services pattern
+ - Comprehensive error handling and logging system
+ - Full MCP protocol compliance
+ - Support for 6 Gemini models
+ - NPM package distribution ready
+ - Enterprise-grade configuration management
+ - Production-ready build system
+
+ ---
+
+ **Built with ❤️ for the Model Context Protocol community**
+
+ For more information about MCP, visit [modelcontextprotocol.io](https://modelcontextprotocol.io)
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@houtini/gemini-mcp",
-   "version": "1.0.0",
+   "version": "1.0.1",
    "description": "Professional Model Context Protocol server for Google Gemini AI models with enterprise-grade features",
    "main": "dist/index.js",
    "types": "dist/index.d.ts",
@@ -73,8 +73,8 @@
      "typescript": "^5.5.0"
    },
    "engines": {
-     "node": ">=18.0.0",
-     "npm": ">=8.0.0"
+     "node": ">=24.0.0",
+     "npm": ">=10.0.0"
    },
    "files": [
      "dist/**/*",