iflow_mcp_ia_programming_mcp_image-0.1.0-py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
iflow_mcp_ia_programming_mcp_image-0.1.0.dist-info/METADATA ADDED
@@ -0,0 +1,150 @@
1
+ Metadata-Version: 2.4
2
+ Name: iflow-mcp_ia-programming-mcp-image
3
+ Version: 0.1.0
4
+ Summary: MCP server for fetching and processing images from URLs and local file paths
5
+ Requires-Python: >=3.10
6
+ Description-Content-Type: text/markdown
7
+ License-File: LICENSE
8
+ Requires-Dist: httpx>=0.28.1
9
+ Requires-Dist: mcp[cli]>=1.2.1
10
+ Requires-Dist: pillow>=11.1.0
11
+ Dynamic: license-file
12
+
13
+ # MCP Server - Image
14
+ A Model Context Protocol (MCP) server that provides tools for fetching and processing images from URLs and local file paths. The server exposes a fetch_images tool that returns images as base64-encoded strings along with their image formats.
15
+
16
+ ## Support Us
17
+
18
+ If you find this project helpful and would like to support future projects, consider buying us a coffee! Your support helps us continue building innovative AI solutions.
19
+
20
+ <a href="https://www.buymeacoffee.com/blazzmocompany"><img src="https://img.buymeacoffee.com/button-api/?text=Buy me a coffee&emoji=&slug=blazzmocompany&button_colour=40DCA5&font_colour=ffffff&font_family=Cookie&outline_colour=000000&coffee_colour=FFDD00"></a>
21
+
22
+ Your contributions go a long way in fueling our passion for creating intelligent and user-friendly applications.
23
+
24
+ ## Table of Contents
25
+ - [Features](#features)
26
+ - [Prerequisites](#prerequisites)
27
+ - [Installation](#installation)
28
+ - [Running the Server](#running-the-server)
29
+ - [Direct Method](#1-direct-method)
30
+ - [Configure for Windsurf/Cursor](#2-configure-for-windsurfcursor)
31
+ - [Available Tools](#available-tools)
32
+ - [Usage Examples](#usage-examples)
33
+ - [Debugging](#debugging)
34
+ - [Contributing](#contributing)
35
+ - [License](#license)
36
+
37
+ ## Features
38
+ - Fetch images from URLs (http/https)
39
+ - Load images from local file paths
40
+ - Specialized handling for large local images
41
+ - Automatic image compression for large images (>1MB); see the sketch after this list
42
+ - Parallel processing of multiple images
43
+ - Proper MIME type mapping for different file extensions
44
+ - Comprehensive error handling and logging
45
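+
+ As a rough illustration of the compression strategy used for oversized images (lower the JPEG quality first, then scale the image down, targeting roughly 800 KB), here is a minimal standalone sketch. It mirrors the logic in mcp_image.py but is not the server code itself; the helper name compress_to_target is illustrative only.
+
+ ```python
+ from io import BytesIO
+ from PIL import Image
+
+ def compress_to_target(img: Image.Image, target_bytes: int = 819_200) -> bytes:
+     """Re-encode as JPEG, reducing quality and then size, until under target_bytes."""
+     if img.mode in ("RGBA", "P"):
+         img = img.convert("RGB")
+     quality, scale = 85, 1.0
+     width, height = img.size
+     while True:
+         # Resize from the original each pass so resampling artifacts don't accumulate
+         current = img if scale == 1.0 else img.resize(
+             (int(width * scale), int(height * scale)), Image.LANCZOS)
+         buf = BytesIO()
+         current.save(buf, format="JPEG", quality=quality, optimize=True)
+         data = buf.getvalue()
+         if len(data) <= target_bytes:
+             return data
+         if quality > 20:
+             quality -= 10      # try a lower quality first
+         else:
+             scale *= 0.8       # then shrink the dimensions
+             quality = 85       # and restart the quality sweep
+             if width * scale < 200 or height * scale < 200:
+                 raise ValueError("cannot reach target size at a usable resolution")
+ ```
+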
+ ## Prerequisites
46
+ - Python 3.10+
47
+ - uv package manager (recommended)
48
+ ## Installation
49
+ 1. Clone this repository
50
+ 2. Create and activate a virtual environment using uv:
51
+ ```bash
52
+ uv venv
53
+ # On Windows:
54
+ .venv\Scripts\activate
55
+ # On Unix/MacOS:
56
+ source .venv/bin/activate
57
+ ```
58
+ 3. Install dependencies using uv:
59
+ ```bash
60
+ uv pip install -r requirements.txt
61
+ ```
62
+ ## Running the Server
63
+ There are two ways to run the MCP server:
64
+
65
+ ### 1. Direct Method
66
+ To start the MCP server directly:
67
+
68
+ ```bash
69
+ uv run python mcp_image.py
70
+ ```
71
+ ### 2. Configure for Windsurf/Cursor
72
+ #### Windsurf
73
+ To add this MCP server to Windsurf:
74
+
75
+ 1. Edit the configuration file at ~/.codeium/windsurf/mcp_config.json
76
+ 2. Add the following configuration:
77
+ ```json
78
+ {
79
+ "mcpServers": {
80
+ "image": {
81
+ "command": "uv",
82
+ "args": ["--directory", "/path/to/mcp-image", "run", "mcp_image.py"]
83
+ }
84
+ }
85
+ }
86
+ ```
87
+ #### Cursor
88
+ To add this MCP server to Cursor:
89
+
90
+ 1. Open Cursor and go to *Settings* (Navbar → Cursor Settings)
91
+ 2. Navigate to *Features* → *MCP Servers*
92
+ 3. Click on + Add New MCP Server
93
+ 4. Enter the following configuration:
94
+ ```json
95
+ {
96
+ "mcpServers": {
97
+ "image": {
98
+ "command": "uv",
99
+ "args": ["--directory", "/path/to/mcp-image", "run", "mcp_image.py"]
100
+ }
101
+ }
102
+ }
103
+ ```
104
+
105
+ ## Available Tools
106
+ The server provides the following tools:
107
+
108
+ - [`fetch_images`](mcp_image.py#L318): Fetch and process images from URLs or local file paths.
+   - Parameters: `image_sources` (a list of image URLs or local file paths)
+   - Returns: a list of processed images with base64-encoded data and their image formats (see the example response below)
113
+
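+ For illustration, a call with one URL and one missing local file might return something like the following (the URL is a placeholder, the base64 payload is truncated, and exact values will vary):
+
+ ```json
+ [
+   {
+     "data": "/9j/4AAQSkZJRg...",
+     "format": "jpeg",
+     "source": "https://example.com/photo.jpg"
+   },
+   {
+     "source": "C:\\Users\\username\\Pictures\\missing.jpg",
+     "error": "File not found: C:\\Users\\username\\Pictures\\missing.jpg"
+   }
+ ]
+ ```
+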
114
+ ### Usage Examples
115
+ Once the server is configured in your MCP client, you can use prompts like:
116
+
117
+ - "Fetch these images: [list of URLs or file paths]"
118
+ - "Load and process this local image: [file_path]"
119
+
120
+ #### Examples
121
+ ```
122
+ # URL-only test
123
+ [
124
+ "https://upload.wikimedia.org/wikipedia/commons/thumb/7/70/Chocolate_%28blue_background%29.jpg/400px-Chocolate_%28blue_background%29.jpg",
125
+ "https://imgs.search.brave.com/Sz7BdlhBoOmU4wZjnUkvgestdwmzOzrfc3GsiMr27Ik/rs:fit:860:0:0:0/g:ce/aHR0cHM6Ly9pbWdj/ZG4uc3RhYmxlZGlm/ZnVzaW9ud2ViLmNv/bS8yMDI0LzEwLzE4/LzJmOTY3NTViLTM0/YmQtNDczNi1iNDRh/LWJlMTVmNGM5MDBm/My5qcGc",
126
+ "https://shigacare.fukushi.shiga.jp/mumeixxx/img/main.png"
127
+ ]
128
+
129
+ # Mixed URL and local file test
130
+ [
131
+ "https://upload.wikimedia.org/wikipedia/commons/thumb/7/70/Chocolate_%28blue_background%29.jpg/400px-Chocolate_%28blue_background%29.jpg",
132
+ "C:\\Users\\username\\Pictures\\image1.jpg",
133
+ "https://imgs.search.brave.com/Sz7BdlhBoOmU4wZjnUkvgestdwmzOzrfc3GsiMr27Ik/rs:fit:860:0:0:0/g:ce/aHR0cHM6Ly9pbWdj/ZG4uc3RhYmxlZGlm/ZnVzaW9ud2ViLmNv/bS8yMDI0LzEwLzE4/LzJmOTY3NTViLTM0/YmQtNDczNi1iNDRh/LWJlMTVmNGM5MDBm/My5qcGc",
134
+ "C:\\Users\\username\\Pictures\\image2.jpg"
135
+ ]
136
+ ```
137
+
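+ If you want to inspect a returned image outside of the MCP client, a minimal sketch for decoding a single result entry might look like this (it assumes `result` is one dictionary from the list returned by fetch_images):
+
+ ```python
+ import base64
+ from io import BytesIO
+ from PIL import Image
+
+ # result = {"data": "<base64 string>", "format": "jpeg", "source": "..."}  # one entry from fetch_images
+ image_bytes = base64.b64decode(result["data"])
+ img = Image.open(BytesIO(image_bytes))
+ print(result["source"], img.format, img.size)
+ img.save(f"decoded.{result['format']}")
+ ```
+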
138
+ ## Debugging
139
+ If you encounter any issues:
140
+
141
+ 1. Check that all dependencies are installed correctly
142
+ 2. Verify that the server is running and listening for connections
143
+ 3. For local image loading issues, ensure the file paths are correct and accessible
144
+ 4. For "Unsupported image type" errors, verify the content type handling
145
+ 5. Check the server output for error messages; the server also writes a dated debug log to the ./data directory
146
+ ## Contributing
147
+ Contributions are welcome! Please feel free to submit a Pull Request.
148
+
149
+ ## License
150
+ This project is licensed under the MIT License - see the [LICENSE](LICENSE) file for details.
iflow_mcp_ia_programming_mcp_image-0.1.0.dist-info/RECORD ADDED
@@ -0,0 +1,6 @@
1
+ mcp_image.py,sha256=Fmo56Edh3JVijtpthuKkGjWS-S_UrMF8V3UNKmaxttw,20698
2
+ iflow_mcp_ia_programming_mcp_image-0.1.0.dist-info/licenses/LICENSE,sha256=CvC41bc7QpjlGSg1B7MpH-lM3zAduA56l1lGxApiCYM,1071
3
+ iflow_mcp_ia_programming_mcp_image-0.1.0.dist-info/METADATA,sha256=crQkdwt2ke8EkCesjY7upxcofvbSbZVcbrInPuBqSbc,5126
4
+ iflow_mcp_ia_programming_mcp_image-0.1.0.dist-info/WHEEL,sha256=wUyA8OaulRlbfwMtmQsvNngGrxQHAvkKcvRmdizlJi0,92
5
+ iflow_mcp_ia_programming_mcp_image-0.1.0.dist-info/top_level.txt,sha256=pNXv7cFTgr0mDgZ7VkhB5I-hZgQ_v8HEVVoxJ9vCCw0,10
6
+ iflow_mcp_ia_programming_mcp_image-0.1.0.dist-info/RECORD,,
iflow_mcp_ia_programming_mcp_image-0.1.0.dist-info/WHEEL ADDED
@@ -0,0 +1,5 @@
1
+ Wheel-Version: 1.0
2
+ Generator: setuptools (80.10.2)
3
+ Root-Is-Purelib: true
4
+ Tag: py3-none-any
5
+
iflow_mcp_ia_programming_mcp_image-0.1.0.dist-info/licenses/LICENSE ADDED
@@ -0,0 +1,21 @@
1
+ MIT License
2
+
3
+ Copyright (c) 2024 FarhaParveen919
4
+
5
+ Permission is hereby granted, free of charge, to any person obtaining a copy
6
+ of this software and associated documentation files (the "Software"), to deal
7
+ in the Software without restriction, including without limitation the rights
8
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9
+ copies of the Software, and to permit persons to whom the Software is
10
+ furnished to do so, subject to the following conditions:
11
+
12
+ The above copyright notice and this permission notice shall be included in all
13
+ copies or substantial portions of the Software.
14
+
15
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21
+ SOFTWARE.
mcp_image.py ADDED
@@ -0,0 +1,463 @@
1
+ #!/usr/bin/env python3
2
+
3
+ import os
4
+ import sys
5
+ import asyncio
6
+ import httpx
7
+ import logging
8
+ from io import BytesIO
9
+ from datetime import datetime
10
+ from PIL import Image as PILImage
11
+ from urllib.parse import urlparse
12
+ from mcp.server.fastmcp import FastMCP, Image, Context
13
+ from typing import List, Dict, Any, Union, Optional
14
+
15
+ MAX_IMAGE_SIZE = 1024 # Maximum dimension size in pixels
16
+ TEMP_DIR = "./Temp"
17
+ DATA_DIR = "./data"
18
+
19
+ # Ensure directories exist
20
+ os.makedirs(DATA_DIR, exist_ok=True)
21
+ os.makedirs(TEMP_DIR, exist_ok=True)
22
+
23
+ # Configure logging: first disable other loggers
24
+ logging.getLogger("httpx").setLevel(logging.WARNING)
25
+ logging.getLogger("httpcore").setLevel(logging.WARNING)
26
+ logging.getLogger("asyncio").setLevel(logging.WARNING)
27
+ logging.getLogger("mcp").setLevel(logging.WARNING)
28
+
29
+ # Configure our logger
30
+ log_filename = os.path.join(DATA_DIR, datetime.now().strftime("%d-%m-%y.log"))
31
+ formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
32
+
33
+ # Create handlers
34
+ file_handler = logging.FileHandler(log_filename)
35
+ file_handler.setFormatter(formatter)
36
+ console_handler = logging.StreamHandler(sys.stderr)
37
+ console_handler.setFormatter(formatter)
38
+
39
+ # Set up our logger
40
+ logger = logging.getLogger("image-mcp")
41
+ logger.setLevel(logging.DEBUG)
42
+ logger.addHandler(file_handler)
43
+ logger.addHandler(console_handler)
44
+ # Prevent double logging
45
+ logger.propagate = False
46
+
47
+ # Create a FastMCP server instance
48
+ mcp = FastMCP("image-service")
49
+
50
+ async def process_image_data(data: bytes, content_type: str, image_source: str, ctx: Context) -> Image | None:
51
+ """Process image data and return an MCP Image object."""
52
+ try:
53
+ # If image is not large, try to log dimensions without processing
54
+ if len(data) <= 1048576:
55
+ try:
56
+ with PILImage.open(BytesIO(data)) as img:
57
+ width, height = img.size
58
+ logger.debug(f"Original image dimensions from {image_source}: {width}x{height}")
59
+ logger.debug(f"Image format from PIL: {img.format}, mode: {img.mode}")
60
+ except Exception as e:
61
+ logger.debug(f"Could not determine dimensions for {image_source}: {e}")
62
+
63
+ # Ensure content_type is valid and doesn't include 'image/'
64
+ if content_type.startswith('image/'):
65
+ content_type = content_type.split('/')[-1]
66
+
67
+ logger.debug(f"Creating Image object with format: {content_type}")
68
+ return Image(data=data, format=content_type)
69
+
70
+ # For large images, save to temp file and process
71
+ temp_path = os.path.join(TEMP_DIR, f"temp_image_{hash(image_source)}." + content_type.split('/')[-1])
72
+ with open(temp_path, "wb") as f:
73
+ f.write(data)
74
+
75
+ try:
76
+ # First pass: get dimensions and basic info
77
+ with PILImage.open(temp_path) as img:
78
+ orig_width, orig_height = img.size
79
+ orig_format = img.format
80
+ orig_mode = img.mode
81
+ logger.debug(f"Original image dimensions from {image_source}: {orig_width}x{orig_height}")
82
+ logger.debug(f"Large image format from PIL: {orig_format}, mode: {orig_mode}")
83
+
84
+ # Calculate optimal resize factor if image is very large
85
+ max_dimension = max(orig_width, orig_height)
86
+ initial_scale = 1.0
87
+ if max_dimension > 3000:
88
+ initial_scale = 3000 / max_dimension
89
+ logger.debug(f"Very large image detected ({max_dimension}px), will start with scale factor: {initial_scale}")
90
+
91
+ # Second pass: process the image
92
+ with PILImage.open(temp_path) as img:
93
+ if img.mode in ('RGBA', 'P'):
94
+ img = img.convert('RGB')
95
+
96
+ # Apply initial scale if needed
97
+ if initial_scale < 1.0:
98
+ width = int(orig_width * initial_scale)
99
+ height = int(orig_height * initial_scale)
100
+ img = img.resize((width, height), PILImage.LANCZOS)
101
+ else:
102
+ width, height = img.size
103
+
104
+ quality = 85
105
+ scale_factor = 1.0
106
+
107
+ while True:
108
+ img_byte_arr = BytesIO()
109
+
110
+ # Create a copy for this iteration to avoid accumulating transforms
111
+ if scale_factor < 1.0:
112
+ current_width = int(width * scale_factor)
113
+ current_height = int(height * scale_factor)
114
+ current_img = img.resize((current_width, current_height), PILImage.LANCZOS)
115
+ else:
116
+ current_img = img
117
+ current_width, current_height = width, height
118
+
119
+ current_img.save(img_byte_arr, format='JPEG', quality=quality, optimize=True)
120
+ processed_data = img_byte_arr.getvalue()
121
+
122
+ # Clean up the temporary image if we created one
123
+ if scale_factor < 1.0 and hasattr(current_img, 'close'):
124
+ current_img.close()
125
+
126
+ # Target 800KB to leave buffer for any MCP overhead
127
+ if len(processed_data) <= 819200: # 800KB
128
+ logger.debug(f"Processed image dimensions from {image_source}: {current_width}x{current_height} (quality={quality})")
129
+ logger.debug(f"Returning processed image with format: jpeg, size: {len(processed_data)} bytes")
130
+ return Image(data=processed_data, format='jpeg')
131
+
132
+ # Try reducing quality first
133
+ if quality > 20:
134
+ quality -= 10
135
+ logger.debug(f"Reducing quality to {quality} for {image_source}, current size: {len(processed_data)} bytes")
136
+ else:
137
+ # Then try scaling down
138
+ scale_factor *= 0.8
139
+ if current_width * scale_factor < 200 or current_height * scale_factor < 200:
140
+ ctx.error("Unable to compress image to acceptable size while maintaining quality")
141
+ logger.error(f"Failed processing image from {image_source}: dimensions too small")
142
+ return None
143
+ logger.debug(f"Applying scale factor {scale_factor} to image from {image_source}")
144
+ quality = 85 # Reset quality when changing size
145
+ except MemoryError as e:
146
+ ctx.error(f"Out of memory processing large image: {str(e)}")
147
+ logger.error(f"MemoryError processing image from {image_source}: {str(e)}")
148
+ return None
149
+ except Exception as e:
150
+ ctx.error(f"Image processing error: {str(e)}")
151
+ logger.exception(f"Exception processing image from {image_source}")
152
+ return None
153
+ finally:
154
+ if os.path.exists(temp_path):
155
+ os.remove(temp_path)
156
+
157
+ except Exception as e:
158
+ ctx.error(f"Error processing image: {str(e)}")
159
+ logger.exception(f"Unexpected error processing {image_source}")
160
+ return None
161
+
162
+ async def process_local_image(file_path: str, ctx: Context) -> Dict[str, Any]:
163
+ """Processes a local image file and returns a dictionary with the result."""
164
+ try:
165
+ if not os.path.exists(file_path):
166
+ error_msg = f"File not found: {file_path}"
167
+ ctx.error(error_msg)
168
+ logger.error(error_msg)
169
+ return {"path": file_path, "error": error_msg}
170
+
171
+ # Determine content type based on file extension
172
+ _, ext = os.path.splitext(file_path)
173
+ ext = ext[1:].lower() if ext else "jpeg" # Default to jpeg if no extension
174
+
175
+ # Map extension to proper MIME type
176
+ mime_type_map = {
177
+ "jpg": "jpeg",
178
+ "jpeg": "jpeg",
179
+ "png": "png",
180
+ "gif": "gif",
181
+ "bmp": "bmp",
182
+ "webp": "webp",
183
+ "tiff": "tiff",
184
+ "tif": "tiff"
185
+ }
186
+
187
+ content_type = mime_type_map.get(ext, "jpeg") # Default to jpeg if unknown extension
188
+ logger.debug(f"Local image {file_path} has extension '{ext}', mapped to content type '{content_type}'")
189
+
190
+ # For large files, read and process directly without loading entire file into memory
191
+ file_size = os.path.getsize(file_path)
192
+ if file_size > 1048576:
193
+ logger.debug(f"Large local image detected: {file_path} ({file_size} bytes)")
194
+ # Process the image directly using the same logic as for URL images
195
+ return await process_large_local_image(file_path, content_type, ctx)
196
+
197
+ # For smaller files, read the entire content
198
+ with open(file_path, "rb") as f:
199
+ file_data = f.read()
200
+
201
+ logger.debug(f"Read local image from {file_path} with {len(file_data)} bytes")
202
+ processed_image = await process_image_data(file_data, content_type, file_path, ctx)
203
+
204
+ if processed_image is None:
205
+ return {"path": file_path, "error": "Failed to process image"}
206
+
207
+ return {"path": file_path, "image": processed_image}
208
+
209
+ except Exception as e:
210
+ error_msg = f"Error processing local image {file_path}: {str(e)}"
211
+ ctx.error(error_msg)
212
+ logger.exception(error_msg)
213
+ return {"path": file_path, "error": error_msg}
214
+
215
+ async def process_large_local_image(file_path: str, content_type: str, ctx: Context) -> Dict[str, Any]:
216
+ """Process a large local image file directly without loading it entirely into memory."""
217
+ temp_path = None
218
+ try:
219
+ # Create a temporary file path for processing
220
+ temp_path = os.path.join(TEMP_DIR, f"temp_local_{os.path.basename(file_path)}")
221
+
222
+ # First pass: get dimensions and basic info
223
+ with PILImage.open(file_path) as img:
224
+ orig_width, orig_height = img.size
225
+ orig_format = img.format
226
+ orig_mode = img.mode
227
+ logger.debug(f"Original large local image dimensions from {file_path}: {orig_width}x{orig_height}")
228
+ logger.debug(f"Original image format: {orig_format}, mode: {orig_mode}")
229
+
230
+ # Calculate optimal resize factor if image is very large
231
+ max_dimension = max(orig_width, orig_height)
232
+ initial_scale = 1.0
233
+ if max_dimension > 4000:
234
+ initial_scale = 4000 / max_dimension
235
+ logger.debug(f"Very large image detected, will start with scale factor: {initial_scale}")
236
+
237
+ # Second pass: process the image
238
+ with PILImage.open(file_path) as img:
239
+ if img.mode in ('RGBA', 'P'):
240
+ img = img.convert('RGB')
241
+
242
+ # Apply initial scale if needed
243
+ if initial_scale < 1.0:
244
+ width = int(orig_width * initial_scale)
245
+ height = int(orig_height * initial_scale)
246
+ img = img.resize((width, height), PILImage.LANCZOS)
247
+ else:
248
+ width, height = img.size
249
+
250
+ quality = 75 # Start with lower quality for large images
251
+ scale_factor = 1.0
252
+
253
+ while True:
254
+ # Save the processed image to a temporary BytesIO
255
+ img_byte_arr = BytesIO()
256
+
257
+ # Create a copy for this iteration to avoid accumulating transforms
258
+ if scale_factor < 1.0:
259
+ current_width = int(width * scale_factor)
260
+ current_height = int(height * scale_factor)
261
+ current_img = img.resize((current_width, current_height), PILImage.LANCZOS)
262
+ else:
263
+ current_img = img
264
+ current_width, current_height = width, height
265
+
266
+ current_img.save(img_byte_arr, format='JPEG', quality=quality, optimize=True)
267
+ processed_data = img_byte_arr.getvalue()
268
+
269
+ # Clean up the temporary image if we created one
270
+ if scale_factor < 1.0 and hasattr(current_img, 'close'):
271
+ current_img.close()
272
+
273
+ # Target 800KB to leave buffer for any MCP overhead
274
+ if len(processed_data) <= 819200: # 800KB
275
+ logger.debug(f"Successfully compressed large local image {file_path} to {len(processed_data)} bytes (quality={quality}, dimensions={current_width}x{current_height})")
276
+ return {"path": file_path, "image": Image(data=processed_data, format='jpeg')}
277
+
278
+ # Try reducing quality first
279
+ if quality > 30:
280
+ quality -= 10
281
+ logger.debug(f"Reducing quality to {quality} for {file_path}")
282
+ else:
283
+ # Then try scaling down
284
+ scale_factor *= 0.8
285
+ if current_width * scale_factor < 200 or current_height * scale_factor < 200:
286
+ error_msg = f"Unable to compress large local image {file_path} to acceptable size while maintaining quality"
287
+ ctx.error(error_msg)
288
+ logger.error(error_msg)
289
+ return {"path": file_path, "error": error_msg}
290
+
291
+ logger.debug(f"Applying scale factor {scale_factor} to image {file_path}")
292
+ quality = 85 # Reset quality when changing size
293
+
294
+ except MemoryError as e:
295
+ error_msg = f"Out of memory processing large local image {file_path}: {str(e)}"
296
+ ctx.error(error_msg)
297
+ logger.error(error_msg)
298
+ return {"path": file_path, "error": error_msg}
299
+ except Exception as e:
300
+ error_msg = f"Error processing large local image {file_path}: {str(e)}"
301
+ ctx.error(error_msg)
302
+ logger.exception(error_msg)
303
+ return {"path": file_path, "error": error_msg}
304
+
305
+ finally:
306
+ # Clean up temporary file if it exists
307
+ if temp_path and os.path.exists(temp_path):
308
+ os.remove(temp_path)
309
+
310
+ async def fetch_single_image(url: str, client: httpx.AsyncClient, ctx: Context) -> Dict[str, Any]:
311
+ """Fetches and processes a single image asynchronously."""
312
+ try:
313
+ parsed = urlparse(url)
314
+ if not all([parsed.scheme in ['http', 'https'], parsed.netloc]):
315
+ error_msg = f"Invalid URL: {url}"
316
+ ctx.error(error_msg)
317
+ return {"url": url, "error": error_msg}
318
+
319
+ response = await client.get(url)
320
+ response.raise_for_status()
321
+
322
+ content_type = response.headers.get('content-type', '')
323
+ if not content_type.startswith('image/'):
324
+ error_msg = f"Not an image (got {content_type})"
325
+ ctx.error(error_msg)
326
+ return {"url": url, "error": error_msg}
327
+
328
+ logger.debug(f"Fetched image from {url} with {len(response.content)} bytes")
329
+ logger.debug(f"Content-Type from server: {content_type}")
330
+
331
+ # Extract the format from content-type
332
+ format_type = content_type.split('/')[-1]
333
+ logger.debug(f"Extracted format type: {format_type}")
334
+
335
+ processed_image = await process_image_data(response.content, format_type, url, ctx)
336
+
337
+ if processed_image is None:
338
+ return {"url": url, "error": "Failed to process image"}
339
+
340
+ return {"url": url, "image": processed_image}
341
+
342
+ except httpx.HTTPError as e:
343
+ error_msg = f"HTTP error: {str(e)}"
344
+ ctx.error(error_msg)
345
+ return {"url": url, "error": error_msg}
346
+ except Exception as e:
347
+ error_msg = f"Unexpected error: {str(e)}"
348
+ ctx.error(error_msg)
349
+ return {"url": url, "error": error_msg}
350
+
351
+ def is_url(path_or_url: str) -> bool:
352
+ """Determine if the given string is a URL or a local file path."""
353
+ parsed = urlparse(path_or_url)
354
+ return bool(parsed.scheme and parsed.netloc)
355
+
356
+ async def process_images_async(image_sources: List[str], ctx: Context) -> List[Dict[str, Any]]:
357
+ """Process multiple images (URLs or local files) concurrently."""
358
+ if not image_sources:
359
+ raise ValueError("No image sources provided")
360
+
361
+ # Separate URLs from local file paths
362
+ urls = [src for src in image_sources if is_url(src)]
363
+ local_paths = [src for src in image_sources if not is_url(src)]
364
+
365
+ results = []
366
+
367
+ # Process URLs if any
368
+ if urls:
369
+ logger.debug(f"Processing {len(urls)} URLs")
370
+ async with httpx.AsyncClient() as client:
371
+ url_tasks = [fetch_single_image(url, client, ctx) for url in urls]
372
+ url_results = await asyncio.gather(*url_tasks)
373
+ results.extend(url_results)
374
+
375
+ # Process local files if any
376
+ if local_paths:
377
+ logger.debug(f"Processing {len(local_paths)} local files")
378
+ local_tasks = [process_local_image(path, ctx) for path in local_paths]
379
+ local_results = await asyncio.gather(*local_tasks)
380
+ results.extend(local_results)
381
+
382
+ # Ensure results are in the same order as input sources
383
+ ordered_results = []
384
+ for src in image_sources:
385
+ for result in results:
386
+ if (src == result.get("url", None)) or (src == result.get("path", None)):
387
+ ordered_results.append(result)
388
+ break
389
+
390
+ return ordered_results
391
+
392
+ @mcp.tool()
393
+ async def fetch_images(image_sources: List[str], ctx: Context) -> List[Dict[str, Any]]:
394
+ """
395
+ Fetch and process images from URLs or local file paths, returning them as base64-encoded dictionaries.
396
+
397
+ This tool accepts a list of image sources which can be either:
398
+ 1. URLs pointing to web-hosted images (http:// or https://)
399
+ 2. Local file paths pointing to images stored on the local filesystem (e.g., "C:/images/photo1.jpg")
400
+
401
+ For a single image, provide a one-element list. The function will process images in parallel
402
+ when multiple sources are provided. Images that exceed the size limit (1MB) will be automatically
403
+ compressed while maintaining aspect ratio and reasonable quality.
404
+
405
+ Args:
406
+ image_sources: A list of image URLs or local file paths. For a single image, provide a one-element list.
407
+
408
+ Returns:
409
+ A list of dictionaries with 'data' (base64), 'format', and 'source' fields.
410
+ Failed items have 'error' field instead.
411
+ """
412
+ try:
413
+ start_time = asyncio.get_event_loop().time()
414
+
415
+ # Validate input
416
+ if not image_sources:
417
+ ctx.error("No image sources provided")
418
+ logger.error("fetch_images called with empty source list")
419
+ return []
420
+
421
+ # Log the types of sources we're processing
422
+ url_count = sum(1 for src in image_sources if is_url(src))
423
+ local_count = len(image_sources) - url_count
424
+ logger.debug(f"Processing {len(image_sources)} image sources: {url_count} URLs and {local_count} local files")
425
+
426
+ # Process all images
427
+ results = await process_images_async(image_sources, ctx)
428
+
429
+ # Convert Image objects to dictionaries
430
+ import base64
431
+ image_results = []
432
+ for result in results:
433
+ if "image" in result:
434
+ img = result["image"]
435
+ image_results.append({
436
+ "data": base64.b64encode(img.data).decode(),
437
+ "format": img.format,
438
+ "source": result.get("url", result.get("path", "unknown"))
439
+ })
440
+ else:
441
+ error_msg = result.get("error", "Failed to process image")
442
+ source = result.get("url", result.get("path", "unknown"))
443
+ image_results.append({
444
+ "source": source,
445
+ "error": error_msg
446
+ })
447
+
448
+ elapsed = asyncio.get_event_loop().time() - start_time
449
+ success_count = sum(1 for r in image_results if "error" not in r)  # entries without an error were processed successfully
450
+
451
+ logger.debug(
452
+ f"Processed {len(image_sources)} images in {elapsed:.2f} seconds. "
453
+ f"Success: {success_count}, Failed: {len(image_sources) - success_count}"
454
+ )
455
+
456
+ return image_results
457
+ except Exception as e:
458
+ logger.exception("Error in fetch_images")
459
+ ctx.error(f"Failed to process images: {str(e)}")
460
+ return [{"source": src, "error": str(e)} for src in image_sources]
461
+
462
+ if __name__ == "__main__":
463
+ mcp.run(transport='stdio')