image-proxy-mcp 1.0.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/.env.example +35 -0
- package/README.md +114 -0
- package/index.js +875 -0
- package/package.json +42 -0
package/.env.example
ADDED
@@ -0,0 +1,35 @@
# ============================================================
# Image Proxy MCP - Environment Configuration
# ============================================================
# Copy this to .env and fill in your values.
#
# AUTHENTICATION
# ==============
# Two modes supported (per endpoint):
#   1. Managed Identity (recommended) - leave API key blank, run az login
#   2. API Key - set the key env var below
#
# If an API key is set, it takes priority. If blank, the server
# uses DefaultAzureCredential (az login locally, system-assigned
# MI on Azure). Same zero-secret approach as the main MCP server.
#
# Azure OpenAI (for gpt-image-1.5, dall-e-3, etc.)
# ============================================================
AZURE_OPENAI_ENDPOINT=https://aoai-sweden-csa.openai.azure.com/
# AZURE_OPENAI_API_KEY=    # Leave blank for managed identity
AZURE_OPENAI_API_VERSION=2024-02-01

# ============================================================
# Azure AI Services Serverless (for flux-kontext-pro, etc.)
# The endpoint below is the serverless model endpoint, NOT the
# resource-level endpoint. Find it in Azure AI Foundry under
# the deployment's "Target URI".
# ============================================================
AZURE_AI_SERVICES_ENDPOINT=https://aoai-sweden-csa.services.ai.azure.com/
# AZURE_AI_SERVICES_KEY=    # Leave blank for managed identity

# ============================================================
# Output directory for generated images
# Default: ~/Documents/generated-images
# ============================================================
IMAGE_OUTPUT_DIR=~/Documents/generated-images
package/README.md
ADDED
@@ -0,0 +1,114 @@
# Image Proxy MCP Server

Local MCP server (stdio transport) that proxies image generation to Azure endpoints, saves results to disk, and returns **only file paths** to Claude Desktop. This eliminates the base64 context window overflow problem entirely.

## The Problem

When Claude Desktop calls image generation through the remote multimodel MCP, the response includes base64-encoded image data (1-4 MB of text per image). This consumes 30-60% of the available context window and frequently causes failures.

## The Solution

This local MCP server:
1. Receives the image generation request from Claude Desktop
2. Calls the Azure endpoint directly via HTTP
3. Receives the base64 response server-side (never enters the context)
4. Saves the image to a local directory
5. Returns only a compact JSON with the file path (~200 bytes vs ~2 MB)
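
The core of these steps, decoding the base64 server-side and handing back only metadata, can be sketched as follows (a minimal sketch; `saveAndSummarize` is a hypothetical helper name, the real implementation lives in `index.js` below):

```javascript
import fs from "node:fs";
import path from "node:path";
import os from "node:os";

// Hypothetical helper illustrating the proxy's core idea: the base64
// payload is consumed here, server-side, and only a small metadata
// object is returned to the MCP client (Claude Desktop).
function saveAndSummarize(b64, outputDir, model) {
  fs.mkdirSync(outputDir, { recursive: true });
  const filePath = path.join(outputDir, `${model}_${Date.now()}.png`);
  fs.writeFileSync(filePath, Buffer.from(b64, "base64"));
  return { filePath, model, bytes: fs.statSync(filePath).size };
}

// A tiny 1x1 PNG stands in for a real generation response.
const tinyPng =
  "iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAYAAAAfFcSJAAAADUlEQVR42mP8z8BQDwAEhQGAhKmMIQAAAABJRU5ErkJggg==";
const summary = saveAndSummarize(
  tinyPng,
  path.join(os.tmpdir(), "img-proxy-demo"),
  "demo"
);
console.log(JSON.stringify(summary).length, "bytes of metadata enter the context");
```

Only the `summary` object ever crosses the stdio transport; the image bytes stay on disk.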

## Setup

### 1. Install Dependencies

```bash
cd tools/image-proxy
npm install
```

### 2. Configure Environment

```bash
cp .env.example .env
# Edit .env with your Azure endpoints (keys are optional)
```

**Authentication** — two modes, per endpoint:

| Mode | Setup | When to use |
|------|-------|-------------|
| **Managed Identity** (recommended) | Set endpoints only, leave keys blank. Run `az login` locally. | Zero secrets, same as main MCP server |
| **API Key** | Set both endpoint + key env vars | Quick testing, no az login needed |

If a key is set, it takes priority. If blank, the server uses `DefaultAzureCredential` (`az login` locally, system-assigned MI on Azure).

You need endpoints for:
- **Azure OpenAI** (for gpt-image-1.5, dall-e-3) — `cognitiveservices.azure.com`
- **Azure AI Services** (for flux-kontext-pro) — `services.ai.azure.com`

Both are in your Azure AI Foundry portal under each deployment.

### 3. Test

```bash
node test.js
```

### 4. Add to Claude Desktop

Add the `image-proxy` entry to your Claude Desktop config:

**macOS**: `~/Library/Application Support/Claude/claude_desktop_config.json`
**Windows**: `%APPDATA%\Claude\claude_desktop_config.json`

```json
{
  "mcpServers": {
    "image-proxy": {
      "command": "node",
      "args": ["/absolute/path/to/tools/image-proxy/index.js"],
      "env": {
        "AZURE_OPENAI_ENDPOINT": "https://your-resource.openai.azure.com",
        "AZURE_AI_SERVICES_ENDPOINT": "https://your-resource.services.ai.azure.com",
        "IMAGE_OUTPUT_DIR": "~/Documents/generated-images"
      }
    }
  }
}
```

With managed identity, only endpoints are needed — no keys. Run `az login` and the server authenticates automatically. To use API keys instead, add `AZURE_OPENAI_API_KEY` and/or `AZURE_AI_SERVICES_KEY` to the `env` block.
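
One caveat with the `IMAGE_OUTPUT_DIR` value in the config above: Node's `fs` APIs treat a leading `~` as a literal directory name rather than the home directory, so a path like `~/Documents/generated-images` may need normalizing before use. A small helper of this shape (hypothetical, not part of the package) does the expansion:

```javascript
import os from "node:os";
import path from "node:path";

// Hypothetical helper: expand a leading "~" to the user's home directory,
// since Node does not perform shell-style tilde expansion itself.
function expandTilde(p) {
  if (p === "~") return os.homedir();
  if (p.startsWith("~/")) {
    return path.join(os.homedir(), p.slice(2));
  }
  return p; // absolute and relative paths pass through unchanged
}

console.log(expandTilde("~/Documents/generated-images"));
```

Alternatively, write the absolute path directly in the `env` block and the question never arises.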

## Tools

| Tool | Description |
|------|-------------|
| `generate_image` | Generate an image and save to disk. Returns only the file path. Models: gpt-image-1.5, dall-e-3, flux-kontext-pro |
| `base64_to_file` | Convert a base64 string already in context to a file on disk |
| `list_generated_images` | List all previously generated images with metadata |
| `cleanup_images` | Remove old generated images (dry_run=true by default) |

## Architecture

```
Claude Desktop
  |
  |-- (stdio) --> image-proxy MCP (LOCAL)
  |                 |
  |                 |-- (HTTPS) --> Azure OpenAI (gpt-image-1.5, dall-e-3)
  |                 |-- (HTTPS) --> Azure AI Services (flux-kontext-pro)
  |                 |
  |                 v
  |               ~/Documents/generated-images/
  |                 |
  |                 v
  |               Returns: { filePath, model, size } (~200 bytes)
  |
  |-- (HTTP) --> multimodel MCP (Azure, for non-image tasks)
```

## Context Window Savings

| Scenario | Without Proxy | With Proxy |
|----------|---------------|------------|
| 1 image (1024x1024) | ~2-4 MB base64 in context | ~200 bytes path |
| 3 images in conversation | Context overflow / failure | ~600 bytes |
| Image + follow-up edits | Cascading failures | Works normally |
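
The table's numbers follow from base64's 4/3 encoding overhead; a quick check (the 1.5 MB file size is an assumed typical 1024x1024 PNG, and the file path below is illustrative):

```javascript
// Base64 encodes every 3 raw bytes as 4 ASCII characters, so an image's
// in-context footprint is roughly 4/3 of its file size (plus JSON quoting).
const imageBytes = 1.5 * 1024 * 1024; // assumed ~1.5 MB PNG
const base64Chars = Math.ceil(imageBytes / 3) * 4;

// What the proxy returns instead: a compact metadata object.
const pathResponse = JSON.stringify({
  filePath: "~/Documents/generated-images/flux-kontext-pro_2025-01-01T00-00-00_abcd1234.png",
  model: "flux-kontext-pro",
  size: "1024x1024",
}).length;

console.log(`base64 in context: ~${(base64Chars / 1024 / 1024).toFixed(1)} MB`);
console.log(`path response:     ~${pathResponse} bytes`);
```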

package/index.js
ADDED
@@ -0,0 +1,875 @@
#!/usr/bin/env node
// ============================================================
// Image Proxy MCP Server
// ============================================================
// Generates images via Azure endpoints, saves to local disk,
// and returns only the file path to Claude Desktop. This keeps
// base64 image data OUT of the context window entirely.
//
// Supported models:
//   - gpt-image-1.5 (Azure OpenAI pattern)
//   - flux-kontext-pro (Azure AI Services serverless pattern)
//
// Tools exposed:
//   - generate_image: prompt -> file path
//   - base64_to_file: base64 string -> file path (utility)
//   - list_generated_images: lists all saved images
//   - cleanup_images: remove old generated images
// ============================================================

import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  CallToolRequestSchema,
  ListToolsRequestSchema,
} from "@modelcontextprotocol/sdk/types.js";
import { DefaultAzureCredential } from "@azure/identity";
import fs from "node:fs";
import path from "node:path";
import crypto from "node:crypto";
import { fileURLToPath } from "node:url";

// ---------------------------------------------------------------------------
// Config
// ---------------------------------------------------------------------------
const __dirname = path.dirname(fileURLToPath(import.meta.url));

// Load .env if present (dotenv is optional - falls back to process.env)
try {
  const dotenv = await import("dotenv");
  dotenv.config({ path: path.join(__dirname, ".env") });
} catch {
  // dotenv not installed - rely on process.env or Claude Desktop env block
}

const CONFIG = {
  azureOpenAI: {
    endpoint: process.env.AZURE_OPENAI_ENDPOINT || "",
    apiKey: process.env.AZURE_OPENAI_API_KEY || "",
    apiVersion: process.env.AZURE_OPENAI_API_VERSION || "2024-02-01",
  },
  azureAIServices: {
    endpoint: process.env.AZURE_AI_SERVICES_ENDPOINT || "",
    apiKey: process.env.AZURE_AI_SERVICES_KEY || "",
  },
  outputDir:
    process.env.IMAGE_OUTPUT_DIR ||
    path.join(process.env.HOME || "/tmp", "Documents", "generated-images"),
};

// ---------------------------------------------------------------------------
// Authentication: Managed Identity (DefaultAzureCredential) or API keys
// ---------------------------------------------------------------------------
// Priority: API key env vars win if set; otherwise use managed identity
// (az login locally, system-assigned MI on Azure).
const USE_KEY_AUTH_OPENAI = !!CONFIG.azureOpenAI.apiKey;
const USE_KEY_AUTH_AI_SERVICES = !!CONFIG.azureAIServices.apiKey;

const credential =
  !USE_KEY_AUTH_OPENAI || !USE_KEY_AUTH_AI_SERVICES
    ? new DefaultAzureCredential()
    : null;

const COGNITIVE_SCOPE = "https://cognitiveservices.azure.com/.default";

/** Get a Bearer token for Azure Cognitive Services. Cached internally by the SDK. */
async function getBearerToken() {
  try {
    const tokenResponse = await credential.getToken(COGNITIVE_SCOPE);
    return tokenResponse.token;
  } catch (err) {
    const isCredentialError =
      err.name === "CredentialUnavailableError" ||
      err.message?.includes("CredentialUnavailable") ||
      err.message?.includes("DefaultAzureCredential") ||
      err.message?.includes("Please run 'az login'");

    if (isCredentialError) {
      throw new Error(
        "Azure authentication failed — no active login found.\n\n" +
          "To fix this, open a terminal and run:\n\n" +
          "  az login\n\n" +
          "Then restart this MCP server in Claude Desktop.\n\n" +
          "Alternatively, set API keys in your config:\n" +
          "  AZURE_OPENAI_API_KEY and/or AZURE_AI_SERVICES_KEY"
      );
    }
    throw err;
  }
}

// Model -> deployment mapping and API pattern
const MODEL_REGISTRY = {
  "gpt-image-1.5": {
    pattern: "azure-openai",
    deployment: "gpt-image-1.5",
    supportedSizes: ["1024x1024", "1024x1792", "1792x1024"],
    defaultSize: "1024x1024",
    supportsQuality: true,
    supportsStyle: false,
    maxN: 1,
  },
  "dall-e-3": {
    pattern: "azure-openai",
    deployment: "dall-e-3",
    supportedSizes: ["1024x1024", "1024x1792", "1792x1024"],
    defaultSize: "1024x1024",
    supportsQuality: true,
    supportsStyle: true,
    maxN: 1,
  },
  "flux-kontext-pro": {
    pattern: "azure-ai-services",
    deployment: "FLUX.1-Kontext-pro",
    supportedSizes: ["1024x1024", "768x1024", "1024x768"],
    defaultSize: "1024x1024",
    supportsQuality: false,
    supportsStyle: false,
    maxN: 1,
  },
};

// ---------------------------------------------------------------------------
// Ensure output directory exists
// ---------------------------------------------------------------------------
function ensureOutputDir() {
  if (!fs.existsSync(CONFIG.outputDir)) {
    fs.mkdirSync(CONFIG.outputDir, { recursive: true });
  }
}

// ---------------------------------------------------------------------------
// Generate a unique filename
// ---------------------------------------------------------------------------
function generateFilename(model, extension = "png") {
  const timestamp = new Date().toISOString().replace(/[:.]/g, "-").slice(0, 19);
  const shortId = crypto.randomBytes(4).toString("hex");
  return `${model}_${timestamp}_${shortId}.${extension}`;
}

// ---------------------------------------------------------------------------
// Save base64 image data to disk
// ---------------------------------------------------------------------------
function saveBase64ToFile(base64Data, filename) {
  ensureOutputDir();
  const filePath = path.join(CONFIG.outputDir, filename);
  // Strip data URI prefix if present
  const cleanBase64 = base64Data.replace(/^data:image\/[a-zA-Z]+;base64,/, "");
  const buffer = Buffer.from(cleanBase64, "base64");
  fs.writeFileSync(filePath, buffer);
  return filePath;
}

// ---------------------------------------------------------------------------
// Download image from URL and save to disk
// ---------------------------------------------------------------------------
async function downloadImageToFile(imageUrl, filename) {
  ensureOutputDir();
  const filePath = path.join(CONFIG.outputDir, filename);
  const response = await fetch(imageUrl);
  if (!response.ok) {
    throw new Error(
      `Failed to download image: ${response.status} ${response.statusText}`
    );
  }
  const arrayBuffer = await response.arrayBuffer();
  fs.writeFileSync(filePath, Buffer.from(arrayBuffer));
  return filePath;
}

// ---------------------------------------------------------------------------
// Azure OpenAI image generation
// ---------------------------------------------------------------------------
async function generateViaAzureOpenAI(model, prompt, options = {}) {
  const modelConfig = MODEL_REGISTRY[model];
  if (!modelConfig) {
    throw new Error(`Unknown model: ${model}`);
  }

  const endpoint = CONFIG.azureOpenAI.endpoint.replace(/\/$/, "");
  const url = `${endpoint}/openai/deployments/${modelConfig.deployment}/images/generations?api-version=${CONFIG.azureOpenAI.apiVersion}`;

  const body = {
    prompt: prompt,
    n: 1,
    size: options.size || modelConfig.defaultSize,
    response_format: "b64_json",
  };

  if (modelConfig.supportsQuality && options.quality) {
    body.quality = options.quality;
  }
  if (modelConfig.supportsStyle && options.style) {
    body.style = options.style;
  }

  // Auth: API key header or Bearer token from managed identity
  const headers = { "Content-Type": "application/json" };
  if (USE_KEY_AUTH_OPENAI) {
    headers["api-key"] = CONFIG.azureOpenAI.apiKey;
  } else {
    headers["Authorization"] = `Bearer ${await getBearerToken()}`;
  }

  const response = await fetch(url, {
    method: "POST",
    headers,
    body: JSON.stringify(body),
  });

  if (!response.ok) {
    const errorText = await response.text();
    throw new Error(
      `Azure OpenAI error (${response.status}): ${errorText}`
    );
  }

  const result = await response.json();
  const imageData = result.data[0];

  let filePath;
  const filename = generateFilename(model);

  if (imageData.b64_json) {
    filePath = saveBase64ToFile(imageData.b64_json, filename);
  } else if (imageData.url) {
    filePath = await downloadImageToFile(imageData.url, filename);
  } else {
    throw new Error("Azure OpenAI returned neither b64_json nor url");
  }

  return {
    filePath: filePath,
    model: model,
    revisedPrompt: imageData.revised_prompt || null,
    size: body.size,
  };
}

// ---------------------------------------------------------------------------
// Azure AI Services (serverless) image generation
// ---------------------------------------------------------------------------
async function generateViaAzureAIServices(model, prompt, options = {}) {
  const modelConfig = MODEL_REGISTRY[model];
  if (!modelConfig) {
    throw new Error(`Unknown model: ${model}`);
  }

  const endpoint = CONFIG.azureAIServices.endpoint.replace(/\/$/, "");

  // Serverless endpoints typically use /images/generations
  // but some models use the root endpoint. Try the standard path first.
  const url = `${endpoint}/images/generations`;

  const size = options.size || modelConfig.defaultSize;
  const [width, height] = size.split("x").map(Number);

  const body = {
    prompt: prompt,
    width: width,
    height: height,
    num_images: 1,
    response_format: "b64_json",
  };

  // Some models accept additional params
  if (options.guidance_scale !== undefined) {
    body.guidance_scale = options.guidance_scale;
  }
  if (options.num_inference_steps !== undefined) {
    body.num_inference_steps = options.num_inference_steps;
  }

  // Auth: API key as Bearer or managed identity token
  const aiSvcToken = USE_KEY_AUTH_AI_SERVICES
    ? CONFIG.azureAIServices.apiKey
    : await getBearerToken();

  const response = await fetch(url, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${aiSvcToken}`,
    },
    body: JSON.stringify(body),
  });

  if (!response.ok) {
    const errorText = await response.text();

    // If /images/generations fails, try the root endpoint
    if (response.status === 404) {
      return await generateViaAzureAIServicesFallback(
        model,
        prompt,
        endpoint,
        body
      );
    }

    throw new Error(
      `Azure AI Services error (${response.status}): ${errorText}`
    );
  }

  const result = await response.json();
  const filename = generateFilename(model);

  // Handle various response shapes
  let filePath;
  if (result.data && result.data[0]) {
    const imageData = result.data[0];
    if (imageData.b64_json || imageData.base64) {
      filePath = saveBase64ToFile(
        imageData.b64_json || imageData.base64,
        filename
      );
    } else if (imageData.url) {
      filePath = await downloadImageToFile(imageData.url, filename);
    }
  } else if (result.images && result.images[0]) {
    // Some models return { images: ["base64..."] }
    filePath = saveBase64ToFile(result.images[0], filename);
  } else if (result.image) {
    // Some return { image: "base64..." }
    filePath = saveBase64ToFile(result.image, filename);
  } else if (result.output) {
    // FLUX models sometimes return { output: "base64..." }
    filePath = saveBase64ToFile(result.output, filename);
  }

  if (!filePath) {
    // Last resort: save the raw response for debugging
    const debugFile = path.join(
      CONFIG.outputDir,
      `debug_$(unknown).json`
    );
    fs.writeFileSync(debugFile, JSON.stringify(result, null, 2));
    throw new Error(
      `Could not extract image from response. Response shape saved to ${debugFile} for debugging.`
    );
  }

  return {
    filePath: filePath,
    model: model,
    size: size,
  };
}

// Fallback: POST directly to the serverless root endpoint
async function generateViaAzureAIServicesFallback(
  model,
  prompt,
  endpoint,
  body
) {
  const fallbackToken = USE_KEY_AUTH_AI_SERVICES
    ? CONFIG.azureAIServices.apiKey
    : await getBearerToken();

  const response = await fetch(endpoint, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${fallbackToken}`,
    },
    body: JSON.stringify(body),
  });

  if (!response.ok) {
    const errorText = await response.text();
    throw new Error(
      `Azure AI Services fallback error (${response.status}): ${errorText}`
    );
  }

  const result = await response.json();
  const filename = generateFilename(model);
  let filePath;

  // Try all known response shapes
  if (result.data && result.data[0]) {
    const d = result.data[0];
    filePath = saveBase64ToFile(d.b64_json || d.base64 || "", filename);
  } else if (result.images && result.images[0]) {
    filePath = saveBase64ToFile(result.images[0], filename);
  } else if (result.image) {
    filePath = saveBase64ToFile(result.image, filename);
  } else if (result.output) {
    filePath = saveBase64ToFile(result.output, filename);
  }

  if (!filePath) {
    throw new Error("Could not extract image from fallback response.");
  }

  return { filePath, model, size: body.width + "x" + body.height };
}

// ---------------------------------------------------------------------------
// Unified generation dispatcher
// ---------------------------------------------------------------------------
async function generateImage(model, prompt, options = {}) {
  const modelConfig = MODEL_REGISTRY[model];
  if (!modelConfig) {
    const available = Object.keys(MODEL_REGISTRY).join(", ");
    throw new Error(
      `Unknown model "${model}". Available: ${available}`
    );
  }

  switch (modelConfig.pattern) {
    case "azure-openai":
      return await generateViaAzureOpenAI(model, prompt, options);
    case "azure-ai-services":
      return await generateViaAzureAIServices(model, prompt, options);
    default:
      throw new Error(`Unsupported API pattern: ${modelConfig.pattern}`);
  }
}

// ---------------------------------------------------------------------------
// MCP Server Setup
// ---------------------------------------------------------------------------
const server = new Server(
  {
    name: "image-proxy",
    version: "1.0.0",
  },
  {
    capabilities: {
      tools: {},
    },
  }
);

// ---------------------------------------------------------------------------
// Tool Definitions
// ---------------------------------------------------------------------------
server.setRequestHandler(ListToolsRequestSchema, async () => {
  return {
    tools: [
      {
        name: "generate_image",
        description:
          "Generate an image using Azure-hosted models. Saves the image " +
          "to local disk and returns ONLY the file path, keeping base64 " +
          "data out of the context window. Supports: gpt-image-1.5, " +
          "dall-e-3, flux-kontext-pro.",
        inputSchema: {
          type: "object",
          properties: {
            prompt: {
              type: "string",
              description: "Text prompt describing the desired image.",
            },
            model: {
              type: "string",
              description:
                "Model alias: gpt-image-1.5, dall-e-3, or flux-kontext-pro.",
              enum: Object.keys(MODEL_REGISTRY),
              default: "flux-kontext-pro",
            },
            size: {
              type: "string",
              description:
                "Image dimensions (e.g. 1024x1024, 1024x1792, 1792x1024).",
              default: "1024x1024",
            },
            quality: {
              type: "string",
              description:
                "Image quality for OpenAI models: standard or hd.",
              enum: ["standard", "hd"],
              default: "standard",
            },
            style: {
              type: "string",
              description:
                "Image style for dall-e-3 only: vivid or natural.",
              enum: ["vivid", "natural"],
            },
          },
          required: ["prompt"],
        },
      },
      {
        name: "base64_to_file",
        description:
          "Convert a base64-encoded image string to a file on disk. " +
          "Use this when base64 image data is already in the context " +
          "and you want to save it locally to free up context space.",
        inputSchema: {
          type: "object",
          properties: {
            base64_data: {
              type: "string",
              description:
                "Base64-encoded image data (with or without data URI prefix).",
            },
            filename: {
              type: "string",
              description:
                "Optional filename. If not provided, one will be generated.",
            },
            format: {
              type: "string",
              description: "Image format extension.",
              enum: ["png", "jpg", "webp"],
              default: "png",
            },
          },
          required: ["base64_data"],
        },
      },
      {
        name: "list_generated_images",
        description:
          "List all generated images in the output directory with " +
          "file size, creation time, and model used (extracted from filename).",
        inputSchema: {
          type: "object",
          properties: {
            limit: {
              type: "number",
              description: "Maximum number of images to list (newest first).",
              default: 20,
            },
          },
        },
      },
      {
        name: "cleanup_images",
        description:
          "Remove generated images older than the specified number of days.",
        inputSchema: {
          type: "object",
          properties: {
            older_than_days: {
              type: "number",
              description: "Delete images older than this many days.",
              default: 7,
            },
            dry_run: {
              type: "boolean",
              description:
                "If true, list files that would be deleted without deleting.",
              default: true,
            },
          },
        },
      },
    ],
  };
});
|
|
566
|
+
|
|
567
|
+
// ---------------------------------------------------------------------------
|
|
568
|
+
// Tool Handlers
|
|
569
|
+
// ---------------------------------------------------------------------------
|
|
570
|
+
server.setRequestHandler(CallToolRequestSchema, async (request) => {
|
|
571
|
+
const { name, arguments: args } = request.params;
|
|
572
|
+
|
|
573
|
+
try {
|
|
574
|
+
switch (name) {
|
|
575
|
+
// ---------------------------------------------------------------
|
|
576
|
+
// generate_image
|
|
577
|
+
// ---------------------------------------------------------------
|
|
578
|
+
case "generate_image": {
|
|
579
|
+
const model = args.model || "flux-kontext-pro";
|
|
580
|
+
const prompt = args.prompt;
|
|
581
|
+
|
|
582
|
+
if (!prompt) {
|
|
583
|
+
return {
|
|
584
|
+
content: [
|
|
585
|
+
{
|
|
586
|
+
type: "text",
|
|
587
|
+
text: "Error: prompt is required.",
|
|
588
|
+
},
|
|
589
|
+
],
|
|
590
|
+
isError: true,
|
|
591
|
+
};
|
|
592
|
+
}
|
|
593
|
+

        // Validate config: endpoints are always required; API keys are
        // optional when managed identity is used.
        const modelConfig = MODEL_REGISTRY[model];
        if (!modelConfig) {
          // Fail fast with a clear error for models not in the registry,
          // rather than throwing on modelConfig.pattern below.
          return {
            content: [
              { type: "text", text: `Error: unknown model: ${model}` },
            ],
            isError: true,
          };
        }
        if (modelConfig.pattern === "azure-openai") {
          if (!CONFIG.azureOpenAI.endpoint) {
            return {
              content: [
                {
                  type: "text",
                  text:
                    "Error: AZURE_OPENAI_ENDPOINT must be configured for model: " +
                    model +
                    ". Auth: set AZURE_OPENAI_API_KEY or use managed identity (az login).",
                },
              ],
              isError: true,
            };
          }
        }
        if (modelConfig.pattern === "azure-ai-services") {
          if (!CONFIG.azureAIServices.endpoint) {
            return {
              content: [
                {
                  type: "text",
                  text:
                    "Error: AZURE_AI_SERVICES_ENDPOINT must be configured for model: " +
                    model +
                    ". Auth: set AZURE_AI_SERVICES_KEY or use managed identity (az login).",
                },
              ],
              isError: true,
            };
          }
        }

        const result = await generateImage(model, prompt, {
          size: args.size,
          quality: args.quality,
          style: args.style,
        });

        // Build a compact metadata response - NO base64 touches the context
        const stats = fs.statSync(result.filePath);
        const fileSizeKB = (stats.size / 1024).toFixed(1);

        let responseText =
          `Image generated successfully.\n` +
          ` File: ${result.filePath}\n` +
          ` Model: ${result.model}\n` +
          ` Size: ${result.size}\n` +
          ` File size: ${fileSizeKB} KB`;

        if (result.revisedPrompt) {
          responseText += `\n Revised prompt: ${result.revisedPrompt}`;
        }

        return {
          content: [
            {
              type: "text",
              text: responseText,
            },
          ],
        };
      }

      // ---------------------------------------------------------------
      // base64_to_file
      // ---------------------------------------------------------------
      case "base64_to_file": {
        const base64Data = args.base64_data;
        if (!base64Data) {
          return {
            content: [
              {
                type: "text",
                text: "Error: base64_data is required.",
              },
            ],
            isError: true,
          };
        }

        const format = args.format || "png";
        const filename =
          args.filename || generateFilename("converted", format);
        const filePath = saveBase64ToFile(base64Data, filename);
        const stats = fs.statSync(filePath);
        const fileSizeKB = (stats.size / 1024).toFixed(1);

        return {
          content: [
            {
              type: "text",
              text:
                `Base64 image saved to file.\n` +
                ` File: ${filePath}\n` +
                ` Size: ${fileSizeKB} KB`,
            },
          ],
        };
      }

      // ---------------------------------------------------------------
      // list_generated_images
      // ---------------------------------------------------------------
      case "list_generated_images": {
        ensureOutputDir();
        const limit = args.limit || 20;
        const files = fs.readdirSync(CONFIG.outputDir);

        const imageFiles = files
          .filter((f) => /\.(png|jpg|jpeg|webp)$/i.test(f))
          .map((f) => {
            const fullPath = path.join(CONFIG.outputDir, f);
            const stats = fs.statSync(fullPath);
            // Extract model from filename pattern: model_timestamp_id.ext
            const modelMatch = f.match(/^([^_]+)_/);
            return {
              filename: f,
              path: fullPath,
              model: modelMatch ? modelMatch[1] : "unknown",
              sizeKB: (stats.size / 1024).toFixed(1),
              created: stats.birthtime.toISOString(),
            };
          })
          .sort((a, b) => new Date(b.created) - new Date(a.created))
          .slice(0, limit);

        if (imageFiles.length === 0) {
          return {
            content: [
              {
                type: "text",
                text: `No generated images found in ${CONFIG.outputDir}`,
              },
            ],
          };
        }

        const listing = imageFiles
          .map(
            (f, i) =>
              `${i + 1}. ${f.filename}\n` +
              `   Model: ${f.model} | Size: ${f.sizeKB} KB | Created: ${f.created}`
          )
          .join("\n");

        return {
          content: [
            {
              type: "text",
              text:
                `Generated images (${imageFiles.length}/${files.length} total):\n` +
                `Directory: ${CONFIG.outputDir}\n\n${listing}`,
            },
          ],
        };
      }

      // ---------------------------------------------------------------
      // cleanup_images
      // ---------------------------------------------------------------
      case "cleanup_images": {
        ensureOutputDir();
        const olderThanDays = args.older_than_days || 7;
        const dryRun = args.dry_run !== false; // defaults to a safe dry run
        const cutoff = Date.now() - olderThanDays * 24 * 60 * 60 * 1000;

        const files = fs.readdirSync(CONFIG.outputDir);
        const toDelete = [];

        for (const f of files) {
          if (!/\.(png|jpg|jpeg|webp|json)$/i.test(f)) continue;
          const fullPath = path.join(CONFIG.outputDir, f);
          const stats = fs.statSync(fullPath);
          if (stats.birthtimeMs < cutoff) {
            toDelete.push({
              filename: f,
              sizeKB: (stats.size / 1024).toFixed(1),
              created: stats.birthtime.toISOString(),
            });
          }
        }

        if (toDelete.length === 0) {
          return {
            content: [
              {
                type: "text",
                text: `No images older than ${olderThanDays} days found.`,
              },
            ],
          };
        }

        if (!dryRun) {
          for (const f of toDelete) {
            fs.unlinkSync(path.join(CONFIG.outputDir, f.filename));
          }
        }

        const listing = toDelete
          .map((f) => ` - ${f.filename} (${f.sizeKB} KB, ${f.created})`)
          .join("\n");

        const action = dryRun ? "Would delete" : "Deleted";
        return {
          content: [
            {
              type: "text",
              text:
                `${action} ${toDelete.length} files older than ${olderThanDays} days:\n${listing}` +
                (dryRun
                  ? "\n\nThis was a dry run. Set dry_run=false to actually delete."
                  : ""),
            },
          ],
        };
      }

      default:
        return {
          content: [
            {
              type: "text",
              text: `Unknown tool: ${name}`,
            },
          ],
          isError: true,
        };
    }
  } catch (error) {
    return {
      content: [
        {
          type: "text",
          text: `Error in ${name}: ${error.message}`,
        },
      ],
      isError: true,
    };
  }
});

// ---------------------------------------------------------------------------
// Start
// ---------------------------------------------------------------------------
async function main() {
  const transport = new StdioServerTransport();
  await server.connect(transport);
  // Log to stderr so it does not interfere with the MCP stdio protocol.
  console.error("Image Proxy MCP server started.");
  console.error(`Output directory: ${CONFIG.outputDir}`);
  console.error(
    `Auth - OpenAI: ${USE_KEY_AUTH_OPENAI ? "API key" : "Managed Identity"}`
  );
  console.error(
    `Auth - AI Services: ${USE_KEY_AUTH_AI_SERVICES ? "API key" : "Managed Identity"}`
  );

  // Pre-flight check: verify managed identity login if needed.
  if (credential) {
    try {
      await credential.getToken(COGNITIVE_SCOPE);
      console.error("Auth check: Managed Identity token acquired OK.");
    } catch {
      console.error(
        "WARNING: Managed Identity auth failed. " +
          "Run 'az login' or set API keys. " +
          "Image generation will fail until this is resolved."
      );
    }
  }
}

main().catch((err) => {
  console.error("Fatal:", err);
  process.exit(1);
});
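The handler above deliberately returns a short metadata string instead of base64 image data. A minimal sketch of what this looks like from the client side: the JSON-RPC `tools/call` frame a client would send, and a local re-creation of the `responseText` the handler builds (the file path and model values here are illustrative placeholders, not real server output):

```javascript
// Hypothetical tools/call request frame for generate_image (MCP JSON-RPC).
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "generate_image",
    arguments: {
      model: "flux-kontext-pro",
      prompt: "a lighthouse at dusk",
      size: "1024x1024",
    },
  },
};

// Mirrors the responseText construction in the handler: compact metadata
// only, so no base64 ever reaches the model's context window.
function buildResponseText(result, fileSizeKB) {
  let text =
    `Image generated successfully.\n` +
    ` File: ${result.filePath}\n` +
    ` Model: ${result.model}\n` +
    ` Size: ${result.size}\n` +
    ` File size: ${fileSizeKB} KB`;
  if (result.revisedPrompt) {
    text += `\n Revised prompt: ${result.revisedPrompt}`;
  }
  return text;
}

// Illustrative values only.
const text = buildResponseText(
  {
    filePath: "/tmp/images/flux-kontext-pro_20240101_abc.png",
    model: "flux-kontext-pro",
    size: "1024x1024",
  },
  "412.3"
);
console.log(request.params.name);
console.log(text);
```

Even a large image yields a response of a few hundred characters, which is the whole point of the proxy design.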
package/package.json
ADDED
@@ -0,0 +1,42 @@
{
  "name": "image-proxy-mcp",
  "version": "1.0.0",
  "description": "MCP server that proxies image generation to Azure (GPT-Image, DALL-E 3, FLUX Kontext Pro), saves to local disk, and returns file paths instead of base64 to keep context windows clean.",
  "type": "module",
  "main": "index.js",
  "bin": {
    "image-proxy-mcp": "./index.js"
  },
  "files": [
    "index.js",
    "README.md",
    ".env.example"
  ],
  "scripts": {
    "start": "node index.js",
    "test": "node test.js"
  },
  "keywords": [
    "mcp",
    "model-context-protocol",
    "image-generation",
    "azure-openai",
    "dall-e",
    "flux",
    "image-proxy"
  ],
  "repository": {
    "type": "git",
    "url": "git+https://github.com/lproux/MCP-Orchestration.git",
    "directory": "tools/image-proxy"
  },
  "license": "MIT",
  "engines": {
    "node": ">=18.0.0"
  },
  "dependencies": {
    "@azure/identity": "^4.13.0",
    "@modelcontextprotocol/sdk": "^1.12.0",
    "dotenv": "^16.4.7"
  }
}
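Because the package declares a `bin` entry, an MCP client can launch the server directly with `npx`. A sketch of a client configuration under the common `mcpServers` shape (the server label, endpoint value, and choice of env vars are placeholders to adapt, per the `.env.example` above):

```json
{
  "mcpServers": {
    "image-proxy": {
      "command": "npx",
      "args": ["-y", "image-proxy-mcp"],
      "env": {
        "AZURE_OPENAI_ENDPOINT": "https://your-resource.openai.azure.com/",
        "AZURE_OPENAI_API_VERSION": "2024-02-01"
      }
    }
  }
}
```

With no API key set, the server falls back to `DefaultAzureCredential`, so `az login` (or a managed identity on Azure) is sufficient.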