mcp-prompt-optimizer 3.0.3 → 3.0.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +251 -403
  2. package/package.json +1 -1
package/README.md CHANGED
@@ -1,72 +1,35 @@
- # MCP Prompt Optimizer v3.0.0
+ # MCP Prompt Optimizer v3.0.3
 
- 🚀 **Professional cloud-based MCP server** for AI-powered prompt optimization with intelligent context detection, template management, team collaboration, enterprise-grade features, and **optional personal model configuration**. Starting at $2.99/month.
-
- ⚠️ **v3.0.0 Breaking Changes:** API key is now REQUIRED for all operations. Development mode and offline mode have been removed for security.
+ 🚀 **Professional cloud-based MCP server** for AI-powered prompt optimization with intelligent context detection, template management, team collaboration, and enterprise-grade reliability. Starting at $2.99/month.
 
  ## ✨ Key Features
 
- 🧠 **AI Context Detection** - Automatically detects and optimizes for image generation, LLM interaction, technical automation
- 📁 **Template Management** - Auto-save high-confidence optimizations, search & reuse patterns
- 👥 **Team Collaboration** - Shared quotas, team templates, role-based access
- 📊 **Real-time Analytics** - Confidence scoring, usage tracking, optimization insights (Note: Advanced features like Bayesian Optimization and AG-UI are configurable and may provide mock data if disabled in the backend)
- ☁️ **Cloud Processing** - Always up-to-date AI models, no local setup required
- 🎛️ **Personal Model Choice** - Use your own OpenRouter models via WebUI configuration
- 🔧 **Universal MCP** - Works with Claude Desktop, Cursor, Windsurf, Cline, VS Code, Zed, Replit
+ 🧠 **AI Context Detection** - Automatically detects and optimizes for code, creative writing, image generation, communication, and more
+ 📁 **Template Management** - Auto-save high-confidence optimizations, search & reuse patterns
+ 👥 **Team Collaboration** - Shared quotas, team templates, role-based access
+ 📊 **Confidence Scoring** - Honest quality signal with per-tier annotations
+ ☁️ **Cloud Processing** - Always up-to-date AI models via backend LLM pipeline
+ 🔧 **Resilient Fallback** - Structured local optimization if the backend is unreachable
+ 🎛️ **Personal Model Choice** - Use your own OpenRouter models via WebUI configuration
+ 🔧 **Universal MCP** - Works with Claude Desktop, Cursor, Windsurf, Cline, VS Code, Zed, Replit
 
- ## 🚀 Quick Start
+ ---
 
- **1. Get your API key (REQUIRED):**
+ ## 🚀 Quick Start
 
- ⚠️ **Important:** An API key is REQUIRED to use this package. Choose your tier:
+ **1. Get your API key (required):**
 
- - **🆓 FREE Tier** (`sk-local-*`): 5 daily optimizations - Get started at [https://promptoptimizer-blog.vercel.app/pricing](https://promptoptimizer-blog.vercel.app/pricing)
- - **⭐ Paid Tiers** (`sk-opt-*`, `sk-team-*`): More optimizations, team features, advanced capabilities
+ - **🆓 Free Tier** (`sk-local-*`): 5 daily optimizations - [promptoptimizer-blog.vercel.app/pricing](https://promptoptimizer-blog.vercel.app/pricing)
+ - **⭐ Paid Tiers** (`sk-opt-*`, `sk-team-*`): Higher quotas, team features, advanced capabilities
 
- **2. Install the MCP server:**
+ **2. Install:**
  ```bash
  npm install -g mcp-prompt-optimizer
  ```
 
- **3. Configure Claude Desktop:**
- Add to your `~/.claude/claude_desktop_config.json`:
- ```json
- {
- "mcpServers": {
- "prompt-optimizer": {
- "command": "npx",
- "args": ["mcp-prompt-optimizer"],
- "env": {
- "OPTIMIZER_API_KEY": "sk-local-your-key-here" // REQUIRED: Use your API key here
- }
- }
- }
- }
- ```
-
- > **Note:** All API keys are validated against our backend server. Internet connection required (brief caching for reliability).
-
- **4. Restart Claude Desktop** and start optimizing with AI context awareness!
-
- **5. (Optional) Configure custom models** - See [Advanced Model Configuration](#-advanced-model-configuration-optional) below
-
- ## 🎛️ Advanced Model Configuration (Optional)
-
- ### WebUI Model Selection & Personal OpenRouter Keys
-
- **Want to use your own AI models?** Configure them in the WebUI first, then the NPM package automatically uses your settings!
-
- #### **Step 1: Configure in WebUI**
- 1. **Visit Dashboard:** [https://promptoptimizer-blog.vercel.app/dashboard](https://promptoptimizer-blog.vercel.app/dashboard)
- 2. **Go to Settings** → User Settings
- 3. **Add OpenRouter API Key:** Get one from [OpenRouter.ai](https://openrouter.ai)
- 4. **Select Your Models:**
- - **Optimization Model:** e.g., `anthropic/claude-3-5-sonnet` (for prompt optimization)
- - **Evaluation Model:** e.g., `google/gemini-pro-1.5` (for quality assessment)
-
- #### **Step 2: Use NPM Package**
- Your configured models are **automatically used** by the MCP server - no additional setup needed!
+ **3. Configure your MCP client:**
 
+ Add to `~/.claude/claude_desktop_config.json` (Claude Desktop):
  ```json
  {
  "mcpServers": {
@@ -74,467 +37,352 @@ Your configured models are **automatically used** by the MCP server - no additio
  "command": "npx",
  "args": ["mcp-prompt-optimizer"],
  "env": {
- "OPTIMIZER_API_KEY": "sk-opt-your-key-here" // Your service API key
+ "OPTIMIZER_API_KEY": "sk-local-your-key-here"
  }
  }
  }
  }
  ```
 
- ### **Model Selection Priority**
- ```
- 1. 🎯 Your WebUI-configured models (highest priority)
- 2. 🔧 Request-specific model (if specified)
- 3. ⚙️ System defaults (fallback)
- ```
-
- ### **Benefits of Personal Model Configuration**
-
- ✅ **Cost Control** - Pay for your own OpenRouter usage
- ✅ **Model Choice** - Access 100+ models (Claude, GPT-4, Gemini, Llama, etc.)
- ✅ **Performance** - Choose faster or more capable models
- ✅ **Consistency** - Same models across WebUI and MCP tools
- ✅ **Privacy** - Your data goes through your OpenRouter account
-
- ### **Example Model Recommendations**
-
- **For Creative/Complex Prompts:**
- - Optimization: `anthropic/claude-3-5-sonnet`
- - Evaluation: `google/gemini-pro-1.5`
-
- **For Fast/Simple Optimizations:**
- - Optimization: `openai/gpt-4o-mini`
- - Evaluation: `openai/gpt-3.5-turbo`
-
- **For Technical/Code Prompts:**
- - Optimization: `anthropic/claude-3-5-sonnet`
- - Evaluation: `anthropic/claude-3-haiku`
-
- ### **Important Notes**
-
- 🔑 **Two Different API Keys:**
- - **Service API Key** (`sk-opt-*`): For the MCP service subscription
- - **OpenRouter API Key**: For your personal model usage (configured in WebUI)
-
- 💰 **Cost Structure:**
- - **Service subscription**: Monthly fee for optimization features
- - **OpenRouter usage**: Pay-per-token for your chosen models
+ **4. Restart your MCP client** and start optimizing.
 
- 🔄 **No NPM Package Changes Needed:**
- When you update models in WebUI, the NPM package automatically uses the new settings!
+ > **Note:** API keys are validated against the backend server. An internet connection is required (responses are cached for up to 2 hours for reliability).
 
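The note above describes time-bounded caching of key validations. A minimal sketch of how such a cache could work (illustrative only; `validateWithBackend` and the cache shape are hypothetical, not the package's actual implementation):

```javascript
// Illustrative time-bounded cache for API-key validation results.
// The 2-hour TTL matches the documented cache window; everything else is assumed.
const CACHE_TTL_MS = 2 * 60 * 60 * 1000; // keep validations for up to 2 hours

const validationCache = new Map(); // apiKey -> { valid, checkedAt }

async function isKeyValid(apiKey, validateWithBackend, now = Date.now()) {
  const hit = validationCache.get(apiKey);
  if (hit && now - hit.checkedAt < CACHE_TTL_MS) {
    return hit.valid; // fresh enough: no network round-trip
  }
  const valid = await validateWithBackend(apiKey); // backend validation call
  validationCache.set(apiKey, { valid, checkedAt: now });
  return valid;
}
```

Within the TTL window, repeated calls for the same key are served from the cache, which is why brief network hiccups do not immediately break an already-validated key.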
  ---
 
- ## 💰 Cloud Subscription Plans
+ ## ⚙️ How Optimization Works
 
- > **All plans include the same sophisticated AI optimization quality**
+ Prompts are routed through a three-tier pipeline based on complexity and context. Each tier produces a different output format and confidence range.
 
- ### 🎯 Explorer - $2.99/month
- - **5,000 optimizations** per month
- - **Individual use** (1 user, 1 API key)
- - **Full AI features** - context detection, template management, insights
- - **Personal model configuration** via WebUI
- - **Community support**
+ ### Tier 1: LLM Optimization (backend, confidence 70–95%)
 
- ### 🎨 Creator - $25.99/month *Popular*
- - **18,000 optimizations** per month
- - **Team features** (2 members, 3 API keys)
- - **Full AI features** - context detection, template management, insights
- - **Personal model configuration** via WebUI
- - **Priority processing** + email support
+ The backend routes complex or high-sophistication prompts through a Gemini Flash LLM pass that genuinely rewrites and enriches the prompt. This is the highest-quality output.
 
- ### 🚀 Innovator - $69.99/month
- - **75,000 optimizations** per month
- - **Large teams** (5 members, 10 API keys)
- - **Full AI features** - context detection, template management, insights
- - **Personal model configuration** via WebUI
- - **Advanced analytics** + priority support + dedicated support channel
+ ```
+ # 🎯 Optimized Prompt
 
- 🆓 **Free Trial:** 5 optimizations with full feature access
+ My Python script crashes with a KeyError on line 47. I need help diagnosing
+ the root cause. I am using Python 3.11 with pandas 2.0. The error occurs
+ when accessing dictionary keys after a merge operation.
 
- ## 🧠 AI Context Detection & Enhancement
+ **Confidence:** 82.0%
+ **AI Context:** code_generation
+ ```
 
- The server automatically detects your prompt type and enhances optimization goals:
+ ### Tier 2: Backend Rules Optimization (backend, confidence < 25%)
 
- ### 🎨 Image Generation Context
- **Detected patterns:** `--ar`, `--v`, `midjourney`, `dall-e`, `photorealistic`, `4k`
- ```
- Input: "A beautiful landscape --ar 16:9 --v 6"
- ✅ Enhanced goals: parameter_preservation, keyword_density, technical_precision
- ✅ Preserves technical parameters (--ar, --v, etc.)
- ✅ Optimizes quality keywords and visual descriptors
- ```
+ For simpler or lower-sophistication prompts the backend applies rules-based optimization without an LLM. When this happens, confidence will be below 25% and a note will appear explaining what to check if you expected full LLM enhancement.
 
- ### 🤖 LLM Interaction Context
- **Detected patterns:** `analyze`, `explain`, `evaluate`, `summary`, `research`, `paper`, `analysis`, `interpret`, `discussion`, `assessment`, `compare`, `contrast`
- ```
- Input: "Analyze the pros and cons of this research paper and provide a comprehensive evaluation"
- ✅ Enhanced goals: context_specificity, token_efficiency, actionability
- ✅ Improves role clarity and instruction precision
- ✅ Optimizes for better AI understanding
  ```
+ # 🎯 Optimized Prompt
 
- ### 💻 Code Generation Context
- **Detected patterns:** `def`, `function`, `code`, `python`, `javascript`, `java`, `c++`, `return`, `import`, `class`, `for`, `while`, `if`, `else`, `elif`
- ```
- Input: "def fibonacci(n): return n if n <= 1 else fibonacci(n-1) + fibonacci(n-2)"
- ✅ Enhanced goals: technical_accuracy, parameter_preservation, precision
- ✅ Protects code elements and technical syntax
- ✅ Enhances technical precision and clarity
- ```
+ My Python script crashes with a KeyError. Please provide the error message
+ and relevant code.
 
- ### ⚙️ Technical Automation Context
- **Detected patterns:** `automate`, `script`, `api`
- ```
- Input: "Create a script to automate deployment process"
- Enhanced goals: technical_accuracy, parameter_preservation, precision
- Protects code elements and technical syntax
- ✅ Enhances technical precision and clarity
+ > ℹ️ *Low confidence indicates the backend applied rules-based optimization
+ > (no LLM). Ensure `OPENROUTER_API_KEY` is configured in the backend for
+ > full LLM enhancement.*
+
+ **Confidence:** 18.0%
+ **AI Context:** code_generation
  ```
 
- ### 💬 Human Communication Context (Default)
- **All other prompts** get standard optimization for human readability and clarity.
+ ### Tier 3: Local Rules Fallback (npm, confidence 35–55%)
 
- ## 📊 Enhanced Optimization Features
+ If the backend is unreachable, the npm package applies optimization locally using a library of domain-specific templates. The output is a structured, user-facing prose prompt — no XML scaffolding. The confidence annotation makes the quality tier explicit.
 
- ### Professional Optimization (All Users)
  ```
- 🎯 Optimized Prompt
+ # 🔧 Rules-Based Optimization Applied
 
- Create a comprehensive technical blog post about artificial intelligence that systematically explores current real-world applications, evidence-based benefits, existing limitations and challenges, and data-driven future implications for businesses and society.
+ ⚠️ *Backend unreachable: your prompt has been structured using local rule
+ templates (no LLM). Re-run when the backend is available for full LLM
+ optimization.*
 
- Confidence: 87.3%
- Plan: Creator
- AI Context: Human Communication
- Goals Enhanced: Yes (clarity → clarity, specificity, actionability)
+ **Template:** `debugging_request`
 
- 🧠 AI Context Benefits Applied
- - ✅ Standard optimization rules applied
- - Human communication optimized
+ **Optimized Prompt:**
+ ```
+ My Python script crashes with a KeyError
 
- Auto-saved as template (ID: tmp_abc123)
- *High-confidence optimization automatically saved for future use*
+ To address this effectively:
+ - Identify the most likely root causes given the error description.
+ - Request the full error message and stack trace, plus the relevant code snippet.
+ - Ask about expected vs. actual behavior, programming language, and library versions.
 
- 📋 Similar Templates Found
- 1. AI Article Writing Template (92.1% similarity)
- 2. Technical Blog Post Structure (85.6% similarity)
- *Use `search_templates` tool to explore your template library*
+ *Response format: Step-by-step debugging walkthrough with a specific fix or next diagnostic steps.*
+ ```
 
- 📊 Optimization Insights
+ **Confidence:** 51.0% *(rules-based — LLM optimization typically 70–95%)*
+ **AI Context:** code_generation
+ ```
 
- Performance Analysis:
- - Clarity improvement: +21.9%
- - Specificity boost: +17.3%
- - Length optimization: +15.2%
+ ---
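The documented confidence ranges make the tier of a given result easy to decode. A hypothetical sketch of that mapping (the function name and return labels are assumptions, not the package's API):

```javascript
// Illustrative mapping from a confidence score to the optimization tier,
// based on the documented ranges: Tier 1 LLM 70–95%, Tier 3 local rules
// 35–55%, Tier 2 backend rules < 25%. Hypothetical helper only.
function annotateTier(confidence) {
  if (confidence >= 0.70) return "llm";          // Tier 1: backend LLM pass
  if (confidence >= 0.35) return "local_rules";  // Tier 3: npm fallback
  if (confidence < 0.25) return "backend_rules"; // Tier 2: backend rules
  return "unknown";                              // 25–35% is not documented
}
```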
 
- Prompt Analysis:
- - Complexity level: intermediate
- - Optimization confidence: 87.3%
+ ## 🧠 AI Context Detection
 
- AI Recommendations:
- - Optimization achieved 87.3% confidence
- - Template automatically saved for future reference
- - Prompt optimized from 15 to 23 words
+ The server automatically detects your prompt type and routes it to the appropriate optimization template. You can also specify `ai_context` manually in the tool call.
 
- *Professional analytics and improvement recommendations*
+ | Context | `ai_context` value | Example patterns |
+ |---|---|---|
+ | Code & debugging | `code_generation` | `debug`, `fix`, `error`, `python`, `sql`, `refactor` |
+ | Web development | `code_generation` | `landing page`, `react`, `css`, `html`, `tailwind` |
+ | API & integration | `api_automation` | `api`, `rest`, `endpoint`, `oauth`, `webhook` |
+ | DevOps & automation | `technical_automation` | `deploy`, `docker`, `ci/cd`, `terraform`, `bash` |
+ | Data & schemas | `structured_output` | `json`, `schema`, `csv`, `yaml`, `transform` |
+ | Analysis & research | `llm_interaction` | `analyze`, `explain`, `compare`, `research` |
+ | Creative writing | `creative_writing` | `story`, `poem`, `script`, `blog`, `copywriting` |
+ | Email & communication | `human_communication` | `email`, `letter`, `memo`, `formal`, `reply` |
+ | Image generation | `image_generation` | `photorealistic`, `midjourney`, `dall-e`, `portrait` |
+ | General | `general_assistant` | Everything else |
 
- ---
- *Professional cloud-based AI optimization with context awareness*
- 💡 Manage account & configure models: https://promptoptimizer-blog.vercel.app/dashboard
- 📊 Check quota: Use `get_quota_status` tool
- 🔍 Search templates: Use `search_templates` tool
- ```
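In spirit, the pattern table above is keyword-based routing with `general_assistant` as the fallback. A minimal sketch under that assumption (the real detector is more sophisticated; the keyword lists and function name here are illustrative, not the service's implementation):

```javascript
// Illustrative keyword-based context routing, mirroring the pattern table.
// Order matters: more specific contexts are checked before broader ones.
const CONTEXT_PATTERNS = [
  ["api_automation", ["api", "rest", "endpoint", "oauth", "webhook"]],
  ["technical_automation", ["deploy", "docker", "ci/cd", "terraform", "bash"]],
  ["structured_output", ["json", "schema", "csv", "yaml", "transform"]],
  ["code_generation", ["debug", "fix", "error", "python", "sql", "refactor", "react", "css", "html"]],
  ["image_generation", ["photorealistic", "midjourney", "dall-e", "portrait"]],
  ["creative_writing", ["story", "poem", "script", "blog", "copywriting"]],
  ["human_communication", ["email", "letter", "memo", "formal", "reply"]],
  ["llm_interaction", ["analyze", "explain", "compare", "research"]],
];

function detectAiContext(prompt) {
  const text = prompt.toLowerCase();
  for (const [context, keywords] of CONTEXT_PATTERNS) {
    if (keywords.some((kw) => text.includes(kw))) return context;
  }
  return "general_assistant"; // everything else
}
```

Passing `ai_context` explicitly in the tool call bypasses detection entirely, which is useful when a prompt mixes domains.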
+ ### Image Generation
 
- ## 🔧 Universal MCP Client Support
+ Image prompts are handled separately. The local fallback appends style-matched quality boosters directly to the prompt (comma-separated), rather than producing structured bullets. This matches how image generation models consume prompts.
 
- ### Claude Desktop
- ```json
- {
- "mcpServers": {
- "prompt-optimizer": {
- "command": "npx",
- "args": ["mcp-prompt-optimizer"],
- "env": {
- "OPTIMIZER_API_KEY": "sk-opt-your-key-here"
- }
- }
- }
- }
  ```
-
- ### Cursor IDE
- Add to `~/.cursor/mcp.json`:
- ```json
- {
- "mcpServers": {
- "prompt-optimizer": {
- "command": "npx",
- "args": ["mcp-prompt-optimizer"],
- "env": {
- "OPTIMIZER_API_KEY": "sk-opt-your-key-here"
- }
- }
- }
- }
+ Input: "Draw a photorealistic portrait of an astronaut"
+ Output: "Draw a photorealistic portrait of an astronaut, ultra realistic,
+ sharp focus, professional photography, dynamic lighting, balanced
+ composition, high quality, 4K"
  ```
 
- ### Windsurf
- Configure in IDE settings or add to MCP configuration file.
-
- ### Other MCP Clients
- - **Cline:** Standard MCP configuration
- - **VS Code:** MCP extension setup
- - **Zed:** MCP server configuration
- - **Replit:** Environment variable setup
- - **JetBrains IDEs:** MCP plugin configuration
- - **Emacs/Vim/Neovim:** MCP client setup
-
- ## 🛠️ Available MCP Tools (for AI Agents & MCP Clients)
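The comma-appending behavior for image prompts can be pictured with a small sketch (the booster lists and style detection below are assumptions for illustration; the package's actual booster library is internal):

```javascript
// Illustrative style-matched booster appending for image prompts.
// Booster values are examples only, loosely based on the sample output above.
const QUALITY_BOOSTERS = {
  photorealistic: ["ultra realistic", "sharp focus", "professional photography"],
  default: ["high quality", "detailed", "balanced composition"],
};

function boostImagePrompt(prompt) {
  // Pick a booster set matching the detected style; fall back to generic ones.
  const style = /photorealistic/i.test(prompt) ? "photorealistic" : "default";
  return `${prompt}, ${QUALITY_BOOSTERS[style].join(", ")}`; // comma-separated
}
```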
+ ---
 
- These tools are exposed via the Model Context Protocol (MCP) server and are intended for use by AI agents, MCP-compatible clients (like Claude Desktop, Cursor IDE), or custom scripts that interact with the server via stdin/stdout.
+ ## 🛠️ Available MCP Tools
 
  ### `optimize_prompt`
- **Professional AI optimization with context detection, auto-save, and insights.**
- ```javascript
+ AI optimization with context detection, auto-save, and insights.
+ ```json
  {
  "prompt": "Your prompt text",
- "goals": ["clarity", "specificity"], // Optional: e.g., "clarity", "conciseness", "creativity", "technical_accuracy"
- "ai_context": "llm_interaction", // Optional: Auto-detected if not specified. e.g., "code_generation", "image_generation"
- "enable_bayesian": true // Optional: Enable Bayesian optimization features (if available in backend)
+ "goals": ["clarity", "specificity"],
+ "ai_context": "code_generation",
+ "enable_bayesian": true
  }
  ```
 
  ### `detect_ai_context`
- **Detects the AI context for a given prompt using advanced backend analysis.**
- ```javascript
+ Detect the AI context for a given prompt.
+ ```json
  {
  "prompt": "The prompt text for which to detect the AI context"
  }
  ```
 
  ### `create_template`
- **Create a new optimization template.**
- ```javascript
+ Save an optimization as a reusable template.
+ ```json
  {
- "title": "Title of the template",
- "description": "Description of the template", // Optional
- "original_prompt": "The original prompt text",
- "optimized_prompt": "The optimized prompt text",
- "optimization_goals": ["clarity"], // Optional: e.g., ["clarity", "conciseness"]
- "confidence_score": 0.9, // (0.0-1.0)
- "model_used": "openai/gpt-4o-mini", // Optional
- "optimization_tier": "llm", // Optional: e.g., "rules", "llm", "hybrid"
- "ai_context_detected": "llm_interaction", // Optional: e.g., "code_generation", "image_generation"
- "is_public": false, // Optional: Whether the template is public
- "tags": ["marketing", "email"] // Optional
+ "title": "Template title",
+ "description": "Optional description",
+ "original_prompt": "The original prompt",
+ "optimized_prompt": "The optimized prompt",
+ "optimization_goals": ["clarity"],
+ "confidence_score": 0.9,
+ "ai_context_detected": "code_generation",
+ "is_public": false,
+ "tags": ["debugging", "python"]
  }
  ```
 
  ### `get_template`
- **Retrieve a specific template by its ID.**
- ```javascript
+ Retrieve a saved template by ID.
+ ```json
  {
  "template_id": "the-template-id"
  }
  ```
 
  ### `update_template`
- **Update an existing optimization template.**
- ```javascript
+ Update an existing template.
+ ```json
  {
  "template_id": "the-template-id",
- "title": "New title for the template", // Optional
- "description": "New description for the template", // Optional
- "is_public": true // Optional: Update public status
- // Other fields from 'create_template' can also be updated
+ "title": "Updated title",
+ "is_public": true
  }
  ```
 
  ### `search_templates`
- **Search your saved template library with AI-aware filtering.**
- ```javascript
+ Search your saved template library.
+ ```json
  {
- "query": "blog post", // Optional: Search term to filter templates by content or title
- "ai_context": "human_communication", // Optional: Filter templates by AI context type
- "sophistication_level": "advanced", // Optional: Filter by template sophistication level
- "complexity_level": "complex", // Optional: Filter by template complexity level
- "optimization_strategy": "rules_only", // Optional: Filter by optimization strategy used
- "limit": 5, // Optional: Number of templates to return (1-20)
- "sort_by": "confidence_score", // Optional: e.g., "created_at", "usage_count", "title"
- "sort_order": "desc" // Optional: "asc" or "desc"
+ "query": "debugging",
+ "ai_context": "code_generation",
+ "limit": 5,
+ "sort_by": "confidence_score",
+ "sort_order": "desc"
  }
  ```
 
  ### `get_quota_status`
- **Check subscription status, quota usage, and account information.**
- ```javascript
- // No parameters needed
- ```
+ Check subscription status and quota usage. No parameters required.
 
- ### `get_optimization_insights` (Conditional)
- **Get advanced Bayesian optimization insights, performance analytics, and parameter tuning recommendations.**
- *Note: This tool provides mock data if Bayesian optimization is disabled in the backend.*
- ```javascript
+ ### `get_optimization_insights` *(conditional)*
+ Bayesian optimization insights and parameter tuning recommendations. Requires the feature to be enabled in the backend; returns mock data otherwise.
+ ```json
  {
- "analysis_depth": "detailed", // Optional: "basic", "detailed", "comprehensive"
- "include_recommendations": true // Optional: Include optimization recommendations
+ "analysis_depth": "detailed",
+ "include_recommendations": true
  }
  ```
 
- ### `get_real_time_status` (Conditional)
- **Get real-time optimization status and AG-UI capabilities.**
- *Note: This tool provides mock data if AG-UI features are disabled in the backend.*
- ```javascript
- // No parameters needed
- ```
+ ### `get_real_time_status` *(conditional)*
+ Real-time optimization status and AG-UI capabilities. Requires the feature to be enabled in the backend; returns mock data otherwise. No parameters required.
 
  ---
 
- ## 🔧 Professional CLI Commands (Direct Execution)
+ ## 🎛️ Advanced Model Configuration (Optional)
 
- These are direct command-line tools provided by the `mcp-prompt-optimizer` executable for administrative and diagnostic purposes.
+ Configure custom models in the WebUI, and the MCP server uses them automatically.
 
- ```bash
- # Check API key and quota status
- mcp-prompt-optimizer check-status
+ **Step 1 — Configure in WebUI:**
+ 1. Visit [Dashboard](https://promptoptimizer-blog.vercel.app/dashboard)
+ 2. Go to Settings → User Settings
+ 3. Add your OpenRouter API key (from [openrouter.ai](https://openrouter.ai))
+ 4. Select your preferred models for optimization and evaluation
+
+ **Step 2 — Use the npm package as normal.** Your WebUI model settings are applied automatically — no changes to the MCP configuration required.
+
+ ### Model Selection Priority
+ ```
+ 1. Your WebUI-configured models (highest priority)
+ 2. Request-specific model override
+ 3. System default (google/gemini-flash-1.5-8b)
+ ```
+
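The three-level priority reduces to a simple first-non-null choice. A hypothetical sketch (field names are assumptions, not the service's schema; only the default model identifier comes from the list above):

```javascript
// Illustrative resolution of the documented model-selection priority:
// WebUI setting, then per-request override, then the system default.
const SYSTEM_DEFAULT_MODEL = "google/gemini-flash-1.5-8b";

function resolveModel(webuiModel, requestModel) {
  return webuiModel ?? requestModel ?? SYSTEM_DEFAULT_MODEL;
}
```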
254
+ ### Example Model Recommendations
255
+
256
+ | Use case | Optimization model | Evaluation model |
257
+ |---|---|---|
258
+ | Creative / complex | `anthropic/claude-3-5-sonnet` | `google/gemini-pro-1.5` |
259
+ | Fast / simple | `openai/gpt-4o-mini` | `openai/gpt-3.5-turbo` |
260
+ | Code / technical | `anthropic/claude-3-5-sonnet` | `anthropic/claude-3-haiku` |
261
+
262
+ > **Two different API keys:**
263
+ > - **Service key** (`sk-opt-*`) — your MCP Prompt Optimizer subscription
264
+ > - **OpenRouter key** — your personal OpenRouter account for model usage costs
400
265
 
401
- # Validate API key with backend
402
- mcp-prompt-optimizer validate-key
266
+ ---
267
+
268
+ ## 💰 Subscription Plans
403
269
 
404
- # Test backend integration
405
- mcp-prompt-optimizer test
270
+ | Plan | Price | Optimizations/month | Team members |
271
+ |---|---|---|---|
272
+ | 🎯 Explorer | $2.99/mo | 5,000 | 1 |
273
+ | 🎨 Creator | $25.99/mo | 18,000 | 2 + 3 keys |
274
+ | 🚀 Innovator | $69.99/mo | 75,000 | 5 + 10 keys |
406
275
 
407
- # Run comprehensive diagnostic
408
- mcp-prompt-optimizer diagnose
276
+ 🆓 **Free trial:** 5 optimizations with full feature access.
409
277
 
410
- # Clear validation cache
411
- mcp-prompt-optimizer clear-cache
278
+ All plans include AI context detection, template management, personal model configuration, and optimization insights.
412
279
 
413
- # Show help and setup instructions
414
- mcp-prompt-optimizer help
280
+ ---
415
281
 
416
- # Show version information
417
- mcp-prompt-optimizer version
282
+ ## 🔧 CLI Commands
283
+
284
+ ```bash
285
+ mcp-prompt-optimizer check-status # Check API key and quota status
286
+ mcp-prompt-optimizer validate-key # Validate API key with backend
287
+ mcp-prompt-optimizer test # Test backend integration
288
+ mcp-prompt-optimizer diagnose # Run comprehensive diagnostic
289
+ mcp-prompt-optimizer clear-cache # Clear validation cache
290
+ mcp-prompt-optimizer help # Show help and setup instructions
291
+ mcp-prompt-optimizer version # Show version information
418
292
  ```
419
293
 
420
- ## 🏢 Team Collaboration Features
294
+ ---
295
+
296
+ ## 🏢 Team Collaboration
421
297
 
422
298
  ### Team API Keys (`sk-team-*`)
423
- - **Shared quotas** across team members
424
- - **Centralized billing** and management
425
- - **Team template libraries** for consistency
426
- - **Role-based access** control
427
- - **Team usage analytics**
299
+ - Shared quotas across team members
300
+ - Centralized billing and management
301
+ - Team template libraries for consistency
302
+ - Role-based access control
428
303
 
429
304
  ### Individual API Keys (`sk-opt-*`)
430
- - **Personal quotas** and billing
431
- - **Individual template libraries**
432
- - **Personal usage tracking**
433
- - **Account self-management**
305
+ - Personal quotas and billing
306
+ - Individual template libraries
307
+ - Account self-management
308
+
309
+ ---
434
310
 
435
311
  ## 🔐 Security & Privacy
436
312
 
437
- - **Enterprise-grade security** with encrypted data transmission
438
- - **API key validation** with secure backend authentication
439
- - **Quota enforcement** with real-time usage tracking
440
- - **Professional uptime** with 99.9% availability SLA
441
- - **GDPR compliant** data handling and processing
442
- - **No data retention** - prompts processed and optimized immediately
443
-
444
- ## 📈 Advanced Features
445
-
446
- ### Automatic Template Management
447
- - **Auto-save** high-confidence optimizations (>70% confidence)
448
- - **Intelligent categorization** by AI context and content type
449
- - **Similarity search** to find related templates
450
- - **Template analytics** with usage patterns and effectiveness
451
-
452
- ### Real-time Optimization Insights
453
- - **Performance metrics** - clarity, specificity, length improvements
454
- - **Confidence scoring** with detailed analysis
455
- - **AI-powered recommendations** for continuous improvement
456
- - **Usage analytics** and optimization patterns
457
- *Note: Advanced features like Bayesian Optimization and AG-UI Real-time Features are configurable and may provide mock data if disabled in the backend.*
458
-
459
- ### Intelligent Context Routing
460
- - **Automatic detection** of prompt context and intent
461
- - **Goal enhancement** based on detected context
462
- - **Parameter preservation** for technical prompts
463
- - **Context-specific optimizations** for better results
464
-
465
- ## 🚀 Getting Started
466
-
467
- ### 🏃‍♂️ Fast Start (System Defaults)
468
- 1. **Sign up** at [promptoptimizer-blog.vercel.app/pricing](https://promptoptimizer-blog.vercel.app/pricing)
469
- 2. **Install** the MCP server: `npm install -g mcp-prompt-optimizer`
470
- 3. **Configure** your MCP client with your API key
471
- 4. **Start optimizing** with intelligent AI context detection!
472
-
473
- ### 🎛️ Advanced Start (Custom Models)
474
- 1. **Sign up** at [promptoptimizer-blog.vercel.app/pricing](https://promptoptimizer-blog.vercel.app/pricing)
475
- 2. **Configure WebUI** at [dashboard](https://promptoptimizer-blog.vercel.app/dashboard) with your OpenRouter key & models
476
- 3. **Install** the MCP server: `npm install -g mcp-prompt-optimizer`
477
- 4. **Configure** your MCP client with your API key
478
- 5. **Enjoy enhanced optimization** with your chosen models!
479
-
480
- ## 🔄 Migration to v3.0.0
481
-
482
- ### ⚠️ Breaking Changes from v2.x
483
-
484
- **IMPORTANT:** v3.0.0 includes security enhancements that remove authentication bypasses.
485
-
486
- **What Changed:**
487
- - ❌ `OPTIMIZER_DEV_MODE=true` no longer works
488
- - ❌ `NODE_ENV=development` no longer enables mock mode
489
- - ❌ Offline mode has been removed
490
- - ✅ All API keys must be validated against backend server
491
- - ✅ Internet connection required (1-2 hour caching for reliability)
492
-
493
- **Migration Steps:**
494
- ```bash
495
- # 1. Ensure you have a valid API key
496
- export OPTIMIZER_API_KEY="sk-opt-your-key-here"
313
+ - Encrypted data transmission
314
+ - API key validation with secure backend authentication
315
+ - Quota enforcement with real-time usage tracking
316
+ - No prompt data retained processed and discarded immediately
317
+ - GDPR compliant
497
318
 
- # 2. Update to v3.0.0
- npm update -g mcp-prompt-optimizer
+ ---

- # 3. Verify it works
- mcp-prompt-optimizer --version # Should show 3.0.0
+ ## 🔧 Universal MCP Client Support
+
+ ### Claude Desktop
+ ```json
+ {
+ "mcpServers": {
+ "prompt-optimizer": {
+ "command": "npx",
+ "args": ["mcp-prompt-optimizer"],
+ "env": { "OPTIMIZER_API_KEY": "sk-opt-your-key-here" }
+ }
+ }
+ }
  ```
 
- **For Developers:**
- - Mock mode removed - use real test API keys from backend database
- - Development keys (`sk-dev-*`) must be real keys, not mocked
- - Offline testing no longer supported - backend connection required
+ ### Cursor IDE
+ Add to `~/.cursor/mcp.json`:
+ ```json
+ {
+ "mcpServers": {
+ "prompt-optimizer": {
+ "command": "npx",
+ "args": ["mcp-prompt-optimizer"],
+ "env": { "OPTIMIZER_API_KEY": "sk-opt-your-key-here" }
+ }
+ }
+ }
+ ```
 
- **Cache Behavior:**
- - Primary cache: 1 hour
- - Network failure fallback: Up to 2 hours
- - After 2 hours: Must reconnect to backend
+ ### Other clients
+ Windsurf, Cline, VS Code, Zed, Replit, JetBrains IDEs, and Neovim are all supported via standard MCP server configuration.
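+
+ As a sketch, these clients generally accept the same JSON shape shown for Claude Desktop and Cursor above; only the settings file location differs per client (for example, Windsurf commonly reads `~/.codeium/windsurf/mcp_config.json`, but treat the exact path as an assumption and check your client's documentation):
+ ```json
+ {
+ "mcpServers": {
+ "prompt-optimizer": {
+ "command": "npx",
+ "args": ["mcp-prompt-optimizer"],
+ "env": { "OPTIMIZER_API_KEY": "sk-opt-your-key-here" }
+ }
+ }
+ }
+ ```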
 
- ## 📞 Support & Resources
+ ---
+
+ ## 📦 Changelog
+
+ ### v3.0.3
+ - **Rules fallback output rewritten** — local optimization now produces user-facing prose prompts instead of raw XML scaffolding. The output is directly usable as a prompt without modification.
+ - **All 18 local templates reworded** — template principles are now phrased as user request guidance rather than AI-assistant directives, producing more natural and actionable structured prompts.
+ - **`creative_writing` context** now routes correctly to the creative writing template (previously fell through to a generic fallback).
+ - **`general_assistant` context** now maps explicitly to the LLM interaction template.
+ - **Backend rules-tier detection** — when the backend returns confidence below 25% (indicating it ran its own rules tier without an LLM), a note appears explaining the cause and how to resolve it (`OPENROUTER_API_KEY` configuration).
+ - **Confidence scale annotation** — rules-based fallback confidence now shows `*(rules-based — LLM optimization typically 70–95%)*` so users understand where their result sits relative to full LLM optimization.
+
+ ### v3.0.2
+ - Cross-platform binary compatibility improvements
+ - Bayesian optimization integration
+ - AG-UI feature flag support
 
- - **📚 Documentation:** https://promptoptimizer-blog.vercel.app/docs
- - **💬 Community Support:** GitHub Discussions
- - **📧 Email Support:** support@promptoptimizer.help (Creator/Innovator)
- - **🏢 Enterprise:** enterprise@promptoptimizer.help
- - **📊 Dashboard & Model Config:** https://promptoptimizer-blog.vercel.app/dashboard
- - **🔧 Troubleshooting:** https://promptoptimizer-blog.vercel.app/docs/troubleshooting
-
- ## 🌟 Why Choose MCP Prompt Optimizer?
-
- ✅ **Professional Quality** - Enterprise-grade optimization with consistent results
- ✅ **Universal Compatibility** - Works with 10+ MCP clients out of the box
- ✅ **AI Context Awareness** - Intelligent optimization based on prompt type
- ✅ **Personal Model Choice** - Use your own OpenRouter models & pay-per-use
- ✅ **Template Management** - Build and reuse optimization patterns
- ✅ **Team Collaboration** - Shared resources and centralized management
- ✅ **Real-time Analytics** - Track performance and improvement over time
- ✅ **Startup Validation** - Comprehensive error handling and troubleshooting
- ✅ **Professional Support** - From community to enterprise-level assistance
+ ### v3.0.0
+ - API key now required for all operations
+ - Development mode and offline mode removed for security
+ - All keys validated against backend server
 
  ---
 
- **🚀 Professional MCP Server** - Built for serious AI development with intelligent context detection, comprehensive template management, personal model configuration, and enterprise-grade reliability.
+ ## 📞 Support & Resources
+
+ - **Documentation:** https://promptoptimizer-blog.vercel.app/docs
+ - **Dashboard & model config:** https://promptoptimizer-blog.vercel.app/dashboard
+ - **Troubleshooting:** https://promptoptimizer-blog.vercel.app/docs/troubleshooting
+ - **Community support:** GitHub Discussions
+ - **Email support:** support@promptoptimizer.help (Creator/Innovator plans)
+ - **Enterprise:** enterprise@promptoptimizer.help
+
+ ---
 
388
+ *Get started with 5 free optimizations at [promptoptimizer-blog.vercel.app/pricing](https://promptoptimizer-blog.vercel.app/pricing)*
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "mcp-prompt-optimizer",
- "version": "3.0.3",
+ "version": "3.0.4",
  "description": "Professional cloud-based MCP server for AI-powered prompt optimization with intelligent context detection, Bayesian optimization, AG-UI real-time optimization, template auto-save, optimization insights, personal model configuration via WebUI, team collaboration, enterprise-grade features, production resilience, and startup validation. Universal compatibility with Claude Desktop, Cursor, Windsurf, and 17+ MCP clients.",
  "main": "index.js",
  "bin": {