architectgbt-mcp 0.2.1 → 0.2.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,189 +1,162 @@
  # ArchitectGBT MCP Server

- AI model recommendation engine for Cursor, Claude Desktop, and Windsurf.
+ **Model Context Protocol (MCP) server for AI model recommendations and code templates.**

- Get instant AI model recommendations without leaving your IDE.
+ Get instant AI model recommendations for your projects directly in Cursor IDE or Claude Desktop. Compare **50+ models** from OpenAI, Anthropic, Google, Meta, and Mistral with pricing, capabilities, and production-ready code templates.

- ## Features
-
- - šŸŽÆ **Smart Recommendations** - Get the best AI model for your use case
- - šŸ“ **Code Templates** - Production-ready integration code
- - šŸ“Š **Model Database** - Compare 50+ AI models with pricing
-
- ## Installation
+ [![npm version](https://img.shields.io/npm/v/architectgbt-mcp.svg)](https://www.npmjs.com/package/architectgbt-mcp)
+ [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

- ### For Cursor
+ ## Features

- Add to your `~/.cursor/mcp.json` (or `C:\Users\YourName\.cursor\mcp.json` on Windows):
+ ### šŸŽÆ AI Model Recommendations
+ - **Personalized suggestions** based on your project description
+ - **Budget optimization** (low/medium/high/unlimited)
+ - **Priority matching** (cost/speed/quality/balanced)
+ - **Smart analysis** with reasoning, pros/cons, and alternatives
+ - **Cost estimates** with realistic token usage calculations

- ```json
- {
-   "mcpServers": {
-     "architectgbt": {
-       "command": "npx",
-       "args": ["-y", "architectgbt-mcp"]
-     }
-   }
- }
- ```
+ ### šŸ“Š Model Database
+ - **50+ AI models** across all major providers
+ - **Real-time pricing** per 1M tokens (input/output)
+ - **Detailed specs** (context windows, capabilities, speed rankings)
+ - **Provider filtering** (OpenAI, Anthropic, Google, Meta, Mistral)

- **Important:** Restart Cursor completely after adding the configuration.
+ ### šŸ’» Code Templates
+ - **Production-ready integrations** for Anthropic, OpenAI, and Google
+ - **TypeScript and Python** support
+ - **Complete examples** with installation, env setup, and usage
+ - **Best practices** including error handling and streaming

- ### For Claude Desktop
+ ## Quick Start

- Add to your Claude Desktop config (`%APPDATA%\Claude\claude_desktop_config.json` on Windows):
+ ### For Cursor IDE

+ 1. Create `.cursor/mcp.json` in your project:
  ```json
  {
    "mcpServers": {
      "architectgbt": {
        "command": "npx",
-       "args": ["architectgbt-mcp"]
+       "args": ["-y", "architectgbt-mcp@latest"]
      }
    }
  }
  ```

- ### For Windsurf
+ 2. Restart Cursor

- Add to your MCP configuration:
+ 3. Ask Claude:
+ - "Recommend an AI model for a customer support chatbot"
+ - "Show me all available AI models"
+ - "Give me Claude Sonnet code in TypeScript"

- ```json
- {
-   "mcpServers": {
-     "architectgbt": {
-       "command": "npx",
-       "args": ["architectgbt-mcp"]
-     }
-   }
- }
- ```
+ ## Pricing

- ## Available Tools
+ | Tier | Recommendations | Cost | API Key |
+ |------|----------------|------|---------|
+ | **Free** | 3/day | $0 | Not needed |
+ | **Pro** | Unlimited | $15/month | Required |

- ### `list_models`
+ [Get your API key →](https://architectgbt.com/dashboard/settings)

- List available AI models with pricing information.
+ ## Tools

- **Example prompts:**
- - "List all Anthropic models"
- - "Show me all models"
- - "What OpenAI models are available?"
+ ### `list_models`
+ Lists all 50+ available AI models with pricing.

- **Example output:**
- ```
- ## šŸ“Š Available AI Models
-
- | Model | Provider | Input $/1M | Output $/1M |
- |-------|----------|------------|-------------|
- | Claude Haiku 4.5 | Anthropic | $0.80 | $4.00 |
- | Claude Opus 4.5 | Anthropic | $15.00 | $75.00 |
- | Claude Sonnet 4.5 | Anthropic | $3.00 | $15.00 |
- | GPT-4o | OpenAI | $2.50 | $10.00 |
- | GPT-4o mini | OpenAI | $0.15 | $0.60 |
- ...
-
- *Showing 10 models. Use `get_ai_recommendation` for personalized suggestions.*
  ```
+ User: "Show me all AI models"

- ### `get_code_template`
-
- Get production-ready integration code for any AI model.
+ Response:
+ Available AI Models (50 total)
+ ======================================================================

- **Supported providers:** Anthropic (Claude), OpenAI (GPT), Google (Gemini)
- **Languages:** TypeScript, Python
-
- **Example prompts:**
- - "Give me a TypeScript template for Claude"
- - "Python code for OpenAI GPT-4"
- - "Gemini integration in TypeScript"
-
- **Example output:**
- ```typescript
- // āœ… Installation
- npm install @anthropic-ai/sdk
-
- // šŸ”‘ Environment Variables
- ANTHROPIC_API_KEY=your_api_key_here
-
- // šŸ“¦ Code
- import Anthropic from '@anthropic-ai/sdk';
-
- const client = new Anthropic({
-   apiKey: process.env.ANTHROPIC_API_KEY,
- });
+ Anthropic:
+   • Claude Haiku 4.5    $ 0.80 / $ 4.00   (in/out per 1M tokens)
+   • Claude Sonnet 4.5   $ 3.00 / $ 15.00  (in/out per 1M tokens)
+ ...
+ ```

- const message = await client.messages.create({
-   model: 'claude-3-5-sonnet-20241022',
-   max_tokens: 1024,
-   messages: [
-     { role: 'user', content: 'Hello, Claude!' }
-   ],
- });
+ **Parameters:**
+ - `provider` (optional): Filter by "OpenAI" | "Anthropic" | "Google" | "Meta" | "Mistral"
+ - `limit` (optional): Number of models (default: 50)

- console.log(message.content);
+ ### `get_ai_recommendation`
+ Get personalized AI model recommendations.

- // šŸ’” Usage Tips
- // - Use streaming for real-time responses
- // - Add system prompts for better control
- // - Handle rate limits with retries
  ```
+ User: "I need an AI for a chatbot handling 10k requests/day"

- ### `get_ai_recommendation`
+ Response:
+ šŸŽÆ AI Model Recommendation — Analysis Complete!

- Get personalized AI model recommendations based on your project description.
+ ✨ TOP MATCH (94% match)

- **Note:** Requires authentication. For full recommendations with cost analysis and reasoning, visit [architectgbt.com](https://architectgbt.com) and sign up for a free account.
+ Claude Haiku 4.5
+ Provider: Anthropic
+ Estimated Cost: $0.0240
+ Context Window: 200,000 tokens

- **Example prompts:**
- - "What AI model should I use for a customer support chatbot?"
- - "Recommend a model for code generation on a budget"
- - "Best model for document analysis with 100K context?"
+ šŸ’” Why this model?
+ Perfect for customer support with fast responses, strong reasoning...

- **Example response:**
+ āœ… Pros:
+ • Extremely fast (sub-second)
+ • Cost-effective at scale
+ ...
  ```
- āŒ Authentication Required
-
- The ArchitectGBT API requires authentication. To get AI model recommendations:
+ **Parameters:**
+ - `prompt` (required): Your project description
+ - `budget` (optional): "low" | "medium" | "high" | "unlimited"
+ - `priority` (optional): "cost" | "speed" | "quality" | "balanced"
+ - `priority` (optional): "cost" | "speed" | "quality" | "balanced"
148
114
 
149
- 1. Visit https://architectgbt.com
150
- 2. Sign up for a free account (free tier available!)
151
- 3. Use the website directly for personalized recommendations
115
+ ### `get_code_template`
116
+ Get production-ready code templates.
152
117
 
153
- Alternatively, you can:
154
- - Use `list_models` to browse available models
155
- - Use `get_code_template` to get integration code for any model
118
+ **Parameters:**
119
+ - `provider` (required): "anthropic" | "openai" | "google"
120
+ - `language` (required): "typescript" | "python"
156
121
 
157
- For your query: "customer support chatbot"
158
- I recommend visiting the website for a personalized analysis with cost estimates and reasoning.
159
- ```
122
+ ## Installation Options
160
123
 
161
- ## Development
124
+ ### Option 1: Cursor IDE (Recommended)
162
125
 
163
- ```bash
164
- # Install dependencies
165
- npm install
126
+ See Quick Start above.
166
127
 
167
- # Run in development mode
168
- npm run dev
128
+ ### Option 2: Claude Desktop
169
129
 
170
- # Build for production
171
- npm run build
130
+ **macOS:** `~/Library/Application Support/Claude/claude_desktop_config.json`
131
+ **Windows:** `%APPDATA%\Claude\claude_desktop_config.json`
172
132
 
173
- # Start production server
174
- npm start
133
+ ```json
134
+ {
135
+ "mcpServers": {
136
+ "architectgbt": {
137
+ "command": "npx",
138
+ "args": ["-y", "architectgbt-mcp@latest"],
139
+ "env": {
140
+ "ARCHITECTGBT_API_KEY": "your_key_here"
141
+ }
142
+ }
143
+ }
144
+ }
175
145
  ```
176
146
 
177
- ## Local Testing
147
+ ### Option 3: Unlimited Access (API Key)
178
148
 
179
- To test locally before publishing, update your MCP config to point to the built file:
149
+ Add your API key to `.cursor/mcp.json`:
180
150
 
181
151
  ```json
182
152
  {
183
153
  "mcpServers": {
184
154
  "architectgbt": {
185
- "command": "node",
186
- "args": ["C:/Users/pravi/workspacenew/architectgbt-mcp/dist/index.js"]
155
+ "command": "npx",
156
+ "args": ["-y", "architectgbt-mcp@latest"],
157
+ "env": {
158
+ "ARCHITECTGBT_API_KEY": "agbt_your_key_here"
159
+ }
187
160
  }
188
161
  }
189
162
  }
@@ -191,15 +164,57 @@ To test locally before publishing, update your MCP config to point to the built

  ## Environment Variables

- | Variable | Description | Default |
- |----------|-------------|---------|
- | `ARCHITECTGBT_API_URL` | API base URL | `https://architectgbt.com` |
+ - `ARCHITECTGBT_API_KEY` - Your API key for unlimited access (optional)
+ - `ARCHITECTGBT_API_URL` - Custom API URL (default: https://architectgbt.com)
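A minimal sketch of how a wrapper script might pick up these two variables; the variable names and default URL come from the list above, while the surrounding code is a hypothetical illustration rather than the server's own implementation:

```typescript
// Hypothetical wrapper: reads the two documented environment variables.
const apiKey = process.env.ARCHITECTGBT_API_KEY; // optional; unlocks unlimited use
const apiUrl =
  process.env.ARCHITECTGBT_API_URL ?? "https://architectgbt.com"; // documented default

console.log(`Using ${apiUrl} (${apiKey ? "Pro" : "Free"} tier)`);
```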

- ## License
+ ## Troubleshooting
+
+ ### "Daily Limit Reached"
+ You've used 3 free recommendations. Options:
+ 1. Wait for daily reset (midnight UTC)
+ 2. [Get API key](https://architectgbt.com/dashboard/settings) for unlimited access

- MIT
+ ### "API Key Invalid"
+ 1. Check key starts with `agbt_`
+ 2. Regenerate from [Settings](https://architectgbt.com/dashboard/settings)
+ 3. Ensure it's in the `env` section of your config
+
+ ### Models Not Showing
+ 1. Check internet connection
+ 2. Verify: https://architectgbt.com/api/models
+ 3. Try: `npx architectgbt-mcp@latest` manually
+
+ ### MCP Not Loading
+ 1. Node.js >= 18.0.0 required
+ 2. Check config JSON syntax
+ 3. Restart Cursor after changes
+
+ ## Development
+
+ ```bash
+ git clone https://github.com/yourusername/architectgbt-mcp.git
+ cd architectgbt-mcp
+ npm install
+ npm run build
+ npm run dev
+ ```

  ## Links

- - [ArchitectGBT Website](https://architectgbt.com)
- - [MCP Documentation](https://modelcontextprotocol.io)
+ - **Website**: [architectgbt.com](https://architectgbt.com)
+ - **Docs**: [architectgbt.com/docs/mcp-integration](https://architectgbt.com/docs/mcp-integration)
+ - **API Keys**: [architectgbt.com/dashboard/settings](https://architectgbt.com/dashboard/settings)
+ - **NPM**: [npmjs.com/package/architectgbt-mcp](https://www.npmjs.com/package/architectgbt-mcp)
+
+ ## Support
+
+ - **Email**: hello@architectgbt.com
+ - **Issues**: [GitHub](https://github.com/yourusername/architectgbt-mcp/issues)
+
+ ## License
+
+ MIT Ā© ArchitectGBT
+
+ ---
+
+ **Built with ā¤ļø for developers who ship fast**
@@ -108,28 +108,87 @@ export async function handleGetRecommendation(args) {
      }
  }
  function formatRecommendation(data) {
-     const { recommendation, reasoning, alternatives, model } = data;
-     let result = `## šŸŽÆ AI Model Recommendation\n\n`;
-     if (model) {
-         result += `### Recommended: ${model.name}\n`;
-         result += `- **Provider:** ${model.provider}\n`;
-         result += `- **Model ID:** ${model.model_id || "N/A"}\n`;
-         if (model.input_price || model.output_price) {
-             result += `- **Pricing:** $${model.input_price}/1M input, $${model.output_price}/1M output\n`;
-         }
-         if (model.context_window) {
-             result += `- **Context Window:** ${model.context_window.toLocaleString()} tokens\n`;
+     // Handle conversational/off-topic responses
+     if (data.off_topic || data.needs_clarification) {
+         let result = data.message || '';
+         if (data.questions && Array.isArray(data.questions) && data.questions.length > 0) {
+             result += `\n\nšŸ“‹ To help me recommend the perfect AI model, please tell me:\n`;
+             data.questions.forEach((q, i) => {
+                 result += `${i + 1}. ${q}\n`;
+             });
          }
+         return result;
+     }
+     const recommendations = data.recommendations || [];
+     if (recommendations.length === 0) {
+         return `āŒ No recommendations found. Try describing your project in more detail.`;
+     }
+     let result = `šŸŽÆ AI Model Recommendation — Analysis Complete!\n`;
+     result += `${"=".repeat(70)}\n\n`;
+     // Main recommendation
+     const top = recommendations[0];
+     result += `✨ TOP MATCH (${top.match_score || top.score || 95}% match)\n\n`;
+     result += `${top.model_name || top.name}\n`;
+     result += `Provider: ${top.provider}\n`;
+     // Pricing
+     if (top.estimated_cost) {
+         const cost = top.estimated_cost;
+         result += `Estimated Cost: $${cost.total_cost_usd?.toFixed(4) || '0.0000'}\n`;
+         result += `  └─ ${cost.input_tokens?.toLocaleString() || '0'} input + ${cost.output_tokens?.toLocaleString() || '0'} output tokens\n`;
+     }
+     else if (top.input_price !== undefined && top.output_price !== undefined) {
+         result += `Pricing: $${top.input_price}/1M input • $${top.output_price}/1M output\n`;
+     }
+     // Capabilities
+     if (top.capabilities?.context_window || top.context_window) {
+         const contextWindow = top.capabilities?.context_window || top.context_window;
+         result += `Context Window: ${contextWindow.toLocaleString()} tokens\n`;
+     }
+     // Reasoning
+     if (top.reasoning) {
+         result += `\nšŸ’” Why this model?\n${top.reasoning}\n`;
      }
-     if (reasoning) {
-         result += `\n### Why This Model?\n${reasoning}\n`;
+     // Pros and Cons
+     if (top.pros || top.cons) {
+         result += `\n`;
+         if (top.pros && top.pros.length > 0) {
+             result += `āœ… Pros:\n`;
+             top.pros.forEach((pro) => {
+                 result += `  • ${pro}\n`;
+             });
+         }
+         if (top.cons && top.cons.length > 0) {
+             result += `āš ļø Cons:\n`;
+             top.cons.forEach((con) => {
+                 result += `  • ${con}\n`;
+             });
+         }
      }
-     if (alternatives && alternatives.length > 0) {
-         result += `\n### Alternatives\n`;
-         alternatives.forEach((alt, i) => {
-             result += `${i + 1}. **${alt.name}** - ${alt.reason || alt.description || ""}\n`;
+     // Alternative recommendations
+     if (recommendations.length > 1) {
+         result += `\n${"─".repeat(70)}\n`;
+         result += `\nAlternative Options:\n\n`;
+         recommendations.slice(1, 3).forEach((rec, i) => {
+             result += `${i + 2}. ${rec.model_name || rec.name} (${rec.match_score || rec.score || '??'}% match)\n`;
+             result += `   Provider: ${rec.provider}\n`;
+             if (rec.estimated_cost) {
+                 result += `   Cost: $${rec.estimated_cost.total_cost_usd?.toFixed(4) || '0.0000'}\n`;
+             }
+             else if (rec.input_price !== undefined) {
+                 result += `   Pricing: $${rec.input_price}/1M in • $${rec.output_price}/1M out\n`;
+             }
+             if (rec.reasoning) {
+                 result += `   Reason: ${rec.reasoning.substring(0, 150)}${rec.reasoning.length > 150 ? '...' : ''}\n`;
+             }
+             result += `\n`;
          });
      }
-     result += `\n---\n*Powered by [ArchitectGBT](https://architectgbt.com)*`;
+     // Analysis summary
+     if (data.analysis_summary) {
+         result += `${"─".repeat(70)}\n`;
+         result += `\nšŸ“Š Analysis Summary:\n${data.analysis_summary}\n\n`;
+     }
+     result += `${"=".repeat(70)}\n`;
+     result += `šŸ’Ž Powered by ArchitectGBT • https://architectgbt.com`;
      return result;
  }
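For reference, the shape of the response payload that the new `formatRecommendation` reads can be inferred from the fields it accesses. A hedged TypeScript sketch; the real API response may carry additional or differently named properties:

```typescript
// Inferred from the fields formatRecommendation() accesses above.
interface EstimatedCost {
  total_cost_usd?: number;
  input_tokens?: number;
  output_tokens?: number;
}

interface Recommendation {
  model_name?: string;
  name?: string;
  provider: string;
  match_score?: number;
  score?: number;
  estimated_cost?: EstimatedCost;
  input_price?: number;  // $ per 1M input tokens
  output_price?: number; // $ per 1M output tokens
  capabilities?: { context_window?: number };
  context_window?: number;
  reasoning?: string;
  pros?: string[];
  cons?: string[];
}

interface RecommendationResponse {
  off_topic?: boolean;
  needs_clarification?: boolean;
  message?: string;
  questions?: string[];
  recommendations?: Recommendation[];
  analysis_summary?: string;
}
```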
@@ -13,7 +13,7 @@ export const listModelsTool = {
          },
          limit: {
              type: "number",
-             description: "Maximum number of models to return (default: 10)",
+             description: "Maximum number of models to return (default: 50)",
          },
      },
      required: [],
@@ -23,7 +23,7 @@ const InputSchema = z.object({
      provider: z
          .enum(["OpenAI", "Anthropic", "Google", "Meta", "Mistral", "all"])
          .optional(),
-     limit: z.number().default(10),
+     limit: z.number().default(50),
  });
  export async function handleListModels(args) {
      const input = InputSchema.parse(args);
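The schema change above only moves the default: with `z.number().default(50)`, a call that omits `limit` now parses to 50 instead of 10. A small self-contained check (assumes `zod` is installed):

```typescript
import { z } from "zod";

// Mirrors the updated schema from the diff.
const InputSchema = z.object({
  provider: z
    .enum(["OpenAI", "Anthropic", "Google", "Meta", "Mistral", "all"])
    .optional(),
  limit: z.number().default(50),
});

console.log(InputSchema.parse({}));                        // { limit: 50 }
console.log(InputSchema.parse({ provider: "Anthropic" })); // { provider: "Anthropic", limit: 50 }
```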
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "architectgbt-mcp",
-   "version": "0.2.1",
+   "version": "0.2.3",
    "description": "Model Context Protocol server for ArchitectGBT - AI architecture recommendations",
    "type": "module",
    "bin": {
@@ -126,36 +126,100 @@ export async function handleGetRecommendation(args: unknown) {
  }

  function formatRecommendation(data: any): string {
-   const { recommendation, reasoning, alternatives, model } = data;
+   // Handle conversational/off-topic responses
+   if (data.off_topic || data.needs_clarification) {
+     let result = data.message || '';
+
+     if (data.questions && Array.isArray(data.questions) && data.questions.length > 0) {
+       result += `\n\nšŸ“‹ To help me recommend the perfect AI model, please tell me:\n`;
+       data.questions.forEach((q: string, i: number) => {
+         result += `${i + 1}. ${q}\n`;
+       });
+     }
+
+     return result;
+   }

-   let result = `## šŸŽÆ AI Model Recommendation\n\n`;
+   const recommendations = data.recommendations || [];
+
+   if (recommendations.length === 0) {
+     return `āŒ No recommendations found. Try describing your project in more detail.`;
+   }

-   if (model) {
-     result += `### Recommended: ${model.name}\n`;
-     result += `- **Provider:** ${model.provider}\n`;
-     result += `- **Model ID:** ${model.model_id || "N/A"}\n`;
+   let result = `šŸŽÆ AI Model Recommendation — Analysis Complete!\n`;
+   result += `${"=".repeat(70)}\n\n`;
+
+   // Main recommendation
+   const top = recommendations[0];
+   result += `✨ TOP MATCH (${top.match_score || top.score || 95}% match)\n\n`;
+   result += `${top.model_name || top.name}\n`;
+   result += `Provider: ${top.provider}\n`;
+
+   // Pricing
+   if (top.estimated_cost) {
+     const cost = top.estimated_cost;
+     result += `Estimated Cost: $${cost.total_cost_usd?.toFixed(4) || '0.0000'}\n`;
+     result += `  └─ ${cost.input_tokens?.toLocaleString() || '0'} input + ${cost.output_tokens?.toLocaleString() || '0'} output tokens\n`;
+   } else if (top.input_price !== undefined && top.output_price !== undefined) {
+     result += `Pricing: $${top.input_price}/1M input • $${top.output_price}/1M output\n`;
+   }

-     if (model.input_price || model.output_price) {
-       result += `- **Pricing:** $${model.input_price}/1M input, $${model.output_price}/1M output\n`;
-     }
+   // Capabilities
+   if (top.capabilities?.context_window || top.context_window) {
+     const contextWindow = top.capabilities?.context_window || top.context_window;
+     result += `Context Window: ${contextWindow.toLocaleString()} tokens\n`;
+   }

-     if (model.context_window) {
-       result += `- **Context Window:** ${model.context_window.toLocaleString()} tokens\n`;
-     }
+   // Reasoning
+   if (top.reasoning) {
+     result += `\nšŸ’” Why this model?\n${top.reasoning}\n`;
    }

-   if (reasoning) {
-     result += `\n### Why This Model?\n${reasoning}\n`;
+   // Pros and Cons
+   if (top.pros || top.cons) {
+     result += `\n`;
+     if (top.pros && top.pros.length > 0) {
+       result += `āœ… Pros:\n`;
+       top.pros.forEach((pro: string) => {
+         result += `  • ${pro}\n`;
+       });
+     }
+     if (top.cons && top.cons.length > 0) {
+       result += `āš ļø Cons:\n`;
+       top.cons.forEach((con: string) => {
+         result += `  • ${con}\n`;
+       });
+     }
    }

-   if (alternatives && alternatives.length > 0) {
-     result += `\n### Alternatives\n`;
-     alternatives.forEach((alt: any, i: number) => {
-       result += `${i + 1}. **${alt.name}** - ${alt.reason || alt.description || ""}\n`;
+   // Alternative recommendations
+   if (recommendations.length > 1) {
+     result += `\n${"─".repeat(70)}\n`;
+     result += `\nAlternative Options:\n\n`;
+
+     recommendations.slice(1, 3).forEach((rec: any, i: number) => {
+       result += `${i + 2}. ${rec.model_name || rec.name} (${rec.match_score || rec.score || '??'}% match)\n`;
+       result += `   Provider: ${rec.provider}\n`;
+       if (rec.estimated_cost) {
+         result += `   Cost: $${rec.estimated_cost.total_cost_usd?.toFixed(4) || '0.0000'}\n`;
+       } else if (rec.input_price !== undefined) {
+         result += `   Pricing: $${rec.input_price}/1M in • $${rec.output_price}/1M out\n`;
+       }
+       if (rec.reasoning) {
+         result += `   Reason: ${rec.reasoning.substring(0, 150)}${rec.reasoning.length > 150 ? '...' : ''}\n`;
+       }
+       result += `\n`;
      });
    }

-   result += `\n---\n*Powered by [ArchitectGBT](https://architectgbt.com)*`;
+   // Analysis summary
+   if (data.analysis_summary) {
+     result += `${"─".repeat(70)}\n`;
+     result += `\nšŸ“Š Analysis Summary:\n${data.analysis_summary}\n\n`;
+   }
+
+   result += `${"=".repeat(70)}\n`;
+   result += `šŸ’Ž Powered by ArchitectGBT • https://architectgbt.com`;

    return result;
  }
@@ -16,7 +16,7 @@ export const listModelsTool = {
    },
    limit: {
      type: "number",
-     description: "Maximum number of models to return (default: 10)",
+     description: "Maximum number of models to return (default: 50)",
    },
  },
  required: [],
@@ -27,7 +27,7 @@ const InputSchema = z.object({
    provider: z
      .enum(["OpenAI", "Anthropic", "Google", "Meta", "Mistral", "all"])
      .optional(),
-   limit: z.number().default(10),
+   limit: z.number().default(50),
  });

  export async function handleListModels(args: unknown) {