converse-mcp-server 1.16.0 → 1.17.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -209,8 +209,9 @@ SUMMARIZATION_MODEL=gpt-5-nano # Default: gpt-5-nano
  - **gpt-5**: Latest flagship model (400K context, 128K output) - Superior reasoning, code generation, and analysis
  - **gpt-5-mini**: Faster, cost-efficient GPT-5 (400K context, 128K output) - Well-defined tasks, precise prompts
  - **gpt-5-nano**: Fastest, most cost-efficient GPT-5 (400K context, 128K output) - Summarization, classification
+ - **gpt-5-pro**: Most advanced reasoning model (400K context, 272K output) - Hardest problems, extended compute time (EXPENSIVE)
  - **o3**: Strong reasoning (200K context)
- - **o3-mini**: Fast O3 variant (200K context)
+ - **o3-mini**: Fast O3 variant (200K context)
  - **o3-pro**: Professional-grade reasoning (200K context) - EXTREMELY EXPENSIVE
  - **o3-deep-research**: Deep research model (200K context) - 30-90 min runtime
  - **o4-mini**: Latest reasoning model (200K context)
package/docs/PROVIDERS.md CHANGED
@@ -9,6 +9,7 @@ This guide documents all supported AI providers in the Converse MCP Server and t
  - **Get Key**: [platform.openai.com/api-keys](https://platform.openai.com/api-keys)
  - **Environment Variable**: `OPENAI_API_KEY`
  - **Supported Models**:
+ - `gpt-5`, `gpt-5-mini`, `gpt-5-nano`, `gpt-5-pro` - GPT-5 family with advanced reasoning
  - `o3`, `o3-mini`, `o3-pro` - Advanced reasoning models
  - `o4-mini` - Latest fast reasoning model
  - `gpt-4.1` - Large context (1M tokens)
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "converse-mcp-server",
- "version": "1.16.0",
+ "version": "1.17.0",
  "description": "Converse MCP Server - Converse with other LLMs with chat and consensus tools",
  "type": "module",
  "main": "src/index.js",
@@ -141,12 +141,12 @@ ${formatProviderModels('OpenRouter', allModels.openrouter)}
 
  ### For Large Context Windows
  - **1M+ Tokens**: gpt-4.1 (1M), all Gemini models (1M)
- - **400K Tokens**: gpt-5 family (gpt-5, gpt-5-mini, gpt-5-nano)
+ - **400K Tokens**: gpt-5 family (gpt-5, gpt-5-mini, gpt-5-nano, gpt-5-pro)
  - **256K Tokens**: grok-4 series
- - **200K Tokens**: gpt-5 series, o4-mini
+ - **200K Tokens**: o3 series, o4-mini
 
  ### Special Features
- - **Web Search**: gpt-5, gpt-5-mini, gpt-5 series, o4-mini, gpt-4 series, gemini models with grounding, grok-4
+ - **Web Search**: gpt-5, gpt-5-mini, gpt-5-pro, o3 series, o4-mini, gpt-4 series, gemini models with grounding, grok-4
  - **Thinking Mode**: gpt-5 series (reasoning_effort), gemini models (thinking budget)
  - **Image Support**: All models except gemini-2.0-flash-lite and grok-code-fast-1
 
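The context-window tiers in the hunk above lend themselves to a simple lookup table. The sketch below is illustrative only — the `CONTEXT_TIERS` table and `pickModelsForContext` helper are hypothetical names, not code from this package, and the per-tier model lists are abbreviated from the documented tiers:

```javascript
// Hypothetical sketch: choosing candidate models by required context window,
// based on the tiers documented above. Tiers are sorted largest-first.
const CONTEXT_TIERS = [
  { window: 1000000, models: ['gpt-4.1'] },
  { window: 400000,  models: ['gpt-5', 'gpt-5-mini', 'gpt-5-nano', 'gpt-5-pro'] },
  { window: 256000,  models: ['grok-4'] },
  { window: 200000,  models: ['o3', 'o3-mini', 'o3-pro', 'o4-mini'] }
];

// Return the models of the smallest tier that still fits the prompt,
// or an empty list if no documented tier is large enough.
function pickModelsForContext(promptTokens) {
  const fitting = CONTEXT_TIERS.filter(t => t.window >= promptTokens);
  if (fitting.length === 0) return [];
  return fitting[fitting.length - 1].models; // smallest sufficient tier
}

console.log(pickModelsForContext(300000)); // → ['gpt-5', 'gpt-5-mini', 'gpt-5-nano', 'gpt-5-pro']
```

Preferring the smallest sufficient tier keeps cheaper, faster models in play when a 1M-token window is not actually needed.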
@@ -52,6 +52,21 @@ const SUPPORTED_MODELS = {
   description: 'Fastest, most cost-efficient GPT-5 (400K context, 128K output) - Summarization, classification',
   aliases: ['gpt5-nano', 'gpt-5nano', 'gpt 5 nano', 'gpt-5-nano-2025-08-07']
   },
+ 'gpt-5-pro': {
+ modelName: 'gpt-5-pro',
+ friendlyName: 'OpenAI (GPT-5 Pro)',
+ contextWindow: 400000,
+ maxOutputTokens: 272000,
+ supportsStreaming: false, // GPT-5 Pro doesn't support streaming
+ supportsImages: true,
+ supportsTemperature: false, // GPT-5 models don't support temperature
+ supportsWebSearch: true,
+ supportsResponsesAPI: true,
+ supportsDeepResearch: false, // Not a deep research model
+ timeout: 3600000, // 60 minutes - some requests may take several minutes
+ description: 'Most advanced reasoning model (400K context, 272K output) - Hardest problems, extended compute time (EXPENSIVE)',
+ aliases: ['gpt5-pro', 'gpt-5pro', 'gpt 5 pro', 'gpt-5 pro', 'gpt-5-pro-2025-10-06']
+ },
  'o3': {
  modelName: 'o3',
  friendlyName: 'OpenAI (O3)',
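
The new `gpt-5-pro` entry carries an `aliases` list (including a dated snapshot name), which suggests user input is normalized before lookup. The sketch below shows one way such a table can be resolved to a canonical key; the `resolveModel` helper and the trimmed-down table are illustrative assumptions, not the package's actual implementation:

```javascript
// Hypothetical sketch: resolving a user-supplied model name against a
// SUPPORTED_MODELS-style table with per-model alias lists. The table here
// is abbreviated; aliases for gpt-5-pro are copied from the diff above.
const SUPPORTED_MODELS = {
  'gpt-5-pro': {
    contextWindow: 400000,
    maxOutputTokens: 272000,
    supportsStreaming: false,
    aliases: ['gpt5-pro', 'gpt-5pro', 'gpt 5 pro', 'gpt-5 pro', 'gpt-5-pro-2025-10-06']
  },
  'o4-mini': {
    contextWindow: 200000,
    aliases: ['o4mini', 'o4 mini']
  }
};

// Normalize, then try the canonical key first and alias lists second.
function resolveModel(name) {
  const needle = name.trim().toLowerCase();
  if (needle in SUPPORTED_MODELS) return needle;
  for (const [canonical, spec] of Object.entries(SUPPORTED_MODELS)) {
    if (spec.aliases?.includes(needle)) return canonical;
  }
  return null; // unknown model
}

console.log(resolveModel('gpt 5 pro'));            // → 'gpt-5-pro'
console.log(resolveModel('GPT-5-Pro-2025-10-06')); // → 'gpt-5-pro'
```

Because `supportsStreaming` is `false` for `gpt-5-pro`, a caller resolving that key would need to fall back to a non-streaming request path and tolerate the long `timeout` the entry declares.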