consult-llm-mcp 2.13.1 → 2.13.3
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +17 -0
- package/README.md +27 -19
- package/package.json +5 -5
package/CHANGELOG.md
CHANGED
@@ -1,5 +1,22 @@
 # Changelog
 
+## v2.13.3 (2026-04-23)
+
+- Added gpt-5.5 model support ($5/$30 per million tokens). The `openai` selector
+  now resolves to gpt-5.5, and the Cursor CLI backend automatically appends the
+  reasoning effort suffix when routing through cursor-agent
+
+## v2.13.2 (2026-04-22)
+
+- Added Anthropic provider support with the `claude-opus-4-7` model. Configure
+  with `ANTHROPIC_API_KEY`; select via the `anthropic` selector or the exact
+  model ID. API backend only (no CLI backend).
+- Monitor: press `K` on an active consultation to kill a stuck agent process
+  after confirming
+- Fixed cursor-cli backend failing with "Cannot use this model: gpt-5.4" when
+  using the `openai` selector, by automatically appending the reasoning effort
+  suffix
+
 ## v2.13.1 (2026-04-08)
 
 - Monitor now shows the full error message in the detail view when a
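The v2.13.3 selector change above can be exercised through configuration. A minimal sketch (env var name taken from the package README; the resolution behavior is as the changelog describes):

```shell
# Per the v2.13.3 notes, the `openai` selector now resolves to gpt-5.5
# at startup; an exact model ID still pins a specific model instead.
export CONSULT_LLM_DEFAULT_MODEL=openai
# export CONSULT_LLM_DEFAULT_MODEL=gpt-5.4  # or pin the previous default
```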
package/README.md
CHANGED
@@ -1,7 +1,7 @@
 # consult-llm-mcp
 
 An MCP server that lets Claude Code consult stronger AI models (GPT-5.4, Gemini
-3.1 Pro, DeepSeek Reasoner, MiniMax M2.7) when Sonnet has you running in circles and you need
+3.1 Pro, Claude Opus 4.7, DeepSeek Reasoner, MiniMax M2.7) when Sonnet has you running in circles and you need
 to bring in the heavy artillery. Supports multi-turn conversations.
 
 ```
@@ -28,8 +28,8 @@ to bring in the heavy artillery. Supports multi-turn conversations.
 
 ## Features
 
-- Query powerful AI models (GPT-5.4, Gemini 3.1 Pro,
-  M2.7) with
+- Query powerful AI models (GPT-5.4, Gemini 3.1 Pro, Claude Opus 4.7, DeepSeek
+  Reasoner, MiniMax M2.7) with
   relevant files as context
 - Include git changes for code review
 - Comprehensive logging with cost estimation (if using API)
@@ -87,6 +87,7 @@ to bring in the heavy artillery. Supports multi-turn conversations.
 claude mcp add consult-llm \
   -e OPENAI_API_KEY=your_openai_key \
   -e GEMINI_API_KEY=your_gemini_key \
+  -e ANTHROPIC_API_KEY=your_anthropic_key \
   -e DEEPSEEK_API_KEY=your_deepseek_key \
   -e MINIMAX_API_KEY=your_minimax_key \
   -- npx -y consult-llm-mcp
@@ -550,10 +551,11 @@ See the "Using web mode..." example above for a concrete transcript.
   mode)
 - `DEEPSEEK_API_KEY` - Your DeepSeek API key (required for DeepSeek models)
 - `MINIMAX_API_KEY` - Your MiniMax API key (required for MiniMax models)
+- `ANTHROPIC_API_KEY` - Your Anthropic API key (required for Claude models)
 - `CONSULT_LLM_DEFAULT_MODEL` - Override the default model (optional)
-  - Accepts selectors (`gemini`, `openai`, `deepseek`, `minimax`)
-    IDs
-    (`gpt-5.4`, `gemini-3.1-pro-preview`, etc.)
+  - Accepts selectors (`gemini`, `openai`, `anthropic`, `deepseek`, `minimax`)
+    or exact model IDs
+    (`gpt-5.4`, `gemini-3.1-pro-preview`, `claude-opus-4-7`, etc.)
   - Selectors are resolved to the best available model at startup
 - `CONSULT_LLM_GEMINI_BACKEND` - Backend for Gemini models (optional)
   - Options: `api` (default), `gemini-cli`, `cursor-cli`, `opencode`
@@ -563,6 +565,8 @@ See the "Using web mode..." example above for a concrete transcript.
   - Options: `api` (default), `opencode`
 - `CONSULT_LLM_MINIMAX_BACKEND` - Backend for MiniMax models (optional)
   - Options: `api` (default), `opencode`
+- `CONSULT_LLM_ANTHROPIC_BACKEND` - Backend for Anthropic models (optional)
+  - Options: `api` (default)
 - `CONSULT_LLM_ALLOWED_MODELS` - Restrict which concrete models can be used
   (optional)
   - Comma-separated list, e.g., `gpt-5.4,gemini-3.1-pro-preview`
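The additions in this hunk suggest a minimal Anthropic setup along these lines (a sketch assembled from the README's env var names; the key value is a placeholder):

```shell
# Minimal Anthropic configuration per the options added above.
export ANTHROPIC_API_KEY=your_anthropic_key
export CONSULT_LLM_DEFAULT_MODEL=anthropic   # resolves to claude-opus-4-7
export CONSULT_LLM_ANTHROPIC_BACKEND=api     # per the diff, the only option
```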
@@ -574,7 +578,7 @@ See the "Using web mode..." example above for a concrete transcript.
   - Comma-separated list, e.g., `grok-3,kimi-k2.5`
   - Merged with built-in models and included in the tool schema
   - Useful for newly released models with a known provider prefix (`gpt-`,
-    `gemini-`, `deepseek-`, `MiniMax-`)
+    `gemini-`, `deepseek-`, `MiniMax-`, `claude-`)
 - `CONSULT_LLM_CODEX_REASONING_EFFORT` - Configure reasoning effort for Codex
   CLI (optional, default: `high`)
   - See [Codex CLI](#codex-cli) for details and available options
@@ -632,17 +636,19 @@ claude mcp add consult-llm \
 
 ### Controlling which models are used
 
-The `model` parameter accepts **selectors** (`gemini`, `openai`, `
-that the server resolves to the best available concrete model. When
-specified, the server uses `CONSULT_LLM_DEFAULT_MODEL` or its
+The `model` parameter accepts **selectors** (`gemini`, `openai`, `anthropic`,
+`deepseek`) that the server resolves to the best available concrete model. When
+no model is specified, the server uses `CONSULT_LLM_DEFAULT_MODEL` or its
+built-in fallback.
 
 **Selector resolution order** (first available wins):
 
-| Selector
-|
-| `gemini`
-| `openai`
-| `
+| Selector    | Priority                                                       |
+| ----------- | -------------------------------------------------------------- |
+| `gemini`    | gemini-3.1-pro-preview → gemini-3-pro-preview → gemini-2.5-pro |
+| `openai`    | gpt-5.5 → gpt-5.4 → gpt-5.3-codex → gpt-5.2 → gpt-5.2-codex    |
+| `anthropic` | claude-opus-4-7                                                |
+| `deepseek`  | deepseek-reasoner                                              |
 
 **Restricting models with `CONSULT_LLM_ALLOWED_MODELS`:**
 
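The resolution order interacts with the allow-list documented earlier. A hedged sketch (model IDs from the table above; whether selectors skip disallowed models is an assumption, not stated in this diff):

```shell
# With gpt-5.5 excluded from the allow-list, the `openai` selector would
# presumably fall back to the next available entry (gpt-5.4), following
# the "first available wins" order.
export CONSULT_LLM_ALLOWED_MODELS=gpt-5.4,claude-opus-4-7
```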
@@ -672,10 +678,10 @@ models complex questions.
   - All files are added as context with file paths and code blocks
 
 - **model** (optional): Model selector or exact model ID
-  - Selectors: `gemini`, `openai`, `deepseek` — the server resolves
-    available model for each family
-  - Exact model IDs (`gpt-5.4`, `gemini-3.1-pro-preview`,
-    accepted as an advanced override
+  - Selectors: `gemini`, `openai`, `anthropic`, `deepseek` — the server resolves
+    to the best available model for each family
+  - Exact model IDs (`gpt-5.4`, `gemini-3.1-pro-preview`, `claude-opus-4-7`,
+    etc.) are also accepted as an advanced override
   - When omitted, the server uses the configured default
 
 - **task_mode** (optional): Controls the system prompt persona. The calling LLM
@@ -713,10 +719,12 @@ models complex questions.
 - **gemini-3.1-pro-preview**: Google's Gemini 3.1 Pro Preview
 - **deepseek-reasoner**: DeepSeek's reasoning model
 - **MiniMax-M2.7**: MiniMax's M2.7 reasoning model (204K context)
+- **gpt-5.5**: OpenAI's GPT-5.5 model
 - **gpt-5.4**: OpenAI's GPT-5.4 model
 - **gpt-5.2**: OpenAI's GPT-5.2 model
 - **gpt-5.3-codex**: OpenAI's Codex model based on GPT-5.3
 - **gpt-5.2-codex**: OpenAI's Codex model based on GPT-5.2
+- **claude-opus-4-7**: Anthropic's Claude Opus 4.7
 
 ## Logging
 
package/package.json
CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "consult-llm-mcp",
-  "version": "2.13.
+  "version": "2.13.3",
   "description": "MCP server for consulting powerful AI models",
   "repository": {
     "type": "git",
@@ -31,9 +31,9 @@
     "ai"
   ],
   "optionalDependencies": {
-    "consult-llm-mcp-darwin-arm64": "2.13.
-    "consult-llm-mcp-darwin-x64": "2.13.
-    "consult-llm-mcp-linux-x64": "2.13.
-    "consult-llm-mcp-linux-arm64": "2.13.
+    "consult-llm-mcp-darwin-arm64": "2.13.3",
+    "consult-llm-mcp-darwin-x64": "2.13.3",
+    "consult-llm-mcp-linux-x64": "2.13.3",
+    "consult-llm-mcp-linux-arm64": "2.13.3"
   }
 }