consult-llm-mcp 2.7.0 → 2.7.2
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/CHANGELOG.md +10 -0
- package/README.md +29 -34
- package/package.json +5 -5
package/CHANGELOG.md
CHANGED
@@ -1,5 +1,15 @@
 # Changelog
 
+## v2.7.1 (2026-03-09)
+
+- Monitor: show "Thinking..." spinner when thinking events are streaming
+- Monitor: auto-enable follow mode when scrolled to bottom in detail view
+- Monitor: sort servers with active consultations above idle ones
+- Monitor: show tool error messages in detail view
+- Fixed cursor-agent thinking deltas containing literal `\n` instead of newlines
+- Fixed cursor-agent crash on unknown tool types
+- Fixed monitor event flushing for real-time detail view updates
+
 ## v2.7.0 (2026-03-08)
 
 - Fixed cursor-agent tool success detection
package/README.md
CHANGED
@@ -1,6 +1,6 @@
 # consult-llm-mcp
 
-An MCP server that lets Claude Code consult stronger AI models (GPT-5.
+An MCP server that lets Claude Code consult stronger AI models (GPT-5.4, Gemini
 3.1 Pro, DeepSeek Reasoner) when Sonnet has you running in circles and you need
 to bring in the heavy artillery. Supports multi-turn conversations.
 
@@ -23,11 +23,11 @@ to bring in the heavy artillery. Supports multi-turn conversations.
 ```
 
 [Quick start](#quick-start) · [Configuration](#configuration) ·
-[Monitor TUI](#monitor) · [Changelog](CHANGELOG.md)
+[Skills](#skills) · [Monitor TUI](#monitor) · [Changelog](CHANGELOG.md)
 
 ## Features
 
-- Query powerful AI models (GPT-5.
+- Query powerful AI models (GPT-5.4, Gemini 3.1 Pro, DeepSeek Reasoner) with
   relevant files as context
 - Direct queries with optional file context
 - Include git changes for code review and analysis
@@ -42,6 +42,7 @@ to bring in the heavy artillery. Supports multi-turn conversations.
   across requests with `thread_id`
 - [Web mode](#web-mode): Copy formatted prompts to clipboard for browser-based
   LLM services
+- [Skills](#skills): Multi-LLM debate, collaboration, and consultation workflows
 - Less is more: Single MCP tool to not clutter the context
 
 <img src="meta/monitor-screenshot.webp" alt="consult-llm-monitor screenshot" width="600">
@@ -409,6 +410,27 @@ claude mcp add consult-llm \
   -- npx -y consult-llm-mcp
 ```
 
+**Shell command permissions:**
+
+Cursor CLI runs with `--mode ask`, which blocks shell commands by default. If
+your prompts involve tools that need to run commands (e.g. `git diff` for code
+review), allow them in `~/.cursor/cli-config.json`:
+
+```json
+{
+  "permissions": {
+    "allow": [
+      "Shell(git diff*)",
+      "Shell(git log*)",
+      "Shell(git show*)"
+    ],
+    "deny": []
+  }
+}
+```
+
+Glob patterns are supported. The `deny` list takes precedence over `allow`.
+
 #### Multi-turn conversations
 
 CLI backends support multi-turn conversations via the `thread_id` parameter. The
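The precedence rule stated in the added README text above (`deny` wins over `allow`) can be illustrated with a hypothetical config; the patterns below are illustrative examples, not taken from the package:

```json
{
  "permissions": {
    "allow": ["Shell(git *)"],
    "deny": ["Shell(git push*)"]
  }
}
```

Assuming the same glob syntax as the README's example, this would permit `git diff` and `git log` while still blocking `git push`, because the `deny` entry takes precedence over the broader `allow` glob.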
@@ -562,9 +584,6 @@ claude mcp add consult-llm \
   -- npx -y consult-llm-mcp
 ```
 
-Alternatively, use a [slash command](#example-slash-command) with hardcoded
-model names for guaranteed model selection.
-
 ## MCP tool: consult_llm
 
 The server provides a single tool called `consult_llm` for asking powerful AI
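The hunks above reference the `consult_llm` tool and its `thread_id` and `files` parameters, so a sketch of a tool-call payload may help orient readers; field names other than `thread_id` and `files` are assumptions, not confirmed by this diff:

```json
{
  "prompt": "Review this change for race conditions",
  "files": ["src/server.ts"],
  "thread_id": "review-1"
}
```

Reusing the same `thread_id` on a later call continues the conversation, per the multi-turn conversations section above.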
@@ -711,22 +730,14 @@ agent's context. This allows Claude to infer when to call the MCP from natural
 language (e.g., "ask gemini about..."). Works out of the box, but you have less
 control over how the MCP is invoked.
 
-### 2.
-
-Explicitly invoke with `/consult ask gemini about X`. Guaranteed activation with
-full control over custom instructions, but requires the explicit syntax. For
-example, you can instruct Claude to always find related files and pass them as
-context via the `files` parameter. See the
-[example slash command](#example-slash-command) below.
-
-### 3. Skills
+### 2. Skills
 
 Automatically triggers when Claude detects matching intent. Like slash commands,
 supports custom instructions (e.g., always gathering relevant files), but not
-always reliably triggered. See the [
+always reliably triggered. See the [consult skill](#consult) below.
 
-**Recommendation:** Start with no custom activation. Use
-
+**Recommendation:** Start with no custom activation. Use skills if you need
+custom instructions for how the MCP is invoked.
 
 ## Installing skills
 
@@ -761,22 +772,6 @@ strictly necessary since Claude can infer from the schema that "ask gemini"
 should call this MCP, but it gives more precise control over how the agent calls
 this MCP.
 
-## Slash command
-
-Here's an example
-[Claude Code slash command](https://code.claude.com/docs/en/slash-commands) that
-uses the `consult_llm` MCP tool. See [examples/consult.md](examples/consult.md)
-for the full content.
-
-Save it as `~/.claude/commands/consult.md` and you can then use it by typing
-`/consult ask gemini about X` or `/consult ask codex about X` in Claude Code.
-
-## Multi-LLM skills
-
-Skills that orchestrate multi-turn conversations between LLMs. All use
-`thread_id` to maintain conversation context across rounds, so each LLM
-remembers the full history without resending everything.
-
 ### collab
 
 **Collaborative ideation.** Gemini and Codex independently brainstorm ideas,
package/package.json
CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "consult-llm-mcp",
-  "version": "2.7.
+  "version": "2.7.2",
   "description": "MCP server for consulting powerful AI models",
   "repository": {
     "type": "git",
@@ -31,9 +31,9 @@
     "ai"
   ],
   "optionalDependencies": {
-    "consult-llm-mcp-darwin-arm64": "2.7.
-    "consult-llm-mcp-darwin-x64": "2.7.
-    "consult-llm-mcp-linux-x64": "2.7.
-    "consult-llm-mcp-linux-arm64": "2.7.
+    "consult-llm-mcp-darwin-arm64": "2.7.2",
+    "consult-llm-mcp-darwin-x64": "2.7.2",
+    "consult-llm-mcp-linux-x64": "2.7.2",
+    "consult-llm-mcp-linux-arm64": "2.7.2"
   }
 }