consult-llm-mcp 2.4.2 → 2.5.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -456,9 +456,9 @@ See the "Using web mode..." example above for a concrete transcript.
  mode)
  - `DEEPSEEK_API_KEY` - Your DeepSeek API key (required for DeepSeek models)
  - `CONSULT_LLM_DEFAULT_MODEL` - Override the default model (optional)
- - Options: `gpt-5.2` (default), `gemini-2.5-pro`, `gemini-3-pro-preview`,
- `gemini-3.1-pro-preview`, `deepseek-reasoner`, `gpt-5.3-codex`,
- `gpt-5.2-codex`
+ - Options: `gpt-5.2` (default), `gpt-5.4`, `gemini-2.5-pro`,
+ `gemini-3-pro-preview`, `gemini-3.1-pro-preview`, `deepseek-reasoner`,
+ `gpt-5.3-codex`, `gpt-5.2-codex`
  - `GEMINI_BACKEND` - Backend for Gemini models (optional)
  - Options: `api` (default), `gemini-cli`, `cursor-cli`
  - `OPENAI_BACKEND` - Backend for OpenAI models (optional)
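
The README hunk above documents a `CONSULT_LLM_DEFAULT_MODEL` override with `gpt-5.2` as the fallback. A minimal sketch of how a server might apply that override — the env var name and the default come from the README; the helper itself is illustrative, not the package's actual code:

```javascript
// Resolve the default model from the environment, falling back to the
// documented default when CONSULT_LLM_DEFAULT_MODEL is unset.
// resolveDefaultModel is a hypothetical helper for illustration.
function resolveDefaultModel(env = process.env) {
  return env.CONSULT_LLM_DEFAULT_MODEL ?? 'gpt-5.2';
}
```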
@@ -555,9 +555,9 @@ models complex questions.
  - All files are added as context with file paths and code blocks

  - **model** (optional): LLM model to use
- - Options: `gpt-5.2` (default), `gemini-2.5-pro`, `gemini-3-pro-preview`,
- `gemini-3.1-pro-preview`, `deepseek-reasoner`, `gpt-5.3-codex`,
- `gpt-5.2-codex`
+ - Options: `gpt-5.2` (default), `gpt-5.4`, `gemini-2.5-pro`,
+ `gemini-3-pro-preview`, `gemini-3.1-pro-preview`, `deepseek-reasoner`,
+ `gpt-5.3-codex`, `gpt-5.2-codex`

  - **task_mode** (optional): Controls the system prompt persona. The calling LLM
  should choose based on the task:
@@ -596,7 +596,8 @@ models complex questions.
  million tokens for prompts ≤200k tokens, $4/$18 for prompts >200k tokens)
  - **deepseek-reasoner**: DeepSeek's reasoning model ($0.55/$2.19 per million
  tokens)
- - **gpt-5.2**: OpenAI's latest GPT model
+ - **gpt-5.4**: OpenAI's GPT-5.4 model ($2.50/$15 per million tokens)
+ - **gpt-5.2**: OpenAI's GPT-5.2 model ($1.75/$14 per million tokens)
  - **gpt-5.3-codex**: OpenAI's Codex model based on GPT-5.3
  - **gpt-5.2-codex**: OpenAI's Codex model based on GPT-5.2

@@ -703,12 +704,34 @@ for the full content.
  Save it as `~/.claude/commands/consult.md` and you can then use it by typing
  `/consult ask gemini about X` or `/consult ask codex about X` in Claude Code.

- ## Debate skills
+ ## Multi-LLM skills

- Two skills that orchestrate structured debates between LLMs to find the best
- implementation approach before writing code. Both use `thread_id` to maintain
- conversation context across rounds, so each LLM remembers the full debate
- history without resending everything.
+ Skills that orchestrate multi-turn conversations between LLMs. All use
+ `thread_id` to maintain conversation context across rounds, so each LLM
+ remembers the full history without resending everything.
+
+ ### collab
+
+ **Collaborative ideation.** Gemini and Codex independently brainstorm ideas,
+ then build on each other's suggestions across multiple rounds. Unlike debate,
+ the tone is cooperative — refining and combining rather than critiquing. Claude
+ synthesizes the strongest ideas into a plan and implements. See
+ [skills/collab/SKILL.md](skills/collab/SKILL.md).
+
+ ```
+ > /collab how should we handle offline sync for the mobile app
+ ```
+
+ ### collab-vs
+
+ **Claude brainstorms with one LLM.** Claude and an opponent (Gemini or Codex)
+ take turns building on each other's ideas. Like collab, but Claude participates
+ directly instead of moderating. See
+ [skills/collab-vs/SKILL.md](skills/collab-vs/SKILL.md).
+
+ ```
+ > /collab-vs --gemini how should we handle offline sync for the mobile app
+ ```

  ### debate

package/dist/llm-cost.js CHANGED
@@ -1,4 +1,8 @@
  const MODEL_PRICING = {
+ 'gpt-5.4': {
+ inputCostPerMillion: 2.5,
+ outputCostPerMillion: 15.0,
+ },
  'gpt-5.2': {
  inputCostPerMillion: 1.75,
  outputCostPerMillion: 14.0,
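
Given the per-million pricing shape above, a call's cost works out to `tokens / 1e6 × rate` for each direction. A minimal sketch, where the two pricing entries are copied from the diff but `estimateCost` is a hypothetical helper, not the package's actual implementation:

```javascript
// Pricing entries for the two GPT models, as added/shown in this diff.
const MODEL_PRICING = {
  'gpt-5.4': { inputCostPerMillion: 2.5, outputCostPerMillion: 15.0 },
  'gpt-5.2': { inputCostPerMillion: 1.75, outputCostPerMillion: 14.0 },
};

// Hypothetical cost calculator: scale token counts to millions, then
// multiply by the per-million rate for each direction.
function estimateCost(model, inputTokens, outputTokens) {
  const p = MODEL_PRICING[model];
  if (!p) throw new Error(`No pricing for model: ${model}`);
  return (
    (inputTokens / 1e6) * p.inputCostPerMillion +
    (outputTokens / 1e6) * p.outputCostPerMillion
  );
}

// e.g. a gpt-5.4 call with 100k input and 10k output tokens:
// 0.1 * 2.5 + 0.01 * 15.0 = $0.40
```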
package/dist/models.d.ts CHANGED
@@ -1 +1 @@
- export declare const ALL_MODELS: readonly ["gemini-2.5-pro", "gemini-3-pro-preview", "gemini-3.1-pro-preview", "deepseek-reasoner", "gpt-5.2", "gpt-5.3-codex", "gpt-5.2-codex"];
+ export declare const ALL_MODELS: readonly ["gemini-2.5-pro", "gemini-3-pro-preview", "gemini-3.1-pro-preview", "deepseek-reasoner", "gpt-5.2", "gpt-5.4", "gpt-5.3-codex", "gpt-5.2-codex"];
package/dist/models.js CHANGED
@@ -4,6 +4,7 @@ export const ALL_MODELS = [
  'gemini-3.1-pro-preview',
  'deepseek-reasoner',
  'gpt-5.2',
+ 'gpt-5.4',
  'gpt-5.3-codex',
  'gpt-5.2-codex',
  ];
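
A list like `ALL_MODELS` typically gates the tool's `model` parameter before dispatch. A sketch of that use, where the array contents come from the `models.d.ts` declaration above but the validation helper is hypothetical:

```javascript
// ALL_MODELS as shipped in 2.5.0, per the declaration in this diff.
const ALL_MODELS = [
  'gemini-2.5-pro',
  'gemini-3-pro-preview',
  'gemini-3.1-pro-preview',
  'deepseek-reasoner',
  'gpt-5.2',
  'gpt-5.4',
  'gpt-5.3-codex',
  'gpt-5.2-codex',
];

// Hypothetical guard: reject unknown model names with a helpful message.
function assertKnownModel(name) {
  if (!ALL_MODELS.includes(name)) {
    throw new Error(`Unknown model: ${name}. Expected one of: ${ALL_MODELS.join(', ')}`);
  }
  return name;
}
```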
package/dist/version.d.ts CHANGED
@@ -1 +1 @@
- export declare const GIT_HASH = "1c9e0da";
+ export declare const GIT_HASH = "c310fe4";
package/dist/version.js CHANGED
@@ -1 +1 @@
- export const GIT_HASH = "1c9e0da";
+ export const GIT_HASH = "c310fe4";
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "consult-llm-mcp",
- "version": "2.4.2",
+ "version": "2.5.0",
  "description": "MCP server for consulting powerful AI models",
  "type": "module",
  "main": "dist/main.js",