consult-llm-mcp 1.0.0 → 1.0.2

This diff shows the changes between publicly released versions of the package as published to a supported public registry, and is provided for informational purposes only.
package/README.md CHANGED
@@ -22,8 +22,6 @@ npm install -g .
 
 ## Configuration
 
-Set the following environment variables:
-
 - `OPENAI_API_KEY` - Your OpenAI API key (required for o3)
 - `GEMINI_API_KEY` - Your Google AI API key (required for Gemini models)
 - `DEEPSEEK_API_KEY` - Your DeepSeek API key (required for DeepSeek models)
@@ -32,17 +30,75 @@ Set the following environment variables:
 
 ## Usage with Claude Code
 
+### Installation
+
 Add the MCP server to Claude Code:
 
 ```bash
-claude mcp add consult-llm -- consult-llm-mcp
+claude mcp add consult-llm -- npx -y consult-llm-mcp
 ```
 
 Or for global availability:
 
 ```bash
-claude mcp add --scope user consult-llm -- consult-llm-mcp
+claude mcp add --scope user consult-llm -- npx -y consult-llm-mcp
+```
+
+### Example workflows
+
+<details>
+<summary>Explain the problem, and tell CC to consult a smarter LLM</summary>
+
 ```
+<Some long explanation of the issue here>
+
+Consult LLM for assistance, use gemini
+
+⏺ I see the issue! The problem is that the frontend is trying to use the hint
+for "isn't" but the server generated a hint for "isn't it" (the full phrase).
+This is a mismatch between how the frontend segments text and how the server
+generates hints.
+
+Let me consult the LLM to get assistance on this segmentation mismatch issue.
+
+⏺ Write(hint-segmentation-issue.md)
+⎿  Wrote 53 lines to hint-segmentation-issue.md
+# Hint Segmentation Mismatch Issue
+## Problem Description
+There's a mismatch between how the frontend segments text for clickable
+words and how the backend generates hints. This causes hint unlocking to
+fail when the segments don't match exactly.
+## Example Case
+**Original Text**: "Hello! Nice weather, isn't it?"
+… +43 lines (ctrl+r to expand)
+
+⏺ consult-llm:consult_llm (MCP)(model: "gemini-2.5-pro", files: ["/project/hint
+-segmentation-issue.md","/project/web/src/compo
+nents/HintsText.tsx","/project/web/src/utils/hi
+nts.ts","/project/server/src/routes/public/hint
+s.ts","/project/server/src/llm/hints.ts"])
+⎿ An excellent analysis of the problem. The root cause has been correctly
+identified: a race condition within the `handleConfirmGenerateHints`
+function in `HintsText.tsx`. The component state updates in the background
+after `await generateHints()`, but the function continues executing with a
+stale `activeSegment` variable, causing it to request a hint for a smaller
+text segment (e.g., "isn't") instead of the full, newly-available hint
+phrase (e.g., "isn't it").
+… +607 lines (ctrl+r to expand)
+
+⏺ Perfect! The LLM identified the core issue - there's a race condition in
+handleConfirmGenerateHints where the function uses stale state after async
+operations. The solution is to re-target the hint key after generation to
+find the correct matching hint.
+
+Let me implement this solution:
+
+⏺ Update Todos
+⎿  ☐ Fix hint segmentation mismatch by implementing smarter hint key
+targeting in handleConfirmGenerateHints
+```
+
+</details>
 
 ## MCP Tool: consult_llm
 
@@ -131,16 +187,3 @@ Tool call:
 }
 ```
 ````
-
-## Development
-
-```bash
-# Run in development mode
-npm run dev
-
-# Build TypeScript
-npm run build
-
-# Format code
-npm run format
-```
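Note: the installation change above swaps the bare `consult-llm-mcp` binary for `npx -y consult-llm-mcp`, so Claude Code can launch the server without a prior global install (`npx -y` fetches the package on demand). As a rough sketch, the registered entry in Claude Code's MCP configuration is expected to look like the following; the exact config file location and key names are assumptions, not confirmed by this diff:

```js
// Hypothetical shape of the entry `claude mcp add consult-llm -- npx -y consult-llm-mcp`
// creates in Claude Code's MCP server config (exact file and schema may differ).
const mcpServers = {
  'consult-llm': {
    command: 'npx',                  // launcher instead of a globally installed binary
    args: ['-y', 'consult-llm-mcp'], // -y skips the npx install confirmation
  },
};
```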
package/dist/logger.d.ts CHANGED
@@ -2,3 +2,4 @@ export declare function logToFile(content: string): void;
 export declare function logToolCall(name: string, args: unknown): void;
 export declare function logPrompt(model: string, prompt: string): void;
 export declare function logResponse(model: string, response: string, costInfo: string): void;
+export declare function logServerStart(version: string): void;
package/dist/logger.js CHANGED
@@ -28,3 +28,6 @@ export function logPrompt(model, prompt) {
 export function logResponse(model, response, costInfo) {
     logToFile(`RESPONSE (model: ${model}):\n${response}\n${costInfo}\n${'='.repeat(80)}`);
 }
+export function logServerStart(version) {
+    logToFile(`MCP SERVER STARTED - consult-llm-mcp v${version}\n${'='.repeat(80)}`);
+}
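The new `logServerStart` helper reuses `logToFile`, so startup banners land in the same log as tool calls, prompts, and responses. A minimal usage sketch, with the appended entry shown as a comment (format taken directly from the template literal above):

```js
import { logServerStart } from './logger.js';

// Appends an entry like this to the shared log file:
//   MCP SERVER STARTED - consult-llm-mcp v1.0.2
//   ================================================================================
logServerStart('1.0.2');
```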
package/dist/main.js CHANGED
@@ -3,15 +3,21 @@ import { Server } from '@modelcontextprotocol/sdk/server/index.js';
 import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
 import { CallToolRequestSchema, ListToolsRequestSchema, } from '@modelcontextprotocol/sdk/types.js';
 import { config } from './config.js';
-import { ConsultLlmArgs, toolSchema } from './schema.js';
+import { ConsultLlmArgs, toolSchema, } from './schema.js';
 import { processFiles } from './file.js';
 import { generateGitDiff } from './git.js';
 import { buildPrompt } from './prompt-builder.js';
 import { queryLlm } from './llm-query.js';
-import { logToolCall, logPrompt, logResponse } from './logger.js';
+import { logToolCall, logPrompt, logResponse, logServerStart, } from './logger.js';
+import { readFileSync } from 'fs';
+import { dirname, join } from 'path';
+import { fileURLToPath } from 'url';
+const __dirname = dirname(fileURLToPath(import.meta.url));
+const packageJson = JSON.parse(readFileSync(join(__dirname, '../package.json'), 'utf-8'));
+const SERVER_VERSION = packageJson.version;
 const server = new Server({
     name: 'consult_llm',
-    version: '1.0.0',
+    version: SERVER_VERSION,
 }, {
     capabilities: {
         tools: {},
@@ -61,6 +67,7 @@ server.setRequestHandler(CallToolRequestSchema, async (request) => {
     throw new Error(`Unknown tool: ${request.params.name}`);
 });
 async function main() {
+    logServerStart(SERVER_VERSION);
     const transport = new StdioServerTransport();
     await server.connect(transport);
 }
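The new preamble in `main.js` is the standard ES-module substitute for CommonJS `__dirname`, which is undefined in ESM. Reading the version from `package.json` at runtime replaces the hardcoded `'1.0.0'` string, which had already drifted from the published version. A self-contained sketch of the pattern:

```js
import { readFileSync } from 'fs';
import { dirname, join } from 'path';
import { fileURLToPath } from 'url';

// ES modules have no __dirname; derive it from this module's file URL.
const __dirname = dirname(fileURLToPath(import.meta.url));

// Resolve package.json relative to the module (dist/ sits one level below the
// package root), not relative to the process working directory.
const pkg = JSON.parse(readFileSync(join(__dirname, '../package.json'), 'utf-8'));
console.log(pkg.version); // e.g. "1.0.2"
```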
package/dist/schema.d.ts CHANGED
@@ -20,7 +20,7 @@ export declare const ConsultLlmArgs: z.ZodObject<{
 }, z.core.$strip>;
 export declare const toolSchema: {
     readonly name: "consult_llm";
-    readonly description: "Ask a more powerful AI for help with complex problems. Write your problem description in a markdown file and pass relevant code files as context.";
+    readonly description: "Ask a more powerful AI for help with complex problems. Write your problem description in a markdown file and pass relevant code files as context. \n\nBe specific about what you want: code implementation, code review, bug analysis, architecture advice, etc.";
     readonly inputSchema: {
         readonly type: "object";
         readonly properties: {
@@ -29,7 +29,7 @@ export declare const toolSchema: {
             readonly items: {
                 readonly type: "string";
             };
-            readonly description: "Array of file paths to process. Markdown files (.md) become the main prompt, other files are added as context with file paths and code blocks.";
+            readonly description: "Array of file paths to process. Markdown files (.md) become the main prompt, other files are added as context with file paths and code blocks. \n\nIn the markdown file(s), be clear about what you want the LLM to do: implement code, review code, explain concepts, analyze bugs, etc.";
         };
         readonly model: {
             readonly type: "string";
package/dist/schema.js CHANGED
@@ -19,14 +19,18 @@ export const ConsultLlmArgs = z.object({
 });
 export const toolSchema = {
     name: 'consult_llm',
-    description: 'Ask a more powerful AI for help with complex problems. Write your problem description in a markdown file and pass relevant code files as context.',
+    description: `Ask a more powerful AI for help with complex problems. Write your problem description in a markdown file and pass relevant code files as context.
+
+Be specific about what you want: code implementation, code review, bug analysis, architecture advice, etc.`,
     inputSchema: {
         type: 'object',
         properties: {
            files: {
                type: 'array',
                items: { type: 'string' },
-               description: 'Array of file paths to process. Markdown files (.md) become the main prompt, other files are added as context with file paths and code blocks.',
+               description: `Array of file paths to process. Markdown files (.md) become the main prompt, other files are added as context with file paths and code blocks.
+
+In the markdown file(s), be clear about what you want the LLM to do: implement code, review code, explain concepts, analyze bugs, etc.`,
            },
            model: {
                type: 'string',
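The expanded descriptions nudge the calling model to state its intent in the markdown prompt. For reference, a call matching this schema passes one or more `.md` prompt files plus code files as context, as in the README workflow above; the paths below are illustrative only:

```js
// Hypothetical arguments for a consult_llm tool call matching the schema above.
const args = {
  files: [
    'hint-segmentation-issue.md',        // .md file becomes the main prompt
    'web/src/components/HintsText.tsx',  // non-.md files are attached as context
  ],
  model: 'gemini-2.5-pro',
};
```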
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "consult-llm-mcp",
-  "version": "1.0.0",
+  "version": "1.0.2",
   "description": "MCP server for consulting powerful AI models",
   "type": "module",
   "main": "dist/main.js",