@gopersonal/advisor 1.0.3 → 2.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (3)
  1. package/README.md +62 -21
  2. package/build/index.js +150 -24
  3. package/package.json +2 -2
package/README.md CHANGED
@@ -7,47 +7,64 @@ An MCP server that gives AI agents access to a separate AI advisor via [OpenCode
  - **ask_advisor** — Get help when stuck after multiple failed attempts
  - **get_second_opinion** — Sanity-check a decision between approaches
 
- ## Add to Claude Code
-
- ### Option A: OpenAI endpoint
+ ## Quick Start
 
  ```bash
  claude mcp add advisor \
  -e ADVISOR_PROVIDER=openai \
  -e ADVISOR_MODEL=gpt-4o \
  -e ADVISOR_API_KEY=sk-your-key \
+ -e ADVISOR_BASE_URL=https://api.openai.com/v1 \
  -- npx -y @gopersonal/advisor
  ```
 
- With a custom OpenAI-compatible endpoint:
+ ## Setup
+
+ ### Option A: OpenAI-compatible endpoint
+
+ Works with OpenAI, MiniMax, Together, Groq, and any `/v1/chat/completions` endpoint.
 
  ```bash
  claude mcp add advisor \
  -e ADVISOR_PROVIDER=openai \
  -e ADVISOR_MODEL=gpt-4o \
  -e ADVISOR_API_KEY=sk-your-key \
- -e ADVISOR_BASE_URL=https://your-openai-proxy.example.com/v1 \
+ -e ADVISOR_BASE_URL=https://api.openai.com/v1 \
+ -- npx -y @gopersonal/advisor
+ ```
+
+ Example with MiniMax (tested):
+
+ ```bash
+ claude mcp add advisor \
+ -e ADVISOR_PROVIDER=openai \
+ -e ADVISOR_MODEL=MiniMax-M2.5 \
+ -e ADVISOR_API_KEY=sk-your-minimax-key \
+ -e ADVISOR_BASE_URL=https://api.minimax.io/v1 \
  -- npx -y @gopersonal/advisor
  ```
 
- ### Option B: Anthropic endpoint
+ ### Option B: Anthropic-compatible endpoint
+
+ Works with Anthropic and any `/v1/messages` endpoint.
 
  ```bash
  claude mcp add advisor \
  -e ADVISOR_PROVIDER=anthropic \
  -e ADVISOR_MODEL=claude-sonnet-4-5 \
  -e ADVISOR_API_KEY=sk-ant-your-key \
+ -e ADVISOR_BASE_URL=https://api.anthropic.com \
  -- npx -y @gopersonal/advisor
  ```
 
- With a custom Anthropic-compatible endpoint:
+ Example with a proxy (tested):
 
  ```bash
  claude mcp add advisor \
  -e ADVISOR_PROVIDER=anthropic \
- -e ADVISOR_MODEL=claude-sonnet-4-5 \
+ -e ADVISOR_MODEL=gpt-5.2-codex \
  -e ADVISOR_API_KEY=none \
- -e ADVISOR_BASE_URL=https://your-anthropic-proxy.example.com \
+ -e ADVISOR_BASE_URL=https://azure-openai-anthropic-proxy.go-shops.workers.dev \
  -- npx -y @gopersonal/advisor
  ```
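[Editor's note] The Option A/B commands in the README hunk above differ only in their `ADVISOR_*` values. As an illustrative sketch (not package code), resolving those variables into a provider config might look like this; the `endpointStyle` values and the validation rules are assumptions based on the README's description, not the package's actual implementation:

```javascript
// Hypothetical sketch: resolve ADVISOR_* env vars into a provider config.
// Field names mirror the README; the endpointStyle mapping follows the
// README's note that `openai` uses /v1/chat/completions and `anthropic`
// uses /v1/messages.
function resolveAdvisorEnv(env) {
  const provider = env.ADVISOR_PROVIDER;
  if (provider !== "openai" && provider !== "anthropic") return null;
  // The v2.0.0 README marks these as required alongside the provider.
  for (const key of ["ADVISOR_MODEL", "ADVISOR_API_KEY", "ADVISOR_BASE_URL"]) {
    if (!env[key]) throw new Error(`Missing required env var: ${key}`);
  }
  return {
    provider,
    model: env.ADVISOR_MODEL,
    apiKey: env.ADVISOR_API_KEY,
    baseURL: env.ADVISOR_BASE_URL,
    endpointStyle: provider === "openai" ? "chat-completions" : "messages",
  };
}
```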
 
@@ -57,30 +74,54 @@ claude mcp add advisor \
  claude mcp add advisor -- npx -y @gopersonal/advisor
  ```
 
+ ### With project context and instructions
+
+ Give the advisor context about your project so it can provide more relevant advice:
+
+ ```bash
+ claude mcp add advisor \
+ -e ADVISOR_PROVIDER=anthropic \
+ -e ADVISOR_MODEL=gpt-5.2-codex \
+ -e ADVISOR_API_KEY=none \
+ -e ADVISOR_BASE_URL=https://azure-openai-anthropic-proxy.go-shops.workers.dev \
+ -e ADVISOR_DIRECTORY=/Users/me/myproject \
+ -e ADVISOR_INSTRUCTIONS=/Users/me/myproject/AGENTS.md \
+ -- npx -y @gopersonal/advisor
+ ```
+
  ## Environment Variables
 
- | Variable | Description | Example |
- |---|---|---|
- | `ADVISOR_PROVIDER` | `openai` or `anthropic` | `openai` |
- | `ADVISOR_MODEL` | Model name | `gpt-4o`, `claude-sonnet-4-5` |
- | `ADVISOR_API_KEY` | API key for the provider | `sk-...` |
- | `ADVISOR_BASE_URL` | Custom base URL (optional) | `https://proxy.example.com` |
- | `ADVISOR_DIRECTORY` | Project directory for opencode context (optional) | `/Users/me/myproject` |
- | `ADVISOR_INSTRUCTIONS` | Path to instructions/AGENTS.md file (optional) | `/Users/me/myproject/AGENTS.md` |
+ | Variable | Required | Description | Example |
+ |---|---|---|---|
+ | `ADVISOR_PROVIDER` | Yes | `openai` or `anthropic` | `openai` |
+ | `ADVISOR_MODEL` | Yes | Model name | `gpt-4o`, `claude-sonnet-4-5` |
+ | `ADVISOR_API_KEY` | Yes | API key for the provider | `sk-...` |
+ | `ADVISOR_BASE_URL` | Yes | API base URL | `https://api.openai.com/v1` |
+ | `ADVISOR_DIRECTORY` | No | Project directory for opencode context | `/Users/me/myproject` |
+ | `ADVISOR_INSTRUCTIONS` | No | Path to instructions file (AGENTS.md, etc.) | `/Users/me/myproject/AGENTS.md` |
 
- `ADVISOR_DIRECTORY` gives the advisor context about your project (file structure, code, etc.).
- `ADVISOR_INSTRUCTIONS` loads a custom instructions file (like AGENTS.md) so the advisor knows your project conventions.
+ - `ADVISOR_PROVIDER` determines the API format: `openai` uses `/v1/chat/completions`, `anthropic` uses `/v1/messages`
+ - `ADVISOR_DIRECTORY` gives the advisor access to your project files and structure
+ - `ADVISOR_INSTRUCTIONS` loads a custom instructions file so the advisor knows your project conventions
 
  If no env vars are set, the advisor connects to a running opencode instance or starts one using your default opencode config.
 
  ## How it works
 
  1. Agent calls `ask_advisor` or `get_second_opinion` via MCP
- 2. The server creates a temporary OpenCode session
+ 2. The server creates a temporary OpenCode session (scoped to `ADVISOR_DIRECTORY` if set)
  3. Sends the prompt asynchronously, polls for the response
  4. Auto-answers any interactive questions from the OpenCode agent
  5. Returns the advisor's response and cleans up the session
 
+ ## Tested configurations
+
+ | Provider | Model | Base URL | Status |
+ |---|---|---|---|
+ | `anthropic` | `gpt-5.2-codex` | `azure-openai-anthropic-proxy.go-shops.workers.dev` | Working |
+ | `openai` | `MiniMax-M2.5` | `api.minimax.io/v1` | Working |
+ | `anthropic` | `gpt-5.2-codex` | proxy + `ADVISOR_DIRECTORY` + `ADVISOR_INSTRUCTIONS` | Working |
+
  ## Requirements
 
  - [OpenCode](https://opencode.ai) installed (`brew install sst/tap/opencode` or `npm i -g opencode`)
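[Editor's note] The poll-until-stable behavior summarized in the "How it works" steps above (and implemented in `build/index.js` below) can be sketched as two pure helpers. This is an editorial illustration; the message shape and the 100-point weight per tool call are taken from the package source, but the helper names are hypothetical:

```javascript
// Sum text length across all messages, counting each tool invocation as a
// fixed amount of "activity" (mirrors the package's stability heuristic).
function totalContentLength(messages) {
  let acc = 0;
  for (const msg of messages) {
    if (!Array.isArray(msg.parts)) continue;
    for (const part of msg.parts) {
      if (part && typeof part.text === "string") acc += part.text.length;
      if (part && part.type === "tool-invocation") acc += 100;
    }
  }
  return acc;
}

// Called once per poll cycle: the session counts as "stable" (done) once
// the total stops growing for `threshold` consecutive cycles.
function updateStability(state, length, threshold) {
  if (length > 0 && length === state.last) {
    state.count += 1;
  } else {
    state.count = 0;
    state.last = length;
  }
  return state.count >= threshold;
}
```

The text-plus-tool-call total is what lets v2.0.0 detect an agent that is still running commands without emitting new prose, which the v1.x text-only check could not.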
@@ -108,7 +149,7 @@ claude mcp add advisor -- node /path/to/advisor/build/index.js
  npm login
  ```
 
- 2. Update the version in `package.json`:
+ 2. Update the version:
 
  ```bash
  npm version patch # 1.0.0 -> 1.0.1 (bug fixes)
package/build/index.js CHANGED
@@ -3,6 +3,7 @@ import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
  import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
  import { z } from "zod";
  import { createOpencode, createOpencodeClient } from "@opencode-ai/sdk/v2";
+ import { existsSync } from "fs";
  // --- Environment-based configuration ---
  //
  // Pass these via the "env" field in the MCP config JSON.
@@ -68,7 +69,8 @@ function buildOpencodeConfig() {
  const apiKey = process.env.ADVISOR_API_KEY;
  const baseURL = process.env.ADVISOR_BASE_URL;
  const npm = process.env.ADVISOR_NPM || resolved?.npm;
- const instructions = process.env.ADVISOR_INSTRUCTIONS;
+ const instructionsPath = process.env.ADVISOR_INSTRUCTIONS;
+ const instructions = instructionsPath && existsSync(instructionsPath) ? instructionsPath : undefined;
  const config = {
  // Disable title generation (it uses small_model which may not work with custom providers)
  small_model: resolved?.fullModel || undefined,
@@ -77,6 +79,8 @@ function buildOpencodeConfig() {
  };
  if (instructions)
  console.error(`[advisor] Instructions: ${instructions}`);
+ if (instructionsPath && !instructions)
+ console.error(`[advisor] Instructions file not found: ${instructionsPath}, skipping`);
  if (resolved) {
  config.model = resolved.fullModel;
  console.error(`[advisor] Provider: ${resolved.provider}, Model: ${resolved.model}`);
@@ -166,11 +170,44 @@ function getModelOverride() {
  function sleep(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
  }
- async function askOpencode(prompt, systemPrompt) {
+ function extractTextFromMessages(messages) {
+ const allText = [];
+ for (const m of messages) {
+ const msg = m;
+ if (msg.info && typeof msg.info === "object" && "role" in msg.info && msg.info.role === "assistant") {
+ if (Array.isArray(msg.parts)) {
+ for (const part of msg.parts) {
+ if (part && typeof part === "object" && part.type === "text" && "text" in part) {
+ allText.push(String(part.text));
+ }
+ }
+ }
+ }
+ }
+ return allText.join("\n");
+ }
+ function extractToolSummary(messages) {
+ const tools = [];
+ for (const m of messages) {
+ const msg = m;
+ if (Array.isArray(msg.parts)) {
+ for (const part of msg.parts) {
+ const p = part;
+ if (p && p.type === "tool-invocation" && p.toolName) {
+ const args = p.args ? JSON.stringify(p.args).slice(0, 200) : "";
+ tools.push(`- ${p.toolName}${args ? `: ${args}` : ""}`);
+ }
+ }
+ }
+ }
+ return tools.length > 0 ? `\n\n## Tools used\n${tools.join("\n")}` : "";
+ }
+ async function runOpencodeSession(prompt, systemPrompt, options = {}) {
+ const { timeoutMs = 90_000, stableThreshold = 3, sessionTitle = "Advisor Query", } = options;
  const client = await getOpencodeClient();
  const directory = process.env.ADVISOR_DIRECTORY;
  const sessionResult = await client.session.create({
- title: "Advisor Query",
+ title: sessionTitle,
  ...(directory ? { directory } : {}),
  });
  if (!sessionResult.data) {
@@ -179,7 +216,6 @@ async function askOpencode(prompt, systemPrompt) {
  const sessionId = sessionResult.data.id;
  const modelOverride = getModelOverride();
  try {
- // Send the prompt asynchronously - returns 204 immediately while session processes
  try {
  await client.session.promptAsync({
  sessionID: sessionId,
@@ -188,21 +224,19 @@ async function askOpencode(prompt, systemPrompt) {
  ...(directory ? { directory } : {}),
  parts: [{ type: "text", text: prompt }],
  });
- console.error(`[advisor] Prompt submitted async`);
+ console.error(`[advisor] Prompt submitted (timeout: ${timeoutMs / 1000}s, stable: ${stableThreshold} cycles)`);
  }
  catch (err) {
  const msg = err instanceof Error ? err.message : String(err);
  throw new Error(`Failed to submit prompt: ${msg}`);
  }
- // Poll for the assistant's response and auto-answer any questions
- const maxWaitMs = 90_000;
  const pollIntervalMs = 2000;
  const startTime = Date.now();
- let lastTextLength = 0;
+ let lastContentLength = 0;
  let stableCount = 0;
  const answeredQuestions = new Set();
  await sleep(3000);
- while (Date.now() - startTime < maxWaitMs) {
+ while (Date.now() - startTime < timeoutMs) {
  // Auto-answer any pending questions (opencode agent may ask permission)
  try {
  const questions = await client.question.list({});
@@ -211,7 +245,6 @@ async function askOpencode(prompt, systemPrompt) {
  if (q.sessionID === sessionId && !answeredQuestions.has(q.id)) {
  console.error(`[advisor] Auto-answering question: ${q.id}`);
  const answers = q.questions.map((qi) => {
- // Pick the first option for each question
  const opts = qi.options;
  return opts && opts.length > 0 ? [opts[0].label] : ["yes"];
  });
@@ -227,7 +260,6 @@ async function askOpencode(prompt, systemPrompt) {
  catch {
  // question API may not be available, ignore
  }
- // Check messages for the assistant's response
  const messagesResult = await client.session.messages({
  sessionID: sessionId,
  });
@@ -242,12 +274,13 @@ async function askOpencode(prompt, systemPrompt) {
  await sleep(pollIntervalMs);
  continue;
  }
- // Extract text parts
+ // Extract text from latest assistant message
  const textParts = [];
  if (Array.isArray(assistantMsg.parts)) {
  for (const part of assistantMsg.parts) {
- if (part && typeof part === "object" && part.type === "text" && "text" in part) {
- textParts.push(String(part.text));
+ const p = part;
+ if (p && p.type === "text" && "text" in p) {
+ textParts.push(String(p.text));
  }
  }
  }
@@ -257,27 +290,52 @@ async function askOpencode(prompt, systemPrompt) {
  if (info.time && typeof info.time === "object") {
  const time = info.time;
  if (time.completed && currentText.length > 0) {
- console.error(`[advisor] Response completed (${currentText.length} chars)`);
- return currentText;
+ console.error(`[advisor] Completed (${currentText.length} chars)`);
+ const toolSummary = extractToolSummary(messages);
+ return currentText + toolSummary;
  }
  }
- // Fallback: text stopped growing for 3 cycles
- if (currentText.length > 0) {
- if (currentText.length === lastTextLength) {
+ // Stability check: use total content across ALL messages to detect when agent stops working
+ const totalContentLength = messages.reduce((acc, m) => {
+ const msg = m;
+ if (Array.isArray(msg.parts)) {
+ for (const p of msg.parts) {
+ const part = p;
+ if (part && "text" in part)
+ acc += String(part.text).length;
+ if (part && part.type === "tool-invocation")
+ acc += 100; // count tool calls as content
+ }
+ }
+ return acc;
+ }, 0);
+ if (totalContentLength > 0) {
+ if (totalContentLength === lastContentLength) {
  stableCount++;
- if (stableCount >= 3) {
- console.error(`[advisor] Response stable (${currentText.length} chars)`);
- return currentText;
+ if (stableCount >= stableThreshold) {
+ console.error(`[advisor] Stable after ${stableThreshold} cycles (${totalContentLength} total content)`);
+ const toolSummary = extractToolSummary(messages);
+ return currentText + toolSummary;
  }
  }
  else {
  stableCount = 0;
- lastTextLength = currentText.length;
+ lastContentLength = totalContentLength;
  }
  }
  await sleep(pollIntervalMs);
  }
- throw new Error("Timed out waiting for advisor response");
+ // On timeout, return partial results if available
+ const finalMessages = await client.session.messages({ sessionID: sessionId });
+ if (finalMessages.data && Array.isArray(finalMessages.data)) {
+ const text = extractTextFromMessages(finalMessages.data);
+ const toolSummary = extractToolSummary(finalMessages.data);
+ if (text) {
+ console.error(`[advisor] Timed out but returning partial result (${text.length} chars)`);
+ return text + toolSummary + "\n\n[Note: Task timed out but partial work was completed]";
+ }
+ }
+ throw new Error("Timed out waiting for response");
  }
  finally {
  try {
@@ -288,6 +346,14 @@ async function askOpencode(prompt, systemPrompt) {
  }
  }
  }
+ // Legacy wrapper for advisory tools
+ async function askOpencode(prompt, systemPrompt) {
+ return runOpencodeSession(prompt, systemPrompt, {
+ timeoutMs: 90_000,
+ stableThreshold: 3,
+ sessionTitle: "Advisor Query",
+ });
+ }
  // --- MCP Server setup ---
  const server = new McpServer({
  name: "advisor",
@@ -397,6 +463,66 @@ server.registerTool("get_second_opinion", {
  };
  }
  });
+ // Tool 3: execute_task
+ server.registerTool("execute_task", {
+ description: "Delegate a coding task to a separate AI agent that will actually DO the work — edit files, run commands, " +
+ "create code, fix bugs, etc. Use this when you want another agent to independently complete a task. " +
+ "The agent has full access to the project: it can read/write files, run bash commands, and make changes. " +
+ "Use this for: implementing features, fixing bugs, refactoring code, writing tests, running builds, " +
+ "setting up configurations, or any hands-on coding work you want to delegate. " +
+ "Be specific about what you want done — the agent works independently and returns results when finished.",
+ inputSchema: {
+ task: z
+ .string()
+ .describe("Clear description of what needs to be done. Be specific: include file paths, function names, " +
+ "expected behavior, and acceptance criteria. The more detail you provide, the better the result."),
+ context: z
+ .string()
+ .optional()
+ .describe("Additional context: relevant code snippets, error messages, architectural decisions, " +
+ "or constraints the agent should know about."),
+ files_to_focus: z
+ .string()
+ .optional()
+ .describe("Comma-separated list of file paths the agent should focus on (e.g. 'src/auth.ts, src/routes/login.ts'). " +
+ "Helps the agent find relevant code faster."),
+ },
+ }, async ({ task, context, files_to_focus }) => {
+ const systemPrompt = `You are a senior software engineer executing a coding task. Your job is to COMPLETE the task, not discuss it.\n\n` +
+ `RULES:\n` +
+ `1. DO the work. Read files, write code, run commands. Don't just explain what to do.\n` +
+ `2. Start by understanding the codebase — read relevant files before making changes.\n` +
+ `3. Make all necessary changes to fully complete the task.\n` +
+ `4. After making changes, verify they work (run the code, check for errors, test if possible).\n` +
+ `5. If you encounter an error, fix it. Don't stop at the first problem.\n` +
+ `6. When done, provide a brief summary of what you changed and why.\n\n` +
+ `You have full access to the project filesystem and shell. Use your tools actively.`;
+ let userPrompt = `## Task\n${task}`;
+ if (context) {
+ userPrompt += `\n\n## Context\n${context}`;
+ }
+ if (files_to_focus) {
+ userPrompt += `\n\n## Key files to focus on\n${files_to_focus}`;
+ }
+ try {
+ const result = await runOpencodeSession(userPrompt, systemPrompt, {
+ timeoutMs: 300_000, // 5 minutes — tasks need more time than advisory questions
+ stableThreshold: 8, // 16 seconds of no activity — tasks involve tool calls between text
+ sessionTitle: "Task Execution",
+ });
+ return { content: [{ type: "text", text: result }] };
+ }
+ catch (error) {
+ const msg = error instanceof Error ? error.message : String(error);
+ return {
+ content: [{
+ type: "text",
+ text: `Task execution failed: ${msg}\nMake sure opencode is installed and configured with a provider.`,
+ }],
+ isError: true,
+ };
+ }
+ });
  // --- Start the server ---
  async function main() {
  const transport = new StdioServerTransport();
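[Editor's note] The new `execute_task` handler above assembles its user prompt from the `task`, optional `context`, and optional `files_to_focus` inputs. Restated as a standalone function (same logic as the diff, extracted for illustration):

```javascript
// How execute_task builds its sectioned user prompt: the task is always
// present; context and focus files are appended only when supplied.
function buildTaskPrompt(task, context, filesToFocus) {
  let prompt = `## Task\n${task}`;
  if (context) prompt += `\n\n## Context\n${context}`;
  if (filesToFocus) prompt += `\n\n## Key files to focus on\n${filesToFocus}`;
  return prompt;
}
```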
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@gopersonal/advisor",
- "version": "1.0.3",
+ "version": "2.0.0",
  "type": "module",
  "main": "./build/index.js",
  "bin": {
@@ -24,7 +24,7 @@
  ],
  "author": "gopersonal",
  "license": "ISC",
- "description": "MCP server that gives AI agents a second opinion via OpenCode SDK",
+ "description": "MCP server that gives AI agents advice and can execute coding tasks via OpenCode SDK",
  "repository": {
  "type": "git",
  "url": "https://github.com/gopersonal/calvincode-mcps"