codemaxxing 1.0.16 → 1.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -10,9 +10,11 @@
 
 Open-source terminal coding agent. Connect **any** LLM — local or remote — and start building. Like Claude Code, but you bring your own model.
 
+**🆕 v1.1.0:** Use GPT-5.4 with your ChatGPT Plus subscription — no API key needed. Just `/login` → OpenAI → OAuth. Same access as Codex CLI.
+
 ## Why?
 
-Every coding agent locks you into their API. Codemaxxing doesn't. Run it with LM Studio, Ollama, OpenRouter, OpenAI, or any OpenAI-compatible endpoint. Your machine, your model, your rules.
+Every coding agent locks you into their API. Codemaxxing doesn't. Run it with LM Studio, Ollama, OpenRouter, OpenAI, Anthropic, or any OpenAI-compatible endpoint. Your machine, your model, your rules.
 
 ## Install
 
@@ -48,29 +50,81 @@ curl -fsSL -o $env:TEMP\install-codemaxxing.bat https://raw.githubusercontent.co
 npm update -g codemaxxing
 ```
 
-If that doesn't get the latest version:
+If that doesn't get the latest version, reinstall explicitly:
 ```bash
 npm install -g codemaxxing@latest
 ```
 
+Then verify:
+```bash
+codemaxxing --version
+```
+
 ## Quick Start
 
-### 1. Start Your LLM
+### Option A — easiest local setup
 
-You need a local LLM server running. The easiest option:
+If you already have a local server running, Codemaxxing auto-detects common defaults:
+- **LM Studio** on `http://localhost:1234/v1`
+- **Ollama** on `http://localhost:11434`
+- **vLLM** on `http://localhost:8000`
 
+For LM Studio:
 1. Download [LM Studio](https://lmstudio.ai)
-2. Search for a model (e.g. **Qwen 2.5 Coder 7B Q4_K_M** good for testing)
-3. Load the model
-4. Click **Start Server** (it runs on port 1234 by default)
+2. Load a coding model (for example **Qwen 2.5 Coder 7B** for a lightweight test)
+3. Start the local server
+4. Run:
+
+```bash
+codemaxxing
+```
+
+### Option B — no local model yet
+
+Just run:
+
+```bash
+codemaxxing
+```
+
+If no LLM is available, Codemaxxing can guide you through:
+- detecting your hardware
+- recommending a model
+- installing Ollama
+- downloading the model
+- connecting automatically
+
+### Option C — ChatGPT Plus (GPT-5.4, easiest cloud option)
+
+If you have a ChatGPT Plus subscription, get instant access to GPT-5.4 with zero API costs:
+
+```bash
+codemaxxing login
+# → Pick "OpenAI"
+# → Pick "OpenAI (ChatGPT)"
+# → Browser opens, log in with your ChatGPT account
+# → Done — you now have GPT-5.4, GPT-5, o3, o4-mini
+
+codemaxxing
+# → /model → pick gpt-5.4
+```
+
+No API key required. Uses your ChatGPT subscription limits instead.
+
+### Option D — other cloud providers
+
+Authenticate first:
+
+```bash
+codemaxxing login
+```
 
-### 2. Run It
+Then run:
 
 ```bash
 codemaxxing
 ```
 
-That's it. Codemaxxing auto-detects LM Studio and connects. Start coding.
 
 ---
 
@@ -138,7 +192,7 @@ Dual-model planning. A "planner" model reasons through the approach, then your e
 - Great for pairing expensive reasoning models with fast editors
 
 ### 🧠 Skills System (21 Built-In)
-Downloadable skill packs that teach the agent domain expertise. Ships with 21 built-in skills:
+Downloadable skill packs that teach the agent domain expertise. Ships with 21 built-in skills and a menu-first `/skills` flow so you can browse instead of memorizing names:
 
 | Category | Skills |
 |----------|--------|
@@ -155,10 +209,9 @@ Downloadable skill packs that teach the agent domain expertise. Ships with 21 bu
 /skills on/off X  # Toggle per session
 ```
 
-Project-level config: add `.codemaxxing/skills.json` to scope skills per project.
 
 ### 📋 CODEMAXXING.md — Project Rules
-Drop a `CODEMAXXING.md` in your project root for project-specific instructions. Auto-loaded every session. Also supports `.cursorrules` for Cursor migrants.
+Drop a `CODEMAXXING.md` in your project root for project-specific instructions. It gets loaded automatically for that project.
 
 ### 🔧 Auto-Lint
 Automatically runs your linter after every file edit and feeds errors back to the model for auto-fix. Detects eslint, biome, ruff, clippy, golangci-lint, and more.
@@ -168,7 +221,7 @@ Automatically runs your linter after every file edit and feeds errors back to th
 Scans your codebase and builds a map of functions, classes, and types. The model knows what exists where without reading every file.
 
 ### 📦 Context Compression
-When conversation history exceeds 80k tokens, older messages are automatically summarized to free up context. Configurable via `contextCompressionThreshold`.
+When conversation history gets too large, older messages are automatically summarized to free up context.
 
 ### 💰 Cost Tracking
 Per-session token usage and estimated cost in the status bar. Pricing for 20+ common models. Saved to session history.
@@ -196,11 +249,10 @@ Conversations auto-save to SQLite. Pick up where you left off:
 - `/resume` — interactive session picker
 
 ### 🔌 MCP Support (Model Context Protocol)
-Connect to external tools via the industry-standard MCP protocol. Databases, GitHub, Slack, browsers anything with an MCP server.
-- Compatible with `.cursor/mcp.json` and `opencode.json` configs
+Connect to external tools via the MCP standard: databases, GitHub, Slack, browsers, and more.
 - `/mcp` — show connected servers
 - `/mcp add github npx -y @modelcontextprotocol/server-github` — add a server
-- `/mcp tools` — list all available MCP tools
+- `/mcp tools` — list available MCP tools
 
 ### 🖥️ Zero-Setup Local LLM
 First time with no LLM? Codemaxxing walks you through it:
@@ -226,14 +278,14 @@ Switch models mid-session with an interactive picker:
 - `/model gpt-5` — switch directly by name
 - Native Anthropic API support (not just OpenAI-compatible)
 
-### 🎨 15 Themes
-`/theme` to browse: cyberpunk-neon, dracula, gruvbox, nord, catppuccin, tokyo-night, one-dark, rose-pine, synthwave, blood-moon, mono, solarized, hacker, hot-dog, acid
+### 🎨 14 Themes
+`/theme` to browse: cyberpunk-neon, dracula, gruvbox, nord, catppuccin, tokyo-night, one-dark, rose-pine, synthwave, blood-moon, mono, solarized, hacker, acid
 
 ### 🔐 Authentication
 One command to connect any LLM provider. OpenRouter OAuth, Anthropic subscription linking, Codex/Qwen CLI import, GitHub Copilot device flow, or manual API keys.
 
 ### 📋 Smart Paste
-Multi-line pastes collapse into `[Pasted text #1 +N lines]` badges.
+Multi-line pastes collapse into `[Pasted text #1 +N lines]` badges instead of dumping raw text into the input box, with handling hardened against bracketed-paste quirks across terminals.
 
 ### ⌨️ Slash Commands
 Type `/` for autocomplete suggestions. Arrow keys to navigate, Tab or Enter to select.
@@ -351,8 +403,7 @@ Drop a `CODEMAXXING.md` file in your project root to give the model extra contex
 - **MCP:** [@modelcontextprotocol/sdk](https://github.com/modelcontextprotocol/typescript-sdk)
 - **Sessions:** [better-sqlite3](https://github.com/WiseLibs/better-sqlite3)
 - **Local LLM:** Ollama integration (auto-install, pull, manage)
-- **Tests:** Vitest — 26 tests across 8 test files covering commands, tools, config, and agent behavior
-- **Zero cloud dependencies** — everything runs locally
+- **Zero cloud dependencies** — everything runs locally unless you choose a remote provider
 
 ## Inspired By
 
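The local-endpoint auto-detection described in the new README (Option A) can be sketched as follows. This is an illustrative guess, not codemaxxing's actual internals: `CANDIDATES`, `probeUrl`, and `detectLocalServer` are hypothetical names, and the probing strategy (hitting the OpenAI-compatible `models` listing with a short timeout) is an assumption.

```javascript
// Hypothetical sketch of probing the default local endpoints the README lists.
// Requires Node 18+ for global fetch and AbortSignal.timeout.
const CANDIDATES = [
  { name: "LM Studio", url: "http://localhost:1234/v1" },
  { name: "Ollama", url: "http://localhost:11434" },
  { name: "vLLM", url: "http://localhost:8000" },
];

// OpenAI-compatible servers expose GET <base>/v1/models; append /models
// directly when the base URL already ends in /v1.
function probeUrl(base) {
  return base.endsWith("/v1") ? `${base}/models` : `${base}/v1/models`;
}

// Return the first server that answers, or null if none is running.
async function detectLocalServer(timeoutMs = 500) {
  for (const candidate of CANDIDATES) {
    try {
      const res = await fetch(probeUrl(candidate.url), {
        signal: AbortSignal.timeout(timeoutMs),
      });
      if (res.ok) return candidate;
    } catch {
      // server not running on this port — try the next candidate
    }
  }
  return null;
}
```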
 
package/dist/agent.d.ts CHANGED
@@ -26,6 +26,8 @@ export declare class CodingAgent {
26
26
  private client;
27
27
  private anthropicClient;
28
28
  private providerType;
29
+ private currentApiKey;
30
+ private currentBaseUrl;
29
31
  private messages;
30
32
  private tools;
31
33
  private cwd;
@@ -88,10 +90,19 @@ export declare class CodingAgent {
88
90
  * Anthropic-native streaming chat
89
91
  */
90
92
  private chatAnthropic;
93
+ /**
94
+ * OpenAI Responses API chat (for Codex OAuth tokens + GPT-5.4)
95
+ */
96
+ private chatOpenAIResponses;
91
97
  /**
92
98
  * Switch to a different model mid-session
93
99
  */
94
- switchModel(model: string, baseUrl?: string, apiKey?: string): void;
100
+ switchModel(model: string, baseUrl?: string, apiKey?: string, providerType?: "openai" | "anthropic"): void;
101
+ /**
102
+ * Attempt to refresh an expired Anthropic OAuth token.
103
+ * Returns true if refresh succeeded and client was rebuilt.
104
+ */
105
+ private tryRefreshAnthropicToken;
95
106
  getModel(): string;
96
107
  setAutoCommit(enabled: boolean): void;
97
108
  isGitEnabled(): boolean;
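The widened `switchModel` signature above lets callers flip provider type mid-session. A toy stand-in (not the real `CodingAgent`; the real class also rebuilds its API clients) showing the intended call shape, with a placeholder API key value:

```javascript
// Minimal stand-in mirroring the new optional providerType parameter.
class AgentStub {
  constructor() {
    this.model = "auto";
    this.providerType = "openai";
    this.baseUrl = "";
    this.apiKey = null;
  }
  switchModel(model, baseUrl, apiKey, providerType) {
    this.model = model;
    if (baseUrl) this.baseUrl = baseUrl;
    if (apiKey) this.apiKey = apiKey;
    if (providerType) this.providerType = providerType; // new in 1.1.0
  }
}

const agent = new AgentStub();
// Switch to an Anthropic model mid-session ("sk-ant-example" is a placeholder).
agent.switchModel("claude-sonnet-4-6", "https://api.anthropic.com", "sk-ant-example", "anthropic");
```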
package/dist/agent.js CHANGED
@@ -7,6 +7,36 @@ import { isGitRepo, autoCommit } from "./utils/git.js";
 import { buildSkillPrompts, getActiveSkillCount } from "./utils/skills.js";
 import { createSession, saveMessage, updateTokenEstimate, updateSessionCost, loadMessages } from "./utils/sessions.js";
 import { loadMCPConfig, connectToServers, disconnectAll, getAllMCPTools, parseMCPToolName, callMCPTool } from "./utils/mcp.js";
+import { refreshAnthropicOAuthToken } from "./utils/anthropic-oauth.js";
+import { getCredential, saveCredential } from "./utils/auth.js";
+import { chatWithResponsesAPI, shouldUseResponsesAPI } from "./utils/responses-api.js";
+// ── Helper: Sanitize unpaired Unicode surrogates (copied from Pi/OpenClaw) ──
+function sanitizeSurrogates(text) {
+    // Removes unpaired Unicode surrogates that cause JSON serialization errors in APIs.
+    // Valid emoji (properly paired surrogates) are preserved.
+    return text.replace(/[\uD800-\uDBFF](?![\uDC00-\uDFFF])|(?<![\uD800-\uDBFF])[\uDC00-\uDFFF]/g, "");
+}
+// ── Helper: Create Anthropic client with proper auth ──
+function createAnthropicClient(apiKey) {
+    // OAuth tokens start with "sk-ant-oat" — need special handling
+    if (apiKey.startsWith("sk-ant-oat")) {
+        return new Anthropic({
+            apiKey: null,
+            authToken: apiKey,
+            dangerouslyAllowBrowser: true,
+            defaultHeaders: {
+                "anthropic-beta": "claude-code-20250219,oauth-2025-04-20",
+                "user-agent": "claude-cli/2.1.75",
+                "x-app": "cli",
+            },
+        });
+    }
+    // Regular API keys
+    return new Anthropic({
+        apiKey,
+        dangerouslyAllowBrowser: true,
+    });
+}
 // Tools that can modify your project — require approval
 const DANGEROUS_TOOLS = new Set(["write_file", "edit_file", "run_command"]);
 // Cost per 1M tokens (input/output) for common models
@@ -21,6 +51,14 @@ const MODEL_COSTS = {
     "o1": { input: 15, output: 60 },
     "o1-mini": { input: 3, output: 12 },
     "o1-pro": { input: 150, output: 600 },
+    "gpt-5.4": { input: 2.5, output: 15 },
+    "gpt-5.4-pro": { input: 30, output: 180 },
+    "gpt-5": { input: 1.25, output: 10 },
+    "gpt-5-mini": { input: 0.3, output: 1.25 },
+    "gpt-5.3-codex": { input: 1.25, output: 10 },
+    "gpt-4.1": { input: 2, output: 8 },
+    "gpt-4.1-mini": { input: 0.4, output: 1.6 },
+    "gpt-4.1-nano": { input: 0.1, output: 0.4 },
     "o3": { input: 10, output: 40 },
     "o3-mini": { input: 1.1, output: 4.4 },
     "o4-mini": { input: 1.1, output: 4.4 },
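The pricing table added above feeds the agent's session cost estimate (rates are USD per 1M tokens; input and output are billed separately). A self-contained sketch of that arithmetic, using a subset of the rates from this hunk; `estimateCost` is an illustrative name, not the package's function:

```javascript
// Recomputes the cost formula the agent uses: tokens / 1M * per-million rate.
const MODEL_COSTS = {
  "gpt-5.4": { input: 2.5, output: 15 },
  "gpt-5-mini": { input: 0.3, output: 1.25 },
};

function estimateCost(model, promptTokens, completionTokens) {
  const c = MODEL_COSTS[model];
  if (!c) return 0; // unknown models are priced at zero in this sketch
  return (promptTokens / 1_000_000) * c.input +
         (completionTokens / 1_000_000) * c.output;
}
```

So a gpt-5.4 session with 2M prompt tokens and 1M completion tokens comes to $5 + $15 = $20.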
@@ -113,6 +151,8 @@ export class CodingAgent {
     client;
     anthropicClient = null;
     providerType;
+    currentApiKey = null;
+    currentBaseUrl = "";
     messages = [];
     tools = FILE_TOOLS;
     cwd;
@@ -138,14 +178,13 @@
     constructor(options) {
         this.options = options;
         this.providerType = options.provider.type || "openai";
+        this.currentBaseUrl = options.provider.baseUrl || "https://api.openai.com/v1";
         this.client = new OpenAI({
-            baseURL: options.provider.baseUrl,
+            baseURL: this.currentBaseUrl,
             apiKey: options.provider.apiKey,
         });
         if (this.providerType === "anthropic") {
-            this.anthropicClient = new Anthropic({
-                apiKey: options.provider.apiKey,
-            });
+            this.anthropicClient = createAnthropicClient(options.provider.apiKey);
         }
         this.cwd = options.cwd;
         this.maxTokens = options.maxTokens;
@@ -153,7 +192,7 @@
         this.model = options.provider.model;
         // Default model for Anthropic
         if (this.providerType === "anthropic" && (this.model === "auto" || !this.model)) {
-            this.model = "claude-sonnet-4-20250514";
+            this.model = "claude-sonnet-4-6";
         }
         this.gitEnabled = isGitRepo(this.cwd);
         this.compressionThreshold = options.contextCompressionThreshold ?? 80000;
@@ -237,18 +276,34 @@
         if (this.providerType === "anthropic" && this.anthropicClient) {
             return this.chatAnthropic(userMessage);
         }
+        // Route to Responses API for models that need it (GPT-5.x, Codex, etc)
+        if (this.providerType === "openai" && shouldUseResponsesAPI(this.model)) {
+            return this.chatOpenAIResponses(userMessage);
+        }
         let iterations = 0;
         const MAX_ITERATIONS = 20;
         while (iterations < MAX_ITERATIONS) {
             iterations++;
-            const stream = await this.client.chat.completions.create({
-                model: this.model,
-                messages: this.messages,
-                tools: this.tools,
-                max_tokens: this.maxTokens,
-                stream: true,
-                stream_options: { include_usage: true },
-            });
+            let stream;
+            try {
+                stream = await this.client.chat.completions.create({
+                    model: this.model,
+                    messages: this.messages,
+                    tools: this.tools,
+                    max_tokens: this.maxTokens,
+                    stream: true,
+                    stream_options: { include_usage: true },
+                });
+            }
+            catch (err) {
+                // Better error for OpenAI Codex OAuth scope limitations
+                if (err.status === 401 && err.message?.includes("Missing scopes")) {
+                    throw new Error(`Model "${this.model}" is not available via Chat Completions with your OAuth token. ` +
+                        `Try a GPT-5.x model (which uses the Responses API automatically), ` +
+                        `or use an API key for full model access (/login openai → api-key).`);
+                }
+                throw err;
+            }
             // Accumulate the streamed response
             let contentText = "";
             let thinkingText = "";
@@ -493,30 +548,65 @@
      * Anthropic-native streaming chat
      */
     async chatAnthropic(_userMessage) {
-        const client = this.anthropicClient;
         let iterations = 0;
         const MAX_ITERATIONS = 20;
         while (iterations < MAX_ITERATIONS) {
             iterations++;
             const anthropicMessages = this.getAnthropicMessages();
             const anthropicTools = this.getAnthropicTools();
-            const stream = client.messages.stream({
-                model: this.model,
-                max_tokens: this.maxTokens,
-                system: this.systemPrompt,
-                messages: anthropicMessages,
-                tools: anthropicTools,
-            });
+            // For OAuth tokens, system prompt must be a structured array with Claude Code identity
+            // Check both the current provider key and what was passed to switchModel
+            const currentApiKey = this.currentApiKey ?? this.options.provider.apiKey;
+            const isOAuthToken = currentApiKey?.includes("sk-ant-oat");
+            let systemPrompt = this.systemPrompt;
+            if (isOAuthToken) {
+                systemPrompt = [
+                    {
+                        type: "text",
+                        text: "You are Claude Code, Anthropic's official CLI for Claude.",
+                    },
+                    {
+                        type: "text",
+                        text: sanitizeSurrogates(this.systemPrompt),
+                    },
+                ];
+            }
+            else {
+                systemPrompt = sanitizeSurrogates(this.systemPrompt);
+            }
+            let stream;
+            let finalMessage;
             let contentText = "";
             const toolCalls = [];
-            let currentToolId = "";
-            let currentToolName = "";
-            let currentToolInput = "";
-            stream.on("text", (text) => {
-                contentText += text;
-                this.options.onToken?.(text);
-            });
-            const finalMessage = await stream.finalMessage();
+            try {
+                stream = this.anthropicClient.messages.stream({
+                    model: this.model,
+                    max_tokens: this.maxTokens,
+                    system: systemPrompt,
+                    messages: anthropicMessages,
+                    tools: anthropicTools,
+                });
+                stream.on("text", (text) => {
+                    contentText += text;
+                    this.options.onToken?.(text);
+                });
+                finalMessage = await stream.finalMessage();
+            }
+            catch (err) {
+                // Handle 401 Unauthorized — try refreshing OAuth token
+                const isOAuth = (this.currentApiKey ?? this.options.provider.apiKey)?.startsWith("sk-ant-oat");
+                if (err.status === 401 && isOAuth) {
+                    const refreshed = await this.tryRefreshAnthropicToken();
+                    if (refreshed) {
+                        // Token was refreshed — retry this iteration
+                        iterations--;
+                        continue;
+                    }
+                    throw new Error("Anthropic OAuth token expired. Please re-login with /login anthropic");
+                }
+                // Re-throw if we can't handle it
+                throw err;
+            }
             // Track usage
             if (finalMessage.usage) {
                 const promptTokens = finalMessage.usage.input_tokens;
@@ -641,18 +731,193 @@
         }
         return "Max iterations reached. The agent may be stuck in a loop.";
     }
+    /**
+     * OpenAI Responses API chat (for Codex OAuth tokens + GPT-5.4)
+     */
+    async chatOpenAIResponses(userMessage) {
+        let iterations = 0;
+        const MAX_ITERATIONS = 20;
+        while (iterations < MAX_ITERATIONS) {
+            iterations++;
+            try {
+                const currentKey = this.currentApiKey || this.options.provider.apiKey;
+                const result = await chatWithResponsesAPI({
+                    baseUrl: this.currentBaseUrl,
+                    apiKey: currentKey,
+                    model: this.model,
+                    maxTokens: this.maxTokens,
+                    systemPrompt: this.systemPrompt,
+                    messages: this.messages,
+                    tools: this.tools,
+                    onToken: (token) => this.options.onToken?.(token),
+                    onToolCall: (name, args) => this.options.onToolCall?.(name, args),
+                });
+                const { contentText, toolCalls, promptTokens, completionTokens } = result;
+                // Track usage
+                this.totalPromptTokens += promptTokens;
+                this.totalCompletionTokens += completionTokens;
+                const costs = getModelCost(this.model);
+                this.totalCost = (this.totalPromptTokens / 1_000_000) * costs.input +
+                    (this.totalCompletionTokens / 1_000_000) * costs.output;
+                updateSessionCost(this.sessionId, this.totalPromptTokens, this.totalCompletionTokens, this.totalCost);
+                // Build and save assistant message
+                const assistantMessage = { role: "assistant", content: contentText || null };
+                if (toolCalls.length > 0) {
+                    assistantMessage.tool_calls = toolCalls.map((tc) => ({
+                        id: tc.id,
+                        type: "function",
+                        function: { name: tc.name, arguments: JSON.stringify(tc.input) },
+                    }));
+                }
+                this.messages.push(assistantMessage);
+                saveMessage(this.sessionId, assistantMessage);
+                // If no tool calls, we're done
+                if (toolCalls.length === 0) {
+                    updateTokenEstimate(this.sessionId, this.estimateTokens());
+                    return contentText || "(empty response)";
+                }
+                // Process tool calls (same as Chat Completions flow)
+                for (const toolCall of toolCalls) {
+                    const args = toolCall.input;
+                    // Check approval
+                    if (DANGEROUS_TOOLS.has(toolCall.name) && !this.autoApprove && !this.alwaysApproved.has(toolCall.name)) {
+                        if (this.options.onToolApproval) {
+                            let diff;
+                            if (toolCall.name === "write_file" && args.path && args.content) {
+                                const existing = getExistingContent(String(args.path), this.cwd);
+                                if (existing !== null) {
+                                    diff = generateDiff(existing, String(args.content), String(args.path));
+                                }
+                            }
+                            if (toolCall.name === "edit_file" && args.path && args.oldText !== undefined && args.newText !== undefined) {
+                                const existing = getExistingContent(String(args.path), this.cwd);
+                                if (existing !== null) {
+                                    const oldText = String(args.oldText);
+                                    const newText = String(args.newText);
+                                    const replaceAll = Boolean(args.replaceAll);
+                                    const next = replaceAll ? existing.split(oldText).join(newText) : existing.replace(oldText, newText);
+                                    diff = generateDiff(existing, next, String(args.path));
+                                }
+                            }
+                            const decision = await this.options.onToolApproval(toolCall.name, args, diff);
+                            if (decision === "no") {
+                                const toolMsg = {
+                                    role: "tool",
+                                    tool_call_id: toolCall.id,
+                                    content: "Tool denied by user.",
+                                };
+                                this.messages.push(toolMsg);
+                                saveMessage(this.sessionId, toolMsg);
+                                continue;
+                            }
+                            if (decision === "always") {
+                                this.alwaysApproved.add(toolCall.name);
+                            }
+                        }
+                    }
+                    // Execute tool
+                    const mcpParsed = parseMCPToolName(toolCall.name);
+                    let result;
+                    if (mcpParsed) {
+                        result = await callMCPTool(mcpParsed.serverName, mcpParsed.toolName, args);
+                    }
+                    else {
+                        result = await executeTool(toolCall.name, args, this.cwd);
+                    }
+                    this.options.onToolResult?.(toolCall.name, result);
+                    // Auto-commit
+                    if (this.gitEnabled && this.autoCommitEnabled && ["write_file", "edit_file"].includes(toolCall.name) && result.startsWith("✅")) {
+                        const path = String(args.path ?? "unknown");
+                        const committed = autoCommit(this.cwd, path, "write");
+                        if (committed) {
+                            this.options.onGitCommit?.(`write ${path}`);
+                        }
+                    }
+                    // Auto-lint
+                    if (this.autoLintEnabled && this.detectedLinter && ["write_file", "edit_file"].includes(toolCall.name) && result.startsWith("✅")) {
+                        const filePath = String(args.path ?? "");
+                        const lintErrors = runLinter(this.detectedLinter, filePath, this.cwd);
+                        if (lintErrors) {
+                            this.options.onLintResult?.(filePath, lintErrors);
+                            const lintMsg = {
+                                role: "tool",
+                                tool_call_id: toolCall.id,
+                                content: result + `\n\nLint errors detected in ${filePath}:\n${lintErrors}\nPlease fix these issues.`,
+                            };
+                            this.messages.push(lintMsg);
+                            saveMessage(this.sessionId, lintMsg);
+                            continue;
+                        }
+                    }
+                    // Save tool result
+                    const toolMsg = {
+                        role: "tool",
+                        tool_call_id: toolCall.id,
+                        content: result,
+                    };
+                    this.messages.push(toolMsg);
+                    saveMessage(this.sessionId, toolMsg);
+                }
+            }
+            catch (err) {
+                throw err;
+            }
+        }
+        return "Max iterations reached. The agent may be stuck in a loop.";
+    }
     /**
      * Switch to a different model mid-session
      */
-    switchModel(model, baseUrl, apiKey) {
+    switchModel(model, baseUrl, apiKey, providerType) {
         this.model = model;
+        if (apiKey)
+            this.currentApiKey = apiKey;
+        if (baseUrl)
+            this.currentBaseUrl = baseUrl;
+        if (providerType) {
+            this.providerType = providerType;
+            if (providerType === "anthropic") {
+                // Always rebuild Anthropic client when switching (token may have changed)
+                const key = apiKey || this.currentApiKey || this.options.provider.apiKey;
+                if (!key)
+                    throw new Error("No API key available for Anthropic");
+                this.anthropicClient = createAnthropicClient(key);
+            }
+            else {
+                this.anthropicClient = null;
+            }
+        }
         if (baseUrl || apiKey) {
             this.client = new OpenAI({
-                baseURL: baseUrl ?? this.options.provider.baseUrl,
+                baseURL: baseUrl ?? this.currentBaseUrl,
                 apiKey: apiKey ?? this.options.provider.apiKey,
             });
         }
     }
+    /**
+     * Attempt to refresh an expired Anthropic OAuth token.
+     * Returns true if refresh succeeded and client was rebuilt.
+     */
+    async tryRefreshAnthropicToken() {
+        const cred = getCredential("anthropic");
+        if (!cred?.refreshToken)
+            return false;
+        try {
+            const refreshed = await refreshAnthropicOAuthToken(cred.refreshToken);
+            // Update stored credential
+            cred.apiKey = refreshed.access;
+            cred.refreshToken = refreshed.refresh;
+            cred.oauthExpires = refreshed.expires;
+            saveCredential(cred);
+            // Rebuild client with new token
+            this.currentApiKey = refreshed.access;
+            this.anthropicClient = createAnthropicClient(refreshed.access);
+            return true;
+        }
+        catch {
+            return false;
+        }
+    }
     getModel() {
         return this.model;
     }
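The `sanitizeSurrogates` helper added near the top of `dist/agent.js` can be exercised standalone. Its regex drops lone UTF-16 surrogate halves (which break JSON serialization when sent to an API) while leaving properly paired emoji intact:

```javascript
// Same regex as the sanitizeSurrogates helper in the diff above:
// a high surrogate not followed by a low one, or a low surrogate
// not preceded by a high one, is removed. Requires regex lookbehind
// support (Node 8.3+ / modern V8).
function sanitizeSurrogates(text) {
  return text.replace(/[\uD800-\uDBFF](?![\uDC00-\uDFFF])|(?<![\uD800-\uDBFF])[\uDC00-\uDFFF]/g, "");
}

// "😀" is the valid pair \uD83D\uDE00 and survives; a stray \uD800 does not.
```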