codebot-ai 1.2.2 → 1.3.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,50 +1,60 @@
  # CodeBot AI
 
- Local-first AI coding assistant. Zero runtime dependencies. Works with Ollama, LM Studio, vLLM, Claude, GPT, Gemini, DeepSeek, Groq, Mistral, and Grok.
+ [![npm version](https://img.shields.io/npm/v/codebot-ai.svg)](https://www.npmjs.com/package/codebot-ai)
+ [![license](https://img.shields.io/npm/l/codebot-ai.svg)](https://github.com/zanderone1980/codebot-ai/blob/main/LICENSE)
+ [![node](https://img.shields.io/node/v/codebot-ai.svg)](https://nodejs.org)
+
+ **Zero-dependency autonomous AI agent.** Works with any LLM — local or cloud. Code, browse the web, run commands, search, automate routines, and more.
+
+ Built by [Ascendral Software Development & Innovation](https://github.com/AscendralSoftware).
 
  ## Quick Start
 
  ```bash
- # Install globally
  npm install -g codebot-ai
-
- # Run — setup wizard launches automatically on first use
  codebot
  ```
 
- Or run without installing:
+ That's it. The setup wizard launches on first run — pick your model, paste an API key (or use a local LLM), and you're coding.
 
  ```bash
+ # Or run without installing
  npx codebot-ai
  ```
 
- Or from source:
-
- ```bash
- git clone https://github.com/AscendralSoftware/codebot-ai.git
- cd codebot-ai
- npm install && npm run build
- ./bin/codebot
- ```
+ ## What Can It Do?
 
- ## Setup
+ - **Write & edit code** — reads your codebase, makes targeted edits, runs tests
+ - **Run shell commands** — system checks, builds, deploys, git operations
+ - **Browse the web** — navigates Chrome, clicks, types, reads pages, takes screenshots
+ - **Search the internet** — real-time web search for docs, APIs, current info
+ - **Automate routines** — schedule recurring tasks with cron (daily posts, email checks, monitoring)
+ - **Call APIs** — HTTP requests to any REST endpoint
+ - **Persistent memory** — remembers preferences and context across sessions
+ - **Self-recovering** — retries on network errors, recovers from API failures, never drops out
 
- On first run, CodeBot detects your environment and walks you through configuration:
+ ## Supported Models
 
- - Scans for local LLM servers (Ollama, LM Studio, vLLM)
- - Detects API keys from environment variables
- - Lets you pick a provider and model
- - Saves config to `~/.codebot/config.json`
+ Pick any model during setup. CodeBot works with all of them:
 
- To reconfigure anytime: `codebot --setup`
+ | Provider | Models |
+ |----------|--------|
+ | **Local (Ollama/LM Studio/vLLM)** | qwen2.5-coder, qwen3, deepseek-coder, llama3.x, mistral, phi-4, codellama, starcoder2, and any model your server runs |
+ | **Anthropic** | claude-opus-4-6, claude-sonnet-4-6, claude-haiku-4-5 |
+ | **OpenAI** | gpt-4o, gpt-4.1, o1, o3, o4-mini |
+ | **Google** | gemini-2.5-pro, gemini-2.5-flash, gemini-2.0-flash |
+ | **DeepSeek** | deepseek-chat, deepseek-reasoner |
+ | **Groq** | llama-3.3-70b, mixtral-8x7b |
+ | **Mistral** | mistral-large, codestral |
+ | **xAI** | grok-3, grok-3-mini |
 
- ### Environment Variables
+ For local models, just have Ollama/LM Studio/vLLM running — CodeBot auto-detects them.
 
- Set an API key for your preferred cloud provider:
+ For cloud models, set an environment variable:
 
  ```bash
- export ANTHROPIC_API_KEY="sk-ant-..." # Claude
  export OPENAI_API_KEY="sk-..." # GPT
+ export ANTHROPIC_API_KEY="sk-ant-..." # Claude
  export GEMINI_API_KEY="..." # Gemini
  export DEEPSEEK_API_KEY="sk-..." # DeepSeek
  export GROQ_API_KEY="gsk_..." # Groq
@@ -52,52 +62,23 @@ export MISTRAL_API_KEY="..." # Mistral
  export XAI_API_KEY="xai-..." # Grok
  ```
 
- For local models, just have Ollama/LM Studio/vLLM running — CodeBot auto-detects them.
+ Or paste your key during setup — either way works.
 
  ## Usage
 
- ### Interactive Mode
-
- ```bash
- codebot
- ```
-
- ### Single Message
-
  ```bash
- codebot "fix the bug in app.ts"
- codebot --model claude-sonnet-4-6 "explain this codebase"
+ codebot # Interactive REPL
+ codebot "fix the bug in app.ts" # Single task
+ codebot --autonomous "refactor auth and test" # Full auto — no permission prompts
+ codebot --continue # Resume last session
+ echo "explain this error" | codebot # Pipe mode
  ```
 
- ### Pipe Mode
-
- ```bash
- echo "write a function that sorts by date" | codebot
- cat error.log | codebot "what's causing this?"
- ```
-
- ### Autonomous Mode
-
- Skip all permission prompts — full auto:
-
- ```bash
- codebot --autonomous "refactor the auth module and run tests"
- ```
-
- ### Session Resume
-
- CodeBot auto-saves every conversation. Resume anytime:
-
- ```bash
- codebot --continue # Resume last session
- codebot --resume <session-id> # Resume specific session
- ```
-
- ## CLI Options
+ ### CLI Options
 
  ```
  --setup Run the setup wizard
- --model <name> Model to use (default: qwen2.5-coder:32b)
+ --model <name> Model to use
  --provider <name> Provider: openai, anthropic, gemini, deepseek, groq, mistral, xai
  --base-url <url> LLM API base URL
  --api-key <key> API key (or use env vars)
@@ -107,25 +88,26 @@ codebot --resume <session-id> # Resume specific session
  --max-iterations <n> Max agent loop iterations (default: 50)
  ```
 
- ## Interactive Commands
+ ### Interactive Commands
 
  ```
- /help Show commands
- /model Show or change model
- /models List all supported models
- /sessions List saved sessions
- /auto Toggle autonomous mode
- /undo Undo last file edit (/undo [path])
- /usage Show token usage for this session
- /clear Clear conversation
- /compact Force context compaction
- /config Show configuration
- /quit Exit
+ /help Show commands
+ /model Show or change model
+ /models List all supported models
+ /sessions List saved sessions
+ /routines List scheduled routines
+ /auto Toggle autonomous mode
+ /undo Undo last file edit (/undo [path])
+ /usage Show token usage for this session
+ /clear Clear conversation
+ /compact Force context compaction
+ /config Show configuration
+ /quit Exit
  ```
 
  ## Tools
 
- CodeBot has 11 built-in tools:
+ CodeBot has 13 built-in tools:
 
  | Tool | Description | Permission |
  |------|-------------|-----------|
@@ -139,7 +121,9 @@ CodeBot has 11 built-in tools:
  | `think` | Internal reasoning scratchpad | auto |
  | `memory` | Persistent memory across sessions | auto |
  | `web_fetch` | HTTP requests and API calls | prompt |
+ | `web_search` | Internet search with result summaries | prompt |
  | `browser` | Chrome automation via CDP | prompt |
+ | `routine` | Schedule recurring tasks with cron | prompt |
 
  ### Permission Levels
 
@@ -147,7 +131,7 @@ CodeBot has 11 built-in tools:
  - **prompt** — Asks for approval (skipped in `--autonomous` mode)
  - **always-ask** — Always asks, even in autonomous mode
 
- ### Browser Tool
+ ### Browser Automation
 
  Controls Chrome via the Chrome DevTools Protocol. Actions:
 
@@ -155,21 +139,34 @@ Controls Chrome via the Chrome DevTools Protocol. Actions:
  - `content` — Read page text
  - `screenshot` — Capture the page
  - `click` — Click an element by CSS selector
+ - `find_by_text` — Find and interact with elements by visible text
  - `type` — Type into an input field
+ - `scroll`, `press_key`, `hover` — Page interaction
  - `evaluate` — Run JavaScript on the page
  - `tabs` — List open tabs
  - `close` — Close browser connection
 
  Chrome is auto-launched with `--remote-debugging-port` if not already running.
 
+ ### Routines & Scheduling
+
+ Schedule recurring tasks with cron expressions:
+
+ ```
+ > Set up a routine to check my server health every hour
+ > Create a daily routine at 9am to summarize my GitHub notifications
+ ```
+
+ CodeBot creates the cron schedule, and the built-in scheduler runs tasks automatically while the agent is active. Manage with `/routines`.
+
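The five-field cron matching a scheduler like this relies on can be sketched in a few lines. This is an illustrative TypeScript sketch, not CodeBot's actual scheduler internals; `fieldMatches` and `shouldRun` are hypothetical names, and only the `*`, `N`, and `*/N` field forms are handled:

```typescript
// Match one cron field ("*", "5", "*/15") against a current value.
function fieldMatches(field: string, value: number): boolean {
  if (field === '*') return true;
  if (field.startsWith('*/')) {
    const step = parseInt(field.slice(2), 10);
    return value % step === 0;
  }
  return parseInt(field, 10) === value;
}

// "0 9 * * *" → minute 0, hour 9, any day/month/weekday (daily at 9am).
function shouldRun(cron: string, now: Date): boolean {
  const [min, hour, dom, mon, dow] = cron.split(/\s+/);
  return (
    fieldMatches(min, now.getMinutes()) &&
    fieldMatches(hour, now.getHours()) &&
    fieldMatches(dom, now.getDate()) &&
    fieldMatches(mon, now.getMonth() + 1) && // JS months are 0-based
    fieldMatches(dow, now.getDay())
  );
}
```

A scheduler then just calls `shouldRun` for each routine on a periodic tick and fires the ones that match.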
  ### Memory
 
- CodeBot has persistent memory that survives across sessions:
+ Persistent memory that survives across sessions:
 
  - **Global memory** (`~/.codebot/memory/`) — preferences, patterns
  - **Project memory** (`.codebot/memory/`) — project-specific context
- - Memory is automatically injected into the system prompt
- - The agent can read/write its own memory using the `memory` tool
+ - Automatically injected into the system prompt
+ - The agent reads/writes its own memory to learn your style
 
  ### Plugins
 
@@ -205,33 +202,54 @@ Connect external tool servers via [Model Context Protocol](https://modelcontextp
 
  MCP tools appear automatically with the `mcp_<server>_<tool>` prefix.
 
- ## Supported Models
+ ## Stability
+
+ CodeBot v1.3.0 is hardened for continuous operation:
+
+ - **Automatic retry** — network errors, rate limits (429), and server errors (5xx) retry with exponential backoff
+ - **Stream recovery** — if the LLM connection drops mid-response, the agent loop retries on the next iteration
+ - **Context compaction** — when the conversation exceeds the model's context window, messages are intelligently summarized
+ - **Process resilience** — unhandled exceptions and rejections are caught, logged, and the REPL keeps running
+ - **Routine timeouts** — scheduled tasks are capped at 5 minutes to prevent the scheduler from hanging
+ - **99 tests** — comprehensive suite covering error recovery, retry logic, tool execution, and edge cases
+
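The retry bullet can be made concrete. A minimal TypeScript sketch of exponential backoff with jitter under the parameters this diff's `dist/retry.js` uses (1s base, 30s cap, Retry-After honored first); the function name is hypothetical and this is a simplification, not the shipped code:

```typescript
// Delay before retry attempt N (0-based). A Retry-After header (seconds)
// takes precedence; otherwise base * 2^attempt scaled by 0.5x–1.5x jitter,
// capped at 30s.
function retryDelayMs(attempt: number, retryAfter?: string | null): number {
  const base = 1000;
  const max = 30_000;
  const seconds = retryAfter ? parseInt(retryAfter, 10) : NaN;
  if (!isNaN(seconds) && seconds > 0) return Math.min(seconds * 1000, max);
  const jitter = 0.5 + Math.random(); // 0.5..1.5
  return Math.min(base * 2 ** attempt * jitter, max);
}
```

The jitter spreads simultaneous clients apart so they don't all retry at the same instant after a shared outage.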
+ ## Programmatic API
+
+ CodeBot can be used as a library:
 
- ### Local (Ollama / LM Studio / vLLM)
+ ```typescript
+ import { Agent, OpenAIProvider, AnthropicProvider } from 'codebot-ai';
 
- qwen2.5-coder (3b/7b/14b/32b), qwen3, deepseek-coder, codellama, llama3.x, mistral, mixtral, phi-3/4, starcoder2, granite-code, gemma2, command-r
+ const provider = new AnthropicProvider({
+ baseUrl: 'https://api.anthropic.com',
+ apiKey: process.env.ANTHROPIC_API_KEY,
+ model: 'claude-sonnet-4-6',
+ });
 
- ### Cloud
+ const agent = new Agent({
+ provider,
+ model: 'claude-sonnet-4-6',
+ autoApprove: true,
+ });
 
- - **Anthropic**: claude-opus-4-6, claude-sonnet-4-6, claude-haiku-4-5
- - **OpenAI**: gpt-4o, gpt-4.1, o1, o3, o4-mini
- - **Google**: gemini-2.5-pro, gemini-2.5-flash, gemini-2.0-flash
- - **DeepSeek**: deepseek-chat, deepseek-reasoner
- - **Groq**: llama-3.3-70b, mixtral-8x7b (fast inference)
- - **Mistral**: mistral-large, codestral
- - **xAI**: grok-3, grok-3-mini
+ for await (const event of agent.run('list all TypeScript files')) {
+ if (event.type === 'text') process.stdout.write(event.text || '');
+ }
+ ```
 
  ## Architecture
 
  ```
  src/
- agent.ts Agent loop with streaming, tool execution, permissions
+ agent.ts Agent loop — streaming, tool execution, error recovery
  cli.ts CLI interface, REPL, slash commands
  types.ts TypeScript interfaces
  parser.ts XML/JSON tool call parser (for models without native tool support)
  history.ts Session persistence (JSONL)
  memory.ts Persistent memory system
- setup.ts Interactive setup wizard
+ setup.ts Interactive setup wizard (model-first UX)
+ scheduler.ts Cron-based routine scheduler
+ retry.ts Exponential backoff with jitter
  context/
  manager.ts Context window management, LLM-powered compaction
  repo-map.ts Project structure scanner
@@ -247,31 +265,8 @@ src/
  read.ts, write.ts, edit.ts, execute.ts
  batch-edit.ts Multi-file atomic editing
  glob.ts, grep.ts, think.ts
- memory.ts, web-fetch.ts, browser.ts
- ```
-
- ## Programmatic API
-
- CodeBot can be used as a library:
-
- ```typescript
- import { Agent, OpenAIProvider, AnthropicProvider } from 'codebot-ai';
-
- const provider = new AnthropicProvider({
- baseUrl: 'https://api.anthropic.com',
- apiKey: process.env.ANTHROPIC_API_KEY,
- model: 'claude-sonnet-4-6',
- });
-
- const agent = new Agent({
- provider,
- model: 'claude-sonnet-4-6',
- autoApprove: true,
- });
-
- for await (const event of agent.run('list all TypeScript files')) {
- if (event.type === 'text') process.stdout.write(event.text || '');
- }
+ memory.ts, web-fetch.ts, web-search.ts
+ browser.ts, routine.ts
  ```
 
  ## Configuration
@@ -282,6 +277,15 @@ Config is loaded in this order (later values win):
  2. Environment variables (`CODEBOT_MODEL`, `CODEBOT_PROVIDER`, etc.)
  3. CLI flags (`--model`, `--provider`, etc.)
 
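"Later values win" style config layering can be sketched as a simple merge where each source overrides the previous one but unset values pass through. This is illustrative only, with hypothetical names, not CodeBot's actual loader:

```typescript
// Later sources win; undefined values never override earlier ones.
type Cfg = { model?: string; provider?: string };

function mergeConfig(...sources: Cfg[]): Cfg {
  const out: Cfg = {};
  for (const src of sources) {
    for (const [k, v] of Object.entries(src)) {
      if (v !== undefined) (out as Record<string, unknown>)[k] = v;
    }
  }
  return out;
}

// file config, then env, then CLI flags — matching the order above
const cfg = mergeConfig(
  { model: 'qwen2.5-coder', provider: 'openai' }, // ~/.codebot/config.json
  { model: process.env.CODEBOT_MODEL },           // often undefined
  { provider: 'anthropic' },                      // --provider anthropic
);
```

The `undefined` guard is the important detail: an unset env var must not erase a value from the config file.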
+ ## From Source
+
+ ```bash
+ git clone https://github.com/zanderone1980/codebot-ai.git
+ cd codebot-ai
+ npm install && npm run build
+ ./bin/codebot
+ ```
+
 
  ## License
 
- MIT - Ascendral Software Development & Innovation
+ MIT - [Ascendral Software Development & Innovation](https://github.com/AscendralSoftware)
package/dist/agent.js CHANGED
@@ -88,9 +88,15 @@ class Agent {
  this.messages.push(userMsg);
  this.onMessage?.(userMsg);
  if (!this.context.fitsInBudget(this.messages)) {
- const result = await this.context.compactWithSummary(this.messages);
- this.messages = result.messages;
- yield { type: 'compaction', text: result.summary || 'Context compacted to fit budget.' };
+ try {
+ const result = await this.context.compactWithSummary(this.messages);
+ this.messages = result.messages;
+ yield { type: 'compaction', text: result.summary || 'Context compacted to fit budget.' };
+ }
+ catch {
+ this.messages = this.context.compact(this.messages, true);
+ yield { type: 'compaction', text: 'Context compacted (summary unavailable).' };
+ }
  }
  for (let i = 0; i < this.maxIterations; i++) {
  // Validate message integrity: ensure every tool_call has a matching tool response
@@ -100,29 +106,41 @@ class Agent {
  const toolSchemas = supportsTools ? this.tools.getSchemas() : undefined;
  let fullText = '';
  let toolCalls = [];
- // Stream LLM response
- for await (const event of this.provider.chat(this.messages, toolSchemas)) {
- switch (event.type) {
- case 'text':
- fullText += event.text || '';
- yield { type: 'text', text: event.text };
- break;
- case 'thinking':
- yield { type: 'thinking', text: event.text };
- break;
- case 'tool_call_end':
- if (event.toolCall) {
- toolCalls.push(event.toolCall);
- }
- break;
- case 'usage':
- yield { type: 'usage', usage: event.usage };
- break;
- case 'error':
- yield { type: 'error', error: event.error };
- return;
+ let streamError = null;
+ // Stream LLM response wrapped in try-catch for resilience
+ try {
+ for await (const event of this.provider.chat(this.messages, toolSchemas)) {
+ switch (event.type) {
+ case 'text':
+ fullText += event.text || '';
+ yield { type: 'text', text: event.text };
+ break;
+ case 'thinking':
+ yield { type: 'thinking', text: event.text };
+ break;
+ case 'tool_call_end':
+ if (event.toolCall) {
+ toolCalls.push(event.toolCall);
+ }
+ break;
+ case 'usage':
+ yield { type: 'usage', usage: event.usage };
+ break;
+ case 'error':
+ streamError = event.error || 'Unknown provider error';
+ break;
+ }
  }
  }
+ catch (err) {
+ const msg = err instanceof Error ? err.message : String(err);
+ streamError = `Stream error: ${msg}`;
+ }
+ // On error: yield it to the UI but DON'T return — continue to next iteration
+ if (streamError) {
+ yield { type: 'error', error: streamError };
+ continue;
+ }
  // If no native tool calls, try parsing from text
  if (toolCalls.length === 0 && fullText) {
  toolCalls = (0, parser_1.parseToolCalls)(fullText);
@@ -195,9 +213,15 @@ class Agent {
  }
  // Compact after tool results if needed
  if (!this.context.fitsInBudget(this.messages)) {
- const result = await this.context.compactWithSummary(this.messages);
- this.messages = result.messages;
- yield { type: 'compaction', text: result.summary || 'Context compacted.' };
+ try {
+ const result = await this.context.compactWithSummary(this.messages);
+ this.messages = result.messages;
+ yield { type: 'compaction', text: result.summary || 'Context compacted.' };
+ }
+ catch {
+ this.messages = this.context.compact(this.messages, true);
+ yield { type: 'compaction', text: 'Context compacted (summary unavailable).' };
+ }
  }
  }
  yield { type: 'error', error: `Max iterations (${this.maxIterations}) reached.` };
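The change above stops treating a dropped stream as fatal: the error is converted into an event and the loop moves to the next iteration instead of returning. The same pattern in isolation, with hypothetical event and generator names (a sketch, not CodeBot's actual agent loop):

```typescript
type Ev = { type: 'text' | 'error'; text?: string; error?: string };

// Try each attempt in order; a mid-stream failure becomes an 'error'
// event and the next attempt runs, instead of aborting the generator.
async function* resilientRun(
  attempts: Array<() => AsyncGenerator<Ev>>,
): AsyncGenerator<Ev> {
  for (const attempt of attempts) {
    let failed = false;
    try {
      for await (const ev of attempt()) yield ev;
    } catch (err) {
      yield { type: 'error', error: String(err) };
      failed = true;
    }
    if (!failed) return; // success: stop retrying
  }
}
```

The consumer sees a continuous event stream either way; whether the failure was recoverable is decided by whether a later attempt succeeds.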
package/dist/cli.js CHANGED
@@ -44,7 +44,7 @@ const setup_1 = require("./setup");
  const banner_1 = require("./banner");
  const tools_1 = require("./tools");
  const scheduler_1 = require("./scheduler");
- const VERSION = '1.2.2';
+ const VERSION = '1.3.0';
  // Session-wide token tracking
  let sessionTokens = { input: 0, output: 0, total: 0 };
  const C = {
@@ -61,6 +61,17 @@ function c(text, style) {
  return `${C[style]}${text}${C.reset}`;
  }
  async function main() {
+ // Process-level safety nets: prevent silent crashes
+ process.on('unhandledRejection', (reason) => {
+ const msg = reason instanceof Error ? reason.message : String(reason);
+ console.error(`\x1b[31m\nUnhandled error: ${msg}\x1b[0m`);
+ });
+ process.on('uncaughtException', (err) => {
+ console.error(`\x1b[31m\nUncaught exception: ${err.message}\x1b[0m`);
+ if (err.message.includes('out of memory') || err.message.includes('ENOMEM')) {
+ process.exit(1);
+ }
+ });
  const args = parseArgs(process.argv.slice(2));
  if (args.help) {
  showHelp();
@@ -363,7 +374,12 @@ function handleSlashCommand(input, agent, config) {
  case '/routines': {
  const { RoutineTool } = require('./tools/routine');
  const rt = new RoutineTool();
- rt.execute({ action: 'list' }).then((out) => console.log('\n' + out));
+ rt.execute({ action: 'list' })
+ .then((out) => console.log('\n' + out))
+ .catch((err) => {
+ const msg = err instanceof Error ? err.message : String(err);
+ console.error(c(`Error listing routines: ${msg}`, 'red'));
+ });
  break;
  }
  case '/config':
package/dist/providers/anthropic.js CHANGED
@@ -1,6 +1,7 @@
  "use strict";
  Object.defineProperty(exports, "__esModule", { value: true });
  exports.AnthropicProvider = void 0;
+ const retry_1 = require("../retry");
  class AnthropicProvider {
  name;
  config;
@@ -27,26 +28,45 @@ class AnthropicProvider {
  }));
  }
  const baseUrl = this.config.baseUrl.replace(/\/+$/, '');
+ const MAX_RETRIES = 3;
  let response;
- try {
- response = await fetch(`${baseUrl}/v1/messages`, {
- method: 'POST',
- headers: {
- 'Content-Type': 'application/json',
- 'x-api-key': this.config.apiKey || '',
- 'anthropic-version': '2023-06-01',
- },
- body: JSON.stringify(body),
- });
- }
- catch (err) {
- const msg = err instanceof Error ? err.message : String(err);
- yield { type: 'error', error: `Connection failed: ${msg}` };
- return;
+ let lastError = '';
+ for (let attempt = 0; attempt <= MAX_RETRIES; attempt++) {
+ try {
+ response = await fetch(`${baseUrl}/v1/messages`, {
+ method: 'POST',
+ headers: {
+ 'Content-Type': 'application/json',
+ 'x-api-key': this.config.apiKey || '',
+ 'anthropic-version': '2023-06-01',
+ },
+ body: JSON.stringify(body),
+ signal: AbortSignal.timeout(60_000),
+ });
+ if (response.ok || !(0, retry_1.isRetryable)(null, response.status)) {
+ break;
+ }
+ lastError = `Anthropic error ${response.status}`;
+ if (attempt < MAX_RETRIES) {
+ const delay = (0, retry_1.getRetryDelay)(attempt, response.headers.get('retry-after'));
+ await (0, retry_1.sleep)(delay);
+ continue;
+ }
+ }
+ catch (err) {
+ lastError = err instanceof Error ? err.message : String(err);
+ if (attempt < MAX_RETRIES && (0, retry_1.isRetryable)(err)) {
+ const delay = (0, retry_1.getRetryDelay)(attempt);
+ await (0, retry_1.sleep)(delay);
+ continue;
+ }
+ yield { type: 'error', error: `Connection failed after ${attempt + 1} attempts: ${lastError}` };
+ return;
+ }
  }
- if (!response.ok) {
- const text = await response.text();
- yield { type: 'error', error: `Anthropic error ${response.status}: ${text}` };
+ if (!response || !response.ok) {
+ const text = response ? await response.text().catch(() => '') : '';
+ yield { type: 'error', error: `Anthropic error after retries: ${lastError}${text ? ` — ${text}` : ''}` };
  return;
  }
  if (!response.body) {
package/dist/providers/openai.js CHANGED
@@ -2,6 +2,7 @@
  Object.defineProperty(exports, "__esModule", { value: true });
  exports.OpenAIProvider = void 0;
  const registry_1 = require("./registry");
+ const retry_1 = require("../retry");
  class OpenAIProvider {
  name;
  config;
@@ -33,22 +34,42 @@ class OpenAIProvider {
  if (this.config.apiKey) {
  headers['Authorization'] = `Bearer ${this.config.apiKey}`;
  }
+ const MAX_RETRIES = 3;
  let response;
- try {
- response = await fetch(`${this.config.baseUrl}/v1/chat/completions`, {
- method: 'POST',
- headers,
- body: JSON.stringify(body),
- });
- }
- catch (err) {
- const msg = err instanceof Error ? err.message : String(err);
- yield { type: 'error', error: `Connection failed: ${msg}. Is your LLM server running?` };
- return;
+ let lastError = '';
+ for (let attempt = 0; attempt <= MAX_RETRIES; attempt++) {
+ try {
+ response = await fetch(`${this.config.baseUrl}/v1/chat/completions`, {
+ method: 'POST',
+ headers,
+ body: JSON.stringify(body),
+ signal: AbortSignal.timeout(60_000),
+ });
+ if (response.ok || !(0, retry_1.isRetryable)(null, response.status)) {
+ break;
+ }
+ // Retryable HTTP status (429, 5xx)
+ lastError = `LLM error ${response.status}`;
+ if (attempt < MAX_RETRIES) {
+ const delay = (0, retry_1.getRetryDelay)(attempt, response.headers.get('retry-after'));
+ await (0, retry_1.sleep)(delay);
+ continue;
+ }
+ }
+ catch (err) {
+ lastError = err instanceof Error ? err.message : String(err);
+ if (attempt < MAX_RETRIES && (0, retry_1.isRetryable)(err)) {
+ const delay = (0, retry_1.getRetryDelay)(attempt);
+ await (0, retry_1.sleep)(delay);
+ continue;
+ }
+ yield { type: 'error', error: `Connection failed after ${attempt + 1} attempts: ${lastError}. Is your LLM server running?` };
+ return;
+ }
  }
- if (!response.ok) {
- const text = await response.text();
- yield { type: 'error', error: `LLM error ${response.status}: ${text}` };
+ if (!response || !response.ok) {
+ const text = response ? await response.text().catch(() => '') : '';
+ yield { type: 'error', error: `LLM error after retries: ${lastError}${text ? ` — ${text}` : ''}` };
  return;
  }
  if (!response.body) {
package/dist/retry.d.ts ADDED
@@ -0,0 +1,22 @@
+ /**
+ * Retry utilities for resilient network operations.
+ * Exponential backoff with jitter, Retry-After header support.
+ * Zero dependencies.
+ */
+ export interface RetryOptions {
+ maxRetries?: number;
+ baseDelayMs?: number;
+ maxDelayMs?: number;
+ retryableStatuses?: number[];
+ }
+ declare const DEFAULTS: Required<RetryOptions>;
+ /** Returns true if the error/status is retryable (network error or retryable HTTP status). */
+ export declare function isRetryable(error: unknown, status?: number, opts?: RetryOptions): boolean;
+ /**
+ * Calculate delay with exponential backoff + jitter.
+ * For 429 responses, respects Retry-After header.
+ */
+ export declare function getRetryDelay(attempt: number, retryAfterHeader?: string | null, opts?: RetryOptions): number;
+ export declare function sleep(ms: number): Promise<void>;
+ export { DEFAULTS as RETRY_DEFAULTS };
+ //# sourceMappingURL=retry.d.ts.map
package/dist/retry.js ADDED
@@ -0,0 +1,59 @@
+ "use strict";
+ /**
+ * Retry utilities for resilient network operations.
+ * Exponential backoff with jitter, Retry-After header support.
+ * Zero dependencies.
+ */
+ Object.defineProperty(exports, "__esModule", { value: true });
+ exports.RETRY_DEFAULTS = void 0;
+ exports.isRetryable = isRetryable;
+ exports.getRetryDelay = getRetryDelay;
+ exports.sleep = sleep;
+ const DEFAULTS = {
+ maxRetries: 3,
+ baseDelayMs: 1000,
+ maxDelayMs: 30000,
+ retryableStatuses: [429, 500, 502, 503, 504],
+ };
+ exports.RETRY_DEFAULTS = DEFAULTS;
+ /** Returns true if the error/status is retryable (network error or retryable HTTP status). */
+ function isRetryable(error, status, opts) {
+ const statuses = opts?.retryableStatuses ?? DEFAULTS.retryableStatuses;
+ if (status && statuses.includes(status))
+ return true;
+ if (error instanceof TypeError)
+ return true; // fetch network errors
+ if (error instanceof Error) {
+ const msg = error.message.toLowerCase();
+ if (msg.includes('fetch failed') || msg.includes('econnreset') ||
+ msg.includes('econnrefused') || msg.includes('etimedout') ||
+ msg.includes('socket hang up') || msg.includes('network') ||
+ msg.includes('abort')) {
+ return true;
+ }
+ }
+ return false;
+ }
+ /**
+ * Calculate delay with exponential backoff + jitter.
+ * For 429 responses, respects Retry-After header.
+ */
+ function getRetryDelay(attempt, retryAfterHeader, opts) {
+ const base = opts?.baseDelayMs ?? DEFAULTS.baseDelayMs;
+ const max = opts?.maxDelayMs ?? DEFAULTS.maxDelayMs;
+ // Respect Retry-After header (in seconds)
+ if (retryAfterHeader) {
+ const seconds = parseInt(retryAfterHeader, 10);
+ if (!isNaN(seconds) && seconds > 0) {
+ return Math.min(seconds * 1000, max);
+ }
+ }
+ // Exponential backoff with jitter: base * 2^attempt * (0.5..1.5)
+ const exponential = base * Math.pow(2, attempt);
+ const jitter = 0.5 + Math.random();
+ return Math.min(exponential * jitter, max);
+ }
+ function sleep(ms) {
+ return new Promise(resolve => setTimeout(resolve, ms));
+ }
+ //# sourceMappingURL=retry.js.map
package/dist/scheduler.d.ts CHANGED
@@ -12,6 +12,8 @@ export declare class Scheduler {
  /** Check if any routines need to run right now */
  private tick;
  private executeRoutine;
+ /** Run the agent loop for a routine — separated so it can be wrapped in Promise.race */
+ private runRoutineAgent;
  private loadRoutines;
  private saveRoutines;
  }
package/dist/scheduler.js CHANGED
@@ -90,25 +90,14 @@ class Scheduler {
  }
  async executeRoutine(routine, allRoutines) {
  this.running = true;
+ const ROUTINE_TIMEOUT_MS = 5 * 60 * 1000; // 5 minutes max per routine
  try {
  this.onOutput?.(`\n⏰ Running routine: ${routine.name}\n Task: ${routine.prompt}\n`);
- // Run the agent with the routine's prompt
- for await (const event of this.agent.run(routine.prompt)) {
- switch (event.type) {
- case 'text':
- this.onOutput?.(event.text || '');
- break;
- case 'tool_call':
- this.onOutput?.(`\n⚡ ${event.toolCall?.name}(${Object.entries(event.toolCall?.args || {}).map(([k, v]) => `${k}: ${typeof v === 'string' ? v.substring(0, 40) : v}`).join(', ')})\n`);
- break;
- case 'tool_result':
- this.onOutput?.(` ✓ ${event.toolResult?.result?.substring(0, 100) || ''}\n`);
- break;
- case 'error':
- this.onOutput?.(` ✗ Error: ${event.error}\n`);
- break;
- }
- }
+ // Race against a timeout so a hanging routine doesn't block the scheduler forever
+ await Promise.race([
+ this.runRoutineAgent(routine),
+ new Promise((_, reject) => setTimeout(() => reject(new Error(`Routine timed out after ${ROUTINE_TIMEOUT_MS / 1000}s`)), ROUTINE_TIMEOUT_MS)),
+ ]);
  // Update last run time
  routine.lastRun = new Date().toISOString();
  this.saveRoutines(allRoutines);
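The timeout cap added here is the standard Promise.race pattern. As a generic standalone sketch (a hypothetical helper, not the shipped code), with one refinement: the pending timer is cleared when the work finishes, so a completed routine does not leave a dangling timeout:

```typescript
// Resolve with the work's result, or reject after `ms` milliseconds.
function withTimeout<T>(work: Promise<T>, ms: number): Promise<T> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`Timed out after ${ms}ms`)), ms);
  });
  return Promise.race([work, timeout]).finally(() => clearTimeout(timer));
}
```

Note that Promise.race does not cancel the losing promise; the hung work keeps running in the background, which is acceptable for a scheduler that only needs to stop waiting.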
@@ -122,6 +111,25 @@ class Scheduler {
  this.running = false;
  }
  }
+ /** Run the agent loop for a routine — separated so it can be wrapped in Promise.race */
+ async runRoutineAgent(routine) {
+ for await (const event of this.agent.run(routine.prompt)) {
+ switch (event.type) {
+ case 'text':
+ this.onOutput?.(event.text || '');
+ break;
+ case 'tool_call':
+ this.onOutput?.(`\n⚡ ${event.toolCall?.name}(${Object.entries(event.toolCall?.args || {}).map(([k, v]) => `${k}: ${typeof v === 'string' ? v.substring(0, 40) : v}`).join(', ')})\n`);
+ break;
+ case 'tool_result':
+ this.onOutput?.(` ✓ ${event.toolResult?.result?.substring(0, 100) || ''}\n`);
+ break;
+ case 'error':
+ this.onOutput?.(` ✗ Error: ${event.error}\n`);
+ break;
+ }
+ }
+ }
  loadRoutines() {
  try {
  if (fs.existsSync(ROUTINES_FILE)) {
package/dist/setup.d.ts CHANGED
@@ -12,6 +12,6 @@ export declare function loadConfig(): SavedConfig;
  export declare function saveConfig(config: SavedConfig): void;
  /** Check if this is the first run (no config, no env keys) */
  export declare function isFirstRun(): boolean;
- /** Interactive setup wizard */
+ /** Interactive setup wizard — model-first flow */
  export declare function runSetup(): Promise<SavedConfig>;
  //# sourceMappingURL=setup.d.ts.map
package/dist/setup.js CHANGED
@@ -60,15 +60,12 @@ function loadConfig() {
  function saveConfig(config) {
  fs.mkdirSync(CONFIG_DIR, { recursive: true });
  const safe = { ...config };
- // Persist API key if user entered it during setup (convenience over env vars)
- // The key is stored in the user's home directory with default permissions
  fs.writeFileSync(CONFIG_FILE, JSON.stringify(safe, null, 2) + '\n');
  }
  /** Check if this is the first run (no config, no env keys) */
  function isFirstRun() {
  if (fs.existsSync(CONFIG_FILE))
  return false;
- // Check if any provider API keys are set
  const envKeys = [
  'ANTHROPIC_API_KEY', 'OPENAI_API_KEY', 'GEMINI_API_KEY',
  'DEEPSEEK_API_KEY', 'GROQ_API_KEY', 'MISTRAL_API_KEY',
@@ -105,24 +102,7 @@ async function detectLocalServers() {
     }
     return servers;
 }
-/** Detect which cloud API keys are available */
-function detectApiKeys() {
-    return Object.entries(registry_1.PROVIDER_DEFAULTS).map(([provider, defaults]) => ({
-        provider,
-        envVar: defaults.envKey,
-        set: !!process.env[defaults.envKey],
-    }));
-}
-/** Cloud provider display info */
-const CLOUD_PROVIDERS = [
-    { provider: 'openai', name: 'OpenAI', defaultModel: 'gpt-4o', description: 'GPT-4o, GPT-4.1, o3/o4' },
-    { provider: 'anthropic', name: 'Anthropic', defaultModel: 'claude-sonnet-4-6', description: 'Claude Opus/Sonnet/Haiku' },
-    { provider: 'gemini', name: 'Google Gemini', defaultModel: 'gemini-2.5-flash', description: 'Gemini 2.5 Pro/Flash' },
-    { provider: 'deepseek', name: 'DeepSeek', defaultModel: 'deepseek-chat', description: 'DeepSeek Chat/Reasoner' },
-    { provider: 'groq', name: 'Groq', defaultModel: 'llama-3.3-70b-versatile', description: 'Fast Llama/Mixtral inference' },
-    { provider: 'mistral', name: 'Mistral', defaultModel: 'mistral-large-latest', description: 'Mistral Large, Codestral' },
-    { provider: 'xai', name: 'xAI', defaultModel: 'grok-3', description: 'Grok-3' },
-];
+// ── ANSI helpers ─────────────────────────────────────────────────────────────
 const C = {
     reset: '\x1b[0m',
     bold: '\x1b[1m',
@@ -140,131 +120,253 @@ function ask(rl, question) {
         rl.question(question, answer => resolve(answer.trim()));
     });
 }
-/** Interactive setup wizard */
+const PROVIDER_DISPLAY = {
+    anthropic: 'Anthropic',
+    openai: 'OpenAI',
+    gemini: 'Google',
+    deepseek: 'DeepSeek',
+    groq: 'Groq',
+    mistral: 'Mistral',
+    xai: 'xAI',
+};
+/** Hand-picked cloud models for the setup menu — best 2-3 from each provider */
+const CURATED_CLOUD_MODELS = [
+    // Frontier (most capable)
+    { id: 'claude-opus-4-6', displayName: 'Claude Opus 4', provider: 'anthropic', category: 'frontier', contextK: '200K' },
+    { id: 'gpt-4.1', displayName: 'GPT-4.1', provider: 'openai', category: 'frontier', contextK: '1M' },
+    { id: 'gemini-2.5-pro', displayName: 'Gemini 2.5 Pro', provider: 'gemini', category: 'frontier', contextK: '1M' },
+    { id: 'o3', displayName: 'o3', provider: 'openai', category: 'frontier', contextK: '200K' },
+    { id: 'grok-3', displayName: 'Grok-3', provider: 'xai', category: 'frontier', contextK: '131K' },
+    // Fast & efficient
+    { id: 'claude-sonnet-4-6', displayName: 'Claude Sonnet 4', provider: 'anthropic', category: 'fast', contextK: '200K' },
+    { id: 'gpt-4o', displayName: 'GPT-4o', provider: 'openai', category: 'fast', contextK: '128K' },
+    { id: 'gemini-2.5-flash', displayName: 'Gemini 2.5 Flash', provider: 'gemini', category: 'fast', contextK: '1M' },
+    { id: 'deepseek-chat', displayName: 'DeepSeek Chat', provider: 'deepseek', category: 'fast', contextK: '65K' },
+    { id: 'mistral-large-latest', displayName: 'Mistral Large', provider: 'mistral', category: 'fast', contextK: '131K' },
+    { id: 'llama-3.3-70b-versatile', displayName: 'Llama 3.3 70B', provider: 'groq', category: 'fast', contextK: '131K' },
+    { id: 'claude-haiku-4-5-20251001', displayName: 'Claude Haiku 4.5', provider: 'anthropic', category: 'fast', contextK: '200K' },
+    // Reasoning
+    { id: 'o1', displayName: 'o1', provider: 'openai', category: 'reasoning', contextK: '200K' },
+    { id: 'o4-mini', displayName: 'o4-mini', provider: 'openai', category: 'reasoning', contextK: '200K' },
+    { id: 'deepseek-reasoner', displayName: 'DeepSeek Reasoner', provider: 'deepseek', category: 'reasoning', contextK: '65K' },
+];
+/** Format context window for display: 200000 → "200K", 1048576 → "1M" */
+function formatCtx(tokens) {
+    if (tokens >= 1000000)
+        return `${Math.round(tokens / 1048576)}M`;
+    return `${Math.round(tokens / 1024)}K`;
+}
+/** Build the unified model list: local models first, then curated cloud models */
+function buildModelList(localServers, apiKeyStatus) {
+    const entries = [];
+    // Local models (cap at 8, prioritize well-known models)
+    const localPriority = ['qwen', 'deepseek', 'llama', 'phi', 'mistral', 'codellama'];
+    for (const server of localServers) {
+        const sorted = [...server.models].sort((a, b) => {
+            const ai = localPriority.findIndex(p => a.toLowerCase().includes(p));
+            const bi = localPriority.findIndex(p => b.toLowerCase().includes(p));
+            return (ai === -1 ? 99 : ai) - (bi === -1 ? 99 : bi);
+        });
+        for (const model of sorted.slice(0, 8)) {
+            const info = (0, registry_1.getModelInfo)(model);
+            entries.push({
+                id: model,
+                displayName: model,
+                provider: 'local',
+                category: 'local',
+                contextK: formatCtx(info.contextWindow),
+                baseUrl: server.url,
+                needsKey: false,
+                serverName: server.name,
+            });
+        }
+    }
+    // Cloud models from curated list
+    for (const model of CURATED_CLOUD_MODELS) {
+        const defaults = registry_1.PROVIDER_DEFAULTS[model.provider];
+        entries.push({
+            ...model,
+            baseUrl: defaults?.baseUrl || '',
+            needsKey: !apiKeyStatus.get(model.provider),
+        });
+    }
+    return entries;
+}
+function renderCategoryHeader(category) {
+    const headers = {
+        local: 'LOCAL (free, private, runs on your machine)',
+        frontier: 'CLOUD \u2014 FRONTIER (most capable)',
+        fast: 'CLOUD \u2014 FAST & EFFICIENT',
+        reasoning: 'CLOUD \u2014 REASONING',
+    };
+    const title = headers[category] || category.toUpperCase();
+    console.log(`\n  ${fmt(title, 'bold')}`);
+    console.log(`  ${fmt('\u2500'.repeat(48), 'dim')}`);
+}
+function renderModelRow(index, entry) {
+    const num = fmt(String(index).padStart(3), 'cyan');
+    const name = entry.displayName.padEnd(26);
+    const prov = (entry.serverName || PROVIDER_DISPLAY[entry.provider] || entry.provider).padEnd(11);
+    const ctx = fmt((entry.contextK + ' ctx').padStart(9), 'dim');
+    let keyStatus = '';
+    if (entry.provider !== 'local') {
+        keyStatus = entry.needsKey
+            ? fmt('  needs key', 'yellow')
+            : fmt('  \u2713 key set', 'green');
+    }
+    console.log(`  ${num}  ${name}${prov}${ctx}${keyStatus}`);
+}
+/** Fuzzy match a typed model name against all known models */
+function fuzzyMatchModel(input, allModels) {
+    const lower = input.toLowerCase();
+    // Exact match
+    if (allModels.includes(input))
+        return input;
+    // Case-insensitive exact
+    const exact = allModels.find(m => m.toLowerCase() === lower);
+    if (exact)
+        return exact;
+    // Prefix match
+    const prefix = allModels.find(m => m.toLowerCase().startsWith(lower));
+    if (prefix)
+        return prefix;
+    // Substring match
+    const sub = allModels.find(m => m.toLowerCase().includes(lower));
+    if (sub)
+        return sub;
+    return undefined;
+}
+// ── Setup wizard ─────────────────────────────────────────────────────────────
+/** Interactive setup wizard — model-first flow */
 async function runSetup() {
     const rl = readline.createInterface({ input: process.stdin, output: process.stdout });
-    console.log(fmt('\n  CodeBot AI Setup', 'bold'));
+    console.log(fmt('\n\u26A1 CodeBot AI \u2014 Setup', 'bold'));
     console.log(fmt('  Let\'s get you configured.\n', 'dim'));
-    // Step 1: Detect local servers
+    // ── Phase A: Detection ──────────────────────────────────────────────────────
     console.log(fmt('Scanning for local LLM servers...', 'dim'));
     const localServers = await detectLocalServers();
-    // Step 2: Detect API keys
-    const apiKeys = detectApiKeys();
-    const availableKeys = apiKeys.filter(k => k.set);
-    // Show what was found
+    const apiKeyStatus = new Map();
+    for (const [provider, defaults] of Object.entries(registry_1.PROVIDER_DEFAULTS)) {
+        apiKeyStatus.set(provider, !!process.env[defaults.envKey]);
+    }
+    // Show detection results
     if (localServers.length > 0) {
         for (const server of localServers) {
-            console.log(fmt(`  ${server.name} detected (${server.models.length} models)`, 'green'));
+            console.log(fmt(`  \u2713 ${server.name} detected (${server.models.length} models)`, 'green'));
         }
     }
     else {
-        console.log(fmt('  No local LLM servers detected.', 'dim'));
+        console.log(fmt('  No local servers found. Start Ollama for free local models: ollama.com', 'dim'));
     }
-    if (availableKeys.length > 0) {
-        for (const key of availableKeys) {
-            console.log(fmt(`  ✓ ${key.provider} API key found (${key.envVar})`, 'green'));
-        }
+    const setKeys = [...apiKeyStatus.entries()].filter(([, set]) => set);
+    for (const [prov] of setKeys) {
+        const display = PROVIDER_DISPLAY[prov] || prov;
+        console.log(fmt(`  \u2713 ${display} API key found`, 'green'));
     }
-    // Step 3: Choose provider — show ALL options (local + cloud)
-    console.log(fmt('\nChoose your setup:', 'bold'));
-    const options = [];
-    let idx = 1;
-    // Local options first
-    for (const server of localServers) {
-        const defaultModel = server.models[0] || 'qwen2.5-coder:32b';
-        options.push({
-            label: `${server.name} (local, free)`,
-            provider: 'openai',
-            model: defaultModel,
-            baseUrl: server.url,
-            needsKey: false,
-        });
-        console.log(`  ${fmt(`${idx}`, 'cyan')} ${server.name} — ${defaultModel} ${fmt('(local, free, private)', 'green')}`);
-        idx++;
+    // ── Phase B: Build & render model list ──────────────────────────────────────
+    const modelList = buildModelList(localServers, apiKeyStatus);
+    console.log(fmt('\nChoose a model:', 'bold'));
+    let currentCategory = '';
+    modelList.forEach((entry, i) => {
+        if (entry.category !== currentCategory) {
+            currentCategory = entry.category;
+            renderCategoryHeader(currentCategory);
+        }
+        renderModelRow(i + 1, entry);
+    });
+    if (modelList.length === 0) {
+        console.log(fmt('\n  No models available. Install Ollama or set a cloud API key.', 'yellow'));
+        rl.close();
+        return {};
     }
-    // Cloud options — ALWAYS show all providers
-    for (const cloud of CLOUD_PROVIDERS) {
-        const keyInfo = apiKeys.find(k => k.provider === cloud.provider);
-        const hasKey = keyInfo?.set || false;
-        const defaults = registry_1.PROVIDER_DEFAULTS[cloud.provider];
-        const keyStatus = hasKey ? fmt('✓ key set', 'green') : fmt('enter key during setup', 'yellow');
-        options.push({
-            label: cloud.name,
-            provider: cloud.provider,
-            model: cloud.defaultModel,
-            baseUrl: defaults.baseUrl,
-            needsKey: !hasKey,
-            envVar: defaults.envKey,
-        });
-        console.log(`  ${fmt(`${idx}`, 'cyan')} ${cloud.name} — ${cloud.description} ${fmt(`(${keyStatus})`, 'dim')}`);
-        idx++;
+    // ── Phase C: Model selection ────────────────────────────────────────────────
+    const allKnownModels = [
+        ...Object.keys(registry_1.MODEL_REGISTRY),
+        ...localServers.flatMap(s => s.models),
+    ];
+    const choice = await ask(rl, fmt(`\nSelect [1-${modelList.length}] or type a model name: `, 'cyan'));
+    let selectedModel;
+    let selectedProvider;
+    let selectedBaseUrl;
+    let isLocal = false;
+    const choiceNum = parseInt(choice, 10);
+    if (choiceNum >= 1 && choiceNum <= modelList.length) {
+        // User picked by number
+        const entry = modelList[choiceNum - 1];
+        selectedModel = entry.id;
+        selectedProvider = entry.provider === 'local' ? 'openai' : entry.provider;
+        selectedBaseUrl = entry.baseUrl;
+        isLocal = entry.provider === 'local';
     }
-    const choice = await ask(rl, fmt(`\nSelect [1-${options.length}]: `, 'cyan'));
-    const selected = options[parseInt(choice, 10) - 1] || options[0];
-    // Step 4: If cloud provider needs API key, prompt for it
-    let apiKey = '';
-    if (selected.needsKey && selected.envVar) {
-        console.log(fmt(`\n  ${selected.label} requires an API key.`, 'yellow'));
-        console.log(fmt(`  Get one at: ${getKeyUrl(selected.provider)}`, 'dim'));
-        apiKey = await ask(rl, fmt(`\n  Enter your ${selected.label} API key: `, 'cyan'));
-        if (!apiKey) {
-            console.log(fmt(`\n  No key entered. You can set it later:`, 'yellow'));
-            console.log(fmt(`  export ${selected.envVar}="your-key-here"`, 'dim'));
+    else if (choice.length > 1) {
+        // User typed a model name — fuzzy match
+        const matched = fuzzyMatchModel(choice, allKnownModels);
+        selectedModel = matched || choice;
+        const detected = (0, registry_1.detectProvider)(selectedModel);
+        selectedProvider = detected || 'openai';
+        isLocal = !detected;
+        if (isLocal) {
+            const server = localServers.find(s => s.models.some(m => m.toLowerCase() === selectedModel.toLowerCase() || m.toLowerCase().includes(selectedModel.toLowerCase())));
+            selectedBaseUrl = server?.url || 'http://localhost:11434';
+        }
+        else {
+            selectedBaseUrl = registry_1.PROVIDER_DEFAULTS[selectedProvider]?.baseUrl || '';
         }
     }
-    else if (selected.envVar) {
-        // Use existing env var
-        apiKey = process.env[selected.envVar] || '';
+    else {
+        // Empty or single char — default to first entry
+        const entry = modelList[0];
+        selectedModel = entry.id;
+        selectedProvider = entry.provider === 'local' ? 'openai' : entry.provider;
+        selectedBaseUrl = entry.baseUrl;
+        isLocal = entry.provider === 'local';
     }
-    // Step 5: Show available models for chosen provider
-    const matchedServer = localServers.find(s => s.url === selected.baseUrl);
-    const providerModels = matchedServer && matchedServer.models.length > 0
-        ? matchedServer.models
-        : Object.entries(registry_1.MODEL_REGISTRY)
-            .filter(([, info]) => info.provider === selected.provider)
-            .map(([name]) => name);
-    if (providerModels.length > 1) {
-        console.log(fmt(`\nAvailable models${matchedServer ? ` on ${matchedServer.name}` : ''}:`, 'bold'));
-        providerModels.slice(0, 15).forEach((m, i) => {
-            const marker = m === selected.model ? fmt(' (default)', 'green') : '';
-            console.log(`  ${fmt(`${i + 1}`, 'cyan')} ${m}${marker}`);
-        });
-        const modelChoice = await ask(rl, fmt(`\nModel [Enter for ${selected.model}]: `, 'cyan'));
-        if (modelChoice) {
-            const modelIdx = parseInt(modelChoice, 10) - 1;
-            if (providerModels[modelIdx]) {
-                selected.model = providerModels[modelIdx];
-            }
-            else if (modelChoice.length > 2) {
-                // Treat as model name typed directly
-                selected.model = modelChoice;
+    console.log(fmt(`  \u2713 Selected: ${selectedModel}`, 'green'));
+    // ── Phase D: API key resolution ─────────────────────────────────────────────
+    let apiKey = '';
+    if (!isLocal) {
+        const defaults = registry_1.PROVIDER_DEFAULTS[selectedProvider];
+        const envKey = defaults?.envKey;
+        const existingKey = envKey ? process.env[envKey] : undefined;
+        if (existingKey) {
+            console.log(fmt(`  \u2713 Using ${envKey} from environment`, 'green'));
+            apiKey = existingKey;
+        }
+        else if (envKey) {
+            const providerName = PROVIDER_DISPLAY[selectedProvider] || selectedProvider;
+            const keyUrl = getKeyUrl(selectedProvider);
+            console.log(fmt(`\n  ${selectedModel} requires a ${providerName} API key.`, 'yellow'));
+            console.log(fmt(`  Get one at: ${keyUrl}`, 'dim'));
+            apiKey = await ask(rl, fmt('\n  Paste your API key: ', 'cyan'));
+            if (!apiKey) {
+                console.log(fmt(`\n  No key entered. Set it later:`, 'yellow'));
+                console.log(fmt(`  export ${envKey}="your-key-here"`, 'dim'));
             }
         }
     }
-    // Step 6: Auto mode?
+    // ── Phase E: Autonomous mode ────────────────────────────────────────────────
     const autoChoice = await ask(rl, fmt('\nEnable autonomous mode? (skip permission prompts) [y/N]: ', 'cyan'));
     const autoApprove = autoChoice.toLowerCase().startsWith('y');
     rl.close();
-    // Save config
+    // ── Phase F: Save config + summary ──────────────────────────────────────────
     const config = {
-        model: selected.model,
-        provider: selected.provider,
-        baseUrl: selected.baseUrl,
+        model: selectedModel,
+        provider: selectedProvider,
+        baseUrl: selectedBaseUrl,
         autoApprove,
     };
-    // Save API key if user entered one
     if (apiKey) {
         config.apiKey = apiKey;
     }
     saveConfig(config);
-    console.log(fmt('\n  Config saved to ~/.codebot/config.json', 'green'));
-    console.log(fmt(`  Model: ${config.model}`, 'dim'));
-    console.log(fmt(`  Provider: ${config.provider}`, 'dim'));
+    console.log(fmt('\n\u2713 Config saved to ~/.codebot/config.json', 'green'));
+    console.log(fmt(`  Model: ${config.model}`, 'dim'));
+    console.log(fmt(`  Provider: ${selectedProvider}${isLocal ? '' : ' (auto-detected)'}`, 'dim'));
     if (apiKey) {
-        console.log(fmt(`  API Key: ${'*'.repeat(Math.min(apiKey.length, 20))}`, 'dim'));
+        console.log(fmt(`  API Key: ${'*'.repeat(Math.min(apiKey.length, 20))}`, 'dim'));
     }
     if (autoApprove) {
-        console.log(fmt(`  Mode: AUTONOMOUS`, 'yellow'));
+        console.log(fmt(`  Mode: AUTONOMOUS`, 'yellow'));
     }
     console.log(fmt(`\nRun ${fmt('codebot', 'bold')} to start. Run ${fmt('codebot --setup', 'bold')} to reconfigure.\n`, 'dim'));
     return config;
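The wizard's typed-name path resolves input through four tiers: exact, case-insensitive exact, prefix, then substring. This block copies the logic of `fuzzyMatchModel` from the diff so its behavior can be checked standalone (the model list in the usage comment is illustrative):

```javascript
// Four-tier matcher, as added in dist/setup.js: each tier only runs if the
// previous one found nothing, so more precise matches always win.
function fuzzyMatchModel(input, allModels) {
    const lower = input.toLowerCase();
    if (allModels.includes(input))
        return input;                                                  // exact
    const exact = allModels.find(m => m.toLowerCase() === lower);      // case-insensitive
    if (exact)
        return exact;
    const prefix = allModels.find(m => m.toLowerCase().startsWith(lower)); // prefix
    if (prefix)
        return prefix;
    return allModels.find(m => m.toLowerCase().includes(lower));       // substring, else undefined
}

// e.g. with ['gpt-4o', 'claude-sonnet-4-6'], typing "sonnet" resolves via the
// substring tier, while "claude" resolves earlier via the prefix tier.
```

Because each tier takes the first `find` hit, results depend on list order when several models share a prefix or substring; `buildModelList` puts local models first, so they win ties.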
@@ -73,14 +73,23 @@ class WebFetchTool {
             body = args.body;
         }
         try {
+            // AbortController covers both connection AND body reading (res.text())
+            const controller = new AbortController();
+            const bodyTimeout = setTimeout(() => controller.abort(), 30_000);
             const res = await fetch(url, {
                 method,
                 headers,
                 body,
-                signal: AbortSignal.timeout(30000),
+                signal: controller.signal,
             });
             const contentType = res.headers.get('content-type') || '';
-            const responseText = await res.text();
+            let responseText;
+            try {
+                responseText = await res.text();
+            }
+            finally {
+                clearTimeout(bodyTimeout);
+            }
             // Truncate very large responses
             const maxLen = 50000;
             const truncated = responseText.length > maxLen
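The `WebFetchTool` change above swaps `AbortSignal.timeout` for a single `AbortController` whose timer also spans the body read. The same pattern can be sketched with an injected fetch-like function so it runs offline; `fetchTextWithTimeout` and `fetchLike` are illustrative names, not part of the package:

```javascript
// One timer covers the whole request: connection plus body read.
// `fetchLike` is a stand-in for global fetch, injected for testability.
async function fetchTextWithTimeout(fetchLike, url, ms) {
    const controller = new AbortController();
    const timer = setTimeout(() => controller.abort(), ms);
    try {
        const res = await fetchLike(url, { signal: controller.signal });
        return await res.text(); // a slow body read is still bounded by the same timer
    } finally {
        clearTimeout(timer); // always cleared, even when fetch or text() throws
    }
}
```

The point of the change: a signal passed only to `fetch` stops bounding the request once headers arrive, so a server that trickles the body could stall `res.text()` indefinitely; keeping the controller alive until the text is read closes that gap.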
package/package.json CHANGED
@@ -1,7 +1,7 @@
 {
   "name": "codebot-ai",
-  "version": "1.2.2",
-  "description": "Local-first AI coding assistant. Zero dependencies. Works with Ollama, LM Studio, vLLM, Claude, GPT, Gemini, and more.",
+  "version": "1.3.0",
+  "description": "Zero-dependency autonomous AI agent. Code, browse, search, automate. Works with any LLM — Ollama, Claude, GPT, Gemini, DeepSeek, Groq, Mistral, Grok.",
   "main": "dist/index.js",
   "types": "dist/index.d.ts",
   "bin": {
@@ -17,17 +17,25 @@
   },
   "keywords": [
     "ai",
-    "coding-assistant",
-    "ollama",
-    "local-llm",
+    "ai-agent",
     "agent",
-    "cli",
+    "autonomous",
+    "agentic",
+    "coding-assistant",
     "code-generation",
+    "llm",
+    "openai",
     "claude",
     "gpt",
     "gemini",
-    "autonomous",
-    "agentic"
+    "ollama",
+    "deepseek",
+    "groq",
+    "mistral",
+    "local-llm",
+    "browser-automation",
+    "cli",
+    "web-search"
   ],
   "author": "Ascendral Software Development & Innovation",
   "license": "MIT",