codebot-ai 1.2.3 → 1.4.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (42)
  1. package/README.md +135 -116
  2. package/dist/agent.js +51 -27
  3. package/dist/cli.js +18 -2
  4. package/dist/providers/anthropic.js +38 -18
  5. package/dist/providers/openai.js +35 -14
  6. package/dist/retry.d.ts +22 -0
  7. package/dist/retry.js +59 -0
  8. package/dist/scheduler.d.ts +2 -0
  9. package/dist/scheduler.js +25 -17
  10. package/dist/tools/code-analysis.d.ts +33 -0
  11. package/dist/tools/code-analysis.js +232 -0
  12. package/dist/tools/code-review.d.ts +32 -0
  13. package/dist/tools/code-review.js +228 -0
  14. package/dist/tools/database.d.ts +35 -0
  15. package/dist/tools/database.js +129 -0
  16. package/dist/tools/diff-viewer.d.ts +39 -0
  17. package/dist/tools/diff-viewer.js +145 -0
  18. package/dist/tools/docker.d.ts +26 -0
  19. package/dist/tools/docker.js +101 -0
  20. package/dist/tools/git.d.ts +26 -0
  21. package/dist/tools/git.js +58 -0
  22. package/dist/tools/http-client.d.ts +39 -0
  23. package/dist/tools/http-client.js +114 -0
  24. package/dist/tools/image-info.d.ts +23 -0
  25. package/dist/tools/image-info.js +170 -0
  26. package/dist/tools/index.js +34 -0
  27. package/dist/tools/multi-search.d.ts +28 -0
  28. package/dist/tools/multi-search.js +153 -0
  29. package/dist/tools/notification.d.ts +38 -0
  30. package/dist/tools/notification.js +96 -0
  31. package/dist/tools/package-manager.d.ts +31 -0
  32. package/dist/tools/package-manager.js +161 -0
  33. package/dist/tools/pdf-extract.d.ts +33 -0
  34. package/dist/tools/pdf-extract.js +178 -0
  35. package/dist/tools/ssh-remote.d.ts +39 -0
  36. package/dist/tools/ssh-remote.js +84 -0
  37. package/dist/tools/task-planner.d.ts +42 -0
  38. package/dist/tools/task-planner.js +161 -0
  39. package/dist/tools/test-runner.d.ts +36 -0
  40. package/dist/tools/test-runner.js +193 -0
  41. package/dist/tools/web-fetch.js +11 -2
  42. package/package.json +16 -8
package/README.md CHANGED
@@ -1,50 +1,60 @@
  # CodeBot AI

- Local-first AI coding assistant. Zero runtime dependencies. Works with Ollama, LM Studio, vLLM, Claude, GPT, Gemini, DeepSeek, Groq, Mistral, and Grok.
+ [![npm version](https://img.shields.io/npm/v/codebot-ai.svg)](https://www.npmjs.com/package/codebot-ai)
+ [![license](https://img.shields.io/npm/l/codebot-ai.svg)](https://github.com/zanderone1980/codebot-ai/blob/main/LICENSE)
+ [![node](https://img.shields.io/node/v/codebot-ai.svg)](https://nodejs.org)
+
+ **Zero-dependency autonomous AI agent.** Works with any LLM — local or cloud. Code, browse the web, run commands, search, automate routines, and more.
+
+ Built by [Ascendral Software Development & Innovation](https://github.com/AscendralSoftware).

  ## Quick Start

  ```bash
- # Install globally
  npm install -g codebot-ai
-
- # Run — setup wizard launches automatically on first use
  codebot
  ```

- Or run without installing:
+ That's it. The setup wizard launches on first run: pick your model, paste an API key (or use a local LLM), and you're coding.

  ```bash
+ # Or run without installing
  npx codebot-ai
  ```

- Or from source:
-
- ```bash
- git clone https://github.com/AscendralSoftware/codebot-ai.git
- cd codebot-ai
- npm install && npm run build
- ./bin/codebot
- ```
+ ## What Can It Do?

- ## Setup
+ - **Write & edit code** — reads your codebase, makes targeted edits, runs tests
+ - **Run shell commands** — system checks, builds, deploys, git operations
+ - **Browse the web** — navigates Chrome, clicks, types, reads pages, takes screenshots
+ - **Search the internet** — real-time web search for docs, APIs, current info
+ - **Automate routines** — schedule recurring tasks with cron (daily posts, email checks, monitoring)
+ - **Call APIs** — HTTP requests to any REST endpoint
+ - **Persistent memory** — remembers preferences and context across sessions
+ - **Self-recovering** — retries on network errors, recovers from API failures, never drops out

- On first run, CodeBot detects your environment and walks you through configuration:
+ ## Supported Models

- - Scans for local LLM servers (Ollama, LM Studio, vLLM)
- - Detects API keys from environment variables
- - Lets you pick a provider and model
- - Saves config to `~/.codebot/config.json`
+ Pick any model during setup. CodeBot works with all of them:

- To reconfigure anytime: `codebot --setup`
+ | Provider | Models |
+ |----------|--------|
+ | **Local (Ollama/LM Studio/vLLM)** | qwen2.5-coder, qwen3, deepseek-coder, llama3.x, mistral, phi-4, codellama, starcoder2, and any model your server runs |
+ | **Anthropic** | claude-opus-4-6, claude-sonnet-4-6, claude-haiku-4-5 |
+ | **OpenAI** | gpt-4o, gpt-4.1, o1, o3, o4-mini |
+ | **Google** | gemini-2.5-pro, gemini-2.5-flash, gemini-2.0-flash |
+ | **DeepSeek** | deepseek-chat, deepseek-reasoner |
+ | **Groq** | llama-3.3-70b, mixtral-8x7b |
+ | **Mistral** | mistral-large, codestral |
+ | **xAI** | grok-3, grok-3-mini |

- ### Environment Variables
+ For local models, just have Ollama/LM Studio/vLLM running — CodeBot auto-detects them.

- Set an API key for your preferred cloud provider:
+ For cloud models, set an environment variable:

  ```bash
- export ANTHROPIC_API_KEY="sk-ant-..." # Claude
  export OPENAI_API_KEY="sk-..." # GPT
+ export ANTHROPIC_API_KEY="sk-ant-..." # Claude
  export GEMINI_API_KEY="..." # Gemini
  export DEEPSEEK_API_KEY="sk-..." # DeepSeek
  export GROQ_API_KEY="gsk_..." # Groq
@@ -52,52 +62,23 @@ export MISTRAL_API_KEY="..." # Mistral
  export XAI_API_KEY="xai-..." # Grok
  ```

- For local models, just have Ollama/LM Studio/vLLM running CodeBot auto-detects them.
+ Or paste your key during setup; either way works.

  ## Usage

- ### Interactive Mode
-
- ```bash
- codebot
- ```
-
- ### Single Message
-
  ```bash
- codebot "fix the bug in app.ts"
- codebot --model claude-sonnet-4-6 "explain this codebase"
+ codebot # Interactive REPL
+ codebot "fix the bug in app.ts" # Single task
+ codebot --autonomous "refactor auth and test" # Full auto — no permission prompts
+ codebot --continue # Resume last session
+ echo "explain this error" | codebot # Pipe mode
  ```

- ### Pipe Mode
-
- ```bash
- echo "write a function that sorts by date" | codebot
- cat error.log | codebot "what's causing this?"
- ```
-
- ### Autonomous Mode
-
- Skip all permission prompts — full auto:
-
- ```bash
- codebot --autonomous "refactor the auth module and run tests"
- ```
-
- ### Session Resume
-
- CodeBot auto-saves every conversation. Resume anytime:
-
- ```bash
- codebot --continue # Resume last session
- codebot --resume <session-id> # Resume specific session
- ```
-
- ## CLI Options
+ ### CLI Options

  ```
  --setup Run the setup wizard
- --model <name> Model to use (default: qwen2.5-coder:32b)
+ --model <name> Model to use
  --provider <name> Provider: openai, anthropic, gemini, deepseek, groq, mistral, xai
  --base-url <url> LLM API base URL
  --api-key <key> API key (or use env vars)
@@ -107,25 +88,26 @@ codebot --resume <session-id> # Resume specific session
  --max-iterations <n> Max agent loop iterations (default: 50)
  ```

- ## Interactive Commands
+ ### Interactive Commands

  ```
- /help Show commands
- /model Show or change model
- /models List all supported models
- /sessions List saved sessions
- /auto Toggle autonomous mode
- /undo Undo last file edit (/undo [path])
- /usage Show token usage for this session
- /clear Clear conversation
- /compact Force context compaction
- /config Show configuration
- /quit Exit
+ /help Show commands
+ /model Show or change model
+ /models List all supported models
+ /sessions List saved sessions
+ /routines List scheduled routines
+ /auto Toggle autonomous mode
+ /undo Undo last file edit (/undo [path])
+ /usage Show token usage for this session
+ /clear Clear conversation
+ /compact Force context compaction
+ /config Show configuration
+ /quit Exit
  ```

  ## Tools

- CodeBot has 11 built-in tools:
+ CodeBot has 28 built-in tools:

  | Tool | Description | Permission |
  |------|-------------|-----------|
@@ -139,7 +121,24 @@ CodeBot has 11 built-in tools:
  | `think` | Internal reasoning scratchpad | auto |
  | `memory` | Persistent memory across sessions | auto |
  | `web_fetch` | HTTP requests and API calls | prompt |
+ | `web_search` | Internet search with result summaries | prompt |
  | `browser` | Chrome automation via CDP | prompt |
+ | `routine` | Schedule recurring tasks with cron | prompt |
+ | `git` | Git operations (status, diff, log, commit, branch, etc.) | prompt |
+ | `code_analysis` | Symbol extraction, find references, imports, outline | auto |
+ | `multi_search` | Fuzzy search across filenames, content, and symbols | auto |
+ | `task_planner` | Hierarchical task tracking with priorities | auto |
+ | `diff_viewer` | File comparison and git diffs | auto |
+ | `docker` | Container management (ps, run, build, compose) | prompt |
+ | `database` | Query SQLite databases (blocks destructive SQL) | prompt |
+ | `test_runner` | Auto-detect and run tests (jest, vitest, pytest, go, cargo) | prompt |
+ | `http_client` | Advanced HTTP requests with auth and headers | prompt |
+ | `image_info` | Image dimensions and metadata (PNG, JPEG, GIF, SVG) | auto |
+ | `ssh_remote` | Remote command execution and file transfer via SSH | always-ask |
+ | `notification` | Webhook notifications (Slack, Discord, generic) | prompt |
+ | `pdf_extract` | Extract text and metadata from PDF files | auto |
+ | `package_manager` | Dependency management (npm, yarn, pip, cargo, go) | prompt |
+ | `code_review` | Security scanning and complexity analysis | auto |

  ### Permission Levels

@@ -147,7 +146,7 @@ CodeBot has 11 built-in tools:
  - **prompt** — Asks for approval (skipped in `--autonomous` mode)
  - **always-ask** — Always asks, even in autonomous mode

- ### Browser Tool
+ ### Browser Automation

  Controls Chrome via the Chrome DevTools Protocol. Actions:

@@ -155,21 +154,34 @@ Controls Chrome via the Chrome DevTools Protocol. Actions:
  - `content` — Read page text
  - `screenshot` — Capture the page
  - `click` — Click an element by CSS selector
+ - `find_by_text` — Find and interact with elements by visible text
  - `type` — Type into an input field
+ - `scroll`, `press_key`, `hover` — Page interaction
  - `evaluate` — Run JavaScript on the page
  - `tabs` — List open tabs
  - `close` — Close browser connection

  Chrome is auto-launched with `--remote-debugging-port` if not already running.

+ ### Routines & Scheduling
+
+ Schedule recurring tasks with cron expressions:
+
+ ```
+ > Set up a routine to check my server health every hour
+ > Create a daily routine at 9am to summarize my GitHub notifications
+ ```
+
+ CodeBot creates the cron schedule, and the built-in scheduler runs tasks automatically while the agent is active. Manage with `/routines`.
+
  ### Memory

- CodeBot has persistent memory that survives across sessions:
+ Persistent memory that survives across sessions:

  - **Global memory** (`~/.codebot/memory/`) — preferences, patterns
  - **Project memory** (`.codebot/memory/`) — project-specific context
- - Memory is automatically injected into the system prompt
- - The agent can read/write its own memory using the `memory` tool
+ - Automatically injected into the system prompt
+ - The agent reads/writes its own memory to learn your style

  ### Plugins

@@ -205,33 +217,54 @@ Connect external tool servers via [Model Context Protocol](https://modelcontextp

  MCP tools appear automatically with the `mcp_<server>_<tool>` prefix.

- ## Supported Models
+ ## Stability
+
+ CodeBot v1.3.0 is hardened for continuous operation:
+
+ - **Automatic retry** — network errors, rate limits (429), and server errors (5xx) retry with exponential backoff
+ - **Stream recovery** — if the LLM connection drops mid-response, the agent loop retries on the next iteration
+ - **Context compaction** — when the conversation exceeds the model's context window, messages are intelligently summarized
+ - **Process resilience** — unhandled exceptions and rejections are caught, logged, and the REPL keeps running
+ - **Routine timeouts** — scheduled tasks are capped at 5 minutes to prevent the scheduler from hanging
+ - **99 tests** — comprehensive suite covering error recovery, retry logic, tool execution, and edge cases
+
+ ## Programmatic API
+
+ CodeBot can be used as a library:

- ### Local (Ollama / LM Studio / vLLM)
+ ```typescript
+ import { Agent, OpenAIProvider, AnthropicProvider } from 'codebot-ai';

- qwen2.5-coder (3b/7b/14b/32b), qwen3, deepseek-coder, codellama, llama3.x, mistral, mixtral, phi-3/4, starcoder2, granite-code, gemma2, command-r
+ const provider = new AnthropicProvider({
+ baseUrl: 'https://api.anthropic.com',
+ apiKey: process.env.ANTHROPIC_API_KEY,
+ model: 'claude-sonnet-4-6',
+ });

- ### Cloud
+ const agent = new Agent({
+ provider,
+ model: 'claude-sonnet-4-6',
+ autoApprove: true,
+ });

- - **Anthropic**: claude-opus-4-6, claude-sonnet-4-6, claude-haiku-4-5
- - **OpenAI**: gpt-4o, gpt-4.1, o1, o3, o4-mini
- - **Google**: gemini-2.5-pro, gemini-2.5-flash, gemini-2.0-flash
- - **DeepSeek**: deepseek-chat, deepseek-reasoner
- - **Groq**: llama-3.3-70b, mixtral-8x7b (fast inference)
- - **Mistral**: mistral-large, codestral
- - **xAI**: grok-3, grok-3-mini
+ for await (const event of agent.run('list all TypeScript files')) {
+ if (event.type === 'text') process.stdout.write(event.text || '');
+ }
+ ```

  ## Architecture

  ```
  src/
- agent.ts Agent loop with streaming, tool execution, permissions
+ agent.ts Agent loop with streaming, tool execution, error recovery
  cli.ts CLI interface, REPL, slash commands
  types.ts TypeScript interfaces
  parser.ts XML/JSON tool call parser (for models without native tool support)
  history.ts Session persistence (JSONL)
  memory.ts Persistent memory system
- setup.ts Interactive setup wizard
+ setup.ts Interactive setup wizard (model-first UX)
+ scheduler.ts Cron-based routine scheduler
+ retry.ts Exponential backoff with jitter
  context/
  manager.ts Context window management, LLM-powered compaction
  repo-map.ts Project structure scanner
@@ -247,31 +280,8 @@ src/
  read.ts, write.ts, edit.ts, execute.ts
  batch-edit.ts Multi-file atomic editing
  glob.ts, grep.ts, think.ts
- memory.ts, web-fetch.ts, browser.ts
- ```
-
- ## Programmatic API
-
- CodeBot can be used as a library:
-
- ```typescript
- import { Agent, OpenAIProvider, AnthropicProvider } from 'codebot-ai';
-
- const provider = new AnthropicProvider({
- baseUrl: 'https://api.anthropic.com',
- apiKey: process.env.ANTHROPIC_API_KEY,
- model: 'claude-sonnet-4-6',
- });
-
- const agent = new Agent({
- provider,
- model: 'claude-sonnet-4-6',
- autoApprove: true,
- });
-
- for await (const event of agent.run('list all TypeScript files')) {
- if (event.type === 'text') process.stdout.write(event.text || '');
- }
+ memory.ts, web-fetch.ts, web-search.ts
+ browser.ts, routine.ts
  ```

  ## Configuration
@@ -282,6 +292,15 @@ Config is loaded in this order (later values win):
  2. Environment variables (`CODEBOT_MODEL`, `CODEBOT_PROVIDER`, etc.)
  3. CLI flags (`--model`, `--provider`, etc.)

+ ## From Source
+
+ ```bash
+ git clone https://github.com/zanderone1980/codebot-ai.git
+ cd codebot-ai
+ npm install && npm run build
+ ./bin/codebot
+ ```
+
  ## License

- MIT - Ascendral Software Development & Innovation
+ MIT - [Ascendral Software Development & Innovation](https://github.com/AscendralSoftware)
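The README's Stability bullets and the `retry.ts Exponential backoff with jitter` entry in the architecture tree describe the retry policy but not the delay math. As a hedged illustration only (the function name and constants here are hypothetical, not taken from the package), exponential backoff with full jitter looks like this:

```javascript
// Hypothetical sketch of exponential backoff with full jitter.
// The deterministic cap doubles each attempt (1s, 2s, 4s, ...) up to capMs;
// the actual wait is drawn uniformly from [0, cap) so concurrent clients
// that hit a rate limit together don't all retry at the same instant.
function backoffDelay(attempt, baseMs = 1000, capMs = 30000) {
  const cap = Math.min(capMs, baseMs * 2 ** attempt);
  return Math.floor(Math.random() * cap);
}

// Show the growing cap for the first few attempts.
for (let attempt = 0; attempt < 4; attempt++) {
  console.log(`attempt ${attempt}: wait up to ${Math.min(30000, 1000 * 2 ** attempt)}ms`);
}
```

Full jitter trades predictable waits for better decorrelation under load; a fixed-percentage jitter is the other common choice.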
package/dist/agent.js CHANGED
@@ -88,9 +88,15 @@ class Agent {
  this.messages.push(userMsg);
  this.onMessage?.(userMsg);
  if (!this.context.fitsInBudget(this.messages)) {
- const result = await this.context.compactWithSummary(this.messages);
- this.messages = result.messages;
- yield { type: 'compaction', text: result.summary || 'Context compacted to fit budget.' };
+ try {
+ const result = await this.context.compactWithSummary(this.messages);
+ this.messages = result.messages;
+ yield { type: 'compaction', text: result.summary || 'Context compacted to fit budget.' };
+ }
+ catch {
+ this.messages = this.context.compact(this.messages, true);
+ yield { type: 'compaction', text: 'Context compacted (summary unavailable).' };
+ }
  }
  for (let i = 0; i < this.maxIterations; i++) {
  // Validate message integrity: ensure every tool_call has a matching tool response
@@ -100,29 +106,41 @@
  const toolSchemas = supportsTools ? this.tools.getSchemas() : undefined;
  let fullText = '';
  let toolCalls = [];
- // Stream LLM response
- for await (const event of this.provider.chat(this.messages, toolSchemas)) {
- switch (event.type) {
- case 'text':
- fullText += event.text || '';
- yield { type: 'text', text: event.text };
- break;
- case 'thinking':
- yield { type: 'thinking', text: event.text };
- break;
- case 'tool_call_end':
- if (event.toolCall) {
- toolCalls.push(event.toolCall);
- }
- break;
- case 'usage':
- yield { type: 'usage', usage: event.usage };
- break;
- case 'error':
- yield { type: 'error', error: event.error };
- return;
+ let streamError = null;
+ // Stream LLM response wrapped in try-catch for resilience
+ try {
+ for await (const event of this.provider.chat(this.messages, toolSchemas)) {
+ switch (event.type) {
+ case 'text':
+ fullText += event.text || '';
+ yield { type: 'text', text: event.text };
+ break;
+ case 'thinking':
+ yield { type: 'thinking', text: event.text };
+ break;
+ case 'tool_call_end':
+ if (event.toolCall) {
+ toolCalls.push(event.toolCall);
+ }
+ break;
+ case 'usage':
+ yield { type: 'usage', usage: event.usage };
+ break;
+ case 'error':
+ streamError = event.error || 'Unknown provider error';
+ break;
+ }
  }
  }
+ catch (err) {
+ const msg = err instanceof Error ? err.message : String(err);
+ streamError = `Stream error: ${msg}`;
+ }
+ // On error: yield it to the UI but DON'T return — continue to next iteration
+ if (streamError) {
+ yield { type: 'error', error: streamError };
+ continue;
+ }
  // If no native tool calls, try parsing from text
  if (toolCalls.length === 0 && fullText) {
  toolCalls = (0, parser_1.parseToolCalls)(fullText);
@@ -195,9 +213,15 @@ class Agent {
  }
  // Compact after tool results if needed
  if (!this.context.fitsInBudget(this.messages)) {
- const result = await this.context.compactWithSummary(this.messages);
- this.messages = result.messages;
- yield { type: 'compaction', text: result.summary || 'Context compacted.' };
+ try {
+ const result = await this.context.compactWithSummary(this.messages);
+ this.messages = result.messages;
+ yield { type: 'compaction', text: result.summary || 'Context compacted.' };
+ }
+ catch {
+ this.messages = this.context.compact(this.messages, true);
+ yield { type: 'compaction', text: 'Context compacted (summary unavailable).' };
+ }
  }
  }
  yield { type: 'error', error: `Max iterations (${this.maxIterations}) reached.` };
package/dist/cli.js CHANGED
@@ -44,7 +44,7 @@ const setup_1 = require("./setup");
  const banner_1 = require("./banner");
  const tools_1 = require("./tools");
  const scheduler_1 = require("./scheduler");
- const VERSION = '1.2.3';
+ const VERSION = '1.4.0';
  // Session-wide token tracking
  let sessionTokens = { input: 0, output: 0, total: 0 };
  const C = {
@@ -61,6 +61,17 @@ function c(text, style) {
  return `${C[style]}${text}${C.reset}`;
  }
  async function main() {
+ // Process-level safety nets: prevent silent crashes
+ process.on('unhandledRejection', (reason) => {
+ const msg = reason instanceof Error ? reason.message : String(reason);
+ console.error(`\x1b[31m\nUnhandled error: ${msg}\x1b[0m`);
+ });
+ process.on('uncaughtException', (err) => {
+ console.error(`\x1b[31m\nUncaught exception: ${err.message}\x1b[0m`);
+ if (err.message.includes('out of memory') || err.message.includes('ENOMEM')) {
+ process.exit(1);
+ }
+ });
  const args = parseArgs(process.argv.slice(2));
  if (args.help) {
  showHelp();
@@ -363,7 +374,12 @@ function handleSlashCommand(input, agent, config) {
  case '/routines': {
  const { RoutineTool } = require('./tools/routine');
  const rt = new RoutineTool();
- rt.execute({ action: 'list' }).then((out) => console.log('\n' + out));
+ rt.execute({ action: 'list' })
+ .then((out) => console.log('\n' + out))
+ .catch((err) => {
+ const msg = err instanceof Error ? err.message : String(err);
+ console.error(c(`Error listing routines: ${msg}`, 'red'));
+ });
  break;
  }
  case '/config':
package/dist/providers/anthropic.js CHANGED
@@ -1,6 +1,7 @@
  "use strict";
  Object.defineProperty(exports, "__esModule", { value: true });
  exports.AnthropicProvider = void 0;
+ const retry_1 = require("../retry");
  class AnthropicProvider {
  name;
  config;
@@ -27,26 +28,45 @@ class AnthropicProvider {
  }));
  }
  const baseUrl = this.config.baseUrl.replace(/\/+$/, '');
+ const MAX_RETRIES = 3;
  let response;
- try {
- response = await fetch(`${baseUrl}/v1/messages`, {
- method: 'POST',
- headers: {
- 'Content-Type': 'application/json',
- 'x-api-key': this.config.apiKey || '',
- 'anthropic-version': '2023-06-01',
- },
- body: JSON.stringify(body),
- });
- }
- catch (err) {
- const msg = err instanceof Error ? err.message : String(err);
- yield { type: 'error', error: `Connection failed: ${msg}` };
- return;
+ let lastError = '';
+ for (let attempt = 0; attempt <= MAX_RETRIES; attempt++) {
+ try {
+ response = await fetch(`${baseUrl}/v1/messages`, {
+ method: 'POST',
+ headers: {
+ 'Content-Type': 'application/json',
+ 'x-api-key': this.config.apiKey || '',
+ 'anthropic-version': '2023-06-01',
+ },
+ body: JSON.stringify(body),
+ signal: AbortSignal.timeout(60_000),
+ });
+ if (response.ok || !(0, retry_1.isRetryable)(null, response.status)) {
+ break;
+ }
+ lastError = `Anthropic error ${response.status}`;
+ if (attempt < MAX_RETRIES) {
+ const delay = (0, retry_1.getRetryDelay)(attempt, response.headers.get('retry-after'));
+ await (0, retry_1.sleep)(delay);
+ continue;
+ }
+ }
+ catch (err) {
+ lastError = err instanceof Error ? err.message : String(err);
+ if (attempt < MAX_RETRIES && (0, retry_1.isRetryable)(err)) {
+ const delay = (0, retry_1.getRetryDelay)(attempt);
+ await (0, retry_1.sleep)(delay);
+ continue;
+ }
+ yield { type: 'error', error: `Connection failed after ${attempt + 1} attempts: ${lastError}` };
+ return;
+ }
  }
- if (!response.ok) {
- const text = await response.text();
- yield { type: 'error', error: `Anthropic error ${response.status}: ${text}` };
+ if (!response || !response.ok) {
+ const text = response ? await response.text().catch(() => '') : '';
+ yield { type: 'error', error: `Anthropic error after retries: ${lastError}${text ? ` — ${text}` : ''}` };
  return;
  }
  if (!response.body) {
package/dist/providers/openai.js CHANGED
@@ -2,6 +2,7 @@
  Object.defineProperty(exports, "__esModule", { value: true });
  exports.OpenAIProvider = void 0;
  const registry_1 = require("./registry");
+ const retry_1 = require("../retry");
  class OpenAIProvider {
  name;
  config;
@@ -33,22 +34,42 @@ class OpenAIProvider {
  if (this.config.apiKey) {
  headers['Authorization'] = `Bearer ${this.config.apiKey}`;
  }
+ const MAX_RETRIES = 3;
  let response;
- try {
- response = await fetch(`${this.config.baseUrl}/v1/chat/completions`, {
- method: 'POST',
- headers,
- body: JSON.stringify(body),
- });
- }
- catch (err) {
- const msg = err instanceof Error ? err.message : String(err);
- yield { type: 'error', error: `Connection failed: ${msg}. Is your LLM server running?` };
- return;
+ let lastError = '';
+ for (let attempt = 0; attempt <= MAX_RETRIES; attempt++) {
+ try {
+ response = await fetch(`${this.config.baseUrl}/v1/chat/completions`, {
+ method: 'POST',
+ headers,
+ body: JSON.stringify(body),
+ signal: AbortSignal.timeout(60_000),
+ });
+ if (response.ok || !(0, retry_1.isRetryable)(null, response.status)) {
+ break;
+ }
+ // Retryable HTTP status (429, 5xx)
+ lastError = `LLM error ${response.status}`;
+ if (attempt < MAX_RETRIES) {
+ const delay = (0, retry_1.getRetryDelay)(attempt, response.headers.get('retry-after'));
+ await (0, retry_1.sleep)(delay);
+ continue;
+ }
+ }
+ catch (err) {
+ lastError = err instanceof Error ? err.message : String(err);
+ if (attempt < MAX_RETRIES && (0, retry_1.isRetryable)(err)) {
+ const delay = (0, retry_1.getRetryDelay)(attempt);
+ await (0, retry_1.sleep)(delay);
+ continue;
+ }
+ yield { type: 'error', error: `Connection failed after ${attempt + 1} attempts: ${lastError}. Is your LLM server running?` };
+ return;
+ }
  }
- if (!response.ok) {
- const text = await response.text();
- yield { type: 'error', error: `LLM error ${response.status}: ${text}` };
+ if (!response || !response.ok) {
+ const text = response ? await response.text().catch(() => '') : '';
+ yield { type: 'error', error: `LLM error after retries: ${lastError}${text ? ` — ${text}` : ''}` };
  return;
  }
  if (!response.body) {
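Both provider diffs import `isRetryable`, `getRetryDelay`, and `sleep` from the new `retry.js`, whose body is not included in this diff. The sketch below is an assumption inferred purely from the call sites above (`isRetryable(err)` / `isRetryable(null, response.status)`, `getRetryDelay(attempt, retryAfterHeader)`), not the package's actual implementation:

```javascript
// Hypothetical sketch of the retry helpers the providers call.
// Signatures inferred from the call sites; names match the imports,
// but the internals here are guesses.

function isRetryable(err, status) {
  // Status-based check: only rate limits and server errors are retryable.
  if (typeof status === 'number') {
    return status === 429 || status >= 500;
  }
  // Network-level failures (fetch errors, timeouts, resets) are retryable.
  const msg = err instanceof Error ? err.message : String(err ?? '');
  return /fetch failed|timeout|ECONNRESET|ECONNREFUSED|ETIMEDOUT|aborted/i.test(msg);
}

function getRetryDelay(attempt, retryAfter) {
  // Honor an explicit Retry-After header (seconds) when the server sends one.
  const parsed = retryAfter ? Number(retryAfter) : NaN;
  if (Number.isFinite(parsed) && parsed > 0) return parsed * 1000;
  // Otherwise exponential backoff with jitter: 1s, 2s, 4s, ... ±25%, capped.
  const base = Math.min(30_000, 1000 * 2 ** attempt);
  return Math.floor(base * (0.75 + Math.random() * 0.5));
}

const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));
```

Treating only 429 and 5xx as retryable matches the README's Stability bullet; other 4xx responses (bad auth, malformed requests) fail fast instead of looping.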