codebot-ai 1.2.3 → 1.3.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.md +120 -116
- package/dist/agent.js +51 -27
- package/dist/cli.js +18 -2
- package/dist/providers/anthropic.js +38 -18
- package/dist/providers/openai.js +35 -14
- package/dist/retry.d.ts +22 -0
- package/dist/retry.js +59 -0
- package/dist/scheduler.d.ts +2 -0
- package/dist/scheduler.js +25 -17
- package/dist/tools/web-fetch.js +11 -2
- package/package.json +16 -8
package/README.md
CHANGED
@@ -1,50 +1,60 @@
 # CodeBot AI
 
-
+[](https://www.npmjs.com/package/codebot-ai)
+[](https://github.com/zanderone1980/codebot-ai/blob/main/LICENSE)
+[](https://nodejs.org)
+
+**Zero-dependency autonomous AI agent.** Works with any LLM — local or cloud. Code, browse the web, run commands, search, automate routines, and more.
+
+Built by [Ascendral Software Development & Innovation](https://github.com/AscendralSoftware).
 
 ## Quick Start
 
 ```bash
-# Install globally
 npm install -g codebot-ai
-
-# Run — setup wizard launches automatically on first use
 codebot
 ```
 
-
+That's it. The setup wizard launches on first run — pick your model, paste an API key (or use a local LLM), and you're coding.
 
 ```bash
+# Or run without installing
 npx codebot-ai
 ```
 
-
-
-```bash
-git clone https://github.com/AscendralSoftware/codebot-ai.git
-cd codebot-ai
-npm install && npm run build
-./bin/codebot
-```
+## What Can It Do?
 
-
+- **Write & edit code** — reads your codebase, makes targeted edits, runs tests
+- **Run shell commands** — system checks, builds, deploys, git operations
+- **Browse the web** — navigates Chrome, clicks, types, reads pages, takes screenshots
+- **Search the internet** — real-time web search for docs, APIs, current info
+- **Automate routines** — schedule recurring tasks with cron (daily posts, email checks, monitoring)
+- **Call APIs** — HTTP requests to any REST endpoint
+- **Persistent memory** — remembers preferences and context across sessions
+- **Self-recovering** — retries on network errors, recovers from API failures, never drops out
 
-
+## Supported Models
 
-
-- Detects API keys from environment variables
-- Lets you pick a provider and model
-- Saves config to `~/.codebot/config.json`
+Pick any model during setup. CodeBot works with all of them:
 
-
+| Provider | Models |
+|----------|--------|
+| **Local (Ollama/LM Studio/vLLM)** | qwen2.5-coder, qwen3, deepseek-coder, llama3.x, mistral, phi-4, codellama, starcoder2, and any model your server runs |
+| **Anthropic** | claude-opus-4-6, claude-sonnet-4-6, claude-haiku-4-5 |
+| **OpenAI** | gpt-4o, gpt-4.1, o1, o3, o4-mini |
+| **Google** | gemini-2.5-pro, gemini-2.5-flash, gemini-2.0-flash |
+| **DeepSeek** | deepseek-chat, deepseek-reasoner |
+| **Groq** | llama-3.3-70b, mixtral-8x7b |
+| **Mistral** | mistral-large, codestral |
+| **xAI** | grok-3, grok-3-mini |
 
-
+For local models, just have Ollama/LM Studio/vLLM running — CodeBot auto-detects them.
 
-
+For cloud models, set an environment variable:
 
 ```bash
-export ANTHROPIC_API_KEY="sk-ant-..."   # Claude
 export OPENAI_API_KEY="sk-..."          # GPT
+export ANTHROPIC_API_KEY="sk-ant-..."   # Claude
 export GEMINI_API_KEY="..."             # Gemini
 export DEEPSEEK_API_KEY="sk-..."        # DeepSeek
 export GROQ_API_KEY="gsk_..."           # Groq
@@ -52,52 +62,23 @@ export MISTRAL_API_KEY="..." # Mistral
 export XAI_API_KEY="xai-..."            # Grok
 ```
 
-
+Or paste your key during setup — either way works.
 
 ## Usage
 
-### Interactive Mode
-
-```bash
-codebot
-```
-
-### Single Message
-
 ```bash
-codebot
-codebot
+codebot                                         # Interactive REPL
+codebot "fix the bug in app.ts"                 # Single task
+codebot --autonomous "refactor auth and test"   # Full auto — no permission prompts
+codebot --continue                              # Resume last session
+echo "explain this error" | codebot             # Pipe mode
 ```
 
-###
-
-```bash
-echo "write a function that sorts by date" | codebot
-cat error.log | codebot "what's causing this?"
-```
-
-### Autonomous Mode
-
-Skip all permission prompts — full auto:
-
-```bash
-codebot --autonomous "refactor the auth module and run tests"
-```
-
-### Session Resume
-
-CodeBot auto-saves every conversation. Resume anytime:
-
-```bash
-codebot --continue              # Resume last session
-codebot --resume <session-id>   # Resume specific session
-```
-
-## CLI Options
+### CLI Options
 
 ```
 --setup              Run the setup wizard
---model <name>      Model to use
+--model <name>       Model to use
 --provider <name>    Provider: openai, anthropic, gemini, deepseek, groq, mistral, xai
 --base-url <url>     LLM API base URL
 --api-key <key>      API key (or use env vars)
@@ -107,25 +88,26 @@ codebot --resume <session-id> # Resume specific session
 --max-iterations <n>  Max agent loop iterations (default: 50)
 ```
 
-
+### Interactive Commands
 
 ```
-/help
-/model
-/models
-/sessions
-/
-/
-/
-/
-/
-/
-/
+/help       Show commands
+/model      Show or change model
+/models     List all supported models
+/sessions   List saved sessions
+/routines   List scheduled routines
+/auto       Toggle autonomous mode
+/undo       Undo last file edit (/undo [path])
+/usage      Show token usage for this session
+/clear      Clear conversation
+/compact    Force context compaction
+/config     Show configuration
+/quit       Exit
 ```
 
 ## Tools
 
-CodeBot has
+CodeBot has 13 built-in tools:
 
 | Tool | Description | Permission |
 |------|-------------|-----------|
@@ -139,7 +121,9 @@ CodeBot has 11 built-in tools:
 | `think` | Internal reasoning scratchpad | auto |
 | `memory` | Persistent memory across sessions | auto |
 | `web_fetch` | HTTP requests and API calls | prompt |
+| `web_search` | Internet search with result summaries | prompt |
 | `browser` | Chrome automation via CDP | prompt |
+| `routine` | Schedule recurring tasks with cron | prompt |
 
 ### Permission Levels
 
@@ -147,7 +131,7 @@ CodeBot has 11 built-in tools:
 - **prompt** — Asks for approval (skipped in `--autonomous` mode)
 - **always-ask** — Always asks, even in autonomous mode
 
-### Browser
+### Browser Automation
 
 Controls Chrome via the Chrome DevTools Protocol. Actions:
 
@@ -155,21 +139,34 @@ Controls Chrome via the Chrome DevTools Protocol. Actions:
 - `content` — Read page text
 - `screenshot` — Capture the page
 - `click` — Click an element by CSS selector
+- `find_by_text` — Find and interact with elements by visible text
 - `type` — Type into an input field
+- `scroll`, `press_key`, `hover` — Page interaction
 - `evaluate` — Run JavaScript on the page
 - `tabs` — List open tabs
 - `close` — Close browser connection
 
 Chrome is auto-launched with `--remote-debugging-port` if not already running.
 
+### Routines & Scheduling
+
+Schedule recurring tasks with cron expressions:
+
+```
+> Set up a routine to check my server health every hour
+> Create a daily routine at 9am to summarize my GitHub notifications
+```
+
+CodeBot creates the cron schedule, and the built-in scheduler runs tasks automatically while the agent is active. Manage with `/routines`.
+
 ### Memory
 
-
+Persistent memory that survives across sessions:
 
 - **Global memory** (`~/.codebot/memory/`) — preferences, patterns
 - **Project memory** (`.codebot/memory/`) — project-specific context
--
-- The agent
+- Automatically injected into the system prompt
+- The agent reads/writes its own memory to learn your style
 
 ### Plugins
 
@@ -205,33 +202,54 @@ Connect external tool servers via [Model Context Protocol](https://modelcontextp
 
 MCP tools appear automatically with the `mcp_<server>_<tool>` prefix.
 
-##
+## Stability
+
+CodeBot v1.3.0 is hardened for continuous operation:
+
+- **Automatic retry** — network errors, rate limits (429), and server errors (5xx) retry with exponential backoff
+- **Stream recovery** — if the LLM connection drops mid-response, the agent loop retries on the next iteration
+- **Context compaction** — when the conversation exceeds the model's context window, messages are intelligently summarized
+- **Process resilience** — unhandled exceptions and rejections are caught, logged, and the REPL keeps running
+- **Routine timeouts** — scheduled tasks are capped at 5 minutes to prevent the scheduler from hanging
+- **99 tests** — comprehensive suite covering error recovery, retry logic, tool execution, and edge cases
+
+## Programmatic API
+
+CodeBot can be used as a library:
 
-
+```typescript
+import { Agent, OpenAIProvider, AnthropicProvider } from 'codebot-ai';
 
-
+const provider = new AnthropicProvider({
+  baseUrl: 'https://api.anthropic.com',
+  apiKey: process.env.ANTHROPIC_API_KEY,
+  model: 'claude-sonnet-4-6',
+});
 
-
+const agent = new Agent({
+  provider,
+  model: 'claude-sonnet-4-6',
+  autoApprove: true,
+});
 
-
-
-
-
-- **Groq**: llama-3.3-70b, mixtral-8x7b (fast inference)
-- **Mistral**: mistral-large, codestral
-- **xAI**: grok-3, grok-3-mini
+for await (const event of agent.run('list all TypeScript files')) {
+  if (event.type === 'text') process.stdout.write(event.text || '');
+}
+```
 
 ## Architecture
 
 ```
 src/
-  agent.ts     Agent loop
+  agent.ts     Agent loop — streaming, tool execution, error recovery
   cli.ts       CLI interface, REPL, slash commands
   types.ts     TypeScript interfaces
   parser.ts    XML/JSON tool call parser (for models without native tool support)
   history.ts   Session persistence (JSONL)
   memory.ts    Persistent memory system
-  setup.ts     Interactive setup wizard
+  setup.ts     Interactive setup wizard (model-first UX)
+  scheduler.ts Cron-based routine scheduler
+  retry.ts     Exponential backoff with jitter
  context/
    manager.ts   Context window management, LLM-powered compaction
    repo-map.ts  Project structure scanner
@@ -247,31 +265,8 @@ src/
    read.ts, write.ts, edit.ts, execute.ts
    batch-edit.ts   Multi-file atomic editing
    glob.ts, grep.ts, think.ts
-    memory.ts, web-fetch.ts,
-
-
-## Programmatic API
-
-CodeBot can be used as a library:
-
-```typescript
-import { Agent, OpenAIProvider, AnthropicProvider } from 'codebot-ai';
-
-const provider = new AnthropicProvider({
-  baseUrl: 'https://api.anthropic.com',
-  apiKey: process.env.ANTHROPIC_API_KEY,
-  model: 'claude-sonnet-4-6',
-});
-
-const agent = new Agent({
-  provider,
-  model: 'claude-sonnet-4-6',
-  autoApprove: true,
-});
-
-for await (const event of agent.run('list all TypeScript files')) {
-  if (event.type === 'text') process.stdout.write(event.text || '');
-}
+    memory.ts, web-fetch.ts, web-search.ts
+    browser.ts, routine.ts
 ```
 
 ## Configuration
@@ -282,6 +277,15 @@ Config is loaded in this order (later values win):
 2. Environment variables (`CODEBOT_MODEL`, `CODEBOT_PROVIDER`, etc.)
 3. CLI flags (`--model`, `--provider`, etc.)
 
+## From Source
+
+```bash
+git clone https://github.com/zanderone1980/codebot-ai.git
+cd codebot-ai
+npm install && npm run build
+./bin/codebot
+```
+
 ## License
 
-MIT - Ascendral Software Development & Innovation
+MIT - [Ascendral Software Development & Innovation](https://github.com/AscendralSoftware)
package/dist/agent.js
CHANGED
@@ -88,9 +88,15 @@ class Agent {
         this.messages.push(userMsg);
         this.onMessage?.(userMsg);
         if (!this.context.fitsInBudget(this.messages)) {
-
-
-
+            try {
+                const result = await this.context.compactWithSummary(this.messages);
+                this.messages = result.messages;
+                yield { type: 'compaction', text: result.summary || 'Context compacted to fit budget.' };
+            }
+            catch {
+                this.messages = this.context.compact(this.messages, true);
+                yield { type: 'compaction', text: 'Context compacted (summary unavailable).' };
+            }
         }
         for (let i = 0; i < this.maxIterations; i++) {
             // Validate message integrity: ensure every tool_call has a matching tool response
@@ -100,29 +106,41 @@ class Agent {
             const toolSchemas = supportsTools ? this.tools.getSchemas() : undefined;
             let fullText = '';
             let toolCalls = [];
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+            let streamError = null;
+            // Stream LLM response — wrapped in try-catch for resilience
+            try {
+                for await (const event of this.provider.chat(this.messages, toolSchemas)) {
+                    switch (event.type) {
+                        case 'text':
+                            fullText += event.text || '';
+                            yield { type: 'text', text: event.text };
+                            break;
+                        case 'thinking':
+                            yield { type: 'thinking', text: event.text };
+                            break;
+                        case 'tool_call_end':
+                            if (event.toolCall) {
+                                toolCalls.push(event.toolCall);
+                            }
+                            break;
+                        case 'usage':
+                            yield { type: 'usage', usage: event.usage };
+                            break;
+                        case 'error':
+                            streamError = event.error || 'Unknown provider error';
+                            break;
+                    }
                 }
             }
+            catch (err) {
+                const msg = err instanceof Error ? err.message : String(err);
+                streamError = `Stream error: ${msg}`;
+            }
+            // On error: yield it to the UI but DON'T return — continue to next iteration
+            if (streamError) {
+                yield { type: 'error', error: streamError };
+                continue;
+            }
            // If no native tool calls, try parsing from text
            if (toolCalls.length === 0 && fullText) {
                toolCalls = (0, parser_1.parseToolCalls)(fullText);
@@ -195,9 +213,15 @@ class Agent {
            }
            // Compact after tool results if needed
            if (!this.context.fitsInBudget(this.messages)) {
-
-
-
+                try {
+                    const result = await this.context.compactWithSummary(this.messages);
+                    this.messages = result.messages;
+                    yield { type: 'compaction', text: result.summary || 'Context compacted.' };
+                }
+                catch {
+                    this.messages = this.context.compact(this.messages, true);
+                    yield { type: 'compaction', text: 'Context compacted (summary unavailable).' };
+                }
            }
        }
        yield { type: 'error', error: `Max iterations (${this.maxIterations}) reached.` };
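The agent.js change above has one key behavior: a mid-stream failure is surfaced as an `error` event and the loop `continue`s instead of returning. A minimal standalone sketch of that pattern (names like `resilientLoop`/`flaky` are illustrative, not the actual codebot-ai internals):

```javascript
// A bounded loop over an async-generator "stream". On a stream failure it
// yields an error event and retries on the next iteration rather than aborting.
async function* resilientLoop(streamFactory, maxIterations = 3) {
  for (let i = 0; i < maxIterations; i++) {
    let streamError = null;
    try {
      for await (const chunk of streamFactory(i)) {
        yield { type: 'text', text: chunk };
      }
    } catch (err) {
      streamError = err instanceof Error ? err.message : String(err);
    }
    if (streamError) {
      // Surface the error to the consumer, then try again next iteration.
      yield { type: 'error', error: streamError };
      continue;
    }
    return; // stream completed cleanly
  }
  yield { type: 'error', error: 'Max iterations reached.' };
}

// A stream that fails on the first attempt and succeeds on the second.
async function* flaky(attempt) {
  if (attempt === 0) throw new Error('socket hang up');
  yield 'ok';
}

async function demo() {
  const events = [];
  for await (const ev of resilientLoop(flaky)) events.push(ev);
  return events;
}
```

A consumer sees `[{ type: 'error', … }, { type: 'text', text: 'ok' }]` — the error is visible in the UI, but the run still finishes.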
package/dist/cli.js
CHANGED
@@ -44,7 +44,7 @@ const setup_1 = require("./setup");
 const banner_1 = require("./banner");
 const tools_1 = require("./tools");
 const scheduler_1 = require("./scheduler");
-const VERSION = '1.
+const VERSION = '1.3.0';
 // Session-wide token tracking
 let sessionTokens = { input: 0, output: 0, total: 0 };
 const C = {
@@ -61,6 +61,17 @@ function c(text, style) {
     return `${C[style]}${text}${C.reset}`;
 }
 async function main() {
+    // Process-level safety nets: prevent silent crashes
+    process.on('unhandledRejection', (reason) => {
+        const msg = reason instanceof Error ? reason.message : String(reason);
+        console.error(`\x1b[31m\nUnhandled error: ${msg}\x1b[0m`);
+    });
+    process.on('uncaughtException', (err) => {
+        console.error(`\x1b[31m\nUncaught exception: ${err.message}\x1b[0m`);
+        if (err.message.includes('out of memory') || err.message.includes('ENOMEM')) {
+            process.exit(1);
+        }
+    });
     const args = parseArgs(process.argv.slice(2));
     if (args.help) {
         showHelp();
@@ -363,7 +374,12 @@ function handleSlashCommand(input, agent, config) {
         case '/routines': {
             const { RoutineTool } = require('./tools/routine');
             const rt = new RoutineTool();
-            rt.execute({ action: 'list' })
+            rt.execute({ action: 'list' })
+                .then((out) => console.log('\n' + out))
+                .catch((err) => {
+                    const msg = err instanceof Error ? err.message : String(err);
+                    console.error(c(`Error listing routines: ${msg}`, 'red'));
+                });
             break;
         }
         case '/config':
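The cli.js handlers above implement a triage rule: only memory exhaustion is treated as fatal; everything else is logged and the REPL keeps running. A sketch of that logic factored into testable helpers (the `isFatal`/`installSafetyNets` names are illustrative, not the actual codebot-ai API):

```javascript
// Decide whether an uncaught error is unrecoverable. Matches the check in
// the diff above: only out-of-memory conditions force a process exit.
function isFatal(err) {
  const msg = err instanceof Error ? err.message : String(err);
  return msg.includes('out of memory') || msg.includes('ENOMEM');
}

// Register last-resort handlers so an uncaught error is logged instead of
// silently killing the REPL. Accepts any EventEmitter-like object for testing.
function installSafetyNets(proc = process) {
  proc.on('unhandledRejection', (reason) => {
    const msg = reason instanceof Error ? reason.message : String(reason);
    console.error(`Unhandled error: ${msg}`);
  });
  proc.on('uncaughtException', (err) => {
    console.error(`Uncaught exception: ${err.message}`);
    if (isFatal(err)) proc.exit(1);
  });
}
```

Note that `uncaughtException` leaves the process in an undefined state in general; treating only OOM as fatal is a pragmatic trade-off for a long-running REPL.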
package/dist/providers/anthropic.js
CHANGED
@@ -1,6 +1,7 @@
 "use strict";
 Object.defineProperty(exports, "__esModule", { value: true });
 exports.AnthropicProvider = void 0;
+const retry_1 = require("../retry");
 class AnthropicProvider {
     name;
     config;
@@ -27,26 +28,45 @@ class AnthropicProvider {
        }));
        }
        const baseUrl = this.config.baseUrl.replace(/\/+$/, '');
+        const MAX_RETRIES = 3;
        let response;
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
+        let lastError = '';
+        for (let attempt = 0; attempt <= MAX_RETRIES; attempt++) {
+            try {
+                response = await fetch(`${baseUrl}/v1/messages`, {
+                    method: 'POST',
+                    headers: {
+                        'Content-Type': 'application/json',
+                        'x-api-key': this.config.apiKey || '',
+                        'anthropic-version': '2023-06-01',
+                    },
+                    body: JSON.stringify(body),
+                    signal: AbortSignal.timeout(60_000),
+                });
+                if (response.ok || !(0, retry_1.isRetryable)(null, response.status)) {
+                    break;
+                }
+                lastError = `Anthropic error ${response.status}`;
+                if (attempt < MAX_RETRIES) {
+                    const delay = (0, retry_1.getRetryDelay)(attempt, response.headers.get('retry-after'));
+                    await (0, retry_1.sleep)(delay);
+                    continue;
+                }
+            }
+            catch (err) {
+                lastError = err instanceof Error ? err.message : String(err);
+                if (attempt < MAX_RETRIES && (0, retry_1.isRetryable)(err)) {
+                    const delay = (0, retry_1.getRetryDelay)(attempt);
+                    await (0, retry_1.sleep)(delay);
+                    continue;
+                }
+                yield { type: 'error', error: `Connection failed after ${attempt + 1} attempts: ${lastError}` };
+                return;
+            }
        }
-        if (!response.ok) {
-            const text = await response.text();
-            yield { type: 'error', error: `Anthropic error ${
+        if (!response || !response.ok) {
+            const text = response ? await response.text().catch(() => '') : '';
+            yield { type: 'error', error: `Anthropic error after retries: ${lastError}${text ? ` — ${text}` : ''}` };
            return;
        }
        if (!response.body) {
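Both provider diffs wrap their `fetch` in the same retry loop: break on success or a non-retryable status, back off and retry on 429/5xx or a network error, and give up after `MAX_RETRIES`. A standalone sketch of that shape with `fetch` and `sleep` injected so it runs without a network (`fetchWithRetry` is an illustrative name, not part of the package):

```javascript
// Statuses the providers treat as retryable (mirrors retry.js defaults).
const RETRYABLE = [429, 500, 502, 503, 504];

// doFetch(attempt) returns a Response-like { ok, status } or throws.
async function fetchWithRetry(doFetch, { maxRetries = 3, sleep = async () => {} } = {}) {
  let lastError = '';
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    let response;
    try {
      response = await doFetch(attempt);
    } catch (err) {
      lastError = err instanceof Error ? err.message : String(err);
      if (attempt < maxRetries) { await sleep(attempt); continue; }
      throw new Error(`Connection failed after ${attempt + 1} attempts: ${lastError}`);
    }
    // Success, or a status (e.g. 401) that retrying will not fix.
    if (response.ok || !RETRYABLE.includes(response.status)) return response;
    lastError = `error ${response.status}`;
    if (attempt < maxRetries) await sleep(attempt);
  }
  throw new Error(`Gave up after retries: ${lastError}`);
}
```

A 503 that clears up within three retries succeeds; a 401 returns immediately so auth failures surface without pointless waiting.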
package/dist/providers/openai.js
CHANGED
@@ -2,6 +2,7 @@
 Object.defineProperty(exports, "__esModule", { value: true });
 exports.OpenAIProvider = void 0;
 const registry_1 = require("./registry");
+const retry_1 = require("../retry");
 class OpenAIProvider {
     name;
     config;
@@ -33,22 +34,42 @@ class OpenAIProvider {
        if (this.config.apiKey) {
            headers['Authorization'] = `Bearer ${this.config.apiKey}`;
        }
+        const MAX_RETRIES = 3;
        let response;
-
-
-
-
-
-
-
-
-
-
-
+        let lastError = '';
+        for (let attempt = 0; attempt <= MAX_RETRIES; attempt++) {
+            try {
+                response = await fetch(`${this.config.baseUrl}/v1/chat/completions`, {
+                    method: 'POST',
+                    headers,
+                    body: JSON.stringify(body),
+                    signal: AbortSignal.timeout(60_000),
+                });
+                if (response.ok || !(0, retry_1.isRetryable)(null, response.status)) {
+                    break;
+                }
+                // Retryable HTTP status (429, 5xx)
+                lastError = `LLM error ${response.status}`;
+                if (attempt < MAX_RETRIES) {
+                    const delay = (0, retry_1.getRetryDelay)(attempt, response.headers.get('retry-after'));
+                    await (0, retry_1.sleep)(delay);
+                    continue;
+                }
+            }
+            catch (err) {
+                lastError = err instanceof Error ? err.message : String(err);
+                if (attempt < MAX_RETRIES && (0, retry_1.isRetryable)(err)) {
+                    const delay = (0, retry_1.getRetryDelay)(attempt);
+                    await (0, retry_1.sleep)(delay);
+                    continue;
+                }
+                yield { type: 'error', error: `Connection failed after ${attempt + 1} attempts: ${lastError}. Is your LLM server running?` };
+                return;
+            }
        }
-        if (!response.ok) {
-            const text = await response.text();
-            yield { type: 'error', error: `LLM error ${
+        if (!response || !response.ok) {
+            const text = response ? await response.text().catch(() => '') : '';
+            yield { type: 'error', error: `LLM error after retries: ${lastError}${text ? ` — ${text}` : ''}` };
            return;
        }
        if (!response.body) {
package/dist/retry.d.ts
ADDED
@@ -0,0 +1,22 @@
+/**
+ * Retry utilities for resilient network operations.
+ * Exponential backoff with jitter, Retry-After header support.
+ * Zero dependencies.
+ */
+export interface RetryOptions {
+    maxRetries?: number;
+    baseDelayMs?: number;
+    maxDelayMs?: number;
+    retryableStatuses?: number[];
+}
+declare const DEFAULTS: Required<RetryOptions>;
+/** Returns true if the error/status is retryable (network error or retryable HTTP status). */
+export declare function isRetryable(error: unknown, status?: number, opts?: RetryOptions): boolean;
+/**
+ * Calculate delay with exponential backoff + jitter.
+ * For 429 responses, respects Retry-After header.
+ */
+export declare function getRetryDelay(attempt: number, retryAfterHeader?: string | null, opts?: RetryOptions): number;
+export declare function sleep(ms: number): Promise<void>;
+export { DEFAULTS as RETRY_DEFAULTS };
+//# sourceMappingURL=retry.d.ts.map
package/dist/retry.js
ADDED
@@ -0,0 +1,59 @@
+"use strict";
+/**
+ * Retry utilities for resilient network operations.
+ * Exponential backoff with jitter, Retry-After header support.
+ * Zero dependencies.
+ */
+Object.defineProperty(exports, "__esModule", { value: true });
+exports.RETRY_DEFAULTS = void 0;
+exports.isRetryable = isRetryable;
+exports.getRetryDelay = getRetryDelay;
+exports.sleep = sleep;
+const DEFAULTS = {
+    maxRetries: 3,
+    baseDelayMs: 1000,
+    maxDelayMs: 30000,
+    retryableStatuses: [429, 500, 502, 503, 504],
+};
+exports.RETRY_DEFAULTS = DEFAULTS;
+/** Returns true if the error/status is retryable (network error or retryable HTTP status). */
+function isRetryable(error, status, opts) {
+    const statuses = opts?.retryableStatuses ?? DEFAULTS.retryableStatuses;
+    if (status && statuses.includes(status))
+        return true;
+    if (error instanceof TypeError)
+        return true; // fetch network errors
+    if (error instanceof Error) {
+        const msg = error.message.toLowerCase();
+        if (msg.includes('fetch failed') || msg.includes('econnreset') ||
+            msg.includes('econnrefused') || msg.includes('etimedout') ||
+            msg.includes('socket hang up') || msg.includes('network') ||
+            msg.includes('abort')) {
+            return true;
+        }
+    }
+    return false;
+}
+/**
+ * Calculate delay with exponential backoff + jitter.
+ * For 429 responses, respects Retry-After header.
+ */
+function getRetryDelay(attempt, retryAfterHeader, opts) {
+    const base = opts?.baseDelayMs ?? DEFAULTS.baseDelayMs;
+    const max = opts?.maxDelayMs ?? DEFAULTS.maxDelayMs;
+    // Respect Retry-After header (in seconds)
+    if (retryAfterHeader) {
+        const seconds = parseInt(retryAfterHeader, 10);
+        if (!isNaN(seconds) && seconds > 0) {
+            return Math.min(seconds * 1000, max);
+        }
+    }
+    // Exponential backoff with jitter: base * 2^attempt * (0.5..1.5)
+    const exponential = base * Math.pow(2, attempt);
+    const jitter = 0.5 + Math.random();
+    return Math.min(exponential * jitter, max);
+}
+function sleep(ms) {
+    return new Promise(resolve => setTimeout(resolve, ms));
+}
+//# sourceMappingURL=retry.js.map
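The backoff math in retry.js above is easy to sanity-check when the jitter factor is injectable: a Retry-After header (in seconds) wins when present, otherwise the delay is base · 2^attempt scaled by a jitter factor in [0.5, 1.5), clamped to the max. A restated, deterministic version (the `retryDelay` name and `jitter` parameter are illustrative additions for testing):

```javascript
// Same semantics as getRetryDelay in retry.js, with the random jitter
// source injectable so the result is deterministic under test.
function retryDelay(attempt, retryAfterHeader, { base = 1000, max = 30000, jitter = Math.random } = {}) {
  if (retryAfterHeader) {
    const seconds = parseInt(retryAfterHeader, 10);
    // Retry-After is given in seconds; convert and clamp.
    if (!isNaN(seconds) && seconds > 0) return Math.min(seconds * 1000, max);
  }
  // Exponential backoff with jitter in [0.5, 1.5), clamped to max.
  return Math.min(base * Math.pow(2, attempt) * (0.5 + jitter()), max);
}
```

With `jitter: () => 0.5` (factor exactly 1.0), attempt 0 gives 1000 ms, attempt 1 gives 2000 ms, and attempt 10 clamps to the 30000 ms ceiling; `retryDelay(0, '7')` returns 7000 ms regardless of the attempt number.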
package/dist/scheduler.d.ts
CHANGED
@@ -12,6 +12,8 @@ export declare class Scheduler {
     /** Check if any routines need to run right now */
     private tick;
     private executeRoutine;
+    /** Run the agent loop for a routine — separated so it can be wrapped in Promise.race */
+    private runRoutineAgent;
     private loadRoutines;
     private saveRoutines;
 }
package/dist/scheduler.js
CHANGED
@@ -90,25 +90,14 @@ class Scheduler {
    }
    async executeRoutine(routine, allRoutines) {
        this.running = true;
+        const ROUTINE_TIMEOUT_MS = 5 * 60 * 1000; // 5 minutes max per routine
        try {
            this.onOutput?.(`\n⏰ Running routine: ${routine.name}\n Task: ${routine.prompt}\n`);
-            //
-
-
-
-
-                        break;
-                    case 'tool_call':
-                        this.onOutput?.(`\n⚡ ${event.toolCall?.name}(${Object.entries(event.toolCall?.args || {}).map(([k, v]) => `${k}: ${typeof v === 'string' ? v.substring(0, 40) : v}`).join(', ')})\n`);
-                        break;
-                    case 'tool_result':
-                        this.onOutput?.(` ✓ ${event.toolResult?.result?.substring(0, 100) || ''}\n`);
-                        break;
-                    case 'error':
-                        this.onOutput?.(` ✗ Error: ${event.error}\n`);
-                        break;
-                }
-            }
+            // Race against a timeout so a hanging routine doesn't block the scheduler forever
+            await Promise.race([
+                this.runRoutineAgent(routine),
+                new Promise((_, reject) => setTimeout(() => reject(new Error(`Routine timed out after ${ROUTINE_TIMEOUT_MS / 1000}s`)), ROUTINE_TIMEOUT_MS)),
+            ]);
            // Update last run time
            routine.lastRun = new Date().toISOString();
            this.saveRoutines(allRoutines);
@@ -122,6 +111,25 @@ class Scheduler {
            this.running = false;
        }
    }
+    /** Run the agent loop for a routine — separated so it can be wrapped in Promise.race */
+    async runRoutineAgent(routine) {
+        for await (const event of this.agent.run(routine.prompt)) {
+            switch (event.type) {
+                case 'text':
+                    this.onOutput?.(event.text || '');
+                    break;
+                case 'tool_call':
+                    this.onOutput?.(`\n⚡ ${event.toolCall?.name}(${Object.entries(event.toolCall?.args || {}).map(([k, v]) => `${k}: ${typeof v === 'string' ? v.substring(0, 40) : v}`).join(', ')})\n`);
+                    break;
+                case 'tool_result':
+                    this.onOutput?.(` ✓ ${event.toolResult?.result?.substring(0, 100) || ''}\n`);
+                    break;
+                case 'error':
+                    this.onOutput?.(` ✗ Error: ${event.error}\n`);
+                    break;
+            }
+        }
+    }
    loadRoutines() {
        try {
            if (fs.existsSync(ROUTINES_FILE)) {
|
package/dist/tools/web-fetch.js
CHANGED
|
@@ -73,14 +73,23 @@ class WebFetchTool {
|
|
|
73
73
|
body = args.body;
|
|
74
74
|
}
|
|
75
75
|
try {
|
|
76
|
+
// AbortController covers both connection AND body reading (res.text())
|
|
77
|
+
const controller = new AbortController();
|
|
78
|
+
const bodyTimeout = setTimeout(() => controller.abort(), 30_000);
|
|
76
79
|
const res = await fetch(url, {
|
|
77
80
|
method,
|
|
78
81
|
headers,
|
|
79
82
|
body,
|
|
80
|
-
signal:
|
|
83
|
+
signal: controller.signal,
|
|
81
84
|
});
|
|
82
85
|
const contentType = res.headers.get('content-type') || '';
|
|
83
|
-
|
|
86
|
+
let responseText;
|
|
87
|
+
try {
|
|
88
|
+
responseText = await res.text();
|
|
89
|
+
}
|
|
90
|
+
finally {
|
|
91
|
+
clearTimeout(bodyTimeout);
|
|
92
|
+
}
|
|
84
93
|
// Truncate very large responses
|
|
85
94
|
const maxLen = 50000;
|
|
86
95
|
const truncated = responseText.length > maxLen
|
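The web-fetch fix above swaps a connection-only timeout for a single `AbortController` whose timer spans both the connection and the body read, cleared in a `finally` so a fast response doesn't leave a live timer. The shape, as a small sketch with `fetch` injectable so it can run without a network (`fetchTextWithTimeout` is an illustrative name):

```javascript
// One timer guards the whole request: a server that connects quickly but
// streams its body forever still gets aborted after `ms`.
async function fetchTextWithTimeout(url, ms = 30_000, doFetch = fetch) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), ms);
  try {
    const res = await doFetch(url, { signal: controller.signal });
    return await res.text(); // body read is still covered by the same signal
  } finally {
    clearTimeout(timer); // always clean up, success or abort
  }
}
```

By contrast, `AbortSignal.timeout(ms)` passed straight to `fetch` (as in the provider diffs) covers the response stream too, but gives no handle to clear; the explicit controller is the choice here because the tool reads the body in a separate step.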
package/package.json
CHANGED
@@ -1,7 +1,7 @@
 {
   "name": "codebot-ai",
-  "version": "1.
-  "description": "
+  "version": "1.3.0",
+  "description": "Zero-dependency autonomous AI agent. Code, browse, search, automate. Works with any LLM — Ollama, Claude, GPT, Gemini, DeepSeek, Groq, Mistral, Grok.",
   "main": "dist/index.js",
   "types": "dist/index.d.ts",
   "bin": {
@@ -17,17 +17,25 @@
   },
   "keywords": [
     "ai",
-    "
-    "ollama",
-    "local-llm",
+    "ai-agent",
     "agent",
-    "
+    "autonomous",
+    "agentic",
+    "coding-assistant",
     "code-generation",
+    "llm",
+    "openai",
     "claude",
     "gpt",
     "gemini",
-    "
-    "
+    "ollama",
+    "deepseek",
+    "groq",
+    "mistral",
+    "local-llm",
+    "browser-automation",
+    "cli",
+    "web-search"
   ],
   "author": "Ascendral Software Development & Innovation",
   "license": "MIT",