@ducci/jarvis 1.0.14 → 1.0.16

@@ -0,0 +1,116 @@
# Finding 001: Context Window Explosion via Tool Output Accumulation

**Date:** 2026-02-26
**Severity:** High — renders the session completely unusable after enough handoffs
**Status:** Fixed

---

## What Happened

A session was started with the question *"Hast du Zugriff auf deinen source code? Wo liegt er?"* ("Do you have access to your source code? Where is it located?"). The agent began exploring the filesystem using `exec` and `list_dir`, running commands like `cat agent.js`, `cat tools.js`, `cat app.js`, and various `find` commands.

The task required more than 10 iterations to complete, so the checkpoint/handoff mechanism fired. The agent ran 6 consecutive handoff runs before hitting `maxHandoffs` and stopping with `intervention_required`.

By that point the session `conversation.json` had grown to **687 KB**. On the very next user message (*"Why?"*), both the primary and fallback models returned a `400 Provider returned error`. The session was permanently broken — no further messages could be processed.

---

## Root Cause

Two compounding problems:

### 1. Tool output stored verbatim, without size limit

`exec` returns raw `stdout` from shell commands. When the model runs `cat agent.js` (440 lines, ~22 000 chars), that entire output gets stored in `session.messages` as a `role: "tool"` message. Every subsequent model request in that run — and in all future runs — sends this content in full.

There was no cap anywhere on tool result content. A single run of 10 iterations with a few `cat` calls could easily produce 100–200 KB of tool messages.

### 2. Handoff runs accumulated on top of each other

When the iteration limit is hit, the checkpoint/handoff mechanism pushes `checkpoint.remaining` as a new user message and starts a fresh agent run — but on top of the **same, growing** `session.messages` array. Each of the 6 handoff runs added another 10 iterations of tool call messages to the history. Nothing was ever removed.

After 6 runs × ~10 iterations × multiple `cat` commands each, the context reached approximately 170 000 tokens — exceeding the free model's 128 000 token limit. The `400` was the provider rejecting the oversized request.

### Why the `400` appeared on the *next* user message, not during the run

The session's final run hit `maxHandoffs` and stopped. At that point the context was already at or near the limit. When the user sent a new message, the full bloated history was loaded and sent again — this time slightly over the limit — causing the rejection.

---

## Model Context Windows (for reference)

| Model | Context Window |
|---|---|
| arcee-ai/trinity-large-preview:free | ~128 000 tokens |
| Claude Sonnet 4.6 | 200 000 tokens |
| Gemini 2.5 Pro / 2.0 Flash | 1 000 000 tokens |

A larger model would have delayed the failure, but not prevented it. The conversation would still grow unboundedly.

---

## What We Considered

**Truncate tool results in `prepareMessages`** — works, but runs on every loop iteration and is the wrong place conceptually. The content is already stored in full in the session before `prepareMessages` is ever called.

**Naive sliding window (drop oldest N messages)** — breaks the OpenRouter/OpenAI API contract. Every `role: "tool"` message must be paired with the assistant message containing the matching `tool_call_id`. Slicing arbitrarily through the message array orphans tool results and causes a `400` — the exact error we're trying to fix.
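For illustration only (this option was rejected), a minimal sketch of what a pairing-aware window would have to do: never let the window start on a `role: "tool"` message, because its assistant parent would fall outside the window. `pruneHistory` is a hypothetical helper, not project code:

```js
// Hypothetical helper (not the shipped fix): a sliding window that respects
// the tool-call pairing. If the cut point lands on a tool message, advance
// past it so no tool result survives without the assistant message that
// issued the matching tool_call_id.
function pruneHistory(messages, maxKept) {
  if (messages.length <= maxKept) return messages;
  const [system, ...rest] = messages; // always keep the system prompt
  let cut = rest.length - (maxKept - 1);
  while (cut < rest.length && rest[cut].role === 'tool') cut++;
  return [system, ...rest.slice(cut)];
}

const history = [
  { role: 'system', content: 'You are Jarvis.' },
  { role: 'user', content: 'list files' },
  { role: 'assistant', tool_calls: [{ id: 'c1' }] },
  { role: 'tool', tool_call_id: 'c1', content: '...' },
  { role: 'assistant', content: 'done' },
];
console.log(pruneHistory(history, 4).map(m => m.role).join(','));
// → system,assistant,tool,assistant (the tool result kept its parent)
```

Even done correctly, this only bounds message count, not bytes per message, which is why the cap-at-write-time fix below was chosen instead.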

**Token budget / summarisation** — more adaptive but significantly more complex. Requires either token counting per model or an extra LLM call. Overkill for v1.

---

## Fix

Two targeted changes to `src/server/agent.js`.

### 1. Cap tool result content at write time (`MAX_TOOL_RESULT = 4000`)

Right where a tool result is pushed to `session.messages`, cap the content to 4 000 characters. The full result is still passed to `runToolCalls` and therefore written to the JSONL session log — no information is lost for debugging. Only what the model sees is limited.

```js
const sessionContent = resultStr.length > MAX_TOOL_RESULT
  ? resultStr.slice(0, MAX_TOOL_RESULT) + '\n[...truncated]'
  : resultStr;
session.messages.push({ role: 'tool', tool_call_id: toolCall.id, content: sessionContent });
```

4 000 chars is ~80 lines of code or a full `ls -la` listing — enough for the model to reason about most outputs. If more detail is needed, the model should use targeted commands (`grep`, `head`, `tail`) rather than `cat`-ing entire files.

### 2. Strip intermediate tool messages before each handoff

Before calling `runAgentLoop`, snapshot `session.messages.length` as `runStartIndex`. If the run ends with `checkpoint_reached`, splice out all messages added during that run *except the final wrap-up assistant response*, then push `checkpoint.remaining` as the new user message.

```js
const runStartIndex = session.messages.length;
const run = await runAgentLoop(...);

// on checkpoint_reached, before resuming:
session.messages.splice(runStartIndex, session.messages.length - runStartIndex - 1);
session.messages.push({ role: 'user', content: run.checkpoint.remaining });
```

**Before** (after 6 handoffs):
```
[system] [user: question] [assistant/tool ×10] [wrap-up] [user: checkpoint1]
[assistant/tool ×10] [wrap-up] [user: checkpoint2]
[assistant/tool ×10] [wrap-up] ... → 687 KB
```

**After** (after 6 handoffs):
```
[system] [user: question] [wrap-up] [user: checkpoint1]
[wrap-up] [user: checkpoint2]
[wrap-up] ... → ~5 KB
```

Each handoff now adds 2 messages instead of 20+. The wrap-up message carries the relevant state (what was done, what remains) so the model is not flying blind — it just doesn't have the raw tool noise from previous runs.

---

## Outcome

- Sessions with long-running tasks no longer grow the context unboundedly.
- The JSONL session log is unaffected — full tool outputs are always written there.
- The model can still access previous run output via `read_session_log` if needed.
- A follow-up message after a completed multi-handoff task will no longer receive a `400`.
@@ -0,0 +1,84 @@
# Finding 002: Handoff Edge Cases Found During Review of Finding 001

**Date:** 2026-02-26
**Severity:** Medium
**Status:** Fixed

---

## Context

While reviewing the fix for [Finding 001](./001-context-explosion.md), two edge cases in the handoff system were found. Neither caused problems in the observed debugging session, but both could cause failures under specific conditions.

---

## Issue A: `checkpoint.remaining` could be `null`, causing a 400 on the next iteration

### What could happen

When the iteration limit is hit, the agent asks the model for a wrap-up response that includes a `checkpoint` field:

```json
{
  "response": "...",
  "logSummary": "...",
  "checkpoint": {
    "progress": "...",
    "remaining": "..."
  }
}
```

The server then pushes `checkpoint.remaining` as a user message to start the next run:

```js
session.messages.push({ role: 'user', content: run.checkpoint.remaining });
```

Weaker or free models occasionally omit required fields or set them to `null`. If `remaining` is `null`, the session gets a `{ role: 'user', content: null }` message. Most providers reject a null content field with a `400 Bad Request` on the next model call — the same error that surfaced in Finding 001, but from a different cause.
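The shipped fix is the one-line `||` fallback shown under "Fix". A stricter variant, sketched here as a hypothetical helper (`normalizeCheckpoint` is not project code), would validate the whole checkpoint object so that every malformed shape degrades to the same neutral prompt:

```js
// Hypothetical defensive parse of the wrap-up JSON shown above. Any missing,
// null, or non-string field falls back to a safe default instead of being
// pushed into session.messages verbatim.
function normalizeCheckpoint(parsed) {
  const cp = parsed && typeof parsed.checkpoint === 'object' && parsed.checkpoint !== null
    ? parsed.checkpoint
    : {};
  return {
    progress: typeof cp.progress === 'string' ? cp.progress : '',
    remaining: typeof cp.remaining === 'string' && cp.remaining.trim()
      ? cp.remaining
      : 'Continue with the task.',
  };
}

console.log(normalizeCheckpoint({ checkpoint: { remaining: null } }).remaining);
// → Continue with the task.
```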
39
+
40
+ ### Fix
41
+
42
+ ```js
43
+ session.messages.push({ role: 'user', content: run.checkpoint.remaining || 'Continue with the task.' });
44
+ ```
45
+
46
+ ---
47
+
48
+ ## Issue B: `intervention_required` did not strip tool history before saving
49
+
50
+ ### What could happen
51
+
52
+ The tool history strip introduced in Finding 001 runs right before pushing `checkpoint.remaining` for the next run. But the `intervention_required` path (max handoffs exceeded) breaks out of the loop *before* reaching the strip:
53
+
54
+ ```js
55
+ if (session.metadata.handoffCount > config.maxHandoffs) {
56
+ // ... log and set status ...
57
+ break; // ← strip never ran
58
+ }
59
+
60
+ // strip only reached here, after the if-block
61
+ session.messages.splice(runStartIndex, session.messages.length - runStartIndex - 1);
62
+ ```
63
+
64
+ This meant a session that hit the handoff limit was saved with the full tool history of the last run still in it. When the user sends a new message after `intervention_required`, the model receives all of that accumulated tool history — the same context bloat risk as before the fix in Finding 001.
65
+
66
+ ### Fix
67
+
68
+ Strip the tool history inside the `intervention_required` branch, before breaking:
69
+
70
+ ```js
71
+ if (session.metadata.handoffCount > config.maxHandoffs) {
72
+ // ... log and set status ...
73
+ session.messages.splice(runStartIndex, session.messages.length - runStartIndex - 1);
74
+ break;
75
+ }
76
+ ```
77
+
78
+ The wrap-up assistant message (last in the array) is preserved — it gives the model context about what was attempted when the user resumes.
79
+
80
+ ---
81
+
82
+ ## Why these weren't caught earlier
83
+
84
+ Both issues only manifest under specific conditions (model omitting a field; hitting maxHandoffs exactly). The debugging session in Finding 001 stopped at `intervention_required` after 6 handoffs, but the 400 error on the next message was attributed to the overall context size, masking the fact that the strip hadn't run for that final run.
@@ -0,0 +1,120 @@
# Finding 003: Event Loop Blocking, Async File I/O, and Session Reliability

**Date:** 2026-02-27
**Severity:** High — caused observed 100% CPU and server unresponsiveness in production
**Status:** Fixed

---

## What Happened

A session was started with the question *"Kannst du deinen source code finden und anschauen mittels Tools?"* ("Can you find and look at your source code using your tools?"). The agent used the `exec` tool to run two full-filesystem scans:

```
find / -type f \( -iname "*.js" -o -iname "*.ts" -o -iname "*.py" \) 2>/dev/null | head -20
find / -type d -name "jarvis" 2>/dev/null
```

Both commands start from the filesystem root `/`. The second has no output limit and scans everything: real disk filesystems, `/proc`, `/sys`, `/dev`, and any network mounts. On the affected Linux server this caused the CPU to reach 100% and the server became unresponsive. The server had to be shut down manually.

---

## Root Cause

### 1. `execSync` blocks the entire Node.js event loop

Both `exec` and `list_dir` used `execSync` from `child_process`. `execSync` is a synchronous call that blocks the event loop for its entire duration. While any shell command runs:

- Express cannot process incoming HTTP requests
- The Telegram bot cannot receive or process new messages
- All timers and async callbacks are frozen (including the Telegram `typingInterval`, so the user sees no activity indicator)

The OS sees a CPU-hungry `find` child process running at full speed while Node.js sits blocked waiting for it. Combined, this presents as ~100% CPU with a completely unresponsive server.

Additionally, `list_dir` used `execSync` with **no timeout at all**. A hanging command (e.g. `ls` on an NFS mount or a blocked `/proc` entry) would freeze the server permanently.
35
+
36
+ ### 2. All file I/O was synchronous
37
+
38
+ `loadSession`, `saveSession`, `appendLog`, and `loadTools` all used `fs.*Sync` variants. In an async Node.js server these block the event loop on every request. For small files the impact is measured in microseconds, but the pattern is architecturally incorrect and accumulates under load.
39
+
40
+ ### 3. Session not saved on unexpected error
41
+
42
+ In `handleChat`, `saveSession` was called unconditionally after the `try/catch` block. If the catch re-threw an unexpected error, `saveSession` was never reached. The user message had already been appended to the in-memory session but the on-disk version did not reflect it — leaving the session in an inconsistent state for the next request.
43
+
44
+ ### 4. No concurrency protection per session
45
+
46
+ The Telegram channel uses `@grammyjs/runner`, which processes updates concurrently. If a user sent two messages in quick succession, both `handleChat` calls could load the same session simultaneously, run independent agent loops, and then overwrite each other's `saveSession` call. The second write would silently discard the first response.
47
+
48
+ ### 5. Seed tools never updated after initial creation
49
+
50
+ `seedTools()` used `if (!existing[name])` — it only wrote a seed tool on first run. Any update to `exec` or `list_dir` in the source code would never propagate to an existing installation. This blocked the async fix for `exec` and `list_dir` from taking effect.
51
+
52
+ ---
53
+
54
+ ## Fixes
55
+
56
+ ### 1. `exec` and `list_dir` → async (`src/server/tools.js`)
57
+
58
+ **`exec`**: replaced `execSync` with `promisify(exec)`. The event loop is now free during shell command execution. Timeout (60s) and maxBuffer (2MB) are preserved.
59
+
60
+ **`list_dir`**: replaced `execSync` with `promisify(execFile)`. `execFile` does not use a shell interpreter, which is safer against special characters in paths. Added a 10-second timeout (previously none).
61
+
62
+ ### 2. `executeTool` global timeout (`src/server/tools.js`)
63
+
64
+ All tool executions — both built-in and AI-created — are now wrapped in `Promise.race` against a 60-second timeout. This protects against AI-created tools that hang on async operations (network requests, file I/O). The timeout matches the `exec` tool's own limit for consistency.
65
+
66
+ ```js
67
+ const timeout = new Promise((_, reject) =>
68
+ setTimeout(() => reject(new Error(`Tool '${name}' timed out after 60s`)), 60_000)
69
+ );
70
+ return await Promise.race([fn(toolArgs, fs, path, process, _require), timeout]);
71
+ ```
72
+
73
+ Note: this does not protect against synchronous CPU loops without `await` points — that would require Worker Threads. Such code is unlikely to be generated accidentally.
74
+
75
+ ### 3. Seed tools always updated (`src/server/tools.js`)
76
+
77
+ `seedTools()` now compares the serialized content of each seed tool against the stored version and overwrites only when there is a difference. Updates to built-in tools propagate on the next server start without touching user-created tools.
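Condensed, the comparison works as follows (a simplified sketch: `SEED_TOOLS` is reduced to the relevant fields and `syncSeeds` stands in for the inner loop of the shipped `seedTools()`):

```js
// A seed tool is rewritten whenever its serialized form differs from what is
// on disk. User-created tools have different names and are never visited.
const SEED_TOOLS = {
  exec: { definition: { name: 'exec' }, code: '/* v2 */' },
};

function syncSeeds(existing) {
  let changed = false;
  for (const [name, tool] of Object.entries(SEED_TOOLS)) {
    if (JSON.stringify(existing[name]) !== JSON.stringify(tool)) {
      existing[name] = tool;
      changed = true;
    }
  }
  return changed;
}

const stored = { exec: { definition: { name: 'exec' }, code: '/* v1 */' }, my_tool: { code: 'x' } };
console.log(syncSeeds(stored), stored.exec.code, 'my_tool' in stored);
// → true /* v2 */ true (outdated seed replaced, user tool untouched)
```

A second call is a no-op because the stored copy now serializes identically, so the tools file is only rewritten when a seed actually changed.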

### 4. All file I/O → async (`src/server/sessions.js`, `src/server/logging.js`, `src/server/tools.js`)

`loadSession`, `saveSession`, `appendLog`, and `loadTools` now use `fs.promises.*`. All callers in `agent.js` are updated to `await` these calls.

### 5. `saveSession` moved to `finally` block (`src/server/agent.js`)

The session is now always persisted — on success, on model error, and on unexpected errors. A failed save is caught and logged without masking the original error.

```js
} finally {
  try {
    await saveSession(sessionId, session);
  } catch (saveErr) {
    console.error(`Failed to save session ${sessionId}:`, saveErr);
  }
}
```

### 6. Session queue for concurrency control (`src/server/agent.js`)

A module-level `Map<sessionId, Promise>` serializes concurrent requests for the same session. Each new request registers itself as the tail of the queue and waits for the previous request to resolve before starting. The map entry is cleaned up by whichever request is last in the chain.

```js
const previous = sessionQueues.get(sessionId) ?? Promise.resolve();
let releaseLock;
const current = new Promise(resolve => { releaseLock = resolve; });
sessionQueues.set(sessionId, current);
await previous;
// ... process request ...
// finally: releaseLock()
```

This is safe in Node.js because the event loop is single-threaded: `get`, `new Promise`, and `set` all execute synchronously before the first `await`, so there is no race between two requests reading the same `undefined` entry.
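The pattern can be demonstrated standalone (a sketch: `withSessionLock` mirrors the shipped wrapper, and the work functions stand in for `_runHandleChat`). Two overlapping calls for the same session run strictly in order even though the second would finish instantly on its own:

```js
// Standalone demonstration of the session-queue pattern.
const queues = new Map();

async function withSessionLock(sessionId, work) {
  const previous = queues.get(sessionId) ?? Promise.resolve();
  let release;
  const current = new Promise(resolve => { release = resolve; });
  queues.set(sessionId, current);
  await previous; // wait for the earlier request on this session
  try {
    return await work();
  } finally {
    release();
    // Clean up only if nobody queued behind us
    if (queues.get(sessionId) === current) queues.delete(sessionId);
  }
}

(async () => {
  const order = [];
  await Promise.all([
    withSessionLock('s1', () =>
      new Promise(r => setTimeout(r, 50)).then(() => order.push('first done'))),
    withSessionLock('s1', async () => { order.push('second ran'); }),
  ]);
  console.log(order.join(' | ')); // → first done | second ran
  console.log(queues.size);      // → 0 (entry cleaned up)
})();
```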

---

## What Was Not Changed

- The agent loop logic, checkpoint/handoff system, loop detection, and format recovery — all unchanged.
- `seedTools()` remains synchronous (called once at startup, before the server accepts requests).
- `createSession()` and `getToolDefinitions()` remain synchronous (pure functions, no I/O).
- No rate limiting or HTTP authentication added — the server is intended for local/personal use only.
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "@ducci/jarvis",
- "version": "1.0.14",
+ "version": "1.0.16",
  "description": "A fully automated agent system that lives on a server.",
  "main": "./src/index.js",
  "type": "module",
package/src/server/agent.js CHANGED
@@ -8,6 +8,7 @@ import chalk from 'chalk';
 
  const FORMAT_NUDGE = 'Your previous response was not valid JSON. Respond only with the required JSON object: {"response": "...", "logSummary": "..."}';
  const LOOP_DETECTION_THRESHOLD = 3;
+ const MAX_TOOL_RESULT = 4000;
 
  const WRAP_UP_NOTE = `[System: You have reached the iteration limit. This is your final response for this run.
  Respond with your normal JSON, but add a checkpoint field:
@@ -23,6 +24,11 @@ Respond with your normal JSON, but add a checkpoint field:
 
  The checkpoint field will be used to automatically resume the task in the next run.]`;
 
+ // Serializes concurrent requests for the same session. Maps sessionId to the
+ // tail of the current request chain (a Promise that resolves when the last
+ // queued request finishes).
+ const sessionQueues = new Map();
+
  async function callModel(client, model, messages, tools) {
  const params = { model, messages };
  if (tools && tools.length > 0) {
@@ -66,7 +72,7 @@ async function callModelWithFallback(client, config, messages, tools) {
  * Returns { iteration, response, logSummary, status, runToolCalls, checkpoint }.
  */
  async function runAgentLoop(client, config, session, prepareMessages) {
- let tools = loadTools();
+ let tools = await loadTools();
  let toolDefs = getToolDefinitions(tools);
  let iteration = 0;
  const runToolCalls = [];
@@ -151,10 +157,13 @@ async function runAgentLoop(client, config, session, prepareMessages) {
  const resultStr = typeof result === 'string' ? result : JSON.stringify(result);
  runToolCalls.push({ name: toolName, args: toolArgs, status: toolStatus, result: resultStr });
 
+ const sessionContent = resultStr.length > MAX_TOOL_RESULT
+ ? resultStr.slice(0, MAX_TOOL_RESULT) + '\n[...truncated]'
+ : resultStr;
  session.messages.push({
  role: 'tool',
  tool_call_id: toolCall.id,
- content: resultStr,
+ content: sessionContent,
  });
 
  const callKey = `${toolName}|${JSON.stringify(toolArgs)}|${resultStr}`;
@@ -171,7 +180,7 @@ async function runAgentLoop(client, config, session, prepareMessages) {
 
  // Reload tools if any were created/updated this iteration
  if (toolsModified) {
- tools = loadTools();
+ tools = await loadTools();
  toolDefs = getToolDefinitions(tools);
  }
 
@@ -309,14 +318,41 @@ async function runAgentLoop(client, config, session, prepareMessages) {
  * Manages the handoff loop across multiple agent runs.
  */
  export async function handleChat(config, requestSessionId, userMessage) {
+ const sessionId = requestSessionId || crypto.randomUUID();
+
+ // Serialize concurrent requests for the same session. Each request registers
+ // itself at the tail of the queue and waits for the previous request to finish
+ // before starting. New sessions (no requestSessionId) each get a unique ID,
+ // so they never contend with each other.
+ const previous = sessionQueues.get(sessionId) ?? Promise.resolve();
+ let releaseLock;
+ const current = new Promise(resolve => { releaseLock = resolve; });
+ sessionQueues.set(sessionId, current);
+ await previous;
+
+ try {
+ return await _runHandleChat(config, sessionId, userMessage);
+ } finally {
+ releaseLock();
+ // Clean up only if no one else has queued behind us
+ if (sessionQueues.get(sessionId) === current) {
+ sessionQueues.delete(sessionId);
+ }
+ }
+ }
+
+ /**
+ * The actual chat logic, extracted so handleChat can wrap it cleanly with the
+ * session lock.
+ */
+ async function _runHandleChat(config, sessionId, userMessage) {
  const client = new OpenAI({
  baseURL: 'https://openrouter.ai/api/v1',
  apiKey: config.apiKey,
  });
 
  const systemPromptTemplate = loadSystemPrompt();
- const sessionId = requestSessionId || crypto.randomUUID();
- let session = loadSession(sessionId);
+ let session = await loadSession(sessionId);
 
  if (!session) {
  session = createSession(systemPromptTemplate);
@@ -341,9 +377,10 @@ export async function handleChat(config, requestSessionId, userMessage) {
  let finalLogSummary = '';
  let finalStatus = 'ok';
 
- // Handoff loop
  try {
+ // Handoff loop
  while (true) {
+ const runStartIndex = session.messages.length;
  const run = await runAgentLoop(client, config, session, prepareMessages);
  allToolCalls.push(...run.runToolCalls);
 
@@ -364,7 +401,7 @@ export async function handleChat(config, requestSessionId, userMessage) {
  if (run.errorDetail) logEntry.errorDetail = run.errorDetail;
  if (run.contextInfo) logEntry.contextInfo = run.contextInfo;
  if (run.rawResponse) logEntry.rawResponse = run.rawResponse;
- appendLog(sessionId, logEntry);
+ await appendLog(sessionId, logEntry);
 
  // Inject synthetic error note so the model has context on the next user turn
  if (finalStatus === 'model_error' || finalStatus === 'format_error') {
@@ -379,7 +416,7 @@ export async function handleChat(config, requestSessionId, userMessage) {
  }
 
  // Checkpoint reached — log this run
- appendLog(sessionId, {
+ await appendLog(sessionId, {
  iteration: run.iteration,
  model: config.selectedModel,
  userInput: userMessage,
@@ -396,7 +433,7 @@ export async function handleChat(config, requestSessionId, userMessage) {
  finalLogSummary = run.logSummary;
  finalStatus = 'intervention_required';
 
- appendLog(sessionId, {
+ await appendLog(sessionId, {
  iteration: 0,
  model: config.selectedModel,
  userInput: userMessage,
@@ -405,14 +442,23 @@ export async function handleChat(config, requestSessionId, userMessage) {
  logSummary: 'Max handoffs exceeded. Human intervention required.',
  status: 'intervention_required',
  });
+ // Strip tool history even when stopping — prevents context bloat on the
+ // next user message when human intervention resumes the session.
+ session.messages.splice(runStartIndex, session.messages.length - runStartIndex - 1);
  break;
  }
 
- // Resume with checkpoint.remaining as new prompt
- session.messages.push({ role: 'user', content: run.checkpoint.remaining });
+ // Strip intermediate tool messages from this run before resuming.
+ // Keep only the wrap-up assistant response (last message added by runAgentLoop)
+ // it summarises what was done and is far cheaper context than the raw tool history.
+ session.messages.splice(runStartIndex, session.messages.length - runStartIndex - 1);
+
+ // Resume with checkpoint.remaining as new prompt.
+ // Guard against null/undefined in case the model omitted the field.
+ session.messages.push({ role: 'user', content: run.checkpoint.remaining || 'Continue with the task.' });
  }
  } catch (e) {
- const errorLog = {
+ await appendLog(sessionId, {
  iteration: 0,
  model: config.selectedModel,
  userInput: userMessage,
@@ -421,14 +467,18 @@ export async function handleChat(config, requestSessionId, userMessage) {
  logSummary: `Critical error: ${e.message}`,
  status: 'error',
  errorDetail: { message: e.message, stack: e.stack },
- };
- appendLog(sessionId, errorLog);
- // Re-throw to let app.js handle the HTTP response
+ });
  throw e;
+ } finally {
+ // Always persist the session — even if an unexpected error occurred.
+ // A failed save must not mask the original error.
+ try {
+ await saveSession(sessionId, session);
+ } catch (saveErr) {
+ console.error(`Failed to save session ${sessionId}:`, saveErr);
+ }
  }
 
- saveSession(sessionId, session);
-
  console.log(`${chalk.magenta('<<<')} ${chalk.bold('Final Response')} [SID: ${chalk.dim(sessionId.slice(0, 8))}] ${chalk.italic(finalLogSummary)}`);
 
  return {
package/src/server/logging.js CHANGED
@@ -3,10 +3,10 @@ import path from 'path';
  import chalk from 'chalk';
  import { PATHS } from './config.js';
 
- export function appendLog(sessionId, entry) {
+ export async function appendLog(sessionId, entry) {
  const logFile = path.join(PATHS.logsDir, `session-${sessionId}.jsonl`);
  const line = JSON.stringify({ ts: new Date().toISOString(), sessionId, ...entry }) + '\n';
- fs.appendFileSync(logFile, line, 'utf8');
+ await fs.promises.appendFile(logFile, line, 'utf8');
 
  // Console output for better visibility
  const statusColor = entry.status === 'ok' ? chalk.green : chalk.red;
package/src/server/sessions.js CHANGED
@@ -2,19 +2,20 @@ import fs from 'fs';
  import path from 'path';
  import { PATHS } from './config.js';
 
- export function loadSession(sessionId) {
+ export async function loadSession(sessionId) {
  const filePath = path.join(PATHS.conversationsDir, `${sessionId}.json`);
  try {
- return JSON.parse(fs.readFileSync(filePath, 'utf8'));
+ const raw = await fs.promises.readFile(filePath, 'utf8');
+ return JSON.parse(raw);
  } catch {
  return null;
  }
  }
 
- export function saveSession(sessionId, session) {
+ export async function saveSession(sessionId, session) {
  session.metadata.updatedAt = new Date().toISOString();
  const filePath = path.join(PATHS.conversationsDir, `${sessionId}.json`);
- fs.writeFileSync(filePath, JSON.stringify(session, null, 2), 'utf8');
+ await fs.promises.writeFile(filePath, JSON.stringify(session, null, 2), 'utf8');
  }
 
  export function createSession(systemPromptTemplate) {
package/src/server/tools.js CHANGED
@@ -6,6 +6,8 @@ import { PATHS } from './config.js';
  const _require = createRequire(import.meta.url);
  const AsyncFunction = Object.getPrototypeOf(async function () {}).constructor;
 
+ const TOOL_TIMEOUT_MS = 60_000;
+
  const SEED_TOOLS = {
  list_dir: {
  definition: {
@@ -25,7 +27,18 @@ const SEED_TOOLS = {
  },
  },
  },
- code: 'const targetPath = args.path || process.cwd(); const resolved = path.resolve(targetPath); const { execSync } = require("child_process"); const output = execSync(`ls -la "${resolved}"`, { encoding: "utf8" }); return { status: "ok", path: resolved, output };',
+ code: `
+ const { execFile } = require("child_process");
+ const { promisify } = require("util");
+ const execFileAsync = promisify(execFile);
+ const targetPath = args.path || process.cwd();
+ const resolved = path.resolve(targetPath);
+ const { stdout: output } = await execFileAsync("ls", ["-la", resolved], {
+ encoding: "utf8",
+ timeout: 10000,
+ });
+ return { status: "ok", path: resolved, output };
+ `,
  },
  exec: {
  definition: {
@@ -45,7 +58,21 @@ const SEED_TOOLS = {
  },
  },
  },
- code: 'const { execSync } = require("child_process"); try { const stdout = execSync(args.cmd, { encoding: "utf8", timeout: 60000 }); return { status: "ok", exitCode: 0, stdout, stderr: "" }; } catch (e) { return { status: "error", exitCode: e.status || 1, stdout: e.stdout || "", stderr: e.stderr || e.message }; }',
+ code: `
+ const { exec } = require("child_process");
+ const { promisify } = require("util");
+ const execAsync = promisify(exec);
+ try {
+ const { stdout, stderr } = await execAsync(args.cmd, {
+ encoding: "utf8",
+ timeout: 60000,
+ maxBuffer: 2 * 1024 * 1024,
+ });
+ return { status: "ok", exitCode: 0, stdout, stderr };
+ } catch (e) {
+ return { status: "error", exitCode: e.code || 1, stdout: e.stdout || "", stderr: e.stderr || e.message };
+ }
+ `,
  },
  save_user_info: {
  definition: {
@@ -193,7 +220,9 @@ export function seedTools() {
 
  let changed = false;
  for (const [name, tool] of Object.entries(SEED_TOOLS)) {
- if (!existing[name]) {
+ // Always keep seed tools up to date — user-created tools have different names
+ // and are never touched by this loop.
+ if (JSON.stringify(existing[name]) !== JSON.stringify(tool)) {
  existing[name] = tool;
  changed = true;
  }
@@ -207,9 +236,10 @@ export function seedTools() {
  return existing;
  }
 
- export function loadTools() {
+ export async function loadTools() {
  try {
- return JSON.parse(fs.readFileSync(PATHS.toolsFile, 'utf8'));
+ const raw = await fs.promises.readFile(PATHS.toolsFile, 'utf8');
+ return JSON.parse(raw);
  } catch {
  return {};
  }
@@ -226,5 +256,13 @@ export async function executeTool(tools, name, toolArgs) {
  }
 
  const fn = new AsyncFunction('args', 'fs', 'path', 'process', 'require', tool.code);
- return await fn(toolArgs, fs, path, process, _require);
+
+ const timeout = new Promise((_, reject) =>
+ setTimeout(
+ () => reject(new Error(`Tool '${name}' timed out after ${TOOL_TIMEOUT_MS / 1000}s`)),
+ TOOL_TIMEOUT_MS
+ )
+ );
+
+ return await Promise.race([fn(toolArgs, fs, path, process, _require), timeout]);
  }