@ducci/jarvis 1.0.30 → 1.0.32

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,202 @@
1
+ # Finding 014: exec stderr Artifact and Malformed Tool Call Arguments
2
+
3
+ **Date:** 2026-03-02
4
+ **Severity:** Medium — caused spurious context noise (unwarranted nudges), agent confusion over malformed arguments, and silent loss of failedApproaches across user turns
5
+ **Status:** Fixed
6
+
7
+ ---
8
+
9
+ ## Observed Session
10
+
11
+ Session `d97070f7-e50f-4d9e-a38b-b68e7b27e7b7`. User asked Jarvis to set up an OWASP ZAP web security scanning project. The session ran 4 runs (40 total iterations) without completing the task. The snap-installed ZAP does not include `zap-baseline.sh` (which only ships in the ZAP Docker image); the agent never discovered that `zaproxy -cmd -quickurl` is the correct snap-native CLI equivalent.
12
+
13
+ Three compounding issues were identified that degraded the agent's ability to self-correct.
14
+
15
+ ---
16
+
17
+ ## Issue 1: exec Tool Injects Node.js Error Message into `stderr` Field
18
+
19
+ ### What happened
20
+
21
+ Commands like `which zap-cli` (not installed), `grep` returning no matches, and piped `find | grep` with no results all return exit code 1. In each case the actual process wrote nothing to stderr. But the exec tool result showed:
22
+
23
+ ```json
24
+ {"status":"error","exitCode":1,"stdout":"","stderr":"Command failed: which zap-cli\n"}
25
+ ```
26
+
27
+ The `"Command failed: ..."` string is not from the process — it is `e.message` from Node.js's `execAsync` error object, injected via `e.stderr || e.message`.
28
+
29
+ The stderr nudge check in `agent.js` fires on any non-empty `resultObj.stderr`:
30
+
31
+ ```js
32
+ if (resultObj && resultObj.stderr) {
33
+ stderrErrorInIteration = true;
34
+ }
35
+ ```
36
+
37
+ This triggered the nudge "Examine the stderr field carefully — it likely describes the root cause of the failure" 3–4 times per run, when there was nothing actionable to examine in stderr.
38
+
39
+ ### Root cause
40
+
41
+ In the exec seed tool (`src/server/tools.js`, line 76):
42
+
43
+ ```js
44
+ return { status: "error", exitCode: e.code || 1, stdout: e.stdout || "", stderr: e.stderr || e.message };
45
+ ```
46
+
47
+ The `|| e.message` fallback was designed to show something when `e.stderr` is empty. But it conflates process-generated stderr with Node.js meta-messages about the process exiting non-zero.
48
+
49
+ ### Fix
50
+
51
+ ```js
52
+ // Before:
53
+ return { status: "error", exitCode: e.code || 1, stdout: e.stdout || "", stderr: e.stderr || e.message };
54
+
55
+ // After:
56
+ return { status: "error", exitCode: e.code || 1, stdout: e.stdout || "", stderr: e.stderr || "" };
57
+ ```
58
+
59
+ `status: error` and `exitCode` already signal failure. The "Command failed: ..." Node.js string is not diagnostic and should not appear in the stderr field.
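Since the nudge check reduces to a truthiness test on `resultObj.stderr`, the fix suppresses the spurious nudge without any change to `agent.js`. A minimal sketch (the `firesStderrNudge` helper name is hypothetical, standing in for the inline check):

```js
// Stand-in for the inline nudge check in agent.js.
function firesStderrNudge(resultObj) {
  return Boolean(resultObj && resultObj.stderr);
}

// Old result shape: Node's meta-message injected into stderr.
const before = { status: 'error', exitCode: 1, stdout: '', stderr: 'Command failed: which zap-cli\n' };
// Fixed result shape: stderr empty when the process wrote nothing.
const after = { status: 'error', exitCode: 1, stdout: '', stderr: '' };

console.log(firesStderrNudge(before), firesStderrNudge(after)); // true false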
60
+
61
+ **File**: `src/server/tools.js` — exec seed tool code (propagates via seedTools() on next server start)
62
+
63
+ ---
64
+
65
+ ## Issue 2: Malformed Tool Call JSON Arguments Silently Swallowed
66
+
67
+ ### What happened
68
+
69
+ The model sent a tool call with malformed arguments (missing opening `{`):
70
+
71
+ ```json
72
+ {"name": "exec", "arguments": "\"cmd\": \"find /snap/zaproxy/current -type f -name '*.sh' | head -10\"}"}
73
+ ```
74
+
75
+ `JSON.parse` threw, the catch block silently used `toolArgs = {}`, and the exec tool failed with:
76
+
77
+ ```
78
+ {"status":"error","exitCode":"ERR_INVALID_ARG_TYPE","stdout":"","stderr":"The \"command\" argument must be of type string. Received undefined"}
79
+ ```
80
+
81
+ The model saw `ERR_INVALID_ARG_TYPE` / "Received undefined" — no indication that its JSON formatting was wrong. The stderr nudge also fired, compounding with Issue 1: the `e.message` fallback placed the type error into the stderr field.
82
+
83
+ ### Root cause
84
+
85
+ In `src/server/agent.js`:
86
+
87
+ ```js
88
+ try {
89
+ toolArgs = JSON.parse(toolCall.function.arguments || '{}');
90
+ } catch {
91
+ toolArgs = {};
92
+ }
93
+ ```
94
+
95
+ The JSON parse error is swallowed. The tool is called with empty args. The resulting type error is cryptic and doesn't tell the model to fix its JSON.
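The failure mode is easy to reproduce. A minimal sketch with the malformed arguments string from the session:

```js
// The arguments string from the session, missing its opening brace: not valid JSON.
const malformed = '"cmd": "find /snap/zaproxy/current -type f -name \'*.sh\' | head -10"}';

let toolArgs;
let parseError = null;
try {
  toolArgs = JSON.parse(malformed);
} catch (e) {
  parseError = e; // SyntaxError: the old code discarded this entirely
  toolArgs = {};
}
// The tool then runs with an empty object: toolArgs.cmd is undefined, which
// is what surfaced downstream as the cryptic ERR_INVALID_ARG_TYPE.
console.log(parseError instanceof SyntaxError, toolArgs.cmd); // true undefined
```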
96
+
97
+ ### Fix
98
+
99
+ Detect the parse failure early, push an explicit error tool result, and skip execution:
100
+
101
+ ```js
102
+ let toolArgs;
103
+ let argParseError = null;
104
+ try {
105
+ toolArgs = JSON.parse(toolCall.function.arguments || '{}');
106
+ } catch (e) {
107
+ argParseError = e;
108
+ }
109
+
110
+ if (argParseError) {
111
+ const errorContent = JSON.stringify({
112
+ status: 'error',
113
+ error: `Tool arguments could not be parsed as JSON: ${argParseError.message}. Ensure arguments are a valid JSON object, e.g. {"key": "value"}.`,
114
+ });
115
+ session.messages.push({ role: 'tool', tool_call_id: toolCall.id, content: errorContent });
116
+ runToolCalls.push({ name: toolName, args: {}, status: 'error', result: errorContent });
117
+ consecutiveFailures++;
118
+ continue;
119
+ }
120
+ ```
121
+
122
+ The model immediately sees "Tool arguments could not be parsed as JSON" instead of an opaque `ERR_INVALID_ARG_TYPE`. It can fix its JSON and retry.
123
+
124
+ **File**: `src/server/agent.js` — inner tool execution loop
125
+
126
+ ---
127
+
128
+ ## Issue 3: Accumulated failedApproaches Cleared on New User Message
129
+
130
+ ### What happened
131
+
132
+ In multi-run sessions where multiple checkpoint handoffs accumulate `session.metadata.failedApproaches`, the next user message resets this list to `[]`:
133
+
134
+ ```js
135
+ session.metadata.handoffCount = 0;
136
+ session.metadata.failedApproaches = [];
137
+ ```
138
+
139
+ This was designed to give the model a clean slate after human review. But "Ok do it" is not a review — it's a continuation. The model loses knowledge of what was already tried and can repeat the same failed strategies in the new round of runs.
140
+
141
+ (Note: in this specific session, run 1 ended with `ok`, so `failedApproaches` was empty at reset time anyway. But in sessions where checkpoint runs accumulate a list, the reset discards it entirely.)
142
+
143
+ ### Fix
144
+
145
+ Embed accumulated `failedApproaches` into the incoming user message before resetting:
146
+
147
+ ```js
148
+ let userMessageWithContext = userMessage;
149
+ if (session.metadata.failedApproaches && session.metadata.failedApproaches.length > 0) {
150
+ userMessageWithContext += `\n\n[System: The following approaches were tried and failed in previous runs — consider them exhausted:\n${session.metadata.failedApproaches.map((a, i) => `${i + 1}. ${a}`).join('\n')}]`;
151
+ }
152
+ session.messages.push({ role: 'user', content: userMessageWithContext });
153
+ session.metadata.handoffCount = 0;
154
+ session.metadata.failedApproaches = [];
155
+ ```
156
+
157
+ The model enters the new round with awareness of what has already been exhausted. If the user's message implies a fresh task, the model can ignore the list; if it's a continuation, it benefits from the context.
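The embedding is a pure string transformation, so it can be sketched in isolation. A minimal sketch with a hypothetical `metadata` object standing in for `session.metadata` (the second list entry is invented for illustration; the first is the example from the WRAP_UP_NOTE docs):

```js
// Hypothetical stand-in for session.metadata at the time of the next user turn.
const metadata = {
  failedApproaches: [
    'downloading subfinder via curl from GitHub releases — connection reset',
    'installing zap-cli via pip — package not found', // hypothetical entry
  ],
};
const userMessage = 'Ok do it';

// Same transformation as the fix: embed the exhausted approaches before reset.
let userMessageWithContext = userMessage;
if (metadata.failedApproaches && metadata.failedApproaches.length > 0) {
  userMessageWithContext += `\n\n[System: The following approaches were tried and failed in previous runs — consider them exhausted:\n${metadata.failedApproaches.map((a, i) => `${i + 1}. ${a}`).join('\n')}]`;
}
console.log(userMessageWithContext);
```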
158
+
159
+ **File**: `src/server/agent.js` — `_runHandleChat`, before user message push
160
+
161
+ ---
162
+
163
+ ## Issue 4: WRAP_UP_NOTE Did Not Require Verified Progress Claims
164
+
165
+ ### What happened
166
+
167
+ Runs 2 and 3 claimed to have created project directories and files in the `progress` field of their checkpoints — but their tool calls contained no file creation commands. The model fabricated progress, causing subsequent resume messages to start from false premises.
168
+
169
+ ### Fix
170
+
171
+ Updated the `progress` field description in `WRAP_UP_NOTE`:
172
+
173
+ ```
174
+ // Before:
175
+ "progress": "What has been fully completed so far.",
176
+
177
+ // After:
178
+ "progress": "What has been fully completed — only include items confirmed by tool output (e.g., successful exec with exit code 0, or verified by ls/cat). Do not report planned steps as completed.",
179
+ ```
180
+
181
+ **File**: `src/server/agent.js` — `WRAP_UP_NOTE` constant
182
+
183
+ ---
184
+
185
+ ## Secondary Observations (Not Fixed)
186
+
187
+ **zap-baseline.sh does not exist in snap ZAP**: This script is part of the ZAP Docker image. The snap installation provides `zaproxy -cmd -quickurl <url> -quickout <file>` instead, which is visible in `zaproxy -help` output. The agent saw this output but never connected it to the need for a baseline scan. This is a model knowledge gap.
188
+
189
+ **Zero-progress detection correctly did not fire**: Runs 2 and 3 had genuinely different `remaining` strings (run 3 claimed partial progress). The detection works as designed; the circumvention was through model hallucination of progress, not a code bug.
190
+
191
+ **failedApproaches from `ok`-status runs are not captured**: Run 1 ended with `status: ok` despite having 10 iterations of failed searches. The `failedApproaches` mechanism only captures failures from `checkpoint_reached` runs. Capturing failures from `ok`-status runs would require the model to include a `failedApproaches` field in its final response — a more significant protocol change left for a future finding.
192
+
193
+ ---
194
+
195
+ ## Files Changed
196
+
197
+ | File | Change |
198
+ |------|--------|
199
+ | `src/server/tools.js` | exec seed tool: `e.stderr \|\| ''` instead of `e.stderr \|\| e.message` |
200
+ | `src/server/agent.js` | Malformed JSON args: inject error tool result instead of silent `{}` |
201
+ | `src/server/agent.js` | Preserve failedApproaches in user message before resetting |
202
+ | `src/server/agent.js` | Strengthen WRAP_UP_NOTE `progress` field description |
@@ -0,0 +1,142 @@
1
+ # Finding 015: Failed Runs Leave Tool History in Context (Context Bloat Death Spiral)
2
+
3
+ **Date:** 2026-03-02
4
+ **Severity:** High — caused 3 consecutive `model_error: Empty choices array` failures; session unusable
5
+ **Status:** Fixed
6
+
7
+ ---
8
+
9
+ ## Observed Session
10
+
11
+ Session `6123209d-ce5a-44d0-be12-29aac58b4cf3`. Model: `nvidia/nemotron-3-nano-30b-a3b:free`. User requested a ZAP security scanning project.
12
+
13
+ | Entry | Trigger | Status | messageCount at failure | toolCalls |
14
+ |-------|---------|--------|------------------------|-----------|
15
+ | 1 | "hi all good?" | ok | — | 0 |
16
+ | 2 | ZAP task (run 1) | checkpoint_reached | — | 10 |
17
+ | 3 | handoff resume (run 2) | checkpoint_reached | — | 10 |
18
+ | 4 | handoff resume (run 3) | model_error (empty choices, iter 7) | 22 | 26 |
19
+ | 5 | "Why I get Model returned an empty response?" | model_error (empty choices, iter 3) | 27 | 2 |
20
+ | 6 | "Why I get Model returned an empty response again?!!" | model_error (empty choices, iter 5) | 37 | 4 |
21
+
22
+ The session ended without producing any result. The user received `'Model returned an empty response.'` three times.
23
+
24
+ ---
25
+
26
+ ## Root Cause 1: Failed runs leave tool call history in session
27
+
28
+ ### What happened
29
+
30
+ The handoff loop strips tool call messages for `checkpoint_reached` runs:
31
+
32
+ ```js
33
+ session.messages.splice(runStartIndex, session.messages.length - runStartIndex - 1);
34
+ ```
35
+
36
+ Runs that end with `model_error` or `format_error` received **no strip**. Every tool call message (assistant+tool pair, nudge injections) from the failed run remained in `session.messages`, with only a synthetic error note appended afterward.
37
+
38
+ Run 3 had 26 tool calls across 7 iterations — approximately 13 messages added to the session. These were preserved verbatim. Each subsequent user turn started with more context than the last.
39
+
40
+ ### Message count growth
41
+
42
+ - Before run 3: ~8 messages (runs 1 and 2 were both checkpoint_reached and stripped correctly)
43
+ - After entry 4 (model_error, no strip): 21 messages + synthetic note = 22
44
+ - After entry 5 (model_error, no strip): 27 messages + synthetic note = 28
45
+ - At entry 6: 37 messages in context
46
+
47
+ The free model returns `choices: []` when the context exceeds what it can handle. Each failure added more context, making the next failure more likely: a **positive feedback death spiral**.
48
+
49
+ ### Fix
50
+
51
+ Apply the same splice that checkpoint runs already use:
52
+
53
+ ```js
54
+ if (finalStatus === 'model_error' || finalStatus === 'format_error') {
55
+ session.messages.splice(runStartIndex, session.messages.length - runStartIndex);
56
+ // then push synthetic error note as before
57
+ }
58
+ ```
59
+
60
+ The strip runs before the synthetic error note is pushed, returning the session to its pre-run state plus one concise note. The JSONL log preserves all tool results for retrospective inspection via `read_session_log`.
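The effect of the strip can be sketched on a toy session (message shapes are simplified stand-ins for the real OpenAI-style messages):

```js
// Toy session: system prompt plus the user's task.
const session = {
  messages: [
    { role: 'system', content: 'prompt' },
    { role: 'user', content: 'ZAP task' },
  ],
};
const runStartIndex = session.messages.length;

// A failed run appends assistant/tool pairs...
session.messages.push(
  { role: 'assistant', tool_calls: [{ id: '1' }] },
  { role: 'tool', tool_call_id: '1', content: '{"status":"error"}' },
);

// ...the fix strips everything the run added, then appends one concise note.
session.messages.splice(runStartIndex, session.messages.length - runStartIndex);
session.messages.push({
  role: 'assistant',
  content: '[System: Previous run failed (model_error): Empty choices array.]',
});
console.log(session.messages.length); // 3: pre-run state plus the note
```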
61
+
62
+ **File**: `src/server/agent.js` — `_runHandleChat`, non-checkpoint break path
63
+
64
+ ---
65
+
66
+ ## Root Cause 2: No detection or escalation for consecutive model_errors
67
+
68
+ ### What happened
69
+
70
+ After two consecutive `model_error: Empty choices array` entries (4 and 5), no protective mechanism fired. The system continued accepting new user messages and spawning new runs indefinitely.
71
+
72
+ Existing protection mechanisms all missed this case:
73
+ - `maxHandoffs` — only applies to `checkpoint_reached` runs
74
+ - `consecutiveFailures` — tracks tool failures within a single run
75
+ - Zero-progress detection — only applies to `checkpoint_reached` runs
76
+
77
+ ### Fix
78
+
79
+ Detect the pattern structurally in `session.messages` before starting each new run: if the last two assistant messages are both synthetic `model_error` notes, the session is in a confirmed failure loop. Escalate to `intervention_required` without running another agent loop.
80
+
81
+ ```js
82
+ function hasConsecutiveModelErrors(messages) {
83
+ const assistantTail = messages.filter(m => m.role === 'assistant').slice(-2);
84
+ return (
85
+ assistantTail.length === 2 &&
86
+ assistantTail.every(
87
+ m =>
88
+ typeof m.content === 'string' &&
89
+ m.content.startsWith('[System: Previous run failed (model_error)')
90
+ )
91
+ );
92
+ }
93
+ ```
94
+
95
+ This uses no additional state: it reads the session history directly, so existing sessions are handled correctly without migration. One failure is allowed (transient errors are real); two consecutive failures mean the session cannot self-recover.
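Applying the check from the fix to two toy histories illustrates both branches:

```js
// The check from the fix, reproduced verbatim.
function hasConsecutiveModelErrors(messages) {
  const assistantTail = messages.filter(m => m.role === 'assistant').slice(-2);
  return (
    assistantTail.length === 2 &&
    assistantTail.every(
      m =>
        typeof m.content === 'string' &&
        m.content.startsWith('[System: Previous run failed (model_error)')
    )
  );
}

const errorNote = {
  role: 'assistant',
  content: '[System: Previous run failed (model_error): Empty choices array.]',
};

// One failure is tolerated (transient errors are real)...
const oneFailure = [{ role: 'user', content: 'ZAP task' }, errorNote];
// ...but a second failed run after the user's follow-up confirms the loop.
const twoFailures = [...oneFailure, { role: 'user', content: 'Why?' }, errorNote];

console.log(hasConsecutiveModelErrors(oneFailure), hasConsecutiveModelErrors(twoFailures)); // false true
```

Note that intervening user messages are ignored by design: the filter looks only at the assistant tail, which is what makes the "fail, user asks why, fail again" sequence detectable.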
96
+
97
+ Combined with Fix 1, consecutive model_errors in this session would have played out as:
98
+ 1. Entry 4 (run 3): model_error → strip → synthetic note. Session back to 9 messages.
99
+ 2. Entry 5 (user "Why?"): run 4 starts with 10 messages. If it still fails → strip → synthetic note. Two model_error notes now in session.
100
+ 3. Entry 6 (user "Why again?!"): `hasConsecutiveModelErrors` fires → `intervention_required` returned immediately. User gets a clear message: start a new session or switch model.
101
+
102
+ **File**: `src/server/agent.js` — `hasConsecutiveModelErrors` function + check at top of handoff loop
103
+
104
+ ---
105
+
106
+ ## Root Cause 3: Empty choices error message provides no actionable guidance
107
+
108
+ ### What happened
109
+
110
+ The `choices.length === 0` path returned:
111
+
112
+ ```
113
+ Model returned an empty response.
114
+ ```
115
+
116
+ When the user asked "why?", the agent — with ZAP tool call context still present — continued ZAP investigation instead of explaining the API failure. The opaque error and the polluted context compounded: the model had no clear signal about what went wrong and no guidance on how to recover.
117
+
118
+ ### Fix
119
+
120
+ Include the context size and recovery guidance in the response:
121
+
122
+ ```js
123
+ response: `Model returned an empty response (${preparedMessages.length} messages in context). This typically happens when the conversation is too long for the model. Try starting a new session or switching to a model with a larger context window.`,
124
+ ```
125
+
126
+ **File**: `src/server/agent.js` — `runAgentLoop`, empty choices early return
127
+
128
+ ---
129
+
130
+ ## Why Fix 1 is Primary
131
+
132
+ Fix 1 is the root fix. With context stripped after each failure, the model operates on a tiny session (~10 messages) on subsequent turns. The free model handles this easily. Fix 2 is a safety net for persistent non-context failures. Fix 3 improves user-facing error messages for the residual cases that slip through.
133
+
134
+ ---
135
+
136
+ ## Files Changed
137
+
138
+ | File | Change |
139
+ |------|--------|
140
+ | `src/server/agent.js` | Strip tool history on `model_error`/`format_error` (same as checkpoint) |
141
+ | `src/server/agent.js` | `hasConsecutiveModelErrors` function + check before each run in handoff loop |
142
+ | `src/server/agent.js` | Include message count in empty choices response |
package/package.json CHANGED
@@ -1,6 +1,6 @@
1
1
  {
2
2
  "name": "@ducci/jarvis",
3
- "version": "1.0.30",
3
+ "version": "1.0.32",
4
4
  "description": "A fully automated agent system that lives on a server.",
5
5
  "main": "./src/index.js",
6
6
  "type": "module",
@@ -18,7 +18,7 @@ Respond with your normal JSON, but add a checkpoint field:
18
18
  "response": "Brief message to the user that the task is still in progress.",
19
19
  "logSummary": "Human-readable summary of what happened in this run.",
20
20
  "checkpoint": {
21
- "progress": "What has been fully completed so far.",
21
+ "progress": "What has been fully completed — only include items confirmed by tool output (e.g., successful exec with exit code 0, or verified by ls/cat). Do not report planned steps as completed.",
22
22
  "remaining": "What still needs to be done to finish the task — as a plain text string, never an array or object.",
23
23
  "failedApproaches": ["Concise description of each approach that was tried and failed, e.g. 'downloading subfinder via curl from GitHub releases — connection reset'. Omit array entries for things that succeeded. Leave as empty array if nothing failed."]
24
24
  }
@@ -69,6 +69,25 @@ async function callModelWithFallback(client, config, messages, tools) {
69
69
  }
70
70
  }
71
71
 
72
+ /**
73
+ * Returns true if the last two assistant messages in the session are both
74
+ * synthetic model_error notes, indicating a confirmed failure loop that cannot
75
+ * self-resolve (e.g. persistent empty choices from context overflow).
76
+ */
77
+ function hasConsecutiveModelErrors(messages) {
78
+ const assistantTail = messages
79
+ .filter(m => m.role === 'assistant')
80
+ .slice(-2);
81
+ return (
82
+ assistantTail.length === 2 &&
83
+ assistantTail.every(
84
+ m =>
85
+ typeof m.content === 'string' &&
86
+ m.content.startsWith('[System: Previous run failed (model_error)')
87
+ )
88
+ );
89
+ }
90
+
72
91
  /**
73
92
  * Runs a single agent loop up to maxIterations.
74
93
  * Returns { iteration, response, logSummary, status, runToolCalls, checkpoint }.
@@ -112,7 +131,7 @@ async function runAgentLoop(client, config, session, prepareMessages) {
112
131
  if (!modelResult.choices || modelResult.choices.length === 0) {
113
132
  return {
114
133
  iteration,
115
- response: 'Model returned an empty response.',
134
+ response: `Model returned an empty response (${preparedMessages.length} messages in context). This typically happens when the conversation is too long for the model. Try starting a new session or switching to a model with a larger context window.`,
116
135
  logSummary: `Model error on iteration ${iteration}: Empty choices array.`,
117
136
  status: 'model_error',
118
137
  runToolCalls,
@@ -143,10 +162,22 @@ async function runAgentLoop(client, config, session, prepareMessages) {
143
162
  for (const toolCall of assistantMessage.tool_calls) {
144
163
  const toolName = toolCall.function.name;
145
164
  let toolArgs;
165
+ let argParseError = null;
146
166
  try {
147
167
  toolArgs = JSON.parse(toolCall.function.arguments || '{}');
148
- } catch {
149
- toolArgs = {};
168
+ } catch (e) {
169
+ argParseError = e;
170
+ }
171
+
172
+ if (argParseError) {
173
+ const errorContent = JSON.stringify({
174
+ status: 'error',
175
+ error: `Tool arguments could not be parsed as JSON: ${argParseError.message}. Ensure arguments are a valid JSON object, e.g. {"key": "value"}.`,
176
+ });
177
+ session.messages.push({ role: 'tool', tool_call_id: toolCall.id, content: errorContent });
178
+ runToolCalls.push({ name: toolName, args: {}, status: 'error', result: errorContent });
179
+ consecutiveFailures++;
180
+ continue;
150
181
  }
151
182
 
152
183
  let result;
@@ -438,8 +469,15 @@ async function _runHandleChat(config, sessionId, userMessage) {
438
469
  session = createSession(systemPromptTemplate);
439
470
  }
440
471
 
472
+ // Preserve accumulated failedApproaches in conversation history before resetting
473
+ // so the model retains knowledge of what failed in the previous batch of handoff runs.
474
+ let userMessageWithContext = userMessage;
475
+ if (session.metadata.failedApproaches && session.metadata.failedApproaches.length > 0) {
476
+ userMessageWithContext += `\n\n[System: The following approaches were tried and failed in previous runs — consider them exhausted:\n${session.metadata.failedApproaches.map((a, i) => `${i + 1}. ${a}`).join('\n')}]`;
477
+ }
478
+
441
479
  // Append user message and reset handoff state
442
- session.messages.push({ role: 'user', content: userMessage });
480
+ session.messages.push({ role: 'user', content: userMessageWithContext });
443
481
  session.metadata.handoffCount = 0;
444
482
  session.metadata.failedApproaches = [];
445
483
 
@@ -463,6 +501,25 @@ async function _runHandleChat(config, sessionId, userMessage) {
463
501
  try {
464
502
  // Handoff loop
465
503
  while (true) {
504
+ // Safety check: if the last two assistant messages are both model_error
505
+ // synthetic notes, we are in a confirmed failure loop. Escalate immediately
506
+ // rather than burning more iterations on a stuck session.
507
+ if (hasConsecutiveModelErrors(session.messages)) {
508
+ finalResponse = 'The model has failed twice in a row. This is likely due to the conversation being too long for the model to process. Please start a new session or switch to a model with a larger context window.';
509
+ finalLogSummary = 'Consecutive model_error detected: session escalated to intervention_required without running another agent loop.';
510
+ finalStatus = 'intervention_required';
511
+ await appendLog(sessionId, {
512
+ iteration: 0,
513
+ model: config.selectedModel,
514
+ userInput: userMessage,
515
+ toolCalls: [],
516
+ response: finalResponse,
517
+ logSummary: finalLogSummary,
518
+ status: 'intervention_required',
519
+ });
520
+ break;
521
+ }
522
+
466
523
  const runStartIndex = session.messages.length;
467
524
  const run = await runAgentLoop(client, config, session, prepareMessages);
468
525
  allToolCalls.push(...run.runToolCalls);
@@ -486,8 +543,14 @@ async function _runHandleChat(config, sessionId, userMessage) {
486
543
  if (run.rawResponse) logEntry.rawResponse = run.rawResponse;
487
544
  await appendLog(sessionId, logEntry);
488
545
 
489
- // Inject synthetic error note so the model has context on the next user turn
546
+ // Inject synthetic error note so the model has context on the next user turn.
547
+ // For failed runs, also strip the tool call history — keeping it would bloat
548
+ // the context and create a positive-feedback death spiral where each failure
549
+ // makes the next one more likely (especially on free models with small context
550
+ // windows). The synthetic note is sufficient context; tool results are preserved
551
+ // in the JSONL log and accessible via read_session_log.
490
552
  if (finalStatus === 'model_error' || finalStatus === 'format_error') {
553
+ session.messages.splice(runStartIndex, session.messages.length - runStartIndex);
491
554
  const errorDetail = run.errorDetail ? ` Error detail: ${JSON.stringify(run.errorDetail)}` : '';
492
555
  session.messages.push({
493
556
  role: 'assistant',
@@ -73,7 +73,7 @@ const SEED_TOOLS = {
73
73
  });
74
74
  return { status: "ok", exitCode: 0, stdout, stderr };
75
75
  } catch (e) {
76
- return { status: "error", exitCode: e.code || 1, stdout: e.stdout || "", stderr: e.stderr || e.message };
76
+ return { status: "error", exitCode: e.code || 1, stdout: e.stdout || "", stderr: e.stderr || "" };
77
77
  }
78
78
  `,
79
79
  },