naisys 1.4.0 → 1.6.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,13 +1,18 @@
  ## NAISYS (Node.js Autonomous Intelligence System)
 
- NAISYS acts as a proxy shell between LLM(s) and a real shell. The goal is to see how far a LLM can
- get into writing a website from scratch as well as work with other LLM agents on the same project. Trying to figure
- out what works and what doesn't when it comes to 'cognitive architectures' for autonomy. NAISYS isn't
- limited to websites, but it seemed like a good place to start.
+ NAISYS allows any LLM you choose to operate a standard Linux shell, given your instructions. You can control how much
+ to spend, the maximum number of tokens to use per session, how long to wait between commands, etc. Between each command
+ NAISYS will wait a few seconds to accept any input you want to put in yourself, in case you want to collaborate with the
+ LLM, give it hints, and/or diagnose the session. Once the LLM reaches the token max you specified for the session it
+ will wrap things up and start a fresh shell for the LLM to continue its work.
 
- Since the LLM has a limited context, NAISYS takes this into account and helps the LLM
- perform 'context friendly' operations. For example reading/writing a file can't use a typical editor like
- vim or nano so point the LLM to use cat to read/write files in a single operation.
+ NAISYS tries to be a minimal wrapper, just helping the LLM operate in the shell 'better' by making commands 'context friendly'. For instance, if a command is long running, NAISYS will interrupt it, show the LLM the current output, and ask the LLM what it wants to
+ do next - wait, kill, or send input. The custom command prompt helps the LLM keep track of its token usage during the session. The 'comment' command helps the LLM think out loud without putting invalid commands into the shell.
+
+ Some use cases are building websites, diagnosing a system for security concerns, mapping out the topology of the local
+ network, learning and performing arbitrary tasks, or just plain exploring the limits of autonomy. NAISYS has a built-in
+ system for inter-agent communication. You can manually start up multiple instances of NAISYS with different roles, or
+ you can allow agents to start their own sub-agents on demand with instructions defined by the LLM itself!
 
  [NPM](https://www.npmjs.com/package/naisys) | [Website](https://naisys.org) | [Discord](https://discord.gg/JBUPWSbaEt) | [Demo Video](https://www.youtube.com/watch?v=Ttya3ixjumo)
 
@@ -73,7 +78,7 @@ dreamModel: claude3opus
 
  # The model to use for llmynx, pre-processing websites to fit into a smaller context (use a cheaper model)
  # defaults to the shellModel if omitted
- webModel: gemini-pro
+ webModel: claude3haiku
 
  # The model used by the 'genimg' command. If not defined then the genimg command is not available to the LLM
  # Valid values: dalle2-256, dalle2-512, dalle2-1024, dalle3-1024, dalle3-1024-HD
@@ -112,6 +117,11 @@ spendLimitDollars: 2.00
  # Auto: All commands are run through a separate LLM instance that will check to see if the command is safe
  commandProtection: "none"
 
+ # The max number of subagents allowed to be started and managed. Leave out to disable.
+ # Costs by the subagent are applied to the host agent's spend limit
+ # Careful: Sub-agents can be chatty, slowing down progress.
+ subagentMax: 0
+
  # Run these commands on session start, in the example below the agent will see how to use mail and a list of other agents
  initialCommands:
  - llmail users
@@ -169,7 +179,8 @@ initialCommands:
  - NAISYS apps
  - `llmail` - A context friendly 'mail system' used for agent to agent communication
  - `llmynx` - A context friendly wrapper around the lynx browser that can use a separate LLM to reduce the size of a large webpage into something that can fit into the LLM's context
- - `genimg "<description>" <filepath>` - Generates an image with the given description, save at the specified path
+ - `genimg "<description>" <filepath>` - Generates an image with the given description, saved at the specified fully qualified path
+ - `subagent` - A way for LLMs to start/stop their own sub-agents, communicating with each other via `llmail`. Set `subagentMax` in the agent config to enable.
 
  ## Running NAISYS from Source
 
@@ -188,16 +199,28 @@ initialCommands:
  - To use NAISYS on Windows you need to run it locally from source (or from within WSL)
  - Use the above instructions to install locally, and then continue with the instructions below
  - Install WSL (Windows Subsystem for Linux)
+ - Install a Linux distribution; Ubuntu can easily be installed from the Microsoft Store
  - The `NAISYS_FOLDER` and `WEBSITE_FOLDER` should be set to the WSL path
  - So `C:\var\naisys` should be `/mnt/c/var/naisys` in the `.env` file
- - If you want to use NAISYS for a website
- - Install a local web server, for example [XAMPP](https://www.apachefriends.org/) on Windows
- - Start the server and put the URL in the `.env` file
+
+ #### Notes for macOS users
+
+ - The llmynx browser requires `timeout` and `lynx`. Run these commands to install them:
+ - `brew install coreutils`
+ - `brew install lynx`
+
+ #### Using NAISYS for a website
+
+ - Many frameworks come with their own dev server
+ - PHP for example can start a server with `php -S localhost:8000 -d display_errors=On -d error_reporting=E_ALL`
+ - Start the server and put the URL in the `.env` file
 
  ## Changelog
 
+ - 1.6: Support for long running shell commands and full screen terminal output
+ - 1.5: Allow agents to start their own parallel `subagents`
  - 1.4: `genimg` command for generating images
  - 1.3: Post-session 'dreaming' as well as a mail 'blackout' period
  - 1.2: Created stand-in shell commands for custom Naisys commands
  - 1.1: Added command protection settings to prevent unwanted writes
- - 1.0: Initial release
+ - 1.0: Initial release
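For reference, the settings shown in the README hunks above can be assembled into a single agent config. This is an illustrative sketch only, not a complete schema; field names are taken from the snippets in this diff, and all values are example placeholders.

```yaml
# Hypothetical agent config assembled from settings shown in this diff
shellModel: claude3opus       # model that operates the shell
webModel: claude3haiku        # used by llmynx to shrink webpages; defaults to shellModel
tokenMax: 5000                # per-session token budget before a reset is suggested
spendLimitDollars: 2.00
commandProtection: "none"
subagentMax: 3                # omit to disable sub-agents
initialCommands:
  - llmail users
```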
package/bin/naisys CHANGED
@@ -2,18 +2,21 @@
 
  # Make sure to enable this script for execution with `chmod +x naisys`
 
+ # Resolves the location of naisys from the bin directory
+ SCRIPT=$(readlink -f "$0" || echo "$0")
+ SCRIPT_DIR=$(dirname "$SCRIPT")/..
+
  # Check if an argument is provided
  if [ $# -eq 0 ]; then
+   # get version from package.json
+   VERSION=$(node -e "console.log(require('${SCRIPT_DIR}/package.json').version)")
    echo "NAISYS: Node.js Autonomous Intelligence System"
+   echo "  Version: $VERSION"
    echo "  Usage: naisys <path to agent config yaml, or directory>"
    echo "  Note: If a folder is passed then all agents will be started in a tmux session"
    exit 1
  fi
 
- # Resolves the location of naisys from the bin directory
- SCRIPT=$(readlink -f "$0" || echo "$0")
- SCRIPT_DIR=$(dirname "$SCRIPT")/..
-
  # if path is a yaml file then start a single agent
  if [ -f "$1" ]; then
    if [[ "$1" == *".yaml" ]]; then
@@ -0,0 +1,3 @@
+ #!/bin/bash
+
+ echo "'trimsession' cannot be used with other commands on the same prompt."
@@ -1,12 +1,13 @@
  import chalk from "chalk";
- import * as genimg from "../apps/genimg.js";
- import * as llmail from "../apps/llmail.js";
- import * as llmynx from "../apps/llmynx.js";
  import * as config from "../config.js";
+ import * as genimg from "../features/genimg.js";
+ import * as llmail from "../features/llmail.js";
+ import * as llmynx from "../features/llmynx.js";
+ import * as subagent from "../features/subagent.js";
  import * as contextManager from "../llm/contextManager.js";
- import { ContentSource } from "../llm/contextManager.js";
  import * as costTracker from "../llm/costTracker.js";
  import * as dreamMaker from "../llm/dreamMaker.js";
+ import { ContentSource } from "../llm/llmDtos.js";
  import * as inputMode from "../utils/inputMode.js";
  import { InputMode } from "../utils/inputMode.js";
  import * as output from "../utils/output.js";
@@ -65,7 +66,22 @@ export async function processCommand(prompt, consoleInput) {
      await contextManager.append("Comment noted. Try running commands now to achieve your goal.");
      break;
    }
+   case "trimsession": {
+     if (!config.trimSessionEnabled) {
+       throw 'The "trimsession" command is not enabled in this environment.';
+     }
+     const trimSummary = contextManager.trim(cmdArgs);
+     await contextManager.append(trimSummary);
+     break;
+   }
    case "endsession": {
+     if (!config.endSessionEnabled) {
+       throw 'The "endsession" command is not enabled in this environment.';
+     }
+     if (shellCommand.isShellSuspended()) {
+       await contextManager.append("Session cannot be ended while a shell command is active.");
+       break;
+     }
      // Don't need to check end line as this is the last command in the context, just read to the end
      const endSessionNotes = utilities.trimChars(cmdArgs, '"');
      if (!endSessionNotes) {
@@ -105,8 +121,7 @@ export async function processCommand(prompt, consoleInput) {
    };
  }
  case "cost": {
-   const totalCost = await costTracker.getTotalCosts();
-   output.comment(`Total cost so far $${totalCost.toFixed(2)} of $${config.agent.spendLimitDollars} limit`);
+   await costTracker.printCosts();
    break;
  }
  case "llmynx": {
  case "llmynx": {
@@ -132,20 +147,23 @@ export async function processCommand(prompt, consoleInput) {
132
147
  break;
133
148
  }
134
149
  case "context":
135
- contextManager.printContext();
150
+ output.comment("#####################");
151
+ output.comment(contextManager.printContext());
152
+ output.comment("#####################");
136
153
  break;
154
+ case "subagent": {
155
+ const subagentResponse = await subagent.handleCommand(cmdArgs);
156
+ await contextManager.append(subagentResponse);
157
+ break;
158
+ }
137
159
  default: {
138
- const shellResponse = await shellCommand.handleCommand(input);
139
- if (shellResponse.hasErrors && nextInput) {
140
- await output.errorAndLog(`Error detected processing shell command:`);
141
- processNextLLMpromptBlock = false;
142
- }
143
- nextCommandAction = shellResponse.terminate
160
+ const exitApp = await shellCommand.handleCommand(input);
161
+ nextCommandAction = exitApp
144
162
  ? NextCommandAction.ExitApplication
145
163
  : NextCommandAction.Continue;
146
164
  }
147
- }
148
- }
165
+ } // End switch
166
+ } // End loop processing LLM response
149
167
  // display unprocessed lines to aid in debugging
150
168
  if (consoleInput.trim()) {
151
169
  await output.errorAndLog(`Unprocessed LLM response:\n${consoleInput}`);
@@ -205,8 +223,18 @@ async function splitMultipleInputCommands(nextInput) {
    }
  }
  // If the LLM forgets the quote on the comment, treat it as a single line comment
+ // Not something we want to use for multi-line commands like llmail and subagent
+ else if (newLinePos > 0 &&
+   (nextInput.startsWith("comment ") ||
+     nextInput.startsWith("genimg ") ||
+     nextInput.startsWith("trimsession "))) {
+   input = nextInput.slice(0, newLinePos);
+   nextInput = nextInput.slice(newLinePos).trim();
+ }
+ // If shell is suspended, the process can kill/wait the shell, and may run some commands after
  else if (newLinePos > 0 &&
-   (nextInput.startsWith("comment ") || nextInput.startsWith("genimg "))) {
+   shellCommand.isShellSuspended() &&
+   (nextInput.startsWith("kill") || nextInput.startsWith("wait"))) {
    input = nextInput.slice(0, newLinePos);
    nextInput = nextInput.slice(newLinePos).trim();
  }
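The splitting rule in this hunk can be condensed into a small standalone function. This is a sketch of the logic only (function name and return shape are illustrative, not the package's API): single-line commands like `comment`, `genimg`, and `trimsession`, or `kill`/`wait` while the shell is suspended, consume only the first line of the LLM response; everything else passes through whole.

```javascript
// Sketch of splitMultipleInputCommands' first-line rule
function splitFirstLineCommand(nextInput, shellSuspended) {
  const newLinePos = nextInput.indexOf("\n");
  const singleLine =
    nextInput.startsWith("comment ") ||
    nextInput.startsWith("genimg ") ||
    nextInput.startsWith("trimsession ");
  const suspendControl =
    shellSuspended &&
    (nextInput.startsWith("kill") || nextInput.startsWith("wait"));
  if (newLinePos > 0 && (singleLine || suspendControl)) {
    return {
      input: nextInput.slice(0, newLinePos),   // first line only
      rest: nextInput.slice(newLinePos).trim(), // remaining lines stay queued
    };
  }
  // Otherwise the whole block is treated as one input
  return { input: nextInput, rest: "" };
}

console.log(splitFirstLineCommand("comment thinking\nls -la", false).input);
// → comment thinking
```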
@@ -1,28 +1,31 @@
  import chalk from "chalk";
  import * as readline from "readline";
- import * as llmail from "../apps/llmail.js";
- import * as llmynx from "../apps/llmynx.js";
  import * as config from "../config.js";
+ import * as llmail from "../features/llmail.js";
+ import * as llmynx from "../features/llmynx.js";
+ import * as subagent from "../features/subagent.js";
+ import * as workspaces from "../features/workspaces.js";
  import * as contextManager from "../llm/contextManager.js";
- import { ContentSource } from "../llm/contextManager.js";
  import * as dreamMaker from "../llm/dreamMaker.js";
- import { LlmRole } from "../llm/llmDtos.js";
+ import { ContentSource, LlmRole } from "../llm/llmDtos.js";
  import * as llmService from "../llm/llmService.js";
+ import { systemMessage } from "../llm/systemMessage.js";
  import * as inputMode from "../utils/inputMode.js";
  import { InputMode } from "../utils/inputMode.js";
  import * as logService from "../utils/logService.js";
  import * as output from "../utils/output.js";
+ import { OutputColor } from "../utils/output.js";
  import * as utilities from "../utils/utilities.js";
  import * as commandHandler from "./commandHandler.js";
  import { NextCommandAction } from "./commandHandler.js";
  import * as promptBuilder from "./promptBuilder.js";
+ import * as shellCommand from "./shellCommand.js";
  const maxErrorCount = 5;
  export async function run() {
    // Show Agent Config except the agent prompt
    await output.commentAndLog(`Agent configured to use ${config.agent.shellModel} model`);
    // Show System Message
    await output.commentAndLog("System Message:");
-   const systemMessage = contextManager.getSystemMessage();
    output.write(systemMessage);
    await logService.write({
      role: LlmRole.System,
@@ -31,6 +34,7 @@ export async function run() {
  });
  let nextCommandAction = NextCommandAction.Continue;
  let llmErrorCount = 0;
+ let nextPromptIndex = 0;
  while (nextCommandAction != NextCommandAction.ExitApplication) {
    inputMode.toggle(InputMode.LLM);
    await output.commentAndLog("Starting Context:");
@@ -40,30 +44,37 @@ export async function run() {
    await contextManager.append(latestDream);
  }
  for (const initialCommand of config.agent.initialCommands) {
-   const prompt = await promptBuilder.getPrompt(0, false);
-   await contextManager.append(prompt, ContentSource.ConsolePrompt);
+   let prompt = await promptBuilder.getPrompt(0, false);
+   prompt = setPromptIndex(prompt, ++nextPromptIndex);
+   await contextManager.append(prompt, ContentSource.ConsolePrompt, nextPromptIndex);
    await commandHandler.processCommand(prompt, config.resolveConfigVars(initialCommand));
  }
  inputMode.toggle(InputMode.Debug);
  let pauseSeconds = config.agent.debugPauseSeconds;
  let wakeOnMessage = config.agent.wakeOnMessage;
  while (nextCommandAction == NextCommandAction.Continue) {
-   const prompt = await promptBuilder.getPrompt(pauseSeconds, wakeOnMessage);
+   if (shellCommand.isShellSuspended()) {
+     await contextManager.append(`Command still running. Enter 'wait' to continue waiting. 'kill' to terminate. Other input will be sent to the process.`, ContentSource.Console);
+   }
+   let prompt = await promptBuilder.getPrompt(pauseSeconds, wakeOnMessage);
    let consoleInput = "";
    // Debug command prompt
    if (inputMode.current === InputMode.Debug) {
+     subagent.unreadContextSummary();
      consoleInput = await promptBuilder.getInput(`${prompt}`, pauseSeconds, wakeOnMessage);
    }
    // LLM command prompt
    else if (inputMode.current === InputMode.LLM) {
+     prompt = setPromptIndex(prompt, ++nextPromptIndex);
      const workingMsg = prompt +
-       chalk[output.OutputColor.loading](`LLM (${config.agent.shellModel}) Working...`);
+       chalk[OutputColor.loading](`LLM (${config.agent.shellModel}) Working...`);
      try {
        await checkNewMailNotification();
        await checkContextLimitWarning();
-       await contextManager.append(prompt, ContentSource.ConsolePrompt);
+       await workspaces.displayActive();
+       await contextManager.append(prompt, ContentSource.ConsolePrompt, nextPromptIndex);
        process.stdout.write(workingMsg);
-       consoleInput = await llmService.query(config.agent.shellModel, contextManager.getSystemMessage(), contextManager.messages, "console");
+       consoleInput = await llmService.query(config.agent.shellModel, systemMessage, contextManager.getCombinedMessages(), "console");
        clearPromptMessage(workingMsg);
      }
      catch (e) {
@@ -101,6 +112,7 @@ export async function run() {
      llmynx.clear();
      contextManager.clear();
      nextCommandAction = NextCommandAction.Continue;
+     nextPromptIndex = 0;
    }
  }
  }
@@ -178,7 +190,7 @@ async function checkNewMailNotification() {
    for (const unreadThread of unreadThreads) {
      await llmail.markAsRead(unreadThread.threadId);
    }
-   mailBlackoutCountdown = config.mailBlackoutCycles;
+   mailBlackoutCountdown = config.agent.mailBlackoutCycles || 0;
  }
  else if (llmail.simpleMode) {
    await contextManager.append(`You have new mail, but not enough context to read them.\n` +
@@ -196,11 +208,34 @@ async function checkContextLimitWarning() {
  const tokenCount = contextManager.getTokenCount();
  const tokenMax = config.agent.tokenMax;
  if (tokenCount > tokenMax) {
-   await contextManager.append(`The token limit for this session has been exceeded.
-   Use \`endsession <note>\` to clear the console and reset the session.
+   let tokenNote = "";
+   if (config.endSessionEnabled) {
+     tokenNote += `\nUse 'endsession <note>' to clear the console and reset the session.
    The note should help you find your bearings in the next session.
-   The note should contain your next goal, and important things should you remember.
-   Try to keep the note around 400 tokens.`, ContentSource.Console);
+   The note should contain your next goal, and important things you should remember.`;
+   }
+   if (config.trimSessionEnabled) {
+     tokenNote += `\nUse 'trimsession' to reduce the size of the session.
+   Use comments to remember important things from trimmed prompts.`;
+   }
+   await contextManager.append(`The token limit for this session has been exceeded.${tokenNote}`, ContentSource.Console);
+ }
+ }
+ /** Insert prompt index [Index: 1] before the $.
+  * Insert at the end of the prompt so that 'prompt splitting' still works in the command handler
+  */
+ function setPromptIndex(prompt, index) {
+   if (!config.trimSessionEnabled) {
+     return prompt;
+   }
+   let newPrompt = prompt;
+   const endPromptPos = prompt.lastIndexOf("$");
+   if (endPromptPos != -1) {
+     newPrompt =
+       prompt.slice(0, endPromptPos) +
+       ` [Index: ${index}]` +
+       prompt.slice(endPromptPos);
  }
+   return newPrompt;
  }
  //# sourceMappingURL=commandLoop.js.map
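The index-tagging in `setPromptIndex` above reduces to a small string splice: the `[Index: n]` marker goes just before the trailing `$` so the command handler's prompt splitting keeps working. A self-contained sketch (the prompt string is illustrative, and the `trimSessionEnabled` guard is dropped for brevity):

```javascript
// Insert " [Index: n]" immediately before the last "$" in the prompt
function setPromptIndex(prompt, index) {
  const endPromptPos = prompt.lastIndexOf("$");
  if (endPromptPos === -1) {
    return prompt; // no prompt terminator found, leave unchanged
  }
  return (
    prompt.slice(0, endPromptPos) +
    ` [Index: ${index}]` +
    prompt.slice(endPromptPos)
  );
}

console.log(setPromptIndex("bob@naisys:/home [Tokens: 500/5000]$", 3));
// → bob@naisys:/home [Tokens: 500/5000] [Index: 3]$
```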
@@ -1,34 +1,42 @@
  import chalk from "chalk";
  import * as events from "events";
  import * as readline from "readline";
- import * as llmail from "../apps/llmail.js";
  import * as config from "../config.js";
+ import * as llmail from "../features/llmail.js";
  import * as contextManager from "../llm/contextManager.js";
  import * as inputMode from "../utils/inputMode.js";
  import { InputMode } from "../utils/inputMode.js";
  import * as output from "../utils/output.js";
  import * as shellWrapper from "./shellWrapper.js";
- // When actual output is entered by the user we want to cancel any auto-continue timers and/or wake on message
- // We don't want to cancel if the user is entering a chords like ctrl+b then down arrow, when using tmux
- // This is why we can't put the event listener on the standard process.stdin/keypress event.
- // There is no 'data entered' output event so this monkey patch does that
+ /**
+  * When actual output is entered by the user we want to cancel any auto-continue timers and/or wake on message.
+  * We don't want to cancel if the user is entering chords like ctrl+b then down arrow, when using tmux.
+  * This is why we can't put the event listener on the standard process.stdin/keypress event.
+  * There is no 'data entered' output event so this monkey patch provides one
+  */
+ const _writeEventEmitter = new events.EventEmitter();
  const _writeEventName = "write";
- const _outputEmitter = new events.EventEmitter();
  const _originalWrite = process.stdout.write.bind(process.stdout);
  process.stdout.write = (...args) => {
-   _outputEmitter.emit(_writeEventName, false, ...args);
+   _writeEventEmitter.emit(_writeEventName, false, ...args);
+   // eslint-disable-next-line @typescript-eslint/no-explicit-any
    return _originalWrite.apply(process.stdout, args);
  };
- const _readlineInterface = readline.createInterface({
+ /**
+  * Tried to make this local and have it cleaned up with close() after using it, but
+  * due to the terminal settings below there are bugs with both terminal true and false.
+  * pause() actually is nice in that it queues up the input, and doesn't allow the user
+  * to enter anything while the LLM is working
+  */
+ const readlineInterface = readline.createInterface({
    input: process.stdin,
    output: process.stdout,
+   // With this set to true, after an abort the second input will not be processed, see:
+   // https://gist.github.com/swax/964a2488494048c8e03d05493d9370f8
+   // With this set to false, the stdout.write event above will not be triggered
+   terminal: true,
  });
- // Happens when ctrl+c is pressed
- let readlineInterfaceClosed = false;
- _readlineInterface.on("close", () => {
-   readlineInterfaceClosed = true;
-   output.error("Readline interface closed");
- });
+ readlineInterface.pause();
  export async function getPrompt(pauseSeconds, wakeOnMessage) {
    const promptSuffix = inputMode.current == InputMode.Debug ? "#" : "$";
    const tokenMax = config.agent.tokenMax;
@@ -59,27 +67,24 @@ export function getInput(commandPrompt, pauseSeconds, wakeOnMessage) {
  let timeout;
  let interval;
  let timeoutCancelled = false;
- if (readlineInterfaceClosed) {
-   output.error("Hanging because readline interface is closed.");
-   return;
+ function clearTimers() {
+   timeoutCancelled = true;
+   _writeEventEmitter.off(_writeEventName, cancelWaitingForUserInput);
+   clearTimeout(timeout);
+   clearInterval(interval);
  }
  /** Cancels waiting for user input */
- function onStdinWrite_cancelTimers(questionAborted, buffer) {
+ const cancelWaitingForUserInput = (questionAborted, buffer) => {
    // Don't allow console escape commands like \x1B[1G to cancel the timeout
    if (timeoutCancelled || (buffer && !/^[a-zA-Z0-9 ]+$/.test(buffer))) {
      return;
    }
-   timeoutCancelled = true;
-   _outputEmitter.off(_writeEventName, onStdinWrite_cancelTimers);
-   clearTimeout(timeout);
-   clearInterval(interval);
-   timeout = undefined;
-   interval = undefined;
+   clearTimers();
    if (questionAborted) {
      return;
    }
-   // Else timeout interrupted by user input, clear out the timeout information from the prompt
-   // to prevent the user from thinking the timeout still applies
+   // Else timeout interrupted by user input
+   // Clear out the timeout information from the prompt to prevent the user from thinking the timeout still applies
    let pausePos = commandPrompt.indexOf("[Paused:");
    pausePos =
      pausePos == -1 ? commandPrompt.indexOf("[WakeOnMsg]") : pausePos;
@@ -91,21 +96,22 @@ export function getInput(commandPrompt, pauseSeconds, wakeOnMessage) {
    process.stdout.write("-".repeat(charsBack - 3));
    readline.moveCursor(process.stdout, 3, 0);
  }
- }
- _readlineInterface.question(chalk.greenBright(commandPrompt), { signal: questionController.signal }, (answer) => {
+ };
+ readlineInterface.question(chalk.greenBright(commandPrompt), { signal: questionController.signal }, (answer) => {
+   clearTimers();
+   readlineInterface.pause();
    resolve(answer);
  });
  // If user starts typing in prompt, cancel any auto timeouts or wake on msg
- _outputEmitter.on(_writeEventName, onStdinWrite_cancelTimers);
- const abortQuestion = () => {
-   onStdinWrite_cancelTimers(true);
+ _writeEventEmitter.on(_writeEventName, cancelWaitingForUserInput);
+ function abortQuestion() {
+   cancelWaitingForUserInput(true);
    questionController.abort();
+   readlineInterface.pause();
    resolve("");
- };
+ }
  if (pauseSeconds) {
-   timeout = setTimeout(() => {
-     abortQuestion();
-   }, pauseSeconds * 1000);
+   timeout = setTimeout(abortQuestion, pauseSeconds * 1000);
  }
  if (wakeOnMessage) {
    // Break timeout if new message is received
@@ -132,7 +138,8 @@ export function getInput(commandPrompt, pauseSeconds, wakeOnMessage) {
  }
  export function getCommandConfirmation() {
    return new Promise((resolve) => {
-     _readlineInterface.question(chalk.greenBright("Allow command to run? [y/n] "), (answer) => {
+     readlineInterface.question(chalk.greenBright("Allow command to run? [y/n] "), (answer) => {
+       readlineInterface.pause();
        resolve(answer);
      });
    });
@@ -4,54 +4,55 @@ import * as inputMode from "../utils/inputMode.js";
  import { InputMode } from "../utils/inputMode.js";
  import * as utilities from "../utils/utilities.js";
  import * as shellWrapper from "./shellWrapper.js";
+ export const isShellSuspended = () => shellWrapper.isShellSuspended();
  export async function handleCommand(input) {
    const cmdParams = input.split(" ");
-   const response = {
-     hasErrors: true,
-   };
-   // Route user to context friendly edit commands that can read/write the entire file in one go
-   // Having EOF in quotes is important as it prevents the shell from replacing $variables with bash values
-   if (["nano", "vi", "vim"].includes(cmdParams[0])) {
-     await contextManager.append(`${cmdParams[0]} not supported. Use \`cat\` to read a file and \`cat > filename << 'EOF'\` to write a file`);
-     return response;
-   }
-   if (cmdParams[0] == "lynx" && cmdParams[1] != "--dump") {
-     await contextManager.append(`Interactive mode with lynx is not supported. Use --dump with lynx to view a website`);
-     return response;
-   }
-   if (cmdParams[0] == "exit") {
-     if (inputMode.current == InputMode.LLM) {
-       await contextManager.append("Use 'endsession' to end the session and clear the console log.");
-     }
-     else if (inputMode.current == InputMode.Debug) {
-       await shellWrapper.terminate();
-       response.terminate = true;
-     }
-     return response;
-   }
-   const output = await shellWrapper.executeCommand(input);
-   if (output.value) {
-     let text = output.value;
-     let outputLimitExceeded = false;
-     const tokenCount = utilities.getTokenCount(text);
-     // Prevent too much output from blowing up the context
-     if (tokenCount > config.shellOutputTokenMax) {
-       outputLimitExceeded = true;
-       const trimLength = (text.length * config.shellOutputTokenMax) / tokenCount;
-       text =
-         text.slice(0, trimLength / 2) +
-         "\n\n...\n\n" +
-         text.slice(-trimLength / 2);
-     }
-     await contextManager.append(text);
-     if (outputLimitExceeded) {
-       await contextManager.append(`\nThe shell command generated too much output (${tokenCount} tokens). Only 2,000 tokens worth are shown above.`);
-     }
-     if (text.endsWith(": command not found")) {
-       await contextManager.append("Please enter a valid Linux or NAISYS command after the prompt. Use the 'comment' command for thoughts.");
-     }
-   }
-   response.hasErrors = output.hasErrors;
-   return response;
+   let response;
+   if (!isShellSuspended()) {
+     if (["nano", "vi", "vim"].includes(cmdParams[0])) {
+       // Route user to context friendly edit commands that can read/write the entire file in one go
+       // Having EOF in quotes is important as it prevents the shell from replacing $variables with bash values
+       throw `${cmdParams[0]} not supported. Use \`cat\` to read a file and \`cat > filename << 'EOF'\` to write a file`;
+     }
+     if (cmdParams[0] == "lynx" && cmdParams[1] != "--dump") {
+       throw `Interactive mode with lynx is not supported. Use --dump with lynx to view a website`;
+     }
+     if (cmdParams[0] == "exit") {
+       if (inputMode.current == InputMode.LLM) {
+         throw "Use 'endsession' to end the session and clear the console log.";
+       }
+       // Only the debug user is allowed to exit the shell
+       else if (inputMode.current == InputMode.Debug) {
+         await shellWrapper.terminate();
+         return true;
+       }
+     }
+     response = await shellWrapper.executeCommand(input);
+   }
+   // Else shell is suspended, continue the running command
+   else {
+     response = await shellWrapper.continueCommand(input);
+   }
+   let outputLimitExceeded = false;
+   const tokenCount = utilities.getTokenCount(response);
+   // Prevent too much output from blowing up the context
+   if (tokenCount > config.shellCommand.outputTokenMax) {
+     outputLimitExceeded = true;
+     const trimLength = (response.length * config.shellCommand.outputTokenMax) / tokenCount;
+     response =
+       response.slice(0, trimLength / 2) +
+       "\n\n...\n\n" +
+       response.slice(-trimLength / 2);
+   }
+   if (outputLimitExceeded) {
+     response += `\nThe shell command generated too much output (${tokenCount} tokens). Only 2,000 tokens worth are shown above.`;
+   }
+   if (response.endsWith(": command not found")) {
+     response +=
+       "\nPlease enter a valid Linux or NAISYS command after the prompt. Use the 'comment' command for thoughts.";
+   }
+   // TODO: move this into the command handler to remove the context manager dependency
+   await contextManager.append(response);
+   return false;
  }
  //# sourceMappingURL=shellCommand.js.map
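The head-and-tail trim applied to oversized shell output above is worth isolating: when output exceeds the token budget, the start and end are kept and the middle elided, which preserves both the command echo and the final error or result. A self-contained sketch; `getTokenCount` is approximated here as length / 4, where the real code uses a tokenizer via `utilities.getTokenCount`.

```javascript
// Trim oversized output to roughly outputTokenMax tokens, keeping head and tail
function trimOutput(text, outputTokenMax) {
  const tokenCount = Math.ceil(text.length / 4); // stand-in for utilities.getTokenCount
  if (tokenCount <= outputTokenMax) {
    return text; // under budget, pass through untouched
  }
  // Scale the character budget to the token budget, split it half head / half tail
  const trimLength = (text.length * outputTokenMax) / tokenCount;
  return (
    text.slice(0, trimLength / 2) +
    "\n\n...\n\n" +
    text.slice(-trimLength / 2)
  );
}
```

Keeping the tail matters because shells typically print the most useful diagnostic (exit error, "command not found") last.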