naisys 1.0.2 → 1.0.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,15 +1,15 @@
  ## NAISYS (Node.js Autonomous Intelligence System)
 
- NAISYS is acts as a proxy shell between a LLM and a real shell. The goal is to see how far a LLM can
+ NAISYS acts as a proxy shell between LLM(s) and a real shell. The goal is to see how far a LLM can
  get into writing a website from scratch as well as work with other LLM agents on the same project. Trying to figure
- out what works and what doesn't when it comes to 'cognitive architectures'. NAISYS isn't
+ out what works and what doesn't when it comes to 'cognitive architectures' for autonomy. NAISYS isn't
  limited to websites, but it seemed like a good place to start.
 
- Since the LLM has a limited context, NAISYS should take this into account and help the LLM
+ Since the LLM has a limited context, NAISYS takes this into account and helps the LLM
  perform 'context friendly' operations. For example reading/writing a file can't use a typical editor like
  vim or nano so point the LLM to use cat to read/write files in a single operation.
 
- [NPM](https://www.npmjs.com/package/naisys) | [Website](https://naisys.org) | [Discord](https://discord.gg/JBUPWSbaEt)
+ [NPM](https://www.npmjs.com/package/naisys) | [Website](https://naisys.org) | [Discord](https://discord.gg/JBUPWSbaEt) | [Demo Video](https://www.youtube.com/watch?v=Ttya3ixjumo)
 
  #### Node.js is used to create a simple proxy shell environment for the LLM that
 
@@ -76,6 +76,11 @@ agentPrompt: |
  You can use PHP as a way to share layout across pages and reduce duplication.
  Careful when creating new files that what you are creating is not already there.
 
+ # The number of tokens you want to limit a session to, independent of the LLM token max itself
+ # A lower max relies more on the LLM ending the session with good enough notes to not get lost when the session restarts
+ # A higher max allows the LLM to do more without losing track, but is more expensive
+ tokenMax: 5000
+
  # The number of seconds to pause after each console interaction for debugging and rate limiting
  # No value or zero means wait indefinitely (debug driven)
  debugPauseSeconds: 5
@@ -94,14 +99,14 @@ spendLimitDollars: 2.00
  - If a yaml file is passed, naisys will start a single agent
  - If a directory is passed, naisys will start a tmux session with the screen split for each agent
 
- ### Creating a persistent agent run website (on Digital Ocean for example)
+ #### Creating a persistent agent run website (on Digital Ocean for example)
 
  - Create new VM using the [LAMP stack droplet template](https://marketplace.digitalocean.com/apps/lamp)
  - Login to the droplet using the web console
  - Run `apt install npm`
  - Install `nvm` using the `curl` url from these [instructions](https://github.com/nvm-sh/nvm?tab=readme-ov-file#installing-and-updating)
  - Run `nvm install/use 20` to set node version to 20
- - Follow the general install instructions above
+ - Follow the general install instructions above
 
  ## Using NAISYS
 
@@ -128,12 +133,12 @@ spendLimitDollars: 2.00
  - NAISYS tries to be light, acting as a helpful proxy between the LLM and a real shell, most commands should pass right though to the shell
  - Debug Commands
  - `cost` - Prints the current total LLM cost
- - `context` - Prints the current context
+ - `context` - Prints the current context
  - `exit` - Exits NAISYS. If the LLM tries to use `exit`, it is directed to use `endsession` instead
  - `talk` - Communicate with the local agent to give hints or ask questions (the agent itself does not know about talk and is directed to use `comment` or `llmail` for communication)
  - Special Commands usable by the LLM as well as by the debug prompt
- - `comment <notes>` - The LLM is directed to use this for 'thinking out loud' which avoids 'invalid command' errors
- - `endsession <notes>` - Clear the context and start a new session.
+ - `comment "<note>"` - The LLM is directed to use this for 'thinking out loud' which avoids 'invalid command' errors
+ - `endsession "<note>"` - Clear the context and start a new session.
  - The LLM is directed to track it's context size and to end the session with a note before running over the context limit
  - `pause <seconds>` - Can be used by the debug agent or the LLM to pause execution indefinitely, or until a new message is received from another agent, or for a set number of seconds
  - NAISYS apps
@@ -154,6 +159,8 @@ spendLimitDollars: 2.00
 
  #### Notes for Windows users
 
+ - To use NAISYS on Windows you need to run it locally from source (or from within WSL)
+ - Use the above instructions to install locally, and then continue with the instructions below
  - Install WSL (Windows Subsystem for Linux)
  - The `NAISYS_FOLDER` and `WEBSITE_FOLDER` should be set to the WSL path
  - So `C:\var\naisys` should be `/mnt/c/var/naisys` in the `.env` file
@@ -7,7 +7,7 @@ import { naisysToHostPath } from "../utils/utilities.js";
  const _dbFilePath = naisysToHostPath(`${config.naisysFolder}/lib/llmail.db`);
  let _myUserId = -1;
  // Implement maxes so that LLMs actively manage threads, archive, and create new ones
- const _threadTokenMax = config.tokenMax / 2; // So 4000, would be 2000 thread max
+ const _threadTokenMax = config.agent.tokenMax / 2; // So 4000, would be 2000 thread max
  const _messageTokenMax = _threadTokenMax / 5; // Given the above a 400 token max, and 5 big messages per thread
  /** The 'non-simple' version of this is a thread first mail system. Where agents can create threads, add users, and reply to threads, etc..
  * The problem with this was the agents were too chatty with so many mail commands, wasting context replying, reading threads, etc..
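The budgeting arithmetic behind these constants works out as follows; a quick sketch assuming an agent `tokenMax` of 4000, the figure used in the diff's own comment:

```javascript
// Sketch of the llmail token budgeting above, assuming tokenMax = 4000.
// Variable names mirror the diff but this is standalone illustration,
// not the package's module.
const tokenMax = 4000;
const threadTokenMax = tokenMax / 2;        // 2000 tokens max per mail thread
const messageTokenMax = threadTokenMax / 5; // 400 tokens max per message, ~5 big messages per thread

console.log(threadTokenMax, messageTokenMax); // → 2000 400
```

Halving at each level forces agents to archive threads and keep messages short instead of letting mail consume the whole session context.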
@@ -15,7 +15,7 @@ let _nextGlobalLinkNum = 1;
  export async function handleCommand(cmdArgs) {
  outputInDebugMode("LLMYNX DEBUG MODE IS ON");
  const argParams = cmdArgs.split(" ");
- const defualtTokenMax = config.tokenMax / 8;
+ const defualtTokenMax = config.agent.tokenMax / 8;
  if (!argParams[0]) {
  argParams[0] = "help";
  }
@@ -28,7 +28,7 @@ export async function handleCommand(cmdArgs) {
  links <url> <page>: Lists only the links for the given url. Use the page number to get more links`;
  case "search": {
  const query = argParams.slice(1).join(" ");
- return await loadUrl("https://www.google.com/search?q=" + encodeURIComponent(query), config.tokenMax / 2, // Prevent form being reduced as google results are usually short anyways and we want to maintainq the links
+ return await loadUrl("https://www.google.com/search?q=" + encodeURIComponent(query), config.agent.tokenMax / 2, // Prevent form being reduced as google results are usually short anyways and we want to maintainq the links
  true, true);
  }
  case "open": {
@@ -10,6 +10,7 @@ import { InputMode } from "../utils/inputMode.js";
  import * as logService from "../utils/logService.js";
  import * as output from "../utils/output.js";
  import { OutputColor } from "../utils/output.js";
+ import * as utilities from "../utils/utilities.js";
  import * as promptBuilder from "./promptBuilder.js";
  import * as shellCommand from "./shellCommand.js";
  export var NextCommandAction;
@@ -23,46 +24,19 @@ export async function consoleInput(prompt, consoleInput) {
  // We process the lines one at a time so we can support multiple commands with line breaks
  let firstLine = true;
  let processNextLLMpromptBlock = true;
- const userHostPrompt = promptBuilder.getUserHostPrompt();
  let nextCommandAction = NextCommandAction.Continue;
- let nextInput = consoleInput.trim();
- while (processNextLLMpromptBlock && nextInput) {
- let input = "";
- // if the prompt exists in the input, save if for the next run
- const nextPromptPos = nextInput.indexOf(userHostPrompt);
- const newLinePos = nextInput.indexOf("\n");
- if (nextPromptPos == 0) {
- const pathPrompt = await promptBuilder.getUserHostPathPrompt();
- // check working directory is the same
- if (nextInput.startsWith(pathPrompt)) {
- // slice nextInput after $
- const endPrompt = nextInput.indexOf("$", pathPrompt.length);
- nextInput = nextInput.slice(endPrompt + 1).trim();
- continue;
- }
- // else prompt did not match, stop processing input
- else {
- break;
- }
- }
- // we can't validate that the working directory in the prompt is good until the commands are processed
- else if (nextPromptPos > 0) {
- input = nextInput.slice(0, nextPromptPos);
- nextInput = nextInput.slice(nextPromptPos).trim();
- }
- // Else for single line custom NAISYS commands, only process the first line as there may be follow up shell commands
- else if (newLinePos > 0 && nextInput.startsWith("comment ")) {
- input = nextInput.slice(0, newLinePos);
- nextInput = nextInput.slice(newLinePos).trim();
+ consoleInput = consoleInput.trim();
+ while (processNextLLMpromptBlock && consoleInput) {
+ const { input, nextInput, splitResult } = await splitMultipleInputCommands(consoleInput);
+ consoleInput = nextInput;
+ if (splitResult == SplitResult.InputIsPrompt) {
+ continue;
  }
- else {
- input = nextInput;
- nextInput = "";
- }
- if (!input.trim()) {
+ else if (splitResult == SplitResult.InputPromptMismatch ||
+ !input.trim()) {
  break;
  }
- // first line is special because we want to append the output to the context without a line break
+ // First line is special because we want to append the output to the context without a line break
  if (inputMode.current == InputMode.LLM) {
  if (firstLine) {
  firstLine = false;
@@ -83,7 +57,12 @@ export async function consoleInput(prompt, consoleInput) {
  break;
  }
  case "endsession": {
- previousSessionNotes = cmdArgs;
+ // Don't need to check end line as this is the last command in the context, just read to the end
+ previousSessionNotes = utilities.trimChars(cmdArgs, '"');
+ if (!previousSessionNotes) {
+ await contextManager.append(`End session notes are required. Use endsession "<notes>"`);
+ break;
+ }
  await output.commentAndLog("------------------------------------------------------");
  nextCommandAction = NextCommandAction.EndSession;
  processNextLLMpromptBlock = false;
@@ -144,8 +123,8 @@ export async function consoleInput(prompt, consoleInput) {
  }
  }
  // display unprocessed lines to aid in debugging
- if (nextInput.trim()) {
- await output.errorAndLog(`Unprocessed LLM response:\n${nextInput}`);
+ if (consoleInput.trim()) {
+ await output.errorAndLog(`Unprocessed LLM response:\n${consoleInput}`);
  }
  return {
  nextCommandAction,
@@ -153,4 +132,68 @@ export async function consoleInput(prompt, consoleInput) {
  wakeOnMessage: config.agent.wakeOnMessage,
  };
  }
+ var SplitResult;
+ (function (SplitResult) {
+ SplitResult[SplitResult["InputIsPrompt"] = 0] = "InputIsPrompt";
+ SplitResult[SplitResult["InputPromptMismatch"] = 1] = "InputPromptMismatch";
+ })(SplitResult || (SplitResult = {}));
+ async function splitMultipleInputCommands(nextInput) {
+ let input = "";
+ let splitResult;
+ // If the prompt exists in the input, save if for the next run
+ const userHostPrompt = promptBuilder.getUserHostPrompt();
+ const nextPromptPos = nextInput.indexOf(userHostPrompt);
+ const newLinePos = nextInput.indexOf("\n");
+ if (nextPromptPos == 0) {
+ const pathPrompt = await promptBuilder.getUserHostPathPrompt();
+ // Check working directory is the same
+ if (nextInput.startsWith(pathPrompt)) {
+ // Slice nextInput after $
+ const endPrompt = nextInput.indexOf("$", pathPrompt.length);
+ nextInput = nextInput.slice(endPrompt + 1).trim();
+ splitResult = SplitResult.InputIsPrompt;
+ }
+ // Else prompt did not match, stop processing input
+ else {
+ splitResult = SplitResult.InputPromptMismatch;
+ }
+ }
+ // We can't validate that the working directory in the prompt is good until the commands are processed
+ else if (nextPromptPos > 0) {
+ input = nextInput.slice(0, nextPromptPos);
+ nextInput = nextInput.slice(nextPromptPos).trim();
+ }
+ // Most custom NAISYS commands are single line, but comment in quotes can span multiple lines so we need to handle that
+ // because often the LLM puts shell commands after the comment
+ else if (nextInput.startsWith(`comment "`)) {
+ // Find next double quote in nextInput that isn't escaped
+ let endQuote = nextInput.indexOf(`"`, 9);
+ while (endQuote > 0 && nextInput[endQuote - 1] === "\\") {
+ endQuote = nextInput.indexOf(`"`, endQuote + 1);
+ }
+ if (endQuote > 0) {
+ input = nextInput.slice(0, endQuote + 1);
+ nextInput = nextInput.slice(endQuote + 1).trim();
+ }
+ else {
+ input = nextInput;
+ nextInput = "";
+ }
+ }
+ // If the LLM forgets the quote on the comment, treat it as a single line comment
+ else if (newLinePos > 0 && nextInput.startsWith("comment ")) {
+ input = nextInput.slice(0, newLinePos);
+ nextInput = nextInput.slice(newLinePos).trim();
+ }
+ // Else process the entire input now
+ else {
+ input = nextInput;
+ nextInput = "";
+ }
+ return { input, nextInput, splitResult };
+ }
+ export const exportedForTesting = {
+ splitMultipleInputCommands,
+ SplitResult,
+ };
  //# sourceMappingURL=commandHandler.js.map
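The core of the new multi-line `comment "..."` handling above is the scan for the first double quote not preceded by a backslash. A minimal standalone sketch of that scan (the function name here is illustrative, not a package export):

```javascript
// Sketch of the unescaped-quote scan used by splitMultipleInputCommands.
// Starting past `comment "` (index 9), advance over any quote that is
// escaped with a backslash until a real closing quote is found.
function findClosingQuote(text, start) {
  let endQuote = text.indexOf('"', start);
  while (endQuote > 0 && text[endQuote - 1] === "\\") {
    endQuote = text.indexOf('"', endQuote + 1);
  }
  return endQuote; // -1 if no unescaped closing quote exists
}

const input = 'comment "a \\"quoted\\" note"\nls -la';
const end = findClosingQuote(input, 9);
console.log(input.slice(0, end + 1));      // the full comment command
console.log(input.slice(end + 1).trim());  // → ls -la (the follow-up shell command)
```

This is why the refactor matters: the old single-line split would have cut the comment at the first newline, while the quote scan lets a multi-line comment and a trailing shell command coexist in one LLM response.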
@@ -150,7 +150,7 @@ async function displayNewMail() {
  // Check that token max for session will not be exceeded
  const newMsgTokenCount = newMessages.reduce((acc, msg) => acc + utilities.getTokenCount(msg), 0);
  const sessionTokens = contextManager.getTokenCount();
- const tokenMax = config.tokenMax;
+ const tokenMax = config.agent.tokenMax;
  // Show full messages unless we are close to the token limit of the session
  // or in simple mode, which means non-threaded messages
  if (sessionTokens + newMsgTokenCount < tokenMax * 0.75) {
@@ -176,7 +176,7 @@ async function displayNewMail() {
  }
  async function displayContextWarning() {
  const tokenCount = contextManager.getTokenCount();
- const tokenMax = config.tokenMax;
+ const tokenMax = config.agent.tokenMax;
  if (tokenCount > tokenMax) {
  await contextManager.append(`The token limit for this session has been exceeded.
  Use \`endsession <note>\` to clear the console and reset the session.
@@ -31,7 +31,7 @@ _readlineInterface.on("close", () => {
  });
  export async function getPrompt(pauseSeconds, wakeOnMessage) {
  const promptSuffix = inputMode.current == InputMode.Debug ? "#" : "$";
- const tokenMax = config.tokenMax;
+ const tokenMax = config.agent.tokenMax;
  const usedTokens = contextManager.getTokenCount();
  const tokenSuffix = ` [Tokens: ${usedTokens}/${tokenMax}]`;
  let pause = "";
package/dist/config.js CHANGED
@@ -7,9 +7,7 @@ program.argument("<agent-path>", "Path to agent configuration file").parse();
  dotenv.config();
  /** The system name that shows after the @ in the command prompt */
  export const hostname = "naisys";
- /** The number of tokens you want to limit a session to, independent of the LLM token max */
- export const tokenMax = 4000;
- /* .env is used for global configs across naisys, while agent configs for the specific agent */
+ /* .env is used for global configs across naisys, while agent configs are for the specific agent */
  export const naisysFolder = getEnv("NAISYS_FOLDER", true);
  export const websiteFolder = getEnv("WEBSITE_FOLDER");
  export const localLlmUrl = getEnv("LOCAL_LLM_URL");
@@ -36,6 +34,7 @@ function loadAgentConfig() {
  "webModel",
  "agentPrompt",
  "spendLimitDollars",
+ "tokenMax",
  // debugPauseSeconds and wakeOnMessage can be undefined
  ]) {
  if (!valueFromString(checkAgentConfig, key)) {
@@ -44,4 +43,19 @@ function loadAgentConfig() {
  }
  return checkAgentConfig;
  }
+ export const packageVersion = await getVersion();
+ /** Can only get version from env variable when naisys is started with npm,
+ * otherwise need to rip it from the package ourselves relative to where this file is located */
+ async function getVersion() {
+ try {
+ const packageJsonPath = new URL("../package.json", import.meta.url);
+ const packageJson = await import(packageJsonPath.href, {
+ assert: { type: "json" },
+ });
+ return packageJson.default.version;
+ }
+ catch (e) {
+ return "0.1";
+ }
+ }
  //# sourceMappingURL=config.js.map
@@ -34,7 +34,7 @@ For example when you run 'cat' or 'ls', don't write what you think the output will be. Wait for the system to respond.
  Your role is that of the user. The system will provide responses and next command prompt. Don't output your own command prompt.
  Be careful when writing files through the command prompt with cat. Make sure to close and escape quotes properly.
 
- NAISYS ${process.env.npm_package_version} Shell
+ NAISYS ${config.packageVersion} Shell
  Welcome back ${config.agent.username}!
  MOTD:
  Date: ${new Date().toLocaleString()}
@@ -46,9 +46,9 @@ Commands:
  Special Commands: (Don't mix with standard commands on the same prompt)
  llmail: A local mail system for communicating with your team
  llmynx: A context optimized web browser. Enter 'llmynx help' to learn how to use it
- comment <thought>: Any non-command output like thinking out loud, prefix with the 'comment' command
+ comment "<thought>": Any non-command output like thinking out loud, prefix with the 'comment' command
  pause <seconds>: Pause for <seconds> or indeterminite if no argument is provided. Auto wake up on new mail message
- endsession <note>: Ends this session, clears the console log and context.
+ endsession "<note>": Ends this session, clears the console log and context.
  The note should help you find your bearings in the next session.
  The note should contain your next goal, and important things should you remember.
  Try to keep the note around 400 tokens.
@@ -37,4 +37,7 @@ export function ensureFileDirExists(filePath) {
  fs.mkdirSync(dir, { recursive: true });
  }
  }
+ export function trimChars(text, charList) {
+ return text.replace(new RegExp(`^[${charList}]+|[${charList}]+$`, "g"), "");
+ }
  //# sourceMappingURL=utilities.js.map
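The `trimChars` helper added above strips the characters in `charList` from both ends of a string; `endsession` uses it to peel the surrounding quotes off session notes. A standalone restatement, matching the one-line regex in the diff:

```javascript
// Restatement of the trimChars helper from the diff: remove any run of
// the listed characters from the start and end of the string.
function trimChars(text, charList) {
  return text.replace(new RegExp(`^[${charList}]+|[${charList}]+$`, "g"), "");
}

console.log(trimChars('"wrap up the nav page, start on the footer"', '"'));
// → wrap up the nav page, start on the footer
console.log(trimChars("no quotes here", '"')); // → no quotes here
```

Note that `charList` is interpolated into a regex character class, so it works for plain characters like `"` but callers would need to escape regex metacharacters such as `]` or `^` themselves.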
package/package.json CHANGED
@@ -1,7 +1,7 @@
  {
  "name": "naisys",
  "description": "Node.js Autonomous Intelligence System",
- "version": "1.0.2",
+ "version": "1.0.3",
  "type": "module",
  "main": "dist/naisys.js",
  "preferGlobal": true,
@@ -40,10 +40,10 @@
  "devDependencies": {
  "@types/escape-html": "1.0.4",
  "@types/js-yaml": "4.0.9",
- "@types/node": "20.11.22",
+ "@types/node": "20.11.25",
  "@types/text-table": "0.2.5",
- "@typescript-eslint/eslint-plugin": "7.1.0",
- "@typescript-eslint/parser": "7.1.0",
+ "@typescript-eslint/eslint-plugin": "7.1.1",
+ "@typescript-eslint/parser": "7.1.1",
  "eslint": "8.57.0",
  "jest": "29.7.0",
  "prettier": "3.2.5",
@@ -1,22 +0,0 @@
- import { describe, expect, it, jest } from "@jest/globals";
- jest.unstable_mockModule("../../config.js", () => ({}));
- const mockLogServiceWrite = jest
- .fn()
- .mockResolvedValue(1);
- jest.unstable_mockModule("../../utils/logService.js", () => ({
- write: mockLogServiceWrite,
- }));
- const output = await import("../../utils/output.js");
- describe("commentAndLog function", () => {
- it("should call writeDbLog with the correct arguments", async () => {
- // Assuming you've refactored commentAndLog to take logService or its functionality as a parameter
- await output.commentAndLog("Test message");
- // Verify the mock was called correctly
- expect(mockLogServiceWrite).toHaveBeenCalledWith({
- content: "Test message",
- role: "user",
- type: "comment",
- });
- });
- });
- //# sourceMappingURL=output.test.js.map
@@ -1,42 +0,0 @@
- import { describe, expect, test } from "@jest/globals";
- import { valueFromString } from "../../utils/utilities.js";
- describe("valueFromString", () => {
- const obj = {
- user: {
- name: "John Doe",
- contact: {
- email: "john@example.com",
- phone: {
- home: "123456",
- work: "789101",
- },
- },
- },
- };
- test("retrieves a nested value successfully", () => {
- expect(valueFromString(obj, "user.name")).toBe("John Doe");
- expect(valueFromString(obj, "user.contact.email")).toBe("john@example.com");
- expect(valueFromString(obj, "user.contact.phone.home")).toBe("123456");
- });
- test("returns undefined for non-existent path", () => {
- expect(valueFromString(obj, "user.address")).toBeUndefined();
- });
- test("returns default value for non-existent path when specified", () => {
- const defaultValue = "N/A";
- expect(valueFromString(obj, "user.age", defaultValue)).toBe(defaultValue);
- });
- test("handles non-object inputs gracefully", () => {
- expect(valueFromString(null, "user.name")).toBeUndefined();
- expect(valueFromString(undefined, "user.name")).toBeUndefined();
- expect(valueFromString("not-an-object", "user.name")).toBeUndefined();
- });
- test("deals with edge cases for paths", () => {
- expect(valueFromString(obj, "")).toEqual(obj);
- expect(valueFromString(obj, ".", "default")).toBe("default");
- });
- test("handles empty object and non-matching paths", () => {
- expect(valueFromString({}, "user.name")).toBeUndefined();
- expect(valueFromString(obj, "user.nonexistent.prop", "default")).toBe("default");
- });
- });
- //# sourceMappingURL=utilities.test.js.map