npcsh 0.1.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
npcsh-0.1.0/LICENSE ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2024 NPC WORLDWIDE
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
npcsh-0.1.0/PKG-INFO ADDED
@@ -0,0 +1,86 @@
+ Metadata-Version: 2.1
+ Name: npcsh
+ Version: 0.1.0
+ Summary: A way to use npcsh
+ Home-page: https://github.com/cagostino/npcsh
+ Author: Christopher Agostino
+ Author-email: cjp.agostino@example.com
+ Classifier: Programming Language :: Python :: 3
+ Classifier: License :: OSI Approved :: MIT License
+ Requires-Python: >=3.10
+ Description-Content-Type: text/markdown
+ License-File: LICENSE
+ Requires-Dist: jinja2
+ Requires-Dist: pandas
+ Requires-Dist: ollama
+ Requires-Dist: requests
+ Requires-Dist: PyYAML
+
+ # npcsh
+
+ Welcome to npcsh, a shell for interacting with NPCs (LLM-powered AI agents). npcsh is meant to be a drop-in replacement for shells such as bash, zsh, or PowerShell, and lets the user operate their machine directly through the LLM-powered shell.
+
+ Additionally, npcsh introduces a new paradigm of programming for LLMs: users set up NPC profiles (e.g. `npc_profile.npc`) in which they define the NPC's primary directive, the tools the NPC may use, and other properties. NPCs can interact with each other, and their primary directives and properties make these relationships explicit through Jinja references.
+
+ ## compilation
+
+ Each NPC can be compiled to accomplish its primary directive; any issues encountered are recorded and associated with the NPC so that it can reference them later through vector search. In any mode where a user requests input from an NPC, the NPC includes RAG search results before carrying out the request.
+
+
+ ## Base npcsh
+
+
+ In the base npcsh shell, inputs are processed by an LLM. The LLM first determines what kind of request the user is making and decides which of the available tools or modes will best accomplish it.
+
+
+ ### spool mode
+
+ Spool mode allows users to have threaded conversations in the shell, i.e. conversations where context is retained over the course of several turns.
+ Users can speak with a specific NPC in spool mode by typing ```/spool <npc_name>``` and can exit spool mode by typing ```/sq```.
+
+ ## Built-in NPCs
+ Built-in NPCs offer broad utility to the user and serve as building blocks for more complicated NPCs. They facilitate many common data-processing tasks as well as running commands and executing and testing programs.
+
+ ### Bash NPC
+ The bash NPC is focused on running bash commands and scripts. It can be used to run bash commands, and the user can converse with it by typing ```/spool bash``` to interrogate it about the commands it has run and the output it has produced.
+ A user can enter bash mode by typing ```/bash``` and exit it by typing ```/bq```.
+ Use the Bash NPC in the profiles of other NPCs by referencing it like ```{{bash}}```.
+
+ ### Command NPC
+
+ The LLM or a specific NPC takes the user's request, writes a command or script to accomplish the task, and then attempts to run it, tweaking it until it works or the number of retries is exceeded (default=5).
+
+ Use the Command NPC by typing ```/cmd <command>```. Chat with the Command NPC in spool mode by typing ```/spool cmd```.
+ Use the Command NPC in the profiles of other NPCs by referencing it like ```{{cmd}}```.
+
+ ### observation mode
+
+ Users can create schemas for recording observations. The idea is to make it easy to record data about essentially any realm (e.g. recipe testing, one's own blood pressure or weight, books read, movies watched, daily mood) without needing a tangled web of applications to do so. Observations can be referenced by the generic npcsh LLM shell or by specific NPCs.
+ Use the Observation NPC by typing ```/obs <observation>```.
+ Chat with the Observation NPC in spool mode by typing ```/spool obs```.
+ Use the Observation NPC in the profiles of other NPCs by referencing it like ```{{obs}}```.
+
+
+ ### question mode
+
+ The user can submit a 1-shot question to a general LLM or to a specific NPC.
+ Use it like
+ ```/question <question> <npc_name>```
+ or
+ ```/question <question>```
+
+ You can also chat with the Question NPC in spool mode by typing ```/spool question```.
+
+
+
+ ### thought mode
+
+ Thought mode is a way to write out general thoughts and get 1-shot feedback from a general LLM or a specific NPC.
+
+ Use it like
+ ```/thought <thought> <npc_name>```
+ or
+ ```/thought <thought>```
+
+
+ You can also chat with the Thought NPC in spool mode by typing ```/spool thought```.
npcsh-0.1.0/README.md ADDED
@@ -0,0 +1,68 @@
+ # npcsh
+
+ Welcome to npcsh, a shell for interacting with NPCs (LLM-powered AI agents). npcsh is meant to be a drop-in replacement for shells such as bash, zsh, or PowerShell, and lets the user operate their machine directly through the LLM-powered shell.
+
+ Additionally, npcsh introduces a new paradigm of programming for LLMs: users set up NPC profiles (e.g. `npc_profile.npc`) in which they define the NPC's primary directive, the tools the NPC may use, and other properties. NPCs can interact with each other, and their primary directives and properties make these relationships explicit through Jinja references.
+
+ ## compilation
+
+ Each NPC can be compiled to accomplish its primary directive; any issues encountered are recorded and associated with the NPC so that it can reference them later through vector search. In any mode where a user requests input from an NPC, the NPC includes RAG search results before carrying out the request.
+
+
+ ## Base npcsh
+
+
+ In the base npcsh shell, inputs are processed by an LLM. The LLM first determines what kind of request the user is making and decides which of the available tools or modes will best accomplish it.
+
+
+ ### spool mode
+
+ Spool mode allows users to have threaded conversations in the shell, i.e. conversations where context is retained over the course of several turns.
+ Users can speak with a specific NPC in spool mode by typing ```/spool <npc_name>``` and can exit spool mode by typing ```/sq```.
+
+ ## Built-in NPCs
+ Built-in NPCs offer broad utility to the user and serve as building blocks for more complicated NPCs. They facilitate many common data-processing tasks as well as running commands and executing and testing programs.
+
+ ### Bash NPC
+ The bash NPC is focused on running bash commands and scripts. It can be used to run bash commands, and the user can converse with it by typing ```/spool bash``` to interrogate it about the commands it has run and the output it has produced.
+ A user can enter bash mode by typing ```/bash``` and exit it by typing ```/bq```.
+ Use the Bash NPC in the profiles of other NPCs by referencing it like ```{{bash}}```.
+
+ ### Command NPC
+
+ The LLM or a specific NPC takes the user's request, writes a command or script to accomplish the task, and then attempts to run it, tweaking it until it works or the number of retries is exceeded (default=5).
+
+ Use the Command NPC by typing ```/cmd <command>```. Chat with the Command NPC in spool mode by typing ```/spool cmd```.
+ Use the Command NPC in the profiles of other NPCs by referencing it like ```{{cmd}}```.
+
+ ### observation mode
+
+ Users can create schemas for recording observations. The idea is to make it easy to record data about essentially any realm (e.g. recipe testing, one's own blood pressure or weight, books read, movies watched, daily mood) without needing a tangled web of applications to do so. Observations can be referenced by the generic npcsh LLM shell or by specific NPCs.
+ Use the Observation NPC by typing ```/obs <observation>```.
+ Chat with the Observation NPC in spool mode by typing ```/spool obs```.
+ Use the Observation NPC in the profiles of other NPCs by referencing it like ```{{obs}}```.
+
+
+ ### question mode
+
+ The user can submit a 1-shot question to a general LLM or to a specific NPC.
+ Use it like
+ ```/question <question> <npc_name>```
+ or
+ ```/question <question>```
+
+ You can also chat with the Question NPC in spool mode by typing ```/spool question```.
+
+
+
+ ### thought mode
+
+ Thought mode is a way to write out general thoughts and get 1-shot feedback from a general LLM or a specific NPC.
+
+ Use it like
+ ```/thought <thought> <npc_name>```
+ or
+ ```/thought <thought>```
+
+
+ You can also chat with the Thought NPC in spool mode by typing ```/spool thought```.
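The Jinja-style references described above (e.g. ```{{bash}}``` inside another NPC's profile) can be illustrated with a minimal sketch. The profile fields and the placeholder name are assumptions based on the README, and plain string substitution stands in for the real Jinja rendering used by the package:

```python
# Minimal sketch of resolving an NPC reference inside another NPC's profile.
# The field names and the {{bash}} placeholder are illustrative assumptions;
# the real package resolves these with Jinja templates.

npc_profiles = {
    "bash": {"primary_directive": "Run bash commands and scripts"},
    "builder": {
        "primary_directive": "Build projects, delegating shell work to {{bash}}."
    },
}


def resolve_references(profile, profiles):
    """Replace {{name}} placeholders with the referenced NPC's directive."""
    resolved = {}
    for key, value in profile.items():
        for name, other in profiles.items():
            value = value.replace("{{" + name + "}}", other["primary_directive"])
        resolved[key] = value
    return resolved


resolved = resolve_references(npc_profiles["builder"], {"bash": npc_profiles["bash"]})
print(resolved["primary_directive"])
```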
File without changes
@@ -0,0 +1,81 @@
+ import os
+ import sqlite3
+ import json
+ from datetime import datetime
+
+
+ def show_history(command_history, args):
+     if args:
+         search_results = command_history.search(args[0])
+         if search_results:
+             return "\n".join(
+                 [f"{item[0]}. [{item[1]}] {item[2]}" for item in search_results]
+             )
+         else:
+             return f"No commands found matching '{args[0]}'"
+     else:
+         all_history = command_history.get_all()
+         return "\n".join([f"{item[0]}. [{item[1]}] {item[2]}" for item in all_history])
+
+
+ def query_history_for_llm(command_history, query):
+     results = command_history.search(query)
+     formatted_results = [
+         f"Command: {r[2]}\nOutput: {r[4]}\nLocation: {r[5]}" for r in results
+     ]
+     return "\n\n".join(formatted_results)
+
+
+ class CommandHistory:
+     def __init__(self, path="~/npcsh_history.db"):
+         self.db_path = os.path.expanduser(path)
+         self.conn = sqlite3.connect(self.db_path)
+         self.cursor = self.conn.cursor()
+         self.create_table()
+
+     def create_table(self):
+         self.cursor.execute(
+             """
+             CREATE TABLE IF NOT EXISTS command_history (
+                 id INTEGER PRIMARY KEY AUTOINCREMENT,
+                 timestamp TEXT,
+                 command TEXT,
+                 subcommands TEXT,
+                 output TEXT,
+                 location TEXT
+             )
+             """
+         )
+         self.conn.commit()
+
+     def add(self, command, subcommands, output, location):
+         timestamp = datetime.now().isoformat()
+         self.cursor.execute(
+             """
+             INSERT INTO command_history (timestamp, command, subcommands, output, location)
+             VALUES (?, ?, ?, ?, ?)
+             """,
+             (timestamp, command, json.dumps(subcommands), output, location),
+         )
+         self.conn.commit()
+
+     def search(self, term):
+         self.cursor.execute(
+             """
+             SELECT * FROM command_history WHERE command LIKE ?
+             """,
+             (f"%{term}%",),
+         )
+         return self.cursor.fetchall()
+
+     def get_all(self, limit=100):
+         self.cursor.execute(
+             """
+             SELECT * FROM command_history ORDER BY id DESC LIMIT ?
+             """,
+             (limit,),
+         )
+         return self.cursor.fetchall()
+
+     def close(self):
+         self.conn.close()
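The storage pattern used by `CommandHistory` above can be sketched standalone with an in-memory SQLite database; the schema mirrors `create_table`, and the insert and search mirror `add` and `search`:

```python
import json
import sqlite3
from datetime import datetime

# In-memory database using the same schema as CommandHistory.create_table.
conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(
    """
    CREATE TABLE command_history (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        timestamp TEXT,
        command TEXT,
        subcommands TEXT,
        output TEXT,
        location TEXT
    )
    """
)

# Mirror CommandHistory.add: subcommands are stored as a JSON string.
cur.execute(
    "INSERT INTO command_history (timestamp, command, subcommands, output, location) "
    "VALUES (?, ?, ?, ?, ?)",
    (datetime.now().isoformat(), "ls -la", json.dumps(["ls -la"]), "total 0", "/tmp"),
)
conn.commit()

# Mirror CommandHistory.search: substring match on the command column.
cur.execute("SELECT * FROM command_history WHERE command LIKE ?", ("%ls%",))
rows = cur.fetchall()
print(rows[0][2])  # -> ls -la
conn.close()
```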
@@ -0,0 +1,36 @@
+ # helpers.py
+ import logging
+ import os
+
+ logging.basicConfig(
+     filename=".npcsh.log",
+     level=logging.INFO,
+     format="%(asctime)s - %(levelname)s - %(message)s",
+ )
+
+
+ def list_directory(args):
+     directory = args[0] if args else "."
+     try:
+         files = os.listdir(directory)
+         for f in files:
+             print(f)
+     except Exception as e:
+         print(f"Error listing directory: {e}")
+
+
+ def read_file(args):
+     if not args:
+         print("Usage: /read <filename>")
+         return
+     filename = args[0]
+     try:
+         with open(filename, "r") as file:
+             content = file.read()
+             print(content)
+     except Exception as e:
+         print(f"Error reading file: {e}")
+
+
+ def log_action(action, detail=""):
+     logging.info(f"{action}: {detail}")
@@ -0,0 +1,218 @@
+ import subprocess
+ import requests
+ import os
+ import json
+ import ollama
+
+
+ def get_ollama_conversion(messages, model):
+     # Send the running conversation to Ollama and append the assistant reply.
+     response = ollama.chat(model=model, messages=messages)
+     messages.append(response["message"])
+     return messages
+
+
+ """
+ test_messages = [
+     {"role": "user", "content": "hows it going"}]
+ model = "llama3.1"
+ test_messages = get_ollama_conversion(test_messages, model)
+
+ test_messages.append({"role": "user", "content": "then can you help me design something really spectacular?"})
+ """
+
+
+ def get_ollama_response(prompt, model, format=None, **kwargs):
+     try:
+         url = "http://localhost:11434/api/generate"
+         data = {
+             "model": model,
+             "prompt": prompt,
+             "stream": False,
+         }
+         if format is not None:
+             data["format"] = format
+
+         response = requests.post(url, json=data)
+         response.raise_for_status()
+         llm_response = json.loads(response.text)["response"]
+
+         # If format is JSON, try to parse the response as JSON
+         if format == "json":
+             try:
+                 return json.loads(llm_response)
+             except json.JSONDecodeError:
+                 print(f"Warning: Expected JSON response, but received: {llm_response}")
+                 return {"error": "Invalid JSON response"}
+         else:
+             return llm_response
+     except Exception as e:
+         return f"Error interacting with LLM: {e}"
+
+
+ def get_openai_response(prompt, model, functional_requirements=None):
+     pass
+
+
+ def get_claude_response(prompt, model, format=None):
+     pass
+
+
+ def get_llm_response(prompt, provider="ollama", model="phi3", **kwargs):
+     if provider == "ollama":
+         return get_ollama_response(prompt, model, **kwargs)
+     elif provider == "openai":
+         return get_openai_response(prompt, model, **kwargs)
+     elif provider == "claude":
+         return get_claude_response(prompt, model, **kwargs)
+     else:
+         return "Error: Invalid provider specified."
+
+
+ def execute_llm_command(command, command_history):
+     max_attempts = 5
+     attempt = 0
+     subcommands = []
+
+     location = os.getcwd()
+     while attempt < max_attempts:
+         prompt = f"""
+         A user submitted this query: {command}.
+         You need to generate a bash command that will accomplish the user's intent.
+         Respond ONLY with the command that should be executed,
+         in the json key "bash_command".
+         You must reply with valid json and nothing else.
+         """
+
+         response = get_llm_response(prompt, format="json")
+         print(f"LLM suggests: {response}")
+
+         if isinstance(response, dict) and "bash_command" in response:
+             bash_command = response["bash_command"]
+         else:
+             print("Error: Invalid response format from LLM")
+             attempt += 1
+             continue
+
+         try:
+             subcommands.append(bash_command)
+             result = subprocess.run(
+                 bash_command, shell=True, text=True, capture_output=True
+             )
+             if result.returncode == 0:
+                 # simplify the output
+                 prompt = f"""
+                 Here was the output of the result for the {command} inquiry
+                 which ran this bash command {bash_command}:
+
+                 {result.stdout}
+
+                 Provide a simple short description that provides the most
+                 useful information about the command's result.
+                 """
+                 response = get_llm_response(prompt)
+                 print(response)
+                 output = response
+                 command_history.add(command, subcommands, output, location)
+
+                 return response
+             else:
+                 print("Command failed with error:")
+                 print(result.stderr)
+
+                 error_prompt = f"""
+                 The command '{bash_command}' failed with the following error:
+                 {result.stderr}
+                 Please suggest a fix or an alternative command.
+                 Respond with a JSON object containing the key "bash_command" with the suggested command.
+                 """
+                 fix_suggestion = get_llm_response(error_prompt, format="json")
+                 if isinstance(fix_suggestion, dict) and "bash_command" in fix_suggestion:
+                     print(f"LLM suggests fix: {fix_suggestion['bash_command']}")
+                     command = fix_suggestion["bash_command"]
+                 else:
+                     print("Error: Invalid response format from LLM for fix suggestion")
+         except Exception as e:
+             print(f"Error executing command: {e}")
+
+         attempt += 1
+
+     command_history.add(command, subcommands, "Execution failed", location)
+     print("Max attempts reached. Unable to execute the command successfully.")
+     return "Max attempts reached. Unable to execute the command successfully."
+
+
+ def execute_llm_question(command, command_history):
+     location = os.getcwd()
+
+     prompt = f"""
+     A user submitted this query: {command}
+     You need to generate a response to the user's inquiry.
+     Respond ONLY with the response that should be given,
+     in the json key "response".
+     You must reply with valid json and nothing else.
+     """
+     response = get_llm_response(prompt, format="json")
+     print(f"LLM suggests: {response}")
+     output = response["response"]
+     command_history.add(command, [], output, location)
+     return output
+
+
+ def execute_llm_thought(command, command_history):
+     location = os.getcwd()
+     prompt = f"""
+     A user submitted this query: {command}
+     Please generate a response to the user's inquiry.
+     Respond ONLY with the response that should be given,
+     in the json key "response".
+     You must reply with valid json and nothing else.
+     """
+     response = get_llm_response(prompt, format="json")
+     print(f"LLM suggests: {response}")
+     output = response["response"]
+     command_history.add(command, [], output, location)
+     return output
+
+
+ def check_llm_command(command, command_history):
+     # check what kind of request the command is
+     prompt = f"""
+     A user submitted this query: {command}
+     What kind of request is this?
+     [Command, Question, Thought]
+     Thoughts are simply musings or ideas.
+     Questions are requests for information or explanation.
+     Commands are requests for a specific action to be performed.
+
+     Most requests are likely to be commands, so only use the other options if
+     the request is clearly not a command.
+
+     Return your response in valid json key "request_type".
+     Provide an explanation in key "explanation".
+     Your answer must be like
+
+     {{"request_type": "Command",
+       "explanation": "This is a command because..."}}
+     or
+     {{"request_type": "Thought",
+       "explanation": "This is a thought because..."}}
+     """
+
+     response = get_llm_response(prompt, format="json")
+     request_type = response["request_type"]
+     print(f"LLM initially suggests: {response}")
+     # does the user agree with the request type? y/n
+
+     if request_type == "Command":
+         return execute_llm_command(command, command_history)
+     elif request_type == "Question":
+         return execute_llm_question(command, command_history)
+     elif request_type == "Thought":
+         return execute_llm_thought(command, command_history)
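The classification step in `check_llm_command` relies on the model returning a small JSON object with a `request_type` key. Parsing and normalizing that reply can be sketched as follows; the fallback of treating unparseable or unknown replies as commands is an assumption for illustration, not behavior of the original code:

```python
import json

# Example of the JSON shape check_llm_command expects back from the LLM.
raw = '{"request_type": "Command", "explanation": "This asks for an action."}'


def classify(raw_response):
    """Parse the LLM's JSON reply and normalize the request type."""
    try:
        parsed = json.loads(raw_response)
    except json.JSONDecodeError:
        # Assumed fallback: treat unparseable replies as commands.
        return "Command"
    request_type = parsed.get("request_type", "Command")
    if request_type not in ("Command", "Question", "Thought"):
        return "Command"
    return request_type


print(classify(raw))  # -> Command
```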
@@ -0,0 +1,5 @@
+ def main():
+     print("Hello from npcsh!")
+
+ if __name__ == "__main__":
+     main()
@@ -0,0 +1,189 @@
+ # modes.py
+ import os
+ import subprocess
+ from .llm_funcs import get_ollama_conversion
+ import sqlite3
+
+
+ def enter_bash_mode():
+     print("Entering bash mode. Type '/bq' to exit bash mode.")
+     current_dir = os.getcwd()
+     bash_output = []
+     while True:
+         try:
+             bash_input = input(f"bash {current_dir}> ").strip()
+             if bash_input == "/bq":
+                 print("Exiting bash mode.")
+                 break
+             else:
+                 try:
+                     if bash_input.startswith("cd "):
+                         new_dir = bash_input[3:].strip()
+                         try:
+                             os.chdir(os.path.expanduser(new_dir))
+                             current_dir = os.getcwd()
+                             bash_output.append(f"Changed directory to {current_dir}")
+                             print(f"Changed directory to {current_dir}")
+                         except FileNotFoundError:
+                             print(f"bash: cd: {new_dir}: No such file or directory")
+                             bash_output.append(
+                                 f"bash: cd: {new_dir}: No such file or directory"
+                             )
+                     else:
+                         result = subprocess.run(
+                             bash_input,
+                             shell=True,
+                             text=True,
+                             capture_output=True,
+                             cwd=current_dir,
+                         )
+                         if result.stdout:
+                             print(result.stdout.strip())
+                             bash_output.append(result.stdout.strip())
+                         if result.stderr:
+                             print(f"Error: {result.stderr.strip()}")
+                             bash_output.append(f"Error: {result.stderr.strip()}")
+                 except Exception as e:
+                     bash_output.append(f"Error executing bash command: {e}")
+         except (KeyboardInterrupt, EOFError):
+             print("\nExiting bash mode.")
+             break
+     os.chdir(current_dir)
+     return "\n".join(bash_output)
+
+
+ def enter_whisper_mode():
+     pass
+
+
+ def enter_notes_mode(command_history):
+     print("Entering notes mode. Type '/nq' to exit.")
+
+     while True:
+         note = input("Enter your note (or '/nq' to quit): ").strip()
+
+         if note.lower() == "/nq":
+             break
+
+         save_note(note, command_history)
+
+     print("Exiting notes mode.")
+
+
+ def save_note(note, command_history):
+     current_dir = os.getcwd()
+     readme_path = os.path.join(current_dir, "README.md")
+
+     with open(readme_path, "a") as f:
+         f.write(f"\n- {note}\n")
+
+     print("Note added to README.md")
+     command_history.add(f"/note {note}", ["note"], "", current_dir)
+
+
+ # Usage in your main script:
+ # enter_notes_mode()
+
+
+ def enter_observation_mode(command_history):
+     conn = command_history.conn
+     cursor = command_history.cursor
+
+     print("Entering observation mode. Type '/obsq' or '/oq' to exit.")
+
+     while True:
+         # Show available tables
+         cursor.execute(
+             "SELECT name FROM sqlite_master WHERE type='table' AND name != 'command_history'"
+         )
+         tables = cursor.fetchall()
+
+         print("\nAvailable tables:")
+         for i, table in enumerate(tables, 1):
+             print(f"{i}. {table[0]}")
+         print("n. Create new table")
+         print("d. Delete a table")
+         print("q. Exit observation mode")
+
+         choice = input("Choose an option: ").strip().lower()
+
+         if choice == "q" or choice == "/obsq" or choice == "/oq":
+             break
+         elif choice == "n":
+             create_new_table(cursor, conn)
+         elif choice == "d":
+             delete_table(cursor, conn)
+         elif choice.isdigit() and 1 <= int(choice) <= len(tables):
+             add_observation(cursor, conn, tables[int(choice) - 1][0])
+
+     # The connection belongs to command_history, so it is not closed here.
+     print("Exiting observation mode.")
+
+
+ def enter_spool_mode(command_history, inherit_last=0, model="llama3.1"):
+     print("Entering spool mode. Type '/sq' to exit spool mode.")
+     spool_context = []
+
+     # Inherit last n messages if specified
+     if inherit_last > 0:
+         last_commands = command_history.get_all(limit=inherit_last)
+         for cmd in reversed(last_commands):
+             spool_context.append({"role": "user", "content": cmd[2]})  # command
+             spool_context.append({"role": "assistant", "content": cmd[4]})  # output
+
+     while True:
+         try:
+             user_input = input("spool> ").strip()
+             if user_input.lower() == "/sq":
+                 print("Exiting spool mode.")
+                 break
+
+             # Add user input to spool context
+             spool_context.append({"role": "user", "content": user_input})
+
+             # Process the spool context with LLM
+             spool_context = get_ollama_conversion(spool_context, model=model)
+
+             command_history.add(
+                 user_input, ["spool"], spool_context[-1]["content"], os.getcwd()
+             )
+             print(spool_context[-1]["content"])
+         except (KeyboardInterrupt, EOFError):
+             print("\nExiting spool mode.")
+             break
+
+     return "\n".join(
+         [msg["content"] for msg in spool_context if msg["role"] == "assistant"]
+     )
+
+
+ def create_new_table(cursor, conn):
+     table_name = input("Enter new table name: ").strip()
+     columns = input("Enter column names separated by commas: ").strip()
+
+     # Note: table_name and columns are interpolated directly into the SQL,
+     # so this is only safe for trusted, interactive input.
+     create_query = (
+         f"CREATE TABLE {table_name} (id INTEGER PRIMARY KEY AUTOINCREMENT, {columns})"
+     )
+     cursor.execute(create_query)
+     conn.commit()
+     print(f"Table '{table_name}' created successfully.")
+
+
+ def delete_table(cursor, conn):
+     table_name = input("Enter table name to delete: ").strip()
+     cursor.execute(f"DROP TABLE IF EXISTS {table_name}")
+     conn.commit()
+     print(f"Table '{table_name}' deleted successfully.")
+
+
+ def add_observation(cursor, conn, table_name):
+     cursor.execute(f"PRAGMA table_info({table_name})")
+     columns = [column[1] for column in cursor.fetchall() if column[1] != "id"]
+
+     values = []
+     for column in columns:
+         value = input(f"Enter value for {column}: ").strip()
+         values.append(value)
+
+     insert_query = f"INSERT INTO {table_name} ({','.join(columns)}) VALUES ({','.join(['?' for _ in columns])})"
+     cursor.execute(insert_query, values)
+     conn.commit()
+     print("Observation added successfully.")
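The `inherit_last` seeding in `enter_spool_mode` above turns the newest history rows back into chat messages. The row layout (command at index 2, output at index 4) matches the `command_history` table columns, and the rows from `get_all` arrive newest-first, so they are reversed before seeding:

```python
# Fake history rows in the same column order as the command_history table:
# (id, timestamp, command, subcommands, output, location), newest first.
last_commands = [
    (2, "2024-01-01T00:01:00", "pwd", "[]", "/home/user", "/home/user"),
    (1, "2024-01-01T00:00:00", "whoami", "[]", "user", "/home/user"),
]

# Mirror the seeding loop: reverse so the conversation reads oldest-first.
spool_context = []
for cmd in reversed(last_commands):
    spool_context.append({"role": "user", "content": cmd[2]})       # command
    spool_context.append({"role": "assistant", "content": cmd[4]})  # output

print([m["content"] for m in spool_context])
# -> ['whoami', 'user', 'pwd', '/home/user']
```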
@@ -0,0 +1,124 @@
+ import os
+ import yaml
+ from jinja2 import Environment, FileSystemLoader, Undefined
+
+
+ class SilentUndefined(Undefined):
+     def _fail_with_undefined_error(self, *args, **kwargs):
+         return ""
+
+
+ class NPCCompiler:
+     def __init__(self, npc_directory):
+         self.npc_directory = npc_directory
+         self.jinja_env = Environment(
+             loader=FileSystemLoader(self.npc_directory), undefined=SilentUndefined
+         )
+         self.npc_cache = {}
+         self.resolved_npcs = {}
+
+     def compile(self, npc_file: str):
+         if not npc_file.endswith(".npc"):
+             raise ValueError("File must have .npc extension")
+
+         # First pass: parse all NPC files without resolving Jinja templates
+         self.parse_all_npcs()
+
+         # Second pass: resolve Jinja templates and merge inherited properties
+         self.resolve_all_npcs()
+
+         # Final pass: resolve any remaining references
+         parsed_content = self.finalize_npc_profile(npc_file)
+         return parsed_content
+
+     def parse_all_npcs(self):
+         for filename in os.listdir(self.npc_directory):
+             if filename.endswith(".npc"):
+                 self.parse_npc_file(filename)
+
+     def parse_npc_file(self, npc_file: str):
+         if npc_file in self.npc_cache:
+             return self.npc_cache[npc_file]
+
+         try:
+             with open(os.path.join(self.npc_directory, npc_file), "r") as f:
+                 npc_content = f.read()
+
+             # Parse YAML without resolving Jinja templates
+             profile = yaml.safe_load(npc_content)
+             self.npc_cache[npc_file] = profile
+
+             return profile
+         except yaml.YAMLError as e:
+             raise ValueError(f"Invalid YAML in NPC profile {npc_file}: {str(e)}")
+
+     def resolve_all_npcs(self):
+         for npc_file in self.npc_cache:
+             self.resolve_npc_profile(npc_file)
+
+     def resolve_npc_profile(self, npc_file: str):
+         if npc_file in self.resolved_npcs:
+             return self.resolved_npcs[npc_file]
+
+         profile = self.npc_cache[npc_file].copy()
+
+         # Resolve Jinja templates
+         for key, value in profile.items():
+             if isinstance(value, str):
+                 template = self.jinja_env.from_string(value)
+                 profile[key] = template.render(self.npc_cache)
+
+         # Handle inheritance
+         if "inherits_from" in profile:
+             parent_profile = self.resolve_npc_profile(profile["inherits_from"] + ".npc")
+             profile = self.merge_profiles(parent_profile, profile)
+
+         self.resolved_npcs[npc_file] = profile
+         return profile
+
+     def finalize_npc_profile(self, npc_file: str):
+         profile = self.resolved_npcs[npc_file].copy()
+
+         # Resolve any remaining references
+         for key, value in profile.items():
+             if isinstance(value, str):
+                 template = self.jinja_env.from_string(value)
+                 profile[key] = template.render(self.resolved_npcs)
+
+         required_keys = [
+             "name",
+             "primary_directive",
+             "suggested_tools_to_use",
+             "restrictions",
+             "model",
+         ]
+         for key in required_keys:
+             if key not in profile:
+                 raise ValueError(f"Missing required key in NPC profile: {key}")
+
+         return profile
+
+     def merge_profiles(self, parent, child):
+         merged = parent.copy()
+         for key, value in child.items():
+             if isinstance(value, list) and key in merged:
+                 merged[key] = merged[key] + value
+             elif isinstance(value, dict) and key in merged:
+                 merged[key] = self.merge_profiles(merged[key], value)
+             else:
+                 merged[key] = value
+         return merged
+
+
+ # Usage:
+ # compiler = NPCCompiler('/path/to/npc/directory')
+ # compiled_script = compiler.compile('your_npc_file.npc')
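The inheritance merge in `merge_profiles` above concatenates lists, recursively merges nested dicts, and lets the child win on scalar values. A standalone copy of that logic behaves like this:

```python
def merge_profiles(parent, child):
    """Standalone copy of NPCCompiler.merge_profiles, for illustration."""
    merged = parent.copy()
    for key, value in child.items():
        if isinstance(value, list) and key in merged:
            # Lists concatenate: parent entries first, then the child's.
            merged[key] = merged[key] + value
        elif isinstance(value, dict) and key in merged:
            # Nested dicts merge recursively.
            merged[key] = merge_profiles(merged[key], value)
        else:
            # Scalars (and new keys): the child wins.
            merged[key] = value
    return merged


parent = {"model": "llama3.1", "tools": ["bash"], "meta": {"a": 1}}
child = {"model": "phi3", "tools": ["cmd"], "meta": {"b": 2}}
print(merge_profiles(parent, child))
# -> {'model': 'phi3', 'tools': ['bash', 'cmd'], 'meta': {'a': 1, 'b': 2}}
```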
@@ -0,0 +1,132 @@
+ # npcsh.py
+ import os
+
+ import readline
+ import atexit
+ from datetime import datetime
+ import pandas as pd
+
+ from .command_history import CommandHistory
+ from .llm_funcs import (
+ get_llm_response,
+ execute_llm_command,
+ check_llm_command,
+ execute_llm_question,
+ execute_llm_thought,
+ )
+ from .modes import (
+ enter_bash_mode,
+ enter_whisper_mode,
+ enter_notes_mode,
+ enter_observation_mode,
+ enter_spool_mode,
+ )
+ from .helpers import log_action, list_directory, read_file
+ from .npc_compiler import NPCCompiler
+
+
+ def execute_command(command, command_history, db_path, npc_compiler):
+ subcommands = []
+ output = ""
+ location = os.getcwd()
+
+ if command.startswith("/"):
+ command = command[1:]
+ log_action("Command Executed", command)
+
+ command_parts = command.split()
+ command_name = command_parts[0]
+ args = command_parts[1:]
+
+ if command_name in ("b", "bash", "zsh", "sh"):
+ output = enter_bash_mode()
+ elif command_name in ("compile", "com"):
+ try:
+ compiled_script = npc_compiler.compile(args[0])
+ output = f"Compiled NPC profile: {compiled_script}"
+ print(output)
+ except Exception as e:
+ output = f"Error compiling NPC profile: {str(e)}"
+ print(output)
+ elif command_name == "whisper":
+ output = enter_whisper_mode()
+ elif command_name == "notes":
+ output = enter_notes_mode(command_history)
+ elif command_name == "obs":
+ output = enter_observation_mode(command_history)
+ elif command_name in ("cmd", "command"):
+ output = execute_llm_command(command, command_history)
+ elif command_name == "?":
+ output = execute_llm_question(command, command_history)
+ elif command_name in ("th", "thought"):
+ output = execute_llm_thought(command, command_history)
+ elif command_name in ("spool", "sp"):
+ inherit_last = int(args[0]) if args else 0
+ output = enter_spool_mode(command_history, inherit_last)
+ else:
+ output = f"Unknown command: {command_name}"
+
+ subcommands = [f"/{command}"]
+ else:
+ output = check_llm_command(command, command_history)
+
+ command_history.add(command, subcommands, output, location)
+ return output
+
+
+ def setup_readline():
+ readline.set_history_length(1000)
+ readline.parse_and_bind("set editing-mode vi")
+ readline.parse_and_bind('"\C-r": reverse-search-history')
+
+
+ def save_readline_history():
+ readline.write_history_file(os.path.expanduser("~/.npcsh_readline_history"))
+
+
+ def main():
+ # Check whether a path to a local history db is set in the environment.
+ if "NPCSH_DB_PATH" in os.environ:
+ db_path = os.environ["NPCSH_DB_PATH"]
+ else:
+ db_path = os.path.expanduser("~/npcsh_history.db")
+ command_history = CommandHistory(db_path)
+
+ # Initialize NPCCompiler
+ npc_directory = "./npc_profiles" # You can change this to your preferred directory
+ os.makedirs(npc_directory, exist_ok=True)
+ npc_compiler = NPCCompiler(npc_directory)
+
+ setup_readline()
+ atexit.register(save_readline_history)
+ atexit.register(command_history.close)
+
+ print("Welcome to npcsh!")
+ while True:
+ try:
+ user_input = input("npcsh> ").strip()
+ if user_input.lower() in ["exit", "quit"]:
+ print("Goodbye!")
+ break
+ else:
+ execute_command(user_input, command_history, db_path, npc_compiler)
+ except (KeyboardInterrupt, EOFError):
+ print("\nGoodbye!")
+ break
+
+
+ if __name__ == "__main__":
+ main()
@@ -0,0 +1,86 @@
+ Metadata-Version: 2.1
+ Name: npcsh
+ Version: 0.1.0
+ Summary: A way to use npcsh
+ Home-page: https://github.com/cagostino/npcsh
+ Author: Christopher Agostino
+ Author-email: cjp.agostino@example.com
+ Classifier: Programming Language :: Python :: 3
+ Classifier: License :: OSI Approved :: MIT License
+ Requires-Python: >=3.10
+ Description-Content-Type: text/markdown
+ License-File: LICENSE
+ Requires-Dist: jinja2
+ Requires-Dist: pandas
+ Requires-Dist: ollama
+ Requires-Dist: requests
+ Requires-Dist: PyYAML
+
+ # npcsh
+
+ Welcome to npcsh, the shell for interacting with NPCs (LLM-powered AI agents). npcsh is meant to be a drop-in replacement for bash/zsh/powershell, letting the user operate their machine directly through an LLM-powered shell.
+
+ Additionally, npcsh introduces a new paradigm of programming for LLMs: users can set up NPC profiles (a la npc_profile.npc) that define an NPC's primary directive, the tools it should use, and its other properties. NPCs can interact with each other, and their primary directives and properties make these relationships explicit through jinja references.
+
+ ## compilation
+
+ Each NPC can be compiled to accomplish its primary directive; any issues it faces are then recorded and associated with the NPC so that it can reference them later through vector search. In any of the modes where a user requests input from an NPC, the NPC will include RAG search results before carrying out the request.
+
+
+ ## Base npcsh
+
+ In the base npcsh shell, inputs are processed by an LLM. The LLM first determines what kind of request the user is making and decides which of the available tools or modes will best enable it to accomplish the request.
+
+ ### spool mode
+
+ Spool mode allows users to have threaded conversations in the shell, i.e. conversations where context is retained over the course of several turns.
+ Users can speak with specific NPCs in spool mode with ```/spool <npc_name>``` and can exit spool mode with ```/exit```.
+
+ ## Built-in NPCs
+ Built-in NPCs offer broad utility to the user and serve as building blocks for more complicated NPCs. They facilitate many common data-processing tasks as well as running commands and executing and testing programs.
+
+ ### Bash NPC
+ The Bash NPC is focused on running bash commands and scripts. The user can converse with it via ```/spool bash``` to interrogate it about the commands it has run and the output they produced.
+ A user can enter bash mode by typing ```/bash``` and exit it by typing ```/bq```.
+ Use the Bash NPC in the profiles of other NPCs by referencing it like ```{{bash}}```.
+
+ ### Command NPC
+
+ The LLM or a specific NPC takes the user's request, writes a command or a script to accomplish the task, then attempts to run it and tweak it until it works or the number of retries (default=5) is exceeded.
+
+ Use the Command NPC by typing ```/cmd <command>```. Chat with the Command NPC in spool mode by typing ```/spool cmd```.
+ Use the Command NPC in the profiles of other NPCs by referencing it like ```{{cmd}}```.
+
+ ### observation mode
+
+ Users can create schemas for recording observations. The idea is to make it easy to record data about essentially any realm of life (e.g. recipe testing, one's own blood pressure or weight, books read, movies watched, daily mood, etc.) without needing a tangled web of applications to do so. Observations can be referenced by the generic npcsh LLM shell or by specific NPCs.
+ Use the Observation NPC by typing ```/obs <observation>```.
+ Chat with the Observation NPC in spool mode by typing ```/spool obs```.
+ Use the Observation NPC in the profiles of other NPCs by referencing it like ```{{obs}}```.
+
+
+ ### question mode
+
+ The user can submit a one-shot question to a general LLM or to a specific NPC.
+ Use it like
+ ```/question <question> <npc_name>```
+ or
+ ```/question <question>```
+
+ You can also chat with the Question NPC in spool mode by typing ```/spool question```.
+
+
+ ### thought mode
+
+ Thought mode is a way to write out some general thoughts and get one-shot feedback from a general LLM or a specific NPC.
+
+ Use it like
+ ```/thought <thought> <npc_name>```
+ or
+ ```/thought <thought>```
+
+ You can also chat with the Thought NPC in spool mode by typing ```/spool thought```.
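Based on the profile paradigm described above, a hypothetical NPC profile file might look like the following. The field names mirror the keys the compiler requires (name, primary_directive, suggested_tools_to_use, restrictions, model); the specific values and the model name are illustrative assumptions, not part of the package:

```yaml
# Hypothetical example profile (e.g. data_wrangler.npc).
# {{bash}} and {{cmd}} are jinja references to the built-in NPCs above.
name: data_wrangler
primary_directive: "Clean and summarize CSV files on request."
suggested_tools_to_use:
  - "{{bash}}"
  - "{{cmd}}"
restrictions:
  - "Do not delete files."
model: llama3
```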
@@ -0,0 +1,17 @@
+ LICENSE
+ README.md
+ setup.py
+ npcsh/__init__.py
+ npcsh/command_history.py
+ npcsh/helpers.py
+ npcsh/llm_funcs.py
+ npcsh/main.py
+ npcsh/modes.py
+ npcsh/npc_compiler.py
+ npcsh/npcsh.py
+ npcsh.egg-info/PKG-INFO
+ npcsh.egg-info/SOURCES.txt
+ npcsh.egg-info/dependency_links.txt
+ npcsh.egg-info/entry_points.txt
+ npcsh.egg-info/requires.txt
+ npcsh.egg-info/top_level.txt
@@ -0,0 +1,2 @@
+ [console_scripts]
+ npcsh = npcsh.npcsh:main
@@ -0,0 +1,5 @@
+ jinja2
+ pandas
+ ollama
+ requests
+ PyYAML
@@ -0,0 +1 @@
+ npcsh
npcsh-0.1.0/setup.cfg ADDED
@@ -0,0 +1,4 @@
+ [egg_info]
+ tag_build =
+ tag_date = 0
+
npcsh-0.1.0/setup.py ADDED
@@ -0,0 +1,30 @@
+ from setuptools import setup, find_packages
+
+ setup(
+ name="npcsh",
+ version="0.1.0",
+ packages=find_packages(exclude=["tests*"]),
+ install_requires=[
+ "jinja2",
+ "pandas",
+ "ollama",
+ "requests",
+ "PyYAML",
+ ],
+ entry_points={
+ "console_scripts": [
+ "npcsh=npcsh.npcsh:main",
+ ],
+ },
+ author="Christopher Agostino",
+ author_email="cjp.agostino@example.com",
+ description="A way to use npcsh",
+ long_description=open("README.md").read(),
+ long_description_content_type="text/markdown",
+ url="https://github.com/cagostino/npcsh",
+ classifiers=[
+ "Programming Language :: Python :: 3",
+ "License :: OSI Approved :: MIT License",
+ ],
+ python_requires=">=3.10",
+ )