gpterm 0.4.1 → 0.5.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (5)
  1. checksums.yaml +4 -4
  2. data/README.md +3 -1
  3. data/lib/client.rb +42 -114
  4. data/lib/gpterm.rb +59 -31
  5. metadata +1 -1
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: e77a65025792f307da8e9cbc3d152c805ac99693ec166aefb34daf33e03342a1
-  data.tar.gz: a60503f8eccdb64bce12bbc04190b9f56607d3ee55dcbfbfe9ac270162fdf97b
+  metadata.gz: fe8a62780b9a0890cdb4080971d6810c36b160da4bdab39f6a77a7a69c104e07
+  data.tar.gz: d48d93b725e8714a8213513ef30a06a9771b9866da8124a8765b95b3060365c1
 SHA512:
-  metadata.gz: edccfb30f83becb981042ee7c533fc4ad3754fac9fc6a9dd70d2b51fa7055abe1c09ee52fb814612c16dc0926fa72be53cfba563a76c9b00cef3bd202ec986e1
-  data.tar.gz: acde43696b1afc0f762d6b34a5a6a6fdde5e150a849f269e1b9fc8abf68c17dea4f6e6cbbb90146c10240cfa3d95765e31d18830c61c114e6b667624632c90c0
+  metadata.gz: d6cc7c0a7ae13f2f7d826cae3bbbe479cf12bb54f486d4c8eb0dba33896fa462709cd431f2577e351a40a41b4804502e1f4b991183aa3f32b50f7eb073a311a2
+  data.tar.gz: e9cb1e4214ac2c0f2fe8797ec35d821812dc883545b9cb61be0b1bc3ef50e79f3a9875c6a9f8a6a820474de90c9ac2bb38f966af8979ba6004928f60e402335a
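The checksum section records fresh SHA256 and SHA512 digests for the rebuilt `metadata.gz` and `data.tar.gz`. As a sanity check, the same digests can be recomputed locally with Ruby's stdlib; a minimal sketch (the file path is illustrative, not from the gem):

```ruby
require 'digest'

# Compute the SHA256 and SHA512 hex digests that checksums.yaml
# records for a packaged file inside the gem.
def gem_checksums(path)
  data = File.binread(path)
  {
    sha256: Digest::SHA256.hexdigest(data),
    sha512: Digest::SHA512.hexdigest(data)
  }
end
```

Comparing the output against the `+` lines above confirms the published archives match the registry's record.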
data/README.md CHANGED
@@ -1,7 +1,9 @@
-# gpterm
+# gpterm: a natural language interface for your terminal
 
 **WARNING:** `gpterm` has very few guardrails. If used indiscriminately, it can wipe your entire system or leak information.
 
+![gpterm-fb-for-dogs](https://github.com/basicallydan/gpterm/assets/516325/7a5bed4f-6f41-4d0a-85d9-79fb071c1aaf)
+
 `gpterm` is a powerful, flexible and dangerous command-line tool designed to help you generate commands for your terminal using OpenAI's Chat Completions. It will not execute commands without your consent, but please do check which commands it is presenting before you let it execute them. Like so:
 
 ```bash
data/lib/client.rb CHANGED
@@ -1,157 +1,85 @@
 require "openai"
+require 'yaml'
 
 class Client
-  attr_reader :openapi_client
+  attr_reader :openai_client
   attr_reader :config
 
   def initialize(config)
     @config = config
-    @openapi_client = OpenAI::Client.new(access_token: config["openapi_key"])
+    @openai_client = OpenAI::Client.new(access_token: config["openapi_key"])
+    @prompts = YAML.load_file('config/prompts.yml')
   end
 
-  def first_prompt(prompt)
-    system_prompt = <<~PROMPT
-      You are a command-line application being executed inside of a directory in a macOS environment, on the user's terminal command line.
-
-      You are executed by running `gpterm` in the terminal, and you are provided with a prompt to respond to with the -p flag.
-
-      Users can add a preset prompt by running `gpterm -s <name>,<prompt>`.
-
-      The eventual output to the user would be a list of commands that they can run in their terminal to accomplish a task.
-
-      You have the ability to run any command that this system can run, and you can read the output of those commands.
-
-      However, any command which would ordinarily change the directory, such as cd, will not change the location of the directory in which you are running. To execute a command in a different directory, you must chain the cd command with the command you want to run, like so: `cd /path/to/directory && command`.
-
-      The user is trying to accomplish a task using the terminal, but they are not sure how to do it.
-    PROMPT
+  def first_prompt(user_goal_prompt)
+    system_prompt = @prompts["system"]
 
     if @config["send_path"]
       system_prompt += <<~PROMPT
+        # ADDITIONAL CONTEXT:
+
         The user's PATH environment variable is:
         #{ENV["PATH"]}
       PROMPT
     end
 
-    full_prompt = <<~PROMPT
-      Your FIRST response should be a list of commands that will be automatically executed to gather more information about the user's system.
-      - The commands MUST NOT make any changes to the user's system.
-      - The commands MUST NOT make any changes to any files on the user's system.
-      - The commands MUST NOT write to any files using the > or >> operators.
-      - The commands MUST NOT use the touch command.
-      - The commands MUST NOT use echo or any other command to write into files using the > or >> operators.
-      - The commands MUST NOT send any data to any external servers.
-      - The commands MUST NOT contain any placeholders in angle brackets like <this>.
-      - The commands MUST NOT contain any plain language instructions, or backticks indicating where the commands begin or end.
-      - The commands MAY gather information about the user's system, such as the version of a software package, or the contents of a file.
-      - The commands CAN pipe their output into other commands.
-      - The commands SHOULD tend to gather more verbose information INSTEAD OF more concise information.
-      This will help you to provide a more accurate response to the user's goal.
-      Therefore your FIRST response MUST contain ONLY a list of commands and nothing else.
-
-      VALID example response. These commands are examples of commands which CAN be included in your FIRST response:
-
-      for file in *; do cat "$file"; done
-      which ls
-      which git
-      which brew
-      git diff
-      git status
-
-      INVALID example response. These commands are examples of commands which MUST NOT be included in your FIRST response:
-
-      touch file.txt
-      git add .
-      git push
-
-      If you cannot create a VALID response, simply return the string "$$cannot_compute$$" and the user will be asked to provide a new prompt.
-      If you do not need to gather more information, simply return the string "$$no_gathering_needed$$" and the next step will be executed.
-      You probably will need to gather information.
-      If you need to gather information directly from the user, you will be able to do so in the next step.
-
-      The user's goal prompt is:
-      "#{prompt}"
-      Commands to execute to gather more information about the user's system before providing the response which will accomplish the user's goal:
+    user_prompt = @prompts["info_gathering"]
+    user_prompt += <<~PROMPT
+      The user's GOAL PROMPT is:
+
+      "#{user_goal_prompt}"
+
+      Please respond with one or more commands to execute to gather more information about the user's system before providing the response which will accomplish the user's goal.
+
+      COMMANDS:
     PROMPT
 
     @messages = [
-      { role: "system", content: system_prompt },
-      { role: "user", content: full_prompt }
+      { role: "system", content: system_prompt }
     ]
 
-    response = openapi_client.chat(
-      parameters: {
-        model: "gpt-4-turbo-preview",
-        messages: @messages,
-        temperature: 0.6,
-      }
-    )
-    content = response.dig("choices", 0, "message", "content")
-
-    @messages << { role: "assistant", content: content }
-
-    content
+    continue_conversation(user_prompt)
   end
 
-  def offer_information_prompt(prompt)
-    full_prompt = <<~PROMPT
-      This is the output of the command you provided to the user in the previous step.
-
-      #{prompt}
-
-      Before you provide the user with the next command, you have the opportunity to ask the user to provide more information so you can better tailor your response to their needs.
-
-      If you would like to ask the user for more information, please provide a prompt that asks the user for the information you need.
-      - Your prompt MUST ONLY contain one question. You will be able to ask another question in the next step.
-      If you have all the information you need, simply return the string "$$no_more_information_needed$$" and the next step will be executed.
-    PROMPT
+  def offer_information_prompt(previous_output, previous_output_type = :question_response)
+    question_prompt = if previous_output_type == :question_response
+      <<~PROMPT
+        This is the output of the question you asked the user in the previous step.
 
-    @messages << { role: "user", content: full_prompt }
-
-    response = openapi_client.chat(
-      parameters: {
-        model: "gpt-4-turbo-preview",
-        messages: @messages,
-        temperature: 0.6,
-      }
-    )
+        #{previous_output}
+      PROMPT
+    else
+      <<~PROMPT
+        This is the output of the command you provided to the user in the previous step.
 
-    content = response.dig("choices", 0, "message", "content")
+        #{previous_output}
+      PROMPT
+    end
 
-    @messages << { role: "assistant", content: content }
+    question_prompt += @prompts["user_question"]
 
-    content
+    continue_conversation(question_prompt)
   end
 
   def final_prompt(prompt)
-    full_prompt = <<~PROMPT
+    goal_commands_prompt = <<~PROMPT
       This is the output of the command you provided to the user in the previous step.
 
       #{prompt}
 
-      Your NEXT response should be a list of commands that will be automatically executed to fulfill the user's goal.
-      - The commands may make changes to the user's system.
-      - The commands may install new software using package managers like Homebrew
-      - The commands MUST all start with a valid command that you would run in the terminal
-      - The commands MUST NOT contain any placeholders in angle brackets like <this>.
-      - The response MUST NOT contain any plain language instructions, or backticks indicating where the commands begin or end.
-      - THe response MUST NOT start or end with backticks.
-      - The response MUST NOT end with a newline character.
-      Therefore your NEXT response MUST contain ONLY a list of commands and nothing else.
+    PROMPT
 
-      VALID example response. These commands are examples of commands which CAN be included in your FINAL response:
+    goal_commands_prompt += @prompts["goal_commands"]
 
-      ls
-      mkdir new_directory
-      brew install git
-      git commit -m "This is a great commit message"
+    continue_conversation(goal_commands_prompt)
+  end
 
-      If you cannot keep to this restriction, simply return the string "$$cannot_compute$$" and the user will be asked to provide a new prompt.
-    PROMPT
+  private
 
-    @messages << { role: "user", content: full_prompt }
+  def continue_conversation(prompt)
+    @messages << { role: "user", content: prompt }
 
-    response = openapi_client.chat(
+    response = openai_client.chat(
       parameters: {
         model: "gpt-4-turbo-preview",
         messages: @messages,
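The substance of the `client.rb` change is visible in the diff: three near-identical `chat` calls collapse into one private `continue_conversation`, and the hardcoded heredoc prompts move into `config/prompts.yml`. A minimal sketch of that conversation-state pattern, with a lambda standing in for the OpenAI client (the class and variable names here are illustrative, not from the gem):

```ruby
require 'yaml'

# Sketch of the pattern Client now uses: a message log seeded with a
# system prompt loaded from YAML, and one shared method that appends
# the user turn, calls the backend, and records the assistant reply.
class ConversationSketch
  attr_reader :messages

  def initialize(chat_backend, prompts_yaml)
    @backend = chat_backend
    @prompts = YAML.safe_load(prompts_yaml)
    @messages = [{ role: 'system', content: @prompts['system'] }]
  end

  def continue_conversation(prompt)
    @messages << { role: 'user', content: prompt }
    content = @backend.call(@messages)
    @messages << { role: 'assistant', content: content }
    content
  end
end
```

In the real `Client`, the backend is `openai_client.chat(parameters: { model:, messages:, temperature: })` followed by `response.dig("choices", 0, "message", "content")`, as the surviving context lines show.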
data/lib/gpterm.rb CHANGED
@@ -24,36 +24,66 @@ class GPTerm
       name = @options[:preset_prompt][0]
       prompt = @options[:preset_prompt][1]
       AppConfig.add_preset(@config, name, prompt)
-      puts "Preset prompt '#{name}' saved with prompt '#{prompt}'".colorize(:green)
-      exit
+      exit_with_message("Preset prompt '#{name}' saved with prompt '#{prompt}'", :green)
     elsif @options[:prompt]
-      start_prompt(@options[:prompt])
+      start_conversation(@options[:prompt])
     end
   end
 
   private
 
-  def execute_command(command)
+  def execute_shell_command(command)
     stdout, stderr, status = Open3.capture3(command)
     [stdout, stderr, status.exitstatus]
   end
 
-  def start_prompt(prompt)
+  def exit_with_message(message, color)
+    if color
+      puts message.colorize(color)
+    else
+      puts message
+    end
+
+    exit
+  end
+
+  # Ensures the user enters "y" or "n"
+  def get_yes_or_no
+    input = STDIN.gets.chomp.downcase
+    while ['y', 'n'].include?(input) == false
+      puts 'Please enter "y/Y" or "n/N":'.colorize(:yellow)
+      input = STDIN.gets.chomp.downcase
+    end
+    input
+  end
+
+  # Ensures the user enters a non-empty value
+  def get_non_empty_input
+    input = STDIN.gets.chomp.strip
+    while input.length == 0
+      puts 'Please enter a non-empty value:'.colorize(:yellow)
+      input = STDIN.gets.chomp.strip
+    end
+    input
+  end
+
+  def start_conversation(prompt)
     message = @client.first_prompt(prompt)
 
     if message.downcase == '$$cannot_compute$$'
-      puts 'Sorry, a command could not be generated for that prompt. Try another.'.colorize(:red)
-      exit
+      exit_with_message('Sorry, a command could not be generated for that prompt. Try another.', :red)
     end
 
     if message.downcase == '$$no_gathering_needed$$'
       puts 'No information gathering needed'.colorize(:magenta)
       output = "No information gathering was needed."
+    elsif message.downcase == '$$cannot_compute$$'
+      exit_with_message('Sorry, a command could not be generated for that prompt. Try another.', :red)
     else
       puts 'Information gathering command:'.colorize(:magenta)
       puts message.gsub(/^/, "#{" $".colorize(:blue)} ")
-      puts 'Do you want to execute this command? (Y/n)'.colorize(:yellow)
-      continue = STDIN.gets.chomp
+      puts 'Do you want to execute this command? (Y/n then hit return)'.colorize(:yellow)
+      continue = get_yes_or_no
 
       unless continue.downcase == 'y'
         exit
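The new `get_yes_or_no` helper re-prompts until the user types `y` or `n`, which fixes the old behaviour where any stray input fell through to the `unless continue.downcase == 'y'` guard and silently exited. Rewritten here with an injectable IO so the loop can be exercised without a terminal (the gem itself reads `STDIN` directly and colorizes the prompt):

```ruby
# Re-prompt loop in the shape 0.5.0 introduces, taking the input
# stream as a parameter instead of reading STDIN, and without the
# `colorize` dependency.
def get_yes_or_no(io)
  input = io.gets.chomp.downcase
  until ['y', 'n'].include?(input)
    puts 'Please enter "y/Y" or "n/N":'
    input = io.gets.chomp.downcase
  end
  input
end
```

`get_non_empty_input` follows the same structure with `strip` and a length check instead of the membership test.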
@@ -68,19 +98,19 @@ class GPTerm
       end
     end
 
-    output = offer_more_information(output)
+    output = @client.offer_information_prompt(output, :shell_output_response)
 
     while output.downcase != '$$no_more_information_needed$$'
       puts "You have been asked to provide more information with this command:".colorize(:magenta)
       puts output.gsub(/^/, "#{" >".colorize(:blue)} ")
       puts "What is your response? (Type 'skip' to skip this step and force the final command to be generated)".colorize(:yellow)
 
-      response = STDIN.gets.chomp
+      response = get_non_empty_input
 
       if response.downcase == 'skip'
         output = '$$no_more_information_needed$$'
       else
-        output = offer_more_information(response)
+        output = @client.offer_information_prompt(response, :question_response)
       end
     end
 
@@ -88,12 +118,16 @@ class GPTerm
 
     message = @client.final_prompt(output)
 
+    if message.downcase == '$$cannot_compute$$'
+      exit_with_message('Sorry, a command could not be generated for that prompt. Try another.', :red)
+    end
+
     puts 'Generated command to accomplish your goal:'.colorize(:magenta)
     puts message.gsub(/^/, "#{" $".colorize(:green)} ")
 
-    puts 'Do you want to execute this command? (Y/n)'.colorize(:yellow)
+    puts 'Do you want to execute this command? (Y/n then hit return)'.colorize(:yellow)
 
-    continue = STDIN.gets.chomp
+    continue = get_yes_or_no
 
     unless continue.downcase == 'y'
       exit
@@ -102,12 +136,11 @@ class GPTerm
     commands = message.split("\n")
 
     commands.each do |command|
-      stdout, stderr, exit_status = execute_command(command)
+      stdout, stderr, exit_status = execute_shell_command(command)
       if exit_status != 0
         puts "#{command} failed with the following output:".colorize(:red)
         puts "#{stderr.gsub(/^/, " ")}".colorize(:red) if stderr.length > 0
-        puts " Exit status: #{exit_status}".colorize(:red)
-        exit
+        exit_with_message(" Exit status: #{exit_status}", :red)
       end
       puts stdout if stdout.length > 0
       # I'm doing this here because git for some reason always returns the output of a push to stderr,
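The renamed `execute_shell_command` is a thin wrapper over `Open3.capture3`, returning stdout, stderr, and the numeric exit status as a triple. Reproduced here as a standalone sketch:

```ruby
require 'open3'

# Same shape as execute_shell_command in gpterm.rb: run one command
# and return [stdout, stderr, exit status].
def execute_shell_command(command)
  stdout, stderr, status = Open3.capture3(command)
  [stdout, stderr, status.exitstatus]
end
```

Capturing stderr separately is what lets the loop above print it on failure, and also why the trailing comment about `git push` matters: git writes push progress to stderr even on success, so gpterm prints stderr regardless of exit status.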
@@ -123,13 +156,15 @@ class GPTerm
     new_config = {}
     puts "Before we get started, we need to configure the application. All the info you provide will be saved in #{AppConfig::CONFIG_FILE}.".colorize(:magenta)
 
-    puts "Enter your OpenAI API key's \"SECRET KEY\" value: ".colorize(:yellow)
+    puts "Enter your OpenAI API key's \"SECRET KEY\" value then hit return: ".colorize(:yellow)
     new_config['openapi_key'] = STDIN.gets.chomp
 
     puts "Your PATH environment variable is: #{ENV['PATH']}".colorize(:magenta)
-    puts 'Are you happy for your PATH to be sent to OpenAI to help with command generation? (Y/n) '.colorize(:yellow)
+    puts 'Are you happy for your PATH to be sent to OpenAI to help with command generation? (Y/n then hit return) '.colorize(:yellow)
+
+    input = get_yes_or_no
 
-    if STDIN.gets.chomp.downcase == 'y'
+    if input == 'y'
       new_config['send_path'] = true
     else
       new_config['send_path'] = false
@@ -163,14 +198,12 @@ class GPTerm
       opts.banner = "gpterm config [--openapi_key <value>|--send_path <true|false>]"
       opts.on("--openapi_key VALUE", "Set the OpenAI API key") do |v|
         AppConfig.add_openapi_key(@config, v)
-        puts "OpenAI API key saved"
-        exit
+        exit_with_message("OpenAI API key saved")
       end
       opts.on("--send_path", "Send the PATH environment variable to OpenAI") do
         @config['send_path'] = true
         AppConfig.save_config(@config)
-        puts "Your PATH environment variable will be sent to OpenAI to help with command generation"
-        exit
+        exit_with_message("Your PATH environment variable will be sent to OpenAI to help with command generation")
       end
     end
   }
@@ -196,19 +229,14 @@ class GPTerm
       subcommands[command][:option_parser].parse!
       subcommands[command][:argument_parser].call(ARGV) if subcommands[command][:argument_parser]
     elsif command == 'help'
-      puts main
-      exit
+      exit_with_message(main)
     elsif command
       options[:prompt] = command
     else
       puts 'Enter a prompt to generate text from:'.colorize(:yellow)
-      options[:prompt] = STDIN.gets.chomp
+      options[:prompt] = get_non_empty_input
     end
 
     options
   end
-
-  def offer_more_information(output)
-    output = @client.offer_information_prompt(output)
-  end
 end
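One wrinkle in this release worth noting: the diff defines `exit_with_message(message, color)` with two required parameters, yet the `--openapi_key`, `--send_path`, and `help` branches call it with a single argument, which would raise `ArgumentError` in standard Ruby. A sketch of the signature that reconciles both call styles (the `nil` default is an assumption on my part, not taken from the diff, and plain `puts` replaces the gem's `colorize` call):

```ruby
# Consolidated "print then exit" helper in the shape the one-argument
# call sites require: color defaults to nil, in which case the
# message is printed uncolored.
def exit_with_message(message, color = nil)
  # The gem calls message.colorize(color) here when color is given.
  puts message
  exit
end
```

`exit` raises `SystemExit` rather than terminating the interpreter directly, which is also what makes the helper safe to call from inside OptionParser blocks.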
metadata CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: gpterm
 version: !ruby/object:Gem::Version
-  version: 0.4.1
+  version: 0.5.0
 platform: ruby
 authors:
 - Dan Hough