gpterm 0.3.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +7 -0
- data/LICENSE +7 -0
- data/README.md +55 -0
- data/bin/gpterm +5 -0
- data/lib/client.rb +165 -0
- data/lib/config.rb +38 -0
- data/lib/gpterm.rb +188 -0
- metadata +67 -0
checksums.yaml
ADDED
@@ -0,0 +1,7 @@
+---
+SHA256:
+  metadata.gz: dc7687047e7d6b18c984706991d29e2b3fea168159c3688aac9c039b98f63974
+  data.tar.gz: 0ded9b23d8ca54f53b46639b3854b2f2cd5fd17521c417be045611be409f8140
+SHA512:
+  metadata.gz: ecb6f33683c67a269e43f24cf13ba1022b22fb37b2e0b01955706dff1cc19bba36b13d32d51cd05d75450ec952d5e9a2c88caa6b699faf7ec1fcd82dd1610faa
+  data.tar.gz: de4343ac31ecd56c910f83ee6390996f88beabcaf3f4fd4e0dfbb3c0a3a79e4fda682f467b943f32436365ec0d572cb61cf8091d96c30b61253cefcbb57f493c
data/LICENSE
ADDED
@@ -0,0 +1,7 @@
+Copyright 2024 Daniel Hough
+
+Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
data/README.md
ADDED
@@ -0,0 +1,55 @@
+# gpterm
+
+**WARNING:** `gpterm` has very few guardrails. If used indiscriminately, it can wipe your entire system or leak information.
+
+`gpterm` is a powerful, flexible and dangerous command-line tool designed to help you generate commands for your terminal using OpenAI's Chat Completions. It will not execute commands without your consent, but please do check which commands it is presenting before you let it execute them. Like so:
+
+```bash
+$ gpterm "Using git diff to gather info, commit all the latest changes with a descriptive commit message, then push the changes"
+$ # It gathers info, asks for consent, and does the thing
+$ [main 94a9292] Update README with gpterm usage example
+$ 1 file changed, 4 insertions(+)
+```
+
+## Getting Started
+
+You can install it from RubyGems using `gem install gpterm`, or you can clone it and run it straight from the source.
+
+Ensure you have Ruby installed on your system. Then, follow these steps:
+
+- Clone the repository or download the source code.
+- Navigate to the gpterm directory and run `bundle install` to install dependencies.
+- Start the application by running `./bin/gpterm`
+
+## Configuration
+
+On first run, you'll be prompted to enter your OpenAI API key. This is required for the application to interact with OpenAI's API. You will also be asked whether you'd like your `PATH` variable to be sent in the prompt, which can help with command generation.
+
+## Usage
+
+`gpterm <prompt> [options] [subcommand [options]]`
+
+**Subcommands:**
+
+- `preset` - `gpterm preset <name> <prompt>`
+- `config` - `gpterm config [--openapi_key <value>|--send_path <true|false>]`
+
+**Options:**
+`-v`, `--verbose` Run verbosely
+
+## Presets
+
+You can save and reuse preset prompts for common or repeated tasks. To create a preset, use the `preset` subcommand with a name and a prompt (see Usage above).
+To use a preset, pass just its name to the `preset` subcommand.
+
+## Contributing
+
+Contributions are welcome! Feel free to open an issue or pull request.
+
+## License
+
+gpterm is open-source software licensed under the MIT license.
+
+## Author
+
+[Dan Hough](https://danhough.com)
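As a concrete illustration of the Usage and Presets sections in the README above, a preset round trip might look like this. The preset name `push-all` and its prompt are invented for the example; the command shapes follow the `preset` subcommand as implemented in `lib/gpterm.rb`.

```bash
# Save a reusable prompt under the name "push-all"
$ gpterm preset push-all "Commit all the latest changes with a descriptive commit message, then push"
# Reuse it later by passing only the preset name to the preset subcommand
$ gpterm preset push-all
```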
data/bin/gpterm
ADDED
data/lib/client.rb
ADDED
@@ -0,0 +1,165 @@
+require "openai"
+
+class Client
+  attr_reader :openapi_client
+  attr_reader :config
+
+  def initialize(config)
+    @config = config
+    @openapi_client = OpenAI::Client.new(access_token: config["openapi_key"])
+  end
+
+  def first_prompt(prompt)
+    system_prompt = <<~PROMPT
+      You are a command-line application being executed inside of a directory in a macOS environment, on the user's terminal command line.
+
+      You are executed by running `gpterm` in the terminal, and you are provided with a prompt to respond to with the -p flag.
+
+      Users can add a preset prompt by running `gpterm -s <name>,<prompt>`.
+
+      The eventual output to the user would be a list of commands that they can run in their terminal to accomplish a task.
+
+      You have the ability to run any command that this system can run, and you can read the output of those commands.
+
+      The user is trying to accomplish a task using the terminal, but they are not sure how to do it.
+    PROMPT
+
+    if @config["send_path"]
+      system_prompt += <<~PROMPT
+        The user's PATH environment variable is:
+        #{ENV["PATH"]}
+      PROMPT
+    end
+
+    full_prompt = <<~PROMPT
+      Your FIRST response should be a list of commands that will be automatically executed to gather more information about the user's system.
+      - The commands MUST NOT make any changes to the user's system.
+      - The commands MUST NOT make any changes to any files on the user's system.
+      - The commands MUST NOT write to any files using the > or >> operators.
+      - The commands MUST NOT use the touch command.
+      - The commands MUST NOT use echo or any other command to write into files using the > or >> operators.
+      - The commands MUST NOT send any data to any external servers.
+      - The commands MUST NOT contain any placeholders in angle brackets like <this>.
+      - The commands MUST NOT contain any plain language instructions, or backticks indicating where the commands begin or end.
+      - The commands MAY gather information about the user's system, such as the version of a software package, or the contents of a file.
+      - The commands CAN pipe their output into other commands.
+      - The commands SHOULD tend to gather more verbose information INSTEAD OF more concise information.
+      This will help you to provide a more accurate response to the user's goal.
+      Therefore your FIRST response MUST contain ONLY a list of commands and nothing else.
+
+      VALID example response. These commands are examples of commands which CAN be included in your FIRST response:
+
+      for file in *; do cat "$file"; done
+      which ls
+      which git
+      which brew
+      git diff
+      git status
+
+      INVALID example response. These commands are examples of commands which MUST NOT be included in your FIRST response:
+
+      touch file.txt
+      git add .
+      git push
+
+      If you cannot create a VALID response, simply return the string "$$cannot_compute$$" and the user will be asked to provide a new prompt.
+      If you do not need to gather more information, simply return the string "$$no_gathering_needed$$" and the next step will be executed.
+      You probably will need to gather information.
+      If you need to gather information directly from the user, you will be able to do so in the next step.
+
+      The user's goal prompt is:
+      "#{prompt}"
+      Commands to execute to gather more information about the user's system before providing the response which will accomplish the user's goal:
+    PROMPT
+
+    @messages = [
+      { role: "system", content: system_prompt },
+      { role: "user", content: full_prompt }
+    ]
+
+    response = openapi_client.chat(
+      parameters: {
+        model: "gpt-4-turbo-preview",
+        messages: @messages,
+        temperature: 0.6,
+      }
+    )
+    content = response.dig("choices", 0, "message", "content")
+
+    @messages << { role: "assistant", content: content }
+
+    content
+  end
+
+  def offer_information_prompt(prompt)
+    full_prompt = <<~PROMPT
+      This is the output of the command you provided to the user in the previous step.
+
+      #{prompt}
+
+      Before you provide the user with the next command, you have the opportunity to ask the user to provide more information so you can better tailor your response to their needs.
+
+      If you would like to ask the user for more information, please provide a prompt that asks the user for the information you need.
+      - Your prompt MUST ONLY contain one question. You will be able to ask another question in the next step.
+      If you have all the information you need, simply return the string "$$no_more_information_needed$$" and the next step will be executed.
+    PROMPT
+
+    @messages << { role: "user", content: full_prompt }
+
+    response = openapi_client.chat(
+      parameters: {
+        model: "gpt-4-turbo-preview",
+        messages: @messages,
+        temperature: 0.6,
+      }
+    )
+
+    content = response.dig("choices", 0, "message", "content")
+
+    @messages << { role: "assistant", content: content }
+
+    content
+  end
+
+  def final_prompt(prompt)
+    full_prompt = <<~PROMPT
+      This is the output of the command you provided to the user in the previous step.
+
+      #{prompt}
+
+      Your NEXT response should be a list of commands that will be automatically executed to fulfill the user's goal.
+      - The commands may make changes to the user's system.
+      - The commands may install new software using package managers like Homebrew
+      - The commands MUST all start with a valid command that you would run in the terminal
+      - The commands MUST NOT contain any placeholders in angle brackets like <this>.
+      - The response MUST NOT contain any plain language instructions, or backticks indicating where the commands begin or end.
+      - The response MUST NOT start or end with backticks.
+      - The response MUST NOT end with a newline character.
+      Therefore your NEXT response MUST contain ONLY a list of commands and nothing else.
+
+      VALID example response. These commands are examples of commands which CAN be included in your FINAL response:
+
+      ls
+      mkdir new_directory
+      brew install git
+      git commit -m "This is a great commit message"
+
+      If you cannot keep to this restriction, simply return the string "$$cannot_compute$$" and the user will be asked to provide a new prompt.
+    PROMPT
+
+    @messages << { role: "user", content: full_prompt }
+
+    response = openapi_client.chat(
+      parameters: {
+        model: "gpt-4-turbo-preview",
+        messages: @messages,
+        temperature: 0.6,
+      }
+    )
+    content = response.dig("choices", 0, "message", "content")
+
+    @messages << { role: "assistant", content: content }
+
+    content
+  end
+end
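For orientation, here is a minimal sketch of driving `Client` directly from Ruby, outside the `gpterm` executable. It assumes the gem's `lib` directory is on the load path and that a config with a valid `openapi_key` has already been saved; the goal prompt is invented.

```ruby
require_relative 'config'
require_relative 'client'

# Load the saved config (openapi_key, send_path, presets) and build a client
config = AppConfig.load_config
client = Client.new(config)

# Step 1 of the flow: ask for read-only information-gathering commands for a goal
commands = client.first_prompt("Show the five largest files in this directory")
puts commands
```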
data/lib/config.rb
ADDED
@@ -0,0 +1,38 @@
+require 'yaml'
+
+module AppConfig
+  CONFIG_FILE = File.join(Dir.home, '.gpterm', 'config.yml').freeze
+
+  # Check if the directory exists, if not, create it
+  unless File.directory?(File.dirname(CONFIG_FILE))
+    Dir.mkdir(File.dirname(CONFIG_FILE))
+  end
+
+  def self.load_config
+    YAML.load_file(CONFIG_FILE)
+  rescue Errno::ENOENT
+    default_config
+  end
+
+  def self.save_config(config)
+    File.write(CONFIG_FILE, config.to_yaml)
+  end
+
+  def self.add_openapi_key(config, openapi_key)
+    config['openapi_key'] = openapi_key
+    save_config(config)
+  end
+
+  def self.add_preset(config, preset_name, preset_prompt)
+    # This is a YAML file so we need to make sure the presets key exists
+    config['presets'] ||= {}
+    config['presets'][preset_name] = preset_prompt
+    save_config(config)
+  end
+
+  def self.default_config
+    {
+      'openapi_key' => ''
+    }
+  end
+end
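Given the helpers above, the YAML that `save_config` writes to `~/.gpterm/config.yml` would look roughly like this; the key values and the preset entry are placeholders, not contents of the released gem.

```yaml
---
openapi_key: sk-xxxxxxxxxxxxxxxx
send_path: true
presets:
  push-all: Commit all the latest changes with a descriptive commit message, then push
```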
data/lib/gpterm.rb
ADDED
@@ -0,0 +1,188 @@
+require 'optparse'
+require 'colorize'
+
+require_relative 'config'
+require_relative 'client'
+
+class GPTerm
+  def initialize
+    @config = load_config
+    @options = parse_options
+    @client = Client.new(@config)
+  end
+
+  def run
+    if @options[:preset_prompt]
+      name = @options[:preset_prompt][0]
+      prompt = @options[:preset_prompt][1]
+      AppConfig.add_preset(@config, name, prompt)
+      puts "Preset prompt '#{name}' saved with prompt '#{prompt}'".colorize(:green)
+      exit
+    elsif @options[:prompt]
+      start_prompt(@options[:prompt])
+    end
+  end
+
+  private
+
+  def start_prompt(prompt)
+    message = @client.first_prompt(prompt)
+
+    if message.downcase == '$$cannot_compute$$'
+      puts 'Sorry, a command could not be generated for that prompt. Try another.'.colorize(:red)
+      exit
+    end
+
+    if message.downcase == '$$no_gathering_needed$$'
+      puts 'No information gathering needed'.colorize(:magenta)
+      output = "No information gathering was needed."
+    else
+      puts 'Information gathering command:'.colorize(:magenta)
+      puts message.gsub(/^/, "#{" $".colorize(:blue)} ")
+      puts 'Do you want to execute this command? (Y/n)'.colorize(:yellow)
+      continue = STDIN.gets.chomp
+
+      unless continue.downcase == 'y'
+        exit
+      end
+
+      puts 'Running command...'
+      output = `#{message}`
+
+      puts 'Output:'
+      puts output
+    end
+
+    output = offer_more_information(output)
+
+    while output.downcase != '$$no_more_information_needed$$'
+      puts "You have been asked to provide more information with this command:".colorize(:magenta)
+      puts output.gsub(/^/, "#{" >".colorize(:blue)} ")
+      puts "What is your response? (Type 'skip' to skip this step and force the final command to be generated)".colorize(:yellow)
+
+      response = STDIN.gets.chomp
+
+      if response.downcase == 'skip'
+        output = '$$no_more_information_needed$$'
+      else
+        output = offer_more_information(response)
+      end
+    end
+
+    puts 'Requesting the next command...'.colorize(:magenta)
+
+    message = @client.final_prompt(output)
+
+    puts 'Generated command to accomplish your goal:'.colorize(:magenta)
+    puts message.gsub(/^/, "#{" $".colorize(:green)} ")
+
+    puts 'Do you want to execute this command? (Y/n)'.colorize(:yellow)
+
+    continue = STDIN.gets.chomp
+
+    unless continue.downcase == 'y'
+      exit
+    end
+
+    output = `#{message}`
+
+    puts 'Output:'
+    puts output
+  end
+
+  def load_config
+    unless File.exist?(AppConfig::CONFIG_FILE)
+      puts 'Welcome to gpterm! It looks like this is your first time using this application.'.colorize(:magenta)
+
+      new_config = {}
+      puts "Before we get started, we need to configure the application. All the info you provide will be saved in #{AppConfig::CONFIG_FILE}.".colorize(:magenta)
+
+      puts "Enter your OpenAI API key's \"SECRET KEY\" value: ".colorize(:yellow)
+      new_config['openapi_key'] = STDIN.gets.chomp
+
+      puts "Your PATH environment variable is: #{ENV['PATH']}".colorize(:magenta)
+      puts 'Are you happy for your PATH to be sent to OpenAI to help with command generation? (Y/n) '.colorize(:yellow)
+
+      if STDIN.gets.chomp.downcase == 'y'
+        new_config['send_path'] = true
+      else
+        new_config['send_path'] = false
+      end
+
+      AppConfig.save_config(new_config)
+
+      new_config
+    else
+      AppConfig.load_config
+    end
+  end
+
+  def parse_options
+    options = {}
+    subcommands = {
+      'preset' => {
+        option_parser: OptionParser.new do |opts|
+          opts.banner = "gpterm preset <name> <prompt>"
+        end,
+        argument_parser: ->(args) {
+          if args.length < 2
+            options[:prompt] = @config['presets'][args[0]]
+          else
+            options[:preset_prompt] = [args[0], args[1]]
+          end
+        }
+      },
+      'config' => {
+        option_parser: OptionParser.new do |opts|
+          opts.banner = "gpterm config [--openapi_key <value>|--send_path <true|false>]"
+          opts.on("--openapi_key VALUE", "Set the OpenAI API key") do |v|
+            AppConfig.add_openapi_key(@config, v)
+            puts "OpenAI API key saved"
+            exit
+          end
+          opts.on("--send_path", "Send the PATH environment variable to OpenAI") do
+            @config['send_path'] = true
+            AppConfig.save_config(@config)
+            puts "Your PATH environment variable will be sent to OpenAI to help with command generation"
+            exit
+          end
+        end
+      }
+    }
+
+    main = OptionParser.new do |opts|
+      opts.banner = "Usage:"
+      opts.banner += "\n\ngpterm <prompt> [options] [subcommand [options]]"
+      opts.banner += "\n\nSubcommands:"
+      subcommands.each do |name, subcommand|
+        opts.banner += "\n #{name} - #{subcommand[:option_parser].banner}"
+      end
+      opts.banner += "\n\nOptions:"
+      opts.on("-v", "--verbose", "Run verbosely") do |v|
+        options[:verbose] = true
+      end
+    end
+
+    command = ARGV.shift
+
+    main.order!
+    if subcommands.key?(command)
+      subcommands[command][:option_parser].parse!
+      subcommands[command][:argument_parser].call(ARGV) if subcommands[command][:argument_parser]
+    elsif command == 'help'
+      puts main
+      exit
+    elsif command
+      options[:prompt] = command
+    else
+      puts 'Enter a prompt to generate text from:'.colorize(:yellow)
+      options[:prompt] = STDIN.gets.chomp
+    end
+
+    options
+  end
+
+  def offer_more_information(output)
+    output = @client.offer_information_prompt(output)
+  end
+end
metadata
ADDED
@@ -0,0 +1,67 @@
+--- !ruby/object:Gem::Specification
+name: gpterm
+version: !ruby/object:Gem::Version
+  version: 0.3.0
+platform: ruby
+authors:
+- Dan Hough
+autorequire:
+bindir: bin
+cert_chain: []
+date: 2024-02-17 00:00:00.000000000 Z
+dependencies:
+- !ruby/object:Gem::Dependency
+  name: ruby-openai
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - "<"
+      - !ruby/object:Gem::Version
+        version: '7'
+  type: :runtime
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - "<"
+      - !ruby/object:Gem::Version
+        version: '7'
+description: gpterm is a powerful, flexible and dangerous command-line tool designed
+  to help you generate commands for your terminal using OpenAI's Chat Completions
+email:
+- daniel.hough@gmail.com
+executables:
+- gpterm
+extensions: []
+extra_rdoc_files: []
+files:
+- LICENSE
+- README.md
+- bin/gpterm
+- lib/client.rb
+- lib/config.rb
+- lib/gpterm.rb
+homepage: https://github.com/basicallydan/gpterm
+licenses:
+- MIT
+metadata:
+  homepage_uri: https://github.com/basicallydan/gpterm
+  source_code_uri: https://github.com/basicallydan/gpterm
+post_install_message:
+rdoc_options: []
+require_paths:
+- lib
+required_ruby_version: !ruby/object:Gem::Requirement
+  requirements:
+  - - ">="
+    - !ruby/object:Gem::Version
+      version: '0'
+required_rubygems_version: !ruby/object:Gem::Requirement
+  requirements:
+  - - ">="
+    - !ruby/object:Gem::Version
+      version: '0'
+requirements: []
+rubygems_version: 3.5.3
+signing_key:
+specification_version: 4
+summary: A CLI for generating CLI commands with OpenAI's help.
+test_files: []