aia 0.3.20 → 0.4.2
- checksums.yaml +4 -4
- data/.semver +2 -2
- data/CHANGELOG.md +8 -0
- data/README.md +184 -93
- data/lib/aia/cli.rb +11 -10
- data/lib/aia/directives.rb +66 -0
- data/lib/aia/main.rb +94 -13
- data/lib/aia/prompt.rb +279 -0
- data/lib/aia/tools/backend_common.rb +76 -0
- data/lib/aia/tools/mods.rb +47 -118
- data/lib/aia/tools/sgpt.rb +19 -3
- data/lib/aia/tools.rb +2 -0
- data/lib/aia.rb +1 -1
- data/man/aia.1 +54 -42
- data/man/aia.1.md +62 -38
- metadata +19 -5
- data/lib/aia/prompt_processing.rb +0 -416
- data/lib/aia/tools/temp.md +0 -97
- data/lib/modularization_plan.md +0 -126
checksums.yaml
CHANGED

@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 24e36e9066b83229df951d172cfb3fd7185cadb433fd992ab37fede8d45b8ea5
+  data.tar.gz: f7ce893975dfb29dd69d1ec0922f8ea30987f88d1d3dd69f0ca877218c6c54fc
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 0172e7c82b9e346d176df7691e5eb422515e25f4f7ed243cb92cd38419a4e8a18a1df9f5500ac0606c6bb30d48e88662237877aee4667aedf9cd02973a190268
+  data.tar.gz: c826c22f5b1789ffeeca7982860764527350d3b352cb428a4b1cc6ccc63025fa1c4347ed098477d8d6a0f2f63dfe8fc8187eb9570096698487bf69fe6f8903c0
data/.semver
CHANGED
data/CHANGELOG.md
CHANGED

@@ -1,4 +1,12 @@
 ## [Unreleased]
+## [0.4.2] 2023-12-31
+- added the --role CLI option to pre-pend a "role" prompt to the front of a primary prompt.
+
+## [0.4.1] 2023-12-31
+- added a chat mode
+- prompt directives now supported
+- version bumped to match the `prompt_manager` gem
+
 ## [0.3.20] 2023-12-28
 - added work around to issue with multiple context files going to the `mods` backend
 - added shellwords gem to sanitize prompt text on the command line
data/README.md
CHANGED
@@ -6,7 +6,8 @@ Uses the gem "prompt_manager" to manage the prompts sent to the `mods` command-l
 
 **Most Recent Change**
 
-v0.
+v0.4.2 - Added a role option to be prepended to a primary prompt.
+v0.4.1 - Added a chat mode.  Prompt directives are now supported.
 
 <!-- Tocer[start]: Auto-generated, don't remove. -->
 
@@ -15,8 +16,12 @@ v0.3.19 - Major code refactoring.  Now supports conf giles in YAML or TOML forma
   - [Installation](#installation)
   - [Usage](#usage)
   - [System Environment Variables (envar)](#system-environment-variables-envar)
+  - [Prompt Directives](#prompt-directives)
+  - [All About ROLES](#all-about-roles)
+    - [Other Ways to Insert Roles into Prompts](#other-ways-to-insert-roles-into-prompts)
   - [External CLI Tools Used](#external-cli-tools-used)
   - [Shell Completion](#shell-completion)
+  - [My Most Powerful Prompt](#my-most-powerful-prompt)
   - [Development](#development)
   - [Contributing](#contributing)
   - [License](#license)
@@ -37,13 +42,14 @@ Install the command-line utilities by executing:
 
 You will also need to establish a directory in your file system where your prompt text files, last used parameters and usage log files are kept.
 
-Setup a system environment variable named "
+Setup a system environment variable named "AIA_PROMPTS_DIR" that points to your prompts directory.  The default is in your HOME directory named ".prompts_dir"
 
-You
+You may also want to install the completion script for your shell.  To get a copy of the completion script do:
 
-
+`aia --completion bash`
+
+`fish` and `zsh` are also available.
 
-TODO: don't forget to mention have access token (API keys) setup as envars for the various backend services like OpenAI... if they are still in business.
 
 ## Usage
 
@@ -52,7 +58,7 @@ The usage report obtained using either `-h` or `--help` is implemented as a stan
 ```text
 $ aia --help
 
-aia(1)
+aia(1)                                User Manuals                                aia(1)
 
 NAME
        aia - command-line interface for an AI assistant
@@ -61,157 +67,171 @@
        aia [options]* PROMPT_ID [CONTEXT_FILE]* [-- EXTERNAL_OPTIONS+]
 
 DESCRIPTION
-       The aia command-line tool is an interface for interacting with an AI
-
-
-
-       levels, and more.
+       The aia command-line tool is an interface for interacting with an AI model backend,
+       providing a simple way to send prompts and receive responses.  The CLI supports various
+       options to customize the interaction, load a configuration file, edit prompts, set
+       debugging levels, and more.
 
 ARGUMENTS
        PROMPT_ID
              This is a required argument.
 
        CONTEXT_FILES
-             This is an optional argument.  One or more files can be added to
-
+             This is an optional argument.  One or more files can be added to the prompt as
+             context for the backend gen-AI tool to process.
 
        EXTERNAL_OPTIONS
-             External options are optional.  Anything that follow “ -- “ will
-
-
-             backend gen-AI tool.
-             options before sending them to the backend gen-AI tool.
+             External options are optional.  Anything that follows “ -- “ will be sent to the
+             backend gen-AI tool.  For example “-- -C -m gpt4-128k” will send the options
+             “-C -m gpt4-128k” to the backend gen-AI tool.  aia will not validate these
+             external options before sending them to the backend gen-AI tool.
 
 OPTIONS
-
-
+       --chat begin a chat session with the backend after the initial prompt response; will
+             set --no-output so that the backend response comes to STDOUT.
+
+       --completion SHELL_NAME
 
        --dump FORMAT
 
-
-
+       --model NAME
+             Name of the LLM model to use - default is gpt-4-1106-preview
 
-
-
+       --speak
+             Simple implementation.  Uses the “say” command to speak the response.  Fun with
+             --chat
 
-
-
+       --terse
+             Add a clause to the prompt text that instructs the backend to be terse in its
+             response.
 
        --version
-             Print Version - default
+             Print Version - default is false
 
-       -
-
+       -b, --[no]-backend LLM TOOL
+             Specify the backend prompt resolver - default is mods
 
-       -
-
+       -c, --config PATH_TO_CONFIG_FILE
+             Load Config File - default is nil
 
-       -
-
+       -d, --debug
+             Turn On Debugging - default is false
 
-       --
+       -e, --edit
+             Edit the Prompt File - default is false
 
-       -
-
+       -f, --fuzzy
+             Use Fuzzy Matching when searching for a prompt - default is false
+
+       -h, --help
+             Show Usage - default is false
 
        -l, --[no]-log PATH_TO_LOG_FILE
-             Log FILEPATH - default
+             Log FILEPATH - default is $HOME/.prompts/prompts.log
 
        -m, --[no]-markdown
-             Format with Markdown - default
+             Format with Markdown - default is true
 
-       --
-
+       -o, --[no]-output PATH_TO_OUTPUT_FILE
+             Out FILENAME - default is ./temp.md
 
        -p, --prompts PATH_TO_DIRECTORY
-             Directory containing the prompt files - default
+             Directory containing the prompt files - default is ~/.prompts
 
-       -
-
+       -r, --role ROLE_ID
+             A role ID is the same as a prompt ID.  A “role” is a specialized prompt that
+             gets pre-pended to another prompt.  Its purpose is to configure the LLM into a
+             certain orientation within which to resolve its primary prompt.
 
-
-
-
-       • AIA_PROMPTS_DIR: Path to the directory containing prompts
-         files - default: $HOME/.prompts_dir
-
-       • AIA_BACKEND: The AI command-line program used - default: mods
+       -v, --verbose
+             Be Verbose - default is false
 
-
-
+CONFIGURATION HIERARCHY
+       System Environment Variables (envars) that are all uppercase and begin with “AIA_” can
+       be used to over-ride the default configuration settings.  For example setting “export
+       AIA_PROMPTS_DIR=~/Documents/prompts” will over-ride the default configuration;
+       however, a config value provided by a command line option will over-ride an envar
+       setting.
 
-
-
+       Configuration values found in a config file will over-ride all other values set for a
+       config item.
 
-
-
+       “//config” directives found inside a prompt file over-ride that config item
+       regardless of where the value was set.
 
-
-
+       For example “//config chat? = true” within a prompt will set up the back and forth
+       chat session for that specific prompt regardless of the command line options or the
+       envar AIA_CHAT setting.
 
-
-
+OpenAI ACCOUNT IS REQUIRED
+       Additionally, the program requires an OpenAI access key, which can be specified using
+       one of the following environment variables:
 
        • OPENAI_ACCESS_TOKEN
 
        • OPENAI_API_KEY
 
-       Currently there is not specific standard for name of the OpenAI key.
-
-       the
-
-       the same key value.
+       Currently there is no specific standard for the name of the OpenAI key.  Some programs
+       use one name, while others use a different name.  Both of the envars listed above mean
+       the same thing.  If you use more than one tool to access OpenAI resources, you may
+       have to set several envars to the same key value.
 
-       To acquire an OpenAI access key, first create an account on the OpenAI
-
+       To acquire an OpenAI access key, first create an account on the OpenAI platform, where
+       further documentation is available.
 
 USAGE NOTES
-       aia is designed for flexibility, allowing users to pass prompt ids and
-
-
-
+       aia is designed for flexibility, allowing users to pass prompt ids and context files
+       as arguments.  Some options change the behavior of the output, such as --output for
+       specifying a file or --no-output for disabling file output in favor of standard output
+       (STDOUT).
+
+       The --completion option displays a script that enables prompt ID auto-completion for
+       bash, zsh, or fish shells.  It’s crucial to integrate the script into the shell’s
+       runtime to take effect.
 
-       The --
-
-
+       The --dump options will send the current configuration to STDOUT in the format
+       requested.  Both YAML and TOML formats are supported.
+
+PROMPT DIRECTIVES
+       Within a prompt text file any line that begins with “//” is considered a prompt
+       directive.  There are numerous prompt directives available.  In the discussion above
+       on the configuration you learned about the “//config” directive.
+
+       Detailed discussion on individual prompt directives is TBD.  Most likely it will be
+       handled in the github wiki <https://github.com/MadBomber/aia>
 
 SEE ALSO
 
-       • OpenAI Platform Documentation
-         <https://platform.openai.com/docs/overview>
+       • OpenAI Platform Documentation <https://platform.openai.com/docs/overview>
         for more information on obtaining access tokens
         <https://platform.openai.com/account/api-keys>
         and working with OpenAI models.
 
        • mods <https://github.com/charmbracelet/mods>
-        for more information on mods - AI for the command line, built
-
-        the
-
-        that makes it super easy to use AI on the command line and in
+        for more information on mods - AI for the command line, built for pipelines.
+        LLM based AI is really good at interpreting the output of commands and
+        returning the results in CLI friendly text formats like Markdown.  Mods is a
+        simple tool that makes it super easy to use AI on the command line and in
        your pipelines.  Mods works with OpenAI
        <https://platform.openai.com/account/api-keys>
        and LocalAI <https://github.com/go-skynet/LocalAI>
 
        • sgpt <https://github.com/tbckr/sgpt>
-        (aka shell-gpt) is a powerful command-line interface (CLI)
-
-
-
-
-        enhance productivity with this powerful and user-friendly CLI
-        tool.
+        (aka shell-gpt) is a powerful command-line interface (CLI) tool designed for
+        seamless interaction with OpenAI models directly from your terminal.
+        Effortlessly run queries, generate shell commands or code, create images from
+        text, and more, using simple commands.  Streamline your workflow and enhance
+        productivity with this powerful and user-friendly CLI tool.
 
 AUTHOR
        Dewayne VanHoozer <dvanhoozer@gmail.com>
 
-AIA
-
+AIA                                   2024-01-01                                  aia(1)
 ```
 
 ## System Environment Variables (envar)
 
-The `aia` configuration defaults can be over-ridden by envars with the prefix "AIA_" followed by the config item name also in uppercase.
+The `aia` configuration defaults can be over-ridden by envars with the prefix "AIA_" followed by the config item name also in uppercase.  All configuration items can be over-ridden in this way by an envar.  The following table shows a few examples.
 
 | Config Item   | Default Value | envar key |
 | ------------- | ------------- | --------- |
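The precedence described under CONFIGURATION HIERARCHY above (built-in default, then `AIA_*` envar, then CLI option, then config file, then `//config` directive) can be sketched in Ruby. The `resolve_config` helper below is hypothetical, written only to illustrate the override order; it is not the gem's actual API.

```ruby
# Hypothetical sketch of the override order described above:
# built-in default < AIA_* envar < CLI option < config file < //config directive.
def resolve_config(item, default:, cli: nil, config_file: nil, directive: nil)
  value = default
  envar = ENV["AIA_#{item.to_s.upcase}"]
  value = envar       unless envar.nil?        # envar over-rides the default
  value = cli         unless cli.nil?          # CLI option over-rides the envar
  value = config_file unless config_file.nil?  # config file over-rides the above
  value = directive   unless directive.nil?    # //config over-rides everything
  value
end

ENV['AIA_PROMPTS_DIR'] = '~/Documents/prompts'
resolve_config(:prompts_dir, default: '~/.prompts')  # => "~/Documents/prompts"
```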
@@ -231,9 +251,49 @@
 
 See the `@options` hash in the `cli.rb` file for a complete list.  There are some config items that do not necessarily make sense for use as an envar over-ride.  For example if you set `export AIA_DUMP=yaml` then `aia` would dump a config file in YAML format and exit every time it is run until you finally did `unset AIA_DUMP`
 
-
+In addition to these config items for `aia` the optional command line parameters for the backend prompt processing utilities (mods and sgpt) can also be set using envars with the "AIA_" prefix.  For example "export AIA_TOPP=1.0" will set the "--topp 1.0" command line option for the `mods` utility when it's used as the backend processor.
+
+## Prompt Directives
+
+Downstream processing directives were added to the `prompt_manager` gem at version 0.4.1.  These directives are lines in the prompt text file that begin with "//"
+
+For example if a prompt text file has this line:
+
+> //config chat? = true
 
-
+That prompt will enter the chat loop regardless of the presence of a "--chat" CLI option or the setting of the envar AIA_CHAT.
+
+BTW did I mention that `aia` supports a chat mode where you can send an initial prompt to the backend and then follow up the backend's response with additional keyboard-entered questions, instructions, prompts etc.
+
+See the [AIA::Directives](lib/aia/directives.rb) class to see what directives are available on the frontend within `aia`.
+
+See the [AIA::Mods](lib/aia/tools/mods.rb) class for directives that are available to the `mods` backend.
+
+See the [AIA::Sgpt](lib/aia/tools/sgpt.rb) class for directives that are available to the `sgpt` backend.
+
+## All About ROLES
+
+`aia` provides the "-r --role" CLI option to identify a prompt ID within your prompts directory which defines the context within which the LLM is to provide its response.  The text of the role ID is pre-pended to the text of the primary prompt to form a complete prompt to be processed by the backend.
+
+For example consider:
+
+> aia -r ruby refactor my_class.rb
+
+Within the prompts directory the contents of the text file `ruby.txt` will be pre-pended to the contents of the `refactor.txt` file to produce a complete prompt.  That complete prompt will have any parameters then directives processed before sending the prompt text to the backend.
+
+Note that "role" is just a way of saying add this prompt to the front of this other prompt.  The contents of the "role" prompt could be anything.  It does not necessarily have to be an actual role.
+
+### Other Ways to Insert Roles into Prompts
+
+Since `aia` supports parameterized prompts you could make a keyword like "[ROLE]" be part of your prompt.  For example consider this prompt:
+
+```text
+As a [ROLE] tell me what you think about [SUBJECT]
+```
+
+When this prompt is processed, `aia` will ask you for a value for the keyword "ROLE" and the keyword "SUBJECT" to complete the prompt.  Since `aia` maintains a history of your previous answers, you could just choose something that you used in the past or answer with a completely new value.
+
+## External CLI Tools Used
 
 ```text
 External Tools Used
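The role mechanism described in the README diff above is, at heart, file concatenation: the role prompt's text goes in front of the primary prompt's text before parameters and directives are processed. A minimal sketch, assuming prompts live as `ID.txt` files in the prompts directory; the function name is illustrative, not the gem's internals.

```ruby
# Minimal sketch: a "role" is just another prompt whose text is
# pre-pended to the primary prompt's text.
def compose_prompt(prompts_dir, prompt_id, role_id: nil)
  role_text   = role_id ? File.read(File.join(prompts_dir, "#{role_id}.txt")) : ''
  prompt_text = File.read(File.join(prompts_dir, "#{prompt_id}.txt"))
  role_text + prompt_text
end
```

So `compose_prompt(dir, 'refactor', role_id: 'ruby')` mirrors what `aia -r ruby refactor my_class.rb` does with `ruby.txt` and `refactor.txt`.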
@@ -271,6 +331,22 @@ If you're not a fan of "born again" replace `bash` with one of the others.
 
 Copy the function to a place where it can be installed in your shell's instance.  This might be a `.profile` or `.bashrc` file, etc.
 
+## My Most Powerful Prompt
+
+This is just between you and me so don't go blabbing this around to everyone.  My most powerful prompt is in a file named `ad_hoc.txt`.  It looks like this:
+
+> [WHAT NOW HUMAN]
+
+Yep.  Just a single parameter for which I can provide a value of anything that is on my mind at the time.  Its advantage is that I do not pollute my shell's command history with lots of text.
+
+Which do you think is better to have in your shell's history file?
+
+> mods "As a certified public accountant specializing in forensic audit and analysis of public company financial statements, what do you think of mine?  What is the best way to hide the millions of drachma that I've skimmed?" < financial_statement.txt
+
+> aia ad_hoc financial_statement.txt
+
+Both do the same thing; however, aia does not put the text of the prompt into the shell's history file.... of course the keyword/parameter value is saved in the prompt's JSON file and the prompt with the response are logged unless --no-log is specified; but, it's not messing up the shell history!
+
 ## Development
 
 After checking out the repo, run `bin/setup` to install dependencies.  Then, run `rake test` to run the tests.  You can also run `bin/console` for an interactive prompt that will allow you to experiment.
@@ -279,6 +355,21 @@ After checking out the repo, run `bin/setup` to install dependencies.  Then, run
 
 Bug reports and pull requests are welcome on GitHub at https://github.com/MadBomber/aia.
 
+I've designed `aia` so that it should be easy to integrate other backend LLM processors.  If you've found one that you like, send me a pull request or a feature request.
+
+When you find problems with `aia` please note them as an issue.  This thing was written mostly by a human and you know how error prone humans are.  There should be plenty of errors to find.
+
+Also I'm interested in doing more with the prompt directives.  I'm thinking that there is a way to include dynamic content into the prompt computationally.  Maybe something like this would be easy to do:
+
+> //insert url https://www.whitehouse.gov/briefing-room/
+> //insert file path_to_file.txt
+
+or maybe incorporating the contents of system environment variables into prompts using $UPPERCASE or $(command) or ${envar_name} patterns.
+
+I've also been thinking that the REGEX used to identify a keyword within a prompt could be a configuration item.  I chose to use square brackets and uppercase in the default regex; maybe, you have a collection of prompt files that use some other regex.  Why should it be one way and not the other?
+
+Also I'm not happy with the way some command line options for external commands are hard coded.  I think they should be part of the configuration as well.  For example the way I'm using `rg` and `fzf` may not be the way that you want to use them.
+
 ## License
 
 The gem is available as open source under the terms of the [MIT License](https://opensource.org/licenses/MIT).
data/lib/aia/cli.rb
CHANGED
@@ -117,7 +117,8 @@ class AIA::Cli
       # Default
       # Key            Value,        switches
       arguments:  [args], # NOTE: after process, prompt_id and context_files will be left
-
+      directives: [[]],   # an empty Array as the default value
+      extra:      [''],   #
       #
       model:      ["gpt-4-1106-preview", "--llm --model"],
       #
@@ -130,14 +131,12 @@ class AIA::Cli
       version?:   [false, "--version"],
       help?:      [false, "-h --help"],
       fuzzy?:     [false, "-f --fuzzy"],
-      search:     [nil,   "-s --search"],
       markdown?:  [true,  "-m --markdown --no-markdown --md --no-md"],
+      chat?:      [false, "--chat"],
+      terse?:     [false, "--terse"],
+      speak?:     [false, "--speak"],
       #
-
-      # "~" character and replace it with HOME
-      #
-      # TODO: Consider using standard suffix of _dif and _file
-      #       to signal Pathname objects fo validation
+      role:       ['',    "-r --role"],
       #
       config_file:[nil,   "-c --config"],
       prompts_dir:["~/.prompts", "-p --prompts"],
@@ -194,13 +193,15 @@ class AIA::Cli
 
 
   def process_command_line_arguments
+    # get the options meant for the backend AI command
+    # doing this first in case there are any options that conflict
+    # between frontend and backend.
+    extract_extra_options
+
    @options.keys.each do |option|
      check_for option
    end
 
-    # get the options meant for the backend AI command
-    extract_extra_options
-
    bad_options = arguments.select{|a| a.start_with?('-')}
 
    unless bad_options.empty?
data/lib/aia/directives.rb
ADDED

@@ -0,0 +1,66 @@
+# lib/aia/directives.rb
+
+require 'hashie'
+
+class AIA::Directives
+  def initialize( prompt: )
+    @prompt = prompt  # PromptManager::Prompt instance
+    AIA.config.directives = @prompt.directives
+  end
+
+
+  def execute_my_directives
+    return if AIA.config.directives.nil? || AIA.config.directives.empty?
+
+    not_mine = []
+
+    AIA.config.directives.each do |entry|
+      directive  = entry[0].to_sym
+      parameters = entry[1]
+
+      if respond_to? directive
+        send(directive, parameters)
+      else
+        not_mine << entry
+      end
+    end
+
+    AIA.config.directives = not_mine
+  end
+
+
+  def box(what)
+    f   = what[0]
+    bar = "#{f}"*what.size
+    puts "#{bar}\n#{what}\n#{bar}"
+  end
+
+
+  def shell(what) = puts `#{what}`
+  def ruby(what)  = eval what
+
+
+  # Allows a prompt to change its configuration environment
+  def config(what)
+    parts = what.split(' ')
+    item  = parts.shift
+    parts.shift if %w[:= =].include? parts[0]
+
+    if '<<' == parts[0]
+      parts.shift
+      value = parts.join
+      if AIA.config(item).is_a?(Array)
+        AIA.config[item] << value
+      else
+        AIA.config[item] = [ value ]
+      end
+    else
+      value = parts.join
+      if item.end_with?('?')
+        AIA.config[item] = %w[1 y yea yes t true].include?(value.downcase)
+      else
+        AIA.config[item] = "STDOUT" == value ? STDOUT : value
+      end
+    end
+  end
+end