aia 0.5.10 → 0.5.12
- checksums.yaml +4 -4
- data/.semver +1 -1
- data/CHANGELOG.md +11 -0
- data/README.md +156 -302
- data/justfile +5 -0
- data/lib/aia/clause.rb +7 -0
- data/lib/aia/cli.rb +31 -4
- data/lib/aia/directives.rb +52 -3
- data/lib/aia/main.rb +34 -7
- data/lib/aia/prompt.rb +3 -1
- data/lib/aia/tools.rb +36 -0
- data/lib/aia.rb +8 -0
- data/main.just +5 -0
- data/man/aia.1 +41 -1
- data/man/aia.1.md +39 -1
- metadata +29 -14
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 8fa0d67e36209d8ac1840c94820ce58e0b6c08a6b01e47e579dfe90f04b94420
+  data.tar.gz: 382d34ab554077b0e81d3a724362b41463e5ccc88f888dbba00b1ad65ff0c528
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 109f27eb85889450bd19bb78a123d02ba789bd99708cf7be20528a8da8c090a7ef68655e60f7fda31a0da12a506a38f3fcdf8c9d2b4a2ea9096006ab81c5d494
+  data.tar.gz: 4b46125f0937d74ea4fe446c343b474f7c3b7d0443def5fea9782a5eea9a7c7bf4997e866871723dc2e4fe2d4fd9db07fe7384f9a95655891efbbfac5711a40f
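The checksum entries above can be verified locally. A minimal sketch, assuming you have already extracted `metadata.gz` and `data.tar.gz` from the downloaded `.gem` archive (a `.gem` file is a tar archive containing both); the paths are illustrative:

```ruby
require "digest"

# Recompute a SHA256 digest for comparison against checksums.yaml.
def sha256_for(path)
  Digest::SHA256.file(path).hexdigest
end

# Hypothetical usage against an unpacked .gem:
#   sha256_for("metadata.gz")   # should match the SHA256 metadata.gz entry
#   sha256_for("data.tar.gz")   # should match the SHA256 data.tar.gz entry
```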
data/.semver
CHANGED
data/CHANGELOG.md
CHANGED
@@ -1,5 +1,16 @@
 ## [Unreleased]
 
+## [0.5.12] 2024-02-24
+- Happy Birthday Ruby!
+- Added --next CLI option
+- Added --pipeline CLI option
+
+## [0.5.11] 2024-02-18
+- allow directives to return information that is inserted into the prompt text
+- added //shell command directive
+- added //ruby ruby_code directive
+- added //include path_to_file directive
+
 ## [0.5.10] 2024-02-03
 - Added --roles_dir to isolate roles from other prompts if desired
 - Changed --prompts to --prompts_dir to be consistent
data/README.md
CHANGED
@@ -6,16 +6,16 @@ It leverages the `prompt_manager` gem to manage prompts for the `mods` and `sgpt
 
 **Most Recent Change**: Refer to the [Changelog](CHANGELOG.md)
 
-> v0.5.
-> -
-> -
-> -
+> v0.5.12
+> - Supports Prompt Sequencing
+> - Added --next option
+> - Added --pipeline option
 >
-> v0.5.
-> -
->
->
-> - Added
+> v0.5.11
+> - Allow directives to prepend content into the prompt text
+> - Added //include path_to_file
+> - Added //shell shell_command
+> - Added //ruby ruby code
 
 <!-- Tocer[start]: Auto-generated, don't remove. -->
 
@@ -31,10 +31,18 @@ It leverages the `prompt_manager` gem to manage prompts for the `mods` and `sgpt
   - [*E*mbedded *R*u*B*y (ERB)](#embedded-ruby-erb)
   - [Chat Session Behavior](#chat-session-behavior)
 - [Prompt Directives](#prompt-directives)
+  - [Parameter and Shell Substitution in Directives](#parameter-and-shell-substitution-in-directives)
   - [`aia` Specific Directive Commands](#aia-specific-directive-commands)
     - [//config](#config)
+    - [//include](#include)
+    - [//ruby](#ruby)
+    - [//shell](#shell)
   - [Backend Directive Commands](#backend-directive-commands)
   - [Using Directives in Chat Sessions](#using-directives-in-chat-sessions)
+- [Prompt Sequences](#prompt-sequences)
+  - [--next](#--next)
+  - [--pipeline](#--pipeline)
+  - [Best Practices ??](#best-practices-)
 - [All About ROLES](#all-about-roles)
   - [The --roles_dir (AIA_ROLES_DIR)](#the---roles_dir-aia_roles_dir)
   - [The --role Option](#the---role-option)
@@ -76,298 +84,8 @@ You may also want to install the completion script for your shell. To get a cop
 
 The usage report obtained using either `-h` or `--help` is implemented as a standard `man` page. You can use both `--help --verbose` or `-h -v` together to get not only the `aia` man page but also the usage report from the `backend` LLM processing tool.
 
-```
+```shell
 $ aia --help
-
-aia(1)                  User Manuals                  aia(1)
-
-NAME
-       aia - command-line interface for an AI assistant
-
-SYNOPSIS
-       aia [options]* PROMPT_ID [CONTEXT_FILE]* [-- EXTERNAL_OPTIONS+]
-
-DESCRIPTION
-       The aia command-line tool is an interface for interacting with an AI model backend, providing a simple way to send prompts and receive responses. The CLI supports various options to customize the interaction, load a configuration file, edit prompts, set debugging levels, and more.
-
-ARGUMENTS
-       PROMPT_ID
-              This is a required argument.
-
-       CONTEXT_FILES
-              This is an optional argument. One or more files can be added to the prompt as context for the backend gen-AI tool to process.
-
-       EXTERNAL_OPTIONS
-              External options are optional. Anything that follows "--" will be sent to the backend gen-AI tool. For example "-- -C -m gpt4-128k" will send the options "-C -m gpt4-128k" to the backend gen-AI tool. aia will not validate these external options before sending them to the backend gen-AI tool.
-
-OPTIONS
-       --chat begin a chat session with the backend after the initial prompt response; will set --no-out_file so that the backend response comes to STDOUT. After the initial prompt is processed, you will be asked to provide a follow up. Just enter whatever is appropriate, terminating your input with a RETURN. The backend will provide a response to your follow up and ask you again if you have another follow up. This back and forth chatting will continue until you enter a RETURN without any other content - an empty follow up prompt. You may also enter a directive to be processed, after which another follow up is requested. If you have the --shell and/or the --erb options set you may use those tools within your follow up to provide dynamic content.
-
-       --completion SHELL_NAME
-
-       --dump PATH/TO/FILE.ext
-              Dump the current configuration to a file in the format denoted by the file's extension. Currently only .yml, .yaml and .toml are acceptable file extensions. If the file exists, it will be over-written without warning.
-
-       -e, --edit
-              Invokes an editor on the prompt file. You can make changes to the prompt file, save it, and the newly saved prompt will be processed by the backend.
-
-       --shell
-              This option tells aia to replace references to system environment variables in the prompt with the value of the envar. envars are like $HOME and ${HOME}; in this example their occurrence will be replaced by the value of ENV['HOME']. Also the dynamic shell command in the pattern $(shell command) will be executed and its output replaces its pattern. It does not matter if your shell uses different patterns than BASH since the replacement is being done within a Ruby context.
-
-       --erb  If dynamic prompt content using $(...) wasn't enough, here is ERB - Embedded RUby. <%= ruby code %> within a prompt will have its ruby code executed and the results of that execution will be inserted into the prompt. I'm sure we will find a way to truly misuse this capability. Remember, some say that the simple prompt is the best prompt.
-
-       --model NAME
-              Name of the LLM model to use - default is gpt-4-1106-preview
-
-       --render
-              Render markdown to the terminal using the external tool "glow" - default: false
-
-       --speak
-              Simple implementation. Uses the "say" command to speak the response. Fun with --chat
-
-       --terse
-              Add a clause to the prompt text that instructs the backend to be terse in its response.
-
-       --version
-              Print Version - default is false
-
-       -b, --[no]-backend LLM TOOL
-              Specify the backend prompt resolver - default is mods
-
-       -c, --config_file PATH_TO_CONFIG_FILE
-              Load Config File. Both YAML and TOML formats are supported. Also ERB is supported. For example ~/aia_config.yml.erb will be processed through ERB and then through YAML. The result will be written out to ~/aia_config.yml so that you can manually verify that you got what you wanted from the ERB processing.
-
-       -d, --debug
-              Turn On Debugging - default is false
-
-       -e, --edit
-              Edit the Prompt File - default is false
-
-       -f, --fuzzy
-              Use Fuzzy Matching when searching for a prompt - default is false
-
-       -h, --help
-              Show Usage - default is false
-
-       -l, --[no]-log_file PATH_TO_LOG_FILE
-              Log FILEPATH - default is $HOME/.prompts/prompts.log
-
-       -m, --[no]-markdown
-              Format with Markdown - default is true
-
-       -o, --[no]-out_file PATH_TO_OUTPUT_FILE
-              Out FILENAME - default is ./temp.md
-
-       -p, --prompts_dir PATH_TO_DIRECTORY
-              Directory containing the prompt files - default is ~/.prompts
-
-       --roles_dir PATH_TO_DIRECTORY
-              Directory containing the personification prompt files - default is ~/.prompts/roles
-
-       -r, --role ROLE_ID
-              A role ID is the same as a prompt ID. A "role" is a specialized prompt that gets pre-pended to another prompt. Its purpose is to configure the LLM into a certain orientation within which to resolve its primary prompt.
-
-       -v, --verbose
-              Be Verbose - default is false
-
-CONFIGURATION HIERARCHY
-       System Environment Variables (envars) that are all uppercase and begin with "AIA_" can be used to over-ride the default configuration settings. For example setting "export AIA_PROMPTS_DIR=~/Documents/prompts" will over-ride the default configuration; however, a config value provided by a command line option will over-ride an envar setting.
-
-       Configuration values found in a config file will over-ride all other values set for a config item.
-
-       "//config" directives found inside a prompt file over-ride that config item regardless of where the value was set.
-
-       For example "//config chat? = true" within a prompt will set up the back and forth chat session for that specific prompt regardless of the command line options or the envar AIA_CHAT settings.
-
-OpenAI ACCOUNT IS REQUIRED
-       Additionally, the program requires an OpenAI access key, which can be specified using one of the following environment variables:
-
-       * OPENAI_ACCESS_TOKEN
-
-       * OPENAI_API_KEY
-
-       Currently there is no specific standard for the name of the OpenAI key. Some programs use one name, while others use a different name. Both of the envars listed above mean the same thing. If you use more than one tool to access OpenAI resources, you may have to set several envars to the same key value.
-
-       To acquire an OpenAI access key, first create an account on the OpenAI platform, where further documentation is available.
-
-USAGE NOTES
-       aia is designed for flexibility, allowing users to pass prompt ids and context files as arguments. Some options change the behavior of the output, such as --out_file for specifying a file or --no-out_file for disabling file output in favor of standard output (STDOUT).
-
-       The --completion option displays a script that enables prompt ID auto-completion for bash, zsh, or fish shells. It's crucial to integrate the script into the shell's runtime to take effect.
-
-       The --dump path/to/file.ext option will write the current configuration to a file in the format requested by the file's extension. The following extensions are supported: .yml, .yaml and .toml
-
-PROMPT DIRECTIVES
-       Within a prompt text file any line that begins with "//" is considered a prompt directive. There are numerous prompt directives available. In the discussion above on the configuration you learned about the "//config" directive.
-
-       Detailed discussion on individual prompt directives is TBD. Most likely it will be handled in the github wiki <https://github.com/MadBomber/aia>
-
-SEE ALSO
-
-       * OpenAI Platform Documentation <https://platform.openai.com/docs/overview> for more information on obtaining access tokens <https://platform.openai.com/account/api-keys> and working with OpenAI models.
-
-       * mods <https://github.com/charmbracelet/mods> for more information on mods - AI for the command line, built for pipelines. LLM based AI is really good at interpreting the output of commands and returning the results in CLI friendly text formats like Markdown. Mods is a simple tool that makes it super easy to use AI on the command line and in your pipelines. Mods works with OpenAI <https://platform.openai.com/account/api-keys> and LocalAI <https://github.com/go-skynet/LocalAI>
-
-       * sgpt <https://github.com/tbckr/sgpt> (aka shell-gpt) is a powerful command-line interface (CLI) tool designed for seamless interaction with OpenAI models directly from your terminal. Effortlessly run queries, generate shell commands or code, create images from text, and more, using simple commands. Streamline your workflow and enhance productivity with this powerful and user-friendly CLI tool.
-
-       * fzf <https://github.com/junegunn/fzf> fzf is a general-purpose command-line fuzzy finder. It's an interactive Unix filter for command-line that can be used with any list; files, command history, processes, hostnames, bookmarks, git commits, etc.
-
-       * ripgrep <https://github.com/BurntSushi/ripgrep> Search tool like grep and The Silver Searcher. It is a line-oriented search tool that recursively searches a directory tree for a regex pattern. By default, ripgrep will respect gitignore rules and automatically skip hidden files/directories and binary files. (To disable all automatic filtering by default, use rg -uuu.) ripgrep has first class support on Windows, macOS and Linux, with binary downloads available for every release.
-
-       * glow <https://github.com/charmbracelet/glow> Render markdown on the CLI
-
-AUTHOR
-       Dewayne VanHoozer <dvanhoozer@gmail.com>
-
-                        AIA v0.5.10                   aia(1)
 ```
 
 ## Configuration Using Envars
@@ -452,9 +170,28 @@ Downstream processing directives were added to the `prompt_manager` gem used by
 
 There is no space between the "//" and the command.
 
+### Parameter and Shell Substitution in Directives
+
+When you combine prompt directives with prompt parameters and shell envar substitutions you can get some powerful compositional prompts.
+
+Here is an example of a purely generic directive.
+
+```
+//[DIRECTIVE_NAME] [DIRECTIVE_PARAMS]
+```
+
+When the prompt runs, you will be asked to provide a value for each of the parameters. You could answer "shell" for the directive name and "calc 22/7" if you wanted a bad approximation of PI.
+
+Try this prompt file:
+```
+//shell calc [FORMULA]
+
+What does that number mean to you?
+```
+
 ### `aia` Specific Directive Commands
 
-At this time `aia` only has
+At this time `aia` only has a few directives, which are detailed below.
 
 #### //config
 
@@ -479,6 +216,64 @@ A configuration item such as `--out_file` or `--model` has an associated value o
 //config backend = mods
 ```
 
+BTW: the "=" is completely optional. It's actually ignored, as is ":=" if you were to choose that as your assignment operator. Also the number of spaces between the item and the value is completely arbitrary. I like to line things up, so this syntax is just as valid:
+
+```
+//config model     gpt-3.5-turbo
+//config out_file  temp.md
+//config backend   mods
+//config chat?     true
+//config terse?    true
+//config model     gpt-4
+```
+
+NOTE: if you specify the same config item name more than once within the prompt file, it's the last one which will be set when the prompt is finally processed through the LLM. For example, in the example above `gpt-4` will be the model used. Being first does not count in this case.
+
+#### //include
+
+Example:
+```
+//include path_to_file
+```
+
+The `path_to_file` can be either absolute or relative. If it is relative, it is anchored at the PWD. If the `path_to_file` includes envars, the `--shell` CLI option must be used to replace the envar in the directive with its actual value.
+
+The file that is included will have any comments or directives excluded. It is expected that the file will be a text file so that its content can be pre-pended to the existing prompt; however, if the file is a source code file (ex: file.rb) the source code will be included HOWEVER any comment line or line that starts with "//" will be excluded.
+
+TODO: Consider adding a command line option `--include_dir` to specify the place from which relative files are to come.
+
+#### //ruby
+Example:
+```
+//ruby any_code_that_returns_an_instance_of_String
+```
+
+This directive is in addition to ERB. At this point the `//ruby` directive is limited by the current binding, which is within the `AIA::Directives#ruby` method. As such it is not likely to see much use.
+
+However, since it's implemented as a simple `eval(code)`, there is a potential for use like this:
+```
+//ruby load(some_ruby_file); execute_some_method
+```
+
+Each execution of a `//ruby` directive will be a fresh execution of the `AIA::Directives#ruby` method, so you cannot carry local variables from one invocation to another; however, you could do something with instance variables or global variables. You might even add something to the `AIA.config` object to be passed on to the next invocation of the directive within the context of the same prompt.
+
+#### //shell
+Example:
+```
+//shell some_shell_command
+```
+
+It is expected that the shell command will return some text to STDOUT which will be pre-pended to the existing prompt text within the prompt file.
+
+There are no limitations on what the shell command can be. For example, if you wanted to bypass the stripping of comments and directives from a file you could do something like this:
+```
+//shell cat path_to_file
+```
+
+Which does basically the same thing as the `//include` directive, except it uses the entire content of the file. For relative file paths the same thing applies. The file's path will be relative to the PWD.
+
+
+
 ### Backend Directive Commands
 
 See the source code for the directives supported by the backends which at this time are configuration-based as well.
@@ -486,7 +281,7 @@ See the source code for the directives supported by the backends which at this t
 
 - [mods](lib/aia/tools/mods.rb)
 - [sgpt](lib/aia/tools/sgpt.rb)
 
-
+For example `mods` has a configuration item `topp` which can be set by a directive in a prompt text file directly.
 
 ```
 //topp 1.5
@@ -505,6 +300,65 @@ When you are in a chat session, you may use a directive as a follow up prompt. F
 
 The directive is executed and a new follow up prompt can be entered with a more lengthy response generated from the backend.
 
+## Prompt Sequences
+
+Why would you need/want to use a sequence of prompts in a batch situation? Maybe you have a complex prompt which exceeds the token limitations of your model for input, so you need to break it up into multiple parts. Or suppose it's a simple prompt but the number of tokens on the output is limited and you do not get exactly the kind of full response for which you were looking.
+
+Sometimes it takes a series of prompts to get the kind of response that you want. The response from one prompt becomes a context for the next prompt. This is easy to do within a `chat` session where you are manually entering and adjusting your prompts until you get the kind of response that you want.
+
+If you need to do this on a regular basis or within a batch you can use `aia` and the `--next` and `--pipeline` command line options.
+
+These two options specify the sequence of prompt IDs to be processed. Both options are available to be used within a prompt file using the `//config` directive. Like all embedded directives you can take advantage of parameterization, shell integration and Ruby. I'm starting to feel like Tim the Toolman - more power!
+
+Consider the condition in which you have 4 prompt IDs that need to be processed in sequence. The IDs and associated prompt file names are:
+
+| Prompt ID | Prompt File |
+| --------- | ----------- |
+| one       | one.txt     |
+| two       | two.txt     |
+| three     | three.txt   |
+| four      | four.txt    |
+
+
+### --next
+
+```shell
+export AIA_OUT_FILE=temp.md
+aia one --next two
+aia three --next four temp.md
+```
+
+or within each of the prompt files you use the config directive:
+
+```
+one.txt contains //config next two
+two.txt contains //config next three
+three.txt contains //config next four
+```
+BUT if you have more than two prompts in your sequence then consider using the --pipeline option.
+
+### --pipeline
+
+`aia one --pipeline two,three,four`
+
+or inside of the `one.txt` prompt file use this directive:
+
+`//config pipeline two,three,four`
+
+### Best Practices ??
+
+Since the response of one prompt is fed into the next prompt within the sequence, instead of having all prompts write their response to the same out file, use these directives inside the associated prompt files:
+
+| Prompt File | Directive |
+| ----------- | --------- |
+| one.txt     | //config out_file one.md |
+| two.txt     | //config out_file two.md |
+| three.txt   | //config out_file three.md |
+| four.txt    | //config out_file four.md |
+
+This way you can see the response that was generated for each prompt in the sequence.
+
 ## All About ROLES
 
 ### The --roles_dir (AIA_ROLES_DIR)
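The README changes above describe `--pipeline` as feeding each prompt's response forward as context for the next prompt. A minimal sketch of that idea, where `run_prompt` is a stand-in lambda for the real backend call (the names here are illustrative, not the gem's API):

```ruby
# Each prompt's response becomes the context for the next prompt in the
# pipeline, mirroring `aia one --pipeline two,three,four`.
run_prompt = ->(prompt_id, context) { "response to #{prompt_id} given #{context.inspect}" }

pipeline = %w[one two three four]
context  = nil

pipeline.each do |prompt_id|
  context = run_prompt.call(prompt_id, context)  # response feeds the next prompt
end

puts context  # the final response carries the accumulated chain
```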
data/justfile
CHANGED
@@ -157,6 +157,11 @@ view_man_page: create_man_page
 create_man_page:
   rake man
 
+
+# Generate the Documentation
+gen_doc: create_man_page update_toc_in_readmen
+
+
 ##########################################
 
 # Tag the current commit, push it, then bump the version
data/lib/aia/clause.rb
ADDED
data/lib/aia/cli.rb
CHANGED
@@ -27,9 +27,8 @@ class AIA::Cli
     load_config_file unless AIA.config.config_file.nil?
 
     convert_to_pathname_objects
-
+    error_on_invalid_option_combinations
     setup_prompt_manager
-
     execute_immediate_commands
   end
 
@@ -47,6 +46,29 @@ class AIA::Cli
   end
 
 
+  def error_on_invalid_option_combinations
+    # --chat is intended as an interactive exchange
+    if AIA.config.chat?
+      unless AIA.config.next.empty?
+        abort "ERROR: Cannot use --next with --chat"
+      end
+      unless STDOUT == AIA.config.out_file
+        abort "ERROR: Cannot use --out_file with --chat"
+      end
+      unless AIA.config.pipeline.empty?
+        abort "ERROR: Cannot use --pipeline with --chat"
+      end
+    end
+
+    # --next says which prompt to process next
+    # but --pipeline gives an entire sequence of prompts for processing
+    unless AIA.config.next.empty?
+      unless AIA.config.pipeline.empty?
+        abort "ERROR: Cannot use --pipeline with --next"
+      end
+    end
+  end
+
   def string_to_pathname(string)
     ['~/', '$HOME/'].each do |prefix|
       if string.start_with? prefix
@@ -151,6 +173,8 @@ class AIA::Cli
       verbose?:   [false, "-v --verbose"],
       version?:   [false, "--version"],
       #
+      next:       ['',    "-n --next"],
+      pipeline:   [[],    "--pipeline"],
       role:       ['',    "-r --role"],
       #
      config_file:[nil,   "-c --config_file"],
@@ -263,8 +287,11 @@ class AIA::Cli
     else
       value = arguments[index + 1]
       if value.nil? || value.start_with?('-')
-
-
+        abort "ERROR: #{option_sym} requires a parameter value"
+      elsif "--pipeline" == switch
+        prompt_sequence = value.split(',')
+        AIA.config[option_sym] = prompt_sequence
+        arguments.slice!(index,2)
       else
         AIA.config[option_sym] = value
         arguments.slice!(index,2)
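The option-combination rules added to `cli.rb` above can be modeled as a standalone function: `--chat` excludes `--next`, `--pipeline`, and an explicit `--out_file`, while `--next` and `--pipeline` are mutually exclusive. This is a simplified sketch (returning the violation instead of aborting, with illustrative parameter names):

```ruby
# Returns the first invalid option combination as a String, or nil when
# the combination is valid. :stdout stands in for the default out_file.
def invalid_combination(chat:, next_id: "", pipeline: [], out_file: :stdout)
  if chat
    return "Cannot use --next with --chat"     unless next_id.empty?
    return "Cannot use --out_file with --chat" unless :stdout == out_file
    return "Cannot use --pipeline with --chat" unless pipeline.empty?
  end
  return "Cannot use --pipeline with --next" if !next_id.empty? && !pipeline.empty?
  nil
end
```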
data/lib/aia/directives.rb
CHANGED
@@ -14,20 +14,24 @@ class AIA::Directives
   def execute_my_directives
     return if AIA.config.directives.nil? || AIA.config.directives.empty?
 
-
+    result   = ""
+    not_mine = []
 
     AIA.config.directives.each do |entry|
       directive  = entry[0].to_sym
       parameters = entry[1]
 
       if respond_to? directive
-        send(directive, parameters)
+        output = send(directive, parameters)
+        result << "#{output}\n" unless output.nil?
       else
         not_mine << entry
       end
     end
 
     AIA.config.directives = not_mine
+
+    result.empty? ? nil : result
   end
 
 
@@ -56,9 +60,54 @@ class AIA::Directives
       value = parts.join
       if item.end_with?('?')
         AIA.config[item] = %w[1 y yea yes t true].include?(value.downcase)
+      elsif item.end_with?('_file')
+        if "STDOUT" == value.upcase
+          AIA.config[item] = STDOUT
+        elsif "STDERR" == value.upcase
+          AIA.config[item] = STDERR
+        else
+          AIA.config[item] = value.start_with?('/') ?
+                               Pathname.new(value) :
+                               Pathname.pwd + value
+        end
       else
-        AIA.config[item] =
+        AIA.config[item] = value
       end
     end
+
+    nil
+  end
+
+
+  # when path_to_file is relative it will be
+  # relative to the PWD.
+  #
+  # TODO: Consider an AIA_INCLUDE_DIR --include_dir
+  #       option to be used for all relative include paths
+  #
+  def include(path_to_file)
+    path = Pathname.new path_to_file
+    if path.exist? && path.readable?
+      content = path.readlines.reject do |a_line|
+        a_line.strip.start_with?(AIA::Prompt::COMMENT_SIGNAL) ||
+        a_line.strip.start_with?(AIA::Prompt::DIRECTIVE_SIGNAL)
+      end.join("\n")
+    else
+      abort "ERROR: could not include #{path_to_file}"
+    end
+
+    content
+  end
+
+
+  def shell(command)
+    `#{command}`
+  end
+
+
+  def ruby(code)
+    output = eval(code)
+
+    output.is_a?(String) ? output : nil
   end
 end
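The directives.rb change above makes `execute_my_directives` accumulate any String a directive returns (so it can be prepended to the prompt) while unknown directives are kept for the backend. A condensed, self-contained sketch of that dispatch pattern, with an illustrative class name:

```ruby
# Known directives are executed via send; String output is collected,
# unknown directives are passed through untouched (as in the gem).
class DirectiveRunner
  def shell(command)
    `#{command}`                 # same backtick execution as //shell
  end

  def ruby(code)
    output = eval(code)          # same simple eval as //ruby
    output.is_a?(String) ? output : nil
  end

  def run(directives)
    result   = +""
    not_mine = []
    directives.each do |name, params|
      if respond_to?(name.to_sym)
        output = send(name.to_sym, params)
        result << "#{output}\n" unless output.nil?
      else
        not_mine << [name, params]
      end
    end
    [result.empty? ? nil : result, not_mine]
  end
end
```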
data/lib/aia/main.rb
CHANGED

@@ -20,11 +20,12 @@ class AIA::Main
   include AIA::DynamicContent
   include AIA::UserQuery
 
-  attr_accessor :logger, :tools, :backend
+  attr_accessor :logger, :tools, :backend, :directive_output
 
   attr_reader :spinner
 
-  def initialize(args= ARGV)
+  def initialize(args= ARGV)
+    @directive_output = ""
     AIA::Tools.load_tools
 
     AIA::Cli.new(args)
@@ -42,11 +43,11 @@ class AIA::Main
 
     @logger.info(AIA.config) if AIA.config.debug? || AIA.config.verbose?
 
-    @prompt = AIA::Prompt.new.prompt
-
     @directives_processor = AIA::Directives.new
 
+    @prompt = AIA::Prompt.new.prompt
+
     # TODO: still should verify that the tools are ion the $PATH
     # tools.class.verify_tools
   end
@@ -70,8 +71,10 @@ class AIA::Main
   end
 
 
+  # This will be recursive with the new options
+  # --next and --pipeline
   def call
-    @directives_processor.execute_my_directives
+    directive_output = @directives_processor.execute_my_directives
 
     if AIA.config.chat?
       AIA.config.out_file = STDOUT
@@ -103,6 +106,8 @@ class AIA::Main
 
     the_prompt = @prompt.to_s
 
+    the_prompt.prepend(directive_output + "\n") unless directive_output.nil? || directive_output.empty?
+
     if AIA.config.terse?
       the_prompt.prepend "Be terse in your response. "
     end
@@ -121,6 +126,17 @@ class AIA::Main
       speak result
       lets_chat
     end
+
+    return if AIA.config.next.empty? && AIA.config.pipeline.empty?
+
+    # Reset some config items to defaults
+    AIA.config.directives = []
+    AIA.config.next = AIA.config.pipeline.shift
+    AIA.config.arguments = [AIA.config.next, AIA.config.out_file.to_s]
+    AIA.config.next = ""
+
+    @prompt = AIA::Prompt.new.prompt
+    call # Recurse!
   end
 
 
@@ -186,7 +202,9 @@ class AIA::Main
       directive  = parts.shift
       parameters = parts.join(' ')
       AIA.config.directives << [directive, parameters]
-      @directives_processor.execute_my_directives
+      directive_output = @directives_processor.execute_my_directives
+    else
+      directive_output = ""
     end
 
     result
@@ -202,7 +220,16 @@ class AIA::Main
     the_prompt_text = render_erb(the_prompt_text) if AIA.config.erb?
     the_prompt_text = render_env(the_prompt_text) if AIA.config.shell?
 
-
+    if handle_directives(the_prompt_text)
+      unless directive_output.nil?
+        the_prompt_text = insert_terse_phrase(the_prompt_text)
+        the_prompt_text << directive_output
+        result = get_and_display_result(the_prompt_text)
+
+        log_the_follow_up(the_prompt_text, result)
+        speak result
+      end
+    else
       the_prompt_text = insert_terse_phrase(the_prompt_text)
       result = get_and_display_result(the_prompt_text)
 
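The new recursion in `AIA::Main#call` above consumes one prompt ID from `--pipeline` per pass and recurses until the list is empty. A minimal, self-contained sketch of that mechanic, with a hypothetical `PipelineRunner` standing in for `AIA::Main` (none of these class or attribute names are the gem's):

```ruby
# Simplified, hypothetical model of the recursion shown in the diff:
# each pass consumes one prompt ID from the pipeline, processes it,
# and recurses until the pipeline is empty.
class PipelineRunner
  attr_reader :processed

  def initialize(pipeline)
    @pipeline  = pipeline.dup  # stands in for AIA.config.pipeline
    @processed = []
  end

  def call
    # mirrors: return if AIA.config.next.empty? && AIA.config.pipeline.empty?
    return if @pipeline.empty?

    next_id = @pipeline.shift  # mirrors AIA.config.next = AIA.config.pipeline.shift
    @processed << next_id      # stand-in for building and running the next prompt
    call                       # Recurse!
  end
end

runner = PipelineRunner.new(%w[two three four])
runner.call
runner.processed  # => ["two", "three", "four"]
```

Note that the real code appears to reset `AIA.config.next` to an empty string right after shifting, so the terminating check on the next pass depends only on whether the pipeline still has entries.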
data/lib/aia/prompt.rb
CHANGED
data/lib/aia/tools.rb
CHANGED

@@ -48,5 +48,41 @@ class AIA::Tools
       require file
     end
   end
+
+
+  def validate_tools
+    raise "NotImplemented"
+  end
+
+
+  def setup_backend
+    AIA.config.tools.backend = find_and_initialize_backend
+  end
+
+
+  private
+
+  def find_and_initialize_backend
+    found = AIA::Tools.search_for(name: AIA.config.backend, role: :backend)
+    abort_no_backend_error if found.empty?
+    abort_too_many_backends_error(found) if found.size > 1
+
+    backend_klass = found.first.klass
+    abort "Backend not found: #{AIA.config.backend}" unless backend_klass
+
+    backend_klass.new(
+      text: "",
+      files: []
+    )
+  end
+
+  def abort_no_backend_error
+    abort "There are no :backend tools named #{AIA.config.backend}"
+  end
+
+  def abort_too_many_backends_error(found)
+    abort "There are #{found.size} :backend tools with the name #{AIA.config.backend}"
+  end
+
   end
 end
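The backend lookup added above aborts unless exactly one registered tool matches the configured backend name. A hedged, self-contained sketch of the same find-exactly-one pattern, with a plain array standing in for the `AIA::Tools` registry; `ToolEntry` and `find_backend_klass` are illustrative names, and `raise` replaces the real code's `abort` so failures are recoverable:

```ruby
# Hypothetical, simplified version of find_and_initialize_backend:
# exactly one registry entry must match the requested backend name.
ToolEntry = Struct.new(:name, :role, :klass)

def find_backend_klass(registry, name)
  found = registry.select { |t| t.name == name && t.role == :backend }

  raise "There are no :backend tools named #{name}"                    if found.empty?
  raise "There are #{found.size} :backend tools with the name #{name}" if found.size > 1

  found.first.klass
end

registry = [
  ToolEntry.new("mods", :backend, String),  # klass is a stand-in class here
  ToolEntry.new("sgpt", :backend, String),
  ToolEntry.new("fzf",  :editor,  String)
]

find_backend_klass(registry, "mods")  # => String
```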
data/lib/aia.rb
CHANGED

@@ -20,6 +20,7 @@ tramp_require('debug_me') {
 }
 
 require 'hashie'
+require 'os'
 require 'pathname'
 require 'reline'
 require 'shellwords'
@@ -37,6 +38,7 @@ require 'prompt_manager'
 require 'prompt_manager/storage/file_system_adapter'
 
 require_relative "aia/version"
+require_relative "aia/clause"
 require_relative "aia/main"
 require_relative "core_ext/string_wrap"
 
@@ -54,6 +56,12 @@ module AIA
 
     AIA::Main.new(args).call
   end
+
+
+  def speak(what)
+    return unless AIA.config.speak?
+    system "say #{Shellwords.escape(what)}" if OS.osx?
+  end
   end
 end
 
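The new `speak` helper shells out to macOS's `say` command, so the response text is passed through `Shellwords.escape` first. A quick illustration of why that escaping matters before interpolating arbitrary text into a shell command (the `dangerous` string is a made-up example):

```ruby
require 'shellwords'

# Shellwords.escape backslash-escapes characters the shell would
# otherwise interpret, so arbitrary response text can be handed to
# `say` without being executed as shell syntax.
plain     = "hello world"
dangerous = "done; rm -rf ~"

Shellwords.escape(plain)      # => "hello\\ world"
Shellwords.escape(dangerous)  # the `;` comes back escaped, not executable
```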
data/main.just
CHANGED

@@ -55,6 +55,11 @@ view_man_page: create_man_page
 create_man_page:
   rake man
 
+
+# Generate the Documentation
+gen_doc: create_man_page update_toc_in_readmen
+
+
 ##########################################
 
 # Tag the current commit, push it, then bump the version
data/man/aia.1
CHANGED

@@ -1,6 +1,6 @@
 .\" Generated by kramdown-man 1.0.1
 .\" https://github.com/postmodern/kramdown-man#readme
-.TH aia 1 "v0.5.10" AIA "User Manuals"
+.TH aia 1 "v0.5.12" AIA "User Manuals"
 .SH NAME
 .PP
 aia \- command\-line interface for an AI assistant
@@ -78,9 +78,15 @@ Log FILEPATH \- default is \[Do]HOME\[sl]\.prompts\[sl]prompts\.log
 \fB\-m\fR, \fB\-\-\[lB]no\[rB]\-markdown\fR
 Format with Markdown \- default is true
 .TP
+\fB\-n\fR, \fB\-\-next PROMPT\[ru]ID\fR
+Specifies the next prompt ID to be processed, using the response from the previous prompt ID\[cq]s processing as context for the next prompt \- default is an empty string
+.TP
 \fB\-o\fR, \fB\-\-\[lB]no\[rB]\-out\[ru]file\fR \fIPATH\[ru]TO\[ru]OUTPUT\[ru]FILE\fP
 Out FILENAME \- default is \.\[sl]temp\.md
 .TP
+\fB\-\-pipeline PID1,PID2,PID3\fR
+Specifies a pipeline of prompt IDs (PIDs) in which the response to the first prompt is fed into the second prompt as context, whose response is fed into the third as context, etc\. It is a comma separated list\. There is no artificial limit to the number of prompt IDs in the pipeline \- default is an empty list
+.TP
 \fB\-p\fR, \fB\-\-prompts\[ru]dir\fR \fIPATH\[ru]TO\[ru]DIRECTORY\fP
 Directory containing the prompt files \- default is \[ti]\[sl]\.prompts
 .TP
@@ -129,6 +135,40 @@ Detail discussion on individual prompt directives is TBD\. Most likely it will
 .UR https:\[sl]\[sl]github\.com\[sl]MadBomber\[sl]aia
 .UE
 \.
+.PP
+Some directives are:
+.RS
+.IP \(bu 2
+\[sl]\[sl]config item value
+.IP \(bu 2
+\[sl]\[sl]include path\[ru]to\[ru]file
+.IP \(bu 2
+\[sl]\[sl]ruby ruby\[ru]code
+.IP \(bu 2
+\[sl]\[sl]shell shell\[ru]command
+.RE
+.SH Prompt Sequences
+.PP
+The \fB\-\-next\fR and \fB\-\-pipeline\fR command line options allow for the sequencing of prompts such that the first prompt\[cq]s response feeds into the second prompt\[cq]s context, and so on\. Suppose you had a complex sequence of prompts with IDs one, two, three and four\. You would use the following \fBaia\fR command to process them in sequence:
+.PP
+\fBaia one \-\-pipeline two,three,four\fR
+.PP
+Notice that the value for the pipelined prompt IDs has no spaces\. This is so that the command line parser does not mistake one of the prompt IDs for a CLI option and issue an error\.
+.SS Prompt Sequences Inside of a Prompt File
+.PP
+You can also use the \fBconfig\fR directive inside of a prompt file to specify a sequence\. Given the example above of 4 prompt IDs, you could add this directive to the prompt file \fBone\.txt\fR:
+.PP
+\fB\[sl]\[sl]config next two\fR
+.PP
+Then inside the prompt file \fBtwo\.txt\fR you could use this directive:
+.PP
+\fB\[sl]\[sl]config pipeline three,four\fR
+.PP
+or just
+.PP
+\fB\[sl]\[sl]config next three\fR
+.PP
+if you want to specify them one at a time\.
 .SH SEE ALSO
 .RS
 .IP \(bu 2
data/man/aia.1.md
CHANGED

@@ -1,4 +1,4 @@
-# aia 1 "v0.5.10" AIA "User Manuals"
+# aia 1 "v0.5.12" AIA "User Manuals"
 
 ## NAME
 
@@ -82,9 +82,15 @@ The aia command-line tool is an interface for interacting with an AI model backe
 `-m`, `--[no]-markdown`
 : Format with Markdown - default is true
 
+`-n`, `--next PROMPT_ID`
+: Specifies the next prompt ID to be processed, using the response from the previous prompt ID's processing as context for the next prompt - default is an empty string
+
 `-o`, `--[no]-out_file` *PATH_TO_OUTPUT_FILE*
 : Out FILENAME - default is ./temp.md
 
+`--pipeline PID1,PID2,PID3`
+: Specifies a pipeline of prompt IDs (PIDs) in which the response to the first prompt is fed into the second prompt as context, whose response is fed into the third as context, etc. It is a comma separated list. There is no artificial limit to the number of prompt IDs in the pipeline - default is an empty list
+
 `-p`, `--prompts_dir` *PATH_TO_DIRECTORY*
 : Directory containing the prompt files - default is ~/.prompts
 
@@ -134,6 +140,38 @@ Within a prompt text file any line that begins with "//" is considered a prompt
 
 Detail discussion on individual prompt directives is TBD. Most likely it will be handled in the [github wiki](https://github.com/MadBomber/aia).
 
+Some directives are:
+
+- //config item value
+- //include path_to_file
+- //ruby ruby_code
+- //shell shell_command
+
+## Prompt Sequences
+
+The `--next` and `--pipeline` command line options allow for the sequencing of prompts such that the first prompt's response feeds into the second prompt's context, and so on. Suppose you had a complex sequence of prompts with IDs one, two, three and four. You would use the following `aia` command to process them in sequence:
+
+`aia one --pipeline two,three,four`
+
+Notice that the value for the pipelined prompt IDs has no spaces. This is so that the command line parser does not mistake one of the prompt IDs for a CLI option and issue an error.
+
+### Prompt Sequences Inside of a Prompt File
+
+You can also use the `config` directive inside of a prompt file to specify a sequence. Given the example above of 4 prompt IDs, you could add this directive to the prompt file `one.txt`:
+
+`//config next two`
+
+Then inside the prompt file `two.txt` you could use this directive:
+
+`//config pipeline three,four`
+
+or just
+
+`//config next three`
+
+if you want to specify them one at a time.
+
+
 ## SEE ALSO
 
 - [OpenAI Platform Documentation](https://platform.openai.com/docs/overview) for more information on [obtaining access tokens](https://platform.openai.com/account/api-keys) and working with OpenAI models.
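The `//config next` chaining documented above can be pictured as a follow-the-links walk over prompt files. A hedged sketch of that idea: the hash below stands in for prompt files on disk, and `prompt_sequence` is an illustrative helper, not aia's actual directives processor.

```ruby
# Hypothetical in-memory stand-in for ~/.prompts/*.txt prompt files,
# each optionally carrying a "//config next ID" directive.
PROMPTS = {
  "one"   => "//config next two\nSummarize the input.",
  "two"   => "//config next three\nRefine the summary.",
  "three" => "Produce the final report."
}

# Follow "//config next ID" directives from a starting prompt ID,
# collecting the IDs in execution order.
def prompt_sequence(start_id, prompts)
  chain = []
  id = start_id
  while id
    chain << id
    id = prompts[id][/^\/\/config next (\S+)/, 1]  # nil ends the chain
  end
  chain
end

prompt_sequence("one", PROMPTS)  # => ["one", "two", "three"]
```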
metadata
CHANGED

@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: aia
 version: !ruby/object:Gem::Version
-  version: 0.5.10
+  version: 0.5.12
 platform: ruby
 authors:
 - Dewayne VanHoozer
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2024-02-
+date: 2024-02-25 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: hashie
@@ -24,6 +24,20 @@ dependencies:
     - - ">="
     - !ruby/object:Gem::Version
       version: '0'
+- !ruby/object:Gem::Dependency
+  name: os
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+    - !ruby/object:Gem::Version
+      version: '0'
+  type: :runtime
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - ">="
+    - !ruby/object:Gem::Version
+      version: '0'
 - !ruby/object:Gem::Dependency
   name: prompt_manager
   requirement: !ruby/object:Gem::Requirement
@@ -207,17 +221,17 @@ dependencies:
     - !ruby/object:Gem::Version
       version: '0'
 description: |
-
-
-
-
-
-
-
-
-
-
-
+  A command-line AI Assistant (aia) that provides pre-compositional
+  template prompt management to various backend gen-AI processes.
+  Complete shell integration allows a prompt to access system
+  environment variables and execute shell commands as part of the
+  prompt content. In addition, full embedded Ruby support is provided,
+  giving even more dynamic prompt conditional content. It is a
+  generalized powerhouse that rivals specialized gen-AI tools. aia
+  currently supports "mods" and "sgpt" CLI tools. aia uses "ripgrep"
+  and "fzf" CLI utilities to search for and select prompt files to
+  send to the backend gen-AI tool along with supported context
+  files.
 email:
 - dvanhoozer@gmail.com
 executables:
@@ -241,6 +255,7 @@ files:
 - lib/aia/aia_completion.bash
 - lib/aia/aia_completion.fish
 - lib/aia/aia_completion.zsh
+- lib/aia/clause.rb
 - lib/aia/cli.rb
 - lib/aia/config.rb
 - lib/aia/directives.rb
@@ -287,7 +302,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
   - !ruby/object:Gem::Version
     version: '0'
 requirements: []
-rubygems_version: 3.5.
+rubygems_version: 3.5.6
 signing_key:
 specification_version: 4
 summary: AI Assistant (aia) a command-line (CLI) utility