aia 0.5.17 → 0.8.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/.envrc +1 -0
- data/.version +1 -2
- data/CHANGELOG.md +61 -22
- data/README.md +387 -227
- data/Rakefile +16 -5
- data/_notes.txt +231 -0
- data/bin/aia +3 -2
- data/examples/README.md +140 -0
- data/examples/headlines +21 -0
- data/justfile +16 -3
- data/lib/aia/ai_client_adapter.rb +210 -0
- data/lib/aia/chat_processor_service.rb +120 -0
- data/lib/aia/config.rb +473 -4
- data/lib/aia/context_manager.rb +58 -0
- data/lib/aia/directive_processor.rb +267 -0
- data/lib/aia/{tools/fzf.rb → fzf.rb} +9 -17
- data/lib/aia/history_manager.rb +85 -0
- data/lib/aia/prompt_handler.rb +178 -0
- data/lib/aia/session.rb +215 -0
- data/lib/aia/shell_command_executor.rb +109 -0
- data/lib/aia/ui_presenter.rb +110 -0
- data/lib/aia/utility.rb +24 -0
- data/lib/aia/version.rb +9 -6
- data/lib/aia.rb +57 -61
- data/lib/extensions/openstruct_merge.rb +44 -0
- metadata +43 -42
- data/LICENSE.txt +0 -21
- data/doc/aia_and_pre_compositional_prompts.md +0 -474
- data/lib/aia/clause.rb +0 -7
- data/lib/aia/cli.rb +0 -452
- data/lib/aia/directives.rb +0 -142
- data/lib/aia/dynamic_content.rb +0 -26
- data/lib/aia/logging.rb +0 -62
- data/lib/aia/main.rb +0 -265
- data/lib/aia/prompt.rb +0 -275
- data/lib/aia/tools/backend_common.rb +0 -58
- data/lib/aia/tools/client.rb +0 -197
- data/lib/aia/tools/editor.rb +0 -52
- data/lib/aia/tools/glow.rb +0 -90
- data/lib/aia/tools/llm.rb +0 -77
- data/lib/aia/tools/mods.rb +0 -100
- data/lib/aia/tools/sgpt.rb +0 -79
- data/lib/aia/tools/subl.rb +0 -68
- data/lib/aia/tools/vim.rb +0 -93
- data/lib/aia/tools.rb +0 -88
- data/lib/aia/user_query.rb +0 -21
- data/lib/core_ext/string_wrap.rb +0 -73
- data/lib/core_ext/tty-spinner_log.rb +0 -25
- data/man/aia.1 +0 -272
- data/man/aia.1.md +0 -236
data/README.md
CHANGED
@@ -1,20 +1,33 @@
-# AI Assistant (AIA)
+# AI Assistant (AIA)
 
-
+**The prompt is the code!**
 
-
+```plain
+, , AIA is a command-line utility that facilitates
+(\____/) AI Assistant interaction with AI models. It automates the
+(_oo_) Fancy LLM management of pre-compositional prompts and
+(O) is Online executes generative AI (Gen-AI) commands on those
+__||__ \) prompts. AIA includes enhanced features such as
+[/______\] / * embedded directives * shell integration
+/ \__AI__/ \/ * embedded Ruby * history management
+/ /__\ * interactive chat * prompt workflows
+(\ /____\
+```
 
-
+AIA leverages the [prompt_manager gem](https://github.com/madbomber/prompt_manager) to manage prompts. It utilizes the [CLI tool fzf](https://github.com/junegunn/fzf) for prompt selection.
 
-
-> - replaced gem semver with versionaire
->
-> v0.5.16
-> - fixed bugs with the prompt pipeline
-> - Added new backend "client" which is an `aia` internal client to the OpenAI API that allows both text-to-speech and speech-to-text
-> - Added --image_size and --image_quality to support image generation with the dall-e-2 and dall-e-3 models using the new internal `aia` OpenAI client.
->
+**Most Recent Change**: Refer to the [Changelog](CHANGELOG.md)
 
+**Notable Recent Changes:**
+- **Directive Processing in Chat and Prompts:** You can now use directives in chat sessions and prompt files with the syntax: `//command args`. Supported directives include:
+  - `shell`/`sh`: Execute shell commands
+  - `ruby`/`rb`: Execute Ruby code
+  - `config`/`cfg`: Display or update configuration
+  - `include`/`inc`: Include file content
+  - `next`: Specify the next prompt ID in a sequence
+  - `pipeline`: Specify a pipeline of prompt IDs to process
+  - `clear`: Clear the context (handy in a chat session)
+  - `help`: Show available directives
 
 
 <!-- Tocer[start]: Auto-generated, don't remove. -->
@@ -22,42 +35,46 @@ It leverages the `prompt_manager` gem to manage prompts for the `mods` and `sgpt
 ## Table of Contents
 
 - [Installation](#installation)
+- [What is a Prompt ID?](#what-is-a-prompt-id)
+- [Embedded Parameters as Placeholders](#embedded-parameters-as-placeholders)
 - [Usage](#usage)
-- [Configuration
+- [Configuration Options](#configuration-options)
+- [Configuration Flexibility](#configuration-flexibility)
+- [Expandable Configuration](#expandable-configuration)
 - [Shell Integration inside of a Prompt](#shell-integration-inside-of-a-prompt)
-- [Access to System Environment Variables](#access-to-system-environment-variables)
 - [Dynamic Shell Commands](#dynamic-shell-commands)
+- [Shell Command Safety](#shell-command-safety)
 - [Chat Session Use](#chat-session-use)
 - [*E*mbedded *R*u*B*y (ERB)](#embedded-ruby-erb)
-- [Chat Session Behavior](#chat-session-behavior)
 - [Prompt Directives](#prompt-directives)
 - [Parameter and Shell Substitution in Directives](#parameter-and-shell-substitution-in-directives)
-- [
+- [Directive Syntax](#directive-syntax)
+- [AIA Specific Directive Commands](#aia-specific-directive-commands)
 - [//config](#config)
 - [//include](#include)
 - [//ruby](#ruby)
 - [//shell](#shell)
-
+- [//next](#next)
+- [//pipeline](#pipeline)
 - [Using Directives in Chat Sessions](#using-directives-in-chat-sessions)
 - [Prompt Sequences](#prompt-sequences)
 - [--next](#--next)
 - [--pipeline](#--pipeline)
 - [Best Practices ??](#best-practices-)
-- [Example
+- [Example pipeline](#example-pipeline)
 - [All About ROLES](#all-about-roles)
-- [The --
+- [The --roles_prefix (AIA_ROLES_PREFIX)](#the---roles_prefix-aia_roles_prefix)
 - [The --role Option](#the---role-option)
 - [Other Ways to Insert Roles into Prompts](#other-ways-to-insert-roles-into-prompts)
 - [External CLI Tools Used](#external-cli-tools-used)
-- [Optional External CLI-tools](#optional-external-cli-tools)
-- [Backend Processor `llm`](#backend-processor-llm)
-- [Backend Processor `sgpt`](#backend-processor-sgpt)
-- [Occassionally Useful Tool `plz`](#occassionally-useful-tool-plz)
 - [Shell Completion](#shell-completion)
 - [My Most Powerful Prompt](#my-most-powerful-prompt)
 - [My Configuration](#my-configuration)
+- [Executable Prompts](#executable-prompts)
 - [Development](#development)
 - [Contributing](#contributing)
+- [History of Development](#history-of-development)
+- [Roadmap](#roadmap)
 - [License](#license)
 
 <!-- Tocer[finish]: Auto-generated, don't remove. -->
@@ -69,137 +86,251 @@ Install the gem by executing:
 
 gem install aia
 
-
 Install the command-line utilities by executing:
 
-brew install
+brew install fzf
 
 You will also need to establish a directory in your file system where your prompt text files, last used parameters and usage log files are kept.
 
-Setup a system environment variable (envar) named "AIA_PROMPTS_DIR" that points to your prompts directory. The default is in your HOME directory named ".prompts". The envar "
+Setup a system environment variable (envar) named "AIA_PROMPTS_DIR" that points to your prompts directory. The default is in your HOME directory named ".prompts". The envar "AIA_ROLES_PREFIX" points to your roles prefix where you have prompts that define the different roles you want the LLM to assume when it is doing its work. The default roles prefix is "roles".
 
 You may also want to install the completion script for your shell. To get a copy of the completion script do:
 
-
+```bash
+aia --completion bash
+```
 
 `fish` and `zsh` are also available.
 
+## What is a Prompt ID?
 
-
+A prompt ID is the basename of a text file (extension *.txt) located in a prompts directory. The prompts directory is specified by the environment variable "AIA_PROMPTS_DIR". If this variable is not set, the default is in your HOME directory named ".prompts". It can also be set on the command line with the `--prompts-dir` option.
+
+This file contains the context and instructions for the LLM to follow. The prompt ID is what you use as an option on the command line to specify which prompt text file to use. Prompt files can have comments, parameters, directives and ERB blocks along with the instruction text to feed to the LLM. It can also have shell commands and use system environment variables. Consider the following example:
+
+```plaintext
+#!/usr/bin/env aia run
+# ~/.prompts/example.txt
+# Desc: Be an example prompt with all? the bells and whistles
+
+# Set the configuration for this prompt
+
+//config model = gpt-4
+//config temperature = 0.7
+//config shell = true
+//config erb = true
+//config out_file = path/to/output.md
+
+# Add some file content to the context/instructions
+
+//include path/to/file
+//shell cat path/to/file
+$(cat path/to/file)
+
+# Setup some workflows
 
-
+//next next_prompt_id
+//pipeline prompt_id_1, prompt_id_2, prompt_id_3
 
-
-
+# Execute some Ruby code
+
+//ruby require 'some_library' # inserts into the context/instructions
+<% some_ruby_things # not inserted into the context %>
+<%= some_other_ruby_things # that are part of the context/instructions %>
+
+Tell me how to do something for a $(uname -s) platform that would rename all
+of the files in the directory $MY_DIRECTORY to have a prefix for its filename
+that is [PREFIX] and a ${SUFFIX}
+
+# directives, ERB blocks and other junk can be used
+# anywhere in the file mixing dynamic context/instructions with
+# the static stuff.
+
+__END__
+
+Block comments that are not part of the context or instructions to
+the LLM. Stuff that is just here for documentation.
 ```
 
-
+That is just about everything including the kitchen sink that a pre-compositional parameterized prompt file can have. It can be an executable with a she-bang line and a special system prompt named `run` as shown in the example. It has line comments that use the `#` symbol. It has end-of-file block comments that appear after the "__END__" line. It has directive commands that begin with the double slash `//` - an homage to IBM JCL. It has shell variables in both forms. It has shell commands. It has parameters that default to a regex that uses square brackets and all uppercase characters to define the parameter name whose value is to be given in a Q&A session before the prompt is sent to the LLM for processing.
 
-
+AIA has the ability to define a workflow of prompt IDs with either the //next or //pipeline directives.
 
-
-| ------------- | ------------- | --------- |
-| backend | mods | AIA_BACKEND |
-| config_file | nil | AIA_CONFIG_FILE |
-| debug | false | AIA_DEBUG |
-| edit | false | AIA_EDIT |
-| extra | '' | AIA_EXTRA |
-| fuzzy | false | AIA_FUZZY |
-| log_file | ~/.prompts/_prompts.log | AIA_LOG_FILE |
-| markdown | true | AIA_MARKDOWN |
-| model | gpt-4-1106-preview | AIA_MODEL |
-| out_file | STDOUT | AIA_OUT_FILE |
-| prompts_dir | ~/.prompts | AIA_PROMPTS_DIR |
-| speech_model. | tts-1 | AIA_SPEECH_MODEL |
-| verbose | FALSE | AIA_VERBOSE |
-| voice | alloy | AIA_VOICE |
+You could say that instead of the prompt being part of a program, a program can be part of the prompt. **The prompt is the code!**
 
+By using ERB you can make parts of the context/instructions conditional. You can also use ERB to make parts of the context/instructions dynamic, for example to pull information from a database or an API.
 
+## Embedded Parameters as Placeholders
 
-
+In the example prompt text file above I used the default regex to define parameters as all upper case characters plus space, underscore and the vertical pipe enclosed within square brackets. Since the time that I originally started writing AIA I've seen more developers use double curly braces to define parameters. AIA allows you to specify your own regex as a string. If you want the curly brackets use the `--regex` option on the command line like this:
 
-
+`--regex '(?-mix:({{[a-zA-Z _|]+}}))'`
 
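The two placeholder styles added in this README revision can be sketched in plain Ruby. This is an illustrative snippet only (the regexes are copied from the default and the `--regex` example above; it is not AIA's internal code):

```ruby
# Default AIA parameter style: uppercase letters, space, underscore or
# pipe inside square brackets, e.g. [PREFIX] or [DO_SOMETHING].
default_regex = /(\[[A-Z _|]+\])/

# The curly-brace style enabled by --regex '(?-mix:({{[a-zA-Z _|]+}}))'
curly_regex = /({{[a-zA-Z _|]+}})/

prompt = "Rename the files to use a prefix of [PREFIX] in {{target dir}}"

puts prompt.scan(default_regex).flatten.inspect  # => ["[PREFIX]"]
puts prompt.scan(curly_regex).flatten.inspect    # => ["{{target dir}}"]
```

Either regex yields the placeholder names whose values AIA asks for in a Q&A session before sending the prompt to the LLM.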
-## Shell Integration inside of a Prompt
 
-
+## Usage
 
-
+The usage report is obtained with either the `-h` or `--help` options.
+
+```bash
+aia --help
+```
+
+## Configuration Options
+
+The following table provides a comprehensive list of configuration options, their default values, and the associated environment variables:
+
+| Option | Default Value | Environment Variable |
+|-------------------------|---------------------------------|---------------------------|
+| out_file | temp.md | AIA_OUT_FILE |
+| log_file | ~/.prompts/_prompts.log | AIA_LOG_FILE |
+| prompts_dir | ~/.prompts | AIA_PROMPTS_DIR |
+| roles_prefix | roles | AIA_ROLES_PREFIX |
+| model | gpt-4o-mini | AIA_MODEL |
+| speech_model | tts-1 | AIA_SPEECH_MODEL |
+| transcription_model | whisper-1 | AIA_TRANSCRIPTION_MODEL |
+| verbose | false | AIA_VERBOSE |
+| markdown | true | AIA_MARKDOWN |
+| shell | false | AIA_SHELL |
+| erb | false | AIA_ERB |
+| chat | false | AIA_CHAT |
+| clear | false | AIA_CLEAR |
+| terse | false | AIA_TERSE |
+| debug | false | AIA_DEBUG |
+| fuzzy | false | AIA_FUZZY |
+| speak | false | AIA_SPEAK |
+| append | false | AIA_APPEND |
+| temperature | 0.7 | AIA_TEMPERATURE |
+| max_tokens | 2048 | AIA_MAX_TOKENS |
+| top_p | 1.0 | AIA_TOP_P |
+| frequency_penalty | 0.0 | AIA_FREQUENCY_PENALTY |
+| presence_penalty | 0.0 | AIA_PRESENCE_PENALTY |
+| image_size | 1024x1024 | AIA_IMAGE_SIZE |
+| image_quality | standard | AIA_IMAGE_QUALITY |
+| image_style | vivid | AIA_IMAGE_STYLE |
+| embedding_model | text-embedding-ada-002 | AIA_EMBEDDING_MODEL |
+| speak_command | afplay | AIA_SPEAK_COMMAND |
+| require_libs | [] | AIA_REQUIRE_LIBS |
+| regex | '(?-mix:(\\[[A-Z _|]+\\]))' | AIA_REGEX |
+
+These options can be configured via command-line arguments, environment variables, or configuration files.
+
+### Configuration Flexibility
+
+AIA determines configuration settings using the following order of precedence:
+
+1. Embedded config directives
+2. Command-line arguments
+3. Environment variables
+4. Configuration files
+5. Default values
+
+For example, let's consider the `model` option. Suppose the following conditions:
+
+- Default value is "gpt-4o-mini"
+- No entry in the config file
+- No environment variable value for `AIA_MODEL`
+- No command-line argument provided for `--model`
+- No embedded directive like `//config model = some-fancy-llm`
+
+In this scenario, the model used will be "gpt-4o-mini". However, you can override this default by setting the model at any level of the precedence order. Additionally, you can dynamically ask for user input by incorporating an embedded directive with a placeholder parameter, such as `//config model = [PROCESS_WITH_MODEL]`. When processing the prompt, AIA will prompt you to input a value for `[PROCESS_WITH_MODEL]`.
+
+If you do not like the default regex used to identify parameters within the prompt text, don't worry: there is a way to configure it using the `--regex` option.
+
+### Expandable Configuration
+
+The configuration options are expandable through a config file, allowing you to add custom entries. For example, you can define a custom configuration item like "xyzzy" in your config file. This value can then be accessed in your prompts using `AIA.config.xyzzy` within a `//ruby` directive or an ERB block, enabling dynamic prompt generation based on your custom configurations.
 
-
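The five-level precedence order described above can be modeled as a simple hash merge in which higher-priority sources overwrite lower ones. This is a hypothetical sketch for illustration only -- the function and variable names are invented, and this is not how AIA's `config.rb` is actually implemented:

```ruby
# Hypothetical sketch of the precedence order: sources listed first win.
# Not AIA's actual implementation.
def resolve_config(*sources_highest_priority_first)
  # Merge from lowest priority to highest, so later merges overwrite.
  sources_highest_priority_first.reverse.reduce({}) { |acc, source| acc.merge(source) }
end

defaults   = { model: "gpt-4o-mini", temperature: 0.7 }
env_vars   = { temperature: 0.9 }   # e.g. AIA_TEMPERATURE=0.9
cli_args   = { model: "gpt-4" }     # e.g. --model gpt-4
directives = {}                     # no //config lines in the prompt

config = resolve_config(directives, cli_args, env_vars, defaults)
puts config[:model]        # => gpt-4   (CLI argument beats the default)
puts config[:temperature]  # => 0.9     (envar beats the default)
```

The key property is that a source only takes effect for settings the higher-priority sources leave unset.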
+## Shell Integration inside of a Prompt
+
+Using the option `--shell` enables AIA to access your terminal's shell environment from inside the prompt text.
 
 #### Dynamic Shell Commands
 
-Dynamic content can be inserted into the prompt using the pattern $(shell command) where the output of the shell command will replace the $(...) pattern.
+Dynamic content can be inserted into the prompt using the pattern $(shell command) where the output of the shell command will replace the $(...) pattern. It will become part of the context / instructions for the prompt.
 
-Consider the power
+Consider the power of tailoring a prompt to your specific operating system:
 
-```
+```plaintext
 As a system administrator on a $(uname -v) platform what is the best way to [DO_SOMETHING]
 ```
 
-
+Or insert content from a file in your home directory:
 
-```
+```plaintext
 Given the following constraints $(cat ~/3_laws_of_robotics.txt) determine the best way to instruct my roomba to clean my kids room.
 ```
 
+#### Shell Command Safety
+
+The catchphrase "the prompt is the code" within AIA means that you have the power to execute any command you want, but you must be careful not to execute commands that could cause harm. AIA is not going to protect you from doing something stupid. Sure, that's a copout. I just can't think (actually I can) of all the ways you can mess things up writing code. Remember what we learned from Forrest Gump: "Stupid is as stupid does." So don't do anything stupid. If someone gives you a prompt and says "run this with AIA" you had better review the prompt before processing it.
+
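The $(...) substitution described above amounts to replacing each pattern with the command's STDOUT. A short hypothetical illustration in Ruby -- not AIA's actual shell-integration code, just the general idea:

```ruby
# Hypothetical illustration of $(...) expansion: each $(command) is
# replaced by that command's STDOUT. Not AIA's actual implementation.
def expand_shell(prompt_text)
  prompt_text.gsub(/\$\((.+?)\)/) { `#{Regexp.last_match(1)}`.chomp }
end

puts expand_shell("You are working on a $(echo Linux) platform.")
# => You are working on a Linux platform.
```

This also makes the safety warning concrete: whatever is inside $(...) runs with your shell privileges, so review prompts before processing them.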
 #### Chat Session Use
 
-When you use the `--shell` option to start a chat session, shell integration is available in your follow up prompts.
+When you use the `--shell` option to start a chat session, shell integration is available in your follow up prompts. Suppose you started a chat session (--chat) using a role of "Ruby Expert" expecting to chat about changes that could be made to a specific class BUT you forgot to include the class source file as part of the context when you got started. You could enter this as your follow up prompt to keep going:
 
-```
+```plaintext
 The class I want to chat about refactoring is this one: $(cat my_class.rb)
 ```
 
-That inserts the entire class source file into your follow up prompt.
+That inserts the entire class source file into your follow up prompt. You can continue chatting with your AI Assistant about changes to the class.
 
 ## *E*mbedded *R*u*B*y (ERB)
 
-The inclusion of dynamic content through the shell integration provided by the `--shell` option is significant.
+The inclusion of dynamic content through the shell integration provided by the `--shell` option is significant. AIA also provides the full power of embedded Ruby code processing within the prompt text.
 
 The `--erb` option turns the prompt text file into a fully functioning ERB template. The [Embedded Ruby (ERB) template syntax (2024)](https://bophin-com.ngontinh24.com/article/language-embedded-ruby-erb-template-syntax) provides a good overview of the syntax and power of ERB.
 
-Most websites that have information about ERB will give examples of how to use ERB to generate
-
-### Chat Session Behavior
+Most websites that have information about ERB will give examples of how to use ERB to generate dynamic HTML content for web-based applications. That is a common use case for ERB. AIA on the other hand uses ERB to generate dynamic prompt text for LLM processing.
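A minimal illustration of what `--erb` does to a prompt file, using only Ruby's standard `erb` library; the template text and variable are invented for this example:

```ruby
require "erb"

# A tiny prompt template: <%= %> inserts a value into the prompt text,
# while <% %> runs Ruby (here, a conditional) without inserting anything.
topic    = "recursion"
template = "Explain <%= topic %> in one <% if true %>short<% end %> paragraph."

puts ERB.new(template).result(binding)
# => Explain recursion in one short paragraph.
```

The same mechanism lets a prompt pull dynamic data (a database lookup, an API call) into the text sent to the LLM.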
-In a chat session whether started by the `--chat` option or its equivalent with a directive within a prompt text file behaves a little differently w/r/t its binding and local variable assignments. Since a chat session by definition has multiple prompts, setting a local variable in one prompt and expecting it to be available in a subsequent prompt does not work. You need to use instance variables to accomplish this prompt to prompt carry over of information.
-
-Also since follow up prompts are expected to be a single thing - sentence or paragraph - terminated by a single return, its likely that ERB enhance will be of benefit; but, you may find a use for it.
 
 ## Prompt Directives
 
-Downstream processing directives were added to the `prompt_manager` gem used by
+Downstream processing directives were added to the `prompt_manager` gem used by AIA at version 0.4.1. These directives are lines in the prompt text file that begin with "//" having this pattern:
 
-```
-//command
+```bash
+//command params
 ```
 
-There is no space between the "//" and the command.
+There is no space between the "//" and the command. Commands do not have to have params. These params are typically space delimited when more than one is required. It all depends on the command.
 
 ### Parameter and Shell Substitution in Directives
 
 When you combine prompt directives with prompt parameters and shell envar substitutions you can get some powerful compositional prompts.
 
 Here is an example of a pure generic directive.
 
-```
+```bash
 //[DIRECTIVE_NAME] [DIRECTIVE_PARAMS]
 ```
 
 When the prompt runs, you will be asked to provide a value for each of the parameters. You could answer "shell" for the directive name and "calc 22/7" if you wanted a bad approximation of PI.
 
 Try this prompt file:
-```
+```bash
 //shell calc [FORMULA]
 
 What does that number mean to you?
 ```
 
-###
+### Directive Syntax
+
+Directives can be entered in chat or prompt files using the following syntax:
+- `//command args`
 
-
+Supported directives:
+- `help`: Show available directives
+- `shell` or `sh`: Execute a shell command
+- `ruby` or `rb`: Execute Ruby code
+- `config` or `cfg`: Show or update configuration
+- `include` or `inc`: Include file content
+- `next`: Set/Show the next prompt ID to be processed
+- `pipeline`: Set/Extend/Show the workflow of prompt IDs
+
+When a directive produces output, it is added to the chat context. If there is no output, you are prompted again.
+
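The `//command args` shape is easy to recognize mechanically. Here is a hypothetical sketch of such a recognizer, invented purely for illustration (AIA's actual directive processor may work differently):

```ruby
# Hypothetical recognizer for "//command args" directive lines.
# Invented for illustration only; not AIA's DirectiveProcessor.
DIRECTIVE_PATTERN = %r{\A//(\w+)\s*(.*)\z}

def parse_directive(line)
  match = DIRECTIVE_PATTERN.match(line.strip)
  return nil unless match
  { command: match[1], args: match[2].split(/\s+/) }
end

parse_directive("//config model = gpt-4")
# command: "config", args: ["model", "=", "gpt-4"]
parse_directive("Plain prompt text")
# nil -- not a directive, so the line stays in the prompt
```

Lines that do not match the pattern are left in the prompt text; matching lines are executed and, when they produce output, that output joins the context.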
+### AIA Specific Directive Commands
+
+At this time AIA only has a few directives which are detailed below.
 
 #### //config
 
@@ -211,25 +342,23 @@ The switch options are treated like booleans. They are either `true` or `false`
 
 To set the value of a switch using `//config`, for example `--terse` or `--chat`, do this:
 
-```
+```bash
 //config chat? = true
 //config terse? = true
 ```
 
 A configuration item such as `--out_file` or `--model` has an associated value on the command line. To set that value with the `//config` directive do it like this:
 
-```
+```bash
 //config model = gpt-3.5-turbo
 //config out_file = temp.md
-//config backend = mods
 ```
 
 BTW: the "=" is completely optional. It's actually ignored, as is ":=" if you were to choose that as your assignment operator. Also the number of spaces between the item and the value is completely arbitrary. I like to line things up, so this syntax is just as valid:
 
-```
+```bash
 //config model gpt-3.5-turbo
 //config out_file temp.md
-//config backend mods
 //config chat? true
 //config terse? true
 //config model gpt-4
@@ -240,7 +369,7 @@ NOTE: if you specify the same config item name more than once within the prompt
 #### //include
 
 Example:
-```
+```bash
 //include path_to_file
 ```
 
@@ -248,100 +377,120 @@ The `path_to_file` can be either absolute or relative. If it is relative, it is
|
|
248
377
|
|
249
378
|
The file that is included will have any comments or directives excluded. It is expected that the file will be a text file so that its content can be pre-pended to the existing prompt; however, if the file is a source code file (ex: file.rb) the source code will be included HOWEVER any comment line or line that starts with "//" will be excluded.
|
250
379
|
|
251
|
-
TODO: Consider adding a command line option `--include_dir` to specify the place from which relative files are to come.
|
252
380
|
|
253
381
|
#### //ruby
|
254
|
-
Example:
|
255
|
-
```
|
256
|
-
//ruby any_code_that_returns_an_instance_of_String
|
257
|
-
```
|
258
382
|
|
259
|
-
|
383
|
+
The `//ruby` directive executes Ruby code. You can use this to perform complex operations or interact with Ruby libraries.
|
260
384
|
|
261
|
-
|
262
|
-
```
|
263
|
-
//ruby
|
385
|
+
For example:
|
386
|
+
```ruby
|
387
|
+
//ruby puts "Hello from Ruby"
|
264
388
|
```
|
265
389
|
|
266
|
-
|
390
|
+
You can also use the `--rq` option to specify Ruby libraries to require before executing Ruby code:
|
391
|
+
|
392
|
+
```bash
|
393
|
+
# Command line
|
394
|
+
aia --rq json,csv my_prompt
|
395
|
+
|
396
|
+
# In chat
|
397
|
+
//ruby JSON.parse('{"data": [1,2,3]}')["data"]
|
398
|
+
```
|
267
399
|
|
268
400
|
#### //shell
|
269
401
|
Example:
|
270
|
-
```
|
402
|
+
```bash
|
271
403
|
//shell some_shell_command
|
272
404
|
```
|
273
405
|
|
274
406
|
It is expected that the shell command will return some text to STDOUT which will be pre-pending to the existing prompt text within the prompt file.
|
275
407
|
|
276
408
|
There are no limitations on what the shell command can be. For example if you wanted to bypass the stripping of comments and directives from a file you could do something like this:
|
277
|
-
```
|
409
|
+
```bash
|
278
410
|
//shell cat path_to_file
|
279
411
|
```
|
280
412
|
|
281
413
|
Which does basically the same thing as the `//include` directive, except it uses the entire content of the file. For relative file paths the same thing applies. The file's path will be relative to the PWD.
|
282
414
|
|
283
415
|
|
416
|
+
#### //next
|
417
|
+
Examples:
|
418
|
+
```bash
|
419
|
+
# Show the next promt ID
|
420
|
+
//next
|
421
|
+
|
422
|
+
# Set the next prompt ID
|
423
|
+
//next prompt_id
|
284
424
|
|
285
|
-
|
425
|
+
# Same as
|
426
|
+
//config next
|
427
|
+
//config next = prompt_id
|
428
|
+
```
|
286
429
|
|
287
|
-
|
430
|
+
#### //pipeline
|
288
431
|
|
289
|
-
|
290
|
-
|
432
|
+
Examples:
|
433
|
+
```bash
|
434
|
+
# Show the current prompt workflow
|
435
|
+
//pipeline
|
291
436
|
|
292
|
-
|
437
|
+
# Set the prompt workflow
|
438
|
+
//pipeline = prompt_id_1, prompt_id_2, prompt_id_3
|
293
439
|
|
294
|
-
|
295
|
-
//
|
296
|
-
```
|
440
|
+
# Extend the prompt workflow
|
441
|
+
//pipeline << prompt_id_4, prompt_id_5, prompt_id_6
|
297
442
|
|
298
|
-
|
443
|
+
# Same as
|
444
|
+
//config pipeline
|
445
|
+
//config pipeline = prompt_id_1, prompt_id_2, prompt_id_3
|
446
|
+
//config pipeline << prompt_id_4, prompt_id_5, prompt_id_6
|
447
|
+
```
|
299
448
|
|
300
449
|
### Using Directives in Chat Sessions
|
301
450
|
|
302
|
-
Whe you are in a chat session, you may use a directive as a follow up prompt. For example if you started the chat session with the option `--terse` expecting to get short answers from the
|
451
|
+
Whe you are in a chat session, you may use a directive as a follow up prompt. For example if you started the chat session with the option `--terse` expecting to get short answers from the LLM; but, then you decide that you want more comprehensive answers you may do this in a chat follow up:
|
303
452
|
|
304
|
-
```
|
453
|
+
```bash
|
305
454
|
//config terse? false
|
306
455
|
```
|
307
456
|
|
308
|
-
The directive is executed and a new follow up prompt can be entered with a more lengthy response generated from the
|
457
|
+
The directive is executed and a new follow up prompt can be entered with a more lengthy response generated from the LLM.
|
309
458
|
|
459
|
+
The directive `//clear` truncates your entire session context. The LLM will not remember anything you have discussed.
|
310
460
|
|
311
461
|
## Prompt Sequences
|
312
462
|
|
313
|
-
Why would you need/want to use a sequence of prompts in a batch situation.
|
463
|
+
Why would you need/want to use a sequence of prompts in a batch situation. Maybe you have a complex prompt which exceeds the token limitations of your model for input so you need to break it up into multiple parts. Or suppose its a simple prompt but the number of tokens on the output is limited and you do not get exactly the kind of full response for which you were looking.
|
314
464
|
|
315
|
-
Sometimes it takes a series of prompts to get the kind of response that you want. The reponse from one prompt becomes a context for the next prompt.
|
465
|
+
Sometimes it takes a series of prompts to get the kind of response that you want. The reponse from one prompt becomes a context for the next prompt. This is easy to do within a `chat` session were you are manually entering and adjusting your prompts until you get the kind of response that you want.
|

If you need to do this on a regular basis or within a batch you can use AIA and the `--next` and `--pipeline` command line options.

These two options specify the sequence of prompt IDs to be processed. Both options are available to be used within a prompt file using the `//next` and `//pipeline` directives. Like all embedded directives you can take advantage of parameterization, shell integration and Ruby. With this kind of dynamic content and control flow in your prompts you will start to feel like Tim the Tool Man - more power!

Consider the condition in which you have 4 prompt IDs that need to be processed in sequence. The IDs and associated prompt file names are:

| Prompt ID | Prompt File |
| --------- | ----------- |
| one       | one.txt     |
| two       | two.txt     |
| three     | three.txt   |
| four      | four.txt    |

### --next

```bash
aia one --next two --out_file temp.md
aia three --next four temp.md -o answer.md
```

or within each of the prompt files you use the `//next` directive:

```bash
one.txt contains //next two
two.txt contains //next three
three.txt contains //next four
```

BUT if you have more than two prompts in your sequence then consider using the `--pipeline` option.

### --pipeline

```bash
aia one --pipeline two,three,four
```

or inside of the `one.txt` prompt file use this directive:

```bash
//pipeline two,three,four
```

**The directive //pipeline is short for //config pipeline**
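
Because of that shorthand, these two directives inside a prompt file should be equivalent (shown for illustration):

```bash
//pipeline two,three,four
//config pipeline two,three,four
```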

Since the response of one prompt is fed into the next prompt within the sequence, give each prompt in the sequence its own `out_file` directive:

| Prompt File | Directive |
| ----------- | --------- |
| one.txt | //config out_file one.md |
| two.txt | //config out_file two.md |
| three.txt | //config out_file three.md |
| four.txt | //config out_file four.md |

This way you can see the response that was generated for each prompt in the sequence.
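
The data flow of a pipeline can be sketched in plain shell. This is an illustration of the chaining only, not AIA's implementation: each step consumes the previous step's response and records its own out_file.

```bash
#!/usr/bin/env bash
# Simulate a four-step pipeline: each "prompt" wraps the previous
# response and saves its own result, like //config out_file <step>.md
response="raw input"
for step in one two three four; do
  response="processed-by-$step($response)"
  printf '%s\n' "$response" > "$step.md"
done

cat four.md
```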

### Example pipeline

TODO: the audio-to-text is still under development.

Create two prompts named `transcribe.txt` and `tech_summary.txt`:

```bash
# transcribe.txt
# Desc: takes one audio file
# note that there is no "prompt" text only the directive

//config model whisper-1
//next tech_summary
```

and

```bash
# tech_summary.txt

//config model gpt-4o-mini
//config out_file meeting_summary.md

Review the raw transcript of a technical meeting,
summarize the discussion and
note any action items that were generated.

Format your response in markdown.
```

Now you can do this:

```bash
aia transcribe my_tech_meeting.m4a
```

Your summary of the meeting is in the file `meeting_summary.md`.

## All About ROLES

### The --roles_prefix (AIA_ROLES_PREFIX)

There are two kinds of prompts:

1. instructional - tells the LLM what to do
2. personification - tells the LLM who it should pretend to be when it does its transformational work.

The second kind of prompt is called a role (aka system prompt). Sometimes the role is incorporated into the instruction. For example, "As a magician make a rabbit appear out of a hat." To reuse the same role in multiple prompts, AIA encourages you to designate a special subdirectory for prompts that are specific to personification - roles.

The default `roles_prefix` is set to `roles`. This creates a subdirectory under the `prompts_dir` where role files are stored. Internally, AIA calculates a `roles_dir` value by joining `prompts_dir` and `roles_prefix`. Keeping your role files in this subdirectory is recommended for better organization and management.
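
As described above, `roles_dir` is simply the join of `prompts_dir` and `roles_prefix`. A minimal Ruby sketch of that derivation (an assumption based on this description, not the gem's actual source):

```ruby
# Derive roles_dir the way the text above describes:
# roles_dir = prompts_dir joined with roles_prefix
prompts_dir  = File.expand_path("~/.prompts")
roles_prefix = "roles"

roles_dir = File.join(prompts_dir, roles_prefix)
puts roles_dir
```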

### The --role Option

The `--role` option is used to identify a personification prompt within your roles directory which defines the context within which the LLM is to provide its response. The text of the role ID is prepended to the text of the primary prompt to form a complete prompt to be processed by the LLM.

For example consider:

```bash
aia -r ruby refactor my_class.rb
```

The role ID is `ruby`, the prompt ID is `refactor` and `my_class.rb` is a context file.

Within the roles directory the contents of the text file `ruby.txt` will be prepended to the contents of the `refactor.txt` file from the prompts directory to produce a complete prompt. That complete prompt will have any parameters and then any directives processed before sending the combined prompt text and the content of the context file to the LLM.
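
The prepending described above amounts to simple file concatenation. A hypothetical Ruby sketch (illustrative only; AIA's real prompt handling also performs parameter and directive processing):

```ruby
require "tmpdir"

# Build a combined prompt by prepending a role file to a prompt file.
Dir.mktmpdir do |dir|
  File.write(File.join(dir, "ruby.txt"), "As a Ruby expert,\n")
  File.write(File.join(dir, "refactor.txt"), "refactor the attached class.\n")

  combined = File.read(File.join(dir, "ruby.txt")) +
             File.read(File.join(dir, "refactor.txt"))
  puts combined
end
```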

Note that `--role` is just a way of saying add this prompt text file to the front of this other prompt text file. The contents of the "role" prompt could be anything. It does not necessarily have to be an actual role.

AIA fully supports a directory tree within the `prompts_dir` as a way of organizing or classifying your different prompt text files.

```bash
aia -r ruby sw_eng/doc_the_methods my_class.rb
```

In this example the prompt text file `$AIA_ROLES_PREFIX/ruby.txt` is prepended to the prompt text file `$AIA_PROMPTS_DIR/sw_eng/doc_the_methods.txt`.

### Other Ways to Insert Roles into Prompts

Since AIA supports parameterized prompts, you could make a keyword like "[ROLE]" part of your prompt. For example consider this prompt:

```text
As a [ROLE] tell me what you think about [SUBJECT]
```

When this prompt is processed, AIA will ask you for a value for the keyword "[ROLE]" and the keyword "[SUBJECT]" to complete the prompt. Since AIA maintains a history of your previous answers, you could just choose something that you used in the past or answer with a completely new value.
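
Keyword substitution of this kind can be sketched in a few lines of Ruby. This illustrates the idea only; it is not the prompt_manager gem's actual pattern or code:

```ruby
# Replace [KEYWORD] placeholders with user-supplied answers.
prompt  = "As a [ROLE] tell me what you think about [SUBJECT]"
answers = { "ROLE" => "pirate", "SUBJECT" => "rubber duckies" }

completed = prompt.gsub(/\[([A-Z_]+)\]/) { answers.fetch(Regexp.last_match(1)) }
puts completed   # As a pirate tell me what you think about rubber duckies
```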

## External CLI Tools Used

To install the external CLI programs used by AIA:

```bash
brew install fzf
```

fzf
Command-line fuzzy finder written in Go
[https://github.com/junegunn/fzf](https://github.com/junegunn/fzf)
## Shell Completion

You can set up a completion function in your shell that will complete on the prompt_id saved in your `prompts_dir` - functions for `bash`, `fish` and `zsh` are available. To get a copy of these functions do this:

```bash
aia --completion bash
```

Copy the function to a place where it can be installed in your shell's instance.

This is just between you and me so don't go blabbing this around to everyone. My most powerful prompt is in a file named `ad_hoc.txt`. It looks like this:

```text
[WHAT_NOW_HUMAN]
```

Yep. Just a single parameter for which I can provide a value of anything that is on my mind at the time. Its advantage is that I do not pollute my shell's command history with lots of text.

```bash
aia ad_hoc
```

Or consider this executable prompt file:

```bash
#!/usr/bin/env aia run
[WHAT_NOW_HUMAN]
```

Where the `run` prompt ID has a `run.txt` file in the prompt directory that is basically empty. Or maybe `run.txt` has some prompt instructions for how to run the prompt - some kind of meta-thinking instructions.

## My Configuration

I use the `bash` shell. In my `.bashrc` file I source another file named `.bashrc__aia` which looks like this:

```bash
# ~/.bashrc__aia
# AI Assistant

export AIA_PROMPTS_DIR=~/.prompts
export AIA_OUT_FILE=./temp.md
export AIA_LOG_FILE=$AIA_PROMPTS_DIR/_prompts.log
export AIA_MODEL=gpt-4o-mini

# Not a default. Invokes spinner. If not true then there is no spinner
# for feedback while waiting for the LLM to respond.
export AIA_VERBOSE=true

alias chat='aia --chat --shell --erb --terse'

# rest of the file is the completion function
```

## Executable Prompts

With all of the capabilities of the AI Assistant, you can create your own executable prompts. These prompts can be used to automate tasks, generate content, or perform any other action that you can think of. All you need to get started with executable prompts is a prompt that does not do anything. For example consider my `run.txt` prompt.

```bash
# ~/.prompts/run.txt
# Desc: Run executable prompts coming in via STDIN
```

Remember that the '#' character indicates a comment line, making the `run` prompt ID basically a do-nothing prompt.

An executable prompt can reside anywhere, either in your $PATH or not. That is your choice. It must however be executable. Consider the following `top10` executable prompt:

```bash
#!/usr/bin/env aia run --no-out_file
# File: top10
# Desc: The top 10 cities by population

What are the top 10 cities by population in the USA? Summarize what people
like about living in each city. Include an average cost of living. Include
links to the Wikipedia pages. Format your response as a markdown document.
```

Make sure that it is executable.

```bash
chmod +x top10
```

The magic is in the first line of the prompt. It is a shebang line that tells the system how to execute the prompt. In this case it is telling the system to use the `aia` command line tool to execute the `run` prompt. The `--no-out_file` option tells AIA not to write the output of the prompt to a file. Instead it will write the output to STDOUT. The remaining content of this `top10` prompt is sent via STDIN to the LLM.
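
The shebang mechanism itself is ordinary *nix behavior and can be demonstrated without `aia`. In this sketch the interpreter is `/bin/cat`, so running the file simply prints its own body, the same way an executable prompt's body is handed over to `aia run`:

```bash
#!/usr/bin/env bash
# Create a tiny "executable prompt" whose interpreter is /bin/cat.
cat > hello_prompt <<'EOF'
#!/bin/cat
This body is delivered to the interpreter, just as an executable
prompt's body is delivered to aia.
EOF
chmod +x hello_prompt

# The kernel runs /bin/cat with the file path as its argument.
./hello_prompt
```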

Now just execute it like any other command in your terminal.

```bash
./top10
```

Since its output is going to STDOUT, you can set up a pipe chain. Use the CLI program `glow` to render markdown in the terminal (`brew install glow`):

```bash
./top10 | glow
```

This executable prompt concept sets up the building blocks of a *nix CLI-based pipeline in the same way that the `--pipeline` and `--next` options and directives are used.

## Development

**ShellCommandExecutor Refactor:**
The `ShellCommandExecutor` is now a class (previously a module). It stores the config object as an instance variable and provides cleaner encapsulation. For backward compatibility, class-level methods are available and delegate to instance methods internally.
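
A sketch of the shape such a refactor takes (assumed for illustration; this is not the gem's actual code):

```ruby
# Module-to-class refactor with a backward-compatible class-level API.
class ShellCommandExecutor
  def initialize(config = {})
    @config = config   # config now lives on the instance
  end

  def execute_command(command)
    `#{command}`.strip
  end

  # Old module-style entry point delegates to a new instance.
  def self.execute_command(command, config = {})
    new(config).execute_command(command)
  end
end

puts ShellCommandExecutor.execute_command("echo hello")   # hello
```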

**Prompt Variable Fallback:**
When processing a prompt file without a `.json` history file, variables are always parsed from the prompt text so you are prompted for values as needed.
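
A sketch of that fallback in Ruby (the bracketed-keyword pattern is assumed here for illustration):

```ruby
# Scan the prompt text itself for [VARIABLES] when no .json history exists.
prompt_text = "As a [ROLE] review [FILE_NAME] and summarize [FILE_NAME]"

variables = prompt_text.scan(/\[([A-Z_]+)\]/).flatten.uniq
puts variables.inspect   # ["ROLE", "FILE_NAME"]
```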

## Contributing

Bug reports and pull requests are welcome on GitHub at https://github.com/MadBomber/aia.

When you find problems with AIA please note them as an issue. This thing was written mostly by a human and you know how error-prone humans are. There should be plenty of errors to find.

I'm not happy with the way some command line options for external commands are hard-coded. I'm specifically talking about the way in which the `rg` and `fzf` tools are used. Their options decide the basic look and feel of the search capability on the command line. Maybe they should be part of the overall configuration so that users can tune the UI to their liking.

## History of Development

I originally wrote a tiny script called `aip.rb` to experiment with parameterized prompts. That was in August of 2023. AIP meant AI Parameterized. Adding an extra P for Prompts just seemed a little silly. It lived in my [scripts repo](https://github.com/MadBomber/scripts) for a while. It became useful to me, so of course I needed to keep enhancing it. I moved it into my [experiments repo](https://github.com/MadBomber/experiments) and began adding features in a haphazard manner. No real plan or architecture. From those experiments I refactored out the [prompt_manager gem](https://github.com/MadBomber/prompt_manager) and the [ai_client gem](https://github.com/MadBomber/ai_client). The name was changed from AIP to AIA and it became a gem.

All of that undirected experimentation without a clear picture of where this thing was going resulted in chaotic code. I would use an Italian food dish to describe the organization but I think chaotic is more descriptive.

## Roadmap

- support for using Ruby-based functional callback tools
- support for Model Context Protocol

## License