aia 0.9.4 → 0.9.6

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
data/README.md CHANGED
@@ -1,775 +1,770 @@
1
- # AI Assistant (AIA)
1
+ <div align="center">
2
+ <h1>AI Assistant (AIA)</h1>
3
+ <img src="images/aia.png" alt="Robots waiter ready to take your order."><br />
4
+ **The Prompt is the Code**
5
+ </div>
2
6
 
3
- **The prompt is the code!**
7
+ AIA is a command-line utility that facilitates interaction with AI models through dynamic prompt management. It automates the management of pre-compositional prompts and executes generative AI commands with enhanced features including embedded directives, shell integration, embedded Ruby, history management, interactive chat, and prompt workflows.
4
8
 
5
- ```plain
6
- , , AIA is a command-line utility that facilitates
7
- (\____/) AI Assistant interaction with AI models. It automates the
8
- (_oo_) Fancy LLM management of pre-compositional prompts and
9
- (O) is Online executes generative AI (Gen-AI) commands on those
10
- __||__ \) prompts. AIA includes enhanced features such as
11
- [/______\] / * embedded directives * shell integration
12
- / \__AI__/ \/ * embedded Ruby * history management
13
- / /__\ * interactive chat * prompt workflows
14
- (\ /____\ # supports RubyLLM::Tool integration
15
- ```
16
-
17
- AIA leverages the [prompt_manager gem](https://github.com/madbomber/prompt_manager) to manage prompts. It utilizes the [CLI tool fzf](https://github.com/junegunn/fzf) for prompt selection.
9
+ AIA leverages the [prompt_manager gem](https://github.com/madbomber/prompt_manager) to manage prompts, utilizes the [CLI tool fzf](https://github.com/junegunn/fzf) for prompt selection, and includes the [shared_tools gem](https://github.com/madbomber/shared_tools) which provides a collection of ready-to-use functions for use with LLMs that support tools.
18
10
 
19
11
  **Wiki**: [Checkout the AIA Wiki](https://github.com/MadBomber/aia/wiki)
20
12
 
21
- **RubyLLM::Tool Support:** AIA now supports the integration of tools for those models that support function callbacks. See the --tools, --allowed_tools and --rejected_tools options. Function callbacks provide dynamic content for prompts just like the AIA directives, shell, and ERB integrations, so why have both? Well, AIA is older than function callbacks. The AIA integrations are legacy, but more than that, not all models support function callbacks. That means the AIA integrations are still viable ways to provide dynamic extra content to your prompts.
13
+ ## Quick Start
14
+
15
+ 1. **Install AIA:**
16
+ ```bash
17
+ gem install aia
18
+ ```
19
+
20
+ 2. **Install dependencies:**
21
+ ```bash
22
+ brew install fzf
23
+ ```
24
+
25
+ 3. **Create your first prompt:**
26
+ ```bash
27
+ mkdir -p ~/.prompts
28
+ echo "What is [TOPIC]?" > ~/.prompts/ask.txt
29
+ ```
30
+
31
+ 4. **Run your prompt:**
32
+ ```bash
33
+ aia ask
34
+ ```
35
+ You'll be prompted to enter a value for `[TOPIC]`, then AIA will send your question to the AI model.
36
+
37
+ 5. **Start an interactive chat:**
38
+ ```bash
39
+ aia --chat
40
+ ```
41
+
42
+ ```plain
43
+ , ,
44
+ (\____/) AI Assistant
45
+ (_oo_) Fancy LLM
46
+ (O) is Online
47
+ __||__ \)
48
+ [/______\] /
49
+ / \__AI__/ \/
50
+ / /__\
51
+ (\ /____\
52
+ ```
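The `[TOPIC]` placeholder in step 3 is a prompt parameter: any ALL-CAPS keyword in square brackets is collected from the user at run time, and a prompt may declare several. A hypothetical sketch (the `explain` prompt name and its wording are invented, not from the AIA docs):

```shell
# Hypothetical prompt file with two parameters.
# Running `aia explain` would ask for a value for each keyword in turn.
mkdir -p ~/.prompts
cat > ~/.prompts/explain.txt <<'EOF'
Explain [CONCEPT] to a [AUDIENCE] in three sentences.
EOF
cat ~/.prompts/explain.txt
```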
22
53
 
23
54
  <!-- Tocer[start]: Auto-generated, don't remove. -->
24
55
 
25
56
  ## Table of Contents
26
57
 
27
- - [Configuration Options](#configuration-options)
28
- - [Configuration Flexibility](#configuration-flexibility)
29
- - [Expandable Configuration](#expandable-configuration)
30
- - [The Local Model Registry Refresh](#the-local-model-registry-refresh)
31
- - [Important Note](#important-note)
32
- - [Shell Integration inside of a Prompt](#shell-integration-inside-of-a-prompt)
33
- - [Dynamic Shell Commands](#dynamic-shell-commands)
34
- - [Shell Command Safety](#shell-command-safety)
35
- - [Chat Session Use](#chat-session-use)
36
- - [Embedded Ruby (ERB)](#embedded-ruby-erb)
37
- - [Prompt Directives](#prompt-directives)
38
- - [Parameter and Shell Substitution in Directives](#parameter-and-shell-substitution-in-directives)
39
- - [Directive Syntax](#directive-syntax)
40
- - [AIA Specific Directive Commands](#aia-specific-directive-commands)
41
- - [//config](#config)
42
- - [//include](#include)
43
- - [//ruby](#ruby)
44
- - [//shell](#shell)
45
- - [//next](#next)
46
- - [//pipeline](#pipeline)
47
- - [Using Directives in Chat Sessions](#using-directives-in-chat-sessions)
48
- - [Prompt Sequences](#prompt-sequences)
49
- - [--next](#--next)
50
- - [--pipeline](#--pipeline)
51
- - [Best Practices ??](#best-practices-)
52
- - [Example pipeline](#example-pipeline)
53
- - [All About ROLES](#all-about-roles)
54
- - [The --roles_prefix (AIA_ROLES_PREFIX)](#the---roles_prefix-aia_roles_prefix)
55
- - [The --role Option](#the---role-option)
56
- - [Other Ways to Insert Roles into Prompts](#other-ways-to-insert-roles-into-prompts)
57
- - [External CLI Tools Used](#external-cli-tools-used)
58
- - [Shell Completion](#shell-completion)
59
- - [My Most Powerful Prompt](#my-most-powerful-prompt)
60
- - [My Configuration](#my-configuration)
61
- - [Executable Prompts](#executable-prompts)
62
- - [Usage](#usage)
58
+ - [Quick Start](#quick-start)
59
+ - [Installation & Prerequisites](#installation--prerequisites)
60
+ - [Basic Usage](#basic-usage)
61
+ - [Configuration](#configuration)
62
+ - [Essential Configuration Options](#essential-configuration-options)
63
+ - [Configuration Precedence](#configuration-precedence)
64
+ - [Complete Configuration Reference](#complete-configuration-reference)
65
+ - [Advanced Features](#advanced-features)
66
+ - [Prompt Directives](#prompt-directives)
67
+ - [Shell Integration](#shell-integration)
68
+ - [Embedded Ruby (ERB)](#embedded-ruby-erb)
69
+ - [Prompt Sequences](#prompt-sequences)
70
+ - [Roles and System Prompts](#roles-and-system-prompts)
71
+ - [RubyLLM::Tool Support](#rubyllmtool-support)
72
+ - [Examples & Tips](#examples--tips)
73
+ - [Practical Examples](#practical-examples)
74
+ - [Executable Prompts](#executable-prompts)
75
+ - [Tips from the Author](#tips-from-the-author)
76
+ - [Security Considerations](#security-considerations)
77
+ - [Troubleshooting](#troubleshooting)
63
78
  - [Development](#development)
64
79
  - [Contributing](#contributing)
65
80
  - [Roadmap](#roadmap)
66
- - [RubyLLM::Tool Support](#rubyllmtool-support)
67
- - [What Are RubyLLM Tools?](#what-are-rubyllm-tools)
68
- - [How to Use Tools](#how-to-use-tools)
69
- - [`--tools` Option](#--tools-option)
70
- - [Filtering the tool paths](#filtering-the-tool-paths)
71
- - [`--at`, `--allowed_tools` Option](#--at---allowed_tools-option)
72
- - [`--rt`, `--rejected_tools` Option](#--rt---rejected_tools-option)
73
- - [Creating Your Own Tools](#creating-your-own-tools)
74
- - [MCP Supported](#mcp-supported)
75
81
  - [License](#license)
76
82
 
77
83
  <!-- Tocer[finish]: Auto-generated, don't remove. -->
78
84
 
79
- ## Configuration Options
80
-
81
- The following table provides a comprehensive list of configuration options, their default values, and the associated environment variables:
82
-
83
- | Config Item Name | CLI Options | Default Value | Environment Variable |
84
- |----------------------|-------------|-----------------------------|---------------------------|
85
- | adapter | --adapter | ruby_llm | AIA_ADAPTER |
86
- | aia_dir | | ~/.aia | AIA_DIR |
87
- | append | -a, --append | false | AIA_APPEND |
88
- | chat | --chat | false | AIA_CHAT |
89
- | clear | --clear | false | AIA_CLEAR |
90
- | config_file | -c, --config_file | ~/.aia/config.yml | AIA_CONFIG_FILE |
91
- | debug | -d, --debug | false | AIA_DEBUG |
92
- | embedding_model | --em, --embedding_model | text-embedding-ada-002 | AIA_EMBEDDING_MODEL |
93
- | erb | | true | AIA_ERB |
94
- | frequency_penalty | --frequency_penalty | 0.0 | AIA_FREQUENCY_PENALTY |
95
- | fuzzy | -f, --fuzzy | false | AIA_FUZZY |
96
- | image_quality | --iq, --image_quality | standard | AIA_IMAGE_QUALITY |
97
- | image_size | --is, --image_size | 1024x1024 | AIA_IMAGE_SIZE |
98
- | image_style | --style, --image_style | vivid | AIA_IMAGE_STYLE |
99
- | log_file | -l, --log_file | ~/.prompts/_prompts.log | AIA_LOG_FILE |
100
- | markdown | --md, --markdown | true | AIA_MARKDOWN |
101
- | max_tokens | --max_tokens | 2048 | AIA_MAX_TOKENS |
102
- | model | -m, --model | gpt-4o-mini | AIA_MODEL |
103
- | next | -n, --next | nil | AIA_NEXT |
104
- | out_file | -o, --out_file | temp.md | AIA_OUT_FILE |
105
- | parameter_regex | --regex | '(?-mix:(\[[A-Z _|]+\]))' | AIA_PARAMETER_REGEX |
106
- | pipeline | --pipeline | [] | AIA_PIPELINE |
107
- | presence_penalty | --presence_penalty | 0.0 | AIA_PRESENCE_PENALTY |
108
- | prompt_extname | | .txt | AIA_PROMPT_EXTNAME |
109
- | prompts_dir | -p, --prompts_dir | ~/.prompts | AIA_PROMPTS_DIR |
110
- | refresh | --refresh | 7 (days) | AIA_REFRESH |
111
- | require_libs | --rq --require | [] | AIA_REQUIRE_LIBS |
112
- | role | -r, --role | | AIA_ROLE |
113
- | roles_dir | | ~/.prompts/roles | AIA_ROLES_DIR |
114
- | roles_prefix | --roles_prefix | roles | AIA_ROLES_PREFIX |
115
- | shell | | true | AIA_SHELL |
116
- | speak | --speak | false | AIA_SPEAK |
117
- | speak_command | | afplay | AIA_SPEAK_COMMAND |
118
- | speech_model | --sm, --speech_model | tts-1 | AIA_SPEECH_MODEL |
119
- | system_prompt | --system_prompt | | AIA_SYSTEM_PROMPT |
120
- | temperature | -t, --temperature | 0.7 | AIA_TEMPERATURE |
121
- | terse | --terse | false | AIA_TERSE |
122
- | tool_paths | --tools | [] | AIA_TOOL_PATHS |
123
- | allowed_tools | --at --allowed_tools | nil | AIA_ALLOWED_TOOLS |
124
- | rejected_tools | --rt --rejected_tools | nil | AIA_REJECTED_TOOLS |
125
- | top_p | --top_p | 1.0 | AIA_TOP_P |
126
- | transcription_model | --tm, --transcription_model | whisper-1 | AIA_TRANSCRIPTION_MODEL |
127
- | verbose | -v, --verbose | false | AIA_VERBOSE |
128
- | voice | --voice | alloy | AIA_VOICE |
129
-
130
- These options can be configured via command-line arguments, environment variables, or configuration files.
131
-
132
- ### Configuration Flexibility
133
-
134
- AIA determines configuration settings using the following order of precedence:
135
-
136
- 1. Embedded config directives
137
- 2. Command-line arguments
138
- 3. Environment variables
139
- 4. Configuration files
140
- 5. Default values
141
-
142
- For example, let's consider the `model` option. Suppose the following conditions:
143
-
144
- - Default value is "gpt-4o-mini"
145
- - No entry in the config file
146
- - No environment variable value for `AIA_MODEL`
147
- - No command-line argument provided for `--model`
148
- - No embedded directive like `//config model = some-fancy-llm`
149
-
150
- In this scenario, the model used will be "gpt-4o-mini". However, you can override this default by setting the model at any level of the precedence order. Additionally, you can dynamically ask for user input by incorporating an embedded directive with a placeholder parameter, such as `//config model = [PROCESS_WITH_MODEL]`. When processing the prompt, AIA will prompt you to input a value for `[PROCESS_WITH_MODEL]`.
151
-
152
- If you do not like the default regex used to identify parameters within the prompt text, don't worry: there is a way to configure it using the `--regex` option.
153
-
154
- ### Expandable Configuration
85
+ ## Installation & Prerequisites
155
86
 
156
- The configuration options are expandable through a config file, allowing you to add custom entries. For example, you can define a custom configuration item like "xyzzy" in your config file. This value can then be accessed in your prompts using `AIA.config.xyzzy` within a `//ruby` directive or an ERB block, enabling dynamic prompt generation based on your custom configurations.
87
+ ### Requirements
157
88
 
158
- ## The Local Model Registry Refresh
89
+ - **Ruby**: >= 3.2.0
90
+ - **External Tools**:
91
+ - [fzf](https://github.com/junegunn/fzf) - Command-line fuzzy finder
159
92
 
160
- The `ruby_llm` gem maintains a registry of providers and models integrated with a new website that allows users to download the latest information about each model. This capability is scheduled for release in version 1.3.0 of the gem.
93
+ ### Installation
161
94
 
162
- In anticipation of this new feature, the AIA tool has introduced the `--refresh` option, which specifies the number of days between updates to the centralized model registry. Here’s how the `--refresh` option works:
163
-
164
- - A value of `0` (zero) updates the local model registry every time AIA is executed.
165
- - A value of `1` (one) updates the local model registry once per day.
166
- - etc.
167
-
168
- The date of the last successful refresh is stored in the configuration file under the key `last_refresh`. The default configuration file is located at `~/.aia/config.yml`. When a refresh is successful, the `last_refresh` value is updated to the current date, and the updated configuration is saved in `AIA.config.config_file`.
169
-
170
- ### Important Note
171
-
172
- This approach to saving the `last_refresh` date can become cumbersome, particularly if you maintain multiple configuration files for different projects. The `last_refresh` date is only updated in the currently active configuration file. If you switch to a different project with a different configuration file, you may inadvertently hit the central model registry again, even if your local registry is already up to date.
173
-
174
- ## Shell Integration inside of a Prompt
175
-
176
- AIA configures the `prompt_manager` gem to be fully integrated with your local shell by default. This is not an option - it's a feature. If your prompt includes text patterns like $HOME, ${HOME} or $(command), those patterns will be automatically replaced in the prompt text by the shell's value for those patterns.
177
-
178
- #### Dynamic Shell Commands
179
-
180
- Dynamic content can be inserted into the prompt using the pattern $(shell command) where the output of the shell command will replace the $(...) pattern. It will become part of the context / instructions for the prompt.
181
-
182
- Consider the power of tailoring a prompt to your specific operating system:
95
+ ```bash
96
+ # Install AIA gem
97
+ gem install aia
183
98
 
184
- ```plaintext
185
- As a system administration on a $(uname -v) platform what is the best way to [DO_SOMETHING]
186
- ```
99
+ # Install required external tools (macOS)
100
+ brew install fzf
187
101
 
188
- Or insert content from a file in your home directory:
102
+ # Install required external tools (Linux)
103
+ # Ubuntu/Debian
104
+ sudo apt install fzf
189
105
 
190
- ```plaintext
191
- Given the following constraints $(cat ~/3_laws_of_robotics.txt) determine the best way to instruct my roomba to clean my kids room.
106
+ # Arch Linux
107
+ sudo pacman -S fzf
192
108
  ```
193
109
 
194
- #### Shell Command Safety
110
+ ### Setup Shell Completion
195
111
 
196
- The catchphrase "the prompt is the code" within AIA means that you have the power to execute any command you want, but you must be careful not to execute commands that could cause harm. AIA is not going to protect you from doing something dumb. Sure, that's a copout. I just can't think (actually I can) of all the ways you can mess things up writing code. Remember what we learned from Forrest Gump: "Stupid is as stupid does." So don't break the dumb law. If someone gives you a prompt and says "run this with AIA" you had better review the prompt before processing it.
112
+ Get completion functions for your shell:
197
113
 
198
- #### Chat Session Use
114
+ ```bash
115
+ # For bash users
116
+ aia --completion bash >> ~/.bashrc
199
117
 
200
- Shell integration is available in your follow-up prompts within a chat session. Suppose you started a chat session (--chat) using a role of "Ruby Expert" expecting to chat about changes that could be made to a specific class, BUT you forgot to include the class source file as part of the context when you got started. You could enter this as your follow-up prompt to keep going:
118
+ # For zsh users
119
+ aia --completion zsh >> ~/.zshrc
201
120
 
202
- ```plaintext
203
- The class I want to chat about refactoring is this one: $(cat my_class.rb)
121
+ # For fish users
122
+ aia --completion fish >> ~/.config/fish/config.fish
204
123
  ```
205
124
 
206
- That inserts the entire class source file into your follow-up prompt. You can continue chatting with your AI Assistant about changes to the class.
125
+ ## Basic Usage
207
126
 
208
- ## Embedded Ruby (ERB)
127
+ ### Command Line Interface
209
128
 
210
- The inclusion of dynamic content through the shell integration is significant. AIA also provides the full power of embedded Ruby code processing within the prompt text.
129
+ ```bash
130
+ # Basic usage
131
+ aia [OPTIONS] PROMPT_ID [CONTEXT_FILES...]
211
132
 
212
- AIA takes advantage of the `prompt_manager` gem to enable ERB integration in prompt text as a default. It's an always-available feature of AIA prompts. The [Embedded Ruby (ERB) template syntax (2024)](https://bophin-com.ngontinh24.com/article/language-embedded-ruby-erb-template-syntax) provides a good overview of the syntax and power of ERB.
133
+ # Interactive chat session
134
+ aia --chat [--role ROLE] [--model MODEL]
213
135
 
214
- Most websites that have information about ERB will give examples of how to use ERB to generate dynamic HTML content for web-based applications. That is a common use case for ERB. AIA on the other hand uses ERB to generate dynamic or conditional prompt text for LLM processing.
136
+ # Use a specific model
137
+ aia --model gpt-4 my_prompt
215
138
 
216
- ## Prompt Directives
139
+ # Specify output file
140
+ aia --out_file result.md my_prompt
217
141
 
218
- Downstream processing directives were added to the `prompt_manager` gem used by AIA at version 0.4.1. These directives are lines in the prompt text file that begin with "//" having this pattern:
142
+ # Use a role/system prompt
143
+ aia --role expert my_prompt
219
144
 
220
- ```bash
221
- //command params
145
+ # Enable fuzzy search for prompts
146
+ aia --fuzzy
222
147
  ```
223
148
 
224
- There is no space between the "//" and the command. Commands do not have to have params. These params are typically space delimited when more than one is required. It all depends on the command.
225
-
226
- ### Parameter and Shell Substitution in Directives
149
+ ### Key Command-Line Options
227
150
 
228
- When you combine prompt directives with prompt parameters and shell envar substitutions you can get some powerful compositional prompts.
151
+ | Option | Description | Example |
152
+ |--------|-------------|---------|
153
+ | `--chat` | Start interactive chat session | `aia --chat` |
154
+ | `--model MODEL` | Specify AI model to use | `aia --model gpt-4` |
155
+ | `--role ROLE` | Use a role/system prompt | `aia --role expert` |
156
+ | `--out_file FILE` | Specify output file | `aia --out_file results.md` |
157
+ | `--fuzzy` | Use fuzzy search for prompts | `aia --fuzzy` |
158
+ | `--help` | Show complete help | `aia --help` |
229
159
 
230
- Here is an example of a pure generic directive.
160
+ ### Directory Structure
231
161
 
232
- ```bash
233
- //[DIRECTIVE_NAME] [DIRECTIVE_PARAMS]
234
162
  ```
235
-
236
- When the prompt runs, you will be asked to provide a value for each of the parameters. You could answer "shell" for the directive name and "calc 22/7" if you wanted a bad approximation of PI.
237
-
238
- Try this prompt file:
239
- ```bash
240
- //shell calc [FORMULA]
241
-
242
- What does that number mean to you?
163
+ ~/.prompts/ # Default prompts directory
164
+ ├── ask.txt # Simple question prompt
165
+ ├── code_review.txt # Code review prompt
166
+ ├── roles/ # Role/system prompts
167
+ │ ├── expert.txt # Expert role
168
+ │ └── teacher.txt # Teaching role
169
+ └── _prompts.log # History log
243
170
  ```
244
171
 
245
- ### Directive Syntax
172
+ ## Configuration
246
173
 
247
- Directives can be entered in chat or prompt files using the following syntax:
248
- - `//command args`
174
+ ### Essential Configuration Options
249
175
 
250
- Supported directives:
251
- - `help`: Show available directives
252
- - `shell` or `sh`: Execute a shell command
253
- - `ruby` or `rb`: Execute Ruby code
254
- - `config` or `cfg`: Show or update configuration
255
- - `include` or `inc`: Include file content
256
- - `next`: Set/Show the next prompt ID to be processed
257
- - `pipeline`: Set/Extend/Show the workflow of prompt IDs
176
+ The most commonly used configuration options:
258
177
 
259
- When a directive produces output, it is added to the chat context. If there is no output, you are prompted again.
178
+ | Option | Default | Description |
179
+ |--------|---------|-------------|
180
+ | `model` | `gpt-4o-mini` | AI model to use |
181
+ | `prompts_dir` | `~/.prompts` | Directory containing prompts |
182
+ | `out_file` | `temp.md` | Default output file |
183
+ | `temperature` | `0.7` | Model creativity (0.0-1.0) |
184
+ | `chat` | `false` | Start in chat mode |
260
185
 
261
- ### AIA Specific Directive Commands
186
+ ### Configuration Precedence
262
187
 
263
- At this time AIA only has a few directives which are detailed below.
188
+ AIA determines configuration settings using this order (highest to lowest priority):
264
189
 
265
- #### //config
190
+ 1. **Embedded config directives** (in prompt files): `//config model = gpt-4`
191
+ 2. **Command-line arguments**: `--model gpt-4`
192
+ 3. **Environment variables**: `export AIA_MODEL=gpt-4`
193
+ 4. **Configuration files**: `~/.aia/config.yml`
194
+ 5. **Default values**
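The fallback behavior of this list can be sketched in a few lines of Ruby. This is illustrative only; the method and constant names below are invented, not AIA's actual internals:

```ruby
# Illustrative sketch of the precedence order above; not AIA's real internals.
DEFAULTS = { model: "gpt-4o-mini", temperature: 0.7 }

def resolve(key, directive: nil, cli: nil, env: nil, config: nil)
  # The first non-nil value wins, mirroring priorities 1-5 above.
  directive || cli || env || config || DEFAULTS[key]
end

puts resolve(:model)                                     # "gpt-4o-mini" (default)
puts resolve(:model, env: "gpt-4", config: "some-model") # "gpt-4" (env beats config file)
```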
266
195
 
267
- The `//config` directive within a prompt text file is used to tailor the specific configuration environment for the prompt. All configuration items are available to have their values changed. The order of value assignment for a configuration item starts with the default value, which is replaced by the config file value, which is replaced by the envar value, which is replaced by the command line option value.
268
-
269
- The `//config` is the last and final way of changing the value for a configuration item for a specific prompt.
270
-
271
- The switch options are treated like booleans. They are either `true` or `false`. Their name within the context of a `//config` directive always ends with a "?" (question mark) character.
272
-
273
- To set the value of a switch using `//config`, for example `--terse` or `--chat`, do this:
196
+ ### Configuration Methods
274
197
 
198
+ **Environment Variables:**
275
199
  ```bash
276
- //config chat? = true
277
- //config terse? = true
200
+ export AIA_MODEL=gpt-4
201
+ export AIA_PROMPTS_DIR=~/my-prompts
202
+ export AIA_TEMPERATURE=0.8
278
203
  ```
279
204
 
280
- A configuration item such as `--out_file` or `--model` has an associated value on the command line. To set that value with the `//config` directive do it like this:
281
-
282
- ```bash
283
- //config model = gpt-3.5-turbo
284
- //config out_file = temp.md
205
+ **Configuration File** (`~/.aia/config.yml`):
206
+ ```yaml
207
+ model: gpt-4
208
+ prompts_dir: ~/my-prompts
209
+ temperature: 0.8
210
+ chat: false
285
211
  ```
286
212
 
287
- BTW: the "=" is completely optional. It's actually ignored, as is ":=" if you were to choose that as your assignment operator. Also, the number of spaces between the item and the value is completely arbitrary. I like to line things up, so this syntax is just as valid:
213
+ **Embedded Directives** (in prompt files):
214
+ ```
215
+ //config model = gpt-4
216
+ //config temperature = 0.8
288
217
 
289
- ```bash
290
- //config model gpt-3.5-turbo
291
- //config out_file temp.md
292
- //config chat? true
293
- //config terse? true
294
- //config model gpt-4
218
+ Your prompt content here...
295
219
  ```
296
220
 
297
- NOTE: if you specify the same config item name more than once within the prompt file, it's the last one that will be set when the prompt is finally processed through the LLM. For example, in the example above, `gpt-4` will be the model used. Being first does not count in this case.
221
+ ### Complete Configuration Reference
298
222
 
299
- #### //include
223
+ <details>
224
+ <summary>Click to view all configuration options</summary>
300
225
 
301
- Example:
302
- ```bash
303
- //include path_to_file
304
- ```
226
+ | Config Item Name | CLI Options | Default Value | Environment Variable |
227
+ |------------------|-------------|---------------|---------------------|
228
+ | adapter | --adapter | ruby_llm | AIA_ADAPTER |
229
+ | aia_dir | | ~/.aia | AIA_DIR |
230
+ | append | -a, --append | false | AIA_APPEND |
231
+ | chat | --chat | false | AIA_CHAT |
232
+ | clear | --clear | false | AIA_CLEAR |
233
+ | config_file | -c, --config_file | ~/.aia/config.yml | AIA_CONFIG_FILE |
234
+ | debug | -d, --debug | false | AIA_DEBUG |
235
+ | embedding_model | --em, --embedding_model | text-embedding-ada-002 | AIA_EMBEDDING_MODEL |
236
+ | erb | | true | AIA_ERB |
237
+ | frequency_penalty | --frequency_penalty | 0.0 | AIA_FREQUENCY_PENALTY |
238
+ | fuzzy | -f, --fuzzy | false | AIA_FUZZY |
239
+ | image_quality | --iq, --image_quality | standard | AIA_IMAGE_QUALITY |
240
+ | image_size | --is, --image_size | 1024x1024 | AIA_IMAGE_SIZE |
241
+ | image_style | --style, --image_style | vivid | AIA_IMAGE_STYLE |
242
+ | log_file | -l, --log_file | ~/.prompts/_prompts.log | AIA_LOG_FILE |
243
+ | markdown | --md, --markdown | true | AIA_MARKDOWN |
244
+ | max_tokens | --max_tokens | 2048 | AIA_MAX_TOKENS |
245
+ | model | -m, --model | gpt-4o-mini | AIA_MODEL |
246
+ | next | -n, --next | nil | AIA_NEXT |
247
+ | out_file | -o, --out_file | temp.md | AIA_OUT_FILE |
248
+ | parameter_regex | --regex | '(?-mix:(\[[A-Z _\|]+\]))' | AIA_PARAMETER_REGEX |
249
+ | pipeline | --pipeline | [] | AIA_PIPELINE |
250
+ | presence_penalty | --presence_penalty | 0.0 | AIA_PRESENCE_PENALTY |
251
+ | prompt_extname | | .txt | AIA_PROMPT_EXTNAME |
252
+ | prompts_dir | -p, --prompts_dir | ~/.prompts | AIA_PROMPTS_DIR |
253
+ | refresh | --refresh | 7 (days) | AIA_REFRESH |
254
+ | require_libs | --rq --require | [] | AIA_REQUIRE_LIBS |
255
+ | role | -r, --role | | AIA_ROLE |
256
+ | roles_dir | | ~/.prompts/roles | AIA_ROLES_DIR |
257
+ | roles_prefix | --roles_prefix | roles | AIA_ROLES_PREFIX |
258
+ | shell | | true | AIA_SHELL |
259
+ | speak | --speak | false | AIA_SPEAK |
260
+ | speak_command | | afplay | AIA_SPEAK_COMMAND |
261
+ | speech_model | --sm, --speech_model | tts-1 | AIA_SPEECH_MODEL |
262
+ | system_prompt | --system_prompt | | AIA_SYSTEM_PROMPT |
263
+ | temperature | -t, --temperature | 0.7 | AIA_TEMPERATURE |
264
+ | terse | --terse | false | AIA_TERSE |
265
+ | tool_paths | --tools | [] | AIA_TOOL_PATHS |
266
+ | allowed_tools | --at --allowed_tools | nil | AIA_ALLOWED_TOOLS |
267
+ | rejected_tools | --rt --rejected_tools | nil | AIA_REJECTED_TOOLS |
268
+ | top_p | --top_p | 1.0 | AIA_TOP_P |
269
+ | transcription_model | --tm, --transcription_model | whisper-1 | AIA_TRANSCRIPTION_MODEL |
270
+ | verbose | -v, --verbose | false | AIA_VERBOSE |
271
+ | voice | --voice | alloy | AIA_VOICE |
305
272
 
306
- The `path_to_file` can be either absolute or relative. If it is relative, it is anchored at the PWD. If the `path_to_file` includes envars, the `--shell` CLI option must be used to replace the envar in the directive with its actual value.
273
+ </details>
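The `parameter_regex` default in the table above is easier to read when exercised directly. This Ruby sketch applies the same pattern to a made-up prompt string:

```ruby
# The default parameter pattern: ALL-CAPS keywords (spaces, underscores, and
# "|" allowed) wrapped in square brackets.
pattern  = /(\[[A-Z _|]+\])/
prompt   = "What is [TOPIC]? Respond in the voice of [ROLE|PERSONA]."

keywords = prompt.scan(pattern).flatten
puts keywords.inspect   # ["[TOPIC]", "[ROLE|PERSONA]"]
```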
307
274
 
308
- The file that is included will have any comments or directives excluded. It is expected that the file will be a text file so that its content can be prepended to the existing prompt; however, if the file is a source code file (e.g., file.rb) the source code will be included, though any comment line or line that starts with "//" will be excluded.
275
+ ## Advanced Features
309
276
 
310
- #### //ruby
277
+ ### Prompt Directives
311
278
 
312
- The `//ruby` directive executes Ruby code. You can use this to perform complex operations or interact with Ruby libraries.
279
+ Directives are special commands in prompt files that begin with `//` and provide dynamic functionality:
313
280
 
314
- For example:
315
- ```ruby
316
- //ruby puts "Hello from Ruby"
317
- ```
281
+ | Directive | Description | Example |
282
+ |-----------|-------------|---------|
283
+ | `//config` | Set configuration values | `//config model = gpt-4` |
284
+ | `//include` | Insert file contents | `//include path/to/file.txt` |
285
+ | `//shell` | Execute shell commands | `//shell ls -la` |
286
+ | `//ruby` | Execute Ruby code | `//ruby puts "Hello World"` |
287
+ | `//next` | Set next prompt in sequence | `//next summary` |
288
+ | `//pipeline` | Set prompt workflow | `//pipeline analyze,summarize,report` |
289
+ | `//clear` | Clear conversation history | `//clear` |
290
+ | `//help` | Show available directives | `//help` |
291
+ | `//available_models` | List available models | `//available_models` |
292
+ | `//review` | Review current context | `//review` |
318
293
 
319
- You can also use the `--require` option to specify Ruby libraries to require before executing Ruby code:
294
+ #### Configuration Directive Examples
320
295
 
321
296
  ```bash
322
- # Command line
323
- aia --rq json,csv --require os my_prompt
297
+ # Set model and temperature for this prompt
298
+ //config model = gpt-4
299
+ //config temperature = 0.9
324
300
 
325
- # In chat
326
- //ruby JSON.parse('{"data": [1,2,3]}')["data"]
327
- ```
301
+ # Enable chat mode and terse responses
302
+ //config chat = true
303
+ //config terse = true
328
304
 
329
- #### //shell
330
- Example:
331
- ```bash
332
- //shell some_shell_command
305
+ Your prompt content here...
333
306
  ```
334
307
 
335
- It is expected that the shell command will return some text to STDOUT which will be prepended to the existing prompt text within the prompt file.
308
+ #### Dynamic Content Examples
336
309
 
337
- There are no limitations on what the shell command can be. For example if you wanted to bypass the stripping of comments and directives from a file you could do something like this:
338
310
  ```bash
339
- //shell cat path_to_file
340
- ```
341
-
342
- Which does basically the same thing as the `//include` directive, except it uses the entire content of the file. For relative file paths the same thing applies. The file's path will be relative to the PWD.
311
+ # Include file contents
312
+ //include ~/project/README.md
343
313
 
344
- #### //next
345
- Examples:
346
- ```bash
347
- # Show the next prompt ID
348
- //next
314
+ # Execute shell commands
315
+ //shell git log --oneline -10
349
316
 
350
- # Set the next prompt ID
351
- //next prompt_id
317
+ # Run Ruby code
318
+ //ruby require 'json'; puts JSON.pretty_generate({status: "ready"})
352
319
 
353
- # Same as
354
- //config next
355
- //config next = prompt_id
320
+ Analyze the above information and provide insights.
356
321
  ```

- #### //pipeline
-
- Examples:
- ```bash
- # Show the current prompt workflow
- //pipeline
-
- # Set the prompt workflow
- //pipeline = prompt_id_1, prompt_id_2, prompt_id_3
-
- # Extend the prompt workflow
- //pipeline << prompt_id_4, prompt_id_5, prompt_id_6
+ ### Shell Integration

- # Same as
- //config pipeline
- //config pipeline = prompt_id_1, prompt_id_2, prompt_id_3
- //config pipeline << prompt_id_4, prompt_id_5, prompt_id_6
- ```
+ AIA automatically processes shell patterns in prompts:

- ### Using Directives in Chat Sessions
+ - **Environment variables**: `$HOME`, `${USER}`
+ - **Command substitution**: `$(date)`, `$(git branch --show-current)`

- Whe you are in a chat session, you may use a directive as a follow up prompt. For example if you started the chat session with the option `--terse` expecting to get short answers from the LLM; but, then you decide that you want more comprehensive answers you may do this in a chat follow up:
+ **Examples:**

  ```bash
- //config terse? false
- ```
+ # Dynamic system information
+ As a system administrator on a $(uname -s) platform, how do I optimize performance?

- The directive is executed and a new follow up prompt can be entered with a more lengthy response generated from the LLM.
+ # Include file contents via shell
+ Here's my current configuration: $(cat ~/.bashrc | head -20)

- The directive `//clear` truncates your entire session context. The LLM will not remember anything you have discussed.
+ # Use environment variables
+ My home directory is $HOME and I'm user $USER.
+ ```
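Conceptually, the `$(...)` substitution behaves like the small Ruby sketch below. This is a hypothetical helper for illustration only, not AIA's actual implementation: each pattern is replaced with the trimmed output of the shell command.

```ruby
# Hypothetical helper illustrating $(...) command substitution in a
# prompt string; AIA's real processing lives inside the gem.
def expand_substitutions(text)
  text.gsub(/\$\((.+?)\)/) { `#{Regexp.last_match(1)}`.chomp }
end

puts expand_substitutions("Running on $(uname -s) as $(whoami)")
```

This is also why the security note below matters: anything matching the pattern is executed.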

- ## Prompt Sequences
+ **Security Note**: Be cautious with shell integration. Review prompts before execution as they can run arbitrary commands.

- Why would you need/want to use a sequence of prompts in a batch situation. Maybe you have a complex prompt which exceeds the token limitations of your model for input so you need to break it up into multiple parts. Or suppose its a simple prompt but the number of tokens on the output is limited and you do not get exactly the kind of full response for which you were looking.
+ ### Embedded Ruby (ERB)

- Sometimes it takes a series of prompts to get the kind of response that you want. The reponse from one prompt becomes a context for the next prompt. This is easy to do within a `chat` session were you are manually entering and adjusting your prompts until you get the kind of response that you want.
+ AIA supports full ERB processing in prompts for dynamic content generation:

- If you need to do this on a regular basis or within a batch you can use AIA and the `--next` and `--pipeline` command line options.
+ ```erb
+ <%# ERB example in prompt file %>
+ Current time: <%= Time.now %>
+ Random number: <%= rand(100) %>

- These two options specify the sequence of prompt IDs to be processed. Both options are available to be used within a prompt file using the `//next` and `//pipeline` directives. Like all embedded directives you can take advantage of parameterization shell integration and Ruby. With this kind of dynamic content and control flow in your prompts you will start to feel like Tim the Tool man - more power!
+ <% if ENV['USER'] == 'admin' %>
+ You have admin privileges.
+ <% else %>
+ You have standard user privileges.
+ <% end %>

- Consider the condition in which you have 4 prompt IDs that need to be processed in sequence. The IDs and associated prompt file names are:
+ <%= AIA.config.model %> is the current model.
+ ```
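Under the hood this is standard ERB from Ruby's library. A minimal sketch of the rendering step follows; it is illustrative only — AIA performs this for you, and helpers such as `AIA.config` are only available inside AIA itself:

```ruby
require "erb"

# ERB tags in the prompt text are evaluated as Ruby before the
# finished prompt is sent to the model.
template = "Largest sample value: <%= [3, 41, 7].max %>, user: <%= ENV.fetch('USER', 'unknown') %>"

puts ERB.new(template).result(binding)
```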


- | Prompt ID | Prompt File |
- | --------- | ----------- |
- | one | one.txt |
- | two | two.txt |
- | three | three.txt |
- | four | four.txt |
+ ### Prompt Sequences

+ Chain multiple prompts for complex workflows:

- ### --next
+ #### Using --next

  ```bash
- aia one --next two --out_file temp.md
- aia three --next four temp.md -o answer.md
- ```
-
- or within each of the prompt files you use the `//next` directive:
+ # Command line
+ aia analyze --next summarize --next report

- ```bash
- one.txt contains //next two
- two.txt contains //next three
- three.txt contains //next four
+ # In prompt files
+ # analyze.txt contains: //next summarize
+ # summarize.txt contains: //next report
  ```
- BUT if you have more than two prompts in your sequence then consider using the --pipeline option.

- **The directive //next is short for //config next**
-
- ### --pipeline
+ #### Using --pipeline

  ```bash
- aia one --pipeline two,three,four
- ```
-
- or inside of the `one.txt` prompt file use this directive:
+ # Command line
+ aia research --pipeline analyze,summarize,report,present

- ```bash
- //pipeline two,three,four
+ # In prompt file
+ //pipeline analyze,summarize,report,present
  ```
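Conceptually, the `//pipeline` directive is just a comma-separated list of prompt IDs. The toy parser below (hypothetical; AIA and the prompt_manager gem handle this internally) makes the accepted format explicit:

```ruby
# Toy parser for a //pipeline directive line; whitespace around
# prompt IDs is ignored, and non-directive lines yield an empty list.
def parse_pipeline(line)
  match = line.match(%r{\A//pipeline\s+(.+)\z})
  return [] unless match
  match[1].split(",").map(&:strip)
end

p parse_pipeline("//pipeline analyze,summarize,report,present")
# => ["analyze", "summarize", "report", "present"]
```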

- **The directive //pipeline is short for //config pipeline**
-
- ### Best Practices ??
-
+ #### Example Workflow

- Since the response of one prompt is fed into the next prompt within the sequence instead of having all prompts write their response to the same out file, use these directives inside the associated prompt files:
+ **research.txt:**
+ ```
+ //config model = gpt-4
+ //next analyze

+ Research the topic: [RESEARCH_TOPIC]
+ Provide comprehensive background information.
+ ```

- | Prompt File | Directive |
- | ----------- | --------- |
- | one.txt | //config out_file one.md |
- | two.txt | //config out_file two.md |
- | three.txt | //config out_file three.md |
- | four.txt | //config out_file four.md |
+ **analyze.txt:**
+ ```
+ //config out_file = analysis.md
+ //next summarize

- This way you can see the response that was generated for each prompt in the sequence.
+ Analyze the research data and identify key insights.
+ ```

- ### Example pipeline
+ **summarize.txt:**
+ ```
+ //config out_file = summary.md

- TODO: the audio-to-text is still under development.
+ Create a concise summary of the analysis with actionable recommendations.
+ ```


- Suppose you have an audio file of a meeting. You what to get a transcription of what was said in that meeting. Sometimes raw transcriptions hide the real value of the recording so you have crafted a pompt that takes the raw transcriptions and does a technical summary with a list of action items.
+ ### Roles and System Prompts

- Create two prompts named transcribe.txt and tech_summary.txt
+ Roles define the context and personality for AI responses:

  ```bash
- # transcribe.txt
- # Desc: takes one audio file
- # note that there is no "prompt" text only the directive
+ # Use a predefined role
+ aia --role expert analyze_code.rb

- //config model whisper-1
- //next tech_summary
+ # Roles are stored in ~/.prompts/roles/
+ # expert.txt might contain:
+ # "You are a senior software engineer with 15 years of experience..."
  ```

- and
+ **Creating Custom Roles:**

  ```bash
- # tech_summary.txt
-
- //config model gpt-4o-mini
- //config out_file meeting_summary.md
-
- Review the raw transcript of a technical meeting,
- summarize the discussion and
- note any action items that were generated.
-
- Format your response in markdown.
- ```
+ # Create a code reviewer role
+ cat > ~/.prompts/roles/code_reviewer.txt << EOF
+ You are an experienced code reviewer. Focus on:
+ - Code quality and best practices
+ - Security vulnerabilities
+ - Performance optimizations
+ - Maintainability issues

- Now you can do this:
-
-
- ```bash
- aia transcribe my_tech_meeting.m4a
+ Provide specific, actionable feedback.
+ EOF
  ```

- You summary of the meeting is in the file `meeting_summary.md`
-
442
+ ### RubyLLM::Tool Support

- ## All About ROLES
+ AIA supports function calling through RubyLLM tools for extended capabilities:

- ### The --roles_prefix (AIA_ROLES_PREFIX)
-
- The second kind of prompt is called a role (aka system prompt). Sometimes the role is incorporated into the instruction. For example, "As a magician make a rabbit appear out of a hat." To reuse the same role in multiple prompts, AIA encourages you to designate a special subdirectory for prompts that are specific to personification - roles.
+ ```bash
+ # Load tools from directory
+ aia --tools ~/my-tools/ --chat

- The default `roles_prefix` is set to 'roles'. This creates a subdirectory under the `prompts_dir` where role files are stored. Internally, AIA calculates a `roles_dir` value by joining `prompts_dir` and `roles_prefix`. It is recommended to keep the roles organized this way for better organization and management.
+ # Load specific tool files
+ aia --tools weather.rb,calculator.rb --chat

- ### The --role Option
+ # Filter tools
+ aia --tools ~/tools/ --allowed_tools weather,calc
+ aia --tools ~/tools/ --rejected_tools deprecated
+ ```

- The `--role` option is used to identify a personification prompt within your roles directory which defines the context within which the LLM is to provide its response. The text of the role ID is pre-pended to the text of the primary prompt to form a complete prompt to be processed by the LLM.
+ **Tool Examples** (see `examples/tools/` directory):
+ - File operations (read, write, list)
+ - Shell command execution
+ - API integrations
+ - Data processing utilities
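A custom tool is a small Ruby class following the `RubyLLM::Tool` shape. In the sketch below the stub module exists only so the snippet runs standalone without the ruby_llm gem (normally you would `require "ruby_llm"` instead), and `RandomNumber` is a made-up example tool, not part of AIA:

```ruby
# Stand-in for RubyLLM::Tool so this sketch runs without the ruby_llm
# gem; with the gem installed, `require "ruby_llm"` provides the real class.
module RubyLLM
  class Tool
    def self.description(text = nil)
      @description = text if text
      @description
    end

    def self.param(name, **opts)
      (@params ||= {})[name] = opts
    end
  end
end unless defined?(RubyLLM::Tool)

# Hypothetical tool: lets the LLM request a number within a range.
class RandomNumber < RubyLLM::Tool
  description "Returns a random integer between min and max"
  param :min, desc: "Lower bound (inclusive)"
  param :max, desc: "Upper bound (inclusive)"

  def execute(min:, max:)
    rand(min..max)
  end
end
```

Saved as `random_number.rb`, it could then be loaded with `aia --tools random_number.rb --chat`.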
508
463
 
509
- For example consider:
464
+ **Shared Tools Collection:**
465
+ AIA includes the [shared_tools gem](https://github.com/madbomber/shared_tools) which provides a curated collection of commonly-used RubyLLM::Tool implementations:
510
466
 
511
467
  ```bash
512
- aia -r ruby refactor my_class.rb
513
- ```
468
+ # Access shared tools automatically (included with AIA)
469
+ aia --tools shared --chat
514
470
 
515
- The role ID is `ruby` the prompt ID is `refactor` and my_class.rb is a context file.
471
+ # Combine with custom tools
472
+ aia --tools shared,~/my-tools/ --chat
473
+ ```
516
474
 
517
- Within the roles directory the contents of the text file `ruby.txt` will be pre-pre-pended to the contents of the `refactor.txt` file from the prompts directory to produce a complete prompt. That complete prompt will have any parameters followed by directives processed before sending the combined prompt text and the content of the context file to the LLM.
475
+ Available shared tools include:
476
+ - **File operations**: read, write, edit, search files
477
+ - **Web utilities**: fetch web pages, parse HTML/JSON
478
+ - **System tools**: execute commands, get system info
479
+ - **Data processing**: CSV/JSON manipulation, text analysis
480
+ - **API helpers**: HTTP requests, common API patterns
518
481
 
519
- Note that `--role` is just a way of saying add this prompt text file to the front of this other prompt text file. The contents of the "role" prompt could be anything. It does not necessarily have be an actual role.
482
+ ## Examples & Tips
520
483
 
521
- AIA fully supports a directory tree within the `prompts_dir` as a way of organization or classification of your different prompt text files.
484
+ ### Practical Examples
522
485
 
486
+ #### Code Review Prompt
523
487
  ```bash
524
- aia -r ruby sw_eng/doc_the_methods my_class.rb
525
- ```
526
-
527
- In this example the prompt text file `$AIA_ROLES_PREFIX/ruby.txt` is prepended to the prompt text file `$AIA_PROMPTS_DIR/sw_eng/doc_the_methods.txt`
488
+ # ~/.prompts/code_review.txt
489
+ //config model = gpt-4
490
+ //config temperature = 0.3
528
491
 
529
- ### Other Ways to Insert Roles into Prompts
492
+ Review this code for:
493
+ - Best practices adherence
494
+ - Security vulnerabilities
495
+ - Performance issues
496
+ - Maintainability concerns
530
497
 
531
- Since AIA supports parameterized prompts you could make a keyword like "[ROLE]" be part of your prompt. For example consider this prompt:
532
-
533
- ```text
534
- As a [ROLE] tell me what you think about [SUBJECT]
498
+ Code to review:
499
+ //include [CODE_FILE]
535
500
  ```
536
501
 
537
- When this prompt is processed, AIA will ask you for a value for the keyword "[ROLE]" and the keyword "[SUBJECT]" to complete the prompt. Since AIA maintains a history of your previous answers, you could just choose something that you used in the past or answer with a completely new value.
538
-
539
- ## External CLI Tools Used
540
-
541
- To install the external CLI programs used by AIA:
542
-
543
- brew install fzf
544
-
545
- fzf
546
- Command-line fuzzy finder written in Go
547
- [https://github.com/junegunn/fzf](https://github.com/junegunn/fzf)
502
+ Usage: `aia code_review mycode.rb`
548
503
 
504
+ #### Meeting Notes Processor
505
+ ```bash
506
+ # ~/.prompts/meeting_notes.txt
507
+ //config model = gpt-4o-mini
508
+ //pipeline format,action_items
549
509
 
550
- ## Shell Completion
510
+ Raw meeting notes:
511
+ //include [NOTES_FILE]
551
512
 
552
- You can setup a completion function in your shell that will complete on the prompt_id saved in your `prompts_dir` - functions for `bash`, `fish` and `zsh` are available. To get a copy of these functions do:
513
+ Please clean up and structure these meeting notes.
514
+ ```
553
515
 
516
+ #### Documentation Generator
554
517
  ```bash
555
- aia --completion bash
556
- ```
518
+ # ~/.prompts/document.txt
519
+ //config model = gpt-4
520
+ //shell find [PROJECT_DIR] -name "*.rb" | head -10
557
521
 
558
- If you're not a fan of "born again" replace `bash` with one of the others.
522
+ Generate documentation for the Ruby project shown above.
523
+ Include: API references, usage examples, and setup instructions.
524
+ ```
559
525
 
560
- Copy the function to a place where it can be installed in your shell's instance. This might be a `.profile` or `.bashrc` file, etc.
526
+ ### Executable Prompts
561
527
 
562
- ## My Most Powerful Prompt
528
+ Create reusable executable prompts:
563
529
 
564
- This is just between you and me so don't go blabbing this around to everyone. My most power prompt is in a file named `ad_hoc.txt`. It looks like this:
530
+ **weather_report** (make executable with `chmod +x`):
531
+ ```bash
532
+ #!/usr/bin/env aia run --no-out_file
533
+ # Get current weather for a city
565
534
 
566
- ```text
567
- [WHAT_NOW_HUMAN]
535
+ What's the current weather in [CITY]?
536
+ Include temperature, conditions, and 3-day forecast.
537
+ Format as a brief, readable summary.
568
538
  ```
569
539
 
570
- Yep. Just a single parameter for which I can provide a value of anything that is on my mind at the time. Its advantage is that I do not pollute my shell's command history with lots of text.
571
-
540
+ Usage:
572
541
  ```bash
573
- aia ad_hoc
542
+ ./weather_report
543
+ # Prompts for city, outputs to stdout
544
+
545
+ ./weather_report | glow # Render with glow
574
546
  ```
575
547
 
576
- Or consider this executable prompt file:
548
+ ### Tips from the Author
577
549
 
550
+ **Most Versatile Prompt:**
578
551
  ```bash
579
- #!/usr/bin/env aia run
552
+ # ~/.prompts/ad_hoc.txt
580
553
  [WHAT_NOW_HUMAN]
581
554
  ```
555
+ Usage: `aia ad_hoc` - perfect for any quick question without cluttering shell history.
582
556
 
583
- Where the `run` prompt ID has a `run.txt` file in the prompt directory that is basically empty. Or maybe `run.txt` has some prompt instructions for how to run the prompt - some kind of meta-thinking instructions.
584
-
585
- ## My Configuration
586
-
587
- I use the `bash` shell. In my `.bashrc` file I source another file named `.bashrc__aia` which looks like this:
588
-
557
+ **Recommended Shell Setup:**
589
558
  ```bash
590
- # ~/.bashic_aia
591
- # AI Assistant
592
-
593
- # These are the defaults:
559
+ # ~/.bashrc_aia
594
560
  export AIA_PROMPTS_DIR=~/.prompts
595
561
  export AIA_OUT_FILE=./temp.md
596
- export AIA_LOG_FILE=$AIA_PROMPTS_DIR/_prompts.log
597
562
  export AIA_MODEL=gpt-4o-mini
598
-
599
- # Not a default. Invokes spinner. If not true then there is no spinner
600
- # for feedback while waiting for the LLM to respond.
601
- export AIA_VERBOSE=true
563
+ export AIA_VERBOSE=true # Shows spinner while waiting
602
564
 
603
565
  alias chat='aia --chat --terse'
604
-
605
- # rest of the file is the completion function
566
+ alias ask='aia ad_hoc'
606
567
  ```
607
568
 
569
+ **Prompt Organization:**
570
+ ```
571
+ ~/.prompts/
572
+ ├── daily/ # Daily workflow prompts
573
+ ├── development/ # Coding and review prompts
574
+ ├── research/ # Research and analysis
575
+ ├── roles/ # System prompts
576
+ └── workflows/ # Multi-step pipelines
577
+ ```
608
578
 
579
+ ## Security Considerations
609
580
 
610
- ## Executable Prompts
581
+ ### Shell Command Execution
611
582
 
612
- With all of the capabilities of the AI Assistant, you can create your own executable prompts. These prompts can be used to automate tasks, generate content, or perform any other action that you can think of. All you need to get started with executable prompts is a prompt that does not do anything. For example consider my `run.txt` prompt.
583
+ **⚠️ Important Security Warning**
613
584
 
614
- ```bash
615
- # ~/.prompts/run.txt
616
- # Desc: Run executable prompts coming in via STDIN
617
- ```
585
+ AIA executes shell commands and Ruby code embedded in prompts. This provides powerful functionality but requires caution:
618
586
 
619
- Remember that the '#' character indicates a comment line making the `run` prompt ID basically a do nothing prompt.
587
+ - **Review prompts before execution**, especially from untrusted sources
588
+ - **Avoid storing sensitive data** in prompts (API keys, passwords)
589
+ - **Use parameterized prompts** instead of hardcoding sensitive values
590
+ - **Limit file permissions** on prompt directories if sharing systems
620
591
 
621
- An executable prompt can reside anywhere either in your $PATH or not. That is your choice. It must however be executable. Consider the following `top10` executable prompt:
592
+ ### Safe Practices
622
593
 
623
594
  ```bash
624
- #!/usr/bin/env aia run --no-out_file
625
- # File: top10
626
- # Desc: The tope 10 cities by population
595
+ # Good: Use parameters for sensitive data
596
+ //config api_key = [API_KEY]
627
597
 
628
- what are the top 10 cities by population in the USA. Summarize what people
629
- like about living in each city. Include an average cost of living. Include
630
- links to the Wikipedia pages. Format your response as a markdown document.
631
- ```
598
+ # Bad: Hardcode secrets
599
+ //config api_key = sk-1234567890abcdef
632
600
 
633
- Make sure that it is executable.
601
+ # Good: Validate shell commands
602
+ //shell ls -la /safe/directory
634
603
 
635
- ```bash
636
- chmod +x top10
604
+ # ❌ Bad: Dangerous shell commands
605
+ //shell rm -rf / # Never do this!
637
606
  ```
638
607
 
639
- The magic is in the first line of the prompt. It is a shebang line that tells the system how to execute the prompt. In this case it is telling the system to use the `aia` command line tool to execute the `run` prompt. The `--no-out_file` option tells the AIA command line tool not to write the output of the prompt to a file. Instead it will write the output to STDOUT. The remaining content of this `top10` prompt is send via STDIN to the LLM.
640
-
641
- Now just execute it like any other command in your terminal.
608
+ ### Recommended Security Setup
642
609
 
643
610
  ```bash
644
- ./top10
645
- ```
646
-
647
- Since its output is going to STDOUT you can setup a pipe chain. Using the CLI program `glow` to render markdown in the terminal
648
- (brew install glow)
611
+ # Set restrictive permissions on prompts directory
612
+ chmod 700 ~/.prompts
613
+ chmod 600 ~/.prompts/*.txt
649
614
 
650
- ```bash
651
- ./top10 | glow
615
+ # Use separate prompts directory for shared/untrusted prompts
616
+ export AIA_PROMPTS_DIR_SHARED=~/shared-prompts
617
+ chmod 755 ~/shared-prompts
652
618
  ```
653
619
 
654
- This executable prompt concept sets up the building blocks of a *nix CLI-based pipeline in the same way that the --pipeline and --next options and directives are used.
655
-
656
- ## Usage
620
+ ## Troubleshooting
657
621
 
658
- The usage report is obtained with either `-h` or `--help` options.
622
+ ### Common Issues
659
623
 
624
+ **Prompt not found:**
660
625
  ```bash
661
- aia --help
662
- ```
626
+ # Check prompts directory
627
+ ls $AIA_PROMPTS_DIR
663
628
 
664
- Key command-line options include:
629
+ # Verify prompt file exists
630
+ ls ~/.prompts/my_prompt.txt
665
631
 
666
- - `--adapter ADAPTER`: Choose the LLM interface adapter to use. Valid options are 'ruby_llm' (default) or something else in the future. See [RubyLLM Integration Guide](README_RUBY_LLM.md) for details.
667
- - `--model MODEL`: Specify which LLM model to use
668
- - `--chat`: Start an interactive chat session
669
- - `--role ROLE`: Specify a role/system prompt
670
- - And many more (use --help to see all options)
632
+ # Use fuzzy search
633
+ aia --fuzzy
634
+ ```
671
635
 
672
- **Note:** ERB and shell processing are now standard features and always enabled. This allows you to use embedded Ruby code and shell commands in your prompts without needing to specify any additional options.
636
+ **Model errors:**
637
+ ```bash
638
+ # List available models
639
+ aia --available_models
673
640
 
674
- ## Development
641
+ # Check model name spelling
642
+ aia --model gpt-4o # Correct
643
+ aia --model gpt4 # Incorrect
644
+ ```
675
645
 
676
- **ShellCommandExecutor Refactor:**
677
- The `ShellCommandExecutor` is now a class (previously a module). It stores the config object as an instance variable and provides cleaner encapsulation. For backward compatibility, class-level methods are available and delegate to instance methods internally.
646
+ **Shell integration not working:**
647
+ ```bash
648
+ # Verify shell patterns
649
+ echo "Test: $(date)" # Should show current date
650
+ echo "Home: $HOME" # Should show home directory
651
+ ```
678
652
 
679
- **Prompt Variable Fallback:**
680
- When processing a prompt file without a `.json` history file, variables are always parsed from the prompt text so you are prompted for values as needed.
653
+ **Configuration issues:**
654
+ ```bash
655
+ # Check current configuration
656
+ aia --config
681
657
 
682
- ## Contributing
658
+ # Debug configuration loading
659
+ aia --debug --config
660
+ ```
683
661
 
684
- Bug reports and pull requests are welcome on GitHub at https://github.com/MadBomber/aia.
662
+ ### Error Messages
685
663
 
686
- When you find problems with AIA please note them as an issue. This thing was written mostly by a human and you know how error prone humans are. There should be plenty of errors to find.
664
+ | Error | Cause | Solution |
665
+ |-------|-------|----------|
666
+ | "Prompt not found" | Missing prompt file | Check file exists and spelling |
667
+ | "Model not available" | Invalid model name | Use `--available_models` to list valid models |
668
+ | "Shell command failed" | Invalid shell syntax | Test shell commands separately first |
669
+ | "Configuration error" | Invalid config syntax | Check config file YAML syntax |
687
670
 
688
- I'm not happy with the way where some command line options for external command are hard coded. I'm specific talking about the way in which the `rg` and `fzf` tools are used. Their options decide the basic look and feel of the search capability on the command line. Maybe they should be part of the overall configuration so that users can tune their UI to the way they like.
671
+ ### Debug Mode
689
672
 
690
- ## Roadmap
673
+ Enable debug output for troubleshooting:
691
674
 
692
- - restore the prompt text file search. currently fzf only looks a prompt IDs.
693
- - continue integration of the ruby_llm gem
694
- - support for Model Context Protocol
675
+ ```bash
676
+ # Enable debug mode
677
+ aia --debug my_prompt
695
678
 
696
- ## RubyLLM::Tool Support
679
+ # Combine with verbose for maximum output
680
+ aia --debug --verbose my_prompt
681
+ ```
697
682
 
698
- AIA supports function calling capabilities through the `RubyLLM::Tool` framework, enabling LLMs to execute custom functions during a chat session.
683
+ ### Performance Issues
699
684
 
700
- ### What Are RubyLLM Tools?
685
+ **Slow model responses:**
686
+ - Try smaller/faster models: `--model gpt-4o-mini`
687
+ - Reduce max_tokens: `--max_tokens 1000`
688
+ - Use lower temperature for faster responses: `--temperature 0.1`
701
689
 
702
- Tools (or functions) allow LLMs to perform actions beyond generating text, such as:
690
+ **Large prompt processing:**
691
+ - Break into smaller prompts using `--pipeline`
692
+ - Use `//include` selectively instead of large files
693
+ - Consider model context limits
703
694
 
704
- - Retrieving real-time information
705
- - Executing system commands
706
- - Accessing external APIs
707
- - Performing calculations
695
+ ## Development
708
696
 
709
- Check out the [examples/tools](examples/tools) directory which contains several ready-to-use tool implementations you can use as references.
697
+ ### Testing
710
698
 
711
- ### How to Use Tools
699
+ ```bash
700
+ # Run unit tests
701
+ rake test
712
702
 
713
- AIA provides three CLI options to manage function calling:
703
+ # Run integration tests
704
+ rake integration
714
705
 
715
- #### `--tools` Option
706
+ # Run all tests with coverage
707
+ rake all_tests
708
+ open coverage/index.html
709
+ ```
716
710
 
717
- Specifies where to find tool implementations:
711
+ ### Building
718
712
 
719
713
  ```bash
720
- # Load tools from multiple sources
721
- --tools /path/to/tools/directory,other/tools/dir,my_tool.rb
722
-
723
- # Or use multiple --tools flags
724
- --tools my_first_tool.rb --tools /tool_repo/tools
725
- ```
714
+ # Install locally with documentation
715
+ just install
726
716
 
727
- Each path can be:
717
+ # Generate documentation
718
+ just gen_doc
728
719
 
729
- - A Ruby file implementing a `RubyLLM::Tool` subclass
730
- - A directory containing tool implementations (all Ruby files in that directory will be loaded)
731
-
732
- Supporting files for tools can be placed in the same directory or subdirectories.
720
+ # Static code analysis
721
+ just flay
722
+ ```
733
723
 
734
- ### Filtering the tool paths
+ ### Architecture Notes

- The --tools option must have exact relative or absolute paths to the tool files to be used by AIA for function callbacks. If you are specifying directories you may find yourself needing filter the entire set of tools to either allow some or reject others based upon some indicator in their file name. The following two options allow you to specify multiple sub-strings to match the tolls paths against. For example you might be comparing one version of a tool against another. Their filenames could have version prefixes like tool_v1.rb and tool_v2.rb Using the allowed and rejected filters you can choose one of the other when using an entire directory full of tools.
+ **ShellCommandExecutor Refactor:**
+ The `ShellCommandExecutor` is now a class (previously a module) with instance variables for cleaner encapsulation. Class-level methods remain for backward compatibility.
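The delegation pattern reads roughly like this simplified sketch (the method names besides the class name are illustrative, not AIA's exact API):

```ruby
# Simplified sketch of the module-to-class refactor: an instance owns
# the config, while a class-level wrapper preserves the old call style.
class ShellCommandExecutor
  def initialize(config = {})
    @config = config
  end

  def execute_command(command)
    `#{command}`.strip
  end

  # Backward-compatible class-level entry point delegating to an instance
  def self.execute_command(command, config = {})
    new(config).execute_command(command)
  end
end

puts ShellCommandExecutor.execute_command("echo hello")
```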

- #### `--at`, `--allowed_tools` Option
+ **Prompt Variable Fallback:**
+ Variables are always parsed from prompt text when no `.json` history file exists, ensuring parameter prompting works correctly.
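The fallback amounts to scanning the prompt text for keywords. A minimal sketch, assuming the default square-bracket `[KEYWORD]` syntax (the real work is done by the prompt_manager gem):

```ruby
# Collect unique [KEYWORD] parameters from prompt text so the user can
# be asked for values (illustrative only).
PARAM_PATTERN = /\[([A-Z][A-Z0-9_]*)\]/

def extract_parameters(prompt_text)
  prompt_text.scan(PARAM_PATTERN).flatten.uniq
end

p extract_parameters("As a [ROLE] tell me about [SUBJECT], [ROLE].")
# => ["ROLE", "SUBJECT"]
```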
739
731
 
740
- Filters which tools to make available when loading from directories:
732
+ ## Contributing
741
733
 
742
- ```bash
743
- # Only allow tools with 'test' in their filename
744
- --tools my_tools_directory --allowed_tools test
745
- ```
734
+ Bug reports and pull requests are welcome on GitHub at https://github.com/MadBomber/aia.
746
735
 
747
- This is useful when you have many tools but only want to use specific ones in a session.
736
+ ### Reporting Issues
748
737
 
749
- #### `--rt`, `--rejected_tools` Option
738
+ When reporting issues, please include:
739
+ - AIA version: `aia --version`
740
+ - Ruby version: `ruby --version`
741
+ - Operating system
742
+ - Minimal reproduction example
743
+ - Error messages and debug output
750
744
 
751
- Excludes specific tools:
745
+ ### Development Setup
752
746
 
753
747
  ```bash
754
- # Exclude tools with '_v1' in their filename
755
- --tools my_tools_directory --rejected_tools _v1
748
+ git clone https://github.com/MadBomber/aia.git
749
+ cd aia
750
+ bundle install
751
+ rake test
756
752
  ```
757
753
 
758
- Ideal for excluding older versions or temporarily disabling specific tools.
759
-
760
- ### Creating Your Own Tools
761
-
762
- To create a custom tool:
754
+ ### Areas for Improvement
763
755
 
764
- 1. Create a Ruby file that subclasses `RubyLLM::Tool`
765
- 2. Define the tool's parameters and functionality
766
- 3. Use the `--tools` option to load it in your AIA session
756
+ - Configuration UI for complex setups
757
+ - Better error handling and user feedback
758
+ - Performance optimization for large prompt libraries
759
+ - Enhanced security controls for shell integration
767
760
 
768
- For implementation details, refer to the [examples in the repository](examples/tools) or the RubyLLM documentation.
769
-
770
- ## MCP Supported
761
+ ## Roadmap
771
762
 
772
- Abandon all hope of seeing an MCP client added to AIA. Maybe sometime in the future there will be a new gem "ruby_llm-mcp" that implements an MCP client as a native RubyLLM::Tool subclass. If that every happens you would use it the same way you use any other RubyLLM::Tool subclass which AIA now supports.
763
+ - **Enhanced Search**: Restore full-text search within prompt files
764
+ - **Model Context Protocol**: Continue integration with ruby_llm gem
765
+ - **UI Improvements**: Better configuration management for fzf and rg tools
766
+ - **Performance**: Optimize prompt loading and processing
767
+ - **Security**: Enhanced sandboxing for shell command execution
773
768
 
774
769
  ## License
775
770