aia 0.9.5 → 0.9.7

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
data/README.md CHANGED
@@ -1,775 +1,815 @@
1
- # AI Assistant (AIA)
1
+ <div align="center">
2
+ <h1>AI Assistant (AIA)</h1>
3
+ <img src="images/aia.png" alt="Robots waiter ready to take your order."><br />
4
+ **The Prompt is the Code**
5
+ </div>
2
6
 
3
- **The prompt is the code!**
7
+ AIA is a command-line utility that facilitates interaction with AI models through dynamic prompt management. It automates the management of pre-compositional prompts and executes generative AI commands with enhanced features including embedded directives, shell integration, embedded Ruby, history management, interactive chat, and prompt workflows.
4
8
 
5
- ```plain
6
- , , AIA is a command-line utility that facilitates
7
- (\____/) AI Assistant interaction with AI models. It automates the
8
- (_oo_) Fancy LLM management of pre-compositional prompts and
9
- (O) is Online executes generative AI (Gen-AI) commands on those
10
- __||__ \) prompts. AIA includes enhanced features such as
11
- [/______\] / * embedded directives * shell integration
12
- / \__AI__/ \/ * embedded Ruby * history management
13
- / /__\ * interactive chat * prompt workflows
14
- (\ /____\ # supports RubyLLM::Tool integration
15
- ```
16
-
17
- AIA leverages the [prompt_manager gem](https://github.com/madbomber/prompt_manager) to manage prompts. It utilizes the [CLI tool fzf](https://github.com/junegunn/fzf) for prompt selection.
9
+ AIA leverages the [prompt_manager gem](https://github.com/madbomber/prompt_manager) to manage prompts, utilizes the [CLI tool fzf](https://github.com/junegunn/fzf) for prompt selection, and can use the [shared_tools gem](https://github.com/madbomber/shared_tools), which provides a collection of ready-to-use functions for LLMs that support tools.
18
10
 
19
11
  **Wiki**: [Checkout the AIA Wiki](https://github.com/MadBomber/aia/wiki)
20
12
 
21
- **RubyLLM::Tool Support:** AIA now supports the integration of Tools for those models that support function callbacks. See the --tools, --allowed_tools and --rejected_tools options. Yes, function callbacks provide dynamic content for prompts just like the AIA directives, shell and ERB integrations, so why have both? Well, AIA is older than function callbacks. The AIA integrations are legacy but more than that not all models support function callbacks. That means the AIA integrations are still viable ways to provide dynamic extra content to your prompts.
22
-
23
- <!-- Tocer[start]: Auto-generated, don't remove. -->
24
-
25
- ## Table of Contents
26
-
27
- - [Configuration Options](#configuration-options)
28
- - [Configuration Flexibility](#configuration-flexibility)
29
- - [Expandable Configuration](#expandable-configuration)
30
- - [The Local Model Registry Refresh](#the-local-model-registry-refresh)
31
- - [Important Note](#important-note)
32
- - [Shell Integration inside of a Prompt](#shell-integration-inside-of-a-prompt)
33
- - [Dynamic Shell Commands](#dynamic-shell-commands)
34
- - [Shell Command Safety](#shell-command-safety)
35
- - [Chat Session Use](#chat-session-use)
36
- - [Embedded Ruby (ERB)](#embedded-ruby-erb)
37
- - [Prompt Directives](#prompt-directives)
38
- - [Parameter and Shell Substitution in Directives](#parameter-and-shell-substitution-in-directives)
39
- - [Directive Syntax](#directive-syntax)
40
- - [AIA Specific Directive Commands](#aia-specific-directive-commands)
41
- - [//config](#config)
42
- - [//include](#include)
43
- - [//ruby](#ruby)
44
- - [//shell](#shell)
45
- - [//next](#next)
46
- - [//pipeline](#pipeline)
47
- - [Using Directives in Chat Sessions](#using-directives-in-chat-sessions)
48
- - [Prompt Sequences](#prompt-sequences)
49
- - [--next](#--next)
50
- - [--pipeline](#--pipeline)
51
- - [Best Practices ??](#best-practices-)
52
- - [Example pipeline](#example-pipeline)
53
- - [All About ROLES](#all-about-roles)
54
- - [The --roles_prefix (AIA_ROLES_PREFIX)](#the---roles_prefix-aia_roles_prefix)
55
- - [The --role Option](#the---role-option)
56
- - [Other Ways to Insert Roles into Prompts](#other-ways-to-insert-roles-into-prompts)
57
- - [External CLI Tools Used](#external-cli-tools-used)
58
- - [Shell Completion](#shell-completion)
59
- - [My Most Powerful Prompt](#my-most-powerful-prompt)
60
- - [My Configuration](#my-configuration)
61
- - [Executable Prompts](#executable-prompts)
62
- - [Usage](#usage)
63
- - [Development](#development)
64
- - [Contributing](#contributing)
65
- - [Roadmap](#roadmap)
66
- - [RubyLLM::Tool Support](#rubyllmtool-support)
67
- - [What Are RubyLLM Tools?](#what-are-rubyllm-tools)
68
- - [How to Use Tools](#how-to-use-tools)
69
- - [`--tools` Option](#--tools-option)
70
- - [Filtering the tool paths](#filtering-the-tool-paths)
71
- - [`--at`, `--allowed_tools` Option](#--at---allowed_tools-option)
72
- - [`--rt`, `--rejected_tools` Option](#--rt---rejected_tools-option)
73
- - [Creating Your Own Tools](#creating-your-own-tools)
74
- - [MCP Supported](#mcp-supported)
75
- - [License](#license)
13
+ ## Quick Start
76
14
 
77
- <!-- Tocer[finish]: Auto-generated, don't remove. -->
15
+ 1. **Install AIA:**
16
+ ```bash
17
+ gem install aia
18
+ ```
78
19
 
79
- ## Configuration Options
80
-
81
- The following table provides a comprehensive list of configuration options, their default values, and the associated environment variables:
82
-
83
- | Config Item Name | CLI Options | Default Value | Environment Variable |
84
- |----------------------|-------------|-----------------------------|---------------------------|
85
- | adapter | --adapter | ruby_llm | AIA_ADAPTER |
86
- | aia_dir | | ~/.aia | AIA_DIR |
87
- | append | -a, --append | false | AIA_APPEND |
88
- | chat | --chat | false | AIA_CHAT |
89
- | clear | --clear | false | AIA_CLEAR |
90
- | config_file | -c, --config_file | ~/.aia/config.yml | AIA_CONFIG_FILE |
91
- | debug | -d, --debug | false | AIA_DEBUG |
92
- | embedding_model | --em, --embedding_model | text-embedding-ada-002 | AIA_EMBEDDING_MODEL |
93
- | erb | | true | AIA_ERB |
94
- | frequency_penalty | --frequency_penalty | 0.0 | AIA_FREQUENCY_PENALTY |
95
- | fuzzy | -f, --fuzzy | false | AIA_FUZZY |
96
- | image_quality | --iq, --image_quality | standard | AIA_IMAGE_QUALITY |
97
- | image_size | --is, --image_size | 1024x1024 | AIA_IMAGE_SIZE |
98
- | image_style | --style, --image_style | vivid | AIA_IMAGE_STYLE |
99
- | log_file | -l, --log_file | ~/.prompts/_prompts.log | AIA_LOG_FILE |
100
- | markdown | --md, --markdown | true | AIA_MARKDOWN |
101
- | max_tokens | --max_tokens | 2048 | AIA_MAX_TOKENS |
102
- | model | -m, --model | gpt-4o-mini | AIA_MODEL |
103
- | next | -n, --next | nil | AIA_NEXT |
104
- | out_file | -o, --out_file | temp.md | AIA_OUT_FILE |
105
- | parameter_regex | --regex | '(?-mix:(\[[A-Z _|]+\]))' | AIA_PARAMETER_REGEX |
106
- | pipeline | --pipeline | [] | AIA_PIPELINE |
107
- | presence_penalty | --presence_penalty | 0.0 | AIA_PRESENCE_PENALTY |
108
- | prompt_extname | | .txt | AIA_PROMPT_EXTNAME |
109
- | prompts_dir | -p, --prompts_dir | ~/.prompts | AIA_PROMPTS_DIR |
110
- | refresh | --refresh | 7 (days) | AIA_REFRESH |
111
- | require_libs | --rq --require | [] | AIA_REQUIRE_LIBS |
112
- | role | -r, --role | | AIA_ROLE |
113
- | roles_dir | | ~/.prompts/roles | AIA_ROLES_DIR |
114
- | roles_prefix | --roles_prefix | roles | AIA_ROLES_PREFIX |
115
- | shell | | true | AIA_SHELL |
116
- | speak | --speak | false | AIA_SPEAK |
117
- | speak_command | | afplay | AIA_SPEAK_COMMAND |
118
- | speech_model | --sm, --speech_model | tts-1 | AIA_SPEECH_MODEL |
119
- | system_prompt | --system_prompt | | AIA_SYSTEM_PROMPT |
120
- | temperature | -t, --temperature | 0.7 | AIA_TEMPERATURE |
121
- | terse | --terse | false | AIA_TERSE |
122
- | tool_paths | --tools | [] | AIA_TOOL_PATHS |
123
- | allowed_tools | --at --allowed_tools | nil | AIA_ALLOWED_TOOLS |
124
- | rejected_tools | --rt --rejected_tools | nil | AIA_REJECTED_TOOLS |
125
- | top_p | --top_p | 1.0 | AIA_TOP_P |
126
- | transcription_model | --tm, --transcription_model | whisper-1 | AIA_TRANSCRIPTION_MODEL |
127
- | verbose | -v, --verbose | false | AIA_VERBOSE |
128
- | voice | --voice | alloy | AIA_VOICE |
20
+ 2. **Install dependencies:**
21
+ ```bash
22
+ brew install fzf
23
+ ```
129
24
 
130
- These options can be configured via command-line arguments, environment variables, or configuration files.
25
+ 3. **Create your first prompt:**
26
+ ```bash
27
+ mkdir -p ~/.prompts
28
+ echo "What is [TOPIC]?" > ~/.prompts/what_is.txt
29
+ ```
131
30
 
132
- ### Configuration Flexibility
31
+ 4. **Run your prompt:**
32
+ ```bash
33
+ aia what_is
34
+ ```
35
+ You'll be prompted to enter a value for `[TOPIC]`, then AIA will send your question to the AI model.
133
36
 
134
- AIA determines configuration settings using the following order of precedence:
37
+ 5. **Start an interactive chat:**
38
+ ```bash
39
+ aia --chat
40
+ ```
135
41
 
136
- 1. Embedded config directives
137
- 2. Command-line arguments
138
- 3. Environment variables
139
- 4. Configuration files
140
- 5. Default values
42
+ ```plain
141
43
 
142
- For example, let's consider the `model` option. Suppose the following conditions:
44
+ , ,
45
+ (\____/) AI Assistant (v0.9.7) is Online
46
+ (_oo_) gpt-4o-mini
47
+ (O) using ruby_llm (v1.3.1)
48
+ __||__ \) model db was last refreshed on
49
+ [/______\] / 2025-06-18
50
+ / \__AI__/ \/ You can share my tools
51
+ / /__\
52
+ (\ /____\
143
53
 
144
- - Default value is "gpt-4o-mini"
145
- - No entry in the config file
146
- - No environment variable value for `AIA_MODEL`
147
- - No command-line argument provided for `--model`
148
- - No embedded directive like `//config model = some-fancy-llm`
54
+ ```
149
55
 
150
- In this scenario, the model used will be "gpt-4o-mini". However, you can override this default by setting the model at any level of the precedence order. Additionally, you can dynamically ask for user input by incorporating an embedded directive with a placeholder parameter, such as `//config model = [PROCESS_WITH_MODEL]`. When processing the prompt, AIA will prompt you to input a value for `[PROCESS_WITH_MODEL]`.
56
+ <!-- Tocer[start]: Auto-generated, don't remove. -->
151
57
 
152
- If you do not like the default regex used to identify parameters within the prompt text, don't worry, there is a way to configure it using the `--regex` option.
58
+ ## Table of Contents
153
59
 
154
- ### Expandable Configuration
60
+ - [Installation & Prerequisites](#installation--prerequisites)
61
+ - [Requirements](#requirements)
62
+ - [Installation](#installation)
63
+ - [Setup Shell Completion](#setup-shell-completion)
64
+ - [Basic Usage](#basic-usage)
65
+ - [Command Line Interface](#command-line-interface)
66
+ - [Key Command-Line Options](#key-command-line-options)
67
+ - [Directory Structure](#directory-structure)
68
+ - [Configuration](#configuration)
69
+ - [Essential Configuration Options](#essential-configuration-options)
70
+ - [Configuration Precedence](#configuration-precedence)
71
+ - [Configuration Methods](#configuration-methods)
72
+ - [Complete Configuration Reference](#complete-configuration-reference)
73
+ - [Advanced Features](#advanced-features)
74
+ - [Prompt Directives](#prompt-directives)
75
+ - [Configuration Directive Examples](#configuration-directive-examples)
76
+ - [Dynamic Content Examples](#dynamic-content-examples)
77
+ - [Shell Integration](#shell-integration)
78
+ - [Embedded Ruby (ERB)](#embedded-ruby-erb)
79
+ - [Prompt Sequences](#prompt-sequences)
80
+ - [Using --next](#using---next)
81
+ - [Using --pipeline](#using---pipeline)
82
+ - [Example Workflow](#example-workflow)
83
+ - [Roles and System Prompts](#roles-and-system-prompts)
84
+ - [RubyLLM::Tool Support](#rubyllmtool-support)
85
+ - [Examples & Tips](#examples--tips)
86
+ - [Practical Examples](#practical-examples)
87
+ - [Code Review Prompt](#code-review-prompt)
88
+ - [Meeting Notes Processor](#meeting-notes-processor)
89
+ - [Documentation Generator](#documentation-generator)
90
+ - [Executable Prompts](#executable-prompts)
91
+ - [Tips from the Author](#tips-from-the-author)
92
+ - [The run Prompt](#the-run-prompt)
93
+ - [The Ad Hoc One-shot Prompt](#the-ad-hoc-one-shot-prompt)
94
+ - [Recommended Shell Setup](#recommended-shell-setup)
95
+ - [Prompt Directory Organization](#prompt-directory-organization)
96
+ - [Security Considerations](#security-considerations)
97
+ - [Shell Command Execution](#shell-command-execution)
98
+ - [Safe Practices](#safe-practices)
99
+ - [Recommended Security Setup](#recommended-security-setup)
100
+ - [Troubleshooting](#troubleshooting)
101
+ - [Common Issues](#common-issues)
102
+ - [Error Messages](#error-messages)
103
+ - [Debug Mode](#debug-mode)
104
+ - [Performance Issues](#performance-issues)
105
+ - [Development](#development)
106
+ - [Testing](#testing)
107
+ - [Building](#building)
108
+ - [Architecture Notes](#architecture-notes)
109
+ - [Contributing](#contributing)
110
+ - [Reporting Issues](#reporting-issues)
111
+ - [Development Setup](#development-setup)
112
+ - [Areas for Improvement](#areas-for-improvement)
113
+ - [Roadmap](#roadmap)
114
+ - [License](#license)
155
115
 
156
- The configuration options are expandable through a config file, allowing you to add custom entries. For example, you can define a custom configuration item like "xyzzy" in your config file. This value can then be accessed in your prompts using `AIA.config.xyzzy` within a `//ruby` directive or an ERB block, enabling dynamic prompt generation based on your custom configurations.
116
+ <!-- Tocer[finish]: Auto-generated, don't remove. -->
157
117
 
158
- ## The Local Model Registry Refresh
118
+ ## Installation & Prerequisites
159
119
 
160
- The `ruby_llm` gem maintains a registry of providers and models integrated with a new website that allows users to download the latest information about each model. This capability is scheduled for release in version 1.3.0 of the gem.
120
+ ### Requirements
161
121
 
162
- In anticipation of this new feature, the AIA tool has introduced the `--refresh` option, which specifies the number of days between updates to the centralized model registry. Here’s how the `--refresh` option works:
122
+ - **Ruby**: >= 3.2.0
123
+ - **External Tools**:
124
+ - [fzf](https://github.com/junegunn/fzf) - Command-line fuzzy finder
163
125
 
164
- - A value of `0` (zero) updates the local model registry every time AIA is executed.
165
- - A value of `1` (one) updates the local model registry once per day.
166
- - etc.
126
+ ### Installation
167
127
 
168
- The date of the last successful refresh is stored in the configuration file under the key `last_refresh`. The default configuration file is located at `~/.aia/config.yml`. When a refresh is successful, the `last_refresh` value is updated to the current date, and the updated configuration is saved in `AIA.config.config_file`.
128
+ ```bash
129
+ # Install AIA gem
130
+ gem install aia
169
131
 
170
- ### Important Note
132
+ # Install required external tools (macOS)
133
+ brew install fzf
171
134
 
172
- This approach to saving the `last_refresh` date can become cumbersome, particularly if you maintain multiple configuration files for different projects. The `last_refresh` date is only updated in the currently active configuration file. If you switch to a different project with a different configuration file, you may inadvertently hit the central model registry again, even if your local registry is already up to date.
135
+ # Install required external tools (Linux)
136
+ # Ubuntu/Debian
137
+ sudo apt install fzf
173
138
 
174
- ## Shell Integration inside of a Prompt
139
+ # Arch Linux
140
+ sudo pacman -S fzf
141
+ ```
175
142
 
176
- AIA configures the `prompt_manager` gem to be fully integrated with your local shell by default. This is not an option - it's a feature. If your prompt includes text patterns like $HOME, ${HOME} or $(command) those patterns will be automatically replaced in the prompt text by the shell's value for those patterns.
143
+ ### Setup Shell Completion
177
144
 
178
- #### Dynamic Shell Commands
145
+ Get completion functions for your shell:
179
146
 
180
- Dynamic content can be inserted into the prompt using the pattern $(shell command) where the output of the shell command will replace the $(...) pattern. It will become part of the context / instructions for the prompt.
147
+ ```bash
148
+ # For bash users
149
+ aia --completion bash >> ~/.bashrc
181
150
 
182
- Consider the power of tailoring a prompt to your specific operating system:
151
+ # For zsh users
152
+ aia --completion zsh >> ~/.zshrc
183
153
 
184
- ```plaintext
185
- As a system administrator on a $(uname -v) platform what is the best way to [DO_SOMETHING]
154
+ # For fish users
155
+ aia --completion fish >> ~/.config/fish/config.fish
186
156
  ```
187
157
 
188
- Or insert content from a file in your home directory:
158
+ ## Basic Usage
189
159
 
190
- ```plaintext
191
- Given the following constraints $(cat ~/3_laws_of_robotics.txt) determine the best way to instruct my roomba to clean my kids room.
192
- ```
160
+ ### Command Line Interface
161
+
162
+ ```bash
163
+ # Basic usage
164
+ aia [OPTIONS] PROMPT_ID [CONTEXT_FILES...]
193
165
 
194
- #### Shell Command Safety
166
+ # Interactive chat session
167
+ aia --chat [--role ROLE] [--model MODEL]
195
168
 
196
- The catchphrase "the prompt is the code" within AIA means that you have the power to execute any command you want, but you must be careful not to execute commands that could cause harm. AIA is not going to protect you from doing something dumb. Sure that's a copout. I just can't think (actually I can) of all the ways you can mess things up writing code. Remember what we learned from Forrest Gump "Stupid is as stupid does." So don't break the dumb law. If someone gives you a prompt and says "run this with AIA" you had better review the prompt before processing it.
169
+ # Use a specific model
170
+ aia --model gpt-4 my_prompt
197
171
 
198
- #### Chat Session Use
172
+ # Specify output file
173
+ aia --out_file result.md my_prompt
199
174
 
200
- Shell integration is available in your follow up prompts within a chat session. Suppose you started a chat session (--chat) using a role of "Ruby Expert" expecting to chat about changes that could be made to a specific class BUT you forgot to include the class source file as part of the context when you got started. You could enter this as your follow up prompt to keep going:
175
+ # Use a role/system prompt
176
+ aia --role expert my_prompt
201
177
 
202
- ```plaintext
203
- The class I want to chat about refactoring is this one: $(cat my_class.rb)
178
+ # Enable fuzzy search for prompts
179
+ aia --fuzzy
204
180
  ```
205
181
 
206
- That inserts the entire class source file into your follow up prompt. You can continue chatting with your AI Assistant about changes to the class.
182
+ ### Key Command-Line Options
207
183
 
208
- ## Embedded Ruby (ERB)
184
+ | Option | Description | Example |
185
+ |--------|-------------|---------|
186
+ | `--chat` | Start interactive chat session | `aia --chat` |
187
+ | `--model MODEL` | Specify AI model to use | `aia --model gpt-4` |
188
+ | `--role ROLE` | Use a role/system prompt | `aia --role expert` |
189
+ | `--out_file FILE` | Specify output file | `aia --out_file results.md` |
190
+ | `--fuzzy` | Use fuzzy search for prompts | `aia --fuzzy` |
191
+ | `--help` | Show complete help | `aia --help` |
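+ 
+ Options combine as you would expect. For example (assuming a `code_review` prompt exists in your prompts directory, like the one shown later in this document):
+ 
+ ```bash
+ # Role, model, and output file combined with a prompt ID and a context file
+ aia --role expert --model gpt-4 --out_file review.md code_review mycode.rb
+ ```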
209
192
 
210
- The inclusion of dynamic content through the shell integration is significant. AIA also provides the full power of embedded Ruby code processing within the prompt text.
193
+ ### Directory Structure
211
194
 
212
- AIA takes advantage of the `prompt_manager` gem to enable ERB integration in prompt text as a default. It's an always available feature of AIA prompts. The [Embedded Ruby (ERB) template syntax (2024)](https://bophin-com.ngontinh24.com/article/language-embedded-ruby-erb-template-syntax) provides a good overview of the syntax and power of ERB.
195
+ ```
196
+ ~/.prompts/ # Default prompts directory
197
+ ├── ask.txt # Simple question prompt
198
+ ├── code_review.txt # Code review prompt
199
+ ├── roles/ # Role/system prompts
200
+ │ ├── expert.txt # Expert role
201
+ │ └── teacher.txt # Teaching role
202
+ └── _prompts.log # History log
203
+ ```
213
204
 
214
- Most websites that have information about ERB will give examples of how to use ERB to generate dynamic HTML content for web-based applications. That is a common use case for ERB. AIA on the other hand uses ERB to generate dynamic or conditional prompt text for LLM processing.
205
+ ## Configuration
215
206
 
216
- ## Prompt Directives
207
+ ### Essential Configuration Options
217
208
 
218
- Downstream processing directives were added to the `prompt_manager` gem used by AIA at version 0.4.1. These directives are lines in the prompt text file that begin with "//" having this pattern:
209
+ The most commonly used configuration options:
219
210
 
220
- ```bash
221
- //command params
222
- ```
211
+ | Option | Default | Description |
212
+ |--------|---------|-------------|
213
+ | `model` | `gpt-4o-mini` | AI model to use |
214
+ | `prompts_dir` | `~/.prompts` | Directory containing prompts |
215
+ | `out_file` | `temp.md` | Default output file |
216
+ | `temperature` | `0.7` | Model creativity (0.0-1.0) |
217
+ | `chat` | `false` | Start in chat mode |
223
218
 
224
- There is no space between the "//" and the command. Commands do not have to have params. These params are typically space delimited when more than one is required. It all depends on the command.
219
+ ### Configuration Precedence
225
220
 
226
- ### Parameter and Shell Substitution in Directives
221
+ AIA determines configuration settings using this order (highest to lowest priority):
227
222
 
228
- When you combine prompt directives with prompt parameters and shell envar substitutions you can get some powerful compositional prompts.
223
+ 1. **Embedded config directives** (in prompt files): `//config model = gpt-4`
224
+ 2. **Command-line arguments**: `--model gpt-4`
225
+ 3. **Environment variables**: `export AIA_MODEL=gpt-4`
226
+ 4. **Configuration files**: `~/.aia/config.yml`
227
+ 5. **Default values**
229
228
 
230
- Here is an example of a pure generic directive.
229
+ ### Configuration Methods
231
230
 
231
+ **Environment Variables:**
232
232
  ```bash
233
- //[DIRECTIVE_NAME] [DIRECTIVE_PARAMS]
233
+ export AIA_MODEL=gpt-4
234
+ export AIA_PROMPTS_DIR=~/my-prompts
235
+ export AIA_TEMPERATURE=0.8
234
236
  ```
235
237
 
236
- When the prompt runs, you will be asked to provide a value for each of the parameters. You could answer "shell" for the directive name and "calc 22/7" if you wanted a bad approximation of PI.
237
-
238
- Try this prompt file:
239
- ```bash
240
- //shell calc [FORMULA]
238
+ **Configuration File** (`~/.aia/config.yml`):
239
+ ```yaml
240
+ model: gpt-4
241
+ prompts_dir: ~/my-prompts
242
+ temperature: 0.8
243
+ chat: false
244
+ ```
241
245
 
242
- What does that number mean to you?
246
+ **Embedded Directives** (in prompt files):
243
247
  ```
248
+ //config model = gpt-4
249
+ //config temperature = 0.8
244
250
 
245
- ### Directive Syntax
251
+ Your prompt content here...
252
+ ```
246
253
 
247
- Directives can be entered in chat or prompt files using the following syntax:
248
- - `//command args`
254
+ ### Complete Configuration Reference
249
255
 
250
- Supported directives:
251
- - `help`: Show available directives
252
- - `shell` or `sh`: Execute a shell command
253
- - `ruby` or `rb`: Execute Ruby code
254
- - `config` or `cfg`: Show or update configuration
255
- - `include` or `inc`: Include file content
256
- - `next`: Set/Show the next prompt ID to be processed
257
- - `pipeline`: Set/Extend/Show the workflow of prompt IDs
256
+ <details>
257
+ <summary>Click to view all configuration options</summary>
258
258
 
259
- When a directive produces output, it is added to the chat context. If there is no output, you are prompted again.
259
+ | Config Item Name | CLI Options | Default Value | Environment Variable |
260
+ |------------------|-------------|---------------|---------------------|
261
+ | adapter | --adapter | ruby_llm | AIA_ADAPTER |
262
+ | aia_dir | | ~/.aia | AIA_DIR |
263
+ | append | -a, --append | false | AIA_APPEND |
264
+ | chat | --chat | false | AIA_CHAT |
265
+ | clear | --clear | false | AIA_CLEAR |
266
+ | config_file | -c, --config_file | ~/.aia/config.yml | AIA_CONFIG_FILE |
267
+ | debug | -d, --debug | false | AIA_DEBUG |
268
+ | embedding_model | --em, --embedding_model | text-embedding-ada-002 | AIA_EMBEDDING_MODEL |
269
+ | erb | | true | AIA_ERB |
270
+ | frequency_penalty | --frequency_penalty | 0.0 | AIA_FREQUENCY_PENALTY |
271
+ | fuzzy | -f, --fuzzy | false | AIA_FUZZY |
272
+ | image_quality | --iq, --image_quality | standard | AIA_IMAGE_QUALITY |
273
+ | image_size | --is, --image_size | 1024x1024 | AIA_IMAGE_SIZE |
274
+ | image_style | --style, --image_style | vivid | AIA_IMAGE_STYLE |
275
+ | log_file | -l, --log_file | ~/.prompts/_prompts.log | AIA_LOG_FILE |
276
+ | markdown | --md, --markdown | true | AIA_MARKDOWN |
277
+ | max_tokens | --max_tokens | 2048 | AIA_MAX_TOKENS |
278
+ | model | -m, --model | gpt-4o-mini | AIA_MODEL |
279
+ | next | -n, --next | nil | AIA_NEXT |
280
+ | out_file | -o, --out_file | temp.md | AIA_OUT_FILE |
281
+ | parameter_regex | --regex | '(?-mix:(\[[A-Z _\|]+\]))' | AIA_PARAMETER_REGEX |
282
+ | pipeline | --pipeline | [] | AIA_PIPELINE |
283
+ | presence_penalty | --presence_penalty | 0.0 | AIA_PRESENCE_PENALTY |
284
+ | prompt_extname | | .txt | AIA_PROMPT_EXTNAME |
285
+ | prompts_dir | -p, --prompts_dir | ~/.prompts | AIA_PROMPTS_DIR |
286
+ | refresh | --refresh | 7 (days) | AIA_REFRESH |
287
+ | require_libs | --rq --require | [] | AIA_REQUIRE_LIBS |
288
+ | role | -r, --role | | AIA_ROLE |
289
+ | roles_dir | | ~/.prompts/roles | AIA_ROLES_DIR |
290
+ | roles_prefix | --roles_prefix | roles | AIA_ROLES_PREFIX |
291
+ | shell | | true | AIA_SHELL |
292
+ | speak | --speak | false | AIA_SPEAK |
293
+ | speak_command | | afplay | AIA_SPEAK_COMMAND |
294
+ | speech_model | --sm, --speech_model | tts-1 | AIA_SPEECH_MODEL |
295
+ | system_prompt | --system_prompt | | AIA_SYSTEM_PROMPT |
296
+ | temperature | -t, --temperature | 0.7 | AIA_TEMPERATURE |
297
+ | terse | --terse | false | AIA_TERSE |
298
+ | tool_paths | --tools | [] | AIA_TOOL_PATHS |
299
+ | allowed_tools | --at --allowed_tools | nil | AIA_ALLOWED_TOOLS |
300
+ | rejected_tools | --rt --rejected_tools | nil | AIA_REJECTED_TOOLS |
301
+ | top_p | --top_p | 1.0 | AIA_TOP_P |
302
+ | transcription_model | --tm, --transcription_model | whisper-1 | AIA_TRANSCRIPTION_MODEL |
303
+ | verbose | -v, --verbose | false | AIA_VERBOSE |
304
+ | voice | --voice | alloy | AIA_VOICE |
260
305
 
261
- ### AIA Specific Directive Commands
306
+ </details>
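+ 
+ One entry worth calling out is `parameter_regex`, which controls how AIA recognizes `[PLACEHOLDER]` parameters in prompt text. A hedged sketch of overriding it, assuming (as the default value suggests) the regex is passed as a string whose first capture group matches the whole placeholder token:
+ 
+ ```bash
+ # Recognize lowercase <<topic>>-style placeholders instead of [TOPIC]
+ aia --regex '(<<[a-z_]+>>)' my_prompt
+ ```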
262
307
 
263
- At this time AIA only has a few directives which are detailed below.
308
+ ## Advanced Features
264
309
 
265
- #### //config
310
+ ### Prompt Directives
266
311
 
267
- The `//config` directive within a prompt text file is used to tailor the specific configuration environment for the prompt. All configuration items are available to have their values changed. The order of value assignment for a configuration item starts with the default value which is replaced by the envar value which is replaced by the command line option value which is replaced by the value from the config file.
312
+ Directives are special commands in prompt files that begin with `//` and provide dynamic functionality:
268
313
 
269
- The `//config` is the last and final way of changing the value for a configuration item for a specific prompt.
314
+ | Directive | Description | Example |
315
+ |-----------|-------------|---------|
316
+ | `//config` | Set configuration values | `//config model = gpt-4` |
317
+ | `//context` | Show context for this conversation | `//context` |
318
+ | `//include` | Insert file contents | `//include path/to/file.txt` |
319
+ | `//shell` | Execute shell commands | `//shell ls -la` |
320
+ | `//robot` | Show the pet robot ASCII art w/versions | `//robot` |
321
+ | `//ruby` | Execute Ruby code | `//ruby puts "Hello World"` |
322
+ | `//next` | Set next prompt in sequence | `//next summary` |
323
+ | `//pipeline` | Set prompt workflow | `//pipeline analyze,summarize,report` |
324
+ | `//clear` | Clear conversation history | `//clear` |
325
+ | `//help` | Show available directives | `//help` |
326
+ | `//available_models` | List available models | `//available_models` |
327
+ | `//tools` | Show a list of available tools and their description | `//tools` |
328
+ | `//review` | Review current context | `//review` |
270
329
 
271
- The switch options are treated like booleans. They are either `true` or `false`. Their name within the context of a `//config` directive always ends with a "?" character - question mark.
330
+ Directives can also be used in interactive chat sessions.
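+ 
+ For example, if you started a chat session with `--terse` but decide you want more comprehensive answers, enter a directive as your next follow-up prompt:
+ 
+ ```bash
+ # Typed as a follow-up prompt inside an aia --chat session
+ //config terse = false
+ ```
+ 
+ The directive is executed, and your subsequent follow-up prompts will get lengthier responses from the LLM.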
272
331
 
273
- To set the value of a switch using `//config`, for example `--terse` or `--chat`, do this:
332
+ #### Configuration Directive Examples
274
333
 
275
334
  ```bash
276
- //config chat? = true
277
- //config terse? = true
278
- ```
335
+ # Set model and temperature for this prompt
336
+ //config model = gpt-4
337
+ //config temperature = 0.9
279
338
 
280
- A configuration item such as `--out_file` or `--model` has an associated value on the command line. To set that value with the `//config` directive do it like this:
339
+ # Enable chat mode and terse responses
340
+ //config chat = true
341
+ //config terse = true
281
342
 
282
- ```bash
283
- //config model = gpt-3.5-turbo
284
- //config out_file = temp.md
343
+ Your prompt content here...
285
344
  ```
286
345
 
287
- BTW: the "=" is completely options. Its actuall ignored as is ":=" if you were to choose that as your assignment operator. Also the number of spaces between the item and the value is complete arbitrary. I like to line things up so this syntax is just as valie:
346
+ #### Dynamic Content Examples
288
347
 
289
348
  ```bash
290
- //config model gpt-3.5-turbo
291
- //config out_file temp.md
292
- //config chat? true
293
- //config terse? true
294
- //config model gpt-4
295
- ```
349
+ # Include file contents
350
+ //include ~/project/README.md
296
351
 
297
- NOTE: if you specify the same config item name more than once within the prompt file, it's the last one that will be set when the prompt is finally processed through the LLM. For example, in the example above `gpt-4` will be the model used. Being first does not count in this case.
352
+ # Execute shell commands
353
+ //shell git log --oneline -10
298
354
 
299
- #### //include
355
+ # Run Ruby code
356
+ //ruby require 'json'; puts JSON.pretty_generate({status: "ready"})
300
357
 
301
- Example:
302
- ```bash
303
- //include path_to_file
358
+ Analyze the above information and provide insights.
304
359
  ```
305
360
 
306
- The `path_to_file` can be either absolute or relative. If it is relative, it is anchored at the PWD. If the `path_to_file` includes envars, the `--shell` CLI option must be used to replace the envar in the directive with its actual value.
307
-
308
- The file that is included will have any comments or directives excluded. It is expected that the file will be a text file so that its content can be pre-pended to the existing prompt. If the file is a source code file (ex: file.rb) the source code will be included; however, any comment line or line that starts with "//" will be excluded.
361
+ ### Shell Integration
309
362
 
310
- #### //ruby
363
+ AIA automatically processes shell patterns in prompts:
311
364
 
312
- The `//ruby` directive executes Ruby code. You can use this to perform complex operations or interact with Ruby libraries.
365
+ - **Environment variables**: `$HOME`, `${USER}`
366
+ - **Command substitution**: `$(date)`, `$(git branch --show-current)`
313
367
 
314
- For example:
315
- ```ruby
316
- //ruby puts "Hello from Ruby"
317
- ```
318
-
319
- You can also use the `--require` option to specify Ruby libraries to require before executing Ruby code:
368
+ **Examples:**
320
369
 
321
370
  ```bash
322
- # Command line
323
- aia --rq json,csv --require os my_prompt
371
+ # Dynamic system information
372
+ As a system administrator on a $(uname -s) platform, how do I optimize performance?
324
373
 
325
- # In chat
326
- //ruby JSON.parse('{"data": [1,2,3]}')["data"]
327
- ```
374
+ # Include file contents via shell
375
+ Here's my current configuration: $(cat ~/.bashrc | head -20)
328
376
 
329
- #### //shell
330
- Example:
331
- ```bash
332
- //shell some_shell_command
377
+ # Use environment variables
378
+ My home directory is $HOME and I'm user $USER.
333
379
  ```
334
380
 
335
- It is expected that the shell command will return some text to STDOUT which will be pre-pended to the existing prompt text within the prompt file.
381
+ **Security Note**: Be cautious with shell integration. Review prompts before execution as they can run arbitrary commands.
336
382
 
337
- There are no limitations on what the shell command can be. For example if you wanted to bypass the stripping of comments and directives from a file you could do something like this:
338
- ```bash
339
- //shell cat path_to_file
340
- ```
383
+ ### Embedded Ruby (ERB)
341
384
 
342
- Which does basically the same thing as the `//include` directive, except it uses the entire content of the file. For relative file paths the same thing applies. The file's path will be relative to the PWD.
385
+ AIA supports full ERB processing in prompts for dynamic content generation:
343
386
 
344
- #### //next
345
- Examples:
346
- ```bash
347
- # Show the next prompt ID
348
- //next
387
+ ```erb
388
+ <%# ERB example in prompt file %>
389
+ Current time: <%= Time.now %>
390
+ Random number: <%= rand(100) %>
349
391
 
350
- # Set the next prompt ID
351
- //next prompt_id
392
+ <% if ENV['USER'] == 'admin' %>
393
+ You have admin privileges.
394
+ <% else %>
395
+ You have standard user privileges.
396
+ <% end %>
352
397
 
353
- # Same as
354
- //config next
355
- //config next = prompt_id
398
+ <%= AIA.config.model %> is the current model.
356
399
  ```
357
400
 
358
- #### //pipeline
401
+ ### Prompt Sequences
359
402
 
360
- Examples:
361
- ```bash
362
- # Show the current prompt workflow
363
- //pipeline
403
+ Chain multiple prompts for complex workflows:
364
404
 
365
- # Set the prompt workflow
366
- //pipeline = prompt_id_1, prompt_id_2, prompt_id_3
405
+ #### Using --next
367
406
 
368
- # Extend the prompt workflow
369
- //pipeline << prompt_id_4, prompt_id_5, prompt_id_6
407
+ ```bash
408
+ # Command line
409
+ aia analyze --next summarize --next report
370
410
 
371
- # Same as
372
- //config pipeline
373
- //config pipeline = prompt_id_1, prompt_id_2, prompt_id_3
374
- //config pipeline << prompt_id_4, prompt_id_5, prompt_id_6
411
+ # In prompt files
412
+ # analyze.txt contains: //next summarize
413
+ # summarize.txt contains: //next report
375
414
  ```
376
415
 
377
- ### Using Directives in Chat Sessions
378
-
379
- When you are in a chat session, you may use a directive as a follow up prompt. For example if you started the chat session with the option `--terse` expecting to get short answers from the LLM, but then you decide that you want more comprehensive answers, you may do this in a chat follow up:
416
+ #### Using --pipeline
380
417
 
381
418
  ```bash
382
- //config terse? false
383
- ```
384
-
385
- The directive is executed and a new follow up prompt can be entered with a more lengthy response generated from the LLM.
419
+ # Command line
420
+ aia research --pipeline analyze,summarize,report,present
386
421
 
387
- The directive `//clear` truncates your entire session context. The LLM will not remember anything you have discussed.
422
+ # In prompt file
423
+ //pipeline analyze,summarize,report,present
424
+ ```
388
425
 
389
- ## Prompt Sequences
426
+ #### Example Workflow
390
427
 
391
- Why would you need/want to use a sequence of prompts in a batch situation? Maybe you have a complex prompt which exceeds the token limitations of your model for input so you need to break it up into multiple parts. Or suppose it's a simple prompt but the number of tokens on the output is limited and you do not get exactly the kind of full response for which you were looking.
428
+ **research.txt:**
429
+ ```
430
+ //config model = gpt-4
431
+ //next analyze
392
432
 
393
- Sometimes it takes a series of prompts to get the kind of response that you want. The response from one prompt becomes a context for the next prompt. This is easy to do within a `chat` session where you are manually entering and adjusting your prompts until you get the kind of response that you want.
433
+ Research the topic: [RESEARCH_TOPIC]
434
+ Provide comprehensive background information.
435
+ ```
394
436
 
395
- If you need to do this on a regular basis or within a batch you can use AIA and the `--next` and `--pipeline` command line options.
437
+ **analyze.txt:**
438
+ ```
439
+ //config out_file = analysis.md
440
+ //next summarize
396
441
 
397
- These two options specify the sequence of prompt IDs to be processed. Both options are available to be used within a prompt file using the `//next` and `//pipeline` directives. Like all embedded directives you can take advantage of parameterization, shell integration, and Ruby. With this kind of dynamic content and control flow in your prompts you will start to feel like Tim the Tool man - more power!
442
+ Analyze the research data and identify key insights.
443
+ ```
398
444
 
399
- Consider the condition in which you have 4 prompt IDs that need to be processed in sequence. The IDs and associated prompt file names are:
445
+ **summarize.txt:**
446
+ ```
447
+ //config out_file = summary.md
400
448
 
401
- | Prompt ID | Prompt File |
402
- | --------- | ----------- |
403
- | one | one.txt |
404
- | two | two.txt |
405
- | three | three.txt |
406
- | four | four.txt |
449
+ Create a concise summary of the analysis with actionable recommendations.
450
+ ```
407
451
 
452
+ ### Roles and System Prompts
408
453
 
409
- ### --next
454
+ Roles define the context and personality for AI responses:
410
455
 
411
456
  ```bash
412
- aia one --next two --out_file temp.md
413
- aia three --next four temp.md -o answer.md
457
+ # Use a predefined role
458
+ aia --role expert analyze_code.rb
459
+
460
+ # Roles are stored in ~/.prompts/roles/
461
+ # expert.txt might contain:
462
+ # "You are a senior software engineer with 15 years of experience..."
414
463
  ```
415
464
 
416
- or within each of the prompt files you use the `//next` directive:
465
+ **Creating Custom Roles:**
417
466
 
418
467
  ```bash
419
- one.txt contains //next two
420
- two.txt contains //next three
421
- three.txt contains //next four
468
+ # Create a code reviewer role
469
+ cat > ~/.prompts/roles/code_reviewer.txt << EOF
470
+ You are an experienced code reviewer. Focus on:
471
+ - Code quality and best practices
472
+ - Security vulnerabilities
473
+ - Performance optimizations
474
+ - Maintainability issues
475
+
476
+ Provide specific, actionable feedback.
477
+ EOF
422
478
  ```
423
- BUT if you have more than two prompts in your sequence then consider using the --pipeline option.
424
479
 
425
- **The directive //next is short for //config next**
480
+ ### RubyLLM::Tool Support
426
481
 
427
- ### --pipeline
482
+ AIA supports function calling through RubyLLM tools for extended capabilities:
428
483
 
429
484
  ```bash
430
- aia one --pipeline two,three,four
431
- ```
485
+ # Load tools from directory
486
+ aia --tools ~/my-tools/ --chat
432
487
 
433
- or inside of the `one.txt` prompt file use this directive:
488
+ # Load specific tool files
489
+ aia --tools weather.rb,calculator.rb --chat
434
490
 
435
- ```bash
436
- //pipeline two,three,four
491
+ # Filter tools
492
+ aia --tools ~/tools/ --allowed_tools weather,calc
493
+ aia --tools ~/tools/ --rejected_tools deprecated
437
494
  ```
438
495
 
439
- **The directive //pipeline is short for //config pipeline**
440
-
441
- ### Best Practices ??
442
-
496
+ **Tool Examples** (see `examples/tools/` directory):
497
+ - File operations (read, write, list)
498
+ - Shell command execution
499
+ - API integrations
500
+ - Data processing utilities
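+ 
+ Writing your own tool means subclassing `RubyLLM::Tool`. A minimal sketch (hypothetical `weather.rb`, with canned data standing in for a real API call):
+ 
+ ```ruby
+ # weather.rb -- hypothetical tool file; load it with: aia --tools weather.rb --chat
+ require "ruby_llm"
+ 
+ class Weather < RubyLLM::Tool
+   description "Returns a simple weather report for a named city"
+   param :city, desc: "City name, e.g. 'Berlin'"
+ 
+   # RubyLLM calls execute with keyword arguments matching the declared params
+   def execute(city:)
+     # A real tool would call a weather API here
+     { city: city, temperature_c: 21, sky: "partly cloudy" }
+   end
+ end
+ ```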
443
501
 
444
- Since the response of one prompt is fed into the next prompt within the sequence, instead of having all prompts write their response to the same out file, use these directives inside the associated prompt files:
502
+ **Shared Tools Collection:**
503
+ AIA can use the [shared_tools gem](https://github.com/madbomber/shared_tools), a curated collection of commonly used tools (aka functions), loaded via the --require option.
445
504
 
505
+ ```bash
506
+ # Access shared tools automatically (included with AIA)
507
+ aia --require shared_tools/ruby_llm --chat
446
508
 
447
- | Prompt File | Directive |
448
- | ----------- | --------- |
449
- | one.txt | //config out_file one.md |
450
- | two.txt | //config out_file two.md |
451
- | three.txt | //config out_file three.md |
452
- | four.txt | //config out_file four.md |
453
-
454
- This way you can see the response that was generated for each prompt in the sequence.
509
+ # To access just one specific shared tool
510
+ aia --require shared_tools/ruby_llm/edit_file --chat
455
511
 
456
- ### Example pipeline
512
+ # Combine with your own local custom RubyLLM-based tools
513
+ aia --require shared_tools/ruby_llm --tools ~/my-tools/ --chat
514
+ ```
457
515
 
458
- TODO: the audio-to-text is still under development.
516
+ The above examples show the shared_tools being used within an interactive chat session. They are also available in batch prompts using the same --require option. You can also require the shared_tools with the //ruby directive or with a require statement inside an ERB block, as sketched below.
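+ 
+ A sketch of those two in-prompt alternatives:
+ 
+ ```bash
+ # Load shared_tools from within a prompt file via the //ruby directive
+ //ruby require 'shared_tools/ruby_llm'
+ 
+ # Or via a require inside an ERB block
+ <% require 'shared_tools/ruby_llm' %>
+ ```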
459
517
 
460
- Suppose you have an audio file of a meeting. You want to get a transcription of what was said in that meeting. Sometimes raw transcriptions hide the real value of the recording so you have crafted a prompt that takes the raw transcriptions and does a technical summary with a list of action items.
518
+ ## Examples & Tips
461
519
 
462
- Create two prompts named transcribe.txt and tech_summary.txt
520
+ ### Practical Examples
463
521
 
522
+ #### Code Review Prompt
464
523
  ```bash
465
- # transcribe.txt
466
- # Desc: takes one audio file
467
- # note that there is no "prompt" text only the directive
524
+ # ~/.prompts/code_review.txt
525
+ //config model = gpt-4o-mini
526
+ //config temperature = 0.3
468
527
 
469
- //config model whisper-1
470
- //next tech_summary
528
+ Review this code for:
529
+ - Best practices adherence
530
+ - Security vulnerabilities
531
+ - Performance issues
532
+ - Maintainability concerns
533
+
534
+ Code to review:
471
535
  ```
472
536
 
473
- and
537
+ Usage: `aia code_review mycode.rb`
474
538
 
539
+ #### Meeting Notes Processor
475
540
  ```bash
476
- # tech_summary.txt
477
-
478
- //config model gpt-4o-mini
479
- //config out_file meeting_summary.md
541
+ # ~/.prompts/meeting_notes.txt
542
+ //config model = gpt-4o-mini
543
+ //pipeline format,action_items
480
544
 
481
- Review the raw transcript of a technical meeting,
482
- summarize the discussion and
483
- note any action items that were generated.
545
+ Raw meeting notes:
546
+ //include [NOTES_FILE]
484
547
 
485
- Format your response in markdown.
548
+ Please clean up and structure these meeting notes.
486
549
  ```
487
550
 
488
- Now you can do this:
489
-
551
+ #### Documentation Generator
490
552
  ```bash
491
- aia transcribe my_tech_meeting.m4a
492
- ```
493
-
494
- Your summary of the meeting is in the file `meeting_summary.md`
495
-
553
+ # ~/.prompts/document.txt
554
+ //config model = gpt-4o-mini
555
+ //shell find [PROJECT_DIR] -name "*.rb" | head -10
496
556
 
497
- ## All About ROLES
498
-
499
- ### The --roles_prefix (AIA_ROLES_PREFIX)
500
-
501
- The second kind of prompt is called a role (aka system prompt). Sometimes the role is incorporated into the instruction. For example, "As a magician make a rabbit appear out of a hat." To reuse the same role in multiple prompts, AIA encourages you to designate a special subdirectory for prompts that are specific to personification - roles.
502
-
503
- The default `roles_prefix` is set to 'roles'. This creates a subdirectory under the `prompts_dir` where role files are stored. Internally, AIA calculates a `roles_dir` value by joining `prompts_dir` and `roles_prefix`. It is recommended to keep the roles organized this way for better organization and management.
504
-
505
- ### The --role Option
506
-
507
- The `--role` option is used to identify a personification prompt within your roles directory which defines the context within which the LLM is to provide its response. The text of the role ID is pre-pended to the text of the primary prompt to form a complete prompt to be processed by the LLM.
508
-
509
- For example consider:
510
-
511
- ```bash
512
- aia -r ruby refactor my_class.rb
557
+ Generate documentation for the Ruby project shown above.
558
+ Include: API references, usage examples, and setup instructions.
513
559
  ```
514
560
 
515
- The role ID is `ruby`, the prompt ID is `refactor`, and my_class.rb is a context file.
561
+ ### Executable Prompts
516
562
 
517
- Within the roles directory the contents of the text file `ruby.txt` will be pre-pended to the contents of the `refactor.txt` file from the prompts directory to produce a complete prompt. That complete prompt will have any parameters followed by directives processed before sending the combined prompt text and the content of the context file to the LLM.
563
+ The `--exec` flag is used to create executable prompts. If it is not present on the shebang line then the prompt file will be treated like any other context file. That means that the file will be included as context in the prompt but no dynamic content integration or directives will be processed. All other AIA options are, well, optional. All you need is an initial prompt ID and the --exec flag.
518
564
 
519
- Note that `--role` is just a way of saying add this prompt text file to the front of this other prompt text file. The contents of the "role" prompt could be anything. It does not necessarily have to be an actual role.
565
+ In the example below the `--no-out_file` option directs the output from the LLM processing of the prompt to STDOUT. This way executable prompts can be good citizens on the *nix command line, receiving piped input via STDIN and sending output to STDOUT.
520
566
 
521
- AIA fully supports a directory tree within the `prompts_dir` as a way of organization or classification of your different prompt text files.
567
+ Create executable prompts:
522
568
 
569
+ **weather_report** (make executable with `chmod +x`):
523
570
  ```bash
524
- aia -r ruby sw_eng/doc_the_methods my_class.rb
525
- ```
526
-
527
- In this example the prompt text file `$AIA_ROLES_PREFIX/ruby.txt` is prepended to the prompt text file `$AIA_PROMPTS_DIR/sw_eng/doc_the_methods.txt`
571
+ #!/usr/bin/env aia run --no-out_file --exec
572
+ # Get current storm activity for the east and south coast of the US
528
573
 
529
- ### Other Ways to Insert Roles into Prompts
574
+ Summarize the tropical storm outlook for the Atlantic, Caribbean Sea and Gulf of America.
530
575
 
531
- Since AIA supports parameterized prompts you could make a keyword like "[ROLE]" be part of your prompt. For example consider this prompt:
532
-
533
- ```text
534
- As a [ROLE] tell me what you think about [SUBJECT]
576
+ //webpage https://www.nhc.noaa.gov/text/refresh/MIATWOAT+shtml/201724_MIATWOAT.shtml
535
577
  ```
536
578
 
537
- When this prompt is processed, AIA will ask you for a value for the keyword "[ROLE]" and the keyword "[SUBJECT]" to complete the prompt. Since AIA maintains a history of your previous answers, you could just choose something that you used in the past or answer with a completely new value.
538
-
539
- ## External CLI Tools Used
540
-
541
- To install the external CLI programs used by AIA:
542
-
543
- brew install fzf
544
-
545
- fzf
546
- Command-line fuzzy finder written in Go
547
- [https://github.com/junegunn/fzf](https://github.com/junegunn/fzf)
548
-
549
-
550
- ## Shell Completion
551
-
552
- You can set up a completion function in your shell that will complete on the prompt_id saved in your `prompts_dir` - functions for `bash`, `fish` and `zsh` are available. To get a copy of these functions do:
553
-
579
+ Usage:
554
580
  ```bash
555
- aia --completion bash
581
+ ./weather_report
582
+ ./weather_report | glow # Render the markdown with glow
556
583
  ```
557
584
 
558
- If you're not a fan of "born again" replace `bash` with one of the others.
559
-
560
- Copy the function to a place where it can be installed in your shell's instance. This might be a `.profile` or `.bashrc` file, etc.
561
-
562
- ## My Most Powerful Prompt
563
-
564
- This is just between you and me so don't go blabbing this around to everyone. My most powerful prompt is in a file named `ad_hoc.txt`. It looks like this:
565
-
566
- ```text
567
- [WHAT_NOW_HUMAN]
568
- ```
569
-
570
- Yep. Just a single parameter for which I can provide a value of anything that is on my mind at the time. Its advantage is that I do not pollute my shell's command history with lots of text.
585
+ ### Tips from the Author
571
586
 
587
+ #### The run Prompt
572
588
  ```bash
573
- aia ad_hoc
589
+ # ~/.prompts/run.txt
590
+ # Desc: A configuration only prompt file for use with executable prompts
591
+ # Put whatever you want here to setup the configuration desired.
592
+ # You could also add a system prompt to preface your intended prompt
574
593
  ```
575
594
 
576
- Or consider this executable prompt file:
595
+ Usage: `echo "What is the meaning of life?" | aia run`
577
596
 
597
+ #### The Ad Hoc One-shot Prompt
578
598
  ```bash
579
- #!/usr/bin/env aia run
599
+ # ~/.prompts/ad_hoc.txt
580
600
  [WHAT_NOW_HUMAN]
581
601
  ```
602
+ Usage: `aia ad_hoc` - perfect for any quick one-shot question without cluttering shell history.
582
603
 
583
- Where the `run` prompt ID has a `run.txt` file in the prompt directory that is basically empty. Or maybe `run.txt` has some prompt instructions for how to run the prompt - some kind of meta-thinking instructions.
584
-
585
- ## My Configuration
586
-
587
- I use the `bash` shell. In my `.bashrc` file I source another file named `.bashrc__aia` which looks like this:
588
-
604
+ #### Recommended Shell Setup
589
605
  ```bash
590
- # ~/.bashrc__aia
591
- # AI Assistant
592
-
593
- # These are the defaults:
606
+ # ~/.bashrc_aia
594
607
  export AIA_PROMPTS_DIR=~/.prompts
595
608
  export AIA_OUT_FILE=./temp.md
596
- export AIA_LOG_FILE=$AIA_PROMPTS_DIR/_prompts.log
597
609
  export AIA_MODEL=gpt-4o-mini
598
-
599
- # Not a default. Invokes spinner. If not true then there is no spinner
600
- # for feedback while waiting for the LLM to respond.
601
- export AIA_VERBOSE=true
610
+ export AIA_VERBOSE=true # Shows spinner while waiting for LLM response
602
611
 
603
612
  alias chat='aia --chat --terse'
604
-
605
- # rest of the file is the completion function
613
+ ask() { echo "$1" | aia run --no-out_file; }
606
614
  ```
607
615
 
616
+ The `chat` alias and the `ask` function shown above are two powerful tools for interacting with the AI assistant. The `chat` alias starts an interactive conversation, while the `ask` function sends a single question and prints the response. The `ask` function relies on the `run` prompt ID discussed above, which is also used in making executable prompt files.
608
617
 
609
-
610
- ## Executable Prompts
611
-
612
- With all of the capabilities of the AI Assistant, you can create your own executable prompts. These prompts can be used to automate tasks, generate content, or perform any other action that you can think of. All you need to get started with executable prompts is a prompt that does not do anything. For example consider my `run.txt` prompt.
613
-
614
- ```bash
615
- # ~/.prompts/run.txt
616
- # Desc: Run executable prompts coming in via STDIN
618
+ #### Prompt Directory Organization
619
+ ```
620
+ ~/.prompts/
621
+ ├── daily/ # Daily workflow prompts
622
+ ├── development/ # Coding and review prompts
623
+ ├── research/ # Research and analysis
624
+ ├── roles/ # System prompts
625
+ └── workflows/ # Multi-step pipelines
617
626
  ```
618
627
 
619
- Remember that the '#' character indicates a comment line making the `run` prompt ID basically a do nothing prompt.
628
+ ## Security Considerations
620
629
 
621
- An executable prompt can reside anywhere either in your $PATH or not. That is your choice. It must however be executable. Consider the following `top10` executable prompt:
630
+ ### Shell Command Execution
622
631
 
623
- ```bash
624
- #!/usr/bin/env aia run --no-out_file
625
- # File: top10
626
- # Desc: The top 10 cities by population
632
+ **⚠️ Important Security Warning**
627
633
 
628
- what are the top 10 cities by population in the USA. Summarize what people
629
- like about living in each city. Include an average cost of living. Include
630
- links to the Wikipedia pages. Format your response as a markdown document.
631
- ```
634
+ AIA executes shell commands and Ruby code embedded in prompts. This provides powerful functionality but requires caution:
635
+
636
+ - **Review prompts before execution**, especially from untrusted sources
637
+ - **Avoid storing sensitive data** in prompts (API keys, passwords)
638
+ - **Use parameterized prompts** instead of hardcoding sensitive values
639
+ - **Limit file permissions** on prompt directories if sharing systems
632
640
 
633
- Make sure that it is executable.
641
+ ### Safe Practices
634
642
 
635
643
  ```bash
636
- chmod +x top10
637
- ```
644
+ # Good: Use parameters for sensitive data
645
+ //config api_key = [API_KEY]
638
646
 
639
- The magic is in the first line of the prompt. It is a shebang line that tells the system how to execute the prompt. In this case it is telling the system to use the `aia` command line tool to execute the `run` prompt. The `--no-out_file` option tells the AIA command line tool not to write the output of the prompt to a file. Instead it will write the output to STDOUT. The remaining content of this `top10` prompt is sent via STDIN to the LLM.
647
+ # Bad: Hardcode secrets
648
+ //config api_key = sk-1234567890abcdef
640
649
 
641
- Now just execute it like any other command in your terminal.
650
+ # Good: Validate shell commands
651
+ //shell ls -la /safe/directory
642
652
 
643
- ```bash
644
- ./top10
653
+ # Bad: Dangerous shell commands
654
+ //shell rm -rf / # Never do this!
645
655
  ```
646
656
 
647
- Since its output is going to STDOUT you can set up a pipe chain. Using the CLI program `glow` to render markdown in the terminal
648
- (brew install glow)
657
+ ### Recommended Security Setup
649
658
 
650
659
  ```bash
651
- ./top10 | glow
660
+ # Set restrictive permissions on prompts directory
661
+ chmod 700 ~/.prompts
662
+ chmod 600 ~/.prompts/*.txt
652
663
  ```
653
664
 
654
- This executable prompt concept sets up the building blocks of a *nix CLI-based pipeline in the same way that the --pipeline and --next options and directives are used.
665
+ ## Troubleshooting
655
666
 
656
- ## Usage
657
-
658
- The usage report is obtained with either `-h` or `--help` options.
667
+ ### Common Issues
659
668
 
669
+ **Prompt not found:**
660
670
  ```bash
661
- aia --help
662
- ```
671
+ # Check prompts directory
672
+ ls $AIA_PROMPTS_DIR
663
673
 
664
- Key command-line options include:
674
+ # Verify prompt file exists
675
+ ls ~/.prompts/my_prompt.txt
665
676
 
666
- - `--adapter ADAPTER`: Choose the LLM interface adapter to use. Valid options are 'ruby_llm' (default) or something else in the future. See [RubyLLM Integration Guide](README_RUBY_LLM.md) for details.
667
- - `--model MODEL`: Specify which LLM model to use
668
- - `--chat`: Start an interactive chat session
669
- - `--role ROLE`: Specify a role/system prompt
670
- - And many more (use --help to see all options)
677
+ # Use fuzzy search
678
+ aia --fuzzy
679
+ ```
671
680
 
672
- **Note:** ERB and shell processing are now standard features and always enabled. This allows you to use embedded Ruby code and shell commands in your prompts without needing to specify any additional options.
681
+ **Model errors:**
682
+ ```bash
683
+ # List available models
684
+ aia --available_models
673
685
 
674
- ## Development
686
+ # Check model name spelling
687
+ aia --model gpt-4o # Correct
688
+ aia --model gpt4 # Incorrect
689
+ ```
675
690
 
676
- **ShellCommandExecutor Refactor:**
677
- The `ShellCommandExecutor` is now a class (previously a module). It stores the config object as an instance variable and provides cleaner encapsulation. For backward compatibility, class-level methods are available and delegate to instance methods internally.
691
+ **Shell integration not working:**
692
+ ```bash
693
+ # Verify shell patterns
694
+ echo "Test: $(date)" # Should show current date
695
+ echo "Home: $HOME" # Should show home directory
696
+ ```
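+ 
+ If both commands work in your terminal but substitution still fails inside prompts, test with a minimal prompt file; the prompt ID below is hypothetical:
+ 
+ ```bash
+ # Single quotes stop your own shell from expanding these patterns,
+ # so any expansion you see in the result was performed by AIA.
+ echo 'Today is $(date) and my home is $HOME.' > ~/.prompts/shell_test.txt
+ aia shell_test
+ ```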
678
697
 
679
- **Prompt Variable Fallback:**
680
- When processing a prompt file without a `.json` history file, variables are always parsed from the prompt text so you are prompted for values as needed.
698
+ **Configuration issues:**
699
+ ```bash
700
+ # Check current configuration
701
+ aia --config
681
702
 
682
- ## Contributing
703
+ # Debug configuration loading
704
+ aia --debug --config
705
+ ```
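+ 
+ Configuration values can also be supplied through environment variables. A sketch, assuming the same `AIA_` prefix convention as the `AIA_PROMPTS_DIR` variable shown above:
+ 
+ ```bash
+ # Hypothetical override for a single invocation
+ AIA_MODEL=gpt-4o-mini aia --config
+ ```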
683
706
 
684
- Bug reports and pull requests are welcome on GitHub at https://github.com/MadBomber/aia.
707
+ ### Error Messages
685
708
 
686
- When you find problems with AIA please note them as an issue. This thing was written mostly by a human and you know how error prone humans are. There should be plenty of errors to find.
709
+ | Error | Cause | Solution |
710
+ |-------|-------|----------|
711
+ | "Prompt not found" | Missing prompt file | Check file exists and spelling |
712
+ | "Model not available" | Invalid model name | Use `--available_models` to list valid models |
713
+ | "Shell command failed" | Invalid shell syntax | Test shell commands separately first |
714
+ | "Configuration error" | Invalid config syntax | Check config file YAML syntax |
687
715
 
688
- I'm not happy that some command line options for external commands are hard coded. I'm specifically talking about the way the `rg` and `fzf` tools are used. Their options decide the basic look and feel of the search capability on the command line. Maybe they should be part of the overall configuration so that users can tune the UI to their liking.
716
+ ### Debug Mode
689
717
 
690
- ## Roadmap
718
+ Enable debug output for troubleshooting:
691
719
 
692
- restore the prompt text file search. Currently fzf only looks at prompt IDs.
693
- - continue integration of the ruby_llm gem
694
- - support for Model Context Protocol
720
+ ```bash
721
+ # Enable debug mode
722
+ aia --debug my_prompt
695
723
 
696
- ## RubyLLM::Tool Support
724
+ # Combine with verbose for maximum output
725
+ aia --debug --verbose my_prompt
726
+ ```
697
727
 
698
- AIA supports function calling capabilities through the `RubyLLM::Tool` framework, enabling LLMs to execute custom functions during a chat session.
728
+ ### Performance Issues
699
729
 
700
- ### What Are RubyLLM Tools?
730
+ **Slow model responses:**
731
+ - Try smaller/faster models: `--model gpt-4o-mini`
732
+ - Reduce max_tokens: `--max_tokens 1000`
733
+ - Use lower temperature for faster responses: `--temperature 0.1`
701
734
 
702
- Tools (or functions) allow LLMs to perform actions beyond generating text, such as:
735
+ **Large prompt processing:**
736
+ - Break into smaller prompts using `--pipeline` (see the sketch after this list)
737
+ - Use `//include` selectively instead of large files
738
+ - Consider model context limits
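+ 
+ A minimal `--pipeline` sketch, assuming three prompt IDs (`extract`, `summarize`, `report`) already exist in your prompts directory:
+ 
+ ```bash
+ # Each prompt's response is passed along as context for the next
+ aia extract --pipeline summarize,report
+ ```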
703
739
 
704
- - Retrieving real-time information
705
- - Executing system commands
706
- - Accessing external APIs
707
- - Performing calculations
740
+ ## Development
708
741
 
709
- Check out the [examples/tools](examples/tools) directory which contains several ready-to-use tool implementations you can use as references.
742
+ ### Testing
710
743
 
711
- ### How to Use Tools
744
+ ```bash
745
+ # Run unit tests
746
+ rake test
712
747
 
713
- AIA provides three CLI options to manage function calling:
748
+ # Run integration tests
749
+ rake integration
714
750
 
715
- #### `--tools` Option
751
+ # Run all tests with coverage
752
+ rake all_tests
753
+ open coverage/index.html
754
+ ```
716
755
 
717
- Specifies where to find tool implementations:
756
+ ### Building
718
757
 
719
758
  ```bash
720
- # Load tools from multiple sources
721
- --tools /path/to/tools/directory,other/tools/dir,my_tool.rb
759
+ # Install locally with documentation
760
+ just install
722
761
 
723
- # Or use multiple --tools flags
724
- --tools my_first_tool.rb --tools /tool_repo/tools
725
- ```
726
-
727
- Each path can be:
728
-
729
- - A Ruby file implementing a `RubyLLM::Tool` subclass
730
- - A directory containing tool implementations (all Ruby files in that directory will be loaded)
762
+ # Generate documentation
763
+ just gen_doc
731
764
 
732
- Supporting files for tools can be placed in the same directory or subdirectories.
765
+ # Static code analysis
766
+ just flay
767
+ ```
733
768
 
734
- ### Filtering the tool paths
769
+ ### Architecture Notes
735
770
 
736
- The --tools option must have exact relative or absolute paths to the tool files to be used by AIA for function callbacks. If you specify directories, you may find yourself needing to filter the entire set of tools, either to allow some or to reject others based on an indicator in their file names. The following two options let you specify multiple sub-strings to match the tool paths against. For example, you might be comparing one version of a tool against another; their filenames could have version suffixes like tool_v1.rb and tool_v2.rb. Using the allowed and rejected filters, you can choose one or the other while using an entire directory full of tools.
771
+ **ShellCommandExecutor Refactor:**
772
+ The `ShellCommandExecutor` is now a class (previously a module) with instance variables for cleaner encapsulation. Class-level methods remain for backward compatibility.
737
773
 
738
- #### `--at`, `--allowed_tools` Option
774
+ **Prompt Variable Fallback:**
775
+ Variables are always parsed from prompt text when no `.json` history file exists, ensuring parameter prompting works correctly.
739
776
 
740
- Filters which tools to make available when loading from directories:
777
+ ## Contributing
741
778
 
742
- ```bash
743
- # Only allow tools with 'test' in their filename
744
- --tools my_tools_directory --allowed_tools test
745
- ```
779
+ Bug reports and pull requests are welcome on GitHub at https://github.com/MadBomber/aia.
746
780
 
747
- This is useful when you have many tools but only want to use specific ones in a session.
781
+ ### Reporting Issues
748
782
 
749
- #### `--rt`, `--rejected_tools` Option
783
+ When reporting issues, please include the following; the snippet after this list gathers most of it:
784
+ - AIA version: `aia --version`
785
+ - Ruby version: `ruby --version`
786
+ - Operating system
787
+ - Minimal reproduction example
788
+ - Error messages and debug output
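+ 
+ For example, on a Unix-like system:
+ 
+ ```bash
+ # Collect version and platform information for a bug report
+ aia --version && ruby --version && uname -a
+ ```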
750
789
 
751
- Excludes specific tools:
790
+ ### Development Setup
752
791
 
753
792
  ```bash
754
- # Exclude tools with '_v1' in their filename
755
- --tools my_tools_directory --rejected_tools _v1
793
+ git clone https://github.com/MadBomber/aia.git
794
+ cd aia
795
+ bundle install
796
+ rake test
756
797
  ```
757
798
 
758
- Ideal for excluding older versions or temporarily disabling specific tools.
799
+ ### Areas for Improvement
759
800
 
760
- ### Creating Your Own Tools
801
+ - Configuration UI for complex setups
802
+ - Better error handling and user feedback
803
+ - Performance optimization for large prompt libraries
804
+ - Enhanced security controls for shell integration
761
805
 
762
- To create a custom tool:
763
-
764
- 1. Create a Ruby file that subclasses `RubyLLM::Tool`
765
- 2. Define the tool's parameters and functionality
766
- 3. Use the `--tools` option to load it in your AIA session
767
-
768
- For implementation details, refer to the [examples in the repository](examples/tools) or the RubyLLM documentation.
769
-
770
- ## MCP Support
806
+ ## Roadmap
771
807
 
772
- Abandon all hope of seeing an MCP client added to AIA. Maybe sometime in the future there will be a new gem, "ruby_llm-mcp", that implements an MCP client as a native RubyLLM::Tool subclass. If that ever happens, you would use it the same way you use any other RubyLLM::Tool subclass, which AIA now supports.
808
+ - **Enhanced Search**: Restore full-text search within prompt files
809
+ - **Model Context Protocol**: Continue integration with ruby_llm gem
810
+ - **UI Improvements**: Better configuration management for fzf and rg tools
811
+ - **Performance**: Optimize prompt loading and processing
812
+ - **Security**: Enhanced sandboxing for shell command execution
773
813
 
774
814
  ## License
775
815