aia 0.9.5 → 0.9.7
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/.version +1 -1
- data/CHANGELOG.md +24 -1
- data/README.md +569 -529
- data/examples/headlines +24 -15
- data/examples/tools/run_shell_command.rb +16 -4
- data/images/aia.png +0 -0
- data/lib/aia/chat_processor_service.rb +10 -9
- data/lib/aia/config.rb +96 -4
- data/lib/aia/directive_processor.rb +50 -10
- data/lib/aia/ruby_llm_adapter.rb +7 -5
- data/lib/aia/session.rb +12 -3
- data/lib/aia/ui_presenter.rb +1 -1
- data/lib/aia/utility.rb +1 -1
- data/lib/aia.rb +7 -1
- data/lib/extensions/ruby_llm/modalities.rb +30 -22
- metadata +32 -3
data/README.md
CHANGED
@@ -1,775 +1,815 @@
-
+<div align="center">
+  <h1>AI Assistant (AIA)</h1>
+  <img src="images/aia.png" alt="Robots waiter ready to take your order."><br />
+  **The Prompt is the Code**
+</div>

-
+AIA is a command-line utility that facilitates interaction with AI models through dynamic prompt management. It automates the management of pre-compositional prompts and executes generative AI commands with enhanced features including embedded directives, shell integration, embedded Ruby, history management, interactive chat, and prompt workflows.

-
-, , AIA is a command-line utility that facilitates
-(\____/) AI Assistant interaction with AI models. It automates the
-(_oo_) Fancy LLM management of pre-compositional prompts and
-(O) is Online executes generative AI (Gen-AI) commands on those
-__||__ \) prompts. AIA includes enhanced features such as
-[/______\] / * embedded directives * shell integration
-/ \__AI__/ \/ * embedded Ruby * history management
-/ /__\ * interactive chat * prompt workflows
-(\ /____\ # supports RubyLLM::Tool integration
-```
-
-AIA leverages the [prompt_manager gem](https://github.com/madbomber/prompt_manager) to manage prompts. It utilizes the [CLI tool fzf](https://github.com/junegunn/fzf) for prompt selection.
+AIA leverages the [prompt_manager gem](https://github.com/madbomber/prompt_manager) to manage prompts, utilizes the [CLI tool fzf](https://github.com/junegunn/fzf) for prompt selection, and can use the [shared_tools gem](https://github.com/madbomber/shared_tools) which provides a collection of common ready-to-use functions for use with LLMs that support tools.

**Wiki**: [Checkout the AIA Wiki](https://github.com/MadBomber/aia/wiki)

-
-
-<!-- Tocer[start]: Auto-generated, don't remove. -->
-
-## Table of Contents
-
-- [Configuration Options](#configuration-options)
-- [Configuration Flexibility](#configuration-flexibility)
-- [Expandable Configuration](#expandable-configuration)
-- [The Local Model Registry Refresh](#the-local-model-registry-refresh)
-- [Important Note](#important-note)
-- [Shell Integration inside of a Prompt](#shell-integration-inside-of-a-prompt)
-- [Dynamic Shell Commands](#dynamic-shell-commands)
-- [Shell Command Safety](#shell-command-safety)
-- [Chat Session Use](#chat-session-use)
-- [Embedded Ruby (ERB)](#embedded-ruby-erb)
-- [Prompt Directives](#prompt-directives)
-- [Parameter and Shell Substitution in Directives](#parameter-and-shell-substitution-in-directives)
-- [Directive Syntax](#directive-syntax)
-- [AIA Specific Directive Commands](#aia-specific-directive-commands)
-- [//config](#config)
-- [//include](#include)
-- [//ruby](#ruby)
-- [//shell](#shell)
-- [//next](#next)
-- [//pipeline](#pipeline)
-- [Using Directives in Chat Sessions](#using-directives-in-chat-sessions)
-- [Prompt Sequences](#prompt-sequences)
-- [--next](#--next)
-- [--pipeline](#--pipeline)
-- [Best Practices ??](#best-practices-)
-- [Example pipeline](#example-pipeline)
-- [All About ROLES](#all-about-roles)
-- [The --roles_prefix (AIA_ROLES_PREFIX)](#the---roles_prefix-aia_roles_prefix)
-- [The --role Option](#the---role-option)
-- [Other Ways to Insert Roles into Prompts](#other-ways-to-insert-roles-into-prompts)
-- [External CLI Tools Used](#external-cli-tools-used)
-- [Shell Completion](#shell-completion)
-- [My Most Powerful Prompt](#my-most-powerful-prompt)
-- [My Configuration](#my-configuration)
-- [Executable Prompts](#executable-prompts)
-- [Usage](#usage)
-- [Development](#development)
-- [Contributing](#contributing)
-- [Roadmap](#roadmap)
-- [RubyLLM::Tool Support](#rubyllmtool-support)
-- [What Are RubyLLM Tools?](#what-are-rubyllm-tools)
-- [How to Use Tools](#how-to-use-tools)
-- [`--tools` Option](#--tools-option)
-- [Filtering the tool paths](#filtering-the-tool-paths)
-- [`--at`, `--allowed_tools` Option](#--at---allowed_tools-option)
-- [`--rt`, `--rejected_tools` Option](#--rt---rejected_tools-option)
-- [Creating Your Own Tools](#creating-your-own-tools)
-- [MCP Supported](#mcp-supported)
-- [License](#license)
+## Quick Start

-
+1. **Install AIA:**
+   ```bash
+   gem install aia
+   ```

-
-
-
-
-| Config Item Name | CLI Options | Default Value | Environment Variable |
-|----------------------|-------------|-----------------------------|---------------------------|
-| adapter | --adapter | ruby_llm | AIA_ADAPTER |
-| aia_dir | | ~/.aia | AIA_DIR |
-| append | -a, --append | false | AIA_APPEND |
-| chat | --chat | false | AIA_CHAT |
-| clear | --clear | false | AIA_CLEAR |
-| config_file | -c, --config_file | ~/.aia/config.yml | AIA_CONFIG_FILE |
-| debug | -d, --debug | false | AIA_DEBUG |
-| embedding_model | --em, --embedding_model | text-embedding-ada-002 | AIA_EMBEDDING_MODEL |
-| erb | | true | AIA_ERB |
-| frequency_penalty | --frequency_penalty | 0.0 | AIA_FREQUENCY_PENALTY |
-| fuzzy | -f, --fuzzy | false | AIA_FUZZY |
-| image_quality | --iq, --image_quality | standard | AIA_IMAGE_QUALITY |
-| image_size | --is, --image_size | 1024x1024 | AIA_IMAGE_SIZE |
-| image_style | --style, --image_style | vivid | AIA_IMAGE_STYLE |
-| log_file | -l, --log_file | ~/.prompts/_prompts.log | AIA_LOG_FILE |
-| markdown | --md, --markdown | true | AIA_MARKDOWN |
-| max_tokens | --max_tokens | 2048 | AIA_MAX_TOKENS |
-| model | -m, --model | gpt-4o-mini | AIA_MODEL |
-| next | -n, --next | nil | AIA_NEXT |
-| out_file | -o, --out_file | temp.md | AIA_OUT_FILE |
-| parameter_regex | --regex | '(?-mix:(\[[A-Z _|]+\]))' | AIA_PARAMETER_REGEX |
-| pipeline | --pipeline | [] | AIA_PIPELINE |
-| presence_penalty | --presence_penalty | 0.0 | AIA_PRESENCE_PENALTY |
-| prompt_extname | | .txt | AIA_PROMPT_EXTNAME |
-| prompts_dir | -p, --prompts_dir | ~/.prompts | AIA_PROMPTS_DIR |
-| refresh | --refresh | 7 (days) | AIA_REFRESH |
-| require_libs | --rq --require | [] | AIA_REQUIRE_LIBS |
-| role | -r, --role | | AIA_ROLE |
-| roles_dir | | ~/.prompts/roles | AIA_ROLES_DIR |
-| roles_prefix | --roles_prefix | roles | AIA_ROLES_PREFIX |
-| shell | | true | AIA_SHELL |
-| speak | --speak | false | AIA_SPEAK |
-| speak_command | | afplay | AIA_SPEAK_COMMAND |
-| speech_model | --sm, --speech_model | tts-1 | AIA_SPEECH_MODEL |
-| system_prompt | --system_prompt | | AIA_SYSTEM_PROMPT |
-| temperature | -t, --temperature | 0.7 | AIA_TEMPERATURE |
-| terse | --terse | false | AIA_TERSE |
-| tool_paths | --tools | [] | AIA_TOOL_PATHS |
-| allowed_tools | --at --allowed_tools | nil | AIA_ALLOWED_TOOLS |
-| rejected_tools | --rt --rejected_tools | nil | AIA_REJECTED_TOOLS |
-| top_p | --top_p | 1.0 | AIA_TOP_P |
-| transcription_model | --tm, --transcription_model | whisper-1 | AIA_TRANSCRIPTION_MODEL |
-| verbose | -v, --verbose | false | AIA_VERBOSE |
-| voice | --voice | alloy | AIA_VOICE |
+2. **Install dependencies:**
+   ```bash
+   brew install fzf
+   ```

-
+3. **Create your first prompt:**
+   ```bash
+   mkdir -p ~/.prompts
+   echo "What is [TOPIC]?" > ~/.prompts/what_is.txt
+   ```

-
+4. **Run your prompt:**
+   ```bash
+   aia what_is
+   ```
+   You'll be prompted to enter a value for `[TOPIC]`, then AIA will send your question to the AI model.

-
+5. **Start an interactive chat:**
+   ```bash
+   aia --chat
+   ```

-
-2. Command-line arguments
-3. Environment variables
-4. Configuration files
-5. Default values
+```plain

-
+, ,
+(\____/) AI Assistant (v0.9.7) is Online
+(_oo_) gpt-4o-mini
+(O) using ruby_llm (v1.3.1)
+__||__ \) model db was last refreshed on
+[/______\] / 2025-06-18
+/ \__AI__/ \/ You can share my tools
+/ /__\
+(\ /____\

-
-- No entry in the config file
-- No environment variable value for `AIA_MODEL`
-- No command-line argument provided for `--model`
-- No embedded directive like `//config model = some-fancy-llm`
+```

-
+<!-- Tocer[start]: Auto-generated, don't remove. -->

-
+## Table of Contents

-
+- [Installation & Prerequisites](#installation--prerequisites)
+  - [Requirements](#requirements)
+  - [Installation](#installation)
+  - [Setup Shell Completion](#setup-shell-completion)
+- [Basic Usage](#basic-usage)
+  - [Command Line Interface](#command-line-interface)
+  - [Key Command-Line Options](#key-command-line-options)
+  - [Directory Structure](#directory-structure)
+- [Configuration](#configuration)
+  - [Essential Configuration Options](#essential-configuration-options)
+  - [Configuration Precedence](#configuration-precedence)
+  - [Configuration Methods](#configuration-methods)
+  - [Complete Configuration Reference](#complete-configuration-reference)
+- [Advanced Features](#advanced-features)
+  - [Prompt Directives](#prompt-directives)
+    - [Configuration Directive Examples](#configuration-directive-examples)
+    - [Dynamic Content Examples](#dynamic-content-examples)
+  - [Shell Integration](#shell-integration)
+  - [Embedded Ruby (ERB)](#embedded-ruby-erb)
+  - [Prompt Sequences](#prompt-sequences)
+    - [Using --next](#using---next)
+    - [Using --pipeline](#using---pipeline)
+    - [Example Workflow](#example-workflow)
+  - [Roles and System Prompts](#roles-and-system-prompts)
+  - [RubyLLM::Tool Support](#rubyllmtool-support)
+- [Examples & Tips](#examples--tips)
+  - [Practical Examples](#practical-examples)
+    - [Code Review Prompt](#code-review-prompt)
+    - [Meeting Notes Processor](#meeting-notes-processor)
+    - [Documentation Generator](#documentation-generator)
+  - [Executable Prompts](#executable-prompts)
+  - [Tips from the Author](#tips-from-the-author)
+    - [The run Prompt](#the-run-prompt)
+    - [The Ad Hoc One-shot Prompt](#the-ad-hoc-one-shot-prompt)
+    - [Recommended Shell Setup](#recommended-shell-setup)
+    - [Prompt Directory Organization](#prompt-directory-organization)
+- [Security Considerations](#security-considerations)
+  - [Shell Command Execution](#shell-command-execution)
+  - [Safe Practices](#safe-practices)
+  - [Recommended Security Setup](#recommended-security-setup)
+- [Troubleshooting](#troubleshooting)
+  - [Common Issues](#common-issues)
+  - [Error Messages](#error-messages)
+  - [Debug Mode](#debug-mode)
+  - [Performance Issues](#performance-issues)
+- [Development](#development)
+  - [Testing](#testing)
+  - [Building](#building)
+  - [Architecture Notes](#architecture-notes)
+- [Contributing](#contributing)
+  - [Reporting Issues](#reporting-issues)
+  - [Development Setup](#development-setup)
+  - [Areas for Improvement](#areas-for-improvement)
+- [Roadmap](#roadmap)
+- [License](#license)

-
+<!-- Tocer[finish]: Auto-generated, don't remove. -->

-##
+## Installation & Prerequisites

-
+### Requirements

-
+- **Ruby**: >= 3.2.0
+- **External Tools**:
+  - [fzf](https://github.com/junegunn/fzf) - Command-line fuzzy finder

-
-- A value of `1` (one) updates the local model registry once per day.
-- etc.
+### Installation

-
+```bash
+# Install AIA gem
+gem install aia

-
+# Install required external tools (macOS)
+brew install fzf

-
+# Install required external tools (Linux)
+# Ubuntu/Debian
+sudo apt install fzf

-
+# Arch Linux
+sudo pacman -S fzf
+```

-
+### Setup Shell Completion

-
+Get completion functions for your shell:

-
+```bash
+# For bash users
+aia --completion bash >> ~/.bashrc

-
+# For zsh users
+aia --completion zsh >> ~/.zshrc

-
-
+# For fish users
+aia --completion fish >> ~/.config/fish/config.fish
```

-
+## Basic Usage

-
-
-```
+### Command Line Interface
+
+```bash
+# Basic usage
+aia [OPTIONS] PROMPT_ID [CONTEXT_FILES...]

-
+# Interactive chat session
+aia --chat [--role ROLE] [--model MODEL]

-
+# Use a specific model
+aia --model gpt-4 my_prompt

-
+# Specify output file
+aia --out_file result.md my_prompt

-
+# Use a role/system prompt
+aia --role expert my_prompt

-
-
+# Enable fuzzy search for prompts
+aia --fuzzy
```

-
+### Key Command-Line Options

-
+| Option | Description | Example |
+|--------|-------------|---------|
+| `--chat` | Start interactive chat session | `aia --chat` |
+| `--model MODEL` | Specify AI model to use | `aia --model gpt-4` |
+| `--role ROLE` | Use a role/system prompt | `aia --role expert` |
+| `--out_file FILE` | Specify output file | `aia --out_file results.md` |
+| `--fuzzy` | Use fuzzy search for prompts | `aia --fuzzy` |
+| `--help` | Show complete help | `aia --help` |

-
+### Directory Structure

-
+```
+~/.prompts/ # Default prompts directory
+├── ask.txt # Simple question prompt
+├── code_review.txt # Code review prompt
+├── roles/ # Role/system prompts
+│ ├── expert.txt # Expert role
+│ └── teacher.txt # Teaching role
+└── _prompts.log # History log
+```

-
+## Configuration

-
+### Essential Configuration Options

-
+The most commonly used configuration options:

-
-
-
+| Option | Default | Description |
+|--------|---------|-------------|
+| `model` | `gpt-4o-mini` | AI model to use |
+| `prompts_dir` | `~/.prompts` | Directory containing prompts |
+| `out_file` | `temp.md` | Default output file |
+| `temperature` | `0.7` | Model creativity (0.0-1.0) |
+| `chat` | `false` | Start in chat mode |

-
+### Configuration Precedence

-
+AIA determines configuration settings using this order (highest to lowest priority):

-
+1. **Embedded config directives** (in prompt files): `//config model = gpt-4`
+2. **Command-line arguments**: `--model gpt-4`
+3. **Environment variables**: `export AIA_MODEL=gpt-4`
+4. **Configuration files**: `~/.aia/config.yml`
+5. **Default values**

-
+### Configuration Methods

+**Environment Variables:**
```bash
-
+export AIA_MODEL=gpt-4
+export AIA_PROMPTS_DIR=~/my-prompts
+export AIA_TEMPERATURE=0.8
```

-
-
-
-
-
+**Configuration File** (`~/.aia/config.yml`):
+```yaml
+model: gpt-4
+prompts_dir: ~/my-prompts
+temperature: 0.8
+chat: false
+```

-
+**Embedded Directives** (in prompt files):
```
+//config model = gpt-4
+//config temperature = 0.8

-
+Your prompt content here...
+```

-
-- `//command args`
+### Complete Configuration Reference

-
-
-- `shell` or `sh`: Execute a shell command
-- `ruby` or `rb`: Execute Ruby code
-- `config` or `cfg`: Show or update configuration
-- `include` or `inc`: Include file content
-- `next`: Set/Show the next prompt ID to be processed
-- `pipeline`: Set/Extend/Show the workflow of prompt IDs
+<details>
+<summary>Click to view all configuration options</summary>

-
+| Config Item Name | CLI Options | Default Value | Environment Variable |
+|------------------|-------------|---------------|---------------------|
+| adapter | --adapter | ruby_llm | AIA_ADAPTER |
+| aia_dir | | ~/.aia | AIA_DIR |
+| append | -a, --append | false | AIA_APPEND |
+| chat | --chat | false | AIA_CHAT |
+| clear | --clear | false | AIA_CLEAR |
+| config_file | -c, --config_file | ~/.aia/config.yml | AIA_CONFIG_FILE |
+| debug | -d, --debug | false | AIA_DEBUG |
+| embedding_model | --em, --embedding_model | text-embedding-ada-002 | AIA_EMBEDDING_MODEL |
+| erb | | true | AIA_ERB |
+| frequency_penalty | --frequency_penalty | 0.0 | AIA_FREQUENCY_PENALTY |
+| fuzzy | -f, --fuzzy | false | AIA_FUZZY |
+| image_quality | --iq, --image_quality | standard | AIA_IMAGE_QUALITY |
+| image_size | --is, --image_size | 1024x1024 | AIA_IMAGE_SIZE |
+| image_style | --style, --image_style | vivid | AIA_IMAGE_STYLE |
+| log_file | -l, --log_file | ~/.prompts/_prompts.log | AIA_LOG_FILE |
+| markdown | --md, --markdown | true | AIA_MARKDOWN |
+| max_tokens | --max_tokens | 2048 | AIA_MAX_TOKENS |
+| model | -m, --model | gpt-4o-mini | AIA_MODEL |
+| next | -n, --next | nil | AIA_NEXT |
+| out_file | -o, --out_file | temp.md | AIA_OUT_FILE |
+| parameter_regex | --regex | '(?-mix:(\[[A-Z _\|]+\]))' | AIA_PARAMETER_REGEX |
+| pipeline | --pipeline | [] | AIA_PIPELINE |
+| presence_penalty | --presence_penalty | 0.0 | AIA_PRESENCE_PENALTY |
+| prompt_extname | | .txt | AIA_PROMPT_EXTNAME |
+| prompts_dir | -p, --prompts_dir | ~/.prompts | AIA_PROMPTS_DIR |
+| refresh | --refresh | 7 (days) | AIA_REFRESH |
+| require_libs | --rq --require | [] | AIA_REQUIRE_LIBS |
+| role | -r, --role | | AIA_ROLE |
+| roles_dir | | ~/.prompts/roles | AIA_ROLES_DIR |
+| roles_prefix | --roles_prefix | roles | AIA_ROLES_PREFIX |
+| shell | | true | AIA_SHELL |
+| speak | --speak | false | AIA_SPEAK |
+| speak_command | | afplay | AIA_SPEAK_COMMAND |
+| speech_model | --sm, --speech_model | tts-1 | AIA_SPEECH_MODEL |
+| system_prompt | --system_prompt | | AIA_SYSTEM_PROMPT |
+| temperature | -t, --temperature | 0.7 | AIA_TEMPERATURE |
+| terse | --terse | false | AIA_TERSE |
+| tool_paths | --tools | [] | AIA_TOOL_PATHS |
+| allowed_tools | --at --allowed_tools | nil | AIA_ALLOWED_TOOLS |
+| rejected_tools | --rt --rejected_tools | nil | AIA_REJECTED_TOOLS |
+| top_p | --top_p | 1.0 | AIA_TOP_P |
+| transcription_model | --tm, --transcription_model | whisper-1 | AIA_TRANSCRIPTION_MODEL |
+| verbose | -v, --verbose | false | AIA_VERBOSE |
+| voice | --voice | alloy | AIA_VOICE |

-
+</details>
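The `parameter_regex` entry above is what drives keyword substitution, such as the `[TOPIC]` placeholder used in the Quick Start. A rough illustration in plain Ruby of how that default pattern finds keywords in a prompt (a sketch only, not AIA's internal code):

```ruby
# Default pattern from the table above: (?-mix:(\[[A-Z _|]+\]))
PARAMETER_REGEX = /(\[[A-Z _|]+\])/

prompt_text = "What is [TOPIC]? Answer as a [ROLE]."

# scan returns one capture group per match; flatten/uniq gives the keyword list
keywords = prompt_text.scan(PARAMETER_REGEX).flatten.uniq
# => ["[TOPIC]", "[ROLE]"]

keywords.each { |keyword| puts "Enter a value for #{keyword}:" }
```

Each keyword found this way is what AIA prompts you to supply a value for before the text is sent to the model.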

-
+## Advanced Features

-
+### Prompt Directives

-
+Directives are special commands in prompt files that begin with `//` and provide dynamic functionality:

-
+| Directive | Description | Example |
+|-----------|-------------|---------|
+| `//config` | Set configuration values | `//config model = gpt-4` |
+| `//context` | Show context for this conversation | `//context` |
+| `//include` | Insert file contents | `//include path/to/file.txt` |
+| `//shell` | Execute shell commands | `//shell ls -la` |
+| `//robot` | Show the pet robot ASCII art w/versions | `//robot` |
+| `//ruby` | Execute Ruby code | `//ruby puts "Hello World"` |
+| `//next` | Set next prompt in sequence | `//next summary` |
+| `//pipeline` | Set prompt workflow | `//pipeline analyze,summarize,report` |
+| `//clear` | Clear conversation history | `//clear` |
+| `//help` | Show available directives | `//help` |
+| `//available_models` | List available models | `//available_models` |
+| `//tools` | Show a list of available tools and their description | `//tools` |
+| `//review` | Review current context | `//review` |

-
+Directives can also be used in the interactive chat sessions.

-
+#### Configuration Directive Examples

```bash
-
-//config
-
+# Set model and temperature for this prompt
+//config model = gpt-4
+//config temperature = 0.9

-
+# Enable chat mode and terse responses
+//config chat = true
+//config terse = true

-
-//config model = gpt-3.5-turbo
-//config out_file = temp.md
+Your prompt content here...
```

-
+#### Dynamic Content Examples

```bash
-
-//
-//config chat? true
-//config terse? true
-//config model gpt-4
-```
+# Include file contents
+//include ~/project/README.md

-
+# Execute shell commands
+//shell git log --oneline -10

-
+# Run Ruby code
+//ruby require 'json'; puts JSON.pretty_generate({status: "ready"})

-
-```bash
-//include path_to_file
+Analyze the above information and provide insights.
```

-
-
-The file that is included will have any comments or directives excluded. It is expected that the file will be a text file so that its content can be pre-pended to the existing prompt; however, if the file is a source code file (ex: file.rb) the source code will be included HOWEVER any comment line or line that starts with "//" will be excluded.
+### Shell Integration

-
+AIA automatically processes shell patterns in prompts:

-
+- **Environment variables**: `$HOME`, `${USER}`
+- **Command substitution**: `$(date)`, `$(git branch --show-current)`

-
-```ruby
-//ruby puts "Hello from Ruby"
-```
-
-You can also use the `--require` option to specify Ruby libraries to require before executing Ruby code:
+**Examples:**

```bash
-#
-
+# Dynamic system information
+As a system administrator on a $(uname -s) platform, how do I optimize performance?

-#
-
-```
+# Include file contents via shell
+Here's my current configuration: $(cat ~/.bashrc | head -20)

-
-
-```bash
-//shell some_shell_command
+# Use environment variables
+My home directory is $HOME and I'm user $USER.
```

-
+**Security Note**: Be cautious with shell integration. Review prompts before execution as they can run arbitrary commands.

-
-```bash
-//shell cat path_to_file
-```
+### Embedded Ruby (ERB)

-
+AIA supports full ERB processing in prompts for dynamic content generation:

-
-
-
-
-//next
+```erb
+<%# ERB example in prompt file %>
+Current time: <%= Time.now %>
+Random number: <%= rand(100) %>

-
-
+<% if ENV['USER'] == 'admin' %>
+You have admin privileges.
+<% else %>
+You have standard user privileges.
+<% end %>

-
-//config next
-//config next = prompt_id
+<%= AIA.config.model %> is the current model.
```

-
+### Prompt Sequences

-
-```bash
-# Show the current prompt workflow
-//pipeline
+Chain multiple prompts for complex workflows:

-
-//pipeline = prompt_id_1, prompt_id_2, prompt_id_3
+#### Using --next

-
-
+```bash
+# Command line
+aia analyze --next summarize --next report

-#
-//
-
-//config pipeline << prompt_id_4, prompt_id_5, prompt_id_6
+# In prompt files
+# analyze.txt contains: //next summarize
+# summarize.txt contains: //next report
```

-
-
-Whe you are in a chat session, you may use a directive as a follow up prompt. For example if you started the chat session with the option `--terse` expecting to get short answers from the LLM; but, then you decide that you want more comprehensive answers you may do this in a chat follow up:
+#### Using --pipeline

```bash
-
-
-
-The directive is executed and a new follow up prompt can be entered with a more lengthy response generated from the LLM.
+# Command line
+aia research --pipeline analyze,summarize,report,present

-
+# In prompt file
+//pipeline analyze,summarize,report,present
+```

-
+#### Example Workflow

-
+**research.txt:**
+```
+//config model = gpt-4
+//next analyze

-
+Research the topic: [RESEARCH_TOPIC]
+Provide comprehensive background information.
+```

-
+**analyze.txt:**
+```
+//config out_file = analysis.md
+//next summarize

-
+Analyze the research data and identify key insights.
+```

-
+**summarize.txt:**
+```
+//config out_file = summary.md

-
-
-| one | one.txt |
-| two | two.txt |
-| three | three.txt |
-| four | four.txt |
+Create a concise summary of the analysis with actionable recommendations.
+```

+### Roles and System Prompts

-
+Roles define the context and personality for AI responses:

```bash
-
-aia
+# Use a predefined role
+aia --role expert analyze_code.rb
+
+# Roles are stored in ~/.prompts/roles/
+# expert.txt might contain:
+# "You are a senior software engineer with 15 years of experience..."
```

-
+**Creating Custom Roles:**

```bash
-
-
-
+# Create a code reviewer role
+cat > ~/.prompts/roles/code_reviewer.txt << EOF
+You are an experienced code reviewer. Focus on:
+- Code quality and best practices
+- Security vulnerabilities
+- Performance optimizations
+- Maintainability issues
+
+Provide specific, actionable feedback.
+EOF
```
-BUT if you have more than two prompts in your sequence then consider using the --pipeline option.

-
+### RubyLLM::Tool Support

-
+AIA supports function calling through RubyLLM tools for extended capabilities:

```bash
-
-
+# Load tools from directory
+aia --tools ~/my-tools/ --chat

-
+# Load specific tool files
+aia --tools weather.rb,calculator.rb --chat

-
-
+# Filter tools
+aia --tools ~/tools/ --allowed_tools weather,calc
+aia --tools ~/tools/ --rejected_tools deprecated
```

-**
-
-
-
+**Tool Examples** (see `examples/tools/` directory):
+- File operations (read, write, list)
+- Shell command execution
+- API integrations
+- Data processing utilities

-
+**Shared Tools Collection:**
+AIA can use the [shared_tools gem](https://github.com/madbomber/shared_tools) which provides a curated collection of commonly-used tools (aka functions) via the --require option.

+```bash
+# Access shared tools automatically (included with AIA)
+aia --require shared_tools/ruby_llm --chat

-
-
-| one.txt | //config out_file one.md |
-| two.txt | //config out_file two.md |
-| three.txt | //config out_file three.md |
-| four.txt | //config out_file four.md |
-
-This way you can see the response that was generated for each prompt in the sequence.
+# To access just one specific shared tool
+aia --require shared_tools/ruby_llm/edit_file --chat

-
+# Combine with your own local custom RubyLLM-based tools
+aia --require shared_tools/ruby_llm --tools ~/my-tools/ --chat
+```

-
+The above examples show the shared_tools being used within an interactive chat session. They are also available in batch prompts using the same --require option. You can also require the shared_tools with the //ruby directive or with a require statement inside an ERB block.
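Writing a tool of your own follows the same pattern as the bundled examples: a Ruby file whose class subclasses `RubyLLM::Tool`. A minimal sketch is below; the `description`/`param`/`execute` DSL is assumed from the RubyLLM documentation, so check the exact signatures against your installed ruby_llm version.

```ruby
# Hypothetical tool file: ~/my-tools/word_count.rb
require "ruby_llm"

class WordCount < RubyLLM::Tool
  description "Counts the words in a local text file"
  param :path, desc: "Path to the text file"

  def execute(path:)
    # Return a simple, serializable result the LLM can reason about
    return { error: "No such file: #{path}" } unless File.exist?(path)

    { path: path, word_count: File.read(path).split.size }
  end
end
```

Loaded with `aia --tools ~/my-tools/ --chat`, a tool-capable model can then invoke it whenever a prompt calls for a word count.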

-
+## Examples & Tips

-
+### Practical Examples

+#### Code Review Prompt
```bash
-#
-
-
+# ~/.prompts/code_review.txt
+//config model = gpt-4o-mini
+//config temperature = 0.3

-
-
+Review this code for:
+- Best practices adherence
+- Security vulnerabilities
+- Performance issues
+- Maintainability concerns
+
+Code to review:
```

-
+Usage: `aia code_review mycode.rb`

+#### Meeting Notes Processor
```bash
-#
-
-//
-//config out_file meeting_summary.md
+# ~/.prompts/meeting_notes.txt
+//config model = gpt-4o-mini
+//pipeline format,action_items

-
-
-note any action items that were generated.
+Raw meeting notes:
+//include [NOTES_FILE]

-
+Please clean up and structure these meeting notes.
```

-
-
+#### Documentation Generator
```bash
-
-
-
-You summary of the meeting is in the file `meeting_summary.md`
-
+# ~/.prompts/document.txt
+//config model = gpt-4o-mini
+//shell find [PROJECT_DIR] -name "*.rb" | head -10

-
-
-### The --roles_prefix (AIA_ROLES_PREFIX)
-
-The second kind of prompt is called a role (aka system prompt). Sometimes the role is incorporated into the instruction. For example, "As a magician make a rabbit appear out of a hat." To reuse the same role in multiple prompts, AIA encourages you to designate a special subdirectory for prompts that are specific to personification - roles.
-
-The default `roles_prefix` is set to 'roles'. This creates a subdirectory under the `prompts_dir` where role files are stored. Internally, AIA calculates a `roles_dir` value by joining `prompts_dir` and `roles_prefix`. It is recommended to keep the roles organized this way for better organization and management.
-
-### The --role Option
-
-The `--role` option is used to identify a personification prompt within your roles directory which defines the context within which the LLM is to provide its response. The text of the role ID is pre-pended to the text of the primary prompt to form a complete prompt to be processed by the LLM.
-
-For example consider:
-
-```bash
-aia -r ruby refactor my_class.rb
+Generate documentation for the Ruby project shown above.
+Include: API references, usage examples, and setup instructions.
```

-
+### Executable Prompts

-
+The `--exec` flag is used to create executable prompts. If it is not present on the shebang line then the prompt file will be treated like any other context file. That means the file will be included as context in the prompt, but no dynamic content integration or directives will be processed. All other AIA options are, well, optional. All you need is an initial prompt ID and the --exec flag.

-
+In the example below the option `--no-out_file` is used to direct the output from the LLM processing of the prompt to STDOUT. This way executable prompts can be good citizens on the *nix command line, receiving piped-in input via STDIN and sending their output to STDOUT.

-
+Create executable prompts:

+**weather_report** (make executable with `chmod +x`):
```bash
-aia -
-
-
-In this example the prompt text file `$AIA_ROLES_PREFIX/ruby.txt` is prepended to the prompt text file `$AIA_PROMPTS_DIR/sw_eng/doc_the_methods.txt`
+#!/usr/bin/env aia run --no-out_file --exec
+# Get current storm activity for the east and south coast of the US

-
+Summarize the tropical storm outlook for the Atlantic, Caribbean Sea and Gulf of America.

-
-
-```text
-As a [ROLE] tell me what you think about [SUBJECT]
+//webpage https://www.nhc.noaa.gov/text/refresh/MIATWOAT+shtml/201724_MIATWOAT.shtml
```

-
-
-## External CLI Tools Used
-
-To install the external CLI programs used by AIA:
-
-brew install fzf
-
-fzf
-Command-line fuzzy finder written in Go
-[https://github.com/junegunn/fzf](https://github.com/junegunn/fzf)
-
-
-## Shell Completion
-
-You can setup a completion function in your shell that will complete on the prompt_id saved in your `prompts_dir` - functions for `bash`, `fish` and `zsh` are available. To get a copy of these functions do:
-
+Usage:
```bash
-
+./weather_report
+./weather_report | glow # Render the markdown with glow
```

-
-
-Copy the function to a place where it can be installed in your shell's instance. This might be a `.profile` or `.bashrc` file, etc.
-
-## My Most Powerful Prompt
-
-This is just between you and me so don't go blabbing this around to everyone. My most power prompt is in a file named `ad_hoc.txt`. It looks like this:
-
-```text
-[WHAT_NOW_HUMAN]
-```
-
-Yep. Just a single parameter for which I can provide a value of anything that is on my mind at the time. Its advantage is that I do not pollute my shell's command history with lots of text.
+### Tips from the Author

+#### The run Prompt
```bash
-
+# ~/.prompts/run.txt
+# Desc: A configuration only prompt file for use with executable prompts
+# Put whatever you want here to setup the configuration desired.
+# You could also add a system prompt to preface your intended prompt
```

-
+Usage: `echo "What is the meaning of life?" | aia run`

+#### The Ad Hoc One-shot Prompt
```bash
-
+# ~/.prompts/ad_hoc.txt
[WHAT_NOW_HUMAN]
```
+Usage: `aia ad_hoc` - perfect for any quick one-shot question without cluttering shell history.

-
-
-## My Configuration
-
-I use the `bash` shell. In my `.bashrc` file I source another file named `.bashrc__aia` which looks like this:
-
+#### Recommended Shell Setup
```bash
-# ~/.
-# AI Assistant
-
-# These are the defaults:
+# ~/.bashrc_aia
export AIA_PROMPTS_DIR=~/.prompts
export AIA_OUT_FILE=./temp.md
-export AIA_LOG_FILE=$AIA_PROMPTS_DIR/_prompts.log
export AIA_MODEL=gpt-4o-mini
-
-# Not a default. Invokes spinner. If not true then there is no spinner
-# for feedback while waiting for the LLM to respond.
-export AIA_VERBOSE=true
+export AIA_VERBOSE=true # Shows spinner while waiting for LLM response

alias chat='aia --chat --terse'
-
-# rest of the file is the completion function
+ask() { echo "$1" | aia run --no-out_file; }
```

+The `chat` alias and the `ask` function shown above are two powerful tools for interacting with the AI assistant. The `chat` alias lets you engage in an interactive conversation, while the `ask` function lets you ask a question and receive a response. The `run` prompt ID is discussed later in this document; besides being used here, it is also used to make executable prompt files.

-
-
-
-
-
-
-#
-#
+#### Prompt Directory Organization
+```
+~/.prompts/
+├── daily/ # Daily workflow prompts
+├── development/ # Coding and review prompts
+├── research/ # Research and analysis
+├── roles/ # System prompts
+└── workflows/ # Multi-step pipelines
```

-
+## Security Considerations

-
+### Shell Command Execution

-
-#!/usr/bin/env aia run --no-out_file
-# File: top10
-# Desc: The tope 10 cities by population
+**⚠️ Important Security Warning**

-
-
-
-
+AIA executes shell commands and Ruby code embedded in prompts. This provides powerful functionality but requires caution:
+
+- **Review prompts before execution**, especially from untrusted sources
+- **Avoid storing sensitive data** in prompts (API keys, passwords)
+- **Use parameterized prompts** instead of hardcoding sensitive values
+- **Limit file permissions** on prompt directories if sharing systems

-
+### Safe Practices

```bash
-
-
+# ✅ Good: Use parameters for sensitive data
+//config api_key = [API_KEY]

-
+# ❌ Bad: Hardcode secrets
+//config api_key = sk-1234567890abcdef

-
+# ✅ Good: Validate shell commands
+//shell ls -la /safe/directory

-
-
+# ❌ Bad: Dangerous shell commands
+//shell rm -rf / # Never do this!
```

-
-(brew install glow)
+### Recommended Security Setup

```bash
-
+# Set restrictive permissions on prompts directory
+chmod 700 ~/.prompts
+chmod 600 ~/.prompts/*.txt
```

-
+## Troubleshooting

-
-
-The usage report is obtained with either `-h` or `--help` options.
+### Common Issues

+**Prompt not found:**
```bash
-
-
+# Check prompts directory
+ls $AIA_PROMPTS_DIR

-
+# Verify prompt file exists
+ls ~/.prompts/my_prompt.txt

-
-
-
-- `--role ROLE`: Specify a role/system prompt
-- And many more (use --help to see all options)
+# Use fuzzy search
+aia --fuzzy
+```

-**
+**Model errors:**
+```bash
+# List available models
+aia --available_models

-
+# Check model name spelling
+aia --model gpt-4o # Correct
+aia --model gpt4 # Incorrect
+```

-**
-
+**Shell integration not working:**
+```bash
+# Verify shell patterns
+echo "Test: $(date)" # Should show current date
+echo "Home: $HOME" # Should show home directory
+```

-**
-
+**Configuration issues:**
+```bash
+# Check current configuration
+aia --config

-
+# Debug configuration loading
+aia --debug --config
+```

-
+### Error Messages

-
+| Error | Cause | Solution |
+|-------|-------|----------|
+| "Prompt not found" | Missing prompt file | Check file exists and spelling |
+| "Model not available" | Invalid model name | Use `--available_models` to list valid models |
+| "Shell command failed" | Invalid shell syntax | Test shell commands separately first |
+| "Configuration error" | Invalid config syntax | Check config file YAML syntax |

-
+### Debug Mode

-
+Enable debug output for troubleshooting:

-
-
-
+```bash
+# Enable debug mode
+aia --debug my_prompt

-
+# Combine with verbose for maximum output
+aia --debug --verbose my_prompt
+```

-
+### Performance Issues

-
+**Slow model responses:**
+- Try smaller/faster models: `--model gpt-4o-mini`
+- Reduce max_tokens: `--max_tokens 1000`
+- Use lower temperature for faster responses: `--temperature 0.1`

-
+**Large prompt processing:**
+- Break into smaller prompts using `--pipeline`
+- Use `//include` selectively instead of large files
+- Consider model context limits

-
-- Executing system commands
-- Accessing external APIs
-- Performing calculations
+## Development

-
+### Testing

-
+```bash
+# Run unit tests
+rake test

-
+# Run integration tests
+rake integration

-
+# Run all tests with coverage
+rake all_tests
+open coverage/index.html
+```

-
+### Building

```bash
-#
-
+# Install locally with documentation
+just install

-#
-
-```
-
-Each path can be:
-
-- A Ruby file implementing a `RubyLLM::Tool` subclass
-- A directory containing tool implementations (all Ruby files in that directory will be loaded)
+# Generate documentation
+just gen_doc

-
+# Static code analysis
+just flay
+```

-###
+### Architecture Notes

-
+**ShellCommandExecutor Refactor:**
+The `ShellCommandExecutor` is now a class (previously a module) with instance variables for cleaner encapsulation. Class-level methods remain for backward compatibility.

-
+**Prompt Variable Fallback:**
+Variables are always parsed from prompt text when no `.json` history file exists, ensuring parameter prompting works correctly.
739
776
|
|
740
|
-
|
777
|
+
## Contributing
|
741
778
|
|
742
|
-
|
743
|
-
# Only allow tools with 'test' in their filename
|
744
|
-
--tools my_tools_directory --allowed_tools test
|
745
|
-
```
|
779
|
+
Bug reports and pull requests are welcome on GitHub at https://github.com/MadBomber/aia.
|
746
780
|
|
747
|
-
|
781
|
+
### Reporting Issues
|
748
782
|
|
749
|
-
|
783
|
+
When reporting issues, please include:
|
784
|
+
- AIA version: `aia --version`
|
785
|
+
- Ruby version: `ruby --version`
|
786
|
+
- Operating system
|
787
|
+
- Minimal reproduction example
|
788
|
+
- Error messages and debug output
|
750
789
|
|
751
|
-
|
790
|
+
### Development Setup
|
752
791
|
|
753
792
|
```bash
|
754
|
-
|
755
|
-
|
793
|
+
git clone https://github.com/MadBomber/aia.git
|
794
|
+
cd aia
|
795
|
+
bundle install
|
796
|
+
rake test
|
756
797
|
```
|
757
798
|
|
758
|
-
|
799
|
+
### Areas for Improvement
|
759
800
|
|
760
|
-
|
801
|
+
- Configuration UI for complex setups
|
802
|
+
- Better error handling and user feedback
|
803
|
+
- Performance optimization for large prompt libraries
|
804
|
+
- Enhanced security controls for shell integration
|
761
805
|
|
762
|
-
|
763
|
-
|
764
|
-
1. Create a Ruby file that subclasses `RubyLLM::Tool`
|
765
|
-
2. Define the tool's parameters and functionality
|
766
|
-
3. Use the `--tools` option to load it in your AIA session
|
767
|
-
|
768
|
-
For implementation details, refer to the [examples in the repository](examples/tools) or the RubyLLM documentation.
|
769
|
-
|
770
|
-
## MCP Supported
|
806
|
+
## Roadmap
|
771
807
|
|
772
|
-
|
808
|
+
- **Enhanced Search**: Restore full-text search within prompt files
|
809
|
+
- **Model Context Protocol**: Continue integration with ruby_llm gem
|
810
|
+
- **UI Improvements**: Better configuration management for fzf and rg tools
|
811
|
+
- **Performance**: Optimize prompt loading and processing
|
812
|
+
- **Security**: Enhanced sandboxing for shell command execution
|
773
813
|
|
774
814
|
## License
|
775
815
|
|