aia 0.9.18 → 0.9.20
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/.version +1 -1
- data/CHANGELOG.md +220 -78
- data/README.md +128 -3
- data/docs/cli-reference.md +71 -4
- data/docs/guides/models.md +196 -1
- data/lib/aia/chat_processor_service.rb +14 -5
- data/lib/aia/config/base.rb +6 -1
- data/lib/aia/config/cli_parser.rb +116 -2
- data/lib/aia/config/file_loader.rb +33 -1
- data/lib/aia/prompt_handler.rb +22 -1
- data/lib/aia/ruby_llm_adapter.rb +224 -134
- data/lib/aia/session.rb +120 -28
- data/lib/aia/utility.rb +19 -1
- metadata +1 -1
checksums.yaml
CHANGED
```diff
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 24f8a994c526c8cd4cbd1c90e0fa6d1d9139584ed71b160693d3d62c6d990d8a
+  data.tar.gz: acf5c080b79e7c9d87770bfefe126cf455ea5d2e334766caa95348ced43e1420
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: c454d06e3f5bda976e7efdfe7967931045afaced2b4b68fdd9cd8c1d0c7ab02e702a8c49e62391f16e96624bea154b404a48afcafdae6aaadbae02af56a852ed
+  data.tar.gz: 4c0b17155f9b088611a8b24cd6fdaa5873916ba3392d61b2a0c7eca62d18644b5e0349d9a721d3d7b9470d37eb09117ed5a92c546083c5e93eb4358431296a83
```
data/.version
CHANGED
```diff
@@ -1 +1 @@
-0.9.18
+0.9.20
```
data/CHANGELOG.md
CHANGED
````diff
@@ -1,28 +1,174 @@
 # Changelog
 ## [Unreleased]
 
-
+## [0.9.20] 2025-10-06
+### Added
+- **Enhanced Multi-Model Role System (ADR-005)**: Implemented per-model role assignment with inline syntax
+  - New inline syntax: `--model MODEL[=ROLE][,MODEL[=ROLE]]...`
+  - Example: `aia --model gpt-4o=architect,claude=security,gemini=performance design_doc.md`
+  - Support for duplicate models with different roles: `gpt-4o=optimist,gpt-4o=pessimist,gpt-4o=realist`
+  - Added `--list-roles` command to discover available role files
+  - Display format shows instance numbers and roles: `gpt-4o #1 (optimist):`, `gpt-4o #2 (pessimist):`
+  - Consensus mode drops role for neutral synthesis
+  - Chat mode roles are immutable during session
+  - Nested role path support: `--model gpt-4o=specialized/architect`
+  - Full backward compatibility with existing `--role` flag
+
````
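The inline `MODEL[=ROLE]` grammar above maps directly onto the array-of-hashes spec described under Technical Implementation below. A minimal Ruby sketch of such a parser (hypothetical code: the method name echoes the gem's `parse_models_with_roles`, but the body and the `internal_id` scheme are assumptions, not the actual implementation):

```ruby
# Hypothetical sketch: "gpt-4o=optimist,gpt-4o=pessimist,claude" becomes an
# array of hashes with per-model instance numbers, as described above.
def parse_models_with_roles(value)
  counts = Hash.new(0)
  value.split(",").map do |entry|
    model, role = entry.strip.split("=", 2)      # role is nil when no "=" is present
    counts[model] += 1
    {
      model:       model,
      role:        role,                         # may be a nested path like "specialized/architect"
      instance:    counts[model],                # duplicate models get 1, 2, 3, ...
      internal_id: "#{model}##{counts[model]}"   # assumed scheme for a unique per-instance key
    }
  end
end

parse_models_with_roles("gpt-4o=optimist,gpt-4o=pessimist,claude")
# => [{model: "gpt-4o", role: "optimist", instance: 1, internal_id: "gpt-4o#1"}, ...]
```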
````diff
+- **Config File Model Roles Support (ADR-005 v2)**:
+  - Enhanced `model` key in config files to support array of hashes with roles
+  - Format: `model: [{model: gpt-4o, role: architect}, {model: claude, role: security}]`
+  - Mirrors internal data structure (array of hashes with `model`, `role`, `instance`, `internal_id`)
+  - Supports models without roles: `model: [{model: gpt-4o}]`
+  - Enables reusable model-role setups across sessions
+  - Configuration precedence: CLI inline > CLI flag > Environment variable > Config file
+
````
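A rough sketch of how such a config array can be loaded and normalized into the same spec shape (assumed code, not the gem's actual `process_model_array_with_roles`):

```ruby
require "yaml"

# Hypothetical sketch: normalize the config-file `model:` array of hashes
# into the same array-of-hashes shape the CLI parser produces.
config = YAML.safe_load(<<~YAML, symbolize_names: true)
  model:
    - {model: gpt-4o, role: architect}
    - {model: claude, role: security}
    - {model: gpt-4o}              # role is optional
YAML

counts = Hash.new(0)
specs = config[:model].map do |entry|
  counts[entry[:model]] += 1
  entry.merge(instance: counts[entry[:model]])   # number duplicate models per name
end
# A CLI inline spec or AIA_MODEL would take precedence over this value.
```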
````diff
+- **Environment Variable Inline Syntax (ADR-005 v2)**:
+  - Added support for inline role syntax in `AIA_MODEL` environment variable
+  - Example: `export AIA_MODEL="gpt-4o=architect,claude=security,gemini=performance"`
+  - Maintains backward compatibility with simple comma-separated model lists
+  - Detects `=` to distinguish between formats
+
````
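Distinguishing the two `AIA_MODEL` formats needs only a check for `=`. A sketch, reusing the hypothetical parser from the earlier example:

```ruby
# Hypothetical sketch: AIA_MODEL may be "gpt-4o,claude" (plain list) or
# "gpt-4o=architect,claude=security" (inline roles).
value = ENV.fetch("AIA_MODEL", "gpt-4o")
specs = if value.include?("=")
          parse_models_with_roles(value)    # inline-role format
        else
          value.split(",").map(&:strip)     # legacy comma-separated list
        end
```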
````diff
+### Bug Fixes
+- **Multi-Model Chat Cross-Talk**: Fixed bug where model instances with different roles could see each other's conversation history
+  - Updated Session to properly extract `internal_id` from hash-based model specs (lib/aia/session.rb:47-68)
+  - Fixed `parse_multi_model_response` to normalize display names to internal IDs (lib/aia/session.rb:538-570)
+  - Each model instance now maintains completely isolated conversation context
+  - Fixes issue where models would respond as if aware of other models' perspectives
+
````
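The fix hinges on mapping display names such as `gpt-4o #1 (optimist):` back to internal IDs before any history is routed. A sketch of that normalization step (hypothetical: the header format and ID scheme here are assumed; the real `parse_multi_model_response` lives in lib/aia/session.rb):

```ruby
# Hypothetical sketch: split combined multi-model output into per-instance
# chunks keyed by internal ID, so each context manager only ever receives
# its own model's response.
def parse_multi_model_response(text)
  responses = {}
  current = nil
  text.each_line do |line|
    if line =~ /\Afrom:\s+(\S+)\s+#(\d+)/   # assumed display-header format
      current = "#{$1}##{$2}"               # "gpt-4o #1 (optimist)" -> "gpt-4o#1"
      responses[current] = +""
    elsif current
      responses[current] << line
    end
  end
  responses
end
```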
````diff
+### Improvements
+- **Robot ASCII Display**: Updated `robot` method to extract and display only model names from new hash format (lib/aia/utility.rb:24-53)
+  - Handles string, array of strings, and array of hashes formats
+  - Shows clean model list: "gpt-4o, claude, gemini" instead of hash objects
+
````
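Supporting all three historical shapes of `config.model` is a small case analysis. A sketch of the idea behind the `robot` update (not the actual lib/aia/utility.rb code):

```ruby
# Hypothetical sketch: accept "gpt-4o", ["gpt-4o", "claude"], or
# [{model: "gpt-4o", role: "architect"}, ...] and return plain names.
def extract_model_names(model_config)
  case model_config
  when String then model_config.split(",").map(&:strip)
  when Array  then model_config.map { |m| m.is_a?(Hash) ? m[:model] : m.to_s }
  else []
  end
end

extract_model_names([{model: "gpt-4o"}, {model: "claude"}]).join(", ")
# => "gpt-4o, claude"
```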
````diff
+### Testing
+- Added comprehensive test suite for config file and environment variable model roles
+  - test/aia/config_model_roles_test.rb: 8 tests covering array processing, env var parsing, YAML config files
+- Added 15 tests for role parsing with inline syntax (test/aia/role_parsing_test.rb)
+- Fixed Mocha test cleanup in multi_model_isolation_test.rb
+- Full test suite: 306 runs, 980 assertions, 0 failures (1 pre-existing Mocha isolation issue)
+
+### Technical Implementation
+- Modified `config.model` to support array of hashes with model metadata: `{model:, role:, instance:, internal_id:}`
+- Added `parse_models_with_roles` method with fail-fast validation (lib/aia/config/cli_parser.rb)
+- Added `validate_role_exists` with helpful error messages showing available roles
+- Added `list_available_roles` and `list_available_role_names` methods for role discovery
+- Added `load_role_for_model` method to PromptHandler for per-model role loading (lib/aia/prompt_handler.rb)
+- Enhanced RubyLLMAdapter to handle hash-based model specs and prepend roles per model
+  - Added `extract_model_names` to extract model names from specs
+  - Added `get_model_spec` to retrieve full spec by internal_id
+  - Added `prepend_model_role` to inject role content into prompts
+  - Added `format_model_display_name` for consistent display formatting
+- Updated Session initialization to handle hash-based model specs for context managers
+- Updated display formatting to show instance numbers and roles
+- Maintained backward compatibility with string/array model configurations
+- Added `process_model_array_with_roles` method in FileLoader (lib/aia/config/file_loader.rb:91-116)
+- Enhanced `apply_file_config_to_struct` to detect and process model arrays with role hashes
+- Enhanced `envar_options` to parse inline syntax for `:model` key (lib/aia/config/base.rb:212-217)
+
````
````diff
+## [0.9.19] 2025-10-06
+
+### Bug Fixes
+- **CRITICAL BUG FIX**: Fixed multi-model cross-talk issue (#118) where models could see each other's conversation history
+- **BUG FIX**: Implemented complete two-level context isolation to prevent models from contaminating each other's responses
+- **BUG FIX**: Fixed token count inflation caused by models processing combined conversation histories
+
+### Technical Changes
+- **Level 1 (Library)**: Implemented per-model RubyLLM::Context isolation - each model now has its own Context instance (lib/aia/ruby_llm_adapter.rb)
+- **Level 2 (Application)**: Implemented per-model ContextManager isolation - each model maintains its own conversation history (lib/aia/session.rb)
+- Added `parse_multi_model_response` method to extract individual model responses from combined output (lib/aia/session.rb:502-533)
+- Enhanced `multi_model_chat` to accept Hash of per-model conversations (lib/aia/ruby_llm_adapter.rb:305-334)
+- Updated ChatProcessorService to handle both Array (single model) and Hash (multi-model with per-model contexts) inputs (lib/aia/chat_processor_service.rb:68-83)
+- Refactored RubyLLMAdapter:
+  - Added `@contexts` hash to store per-model Context instances
+  - Added `create_isolated_context_for_model` helper method (lines 84-99)
+  - Added `extract_model_and_provider` helper method (lines 102-112)
+  - Simplified `clear_context` from 92 lines to 40 lines (56% reduction)
+  - Updated directive handlers to work with per-model context managers
+- Added comprehensive test coverage with 6 new tests for multi-model isolation
+- Updated LocalProvidersTest to reflect Context-based architecture
+
````
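The Level 1 change translates to one isolated context, and thus one chat, per model. A sketch of the idea (this assumes ruby_llm's scoped-configuration `RubyLLM.context` API; the wrapper class itself is invented for illustration and is not the gem's adapter):

```ruby
require "ruby_llm"

# Hypothetical sketch of per-model isolation: one context and one chat per
# configured model, so conversation histories can never bleed across models.
class IsolatedChats
  def initialize(model_names)
    @chats = model_names.to_h do |name|
      context = RubyLLM.context do |config|
        # per-model configuration (API base, keys, timeouts) would go here
      end
      [name, context.chat(model: name)]
    end
  end

  # Each chat sees only the new prompt plus its own prior turns.
  def ask_all(prompt)
    @chats.transform_values { |chat| chat.ask(prompt).content }
  end
end
```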
````diff
+### Architecture
+- **ADR-002-revised**: Complete Multi-Model Isolation (see `.architecture/decisions/adrs/ADR-002-revised-multi-model-isolation.md`)
+- Eliminated global state dependencies in multi-model chat sessions
+- Maintained backward compatibility with single-model mode (verified with tests)
+
+### Test Coverage
+- Added `test/aia/multi_model_isolation_test.rb` with comprehensive isolation tests
+- Tests cover: response parsing, per-model context managers, single-model compatibility, RubyLLM::Context isolation
+- Full test suite: 282 runs, 837 assertions, 0 failures, 0 errors, 13 skips ✅
+
+### Expected Behavior After Fix
+Previously, when running multi-model chat with repeated prompts:
+- ❌ Models would see BOTH their own AND other models' responses
+- ❌ Models would report inflated counts (e.g., "5 times", "6 times" instead of "3 times")
+- ❌ Token counts would be inflated due to contaminated context
+
+Now with the fix:
+- ✅ Each model sees ONLY its own conversation history
+- ✅ Each model correctly reports its own interaction count
+- ✅ Token counts accurately reflect per-model conversation size
+
+### Usage Examples
+```bash
+# Multi-model chat now properly isolates each model's context
+bin/aia --chat --model lms/openai/gpt-oss-20b,ollama/gpt-oss:20b --metrics
+
+> pick a random language and say hello
+# LMS: "Habari!" (Swahili)
+# Ollama: "Kaixo!" (Basque)
+
+> do it again
+# LMS: "Habari!" (only sees its own previous response)
+# Ollama: "Kaixo!" (only sees its own previous response)
+
+> do it again
+> how many times did you say hello to me?
+
+# Both models correctly respond: "3 times"
+# (Previously: LMS would say "5 times", Ollama "6 times" due to cross-talk)
+```
+
+## [0.9.18] 2025-10-05
+
+### Bug Fixes
+- **BUG FIX**: Fixed RubyLLM provider error parsing to handle both OpenAI and LM Studio error formats
+- **BUG FIX**: Fixed "String does not have #dig method" errors when parsing error responses from local providers
+- **BUG FIX**: Enhanced error parsing to gracefully handle malformed JSON responses
 
````
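The `#dig` failures came from assuming the error payload is always a Hash: OpenAI returns `{"error": {"message": ...}}` while LM Studio may return `{"error": "..."}`. A defensive parse looks roughly like this (a sketch, not the actual provider_fix code):

```ruby
require "json"

# Hypothetical sketch: tolerate both error shapes plus non-JSON bodies.
def parse_error_message(body)
  error = JSON.parse(body)["error"]
  case error
  when Hash   then error["message"]   # OpenAI style: {"error": {"message": "..."}}
  when String then error              # LM Studio style: {"error": "..."}
  else body.to_s
  end
rescue JSON::ParserError
  body.to_s                           # malformed JSON: fall back to the raw body
end
```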
````diff
-
+### Improvements
+- **ENHANCEMENT**: Removed debug output statements from RubyLLMAdapter for cleaner production logs
+- **ENHANCEMENT**: Improved error handling with debug logging for JSON parsing failures
+
+### Documentation
+- **DOCUMENTATION**: Added Local Models entry to MkDocs navigation for better documentation accessibility
+
+### Technical Changes
+- Enhanced provider_fix extension to support multiple error response formats (lib/extensions/ruby_llm/provider_fix.rb)
+- Cleaned up debug puts statements from RubyLLMAdapter and provider_fix
+- Added robust JSON parsing with fallback error handling
+
+## [0.9.17] 2025-10-04
+
+### New Features
 - **NEW FEATURE**: Enhanced local model support with comprehensive validation and error handling
 - **NEW FEATURE**: Added `lms/` prefix support for LM Studio models with automatic validation against loaded models
 - **NEW FEATURE**: Enhanced `//models` directive to auto-detect and display local providers (Ollama and LM Studio)
 - **NEW FEATURE**: Added model name prefix display in error messages for LM Studio (`lms/` prefix)
 
-
+### Improvements
 - **ENHANCEMENT**: Improved LM Studio integration with model validation against `/v1/models` endpoint
 - **ENHANCEMENT**: Enhanced error messages showing exact model names with correct prefixes when validation fails
 - **ENHANCEMENT**: Added environment variable support for custom LM Studio API base (`LMS_API_BASE`)
 - **ENHANCEMENT**: Improved `//models` directive output formatting for local models with size and modified date for Ollama
 - **ENHANCEMENT**: Enhanced multi-model support to seamlessly mix local and cloud models
 
````
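Validating an `lms/`-prefixed name against LM Studio's OpenAI-compatible `/v1/models` endpoint is one GET plus a membership check. A sketch (the response shape is the standard OpenAI model list; the helper itself is invented, not the gem's validation code):

```ruby
require "json"
require "net/http"

# Hypothetical sketch: check "lms/qwen/qwen3-coder-30b" against the models
# LM Studio actually has loaded, honoring the LMS_API_BASE override.
def validate_lms_model!(model)
  base   = ENV.fetch("LMS_API_BASE", "http://localhost:1234")  # assumed default port
  name   = model.delete_prefix("lms/")
  body   = Net::HTTP.get(URI("#{base}/v1/models"))
  loaded = JSON.parse(body).fetch("data", []).map { |m| m["id"] }
  return if loaded.include?(name)

  raise ArgumentError,
        "Model #{name} not loaded. Available: #{loaded.map { |id| "lms/#{id}" }.join(", ")}"
end
```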
````diff
-
+### Documentation
 - **DOCUMENTATION**: Added comprehensive local model documentation to README.md
 - **DOCUMENTATION**: Created new docs/guides/local-models.md guide covering Ollama and LM Studio setup, usage, and troubleshooting
 - **DOCUMENTATION**: Updated docs/guides/models.md with local provider sections including comparison table and workflow examples
 - **DOCUMENTATION**: Enhanced docs/faq.md with 5 new FAQ entries covering local model usage, differences, and error handling
 
-
+### Technical Changes
 - Enhanced RubyLLMAdapter with LM Studio model validation (lib/aia/ruby_llm_adapter.rb)
 - Updated models directive to query local provider endpoints (lib/aia/directives/models.rb)
 - Added provider_fix extension for RubyLLM compatibility (lib/extensions/ruby_llm/provider_fix.rb)
````
````diff
@@ -30,12 +176,12 @@
 - Updated dependencies: ruby_llm, webmock, crack, rexml
 - Bumped Ruby bundler version to 2.7.2
 
-
+### Bug Fixes
 - **BUG FIX**: Fixed missing `lms/` prefix in LM Studio model listings
 - **BUG FIX**: Fixed model validation error messages to show usable model names with correct prefixes
 - **BUG FIX**: Fixed Ollama endpoint to use native API (removed incorrect `/v1` suffix)
 
-
+### Usage Examples
 ```bash
 # Use LM Studio with validation
 aia --model lms/qwen/qwen3-coder-30b my_prompt
````
````diff
@@ -51,75 +197,74 @@ aia --model ollama/llama3.2 --chat
 > //models
 ```
 
-
+## [0.9.16] 2025-09-26
 
-
+### New Features
 - **NEW FEATURE**: Added support for Ollama AI provider
 - **NEW FEATURE**: Added support for Osaurus AI provider
 - **NEW FEATURE**: Added support for LM Studio AI provider
 
-
+### Improvements
 - **ENHANCEMENT**: Expanded AI provider ecosystem with three new local/self-hosted model options
 - **ENHANCEMENT**: Improved flexibility for users preferring local LLM deployments
 
-##
-### [0.9.15] 2025-09-21
+## [0.9.15] 2025-09-21
 
-
+### New Features
 - **NEW FEATURE**: Added `//paste` directive to insert clipboard contents into prompts
 - **NEW FEATURE**: Added `//clipboard` alias for the paste directive
 
-
+### Technical Changes
 - Enhanced DirectiveProcessor with clipboard integration using the clipboard gem
 - Added comprehensive test coverage for paste directive functionality
 
-
+## [0.9.14] 2025-09-19
 
-
+### New Features
 - **NEW FEATURE**: Added `//checkpoint` directive to create named snapshots of conversation context
 - **NEW FEATURE**: Added `//restore` directive to restore context to a previous checkpoint
 - **NEW FEATURE**: Enhanced `//context` (and `//review`) directive to display checkpoint markers in conversation history
 - **NEW FEATURE**: Added `//cp` alias for checkpoint directive
 
-
+### Improvements
 - **ENHANCEMENT**: Context manager now tracks checkpoint positions for better context visualization
 - **ENHANCEMENT**: Checkpoint system uses auto-incrementing integer names when no name is provided
 - **ENHANCEMENT**: Restore directive defaults to last checkpoint when no name specified
 - **ENHANCEMENT**: Clear context now also clears all checkpoints
 
````
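The checkpoint semantics listed above (auto-incrementing names, restore-to-last by default, clear drops everything) amount to a small snapshot store around the conversation context. A sketch of the idea (hypothetical; the gem's ContextManager differs in detail):

```ruby
# Hypothetical sketch of //checkpoint and //restore semantics:
# named deep snapshots of the context array, auto-numbered when unnamed.
class CheckpointStore
  def initialize
    @checkpoints = {}
    @auto = 0
  end

  def checkpoint(context, name = nil)
    name ||= (@auto += 1).to_s        # unnamed checkpoints get "1", "2", ...
    @checkpoints[name] = context.map(&:dup)
    name
  end

  def restore(name = nil)
    name ||= @checkpoints.keys.last   # //restore with no name -> last checkpoint
    @checkpoints.fetch(name).map(&:dup)
  end

  def clear
    @checkpoints.clear                # clearing context also drops all checkpoints
    @auto = 0
  end
end
```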
````diff
-
+### Bug Fixes
 - **BUG FIX**: Fixed `//help` directive that was showing empty list of directives
 - **BUG FIX**: Help directive now displays all directives from all registered modules
 - **BUG FIX**: Help directive now shows proper descriptions and aliases for all directives
 - **BUG FIX**: Help directive organizes directives by category for better readability
 
-
+### Technical Changes
 - Enhanced ContextManager with checkpoint storage and restoration capabilities
 - Added checkpoint_positions method to track checkpoint locations in context
 - Refactored help directive to collect directives from all registered modules
 - Added comprehensive test coverage for checkpoint and restore functionality
 
-
-
+## [0.9.13] 2025-09-02
+### New Features
 - **NEW FEATURE**: Added `--metrics` flag to show token counts for each model
 - **NEW FEATURE**: Added `--cost` flag to enable cost estimation for each model
 
-
+### Improvements
 - **DEPENDENCY**: Removed versionaire dependency, simplifying version management
 - **ENHANCEMENT**: Improved test suite reliability and coverage
 - **ENHANCEMENT**: Updated Gemfile.lock with latest dependency versions
 
-
+### Bug Fixes
 - **BUG FIX**: Fixed version handling issues by removing external versioning dependency
 
-
+### Technical Changes
 - Simplified version management by removing versionaire gem
 - Enhanced test suite with improved assertions and coverage
 - Updated various gem dependencies to latest stable versions
 
-
+## [0.9.12] 2025-08-28
 
-
+### New Features
 - **MAJOR NEW FEATURE**: Multi-model support - specify multiple AI models simultaneously with comma-separated syntax
 - **NEW FEATURE**: `--consensus` flag to enable primary model consensus mode for synthesized responses from multiple models
 - **NEW FEATURE**: `--no-consensus` flag to explicitly force individual responses from all models
````
````diff
@@ -130,21 +275,21 @@ aia --model ollama/llama3.2 --chat
 - **NEW FEATURE**: Multi-model support in both batch and interactive chat modes
 - **NEW FEATURE**: Comprehensive documentation website https://madbomber.github.io/aia/
 
-
+### Improvements
 - **ENHANCEMENT**: Enhanced `//model` directive output with detailed multi-model configuration display
 - **ENHANCEMENT**: Improved error handling with graceful fallback when model initialization fails
 - **ENHANCEMENT**: Better TTY handling in chat mode to prevent `Errno::ENXIO` errors in containerized environments
 - **ENHANCEMENT**: Updated directive processor to use new module-based architecture for better maintainability
 - **ENHANCEMENT**: Improved batch mode output file formatting consistency between STDOUT and file output
 
-
+### Bug Fixes
 - **BUG FIX**: Fixed DirectiveProcessor TypeError that prevented application startup with invalid directive calls
 - **BUG FIX**: Fixed missing primary model output in batch mode output files
 - **BUG FIX**: Fixed inconsistent formatting between STDOUT and file output in batch mode
 - **BUG FIX**: Fixed TTY availability issues in chat mode for containerized environments
 - **BUG FIX**: Fixed directive processing to use updated module-based registry system
 
-
+### Technical Changes
 - Fixed ruby_llm version to 1.5.1
 - Added extra API_KEY configuration for new LLM providers
 - Updated RubyLLMAdapter to support multiple model initialization and management
@@ -153,13 +298,13 @@ aia --model ollama/llama3.2 --chat
 - Updated CLI parser to support multi-model flags and options
 - Enhanced configuration system to support consensus mode settings
 
-
+### Documentation
 - **DOCUMENTATION**: Comprehensive README.md updates with multi-model usage examples and best practices
 - **DOCUMENTATION**: Added multi-model section to README with detailed usage instructions
 - **DOCUMENTATION**: Updated command-line options table with new multi-model flags
 - **DOCUMENTATION**: Added practical multi-model examples for decision-making scenarios
 
-
+### Usage Examples
 ```bash
 # Basic multi-model usage
 aia my_prompt -m gpt-4o-mini,gpt-3.5-turbo
````
````diff
@@ -174,25 +319,25 @@ aia --chat -m gpt-4o-mini,gpt-3.5-turbo
 //model # Use in any prompt or chat session
 ```
 
-
+### Migration Notes
 - Existing single-model usage remains unchanged and fully compatible
 - Multi-model is opt-in via comma-separated model names
 - Default behavior without `--consensus` flag shows individual responses from all models
 - Invalid model names are reported but don't prevent valid models from working
 
-
+### TODO
 - TODO: focus on log file consistency using Logger
 
 
-
+## [0.9.11] 2025-07-31
 - added a cost per 1 million input tokens to available_models query output
 - updated ruby_llm to version 1.4.0
 - updated all other gem dependencies to their latest versions
 
-
+## [0.9.10] 2025-07-18
 - updated ruby_llm-mcp to version 0.6.1 which solves problems with MCP tools not being installed
 
-
+## [0.9.9] 2025-07-10
 - refactored the Session and Config classes into more testable method_missing
 - updated the test suite for both the Session and Config classes
 - added support for MCP servers coming into AIA via the shared_tools gem
````
````diff
@@ -203,13 +348,13 @@ aia --chat -m gpt-4o-mini,gpt-3.5-turbo
 - //available_models now has context window size and capabilities for each model returned
 
 
-
+## [0.9.8] 2025-06-25
 - fixing an issue with pipelined prompts
 - now showing the complete modality of the model on the processing line.
 - changed -p option from prompts_dir to pipeline
 - found problem with simple cov and deep cov w/r/t their reported test coverage; they have problems with heredoc and complex conditionals.
 
-
+## [0.9.7] 2025-06-20
 
 - **NEW FEATURE**: Added `--available_models` CLI option to list all available AI models
 - **NEW FEATURE**: Added `//tools` to show a list of available tools and their description
@@ -220,7 +365,7 @@ aia --chat -m gpt-4o-mini,gpt-3.5-turbo
 - **DOCUMENTATION**: Updated README for better clarity and structure
 - **DEPENDENCY**: Updated Gemfile.lock with latest dependency versions
 
-
+## [0.9.6] 2025-06-13
 - fixed issue 84 with the //llms directive
 - changed the monkey patch to the RubyLLM::Model::Modalities class at the suggestion of the RubyLLM author in prep for a PR against that gem.
 - added the shared_tools gem - need to add examples on how to use it with the --tools option
@@ -228,11 +373,11 @@ aia --chat -m gpt-4o-mini,gpt-3.5-turbo
 - added images/aia.png to README.md
 - let claude code rewrite the README.md file. Some details were dropped but overall it reads better. Need to add the details to a wiki or other documentation site.
 
-
+## [0.9.5] 2025-06-04
 - changed the RubyLLM::Modalities class to use method_missing for mode query
 - hunting for why the //llms query directive is not finding image_to_image LLMs.
 
-
+## [0.9.4] 2025-06-03
 - using RubyLLM v1.3.0
 - setting up a docs infrastructure to behave like the ruby_llm gem's guides site
 - fixed bug in the text-to-image workflow
@@ -240,7 +385,7 @@ aia --chat -m gpt-4o-mini,gpt-3.5-turbo
 - need to pay attention to the test suite
 - also need to ensure the non text2text modes are working
 
-
+## [0.9.3rc1] 2025-05-24
 - using ruby_llm v1.3.0rc1
 - added a models database refresh based on integer days interval with the --refresh option
 - config file now has a "last_refresh" String in format YYYY-MM-DD
````
````diff
@@ -249,43 +394,41 @@
 - fixed a bug in the prompt_manager gem which is now at v0.5.5
 
 
-
+## [0.9.2] 2025-05-18
 - removing the MCP experiment
 - adding support for RubyLLM::Tool usage in place of the MCP stuff
 - updated prompt_manager to v0.5.4 which fixed shell integration problem
 
-
+## [0.9.1] 2025-05-16
 - rethink MCP approach in favor of just RubyLLM::Tool
 - fixed problem with //clear
 - fixed a problem with a priming prompt in a chat loop
 
-
+## [0.9.0] 2025-05-13
 - Adding experimental MCP Client support
 - removed the CLI options --erb and --shell but kept them in the config file with a default of true for both
 
-
+## [0.8.6] 2025-04-23
 - Added a client adapter for the ruby_llm gem
 - Added the adapter config item and the --adapter option to select at runtime which client to use: ai_client or ruby_llm
 
-
+## [0.8.5] 2025-04-19
 - documentation updates
 - integrated the https://pure.md web service for inserting web pages into the context window
   - //include http?://example.com/stuff
   - //webpage http?://example.com/stuff
 
-
+## [0.8.2] 2025-04-18
 - fixed problems with pre-loaded context and chat repl
 - piped content into `aia --chat` is now a part of the context/instructions
 - content via "aia --chat < some_file" is added to the context/instructions
 - `aia --chat context_file.txt context_file2.txt` now works
 - `aia --chat prompt_id context_file.txt` also works
 
-
-
-### [0.8.1] 2025-04-17
+## [0.8.1] 2025-04-17
 - bumped version to 0.8.1 after correcting merge conflicts
 
-
+## [0.8.0] WIP - 2025-04-15
 - Updated PromptManager to v0.5.1 which has some of the functionality that was originally developed in the AIA.
 - Enhanced README.md to include a comprehensive table of configuration options with defaults and associated environment variables.
 - Added a note in README.md about the expandability of configuration options from a config file for dynamic prompt generation.
````
````diff
@@ -293,7 +436,7 @@ aia --chat -m gpt-4o-mini,gpt-3.5-turbo
 - Ensured version consistency across `.version`, `aia.gemspec`, and `lib/aia/version.rb`.
 - Verified and updated documentation to ensure readiness for release on RubyGems.org.
 
-
+## [0.7.1] WIP - 2025-03-22
 - Added `UIPresenter` class for handling user interface presentation.
 - Added `DirectiveProcessor` class for processing chat-based directives.
 - Added `HistoryManager` class for managing conversation and variable history.
@@ -307,7 +450,7 @@ aia --chat -m gpt-4o-mini,gpt-3.5-turbo
 - Improved error handling and user feedback for directive processing.
 - Enhanced logging and output options for chat sessions.
 
-
+## [0.7.0] WIP - 2025-03-17
 - Major code refactoring for better organization and maintainability:
 - Extracted `DirectiveProcessor` class to handle chat-based directives
 - Extracted `HistoryManager` class for conversation and variable history management
@@ -318,13 +461,13 @@ aia --chat -m gpt-4o-mini,gpt-3.5-turbo
 - Improved output handling to suppress STDOUT when chat mode is off and output file is specified
 - Updated spinner format in the process_prompt method for better user experience
 
-
+## [0.6.?] WIP
 - Implemented Tony Stark's Clean Slate Protocol on the develop branch
 
-
+## [0.5.17] 2024-05-17
 - replaced `semver` with `versionaire`
 
-
+## [0.5.16] 2024-04-02
 - fixed prompt pipelines
 - added //next and //pipeline directives as shortcuts to //config [next,pipeline]
 - Added new backend "client" as an internal OpenAI client
@@ -333,83 +476,82 @@
 - Added --voice default: alloy (if "siri" and Mac? then uses cli tool "say")
 - Added --image_size and --image_quality (--is --iq)
 
-
-### [0.5.15] 2024-03-30
+## [0.5.15] 2024-03-30
 - Added the ability to accept piped-in text appended to the end of the prompt text: curl $URL | aia ad_hoc
 - Fixed bugs with entering directives as follow-up prompts during a chat session
 
-
+## [0.5.14] 2024-03-09
 - Directly access OpenAI to do text to speech when using the `--speak` option
 - Added --voice to specify which voice to use
 - Added --speech_model to specify which TTS model to use
 
-
+## [0.5.13] 2024-03-03
 - Added CLI-utility `llm` as a backend processor
 
-
+## [0.5.12] 2024-02-24
 - Happy Birthday Ruby!
 - Added --next CLI option
 - Added --pipeline CLI option
 
-
+## [0.5.11] 2024-02-18
 - allow directives to return information that is inserted into the prompt text
 - added //shell command directive
 - added //ruby ruby_code directive
 - added //include path_to_file directive
 
-
+## [0.5.10] 2024-02-03
 - Added --roles_dir to isolate roles from other prompts if desired
 - Changed --prompts to --prompts_dir to be consistent
 - Refactored common fzf usage into its own tool class
 
-
+## [0.5.9] 2024-02-01
 - Added an "I'm working" spinner when "--verbose" is used, as an indication that the backend is in the process of composing its response to the prompt.
 
-
+## [0.5.8] 2024-01-17
 - Changed the behavior of the --dump option. It must now be followed by path/to/file.ext where ext is a supported config file format: yml, yaml, toml
 
-
+## [0.5.7] 2024-01-15
 - Added ERB processing to the config_file
 
-
+## [0.5.6] 2024-01-15
 - Adding processing for directives, shell integration and erb to the follow-up prompt in a chat session
 - some code refactoring.
 
 ## [0.5.3] 2024-01-14
 - adding ability to render markdown to the terminal using the "glow" CLI utility
 
-
+## [0.5.2] 2024-01-13
 - wrap response when it's going to the terminal
 
-
+## [0.5.1] 2024-01-12
 - removed a wicked puts "loaded" statement
 - fixed missed code when the options were changed to --out_file and --log_file
 - fixed completion functions by updating $PROMPT_DIR to $AIA_PROMPTS_DIR to match the documentation.
 
-
+## [0.5.0] 2024-01-05
 - breaking changes:
   - changed `--config` to `--config_file`
   - changed `--env` to `--shell`
   - changed `--output` to `--out_file`
   - changed default `out_file` to `STDOUT`
 
-
+## [0.4.3] 2023-12-31
 - added --env to process embedded system environment variables and shell commands within a prompt.
 - added --erb to process Embedded RuBy within a prompt, because having embedded shell commands will only get you into trouble. Having ERB will really get you into trouble. Remember: the simple prompt is usually the best prompt.
 
-
+## [0.4.2] 2023-12-31
 - added the --role CLI option to pre-pend a "role" prompt to the front of a primary prompt.
 
-
+## [0.4.1] 2023-12-31
 - added a chat mode
 - prompt directives now supported
 - version bumped to match the `prompt_manager` gem
 
-
+## [0.3.20] 2023-12-28
 - added work around to issue with multiple context files going to the `mods` backend
 - added shellwords gem to sanitize prompt text on the command line
 
-
+## [0.3.19] 2023-12-26
 - major code refactoring.
 - supports config files \*.yml, \*.yaml and \*.toml
 - usage implemented as a man page. --help will display the man page.
@@ -422,10 +564,10 @@ aia --chat -m gpt-4o-mini,gpt-3.5-turbo
 3. envar values over-rides ...
 4. default values
 
-
+## [0.3.0] 2023-11-23
 
 - Matching version to [prompt_manager](https://github.com/prompt_manager). This version allows for the use of history in the entry of values to prompt keywords. KW_HISTORY_MAX is set at 5. Changed CLI interaction to use historical selection and editing of prior keyword values.
 
-
+## [0.1.0] - 2023-11-23
 
 - Initial release
````