aia 0.9.19 → 0.9.20
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/.version +1 -1
- data/CHANGELOG.md +151 -91
- data/README.md +128 -3
- data/docs/cli-reference.md +71 -4
- data/docs/guides/models.md +196 -1
- data/lib/aia/config/base.rb +6 -1
- data/lib/aia/config/cli_parser.rb +116 -2
- data/lib/aia/config/file_loader.rb +33 -1
- data/lib/aia/prompt_handler.rb +22 -1
- data/lib/aia/ruby_llm_adapter.rb +134 -30
- data/lib/aia/session.rb +24 -8
- data/lib/aia/utility.rb +19 -1
- metadata +1 -1
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 24f8a994c526c8cd4cbd1c90e0fa6d1d9139584ed71b160693d3d62c6d990d8a
+  data.tar.gz: acf5c080b79e7c9d87770bfefe126cf455ea5d2e334766caa95348ced43e1420
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: c454d06e3f5bda976e7efdfe7967931045afaced2b4b68fdd9cd8c1d0c7ab02e702a8c49e62391f16e96624bea154b404a48afcafdae6aaadbae02af56a852ed
+  data.tar.gz: 4c0b17155f9b088611a8b24cd6fdaa5873916ba3392d61b2a0c7eca62d18644b5e0349d9a721d3d7b9470d37eb09117ed5a92c546083c5e93eb4358431296a83
data/.version
CHANGED
@@ -1 +1 @@
-0.9.19
+0.9.20
data/CHANGELOG.md
CHANGED
@@ -1,14 +1,78 @@
 # Changelog
 ## [Unreleased]
 
-
-
-
+## [0.9.20] 2025-10-06
+### Added
+- **Enhanced Multi-Model Role System (ADR-005)**: Implemented per-model role assignment with inline syntax
+  - New inline syntax: `--model MODEL[=ROLE][,MODEL[=ROLE]]...`
+  - Example: `aia --model gpt-4o=architect,claude=security,gemini=performance design_doc.md`
+  - Support for duplicate models with different roles: `gpt-4o=optimist,gpt-4o=pessimist,gpt-4o=realist`
+  - Added `--list-roles` command to discover available role files
+  - Display format shows instance numbers and roles: `gpt-4o #1 (optimist):`, `gpt-4o #2 (pessimist):`
+  - Consensus mode drops role for neutral synthesis
+  - Chat mode roles are immutable during session
+  - Nested role path support: `--model gpt-4o=specialized/architect`
+  - Full backward compatibility with existing `--role` flag
+
+- **Config File Model Roles Support (ADR-005 v2)**:
+  - Enhanced `model` key in config files to support array of hashes with roles
+  - Format: `model: [{model: gpt-4o, role: architect}, {model: claude, role: security}]`
+  - Mirrors internal data structure (array of hashes with `model`, `role`, `instance`, `internal_id`)
+  - Supports models without roles: `model: [{model: gpt-4o}]`
+  - Enables reusable model-role setups across sessions
+  - Configuration precedence: CLI inline > CLI flag > Environment variable > Config file
+
+- **Environment Variable Inline Syntax (ADR-005 v2)**:
+  - Added support for inline role syntax in `AIA_MODEL` environment variable
+  - Example: `export AIA_MODEL="gpt-4o=architect,claude=security,gemini=performance"`
+  - Maintains backward compatibility with simple comma-separated model lists
+  - Detects `=` to distinguish between formats
+
+### Bug Fixes
+- **Multi-Model Chat Cross-Talk**: Fixed bug where model instances with different roles could see each other's conversation history
+  - Updated Session to properly extract `internal_id` from hash-based model specs (lib/aia/session.rb:47-68)
+  - Fixed `parse_multi_model_response` to normalize display names to internal IDs (lib/aia/session.rb:538-570)
+  - Each model instance now maintains completely isolated conversation context
+  - Fixes issue where models would respond as if aware of other models' perspectives
+
+### Improvements
+- **Robot ASCII Display**: Updated `robot` method to extract and display only model names from new hash format (lib/aia/utility.rb:24-53)
+  - Handles string, array of strings, and array of hashes formats
+  - Shows clean model list: "gpt-4o, claude, gemini" instead of hash objects
+
+### Testing
+- Added comprehensive test suite for config file and environment variable model roles
+  - test/aia/config_model_roles_test.rb: 8 tests covering array processing, env var parsing, YAML config files
+- Added 15 tests for role parsing with inline syntax (test/aia/role_parsing_test.rb)
+- Fixed Mocha test cleanup in multi_model_isolation_test.rb
+- Full test suite: 306 runs, 980 assertions, 0 failures (1 pre-existing Mocha isolation issue)
+
+### Technical Implementation
+- Modified `config.model` to support array of hashes with model metadata: `{model:, role:, instance:, internal_id:}`
+- Added `parse_models_with_roles` method with fail-fast validation (lib/aia/config/cli_parser.rb)
+- Added `validate_role_exists` with helpful error messages showing available roles
+- Added `list_available_roles` and `list_available_role_names` methods for role discovery
+- Added `load_role_for_model` method to PromptHandler for per-model role loading (lib/aia/prompt_handler.rb)
+- Enhanced RubyLLMAdapter to handle hash-based model specs and prepend roles per model
+  - Added `extract_model_names` to extract model names from specs
+  - Added `get_model_spec` to retrieve full spec by internal_id
+  - Added `prepend_model_role` to inject role content into prompts
+  - Added `format_model_display_name` for consistent display formatting
+- Updated Session initialization to handle hash-based model specs for context managers
+- Updated display formatting to show instance numbers and roles
+- Maintained backward compatibility with string/array model configurations
+- Added `process_model_array_with_roles` method in FileLoader (lib/aia/config/file_loader.rb:91-116)
+- Enhanced `apply_file_config_to_struct` to detect and process model arrays with role hashes
+- Enhanced `envar_options` to parse inline syntax for `:model` key (lib/aia/config/base.rb:212-217)
+
+## [0.9.19] 2025-10-06
+
+### Bug Fixes
 - **CRITICAL BUG FIX**: Fixed multi-model cross-talk issue (#118) where models could see each other's conversation history
 - **BUG FIX**: Implemented complete two-level context isolation to prevent models from contaminating each other's responses
 - **BUG FIX**: Fixed token count inflation caused by models processing combined conversation histories
 
-
+### Technical Changes
 - **Level 1 (Library)**: Implemented per-model RubyLLM::Context isolation - each model now has its own Context instance (lib/aia/ruby_llm_adapter.rb)
 - **Level 2 (Application)**: Implemented per-model ContextManager isolation - each model maintains its own conversation history (lib/aia/session.rb)
 - Added `parse_multi_model_response` method to extract individual model responses from combined output (lib/aia/session.rb:502-533)
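The inline `--model MODEL[=ROLE]` syntax, the `AIA_MODEL` environment variable, and the config-file array form described in the 0.9.20 notes above all normalize to the same array-of-hashes structure (`model`, `role`, `instance`, `internal_id`). Here is a minimal sketch of that normalization; the helper name matches `parse_models_with_roles` from the notes, but the body and the `internal_id` format are illustrative assumptions, not the gem's exact implementation:

```ruby
# Sketch of the model=role normalization described above. AIA_MODEL uses the
# same grammar: entries containing "=" are model=role pairs, otherwise plain
# model names. The internal_id scheme here is illustrative only.
def parse_models_with_roles(spec)
  counts = Hash.new(0)                      # per-model instance counter
  spec.split(",").map do |entry|
    model, role = entry.strip.split("=", 2) # "gpt-4o=architect" -> ["gpt-4o", "architect"]
    counts[model] += 1
    instance = counts[model]
    {
      model:       model,
      role:        role,                    # nil when no "=ROLE" suffix was given
      instance:    instance,
      internal_id: role ? "#{model}##{instance}" : model
    }
  end
end

parse_models_with_roles("gpt-4o=optimist,gpt-4o=pessimist,claude")
# => [{model: "gpt-4o", role: "optimist",  instance: 1, internal_id: "gpt-4o#1"},
#     {model: "gpt-4o", role: "pessimist", instance: 2, internal_id: "gpt-4o#2"},
#     {model: "claude", role: nil,         instance: 1, internal_id: "claude"}]

# The config-file form quoted in the notes mirrors the same structure directly:
#   model:
#     - {model: gpt-4o, role: architect}
#     - {model: claude, role: security}
```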
@@ -23,17 +87,17 @@
 - Added comprehensive test coverage with 6 new tests for multi-model isolation
 - Updated LocalProvidersTest to reflect Context-based architecture
 
-
+### Architecture
 - **ADR-002-revised**: Complete Multi-Model Isolation (see `.architecture/decisions/adrs/ADR-002-revised-multi-model-isolation.md`)
 - Eliminated global state dependencies in multi-model chat sessions
 - Maintained backward compatibility with single-model mode (verified with tests)
 
-
+### Test Coverage
 - Added `test/aia/multi_model_isolation_test.rb` with comprehensive isolation tests
 - Tests cover: response parsing, per-model context managers, single-model compatibility, RubyLLM::Context isolation
 - Full test suite: 282 runs, 837 assertions, 0 failures, 0 errors, 13 skips ✅
 
-
+### Expected Behavior After Fix
 Previously, when running multi-model chat with repeated prompts:
 - ❌ Models would see BOTH their own AND other models' responses
 - ❌ Models would report inflated counts (e.g., "5 times", "6 times" instead of "3 times")
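The isolation fix described in these 0.9.19 notes turns on keeping one context manager per model and routing each chunk of the combined output back to the model that produced it. A rough sketch of the splitting step: the header pattern is inferred from the `gpt-4o #1 (optimist):` display convention quoted above, and the real `parse_multi_model_response` in lib/aia/session.rb will differ in detail:

```ruby
# Illustrative sketch: split a combined multi-model response into one entry
# per model, keyed by a normalized internal id so each model's ContextManager
# only ever sees its own text. The header regex is an assumption based on the
# "model #N (role):" display format, not the gem's actual pattern.
HEADER = /^(?<name>[\w.\/:-]+)(?:\s+#(?<instance>\d+))?(?:\s+\((?<role>[\w\/-]+)\))?:\s*$/

def parse_multi_model_response(text)
  responses = {}
  current   = nil
  text.each_line do |line|
    if (m = line.match(HEADER))
      current = m[:instance] ? "#{m[:name]}##{m[:instance]}" : m[:name]
      responses[current] = +""
    elsif current
      responses[current] << line   # body lines accumulate under the last header
    end
  end
  responses.transform_values(&:strip)
end
```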
@@ -44,7 +108,7 @@ Now with the fix:
 - ✅ Each model correctly reports its own interaction count
 - ✅ Token counts accurately reflect per-model conversation size
 
-
+### Usage Examples
 ```bash
 # Multi-model chat now properly isolates each model's context
 bin/aia --chat --model lms/openai/gpt-oss-20b,ollama/gpt-oss:20b --metrics
@@ -64,47 +128,47 @@ bin/aia --chat --model lms/openai/gpt-oss-20b,ollama/gpt-oss:20b --metrics
 # (Previously: LMS would say "5 times", Ollama "6 times" due to cross-talk)
 ```
 
-
+## [0.9.18] 2025-10-05
 
-
+### Bug Fixes
 - **BUG FIX**: Fixed RubyLLM provider error parsing to handle both OpenAI and LM Studio error formats
 - **BUG FIX**: Fixed "String does not have #dig method" errors when parsing error responses from local providers
 - **BUG FIX**: Enhanced error parsing to gracefully handle malformed JSON responses
 
-
+### Improvements
 - **ENHANCEMENT**: Removed debug output statements from RubyLLMAdapter for cleaner production logs
 - **ENHANCEMENT**: Improved error handling with debug logging for JSON parsing failures
 
-
+### Documentation
 - **DOCUMENTATION**: Added Local Models entry to MkDocs navigation for better documentation accessibility
 
-
+### Technical Changes
 - Enhanced provider_fix extension to support multiple error response formats (lib/extensions/ruby_llm/provider_fix.rb)
 - Cleaned up debug puts statements from RubyLLMAdapter and provider_fix
 - Added robust JSON parsing with fallback error handling
 
-
+## [0.9.17] 2025-10-04
 
-
+### New Features
 - **NEW FEATURE**: Enhanced local model support with comprehensive validation and error handling
 - **NEW FEATURE**: Added `lms/` prefix support for LM Studio models with automatic validation against loaded models
 - **NEW FEATURE**: Enhanced `//models` directive to auto-detect and display local providers (Ollama and LM Studio)
 - **NEW FEATURE**: Added model name prefix display in error messages for LM Studio (`lms/` prefix)
 
-
+### Improvements
 - **ENHANCEMENT**: Improved LM Studio integration with model validation against `/v1/models` endpoint
 - **ENHANCEMENT**: Enhanced error messages showing exact model names with correct prefixes when validation fails
 - **ENHANCEMENT**: Added environment variable support for custom LM Studio API base (`LMS_API_BASE`)
 - **ENHANCEMENT**: Improved `//models` directive output formatting for local models with size and modified date for Ollama
 - **ENHANCEMENT**: Enhanced multi-model support to seamlessly mix local and cloud models
 
-
+### Documentation
 - **DOCUMENTATION**: Added comprehensive local model documentation to README.md
 - **DOCUMENTATION**: Created new docs/guides/local-models.md guide covering Ollama and LM Studio setup, usage, and troubleshooting
 - **DOCUMENTATION**: Updated docs/guides/models.md with local provider sections including comparison table and workflow examples
 - **DOCUMENTATION**: Enhanced docs/faq.md with 5 new FAQ entries covering local model usage, differences, and error handling
 
-
+### Technical Changes
 - Enhanced RubyLLMAdapter with LM Studio model validation (lib/aia/ruby_llm_adapter.rb)
 - Updated models directive to query local provider endpoints (lib/aia/directives/models.rb)
 - Added provider_fix extension for RubyLLM compatibility (lib/extensions/ruby_llm/provider_fix.rb)
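The "String does not have #dig method" bug described under 0.9.18 is the classic shape mismatch between error payloads: OpenAI nests the message under a hash, while some local servers return the error as a bare string, so an unconditional `body.dig("error", "message")` raises. A minimal sketch of the tolerant parsing the notes describe (the method name and exact fallbacks are illustrative):

```ruby
require "json"

# Illustrative sketch: handle {"error" => {"message" => ...}} (OpenAI-style),
# {"error" => "..."} (bare-string style from some local providers), and
# malformed JSON, which the notes say now falls back gracefully.
def parse_error_message(raw_body)
  body = JSON.parse(raw_body)
  case body["error"]
  when Hash   then body["error"]["message"] || body["error"].to_s
  when String then body["error"]
  else body.to_s
  end
rescue JSON::ParserError
  raw_body.to_s  # malformed JSON: return the raw response text instead of raising
end
```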
@@ -112,12 +176,12 @@ bin/aia --chat --model lms/openai/gpt-oss-20b,ollama/gpt-oss:20b --metrics
 - Updated dependencies: ruby_llm, webmock, crack, rexml
 - Bumped Ruby bundler version to 2.7.2
 
-
+### Bug Fixes
 - **BUG FIX**: Fixed missing `lms/` prefix in LM Studio model listings
 - **BUG FIX**: Fixed model validation error messages to show usable model names with correct prefixes
 - **BUG FIX**: Fixed Ollama endpoint to use native API (removed incorrect `/v1` suffix)
 
-
+### Usage Examples
 ```bash
 # Use LM Studio with validation
 aia --model lms/qwen/qwen3-coder-30b my_prompt
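The 0.9.17 notes above describe validating `lms/`-prefixed models against LM Studio's OpenAI-compatible `/v1/models` endpoint and echoing usable, correctly prefixed names on failure. A sketch of that check, assuming LM Studio's conventional local address as the default base URL (overridable via `LMS_API_BASE`, per the notes); the method name and message wording are illustrative:

```ruby
require "json"
require "net/http"

# Illustrative sketch of lms/ model validation. The default base URL is an
# assumption about a typical LM Studio setup; LMS_API_BASE overrides it.
def validate_lms_model!(name)
  base   = ENV.fetch("LMS_API_BASE", "http://localhost:1234/v1")
  bare   = name.delete_prefix("lms/")
  body   = Net::HTTP.get(URI("#{base}/models"))
  loaded = JSON.parse(body).fetch("data", []).map { |m| m["id"] }
  return if loaded.include?(bare)

  # Show exact, usable names with the lms/ prefix, as the bug fix above requires.
  raise ArgumentError,
        "Model #{name.inspect} is not loaded. Available: " \
        "#{loaded.map { |id| "lms/#{id}" }.join(", ")}"
end
```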
@@ -133,75 +197,74 @@ aia --model ollama/llama3.2 --chat
 > //models
 ```
 
-
+## [0.9.16] 2025-09-26
 
-
+### New Features
 - **NEW FEATURE**: Added support for Ollama AI provider
 - **NEW FEATURE**: Added support for Osaurus AI provider
 - **NEW FEATURE**: Added support for LM Studio AI provider
 
-
+### Improvements
 - **ENHANCEMENT**: Expanded AI provider ecosystem with three new local/self-hosted model options
 - **ENHANCEMENT**: Improved flexibility for users preferring local LLM deployments
 
-##
-### [0.9.15] 2025-09-21
+## [0.9.15] 2025-09-21
 
-
+### New Features
 - **NEW FEATURE**: Added `//paste` directive to insert clipboard contents into prompts
 - **NEW FEATURE**: Added `//clipboard` alias for the paste directive
 
-
+### Technical Changes
 - Enhanced DirectiveProcessor with clipboard integration using the clipboard gem
 - Added comprehensive test coverage for paste directive functionality
 
-
+## [0.9.14] 2025-09-19
 
-
+### New Features
 - **NEW FEATURE**: Added `//checkpoint` directive to create named snapshots of conversation context
 - **NEW FEATURE**: Added `//restore` directive to restore context to a previous checkpoint
 - **NEW FEATURE**: Enhanced `//context` (and `//review`) directive to display checkpoint markers in conversation history
 - **NEW FEATURE**: Added `//cp` alias for checkpoint directive
 
-
+### Improvements
 - **ENHANCEMENT**: Context manager now tracks checkpoint positions for better context visualization
 - **ENHANCEMENT**: Checkpoint system uses auto-incrementing integer names when no name is provided
 - **ENHANCEMENT**: Restore directive defaults to last checkpoint when no name specified
 - **ENHANCEMENT**: Clear context now also clears all checkpoints
 
-
+### Bug Fixes
 - **BUG FIX**: Fixed `//help` directive that was showing empty list of directives
 - **BUG FIX**: Help directive now displays all directives from all registered modules
 - **BUG FIX**: Help directive now shows proper descriptions and aliases for all directives
 - **BUG FIX**: Help directive organizes directives by category for better readability
 
-
+### Technical Changes
 - Enhanced ContextManager with checkpoint storage and restoration capabilities
 - Added checkpoint_positions method to track checkpoint locations in context
 - Refactored help directive to collect directives from all registered modules
 - Added comprehensive test coverage for checkpoint and restore functionality
 
-
-
+## [0.9.13] 2025-09-02
+### New Features
 - **NEW FEATURE**: Added `--metrics` flag to show token counts for each model
 - **NEW FEATURE**: Added `--cost` flag to enable cost estimation for each model
 
-
+### Improvements
 - **DEPENDENCY**: Removed versionaire dependency, simplifying version management
 - **ENHANCEMENT**: Improved test suite reliability and coverage
 - **ENHANCEMENT**: Updated Gemfile.lock with latest dependency versions
 
-
+### Bug Fixes
 - **BUG FIX**: Fixed version handling issues by removing external versioning dependency
 
-
+### Technical Changes
 - Simplified version management by removing versionaire gem
 - Enhanced test suite with improved assertions and coverage
 - Updated various gem dependencies to latest stable versions
 
-
+## [0.9.12] 2025-08-28
 
-
+### New Features
 - **MAJOR NEW FEATURE**: Multi-model support - specify multiple AI models simultaneously with comma-separated syntax
 - **NEW FEATURE**: `--consensus` flag to enable primary model consensus mode for synthesized responses from multiple models
 - **NEW FEATURE**: `--no-consensus` flag to explicitly force individual responses from all models
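The checkpoint behavior described under 0.9.14 above (named snapshots, auto-incrementing integer names when none is given, restore defaulting to the most recent checkpoint, and clear wiping checkpoints too) is easy to picture as a small addition to a context manager. A sketch under those stated behaviors; this is not the gem's actual ContextManager code:

```ruby
# Illustrative sketch of //checkpoint and //restore semantics from the notes.
class ContextManager
  def initialize
    @context     = []   # array of {role:, content:} message hashes
    @checkpoints = {}   # name => deep copy of @context at checkpoint time
  end

  def checkpoint(name = nil)
    name ||= (@checkpoints.keys.grep(Integer).max || 0) + 1  # auto-increment
    @checkpoints[name] = Marshal.load(Marshal.dump(@context))
    name
  end

  def restore(name = nil)
    name ||= @checkpoints.keys.last   # default: the last checkpoint taken
    @context = Marshal.load(Marshal.dump(@checkpoints.fetch(name)))
  end

  def clear
    @context.clear
    @checkpoints.clear                # clearing context also drops checkpoints
  end
end
```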
@@ -212,21 +275,21 @@ aia --model ollama/llama3.2 --chat
 - **NEW FEATURE**: Multi-model support in both batch and interactive chat modes
 - **NEW FEATURE**: Comprehensive documentation website https://madbomber.github.io/aia/
 
-
+### Improvements
 - **ENHANCEMENT**: Enhanced `//model` directive output with detailed multi-model configuration display
 - **ENHANCEMENT**: Improved error handling with graceful fallback when model initialization fails
 - **ENHANCEMENT**: Better TTY handling in chat mode to prevent `Errno::ENXIO` errors in containerized environments
 - **ENHANCEMENT**: Updated directive processor to use new module-based architecture for better maintainability
 - **ENHANCEMENT**: Improved batch mode output file formatting consistency between STDOUT and file output
 
-
+### Bug Fixes
 - **BUG FIX**: Fixed DirectiveProcessor TypeError that prevented application startup with invalid directive calls
 - **BUG FIX**: Fixed missing primary model output in batch mode output files
 - **BUG FIX**: Fixed inconsistent formatting between STDOUT and file output in batch mode
 - **BUG FIX**: Fixed TTY availability issues in chat mode for containerized environments
 - **BUG FIX**: Fixed directive processing to use updated module-based registry system
 
-
+### Technical Changes
 - Fixed ruby_llm version to 1.5.1
 - Added extra API_KEY configuration for new LLM providers
 - Updated RubyLLMAdapter to support multiple model initialization and management
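The 0.9.12 notes introduce the two multi-model modes: individual responses by default, or, with `--consensus`, a synthesized answer from the primary model. A sketch of that fan-out flow under those stated semantics; `ask` stands in for whatever client call the adapter actually makes, and the synthesis prompt wording is invented for illustration:

```ruby
# Illustrative sketch: every model answers the same prompt; with consensus
# enabled, the primary (first) model synthesizes the individual answers.
def run_multi_model(models, prompt, consensus:)
  answers = models.to_h { |m| [m, ask(m, prompt)] }  # individual responses
  return answers unless consensus

  primary  = models.first
  combined = answers.map { |m, a| "#{m}:\n#{a}" }.join("\n\n")
  { primary => ask(primary, "Synthesize a consensus answer from:\n\n#{combined}") }
end
```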
@@ -235,13 +298,13 @@
 - Updated CLI parser to support multi-model flags and options
 - Enhanced configuration system to support consensus mode settings
 
-
+### Documentation
 - **DOCUMENTATION**: Comprehensive README.md updates with multi-model usage examples and best practices
 - **DOCUMENTATION**: Added multi-model section to README with detailed usage instructions
 - **DOCUMENTATION**: Updated command-line options table with new multi-model flags
 - **DOCUMENTATION**: Added practical multi-model examples for decision-making scenarios
 
-
+### Usage Examples
 ```bash
 # Basic multi-model usage
 aia my_prompt -m gpt-4o-mini,gpt-3.5-turbo
@@ -256,25 +319,25 @@ aia --chat -m gpt-4o-mini,gpt-3.5-turbo
 //model # Use in any prompt or chat session
 ```
 
-
+### Migration Notes
 - Existing single-model usage remains unchanged and fully compatible
 - Multi-model is opt-in via comma-separated model names
 - Default behavior without `--consensus` flag shows individual responses from all models
 - Invalid model names are reported but don't prevent valid models from working
 
-
+### TODO
 - TODO: focus on log file consistency using Logger
 
 
-
+## [0.9.11] 2025-07-31
 - added a cost per 1 million input tokens to available_models query output
 - updated ruby_llm to version 1.4.0
 - updated all other gem dependencies to their latest versions
 
-
+## [0.9.10] 2025-07-18
 - updated ruby_llm-mcp to version 0.6.1 which solves problems with MCP tools not being installed
 
-
+## [0.9.9] 2025-07-10
 - refactored the Session and Config classes into more testable method_missing
 - updated the test suire for both the Session and Config classes
 - added support for MCP servers coming into AIA via the shared_tools gem
@@ -285,13 +348,13 @@ aia --chat -m gpt-4o-mini,gpt-3.5-turbo
 - //available_models now has context window size and capabilities for each model returned
 
 
-
+## [0.9.8] 2025-06-25
 - fixing an issue with pipelined prompts
 - now showing the complete modality of the model on the processing line.
 - changed -p option from prompts_dir to pipeline
 - found problem with simple cov and deep cov w/r/t their reported test coverage; they have problems with heredoc and complex conditionals.
 
-
+## [0.9.7] 2025-06-20
 
 - **NEW FEATURE**: Added `--available_models` CLI option to list all available AI models
 - **NEW FEATURE**: Added `//tools` to show a list of available tools and their description
@@ -302,7 +365,7 @@
 - **DOCUMENTATION**: Updated README for better clarity and structure
 - **DEPENDENCY**: Updated Gemfile.lock with latest dependency versions
 
-
+## [0.9.6] 2025-06-13
 - fixed issue 84 with the //llms directive
 - changed the monkey patch to the RubyLLM::Model::Modalities class at the suggestions of the RubyLLM author in prep for a PR against that gem.
 - added the shared_tools gem - need to add examples on how to use it with the --tools option
@@ -310,11 +373,11 @@
 - added images/aia.png to README.md
 - let claude code rewrite the README.md file. Some details were dropped but overall in reads better. Need to add the details to a wiki or other documentation site.
 
-
+## [0.9.5] 2025-06-04
 - changed the RubyLLM::Modalities class to use method_missing for mode query
 - hunting for why the //llms query directive is not finding image_to_image LLMs.
 
-
+## [0.9.4] 2025-06-03
 - using RubyLLM v1.3.0
 - setting up a docs infrastructure to behave like the ruby_llm gem's guides side
 - fixed bug in the text-to-image workflow
@@ -322,7 +385,7 @@
 - need to pay attention to the test suite
 - also need to ensure the non text2text modes are working
 
-
+## [0.9.3rc1] 2025-05-24
 - using ruby_llm v1.3.0rc1
 - added a models database refresh based on integer days interval with the --refresh option
 - config file now has a "last_refresh" String in format YYYY-MM-DD
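The refresh logic noted for 0.9.3rc1 reduces to a date comparison: re-pull the models database once the configured number of days has elapsed since the stored `last_refresh` string. A minimal sketch under that description (the predicate name is illustrative):

```ruby
require "date"

# Illustrative sketch: --refresh takes an integer days interval, and the
# config file stores last_refresh as a "YYYY-MM-DD" string.
def refresh_due?(last_refresh, refresh_days)
  last_refresh.nil? || Date.parse(last_refresh) + refresh_days <= Date.today
end

refresh_due?("2025-05-01", 7)  # => true once a week has passed since that date
```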
@@ -331,43 +394,41 @@
 - fixed a bug in the prompt_manager gem which is now at v0.5.5
 
 
-
+## [0.9.2] 2025-05-18
 - removing the MCP experiment
 - adding support for RubyLLM::Tool usage in place of the MCP stuff
 - updated prompt_manager to v0.5.4 which fixed shell integration problem
 
-
+## [0.9.1] 2025-05-16
 - rethink MCP approach in favor of just RubyLLM::Tool
 - fixed problem with //clear
 - fixed a problem with a priming prompt in a chat loop
 
-
+## [0.9.0] 2025-05-13
 - Adding experimental MCP Client suppot
 - removed the CLI options --erb and --shell but kept them in the config file with a default of true for both
 
-
+## [0.8.6] 2025-04-23
 - Added a client adapter for the ruby_llm gem
 - Added the adapter config item and the --adapter option to select at runtime which client to use ai_client or ruby_llm
 
-
+## [0.8.5] 2025-04-19
 - documentation updates
 - integrated the https://pure.md web service for inserting web pages into the context window
 - //include http?://example.com/stuff
 - //webpage http?://example.com/stuff
 
-
+## [0.8.2] 2025-04-18
 - fixed problems with pre-loaded context and chat repl
 - piped content into `aia --chat` is now a part of the context/instructions
 - content via "aia --chat < some_file" is added to the context/instructions
 - `aia --chat context_file.txt context_file2.txt` now works
 - `aia --chat prompt_id context)file.txt` also works
 
-
-
-### [0.8.1] 2025-04-17
+## [0.8.1] 2025-04-17
 - bumped version to 0.8.1 after correcting merge conflicts
 
-
+## [0.8.0] WIP - 2025-04-15
 - Updated PromptManager to v0.5.1 which has some of the functionality that was originally developed in the AIA.
 - Enhanced README.md to include a comprehensive table of configuration options with defaults and associated environment variables.
 - Added a note in README.md about the expandability of configuration options from a config file for dynamic prompt generation.
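The 0.9.2 and 0.9.1 entries above replace the MCP experiment with plain `RubyLLM::Tool` classes. For orientation, a minimal tool might look like the following, assuming the description/param/execute DSL from the ruby_llm guides; the tool itself is a made-up example, not one shipped with aia, and the exact DSL signatures should be checked against the ruby_llm version in use:

```ruby
require "ruby_llm"

# Hypothetical example of the RubyLLM::Tool style that 0.9.2 adopts.
class ReadFile < RubyLLM::Tool
  description "Reads a local text file and returns its contents"
  param :path, desc: "Path to the file to read"

  def execute(path:)
    File.read(path)
  rescue StandardError => e
    { error: e.message }   # tools report failures as data, not exceptions
  end
end
```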
@@ -375,7 +436,7 @@
 - Ensured version consistency across `.version`, `aia.gemspec`, and `lib/aia/version.rb`.
 - Verified and updated documentation to ensure readiness for release on RubyGems.org.
 
-
+## [0.7.1] WIP - 2025-03-22
 - Added `UIPresenter` class for handling user interface presentation.
 - Added `DirectiveProcessor` class for processing chat-based directives.
 - Added `HistoryManager` class for managing conversation and variable history.
@@ -389,7 +450,7 @@
 - Improved error handling and user feedback for directive processing.
 - Enhanced logging and output options for chat sessions.
 
-
+## [0.7.0] WIP - 2025-03-17
 - Major code refactoring for better organization and maintainability:
 - Extracted `DirectiveProcessor` class to handle chat-based directives
 - Extracted `HistoryManager` class for conversation and variable history management
@@ -400,13 +461,13 @@
 - Improved output handling to suppress STDOUT when chat mode is off and output file is specified
 - Updated spinner format in the process_prompt method for better user experience
 
-
+## [0.6.?] WIP
 - Implemented Tony Stark's Clean Slate Protocol on the develop branch
 
-
+## [0.5.17] 2024-05-17
 - removed replaced `semver` with `versionaire`
 
-
+## [0.5.16] 2024-04-02
 - fixed prompt pipelines
 - added //next and //pipeline directives as shortcuts to //config [next,pipeline]
 - Added new backend "client" as an internal OpenAI client
@@ -415,83 +476,82 @@
 - Added --voice default: alloy (if "siri" and Mac? then uses cli tool "say")
 - Added --image_size and --image_quality (--is --iq)
 
-
-### [0.5.15] 2024-03-30
+## [0.5.15] 2024-03-30
 - Added the ability to accept piped in text to be appeded to the end of the prompt text: curl $URL | aia ad_hoc
 - Fixed bugs with entering directives as follow-up prompts during a chat session
 
-
+## [0.5.14] 2024-03-09
 - Directly access OpenAI to do text to speech when using the `--speak` option
 - Added --voice to specify which voice to use
 - Added --speech_model to specify which TTS model to use
 
-
+## [0.5.13] 2024-03-03
 - Added CLI-utility `llm` as a backend processor
 
-
+## [0.5.12] 2024-02-24
 - Happy Birthday Ruby!
 - Added --next CLI option
 - Added --pipeline CLI option
 
-
+## [0.5.11] 2024-02-18
 - allow directives to return information that is inserted into the prompt text
 - added //shell command directive
 - added //ruby ruby_code directive
 - added //include path_to_file directive
 
-
+## [0.5.10] 2024-02-03
 - Added --roles_dir to isolate roles from other prompts if desired
 - Changed --prompts to --prompts_dir to be consistent
 - Refactored common fzf usage into its own tool class
 
-
+## [0.5.9] 2024-02-01
 - Added a "I'm working" spinner thing when "--verbose" is used as an indication that the backend is in the process of composing its response to the prompt.
 
-
+## [0.5.8] 2024-01-17
 - Changed the behavior of the --dump option. It must now be followed by path/to/file.ext where ext is a supported config file format: yml, yaml, toml
 
-
+## [0.5.7] 2024-01-15
 - Added ERB processing to the config_file
 
-
+## [0.5.6] 2024-01-15
 - Adding processing for directives, shell integration and erb to the follow up prompt in a chat session
 - some code refactoring.
 
 ## [0.5.3] 2024-01-14
 - adding ability to render markdown to the terminal using the "glow" CLI utility
 
-
+## [0.5.2] 2024-01-13
 - wrap response when its going to the terminal
 
-
+## [0.5.1] 2024-01-12
 - removed a wicked puts "loaded" statement
 - fixed missed code when the options were changed to --out_file and --log_file
 - fixed completion functions by updating $PROMPT_DIR to $AIA_PROMPTS_DIR to match the documentation.
 
-
+## [0.5.0] 2024-01-05
 - breaking changes:
 - changed `--config` to `--config_file`
 - changed `--env` to `--shell`
 - changed `--output` to `--out_file`
 - changed default `out_file` to `STDOUT`
 
-
+## [0.4.3] 2023-12-31
 - added --env to process embedded system environment variables and shell commands within a prompt.
 - added --erb to process Embedded RuBy within a prompt because have embedded shell commands will only get you in a trouble. Having ERB will really get you into trouble. Remember the simple prompt is usually the best prompt.
 
-
+## [0.4.2] 2023-12-31
 - added the --role CLI option to pre-pend a "role" prompt to the front of a primary prompt.
 
-
+## [0.4.1] 2023-12-31
 - added a chat mode
 - prompt directives now supported
 - version bumped to match the `prompt_manager` gem
 
-
+## [0.3.20] 2023-12-28
 - added work around to issue with multiple context files going to the `mods` backend
 - added shellwords gem to santize prompt text on the command line
 
-
+## [0.3.19] 2023-12-26
 - major code refactoring.
 - supports config files \*.yml, \*.yaml and \*.toml
 - usage implemented as a man page. --help will display the man page/
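The ERB pass noted under 0.5.7 above means the config file is run through ERB before the YAML (or TOML) parser sees it, so entries can be computed at load time. A minimal sketch with Ruby's standard library; the loader name and the sample config are hypothetical:

```ruby
require "erb"
require "yaml"

# Illustrative sketch: expand ERB in the config file, then parse the result.
def load_config(path)
  YAML.safe_load(ERB.new(File.read(path)).result)
end

# Hypothetical config file contents that this enables:
#   model: <%= ENV.fetch("AIA_MODEL", "gpt-4o-mini") %>
#   last_refresh: <%= Time.now.strftime("%Y-%m-%d") %>
```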
@@ -504,10 +564,10 @@
 3. envar values over-rides ...
 4. default values
 
-
+## [0.3.0] = 2023-11-23
 
 - Matching version to [prompt_manager](https://github.com/prompt_manager) This version allows for the user of history in the entery of values to prompt keywords. KW_HISTORY_MAX is set at 5. Changed CLI enteraction to use historical selection and editing of prior keyword values.
 
-
+## [0.1.0] - 2023-11-23
 - Initial release
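The precedence ladder that threads through these notes, from 0.3.19's numbered list up to the 0.9.20 statement "CLI inline > CLI flag > Environment variable > Config file", amounts to layered merging where stronger sources override weaker ones. A minimal sketch of that resolution, with hypothetical option names:

```ruby
# Illustrative sketch: later merges win, so list sources weakest to strongest.
def resolve_config(cli:, envars:, config_file:, defaults:)
  defaults.merge(config_file).merge(envars).merge(cli)
end

resolve_config(
  cli:         { model: "gpt-4o=architect" },
  envars:      { out_file: "response.md" },
  config_file: { model: "gpt-4o-mini", chat: false },
  defaults:    { chat: false, out_file: "STDOUT" }
)
# => {chat: false, out_file: "response.md", model: "gpt-4o=architect"}
```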