aia 0.9.19 → 0.9.21
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/.version +1 -1
- data/CHANGELOG.md +158 -91
- data/README.md +128 -3
- data/docs/cli-reference.md +71 -4
- data/docs/guides/models.md +196 -1
- data/lib/aia/config/base.rb +6 -1
- data/lib/aia/config/cli_parser.rb +116 -2
- data/lib/aia/config/file_loader.rb +33 -1
- data/lib/aia/directives/configuration.rb +3 -2
- data/lib/aia/prompt_handler.rb +22 -1
- data/lib/aia/ruby_llm_adapter.rb +134 -30
- data/lib/aia/session.rb +24 -8
- data/lib/aia/utility.rb +19 -1
- metadata +1 -1
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 2d974eb8f23ef8b3af71447951101047c7bdf04a203d4690655b8bfbe940fb2f
+  data.tar.gz: d63ff74f8837d642a454e116e2efb29f5a57d2d3e37a0e1181378d93d15f4660
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 1c20c13f8d2c80abcc1dafc0401d6bfb3c1c1045ff4dc0821b3de256ba59241938a076f0cb74583758e1bbadd2e31cd933a2137440eee1a78e4adea794e66ea6
+  data.tar.gz: f7b6d1301362e22ccb3ef07956f5a9fe3766504ec23a2df82b420ab330b2e628406a1ce98b43dced48ab4142d49ccc5f7737f6fef0e9c9bded5d4ecda8626c8a
data/.version
CHANGED
@@ -1 +1 @@
-0.9.19
+0.9.21
data/CHANGELOG.md
CHANGED
@@ -1,14 +1,85 @@
 # Changelog
 ## [Unreleased]
 
-
-
-
+## [0.9.21] 2025-10-08
+### Bug Fixes
+- **Checkpoint Directive Output**: Fixed `//checkpoint` directive to return empty string instead of status message (lib/aia/directives/configuration.rb:155)
+  - Prevents checkpoint creation messages from entering AI context
+  - Outputs confirmation to STDOUT instead for user feedback
+  - Prevents potential AI manipulation of checkpoint system
+
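A minimal Ruby sketch of the pattern described above: print the confirmation for the user and return nothing for the model. The method and helper names are illustrative, not the gem's actual code.

```ruby
# Hypothetical sketch of the //checkpoint fix: confirmation goes to STDOUT
# for the user, while the directive's return value (what gets spliced into
# the prompt) is an empty string, so the AI context stays clean.
def checkpoint(args = [], context_manager = nil)
  name = (args.first || "1").to_s           # auto-naming is assumed
  context_manager&.create_checkpoint(name)  # create_checkpoint is assumed
  puts "Checkpoint '#{name}' created."      # user feedback only
  ""                                        # nothing enters the AI context
end
```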
+## [0.9.20] 2025-10-06
+### Added
+- **Enhanced Multi-Model Role System (ADR-005)**: Implemented per-model role assignment with inline syntax
+  - New inline syntax: `--model MODEL[=ROLE][,MODEL[=ROLE]]...`
+  - Example: `aia --model gpt-4o=architect,claude=security,gemini=performance design_doc.md`
+  - Support for duplicate models with different roles: `gpt-4o=optimist,gpt-4o=pessimist,gpt-4o=realist`
+  - Added `--list-roles` command to discover available role files
+  - Display format shows instance numbers and roles: `gpt-4o #1 (optimist):`, `gpt-4o #2 (pessimist):`
+  - Consensus mode drops role for neutral synthesis
+  - Chat mode roles are immutable during session
+  - Nested role path support: `--model gpt-4o=specialized/architect`
+  - Full backward compatibility with existing `--role` flag
+
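A sketch of how the inline syntax could map to per-instance specs. This illustrates the documented behavior; the gem's actual `parse_models_with_roles` and its `internal_id` scheme may differ.

```ruby
# Split "model[=role]" entries and number duplicate models so each
# instance gets its own spec, per the display format "gpt-4o #1 (optimist)".
def parse_models_with_roles(arg)
  counts = Hash.new(0)
  arg.split(",").map do |entry|
    model, role = entry.split("=", 2)       # role is nil when omitted
    counts[model] += 1
    { model: model, role: role, instance: counts[model],
      internal_id: "#{model}##{counts[model]}" }  # id format is an assumption
  end
end

parse_models_with_roles("gpt-4o=optimist,gpt-4o=pessimist").map { |s| s[:internal_id] }
# => ["gpt-4o#1", "gpt-4o#2"]
```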
+- **Config File Model Roles Support (ADR-005 v2)**:
+  - Enhanced `model` key in config files to support array of hashes with roles
+  - Format: `model: [{model: gpt-4o, role: architect}, {model: claude, role: security}]`
+  - Mirrors internal data structure (array of hashes with `model`, `role`, `instance`, `internal_id`)
+  - Supports models without roles: `model: [{model: gpt-4o}]`
+  - Enables reusable model-role setups across sessions
+  - Configuration precedence: CLI inline > CLI flag > Environment variable > Config file
+
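The config-file format above is plain YAML; parsed, it yields the array-of-hashes shape the changelog describes. The snippet below only demonstrates that shape and is not code from the gem.

```ruby
require "yaml"

# The documented `model` key, parsed: an array of hashes mirroring the
# internal spec; `role` is optional per entry.
config = YAML.safe_load(<<~YAML)
  model:
    - {model: gpt-4o, role: architect}
    - {model: claude, role: security}
    - {model: gpt-4o}
YAML

config["model"]
# => [{"model"=>"gpt-4o", "role"=>"architect"},
#     {"model"=>"claude", "role"=>"security"},
#     {"model"=>"gpt-4o"}]
```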
+- **Environment Variable Inline Syntax (ADR-005 v2)**:
+  - Added support for inline role syntax in `AIA_MODEL` environment variable
+  - Example: `export AIA_MODEL="gpt-4o=architect,claude=security,gemini=performance"`
+  - Maintains backward compatibility with simple comma-separated model lists
+  - Detects `=` to distinguish between formats
+
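The `=` detection rule is small enough to sketch. The changelog points at `envar_options` (lib/aia/config/base.rb:212-217) as the real location; the method below is a hypothetical stand-in.

```ruby
# Sketch of the documented rule: any "=" in AIA_MODEL means inline
# model=role syntax; otherwise it is a plain comma-separated model list.
def models_from_env(value = ENV["AIA_MODEL"])
  return [] if value.to_s.empty?
  if value.include?("=")
    parse_models_with_roles(value)   # inline syntax (see sketch above)
  else
    value.split(",")                 # legacy format, unchanged behavior
  end
end
```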
+### Bug Fixes
+- **Multi-Model Chat Cross-Talk**: Fixed bug where model instances with different roles could see each other's conversation history
+  - Updated Session to properly extract `internal_id` from hash-based model specs (lib/aia/session.rb:47-68)
+  - Fixed `parse_multi_model_response` to normalize display names to internal IDs (lib/aia/session.rb:538-570)
+  - Each model instance now maintains completely isolated conversation context
+  - Fixes issue where models would respond as if aware of other models' perspectives
+
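The isolation invariant behind this fix is easy to picture: history is keyed by `internal_id`, never by display name. A minimal sketch, with plain hashes standing in for the gem's ContextManager objects:

```ruby
# Two instances of the same model must never share history; keying every
# conversation by internal_id guarantees that, even for duplicates.
histories = Hash.new { |h, k| h[k] = [] }

histories["gpt-4o#1"] << { role: "assistant", content: "optimist reply" }
histories["gpt-4o#2"] << { role: "assistant", content: "pessimist reply" }

histories["gpt-4o#1"].size  # => 1 -- the other instance's reply is invisible
```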
+### Improvements
+- **Robot ASCII Display**: Updated `robot` method to extract and display only model names from new hash format (lib/aia/utility.rb:24-53)
+  - Handles string, array of strings, and array of hashes formats
+  - Shows clean model list: "gpt-4o, claude, gemini" instead of hash objects
+
+### Testing
+- Added comprehensive test suite for config file and environment variable model roles
+  - test/aia/config_model_roles_test.rb: 8 tests covering array processing, env var parsing, YAML config files
+- Added 15 tests for role parsing with inline syntax (test/aia/role_parsing_test.rb)
+- Fixed Mocha test cleanup in multi_model_isolation_test.rb
+- Full test suite: 306 runs, 980 assertions, 0 failures (1 pre-existing Mocha isolation issue)
+
+### Technical Implementation
+- Modified `config.model` to support array of hashes with model metadata: `{model:, role:, instance:, internal_id:}`
+- Added `parse_models_with_roles` method with fail-fast validation (lib/aia/config/cli_parser.rb)
+- Added `validate_role_exists` with helpful error messages showing available roles
+- Added `list_available_roles` and `list_available_role_names` methods for role discovery
+- Added `load_role_for_model` method to PromptHandler for per-model role loading (lib/aia/prompt_handler.rb)
+- Enhanced RubyLLMAdapter to handle hash-based model specs and prepend roles per model
+  - Added `extract_model_names` to extract model names from specs
+  - Added `get_model_spec` to retrieve full spec by internal_id
+  - Added `prepend_model_role` to inject role content into prompts
+  - Added `format_model_display_name` for consistent display formatting
+- Updated Session initialization to handle hash-based model specs for context managers
+- Updated display formatting to show instance numbers and roles
+- Maintained backward compatibility with string/array model configurations
+- Added `process_model_array_with_roles` method in FileLoader (lib/aia/config/file_loader.rb:91-116)
+- Enhanced `apply_file_config_to_struct` to detect and process model arrays with role hashes
+- Enhanced `envar_options` to parse inline syntax for `:model` key (lib/aia/config/base.rb:212-217)
+
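Among the helpers listed above, `validate_role_exists` is worth a sketch of the fail-fast behavior. The roles directory layout and `.txt` extension here are assumptions; the gem's actual lookup rules may differ.

```ruby
# Fail fast with a helpful error that lists the available roles.
def validate_role_exists(role, roles_dir)
  path = File.join(roles_dir, "#{role}.txt")   # extension is assumed
  return path if File.exist?(path)

  available = Dir.glob(File.join(roles_dir, "**", "*.txt"))
                 .map { |f| f.delete_prefix("#{roles_dir}/").delete_suffix(".txt") }
  abort "Unknown role '#{role}'. Available roles: #{available.sort.join(', ')}"
end

# Nested paths such as "specialized/architect" resolve naturally via File.join.
```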
+## [0.9.19] 2025-10-06
+
+### Bug Fixes
 - **CRITICAL BUG FIX**: Fixed multi-model cross-talk issue (#118) where models could see each other's conversation history
 - **BUG FIX**: Implemented complete two-level context isolation to prevent models from contaminating each other's responses
 - **BUG FIX**: Fixed token count inflation caused by models processing combined conversation histories
 
-
+### Technical Changes
 - **Level 1 (Library)**: Implemented per-model RubyLLM::Context isolation - each model now has its own Context instance (lib/aia/ruby_llm_adapter.rb)
 - **Level 2 (Application)**: Implemented per-model ContextManager isolation - each model maintains its own conversation history (lib/aia/session.rb)
 - Added `parse_multi_model_response` method to extract individual model responses from combined output (lib/aia/session.rb:502-533)
@@ -23,17 +94,17 @@
 - Added comprehensive test coverage with 6 new tests for multi-model isolation
 - Updated LocalProvidersTest to reflect Context-based architecture
 
-
+### Architecture
 - **ADR-002-revised**: Complete Multi-Model Isolation (see `.architecture/decisions/adrs/ADR-002-revised-multi-model-isolation.md`)
 - Eliminated global state dependencies in multi-model chat sessions
 - Maintained backward compatibility with single-model mode (verified with tests)
 
-
+### Test Coverage
 - Added `test/aia/multi_model_isolation_test.rb` with comprehensive isolation tests
 - Tests cover: response parsing, per-model context managers, single-model compatibility, RubyLLM::Context isolation
 - Full test suite: 282 runs, 837 assertions, 0 failures, 0 errors, 13 skips ✅
 
-
+### Expected Behavior After Fix
 Previously, when running multi-model chat with repeated prompts:
 - ❌ Models would see BOTH their own AND other models' responses
 - ❌ Models would report inflated counts (e.g., "5 times", "6 times" instead of "3 times")
@@ -44,7 +115,7 @@ Now with the fix:
 - ✅ Each model correctly reports its own interaction count
 - ✅ Token counts accurately reflect per-model conversation size
 
-
+### Usage Examples
 ```bash
 # Multi-model chat now properly isolates each model's context
 bin/aia --chat --model lms/openai/gpt-oss-20b,ollama/gpt-oss:20b --metrics
@@ -64,47 +135,47 @@ bin/aia --chat --model lms/openai/gpt-oss-20b,ollama/gpt-oss:20b --metrics
 # (Previously: LMS would say "5 times", Ollama "6 times" due to cross-talk)
 ```
 
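`parse_multi_model_response` has to split one combined answer back into per-model chunks. A hedged sketch of that idea, keying on the `model #N (role):` header style shown earlier; the actual header format and the code in lib/aia/session.rb may differ.

```ruby
# Split a combined multi-model response into { "gpt-4o #1" => text, ... }.
# The header regexp is an assumption based on the documented display format.
def parse_multi_model_response(text)
  chunks  = Hash.new { |h, k| h[k] = +"" }
  current = nil
  text.each_line do |line|
    if (m = line.match(/\A(\S+ #\d+)(?: \([^)]+\))?:\s*\z/))
      current = m[1]            # e.g. "gpt-4o #1"
    elsif current
      chunks[current] << line
    end
  end
  chunks
end
```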
-
+## [0.9.18] 2025-10-05
 
-
+### Bug Fixes
 - **BUG FIX**: Fixed RubyLLM provider error parsing to handle both OpenAI and LM Studio error formats
 - **BUG FIX**: Fixed "String does not have #dig method" errors when parsing error responses from local providers
 - **BUG FIX**: Enhanced error parsing to gracefully handle malformed JSON responses
 
-
+### Improvements
 - **ENHANCEMENT**: Removed debug output statements from RubyLLMAdapter for cleaner production logs
 - **ENHANCEMENT**: Improved error handling with debug logging for JSON parsing failures
 
-
+### Documentation
 - **DOCUMENTATION**: Added Local Models entry to MkDocs navigation for better documentation accessibility
 
-
+### Technical Changes
 - Enhanced provider_fix extension to support multiple error response formats (lib/extensions/ruby_llm/provider_fix.rb)
 - Cleaned up debug puts statements from RubyLLMAdapter and provider_fix
 - Added robust JSON parsing with fallback error handling
 
-
+## [0.9.17] 2025-10-04
 
-
+### New Features
 - **NEW FEATURE**: Enhanced local model support with comprehensive validation and error handling
 - **NEW FEATURE**: Added `lms/` prefix support for LM Studio models with automatic validation against loaded models
 - **NEW FEATURE**: Enhanced `//models` directive to auto-detect and display local providers (Ollama and LM Studio)
 - **NEW FEATURE**: Added model name prefix display in error messages for LM Studio (`lms/` prefix)
 
-
+### Improvements
 - **ENHANCEMENT**: Improved LM Studio integration with model validation against `/v1/models` endpoint
 - **ENHANCEMENT**: Enhanced error messages showing exact model names with correct prefixes when validation fails
 - **ENHANCEMENT**: Added environment variable support for custom LM Studio API base (`LMS_API_BASE`)
 - **ENHANCEMENT**: Improved `//models` directive output formatting for local models with size and modified date for Ollama
 - **ENHANCEMENT**: Enhanced multi-model support to seamlessly mix local and cloud models
 
-
+### Documentation
 - **DOCUMENTATION**: Added comprehensive local model documentation to README.md
 - **DOCUMENTATION**: Created new docs/guides/local-models.md guide covering Ollama and LM Studio setup, usage, and troubleshooting
 - **DOCUMENTATION**: Updated docs/guides/models.md with local provider sections including comparison table and workflow examples
 - **DOCUMENTATION**: Enhanced docs/faq.md with 5 new FAQ entries covering local model usage, differences, and error handling
 
-
+### Technical Changes
 - Enhanced RubyLLMAdapter with LM Studio model validation (lib/aia/ruby_llm_adapter.rb)
 - Updated models directive to query local provider endpoints (lib/aia/directives/models.rb)
 - Added provider_fix extension for RubyLLM compatibility (lib/extensions/ruby_llm/provider_fix.rb)
@@ -112,12 +183,12 @@ bin/aia --chat --model lms/openai/gpt-oss-20b,ollama/gpt-oss:20b --metrics
 - Updated dependencies: ruby_llm, webmock, crack, rexml
 - Bumped Ruby bundler version to 2.7.2
 
-
+### Bug Fixes
 - **BUG FIX**: Fixed missing `lms/` prefix in LM Studio model listings
 - **BUG FIX**: Fixed model validation error messages to show usable model names with correct prefixes
 - **BUG FIX**: Fixed Ollama endpoint to use native API (removed incorrect `/v1` suffix)
 
-
+### Usage Examples
 ```bash
 # Use LM Studio with validation
 aia --model lms/qwen/qwen3-coder-30b my_prompt
@@ -133,75 +204,74 @@ aia --model ollama/llama3.2 --chat
 > //models
 ```
 
-
+## [0.9.16] 2025-09-26
 
-
+### New Features
 - **NEW FEATURE**: Added support for Ollama AI provider
 - **NEW FEATURE**: Added support for Osaurus AI provider
 - **NEW FEATURE**: Added support for LM Studio AI provider
 
-
+### Improvements
 - **ENHANCEMENT**: Expanded AI provider ecosystem with three new local/self-hosted model options
 - **ENHANCEMENT**: Improved flexibility for users preferring local LLM deployments
 
-##
-### [0.9.15] 2025-09-21
+## [0.9.15] 2025-09-21
 
-
+### New Features
 - **NEW FEATURE**: Added `//paste` directive to insert clipboard contents into prompts
 - **NEW FEATURE**: Added `//clipboard` alias for the paste directive
 
-
+### Technical Changes
 - Enhanced DirectiveProcessor with clipboard integration using the clipboard gem
 - Added comprehensive test coverage for paste directive functionality
 
-
+## [0.9.14] 2025-09-19
 
-
+### New Features
 - **NEW FEATURE**: Added `//checkpoint` directive to create named snapshots of conversation context
 - **NEW FEATURE**: Added `//restore` directive to restore context to a previous checkpoint
 - **NEW FEATURE**: Enhanced `//context` (and `//review`) directive to display checkpoint markers in conversation history
 - **NEW FEATURE**: Added `//cp` alias for checkpoint directive
 
-
+### Improvements
 - **ENHANCEMENT**: Context manager now tracks checkpoint positions for better context visualization
 - **ENHANCEMENT**: Checkpoint system uses auto-incrementing integer names when no name is provided
 - **ENHANCEMENT**: Restore directive defaults to last checkpoint when no name specified
 - **ENHANCEMENT**: Clear context now also clears all checkpoints
 
-
+### Bug Fixes
 - **BUG FIX**: Fixed `//help` directive that was showing empty list of directives
 - **BUG FIX**: Help directive now displays all directives from all registered modules
 - **BUG FIX**: Help directive now shows proper descriptions and aliases for all directives
 - **BUG FIX**: Help directive organizes directives by category for better readability
 
-
+### Technical Changes
 - Enhanced ContextManager with checkpoint storage and restoration capabilities
 - Added checkpoint_positions method to track checkpoint locations in context
 - Refactored help directive to collect directives from all registered modules
 - Added comprehensive test coverage for checkpoint and restore functionality
 
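The 0.9.14 checkpoint mechanics above (auto-incrementing integer names, restore defaults to the most recent checkpoint, clear wipes checkpoints) fit in a small sketch; this is illustrative, not the gem's ContextManager.

```ruby
# Checkpoint bookkeeping: remember the context length at each checkpoint;
# restoring truncates the conversation back to that point.
class ContextSketch
  def initialize
    @context     = []
    @checkpoints = {}
  end

  def checkpoint(name = nil)
    name ||= (@checkpoints.keys.grep(/\A\d+\z/).map(&:to_i).max.to_i + 1).to_s
    @checkpoints[name] = @context.length
    name
  end

  def restore(name = nil)
    name ||= @checkpoints.keys.last     # default: most recent checkpoint
    @context.slice!(@checkpoints.fetch(name)..)
  end

  def clear
    @context.clear
    @checkpoints.clear                  # clearing context drops checkpoints
  end
end
```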
-
-
+## [0.9.13] 2025-09-02
+### New Features
 - **NEW FEATURE**: Added `--metrics` flag to show token counts for each model
 - **NEW FEATURE**: Added `--cost` flag to enable cost estimation for each model
 
-
+### Improvements
 - **DEPENDENCY**: Removed versionaire dependency, simplifying version management
 - **ENHANCEMENT**: Improved test suite reliability and coverage
 - **ENHANCEMENT**: Updated Gemfile.lock with latest dependency versions
 
-
+### Bug Fixes
 - **BUG FIX**: Fixed version handling issues by removing external versioning dependency
 
-
+### Technical Changes
 - Simplified version management by removing versionaire gem
 - Enhanced test suite with improved assertions and coverage
 - Updated various gem dependencies to latest stable versions
 
-
+## [0.9.12] 2025-08-28
 
-
+### New Features
 - **MAJOR NEW FEATURE**: Multi-model support - specify multiple AI models simultaneously with comma-separated syntax
 - **NEW FEATURE**: `--consensus` flag to enable primary model consensus mode for synthesized responses from multiple models
 - **NEW FEATURE**: `--no-consensus` flag to explicitly force individual responses from all models
@@ -212,21 +282,21 @@ aia --model ollama/llama3.2 --chat
 - **NEW FEATURE**: Multi-model support in both batch and interactive chat modes
 - **NEW FEATURE**: Comprehensive documentation website https://madbomber.github.io/aia/
 
-
+### Improvements
 - **ENHANCEMENT**: Enhanced `//model` directive output with detailed multi-model configuration display
 - **ENHANCEMENT**: Improved error handling with graceful fallback when model initialization fails
 - **ENHANCEMENT**: Better TTY handling in chat mode to prevent `Errno::ENXIO` errors in containerized environments
 - **ENHANCEMENT**: Updated directive processor to use new module-based architecture for better maintainability
 - **ENHANCEMENT**: Improved batch mode output file formatting consistency between STDOUT and file output
 
-
+### Bug Fixes
 - **BUG FIX**: Fixed DirectiveProcessor TypeError that prevented application startup with invalid directive calls
 - **BUG FIX**: Fixed missing primary model output in batch mode output files
 - **BUG FIX**: Fixed inconsistent formatting between STDOUT and file output in batch mode
 - **BUG FIX**: Fixed TTY availability issues in chat mode for containerized environments
 - **BUG FIX**: Fixed directive processing to use updated module-based registry system
 
-
+### Technical Changes
 - Fixed ruby_llm version to 1.5.1
 - Added extra API_KEY configuration for new LLM providers
 - Updated RubyLLMAdapter to support multiple model initialization and management
@@ -235,13 +305,13 @@ aia --model ollama/llama3.2 --chat
 - Updated CLI parser to support multi-model flags and options
 - Enhanced configuration system to support consensus mode settings
 
-
+### Documentation
 - **DOCUMENTATION**: Comprehensive README.md updates with multi-model usage examples and best practices
 - **DOCUMENTATION**: Added multi-model section to README with detailed usage instructions
 - **DOCUMENTATION**: Updated command-line options table with new multi-model flags
 - **DOCUMENTATION**: Added practical multi-model examples for decision-making scenarios
 
-
+### Usage Examples
 ```bash
 # Basic multi-model usage
 aia my_prompt -m gpt-4o-mini,gpt-3.5-turbo
@@ -256,25 +326,25 @@ aia --chat -m gpt-4o-mini,gpt-3.5-turbo
 //model # Use in any prompt or chat session
 ```
 
-
+### Migration Notes
 - Existing single-model usage remains unchanged and fully compatible
 - Multi-model is opt-in via comma-separated model names
 - Default behavior without `--consensus` flag shows individual responses from all models
 - Invalid model names are reported but don't prevent valid models from working
 
-
+### TODO
 - TODO: focus on log file consistency using Logger
 
 
-
+## [0.9.11] 2025-07-31
 - added a cost per 1 million input tokens to available_models query output
 - updated ruby_llm to version 1.4.0
 - updated all other gem dependencies to their latest versions
 
-
+## [0.9.10] 2025-07-18
 - updated ruby_llm-mcp to version 0.6.1 which solves problems with MCP tools not being installed
 
-
+## [0.9.9] 2025-07-10
 - refactored the Session and Config classes into more testable method_missing
 - updated the test suite for both the Session and Config classes
 - added support for MCP servers coming into AIA via the shared_tools gem
@@ -285,13 +355,13 @@ aia --chat -m gpt-4o-mini,gpt-3.5-turbo
 - //available_models now has context window size and capabilities for each model returned
 
 
-
+## [0.9.8] 2025-06-25
 - fixing an issue with pipelined prompts
 - now showing the complete modality of the model on the processing line.
 - changed -p option from prompts_dir to pipeline
 - found problem with simple cov and deep cov w/r/t their reported test coverage; they have problems with heredoc and complex conditionals.
 
-
+## [0.9.7] 2025-06-20
 
 - **NEW FEATURE**: Added `--available_models` CLI option to list all available AI models
 - **NEW FEATURE**: Added `//tools` to show a list of available tools and their description
@@ -302,7 +372,7 @@ aia --chat -m gpt-4o-mini,gpt-3.5-turbo
 - **DOCUMENTATION**: Updated README for better clarity and structure
 - **DEPENDENCY**: Updated Gemfile.lock with latest dependency versions
 
-
+## [0.9.6] 2025-06-13
 - fixed issue 84 with the //llms directive
 - changed the monkey patch to the RubyLLM::Model::Modalities class at the suggestion of the RubyLLM author in prep for a PR against that gem.
 - added the shared_tools gem - need to add examples on how to use it with the --tools option
@@ -310,11 +380,11 @@ aia --chat -m gpt-4o-mini,gpt-3.5-turbo
 - added images/aia.png to README.md
 - let claude code rewrite the README.md file. Some details were dropped but overall it reads better. Need to add the details to a wiki or other documentation site.
 
-
+## [0.9.5] 2025-06-04
 - changed the RubyLLM::Modalities class to use method_missing for mode query
 - hunting for why the //llms query directive is not finding image_to_image LLMs.
 
-
+## [0.9.4] 2025-06-03
 - using RubyLLM v1.3.0
 - setting up a docs infrastructure to behave like the ruby_llm gem's guides site
 - fixed bug in the text-to-image workflow
@@ -322,7 +392,7 @@ aia --chat -m gpt-4o-mini,gpt-3.5-turbo
 - need to pay attention to the test suite
 - also need to ensure the non text2text modes are working
 
-
+## [0.9.3rc1] 2025-05-24
 - using ruby_llm v1.3.0rc1
 - added a models database refresh based on integer days interval with the --refresh option
 - config file now has a "last_refresh" String in format YYYY-MM-DD
@@ -331,43 +401,41 @@ aia --chat -m gpt-4o-mini,gpt-3.5-turbo
 - fixed a bug in the prompt_manager gem which is now at v0.5.5
 
 
-
+## [0.9.2] 2025-05-18
 - removing the MCP experiment
 - adding support for RubyLLM::Tool usage in place of the MCP stuff
 - updated prompt_manager to v0.5.4 which fixed shell integration problem
 
-
+## [0.9.1] 2025-05-16
 - rethink MCP approach in favor of just RubyLLM::Tool
 - fixed problem with //clear
 - fixed a problem with a priming prompt in a chat loop
 
-
+## [0.9.0] 2025-05-13
 - Adding experimental MCP Client support
 - removed the CLI options --erb and --shell but kept them in the config file with a default of true for both
 
-
+## [0.8.6] 2025-04-23
 - Added a client adapter for the ruby_llm gem
 - Added the adapter config item and the --adapter option to select at runtime which client to use ai_client or ruby_llm
 
-
+## [0.8.5] 2025-04-19
 - documentation updates
 - integrated the https://pure.md web service for inserting web pages into the context window
 - //include http?://example.com/stuff
 - //webpage http?://example.com/stuff
 
-
+## [0.8.2] 2025-04-18
 - fixed problems with pre-loaded context and chat repl
 - piped content into `aia --chat` is now a part of the context/instructions
 - content via "aia --chat < some_file" is added to the context/instructions
 - `aia --chat context_file.txt context_file2.txt` now works
 - `aia --chat prompt_id context_file.txt` also works
 
-
-
-### [0.8.1] 2025-04-17
+## [0.8.1] 2025-04-17
 - bumped version to 0.8.1 after correcting merge conflicts
 
-
+## [0.8.0] WIP - 2025-04-15
 - Updated PromptManager to v0.5.1 which has some of the functionality that was originally developed in the AIA.
 - Enhanced README.md to include a comprehensive table of configuration options with defaults and associated environment variables.
 - Added a note in README.md about the expandability of configuration options from a config file for dynamic prompt generation.
@@ -375,7 +443,7 @@ aia --chat -m gpt-4o-mini,gpt-3.5-turbo
 - Ensured version consistency across `.version`, `aia.gemspec`, and `lib/aia/version.rb`.
 - Verified and updated documentation to ensure readiness for release on RubyGems.org.
 
-
+## [0.7.1] WIP - 2025-03-22
 - Added `UIPresenter` class for handling user interface presentation.
 - Added `DirectiveProcessor` class for processing chat-based directives.
 - Added `HistoryManager` class for managing conversation and variable history.
@@ -389,7 +457,7 @@ aia --chat -m gpt-4o-mini,gpt-3.5-turbo
 - Improved error handling and user feedback for directive processing.
 - Enhanced logging and output options for chat sessions.
 
-
+## [0.7.0] WIP - 2025-03-17
 - Major code refactoring for better organization and maintainability:
 - Extracted `DirectiveProcessor` class to handle chat-based directives
 - Extracted `HistoryManager` class for conversation and variable history management
@@ -400,13 +468,13 @@ aia --chat -m gpt-4o-mini,gpt-3.5-turbo
 - Improved output handling to suppress STDOUT when chat mode is off and output file is specified
 - Updated spinner format in the process_prompt method for better user experience
 
-
+## [0.6.?] WIP
 - Implemented Tony Stark's Clean Slate Protocol on the develop branch
 
-
+## [0.5.17] 2024-05-17
 - replaced `semver` with `versionaire`
 
-
+## [0.5.16] 2024-04-02
 - fixed prompt pipelines
 - added //next and //pipeline directives as shortcuts to //config [next,pipeline]
 - Added new backend "client" as an internal OpenAI client
@@ -415,83 +483,82 @@ aia --chat -m gpt-4o-mini,gpt-3.5-turbo
 - Added --voice default: alloy (if "siri" and Mac? then uses cli tool "say")
 - Added --image_size and --image_quality (--is --iq)
 
-
-### [0.5.15] 2024-03-30
+## [0.5.15] 2024-03-30
 - Added the ability to accept piped in text to be appended to the end of the prompt text: curl $URL | aia ad_hoc
 - Fixed bugs with entering directives as follow-up prompts during a chat session
 
-
+## [0.5.14] 2024-03-09
 - Directly access OpenAI to do text to speech when using the `--speak` option
 - Added --voice to specify which voice to use
 - Added --speech_model to specify which TTS model to use
 
-
+## [0.5.13] 2024-03-03
 - Added CLI-utility `llm` as a backend processor
 
-
+## [0.5.12] 2024-02-24
 - Happy Birthday Ruby!
 - Added --next CLI option
 - Added --pipeline CLI option
 
-
+## [0.5.11] 2024-02-18
 - allow directives to return information that is inserted into the prompt text
 - added //shell command directive
 - added //ruby ruby_code directive
 - added //include path_to_file directive
 
-
+## [0.5.10] 2024-02-03
 - Added --roles_dir to isolate roles from other prompts if desired
 - Changed --prompts to --prompts_dir to be consistent
 - Refactored common fzf usage into its own tool class
 
-
+## [0.5.9] 2024-02-01
 - Added an "I'm working" spinner thing when "--verbose" is used as an indication that the backend is in the process of composing its response to the prompt.
 
-
+## [0.5.8] 2024-01-17
 - Changed the behavior of the --dump option. It must now be followed by path/to/file.ext where ext is a supported config file format: yml, yaml, toml
 
-
+## [0.5.7] 2024-01-15
 - Added ERB processing to the config_file
 
-
+## [0.5.6] 2024-01-15
 - Adding processing for directives, shell integration and erb to the follow up prompt in a chat session
 - some code refactoring.
 
 ## [0.5.3] 2024-01-14
 - adding ability to render markdown to the terminal using the "glow" CLI utility
 
-
+## [0.5.2] 2024-01-13
 - wrap response when it's going to the terminal
 
-
+## [0.5.1] 2024-01-12
 - removed a wicked puts "loaded" statement
 - fixed missed code when the options were changed to --out_file and --log_file
 - fixed completion functions by updating $PROMPT_DIR to $AIA_PROMPTS_DIR to match the documentation.
 
-
+## [0.5.0] 2024-01-05
 - breaking changes:
 - changed `--config` to `--config_file`
 - changed `--env` to `--shell`
 - changed `--output` to `--out_file`
 - changed default `out_file` to `STDOUT`
 
-
+## [0.4.3] 2023-12-31
 - added --env to process embedded system environment variables and shell commands within a prompt.
 - added --erb to process Embedded RuBy within a prompt because having embedded shell commands will only get you in trouble. Having ERB will really get you into trouble. Remember the simple prompt is usually the best prompt.
 
-
+## [0.4.2] 2023-12-31
 - added the --role CLI option to pre-pend a "role" prompt to the front of a primary prompt.
 
-
+## [0.4.1] 2023-12-31
 - added a chat mode
 - prompt directives now supported
 - version bumped to match the `prompt_manager` gem
 
-
+## [0.3.20] 2023-12-28
 - added workaround to issue with multiple context files going to the `mods` backend
 - added shellwords gem to sanitize prompt text on the command line
 
-
+## [0.3.19] 2023-12-26
 - major code refactoring.
 - supports config files \*.yml, \*.yaml and \*.toml
 - usage implemented as a man page. --help will display the man page.
@@ -504,10 +571,10 @@ aia --chat -m gpt-4o-mini,gpt-3.5-turbo
 3. envar values over-ride ...
 4. default values
 
-
+## [0.3.0] - 2023-11-23
 
 - Matching version to [prompt_manager](https://github.com/prompt_manager). This version allows for the use of history in the entry of values to prompt keywords. KW_HISTORY_MAX is set at 5. Changed CLI interaction to use historical selection and editing of prior keyword values.
 
-
+## [0.1.0] - 2023-11-23
 
 - Initial release