aia 0.9.19 → 0.9.21

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
data/README.md CHANGED
@@ -1,7 +1,8 @@
1
1
  <div align="center">
2
2
  <h1>AI Assistant (AIA)</h1>
3
3
  <img src="docs/assets/images/aia.png" alt="Robot waiter ready to take your order."><br />
4
- **The Prompt is the Code**
4
+ **The Prompt is the Code**<br />
5
+ <p>Check out the new <a href="http://madbomber.github.io/aia/guides/models/?h=inline+role+syntax#inline-role-syntax">Inline Role Syntax</a> when working with multiple concurrent models.</p>
5
6
  </div>
6
7
 
7
8
  AIA is a command-line utility that facilitates interaction with AI models through dynamic prompt management. It automates the management of pre-compositional prompts and executes generative AI commands with enhanced features including embedded directives, shell integration, embedded Ruby, history management, interactive chat, and prompt workflows.
@@ -202,10 +203,11 @@ aia --fuzzy
202
203
  | Option | Description | Example |
203
204
  |--------|-------------|---------|
204
205
  | `--chat` | Start interactive chat session | `aia --chat` |
205
- | `--model MODEL` | Specify AI model(s) to use | `aia --model gpt-4o-mini,gpt-3.5-turbo` |
206
+ | `--model MODEL` | Specify AI model(s) to use. Supports `MODEL[=ROLE]` syntax | `aia --model gpt-4o-mini,gpt-3.5-turbo` or `aia --model gpt-4o=architect,claude=security` |
206
207
  | `--consensus` | Enable consensus mode for multi-model | `aia --consensus` |
207
208
  | `--no-consensus` | Force individual responses | `aia --no-consensus` |
208
- | `--role ROLE` | Use a role/system prompt | `aia --role expert` |
209
+ | `--role ROLE` | Use a role/system prompt (default for all models) | `aia --role expert` |
210
+ | `--list-roles` | List available role files | `aia --list-roles` |
209
211
  | `--out_file FILE` | Specify output file | `aia --out_file results.md` |
210
212
  | `--fuzzy` | Use fuzzy search for prompts | `aia --fuzzy` |
211
213
  | `--help` | Show complete help | `aia --help` |
@@ -759,6 +761,129 @@ Provide specific, actionable feedback.
759
761
  EOF
760
762
  ```
761
763
 
764
+ **Per-Model Roles** (Multi-Model Role Assignment):
765
+
766
+ Assign different roles to different models using inline `model=role` syntax:
767
+
768
+ ```bash
769
+ # Different perspectives on the same design
770
+ aia --model gpt-4o=architect,claude=security,gemini=performance design_doc.md
771
+
772
+ # Output shows each model with its role:
773
+ # from: gpt-4o (architect)
774
+ # The proposed microservices architecture provides good separation...
775
+ #
776
+ # from: claude (security)
777
+ # I'm concerned about the authentication flow between services...
778
+ #
779
+ # from: gemini (performance)
780
+ # The database access pattern could become a bottleneck...
781
+ ```
782
+
783
+ **Multiple Perspectives** (Same Model, Different Roles):
784
+
785
+ ```bash
786
+ # Get optimistic, pessimistic, and realistic views
787
+ aia --model gpt-4o=optimist,gpt-4o=pessimist,gpt-4o=realist business_plan.md
788
+
789
+ # Output shows instance numbers:
790
+ # from: gpt-4o #1 (optimist)
791
+ # This market opportunity is massive...
792
+ #
793
+ # from: gpt-4o #2 (pessimist)
794
+ # The competition is fierce and our runway is limited...
795
+ #
796
+ # from: gpt-4o #3 (realist)
797
+ # Given our current team size, we should focus on MVP first...
798
+ ```
799
+
800
+ **Mixed Role Assignment:**
801
+
802
+ ```bash
803
+ # Some models with roles, some with default
804
+ aia --model gpt-4o=architect,claude,gemini=performance --role security design.md
805
+ # gpt-4o gets architect (inline)
806
+ # claude gets security (default from --role)
807
+ # gemini gets performance (inline)
808
+ ```
809
+
810
+ **Discovering Available Roles:**
811
+
812
+ ```bash
813
+ # List all available role files
814
+ aia --list-roles
815
+
816
+ # Output:
817
+ # Available roles in ~/.prompts/roles:
818
+ # - architect
819
+ # - performance
820
+ # - security
821
+ # - code_reviewer
822
+ # - specialized/senior_architect # nested paths supported
823
+ ```
824
+
825
+ **Role Organization:**
826
+
827
+ Roles can be organized in subdirectories:
828
+
829
+ ```bash
830
+ # Create nested role structure
831
+ mkdir -p ~/.prompts/roles/specialized
832
+ echo "You are a senior software architect..." > ~/.prompts/roles/specialized/senior_architect.txt
833
+
834
+ # Use nested roles
835
+ aia --model gpt-4o=specialized/senior_architect design.md
836
+ ```
837
+
838
+ **Using Config Files for Model Roles** (v2):
839
+
840
+ Define model-role assignments in your config file (`~/.aia/config.yml`) for reusable setups:
841
+
842
+ ```yaml
843
+ # Array of hashes format (mirrors internal structure)
844
+ model:
845
+ - model: gpt-4o
846
+ role: architect
847
+ - model: claude
848
+ role: security
849
+ - model: gemini
850
+ role: performance
851
+
852
+ # Alternative configuration: models without roles simply omit the role key
853
+ model:
854
+ - model: gpt-4o
855
+ role: architect
856
+ - model: claude # No role assigned
857
+ ```
858
+
859
+ Then simply run:
860
+
861
+ ```bash
862
+ aia design_doc.md # Uses model configuration from config file
863
+ ```
864
+
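
Each entry in that array is normalized into an internal model spec, and duplicate model names receive a running instance number. A minimal Ruby sketch of that normalization (the helper name is illustrative; the hash layout mirrors the config format above):

```ruby
# Sketch: turn a config-file model array into internal spec hashes.
# Duplicate model names get a running instance number and a distinct
# internal_id so each instance can keep its own conversation context.
def normalize_model_array(models_array)
  counts = Hash.new(0)
  Array(models_array).map do |spec|
    name = spec[:model] || spec['model']
    counts[name] += 1
    n = counts[name]
    {
      model: name,
      role: spec[:role] || spec['role'], # nil when no role is assigned
      instance: n,
      internal_id: n > 1 ? "#{name}##{n}" : name
    }
  end
end
```

For example, an array containing `gpt-4o` with the `architect` role followed by a bare `claude` entry yields specs whose `internal_id`s are `gpt-4o` and `claude`, with `role: nil` for the second.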
865
+ **Using Environment Variables** (v2):
866
+
867
+ Set default model-role assignments via environment variable:
868
+
869
+ ```bash
870
+ # Set in your shell profile (.bashrc, .zshrc, etc.)
871
+ export AIA_MODEL="gpt-4o=architect,claude=security,gemini=performance"
872
+
873
+ # Or for a single command
874
+ AIA_MODEL="gpt-4o=architect,claude=security" aia design.md
875
+ ```
876
+
877
+ **Configuration Precedence:**
878
+
879
+ When model roles are specified in multiple places, the precedence is:
880
+
881
+ 1. **Command-line inline** (highest): `--model gpt-4o=architect`
882
+ 2. **Command-line flag**: `--model gpt-4o --role architect`
883
+ 3. **Environment variable**: `AIA_MODEL="gpt-4o=architect"`
884
+ 4. **Config file** (lowest): `model` array in `~/.aia/config.yml`
886
+
762
887
  ### RubyLLM::Tool Support
763
888
 
764
889
  AIA supports function calling through RubyLLM tools for extended capabilities:
@@ -84,6 +84,8 @@ aia --available_models openai,gpt,text_to_image
84
84
  ### `-m MODEL, --model MODEL`
85
85
  Name of the LLM model(s) to use. For multiple models, use comma-separated values.
86
86
 
87
+ Supports inline role assignment using `MODEL=ROLE` syntax to assign specific roles to individual models.
88
+
87
89
  ```bash
88
90
  # Single model
89
91
  aia --model gpt-4 my_prompt
@@ -93,8 +95,22 @@ aia --model "gpt-4,claude-3-sonnet,gemini-pro" my_prompt
93
95
 
94
96
  # Short form
95
97
  aia -m gpt-3.5-turbo my_prompt
98
+
99
+ # Single model with role (inline syntax)
100
+ aia --model gpt-4o=architect design_review.md
101
+
102
+ # Multiple models with different roles
103
+ aia --model "gpt-4o=architect,claude=security,gemini=performance" my_prompt
104
+
105
+ # Same model with multiple roles for diverse perspectives
106
+ aia --model "gpt-4o=optimist,gpt-4o=pessimist,gpt-4o=realist" project_plan.md
107
+
108
+ # Mixed: some models with roles, some without
109
+ aia --model "gpt-4o=expert,claude,gemini" my_prompt
96
110
  ```
97
111
 
112
+ **See also**: `--role` for applying a role to all models, `--list-roles` for discovering available roles.
113
+
98
114
  ### `--[no-]consensus`
99
115
  Enable/disable consensus mode for multi-model responses. When enabled, AIA attempts to create a consensus response from multiple models.
100
116
 
@@ -206,13 +222,51 @@ aia --roles_prefix personas --role expert
206
222
  ```
207
223
 
208
224
  ### `-r, --role ROLE_ID`
209
- Role ID to prepend to the prompt.
225
+ Role ID to prepend to the prompt. This applies the same role to all models.
226
+
227
+ For per-model role assignment, use the inline `MODEL=ROLE` syntax with `--model` instead.
210
228
 
211
229
  ```bash
230
+ # Apply role to all models
212
231
  aia --role expert my_prompt
213
232
  aia -r teacher explain_concept
233
+
234
+ # With multiple models (same role for all)
235
+ aia --model "gpt-4,claude" --role architect design.md
236
+
237
+ # Per-model roles (inline syntax - see --model)
238
+ aia --model "gpt-4=architect,claude=security" design.md
214
239
  ```
215
240
 
241
+ **See also**: `--model` for inline role syntax, `--list-roles` for discovering available roles.
242
+
243
+ ### `--list-roles`
244
+ List all available roles and exit. Shows role IDs and their descriptions from the roles directory.
245
+
246
+ ```bash
247
+ # List all available roles
248
+ aia --list-roles
249
+
250
+ # Example output:
251
+ # Available roles in /Users/you/.prompts/roles:
252
+ # architect - Software architecture expert
253
+ # security - Security analysis specialist
254
+ # performance - Performance optimization expert
255
+ # debugger - Expert debugging assistant
256
+ # optimist - Positive perspective analyzer
257
+ # pessimist - Critical risk analyzer
258
+ # realist - Balanced pragmatic analyzer
259
+ ```
260
+
261
+ Roles are discovered from:
262
+ - **Default location**: `~/.prompts/roles/`
263
+ - **Custom location**: Set via `--prompts_dir` and `--roles_prefix`
264
+ - **Nested directories**: Supports subdirectories like `roles/software/architect.txt`
265
+
266
+ **Use case**: Discover available roles before using them with `--role` or inline `MODEL=ROLE` syntax.
267
+
268
+ **See also**: `--role`, `--model`, `--prompts_dir`, `--roles_prefix`
269
+
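
A role id resolves to a `.txt` file under the prompts directory. A small sketch of that mapping (assuming the default locations listed above; the helper name is illustrative):

```ruby
# Sketch: map a role id to its file path. Ids already starting with the
# roles prefix are used as-is; nested ids like "software/architect" work too.
def role_file(role_id, prompts_dir: File.join(Dir.home, '.prompts'), roles_prefix: 'roles')
  role_id = "#{roles_prefix}/#{role_id}" unless role_id.start_with?(roles_prefix)
  File.join(prompts_dir, "#{role_id}.txt")
end
```

So a nested id such as `software/architect` resolves to `<prompts_dir>/roles/software/architect.txt`.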
216
270
  ### `-n, --next PROMPT_ID`
217
271
  Next prompt to process (can be used multiple times to build a pipeline).
218
272
 
@@ -556,24 +610,37 @@ aia --transcription_model whisper-1 --speech_model tts-1-hd --voice echo audio_p
556
610
  Many CLI options have corresponding environment variables with the `AIA_` prefix:
557
611
 
558
612
  ```bash
613
+ # Basic model configuration
559
614
  export AIA_MODEL="gpt-4"
615
+
616
+ # Model with inline role syntax
617
+ export AIA_MODEL="gpt-4o=architect"
618
+
619
+ # Multiple models with roles
620
+ export AIA_MODEL="gpt-4o=architect,claude=security,gemini=performance"
621
+
622
+ # Other configuration
560
623
  export AIA_TEMPERATURE="0.8"
561
624
  export AIA_PROMPTS_DIR="/custom/prompts"
562
625
  export AIA_VERBOSE="true"
563
626
  export AIA_DEBUG="false"
564
627
  ```
565
628
 
629
+ **Note**: The `AIA_MODEL` environment variable supports the same inline `MODEL=ROLE` syntax as the `--model` CLI option.
630
+
566
631
  See [Configuration](configuration.md#environment-variables) for a complete list.
567
632
 
568
633
  ## Configuration Precedence
569
634
 
570
635
  Options are resolved in this order (highest to lowest precedence):
571
636
 
572
- 1. Command line arguments
573
- 2. Environment variables
574
- 3. Configuration files
637
+ 1. Command line arguments (including inline `MODEL=ROLE` syntax)
638
+ 2. Environment variables (including inline syntax in `AIA_MODEL`)
639
+ 3. Configuration files (including array format with roles)
575
640
  4. Built-in defaults
576
641
 
642
+ **Role-specific precedence**: When using the role feature, inline `MODEL=ROLE` syntax takes precedence over the `--role` flag, which takes precedence over roles in config files.
643
+
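
The layering above is effectively a first-non-nil lookup from highest to lowest priority. A rough sketch (hypothetical structure, not AIA's actual implementation):

```ruby
# Sketch: resolve one setting across config layers; the highest layer
# that provides a value wins, falling back to built-in defaults.
def resolve_setting(cli: nil, env: nil, config_file: nil, default:)
  [cli, env, config_file, default].find { |v| !v.nil? }
end
```

With no CLI value given, an `AIA_MODEL` environment value such as `gpt-4o=architect` overrides any config-file entry, which in turn overrides the built-in default.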
577
644
  ## Related Documentation
578
645
 
579
646
  - [Configuration Guide](configuration.md) - Detailed configuration options
@@ -162,11 +162,206 @@ Model Details:
162
162
 
163
163
  **Multi-Model Features:**
164
164
  - **Primary Model**: The first model in the list serves as the consensus orchestrator
165
- - **Concurrent Processing**: All models run simultaneously for better performance
165
+ - **Concurrent Processing**: All models run simultaneously for better performance
166
166
  - **Flexible Output**: Choose between individual responses or synthesized consensus
167
167
  - **Error Handling**: Invalid models are reported but don't prevent valid models from working
168
168
  - **Batch Mode Support**: Multi-model responses are properly formatted in output files
169
169
 
170
+ ### Per-Model Roles
171
+
172
+ Assign specific roles to each model in multi-model mode to get diverse perspectives on your prompts. Each model receives a prepended role prompt that shapes its perspective.
173
+
174
+ #### Inline Role Syntax
175
+
176
+ Use the `MODEL=ROLE` syntax to assign roles directly on the command line:
177
+
178
+ ```bash
179
+ # Single model with role
180
+ aia --model gpt-4o=architect design_review.md
181
+
182
+ # Multiple models with different roles
183
+ aia --model gpt-4o=architect,claude=security,gemini=performance design_review.md
184
+
185
+ # Mixed: some models with roles, some without
186
+ aia --model gpt-4o=expert,claude analyze.md
187
+ ```
188
+
189
+ #### Multiple Perspectives
190
+
191
+ Use the same model multiple times with different roles for diverse viewpoints:
192
+
193
+ ```bash
194
+ # Three instances of same model with different roles
195
+ aia --model gpt-4o=optimist,gpt-4o=pessimist,gpt-4o=realist project_plan.md
196
+
197
+ # AI provides three distinct perspectives on the same input
198
+ ```
199
+
200
+ **Output Format with Roles:**
201
+ ```
202
+ from: gpt-4o (optimist)
203
+ I see great potential in this approach! The architecture is solid...
204
+
205
+ from: gpt-4o #2 (pessimist)
206
+ We need to consider several risks here. The design has some concerning...
207
+
208
+ from: gpt-4o #3 (realist)
209
+ Let's look at this pragmatically. The proposal has both strengths and...
210
+ ```
211
+
212
+ **Note**: When using duplicate models, AIA automatically numbers them (e.g., `gpt-4o`, `gpt-4o #2`, `gpt-4o #3`) and maintains separate conversation contexts for each instance.
213
+
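
The labeling shown in the sample output can be reproduced with a counter keyed by model name. An illustrative sketch (not AIA's internal code):

```ruby
# Sketch: build display labels for (model, role) pairs, numbering
# duplicates as "model #2", "model #3", and appending "(role)" when set.
def label_models(specs)
  counts = Hash.new(0)
  specs.map do |(model, role)|
    counts[model] += 1
    n = counts[model]
    label = n > 1 ? "#{model} ##{n}" : model
    role ? "#{label} (#{role})" : label
  end
end

label_models([['gpt-4o', 'optimist'], ['gpt-4o', 'pessimist'], ['claude', nil]])
# → ["gpt-4o (optimist)", "gpt-4o #2 (pessimist)", "claude"]
```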
214
+ #### Role Discovery
215
+
216
+ List all available roles in your prompts directory:
217
+
218
+ ```bash
219
+ # List all roles
220
+ aia --list-roles
221
+
222
+ # Output shows role IDs and descriptions
223
+ Available roles in /Users/you/.prompts/roles:
224
+ architect - Software architecture expert
225
+ security - Security analysis specialist
226
+ performance - Performance optimization expert
227
+ optimist - Positive perspective analyzer
228
+ pessimist - Critical risk analyzer
229
+ realist - Balanced pragmatic analyzer
230
+ ```
231
+
232
+ #### Role Files
233
+
234
+ Roles are stored as text files in your prompts directory:
235
+
236
+ ```bash
237
+ # Default location: ~/.prompts/roles/
238
+ ~/.prompts/
239
+ roles/
240
+ architect.txt
241
+ security.txt
242
+ performance.txt
243
+ optimist.txt
244
+ pessimist.txt
245
+ realist.txt
246
+
247
+ # Nested role organization
248
+ ~/.prompts/
249
+ roles/
250
+ software/
251
+ architect.txt
252
+ developer.txt
253
+ analysis/
254
+ optimist.txt
255
+ pessimist.txt
256
+ realist.txt
257
+ ```
258
+
259
+ **Using Nested Roles:**
260
+ ```bash
261
+ # Specify full path from roles directory
262
+ aia --model gpt-4o=software/architect,claude=analysis/pessimist design.md
263
+ ```
264
+
265
+ #### Configuration File Format
266
+
267
+ Define model roles in your configuration file using array format:
268
+
269
+ ```yaml
270
+ # ~/.aia/config.yml
271
+ model:
272
+ - model: gpt-4o
273
+ role: architect
274
+ - model: claude-3-sonnet
275
+ role: security
276
+ - model: gemini-pro
277
+ role: performance
278
+
279
+ # Alternative configuration: duplicate models with different roles
280
+ model:
281
+ - model: gpt-4o
282
+ role: optimist
283
+ - model: gpt-4o
284
+ role: pessimist
285
+ - model: gpt-4o
286
+ role: realist
287
+ ```
288
+
289
+ **Note**: Models without roles work normally - simply omit the `role` key.
290
+
291
+ #### Environment Variable Usage
292
+
293
+ Set model roles via environment variables using the same inline syntax:
294
+
295
+ ```bash
296
+ # Single model with role
297
+ export AIA_MODEL="gpt-4o=architect"
298
+
299
+ # Multiple models with roles
300
+ export AIA_MODEL="gpt-4o=architect,claude=security,gemini=performance"
301
+
302
+ # Duplicate models
303
+ export AIA_MODEL="gpt-4o=optimist,gpt-4o=pessimist,gpt-4o=realist"
304
+
305
+ # Then run AIA normally
306
+ aia design_review.md
307
+ ```
308
+
309
+ #### Role Configuration Precedence
310
+
311
+ When roles are specified in multiple places, the precedence order is:
312
+
313
+ 1. **CLI inline syntax**: `--model gpt-4o=architect` (highest priority)
314
+ 2. **CLI role flag**: `--role architect` (applies to all models)
315
+ 3. **Environment variable**: `AIA_MODEL="gpt-4o=architect"`
316
+ 4. **Configuration file**: `model: [{model: gpt-4o, role: architect}]`
317
+
318
+ **Example of precedence:**
319
+ ```bash
320
+ # Config file specifies: model: [{model: gpt-4o, role: architect}]
321
+ # Environment has: AIA_MODEL="claude=security"
322
+ # Command line uses:
323
+ aia --model gemini=performance my_prompt
324
+
325
+ # Result: Uses gemini with performance role (CLI wins)
326
+ ```
327
+
328
+ #### Role Validation
329
+
330
+ AIA validates role files exist at parse time and provides helpful error messages:
331
+
332
+ ```bash
333
+ # If role file doesn't exist
334
+ $ aia --model gpt-4o=nonexistent my_prompt
335
+
336
+ ERROR: Role file not found: ~/.prompts/roles/nonexistent.txt
337
+
338
+ Available roles:
339
+ - architect
340
+ - security
341
+ - performance
342
+ - optimist
343
+ - pessimist
344
+ - realist
345
+ ```
346
+
347
+ #### Creating Custom Roles
348
+
349
+ Create new role files in your roles directory:
350
+
351
+ ```bash
352
+ # Create a new role
353
+ cat > ~/.prompts/roles/debugger.txt << 'EOF'
354
+ You are an expert debugging assistant. When analyzing code:
355
+ - Focus on identifying potential bugs and edge cases
356
+ - Suggest specific debugging strategies
357
+ - Explain the root cause of issues clearly
358
+ - Provide actionable fix recommendations
359
+ EOF
360
+
361
+ # Use the new role
362
+ aia --model gpt-4o=debugger analyze_bug.py
363
+ ```
364
+
170
365
  ### Model Comparison in Prompts
171
366
  ```markdown
172
367
  Compare responses from multiple models:
@@ -209,7 +209,12 @@ module AIA
209
209
  when Float
210
210
  value.to_f
211
211
  when Array
212
- value.split(',').map(&:strip)
212
+ # Special handling for :model to support inline role syntax (ADR-005 v2)
213
+ if key == :model && value.include?('=')
214
+ CLIParser.parse_models_with_roles(value)
215
+ else
216
+ value.split(',').map(&:strip)
217
+ end
213
218
  else
214
219
  value # defaults to String
215
220
  end
@@ -81,14 +81,19 @@ module AIA
81
81
  end
82
82
 
83
83
  def setup_model_options(opts, config)
84
- opts.on("-m MODEL", "--model MODEL", "Name of the LLM model(s) to use (comma-separated for multiple models)") do |model|
85
- config.model = model.split(',').map(&:strip)
84
+ opts.on("-m MODEL", "--model MODEL", "Name of the LLM model(s) to use. Format: MODEL[=ROLE][,MODEL[=ROLE]]...") do |model|
85
+ config.model = parse_models_with_roles(model)
86
86
  end
87
87
 
88
88
  opts.on("--[no-]consensus", "Enable/disable consensus mode for multi-model responses (default: show individual responses)") do |consensus|
89
89
  config.consensus = consensus
90
90
  end
91
91
 
92
+ opts.on("--list-roles", "List available role files and exit") do
93
+ list_available_roles
94
+ exit 0
95
+ end
96
+
92
97
  opts.on("--sm", "--speech_model MODEL", "Speech model to use") do |model|
93
98
  config.speech_model = model
94
99
  end
@@ -296,6 +301,115 @@ module AIA
296
301
  end
297
302
  end
298
303
 
304
+ def parse_models_with_roles(model_string)
305
+ models = []
306
+ model_counts = Hash.new(0)
307
+
308
+ model_string.split(',').each do |spec|
309
+ spec.strip!
310
+
311
+ # Validate syntax
312
+ if spec =~ /^=|=$/
313
+ raise ArgumentError, "Invalid model syntax: '#{spec}'. Expected format: MODEL[=ROLE]"
314
+ end
315
+
316
+ if spec.include?('=')
317
+ # Explicit role: "model=role" or "provider/model=role"
318
+ model_name, role_name = spec.split('=', 2)
319
+ model_name.strip!
320
+ role_name.strip!
321
+
322
+ # Validate role file exists (fail fast)
323
+ validate_role_exists(role_name)
324
+
325
+ # Track instance count for duplicates
326
+ model_counts[model_name] += 1
327
+ instance = model_counts[model_name]
328
+
329
+ models << {
330
+ model: model_name,
331
+ role: role_name,
332
+ instance: instance,
333
+ internal_id: instance > 1 ? "#{model_name}##{instance}" : model_name
334
+ }
335
+ else
336
+ # No explicit role, will use default from -r/--role
337
+ model_counts[spec] += 1
338
+ instance = model_counts[spec]
339
+
340
+ models << {
341
+ model: spec,
342
+ role: nil,
343
+ instance: instance,
344
+ internal_id: instance > 1 ? "#{spec}##{instance}" : spec
345
+ }
346
+ end
347
+ end
348
+
349
+ models
350
+ end
351
+
352
+ def validate_role_exists(role_id)
353
+ # Get prompts_dir from defaults or environment
354
+ prompts_dir = ENV.fetch('AIA_PROMPTS_DIR', File.join(ENV['HOME'], '.prompts'))
355
+ roles_prefix = ENV.fetch('AIA_ROLES_PREFIX', 'roles')
356
+
357
+ # Build role file path
358
+ unless role_id.start_with?(roles_prefix)
359
+ role_id = "#{roles_prefix}/#{role_id}"
360
+ end
361
+
362
+ role_file_path = File.join(prompts_dir, "#{role_id}.txt")
363
+
364
+ unless File.exist?(role_file_path)
365
+ available_roles = list_available_role_names(prompts_dir, roles_prefix)
366
+
367
+ error_msg = "Role file not found: #{role_file_path}\n\n"
368
+
369
+ if available_roles.empty?
370
+ error_msg += "No roles directory found at #{File.join(prompts_dir, roles_prefix)}\n"
371
+ error_msg += "Create the directory and add role files to use this feature."
372
+ else
373
+ error_msg += "Available roles:\n"
374
+ error_msg += available_roles.map { |r| " - #{r}" }.join("\n")
375
+ error_msg += "\n\nCreate the role file or use an existing role."
376
+ end
377
+
378
+ raise ArgumentError, error_msg
379
+ end
380
+ end
381
+
382
+ def list_available_roles
383
+ prompts_dir = ENV.fetch('AIA_PROMPTS_DIR', File.join(ENV['HOME'], '.prompts'))
384
+ roles_prefix = ENV.fetch('AIA_ROLES_PREFIX', 'roles')
385
+ roles_dir = File.join(prompts_dir, roles_prefix)
386
+
387
+ if Dir.exist?(roles_dir)
388
+ roles = list_available_role_names(prompts_dir, roles_prefix)
389
+
390
+ if roles.empty?
391
+ puts "No role files found in #{roles_dir}"
392
+ puts "Create .txt files in this directory to define roles."
393
+ else
394
+ puts "Available roles in #{roles_dir}:"
395
+ roles.each { |role| puts " - #{role}" }
396
+ end
397
+ else
398
+ puts "No roles directory found at #{roles_dir}"
399
+ puts "Create this directory and add role files to use roles."
400
+ end
401
+ end
402
+
403
+ def list_available_role_names(prompts_dir, roles_prefix)
404
+ roles_dir = File.join(prompts_dir, roles_prefix)
405
+ return [] unless Dir.exist?(roles_dir)
406
+
407
+ # Find all .txt files recursively, preserving paths
408
+ Dir.glob("**/*.txt", base: roles_dir)
409
+ .map { |f| f.chomp('.txt') }
410
+ .sort
411
+ end
412
+
299
413
  def list_available_models(query)
300
414
  # SMELL: mostly duplicates the code in the available_models directive
301
415
  # assumes that the adapter is for the ruby_llm gem
@@ -79,10 +79,42 @@ module AIA
79
79
 
80
80
  def apply_file_config_to_struct(config, file_config)
81
81
  file_config.each do |key, value|
82
- config[key] = value
82
+ # Special handling for model array with roles (ADR-005 v2)
83
+ if (key == :model || key == 'model') && value.is_a?(Array) && value.first.is_a?(Hash)
84
+ config[:model] = process_model_array_with_roles(value)
85
+ else
86
+ config[key] = value
87
+ end
83
88
  end
84
89
  end
85
90
 
91
+ # Process model array with roles from config file (ADR-005 v2)
92
+ # Format: [{model: "gpt-4o", role: "architect"}, ...]
93
+ # Also supports models without roles: [{model: "gpt-4o"}, ...]
94
+ def process_model_array_with_roles(models_array)
95
+ return [] if models_array.nil? || models_array.empty?
96
+
97
+ model_specs = []
98
+ model_counts = Hash.new(0)
99
+
100
+ models_array.each do |spec|
101
+ model_name = spec[:model] || spec['model']
102
+ role_name = spec[:role] || spec['role']
103
+
104
+ model_counts[model_name] += 1
105
+ instance = model_counts[model_name]
106
+
107
+ model_specs << {
108
+ model: model_name,
109
+ role: role_name,
110
+ instance: instance,
111
+ internal_id: instance > 1 ? "#{model_name}##{instance}" : model_name
112
+ }
113
+ end
114
+
115
+ model_specs
116
+ end
117
+
86
118
  def normalize_last_refresh_date(config)
87
119
  return unless config.last_refresh&.is_a?(String)
88
120
 
@@ -152,8 +152,9 @@ module AIA
152
152
 
153
153
  name = args.empty? ? nil : args.join(' ').strip
154
154
  checkpoint_name = context_manager.create_checkpoint(name: name)
155
- "Checkpoint '#{checkpoint_name}' created."
156
- end
155
+ puts "Checkpoint '#{checkpoint_name}' created."
156
+ ""
157
+ end
157
158
 
158
159
  def self.restore(args, context_manager = nil)
159
160
  if context_manager.nil?