aia 0.9.18 → 0.9.19

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 41c2c4e145f3bc6789b0de6e444a7d71352abd3bdf9e97ad487539271db28584
- data.tar.gz: 8057296778317494811114aa4692755ab45038dbccf06f14e47d43d7afe8b3d6
+ metadata.gz: 8fb298b4e9a1ddc4748425decde69e1c11d8e7eb195cf264b918c4d69bf64e01
+ data.tar.gz: a0cffea9fec68a81fbe5e5d36fed20255e33c1d230ec0dd518e08ad7adc56afa
  SHA512:
- metadata.gz: e76f4aff3f181f6bcb710241fae8f41b0642bb2250f0dafd8fcf54220d044f01a9698288bbdbae5ac0c62b8b4834b262ed55b25e11dfa09252118f0be4cc2664
- data.tar.gz: 0c61d610e09ae223ea8694fc18700cb5aa73a14a098a2038f9ab8f2e8c8879f50b210c516f0e4cc53309de1e7f272f85646550296a31830585b749935c26a687
+ metadata.gz: 6076117839c543fda6756e6657f69d9c35fb561213697e464701eb1515a1d38cff130e7cc57159bf44a8586387144257887906e2ffc63cf183886e816cfd6b84
+ data.tar.gz: 42a6d282bcf587edf84e0db8d13998e0f233f9a66f52977cbaa2d0c152d44e7691207a48a8bec68800e18b1abe81bd2964d841c29eca7437a69f91a2febfd67b
data/.version CHANGED
@@ -1 +1 @@
- 0.9.18
+ 0.9.19
data/CHANGELOG.md CHANGED
@@ -1,6 +1,88 @@
  # Changelog
  ## [Unreleased]

+ ### [0.9.19] 2025-10-06
+
+ #### Bug Fixes
+ - **CRITICAL BUG FIX**: Fixed multi-model cross-talk issue (#118) where models could see each other's conversation history
+ - **BUG FIX**: Implemented complete two-level context isolation to prevent models from contaminating each other's responses
+ - **BUG FIX**: Fixed token count inflation caused by models processing combined conversation histories
+
+ #### Technical Changes
+ - **Level 1 (Library)**: Implemented per-model RubyLLM::Context isolation - each model now has its own Context instance (lib/aia/ruby_llm_adapter.rb)
+ - **Level 2 (Application)**: Implemented per-model ContextManager isolation - each model maintains its own conversation history (lib/aia/session.rb)
+ - Added `parse_multi_model_response` method to extract individual model responses from combined output (lib/aia/session.rb:502-533)
+ - Enhanced `multi_model_chat` to accept Hash of per-model conversations (lib/aia/ruby_llm_adapter.rb:305-334)
+ - Updated ChatProcessorService to handle both Array (single model) and Hash (multi-model with per-model contexts) inputs (lib/aia/chat_processor_service.rb:68-83)
+ - Refactored RubyLLMAdapter:
+ - Added `@contexts` hash to store per-model Context instances
+ - Added `create_isolated_context_for_model` helper method (lines 84-99)
+ - Added `extract_model_and_provider` helper method (lines 102-112)
+ - Simplified `clear_context` from 92 lines to 40 lines (56% reduction)
+ - Updated directive handlers to work with per-model context managers
+ - Added comprehensive test coverage with 6 new tests for multi-model isolation
+ - Updated LocalProvidersTest to reflect Context-based architecture
+
+ #### Architecture
+ - **ADR-002-revised**: Complete Multi-Model Isolation (see `.architecture/decisions/adrs/ADR-002-revised-multi-model-isolation.md`)
+ - Eliminated global state dependencies in multi-model chat sessions
+ - Maintained backward compatibility with single-model mode (verified with tests)
+
+ #### Test Coverage
+ - Added `test/aia/multi_model_isolation_test.rb` with comprehensive isolation tests
+ - Tests cover: response parsing, per-model context managers, single-model compatibility, RubyLLM::Context isolation
+ - Full test suite: 282 runs, 837 assertions, 0 failures, 0 errors, 13 skips ✅
+
+ #### Expected Behavior After Fix
+ Previously, when running multi-model chat with repeated prompts:
+ - ❌ Models would see BOTH their own AND other models' responses
+ - ❌ Models would report inflated counts (e.g., "5 times", "6 times" instead of "3 times")
+ - ❌ Token counts would be inflated due to contaminated context
+
+ Now with the fix:
+ - ✅ Each model sees ONLY its own conversation history
+ - ✅ Each model correctly reports its own interaction count
+ - ✅ Token counts accurately reflect per-model conversation size
+
+ #### Usage Examples
+ ```bash
+ # Multi-model chat now properly isolates each model's context
+ bin/aia --chat --model lms/openai/gpt-oss-20b,ollama/gpt-oss:20b --metrics
+
+ > pick a random language and say hello
+ # LMS: "Habari!" (Swahili)
+ # Ollama: "Kaixo!" (Basque)
+
+ > do it again
+ # LMS: "Habari!" (only sees its own previous response)
+ # Ollama: "Kaixo!" (only sees its own previous response)
+
+ > do it again
+ > how many times did you say hello to me?
+
+ # Both models correctly respond: "3 times"
+ # (Previously: LMS would say "5 times", Ollama "6 times" due to cross-talk)
+ ```
+
+ ### [0.9.18] 2025-10-05
+
+ #### Bug Fixes
+ - **BUG FIX**: Fixed RubyLLM provider error parsing to handle both OpenAI and LM Studio error formats
+ - **BUG FIX**: Fixed "String does not have #dig method" errors when parsing error responses from local providers
+ - **BUG FIX**: Enhanced error parsing to gracefully handle malformed JSON responses
+
+ #### Improvements
+ - **ENHANCEMENT**: Removed debug output statements from RubyLLMAdapter for cleaner production logs
+ - **ENHANCEMENT**: Improved error handling with debug logging for JSON parsing failures
+
+ #### Documentation
+ - **DOCUMENTATION**: Added Local Models entry to MkDocs navigation for better documentation accessibility
+
+ #### Technical Changes
+ - Enhanced provider_fix extension to support multiple error response formats (lib/extensions/ruby_llm/provider_fix.rb)
+ - Cleaned up debug puts statements from RubyLLMAdapter and provider_fix
+ - Added robust JSON parsing with fallback error handling
+
  ### [0.9.17] 2025-10-04

  #### New Features
data/lib/aia/chat_processor_service.rb CHANGED
@@ -63,13 +63,22 @@ module AIA
  end


- # conversation is an Array of Hashes. Each entry is an interchange
- # with the LLM.
- def send_to_client(conversation)
+ # conversation is an Array of Hashes (single model) or Hash of Arrays (multi-model per-model contexts)
+ # Each entry is an interchange with the LLM.
+ def send_to_client(conversation_or_conversations)
  maybe_change_model

- puts "[DEBUG ChatProcessor] Sending conversation to client: #{conversation.inspect[0..500]}..." if AIA.config.debug
- result = AIA.client.chat(conversation)
+ # Handle per-model conversations (Hash) or single conversation (Array) - ADR-002 revised
+ if conversation_or_conversations.is_a?(Hash)
+ # Multi-model with per-model contexts: pass Hash directly to adapter
+ puts "[DEBUG ChatProcessor] Sending per-model conversations to client" if AIA.config.debug
+ result = AIA.client.chat(conversation_or_conversations)
+ else
+ # Single conversation for single model
+ puts "[DEBUG ChatProcessor] Sending conversation to client: #{conversation_or_conversations.inspect[0..500]}..." if AIA.config.debug
+ result = AIA.client.chat(conversation_or_conversations)
+ end
+
  puts "[DEBUG ChatProcessor] Client returned: #{result.class} - #{result.inspect[0..500]}..." if AIA.config.debug
  result
  end
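For readers skimming the hunk above, the two input shapes `send_to_client` now dispatches on look roughly like this. This is a sketch only: the model names and message text are hypothetical, and the per-message hash shape is inferred from the `add_to_context(role:, content:)` calls in session.rb further down.

```ruby
# Single-model mode: an Array of message Hashes (one shared conversation).
conversation = [
  { role: "user",      content: "pick a random language and say hello" },
  { role: "assistant", content: "Habari!" }
]

# Multi-model mode (ADR-002 revised): a Hash keyed by prefixed model name,
# each value being that model's own isolated conversation Array.
conversations = {
  "lms/openai/gpt-oss-20b" => [
    { role: "user",      content: "pick a random language and say hello" },
    { role: "assistant", content: "Habari!" }
  ],
  "ollama/gpt-oss:20b" => [
    { role: "user",      content: "pick a random language and say hello" },
    { role: "assistant", content: "Kaixo!" }
  ]
}

# send_to_client forwards either shape to AIA.client.chat; the Hash form is
# what lets the adapter feed each model only its own history.
```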
data/lib/aia/ruby_llm_adapter.rb CHANGED
@@ -10,6 +10,7 @@ module AIA
  def initialize
  @models = extract_models_config
  @chats = {}
+ @contexts = {} # Store isolated contexts for each model

  configure_rubyllm
  refresh_local_model_registry
@@ -80,42 +81,65 @@ module AIA
  end


+ # Create an isolated RubyLLM::Context for a model to prevent cross-talk (ADR-002)
+ # Each model gets its own context with provider-specific configuration
+ def create_isolated_context_for_model(model_name)
+ config = RubyLLM.config.dup
+
+ # Apply provider-specific configuration
+ if model_name.start_with?('lms/')
+ config.openai_api_base = ENV.fetch('LMS_API_BASE', 'http://localhost:1234/v1')
+ config.openai_api_key = 'dummy' # Local servers don't need a real API key
+ elsif model_name.start_with?('osaurus/')
+ config.openai_api_base = ENV.fetch('OSAURUS_API_BASE', 'http://localhost:11434/v1')
+ config.openai_api_key = 'dummy' # Local servers don't need a real API key
+ end
+
+ RubyLLM::Context.new(config)
+ end
+
+
+ # Extract the actual model name and provider from the prefixed model_name
+ # Returns: [actual_model, provider] where provider may be nil for auto-detection
+ def extract_model_and_provider(model_name)
+ if model_name.start_with?('ollama/')
+ [model_name.sub('ollama/', ''), 'ollama']
+ elsif model_name.start_with?('lms/') || model_name.start_with?('osaurus/')
+ [model_name.sub(%r{^(lms|osaurus)/}, ''), 'openai']
+ else
+ [model_name, nil] # Let RubyLLM auto-detect provider
+ end
+ end
+
+
  def setup_chats_with_tools
  valid_chats = {}
+ valid_contexts = {}
  failed_models = []

  @models.each do |model_name|
  begin
- # Check if this is a local provider model and handle it specially
- if model_name.start_with?('ollama/')
- # For Ollama models, extract the actual model name and use assume_model_exists
- actual_model = model_name.sub('ollama/', '')
- chat = RubyLLM.chat(model: actual_model, provider: 'ollama', assume_model_exists: true)
- elsif model_name.start_with?('osaurus/')
- # For Osaurus models (OpenAI-compatible), create a custom context with the right API base
- actual_model = model_name.sub('osaurus/', '')
- custom_config = RubyLLM.config.dup
- custom_config.openai_api_base = ENV.fetch('OSAURUS_API_BASE', 'http://localhost:11434/v1')
- custom_config.openai_api_key = 'dummy' # Local servers don't need a real API key
- context = RubyLLM::Context.new(custom_config)
- chat = context.chat(model: actual_model, provider: 'openai', assume_model_exists: true)
- elsif model_name.start_with?('lms/')
- # For LM Studio models (OpenAI-compatible), create a custom context with the right API base
- actual_model = model_name.sub('lms/', '')
- lms_api_base = ENV.fetch('LMS_API_BASE', 'http://localhost:1234/v1')
+ # Create isolated context for this model to prevent cross-talk (ADR-002)
+ context = create_isolated_context_for_model(model_name)

- # Validate model exists in LM Studio
- validate_lms_model!(actual_model, lms_api_base)
+ # Determine provider and actual model name
+ actual_model, provider = extract_model_and_provider(model_name)

- custom_config = RubyLLM.config.dup
- custom_config.openai_api_base = lms_api_base
- custom_config.openai_api_key = 'dummy' # Local servers don't need a real API key
- context = RubyLLM::Context.new(custom_config)
- chat = context.chat(model: actual_model, provider: 'openai', assume_model_exists: true)
- else
- chat = RubyLLM.chat(model: model_name)
+ # Validate LM Studio models
+ if model_name.start_with?('lms/')
+ lms_api_base = ENV.fetch('LMS_API_BASE', 'http://localhost:1234/v1')
+ validate_lms_model!(actual_model, lms_api_base)
  end
+
+ # Create chat using isolated context
+ chat = if provider
+ context.chat(model: actual_model, provider: provider, assume_model_exists: true)
+ else
+ context.chat(model: actual_model)
+ end
+
  valid_chats[model_name] = chat
+ valid_contexts[model_name] = context
  rescue StandardError => e
  failed_models << "#{model_name}: #{e.message}"
  end
@@ -135,6 +159,7 @@ module AIA
  end

  @chats = valid_chats
+ @contexts = valid_contexts
  @models = valid_chats.keys

  # Update the config to reflect only the valid models
@@ -277,13 +302,24 @@ module AIA
  result
  end

- def multi_model_chat(prompt)
+ def multi_model_chat(prompt_or_contexts)
  results = {}

+ # Check if we're receiving per-model contexts (Hash) or shared prompt (String/Array) - ADR-002 revised
+ per_model_contexts = prompt_or_contexts.is_a?(Hash) &&
+ prompt_or_contexts.keys.all? { |k| @models.include?(k) }
+
  Async do |task|
  @models.each do |model_name|
  task.async do
  begin
+ # Use model-specific context if available, otherwise shared prompt
+ prompt = if per_model_contexts
+ prompt_or_contexts[model_name]
+ else
+ prompt_or_contexts
+ end
+
  result = single_model_chat(prompt, model_name)
  results[model_name] = result
  rescue StandardError => e
@@ -452,96 +488,46 @@ module AIA

  # Clear the chat context/history
  # Needed for the //clear and //restore directives
+ # Simplified with ADR-002: Each model has isolated context, no global state to manage
  def clear_context
- @chats.each do |model_name, chat|
- # Option 1: Directly clear the messages array in the current chat object
- if chat.instance_variable_defined?(:@messages)
- chat.instance_variable_get(:@messages)
- # Force a completely empty array, not just attempting to clear it
- chat.instance_variable_set(:@messages, [])
- end
- end
-
- # Option 2: Force RubyLLM to create a new chat instance at the global level
- # This ensures any shared state is reset
- RubyLLM.instance_variable_set(:@chat, nil) if RubyLLM.instance_variable_defined?(:@chat)
+ old_chats = @chats.dup
+ new_chats = {}

- # Option 3: Try to create fresh chat instances, but don't exit on failure
- # This is safer for use in directives like //restore
- old_chats = @chats
- @chats = {} # First clear the chats hash
+ @models.each do |model_name|
+ begin
+ # Get the isolated context for this model
+ context = @contexts[model_name]
+ actual_model, provider = extract_model_and_provider(model_name)

- begin
- @models.each do |model_name|
- # Try to recreate each chat, but if it fails, keep the old one
- begin
- # Check if this is a local provider model and handle it specially
- if model_name.start_with?('ollama/')
- actual_model = model_name.sub('ollama/', '')
- @chats[model_name] = RubyLLM.chat(model: actual_model, provider: 'ollama', assume_model_exists: true)
- elsif model_name.start_with?('osaurus/')
- actual_model = model_name.sub('osaurus/', '')
- custom_config = RubyLLM.config.dup
- custom_config.openai_api_base = ENV.fetch('OSAURUS_API_BASE', 'http://localhost:11434/v1')
- custom_config.openai_api_key = 'dummy'
- context = RubyLLM::Context.new(custom_config)
- @chats[model_name] = context.chat(model: actual_model, provider: 'openai', assume_model_exists: true)
- elsif model_name.start_with?('lms/')
- actual_model = model_name.sub('lms/', '')
- lms_api_base = ENV.fetch('LMS_API_BASE', 'http://localhost:1234/v1')
-
- # Validate model exists in LM Studio
- validate_lms_model!(actual_model, lms_api_base)
-
- custom_config = RubyLLM.config.dup
- custom_config.openai_api_base = lms_api_base
- custom_config.openai_api_key = 'dummy'
- context = RubyLLM::Context.new(custom_config)
- @chats[model_name] = context.chat(model: actual_model, provider: 'openai', assume_model_exists: true)
- else
- @chats[model_name] = RubyLLM.chat(model: model_name)
- end
+ # Create a fresh chat instance from the same isolated context
+ chat = if provider
+ context.chat(model: actual_model, provider: provider, assume_model_exists: true)
+ else
+ context.chat(model: actual_model)
+ end

- # Re-add tools if they were previously loaded
- if @tools && !@tools.empty? && @chats[model_name].model&.supports_functions?
- @chats[model_name].with_tools(*@tools)
- end
- rescue StandardError => e
- # If we can't create a new chat, keep the old one but clear its context
- warn "Warning: Could not recreate chat for #{model_name}: #{e.message}. Keeping existing instance."
- @chats[model_name] = old_chats[model_name]
- # Clear the old chat's messages if possible
- if @chats[model_name] && @chats[model_name].instance_variable_defined?(:@messages)
- @chats[model_name].instance_variable_set(:@messages, [])
- end
+ # Re-add tools if they were previously loaded
+ if @tools && !@tools.empty? && chat.model&.supports_functions?
+ chat.with_tools(*@tools)
  end
- end
- rescue StandardError => e
- # If something went terribly wrong, restore the old chats but clear their contexts
- warn "Warning: Error during context clearing: #{e.message}. Attempting to recover."
- @chats = old_chats
- @chats.each_value do |chat|
- if chat.instance_variable_defined?(:@messages)
+
+ new_chats[model_name] = chat
+ rescue StandardError => e
+ # If recreation fails, keep the old chat but clear its messages
+ warn "Warning: Could not recreate chat for #{model_name}: #{e.message}. Clearing existing chat."
+ chat = old_chats[model_name]
+ if chat&.instance_variable_defined?(:@messages)
  chat.instance_variable_set(:@messages, [])
  end
+ chat.clear_history if chat&.respond_to?(:clear_history)
+ new_chats[model_name] = chat
  end
  end

- # Option 4: Call official clear_history method if it exists
- @chats.each_value do |chat|
- chat.clear_history if chat.respond_to?(:clear_history)
- end
-
- # Final verification
- @chats.each_value do |chat|
- if chat.instance_variable_defined?(:@messages) && !chat.instance_variable_get(:@messages).empty?
- chat.instance_variable_set(:@messages, [])
- end
- end
-
- return 'Chat context successfully cleared.'
+ @chats = new_chats
+ 'Chat context successfully cleared.'
  rescue StandardError => e
- return "Error clearing chat context: #{e.message}"
+ "Error clearing chat context: #{e.message}"
  end


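As a quick reference, the provider routing done by the new `extract_model_and_provider` helper maps prefixed model names as shown below. The inputs are illustrative (and `adapter` is a hypothetical RubyLLMAdapter instance); the mappings themselves follow directly from the substitutions in the hunk above.

```ruby
# Return value is [actual_model, provider].
adapter.extract_model_and_provider("ollama/gpt-oss:20b")
#=> ["gpt-oss:20b", "ollama"]

adapter.extract_model_and_provider("lms/openai/gpt-oss-20b")
#=> ["openai/gpt-oss-20b", "openai"]   # only the leading "lms/" prefix is stripped

adapter.extract_model_and_provider("osaurus/llama3")
#=> ["llama3", "openai"]               # Osaurus is treated as OpenAI-compatible

adapter.extract_model_and_provider("gpt-4o")
#=> ["gpt-4o", nil]                    # nil lets RubyLLM auto-detect the provider
```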
data/lib/aia/session.rb CHANGED
@@ -45,7 +45,21 @@ module AIA
  end

  def initialize_components
- @context_manager = ContextManager.new(system_prompt: AIA.config.system_prompt)
+ # For multi-model: create separate context manager per model (ADR-002 revised)
+ # For single-model: maintain backward compatibility with single context manager
+ if AIA.config.model.is_a?(Array) && AIA.config.model.size > 1
+ @context_managers = {}
+ AIA.config.model.each do |model_name|
+ @context_managers[model_name] = ContextManager.new(
+ system_prompt: AIA.config.system_prompt
+ )
+ end
+ @context_manager = nil # Signal we're using per-model managers
+ else
+ @context_manager = ContextManager.new(system_prompt: AIA.config.system_prompt)
+ @context_managers = nil
+ end
+
  @ui_presenter = UIPresenter.new
  @directive_processor = DirectiveProcessor.new
  @chat_processor = ChatProcessorService.new(@ui_presenter, @directive_processor)
@@ -368,11 +382,29 @@ module AIA
  @chat_prompt.text = follow_up_prompt
  processed_prompt = @chat_prompt.to_s

- @context_manager.add_to_context(role: "user", content: processed_prompt)
- conversation = @context_manager.get_context
+ # Handle per-model contexts (ADR-002 revised)
+ if @context_managers
+ # Multi-model: add user prompt to each model's context
+ @context_managers.each_value do |ctx_mgr|
+ ctx_mgr.add_to_context(role: "user", content: processed_prompt)
+ end

- @ui_presenter.display_thinking_animation
- response_data = @chat_processor.process_prompt(conversation)
+ # Get per-model conversations
+ conversations = {}
+ @context_managers.each do |model_name, ctx_mgr|
+ conversations[model_name] = ctx_mgr.get_context
+ end
+
+ @ui_presenter.display_thinking_animation
+ response_data = @chat_processor.process_prompt(conversations)
+ else
+ # Single-model: use original logic
+ @context_manager.add_to_context(role: "user", content: processed_prompt)
+ conversation = @context_manager.get_context
+
+ @ui_presenter.display_thinking_animation
+ response_data = @chat_processor.process_prompt(conversation)
+ end

  # Handle new response format with metrics
  if response_data.is_a?(Hash)
@@ -386,7 +418,7 @@ module AIA
  end

  @ui_presenter.display_ai_response(content)
-
+

  # Display metrics if enabled and available (chat mode only)
  if AIA.config.show_metrics
@@ -397,8 +429,22 @@ module AIA
  @ui_presenter.display_token_metrics(metrics)
  end
  end
-
- @context_manager.add_to_context(role: "assistant", content: content)
+
+ # Add responses to context (ADR-002 revised)
+ if @context_managers
+ # Multi-model: parse combined response and add each model's response to its own context
+ parsed_responses = parse_multi_model_response(content)
+ parsed_responses.each do |model_name, model_response|
+ @context_managers[model_name]&.add_to_context(
+ role: "assistant",
+ content: model_response
+ )
+ end
+ else
+ # Single-model: add response to single context
+ @context_manager.add_to_context(role: "assistant", content: content)
+ end
+
  @chat_processor.speak(content)

  @ui_presenter.display_separator
@@ -406,7 +452,10 @@ module AIA
  end

  def process_chat_directive(follow_up_prompt)
- directive_output = @directive_processor.process(follow_up_prompt, @context_manager)
+ # For multi-model, use first context manager for directives (ADR-002 revised)
+ # TODO: Consider if directives should affect all contexts or just one
+ context_for_directive = @context_managers ? @context_managers.values.first : @context_manager
+ directive_output = @directive_processor.process(follow_up_prompt, context_for_directive)

  return handle_clear_directive if follow_up_prompt.strip.start_with?("//clear")
  return handle_checkpoint_directive(directive_output) if follow_up_prompt.strip.start_with?("//checkpoint")
@@ -417,13 +466,16 @@ module AIA
  end

  def handle_clear_directive
- # The directive processor has called context_manager.clear_context
- # but we need to also clear the LLM client's context
-
- # First, clear the context manager's context
- @context_manager.clear_context(keep_system_prompt: true)
+ # Clear context manager(s) - ADR-002 revised
+ if @context_managers
+ # Multi-model: clear all context managers
+ @context_managers.each_value { |ctx_mgr| ctx_mgr.clear_context(keep_system_prompt: true) }
+ else
+ # Single-model: clear single context manager
+ @context_manager.clear_context(keep_system_prompt: true)
+ end

- # Second, try clearing the client's context
+ # Try clearing the client's context
  if AIA.config.client && AIA.config.client.respond_to?(:clear_context)
  begin
  AIA.config.client.clear_context
@@ -446,10 +498,9 @@ module AIA
  end

  def handle_restore_directive(directive_output)
- # If the restore was successful, we also need to refresh the client's context
+ # If the restore was successful, we also need to refresh the client's context - ADR-002 revised
  if directive_output.start_with?("Context restored")
  # Clear the client's context without reinitializing the entire adapter
- # This avoids the risk of exiting if model initialization fails
  if AIA.config.client && AIA.config.client.respond_to?(:clear_context)
  begin
  AIA.config.client.clear_context
@@ -459,17 +510,9 @@ module AIA
  end
  end

- # Rebuild the conversation in the LLM client from the restored context
- # This ensures the LLM's internal state matches what we restored
- if AIA.config.client && @context_manager
- begin
- restored_context = @context_manager.get_context
- # The client's context has been cleared, so we can safely continue
- # The next interaction will use the restored context from context_manager
- rescue => e
- STDERR.puts "Warning: Error syncing restored context: #{e.message}"
- end
- end
+ # Note: For multi-model, only the first context manager was used for restore
+ # This is a limitation of the current directive system
+ # TODO: Consider supporting restore for all context managers
  end

  @ui_presenter.display_info(directive_output)
@@ -485,6 +528,39 @@ module AIA
485
528
  "I executed this directive: #{follow_up_prompt}\nHere's the output: #{directive_output}\nLet's continue our conversation."
486
529
  end
487
530
 
531
+ # Parse multi-model response into per-model responses (ADR-002 revised)
532
+ # Input: "from: lms/model\nHabari!\n\nfrom: ollama/model\nKaixo!"
533
+ # Output: {"lms/model" => "Habari!", "ollama/model" => "Kaixo!"}
534
+ def parse_multi_model_response(combined_response)
535
+ return {} if combined_response.nil? || combined_response.empty?
536
+
537
+ responses = {}
538
+ current_model = nil
539
+ current_content = []
540
+
541
+ combined_response.each_line do |line|
542
+ if line =~ /^from:\s+(.+)$/
543
+ # Save previous model's response
544
+ if current_model
545
+ responses[current_model] = current_content.join.strip
546
+ end
547
+
548
+ # Start new model
549
+ current_model = $1.strip
550
+ current_content = []
551
+ elsif current_model
552
+ current_content << line
553
+ end
554
+ end
555
+
556
+ # Save last model's response
557
+ if current_model
558
+ responses[current_model] = current_content.join.strip
559
+ end
560
+
561
+ responses
562
+ end
563
+
488
564
  def cleanup_chat_prompt
489
565
  if @chat_prompt_id
490
566
  puts "[DEBUG] Cleaning up chat prompt: #{@chat_prompt_id}" if AIA.debug?
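To make the `parse_multi_model_response` contract concrete, this is the round trip its inline comments describe, written as an illustrative snippet. The `from: <model>` separator format is taken from the method's own documentation above; the `session` variable and call site are hypothetical.

```ruby
# Combined multi-model output, with each model's chunk introduced by "from: <model>".
combined = "from: lms/model\nHabari!\n\nfrom: ollama/model\nKaixo!\n"

session.parse_multi_model_response(combined)
#=> { "lms/model" => "Habari!", "ollama/model" => "Kaixo!" }

# Each extracted response is then appended only to that model's ContextManager,
# which is what keeps the per-model histories isolated between turns.
```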
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: aia
  version: !ruby/object:Gem::Version
- version: 0.9.18
+ version: 0.9.19
  platform: ruby
  authors:
  - Dewayne VanHoozer