aia 0.9.11 → 0.9.12

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (61)
  1. checksums.yaml +4 -4
  2. data/.version +1 -1
  3. data/CHANGELOG.md +66 -2
  4. data/README.md +133 -4
  5. data/docs/advanced-prompting.md +721 -0
  6. data/docs/cli-reference.md +582 -0
  7. data/docs/configuration.md +347 -0
  8. data/docs/contributing.md +332 -0
  9. data/docs/directives-reference.md +490 -0
  10. data/docs/examples/index.md +277 -0
  11. data/docs/examples/mcp/index.md +479 -0
  12. data/docs/examples/prompts/analysis/index.md +78 -0
  13. data/docs/examples/prompts/automation/index.md +108 -0
  14. data/docs/examples/prompts/development/index.md +125 -0
  15. data/docs/examples/prompts/index.md +333 -0
  16. data/docs/examples/prompts/learning/index.md +127 -0
  17. data/docs/examples/prompts/writing/index.md +62 -0
  18. data/docs/examples/tools/index.md +292 -0
  19. data/docs/faq.md +414 -0
  20. data/docs/guides/available-models.md +366 -0
  21. data/docs/guides/basic-usage.md +477 -0
  22. data/docs/guides/chat.md +474 -0
  23. data/docs/guides/executable-prompts.md +417 -0
  24. data/docs/guides/first-prompt.md +454 -0
  25. data/docs/guides/getting-started.md +455 -0
  26. data/docs/guides/image-generation.md +507 -0
  27. data/docs/guides/index.md +46 -0
  28. data/docs/guides/models.md +507 -0
  29. data/docs/guides/tools.md +856 -0
  30. data/docs/index.md +173 -0
  31. data/docs/installation.md +238 -0
  32. data/docs/mcp-integration.md +612 -0
  33. data/docs/prompt_management.md +579 -0
  34. data/docs/security.md +629 -0
  35. data/docs/tools-and-mcp-examples.md +1186 -0
  36. data/docs/workflows-and-pipelines.md +563 -0
  37. data/examples/tools/mcp/github_mcp_server.json +11 -0
  38. data/examples/tools/mcp/imcp.json +7 -0
  39. data/lib/aia/chat_processor_service.rb +19 -3
  40. data/lib/aia/config/base.rb +224 -0
  41. data/lib/aia/config/cli_parser.rb +409 -0
  42. data/lib/aia/config/defaults.rb +88 -0
  43. data/lib/aia/config/file_loader.rb +131 -0
  44. data/lib/aia/config/validator.rb +184 -0
  45. data/lib/aia/config.rb +10 -860
  46. data/lib/aia/directive_processor.rb +27 -372
  47. data/lib/aia/directives/configuration.rb +114 -0
  48. data/lib/aia/directives/execution.rb +37 -0
  49. data/lib/aia/directives/models.rb +178 -0
  50. data/lib/aia/directives/registry.rb +120 -0
  51. data/lib/aia/directives/utility.rb +70 -0
  52. data/lib/aia/directives/web_and_file.rb +71 -0
  53. data/lib/aia/prompt_handler.rb +23 -3
  54. data/lib/aia/ruby_llm_adapter.rb +307 -128
  55. data/lib/aia/session.rb +27 -14
  56. data/lib/aia/utility.rb +12 -8
  57. data/lib/aia.rb +11 -2
  58. data/lib/extensions/ruby_llm/.irbrc +56 -0
  59. data/mkdocs.yml +165 -0
  60. metadata +77 -20
  61. /data/{images → docs/assets/images}/aia.png +0 -0
@@ -0,0 +1,563 @@
+ # Workflows and Pipelines
+
+ AIA's workflow system lets you chain prompts together into sophisticated multi-stage processes. This enables automated processing pipelines that handle everything from simple two-step workflows to enterprise-level automation.
+
+ ## Understanding Workflows
+
+ ### Basic Concepts
+
+ **Workflow**: A sequence of prompts executed in order, where each prompt can pass context to the next.
+
+ **Pipeline**: A predefined sequence of prompt IDs that are executed automatically.
+
+ **Next Prompt**: The immediate next prompt to execute after the current one completes.
+
+ **Context Passing**: Information and results flow from one prompt to the next in the sequence.
+
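+ A minimal sketch of how these pieces fit together, assuming a hypothetical prompt file and stage names: a `//ruby` block can emit either directive, queueing a single follow-on prompt with `//next` or a whole sequence with `//pipeline`.
+
+ ```ruby
+ # chain_demo.txt (hypothetical prompt file)
+ //ruby
+ stages = %w[outline draft summary]
+ if stages.length == 1
+   puts "//next #{stages.first}"          # queue exactly one follow-on prompt
+ else
+   puts "//pipeline #{stages.join(',')}"  # queue the whole sequence
+ end
+ ```
+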
+ ## Simple Workflows
+
+ ### Sequential Processing
+ ```markdown
+ # first_prompt.txt
+ //next second_prompt
+ //config model gpt-4
+
+ Analyze the following data and prepare it for detailed analysis:
+ //include <%= data_file %>
+
+ Key findings summary:
+ ```
+
+ ```markdown
+ # second_prompt.txt
+ //config model claude-3-sonnet
+
+ Based on the initial analysis, provide detailed insights and recommendations:
+
+ Previous analysis results will be available in context.
+ Generate actionable recommendations.
+ ```
+
+ ### Basic Pipeline
+ ```bash
+ # Execute a simple pipeline
+ aia --pipeline "data_prep,analysis,report" dataset.csv
+
+ # Or chain the same prompts with --next options
+ aia data_prep --next analysis --next report dataset.csv
+ ```
+
+ ## Pipeline Definition
+
+ ### Command Line Pipelines
+ ```bash
+ # Simple linear pipeline
+ aia --pipeline "step1,step2,step3" input.txt
+
+ # Pipeline with output files
+ aia --pipeline "extract,transform,load" --out_file results.md data.csv
+
+ # Pipeline with model specification
+ aia --model gpt-4 --pipeline "review,optimize,test" code.py
+ ```
+
+ ### Directive-Based Pipelines
+ ```markdown
+ # pipeline_starter.txt
+ //pipeline analyze_data,generate_insights,create_visualization,write_report
+ //config model claude-3-sonnet
+
+ # Data Analysis Pipeline
+
+ Starting comprehensive data analysis workflow.
+
+ Input data: <%= input_file %>
+ Processing stages: 4 stages planned
+
+ ## Stage 1: Data Analysis
+ Initial data examination and basic statistics.
+ ```
+
+ ### Dynamic Pipeline Generation
+ ```ruby
+ # adaptive_pipeline.txt
+ //ruby
+ data_size = File.size('<%= input_file %>')
+ complexity = data_size > 100000 ? 'complex' : 'simple'
+
+ if complexity == 'complex'
+   pipeline = ['data_chunk', 'parallel_analysis', 'merge_results', 'comprehensive_report']
+ else
+   pipeline = ['quick_analysis', 'summary_report']
+ end
+
+ puts "//pipeline #{pipeline.join(',')}"
+ puts "Selected #{complexity} pipeline (#{pipeline.length} stages)"
+ ```
+
+ ## Advanced Workflow Patterns
+
+ ### Conditional Workflows
+ Execute different paths based on intermediate results:
+
+ ```ruby
+ # conditional_workflow.txt
+ //ruby
+ # Analyze input to determine workflow path
+ content = File.read('<%= input_file %>')
+ file_type = File.extname('<%= input_file %>')
+
+ if file_type == '.py'
+   workflow = ['python_analysis', 'security_check', 'performance_review', 'documentation']
+ elsif file_type == '.js'
+   workflow = ['javascript_analysis', 'eslint_check', 'performance_review', 'documentation']
+ elsif content.match?(/SELECT|INSERT|UPDATE|DELETE/i)
+   workflow = ['sql_analysis', 'security_audit', 'optimization_review']
+ else
+   workflow = ['generic_analysis', 'quality_check', 'recommendations']
+ end
+
+ puts "//pipeline #{workflow.join(',')}"
+ puts "Detected #{file_type} file, using #{workflow.first.split('_').first} workflow"
+ ```
+
+ ### Parallel Processing Workflows
+ Handle multiple inputs simultaneously:
+
+ ```ruby
+ # parallel_processing.txt
+ //ruby
+ input_files = Dir.glob('<%= pattern %>')
+ batch_size = 3
+
+ puts "Processing #{input_files.length} files in parallel batches"
+
+ input_files.each_slice(batch_size).with_index do |batch, index|
+   puts "\n## Batch #{index + 1}"
+   batch.each_with_index do |file, file_index|
+     puts "### File #{file_index + 1}: #{File.basename(file)}"
+     puts "//include #{file}"
+   end
+
+   puts "\nProcess this batch focusing on:"
+   puts "- Individual file analysis"
+   puts "- Cross-file relationships"
+   puts "- Batch-level patterns"
+
+   if index < (input_files.length / batch_size.to_f).ceil - 1
+     puts "//next parallel_processing_batch_#{index + 2}"
+   else
+     puts "//next merge_parallel_results"
+   end
+ end
+ ```
+
+ ### Error Recovery Workflows
+ Handle failures gracefully:
+
+ ```markdown
+ # robust_workflow.txt
+ //config model gpt-4
+ //config temperature 0.3
+
+ # Robust Analysis Workflow
+
+ //ruby
+ begin
+   primary_data = File.read('<%= primary_input %>')
+   puts "Using primary data source"
+   puts "//include <%= primary_input %>"
+
+   # Set success path
+   puts "//next detailed_analysis"
+
+ rescue => e
+   puts "Primary data unavailable: #{e.message}"
+   puts "Switching to fallback workflow"
+
+   # Check for fallback options
+   if File.exist?('<%= fallback_input %>')
+     puts "//include <%= fallback_input %>"
+     puts "//next basic_analysis"
+   else
+     puts "No data sources available"
+     puts "//next manual_input_prompt"
+   end
+ end
+ ```
+
+ ## State Management in Workflows
+
+ ### Context Persistence
+ Maintain state across workflow stages:
+
+ ```ruby
+ # stateful_workflow.txt
+ //ruby
+ require 'json'
+ require 'securerandom'
+ require 'time'
+
+ # Initialize or load workflow state
+ state_file = '/tmp/workflow_state.json'
+
+ if File.exist?(state_file)
+   state = JSON.parse(File.read(state_file))
+   puts "Resuming workflow at stage: #{state['current_stage']}"
+ else
+   state = {
+     'workflow_id' => SecureRandom.uuid,
+     'started_at' => Time.now.iso8601,
+     'current_stage' => 1,
+     'completed_stages' => [],
+     'data' => {}
+   }
+ end
+
+ # Update state for current stage
+ stage_name = '<%= stage_name || "unknown" %>'
+ state['current_stage'] = stage_name
+ state['data'][stage_name] = {
+   'started_at' => Time.now.iso8601,
+   'input_file' => '<%= input_file %>',
+   'model' => AIA.config.model
+ }
+
+ # Save state
+ File.write(state_file, JSON.pretty_generate(state))
+ puts "Workflow state saved: #{state['workflow_id']}"
+ ```
+
+ ### Data Passing Between Stages
+ Pass structured data between workflow stages:
+
+ ```ruby
+ # data_passing_example.txt
+ //ruby
+ require 'json'
+
+ # Stage data management
+ stage_data_file = "/tmp/stage_data_#{ENV['WORKFLOW_ID'] || 'default'}.json"
+
+ # Load previous stage data if available
+ previous_data = {}
+ if File.exist?(stage_data_file)
+   previous_data = JSON.parse(File.read(stage_data_file))
+   puts "Loaded data from previous stages:"
+   puts JSON.pretty_generate(previous_data)
+ end
+
+ # Current stage identifier
+ current_stage = '<%= current_stage || "stage_#{Time.now.to_i}" %>'
+
+ ## Current Stage: <%= current_stage.capitalize %>
+
+ Previous stage results:
+ <%= previous_data.empty? ? "No previous data" : previous_data.to_json %>
+
+ ## Analysis Task
+ Perform analysis considering previous stage results.
+
+ //ruby
+ # Prepare data for next stage (this would be set by the AI response processing)
+ current_results = {
+   'stage' => current_stage,
+   'timestamp' => Time.now.iso8601,
+   'status' => 'completed',
+   'key_findings' => 'placeholder_for_ai_results'
+ }
+
+ # This would typically be saved after AI processing
+ puts "Stage data template prepared for: #{current_stage}"
+ ```
+
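+ The second `//ruby` block above only prepares `current_results`; a minimal sketch of the save side, assuming the same `stage_data_file` naming convention and that `current_stage` and `current_results` from the template are in scope once the AI response has been processed:
+
+ ```ruby
+ //ruby
+ require 'json'
+
+ stage_data_file = "/tmp/stage_data_#{ENV['WORKFLOW_ID'] || 'default'}.json"
+ all_stages = File.exist?(stage_data_file) ? JSON.parse(File.read(stage_data_file)) : {}
+
+ # Merge this stage's results into the shared stage-data file
+ all_stages[current_stage] = current_results
+ File.write(stage_data_file, JSON.pretty_generate(all_stages))
+ puts "Saved results for stage: #{current_stage}"
+ ```
+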
+ ## Workflow Orchestration
+
+ ### Master Workflow Controller
+ Create workflows that manage other workflows:
+
+ ```ruby
+ # master_controller.txt
+ //config model gpt-4
+
+ # Master Workflow Controller
+
+ //ruby
+ project_type = '<%= project_type %>'
+ complexity = '<%= complexity || "standard" %>'
+
+ workflows = {
+   'code_project' => {
+     'simple' => ['code_review', 'basic_tests', 'documentation'],
+     'standard' => ['code_review', 'security_scan', 'performance_test', 'documentation'],
+     'complex' => ['architecture_review', 'code_review', 'security_audit', 'performance_analysis', 'test_suite', 'documentation']
+   },
+   'data_analysis' => {
+     'simple' => ['data_overview', 'basic_stats', 'summary'],
+     'standard' => ['data_validation', 'exploratory_analysis', 'modeling', 'insights'],
+     'complex' => ['data_profiling', 'quality_assessment', 'feature_engineering', 'advanced_modeling', 'validation', 'reporting']
+   },
+   'content_creation' => {
+     'simple' => ['outline', 'draft', 'review'],
+     'standard' => ['research', 'outline', 'draft', 'edit', 'finalize'],
+     'complex' => ['research', 'expert_review', 'outline', 'sections_draft', 'peer_review', 'revision', 'final_edit']
+   }
+ }
+
+ # Guard against unknown project types or complexity levels
+ selected_workflow = workflows.dig(project_type, complexity)
+ raise "No workflow defined for #{project_type}/#{complexity}" if selected_workflow.nil?
+
+ puts "//pipeline #{selected_workflow.join(',')}"
+
+ puts "Initiating #{project_type} workflow (#{complexity} complexity)"
+ puts "Stages: #{selected_workflow.length}"
+ puts "Estimated duration: #{selected_workflow.length * 5} minutes"
+ ```
+
+ ### Workflow Monitoring and Logging
+ Track workflow execution and performance:
+
+ ```ruby
+ # workflow_monitor.txt
+ //ruby
+ require 'logger'
+ require 'securerandom'
+ require 'date'
+
+ # Setup workflow logging
+ log_dir = '/tmp/aia_workflows'
+ Dir.mkdir(log_dir) unless Dir.exist?(log_dir)
+
+ logger = Logger.new("#{log_dir}/workflow_#{Date.today.strftime('%Y%m%d')}.log")
+ workflow_id = ENV['WORKFLOW_ID'] || SecureRandom.uuid
+
+ # Log workflow start
+ logger.info("Workflow #{workflow_id} started")
+ logger.info("Stage: <%= stage_name %>")
+ logger.info("Model: #{AIA.config.model}")
+ logger.info("Input: <%= input_description %>")
+
+ start_time = Time.now
+ puts "Workflow monitoring active (ID: #{workflow_id})"
+ ```
+
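+ The monitor above records `start_time` but never logs completion. A minimal sketch of the closing side, under the assumption that the directive's Ruby process lives until the stage finishes, so an `at_exit` hook can record elapsed time:
+
+ ```ruby
+ //ruby
+ require 'logger'
+ require 'fileutils'
+
+ FileUtils.mkdir_p('/tmp/aia_workflows')
+ logger = Logger.new('/tmp/aia_workflows/workflow_timing.log')
+ start_time = Time.now
+
+ # Record stage duration when the directive process exits
+ at_exit do
+   logger.info("Stage finished in #{(Time.now - start_time).round(2)}s")
+ end
+ ```
+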
+ ## Workflow Performance Optimization
+
+ ### Intelligent Model Selection
+ Choose optimal models for each workflow stage:
+
+ ```ruby
+ # model_optimized_workflow.txt
+ //ruby
+ stages = {
+   'data_extraction' => { model: 'gpt-3.5-turbo', temperature: 0.2 },
+   'analysis' => { model: 'claude-3-sonnet', temperature: 0.3 },
+   'creative_generation' => { model: 'gpt-4', temperature: 1.0 },
+   'review_and_edit' => { model: 'gpt-4', temperature: 0.4 },
+   'final_formatting' => { model: 'gpt-3.5-turbo', temperature: 0.1 }
+ }
+
+ current_stage = '<%= current_stage %>'
+ stage_config = stages[current_stage]
+
+ if stage_config
+   puts "//config model #{stage_config[:model]}"
+   puts "//config temperature #{stage_config[:temperature]}"
+   puts "Optimized for #{current_stage}: #{stage_config[:model]} at #{stage_config[:temperature]} temperature"
+ else
+   puts "//config model gpt-4"
+   puts "Using default model for unknown stage: #{current_stage}"
+ end
+ ```
+
+ ### Caching and Optimization
+ Implement caching for workflow efficiency:
+
+ ```ruby
+ # cached_workflow.txt
+ //ruby
+ require 'digest'
+ require 'json'
+
+ # Create cache key from inputs and configuration
+ cache_inputs = {
+   'stage' => '<%= stage_name %>',
+   'input_file' => '<%= input_file %>',
+   'model' => AIA.config.model,
+   'temperature' => AIA.config.temperature
+ }
+
+ cache_key = Digest::MD5.hexdigest(cache_inputs.to_json)
+ cache_file = "/tmp/workflow_cache_#{cache_key}.json"
+ cache_duration = 3600 # 1 hour
+
+ if File.exist?(cache_file) && (Time.now - File.mtime(cache_file)) < cache_duration
+   cached_result = JSON.parse(File.read(cache_file))
+   puts "Using cached result for stage: #{cached_result['stage']}"
+   puts cached_result['content']
+
+   # Skip to next stage if available
+   if cached_result['next_stage']
+     puts "//next #{cached_result['next_stage']}"
+   end
+
+   exit # Skip AI processing
+ else
+   puts "Processing fresh request (cache miss or expired)"
+   # Continue with normal processing
+ end
+ ```
+
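+ The block above only reads the cache; a minimal sketch of the write side, assuming the same `cache_file` key computed above and a post-processing step where the AI response text is available (`response` and the `next_stage` value here are illustrative placeholders):
+
+ ```ruby
+ //ruby
+ require 'json'
+
+ # Persist the stage result under the same cache key so the next run can reuse it
+ cache_entry = {
+   'stage' => '<%= stage_name %>',
+   'content' => response,           # placeholder: the processed AI response
+   'next_stage' => 'final_report'   # placeholder: whatever stage follows
+ }
+ File.write(cache_file, JSON.pretty_generate(cache_entry))
+ puts "Cached result for stage: #{cache_entry['stage']}"
+ ```
+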
+ ## Real-World Workflow Examples
+
+ ### Software Development Pipeline
+ Complete software development workflow:
+
+ ```markdown
+ # software_dev_pipeline.txt
+ //pipeline requirements_analysis,architecture_design,implementation_plan,code_review,testing_strategy,documentation,deployment_guide
+
+ # Software Development Pipeline
+
+ Project: <%= project_name %>
+ Repository: //include README.md
+
+ ## Pipeline Stages:
+ 1. **Requirements Analysis** - Extract and analyze requirements
+ 2. **Architecture Design** - Design system architecture
+ 3. **Implementation Plan** - Create detailed implementation plan
+ 4. **Code Review** - Review existing code
+ 5. **Testing Strategy** - Develop testing approach
+ 6. **Documentation** - Generate comprehensive docs
+ 7. **Deployment Guide** - Create deployment instructions
+
+ Starting requirements analysis phase...
+
+ //config model gpt-4
+ //config temperature 0.4
+ ```
+
+ ### Content Creation Workflow
+ Multi-stage content creation pipeline:
+
+ ```markdown
+ # content_creation_pipeline.txt
+ //pipeline research_phase,outline_creation,content_draft,expert_review,content_revision,final_edit,seo_optimization
+
+ # Content Creation Pipeline
+
+ Topic: <%= topic %>
+ Target Audience: <%= audience %>
+ Content Type: <%= content_type %>
+
+ ## Research Phase
+ //include source_materials.md
+ //shell curl -s "https://api.example.com/research/<%= topic %>" | jq '.'
+
+ Initial research and source gathering...
+
+ //config model claude-3-sonnet
+ //config temperature 0.6
+ ```
+
+ ### Data Science Workflow
+ Comprehensive data analysis pipeline:
+
+ ```ruby
+ # data_science_workflow.txt
+ //ruby
+ dataset_size = File.size('<%= dataset %>')
+ complexity = dataset_size > 10000000 ? 'enterprise' : 'standard'
+
+ pipelines = {
+   'standard' => ['data_exploration', 'data_cleaning', 'feature_analysis', 'modeling', 'validation', 'insights'],
+   'enterprise' => ['data_profiling', 'quality_assessment', 'preprocessing', 'feature_engineering', 'model_selection', 'hyperparameter_tuning', 'validation', 'deployment_prep', 'monitoring_setup']
+ }
+
+ selected_pipeline = pipelines[complexity]
+ puts "//pipeline #{selected_pipeline.join(',')}"
+
+ puts "Selected #{complexity} data science pipeline"
+ puts "Dataset size: #{dataset_size} bytes"
+
+ # Data Science Analysis Pipeline
+
+ Dataset: //include <%= dataset %>
+
+ Pipeline optimized for <%= complexity %> analysis with <%= selected_pipeline.length %> stages.
+
+ //config model claude-3-sonnet
+ //config temperature 0.3
+ ```
+
+ ## Workflow Best Practices
+
+ ### Design Principles
+ 1. **Modularity**: Each stage should have a clear, single purpose
+ 2. **Reusability**: Design stages that can be used in multiple workflows
+ 3. **Error Handling**: Plan for failures and provide recovery paths
+ 4. **State Management**: Maintain proper state between stages
+ 5. **Monitoring**: Include logging and progress tracking
+
+ ### Performance Considerations
+ 1. **Model Selection**: Choose appropriate models for each stage
+ 2. **Caching**: Cache expensive operations and intermediate results
+ 3. **Parallel Processing**: Run independent stages concurrently (see the sketch after this list)
+ 4. **Resource Management**: Monitor memory and token usage
+ 5. **Optimization**: Profile and optimize slow stages
+
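+ A minimal sketch of that parallel-processing idea, assuming three independent stage prompts (`lint`, `tests`, `docs`) that can each run as a separate aia invocation:
+
+ ```ruby
+ # run_independent_stages.rb (hypothetical helper script)
+ stages = %w[lint tests docs]
+
+ # Launch one aia process per independent stage, then wait for all of them
+ pids = stages.map do |stage|
+   Process.spawn('aia', stage, '--out_file', "/tmp/#{stage}_results.md")
+ end
+ pids.each { |pid| Process.wait(pid) }
+
+ puts "All #{stages.length} independent stages finished"
+ ```
+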
+ ### Maintenance and Debugging
+ 1. **Logging**: Comprehensive logging for troubleshooting
+ 2. **Testing**: Test workflows with various inputs
+ 3. **Documentation**: Document workflow purpose and usage
+ 4. **Versioning**: Version control workflow definitions
+ 5. **Monitoring**: Track workflow performance and success rates
+
+ ## Troubleshooting Workflows
+
+ ### Common Issues
+
+ #### Workflow Interruption
+ ```bash
+ # Resume interrupted workflow
+ export WORKFLOW_ID="previous_workflow_id"
+ aia --resume-workflow $WORKFLOW_ID
+
+ # Or restart from specific stage
+ aia --pipeline "failed_stage,remaining_stages" --resume-from failed_stage
+ ```
+
+ #### Context Size Issues
+ ```ruby
+ # Handle large contexts in workflows
+ //ruby
+ context_size = File.read('<%= context_file %>').length
+ max_context = 50000
+
+ if context_size > max_context
+   puts "Context too large (#{context_size} chars), implementing chunking strategy"
+   puts "//pipeline chunk_processing,merge_results,final_analysis"
+ else
+   puts "//pipeline standard_analysis,final_report"
+ end
+ ```
+
+ #### Model Rate Limiting
+ ```ruby
+ # Handle rate limiting in workflows
+ //ruby
+ stage_delays = {
+   'heavy_analysis' => 30, # seconds
+   'api_calls' => 10,
+   'standard' => 5
+ }
+
+ current_stage = '<%= stage_name %>'
+ delay = stage_delays[current_stage] || stage_delays['standard']
+
+ puts "Implementing #{delay}s delay for rate limiting"
+ sleep delay if ENV['WORKFLOW_MODE'] == 'production'
+ ```
+
+ ## Related Documentation
+
+ - [Advanced Prompting](advanced-prompting.md) - Complex prompting techniques
+ - [Prompt Management](prompt_management.md) - Organizing prompts
+ - [Configuration](configuration.md) - Workflow configuration options
+ - [Examples](examples/index.md) - Real-world workflow examples
+ - [CLI Reference](cli-reference.md) - Pipeline command-line options
+
+ ---
+
+ Workflows and pipelines are powerful features that enable sophisticated automation with AIA. Start with simple sequential workflows and gradually build more complex, intelligent automation systems as your needs grow!
@@ -0,0 +1,11 @@
+ {
+   "mcpservers": {
+     "github": {
+       "command": "/opt/homebrew/bin/github-mcp-server",
+       "args": ["stdio"],
+       "env": {
+         "GITHUB_PERSONAL_ACCESS_TOKEN": "YOUR_GITHUB_PAT_HERE"
+       }
+     }
+   }
+ }
@@ -0,0 +1,7 @@
+ {
+   "mcpservers": {
+     "iMCP": {
+       "command": "/Applications/iMCP.app/Contents/MacOS/imcp-server 2> /dev/null"
+     }
+   }
+ }
@@ -46,6 +46,11 @@ module AIA
 
 
     def maybe_change_model
+      # With multiple models, we don't need to change the model in the same way
+      # The RubyLLMAdapter now handles multiple models internally
+      # This method is kept for backward compatibility but may not be needed
+      return if AIA.config.model.is_a?(Array)
+
       client_model = AIA.client.model.id # RubyLLM::Model instance
 
       unless AIA.config.model.downcase.include?(client_model.downcase)
@@ -64,7 +69,12 @@ module AIA
       else
         mode = AIA.append? ? 'a' : 'w'
         File.open(AIA.config.out_file, mode) do |file|
-          file.puts response
+          file.puts "\nAI: "
+          # Handle multi-line responses by adding proper indentation
+          response_lines = response.to_s.split("\n")
+          response_lines.each do |line|
+            file.puts "  #{line}"
+          end
         end
       end
 
@@ -89,8 +99,14 @@ module AIA
 
 
     def determine_operation_type
-      mode = AIA.config.client.model.modalities
-      mode.input.join(',') + " TO " + mode.output.join(',')
+      # With multiple models, determine operation type from the first model
+      # or provide a generic description
+      if AIA.config.model.is_a?(Array) && AIA.config.model.size > 1
+        "MULTI-MODEL PROCESSING"
+      else
+        mode = AIA.config.client.model.modalities
+        mode.input.join(',') + " TO " + mode.output.join(',')
+      end
    end
  end
end