aia 0.9.11 → 0.9.12
- checksums.yaml +4 -4
- data/.version +1 -1
- data/CHANGELOG.md +66 -2
- data/README.md +133 -4
- data/docs/advanced-prompting.md +721 -0
- data/docs/cli-reference.md +582 -0
- data/docs/configuration.md +347 -0
- data/docs/contributing.md +332 -0
- data/docs/directives-reference.md +490 -0
- data/docs/examples/index.md +277 -0
- data/docs/examples/mcp/index.md +479 -0
- data/docs/examples/prompts/analysis/index.md +78 -0
- data/docs/examples/prompts/automation/index.md +108 -0
- data/docs/examples/prompts/development/index.md +125 -0
- data/docs/examples/prompts/index.md +333 -0
- data/docs/examples/prompts/learning/index.md +127 -0
- data/docs/examples/prompts/writing/index.md +62 -0
- data/docs/examples/tools/index.md +292 -0
- data/docs/faq.md +414 -0
- data/docs/guides/available-models.md +366 -0
- data/docs/guides/basic-usage.md +477 -0
- data/docs/guides/chat.md +474 -0
- data/docs/guides/executable-prompts.md +417 -0
- data/docs/guides/first-prompt.md +454 -0
- data/docs/guides/getting-started.md +455 -0
- data/docs/guides/image-generation.md +507 -0
- data/docs/guides/index.md +46 -0
- data/docs/guides/models.md +507 -0
- data/docs/guides/tools.md +856 -0
- data/docs/index.md +173 -0
- data/docs/installation.md +238 -0
- data/docs/mcp-integration.md +612 -0
- data/docs/prompt_management.md +579 -0
- data/docs/security.md +629 -0
- data/docs/tools-and-mcp-examples.md +1186 -0
- data/docs/workflows-and-pipelines.md +563 -0
- data/examples/tools/mcp/github_mcp_server.json +11 -0
- data/examples/tools/mcp/imcp.json +7 -0
- data/lib/aia/chat_processor_service.rb +19 -3
- data/lib/aia/config/base.rb +224 -0
- data/lib/aia/config/cli_parser.rb +409 -0
- data/lib/aia/config/defaults.rb +88 -0
- data/lib/aia/config/file_loader.rb +131 -0
- data/lib/aia/config/validator.rb +184 -0
- data/lib/aia/config.rb +10 -860
- data/lib/aia/directive_processor.rb +27 -372
- data/lib/aia/directives/configuration.rb +114 -0
- data/lib/aia/directives/execution.rb +37 -0
- data/lib/aia/directives/models.rb +178 -0
- data/lib/aia/directives/registry.rb +120 -0
- data/lib/aia/directives/utility.rb +70 -0
- data/lib/aia/directives/web_and_file.rb +71 -0
- data/lib/aia/prompt_handler.rb +23 -3
- data/lib/aia/ruby_llm_adapter.rb +307 -128
- data/lib/aia/session.rb +27 -14
- data/lib/aia/utility.rb +12 -8
- data/lib/aia.rb +11 -2
- data/lib/extensions/ruby_llm/.irbrc +56 -0
- data/mkdocs.yml +165 -0
- metadata +77 -20
- /data/{images → docs/assets/images}/aia.png +0 -0
checksums.yaml
CHANGED

@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: b10d493c773b5eecca9050d03180d8d5a23b07b2e79b0e737b284b6629f3319a
+  data.tar.gz: 10bef52c76f737db9e3d974f44690242b4c417a89e4c17808f243c281eb70817
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: b10021b2557e63cfd4cbf19eead313c993af06e175ab81b1ab0f334afce4f6fc9ce9c877c2dd2f9e0da3093c07996daba667786f1200f8e2f7470b62161dcc64
+  data.tar.gz: de8952cc72bf776bae243d20fee31e60663f38f4d12a8a3497eb89002a63a2acf39ca4e67e97c344f9324106dc50b7d3c7e74e51154e74ccef238d909570316e
data/.version
CHANGED

@@ -1 +1 @@
-0.9.11
+0.9.12
data/CHANGELOG.md
CHANGED

@@ -1,10 +1,74 @@
 # Changelog
 ## [Unreleased]
 
-
+## Released
+
+### [0.9.12] 2025-08-28
+
+#### New Features
+- **MAJOR NEW FEATURE**: Multi-model support - specify multiple AI models simultaneously with comma-separated syntax
+- **NEW FEATURE**: `--consensus` flag to enable primary model consensus mode for synthesized responses from multiple models
+- **NEW FEATURE**: `--no-consensus` flag to explicitly force individual responses from all models
+- **NEW FEATURE**: Enhanced `//model` directive now shows comprehensive multi-model configuration details
+- **NEW FEATURE**: Concurrent processing of multiple models for improved performance
+- **NEW FEATURE**: Primary model concept - first model in list serves as consensus orchestrator
+- **NEW FEATURE**: Multi-model error handling - invalid models reported but don't prevent valid models from working
+- **NEW FEATURE**: Multi-model support in both batch and interactive chat modes
+- **NEW FEATURE**: Comprehensive documentation website https://madbomber.github.io/aia/
+
+#### Improvements
+- **ENHANCEMENT**: Enhanced `//model` directive output with detailed multi-model configuration display
+- **ENHANCEMENT**: Improved error handling with graceful fallback when model initialization fails
+- **ENHANCEMENT**: Better TTY handling in chat mode to prevent `Errno::ENXIO` errors in containerized environments
+- **ENHANCEMENT**: Updated directive processor to use new module-based architecture for better maintainability
+- **ENHANCEMENT**: Improved batch mode output file formatting consistency between STDOUT and file output
+
+#### Bug Fixes
+- **BUG FIX**: Fixed DirectiveProcessor TypeError that prevented application startup with invalid directive calls
+- **BUG FIX**: Fixed missing primary model output in batch mode output files
+- **BUG FIX**: Fixed inconsistent formatting between STDOUT and file output in batch mode
+- **BUG FIX**: Fixed TTY availability issues in chat mode for containerized environments
+- **BUG FIX**: Fixed directive processing to use updated module-based registry system
+
+#### Technical Changes
+- Fixed ruby_llm version to 1.5.1
+- Added extra API_KEY configuration for new LLM providers
+- Updated RubyLLMAdapter to support multiple model initialization and management
+- Enhanced ChatProcessorService output handling for multi-model responses
+- Improved Session class TTY error handling with proper exception catching
+- Updated CLI parser to support multi-model flags and options
+- Enhanced configuration system to support consensus mode settings
+
+#### Documentation
+- **DOCUMENTATION**: Comprehensive README.md updates with multi-model usage examples and best practices
+- **DOCUMENTATION**: Added multi-model section to README with detailed usage instructions
+- **DOCUMENTATION**: Updated command-line options table with new multi-model flags
+- **DOCUMENTATION**: Added practical multi-model examples for decision-making scenarios
+
+#### Usage Examples
+```bash
+# Basic multi-model usage
+aia my_prompt -m gpt-4o-mini,gpt-3.5-turbo
+
+# Enable consensus mode for synthesized response
+aia my_prompt -m gpt-4o-mini,gpt-3.5-turbo,gpt-5-mini --consensus
+
+# Multi-model chat mode
+aia --chat -m gpt-4o-mini,gpt-3.5-turbo
+
+# View current multi-model configuration
+//model  # Use in any prompt or chat session
+```
+
+#### Migration Notes
+- Existing single-model usage remains unchanged and fully compatible
+- Multi-model is opt-in via comma-separated model names
+- Default behavior without `--consensus` flag shows individual responses from all models
+- Invalid model names are reported but don't prevent valid models from working
+
+#### TODO
 - TODO: focus on log file consistency using Logger
 
-## Released
 
 ### [0.9.11] 2025-07-31
 - added a cost per 1 million input tokens to available_models query output
data/README.md
CHANGED

@@ -1,6 +1,6 @@
 <div align="center">
   <h1>AI Assistant (AIA)</h1>
-  <img src="images/aia.png" alt="Robots waiter ready to take your order."><br />
+  <img src="docs/assets/images/aia.png" alt="Robots waiter ready to take your order."><br />
   **The Prompt is the Code**
 </div>
 
@@ -13,8 +13,10 @@ AIA leverages the following Ruby gems:
 - **[ruby_llm-mcp](https://www.rubyllm-mcp.com)** for Model Context Protocol (MCP) support,
 - and can use the **[shared_tools gem](https://github.com/madbomber/shared_tools)** which provides a collection of common ready-to-use MCP clients and functions for use with LLMs that support tools.
 
-
-
+For more information on AIA visit these locations:
+
+- **[The AIA Docs Website](https://madbomber.github.io/aia)**<br />
+- **[Blog Series on AIA](https://madbomber.github.io/blog/engineering/AIA-Philosophy/)**
 
 ## Quick Start
 
@@ -43,6 +45,9 @@
 5. **Start an interactive chat:**
    ```bash
    aia --chat
+
+   # Or use multiple models for comparison
+   aia --chat -m gpt-4o-mini,gpt-3.5-turbo
    ```
 
 ```plain
@@ -81,6 +86,11 @@
 - [Configuration Directive Examples](#configuration-directive-examples)
 - [Dynamic Content Examples](#dynamic-content-examples)
 - [Custom Directive Examples](#custom-directive-examples)
+- [Multi-Model Support](#multi-model-support)
+  - [Basic Multi-Model Usage](#basic-multi-model-usage)
+  - [Consensus Mode](#consensus-mode)
+  - [Individual Responses Mode](#individual-responses-mode)
+  - [Model Information](#model-information)
 - [Shell Integration](#shell-integration)
 - [Embedded Ruby (ERB)](#embedded-ruby-erb)
 - [Prompt Sequences](#prompt-sequences)
@@ -192,7 +202,9 @@ aia --fuzzy
 | Option | Description | Example |
 |--------|-------------|---------|
 | `--chat` | Start interactive chat session | `aia --chat` |
-| `--model MODEL` | Specify AI model to use | `aia --model gpt-
+| `--model MODEL` | Specify AI model(s) to use | `aia --model gpt-4o-mini,gpt-3.5-turbo` |
+| `--consensus` | Enable consensus mode for multi-model | `aia --consensus` |
+| `--no-consensus` | Force individual responses | `aia --no-consensus` |
 | `--role ROLE` | Use a role/system prompt | `aia --role expert` |
 | `--out_file FILE` | Specify output file | `aia --out_file results.md` |
 | `--fuzzy` | Use fuzzy search for prompts | `aia --fuzzy` |
@@ -331,6 +343,7 @@ Directives are special commands in prompt files that begin with `//` and provide
 | `//pipeline` | Set prompt workflow | `//pipeline analyze,summarize,report` |
 | `//clear` | Clear conversation history | `//clear` |
 | `//help` | Show available directives | `//help` |
+| `//model` | Show current model configuration | `//model` |
 | `//available_models` | List available models | `//available_models` |
 | `//tools` | Show a list of available tools and their description | `//tools` |
 | `//review` | Review current context | `//review` |
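The directives in this table are dispatched by the "module-based registry system" the changelog mentions (the diff adds `data/lib/aia/directives/registry.rb`). The following toy Ruby registry sketches the general idea; the names and behavior here are our invention, not taken from AIA's source.

```ruby
# Hedged sketch of a //directive registry, loosely in the spirit of
# lib/aia/directives/registry.rb -- NOT the gem's actual API.
module DirectiveRegistry
  @handlers = {}

  def self.register(name, &block)
    @handlers[name] = block
  end

  # Lines starting with "//" are directives; everything else is prompt text.
  def self.process(line, context = {})
    return nil unless line.start_with?("//")
    name, _sep, args = line[2..].partition(" ")
    handler = @handlers[name]
    return "unknown directive //#{name}" unless handler
    handler.call(args, context)
  end
end

DirectiveRegistry.register("model") { |_args, ctx| "Model count: #{Array(ctx[:models]).size}" }
DirectiveRegistry.register("say")   { |args, _ctx| args.upcase }

out1 = DirectiveRegistry.process("//model", models: %w[gpt-4o-mini gpt-3.5-turbo])
out2 = DirectiveRegistry.process("//say hello")
```

Grouping handlers into modules (configuration, execution, models, utility, ...) as the new `lib/aia/directives/` files suggest keeps each directive small and independently testable.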
@@ -393,6 +406,99 @@ aia --tools examples/directives/ask.rb --chat
 //ask gather the latest closing data for the DOW, NASDAQ, and S&P 500
 ```
 
+### Multi-Model Support
+
+AIA supports running multiple AI models simultaneously, allowing you to:
+- Compare responses from different models
+- Get consensus answers from multiple AI perspectives
+- Leverage the strengths of different models for various tasks
+
+#### Basic Multi-Model Usage
+
+Specify multiple models using comma-separated values with the `-m` flag:
+
+```bash
+# Use two models
+aia my_prompt -m gpt-4o-mini,gpt-3.5-turbo
+
+# Use three models
+aia my_prompt -m gpt-4o-mini,gpt-3.5-turbo,gpt-5-mini
+
+# Works in chat mode too
+aia --chat -m gpt-4o-mini,gpt-3.5-turbo
+```
+
+#### Consensus Mode
+
+Use the `--consensus` flag to have the primary model (first in the list) synthesize responses from all models into a unified answer:
+
+```bash
+# Enable consensus mode
+aia my_prompt -m gpt-4o-mini,gpt-3.5-turbo,gpt-5-mini --consensus
+```
+
+**Consensus Output Format:**
+```
+from: gpt-4o-mini (consensus)
+Based on the insights from multiple AI models, here is a comprehensive answer that
+incorporates the best perspectives and resolves any contradictions...
+```
+
+#### Individual Responses Mode
+
+By default (or with `--no-consensus`), each model provides its own response:
+
+```bash
+# Default behavior - show individual responses
+aia my_prompt -m gpt-4o-mini,gpt-3.5-turbo,gpt-5-mini
+
+# Explicitly disable consensus
+aia my_prompt -m gpt-4o-mini,gpt-3.5-turbo --no-consensus
+```
+
+**Individual Responses Output Format:**
+```
+from: gpt-4o-mini
+Response from the first model...
+
+from: gpt-3.5-turbo
+Response from the second model...
+
+from: gpt-5-mini
+Response from the third model...
+```
+
+#### Model Information
+
+View your current multi-model configuration using the `//model` directive:
+
+```bash
+# In any prompt file or chat session
+//model
+```
+
+**Example Output:**
+```
+Multi-Model Configuration:
+==========================
+Model count: 3
+Primary model: gpt-4o-mini (used for consensus when --consensus flag is enabled)
+Consensus mode: false
+
+Model Details:
+--------------------------------------------------
+1. gpt-4o-mini (primary)
+2. gpt-3.5-turbo
+3. gpt-5-mini
+```
+
+**Key Features:**
+- **Primary Model**: The first model in the list serves as the consensus orchestrator
+- **Concurrent Processing**: All models run simultaneously for better performance
+- **Flexible Output**: Choose between individual responses or synthesized consensus
+- **Error Handling**: Invalid models are reported but don't prevent valid models from working
+- **Batch Mode Support**: Multi-model responses are properly formatted in output files
+
 ### Shell Integration
 
 AIA automatically processes shell patterns in prompts:
@@ -609,6 +715,29 @@ Generate documentation for the Ruby project shown above.
 Include: API references, usage examples, and setup instructions.
 ```
 
+#### Multi-Model Decision Making
+```bash
+# ~/.prompts/decision_maker.txt
+# Compare different AI perspectives on complex decisions
+
+What are the pros and cons of [DECISION_TOPIC]?
+Consider: technical feasibility, business impact, risks, and alternatives.
+
+Analyze this thoroughly and provide actionable recommendations.
+```
+
+Usage examples:
+```bash
+# Get individual perspectives from each model
+aia decision_maker -m gpt-4o-mini,gpt-3.5-turbo,gpt-5-mini --no-consensus
+
+# Get a synthesized consensus recommendation
+aia decision_maker -m gpt-4o-mini,gpt-3.5-turbo,gpt-5-mini --consensus
+
+# Use with chat mode for follow-up questions
+aia --chat -m gpt-4o-mini,gpt-3.5-turbo --consensus
+```
+
 ### Executable Prompts
 
 The `--exec` flag is used to create executable prompts. If it is not present on the shebang line then the prompt file will be treated like any other context file. That means that the file will be included as context in the prompt but no dynamic content integration or directives will be processed. All other AIA options are, well, optional. All you need is an initial prompt ID and the --exec flag.
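As a sketch of what that paragraph describes, an executable prompt is a plain text file whose shebang line invokes `aia` with a prompt ID and `--exec`. The file name, the prompt ID `run`, and the exact shebang wording below are our assumptions for illustration, not taken from the gem's docs:

```bash
#!/usr/bin/env -S aia run --exec
# Hypothetical executable prompt file; `env -S` splits the arguments on Linux,
# and the file must be made executable first:  chmod +x daily_summary
# "run" is an illustrative prompt ID; --exec enables directives and dynamic content.

Summarize my day in three bullet points based on the notes below.
```

Without `--exec` on the shebang line, AIA would treat this file as ordinary context rather than as a prompt to execute.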