rainbow_llm 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA256:
+   metadata.gz: 0b7f668ffcf5f6b67b910f729dd10670908264762798119e3fb0d21b4e27995f
+   data.tar.gz: d0f42404d0d702e8c720b1e5febc8b17fcf02e626d867acd600cbc4ad1fedff0
+ SHA512:
+   metadata.gz: fcffac6222cdefa8255eb457913e802621fa262cac2d72ecac56d8e4031b18d71fb345d260911ddebe44b326d5f348ba00f6c33f4cae9df20b3dad662ff686d8
+   data.tar.gz: dc7598c47b9cf858a4e41be22c5b268f8206206f8451f657fc0ef20db25b137e8d51a9fc3019aac44fc184a87ce05bfd40d55954c326cb1d4bb8604372be199b
data/.rspec ADDED
@@ -0,0 +1,3 @@
+ --require spec_helper
+ --format documentation
+ --color
data/CHANGELOG.md ADDED
@@ -0,0 +1,5 @@
+ ## [Unreleased]
+
+ ## [0.1.0] - 2026-01-31
+
+ - Initial release
data/LICENSE.txt ADDED
@@ -0,0 +1,21 @@
+ The MIT License (MIT)
+
+ Copyright (c) 2026 a-chris
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in
+ all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
+ THE SOFTWARE.
data/README.md ADDED
@@ -0,0 +1,269 @@
+ # 🌈 RainbowLLM
+
+ Sponsored by
+ <a href="https://coney.app" target="_blank"><img src="https://www.google.com/s2/favicons?domain=coney.app&sz=32" alt="Coney.app" width="20" height="20"> Coney.app</a> <a href="https://rubycon.it" target="_blank"><img src="https://www.google.com/s2/favicons?domain=rubycon.it&sz=32" alt="Rubycon.it" width="20" height="20"> Rubycon.it</a>
+
+ [![Gem Version](https://badge.fury.io/rb/rainbow_llm.svg)](https://badge.fury.io/rb/rainbow_llm) [![MIT License](https://img.shields.io/badge/license-MIT-blue.svg)](https://opensource.org/licenses/MIT) [![Ruby](https://img.shields.io/badge/ruby-%3E%3D%203.2-red.svg)](https://www.ruby-lang.org/)
+
+ RainbowLLM is your **AI resilience layer** - a smart routing gem that gives you:
+
+ - **Automatic failover**: When one provider fails, the next is tried instantly, so your application stays online even when providers go down
+ - **Cost control**: Maximize free-tier usage and avoid vendor lock-in
+ - **Easy integration**: A simple Ruby interface built on RubyLLM
+
+ ```ruby
+ # If OpenAI fails, automatically tries Anthropic, then Ollama
+ response = RainbowLLM.chat(
+   models: ["openai/gpt-5", "anthropic/claude-3.5-sonnet", "ollama/llama3.3"]
+ ).ask(user_question)
+ ```
+
+ ## 🚀 Quick Start
+
+ ```bash
+ bundle add rainbow_llm
+ ```
+
+ ## 📖 Table of Contents
+
+ - [Features](#features)
+ - [Installation](#-installation)
+ - [Configuration](#-configuration)
+ - [Usage Examples](#-usage-examples)
+ - [Advanced Patterns](#-advanced-patterns)
+ - [Development](#-development)
+ - [Contributing](#-contributing)
+ - [License](#-license)
+
+ ## Features
+
+ - **Automatic failover** - Tries providers in your specified order, with instant fallback when a provider fails and configurable retry logic and timeouts
+ - **Cost optimization** - Route to the most cost-effective provider, maximize free-tier usage across providers, and avoid rate-limit surprises
+ - **Flexible configuration** - Supports OpenAI-compatible endpoints, Basic Auth and API key authentication, and custom endpoints and model mappings
+ - **Production ready** - Built on the reliable **ruby_llm** foundation with comprehensive error handling, detailed logging, and monitoring
+
+ ## 📦 Installation
+
+ ### Option 1: With Bundler (Recommended)
+
+ ```bash
+ bundle add rainbow_llm
+ ```
+
+ ### Option 2: Direct Install
+
+ ```bash
+ gem install rainbow_llm
+ ```
+
+ ### Requirements
+
+ - Ruby 3.2+ (the gem relies on `Data.define`, introduced in Ruby 3.2)
+
+ ## ⚙️ Configuration
+
+ Configure your providers once, use them everywhere:
+
+ ```ruby
+ # config/initializers/rainbow_llm.rb
+ RainbowLLM.configure do |config|
+   # Local Ollama instance
+   config.provider :ollama, {
+     provider: "openai_basic",
+     uri_base: "http://localhost:11434/v1",
+     access_token: ENV['OLLAMA_API_KEY'],
+     assume_model_exists: true
+   }
+
+   # Cerebras cloud service
+   config.provider :cerebras, {
+     provider: "openai",
+     uri_base: "https://api.cerebras.ai/v1",
+     access_token: ENV['CEREBRAS_API_KEY'],
+     assume_model_exists: true
+   }
+
+   # Add as many providers as you need!
+   config.provider :openai, {
+     provider: "openai",
+     access_token: ENV['OPENAI_API_KEY']
+   }
+ end
+ ```
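+
+ The `openai_basic` provider type sends the token as an HTTP `Authorization: Basic` header (see `lib/rainbow_llm/providers/basic_auth_openai.rb`), so `access_token` should already be the Base64-encoded `user:password` pair. A minimal sketch, where the gateway endpoint and environment variables are placeholders:
+
+ ```ruby
+ require "base64"
+
+ RainbowLLM.configure do |config|
+   # Hypothetical gateway fronting an OpenAI-compatible API behind Basic Auth
+   config.provider :gateway, {
+     provider: "openai_basic",
+     uri_base: "https://llm-gateway.example.com/v1",
+     access_token: Base64.strict_encode64("#{ENV['GATEWAY_USER']}:#{ENV['GATEWAY_PASSWORD']}"),
+     assume_model_exists: true
+   }
+ end
+ ```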
+
+ ## 💡 Usage Examples
+
+ ### Basic Chat Completion
+
+ ```ruby
+ response = RainbowLLM.chat(
+   models: ["ollama/llama3.3", "cerebras/llama-3.3-70b"]
+ ).ask("Explain quantum computing to a 5-year-old")
+
+ puts response.content
+ # => "Imagine you have a magical toy that can be in two places at once..."
+ ```
+
+ ### Fluent API Options
+
+ Chain options together for fine-grained control:
+
+ ```ruby
+ # Set temperature and timeout
+ response = RainbowLLM.chat(models: ["ollama/llama3.3"])
+   .with_temperature(0.8)
+   .with_timeout(30)
+   .ask("Write a creative story")
+
+ # Use JSON schema for structured output
+ response = RainbowLLM.chat(models: ["cerebras/llama-3.3-70b"])
+   .with_schema(MySchema)
+   .ask("Extract data from this text")
+
+ # Chain multiple options
+ response = RainbowLLM.chat(models: ["openai/gpt-5", "cerebras/llama-3.3-70b"])
+   .with_temperature(0.7)
+   .with_timeout(45)
+   .with_schema(ResponseSchema)
+   .ask("Analyze this document")
+ ```
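+
+ Each `with_*` call returns a new builder rather than mutating the original (internally it `dup`s the builder), so a base configuration can be shared safely. A small sketch; the `base` builder and model list are illustrative:
+
+ ```ruby
+ base = RainbowLLM.chat(models: ["ollama/llama3.3", "openai/gpt-5"]).with_timeout(30)
+
+ creative = base.with_temperature(0.9)  # base is unchanged
+ precise  = base.with_temperature(0.1)
+
+ story  = creative.ask("Write a creative story")
+ answer = precise.ask("What is 2 + 2?")
+ ```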
+
+ **Response Object:**
+
+ ```ruby
+ # Check which model succeeded
+ response.model
+ # => "cerebras/llama-3.3-70b" (or nil if all failed)
+
+ # Get the content
+ response.content
+ # => "The analysis results..." (or nil if all failed)
+
+ # Inspect detailed status for each model
+ response.details
+ # => {
+ #   "ollama/llama3.3" => { status: :error, error: "Connection refused" },
+ #   "cerebras/llama-3.3-70b" => { status: :success }
+ # }
+ ```
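+
+ Because `Response` is a `Data` object, it also supports Ruby's pattern matching. A small sketch of branching on the outcome:
+
+ ```ruby
+ case RainbowLLM.chat(models: ["ollama/llama3.3"]).ask("Ping?")
+ in {content: String => text, model: String => model}
+   puts "#{model} answered: #{text}"
+ in {content: nil, details:}
+   warn "All providers failed: #{details.inspect}"
+ end
+ ```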
+
+ ### Error Handling
+
+ RainbowLLM doesn't raise exceptions when providers fail - it returns a Response whose details explain what happened. (Invalid arguments, such as an empty model list or a blank prompt, still raise `RainbowLLM::InvalidArgument`.)
+
+ ```ruby
+ response = RainbowLLM.chat(
+   models: ["openai/primary-model", "cerebras/backup-model-1", "ollama/backup-model-2"]
+ ).ask("Important business question")
+
+ if response.content
+   puts "Success: #{response.content}"
+   puts "Provided by: #{response.model}"
+ else
+   # All providers failed - inspect details to understand why
+   puts "All providers failed!"
+
+   response.details.each do |model, info|
+     puts "#{model}: #{info[:status]} - #{info[:error]}"
+   end
+   # => openai/primary-model: error - Rate limit exceeded
+   # => cerebras/backup-model-1: timeout - Connection timeout
+   # => ollama/backup-model-2: error - Invalid API key
+ end
+ ```
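+
+ Based on the current implementation, `details` reports one of four statuses per model: `:success`, `:empty` (the provider answered with no content), `:timeout`, and `:error` (provider errors and malformed model strings). A small sketch of routing on them:
+
+ ```ruby
+ response.details.each do |model, info|
+   case info[:status]
+   when :success then puts "#{model} succeeded"
+   when :empty   then warn "#{model} returned an empty response"
+   when :timeout then warn "#{model} timed out: #{info[:error]}"
+   when :error   then warn "#{model} errored: #{info[:error]}"
+   end
+ end
+ ```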
+
+ ## 🎯 Advanced Patterns
+
+ ### Cost-Based Routing
+
+ ```ruby
+ # Route to the cheapest available provider first
+ cheap_models = [
+   "ollama/llama3.2-3b",      # Free (local)
+   "cerebras/llama-3.3-70b",  # Free tier
+   "openai/gpt-5"             # Paid fallback
+ ]
+
+ response = RainbowLLM.chat(models: cheap_models)
+   .with_temperature(0.5)
+   .ask(user_input)
+ ```
+
+ ### Performance-Based Routing
+
+ ```ruby
+ # Route to the fastest providers for time-sensitive requests
+ fast_models = [
+   "cerebras/llama-3.3-70b",  # Fast cloud
+   "openai/gpt-5",            # Fast but more expensive
+   "ollama/llama3.2"          # Local but slower
+ ]
+
+ response = RainbowLLM.chat(models: fast_models)
+   .with_timeout(1)
+   .ask(time_sensitive_question)
+ ```
+
+ ### Multi-Provider Load Balancing
+
+ ```ruby
+ # Distribute load across providers to avoid rate limits
+ providers = [
+   "openai/model-1",
+   "anthropic/model-2",
+   "cerebras/model-3",
+   "ollama/model-4"
+ ]
+
+ # RainbowLLM will try each in order until one succeeds
+ response = RainbowLLM.chat(models: providers)
+   .with_temperature(0.7)
+   .ask(request)
+ ```
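+
+ Note that the list is always tried in order, so "load balancing" here means spreading traffic by varying that order yourself. One simple approach - an illustration, not a built-in feature - is to shuffle the list per request:
+
+ ```ruby
+ # Each request starts from a random provider, distributing load across them
+ response = RainbowLLM.chat(models: providers.shuffle)
+   .with_temperature(0.7)
+   .ask(request)
+ ```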
+
+ ## 🔧 Development
+
+ Want to contribute or run tests locally?
+
+ ```bash
+ # Clone the repo
+ git clone https://github.com/a-chris/rainbow_llm.git
+ cd rainbow_llm
+
+ # Install dependencies
+ bin/setup
+
+ # Run tests
+ rake spec
+
+ # Launch interactive console
+ bin/console
+
+ # Install locally
+ bundle exec rake install
+ ```
+
+ ## 🤝 Contributing
+
+ We welcome contributions! Here's how you can help:
+
+ - **Report bugs**: Open an issue with detailed reproduction steps
+ - **Suggest features**: What would make RainbowLLM even better?
+ - **Submit pull requests**: Fix bugs, add features, improve docs
+ - **Spread the word**: Star the repo, share with friends!
+
+ **Development setup**:
+
+ ```bash
+ # After cloning
+ bin/setup  # Installs dependencies
+ rake spec  # Runs the test suite
+ ```
+
+ ## 📜 License
+
+ RainbowLLM is open source software licensed under the [MIT License](https://opensource.org/licenses/MIT).
+
+ ---
+
+ **Need help?** Open an issue or contact [@a-chris](https://github.com/a-chris)
data/Rakefile ADDED
@@ -0,0 +1,12 @@
+ # frozen_string_literal: true
+
+ require "bundler/gem_tasks"
+ require "rspec/core/rake_task"
+
+ RSpec::Core::RakeTask.new
+
+ require "rubocop/rake_task"
+
+ RuboCop::RakeTask.new
+
+ task default: %i[spec rubocop]
data/lib/rainbow_llm/chat_builder.rb ADDED
@@ -0,0 +1,110 @@
+ # frozen_string_literal: true
+
+ module RainbowLLM
+   # Fluent API builder for chat requests with automatic failover across models
+   class ChatBuilder
+     attr_reader :models, :temperature, :schema, :timeout
+
+     def initialize(models)
+       @models = models
+       @temperature = nil
+       @schema = nil
+       @timeout = nil
+     end
+
+     # Each with_* method returns a copy, so builders stay immutable and reusable
+     def with_temperature(value)
+       dup.tap { |builder| builder.instance_variable_set(:@temperature, value) }
+     end
+
+     def with_schema(value)
+       dup.tap { |builder| builder.instance_variable_set(:@schema, value) }
+     end
+
+     def with_timeout(seconds)
+       dup.tap { |builder| builder.instance_variable_set(:@timeout, seconds) }
+     end
+
+     def ask(prompt)
+       raise InvalidArgument, "prompt cannot be nil or empty" if prompt.nil? || prompt.to_s.strip.empty?
+
+       details = {}
+
+       models.each do |model_string|
+         # Split model string into provider and model parts
+         provider, model = model_string.split("/", 2)
+
+         if model.nil? || model.strip.empty?
+           details[model_string] = {
+             status: :error,
+             error: "Invalid model format: '#{model_string}'. Expected format: 'provider/model_name'"
+           }
+           next
+         end
+
+         context =
+           if RainbowLLM.configuration.providers.key?(provider.to_sym)
+             RubyLLM.context do |c|
+               c.request_timeout = @timeout if @timeout
+               c.openai_api_base = RainbowLLM.configuration.providers.dig(provider.to_sym, :uri_base)
+               c.openai_api_key = RainbowLLM.configuration.providers.dig(provider.to_sym, :access_token)
+             end
+           elsif @timeout
+             RubyLLM.context do |c|
+               c.request_timeout = @timeout if @timeout
+             end
+           end
+
+         provider_type = RainbowLLM.configuration.providers.dig(provider.to_sym, :provider) || provider
+         assume_model_exists = RainbowLLM.configuration.providers.dig(provider.to_sym, :assume_model_exists)
+
+         base = {model:, provider: provider_type, assume_model_exists:, context:}.compact
+
+         opts = {}
+         opts[:temperature] = temperature if temperature
+         opts[:schema] = schema if schema
+
+         result =
+           begin
+             chat = RubyLLM.chat(**base.merge(opts))
+             response = chat.ask(prompt)
+             content = RainbowLLM.check_content(RainbowLLM.extract_content(response))
+
+             if content
+               details[model_string] = {status: :success}
+               {content:, model: model_string, status: :success}
+             else
+               details[model_string] = {status: :empty}
+               nil
+             end
+           rescue Faraday::TimeoutError => e
+             warn "Provider #{provider} failed with timeout error: #{e.message}"
+             details[model_string] = {status: :timeout, error: e.message}
+             nil
+           rescue RubyLLM::Error => e
+             # Log the error for debugging purposes
+             warn "Provider #{provider} failed with error: #{e.message}"
+             details[model_string] = {status: :error, error: e.message}
+             nil
+           end
+
+         if result && result[:status] == :success
+           return Response.new(content: result[:content], model: result[:model], details:)
+         end
+       end
+
+       # If we get here, all providers failed
+       Response.new(content: nil, model: nil, details:)
+     end
+   end
+
+   def self.extract_content(response)
+     return nil if response.nil?
+
+     # Safe-navigate in case the provider returned a message with nil content
+     response.content&.strip
+   end
+
+   def self.check_content(content)
+     return nil if content.nil? || content.to_s.strip.empty?
+     content
+   end
+ end
data/lib/rainbow_llm/configuration.rb ADDED
@@ -0,0 +1,15 @@
+ # frozen_string_literal: true
+
+ module RainbowLLM
+   class Configuration
+     attr_accessor :providers
+
+     def initialize
+       @providers = {}
+     end
+
+     def provider(name, config)
+       @providers[name.to_sym] = config
+     end
+   end
+ end
data/lib/rainbow_llm/errors.rb ADDED
@@ -0,0 +1,22 @@
+ module RainbowLLM
+   # Base error class for all RainbowLLM errors
+   class Error < StandardError; end
+
+   # Raised when provided arguments are invalid or missing
+   class InvalidArgument < Error; end
+
+   # Raised when provider configuration is missing or invalid
+   class ConfigurationError < Error; end
+
+   # Raised when a specified provider cannot be found
+   class ProviderNotFoundError < Error; end
+
+   # Raised when a specified model cannot be found
+   class ModelNotFoundError < Error; end
+
+   # Raised when all providers fail to respond
+   class AllProvidersFailedError < Error; end
+
+   # Raised when model format is invalid
+   class ModelError < Error; end
+ end
data/lib/rainbow_llm/providers/basic_auth_openai.rb ADDED
@@ -0,0 +1,16 @@
+ # frozen_string_literal: true
+
+ require "ruby_llm"
+
+ module RainbowLLM
+   module Providers
+     class BasicAuthOpenAI < RubyLLM::Providers::OpenAI
+       def headers
+         {"Authorization" => "Basic #{@config.openai_api_key}"}
+       end
+     end
+   end
+ end
+
+ # Register the provider with RubyLLM
+ RubyLLM::Provider.register :openai_basic, RainbowLLM::Providers::BasicAuthOpenAI
data/lib/rainbow_llm/response.rb ADDED
@@ -0,0 +1 @@
+ Response = Data.define(:content, :model, :details)
data/lib/rainbow_llm/version.rb ADDED
@@ -0,0 +1,5 @@
+ # frozen_string_literal: true
+
+ module RainbowLLM
+   VERSION = "0.1.0"
+ end
data/lib/rainbow_llm.rb ADDED
@@ -0,0 +1,29 @@
+ # frozen_string_literal: true
+
+ require "ruby_llm"
+ require_relative "rainbow_llm/version"
+ require_relative "rainbow_llm/configuration"
+ require_relative "rainbow_llm/errors"
+ require_relative "rainbow_llm/response"
+ require_relative "rainbow_llm/chat_builder"
+ require_relative "rainbow_llm/providers/basic_auth_openai"
+
+ # RainbowLLM - A routing gem for multiple LLM providers with automatic failover
+ module RainbowLLM
+   def self.configuration
+     @configuration ||= Configuration.new
+   end
+
+   def self.configuration=(config)
+     @configuration = config
+   end
+
+   def self.configure
+     yield(configuration)
+   end
+
+   def self.chat(models:)
+     raise InvalidArgument, "models cannot be nil or empty" if models.nil? || models.empty?
+     ChatBuilder.new(models)
+   end
+ end
data/mise.toml ADDED
@@ -0,0 +1,2 @@
+ [tools]
+ ruby = "4.0.0"
metadata ADDED
@@ -0,0 +1,100 @@
+ --- !ruby/object:Gem::Specification
+ name: rainbow_llm
+ version: !ruby/object:Gem::Version
+   version: 0.1.0
+ platform: ruby
+ authors:
+ - a-chris
+ bindir: exe
+ cert_chain: []
+ date: 1980-01-02 00:00:00.000000000 Z
+ dependencies:
+ - !ruby/object:Gem::Dependency
+   name: ruby_llm
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '1.0'
+   type: :runtime
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '1.0'
+ - !ruby/object:Gem::Dependency
+   name: rspec
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '3.12'
+   type: :development
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '3.12'
+ - !ruby/object:Gem::Dependency
+   name: rspec-mocks
+   requirement: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '3.12'
+   type: :development
+   prerelease: false
+   version_requirements: !ruby/object:Gem::Requirement
+     requirements:
+     - - "~>"
+       - !ruby/object:Gem::Version
+         version: '3.12'
+ description: RainbowLLM provides intelligent routing and failover capabilities for
+   multiple LLM providers. Automatically tries providers in sequence until one responds
+   successfully, ensuring reliable access to LLM services.
+ email:
+ - a.christian.toscano@gmail.com
+ executables: []
+ extensions: []
+ extra_rdoc_files: []
+ files:
+ - ".rspec"
+ - CHANGELOG.md
+ - LICENSE.txt
+ - README.md
+ - Rakefile
+ - lib/rainbow_llm.rb
+ - lib/rainbow_llm/chat_builder.rb
+ - lib/rainbow_llm/configuration.rb
+ - lib/rainbow_llm/errors.rb
+ - lib/rainbow_llm/providers/basic_auth_openai.rb
+ - lib/rainbow_llm/response.rb
+ - lib/rainbow_llm/version.rb
+ - mise.toml
+ homepage: https://github.com/a-chris/rainbow_llm
+ licenses:
+ - MIT
+ metadata:
+   homepage_uri: https://github.com/a-chris/rainbow_llm
+   source_code_uri: https://github.com/a-chris/rainbow_llm
+   changelog_uri: https://github.com/a-chris/rainbow_llm/blob/main/CHANGELOG.md
+ rdoc_options: []
+ require_paths:
+ - lib
+ required_ruby_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: 3.2.0
+ required_rubygems_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: '0'
+ requirements: []
+ rubygems_version: 4.0.3
+ specification_version: 4
+ summary: A routing gem for multiple LLM providers with automatic failover.
+ test_files: []