llm_conductor 1.0.0 → 1.1.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/LICENSE +9 -0
- data/README.md +62 -5
- data/config/initializers/llm_conductor.rb +2 -1
- data/examples/groq_usage.rb +77 -0
- data/lib/llm_conductor/client_factory.rb +7 -6
- data/lib/llm_conductor/clients/base_client.rb +21 -1
- data/lib/llm_conductor/clients/groq_client.rb +24 -0
- data/lib/llm_conductor/configuration.rb +19 -1
- data/lib/llm_conductor/version.rb +1 -1
- data/lib/llm_conductor.rb +5 -3
- metadata +24 -6
- data/.DS_Store +0 -0
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 77bf332827087c373c19d318c72b21532974ae510b6b7958b8a5d77fc3e85833
+  data.tar.gz: 365717c91358d62d4a44264087d6500829d53ca1a9d1dc4550890282152ad75b
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 547cb6b093784591aa0944e36cab482a380a2519fd73324a5dcefaae38fb4eade745f564ada7daee05cd7359515022f49492fee330fc0eee4e592eaf2e5d37bf
+  data.tar.gz: 1118f23b62a725962c7576ba368b5ddadd5fee7f5244f5aea1405d52bacd56708a14cbe50d833afc8e81e6916be8db5c50bb424c238f134957b7965272504944
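A published .gem is a tar archive whose `metadata.gz` and `data.tar.gz` members are what these checksums cover. A minimal Ruby sketch for recomputing them locally (the fetched filename is an assumption):

```ruby
# Sketch: recompute the SHA256 checksums above from a fetched .gem.
# Assumes `gem fetch llm_conductor --version 1.1.0` has produced
# llm_conductor-1.1.0.gem in the current directory.
require 'digest'
require 'rubygems/package'

File.open('llm_conductor-1.1.0.gem', 'rb') do |io|
  tar = Gem::Package::TarReader.new(io)
  tar.each do |entry|
    next unless %w[metadata.gz data.tar.gz].include?(entry.full_name)

    puts "#{entry.full_name}: #{Digest::SHA256.hexdigest(entry.read)}"
  end
end
```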
data/LICENSE
ADDED
@@ -0,0 +1,9 @@
+The MIT License (MIT)
+
+Copyright (c) 2025 Ekohe
+
+Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
data/README.md
CHANGED
@@ -1,10 +1,10 @@
 # LLM Conductor
 
-A powerful Ruby gem from [Ekohe](https://ekohe.com) for orchestrating multiple Language Model providers with a unified, modern interface. LLM Conductor provides seamless integration with OpenAI GPT, Anthropic Claude, Google Gemini, and Ollama with advanced prompt management, data building patterns, and comprehensive response handling.
+A powerful Ruby gem from [Ekohe](https://ekohe.com) for orchestrating multiple Language Model providers with a unified, modern interface. LLM Conductor provides seamless integration with OpenAI GPT, Anthropic Claude, Google Gemini, Groq, and Ollama with advanced prompt management, data building patterns, and comprehensive response handling.
 
 ## Features
 
-🚀 **Multi-Provider Support** - OpenAI GPT, Anthropic Claude, Google Gemini, and Ollama with automatic vendor detection
+🚀 **Multi-Provider Support** - OpenAI GPT, Anthropic Claude, Google Gemini, Groq, and Ollama with automatic vendor detection
 🎯 **Unified Modern API** - Simple `LlmConductor.generate()` interface with rich Response objects
 📝 **Advanced Prompt Management** - Registrable prompt classes with inheritance and templating
 🏗️ **Data Builder Pattern** - Structured data preparation for complex LLM inputs

@@ -106,9 +106,40 @@ LlmConductor.configure do |config|
     api_key: ENV['GEMINI_API_KEY']
   )
 
+  config.groq(
+    api_key: ENV['GROQ_API_KEY']
+  )
+
   config.ollama(
     base_url: ENV['OLLAMA_ADDRESS'] || 'http://localhost:11434'
   )
+
+  # Optional: Configure custom logger
+  config.logger = Logger.new($stdout) # Log to stdout
+  config.logger = Logger.new('log/llm_conductor.log') # Log to file
+  config.logger = Rails.logger # Use Rails logger (in Rails apps)
+end
+```
+
+### Logging Configuration
+
+LLM Conductor supports flexible logging using Ruby's built-in Logger class. By default, when a logger is configured, it uses the DEBUG log level to provide detailed information during development.
+
+```ruby
+LlmConductor.configure do |config|
+  # Option 1: Log to stdout - uses DEBUG level by default
+  config.logger = Logger.new($stdout)
+
+  # Option 2: Log to file - set appropriate level
+  config.logger = Logger.new('log/llm_conductor.log')
+
+  # Option 3: Use Rails logger (Rails apps)
+  config.logger = Rails.logger
+
+  # Option 4: Custom logger with formatting
+  config.logger = Logger.new($stderr).tap do |logger|
+    logger.formatter = proc { |severity, datetime, progname, msg| "#{msg}\n" }
+  end
 end
 ```
 

@@ -120,6 +151,7 @@ The gem automatically detects these environment variables:
 - `OPENAI_ORG_ID` - OpenAI organization ID (optional)
 - `ANTHROPIC_API_KEY` - Anthropic API key
 - `GEMINI_API_KEY` - Google Gemini API key
+- `GROQ_API_KEY` - Groq API key
 - `OLLAMA_ADDRESS` - Ollama server address
 
 ## Supported Providers & Models

@@ -162,10 +194,31 @@ response = LlmConductor.generate(
 )
 ```
 
-### 
+### Groq (Automatic for Llama, Mixtral, Gemma, Qwen models)
+```ruby
+response = LlmConductor.generate(
+  model: 'llama-3.1-70b-versatile', # Auto-detects Groq
+  prompt: 'Your prompt here'
+)
+
+# Supported Groq models
+response = LlmConductor.generate(
+  model: 'mixtral-8x7b-32768', # Auto-detects Groq
+  prompt: 'Your prompt here'
+)
+
+# Or explicitly specify vendor
+response = LlmConductor.generate(
+  model: 'qwen-2.5-72b-instruct',
+  vendor: :groq,
+  prompt: 'Your prompt here'
+)
+```
+
+### Ollama (Default for other models)
 ```ruby
 response = LlmConductor.generate(
-  model: '
+  model: 'deepseek-r1',
   prompt: 'Your prompt here'
 )
 ```

@@ -177,13 +230,15 @@ The gem automatically detects the appropriate provider based on model names:
 - **OpenAI**: Models starting with `gpt-` (e.g., `gpt-4`, `gpt-3.5-turbo`)
 - **Anthropic**: Models starting with `claude-` (e.g., `claude-3-5-sonnet-20241022`)
 - **Google Gemini**: Models starting with `gemini-` (e.g., `gemini-2.5-flash`, `gemini-2.0-flash`)
+- **Groq**: Models starting with `llama`, `mixtral`, `gemma`, or `qwen` (e.g., `llama-3.1-70b-versatile`, `mixtral-8x7b-32768`, `gemma-7b-it`, `qwen-2.5-72b-instruct`)
 - **Ollama**: All other models (e.g., `llama3.2`, `mistral`, `codellama`)
 
 You can also explicitly specify the vendor:
 
 ```ruby
 response = LlmConductor.generate(
-  model: '
+  model: 'llama-3.1-70b-versatile',
+  vendor: :groq, # Explicitly use Groq
   prompt: 'Your prompt here'
 )
 ```

@@ -426,6 +481,8 @@ Check the `/examples` directory for comprehensive usage examples:
 - `prompt_registration.rb` - Custom prompt classes
 - `data_builder_usage.rb` - Data structuring patterns
 - `rag_usage.rb` - RAG implementation examples
+- `gemini_usage.rb` - Google Gemini integration
+- `groq_usage.rb` - Groq integration with various models
 
 ## Development
data/config/initializers/llm_conductor.rb
CHANGED

@@ -10,7 +10,8 @@ LlmConductor.configure do |config|
   config.timeout = 30
   config.max_retries = 3
   config.retry_delay = 1.0
-
+  # Use Ruby's built-in Logger class directly
+  config.logger = Logger.new($stdout)
   # Configure providers
   config.openai(
     api_key: ENV['OPENAI_API_KEY'],
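The README section above notes that a configured logger emits at DEBUG level. A sketch (an assumption, not part of this diff) of a production initializer that raises the threshold so the gem's request-metadata lines stay out of the logs:

```ruby
# Sketch: Logger::INFO makes the gem's DEBUG request-metadata calls no-ops.
require 'logger'
require 'llm_conductor'

logger = Logger.new($stdout)
logger.level = Logger::INFO

LlmConductor.configure do |config|
  config.logger = logger
end
```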
data/examples/groq_usage.rb
ADDED

@@ -0,0 +1,77 @@
+# frozen_string_literal: true
+
+require 'llm_conductor'
+
+# Configure Groq
+LlmConductor.configure do |config|
+  config.groq(api_key: ENV['GROQ_API_KEY'])
+end
+
+# Example 1: Using Groq with automatic model detection
+puts '=== Example 1: Automatic model detection ==='
+client = LlmConductor::ClientFactory.build(
+  model: 'llama-3.1-70b-versatile',
+  type: :summarize_text
+)
+
+data = {
+  text: 'Groq is a cloud-based AI platform that provides fast inference for large language models. ' \
+        'It offers various models including Llama, Mixtral, Gemma, and Qwen models with ' \
+        'optimized performance for production use cases.'
+}
+
+response = client.generate(data:)
+puts "Model: #{response.model}"
+puts "Vendor: #{response.metadata[:vendor]}"
+puts "Input tokens: #{response.input_tokens}"
+puts "Output tokens: #{response.output_tokens}"
+puts "Summary: #{response.output}"
+puts
+
+# Example 2: Using Groq with explicit vendor specification
+puts '=== Example 2: Explicit vendor specification ==='
+client = LlmConductor::ClientFactory.build(
+  model: 'mixtral-8x7b-32768',
+  type: :summarize_text,
+  vendor: :groq
+)
+
+response = client.generate(data:)
+puts "Model: #{response.model}"
+puts "Vendor: #{response.metadata[:vendor]}"
+puts "Summary: #{response.output}"
+puts
+
+# Example 3: Using Groq with simple generation
+puts '=== Example 3: Simple generation ==='
+client = LlmConductor::ClientFactory.build(
+  model: 'gemma-7b-it',
+  type: :summarize_text,
+  vendor: :groq
+)
+
+response = client.generate_simple(prompt: 'Explain what Groq is in one sentence.')
+puts "Model: #{response.model}"
+puts "Response: #{response.output}"
+puts
+
+# Example 4: Using different Groq models
+puts '=== Example 4: Different Groq models ==='
+models = [
+  'llama-3.1-70b-versatile',
+  'mixtral-8x7b-32768',
+  'gemma-7b-it',
+  'qwen-2.5-72b-instruct'
+]
+
+models.each do |model|
+  client = LlmConductor::ClientFactory.build(
+    model:,
+    type: :summarize_text,
+    vendor: :groq
+  )
+
+  response = client.generate_simple(prompt: 'What is artificial intelligence?')
+  puts "#{model}: #{response.output[0..100]}..."
+  puts
+end
data/lib/llm_conductor/client_factory.rb
CHANGED

@@ -18,14 +18,13 @@ module LlmConductor
         openrouter: Clients::OpenrouterClient,
         ollama: Clients::OllamaClient,
         gemini: Clients::GeminiClient,
-        google: Clients::GeminiClient
+        google: Clients::GeminiClient,
+        groq: Clients::GroqClient
       }
 
-      client_classes
-      ArgumentError,
-
-      'Supported vendors: anthropic, openai, openrouter, ollama, gemini'
-      )
+      client_classes.fetch(vendor) do
+        raise ArgumentError, "Unsupported vendor: #{vendor}. Supported vendors: #{client_classes.keys.uniq.join(', ')}"
+      end
     end
 
     def self.determine_vendor(model)

@@ -36,6 +35,8 @@ module LlmConductor
         :openai
       when /^gemini/i
         :gemini
+      when /^(llama|mixtral|gemma|qwen)/i
+        :groq
       else
         :ollama # Default to Ollama for non-specific model names
       end
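A sketch of how these detection rules resolve model names. It assumes `determine_vendor` is callable as a public class method, which the `def self.` definition above suggests but the diff does not confirm:

```ruby
# Sketch: model-name => vendor resolution per the regexes above.
require 'llm_conductor'

%w[
  gpt-4
  claude-3-5-sonnet-20241022
  gemini-2.5-flash
  llama-3.1-70b-versatile
  qwen-2.5-72b-instruct
  deepseek-r1
].each do |model|
  puts "#{model} => #{LlmConductor::ClientFactory.determine_vendor(model)}"
end
# Expected: openai, anthropic, gemini, groq, groq, ollama (the fallback)
```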
data/lib/llm_conductor/clients/base_client.rb
CHANGED

@@ -24,6 +24,12 @@ module LlmConductor
         output_text = generate_content(prompt)
         output_tokens = calculate_tokens(output_text || '')
 
+        # Logging AI request metadata if logger is set
+        configuration.logger&.debug(
+          "Vendor: #{vendor_name}, Model: #{@model} " \
+          "Output_tokens: #{output_tokens} Input_tokens: #{input_tokens}"
+        )
+
         build_response(output_text, input_tokens, output_tokens, { prompt: })
       rescue StandardError => e
         build_error_response(e)

@@ -35,6 +41,12 @@ module LlmConductor
         output_text = generate_content(prompt)
         output_tokens = calculate_tokens(output_text || '')
 
+        # Logging AI request metadata if logger is set
+        configuration.logger&.debug(
+          "Vendor: #{vendor_name}, Model: #{@model} " \
+          "Output_tokens: #{output_tokens} Input_tokens: #{input_tokens}"
+        )
+
         build_response(output_text, input_tokens, output_tokens)
       rescue StandardError => e
         build_error_response(e)

@@ -89,10 +101,18 @@ module LlmConductor
       # Build metadata for the response
       def build_metadata
         {
-          vendor:
+          vendor: vendor_name,
           timestamp: Time.now.utc.iso8601
         }
       end
+
+      def vendor_name
+        self.class.name.split('::').last.gsub('Client', '').downcase.to_sym
+      end
+
+      def configuration
+        LlmConductor.configuration
+      end
     end
   end
 end
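The new `vendor_name` helper derives the metadata symbol from the client's class name. A standalone illustration of that derivation (a re-implementation for clarity, not the gem's API):

```ruby
# Illustration of the vendor_name rule: class name -> vendor symbol.
def vendor_from(class_name)
  class_name.split('::').last.gsub('Client', '').downcase.to_sym
end

puts vendor_from('LlmConductor::Clients::GroqClient')   # => groq
puts vendor_from('LlmConductor::Clients::OllamaClient') # => ollama
puts vendor_from('LlmConductor::Clients::GptClient')    # => gpt (note: not :openai)
```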
data/lib/llm_conductor/clients/groq_client.rb
ADDED

@@ -0,0 +1,24 @@
+# frozen_string_literal: true
+
+module LlmConductor
+  module Clients
+    # Groq client implementation for accessing Groq models via Groq API
+    class GroqClient < BaseClient
+      private
+
+      def generate_content(prompt)
+        client.chat(
+          messages: [{ role: 'user', content: prompt }],
+          model:
+        ).dig('choices', 0, 'message', 'content')
+      end
+
+      def client
+        @client ||= begin
+          config = LlmConductor.configuration.provider_config(:groq)
+          Groq::Client.new(api_key: config[:api_key])
+        end
+      end
+    end
+  end
+end
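With the client registered in the factory and the configuration below, the end-to-end path mirrors the README examples. A minimal sketch, assuming `GROQ_API_KEY` is set and the `groq` gem (added as a runtime dependency in the metadata diff below) is installed:

```ruby
# Minimal end-to-end sketch; GROQ_API_KEY and the groq gem are assumed.
require 'llm_conductor'

response = LlmConductor.generate(
  model: 'llama-3.1-70b-versatile', # matches /^(llama|mixtral|gemma|qwen)/i => :groq
  prompt: 'Say hello in one sentence.'
)
puts response.output
```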
data/lib/llm_conductor/configuration.rb
CHANGED

@@ -4,7 +4,7 @@
 module LlmConductor
   # Configuration class for managing API keys, endpoints, and default settings
   class Configuration
-    attr_accessor :default_model, :default_vendor, :timeout, :max_retries, :retry_delay
+    attr_accessor :default_model, :default_vendor, :timeout, :max_retries, :retry_delay, :logger
     attr_reader :providers
 
     def initialize

@@ -14,6 +14,7 @@ module LlmConductor
       @timeout = 30
       @max_retries = 3
       @retry_delay = 1.0
+      @logger = nil
 
       # Provider configurations
       @providers = {}

@@ -63,6 +64,14 @@ module LlmConductor
       }
     end
 
+    # Configure Groq provider
+    def groq(api_key: nil, **options)
+      @providers[:groq] = {
+        api_key: api_key || ENV['GROQ_API_KEY'],
+        **options
+      }
+    end
+
     # Get provider configuration
     def provider_config(provider)
       @providers[provider.to_sym] || {}

@@ -109,6 +118,14 @@ module LlmConductor
       gemini(api_key: value)
     end
 
+    def groq_api_key
+      provider_config(:groq)[:api_key]
+    end
+
+    def groq_api_key=(value)
+      groq(api_key: value)
+    end
+
     private
 
     def setup_defaults_from_env

@@ -117,6 +134,7 @@ module LlmConductor
       openai if ENV['OPENAI_API_KEY']
       openrouter if ENV['OPENROUTER_API_KEY']
       gemini if ENV['GEMINI_API_KEY']
+      groq if ENV['GROQ_API_KEY']
       ollama # Always configure Ollama with default URL
     end
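A sketch of the new configuration surface: the `groq` provider block, the `groq_api_key` accessor pair, and the `provider_config` lookup that `GroqClient` reads. The key value is a placeholder, not a real credential:

```ruby
# Sketch of the Groq configuration surface added above.
require 'llm_conductor'

LlmConductor.configure do |config|
  config.groq_api_key = 'gsk_example' # delegates to config.groq(api_key: ...)
end

config = LlmConductor.configuration
puts config.groq_api_key                   # => gsk_example
puts config.provider_config(:groq).inspect # hash containing api_key: "gsk_example"
```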
data/lib/llm_conductor.rb
CHANGED
@@ -10,13 +10,14 @@ require_relative 'llm_conductor/prompt_manager'
 require_relative 'llm_conductor/clients/base_client'
 require_relative 'llm_conductor/clients/anthropic_client'
 require_relative 'llm_conductor/clients/gpt_client'
+require_relative 'llm_conductor/clients/groq_client'
 require_relative 'llm_conductor/clients/ollama_client'
 require_relative 'llm_conductor/clients/openrouter_client'
 require_relative 'llm_conductor/clients/gemini_client'
 require_relative 'llm_conductor/client_factory'
 
 # LLM Conductor provides a unified interface for multiple Language Model providers
-# including OpenAI GPT, Anthropic Claude, Google Gemini, OpenRouter, and Ollama
+# including OpenAI GPT, Anthropic Claude, Google Gemini, Groq, OpenRouter, and Ollama
 # with built-in prompt templates, token counting, and extensible client architecture.
 module LlmConductor
   class Error < StandardError; end

@@ -61,16 +62,17 @@ module LlmConductor
       when :openrouter then Clients::OpenrouterClient
       when :ollama then Clients::OllamaClient
       when :gemini, :google then Clients::GeminiClient
+      when :groq then Clients::GroqClient
       else
         raise ArgumentError,
               "Unsupported vendor: #{vendor}. " \
-              'Supported vendors: anthropic, openai, openrouter, ollama, gemini'
+              'Supported vendors: anthropic, openai, openrouter, ollama, gemini, groq'
       end
     end
   end
 
   # List of supported vendors
-  SUPPORTED_VENDORS = %i[anthropic openai openrouter ollama gemini].freeze
+  SUPPORTED_VENDORS = %i[anthropic openai openrouter ollama gemini groq].freeze
 
   # List of supported prompt types
   SUPPORTED_PROMPT_TYPES = %i[
metadata
CHANGED
@@ -1,13 +1,13 @@
 --- !ruby/object:Gem::Specification
 name: llm_conductor
 version: !ruby/object:Gem::Version
-  version: 1.
+  version: 1.1.0
 platform: ruby
 authors:
 - Ben Zheng
 bindir: exe
 cert_chain: []
-date: 2025-
+date: 2025-10-15 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: activesupport

@@ -51,6 +51,20 @@ dependencies:
     - - "~>"
       - !ruby/object:Gem::Version
         version: '4.3'
+- !ruby/object:Gem::Dependency
+  name: groq
+  requirement: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+      - !ruby/object:Gem::Version
+        version: '0.3'
+  type: :runtime
+  prerelease: false
+  version_requirements: !ruby/object:Gem::Requirement
+    requirements:
+    - - "~>"
+      - !ruby/object:Gem::Version
+        version: '0.3'
 - !ruby/object:Gem::Dependency
   name: ollama-ai
   requirement: !ruby/object:Gem::Requirement

@@ -122,24 +136,26 @@ dependencies:
   - !ruby/object:Gem::Version
     version: '3.0'
 description: LLM Conductor provides a clean, unified interface for working with multiple
-  Language Model providers including OpenAI GPT,
-  include prompt templating, token counting,
+  Language Model providers including OpenAI GPT, Anthropic Claude, Google Gemini,
+  Groq, OpenRouter, and Ollama. Features include prompt templating, token counting,
+  and extensible client architecture.
 email:
 - ben@ekohe.com
 executables: []
 extensions: []
 extra_rdoc_files: []
 files:
-- ".DS_Store"
 - ".rspec"
 - ".rubocop.yml"
 - ".rubocop_todo.yml"
 - ".ruby-version"
+- LICENSE
 - README.md
 - Rakefile
 - config/initializers/llm_conductor.rb
 - examples/data_builder_usage.rb
 - examples/gemini_usage.rb
+- examples/groq_usage.rb
 - examples/prompt_registration.rb
 - examples/rag_usage.rb
 - examples/simple_usage.rb

@@ -149,6 +165,7 @@ files:
 - lib/llm_conductor/clients/base_client.rb
 - lib/llm_conductor/clients/gemini_client.rb
 - lib/llm_conductor/clients/gpt_client.rb
+- lib/llm_conductor/clients/groq_client.rb
 - lib/llm_conductor/clients/ollama_client.rb
 - lib/llm_conductor/clients/openrouter_client.rb
 - lib/llm_conductor/configuration.rb

@@ -160,7 +177,8 @@ files:
 - lib/llm_conductor/version.rb
 - sig/llm_conductor.rbs
 homepage: https://github.com/ekohe/llm_conductor
-licenses:
+licenses:
+- MIT
 metadata:
   allowed_push_host: https://rubygems.org
   homepage_uri: https://github.com/ekohe/llm_conductor
data/.DS_Store
DELETED
Binary file
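The metadata above declares version 1.1.0, licensed MIT, pushed to rubygems.org. A Gemfile sketch for consuming the release (the pessimistic version pin is illustrative):

```ruby
# Gemfile
source 'https://rubygems.org'

gem 'llm_conductor', '~> 1.1'
```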