llm_conductor 1.4.1 → 1.5.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
data/README.md CHANGED
@@ -1,461 +1,219 @@
  # LLM Conductor
 
- A powerful Ruby gem from [Ekohe](https://ekohe.com) for orchestrating multiple Language Model providers with a unified, modern interface. LLM Conductor provides seamless integration with OpenAI GPT, Anthropic Claude, Google Gemini, Groq, Ollama, OpenRouter, and Z.ai (Zhipu AI) with advanced prompt management, data building patterns, vision/multimodal support, and comprehensive response handling.
+ A unified Ruby interface for multiple Language Model providers from [Ekohe](https://ekohe.com). Seamlessly integrate OpenAI GPT, Anthropic Claude, Google Gemini, Groq, Ollama, OpenRouter, and Z.ai (Zhipu AI) with a single, consistent API.
 
  ## Features
 
- - 🚀 **Multi-Provider Support** - OpenAI GPT, Anthropic Claude, Google Gemini, Groq, Ollama, OpenRouter, and Z.ai with automatic vendor detection
- - 🎯 **Unified Modern API** - Simple `LlmConductor.generate()` interface with rich Response objects
- - 🖼️ **Vision/Multimodal Support** - Send images alongside text prompts for vision-enabled models (OpenRouter, Z.ai GLM-4.5V)
- - 📝 **Advanced Prompt Management** - Registrable prompt classes with inheritance and templating
- - 🏗️ **Data Builder Pattern** - Structured data preparation for complex LLM inputs
- - ⚡ **Smart Configuration** - Rails-style configuration with environment variable support
- - 💰 **Cost Tracking** - Automatic token counting and cost estimation
- - 🔧 **Extensible Architecture** - Easy to add new providers and prompt types
- - 🛡️ **Robust Error Handling** - Comprehensive error handling with detailed metadata
+ - 🚀 **Multi-Provider Support** - 7+ LLM providers with automatic vendor detection
+ - 🎯 **Unified API** - Same interface across all providers
+ - 🖼️ **Vision Support** - Send images alongside text (OpenAI, Anthropic, OpenRouter, Z.ai, Gemini)
+ - 🔧 **Custom Parameters** - Fine-tune with temperature, top_p, and more
+ - 💰 **Cost Tracking** - Automatic token counting and cost estimation
+ - ⚡ **Smart Configuration** - Environment variables or code-based setup
 
  ## Installation
 
- Add this line to your application's Gemfile:
-
  ```ruby
  gem 'llm_conductor'
  ```
 
- And then execute:
-
  ```bash
- $ bundle install
- ```
-
- Or install it yourself as:
-
- ```bash
- $ gem install llm_conductor
+ bundle install
  ```
 
  ## Quick Start
 
- ### 1. Simple Text Generation
+ ### 1. Simple Generation
 
  ```ruby
- # Direct prompt generation - easiest way to get started
+ require 'llm_conductor'
+
+ # Set up your API key (or use ENV variables)
+ LlmConductor.configure do |config|
+   config.openai(api_key: 'your-api-key')
+ end
+
+ # Generate text
  response = LlmConductor.generate(
-   model: 'gpt-5-mini',
+   model: 'gpt-4o-mini',
    prompt: 'Explain quantum computing in simple terms'
  )
 
- puts response.output # The generated text
- puts response.total_tokens # Token usage
+ puts response.output # Generated text
+ puts response.total_tokens # Token count
  puts response.estimated_cost # Cost in USD
  ```
 
 
- ### 2. Template-Based Generation
+ ### 2. With Custom Parameters
 
  ```ruby
- # Use built-in text summarization template
+ # Control creativity with temperature
  response = LlmConductor.generate(
-   model: 'gpt-5-mini',
-   type: :summarize_text,
-   data: {
-     text: 'Ekohe (ee-koh-hee) means "boundless possibility." Our way is to make AI practical, achievable, and most importantly, useful for you — and we prove it every day. With almost 16 years of wins under our belt, a market-leading 24-hr design & development cycle, and 5 offices in the most vibrant cities in the world, we surf the seas of innovation. We create efficient, elegant, and scalable digital products — delivering the right interactive solutions to achieve your audience and business goals. We help you transform. We break new ground across the globe — from AI and ML automation that drives the enterprise, to innovative customer experiences and mobile apps for startups. Our special sauce is the care, curiosity, and dedication we offer to solve for your needs. We focus on your success and deliver the most impactful experiences in the most efficient manner. Our clients tell us we partner with them in a trusted and capable way, driving the right design and technical choices.',
-     max_length: '20 words',
-     style: 'professional and engaging',
-     focus_areas: ['core business', 'expertise', 'target market'],
-     audience: 'potential investors',
-     include_key_points: true,
-     output_format: 'paragraph'
-   }
+   model: 'llama2',
+   prompt: 'Write a creative story',
+   vendor: :ollama,
+   params: { temperature: 0.9 }
  )
+ ```
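Beyond a single temperature, the `params` hash can carry several sampling options at once. A minimal sketch, not from either packaged README (`top_p` is assumed from the Custom Parameters guide linked below, and the model name is a placeholder):

```ruby
require 'llm_conductor'

# Illustrative: combine sampling options in one params hash.
# Custom params are currently supported on Ollama (see the provider table below).
response = LlmConductor.generate(
  model: 'llama2',
  prompt: 'Suggest five blog post titles about Ruby',
  vendor: :ollama,
  params: { temperature: 0.7, top_p: 0.9 }
)
puts response.output
```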
 
- # Response object provides rich information
- if response.success?
-   puts "Generated: #{response.output}"
-   puts "Tokens: #{response.total_tokens}"
-   puts "Cost: $#{response.estimated_cost || 'N/A (free model)'}"
- else
-   puts "Error: #{response.metadata[:error]}"
- end
+ ### 3. Vision/Multimodal
+
+ ```ruby
+ # Send images with your prompt
+ response = LlmConductor.generate(
+   model: 'gpt-4o',
+   prompt: {
+     text: 'What is in this image?',
+     images: ['https://example.com/image.jpg']
+   }
+ )
  ```
 
  ## Configuration
 
- ### Rails-Style Configuration
+ ### Environment Variables (Easiest)
 
- Create `config/initializers/llm_conductor.rb` (Rails) or configure in your application:
+ Set these environment variables and the gem auto-configures:
 
- ```ruby
- LlmConductor.configure do |config|
-   # Default settings
-   config.default_model = 'gpt-5-mini'
-   config.default_vendor = :openai
-   config.timeout = 30
-   config.max_retries = 3
-   config.retry_delay = 1.0
-
-   # Provider configurations
-   config.openai(
-     api_key: ENV['OPENAI_API_KEY'],
-     organization: ENV['OPENAI_ORG_ID'] # Optional
-   )
-
-   config.anthropic(
-     api_key: ENV['ANTHROPIC_API_KEY']
-   )
-
-   config.gemini(
-     api_key: ENV['GEMINI_API_KEY']
-   )
-
-   config.groq(
-     api_key: ENV['GROQ_API_KEY']
-   )
-
-   config.ollama(
-     base_url: ENV['OLLAMA_ADDRESS'] || 'http://localhost:11434'
-   )
-
-   config.openrouter(
-     api_key: ENV['OPENROUTER_API_KEY'],
-     uri_base: 'https://openrouter.ai/api/v1' # Optional, this is the default
-   )
-
-   config.zai(
-     api_key: ENV['ZAI_API_KEY'],
-     uri_base: 'https://api.z.ai/api/paas/v4' # Optional, this is the default
-   )
-
-   # Optional: Configure custom logger
-   config.logger = Logger.new($stdout) # Log to stdout
-   config.logger = Logger.new('log/llm_conductor.log') # Log to file
-   config.logger = Rails.logger # Use Rails logger (in Rails apps)
- end
+ ```bash
+ export OPENAI_API_KEY=your-key-here
+ export ANTHROPIC_API_KEY=your-key-here
+ export GEMINI_API_KEY=your-key-here
+ export GROQ_API_KEY=your-key-here
+ export OLLAMA_ADDRESS=http://localhost:11434 # Optional
+ export OPENROUTER_API_KEY=your-key-here
+ export ZAI_API_KEY=your-key-here
  ```
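With the keys exported, no `configure` block is needed before the first call. A minimal sketch, not from either packaged README (assumes `OPENAI_API_KEY` is set in the shell):

```ruby
require 'llm_conductor'

# The gem reads OPENAI_API_KEY from the environment automatically,
# so no explicit LlmConductor.configure block is required here.
response = LlmConductor.generate(
  model: 'gpt-4o-mini',
  prompt: 'Say hello in French'
)
puts response.output
```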
 
- ### Logging Configuration
-
- LLM Conductor supports flexible logging using Ruby's built-in Logger class. By default, when a logger is configured, it uses the DEBUG log level to provide detailed information during development.
+ ### Code Configuration
 
  ```ruby
  LlmConductor.configure do |config|
-   # Option 1: Log to stdout - uses DEBUG level by default
-   config.logger = Logger.new($stdout)
-
-   # Option 2: Log to file - set appropriate level
-   config.logger = Logger.new('log/llm_conductor.log')
-
-   # Option 3: Use Rails logger (Rails apps)
-   config.logger = Rails.logger
-
-   # Option 4: Custom logger with formatting
-   config.logger = Logger.new($stderr).tap do |logger|
-     logger.formatter = proc { |severity, datetime, progname, msg| "#{msg}\n" }
-   end
+   config.default_model = 'gpt-4o-mini'
+
+   config.openai(api_key: ENV['OPENAI_API_KEY'])
+   config.anthropic(api_key: ENV['ANTHROPIC_API_KEY'])
+   config.gemini(api_key: ENV['GEMINI_API_KEY'])
+   config.groq(api_key: ENV['GROQ_API_KEY'])
+   config.ollama(base_url: 'http://localhost:11434')
+   config.openrouter(api_key: ENV['OPENROUTER_API_KEY'])
+   config.zai(api_key: ENV['ZAI_API_KEY'])
  end
  ```
 
- ### Environment Variables
+ ## Supported Providers
 
- The gem automatically detects these environment variables:
+ | Provider | Auto-Detect | Vision | Custom Params |
+ |----------|-------------|--------|---------------|
+ | OpenAI (GPT) | ✅ `gpt-*` | ✅ | 🔜 |
+ | Anthropic (Claude) | ✅ `claude-*` | ✅ | 🔜 |
+ | Google (Gemini) | ✅ `gemini-*` | ✅ | 🔜 |
+ | Groq | ✅ `llama/mixtral` | ❌ | 🔜 |
+ | Ollama | ✅ (default) | ❌ | ✅ |
+ | OpenRouter | 🔧 Manual | ✅ | 🔜 |
+ | Z.ai (Zhipu) | ✅ `glm-*` | ✅ | 🔜 |
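Auto-detection keys off the model-name prefix, while OpenRouter needs an explicit `vendor:`. A short sketch of both paths (illustrative; the model names are placeholders drawn from examples elsewhere in this diff):

```ruby
require 'llm_conductor'

# Prefix-based auto-detection: a 'claude-*' model routes to Anthropic.
response = LlmConductor.generate(
  model: 'claude-3-5-sonnet-20241022',
  prompt: 'Name three Ruby web frameworks'
)

# OpenRouter has no reserved prefix, so pass the vendor explicitly.
response = LlmConductor.generate(
  model: 'openai/gpt-4o-mini',
  vendor: :openrouter,
  prompt: 'Name three Ruby web frameworks'
)
```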
 
- - `OPENAI_API_KEY` - OpenAI API key
- - `OPENAI_ORG_ID` - OpenAI organization ID (optional)
- - `ANTHROPIC_API_KEY` - Anthropic API key
- - `GEMINI_API_KEY` - Google Gemini API key
- - `GROQ_API_KEY` - Groq API key
- - `OLLAMA_ADDRESS` - Ollama server address
- - `OPENROUTER_API_KEY` - OpenRouter API key
- - `ZAI_API_KEY` - Z.ai (Zhipu AI) API key
+ ## Common Use Cases
 
- ## Supported Providers & Models
+ ### Simple Q&A
 
- ### OpenAI (Automatic for GPT models)
  ```ruby
  response = LlmConductor.generate(
-   model: 'gpt-5-mini', # Auto-detects OpenAI
-   prompt: 'Your prompt here'
+   model: 'gpt-4o-mini',
+   prompt: 'What is Ruby programming language?'
  )
  ```
 
- ### Anthropic Claude (Automatic for Claude models)
- ```ruby
- response = LlmConductor.generate(
-   model: 'claude-3-5-sonnet-20241022', # Auto-detects Anthropic
-   prompt: 'Your prompt here'
- )
+ ### Content Summarization
 
- # Or explicitly specify vendor
+ ```ruby
  response = LlmConductor.generate(
    model: 'claude-3-5-sonnet-20241022',
-   vendor: :anthropic,
-   prompt: 'Your prompt here'
+   type: :summarize_text,
+   data: {
+     text: 'Long article content here...',
+     max_length: '100 words',
+     style: 'professional'
+   }
  )
  ```
 
- ### Google Gemini (Automatic for Gemini models)
- ```ruby
- response = LlmConductor.generate(
-   model: 'gemini-2.5-flash', # Auto-detects Gemini
-   prompt: 'Your prompt here'
- )
+ ### Deterministic Output (Testing)
 
- # Or explicitly specify vendor
- response = LlmConductor.generate(
-   model: 'gemini-2.5-flash',
-   vendor: :gemini,
-   prompt: 'Your prompt here'
- )
- ```
-
- ### Groq (Automatic for Llama, Mixtral, Gemma, Qwen models)
  ```ruby
  response = LlmConductor.generate(
-   model: 'llama-3.1-70b-versatile', # Auto-detects Groq
-   prompt: 'Your prompt here'
- )
-
- # Supported Groq models
- response = LlmConductor.generate(
-   model: 'mixtral-8x7b-32768', # Auto-detects Groq
-   prompt: 'Your prompt here'
- )
-
- # Or explicitly specify vendor
- response = LlmConductor.generate(
-   model: 'qwen-2.5-72b-instruct',
-   vendor: :groq,
-   prompt: 'Your prompt here'
- )
- ```
-
- ### Ollama (Default for other models)
- ```ruby
- response = LlmConductor.generate(
-   model: 'deepseek-r1',
-   prompt: 'Your prompt here'
+   model: 'llama2',
+   prompt: 'Extract email addresses from: contact@example.com',
+   vendor: :ollama,
+   params: { temperature: 0.0, seed: 42 }
  )
  ```
 
- ### OpenRouter (Access to Multiple Providers)
- OpenRouter provides unified access to various LLM providers with automatic routing. It also supports vision/multimodal models with automatic retry logic for handling intermittent availability issues.
-
- **Vision-capable models:**
- - `nvidia/nemotron-nano-12b-v2-vl:free` - **FREE** 12B vision model (may need retries)
- - `openai/gpt-4o-mini` - Fast and reliable
- - `google/gemini-flash-1.5` - Fast vision processing
- - `anthropic/claude-3.5-sonnet` - High quality analysis
- - `openai/gpt-4o` - Best quality (higher cost)
-
- **Note:** Free-tier models may experience intermittent 502 errors. The client includes automatic retry logic with exponential backoff (up to 5 retries) to handle these transient failures.
+ ### Vision Analysis
 
  ```ruby
- # Text-only request
- response = LlmConductor.generate(
-   model: 'nvidia/nemotron-nano-12b-v2-vl:free',
-   vendor: :openrouter,
-   prompt: 'Your prompt here'
- )
-
- # Vision/multimodal request with single image
- response = LlmConductor.generate(
-   model: 'nvidia/nemotron-nano-12b-v2-vl:free',
-   vendor: :openrouter,
-   prompt: {
-     text: 'What is in this image?',
-     images: 'https://example.com/image.jpg'
-   }
- )
-
- # Vision request with multiple images
  response = LlmConductor.generate(
-   model: 'nvidia/nemotron-nano-12b-v2-vl:free',
-   vendor: :openrouter,
-   prompt: {
-     text: 'Compare these images',
-     images: [
-       'https://example.com/image1.jpg',
-       'https://example.com/image2.jpg'
-     ]
-   }
- )
-
- # Vision request with detail level
- response = LlmConductor.generate(
-   model: 'nvidia/nemotron-nano-12b-v2-vl:free',
-   vendor: :openrouter,
+   model: 'gpt-4o',
    prompt: {
      text: 'Describe this image in detail',
      images: [
-       { url: 'https://example.com/image.jpg', detail: 'high' }
+       'https://example.com/photo.jpg',
+       'https://example.com/diagram.png'
      ]
    }
  )
-
- # Advanced: Raw array format (OpenAI-compatible)
- response = LlmConductor.generate(
-   model: 'nvidia/nemotron-nano-12b-v2-vl:free',
-   vendor: :openrouter,
-   prompt: [
-     { type: 'text', text: 'What is in this image?' },
-     { type: 'image_url', image_url: { url: 'https://example.com/image.jpg' } }
-   ]
- )
  ```
166
 
305
- **Reliability:** The OpenRouter client includes intelligent retry logic:
306
- - Automatically retries on 502 errors (up to 5 attempts)
307
- - Exponential backoff: 2s, 4s, 8s, 16s, 32s
308
- - Transparent to your code - works seamlessly
309
- - Enable logging to see retry attempts:
167
+ ## Response Object
310
168
 
311
169
  ```ruby
312
- LlmConductor.configure do |config|
313
- config.logger = Logger.new($stdout)
314
- config.logger.level = Logger::INFO
315
- end
316
- ```
317
-
318
- ### Z.ai (Zhipu AI) - GLM Models with Vision Support
319
- Z.ai provides access to GLM (General Language Model) series including the powerful GLM-4.5V multimodal model with 64K context window and vision capabilities.
320
-
321
- **Text models:**
322
- - `glm-4-plus` - Enhanced text-only model
323
- - `glm-4` - Standard GLM-4 model
324
-
325
- **Vision-capable models:**
326
- - `glm-4.5v` - Latest multimodal model with 64K context ✅ **RECOMMENDED**
327
- - `glm-4v` - Previous generation vision model
328
-
329
- ```ruby
330
- # Text-only request with GLM-4-plus
331
- response = LlmConductor.generate(
332
- model: 'glm-4-plus',
333
- vendor: :zai,
334
- prompt: 'Explain quantum computing in simple terms'
335
- )
336
-
337
- # Vision request with GLM-4.5V - single image
338
- response = LlmConductor.generate(
339
- model: 'glm-4.5v',
340
- vendor: :zai,
341
- prompt: {
342
- text: 'What is in this image?',
343
- images: 'https://example.com/image.jpg'
344
- }
345
- )
346
-
347
- # Vision request with multiple images
348
- response = LlmConductor.generate(
349
- model: 'glm-4.5v',
350
- vendor: :zai,
351
- prompt: {
352
- text: 'Compare these images and identify differences',
353
- images: [
354
- 'https://example.com/image1.jpg',
355
- 'https://example.com/image2.jpg'
356
- ]
357
- }
358
- )
359
-
360
- # Vision request with detail level
361
- response = LlmConductor.generate(
362
- model: 'glm-4.5v',
363
- vendor: :zai,
364
- prompt: {
365
- text: 'Analyze this document in detail',
366
- images: [
367
- { url: 'https://example.com/document.jpg', detail: 'high' }
368
- ]
369
- }
370
- )
371
-
372
- # Base64 encoded local images
373
- require 'base64'
374
- image_data = Base64.strict_encode64(File.read('path/to/image.jpg'))
375
- response = LlmConductor.generate(
376
- model: 'glm-4.5v',
377
- vendor: :zai,
378
- prompt: {
379
- text: 'What is in this image?',
380
- images: "data:image/jpeg;base64,#{image_data}"
381
- }
382
- )
383
- ```
384
-
385
- **GLM-4.5V Features:**
386
- - 64K token context window
387
- - Multimodal understanding (text + images)
388
- - Document understanding and OCR
389
- - Image reasoning and analysis
390
- - Base64 image support for local files
391
- - OpenAI-compatible API format
392
-
393
- ### Vendor Detection
394
-
395
- The gem automatically detects the appropriate provider based on model names:
170
+ response = LlmConductor.generate(...)
396
171
 
397
- - **OpenAI**: Models starting with `gpt-` (e.g., `gpt-4`, `gpt-3.5-turbo`)
398
- - **Anthropic**: Models starting with `claude-` (e.g., `claude-3-5-sonnet-20241022`)
399
- - **Google Gemini**: Models starting with `gemini-` (e.g., `gemini-2.5-flash`, `gemini-2.0-flash`)
400
- - **Z.ai**: Models starting with `glm-` (e.g., `glm-4.5v`, `glm-4-plus`, `glm-4v`)
401
- - **Groq**: Models starting with `llama`, `mixtral`, `gemma`, or `qwen` (e.g., `llama-3.1-70b-versatile`, `mixtral-8x7b-32768`, `gemma-7b-it`, `qwen-2.5-72b-instruct`)
402
- - **Ollama**: All other models (e.g., `llama3.2`, `mistral`, `codellama`)
172
+ response.output # String - Generated text
173
+ response.success? # Boolean - Success status
174
+ response.model # String - Model used
175
+ response.input_tokens # Integer - Input token count
176
+ response.output_tokens # Integer - Output token count
177
+ response.total_tokens # Integer - Total tokens
178
+ response.estimated_cost # Float - Cost in USD (if available)
179
+ response.metadata # Hash - Additional info
403
180
 
404
- You can also explicitly specify the vendor:
181
+ # Parse JSON responses
182
+ response.parse_json # Hash - Parsed JSON output
405
183
 
406
- ```ruby
407
- response = LlmConductor.generate(
408
- model: 'llama-3.1-70b-versatile',
409
- vendor: :groq, # Explicitly use Groq
410
- prompt: 'Your prompt here'
411
- )
184
+ # Extract code blocks
185
+ response.extract_code_block('ruby') # String - Code content
412
186
  ```
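For instance, pulling structured data out of a response (a sketch, not from either packaged README; it assumes the model was asked for, and actually returned, valid JSON):

```ruby
require 'llm_conductor'

response = LlmConductor.generate(
  model: 'gpt-4o-mini',
  prompt: 'Return a JSON object with keys "language" and "year" for Ruby'
)

if response.success?
  info = response.parse_json # Hash, assuming the output is valid JSON
  puts info['language']
end
```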
 
  ## Advanced Features
 
- ### 1. Custom Prompt Registration
+ ### Custom Prompt Classes
 
- Create reusable, testable prompt classes:
+ Create reusable, testable prompt templates:
 
  ```ruby
- class CompanyAnalysisPrompt < LlmConductor::Prompts::BasePrompt
+ class AnalysisPrompt < LlmConductor::Prompts::BasePrompt
    def render
      <<~PROMPT
-       Company: #{name}
-       Domain: #{domain_name}
-       Description: #{truncate_text(description, max_length: 1000)}
-
-       Please analyze this company and provide:
-       1. Core business model
-       2. Target market
-       3. Competitive advantages
-       4. Growth potential
-
-       Format as JSON.
+       Analyze: #{title}
+       Content: #{truncate_text(content, max_length: 500)}
+
+       Provide insights in JSON format.
      PROMPT
    end
  end
 
- # Register the prompt
- LlmConductor::PromptManager.register(:detailed_analysis, CompanyAnalysisPrompt)
+ # Register and use
+ LlmConductor::PromptManager.register(:analyze, AnalysisPrompt)
 
- # Use the registered prompt
  response = LlmConductor.generate(
-   model: 'gpt-5-mini',
-   type: :detailed_analysis,
-   data: {
-     name: 'Ekohe',
-     domain_name: 'ekohe.com',
-     description: 'A leading AI company...'
-   }
+   model: 'gpt-4o-mini',
+   type: :analyze,
+   data: { title: 'Article', content: '...' }
  )
-
- # Parse structured responses
- analysis = response.parse_json
- puts analysis
  ```
 
- ### 2. Data Builder Pattern
+ ### Data Builder Pattern
 
  Structure complex data for LLM consumption:
 
@@ -463,236 +221,96 @@ Structure complex data for LLM consumption:
  class CompanyDataBuilder < LlmConductor::DataBuilder
    def build
      {
-       id: source_object.id,
        name: source_object.name,
        description: format_for_llm(source_object.description, max_length: 500),
-       industry: extract_nested_data(:data, 'categories', 'primary'),
        metrics: build_metrics,
-       summary: build_company_summary,
-       domain_name: source_object.domain_name
-
+       summary: build_company_summary
      }
    end
-
+
    private
-
+
    def build_metrics
      {
        employees: format_number(source_object.employee_count),
-       revenue: format_number(source_object.annual_revenue),
-       growth_rate: "#{source_object.growth_rate}%"
+       revenue: format_number(source_object.annual_revenue, format: :currency)
      }
    end
-
-   def build_company_summary
-     name = safe_extract(:name, default: 'Company')
-     industry = extract_nested_data(:data, 'categories', 'primary')
-     "#{name} is a #{industry} company..."
-   end
  end
-
- # Usage
- company = Company.find(123)
- data = CompanyDataBuilder.new(company).build
-
- response = LlmConductor.generate(
-   model: 'gpt-5-mini',
-   type: :detailed_analysis,
-   data: data
- )
- ```
-
- ### 3. Built-in Prompt Templates
-
- #### Featured Links Extraction
- ```ruby
- response = LlmConductor.generate(
-   model: 'gpt-5-mini',
-   type: :featured_links,
-   data: {
-     htmls: '<html>...</html>',
-     current_url: 'https://example.com'
-   }
- )
- ```
-
- #### HTML Summarization
- ```ruby
- response = LlmConductor.generate(
-   model: 'gpt-5-mini',
-   type: :summarize_htmls,
-   data: { htmls: '<html>...</html>' }
- )
- ```
-
- #### Description Summarization
- ```ruby
- response = LlmConductor.generate(
-   model: 'gpt-5-mini',
-   type: :summarize_description,
-   data: {
-     name: 'Company Name',
-     description: 'Long description...',
-     industries: ['Tech', 'AI']
-   }
- )
  ```
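The 1.4.1 README paired the builder with a usage snippet, and the same pattern still applies. A sketch based on that removed example (`Company` is a hypothetical ActiveRecord-style model, and `:analyze` stands in for any registered prompt type):

```ruby
company = Company.find(123)                  # hypothetical model
data = CompanyDataBuilder.new(company).build # Hash ready for a prompt template

response = LlmConductor.generate(
  model: 'gpt-4o-mini',
  type: :analyze, # any registered prompt whose template matches the data keys
  data: data
)
```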
 
- #### Custom Templates
- ```ruby
- response = LlmConductor.generate(
-   model: 'gpt-5-mini',
-   type: :custom,
-   data: {
-     template: "Analyze this data: %{data}",
-     data: "Your data here"
-   }
- )
- ```
-
- ### 4. Response Object
-
- All methods return a rich `LlmConductor::Response` object:
+ ### Error Handling
 
  ```ruby
  response = LlmConductor.generate(...)
 
- # Main content
- response.output # Generated text
- response.success? # Boolean success status
-
- # Token information
- response.input_tokens # Input tokens used
- response.output_tokens # Output tokens generated
- response.total_tokens # Total tokens
-
- # Cost tracking (for supported models)
- response.estimated_cost # Estimated cost in USD
-
- # Metadata
- response.model # Model used
- response.metadata # Hash with vendor, timestamp, etc.
-
- # Structured data parsing
- response.parse_json # Parse as JSON
- response.extract_code_block('json') # Extract code blocks
- ```
-
- ### 5. Error Handling
-
- The gem provides comprehensive error handling:
-
- ```ruby
- response = LlmConductor.generate(
-   model: 'gpt-5-mini',
-   prompt: 'Your prompt'
- )
-
  if response.success?
    puts response.output
  else
    puts "Error: #{response.metadata[:error]}"
-   puts "Failed model: #{response.model}"
- end
-
- # Exception handling for critical errors
- begin
-   response = LlmConductor.generate(...)
- rescue LlmConductor::Error => e
-   puts "LLM Conductor error: #{e.message}"
- rescue StandardError => e
-   puts "General error: #{e.message}"
+   puts "Error class: #{response.metadata[:error_class]}"
  end
  ```
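The 1.4.1 README also wrapped calls in a `rescue` for hard failures, and that pattern still reads the same. A sketch based on the removed example:

```ruby
begin
  response = LlmConductor.generate(
    model: 'gpt-4o-mini',
    prompt: 'Hello'
  )
  puts response.output
rescue LlmConductor::Error => e
  puts "LLM Conductor error: #{e.message}"
rescue StandardError => e
  puts "General error: #{e.message}"
end
```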
 
- ## Extending the Gem
-
- ### Adding Custom Clients
-
- ```ruby
- module LlmConductor
-   module Clients
-     class CustomClient < BaseClient
-       private
-
-       def generate_content(prompt)
-         # Implement your provider's API call
-         your_custom_api.generate(prompt)
-       end
-     end
-   end
- end
- ```
+ ## Documentation
 
- ### Adding Prompt Types
-
- ```ruby
- module LlmConductor
-   module Prompts
-     def prompt_custom_analysis(data)
-       <<~PROMPT
-         Custom analysis for: #{data[:subject]}
-         Context: #{data[:context]}
-
-         Please provide detailed analysis.
-       PROMPT
-     end
-   end
- end
- ```
+ - **[Custom Parameters Guide](docs/custom-parameters.md)** - Temperature, top_p, and more
+ - **[Vision Support Guide](docs/vision-support.md)** - Using images with LLMs
+ - **[Examples](examples/)** - Working code examples for all providers
 
  ## Examples
 
- Check the `/examples` directory for comprehensive usage examples:
+ Check the [examples/](examples/) directory for comprehensive examples:
 
  - `simple_usage.rb` - Basic text generation
+ - `ollama_params_usage.rb` - Custom parameters with Ollama
+ - `gpt_vision_usage.rb` - Vision with OpenAI
+ - `claude_vision_usage.rb` - Vision with Anthropic
+ - `gemini_vision_usage.rb` - Vision with Gemini
+ - `openrouter_vision_usage.rb` - Vision with OpenRouter
+ - `zai_usage.rb` - Using Z.ai GLM models
+ - `data_builder_usage.rb` - Data builder patterns
  - `prompt_registration.rb` - Custom prompt classes
- - `data_builder_usage.rb` - Data structuring patterns
- - `rag_usage.rb` - RAG implementation examples
- - `gemini_usage.rb` - Google Gemini integration
- - `groq_usage.rb` - Groq integration with various models
- - `openrouter_vision_usage.rb` - OpenRouter vision/multimodal examples
- - `zai_usage.rb` - Z.ai GLM-4.5V vision and text examples
+ - `rag_usage.rb` - Retrieval-Augmented Generation
 
- ## Development
+ Run any example:
 
- After checking out the repo, run `bin/setup` to install dependencies. Then, run `rake spec` to run the tests.
+ ```bash
+ ruby examples/simple_usage.rb
+ ```
+
+ ## Development
 
  ```bash
- # Install dependencies
- bin/setup
+ # Clone and setup
+ git clone https://github.com/ekohe/llm-conductor.git
+ cd llm-conductor
+ bundle install
 
- # Run tests
- rake spec
+ # Run tests
+ bundle exec rspec
 
- # Run RuboCop
- rubocop
+ # Run linter
+ bundle exec rubocop
 
  # Interactive console
  bin/console
  ```
 
- ## Testing
-
- The gem includes comprehensive test coverage with unit, integration, and performance tests.
-
- ## Performance
-
- - **Token Efficiency**: Automatic prompt optimization and token counting
- - **Cost Tracking**: Real-time cost estimation for all supported models
- - **Response Caching**: Built-in mechanisms to avoid redundant API calls
- - **Async Support**: Ready for async/background processing
-
  ## Contributing
 
- Bug reports and pull requests are welcome on GitHub at https://github.com/ekohe/llm_conductor.
-
- 1. Fork the repository
+ 1. Fork it
  2. Create your feature branch (`git checkout -b my-new-feature`)
  3. Commit your changes (`git commit -am 'Add some feature'`)
  4. Push to the branch (`git push origin my-new-feature`)
  5. Create a new Pull Request
+
+ Ensure tests pass and RuboCop is clean before submitting.
 
  ## License
 
- The gem is available as open source under the terms of the [MIT License](https://opensource.org/licenses/MIT).
+ The gem is available as open source under the terms of the [MIT License](LICENSE).
+
+ ## Credits
+
+ Developed with ❤️ by [Ekohe](https://ekohe.com) - Making AI practical, achievable, and useful.