llm_chain 0.5.1 → 0.5.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 6d2f08734c93afa3880316d14f129a90502e0f6d9094d939257a5bf0740b6754
- data.tar.gz: a1f6665f3dd39c1401e54770e8fd23f10fba1b47195f88b461b2e76fe4a0212a
+ metadata.gz: a926b1222ae2f5fda1d4562ade3663cf1b9d9aff0e1162f8c8c6b17ab817a27c
+ data.tar.gz: 9a9c7ad9c081899e5ae014d0236eaf01cf93fa66fcdae2c439d18cf54e7de532
  SHA512:
- metadata.gz: aebdbe64169f31b55ea55278103769acd71c4a8d786c1e84235abd9d2e6f271dcfe9eeba46796675c096a4bef3a34843a5f093dddf5964ad0fdfe974d18b2d79
- data.tar.gz: 57f61024965d99528e8e0f56c60106af3168a408bc0f612b4c1fbca7ac79eeb46b564bee50f648128e34850268717f57362f96911b4be290a8a4288bcc548d83
+ metadata.gz: 0f6736ee81ee8cc057e7f283de27f7d7cda086a3161b2c0bd7b8b434542b9df3c52df905cbc4c561c069f47afbaa333f36f52fbbe9fac56a83fe766a165c71f2
+ data.tar.gz: 17750148dce41dd667b24621d1e19051f9ff253213a72ef4e79ffda7d6291494c2db79a88215257f0afb69b4a857583e734e9872fa515e10d7879d5b62f139cb
data/CHANGELOG.md CHANGED
@@ -7,6 +7,28 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
  ## [Unreleased]
 
+ ## [0.5.2] - 2025-01-XX
+
+ ### Added
+ - **Configuration Validator** - Comprehensive system validation before chain initialization
+ - **System Diagnostics** - `LLMChain.diagnose_system` method for health checks
+ - **Retry Logic** - Exponential backoff for HTTP requests with configurable max retries
+ - **Enhanced Logging** - Structured logging with debug mode support
+ - **Internet Connectivity Detection** - Automatic offline mode detection
+ - **Code Extraction Improvements** - Better parsing of code blocks and inline commands
+
+ ### Changed
+ - **Improved Error Handling** - Better error messages with suggested solutions
+ - **Enhanced WebSearch** - More robust fallback mechanisms and timeout handling
+ - **CodeInterpreter Enhancements** - Improved code extraction from various formats
+ - **Better Validation** - Early detection of configuration issues with helpful warnings
+
+ ### Fixed
+ - **WebSearch Stability** - Fixed timeout and connection issues with retry logic
+ - **Code Block Parsing** - Resolved issues with multiline regex and Windows line endings
+ - **Graceful Degradation** - Better handling of offline scenarios and API failures
+ - **Memory Leaks** - Improved cleanup of temporary files and resources
+
  ## [0.5.1] - 2025-06-26
 
  ### Added
@@ -46,6 +68,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
  ### Changed
  - Initial stable release with core functionality
 
- [Unreleased]: https://github.com/FuryCow/llm_chain/compare/v0.5.1...HEAD
+ [Unreleased]: https://github.com/FuryCow/llm_chain/compare/v0.5.2...HEAD
+ [0.5.2]: https://github.com/FuryCow/llm_chain/compare/v0.5.1...v0.5.2
  [0.5.1]: https://github.com/FuryCow/llm_chain/compare/v0.5.0...v0.5.1
  [0.5.0]: https://github.com/FuryCow/llm_chain/releases/tag/v0.5.0
data/README.md CHANGED
@@ -8,14 +8,14 @@
 
  LLMChain is a Ruby analog of LangChain, providing a unified interface for interacting with various LLMs, built-in tool system, and RAG (Retrieval-Augmented Generation) support.
 
- ## 🎉 What's New in v0.5.1
+ ## 🎉 What's New in v0.5.2
 
- - ✅ **Google Search Integration** - Accurate, up-to-date search results
- - ✅ **Fixed Calculator** - Improved expression parsing and evaluation
- - ✅ **Enhanced Code Interpreter** - Better code extraction from prompts
- - ✅ **Production-Ready Output** - Clean interface without debug noise
- - ✅ **Quick Chain Creation** - Simple `LLMChain.quick_chain` method
- - ✅ **Simplified Configuration** - Easy setup with sensible defaults
+ - ✅ **System Diagnostics** - Built-in health checks with `LLMChain.diagnose_system`
+ - ✅ **Configuration Validation** - Early detection of setup issues and helpful warnings
+ - ✅ **Enhanced Error Handling** - Retry logic with exponential backoff for network requests
+ - ✅ **Robust WebSearch** - Better timeout handling and graceful degradation
+ - ✅ **Improved Code Extraction** - Enhanced support for various code block formats
+ - ✅ **Debug Logging** - Structured logging with `LLM_CHAIN_DEBUG=true`
 
  ## ✨ Key Features
 
@@ -80,6 +80,56 @@ response = chain.ask("Hello! How are you?")
  puts response
  ```
 
+ ## 🔍 System Diagnostics (v0.5.2+)
+
+ Before diving into development, it's recommended to check your system configuration:
+
+ ```ruby
+ require 'llm_chain'
+
+ # Run comprehensive system diagnostics
+ LLMChain.diagnose_system
+ # 🔍 LLMChain System Diagnostics
+ # ==================================================
+ # 📋 System Components:
+ # Ruby: ✅ (3.2.2)
+ # Python: ✅
+ # Node.js: ✅
+ # Internet: ✅
+ # Ollama: ✅
+ # 🔑 API Keys:
+ # Openai: ❌
+ # Google_search: ❌
+ # 💡 Recommendations:
+ # • Configure API keys for enhanced features
+ # • Start Ollama server: ollama serve
+ ```
+
+ ### Configuration Validation
+
+ Chains now validate their configuration on startup:
+
+ ```ruby
+ # Automatic validation (v0.5.2+)
+ begin
+   chain = LLMChain.quick_chain(model: "qwen3:1.7b")
+ rescue LLMChain::Error => e
+   puts "Configuration issue: #{e.message}"
+ end
+
+ # Disable validation if needed
+ chain = LLMChain.quick_chain(
+   model: "qwen3:1.7b",
+   validate_config: false
+ )
+
+ # Manual validation
+ LLMChain::ConfigurationValidator.validate_chain_config!(
+   model: "qwen3:1.7b",
+   tools: LLMChain::Tools::ToolManager.create_default_toolset
+ )
+ ```
+
  ## 🛠️ Tool System
 
  ### Automatic Tool Usage
@@ -130,9 +180,11 @@ results = search.call("Which is the latest version of Ruby?")
  # Works even without Google API configured
  ```
 
- #### 💻 Code Interpreter
+ #### 💻 Code Interpreter (Enhanced in v0.5.2)
  ```ruby
  interpreter = LLMChain::Tools::CodeInterpreter.new
+
+ # Standard markdown blocks
  result = interpreter.call(<<~CODE)
  ```ruby
  def factorial(n)
@@ -141,10 +193,25 @@ result = interpreter.call(<<~CODE)
  puts factorial(5)
  ```
  CODE
+
+ # Inline code commands (v0.5.2+)
+ result = interpreter.call("Execute code: puts 'Hello World!'")
+
+ # Code without language specification
+ result = interpreter.call(<<~CODE)
+ ```
+ numbers = [1, 2, 3, 4, 5]
+ puts numbers.sum
+ ```
+ CODE
+
+ # Windows line endings support (v0.5.2+)
+ result = interpreter.call("```ruby\r\nputs 'Windows compatible'\r\n```")
+
  puts result[:formatted]
  ```
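The CRLF example above works when the extractor tolerates `\r?\n` around the fences. A minimal illustrative sketch of such extraction (not the gem's actual parser; `extract_code_block` is a hypothetical name):

```ruby
# Hypothetical sketch: pull the body out of a fenced code block,
# tolerating an optional language tag and Windows (CRLF) line endings.
def extract_code_block(text)
  match = text.match(/```(?:\w+)?\r?\n(.*?)\r?\n?```/m)
  match && match[1].gsub("\r\n", "\n")
end

extract_code_block("```ruby\r\nputs 'Windows compatible'\r\n```")
# => "puts 'Windows compatible'"
```

The same regex also accepts a bare ```` ``` ```` fence with no language tag, which covers the "code without language specification" case shown above.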
 
- ## ⚙️ Configuration (v0.5.1+)
+ ## ⚙️ Configuration (v0.5.2+)
 
  ```ruby
  # Global configuration
@@ -158,14 +225,48 @@ end
  # Quick chain with default settings
  chain = LLMChain.quick_chain
 
- # Override settings per chain
+ # Override settings per chain (v0.5.2+)
  chain = LLMChain.quick_chain(
    model: "gpt-4",
    tools: false,          # Disable tools
-   memory: false          # Disable memory
+   memory: false,         # Disable memory
+   validate_config: false # Skip validation
  )
  ```
 
+ ### Debug Mode (v0.5.2+)
+
+ Enable detailed logging for troubleshooting:
+
+ ```bash
+ # Enable debug logging
+ export LLM_CHAIN_DEBUG=true
+
+ # Or in Ruby
+ ENV['LLM_CHAIN_DEBUG'] = 'true'
+ ```
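The `LLM_CHAIN_DEBUG` flag above presumably just raises the log level. A minimal sketch of env-gated logging (illustrative only; `debug_logger` is not part of the gem's public API):

```ruby
require 'logger'

# Hypothetical helper: a logger that emits DEBUG output only when
# LLM_CHAIN_DEBUG=true is set in the environment.
def debug_logger(io = $stdout)
  logger = Logger.new(io)
  logger.level = ENV['LLM_CHAIN_DEBUG'] == 'true' ? Logger::DEBUG : Logger::INFO
  logger
end

debug_logger.debug("[WebSearch] request sent") # printed only in debug mode
```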
+
+ ### Validation and Error Handling (v0.5.2+)
+
+ ```ruby
+ # Comprehensive environment check
+ results = LLMChain::ConfigurationValidator.validate_environment
+ puts "Ollama available: #{results[:ollama]}"
+ puts "Internet: #{results[:internet]}"
+ puts "Warnings: #{results[:warnings]}"
+
+ # Chain validation with custom settings
+ begin
+   LLMChain::ConfigurationValidator.validate_chain_config!(
+     model: "gpt-4",
+     tools: [LLMChain::Tools::Calculator.new, LLMChain::Tools::WebSearch.new]
+   )
+ rescue LLMChain::ConfigurationValidator::ValidationError => e
+   puts "Setup issue: #{e.message}"
+   # Handle configuration problems
+ end
+ ```
+
  ### Creating Custom Tools
 
  ```ruby
@@ -424,23 +525,64 @@ openai = LLMChain::Clients::OpenAI.new(
  )
  ```
 
- ## 🔧 Error Handling
+ ## 🔧 Error Handling (Enhanced in v0.5.2)
 
  ```ruby
  begin
    chain = LLMChain::Chain.new(model: "qwen3:1.7b")
    response = chain.ask("Complex query")
+ rescue LLMChain::ConfigurationValidator::ValidationError => e
+   puts "Configuration issue: #{e.message}"
+   # Use LLMChain.diagnose_system to check setup
  rescue LLMChain::UnknownModelError => e
    puts "Unknown model: #{e.message}"
+   # Check available models with ollama list
  rescue LLMChain::ClientError => e
    puts "Client error: #{e.message}"
+   # Network or API issues
  rescue LLMChain::TimeoutError => e
    puts "Timeout exceeded: #{e.message}"
+   # Increase timeout or use faster model
  rescue LLMChain::Error => e
    puts "General LLMChain error: #{e.message}"
  end
  ```
 
+ ### Automatic Retry Logic (v0.5.2+)
+
+ WebSearch and other tools now include automatic retry with exponential backoff:
+
+ ```ruby
+ # Retry configuration is automatic, but you can observe it:
+ ENV['LLM_CHAIN_DEBUG'] = 'true'
+
+ search = LLMChain::Tools::WebSearch.new
+ result = search.call("search query")
+ # [WebSearch] Retrying search (1/3) after 0.5s: Net::TimeoutError
+ # [WebSearch] Retrying search (2/3) after 1.0s: Net::TimeoutError
+ # [WebSearch] Search failed after 3 attempts: Net::TimeoutError
+
+ # Tools gracefully degrade to fallback methods when possible
+ puts result[:formatted] # Still provides useful response
+ ```
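The 0.5 s → 1.0 s delays in the log above suggest a doubling schedule. A generic sketch of exponential-backoff retry (illustrative; `with_retries` is not the gem's internal API):

```ruby
# Hypothetical sketch: retry a block up to max_retries times,
# doubling the delay from base_delay after each failure.
def with_retries(max_retries: 3, base_delay: 0.5)
  attempt = 0
  begin
    yield
  rescue StandardError
    attempt += 1
    raise if attempt >= max_retries
    sleep(base_delay * (2 ** (attempt - 1))) # 0.5s, 1.0s, 2.0s, ...
    retry
  end
end
```

After the final attempt the original exception is re-raised, which is where the fallback behavior described below takes over.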
+
+ ### Graceful Degradation
+
+ ```ruby
+ # Tools handle failures gracefully
+ calculator = LLMChain::Tools::Calculator.new
+ web_search = LLMChain::Tools::WebSearch.new
+ code_runner = LLMChain::Tools::CodeInterpreter.new
+
+ # Even with network issues, you get useful responses:
+ search_result = web_search.call("latest Ruby version")
+ # Falls back to hardcoded data for common queries
+
+ # Safe code execution with timeout protection:
+ code_result = code_runner.call("puts 'Hello World!'")
+ # Executes safely with proper sandboxing
+ ```
+
  ## 📚 Usage Examples
 
  ### Chatbot with Tools
@@ -551,11 +693,17 @@ chain.ask(prompt, stream: false, rag_context: false, rag_options: {})
 
  ## 🛣️ Roadmap
 
- ### v0.6.0
+ ### v0.5.2 ✅ Completed
+ - [x] System diagnostics and health checks
+ - [x] Configuration validation
+ - [x] Enhanced error handling with retry logic
+ - [x] Improved code extraction and tool stability
+
+ ### v0.6.0 (Next)
  - [ ] ReAct agents and multi-step reasoning
  - [ ] More tools (file system, database queries)
  - [ ] Claude integration
- - [ ] Enhanced error handling
+ - [ ] Advanced logging and metrics
 
  ### v0.7.0
  - [ ] Multi-agent systems
@@ -9,7 +9,22 @@ module LLMChain
  # @param tools [Array<Tool>] Array of tools
  # @param retriever [#search] RAG retriever (Weaviate, Pinecone, etc.)
  # @param client_options [Hash] Options for the LLM client
- def initialize(model: nil, memory: nil, tools: [], retriever: nil, **client_options)
+ def initialize(model: nil, memory: nil, tools: [], retriever: nil, validate_config: true, **client_options)
+   # Configuration validation (can be disabled via validate_config: false)
+   if validate_config
+     begin
+       ConfigurationValidator.validate_chain_config!(
+         model: model,
+         tools: tools,
+         memory: memory,
+         retriever: retriever,
+         **client_options
+       )
+     rescue ConfigurationValidator::ValidationError => e
+       raise Error, "Configuration validation failed: #{e.message}"
+     end
+   end
+
  @model = model
  @memory = memory || Memory::Array.new
  @tools = tools
@@ -0,0 +1,349 @@
+ require 'net/http'
+ require 'uri'
+ require 'json'
+
+ module LLMChain
+   class ConfigurationValidator
+     class ValidationError < Error; end
+     class ValidationWarning < StandardError; end
+
+     def self.validate_chain_config!(model: nil, **options)
+       new.validate_chain_config!(model: model, **options)
+     end
+
+     def validate_chain_config!(model: nil, **options)
+       @warnings = []
+
+       begin
+         validate_model!(model) if model
+         validate_client_availability!(model) if model
+         validate_tools!(options[:tools]) if options[:tools]
+         validate_memory!(options[:memory]) if options[:memory]
+         validate_retriever!(options[:retriever]) if options[:retriever]
+
+         # Emit any accumulated warnings
+         @warnings.each { |warning| warn_user(warning) } if @warnings.any?
+
+         true
+       rescue => e
+         raise ValidationError, "Configuration validation failed: #{e.message}"
+       end
+     end
+
+     def self.validate_environment
+       new.validate_environment
+     end
+
+     def validate_environment
+       @warnings = []
+       results = {}
+
+       results[:ollama] = check_ollama_availability
+       results[:ruby] = check_ruby_version
+       results[:python] = check_python_availability
+       results[:node] = check_node_availability
+       results[:internet] = check_internet_connectivity
+       results[:apis] = check_api_keys
+
+       results[:warnings] = @warnings
+       results
+     end
+
+     private
+
+     def validate_model!(model)
+       return if model.nil?
+
+       case model.to_s
+       when /^gpt/
+         validate_openai_requirements!(model)
+       when /qwen|llama|gemma/
+         validate_ollama_requirements!(model)
+       else
+         add_warning("Unknown model type: #{model}. Proceeding with default settings.")
+       end
+     end
+
+     def validate_openai_requirements!(model)
+       api_key = ENV['OPENAI_API_KEY']
+       unless api_key
+         raise ValidationError, "OpenAI API key required for model '#{model}'. Set OPENAI_API_KEY environment variable."
+       end
+
+       if api_key.length < 20
+         raise ValidationError, "OpenAI API key appears to be invalid (too short)."
+       end
+
+       # Check OpenAI API availability
+       begin
+         uri = URI('https://api.openai.com/v1/models')
+         http = Net::HTTP.new(uri.host, uri.port)
+         http.use_ssl = true
+         http.open_timeout = 5
+         http.read_timeout = 5
+
+         request = Net::HTTP::Get.new(uri)
+         request['Authorization'] = "Bearer #{api_key}"
+
+         response = http.request(request)
+
+         case response.code
+         when '200'
+           # OK
+         when '401'
+           raise ValidationError, "OpenAI API key is invalid or expired."
+         when '429'
+           add_warning("OpenAI API rate limit reached. Service may be temporarily unavailable.")
+         else
+           add_warning("OpenAI API returned status #{response.code}. Service may be temporarily unavailable.")
+         end
+       rescue => e
+         add_warning("Cannot verify OpenAI API availability: #{e.message}")
+       end
+     end
+
+     def validate_ollama_requirements!(model)
+       unless check_ollama_availability
+         raise ValidationError, "Ollama is not running. Please start Ollama server with: ollama serve"
+       end
+
+       unless model_available_in_ollama?(model)
+         raise ValidationError, "Model '#{model}' not found in Ollama. Available models: #{list_ollama_models.join(', ')}"
+       end
+     end
+
+     def validate_client_availability!(model)
+       case model.to_s
+       when /qwen|llama|gemma/
+         unless check_ollama_availability
+           raise ValidationError, "Ollama server is not running for model '#{model}'"
+         end
+       end
+     end
+
+     def validate_tools!(tools)
+       return unless tools
+
+       if tools.respond_to?(:tools) # ToolManager
+         tools.tools.each { |tool| validate_single_tool!(tool) }
+       elsif tools.is_a?(Array)
+         tools.each { |tool| validate_single_tool!(tool) }
+       else
+         validate_single_tool!(tools)
+       end
+     end
+
+     def validate_single_tool!(tool)
+       case tool.class.name
+       when /WebSearch/
+         validate_web_search_tool!(tool)
+       when /CodeInterpreter/
+         validate_code_interpreter_tool!(tool)
+       when /Calculator/
+         # Calculator requires no additional validation
+       end
+     end
+
+     def validate_web_search_tool!(tool)
+       # Check Google Search API availability
+       if ENV['GOOGLE_API_KEY'] && ENV['GOOGLE_SEARCH_ENGINE_ID']
+         # API keys are present; verify the endpoint is reachable
+         begin
+           # Simple availability check
+           uri = URI('https://www.googleapis.com/customsearch/v1')
+           http = Net::HTTP.new(uri.host, uri.port)
+           http.use_ssl = true
+           http.open_timeout = 3
+           http.read_timeout = 3
+
+           response = http.get('/')
+           # Any response means the API is reachable
+         rescue => e
+           add_warning("Google Search API may be unavailable: #{e.message}")
+         end
+       else
+         add_warning("Google Search API not configured. Search will use fallback methods.")
+       end
+
+       # Check internet connectivity for fallback search
+       unless check_internet_connectivity
+         add_warning("No internet connection detected. Search functionality will be limited.")
+       end
+     end
+
+     def validate_code_interpreter_tool!(tool)
+       # Check that the required language interpreters are available
+       languages = tool.instance_variable_get(:@allowed_languages) || ['ruby']
+
+       languages.each do |lang|
+         case lang
+         when 'ruby'
+           unless check_ruby_version
+             add_warning("Ruby interpreter not found or outdated.")
+           end
+         when 'python'
+           unless check_python_availability
+             add_warning("Python interpreter not found.")
+           end
+         when 'javascript'
+           unless check_node_availability
+             add_warning("Node.js interpreter not found.")
+           end
+         end
+       end
+     end
+
+     def validate_memory!(memory)
+       return unless memory
+
+       case memory.class.name
+       when /Redis/
+         validate_redis_memory!(memory)
+       when /Array/
+         # Array memory requires no additional validation
+       end
+     end
+
+     def validate_redis_memory!(memory)
+       begin
+         # Check the Redis connection
+         redis_client = memory.instance_variable_get(:@redis) || memory.redis
+         if redis_client.respond_to?(:ping)
+           redis_client.ping
+         end
+       rescue => e
+         raise ValidationError, "Redis connection failed: #{e.message}"
+       end
+     end
+
+     def validate_retriever!(retriever)
+       return unless retriever
+       return if retriever == false
+
+       case retriever.class.name
+       when /Weaviate/
+         validate_weaviate_retriever!(retriever)
+       end
+     end
+
+     def validate_weaviate_retriever!(retriever)
+       # Check Weaviate availability
+       begin
+         # Attempt to connect to Weaviate
+         uri = URI('http://localhost:8080/v1/.well-known/ready')
+         response = Net::HTTP.get_response(uri)
+
+         unless response.code == '200'
+           raise ValidationError, "Weaviate server is not ready. Please start Weaviate."
+         end
+       rescue => e
+         raise ValidationError, "Cannot connect to Weaviate: #{e.message}"
+       end
+     end
+
+     # Helper methods for system checks
+
+     def check_ollama_availability
+       begin
+         uri = URI('http://localhost:11434/api/tags')
+         response = Net::HTTP.get_response(uri)
+         response.code == '200'
+       rescue
+         false
+       end
+     end
+
+     def model_available_in_ollama?(model)
+       begin
+         uri = URI('http://localhost:11434/api/tags')
+         response = Net::HTTP.get_response(uri)
+         return false unless response.code == '200'
+
+         data = JSON.parse(response.body)
+         models = data['models'] || []
+         models.any? { |m| m['name'].include?(model.to_s.split(':').first) }
+       rescue
+         false
+       end
+     end
+
+     def list_ollama_models
+       begin
+         uri = URI('http://localhost:11434/api/tags')
+         response = Net::HTTP.get_response(uri)
+         return [] unless response.code == '200'
+
+         data = JSON.parse(response.body)
+         models = data['models'] || []
+         models.map { |m| m['name'] }
+       rescue
+         []
+       end
+     end
+
+     def check_ruby_version
+       begin
+         version = RUBY_VERSION
+         major, minor, patch = version.split('.').map(&:to_i)
+
+         # Require Ruby >= 3.1.0
+         if major > 3 || (major == 3 && minor >= 1)
+           true
+         else
+           add_warning("Ruby version #{version} detected. Minimum required: 3.1.0")
+           false
+         end
+       rescue
+         false
+       end
+     end
+
+     def check_python_availability
+       begin
+         output = `python3 --version 2>&1`
+         $?.success? && output.include?('Python')
+       rescue
+         false
+       end
+     end
+
+     def check_node_availability
+       begin
+         output = `node --version 2>&1`
+         $?.success? && output.include?('v')
+       rescue
+         false
+       end
+     end
+
+     def check_internet_connectivity
+       begin
+         require 'socket'
+         Socket.tcp("8.8.8.8", 53, connect_timeout: 3) {}
+         true
+       rescue
+         false
+       end
+     end
+
+     def check_api_keys
+       keys = {}
+       keys[:openai] = !ENV['OPENAI_API_KEY'].nil?
+       keys[:google_search] = !ENV['GOOGLE_API_KEY'].nil? && !ENV['GOOGLE_SEARCH_ENGINE_ID'].nil?
+       keys[:bing_search] = !ENV['BING_API_KEY'].nil?
+       keys
+     end
+
+     def add_warning(message)
+       @warnings << message
+     end
+
+     def warn_user(message)
+       if defined?(Rails) && Rails.logger
+         Rails.logger.warn "[LLMChain] #{message}"
+       else
+         warn "[LLMChain] Warning: #{message}"
+       end
+     end
+   end
+ end