llm_chain 0.5.0 → 0.5.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 9dd4f92a849092fdd1770088d8724a557ae12ff16f7108f0cef3f6199445c521
- data.tar.gz: ec2f30766da2cd6ab9b2a92a9df35b914e995d5243359d8398b0ac96a41ebd8c
+ metadata.gz: a926b1222ae2f5fda1d4562ade3663cf1b9d9aff0e1162f8c8c6b17ab817a27c
+ data.tar.gz: 9a9c7ad9c081899e5ae014d0236eaf01cf93fa66fcdae2c439d18cf54e7de532
  SHA512:
- metadata.gz: faa70d193aaf6d0af6cfede19867d5bb3263098d0214f71d15ced7f336fecf91483b8c046c652e5e38289cfa0e9d0ac4f0dedebbcc815981904afb07a091cfd0
- data.tar.gz: d8c5f550897bfe5ed92d9aa2098dd46e1726fd2783c56d886c90db7588375f15055a101cefb3ad16fe9a3522c1917ad99cb1dc6cd4091e8d117d3ee997795119
+ metadata.gz: 0f6736ee81ee8cc057e7f283de27f7d7cda086a3161b2c0bd7b8b434542b9df3c52df905cbc4c561c069f47afbaa333f36f52fbbe9fac56a83fe766a165c71f2
+ data.tar.gz: 17750148dce41dd667b24621d1e19051f9ff253213a72ef4e79ffda7d6291494c2db79a88215257f0afb69b4a857583e734e9872fa515e10d7879d5b62f139cb
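The digests above can be checked locally: a `.gem` file is a plain tar archive containing `metadata.gz` and `data.tar.gz`, which is exactly what `checksums.yaml` covers. A minimal sketch, assuming `gem` and `sha256sum` are on the PATH (the fetch step needs network access):

```shell
# Illustrative only: fetch the released gem and recompute the SHA256
# digests that checksums.yaml records for its two tar members.
gem fetch llm_chain --version 0.5.2
tar -xf llm_chain-0.5.2.gem metadata.gz data.tar.gz
sha256sum metadata.gz data.tar.gz   # compare against the "+" lines above
```

A mismatch would mean the local artifact differs from what the registry published.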
data/CHANGELOG.md ADDED
@@ -0,0 +1,74 @@
+ # Changelog
+
+ All notable changes to this project will be documented in this file.
+
+ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
+ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
+
+ ## [Unreleased]
+
+ ## [0.5.2] - 2025-01-XX
+
+ ### Added
+ - **Configuration Validator** - Comprehensive system validation before chain initialization
+ - **System Diagnostics** - `LLMChain.diagnose_system` method for health checks
+ - **Retry Logic** - Exponential backoff for HTTP requests with configurable max retries
+ - **Enhanced Logging** - Structured logging with debug mode support
+ - **Internet Connectivity Detection** - Automatic offline mode detection
+ - **Code Extraction Improvements** - Better parsing of code blocks and inline commands
+
+ ### Changed
+ - **Improved Error Handling** - Better error messages with suggested solutions
+ - **Enhanced WebSearch** - More robust fallback mechanisms and timeout handling
+ - **CodeInterpreter Enhancements** - Improved code extraction from various formats
+ - **Better Validation** - Early detection of configuration issues with helpful warnings
+
+ ### Fixed
+ - **WebSearch Stability** - Fixed timeout and connection issues with retry logic
+ - **Code Block Parsing** - Resolved issues with multiline regex and Windows line endings
+ - **Graceful Degradation** - Better handling of offline scenarios and API failures
+ - **Memory Leaks** - Improved cleanup of temporary files and resources
+
+ ## [0.5.1] - 2025-06-26
+
+ ### Added
+ - Quick chain creation method `LLMChain.quick_chain` for rapid setup
+ - Global configuration system with `LLMChain.configure`
+ - Google Search integration for accurate, up-to-date search results
+ - Fallback search data for common queries (Ruby versions, etc.)
+ - Production-ready output without debug noise
+
+ ### Changed
+ - **BREAKING**: Replaced DuckDuckGo with Google as default search engine
+ - Web search now returns accurate results instead of outdated information
+ - Removed all debug output functionality for cleaner user experience
+ - Improved calculator expression parsing for better math evaluation
+ - Enhanced code interpreter to handle inline code prompts (e.g., "Execute code: puts ...")
+
+ ### Fixed
+ - Calculator now correctly parses expressions like "50 / 2" instead of extracting just "2"
+ - Code interpreter properly extracts code from "Execute code: ..." format
+ - Web search HTTP 202 responses no longer treated as errors
+ - Removed excessive debug console output
+
+ ## [0.5.0] - 2025-06-25
+
+ ### Added
+ - Core tool system with automatic tool selection
+ - Calculator tool for mathematical expressions
+ - Web search tool with DuckDuckGo integration
+ - Code interpreter tool for Ruby code execution
+ - Multi-LLM support (OpenAI, Qwen, LLaMA2, Gemma)
+ - Memory system with Array and Redis backends
+ - RAG support with vector databases
+ - Streaming output support
+ - Comprehensive error handling
+ - Tool manager for organizing and managing tools
+
+ ### Changed
+ - Initial stable release with core functionality
+
+ [Unreleased]: https://github.com/FuryCow/llm_chain/compare/v0.5.2...HEAD
+ [0.5.2]: https://github.com/FuryCow/llm_chain/compare/v0.5.1...v0.5.2
+ [0.5.1]: https://github.com/FuryCow/llm_chain/compare/v0.5.0...v0.5.1
+ [0.5.0]: https://github.com/FuryCow/llm_chain/releases/tag/v0.5.0
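The 0.5.2 entry above promises "exponential backoff for HTTP requests with configurable max retries". As a rough illustration of that behavior (a hypothetical `with_retries` helper, not the gem's internal API — the actual implementation lives inside the tools), delays double on each failed attempt, matching the 0.5s / 1.0s / 2.0s pattern the README's debug logs show:

```ruby
# Hypothetical sketch of retry with exponential backoff; helper name and
# keyword arguments are illustrative, not llm_chain's real API.
def with_retries(max_retries: 3, base_delay: 0.5)
  attempts = 0
  begin
    yield
  rescue StandardError
    attempts += 1
    raise if attempts > max_retries          # out of retries: re-raise
    sleep(base_delay * (2**(attempts - 1)))  # 0.5s, 1.0s, 2.0s, ...
    retry
  end
end

# Example: an operation that fails twice, then succeeds on the third call.
calls = 0
result = with_retries(base_delay: 0) do
  calls += 1
  raise "transient" if calls < 3
  :ok
end
# result == :ok, calls == 3
```

If every attempt fails, the last exception propagates to the caller, which is what lets the tools fall back to their offline data paths.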
data/README.md CHANGED
@@ -8,6 +8,15 @@
 
  LLMChain is a Ruby analog of LangChain, providing a unified interface for interacting with various LLMs, built-in tool system, and RAG (Retrieval-Augmented Generation) support.
 
+ ## 🎉 What's New in v0.5.2
+
+ - ✅ **System Diagnostics** - Built-in health checks with `LLMChain.diagnose_system`
+ - ✅ **Configuration Validation** - Early detection of setup issues and helpful warnings
+ - ✅ **Enhanced Error Handling** - Retry logic with exponential backoff for network requests
+ - ✅ **Robust WebSearch** - Better timeout handling and graceful degradation
+ - ✅ **Improved Code Extraction** - Enhanced support for various code block formats
+ - ✅ **Debug Logging** - Structured logging with `LLM_CHAIN_DEBUG=true`
+
  ## ✨ Key Features
 
  - 🤖 **Unified API** for multiple LLMs (OpenAI, Ollama, Qwen, LLaMA2, Gemma)
@@ -45,10 +54,14 @@ gem 'llm_chain'
  ollama pull llama2:7b
  ```
 
- 2. **Optional**: API keys for external services
+ 2. **Optional**: API keys for enhanced features
  ```bash
- export OPENAI_API_KEY="your-key"
- export SEARCH_API_KEY="your-key"
+ # For OpenAI models
+ export OPENAI_API_KEY="your-openai-key"
+
+ # For Google Search (get at console.developers.google.com)
+ export GOOGLE_API_KEY="your-google-key"
+ export GOOGLE_SEARCH_ENGINE_ID="your-search-engine-id"
  ```
 
  ### Simple Example
@@ -56,33 +69,91 @@ gem 'llm_chain'
  ```ruby
  require 'llm_chain'
 
- # Basic usage
+ # Quick start with default tools (v0.5.1+)
+ chain = LLMChain.quick_chain
+ response = chain.ask("Hello! How are you?")
+ puts response
+
+ # Or traditional setup
  chain = LLMChain::Chain.new(model: "qwen3:1.7b")
  response = chain.ask("Hello! How are you?")
  puts response
  ```
 
- ## 🛠️ Tool System
+ ## 🔍 System Diagnostics (v0.5.2+)
 
- ### Automatic Tool Usage
+ Before diving into development, it's recommended to check your system configuration:
 
  ```ruby
- # Create chain with tools
- tool_manager = LLMChain::Tools::ToolManager.create_default_toolset
- chain = LLMChain::Chain.new(
+ require 'llm_chain'
+
+ # Run comprehensive system diagnostics
+ LLMChain.diagnose_system
+ # 🔍 LLMChain System Diagnostics
+ # ==================================================
+ # 📋 System Components:
+ #   Ruby: ✅ (3.2.2)
+ #   Python: ✅
+ #   Node.js: ✅
+ #   Internet: ✅
+ #   Ollama: ✅
+ # 🔑 API Keys:
+ #   Openai: ❌
+ #   Google_search: ❌
+ # 💡 Recommendations:
+ #   • Configure API keys for enhanced features
+ #   • Start Ollama server: ollama serve
+ ```
+
+ ### Configuration Validation
+
+ Chains now validate their configuration on startup:
+
+ ```ruby
+ # Automatic validation (v0.5.2+)
+ begin
+   chain = LLMChain.quick_chain(model: "qwen3:1.7b")
+ rescue LLMChain::Error => e
+   puts "Configuration issue: #{e.message}"
+ end
+
+ # Disable validation if needed
+ chain = LLMChain.quick_chain(
  model: "qwen3:1.7b",
- tools: tool_manager
+ validate_config: false
  )
 
+ # Manual validation
+ LLMChain::ConfigurationValidator.validate_chain_config!(
+   model: "qwen3:1.7b",
+   tools: LLMChain::Tools::ToolManager.create_default_toolset
+ )
+ ```
+
+ ## 🛠️ Tool System
+
+ ### Automatic Tool Usage
+
+ ```ruby
+ # Quick setup (v0.5.1+)
+ chain = LLMChain.quick_chain
+
  # Tools are selected automatically
  chain.ask("Calculate 15 * 7 + 32")
- # 🧮 Automatically uses calculator
+ # 🧮 Result: 137
 
- chain.ask("Find information about Ruby 3.2")
- # 🔍 Automatically uses web search
+ chain.ask("Which is the latest version of Ruby?")
+ # 🔍 Result: Ruby 3.3.6 (via Google search)
 
  chain.ask("Execute code: puts (1..10).sum")
- # 💻 Automatically uses code interpreter
+ # 💻 Result: 55
+
+ # Traditional setup
+ tool_manager = LLMChain::Tools::ToolManager.create_default_toolset
+ chain = LLMChain::Chain.new(
+   model: "qwen3:1.7b",
+   tools: tool_manager
+ )
  ```
 
  ### Built-in Tools
@@ -97,14 +168,23 @@ puts result[:formatted]
 
  #### 🌐 Web Search
  ```ruby
+ # Google search for accurate results (v0.5.1+)
  search = LLMChain::Tools::WebSearch.new
- results = search.call("Latest Ruby news")
+ results = search.call("Latest Ruby version")
  puts results[:formatted]
+ # Output: Ruby 3.3.6 is the current stable version...
+
+ # Fallback data available without API keys
+ search = LLMChain::Tools::WebSearch.new
+ results = search.call("Which is the latest version of Ruby?")
+ # Works even without Google API configured
  ```
 
- #### 💻 Code Interpreter
+ #### 💻 Code Interpreter (Enhanced in v0.5.2)
  ```ruby
  interpreter = LLMChain::Tools::CodeInterpreter.new
+
+ # Standard markdown blocks
  result = interpreter.call(<<~CODE)
  ```ruby
  def factorial(n)
@@ -113,9 +193,80 @@ result = interpreter.call(<<~CODE)
  puts factorial(5)
  ```
  CODE
+
+ # Inline code commands (v0.5.2+)
+ result = interpreter.call("Execute code: puts 'Hello World!'")
+
+ # Code without language specification
+ result = interpreter.call(<<~CODE)
+ ```
+ numbers = [1, 2, 3, 4, 5]
+ puts numbers.sum
+ ```
+ CODE
+
+ # Windows line endings support (v0.5.2+)
+ result = interpreter.call("```ruby\r\nputs 'Windows compatible'\r\n```")
+
  puts result[:formatted]
  ```
 
+ ## ⚙️ Configuration (v0.5.2+)
+
+ ```ruby
+ # Global configuration
+ LLMChain.configure do |config|
+   config.default_model = "qwen3:1.7b"  # Default LLM model
+   config.search_engine = :google       # Google for accurate results
+   config.memory_size = 100             # Memory buffer size
+   config.timeout = 30                  # Request timeout (seconds)
+ end
+
+ # Quick chain with default settings
+ chain = LLMChain.quick_chain
+
+ # Override settings per chain (v0.5.2+)
+ chain = LLMChain.quick_chain(
+   model: "gpt-4",
+   tools: false,           # Disable tools
+   memory: false,          # Disable memory
+   validate_config: false  # Skip validation
+ )
+ ```
+
+ ### Debug Mode (v0.5.2+)
+
+ Enable detailed logging for troubleshooting:
+
+ ```bash
+ # Enable debug logging
+ export LLM_CHAIN_DEBUG=true
+
+ # Or in Ruby
+ ENV['LLM_CHAIN_DEBUG'] = 'true'
+ ```
+
+ ### Validation and Error Handling (v0.5.2+)
+
+ ```ruby
+ # Comprehensive environment check
+ results = LLMChain::ConfigurationValidator.validate_environment
+ puts "Ollama available: #{results[:ollama]}"
+ puts "Internet: #{results[:internet]}"
+ puts "Warnings: #{results[:warnings]}"
+
+ # Chain validation with custom settings
+ begin
+   LLMChain::ConfigurationValidator.validate_chain_config!(
+     model: "gpt-4",
+     tools: [LLMChain::Tools::Calculator.new, LLMChain::Tools::WebSearch.new]
+   )
+ rescue LLMChain::ConfigurationValidator::ValidationError => e
+   puts "Setup issue: #{e.message}"
+   # Handle configuration problems
+ end
+ ```
+
  ### Creating Custom Tools
 
  ```ruby
@@ -374,23 +525,64 @@ openai = LLMChain::Clients::OpenAI.new(
  )
  ```
 
- ## 🔧 Error Handling
+ ## 🔧 Error Handling (Enhanced in v0.5.2)
 
  ```ruby
  begin
  chain = LLMChain::Chain.new(model: "qwen3:1.7b")
  response = chain.ask("Complex query")
+ rescue LLMChain::ConfigurationValidator::ValidationError => e
+   puts "Configuration issue: #{e.message}"
+   # Use LLMChain.diagnose_system to check setup
  rescue LLMChain::UnknownModelError => e
  puts "Unknown model: #{e.message}"
+   # Check available models with ollama list
  rescue LLMChain::ClientError => e
  puts "Client error: #{e.message}"
+   # Network or API issues
  rescue LLMChain::TimeoutError => e
  puts "Timeout exceeded: #{e.message}"
+   # Increase timeout or use faster model
  rescue LLMChain::Error => e
  puts "General LLMChain error: #{e.message}"
  end
  ```
 
+ ### Automatic Retry Logic (v0.5.2+)
+
+ WebSearch and other tools now include automatic retry with exponential backoff:
+
+ ```ruby
+ # Retry configuration is automatic, but you can observe it:
+ ENV['LLM_CHAIN_DEBUG'] = 'true'
+
+ search = LLMChain::Tools::WebSearch.new
+ result = search.call("search query")
+ # [WebSearch] Retrying search (1/3) after 0.5s: Net::TimeoutError
+ # [WebSearch] Retrying search (2/3) after 1.0s: Net::TimeoutError
+ # [WebSearch] Search failed after 3 attempts: Net::TimeoutError
+
+ # Tools gracefully degrade to fallback methods when possible
+ puts result[:formatted]  # Still provides useful response
+ ```
+
+ ### Graceful Degradation
+
+ ```ruby
+ # Tools handle failures gracefully
+ calculator = LLMChain::Tools::Calculator.new
+ web_search = LLMChain::Tools::WebSearch.new
+ code_runner = LLMChain::Tools::CodeInterpreter.new
+
+ # Even with network issues, you get useful responses:
+ search_result = web_search.call("latest Ruby version")
+ # Falls back to hardcoded data for common queries
+
+ # Safe code execution with timeout protection:
+ code_result = code_runner.call("puts 'Hello World!'")
+ # Executes safely with proper sandboxing
+ ```
+
  ## 📚 Usage Examples
 
  ### Chatbot with Tools
@@ -501,22 +693,28 @@ chain.ask(prompt, stream: false, rag_context: false, rag_options: {})
 
  ## 🛣️ Roadmap
 
- ### v0.6.0
- - [ ] ReAct agents
- - [ ] More tools (files, database)
+ ### v0.5.2 ✅ Completed
+ - [x] System diagnostics and health checks
+ - [x] Configuration validation
+ - [x] Enhanced error handling with retry logic
+ - [x] Improved code extraction and tool stability
+
+ ### v0.6.0 (Next)
+ - [ ] ReAct agents and multi-step reasoning
+ - [ ] More tools (file system, database queries)
  - [ ] Claude integration
- - [ ] Enhanced logging
+ - [ ] Advanced logging and metrics
 
  ### v0.7.0
  - [ ] Multi-agent systems
- - [ ] Task planning
- - [ ] Web interface
+ - [ ] Task planning and workflows
+ - [ ] Web interface for testing
  - [ ] Metrics and monitoring
 
  ### v1.0.0
- - [ ] Stable API
- - [ ] Complete documentation
- - [ ] Production readiness
+ - [ ] Stable API with semantic versioning
+ - [ ] Complete documentation coverage
+ - [ ] Production-grade performance
 
  ## 🤝 Contributing
 
@@ -551,5 +749,6 @@ This project is distributed under the [MIT License](LICENSE.txt).
 
  [Documentation](https://github.com/FuryCow/llm_chain/wiki) |
  [Examples](https://github.com/FuryCow/llm_chain/tree/main/examples) |
+ [Changelog](CHANGELOG.md) |
  [Issues](https://github.com/FuryCow/llm_chain/issues) |
  [Discussions](https://github.com/FuryCow/llm_chain/discussions)
@@ -9,7 +9,22 @@ module LLMChain
  # @param tools [Array<Tool>] Array of tools
  # @param retriever [#search] RAG retriever (Weaviate, Pinecone, etc.)
  # @param client_options [Hash] Options for the LLM client
- def initialize(model: nil, memory: nil, tools: [], retriever: nil, **client_options)
+ def initialize(model: nil, memory: nil, tools: [], retriever: nil, validate_config: true, **client_options)
+   # Configuration validation (can be disabled via validate_config: false)
+   if validate_config
+     begin
+       ConfigurationValidator.validate_chain_config!(
+         model: model,
+         tools: tools,
+         memory: memory,
+         retriever: retriever,
+         **client_options
+       )
+     rescue ConfigurationValidator::ValidationError => e
+       raise Error, "Configuration validation failed: #{e.message}"
+     end
+   end
+
  @model = model
  @memory = memory || Memory::Array.new
  @tools = tools
@@ -105,7 +120,12 @@ module LLMChain
  parts = ["Tool results:"]
  tool_responses.each do |name, response|
  if response.is_a?(Hash) && response[:formatted]
- parts << "#{name}: #{response[:formatted]}"
+   # Special handling for searches that return no results
+   if name == "web_search" && response[:results] && response[:results].empty?
+     parts << "#{name}: No search results found. Please answer based on your knowledge, but indicate that search was unavailable."
+   else
+     parts << "#{name}: #{response[:formatted]}"
+   end
  else
  parts << "#{name}: #{response}"
  end
@@ -7,7 +7,6 @@ module LLMChain
  end
 
  def self.client_for(model, **options)
- puts model
  instance = case model
  when /gpt|openai/
  Clients::OpenAI