llm_chain 0.4.0 → 0.5.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
data/README.md CHANGED
@@ -1,192 +1,606 @@
1
- # LLMChain
2
-
3
- A Ruby gem for interacting with Large Language Models (LLMs) through a unified interface, with native Ollama and local model support.
1
+ # 🦾 LLMChain
4
2
 
5
3
  [![Gem Version](https://badge.fury.io/rb/llm_chain.svg)](https://badge.fury.io/rb/llm_chain)
6
- [![Tests](https://github.com/your_username/llm_chain/actions/workflows/tests.yml/badge.svg)](https://github.com/your_username/llm_chain/actions)
4
+ [![Tests](https://github.com/FuryCow/llm_chain/actions/workflows/tests.yml/badge.svg)](https://github.com/FuryCow/llm_chain/actions)
7
5
  [![MIT License](https://img.shields.io/badge/license-MIT-blue.svg)](LICENSE.txt)
8
6
 
9
- ## Features
7
+ **A powerful Ruby library for working with Large Language Models (LLMs), featuring an intelligent tool system**
8
+
9
+ LLMChain is a Ruby analog of LangChain, providing a unified interface for interacting with various LLMs, a built-in tool system, and RAG (Retrieval-Augmented Generation) support.
10
+
11
+ ## 🎉 What's New in v0.5.1
12
+
13
+ - ✅ **Google Search Integration** - Accurate, up-to-date search results
14
+ - ✅ **Fixed Calculator** - Improved expression parsing and evaluation
15
+ - ✅ **Enhanced Code Interpreter** - Better code extraction from prompts
16
+ - ✅ **Production-Ready Output** - Clean interface without debug noise
17
+ - ✅ **Quick Chain Creation** - Simple `LLMChain.quick_chain` method
18
+ - ✅ **Simplified Configuration** - Easy setup with sensible defaults
19
+
20
+ ## ✨ Key Features
10
21
 
11
- - Unified interface for multiple LLMs (Qwen, Llama2, Mistral, etc.)
12
- - Native [Ollama](https://ollama.ai/) integration for local models
13
- - Prompt templating system
14
- - Streaming response support
15
- - RAG-ready with vector database integration
16
- - Automatic model verification
22
+ - 🤖 **Unified API** for multiple LLMs (OpenAI, Ollama, Qwen, LLaMA2, Gemma)
23
+ - 🛠️ **Intelligent tool system** with automatic selection
24
+ - 🧮 **Built-in tools**: Calculator, web search, code interpreter
25
+ - 🔍 **RAG-ready** with vector database integration
26
+ - 💾 **Flexible memory system** (Array, Redis)
27
+ - 🌊 **Streaming output** for real-time responses
28
+ - 🏠 **Local models** via Ollama
29
+ - 🔧 **Extensible architecture** for custom tools
17
30
 
18
- ## Installation
31
+ ## 🚀 Quick Start
32
+
33
+ ### Installation
34
+
35
+ ```bash
36
+ gem install llm_chain
37
+ ```
19
38
 
20
- Add to your Gemfile:
39
+ Or add it to your Gemfile:
21
40
 
22
41
  ```ruby
23
42
  gem 'llm_chain'
24
43
  ```
25
- Or install directly:
26
44
 
27
- ```
28
- gem install llm_chain
29
- ```
45
+ ### Prerequisites
46
+
47
+ 1. **Install Ollama** for local models:
48
+ ```bash
49
+ # macOS/Linux
50
+ curl -fsSL https://ollama.ai/install.sh | sh
51
+
52
+ # Download models
53
+ ollama pull qwen3:1.7b
54
+ ollama pull llama2:7b
55
+ ```
56
+
57
+ 2. **Optional**: API keys for enhanced features
58
+ ```bash
59
+ # For OpenAI models
60
+ export OPENAI_API_KEY="your-openai-key"
61
+
62
+ # For Google Search (get at console.developers.google.com)
63
+ export GOOGLE_API_KEY="your-google-key"
64
+ export GOOGLE_SEARCH_ENGINE_ID="your-search-engine-id"
65
+ ```
66
+
67
+ ### Simple Example
30
68
 
31
- ## Prerequisites
32
- Install [Ollama](https://ollama.ai/)
69
+ ```ruby
70
+ require 'llm_chain'
33
71
 
34
- Pull desired models:
72
+ # Quick start with default tools (v0.5.1+)
73
+ chain = LLMChain.quick_chain
74
+ response = chain.ask("Hello! How are you?")
75
+ puts response
35
76
 
36
- ```bash
37
- ollama pull qwen:7b
38
- ollama pull llama2:13b
77
+ # Or traditional setup
78
+ chain = LLMChain::Chain.new(model: "qwen3:1.7b")
79
+ response = chain.ask("Hello! How are you?")
80
+ puts response
39
81
  ```
40
82
 
41
- ## Usage
83
+ ## 🛠️ Tool System
42
84
 
43
- basic example:
85
+ ### Automatic Tool Usage
44
86
 
45
87
  ```ruby
46
- require 'llm_chain'
88
+ # Quick setup (v0.5.1+)
89
+ chain = LLMChain.quick_chain
90
+
91
+ # Tools are selected automatically
92
+ chain.ask("Calculate 15 * 7 + 32")
93
+ # 🧮 Result: 137
47
94
 
48
- memory = LLMChain::Memory::Array.new(max_size: 1)
49
- chain = LLMChain::Chain.new(model: "qwen3:1.7b", memory: memory, retriever: false)
50
- # retriever: false is required when you don't use a vector database to store context or external data
51
- # reitriever: - is set to WeaviateRetriever.new as default so you need to pass an external params to set Weaviate host
52
- puts chain.ask("What is 2+2?")
95
+ chain.ask("Which is the latest version of Ruby?")
96
+ # ๐Ÿ” Result: Ruby 3.3.6 (via Google search)
97
+
98
+ chain.ask("Execute code: puts (1..10).sum")
99
+ # ๐Ÿ’ป Result: 55
100
+
101
+ # Traditional setup
102
+ tool_manager = LLMChain::Tools::ToolManager.create_default_toolset
103
+ chain = LLMChain::Chain.new(
104
+ model: "qwen3:1.7b",
105
+ tools: tool_manager
106
+ )
53
107
  ```
54
108
 
55
- Using redis as redistributed memory store:
109
+ ### Built-in Tools
56
110
 
111
+ #### 🧮 Calculator
57
112
  ```ruby
58
- # redis_url: 'redis://localhost:6379' is default or either set REDIS_URL env var
59
- # max_size: 10 is default
60
- # namespace: 'llm_chain' is default
61
- memory = LLMChain::Memory::Redis.new(redis_url: 'redis://localhost:6379', max_size: 10, namespace: 'my_app')
113
+ calculator = LLMChain::Tools::Calculator.new
114
+ result = calculator.call("Find square root of 144")
115
+ puts result[:formatted]
116
+ # Output: sqrt(144) = 12.0
117
+ ```
62
118
 
63
- chain = LLMChain::Chain.new(model: "qwen3:1.7b", memory: memory)
64
- puts chain.ask("What is 2+2?")
119
+ #### 🌐 Web Search
120
+ ```ruby
121
+ # Google search for accurate results (v0.5.1+)
122
+ search = LLMChain::Tools::WebSearch.new
123
+ results = search.call("Latest Ruby version")
124
+ puts results[:formatted]
125
+ # Output: Ruby 3.3.6 is the current stable version...
126
+
127
+ # Fallback data available without API keys
128
+ search = LLMChain::Tools::WebSearch.new
129
+ results = search.call("Which is the latest version of Ruby?")
130
+ # Works even without Google API configured
131
+ ```
132
+
133
+ #### 💻 Code Interpreter
134
+ ```ruby
135
+ interpreter = LLMChain::Tools::CodeInterpreter.new
136
+ result = interpreter.call(<<~CODE)
137
+ ```ruby
138
+ def factorial(n)
139
+ n <= 1 ? 1 : n * factorial(n - 1)
140
+ end
141
+ puts factorial(5)
142
+ ```
143
+ CODE
144
+ puts result[:formatted]
65
145
  ```
66
146
 
67
- Model-specific Clients:
147
+ ## ⚙️ Configuration (v0.5.1+)
68
148
 
69
149
  ```ruby
70
- # Qwen with custom options (Without RAG support)
71
- qwen = LLMChain::Clients::Qwen.new(
72
- model: "qwen3:1.7b",
73
- temperature: 0.8,
74
- top_p: 0.95
150
+ # Global configuration
151
+ LLMChain.configure do |config|
152
+ config.default_model = "qwen3:1.7b" # Default LLM model
153
+ config.search_engine = :google # Google for accurate results
154
+ config.memory_size = 100 # Memory buffer size
155
+ config.timeout = 30 # Request timeout (seconds)
156
+ end
157
+
158
+ # Quick chain with default settings
159
+ chain = LLMChain.quick_chain
160
+
161
+ # Override settings per chain
162
+ chain = LLMChain.quick_chain(
163
+ model: "gpt-4",
164
+ tools: false, # Disable tools
165
+ memory: false # Disable memory
75
166
  )
76
- puts qwen.chat("Write Ruby code for Fibonacci sequence")
77
167
  ```
78
168
 
79
- Streaming Responses:
169
+ ### Creating Custom Tools
80
170
 
81
171
  ```ruby
82
- LLMChain::Chain.new(model: "qwen3:1.7b").ask('How are you?', stream: true) do |chunk|
83
- print chunk
172
+ class WeatherTool < LLMChain::Tools::BaseTool
173
+ def initialize(api_key:)
174
+ @api_key = api_key
175
+ super(
176
+ name: "weather",
177
+ description: "Gets weather information",
178
+ parameters: {
179
+ location: {
180
+ type: "string",
181
+ description: "City name"
182
+ }
183
+ }
184
+ )
185
+ end
186
+
187
+ def match?(prompt)
188
+ contains_keywords?(prompt, ['weather', 'temperature', 'forecast'])
189
+ end
190
+
191
+ def call(prompt, context: {})
192
+ location = extract_location(prompt)
193
+ # Your weather API integration
194
+ {
195
+ location: location,
196
+ temperature: "22°C",
197
+ condition: "Sunny",
198
+ formatted: "Weather in #{location}: 22°C, Sunny"
199
+ }
200
+ end
201
+
202
+ private
203
+
204
+ def extract_location(prompt)
205
+ prompt.scan(/in\s+(\w+)/i).flatten.first || "Unknown"
206
+ end
84
207
  end
208
+
209
+ # Usage
210
+ weather = WeatherTool.new(api_key: "your-key")
211
+ tool_manager = LLMChain::Tools::ToolManager.create_default_toolset
+ tool_manager.register_tool(weather)
85
212
  ```
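+
+ Once registered, a custom tool takes part in automatic selection just like the built-ins. A minimal sketch, reusing the `tool_manager` from above (the model and prompt are illustrative):
+
+ ```ruby
+ chain = LLMChain::Chain.new(
+   model: "qwen3:1.7b",
+   tools: tool_manager
+ )
+
+ # match? sees the "weather" keyword, so WeatherTool handles the request
+ puts chain.ask("What's the weather in London?")
+ ```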
86
213
 
87
- Chain pattern:
214
+ ## 🤖 Supported Models
88
215
 
216
+ | Model Family | Backend | Status | Notes |
217
+ |--------------|---------|--------|-------|
218
+ | **OpenAI** | Web API | ✅ Supported | GPT-3.5, GPT-4, GPT-4 Turbo |
219
+ | **Qwen/Qwen2** | Ollama | ✅ Supported | 0.5B - 72B parameters |
220
+ | **LLaMA2/3** | Ollama | ✅ Supported | 7B, 13B, 70B |
221
+ | **Gemma** | Ollama | ✅ Supported | 2B, 7B, 9B, 27B |
222
+ | **Mistral/Mixtral** | Ollama | 🔄 In development | 7B, 8x7B |
223
+ | **Claude** | Anthropic | 🔄 Planned | Haiku, Sonnet, Opus |
224
+ | **Command R+** | Cohere | 🔄 Planned | Optimized for RAG |
225
+
226
+ ### Model Usage Examples
227
+
228
+ ```ruby
229
+ # OpenAI
230
+ openai_chain = LLMChain::Chain.new(
231
+ model: "gpt-4",
232
+ api_key: ENV['OPENAI_API_KEY']
233
+ )
234
+
235
+ # Qwen via Ollama
236
+ qwen_chain = LLMChain::Chain.new(model: "qwen3:1.7b")
237
+
238
+ # LLaMA via Ollama with settings
239
+ llama_chain = LLMChain::Chain.new(
240
+ model: "llama2:7b",
241
+ temperature: 0.8,
242
+ top_p: 0.95
243
+ )
244
+ ```
245
+
246
+ ## 💾 Memory System
247
+
248
+ ### Array Memory (default)
89
249
  ```ruby
250
+ memory = LLMChain::Memory::Array.new(max_size: 10)
90
251
  chain = LLMChain::Chain.new(
91
252
  model: "qwen3:1.7b",
92
- memory: LLMChain::Memory::Array.new
253
+ memory: memory
93
254
  )
94
255
 
95
- # Conversation with context
96
- chain.ask("What's 2^10?")
97
- chain.ask("Now multiply that by 5")
256
+ chain.ask("My name is Alex")
257
+ chain.ask("What's my name?") # Remembers previous context
98
258
  ```
99
259
 
100
- ## Supported Models
260
+ ### Redis Memory (for production)
261
+ ```ruby
262
+ memory = LLMChain::Memory::Redis.new(
263
+ redis_url: 'redis://localhost:6379',
264
+ max_size: 100,
265
+ namespace: 'my_app'
266
+ )
267
+
268
+ chain = LLMChain::Chain.new(
269
+ model: "qwen3:1.7b",
270
+ memory: memory
271
+ )
272
+ ```
101
273
 
102
- | Model Family | Backend/Service | Notes |
103
- |-------------|----------------|-------|
104
- | OpenAI (GPT-3.5, GPT-4) | Web API | Supports all OpenAI API models (Not tested) |
105
- | LLaMA2 (7B, 13B, 70B) | Ollama | Local inference via Ollama |
106
- | Qwen/Qwen3 (0.5B-72B) | Ollama | Supports all Qwen model sizes |
107
- | Mistral/Mixtral | Ollama | Including Mistral 7B and Mixtral 8x7B (In progress) |
108
- | Gemma (2B, 7B) | Ollama | Google's lightweight models (In progress) |
109
- | Claude (Haiku, Sonnet, Opus) | Anthropic API | Web API access (In progress) |
110
- | Command R+ | Cohere API | Optimized for RAG (In progress) |
274
+ ## 🔍 RAG (Retrieval-Augmented Generation)
111
275
 
112
- ## Retrieval-Augmented Generation (RAG)
276
+ ### Setting up RAG with Weaviate
113
277
 
114
278
  ```ruby
115
279
  # Initialize components
116
- embedder = LLMChain::Embeddings::Clients::Local::OllamaClient.new(model: "nomic-embed-text")
117
- rag_store = LLMChain::Embeddings::Clients::Local::WeaviateVectorStore.new(embedder: embedder, weaviate_url: 'http://localhost:8080') # Replace with your Weaviate URL if needed
118
- retriever = LLMChain::Embeddings::Clients::Local::WeaviateRetriever.new(embedder: embedder)
119
- memory = LLMChain::Memory::Array.new
120
- tools = []
280
+ embedder = LLMChain::Embeddings::Clients::Local::OllamaClient.new(
281
+ model: "nomic-embed-text"
282
+ )
121
283
 
122
- # Create chain
284
+ vector_store = LLMChain::Embeddings::Clients::Local::WeaviateVectorStore.new(
285
+ embedder: embedder,
286
+ weaviate_url: 'http://localhost:8080'
287
+ )
288
+
289
+ retriever = LLMChain::Embeddings::Clients::Local::WeaviateRetriever.new(
290
+ embedder: embedder
291
+ )
292
+
293
+ # Create chain with RAG
123
294
  chain = LLMChain::Chain.new(
124
295
  model: "qwen3:1.7b",
125
- memory: memory, # LLMChain::Memory::Array.new is default
126
- tools: tools, # There is no tools supported yet
127
- retriever: retriever # LLMChain::Embeddings::Clients::Local::WeaviateRetriever.new is default
296
+ retriever: retriever
128
297
  )
298
+ ```
129
299
 
130
- # simple Chain definition, with default settings
131
-
132
- simple_chain = LLMChain::Chain.new(model: "qwen3:1.7b")
300
+ ### Adding Documents
133
301
 
134
- # Example of adding documents to vector database
302
+ ```ruby
135
303
  documents = [
136
304
  {
137
- text: "Ruby supports four OOP principles: encapsulation, inheritance, polymorphism and abstraction",
138
- metadata: { source: "ruby-docs", page: 42 }
305
+ text: "Ruby supports OOP principles: encapsulation, inheritance, polymorphism",
306
+ metadata: { source: "ruby-guide", page: 15 }
139
307
  },
140
308
  {
141
309
  text: "Modules in Ruby are used for namespaces and mixins",
142
- metadata: { source: "ruby-guides", author: "John Doe" }
143
- },
144
- {
145
- text: "2 + 2 is equals to 4",
146
- matadata: { source: 'mad_brain', author: 'John Doe' }
310
+ metadata: { source: "ruby-book", author: "Matz" }
147
311
  }
148
312
  ]
149
313
 
150
- # Ingest documents into Weaviate
314
+ # Add to vector database
151
315
  documents.each do |doc|
152
- rag_store.add_document(
316
+ vector_store.add_document(
153
317
  text: doc[:text],
154
318
  metadata: doc[:metadata]
155
319
  )
156
320
  end
321
+ ```
157
322
 
158
- # Simple query without RAG
159
- response = chain.ask("What is 2+2?", rag_context: false) # rag_context: false is default
160
- puts response
323
+ ### RAG Queries
324
+
325
+ ```ruby
326
+ # Regular query
327
+ response = chain.ask("What is Ruby?")
161
328
 
162
- # Query with RAG context
329
+ # Query with RAG
163
330
  response = chain.ask(
164
331
  "What OOP principles does Ruby support?",
165
332
  rag_context: true,
166
333
  rag_options: { limit: 3 }
167
334
  )
168
- puts response
335
+ ```
336
+
337
+ ## 🌊 Streaming Output
338
+
339
+ ```ruby
340
+ chain = LLMChain::Chain.new(model: "qwen3:1.7b")
341
+
342
+ # Streaming with block
343
+ chain.ask("Tell me about Ruby history", stream: true) do |chunk|
344
+ print chunk
345
+ $stdout.flush
346
+ end
347
+
348
+ # Streaming with tools
349
+ tool_manager = LLMChain::Tools::ToolManager.create_default_toolset
350
+ chain = LLMChain::Chain.new(
351
+ model: "qwen3:1.7b",
352
+ tools: tool_manager
353
+ )
169
354
 
170
- # Streamed response with RAG
171
- chain.ask("Explain Ruby modules", stream: true, rag_context: true) do |chunk|
355
+ chain.ask("Calculate 15! and explain the process", stream: true) do |chunk|
172
356
  print chunk
173
357
  end
174
358
  ```
175
359
 
176
- ## Error handling
360
+ ## ⚙️ Advanced Configuration
361
+
362
+ ### Environment Variables
363
+
364
+ ```bash
365
+ # OpenAI
366
+ export OPENAI_API_KEY="sk-..."
367
+ export OPENAI_ORGANIZATION_ID="org-..."
368
+
369
+ # Search
370
+ export SEARCH_API_KEY="your-search-api-key"
371
+ export GOOGLE_SEARCH_ENGINE_ID="your-cse-id"
372
+
373
+ # Redis
374
+ export REDIS_URL="redis://localhost:6379"
375
+
376
+ # Weaviate
377
+ export WEAVIATE_URL="http://localhost:8080"
378
+ ```
379
+
380
+ ### Tool Configuration
381
+
382
+ ```ruby
383
+ # From configuration
384
+ tools_config = [
385
+ {
386
+ class: 'calculator'
387
+ },
388
+ {
389
+ class: 'web_search',
390
+ options: {
391
+ search_engine: :duckduckgo,
392
+ api_key: ENV['SEARCH_API_KEY']
393
+ }
394
+ },
395
+ {
396
+ class: 'code_interpreter',
397
+ options: {
398
+ timeout: 30,
399
+ allowed_languages: ['ruby', 'python']
400
+ }
401
+ }
402
+ ]
403
+
404
+ tool_manager = LLMChain::Tools::ToolManager.from_config(tools_config)
405
+ ```
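+
+ The resulting `tool_manager` plugs into a chain the same way as the default toolset. A short sketch, reusing `tools_config` from above:
+
+ ```ruby
+ chain = LLMChain::Chain.new(
+   model: "qwen3:1.7b",
+   tools: tool_manager
+ )
+ puts chain.ask("Calculate 2 ** 10")
+ ```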
406
+
407
+ ### Client Settings
408
+
409
+ ```ruby
410
+ # Qwen with custom parameters
411
+ qwen = LLMChain::Clients::Qwen.new(
412
+ model: "qwen2:7b",
413
+ temperature: 0.7,
414
+ top_p: 0.9,
415
+ base_url: "http://localhost:11434"
416
+ )
417
+
418
+ # OpenAI with settings
419
+ openai = LLMChain::Clients::OpenAI.new(
420
+ model: "gpt-4",
421
+ api_key: ENV['OPENAI_API_KEY'],
422
+ temperature: 0.8,
423
+ max_tokens: 2000
424
+ )
425
+ ```
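+
+ Clients can also be used directly, without a chain. A minimal sketch, assuming the `chat` interface shown in earlier releases:
+
+ ```ruby
+ # Direct, single-shot calls (no memory, tools, or RAG)
+ puts qwen.chat("Write Ruby code for the Fibonacci sequence")
+ puts openai.chat("Summarize what Ruby modules are for")
+ ```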
426
+
427
+ ## 🔧 Error Handling
177
428
 
178
429
  ```ruby
179
430
  begin
180
- chain.ask("Explain DNS")
431
+ chain = LLMChain::Chain.new(model: "qwen3:1.7b")
432
+ response = chain.ask("Complex query")
433
+ rescue LLMChain::UnknownModelError => e
434
+ puts "Unknown model: #{e.message}"
435
+ rescue LLMChain::ClientError => e
436
+ puts "Client error: #{e.message}"
437
+ rescue LLMChain::TimeoutError => e
438
+ puts "Timeout exceeded: #{e.message}"
181
439
  rescue LLMChain::Error => e
182
- puts "Error: #{e.message}"
183
- # Auto-fallback logic can be implemented here
440
+ puts "General LLMChain error: #{e.message}"
441
+ end
442
+ ```
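+
+ These error classes make simple fallback logic easy to layer on top. One possible sketch (model names are illustrative):
+
+ ```ruby
+ # Try each model in turn until one succeeds
+ def ask_with_fallback(prompt, models: ["qwen3:1.7b", "llama2:7b"])
+   models.each do |model|
+     return LLMChain::Chain.new(model: model).ask(prompt)
+   rescue LLMChain::Error => e
+     warn "#{model} failed: #{e.message}"
+   end
+   raise "All models failed"
+ end
+
+ puts ask_with_fallback("Explain DNS")
+ ```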
443
+
444
+ ## 📚 Usage Examples
445
+
446
+ ### Chatbot with Tools
447
+
448
+ ```ruby
449
+ require 'llm_chain'
450
+
451
+ class ChatBot
452
+ def initialize
453
+ @tool_manager = LLMChain::Tools::ToolManager.create_default_toolset
454
+ @memory = LLMChain::Memory::Array.new(max_size: 20)
455
+ @chain = LLMChain::Chain.new(
456
+ model: "qwen3:1.7b",
457
+ memory: @memory,
458
+ tools: @tool_manager
459
+ )
460
+ end
461
+
462
+ def chat_loop
463
+ puts "๐Ÿค– Hello! I'm an AI assistant with tools. Ask me anything!"
464
+
465
+ loop do
466
+ print "\n๐Ÿ‘ค You: "
467
+ input = gets.chomp
468
+ break if ['exit', 'quit', 'bye'].include?(input.downcase)
469
+
470
+ @chain.ask(input, stream: true) do |chunk|
471
+ print chunk
472
+ end
473
+ puts "\n"
474
+ end
475
+ end
184
476
  end
477
+
478
+ # Run
479
+ bot = ChatBot.new
480
+ bot.chat_loop
481
+ ```
482
+
483
+ ### Data Analysis with Code
484
+
485
+ ```ruby
486
+ data_chain = LLMChain::Chain.new(
487
+ model: "qwen3:7b",
488
+ tools: LLMChain::Tools::ToolManager.create_default_toolset
489
+ )
490
+
491
+ # Analyze CSV data
492
+ response = data_chain.ask(<<~PROMPT)
493
+ Analyze this code and execute it:
494
+
495
+ ```ruby
496
+ data = [
497
+ { name: "Alice", age: 25, salary: 50000 },
498
+ { name: "Bob", age: 30, salary: 60000 },
499
+ { name: "Charlie", age: 35, salary: 70000 }
500
+ ]
501
+
502
+ average_age = data.sum { |person| person[:age] } / data.size.to_f
503
+ total_salary = data.sum { |person| person[:salary] }
504
+
505
+ puts "Average age: #{average_age}"
506
+ puts "Total salary: #{total_salary}"
507
+ puts "Average salary: #{total_salary / data.size}"
508
+ ```
509
+ PROMPT
510
+
511
+ puts response
512
+ ```
513
+
514
+ ## 🧪 Testing
515
+
516
+ ```bash
517
+ # Run tests
518
+ bundle exec rspec
519
+
520
+ # Run demo
521
+ ruby -I lib examples/tools_example.rb
522
+
523
+ # Interactive console
524
+ bundle exec bin/console
185
525
  ```
186
526
 
187
- ## Contributing
188
- Bug reports and pull requests are welcome on GitHub at:
189
- https://github.com/FuryCow/llm_chain
527
+ ## 📖 API Documentation
528
+
529
+ ### Main Classes
530
+
531
+ - `LLMChain::Chain` - Main class for creating chains
532
+ - `LLMChain::Tools::ToolManager` - Tool management
533
+ - `LLMChain::Memory::Array/Redis` - Memory systems
534
+ - `LLMChain::Clients::*` - Clients for various LLMs
535
+
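+ A sketch of how these classes fit together (any model, memory, and tool combination works):
+
+ ```ruby
+ memory = LLMChain::Memory::Array.new(max_size: 10)
+ tools  = LLMChain::Tools::ToolManager.create_default_toolset
+
+ chain = LLMChain::Chain.new(
+   model: "qwen3:1.7b",
+   memory: memory,
+   tools: tools
+ )
+ puts chain.ask("Calculate 2 + 2")
+ ```
+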
536
+ ### Chain Methods
537
+
538
+ ```ruby
539
+ chain = LLMChain::Chain.new(options)
540
+
541
+ # Main method
542
+ chain.ask(prompt, stream: false, rag_context: false, rag_options: {})
543
+
544
+ # Initialization parameters
545
+ # - model: model name
546
+ # - memory: memory object
547
+ # - tools: array of tools or ToolManager
548
+ # - retriever: RAG retriever
549
+ # - client_options: additional client parameters
550
+ ```
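+
+ For example, a streaming RAG query combines several of these options (retriever setup as in the RAG section):
+
+ ```ruby
+ chain.ask(
+   "Explain Ruby modules",
+   stream: true,
+   rag_context: true,
+   rag_options: { limit: 3 }
+ ) { |chunk| print chunk }
+ ```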
551
+
552
+ ## 🛣️ Roadmap
553
+
554
+ ### v0.6.0
555
+ - [ ] ReAct agents and multi-step reasoning
556
+ - [ ] More tools (file system, database queries)
557
+ - [ ] Claude integration
558
+ - [ ] Enhanced error handling
559
+
560
+ ### v0.7.0
561
+ - [ ] Multi-agent systems
562
+ - [ ] Task planning and workflows
563
+ - [ ] Web interface for testing
564
+ - [ ] Metrics and monitoring
565
+
566
+ ### v1.0.0
567
+ - [ ] Stable API with semantic versioning
568
+ - [ ] Complete documentation coverage
569
+ - [ ] Production-grade performance
570
+
571
+ ## 🤝 Contributing
572
+
573
+ 1. Fork the repository
574
+ 2. Create a feature branch (`git checkout -b feature/amazing-feature`)
575
+ 3. Commit your changes (`git commit -m 'Add amazing feature'`)
576
+ 4. Push to the branch (`git push origin feature/amazing-feature`)
577
+ 5. Open a Pull Request
578
+
579
+ ### Development
580
+
581
+ ```bash
582
+ git clone https://github.com/FuryCow/llm_chain.git
583
+ cd llm_chain
584
+ bundle install
585
+ bundle exec rspec
586
+ ```
587
+
588
+ ## 📄 License
589
+
590
+ This project is distributed under the [MIT License](LICENSE.txt).
591
+
592
+ ## 🙏 Acknowledgments
593
+
594
+ - The [Ollama](https://ollama.ai/) team for an excellent local LLM platform
595
+ - The [LangChain](https://langchain.com/) developers for inspiration
596
+ - The Ruby community for its support
597
+
598
+ ---
599
+
600
+ **Made with ❤️ for the Ruby community**
190
601
 
191
- ## License
192
- The gem is available as open source under the terms of the [MIT License](https://opensource.org/licenses/MIT).
602
+ [Documentation](https://github.com/FuryCow/llm_chain/wiki) |
603
+ [Examples](https://github.com/FuryCow/llm_chain/tree/main/examples) |
604
+ [Changelog](CHANGELOG.md) |
605
+ [Issues](https://github.com/FuryCow/llm_chain/issues) |
606
+ [Discussions](https://github.com/FuryCow/llm_chain/discussions)