llm_chain 0.4.0 → 0.5.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 8edbb57db3e5fe7b44c7e4ebec1e026dea81bc1e30fc140e9ac35549b5c862e8
- data.tar.gz: 2c1dd32185e96ea57bdae128934726b556bfbaaa8a913cb0062d53021cf7df2f
+ metadata.gz: 9dd4f92a849092fdd1770088d8724a557ae12ff16f7108f0cef3f6199445c521
+ data.tar.gz: ec2f30766da2cd6ab9b2a92a9df35b914e995d5243359d8398b0ac96a41ebd8c
  SHA512:
- metadata.gz: aa19f25581b568ca3cd2f5d9569afd0f8beab706f56ce63b043a6bd81965a9386efb4b27aff0563bc9530576aaa836a5db092864f852962c47ea5f19fb859880
- data.tar.gz: 4d38b4daf714d28b298a41c555add9739faee9cdf7979de52b3ff944f2ed275694aa597f1faafed21fb844002b1915591ef043bcd6686483d931a59128ca62e7
+ metadata.gz: faa70d193aaf6d0af6cfede19867d5bb3263098d0214f71d15ced7f336fecf91483b8c046c652e5e38289cfa0e9d0ac4f0dedebbcc815981904afb07a091cfd0
+ data.tar.gz: d8c5f550897bfe5ed92d9aa2098dd46e1726fd2783c56d886c90db7588375f15055a101cefb3ad16fe9a3522c1917ad99cb1dc6cd4091e8d117d3ee997795119
data/README.md CHANGED
@@ -1,192 +1,555 @@
- # LLMChain
-
- A Ruby gem for interacting with Large Language Models (LLMs) through a unified interface, with native Ollama and local model support.
+ # 🦾 LLMChain
 
  [![Gem Version](https://badge.fury.io/rb/llm_chain.svg)](https://badge.fury.io/rb/llm_chain)
- [![Tests](https://github.com/your_username/llm_chain/actions/workflows/tests.yml/badge.svg)](https://github.com/your_username/llm_chain/actions)
+ [![Tests](https://github.com/FuryCow/llm_chain/actions/workflows/tests.yml/badge.svg)](https://github.com/FuryCow/llm_chain/actions)
  [![MIT License](https://img.shields.io/badge/license-MIT-blue.svg)](LICENSE.txt)
 
- ## Features
+ **A powerful Ruby library for working with Large Language Models (LLMs), with an intelligent tool system**
+
+ LLMChain is a Ruby analog of LangChain, providing a unified interface to various LLMs, a built-in tool system, and RAG (Retrieval-Augmented Generation) support.
+
+ ## ✨ Key Features
+
+ - 🤖 **Unified API** for multiple LLMs (OpenAI, Ollama, Qwen, LLaMA2, Gemma)
+ - 🛠️ **Intelligent tool system** with automatic selection
+ - 🧮 **Built-in tools**: calculator, web search, code interpreter
+ - 🔍 **RAG-ready** with vector database integration
+ - 💾 **Flexible memory system** (Array, Redis)
+ - 🌊 **Streaming output** for real-time responses
+ - 🏠 **Local models** via Ollama
+ - 🔧 **Extensible architecture** for custom tools
+
+ ## 🚀 Quick Start
 
- - Unified interface for multiple LLMs (Qwen, Llama2, Mistral, etc.)
- - Native [Ollama](https://ollama.ai/) integration for local models
- - Prompt templating system
- - Streaming response support
- - RAG-ready with vector database integration
- - Automatic model verification
+ ### Installation
 
- ## Installation
+ ```bash
+ gem install llm_chain
+ ```
 
- Add to your Gemfile:
+ Or add it to your Gemfile:
 
  ```ruby
  gem 'llm_chain'
  ```
- Or install directly:
 
- ```
- gem install llm_chain
+ ### Prerequisites
+
+ 1. **Install Ollama** for local models:
+ ```bash
+ # macOS/Linux
+ curl -fsSL https://ollama.ai/install.sh | sh
+
+ # Download models
+ ollama pull qwen3:1.7b
+ ollama pull llama2:7b
+ ```
+
+ 2. **Optional**: API keys for external services
+ ```bash
+ export OPENAI_API_KEY="your-key"
+ export SEARCH_API_KEY="your-key"
+ ```
+
+ ### Simple Example
+
+ ```ruby
+ require 'llm_chain'
+
+ # Basic usage
+ chain = LLMChain::Chain.new(model: "qwen3:1.7b")
+ response = chain.ask("Hello! How are you?")
+ puts response
  ```
 
- ## Prerequisites
- Install [Ollama](https://ollama.ai/)
+ ## 🛠️ Tool System
 
- Pull desired models:
+ ### Automatic Tool Usage
 
- ```bash
- ollama pull qwen:7b
- ollama pull llama2:13b
+ ```ruby
+ # Create a chain with tools
+ tool_manager = LLMChain::Tools::ToolManager.create_default_toolset
+ chain = LLMChain::Chain.new(
+   model: "qwen3:1.7b",
+   tools: tool_manager
+ )
+
+ # Tools are selected automatically
+ chain.ask("Calculate 15 * 7 + 32")
+ # 🧮 Automatically uses the calculator
+
+ chain.ask("Find information about Ruby 3.2")
+ # 🔍 Automatically uses web search
+
+ chain.ask("Execute code: puts (1..10).sum")
+ # 💻 Automatically uses the code interpreter
  ```
 
- ## Usage
+ ### Built-in Tools
 
- basic example:
+ #### 🧮 Calculator
+ ```ruby
+ calculator = LLMChain::Tools::Calculator.new
+ result = calculator.call("Find square root of 144")
+ puts result[:formatted]
+ # Output: sqrt(144) = 12.0
+ ```
 
+ #### 🌐 Web Search
  ```ruby
- require 'llm_chain'
+ search = LLMChain::Tools::WebSearch.new
+ results = search.call("Latest Ruby news")
+ puts results[:formatted]
+ ```
 
- memory = LLMChain::Memory::Array.new(max_size: 1)
- chain = LLMChain::Chain.new(model: "qwen3:1.7b", memory: memory, retriever: false)
- # retriever: false is required when you don't use a vector database to store context or external data
- # reitriever: - is set to WeaviateRetriever.new as default so you need to pass an external params to set Weaviate host
- puts chain.ask("What is 2+2?")
+ #### 💻 Code Interpreter
+ ```ruby
+ interpreter = LLMChain::Tools::CodeInterpreter.new
+ result = interpreter.call(<<~CODE)
+   ```ruby
+   def factorial(n)
+     n <= 1 ? 1 : n * factorial(n - 1)
+   end
+   puts factorial(5)
+   ```
+ CODE
+ puts result[:formatted]
  ```
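For reference, the snippet handed to `CodeInterpreter.call` above is plain Ruby; run on its own, with no gem required, it behaves like this:

```ruby
# Standalone version of the snippet the interpreter receives above.
def factorial(n)
  n <= 1 ? 1 : n * factorial(n - 1)
end

puts factorial(5)
# => 120
```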
 
- Using redis as redistributed memory store:
+ ### Creating Custom Tools
 
  ```ruby
- # redis_url: 'redis://localhost:6379' is default or either set REDIS_URL env var
- # max_size: 10 is default
- # namespace: 'llm_chain' is default
- memory = LLMChain::Memory::Redis.new(redis_url: 'redis://localhost:6379', max_size: 10, namespace: 'my_app')
+ class WeatherTool < LLMChain::Tools::BaseTool
+   def initialize(api_key:)
+     @api_key = api_key
+     super(
+       name: "weather",
+       description: "Gets weather information",
+       parameters: {
+         location: {
+           type: "string",
+           description: "City name"
+         }
+       }
+     )
+   end
+
+   def match?(prompt)
+     contains_keywords?(prompt, ['weather', 'temperature', 'forecast'])
+   end
+
+   def call(prompt, context: {})
+     location = extract_location(prompt)
+     # Your weather API integration
+     {
+       location: location,
+       temperature: "22°C",
+       condition: "Sunny",
+       formatted: "Weather in #{location}: 22°C, Sunny"
+     }
+   end
+
+   private
+
+   def extract_location(prompt)
+     prompt.scan(/in\s+(\w+)/i).flatten.first || "Unknown"
+   end
+ end
 
- chain = LLMChain::Chain.new(model: "qwen3:1.7b", memory: memory)
- puts chain.ask("What is 2+2?")
+ # Usage
+ weather = WeatherTool.new(api_key: "your-key")
+ tool_manager.register_tool(weather)
  ```
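The `extract_location` helper above has no gem dependencies; a standalone sketch of the same regex shows how locations are (and aren't) picked out of a prompt:

```ruby
# Same regex strategy as extract_location above, as plain Ruby.
def extract_location(prompt)
  # scan returns an array of capture groups; take the first word after "in".
  prompt.scan(/in\s+(\w+)/i).flatten.first || "Unknown"
end

puts extract_location("What's the weather in London today?")  # => London
puts extract_location("Show me a forecast")                   # => Unknown
```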
 
- Model-specific Clients:
+ ## 🤖 Supported Models
+
+ | Model Family | Backend | Status | Notes |
+ |--------------|---------|--------|-------|
+ | **OpenAI** | Web API | ✅ Supported | GPT-3.5, GPT-4, GPT-4 Turbo |
+ | **Qwen/Qwen2** | Ollama | ✅ Supported | 0.5B - 72B parameters |
+ | **LLaMA2/3** | Ollama | ✅ Supported | 7B, 13B, 70B |
+ | **Gemma** | Ollama | ✅ Supported | 2B, 7B, 9B, 27B |
+ | **Mistral/Mixtral** | Ollama | 🔄 In development | 7B, 8x7B |
+ | **Claude** | Anthropic | 🔄 Planned | Haiku, Sonnet, Opus |
+ | **Command R+** | Cohere | 🔄 Planned | Optimized for RAG |
+
+ ### Model Usage Examples
 
  ```ruby
- # Qwen with custom options (Without RAG support)
- qwen = LLMChain::Clients::Qwen.new(
- model: "qwen3:1.7b",
+ # OpenAI
+ openai_chain = LLMChain::Chain.new(
+   model: "gpt-4",
+   api_key: ENV['OPENAI_API_KEY']
+ )
+
+ # Qwen via Ollama
+ qwen_chain = LLMChain::Chain.new(model: "qwen3:1.7b")
+
+ # LLaMA via Ollama with settings
+ llama_chain = LLMChain::Chain.new(
+   model: "llama2:7b",
    temperature: 0.8,
    top_p: 0.95
  )
- puts qwen.chat("Write Ruby code for Fibonacci sequence")
  ```
 
- Streaming Responses:
+ ## 💾 Memory System
 
+ ### Array Memory (default)
  ```ruby
- LLMChain::Chain.new(model: "qwen3:1.7b").ask('How are you?', stream: true) do |chunk|
- print chunk
- end
- ```
+ memory = LLMChain::Memory::Array.new(max_size: 10)
+ chain = LLMChain::Chain.new(
+   model: "qwen3:1.7b",
+   memory: memory
+ )
 
- Chain pattern:
+ chain.ask("My name is Alex")
+ chain.ask("What's my name?") # Remembers previous context
+ ```
 
+ ### Redis Memory (for production)
  ```ruby
+ memory = LLMChain::Memory::Redis.new(
+   redis_url: 'redis://localhost:6379',
+   max_size: 100,
+   namespace: 'my_app'
+ )
+
  chain = LLMChain::Chain.new(
    model: "qwen3:1.7b",
- memory: LLMChain::Memory::Array.new
+   memory: memory
  )
-
- # Conversation with context
- chain.ask("What's 2^10?")
- chain.ask("Now multiply that by 5")
  ```
 
- ## Supported Models
-
- | Model Family | Backend/Service | Notes |
- |-------------|----------------|-------|
- | OpenAI (GPT-3.5, GPT-4) | Web API | Supports all OpenAI API models (Not tested) |
- | LLaMA2 (7B, 13B, 70B) | Ollama | Local inference via Ollama |
- | Qwen/Qwen3 (0.5B-72B) | Ollama | Supports all Qwen model sizes |
- | Mistral/Mixtral | Ollama | Including Mistral 7B and Mixtral 8x7B (In progress) |
- | Gemma (2B, 7B) | Ollama | Google's lightweight models (In progress) |
- | Claude (Haiku, Sonnet, Opus) | Anthropic API | Web API access (In progress) |
- | Command R+ | Cohere API | Optimized for RAG (In progress) |
+ ## 🔍 RAG (Retrieval-Augmented Generation)
 
- ## Retrieval-Augmented Generation (RAG)
+ ### Setting up RAG with Weaviate
 
  ```ruby
  # Initialize components
- embedder = LLMChain::Embeddings::Clients::Local::OllamaClient.new(model: "nomic-embed-text")
- rag_store = LLMChain::Embeddings::Clients::Local::WeaviateVectorStore.new(embedder: embedder, weaviate_url: 'http://localhost:8080') # Replace with your Weaviate URL if needed
- retriever = LLMChain::Embeddings::Clients::Local::WeaviateRetriever.new(embedder: embedder)
- memory = LLMChain::Memory::Array.new
- tools = []
+ embedder = LLMChain::Embeddings::Clients::Local::OllamaClient.new(
+   model: "nomic-embed-text"
+ )
 
- # Create chain
+ vector_store = LLMChain::Embeddings::Clients::Local::WeaviateVectorStore.new(
+   embedder: embedder,
+   weaviate_url: 'http://localhost:8080'
+ )
+
+ retriever = LLMChain::Embeddings::Clients::Local::WeaviateRetriever.new(
+   embedder: embedder
+ )
+
+ # Create chain with RAG
  chain = LLMChain::Chain.new(
    model: "qwen3:1.7b",
- memory: memory, # LLMChain::Memory::Array.new is default
- tools: tools, # There is no tools supported yet
- retriever: retriever # LLMChain::Embeddings::Clients::Local::WeaviateRetriever.new is default
+   retriever: retriever
  )
+ ```
 
- # simple Chain definition, with default settings
-
- simple_chain = LLMChain::Chain.new(model: "qwen3:1.7b")
+ ### Adding Documents
 
- # Example of adding documents to vector database
+ ```ruby
  documents = [
    {
- text: "Ruby supports four OOP principles: encapsulation, inheritance, polymorphism and abstraction",
- metadata: { source: "ruby-docs", page: 42 }
+     text: "Ruby supports OOP principles: encapsulation, inheritance, polymorphism",
+     metadata: { source: "ruby-guide", page: 15 }
    },
    {
      text: "Modules in Ruby are used for namespaces and mixins",
- metadata: { source: "ruby-guides", author: "John Doe" }
- },
- {
- text: "2 + 2 is equals to 4",
- matadata: { source: 'mad_brain', author: 'John Doe' }
+     metadata: { source: "ruby-book", author: "Matz" }
    }
  ]
 
- # Ingest documents into Weaviate
+ # Add to vector database
  documents.each do |doc|
- rag_store.add_document(
+   vector_store.add_document(
      text: doc[:text],
      metadata: doc[:metadata]
    )
  end
+ ```
 
- # Simple query without RAG
- response = chain.ask("What is 2+2?", rag_context: false) # rag_context: false is default
- puts response
+ ### RAG Queries
+
+ ```ruby
+ # Regular query
+ response = chain.ask("What is Ruby?")
 
- # Query with RAG
+ # Query with RAG
  response = chain.ask(
    "What OOP principles does Ruby support?",
    rag_context: true,
    rag_options: { limit: 3 }
  )
- puts response
+ ```
+
+ ## 🌊 Streaming Output
+
+ ```ruby
+ chain = LLMChain::Chain.new(model: "qwen3:1.7b")
+
+ # Streaming with block
+ chain.ask("Tell me about Ruby history", stream: true) do |chunk|
+   print chunk
+   $stdout.flush
+ end
+
+ # Streaming with tools
+ tool_manager = LLMChain::Tools::ToolManager.create_default_toolset
+ chain = LLMChain::Chain.new(
+   model: "qwen3:1.7b",
+   tools: tool_manager
+ )
 
- # Streamed response with RAG
- chain.ask("Explain Ruby modules", stream: true, rag_context: true) do |chunk|
+ chain.ask("Calculate 15! and explain the process", stream: true) do |chunk|
    print chunk
  end
  ```
 
- ## Error handling
+ ## ⚙️ Configuration
+
+ ### Environment Variables
+
+ ```bash
+ # OpenAI
+ export OPENAI_API_KEY="sk-..."
+ export OPENAI_ORGANIZATION_ID="org-..."
+
+ # Search
+ export SEARCH_API_KEY="your-search-api-key"
+ export GOOGLE_SEARCH_ENGINE_ID="your-cse-id"
+
+ # Redis
+ export REDIS_URL="redis://localhost:6379"
+
+ # Weaviate
+ export WEAVIATE_URL="http://localhost:8080"
+ ```
+
+ ### Tool Configuration
+
+ ```ruby
+ # From configuration
+ tools_config = [
+   {
+     class: 'calculator'
+   },
+   {
+     class: 'web_search',
+     options: {
+       search_engine: :duckduckgo,
+       api_key: ENV['SEARCH_API_KEY']
+     }
+   },
+   {
+     class: 'code_interpreter',
+     options: {
+       timeout: 30,
+       allowed_languages: ['ruby', 'python']
+     }
+   }
+ ]
+
+ tool_manager = LLMChain::Tools::ToolManager.from_config(tools_config)
+ ```
+
+ ### Client Settings
+
+ ```ruby
+ # Qwen with custom parameters
+ qwen = LLMChain::Clients::Qwen.new(
+   model: "qwen2:7b",
+   temperature: 0.7,
+   top_p: 0.9,
+   base_url: "http://localhost:11434"
+ )
+
+ # OpenAI with settings
+ openai = LLMChain::Clients::OpenAI.new(
+   model: "gpt-4",
+   api_key: ENV['OPENAI_API_KEY'],
+   temperature: 0.8,
+   max_tokens: 2000
+ )
+ ```
+
+ ## 🔧 Error Handling
 
  ```ruby
  begin
- chain.ask("Explain DNS")
+   chain = LLMChain::Chain.new(model: "qwen3:1.7b")
+   response = chain.ask("Complex query")
+ rescue LLMChain::UnknownModelError => e
+   puts "Unknown model: #{e.message}"
+ rescue LLMChain::ClientError => e
+   puts "Client error: #{e.message}"
+ rescue LLMChain::TimeoutError => e
+   puts "Timeout exceeded: #{e.message}"
  rescue LLMChain::Error => e
- puts "Error: #{e.message}"
- # Auto-fallback logic can be implemented here
+   puts "General LLMChain error: #{e.message}"
+ end
+ ```
+
+ ## 📚 Usage Examples
+
+ ### Chatbot with Tools
+
+ ```ruby
+ require 'llm_chain'
+
+ class ChatBot
+   def initialize
+     @tool_manager = LLMChain::Tools::ToolManager.create_default_toolset
+     @memory = LLMChain::Memory::Array.new(max_size: 20)
+     @chain = LLMChain::Chain.new(
+       model: "qwen3:1.7b",
+       memory: @memory,
+       tools: @tool_manager
+     )
+   end
+
+   def chat_loop
+     puts "🤖 Hello! I'm an AI assistant with tools. Ask me anything!"
+
+     loop do
+       print "\n👤 You: "
+       input = gets.chomp
+       break if %w[exit quit bye].include?(input.downcase)
+
+       @chain.ask(input, stream: true) do |chunk|
+         print chunk
+       end
+       puts "\n"
+     end
+   end
  end
+
+ # Run
+ bot = ChatBot.new
+ bot.chat_loop
+ ```
+
+ ### Data Analysis with Code
+
+ ```ruby
+ data_chain = LLMChain::Chain.new(
+   model: "qwen3:7b",
+   tools: LLMChain::Tools::ToolManager.create_default_toolset
+ )
+
+ # Analyze CSV data (single-quoted heredoc so #{...} is not interpolated here)
+ response = data_chain.ask(<<~'PROMPT')
+   Analyze this code and execute it:
+
+   ```ruby
+   data = [
+     { name: "Alice", age: 25, salary: 50000 },
+     { name: "Bob", age: 30, salary: 60000 },
+     { name: "Charlie", age: 35, salary: 70000 }
+   ]
+
+   average_age = data.sum { |person| person[:age] } / data.size.to_f
+   total_salary = data.sum { |person| person[:salary] }
+
+   puts "Average age: #{average_age}"
+   puts "Total salary: #{total_salary}"
+   puts "Average salary: #{total_salary / data.size}"
+   ```
+ PROMPT
+
+ puts response
  ```
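The script embedded in the prompt above is self-contained Ruby; running it directly, as the code interpreter would, gives:

```ruby
# The script from the prompt above, runnable on its own.
data = [
  { name: "Alice", age: 25, salary: 50000 },
  { name: "Bob", age: 30, salary: 60000 },
  { name: "Charlie", age: 35, salary: 70000 }
]

average_age = data.sum { |person| person[:age] } / data.size.to_f
total_salary = data.sum { |person| person[:salary] }

puts "Average age: #{average_age}"                  # => Average age: 30.0
puts "Total salary: #{total_salary}"                # => Total salary: 180000
puts "Average salary: #{total_salary / data.size}"  # => Average salary: 60000
```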
 
- ## Contributing
- Bug reports and pull requests are welcome on GitHub at:
- https://github.com/FuryCow/llm_chain
+ ## 🧪 Testing
+
+ ```bash
+ # Run tests
+ bundle exec rspec
+
+ # Run demo
+ ruby -I lib examples/tools_example.rb
+
+ # Interactive console
+ bundle exec bin/console
+ ```
+
+ ## 📖 API Documentation
+
+ ### Main Classes
+
+ - `LLMChain::Chain` - Main class for creating chains
+ - `LLMChain::Tools::ToolManager` - Tool management
+ - `LLMChain::Memory::Array/Redis` - Memory systems
+ - `LLMChain::Clients::*` - Clients for various LLMs
+
+ ### Chain Methods
+
+ ```ruby
+ chain = LLMChain::Chain.new(options)
+
+ # Main method
+ chain.ask(prompt, stream: false, rag_context: false, rag_options: {})
+
+ # Initialization parameters:
+ # - model: model name
+ # - memory: memory object
+ # - tools: array of tools or a ToolManager
+ # - retriever: RAG retriever
+ # - client_options: additional client parameters
+ ```
+
+ ## 🛣️ Roadmap
+
+ ### v0.6.0
+ - [ ] ReAct agents
+ - [ ] More tools (files, database)
+ - [ ] Claude integration
+ - [ ] Enhanced logging
+
+ ### v0.7.0
+ - [ ] Multi-agent systems
+ - [ ] Task planning
+ - [ ] Web interface
+ - [ ] Metrics and monitoring
+
+ ### v1.0.0
+ - [ ] Stable API
+ - [ ] Complete documentation
+ - [ ] Production readiness
+
+ ## 🤝 Contributing
+
+ 1. Fork the repository
+ 2. Create a feature branch (`git checkout -b feature/amazing-feature`)
+ 3. Commit your changes (`git commit -m 'Add amazing feature'`)
+ 4. Push the branch (`git push origin feature/amazing-feature`)
+ 5. Open a Pull Request
+
+ ### Development
+
+ ```bash
+ git clone https://github.com/FuryCow/llm_chain.git
+ cd llm_chain
+ bundle install
+ bundle exec rspec
+ ```
+
+ ## 📄 License
+
+ This project is distributed under the [MIT License](LICENSE.txt).
+
+ ## 🙏 Acknowledgments
+
+ - The [Ollama](https://ollama.ai/) team for an excellent local LLM platform
+ - The [LangChain](https://langchain.com/) developers for inspiration
+ - The Ruby community for its support
+
+ ---
+
+ **Made with ❤️ for the Ruby community**
 
- ## License
- The gem is available as open source under the terms of the [MIT License](https://opensource.org/licenses/MIT).
+ [Documentation](https://github.com/FuryCow/llm_chain/wiki) |
+ [Examples](https://github.com/FuryCow/llm_chain/tree/main/examples) |
+ [Issues](https://github.com/FuryCow/llm_chain/issues) |
+ [Discussions](https://github.com/FuryCow/llm_chain/discussions)