llm_chain 0.3.0 → 0.5.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: efda7d777bfd7333f75b65ccc74d46abaff6c70e2a30a31709988811ed54cfff
- data.tar.gz: 4ea72695094eff0ee6d1a1acc029f66ea34be3a0052d39d33d70a9c6f5a1e1d7
+ metadata.gz: 9dd4f92a849092fdd1770088d8724a557ae12ff16f7108f0cef3f6199445c521
+ data.tar.gz: ec2f30766da2cd6ab9b2a92a9df35b914e995d5243359d8398b0ac96a41ebd8c
  SHA512:
- metadata.gz: 94423d0dc1cc4d6305aeb94dfc06fe50882726df1d1f72d03596317db4c37cb517fc9ad272236717439878fac7ca46d23354911e64bd1801b9e250d61d237a4b
- data.tar.gz: a1ca64366d7343ae2ad8622372d19c97291a0046cf7056af0e9f79902c31e113d8dac8008e687ee53c9bd32f5f24243ded5b74d7dd256f6c23ba9aedb319862d
+ metadata.gz: faa70d193aaf6d0af6cfede19867d5bb3263098d0214f71d15ced7f336fecf91483b8c046c652e5e38289cfa0e9d0ac4f0dedebbcc815981904afb07a091cfd0
+ data.tar.gz: d8c5f550897bfe5ed92d9aa2098dd46e1726fd2783c56d886c90db7588375f15055a101cefb3ad16fe9a3522c1917ad99cb1dc6cd4091e8d117d3ee997795119
data/README.md CHANGED
@@ -1,190 +1,555 @@
- # LLMChain
-
- A Ruby gem for interacting with Large Language Models (LLMs) through a unified interface, with native Ollama and local model support.
+ # 🦾 LLMChain
 
  [![Gem Version](https://badge.fury.io/rb/llm_chain.svg)](https://badge.fury.io/rb/llm_chain)
- [![Tests](https://github.com/your_username/llm_chain/actions/workflows/tests.yml/badge.svg)](https://github.com/your_username/llm_chain/actions)
+ [![Tests](https://github.com/FuryCow/llm_chain/actions/workflows/tests.yml/badge.svg)](https://github.com/FuryCow/llm_chain/actions)
  [![MIT License](https://img.shields.io/badge/license-MIT-blue.svg)](LICENSE.txt)
 
- ## Features
+ **A powerful Ruby library for working with Large Language Models (LLMs), with an intelligent tool system**
+
+ LLMChain is a Ruby analog of LangChain, providing a unified interface for interacting with various LLMs, a built-in tool system, and RAG (Retrieval-Augmented Generation) support.
+
+ ## ✨ Key Features
+
+ - 🤖 **Unified API** for multiple LLMs (OpenAI, Ollama, Qwen, LLaMA2, Gemma)
+ - 🛠️ **Intelligent tool system** with automatic tool selection
+ - 🧮 **Built-in tools**: calculator, web search, code interpreter
+ - 🔍 **RAG-ready** with vector database integration
+ - 💾 **Flexible memory system** (Array, Redis)
+ - 🌊 **Streaming output** for real-time responses
+ - 🏠 **Local models** via Ollama
+ - 🔧 **Extensible architecture** for custom tools
+
+ ## 🚀 Quick Start
 
- - Unified interface for multiple LLMs (Qwen, Llama2, Mistral, etc.)
- - Native [Ollama](https://ollama.ai/) integration for local models
- - Prompt templating system
- - Streaming response support
- - RAG-ready with vector database integration
- - Automatic model verification
+ ### Installation
 
- ## Installation
+ ```bash
+ gem install llm_chain
+ ```
 
- Add to your Gemfile:
+ Or add to your Gemfile:
 
  ```ruby
  gem 'llm_chain'
  ```
- Or install directly:
 
- ```
- gem install llm_chain
+ ### Prerequisites
+
+ 1. **Install Ollama** for local models:
+    ```bash
+    # macOS/Linux
+    curl -fsSL https://ollama.ai/install.sh | sh
+
+    # Download models
+    ollama pull qwen3:1.7b
+    ollama pull llama2:7b
+    ```
+
+ 2. **Optional**: API keys for external services
+    ```bash
+    export OPENAI_API_KEY="your-key"
+    export SEARCH_API_KEY="your-key"
+    ```
+
+ ### Simple Example
+
+ ```ruby
+ require 'llm_chain'
+
+ # Basic usage
+ chain = LLMChain::Chain.new(model: "qwen3:1.7b")
+ response = chain.ask("Hello! How are you?")
+ puts response
  ```
 
- ## Prerequisites
- Install [Ollama](https://ollama.ai/)
+ ## 🛠️ Tool System
 
- Pull desired models:
+ ### Automatic Tool Usage
 
- ```bash
- ollama pull qwen:7b
- ollama pull llama2:13b
+ ```ruby
+ # Create chain with tools
+ tool_manager = LLMChain::Tools::ToolManager.create_default_toolset
+ chain = LLMChain::Chain.new(
+   model: "qwen3:1.7b",
+   tools: tool_manager
+ )
+
+ # Tools are selected automatically
+ chain.ask("Calculate 15 * 7 + 32")
+ # 🧮 Automatically uses calculator
+
+ chain.ask("Find information about Ruby 3.2")
+ # 🔍 Automatically uses web search
+
+ chain.ask("Execute code: puts (1..10).sum")
+ # 💻 Automatically uses code interpreter
  ```
 
- ## Usage
+ ### Built-in Tools
 
- basic example:
+ #### 🧮 Calculator
+ ```ruby
+ calculator = LLMChain::Tools::Calculator.new
+ result = calculator.call("Find square root of 144")
+ puts result[:formatted]
+ # Output: sqrt(144) = 12.0
+ ```
 
+ #### 🌐 Web Search
  ```ruby
- require 'llm_chain'
+ search = LLMChain::Tools::WebSearch.new
+ results = search.call("Latest Ruby news")
+ puts results[:formatted]
+ ```
 
- memory = LLMChain::Memory::Array.new(max_size: 1)
- chain = LLMChain::Chain.new(model: "qwen3:1.7b", memory: memory)
- puts chain.ask("What is 2+2?")
+ #### 💻 Code Interpreter
+ ```ruby
+ interpreter = LLMChain::Tools::CodeInterpreter.new
+ result = interpreter.call(<<~CODE)
+     ```ruby
+     def factorial(n)
+       n <= 1 ? 1 : n * factorial(n - 1)
+     end
+     puts factorial(5)
+     ```
+ CODE
+ puts result[:formatted]
  ```
 
- Using redis as redistributed memory store:
+ ### Creating Custom Tools
 
  ```ruby
- # redis_url: 'redis://localhost:6379' is default or either set REDIS_URL env var
- # max_size: 10 is default
- # namespace: 'llm_chain' is default
- memory = LLMChain::Memory::Redis.new(redis_url: 'redis://localhost:6379', max_size: 10, namespace: 'my_app')
+ class WeatherTool < LLMChain::Tools::BaseTool
+   def initialize(api_key:)
+     @api_key = api_key
+     super(
+       name: "weather",
+       description: "Gets weather information",
+       parameters: {
+         location: {
+           type: "string",
+           description: "City name"
+         }
+       }
+     )
+   end
+
+   def match?(prompt)
+     contains_keywords?(prompt, ['weather', 'temperature', 'forecast'])
+   end
+
+   def call(prompt, context: {})
+     location = extract_location(prompt)
+     # Your weather API integration
+     {
+       location: location,
+       temperature: "22°C",
+       condition: "Sunny",
+       formatted: "Weather in #{location}: 22°C, Sunny"
+     }
+   end
+
+   private
+
+   def extract_location(prompt)
+     prompt.scan(/in\s+(\w+)/i).flatten.first || "Unknown"
+   end
+ end
 
- chain = LLMChain::Chain.new(model: "qwen3:1.7b", memory: memory)
- puts chain.ask("What is 2+2?")
+ # Usage
+ weather = WeatherTool.new(api_key: "your-key")
+ tool_manager.register_tool(weather)
  ```
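+
+ Once registered, the custom tool takes part in automatic selection just like the built-in ones. A minimal sketch of the wiring, assuming the `tool_manager` from the examples above:
+
+ ```ruby
+ chain = LLMChain::Chain.new(
+   model: "qwen3:1.7b",
+   tools: tool_manager
+ )
+
+ chain.ask("What's the weather in London?")
+ # WeatherTool#match? fires on the "weather" keyword, and its formatted
+ # result is passed to the model alongside the prompt
+ ```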
 
- Model-specific Clients:
+ ## 🤖 Supported Models
+
+ | Model Family | Backend | Status | Notes |
+ |--------------|---------|--------|-------|
+ | **OpenAI** | Web API | ✅ Supported | GPT-3.5, GPT-4, GPT-4 Turbo |
+ | **Qwen/Qwen2** | Ollama | ✅ Supported | 0.5B - 72B parameters |
+ | **LLaMA2/3** | Ollama | ✅ Supported | 7B, 13B, 70B |
+ | **Gemma** | Ollama | ✅ Supported | 2B, 7B, 9B, 27B |
+ | **Mistral/Mixtral** | Ollama | 🔄 In development | 7B, 8x7B |
+ | **Claude** | Anthropic | 🔄 Planned | Haiku, Sonnet, Opus |
+ | **Command R+** | Cohere | 🔄 Planned | Optimized for RAG |
+
+ ### Model Usage Examples
 
  ```ruby
- # Qwen with custom options (Without RAG support)
- qwen = LLMChain::Clients::Qwen.new(
-   model: "qwen3:1.7b",
+ # OpenAI
+ openai_chain = LLMChain::Chain.new(
+   model: "gpt-4",
+   api_key: ENV['OPENAI_API_KEY']
+ )
+
+ # Qwen via Ollama
+ qwen_chain = LLMChain::Chain.new(model: "qwen3:1.7b")
+
+ # LLaMA via Ollama with settings
+ llama_chain = LLMChain::Chain.new(
+   model: "llama2:7b",
    temperature: 0.8,
    top_p: 0.95
  )
- puts qwen.chat("Write Ruby code for Fibonacci sequence")
  ```
 
- Streaming Responses:
+ ## 💾 Memory System
 
+ ### Array Memory (default)
  ```ruby
- LLMChain::Chain.new(model: "qwen3:1.7b").ask('How are you?', stream: true) do |chunk|
-   print chunk
- end
- ```
+ memory = LLMChain::Memory::Array.new(max_size: 10)
+ chain = LLMChain::Chain.new(
+   model: "qwen3:1.7b",
+   memory: memory
+ )
 
- Chain pattern:
+ chain.ask("My name is Alex")
+ chain.ask("What's my name?") # Remembers previous context
+ ```
 
+ ### Redis Memory (for production)
  ```ruby
+ memory = LLMChain::Memory::Redis.new(
+   redis_url: 'redis://localhost:6379',
+   max_size: 100,
+   namespace: 'my_app'
+ )
+
  chain = LLMChain::Chain.new(
    model: "qwen3:1.7b",
-   memory: LLMChain::Memory::Array.new
+   memory: memory
  )
-
- # Conversation with context
- chain.ask("What's 2^10?")
- chain.ask("Now multiply that by 5")
  ```
 
- ## Supported Models
-
- | Model Family | Backend/Service | Notes |
- |-------------|----------------|-------|
- | OpenAI (GPT-3.5, GPT-4) | Web API | Supports all OpenAI API models (Not tested) |
- | LLaMA2 (7B, 13B, 70B) | Ollama | Local inference via Ollama |
- | Qwen/Qwen3 (0.5B-72B) | Ollama | Supports all Qwen model sizes |
- | Mistral/Mixtral | Ollama | Including Mistral 7B and Mixtral 8x7B (In progress) |
- | Gemma (2B, 7B) | Ollama | Google's lightweight models (In progress) |
- | Claude (Haiku, Sonnet, Opus) | Anthropic API | Web API access (In progress) |
- | Command R+ | Cohere API | Optimized for RAG (In progress) |
+ ## 🔍 RAG (Retrieval-Augmented Generation)
 
- ## Retrieval-Augmented Generation (RAG)
+ ### Setting up RAG with Weaviate
 
  ```ruby
  # Initialize components
- embedder = LLMChain::Embeddings::Clients::Local::OllamaClient.new(model: "nomic-embed-text")
- rag_store = LLMChain::Embeddings::Clients::Local::WeaviateVectorStore.new(embedder: embedder, weaviate_url: 'http://localhost:8080') # Replace with your Weaviate URL if needed
- retriever = LLMChain::Embeddings::Clients::Local::WeaviateRetriever.new(embedder: embedder)
- memory = LLMChain::Memory::Array.new
- tools = []
+ embedder = LLMChain::Embeddings::Clients::Local::OllamaClient.new(
+   model: "nomic-embed-text"
+ )
 
- # Create chain
+ vector_store = LLMChain::Embeddings::Clients::Local::WeaviateVectorStore.new(
+   embedder: embedder,
+   weaviate_url: 'http://localhost:8080'
+ )
+
+ retriever = LLMChain::Embeddings::Clients::Local::WeaviateRetriever.new(
+   embedder: embedder
+ )
+
+ # Create chain with RAG
  chain = LLMChain::Chain.new(
    model: "qwen3:1.7b",
-   memory: memory, # LLMChain::Memory::Array.new is default
-   tools: tools, # There is no tools supported yet
-   retriever: retriever # LLMChain::Embeddings::Clients::Local::WeaviateRetriever.new is default
+   retriever: retriever
  )
+ ```
 
- # simple Chain definition, with default settings
-
- simple_chain = LLMChain::Chain.new(model: "qwen3:1.7b")
+ ### Adding Documents
 
- # Example of adding documents to vector database
+ ```ruby
  documents = [
    {
-     text: "Ruby supports four OOP principles: encapsulation, inheritance, polymorphism and abstraction",
-     metadata: { source: "ruby-docs", page: 42 }
+     text: "Ruby supports OOP principles: encapsulation, inheritance, polymorphism",
+     metadata: { source: "ruby-guide", page: 15 }
    },
    {
      text: "Modules in Ruby are used for namespaces and mixins",
-     metadata: { source: "ruby-guides", author: "John Doe" }
-   },
-   {
-     text: "2 + 2 is equals to 4",
-     matadata: { source: 'mad_brain', author: 'John Doe' }
+     metadata: { source: "ruby-book", author: "Matz" }
    }
  ]
 
- # Ingest documents into Weaviate
+ # Add to vector database
  documents.each do |doc|
-   rag_store.add_document(
+   vector_store.add_document(
      text: doc[:text],
      metadata: doc[:metadata]
    )
  end
+ ```
 
- # Simple query without RAG
- response = chain.ask("What is 2+2?", rag_context: false) # rag_context: false is default
- puts response
+ ### RAG Queries
+
+ ```ruby
+ # Regular query
+ response = chain.ask("What is Ruby?")
 
- # Query with RAG context
+ # Query with RAG
  response = chain.ask(
    "What OOP principles does Ruby support?",
    rag_context: true,
    rag_options: { limit: 3 }
  )
- puts response
+ ```
+
+ ## 🌊 Streaming Output
+
+ ```ruby
+ chain = LLMChain::Chain.new(model: "qwen3:1.7b")
+
+ # Streaming with block
+ chain.ask("Tell me about Ruby history", stream: true) do |chunk|
+   print chunk
+   $stdout.flush
+ end
+
+ # Streaming with tools
+ tool_manager = LLMChain::Tools::ToolManager.create_default_toolset
+ chain = LLMChain::Chain.new(
+   model: "qwen3:1.7b",
+   tools: tool_manager
+ )
 
- # Streamed response with RAG
- chain.ask("Explain Ruby modules", stream: true, rag_context: true) do |chunk|
+ chain.ask("Calculate 15! and explain the process", stream: true) do |chunk|
    print chunk
  end
  ```
 
- ## Error handling
+ ## ⚙️ Configuration
+
+ ### Environment Variables
+
+ ```bash
+ # OpenAI
+ export OPENAI_API_KEY="sk-..."
+ export OPENAI_ORGANIZATION_ID="org-..."
+
+ # Search
+ export SEARCH_API_KEY="your-search-api-key"
+ export GOOGLE_SEARCH_ENGINE_ID="your-cse-id"
+
+ # Redis
+ export REDIS_URL="redis://localhost:6379"
+
+ # Weaviate
+ export WEAVIATE_URL="http://localhost:8080"
+ ```
+
+ ### Tool Configuration
+
+ ```ruby
+ # Build a toolset from configuration
+ tools_config = [
+   {
+     class: 'calculator'
+   },
+   {
+     class: 'web_search',
+     options: {
+       search_engine: :duckduckgo,
+       api_key: ENV['SEARCH_API_KEY']
+     }
+   },
+   {
+     class: 'code_interpreter',
+     options: {
+       timeout: 30,
+       allowed_languages: ['ruby', 'python']
+     }
+   }
+ ]
+
+ tool_manager = LLMChain::Tools::ToolManager.from_config(tools_config)
+ ```
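+
+ The resulting manager plugs into a chain exactly like the default toolset. A short sketch, assuming the `tools_config` above:
+
+ ```ruby
+ chain = LLMChain::Chain.new(
+   model: "qwen3:1.7b",
+   tools: tool_manager
+ )
+
+ chain.ask("Calculate 2 ** 10") # dispatched to the configured calculator
+ ```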
+
+ ### Client Settings
+
+ ```ruby
+ # Qwen with custom parameters
+ qwen = LLMChain::Clients::Qwen.new(
+   model: "qwen2:7b",
+   temperature: 0.7,
+   top_p: 0.9,
+   base_url: "http://localhost:11434"
+ )
+
+ # OpenAI with settings
+ openai = LLMChain::Clients::OpenAI.new(
+   model: "gpt-4",
+   api_key: ENV['OPENAI_API_KEY'],
+   temperature: 0.8,
+   max_tokens: 2000
+ )
+ ```
+
+ ## 🔧 Error Handling
 
  ```ruby
  begin
-   chain.ask("Explain DNS")
+   chain = LLMChain::Chain.new(model: "qwen3:1.7b")
+   response = chain.ask("Complex query")
+ rescue LLMChain::UnknownModelError => e
+   puts "Unknown model: #{e.message}"
+ rescue LLMChain::ClientError => e
+   puts "Client error: #{e.message}"
+ rescue LLMChain::TimeoutError => e
+   puts "Timeout exceeded: #{e.message}"
  rescue LLMChain::Error => e
-   puts "Error: #{e.message}"
-   # Auto-fallback logic can be implemented here
+   puts "General LLMChain error: #{e.message}"
+ end
+ ```
+
+ ## 📚 Usage Examples
+
+ ### Chatbot with Tools
+
+ ```ruby
+ require 'llm_chain'
+
+ class ChatBot
+   def initialize
+     @tool_manager = LLMChain::Tools::ToolManager.create_default_toolset
+     @memory = LLMChain::Memory::Array.new(max_size: 20)
+     @chain = LLMChain::Chain.new(
+       model: "qwen3:1.7b",
+       memory: @memory,
+       tools: @tool_manager
+     )
+   end
+
+   def chat_loop
+     puts "🤖 Hello! I'm an AI assistant with tools. Ask me anything!"
+
+     loop do
+       print "\n👤 You: "
+       input = gets.chomp
+       break if ['exit', 'quit', 'bye'].include?(input.downcase)
+
+       @chain.ask(input, stream: true) do |chunk|
+         print chunk
+       end
+       puts "\n"
+     end
+   end
  end
+
+ # Run
+ bot = ChatBot.new
+ bot.chat_loop
+ ```
+
+ ### Data Analysis with Code
+
+ ```ruby
+ data_chain = LLMChain::Chain.new(
+   model: "qwen3:7b",
+   tools: LLMChain::Tools::ToolManager.create_default_toolset
+ )
+
+ # Analyze sample data (non-interpolating heredoc so #{...} reaches the model verbatim)
+ response = data_chain.ask(<<~'PROMPT')
+   Analyze this code and execute it:
+
+     ```ruby
+     data = [
+       { name: "Alice", age: 25, salary: 50000 },
+       { name: "Bob", age: 30, salary: 60000 },
+       { name: "Charlie", age: 35, salary: 70000 }
+     ]
+
+     average_age = data.sum { |person| person[:age] } / data.size.to_f
+     total_salary = data.sum { |person| person[:salary] }
+
+     puts "Average age: #{average_age}"
+     puts "Total salary: #{total_salary}"
+     puts "Average salary: #{total_salary / data.size}"
+     ```
+ PROMPT
+
+ puts response
  ```
 
- ## Contributing
- Bug reports and pull requests are welcome on GitHub at:
- https://github.com/FuryCow/llm_chain
+ ## 🧪 Testing
+
+ ```bash
+ # Run tests
+ bundle exec rspec
+
+ # Run demo
+ ruby -I lib examples/tools_example.rb
+
+ # Interactive console
+ bundle exec bin/console
+ ```
+
+ ## 📖 API Documentation
+
+ ### Main Classes
+
+ - `LLMChain::Chain` - Main class for creating chains
+ - `LLMChain::Tools::ToolManager` - Tool management
+ - `LLMChain::Memory::Array/Redis` - Memory systems
+ - `LLMChain::Clients::*` - Clients for various LLMs
+
+ ### Chain Methods
+
+ ```ruby
+ chain = LLMChain::Chain.new(options)
+
+ # Main method
+ chain.ask(prompt, stream: false, rag_context: false, rag_options: {})
+
+ # Initialization parameters
+ # - model: model name
+ # - memory: memory object
+ # - tools: array of tools or ToolManager
+ # - retriever: RAG retriever
+ # - client_options: additional client parameters
+ ```
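+
+ Putting the parameters together, a chain wired with the memory, tools, and RAG retriever from the sections above might look like this sketch:
+
+ ```ruby
+ chain = LLMChain::Chain.new(
+   model: "qwen3:1.7b",
+   memory: LLMChain::Memory::Array.new(max_size: 10),
+   tools: LLMChain::Tools::ToolManager.create_default_toolset,
+   retriever: LLMChain::Embeddings::Clients::Local::WeaviateRetriever.new(
+     embedder: LLMChain::Embeddings::Clients::Local::OllamaClient.new(model: "nomic-embed-text")
+   )
+ )
+
+ chain.ask("What OOP principles does Ruby support?",
+           stream: true, rag_context: true, rag_options: { limit: 3 }) do |chunk|
+   print chunk
+ end
+ ```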
+
+ ## 🛣️ Roadmap
+
+ ### v0.6.0
+ - [ ] ReAct agents
+ - [ ] More tools (files, database)
+ - [ ] Claude integration
+ - [ ] Enhanced logging
+
+ ### v0.7.0
+ - [ ] Multi-agent systems
+ - [ ] Task planning
+ - [ ] Web interface
+ - [ ] Metrics and monitoring
+
+ ### v1.0.0
+ - [ ] Stable API
+ - [ ] Complete documentation
+ - [ ] Production readiness
+
+ ## 🤝 Contributing
+
+ 1. Fork the repository
+ 2. Create a feature branch (`git checkout -b feature/amazing-feature`)
+ 3. Commit your changes (`git commit -m 'Add amazing feature'`)
+ 4. Push the branch (`git push origin feature/amazing-feature`)
+ 5. Open a Pull Request
+
+ ### Development
+
+ ```bash
+ git clone https://github.com/FuryCow/llm_chain.git
+ cd llm_chain
+ bundle install
+ bundle exec rspec
+ ```
+
+ ## 📄 License
+
+ This project is distributed under the [MIT License](LICENSE.txt).
+
+ ## 🙏 Acknowledgments
+
+ - The [Ollama](https://ollama.ai/) team for an excellent local LLM platform
+ - The [LangChain](https://langchain.com/) developers for the inspiration
+ - The Ruby community for its support
+
+ ---
+
+ **Made with ❤️ for the Ruby community**
 
- ## License
- The gem is available as open source under the terms of the [MIT License](https://opensource.org/licenses/MIT).
+ [Documentation](https://github.com/FuryCow/llm_chain/wiki) |
+ [Examples](https://github.com/FuryCow/llm_chain/tree/main/examples) |
+ [Issues](https://github.com/FuryCow/llm_chain/issues) |
+ [Discussions](https://github.com/FuryCow/llm_chain/discussions)