llm_chain 0.4.0 → 0.5.1
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/CHANGELOG.md +51 -0
- data/README.md +516 -102
- data/examples/quick_demo.rb +93 -0
- data/examples/tools_example.rb +255 -0
- data/lib/llm_chain/chain.rb +24 -5
- data/lib/llm_chain/client_registry.rb +0 -1
- data/lib/llm_chain/clients/qwen.rb +13 -1
- data/lib/llm_chain/tools/base_tool.rb +81 -0
- data/lib/llm_chain/tools/calculator.rb +154 -0
- data/lib/llm_chain/tools/code_interpreter.rb +242 -0
- data/lib/llm_chain/tools/tool_manager.rb +204 -0
- data/lib/llm_chain/tools/web_search.rb +305 -0
- data/lib/llm_chain/version.rb +1 -1
- data/lib/llm_chain.rb +68 -18
- metadata +10 -2
data/README.md
CHANGED
# 🦾 LLMChain

[Gem Version](https://badge.fury.io/rb/llm_chain)
[CI](https://github.com/FuryCow/llm_chain/actions)
[License](LICENSE.txt)

**A powerful Ruby library for working with Large Language Models (LLMs), with an intelligent tool system**

LLMChain is a Ruby analog of LangChain, providing a unified interface for interacting with various LLMs, a built-in tool system, and RAG (Retrieval-Augmented Generation) support.

## 🎉 What's New in v0.5.1

- ✅ **Google Search Integration** - Accurate, up-to-date search results
- ✅ **Fixed Calculator** - Improved expression parsing and evaluation
- ✅ **Enhanced Code Interpreter** - Better code extraction from prompts
- ✅ **Production-Ready Output** - Clean interface without debug noise
- ✅ **Quick Chain Creation** - Simple `LLMChain.quick_chain` method
- ✅ **Simplified Configuration** - Easy setup with sensible defaults

## ✨ Key Features

- 🤖 **Unified API** for multiple LLMs (OpenAI, Ollama, Qwen, LLaMA2, Gemma)
- 🛠️ **Intelligent tool system** with automatic selection
- 🧮 **Built-in tools**: Calculator, web search, code interpreter
- 📚 **RAG-ready** with vector database integration
- 💾 **Flexible memory system** (Array, Redis)
- 🌊 **Streaming output** for real-time responses
- 🏠 **Local models** via Ollama
- 🔧 **Extensible architecture** for custom tools

## 🚀 Quick Start

### Installation

```bash
gem install llm_chain
```

Or add to your Gemfile:

```ruby
gem 'llm_chain'
```

### Prerequisites

1. **Install Ollama** for local models:
```bash
# macOS/Linux
curl -fsSL https://ollama.ai/install.sh | sh

# Download models
ollama pull qwen3:1.7b
ollama pull llama2:7b
```

2. **Optional**: API keys for enhanced features
```bash
# For OpenAI models
export OPENAI_API_KEY="your-openai-key"

# For Google Search (get at console.developers.google.com)
export GOOGLE_API_KEY="your-google-key"
export GOOGLE_SEARCH_ENGINE_ID="your-search-engine-id"
```

### Simple Example

```ruby
require 'llm_chain'

# Quick start with default tools (v0.5.1+)
chain = LLMChain.quick_chain
response = chain.ask("Hello! How are you?")
puts response

# Or traditional setup
chain = LLMChain::Chain.new(model: "qwen3:1.7b")
response = chain.ask("Hello! How are you?")
puts response
```

## 🛠️ Tool System

### Automatic Tool Usage

```ruby
# Quick setup (v0.5.1+)
chain = LLMChain.quick_chain

# Tools are selected automatically
chain.ask("Calculate 15 * 7 + 32")
# 🧮 Result: 137

chain.ask("What is the latest version of Ruby?")
# 🔍 Result: Ruby 3.3.6 (via Google search)

chain.ask("Execute code: puts (1..10).sum")
# 💻 Result: 55

# Traditional setup
tool_manager = LLMChain::Tools::ToolManager.create_default_toolset
chain = LLMChain::Chain.new(
  model: "qwen3:1.7b",
  tools: tool_manager
)
```

### Built-in Tools

#### 🧮 Calculator
```ruby
calculator = LLMChain::Tools::Calculator.new
result = calculator.call("Find square root of 144")
puts result[:formatted]
# Output: sqrt(144) = 12.0
```

#### 🔍 Web Search
```ruby
# Google search for accurate results (v0.5.1+)
search = LLMChain::Tools::WebSearch.new
results = search.call("Latest Ruby version")
puts results[:formatted]
# Output: Ruby 3.3.6 is the current stable version...

# Fallback data is available without API keys
search = LLMChain::Tools::WebSearch.new
results = search.call("What is the latest version of Ruby?")
# Works even without Google API configured
```

#### 💻 Code Interpreter
````ruby
interpreter = LLMChain::Tools::CodeInterpreter.new
result = interpreter.call(<<~CODE)
  ```ruby
  def factorial(n)
    n <= 1 ? 1 : n * factorial(n - 1)
  end
  puts factorial(5)
  ```
CODE
puts result[:formatted]
````

## ⚙️ Configuration (v0.5.1+)

```ruby
# Global configuration
LLMChain.configure do |config|
  config.default_model = "qwen3:1.7b" # Default LLM model
  config.search_engine = :google      # Google for accurate results
  config.memory_size = 100            # Memory buffer size
  config.timeout = 30                 # Request timeout (seconds)
end

# Quick chain with default settings
chain = LLMChain.quick_chain

# Override settings per chain
chain = LLMChain.quick_chain(
  model: "gpt-4",
  tools: false,  # Disable tools
  memory: false  # Disable memory
)
```

### Creating Custom Tools

```ruby
class WeatherTool < LLMChain::Tools::BaseTool
  def initialize(api_key:)
    @api_key = api_key
    super(
      name: "weather",
      description: "Gets weather information",
      parameters: {
        location: {
          type: "string",
          description: "City name"
        }
      }
    )
  end

  def match?(prompt)
    contains_keywords?(prompt, ['weather', 'temperature', 'forecast'])
  end

  def call(prompt, context: {})
    location = extract_location(prompt)
    # Your weather API integration
    {
      location: location,
      temperature: "22°C",
      condition: "Sunny",
      formatted: "Weather in #{location}: 22°C, Sunny"
    }
  end

  private

  def extract_location(prompt)
    prompt.scan(/in\s+(\w+)/i).flatten.first || "Unknown"
  end
end

# Usage
weather = WeatherTool.new(api_key: "your-key")
tool_manager.register_tool(weather)
```
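
The two helpers that drive tool selection in the example above are plain string and regex work, so they can be tried standalone. A minimal sketch, assuming `contains_keywords?` does a simple case-insensitive substring check (its real implementation lives in `LLMChain::Tools::BaseTool` and may differ):

```ruby
# Standalone sketch of WeatherTool's matching logic.
# `contains_keywords?` here is an assumed reimplementation for illustration;
# the gem provides its own version via BaseTool.
def contains_keywords?(prompt, keywords)
  keywords.any? { |kw| prompt.downcase.include?(kw) }
end

# Same regex as in WeatherTool#extract_location above.
def extract_location(prompt)
  prompt.scan(/in\s+(\w+)/i).flatten.first || "Unknown"
end

puts contains_keywords?("What's the weather in Paris?", ['weather', 'temperature'])
# => true
puts extract_location("What's the weather in Paris?")
# => Paris
```

This is why `match?` fires on "What's the weather in Paris?" but not on unrelated prompts, and why a prompt without an "in <city>" phrase falls back to `"Unknown"`.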

## 🤖 Supported Models

| Model Family | Backend | Status | Notes |
|--------------|---------|--------|-------|
| **OpenAI** | Web API | ✅ Supported | GPT-3.5, GPT-4, GPT-4 Turbo |
| **Qwen/Qwen2** | Ollama | ✅ Supported | 0.5B - 72B parameters |
| **LLaMA2/3** | Ollama | ✅ Supported | 7B, 13B, 70B |
| **Gemma** | Ollama | ✅ Supported | 2B, 7B, 9B, 27B |
| **Mistral/Mixtral** | Ollama | 🚧 In development | 7B, 8x7B |
| **Claude** | Anthropic | 📅 Planned | Haiku, Sonnet, Opus |
| **Command R+** | Cohere | 📅 Planned | Optimized for RAG |

### Model Usage Examples

```ruby
# OpenAI
openai_chain = LLMChain::Chain.new(
  model: "gpt-4",
  api_key: ENV['OPENAI_API_KEY']
)

# Qwen via Ollama
qwen_chain = LLMChain::Chain.new(model: "qwen3:1.7b")

# LLaMA via Ollama with settings
llama_chain = LLMChain::Chain.new(
  model: "llama2:7b",
  temperature: 0.8,
  top_p: 0.95
)
```

## 💾 Memory System

### Array Memory (default)
```ruby
memory = LLMChain::Memory::Array.new(max_size: 10)
chain = LLMChain::Chain.new(
  model: "qwen3:1.7b",
  memory: memory
)

chain.ask("My name is Alex")
chain.ask("What's my name?") # Remembers previous context
```
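
Conceptually, `max_size` makes the memory a bounded buffer: once it is full, the oldest exchanges are dropped so recent context survives. A minimal pure-Ruby sketch of that idea — `BoundedMemory` is an illustration of the drop-oldest policy, not the gem's actual `Memory::Array` implementation:

```ruby
# Illustrative bounded conversation memory with a drop-oldest policy.
# LLMChain::Memory::Array's real internals may differ.
class BoundedMemory
  def initialize(max_size: 10)
    @max_size = max_size
    @entries = []
  end

  def store(prompt, response)
    @entries << { prompt: prompt, response: response }
    @entries.shift while @entries.size > @max_size # evict oldest exchanges
  end

  def recall
    @entries
  end
end

memory = BoundedMemory.new(max_size: 2)
memory.store("My name is Alex", "Nice to meet you, Alex!")
memory.store("What's my name?", "Your name is Alex.")
memory.store("How are you?", "Fine, thanks!")
puts memory.recall.size # => 2 (the oldest exchange was evicted)
```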

### Redis Memory (for production)
```ruby
memory = LLMChain::Memory::Redis.new(
  redis_url: 'redis://localhost:6379',
  max_size: 100,
  namespace: 'my_app'
)

chain = LLMChain::Chain.new(
  model: "qwen3:1.7b",
  memory: memory
)
```

## 📚 RAG (Retrieval-Augmented Generation)

### Setting up RAG with Weaviate

```ruby
# Initialize components
embedder = LLMChain::Embeddings::Clients::Local::OllamaClient.new(
  model: "nomic-embed-text"
)

vector_store = LLMChain::Embeddings::Clients::Local::WeaviateVectorStore.new(
  embedder: embedder,
  weaviate_url: 'http://localhost:8080'
)

retriever = LLMChain::Embeddings::Clients::Local::WeaviateRetriever.new(
  embedder: embedder
)

# Create chain with RAG
chain = LLMChain::Chain.new(
  model: "qwen3:1.7b",
  retriever: retriever
)
```

### Adding Documents

```ruby
documents = [
  {
    text: "Ruby supports OOP principles: encapsulation, inheritance, polymorphism",
    metadata: { source: "ruby-guide", page: 15 }
  },
  {
    text: "Modules in Ruby are used for namespaces and mixins",
    metadata: { source: "ruby-book", author: "Matz" }
  }
]

# Add to vector database
documents.each do |doc|
  vector_store.add_document(
    text: doc[:text],
    metadata: doc[:metadata]
  )
end
```

### RAG Queries

```ruby
# Regular query
response = chain.ask("What is Ruby?")

# Query with RAG
response = chain.ask(
  "What OOP principles does Ruby support?",
  rag_context: true,
  rag_options: { limit: 3 }
)
```

## 🌊 Streaming Output

```ruby
chain = LLMChain::Chain.new(model: "qwen3:1.7b")

# Streaming with a block
chain.ask("Tell me about Ruby history", stream: true) do |chunk|
  print chunk
  $stdout.flush
end

# Streaming with tools
tool_manager = LLMChain::Tools::ToolManager.create_default_toolset
chain = LLMChain::Chain.new(
  model: "qwen3:1.7b",
  tools: tool_manager
)

chain.ask("Calculate 15! and explain the process", stream: true) do |chunk|
  print chunk
end
```

## ⚙️ Configuration

### Environment Variables

```bash
# OpenAI
export OPENAI_API_KEY="sk-..."
export OPENAI_ORGANIZATION_ID="org-..."

# Search
export SEARCH_API_KEY="your-search-api-key"
export GOOGLE_SEARCH_ENGINE_ID="your-cse-id"

# Redis
export REDIS_URL="redis://localhost:6379"

# Weaviate
export WEAVIATE_URL="http://localhost:8080"
```

### Tool Configuration

```ruby
# From configuration
tools_config = [
  {
    class: 'calculator'
  },
  {
    class: 'web_search',
    options: {
      search_engine: :duckduckgo,
      api_key: ENV['SEARCH_API_KEY']
    }
  },
  {
    class: 'code_interpreter',
    options: {
      timeout: 30,
      allowed_languages: ['ruby', 'python']
    }
  }
]

tool_manager = LLMChain::Tools::ToolManager.from_config(tools_config)
```

### Client Settings

```ruby
# Qwen with custom parameters
qwen = LLMChain::Clients::Qwen.new(
  model: "qwen2:7b",
  temperature: 0.7,
  top_p: 0.9,
  base_url: "http://localhost:11434"
)

# OpenAI with settings
openai = LLMChain::Clients::OpenAI.new(
  model: "gpt-4",
  api_key: ENV['OPENAI_API_KEY'],
  temperature: 0.8,
  max_tokens: 2000
)
```

## 🚧 Error Handling

```ruby
begin
  chain = LLMChain::Chain.new(model: "qwen3:1.7b")
  response = chain.ask("Complex query")
rescue LLMChain::UnknownModelError => e
  puts "Unknown model: #{e.message}"
rescue LLMChain::ClientError => e
  puts "Client error: #{e.message}"
rescue LLMChain::TimeoutError => e
  puts "Timeout exceeded: #{e.message}"
rescue LLMChain::Error => e
  puts "General LLMChain error: #{e.message}"
end
```
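
Timeouts and client errors are often transient, so a retry wrapper around `chain.ask` can be useful. A generic sketch — the `with_retries` helper is illustrative and not part of llm_chain; pass whichever error classes you consider transient (e.g. `LLMChain::TimeoutError`):

```ruby
# Generic retry-with-backoff helper (illustrative; not part of llm_chain).
def with_retries(attempts: 3, base_delay: 0.5, retry_on: [StandardError])
  tries = 0
  begin
    yield
  rescue *retry_on
    tries += 1
    raise if tries >= attempts                 # give up after the last attempt
    sleep(base_delay * (2**(tries - 1)))       # exponential backoff
    retry
  end
end

# Usage (hypothetical):
#   response = with_retries(retry_on: [LLMChain::TimeoutError]) do
#     chain.ask("Complex query")
#   end
```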

## 📋 Usage Examples

### Chatbot with Tools

```ruby
require 'llm_chain'

class ChatBot
  def initialize
    @tool_manager = LLMChain::Tools::ToolManager.create_default_toolset
    @memory = LLMChain::Memory::Array.new(max_size: 20)
    @chain = LLMChain::Chain.new(
      model: "qwen3:1.7b",
      memory: @memory,
      tools: @tool_manager
    )
  end

  def chat_loop
    puts "🤖 Hello! I'm an AI assistant with tools. Ask me anything!"

    loop do
      print "\n👤 You: "
      input = gets.chomp
      break if %w[exit quit bye].include?(input.downcase)

      @chain.ask(input, stream: true) do |chunk|
        print chunk
      end
      puts "\n"
    end
  end
end

# Run
bot = ChatBot.new
bot.chat_loop
```

### Data Analysis with Code

````ruby
data_chain = LLMChain::Chain.new(
  model: "qwen3:7b",
  tools: LLMChain::Tools::ToolManager.create_default_toolset
)

# Analyze CSV data
response = data_chain.ask(<<~PROMPT)
  Analyze this code and execute it:

  ```ruby
  data = [
    { name: "Alice", age: 25, salary: 50000 },
    { name: "Bob", age: 30, salary: 60000 },
    { name: "Charlie", age: 35, salary: 70000 }
  ]

  average_age = data.sum { |person| person[:age] } / data.size.to_f
  total_salary = data.sum { |person| person[:salary] }

  puts "Average age: #{average_age}"
  puts "Total salary: #{total_salary}"
  puts "Average salary: #{total_salary / data.size}"
  ```
PROMPT

puts response
````

## 🧪 Testing

```bash
# Run tests
bundle exec rspec

# Run demo
ruby -I lib examples/tools_example.rb

# Interactive console
bundle exec bin/console
```

## 📖 API Documentation

### Main Classes

- `LLMChain::Chain` - Main class for creating chains
- `LLMChain::Tools::ToolManager` - Tool management
- `LLMChain::Memory::Array/Redis` - Memory systems
- `LLMChain::Clients::*` - Clients for various LLMs

### Chain Methods

```ruby
chain = LLMChain::Chain.new(options)

# Main method
chain.ask(prompt, stream: false, rag_context: false, rag_options: {})

# Initialization parameters:
# - model: model name
# - memory: memory object
# - tools: array of tools or a ToolManager
# - retriever: RAG retriever
# - client_options: additional client parameters
```

## 🛣️ Roadmap

### v0.6.0
- [ ] ReAct agents and multi-step reasoning
- [ ] More tools (file system, database queries)
- [ ] Claude integration
- [ ] Enhanced error handling

### v0.7.0
- [ ] Multi-agent systems
- [ ] Task planning and workflows
- [ ] Web interface for testing
- [ ] Metrics and monitoring

### v1.0.0
- [ ] Stable API with semantic versioning
- [ ] Complete documentation coverage
- [ ] Production-grade performance

## 🤝 Contributing

1. Fork the repository
2. Create a feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request

### Development

```bash
git clone https://github.com/FuryCow/llm_chain.git
cd llm_chain
bundle install
bundle exec rspec
```

## 📄 License

This project is distributed under the [MIT License](LICENSE.txt).

## 🙏 Acknowledgments

- [Ollama](https://ollama.ai/) team for an excellent local LLM platform
- [LangChain](https://langchain.com/) developers for inspiration
- The Ruby community for support

---

**Made with ❤️ for the Ruby community**

[Documentation](https://github.com/FuryCow/llm_chain/wiki) |
[Examples](https://github.com/FuryCow/llm_chain/tree/main/examples) |
[Changelog](CHANGELOG.md) |
[Issues](https://github.com/FuryCow/llm_chain/issues) |
[Discussions](https://github.com/FuryCow/llm_chain/discussions)