ruby_llm 0.1.0.pre35 → 0.1.0.pre37

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,247 @@
+ ---
+ layout: default
+ title: Tools
+ parent: Guides
+ nav_order: 3
+ permalink: /guides/tools
+ ---
+
+ # Using Tools with RubyLLM
+
+ Tools allow AI models to call your Ruby code to perform actions or retrieve information. This guide explains how to create and use tools with RubyLLM.
+
+ ## What Are Tools?
+
+ Tools (also known as "functions" or "plugins") let AI models:
+
+ 1. Recognize when external functionality is needed
+ 2. Call your Ruby code with appropriate parameters
+ 3. Use the results to enhance their responses
+
+ Common use cases include:
+ - Retrieving real-time data
+ - Performing calculations
+ - Accessing databases
+ - Controlling external systems
+
+ ## Creating a Tool
+
+ Tools are defined as Ruby classes that inherit from `RubyLLM::Tool`:
+
+ ```ruby
+ class Calculator < RubyLLM::Tool
+   description "Performs arithmetic calculations"
+
+   param :expression,
+         type: :string,
+         desc: "A mathematical expression to evaluate (e.g. '2 + 2')"
+
+   def execute(expression:)
+     eval(expression).to_s
+   rescue StandardError => e
+     "Error: #{e.message}"
+   end
+ end
+ ```
+
+ ### Tool Components
+
+ Each tool has these key elements:
+
+ 1. **Description** - Explains what the tool does, helping the AI decide when to use it
+ 2. **Parameters** - Define the inputs the tool expects
+ 3. **Execute Method** - The code that runs when the tool is called
+
+ ### Parameter Definition
+
+ Parameters accept several options:
+
+ ```ruby
+ param :parameter_name,
+       type: :string,       # Data type (:string, :integer, :boolean, :array, :object)
+       desc: "Description", # Description of what the parameter does
+       required: true       # Whether the parameter is required (default: true)
+ ```
+
+ ## Using Tools in Chat
+
+ To use a tool, attach it to a chat:
+
+ ```ruby
+ # Create the chat
+ chat = RubyLLM.chat
+
+ # Add a tool
+ chat.with_tool(Calculator)
+
+ # Now you can ask questions that might require calculation
+ response = chat.ask "What's 123 * 456?"
+ # => "Let me calculate that for you. 123 * 456 = 56088."
+ ```
+
+ ### Multiple Tools
+
+ You can provide multiple tools to a single chat:
+
+ ```ruby
+ class Weather < RubyLLM::Tool
+   description "Gets current weather for a location"
+
+   param :location,
+         desc: "City name or zip code"
+
+   def execute(location:)
+     # Simulate weather lookup
+     "72°F and sunny in #{location}"
+   end
+ end
+
+ # Add multiple tools
+ chat = RubyLLM.chat
+   .with_tools(Calculator, Weather)
+
+ # Ask questions that might use either tool
+ chat.ask "What's the temperature in New York City?"
+ chat.ask "If it's 72°F in NYC and 54°F in Boston, what's the average?"
+ ```
+
+ ## Custom Initialization
+
+ Tools can have custom initialization:
+
+ ```ruby
+ class DocumentSearch < RubyLLM::Tool
+   description "Searches documents by relevance"
+
+   param :query,
+         desc: "The search query"
+
+   param :limit,
+         type: :integer,
+         desc: "Maximum number of results",
+         required: false
+
+   def initialize(database)
+     @database = database
+   end
+
+   def execute(query:, limit: 5)
+     # Search in @database
+     @database.search(query, limit: limit)
+   end
+ end
+
+ # Initialize with dependencies
+ search_tool = DocumentSearch.new(MyDatabase)
+ chat.with_tool(search_tool)
+ ```
138
+
139
+ ## The Tool Execution Flow
140
+
141
+ Here's what happens when a tool is used:
142
+
143
+ 1. You ask a question
144
+ 2. The model decides a tool is needed
145
+ 3. The model selects the tool and provides arguments
146
+ 4. RubyLLM calls your tool's `execute` method
147
+ 5. The result is sent back to the model
148
+ 6. The model incorporates the result into its response
149
+
150
+ For example:
151
+
152
+ ```ruby
153
+ response = chat.ask "What's 123 squared plus 456?"
154
+
155
+ # Behind the scenes:
156
+ # 1. Model decides it needs to calculate
157
+ # 2. Model calls Calculator with expression: "123 * 123 + 456"
158
+ # 3. Tool returns "15,585"
159
+ # 4. Model incorporates this in its response
160
+ ```
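The arithmetic in that trace can be checked directly; this is plain Ruby, independent of RubyLLM:

```ruby
# Evaluate the same expression the model would hand to the Calculator tool.
expression = "123 * 123 + 456"
result = eval(expression).to_s
puts result  # => "15585"
```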
+
+ ## Debugging Tools
+
+ Enable debugging to see tool calls in action:
+
+ ```ruby
+ # Enable debug logging
+ ENV['RUBY_LLM_DEBUG'] = 'true'
+
+ # Make a request
+ chat.ask "What's 15329 divided by 437?"
+
+ # Console output:
+ # D, -- RubyLLM: Tool calculator called with: {"expression"=>"15329 / 437"}
+ # D, -- RubyLLM: Tool calculator returned: "35"
+ ```
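One caveat when the Calculator `eval`s an expression string: with two Integer operands, Ruby's `/` truncates. A standalone illustration (plain Ruby, no RubyLLM required):

```ruby
# Integer operands: truncated Integer division.
puts eval("15329 / 437")    # => 35

# A Float operand switches to Float division.
puts eval("15329.0 / 437")  # => 35.0778...
```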
+
+ ## Error Handling
+
+ Tools can handle errors gracefully:
+
+ ```ruby
+ class Calculator < RubyLLM::Tool
+   description "Performs arithmetic calculations"
+
+   param :expression,
+         type: :string,
+         desc: "Math expression to evaluate"
+
+   def execute(expression:)
+     eval(expression).to_s
+   rescue StandardError => e
+     # Return error as a result
+     { error: "Error calculating #{expression}: #{e.message}" }
+   end
+ end
+
+ # When there's an error, the model will receive and explain it
+ chat.ask "What's 1/0?"
+ # => "I tried to calculate 1/0, but there was an error: divided by 0"
+ ```
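The "divided by 0" message in the example comes from Ruby's own `ZeroDivisionError`; the rescue path can be reproduced without RubyLLM:

```ruby
# Reproduce what the Calculator's rescue clause sees for "1/0".
begin
  eval("1/0")
rescue StandardError => e
  puts "Error calculating 1/0: #{e.message}"
end
# Prints: Error calculating 1/0: divided by 0
```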
+
+ ## Advanced Tool Parameters
+
+ Tools can have complex parameter types:
+
+ ```ruby
+ class DataAnalysis < RubyLLM::Tool
+   description "Analyzes numerical data"
+
+   param :data,
+         type: :array,
+         desc: "Array of numbers to analyze"
+
+   param :operations,
+         type: :object,
+         desc: "Analysis operations to perform",
+         required: false
+
+   def execute(data:, operations: { mean: true, median: false })
+     result = {}
+
+     result[:mean] = data.sum.to_f / data.size if operations[:mean]
+     result[:median] = calculate_median(data) if operations[:median]
+
+     result
+   end
+
+   private
+
+   def calculate_median(data)
+     sorted = data.sort
+     mid = sorted.size / 2
+     sorted.size.odd? ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2.0
+   end
+ end
+ ```
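The `calculate_median` helper above is plain Ruby and can be exercised on its own, outside any tool:

```ruby
# Median: the middle element for odd-sized input, the mean of the two middle elements otherwise.
def calculate_median(data)
  sorted = data.sort
  mid = sorted.size / 2
  sorted.size.odd? ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2.0
end

puts calculate_median([3, 1, 2])     # => 2
puts calculate_median([4, 1, 3, 2])  # => 2.5
```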
+
+ ## When to Use Tools
+
+ Tools are best for:
+
+ 1. **External data retrieval** - Getting real-time information like weather, prices, or database records
+ 2. **Computation** - When calculations are complex or involve large numbers
+ 3. **System integration** - Connecting to external APIs or services
+ 4. **Data processing** - Working with files, formatting data, or analyzing information
+ 5. **Stateful operations** - When you need to maintain state between calls
data/docs/index.md ADDED
@@ -0,0 +1,53 @@
+ ---
+ layout: default
+ title: Home
+ nav_order: 1
+ description: "RubyLLM is a delightful Ruby way to work with AI."
+ permalink: /
+ ---
+
+ # RubyLLM
+ {: .fs-9 }
+
+ A delightful Ruby way to work with AI through a unified interface to OpenAI, Anthropic, Google, and DeepSeek.
+ {: .fs-6 .fw-300 }
+
+ [Get started now]({% link installation.md %}){: .btn .btn-primary .fs-5 .mb-4 .mb-md-0 .mr-2 }
+ [View on GitHub](https://github.com/crmne/ruby_llm){: .btn .fs-5 .mb-4 .mb-md-0 }
+
+ ---
+
+ ## Overview
+
+ RubyLLM provides a beautiful, unified interface to modern AI services, including:
+
+ - 💬 **Chat** with OpenAI GPT, Anthropic Claude, Google Gemini, and DeepSeek models
+ - 🖼️ **Image generation** with DALL-E and other providers
+ - 🔍 **Embeddings** for vector search and semantic analysis
+ - 🔧 **Tools** that let AI use your Ruby code
+ - 🚊 **Rails integration** to persist chats and messages with ActiveRecord
+ - 🌊 **Streaming** responses with proper Ruby patterns
+
+ ## Quick start
+
+ ```ruby
+ require 'ruby_llm'
+
+ # Configure your API keys
+ RubyLLM.configure do |config|
+   config.openai_api_key = ENV['OPENAI_API_KEY']
+ end
+
+ # Start chatting
+ chat = RubyLLM.chat
+ response = chat.ask "What's the best way to learn Ruby?"
+
+ # Generate images
+ image = RubyLLM.paint "a sunset over mountains"
+ puts image.url
+ ```
+
+ ## Learn more
+
+ - [Installation]({% link installation.md %})
+ - [Guides]({% link guides/index.md %})
@@ -0,0 +1,98 @@
+ ---
+ layout: default
+ title: Installation
+ nav_order: 2
+ permalink: /installation
+ ---
+
+ # Installation
+
+ RubyLLM is packaged as a Ruby gem, making it easy to install in your projects.
+
+ ## Requirements
+
+ * Ruby 3.1 or later
+ * An API key from at least one of the supported providers:
+   * OpenAI
+   * Anthropic
+   * Google (Gemini)
+   * DeepSeek
+
+ ## Installation Methods
+
+ ### Using Bundler (recommended)
+
+ Add RubyLLM to your project's Gemfile:
+
+ ```ruby
+ gem 'ruby_llm'
+ ```
+
+ Then install the dependencies:
+
+ ```bash
+ bundle install
+ ```
+
+ ### Manual Installation
+
+ If you're not using Bundler, you can install RubyLLM directly:
+
+ ```bash
+ gem install ruby_llm
+ ```
+
+ ## Configuration
+
+ After installing RubyLLM, you'll need to configure it with your API keys:
+
+ ```ruby
+ require 'ruby_llm'
+
+ RubyLLM.configure do |config|
+   # Required: At least one API key
+   config.openai_api_key = ENV['OPENAI_API_KEY']
+   config.anthropic_api_key = ENV['ANTHROPIC_API_KEY']
+   config.gemini_api_key = ENV['GEMINI_API_KEY']
+   config.deepseek_api_key = ENV['DEEPSEEK_API_KEY']
+
+   # Optional: Set default models
+   config.default_model = 'gpt-4o-mini'                      # Default chat model
+   config.default_embedding_model = 'text-embedding-3-small' # Default embedding model
+   config.default_image_model = 'dall-e-3'                   # Default image generation model
+
+   # Optional: Configure request settings
+   config.request_timeout = 120 # Request timeout in seconds
+   config.max_retries = 3       # Number of retries on failures
+ end
+ ```
+
+ We recommend storing your API keys as environment variables rather than hardcoding them in your application.
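For example, in your shell or deployment environment (the key values below are placeholders, not real keys):

```bash
export OPENAI_API_KEY="sk-your-key-here"
export ANTHROPIC_API_KEY="your-anthropic-key"
```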
+
+ ## Verifying Installation
+
+ You can verify that RubyLLM is correctly installed and configured by running a simple test:
+
+ ```ruby
+ require 'ruby_llm'
+
+ # Configure with at least one API key
+ RubyLLM.configure do |config|
+   config.openai_api_key = ENV['OPENAI_API_KEY']
+ end
+
+ # Try a simple query
+ chat = RubyLLM.chat
+ response = chat.ask "Hello, world!"
+ puts response.content
+
+ # Check available models
+ puts "Available models:"
+ RubyLLM.models.chat_models.each do |model|
+   puts "- #{model.id} (#{model.provider})"
+ end
+ ```
+
+ ## Next Steps
+
+ Once you've successfully installed RubyLLM, check out the [Getting Started guide]({% link guides/getting-started.md %}) to learn how to use it in your applications.
@@ -85,10 +85,10 @@ module RubyLLM
       .on_end_message { |msg| persist_message_completion(msg) }
   end
 
-  def ask(message, &block)
+  def ask(message, &)
    message = { role: :user, content: message }
    messages.create!(**message)
-   to_llm.complete(&block)
+   to_llm.complete(&)
  end
 
  alias say ask
data/lib/ruby_llm/chat.rb CHANGED
@@ -72,18 +72,18 @@ module RubyLLM
    self
  end
 
- def each(&block)
-   messages.each(&block)
+ def each(&)
+   messages.each(&)
  end
 
- def complete(&block)
+ def complete(&)
    @on[:new_message]&.call
-   response = @provider.complete(messages, tools: @tools, temperature: @temperature, model: @model.id, &block)
+   response = @provider.complete(messages, tools: @tools, temperature: @temperature, model: @model.id, &)
    @on[:end_message]&.call(response)
 
    add_message response
    if response.tool_call?
-     handle_tool_calls response, &block
+     handle_tool_calls(response, &)
    else
      response
    end
@@ -97,7 +97,7 @@ module RubyLLM
 
  private
 
- def handle_tool_calls(response, &block)
+ def handle_tool_calls(response, &)
    response.tool_calls.each_value do |tool_call|
      @on[:new_message]&.call
      result = execute_tool tool_call
@@ -105,7 +105,7 @@ module RubyLLM
      @on[:end_message]&.call(message)
    end
 
-   complete(&block)
+   complete(&)
  end
 
  def execute_tool(tool_call)
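The `&block` → `&` changes above adopt Ruby 3.1's anonymous block forwarding: a bare `&` in the signature and at the call site passes the caller's block through without naming it. A standalone sketch (the method name is illustrative, not from RubyLLM):

```ruby
# Anonymous block forwarding (Ruby 3.1+): `&` forwards the block without binding a name.
def forward_each(items, &)
  items.each(&)
end

collected = []
forward_each([1, 2, 3]) { |n| collected << n * 2 }
puts collected.inspect  # => [2, 4, 6]
```

This matches the gem's stated requirement of Ruby 3.1 or later.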
@@ -11,8 +11,8 @@
  "supports_vision": false,
  "supports_functions": false,
  "supports_json_mode": false,
- "input_price_per_million": 0.075,
- "output_price_per_million": 0.3,
+ "input_price_per_million": 0.0,
+ "output_price_per_million": 0.0,
  "metadata": {
    "object": "model",
    "owned_by": "google"
@@ -23,8 +23,8 @@
  "created_at": "2023-08-21T18:16:55+02:00",
  "display_name": "Babbage 002",
  "provider": "openai",
- "context_window": 4096,
- "max_tokens": 4096,
+ "context_window": 16384,
+ "max_tokens": 16384,
  "type": "chat",
  "family": "babbage",
  "supports_vision": false,
@@ -80,7 +80,7 @@
  "created_at": "2023-07-11T00:00:00Z",
  "display_name": "Claude 2.0",
  "provider": "anthropic",
- "context_window": 100000,
+ "context_window": 200000,
  "max_tokens": 4096,
  "type": "chat",
  "family": "claude2",
@@ -96,7 +96,7 @@
  "created_at": "2023-11-21T00:00:00Z",
  "display_name": "Claude 2.1",
  "provider": "anthropic",
- "context_window": 100000,
+ "context_window": 200000,
  "max_tokens": 4096,
  "type": "chat",
  "family": "claude2",
@@ -116,7 +116,7 @@
  "max_tokens": 8192,
  "type": "chat",
  "family": "claude35_haiku",
- "supports_vision": false,
+ "supports_vision": true,
  "supports_functions": true,
  "supports_json_mode": true,
  "input_price_per_million": 0.8,
@@ -161,9 +161,9 @@
  "display_name": "Claude 3.7 Sonnet",
  "provider": "anthropic",
  "context_window": 200000,
- "max_tokens": 4096,
+ "max_tokens": 8192,
  "type": "chat",
- "family": "claude2",
+ "family": "claude37_sonnet",
  "supports_vision": true,
  "supports_functions": true,
  "supports_json_mode": true,
@@ -262,8 +262,8 @@
  "created_at": "2023-08-21T18:11:41+02:00",
  "display_name": "Davinci 002",
  "provider": "openai",
- "context_window": 4096,
- "max_tokens": 4096,
+ "context_window": 16384,
+ "max_tokens": 16384,
  "type": "chat",
  "family": "davinci",
  "supports_vision": false,
@@ -857,7 +857,7 @@
  "family": "gemini20_flash_lite",
  "supports_vision": true,
  "supports_functions": false,
- "supports_json_mode": true,
+ "supports_json_mode": false,
  "input_price_per_million": 0.075,
  "output_price_per_million": 0.3,
  "metadata": {
@@ -876,7 +876,7 @@
  "family": "gemini20_flash_lite",
  "supports_vision": true,
  "supports_functions": false,
- "supports_json_mode": true,
+ "supports_json_mode": false,
  "input_price_per_million": 0.075,
  "output_price_per_million": 0.3,
  "metadata": {
@@ -895,7 +895,7 @@
  "family": "gemini20_flash_lite",
  "supports_vision": true,
  "supports_functions": false,
- "supports_json_mode": true,
+ "supports_json_mode": false,
  "input_price_per_million": 0.075,
  "output_price_per_million": 0.3,
  "metadata": {
@@ -914,7 +914,7 @@
  "family": "gemini20_flash_lite",
  "supports_vision": true,
  "supports_functions": false,
- "supports_json_mode": true,
+ "supports_json_mode": false,
  "input_price_per_million": 0.075,
  "output_price_per_million": 0.3,
  "metadata": {
@@ -1650,7 +1650,7 @@
  "display_name": "GPT-4o-Mini Realtime Preview",
  "provider": "openai",
  "context_window": 128000,
- "max_tokens": 16384,
+ "max_tokens": 4096,
  "type": "chat",
  "family": "gpt4o_mini_realtime",
  "supports_vision": true,
@@ -1669,7 +1669,7 @@
  "display_name": "GPT-4o-Mini Realtime Preview 20241217",
  "provider": "openai",
  "context_window": 128000,
- "max_tokens": 16384,
+ "max_tokens": 4096,
  "type": "chat",
  "family": "gpt4o_mini_realtime",
  "supports_vision": true,
@@ -1685,10 +1685,10 @@
  {
    "id": "gpt-4o-realtime-preview",
    "created_at": "2024-09-30T03:33:18+02:00",
-   "display_name": "GPT-4o Realtime Preview",
+   "display_name": "GPT-4o-Realtime Preview",
    "provider": "openai",
    "context_window": 128000,
-   "max_tokens": 16384,
+   "max_tokens": 4096,
    "type": "chat",
    "family": "gpt4o_realtime",
    "supports_vision": true,
@@ -1704,10 +1704,10 @@
  {
    "id": "gpt-4o-realtime-preview-2024-10-01",
    "created_at": "2024-09-24T00:49:26+02:00",
-   "display_name": "GPT-4o Realtime Preview 20241001",
+   "display_name": "GPT-4o-Realtime Preview 20241001",
    "provider": "openai",
    "context_window": 128000,
-   "max_tokens": 16384,
+   "max_tokens": 4096,
    "type": "chat",
    "family": "gpt4o_realtime",
    "supports_vision": true,
@@ -1723,10 +1723,10 @@
  {
    "id": "gpt-4o-realtime-preview-2024-12-17",
    "created_at": "2024-12-11T20:30:30+01:00",
-   "display_name": "GPT-4o Realtime Preview 20241217",
+   "display_name": "GPT-4o-Realtime Preview 20241217",
    "provider": "openai",
    "context_window": 128000,
-   "max_tokens": 16384,
+   "max_tokens": 4096,
    "type": "chat",
    "family": "gpt4o_realtime",
    "supports_vision": true,
@@ -1820,7 +1820,7 @@
  "created_at": "2024-09-06T20:56:48+02:00",
  "display_name": "O1-Mini",
  "provider": "openai",
- "context_window": 200000,
+ "context_window": 128000,
  "max_tokens": 4096,
  "type": "chat",
  "family": "o1_mini",
@@ -1839,7 +1839,7 @@
  "created_at": "2024-09-06T20:56:19+02:00",
  "display_name": "O1-Mini 20240912",
  "provider": "openai",
- "context_window": 200000,
+ "context_window": 128000,
  "max_tokens": 65536,
  "type": "chat",
  "family": "o1_mini",
@@ -1894,7 +1894,7 @@
  {
    "id": "omni-moderation-2024-09-26",
    "created_at": "2024-11-27T20:07:46+01:00",
-   "display_name": "Omni Moderation 20240926",
+   "display_name": "Omni-Moderation 20240926",
    "provider": "openai",
    "context_window": 4096,
    "max_tokens": 4096,
@@ -1913,7 +1913,7 @@
  {
    "id": "omni-moderation-latest",
    "created_at": "2024-11-15T17:47:45+01:00",
-   "display_name": "Omni Moderation Latest",
+   "display_name": "Omni-Moderation Latest",
    "provider": "openai",
    "context_window": 4096,
    "max_tokens": 4096,