ruby_llm 0.1.0.pre12 → 0.1.0.pre14

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 70c8538fcfbdaba0d5bbb502affe66acfc38c5f67318ee69fe9dc203dbc4e7f7
-  data.tar.gz: 51a0759de14ee600876471dd034e0ff99b3763d765fd3b81eadfe4dacbd3f7e5
+  metadata.gz: 634e727da013e56b973856876d2b456075841bd3cd1f9a5939d2028d96a47b38
+  data.tar.gz: af14dd677a8718883f0bc7c555272b5a894457d472edae1e0a4c44de8d9934ba
 SHA512:
-  metadata.gz: 3b5a3362b20ffb9153f9668612a0b6fde3fdf040d7234baee08978cdcc60717fce0cf8a0950929e0ab384db0fe9cfd82456e0a3c4f6eaba566999c80f32fc941
-  data.tar.gz: 6f12660ca79c9d00196003c15bc866b831e47bbce52fc6d6f2ad731655b8e6243388ad4637d7b3c45cb1076c489fa6a8ca9973da20230cbc16b756782917056c
+  metadata.gz: 9c1dcc9ac485aa9f85592c0dd12d2bfe3894bfbd7311f19a1168ed17289d21edc14c636532212b01e544365519477314e03be0b34d238f2f6390834ac0a280c7
+  data.tar.gz: 9e4db61a0e5e73d13d7b8a03149625e463f7eb52d181acfc8f3ce06db0a7294615cf17f0bdbcced5f5a2908aa503af3fbc6c0122541ad704f1ddb4d17ea3b494
@@ -28,8 +28,8 @@ jobs:
       - name: Install dependencies
         run: bundle install
 
-      # - name: Check code format
-      #   run: bundle exec rubocop
+      - name: Check code format
+        run: bundle exec rubocop
 
       # - name: Run tests
       #   run: bundle exec rspec
data/.rubocop.yml CHANGED
@@ -1,3 +1,6 @@
 require:
   - rubocop-rake
-  - rubocop-rspec
+  - rubocop-rspec
+
+AllCops:
+  TargetRubyVersion: 3.1
data/README.md CHANGED
@@ -54,14 +54,13 @@ image_models = RubyLLM.models.image_models
 
 ## Having a Conversation
 
-Conversations are simple and natural, with automatic token counting built right in:
+Conversations are simple and natural:
 
 ```ruby
 chat = RubyLLM.chat model: 'claude-3-5-sonnet-20241022'
 
-# Single messages with token tracking
+# Ask questions
 response = chat.ask "What's your favorite Ruby feature?"
-puts "Response used #{response.input_tokens} input tokens and #{response.output_tokens} output tokens"
 
 # Multi-turn conversations just work
 chat.ask "Can you elaborate on that?"
@@ -72,22 +71,21 @@ chat.ask "Tell me a story about a Ruby programmer" do |chunk|
   print chunk.content
 end
 
-# Get token usage for the whole conversation from the last message
+# Check token usage
 last_message = chat.messages.last
 puts "Conversation used #{last_message.input_tokens} input tokens and #{last_message.output_tokens} output tokens"
 ```
 
 ## Using Tools
 
-Give Claude some Ruby superpowers by letting it call your code. Simply create tool classes that do one thing well:
+Give your AI assistants access to your Ruby code by creating tool classes that do one thing well:
 
 ```ruby
-class CalculatorTool < RubyLLM::Tool
+class Calculator < RubyLLM::Tool
   description "Performs arithmetic calculations"
 
   param :expression,
     type: :string,
-    required: true,
    desc: "A mathematical expression to evaluate (e.g. '2 + 2')"
 
   def execute(expression:)
@@ -95,11 +93,10 @@ class CalculatorTool < RubyLLM::Tool
   end
 end
 
-class DocumentSearchTool < RubyLLM::Tool
+class Search < RubyLLM::Tool
   description "Searches documents by similarity"
 
   param :query,
-    type: :string,
    desc: "The search query"
 
   param :limit,
@@ -107,12 +104,12 @@ class DocumentSearchTool < RubyLLM::Tool
     desc: "Number of results to return",
     required: false
 
-  def initialize(document_class:)
-    @document_class = document_class
+  def initialize(repo:)
+    @repo = repo
   end
 
   def execute(query:, limit: 5)
-    @document_class.similarity_search(query:, k: limit)
+    @repo.similarity_search(query, limit:)
   end
 end
 ```
@@ -121,11 +118,15 @@ Then use them in your conversations:
 
 ```ruby
 # Simple tools just work
-chat = RubyLLM.chat.with_tool(CalculatorTool.new)
+chat = RubyLLM.chat.with_tool Calculator
 
 # Tools with dependencies are just regular Ruby objects
-search = DocumentSearchTool.new(document_class: Document)
-chat.with_tools(search, CalculatorTool.new)
+search = Search.new repo: Document
+chat.with_tools search, Calculator
+
+# Configure as needed
+chat.with_model('claude-3-5-sonnet-20241022')
+    .with_temperature(0.9)
 
 chat.ask "What's 2+2?"
 # => "Let me calculate that for you. The result is 4."
@@ -134,9 +135,7 @@ chat.ask "Find documents about Ruby performance"
 # => "I found these relevant documents about Ruby performance..."
 ```
 
-Tools let you seamlessly integrate your Ruby code with AI capabilities. The model will automatically decide when to use your tools and handle the results appropriately.
-
-Need to debug a tool? RubyLLM automatically logs all tool calls and their results when debug logging is enabled:
+Need to debug a tool? RubyLLM automatically logs all tool calls:
 
 ```ruby
 ENV['RUBY_LLM_DEBUG'] = 'true'
@@ -146,13 +145,169 @@ chat.ask "What's 123 * 456?"
 # D, -- RubyLLM: Tool calculator returned: "56088"
 ```
 
-Create tools for anything - database queries, API calls, custom business logic - and let Claude use them naturally in conversation.
+## Rails Integration
+
+RubyLLM comes with built-in Rails support that makes it dead simple to persist your chats and messages. Just create your tables and hook it up:
+
+```ruby
+# db/migrate/YYYYMMDDHHMMSS_create_chats.rb
+class CreateChats < ActiveRecord::Migration[8.0]
+  def change
+    create_table :chats do |t|
+      t.string :model_id
+      t.timestamps
+    end
+  end
+end
+
+# db/migrate/YYYYMMDDHHMMSS_create_messages.rb
+class CreateMessages < ActiveRecord::Migration[8.0]
+  def change
+    create_table :messages do |t|
+      t.references :chat
+      t.string :role
+      t.text :content
+      t.json :tool_calls
+      t.string :tool_call_id
+      t.integer :input_tokens
+      t.integer :output_tokens
+      t.string :model_id
+      t.timestamps
+    end
+  end
+end
+```
+
+Then in your models:
+
+```ruby
+class Chat < ApplicationRecord
+  acts_as_chat message_class: "Message"
+
+  # Optional: Add Turbo Streams support
+  broadcasts_to ->(chat) { "chat_#{chat.id}" }
+end
+
+class Message < ApplicationRecord
+  acts_as_message chat_class: "Chat"
+end
+```
+
+That's it! Now you can use chats straight from your models:
+
+```ruby
+# Create a new chat
+chat = Chat.create!(model_id: "gpt-4")
+
+# Ask questions - messages are automatically saved
+chat.ask "What's the weather in Paris?"
+
+# Stream responses in real-time
+chat.ask "Tell me a story" do |chunk|
+  broadcast_chunk(chunk)
+end
+
+# Everything is persisted automatically
+chat.messages.each do |message|
+  case message.role
+  when :user
+    puts "User: #{message.content}"
+  when :assistant
+    puts "Assistant: #{message.content}"
+  end
+end
+```
+
+### Real-time Updates with Hotwire
 
-## Coming Soon
+The Rails integration works great with Hotwire out of the box:
+
+```ruby
+# app/controllers/chats_controller.rb
+class ChatsController < ApplicationController
+  def show
+    @chat = Chat.find(params[:id])
+  end
+
+  def ask
+    @chat = Chat.find(params[:id])
+    @chat.ask(params[:message]) do |chunk|
+      Turbo::StreamsChannel.broadcast_append_to(
+        @chat,
+        target: "messages",
+        partial: "messages/chunk",
+        locals: { chunk: chunk }
+      )
+    end
+  end
+end
+
+# app/views/chats/show.html.erb
+<%= turbo_stream_from @chat %>
+
+<div id="messages">
+  <%= render @chat.messages %>
+</div>
+
+<%= form_with(url: ask_chat_path(@chat), local: false) do |f| %>
+  <%= f.text_area :message %>
+  <%= f.submit "Send" %>
+<% end %>
+```
+
+### Background Jobs
+
+The persistence works seamlessly with background jobs:
+
+```ruby
+class ChatJob < ApplicationJob
+  def perform(chat_id, message)
+    chat = Chat.find(chat_id)
+
+    chat.ask(message) do |chunk|
+      # Optional: Broadcast chunks for real-time updates
+      Turbo::StreamsChannel.broadcast_append_to(
+        chat,
+        target: "messages",
+        partial: "messages/chunk",
+        locals: { chunk: chunk }
+      )
+    end
+  end
+end
+```
+
+### Using Tools
+
+Tools work just like they do in regular RubyLLM chats:
+
+```ruby
+class WeatherTool < RubyLLM::Tool
+  description "Gets current weather for a location"
+
+  param :location,
+    type: :string,
+    desc: "City name or coordinates"
+
+  def execute(location:)
+    # Fetch weather data...
+    { temperature: 22, conditions: "Sunny" }
+  end
+end
+
+# Use tools with your persisted chats
+chat = Chat.create!(model_id: "gpt-4")
+chat.chat.with_tool(WeatherTool.new)
+
+# Ask about weather - tool usage is automatically saved
+chat.ask "What's the weather in Paris?"
+
+# Tool calls and results are persisted as messages
+pp chat.messages.map(&:role)
+#=> [:user, :assistant, :tool, :assistant]
+```
 
-- Rails integration for seamless database and Active Record support
-- Automatic retries and error handling
-- Much more!
+Looking for more examples? Check out the [example Rails app](https://github.com/example/ruby_llm_rails) showing these patterns in action!
 
 ## Development
 
data/lib/ruby_llm/active_record/acts_as.rb ADDED
@@ -0,0 +1,100 @@
+# frozen_string_literal: true
+
+module RubyLLM
+  module ActiveRecord
+    # Adds chat and message persistence capabilities to ActiveRecord models.
+    # Provides a clean interface for storing chat history and message metadata
+    # in your database.
+    module ActsAs
+      extend ActiveSupport::Concern
+
+      class_methods do
+        def acts_as_chat(message_class:)
+          include ChatMethods
+
+          has_many :messages,
+                   -> { order(created_at: :asc) },
+                   class_name: message_class.to_s,
+                   dependent: :destroy
+
+          # No more callback config - just expose the core chat functionality
+          delegate :ask, :say, :complete, to: :chat
+        end
+
+        def acts_as_message(chat_class:)
+          include MessageMethods
+
+          belongs_to :chat, class_name: chat_class.to_s
+
+          serialize :tool_calls, coder: JSON
+        end
+      end
+    end
+
+    # Methods mixed into chat models to handle message persistence and
+    # provide a conversation interface.
+    module ChatMethods
+      extend ActiveSupport::Concern
+
+      def chat
+        @chat ||= begin
+          chat = RubyLLM.chat(model: model_id)
+
+          # Load existing messages into chat
+          messages.each do |msg|
+            chat.add_message(msg.to_llm)
+          end
+
+          # Set up message persistence
+          chat.on_new_message { |msg| persist_new_message(msg) }
+              .on_end_message { |msg| persist_message_completion(msg) }
+
+          chat
+        end
+      end
+
+      private
+
+      def persist_new_message(message)
+        return unless message
+
+        messages.create!(
+          role: message.role,
+          content: message.content,
+          tool_calls: message.tool_calls,
+          tool_call_id: message.tool_call_id,
+          model_id: message.model_id
+        )
+      end
+
+      def persist_message_completion(message)
+        return unless message
+
+        messages.last.update!(
+          content: message.content,
+          tool_calls: message.tool_calls,
+          input_tokens: message.input_tokens,
+          output_tokens: message.output_tokens
+        )
+      end
+    end
+
+    # Methods mixed into message models to handle serialization and
+    # provide a clean interface to the underlying message data.
+    module MessageMethods
+      extend ActiveSupport::Concern
+
+      def to_llm
+        RubyLLM::Message.new(
+          role: role.to_sym,
+          content: content,
+          tool_calls: tool_calls,
+          tool_call_id: tool_call_id,
+          input_tokens: input_tokens,
+          output_tokens: output_tokens,
+          model_id: model_id
+        )
+      end
+    end
+  end
+end
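Note on the `serialize :tool_calls, coder: JSON` line in acts_as.rb above: the `coder:` keyword form of `serialize` ships with Rails 7.1+. A minimal sketch of the two forms, assuming an app pinned to older Rails (the second line is not part of this gem):

```ruby
# Rails 7.1+ keyword form, as used in acts_as.rb above
serialize :tool_calls, coder: JSON

# Rails < 7.1 equivalent (positional coder)
serialize :tool_calls, JSON
```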
data/lib/ruby_llm/chat.rb CHANGED
@@ -1,6 +1,13 @@
 # frozen_string_literal: true
 
 module RubyLLM
+  # Represents a conversation with an AI model. Handles message history,
+  # streaming responses, and tool integration with a simple, conversational API.
+  #
+  # Example:
+  #   chat = RubyLLM.chat
+  #   chat.ask "What's the best way to learn Ruby?"
+  #   chat.ask "Can you elaborate on that?"
   class Chat
     include Enumerable
 
@@ -8,12 +15,14 @@ module RubyLLM
 
     def initialize(model: nil)
      model_id = model || RubyLLM.config.default_model
-      @model = Models.find model_id
-      @provider = Models.provider_for model_id
+      self.model = model_id
+      @temperature = 0.7
      @messages = []
      @tools = {}
-
-      ensure_valid_tools
+      @on = {
+        new_message: nil,
+        end_message: nil
+      }
    end
 
    def ask(message, &block)
@@ -26,7 +35,7 @@ module RubyLLM
    def with_tool(tool)
      raise Error, "Model #{@model.id} doesn't support function calling" unless @model.supports_functions
 
-      tool_instance = tool.is_a?(Class) ? tool.to_tool : tool
+      tool_instance = tool.is_a?(Class) ? tool.new : tool
      @tools[tool_instance.name.to_sym] = tool_instance
      self
    end
@@ -36,26 +45,51 @@ module RubyLLM
      self
    end
 
+    def model=(model_id)
+      @model = Models.find model_id
+      @provider = Models.provider_for model_id
+    end
+
+    def with_model(model_id)
+      self.model = model_id
+      self
+    end
+
+    def with_temperature(temperature)
+      @temperature = temperature
+      self
+    end
+
+    def on_new_message(&block)
+      @on[:new_message] = block
+      self
+    end
+
+    def on_end_message(&block)
+      @on[:end_message] = block
+      self
+    end
+
    def each(&block)
      messages.each(&block)
    end
 
-    private
-
    def complete(&block)
-      response = @provider.complete(messages, tools: @tools, model: @model.id, &block)
+      @on[:new_message]&.call
+      response = @provider.complete(messages, tools: @tools, temperature: @temperature, model: @model.id, &block)
+      @on[:end_message]&.call(response)
 
+      add_message response
      if response.tool_call?
        handle_tool_calls response, &block
      else
-        add_message response
        response
      end
    end
 
-    def handle_tool_calls(response, &block)
-      add_message response
+    private
 
+    def handle_tool_calls(response, &block)
      response.tool_calls.each_value do |tool_call|
        result = execute_tool tool_call
        add_tool_result tool_call.id, result if result
@@ -72,6 +106,7 @@ module RubyLLM
 
    def add_message(message_or_attributes)
      message = message_or_attributes.is_a?(Message) ? message_or_attributes : Message.new(message_or_attributes)
+      # TODO: callback
      messages << message
      message
    end
@@ -83,13 +118,5 @@ module RubyLLM
        tool_call_id: tool_use_id
      )
    end
-
-    def ensure_valid_tools
-      tools.each_key do |name|
-        unless name.is_a?(Symbol) && tools[name].is_a?(RubyLLM::Tool)
-          raise Error, 'Tools should be of the format {<name.to_sym>: <RubyLLM::Tool>}'
-        end
-      end
-    end
  end
 end
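The new `on_new_message`/`on_end_message` hooks are what the Rails layer builds on: `complete` invokes the first with no arguments before calling the provider, then passes the finished response to the second. A minimal sketch of this two-phase pattern outside ActiveRecord (the `MemoryStore` class is hypothetical, not part of the gem):

```ruby
# Hypothetical in-memory store showing the two-phase persistence pattern:
# open a placeholder row when a reply starts, fill it in when it completes.
class MemoryStore
  def initialize
    @rows = []
  end

  def start_row
    @rows << { content: nil, output_tokens: nil }
  end

  def finish_row(message)
    @rows.last.merge!(content: message.content, output_tokens: message.output_tokens)
  end
end

store = MemoryStore.new
chat = RubyLLM.chat
              .with_temperature(0.2)
              .on_new_message { store.start_row }             # invoked with no arguments
              .on_end_message { |msg| store.finish_row(msg) } # invoked with the response
chat.ask "Hello!"
```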
data/lib/ruby_llm/configuration.rb CHANGED
@@ -1,6 +1,14 @@
 # frozen_string_literal: true
 
 module RubyLLM
+  # Global configuration for RubyLLM. Manages API keys, default models,
+  # and provider-specific settings.
+  #
+  # Configure via:
+  #   RubyLLM.configure do |config|
+  #     config.openai_api_key = ENV['OPENAI_API_KEY']
+  #     config.anthropic_api_key = ENV['ANTHROPIC_API_KEY']
+  #   end
   class Configuration
     attr_accessor :openai_api_key, :anthropic_api_key, :default_model, :request_timeout
 
@@ -8,17 +16,5 @@ module RubyLLM
      @request_timeout = 30
      @default_model = 'gpt-4o-mini'
    end
-
-    def provider_settings
-      @provider_settings ||= {
-        openai: ProviderSettings.new,
-        anthropic: ProviderSettings.new
-      }
-    end
-  end
-
-  # Settings specific to individual LLM providers
-  class ProviderSettings
-    attr_accessor :api_key, :api_version, :default_model, :base_url
  end
 end
data/lib/ruby_llm/message.rb CHANGED
@@ -1,6 +1,9 @@
 # frozen_string_literal: true
 
 module RubyLLM
+  # A single message in a chat conversation. Can represent user input,
+  # AI responses, or tool interactions. Tracks token usage and handles
+  # the complexities of tool calls and responses.
   class Message
     ROLES = %i[system user assistant tool].freeze
 
@@ -26,7 +29,7 @@ module RubyLLM
      !tool_call_id.nil? && !tool_call_id.empty?
    end
 
-    def tool_result
+    def tool_results
      content if tool_result?
    end
 
data/lib/ruby_llm/model_capabilities/anthropic.rb CHANGED
@@ -2,6 +2,7 @@
 
 module RubyLLM
   module ModelCapabilities
+    # Determines capabilities and pricing for Anthropic models
     module Anthropic
       extend self
 
data/lib/ruby_llm/model_capabilities/openai.rb CHANGED
@@ -2,6 +2,7 @@
 
 module RubyLLM
   module ModelCapabilities
+    # Determines capabilities and pricing for OpenAI models
     module OpenAI
       extend self
 
@@ -56,16 +57,15 @@ module RubyLLM
 
      private
 
-      def model_family(model_id)
+      def model_family(model_id) # rubocop:disable Metrics/CyclomaticComplexity
        case model_id
-        when /o1-2024/ then :o1
-        when /o1-mini/ then :o1_mini
-        when /gpt-4o-realtime-preview/ then :gpt4o_realtime
-        when /gpt-4o-mini-realtime/ then :gpt4o_mini_realtime
-        when /gpt-4o-mini/ then :gpt4o_mini
-        when /gpt-4o/ then :gpt4o
-        when /gpt-4-turbo/ then :gpt4_turbo
-        when /gpt-3.5/ then :gpt35
+        when /o1-2024/ then :o1
+        when /o1-mini/ then :o1_mini
+        when /gpt-4o-realtime/ then :gpt4o_realtime
+        when /gpt-4o-mini-realtime/ then :gpt4o_mini_realtime
+        when /gpt-4o-mini/ then :gpt4o_mini
+        when /gpt-4o/ then :gpt4o
+        when /gpt-4-turbo/ then :gpt4_turbo
        else :gpt35
        end
      end
@@ -96,7 +96,7 @@ module RubyLLM
          .join(' ')
      end
 
-      def apply_special_formatting(name)
+      def apply_special_formatting(name) # rubocop:disable Metrics/MethodLength
        name
          .gsub(/(\d{4}) (\d{2}) (\d{2})/, '\1\2\3')
          .gsub(/^Gpt /, 'GPT-')
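Branch order in the reworked `model_family` matters, since the first matching pattern wins: the realtime and mini patterns must come before the plain `/gpt-4o/` prefix would swallow them, and anything unmatched (including GPT-3.5 models, whose explicit branch was dropped) falls through to `:gpt35`. Illustrative expectations (`model_family` is a private module method, shown here conceptually):

```ruby
model_family('gpt-4o-mini-realtime-preview') # => :gpt4o_mini_realtime
model_family('gpt-4o-2024-08-06')            # => :gpt4o
model_family('gpt-3.5-turbo')                # => :gpt35 (via the else branch)
```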
data/lib/ruby_llm/model_info.rb CHANGED
@@ -3,12 +3,21 @@
 require 'time'
 
 module RubyLLM
+  # Information about an AI model's capabilities, pricing, and metadata.
+  # Used by the Models registry to help developers choose the right model
+  # for their needs.
+  #
+  # Example:
+  #   model = RubyLLM.models.find('gpt-4')
+  #   model.supports_vision? # => true
+  #   model.supports_functions? # => true
+  #   model.input_price_per_million # => 30.0
   class ModelInfo
     attr_reader :id, :created_at, :display_name, :provider, :metadata,
                 :context_window, :max_tokens, :supports_vision, :supports_functions,
                 :supports_json_mode, :input_price_per_million, :output_price_per_million
 
-    def initialize(data)
+    def initialize(data) # rubocop:disable Metrics/AbcSize,Metrics/MethodLength
      @id = data[:id]
      @created_at = data[:created_at].is_a?(String) ? Time.parse(data[:created_at]) : data[:created_at]
      @display_name = data[:display_name]
@@ -23,7 +32,7 @@ module RubyLLM
      @metadata = data[:metadata] || {}
    end
 
-    def to_h
+    def to_h # rubocop:disable Metrics/MethodLength
      {
        id: id,
        created_at: created_at.iso8601,
data/lib/ruby_llm/models.rb CHANGED
@@ -1,6 +1,13 @@
 # frozen_string_literal: true
 
 module RubyLLM
+  # Registry of available AI models and their capabilities. Provides a clean interface
+  # to discover and work with models from different providers.
+  #
+  # Example:
+  #   RubyLLM.models.all               # All available models
+  #   RubyLLM.models.chat_models       # Models that support chat
+  #   RubyLLM.models.find('claude-3')  # Get info about a specific model
   module Models
     module_function
 
data/lib/ruby_llm/provider.rb CHANGED
@@ -1,14 +1,19 @@
 # frozen_string_literal: true
 
 module RubyLLM
+  # Base interface for LLM providers like OpenAI and Anthropic.
+  # Handles the complexities of API communication, streaming responses,
+  # and error handling so individual providers can focus on their unique features.
   module Provider
     def self.included(base)
       base.include(InstanceMethods)
     end
 
+    # Common functionality for all LLM providers. Implements the core provider
+    # interface so specific providers only need to implement a few key methods.
     module InstanceMethods
-      def complete(messages, tools: [], model: nil, &block)
-        payload = build_payload messages, tools, model: model, stream: block_given?
+      def complete(messages, tools:, temperature:, model:, &block)
+        payload = build_payload messages, tools: tools, temperature: temperature, model: model, stream: block_given?
 
        if block_given?
          stream_response payload, &block
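This signature change is breaking for any provider implemented outside the gem: `complete` (and therefore `build_payload`) now takes `tools:`, `temperature:` and `model:` as required keywords, and `tools` arrives as the chat's name-keyed hash rather than an array. A sketch of the new call shape, with illustrative values:

```ruby
# New call shape (values are illustrative; tools is a Hash keyed by tool
# name, exactly what Chat#complete passes as @tools):
provider.complete(
  messages,
  tools: { calculator: calculator_tool },
  temperature: 0.7,
  model: 'gpt-4o-mini'
)
```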
data/lib/ruby_llm/providers/anthropic.rb CHANGED
@@ -2,7 +2,9 @@
 
 module RubyLLM
   module Providers
-    class Anthropic
+    # Anthropic Claude API integration. Handles the complexities of
+    # Claude's unique message format and tool calling conventions.
+    class Anthropic # rubocop:disable Metrics/ClassLength
       include Provider
 
       private
@@ -26,7 +28,7 @@ module RubyLLM
        '/v1/models'
      end
 
-      def build_payload(messages, tools, model:, temperature: 0.7, stream: false)
+      def build_payload(messages, tools:, temperature:, model:, stream: false)
        {
          model: model,
          messages: format_messages(messages),
@@ -34,7 +36,7 @@ module RubyLLM
          stream: stream,
          max_tokens: RubyLLM.models.find(model).max_tokens
        }.tap do |payload|
-          payload[:tools] = tools.map { |t| function_for(t) } if tools.any?
+          payload[:tools] = tools.values.map { |t| function_for(t) } if tools.any?
        end
      end
 
@@ -42,32 +44,33 @@ module RubyLLM
        data = response.body
        content_blocks = data['content'] || []
 
-        text_content = content_blocks.find { |c| c['type'] == 'text' }&.fetch('text', '')
-        tool_use = content_blocks.find { |c| c['type'] == 'tool_use' }
-
-        if tool_use
-          Message.new(
-            role: :assistant,
-            content: text_content,
-            tool_calls: [
-              {
-                name: tool_use['name'],
-                arguments: JSON.generate(tool_use['input'] || {})
-              }
-            ]
-          )
-        else
-          Message.new(
-            role: :assistant,
-            content: text_content,
-            input_tokens: data['usage']['input_tokens'],
-            output_tokens: data['usage']['output_tokens'],
-            model_id: data['model']
-          )
-        end
+        text_content = extract_text_content(content_blocks)
+        tool_use = find_tool_use(content_blocks)
+
+        build_message(data, text_content, tool_use)
+      end
+
+      def extract_text_content(blocks)
+        text_blocks = blocks.select { |c| c['type'] == 'text' }
+        text_blocks.map { |c| c['text'] }.join('')
      end
 
-      def parse_models_response(response)
+      def find_tool_use(blocks)
+        blocks.find { |c| c['type'] == 'tool_use' }
+      end
+
+      def build_message(data, content, tool_use)
+        Message.new(
+          role: :assistant,
+          content: content,
+          tool_calls: parse_tool_calls(tool_use),
+          input_tokens: data.dig('usage', 'input_tokens'),
+          output_tokens: data.dig('usage', 'output_tokens'),
+          model_id: data['model']
+        )
+      end
+
+      def parse_models_response(response) # rubocop:disable Metrics/AbcSize,Metrics/MethodLength
        capabilities = ModelCapabilities::Anthropic.new
 
        (response.body['data'] || []).map do |model|
@@ -90,18 +93,57 @@ module RubyLLM
 
      def handle_stream(&block)
        to_json_stream do |data|
-          block.call(
-            Chunk.new(
-              role: :assistant,
-              model_id: data.dig('message', 'model'),
-              content: data.dig('delta', 'text'),
-              input_tokens: data.dig('message', 'usage', 'input_tokens'),
-              output_tokens: data.dig('message', 'usage', 'output_tokens') || data.dig('usage', 'output_tokens')
-            )
-          )
+          block.call(build_chunk(data))
+        end
+      end
+
+      def build_chunk(data)
+        Chunk.new(
+          role: :assistant,
+          model_id: extract_model_id(data),
+          content: data.dig('delta', 'text'),
+          input_tokens: extract_input_tokens(data),
+          output_tokens: extract_output_tokens(data),
+          tool_calls: extract_tool_calls(data)
+        )
+      end
+
+      def extract_model_id(data)
+        data.dig('message', 'model')
+      end
+
+      def extract_input_tokens(data)
+        data.dig('message', 'usage', 'input_tokens')
+      end
+
+      def extract_output_tokens(data)
+        data.dig('message', 'usage', 'output_tokens') || data.dig('usage', 'output_tokens')
+      end
+
+      def extract_tool_calls(data)
+        if json_delta?(data)
+          { nil => ToolCall.new(id: nil, name: nil, arguments: data.dig('delta', 'partial_json')) }
+        else
+          parse_tool_calls(data['content_block'])
        end
      end
 
+      def json_delta?(data)
+        data['type'] == 'content_block_delta' && data.dig('delta', 'type') == 'input_json_delta'
+      end
+
+      def parse_tool_calls(content_block)
+        return nil unless content_block && content_block['type'] == 'tool_use'
+
+        {
+          content_block['id'] => ToolCall.new(
+            id: content_block['id'],
+            name: content_block['name'],
+            arguments: content_block['input']
+          )
+        }
+      end
+
      def function_for(tool)
        {
          name: tool.name,
@@ -115,43 +157,88 @@ module RubyLLM
      end
 
      def format_messages(messages)
-        messages.map do |msg|
-          if msg.tool_results
-            {
-              role: convert_role(msg.role),
-              content: [
-                {
-                  type: 'tool_result',
-                  tool_use_id: msg.tool_results[:tool_use_id],
-                  content: msg.tool_results[:content],
-                  is_error: msg.tool_results[:is_error]
-                }.compact
-              ]
-            }
-          else
-            {
-              role: convert_role(msg.role),
-              content: msg.content
-            }
-          end
+        messages.map { |msg| format_message(msg) }
+      end
+
+      def format_message(msg)
+        if msg.tool_call?
+          format_tool_call(msg)
+        elsif msg.tool_result?
+          format_tool_result(msg)
+        else
+          format_basic_message(msg)
        end
      end
 
+      def format_tool_call(msg)
+        tool_call = msg.tool_calls.values.first
+
+        {
+          role: 'assistant',
+          content: [
+            format_text_block(msg.content),
+            format_tool_use_block(tool_call)
+          ]
+        }
+      end
+
+      def format_tool_result(msg)
+        {
+          role: 'user',
+          content: [format_tool_result_block(msg)]
+        }
+      end
+
+      def format_basic_message(msg)
+        {
+          role: convert_role(msg.role),
+          content: msg.content
+        }
+      end
+
+      def format_text_block(content)
+        {
+          type: 'text',
+          text: content
+        }
+      end
+
+      def format_tool_use_block(tool_call)
+        {
+          type: 'tool_use',
+          id: tool_call.id,
+          name: tool_call.name,
+          input: tool_call.arguments
+        }
+      end
+
+      def format_tool_result_block(msg)
+        {
+          type: 'tool_result',
+          tool_use_id: msg.tool_call_id,
+          content: msg.content
+        }
+      end
+
      def convert_role(role)
        case role
+        when :tool then 'user'
        when :user then 'user'
        else 'assistant'
        end
      end
 
      def clean_parameters(parameters)
-        parameters.transform_values do |props|
-          props.except(:required)
+        parameters.transform_values do |param|
+          {
+            type: param.type,
+            description: param.description
+          }.compact
        end
      end
 
      def required_parameters(parameters)
-        parameters.select { |_, props| props[:required] }.keys
+        parameters.select { |_, param| param.required }.keys
      end
    end
  end
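For reference, this is the shape of the Anthropic `tool_use` content block that the new `parse_tool_calls` consumes, and the hash it returns (the id and values here are made up):

```ruby
# Illustrative tool_use content block, per the parsing code above:
content_block = {
  'type'  => 'tool_use',
  'id'    => 'toolu_abc123',
  'name'  => 'calculator',
  'input' => { 'expression' => '2 + 2' }
}

# parse_tool_calls keys the resulting ToolCall by the block's id:
# { 'toolu_abc123' => ToolCall.new(id: 'toolu_abc123',
#                                  name: 'calculator',
#                                  arguments: { 'expression' => '2 + 2' }) }
```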
data/lib/ruby_llm/providers/openai.rb CHANGED
@@ -2,7 +2,10 @@
 
 module RubyLLM
   module Providers
-    class OpenAI
+    # OpenAI API integration. Handles chat completion, function calling,
+    # and OpenAI's unique streaming format. Supports GPT-4, GPT-3.5,
+    # and other OpenAI models.
+    class OpenAI # rubocop:disable Metrics/ClassLength
       include Provider
 
       private
@@ -25,7 +28,7 @@ module RubyLLM
        '/v1/models'
      end
 
-      def build_payload(messages, tools, model:, temperature: 0.7, stream: false)
+      def build_payload(messages, tools:, temperature:, model:, stream: false) # rubocop:disable Metrics/MethodLength
        {
          model: model,
          messages: format_messages(messages),
@@ -50,7 +53,7 @@ module RubyLLM
        end
      end
 
-      def format_tool_calls(tool_calls)
+      def format_tool_calls(tool_calls) # rubocop:disable Metrics/MethodLength
        return nil unless tool_calls&.any?
 
        tool_calls.map do |_, tc|
@@ -65,7 +68,7 @@ module RubyLLM
        end
      end
 
-      def tool_for(tool)
+      def tool_for(tool) # rubocop:disable Metrics/MethodLength
        {
          type: 'function',
          function: {
@@ -87,7 +90,7 @@ module RubyLLM
        }.compact
      end
 
-      def parse_completion_response(response)
+      def parse_completion_response(response) # rubocop:disable Metrics/MethodLength
        data = response.body
        return if data.empty?
 
@@ -104,7 +107,7 @@ module RubyLLM
        )
      end
 
-      def parse_tool_calls(tool_calls, parse_arguments: true)
+      def parse_tool_calls(tool_calls, parse_arguments: true) # rubocop:disable Metrics/MethodLength
        return nil unless tool_calls&.any?
 
        tool_calls.to_h do |tc|
@@ -119,7 +122,7 @@ module RubyLLM
        end
      end
 
-      def parse_models_response(response)
+      def parse_models_response(response) # rubocop:disable Metrics/MethodLength
        (response.body['data'] || []).map do |model|
          model_info = begin
            Models.find(model['id'])
@@ -150,7 +153,7 @@ module RubyLLM
        end
      end
 
-      def parse_list_models_response(response)
+      def parse_list_models_response(response) # rubocop:disable Metrics/AbcSize,Metrics/MethodLength
        capabilities = ModelCapabilities::OpenAI
        (response.body['data'] || []).map do |model|
          ModelInfo.new(
data/lib/ruby_llm/stream_accumulator.rb CHANGED
@@ -1,6 +1,9 @@
 # frozen_string_literal: true
 
 module RubyLLM
+  # Assembles streaming responses from LLMs into complete messages.
+  # Handles the complexities of accumulating content and tool calls
+  # from partial chunks while tracking token usage.
   class StreamAccumulator
     attr_reader :content, :model_id, :tool_calls
 
@@ -49,7 +52,7 @@ module RubyLLM
      end
    end
 
-    def accumulate_tool_calls(new_tool_calls)
+    def accumulate_tool_calls(new_tool_calls) # rubocop:disable Metrics/MethodLength
      new_tool_calls.each_value do |tool_call|
        if tool_call.id
          @tool_calls[tool_call.id] = ToolCall.new(
data/lib/ruby_llm/tool.rb CHANGED
@@ -1,6 +1,8 @@
 # frozen_string_literal: true
 
 module RubyLLM
+  # Parameter definition for Tool methods. Specifies type constraints,
+  # descriptions, and whether parameters are required.
   class Parameter
     attr_reader :name, :type, :description, :required
 
@@ -12,6 +14,18 @@ module RubyLLM
    end
  end
 
+  # Base class for creating tools that AI models can use. Provides a simple
+  # interface for defining parameters and implementing tool behavior.
+  #
+  # Example:
+  #   class Calculator < RubyLLM::Tool
+  #     description "Performs arithmetic calculations"
+  #     param :expression, type: :string, desc: "Math expression to evaluate"
+  #
+  #     def execute(expression:)
+  #       eval(expression).to_s
+  #     end
+  #   end
  class Tool
    class << self
      def description(text = nil)
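The `with_tool` change in chat.rb above (tool classes are now instantiated with `tool.new` rather than converted via `to_tool`) is what lets the README pass `Calculator` directly. The Tool subclass from the class comment, exercised on its own as a sketch:

```ruby
# The Calculator example from the class comment above, exercised directly:
class Calculator < RubyLLM::Tool
  description "Performs arithmetic calculations"
  param :expression, type: :string, desc: "Math expression to evaluate"

  def execute(expression:)
    eval(expression).to_s # illustrative only - eval is unsafe on untrusted input
  end
end

Calculator.new.execute(expression: '2 + 2') # => "4"
```

Note that the providers now read these `param` declarations as `Parameter` objects (`param.type`, `param.description`, `param.required` in the Anthropic `clean_parameters`/`required_parameters` hunks above) instead of plain hashes.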
data/lib/ruby_llm/tool_call.rb CHANGED
@@ -1,6 +1,16 @@
 # frozen_string_literal: true
 
 module RubyLLM
+  # Represents a function call from an AI model to a Tool.
+  # Encapsulates the function name, arguments, and execution results
+  # in a clean Ruby interface.
+  #
+  # Example:
+  #   tool_call = ToolCall.new(
+  #     id: "call_123",
+  #     name: "calculator",
+  #     arguments: { expression: "2 + 2" }
+  #   )
   class ToolCall
     attr_reader :id, :name, :arguments
 
data/lib/ruby_llm/version.rb CHANGED
@@ -1,5 +1,5 @@
 # frozen_string_literal: true
 
 module RubyLLM
-  VERSION = '0.1.0.pre12'
+  VERSION = '0.1.0.pre14'
 end
data/lib/ruby_llm.rb CHANGED
@@ -7,6 +7,9 @@ require 'logger'
 require 'event_stream_parser'
 require 'securerandom'
 
+# A delightful Ruby interface to modern AI language models.
+# Provides a unified way to interact with models from OpenAI, Anthropic and others
+# with a focus on developer happiness and convention over configuration.
 module RubyLLM
   class Error < StandardError; end
 
data/ruby_llm.gemspec CHANGED
@@ -15,7 +15,7 @@ Gem::Specification.new do |spec|
                     ' works.'
   spec.homepage = 'https://github.com/crmne/ruby_llm'
   spec.license = 'MIT'
-  spec.required_ruby_version = Gem::Requirement.new('>= 2.7.0')
+  spec.required_ruby_version = Gem::Requirement.new('>= 3.1.0')
 
   spec.metadata['homepage_uri'] = spec.homepage
   spec.metadata['source_code_uri'] = spec.homepage
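The jump from Ruby 2.7 to 3.1 lines up with the new `TargetRubyVersion: 3.1` in .rubocop.yml and with the README's use of hash value omission, which is a Ruby 3.1 feature:

```ruby
# Ruby 3.1+ hash/keyword value omission, as used in the README example:
@repo.similarity_search(query, limit:) # shorthand for limit: limit
```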
metadata CHANGED
@@ -1,14 +1,14 @@
 --- !ruby/object:Gem::Specification
 name: ruby_llm
 version: !ruby/object:Gem::Version
-  version: 0.1.0.pre12
+  version: 0.1.0.pre14
 platform: ruby
 authors:
 - Carmine Paolino
 autorequire:
 bindir: exe
 cert_chain: []
-date: 2025-02-03 00:00:00.000000000 Z
+date: 2025-02-04 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: event_stream_parser
@@ -344,6 +344,7 @@ files:
 - bin/console
 - bin/setup
 - lib/ruby_llm.rb
+- lib/ruby_llm/active_record/acts_as.rb
 - lib/ruby_llm/chat.rb
 - lib/ruby_llm/chunk.rb
 - lib/ruby_llm/configuration.rb
@@ -380,7 +381,7 @@ required_ruby_version: !ruby/object:Gem::Requirement
   requirements:
   - - ">="
     - !ruby/object:Gem::Version
-      version: 2.7.0
+      version: 3.1.0
 required_rubygems_version: !ruby/object:Gem::Requirement
   requirements:
   - - ">="