ruby_llm 0.1.0.pre13 → 0.1.0.pre15

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 325d22d60cd9f0a4157670442830fe2473e8e46879ee672d0e56207f6b3b8e72
-  data.tar.gz: 355be22334837f40ac717974058d4cceb4d46dd13e36a808c11410a9c4624dc0
+  metadata.gz: ca489465046f71efd3a80717b46a68cdcbc754527f4b5ccade15442d2a95854e
+  data.tar.gz: b1091dd99c8b96266b767b1be69ade14a88cffc0c3f2cd07c1bf20d9efebe6e2
 SHA512:
-  metadata.gz: a7ea97dbe171c7f62df442134b613c84ec277dffcdf1e86fafeed14a95c7ba2545b950d454153606e205f13ffd5a79a9e6e8158d1f053be51a915bbfb03d4806
-  data.tar.gz: bb20ec1351e7597550283b7b7b2b1dff50a26ceba1b123b5fabd8636150f2fae5eb935dac51153448895b7782cdbb85575c25a671064928a8c83465489bea6b9
+  metadata.gz: 8bd968d14ec94fb254ae21a3d253cbe8b952145d3626540bf92cc5ded13d660e8f45b651319d3591f6a6e6b137c02cb94540c2a00680c25c2d7d3ecf4d799bbf
+  data.tar.gz: d1a5e1ef82b95f5ebe59b27440d703e88d4332f47cb706da8c4f09487608d6de572e78a38756af938a0e0ed0dbf8dc0f4a45aa82ac1d32f444681c3172baf40a
@@ -28,8 +28,8 @@ jobs:
     - name: Install dependencies
       run: bundle install
 
-    # - name: Check code format
-    #   run: bundle exec rubocop
+    - name: Check code format
+      run: bundle exec rubocop
 
     # - name: Run tests
     #   run: bundle exec rspec
data/.overcommit.yml CHANGED
@@ -1,6 +1,6 @@
 PreCommit:
   RuboCop:
-    enabled: false
+    enabled: true
     auto_correct: true
     on_warn: fail # Treat all warnings as failures
 
data/README.md CHANGED
@@ -54,14 +54,13 @@ image_models = RubyLLM.models.image_models
 
 ## Having a Conversation
 
-Conversations are simple and natural, with automatic token counting built right in:
+Conversations are simple and natural:
 
 ```ruby
 chat = RubyLLM.chat model: 'claude-3-5-sonnet-20241022'
 
-# Single messages with token tracking
+# Ask questions
 response = chat.ask "What's your favorite Ruby feature?"
-puts "Response used #{response.input_tokens} input tokens and #{response.output_tokens} output tokens"
 
 # Multi-turn conversations just work
 chat.ask "Can you elaborate on that?"
@@ -72,7 +71,7 @@ chat.ask "Tell me a story about a Ruby programmer" do |chunk|
   print chunk.content
 end
 
-# Get token usage for the whole conversation from the last message
+# Check token usage
 last_message = chat.messages.last
 puts "Conversation used #{last_message.input_tokens} input tokens and #{last_message.output_tokens} output tokens"
 ```
@@ -123,9 +122,9 @@ chat = RubyLLM.chat.with_tool Calculator
 
 # Tools with dependencies are just regular Ruby objects
 search = Search.new repo: Document
-chat.with_tools search, CalculatorTool
+chat.with_tools search, Calculator
 
-# Need more control? Configure as needed
+# Configure as needed
 chat.with_model('claude-3-5-sonnet-20241022')
     .with_temperature(0.9)
 
@@ -136,9 +135,7 @@ chat.ask "Find documents about Ruby performance"
 # => "I found these relevant documents about Ruby performance..."
 ```
 
-Tools let you seamlessly integrate your Ruby code with AI capabilities. The model will automatically decide when to use your tools and handle the results appropriately.
-
-Need to debug a tool? RubyLLM automatically logs all tool calls and their results when debug logging is enabled:
+Need to debug a tool? RubyLLM automatically logs all tool calls:
 
 ```ruby
 ENV['RUBY_LLM_DEBUG'] = 'true'
@@ -148,13 +145,169 @@ chat.ask "What's 123 * 456?"
 # D, -- RubyLLM: Tool calculator returned: "56088"
 ```
 
-Create tools for anything - database queries, API calls, custom business logic - and let Claude use them naturally in conversation.
+## Rails Integration
+
+RubyLLM comes with built-in Rails support that makes it dead simple to persist your chats and messages. Just create your tables and hook it up:
+
+```ruby
+# db/migrate/YYYYMMDDHHMMSS_create_chats.rb
+class CreateChats < ActiveRecord::Migration[8.0]
+  def change
+    create_table :chats do |t|
+      t.string :model_id
+      t.timestamps
+    end
+  end
+end
+
+# db/migrate/YYYYMMDDHHMMSS_create_messages.rb
+class CreateMessages < ActiveRecord::Migration[8.0]
+  def change
+    create_table :messages do |t|
+      t.references :chat
+      t.string :role
+      t.text :content
+      t.json :tool_calls, default: {}
+      t.string :tool_call_id
+      t.integer :input_tokens
+      t.integer :output_tokens
+      t.string :model_id
+      t.timestamps
+    end
+  end
+end
+```
+
+Then in your models:
+
+```ruby
+class Chat < ApplicationRecord
+  acts_as_chat
+
+  # Optional: Add Turbo Streams support
+  broadcasts_to ->(chat) { "chat_#{chat.id}" }
+end
+
+class Message < ApplicationRecord
+  acts_as_message
+end
+```
+
+That's it! Now you can use chats straight from your models:
+
+```ruby
+# Create a new chat
+chat = Chat.create!(model_id: "gpt-4")
+
+# Ask questions - messages are automatically saved
+chat.ask "What's the weather in Paris?"
+
+# Stream responses in real-time
+chat.ask "Tell me a story" do |chunk|
+  broadcast_chunk(chunk)
+end
+
+# Everything is persisted automatically
+chat.messages.each do |message|
+  case message.role
+  when :user
+    puts "User: #{message.content}"
+  when :assistant
+    puts "Assistant: #{message.content}"
+  end
+end
+```
+
+### Real-time Updates with Hotwire
+
+The Rails integration works great with Hotwire out of the box:
 
-## Coming Soon
+```ruby
+# app/controllers/chats_controller.rb
+class ChatsController < ApplicationController
+  def show
+    @chat = Chat.find(params[:id])
+  end
+
+  def ask
+    @chat = Chat.find(params[:id])
+    @chat.ask(params[:message]) do |chunk|
+      Turbo::StreamsChannel.broadcast_append_to(
+        @chat,
+        target: "messages",
+        partial: "messages/chunk",
+        locals: { chunk: chunk }
+      )
+    end
+  end
+end
+
+# app/views/chats/show.html.erb
+<%= turbo_stream_from @chat %>
+
+<div id="messages">
+  <%= render @chat.messages %>
+</div>
+
+<%= form_with(url: ask_chat_path(@chat), local: false) do |f| %>
+  <%= f.text_area :message %>
+  <%= f.submit "Send" %>
+<% end %>
+```
+
+### Background Jobs
+
+The persistence works seamlessly with background jobs:
+
+```ruby
+class ChatJob < ApplicationJob
+  def perform(chat_id, message)
+    chat = Chat.find(chat_id)
+
+    chat.ask(message) do |chunk|
+      # Optional: Broadcast chunks for real-time updates
+      Turbo::StreamsChannel.broadcast_append_to(
+        chat,
+        target: "messages",
+        partial: "messages/chunk",
+        locals: { chunk: chunk }
+      )
+    end
+  end
+end
+```
+
+### Using Tools
+
+Tools work just like they do in regular RubyLLM chats:
+
+```ruby
+class WeatherTool < RubyLLM::Tool
+  description "Gets current weather for a location"
+
+  param :location,
+        type: :string,
+        desc: "City name or coordinates"
+
+  def execute(location:)
+    # Fetch weather data...
+    { temperature: 22, conditions: "Sunny" }
+  end
+end
+
+# Use tools with your persisted chats
+chat = Chat.create!(model_id: "gpt-4")
+chat.chat.with_tool(WeatherTool.new)
+
+# Ask about weather - tool usage is automatically saved
+chat.ask "What's the weather in Paris?"
+
+# Tool calls and results are persisted as messages
+pp chat.messages.map(&:role)
+#=> [:user, :assistant, :tool, :assistant]
+```
 
-- Rails integration for seamless database and Active Record support
-- Automatic retries and error handling
-- Much more!
+Looking for more examples? Check out the [example Rails app](https://github.com/example/ruby_llm_rails) showing these patterns in action!
 
 ## Development
 
data/lib/ruby_llm/active_record/acts_as.rb ADDED
@@ -0,0 +1,98 @@
+# frozen_string_literal: true
+
+module RubyLLM
+  module ActiveRecord
+    # Adds chat and message persistence capabilities to ActiveRecord models.
+    # Provides a clean interface for storing chat history and message metadata
+    # in your database.
+    module ActsAs
+      extend ActiveSupport::Concern
+
+      class_methods do
+        def acts_as_chat(message_class: 'Message')
+          include ChatMethods
+
+          has_many :messages,
+                   -> { order(created_at: :asc) },
+                   class_name: message_class.to_s,
+                   dependent: :destroy
+
+          delegate :complete, to: :chat
+        end
+
+        def acts_as_message(chat_class: 'Chat')
+          include MessageMethods
+
+          belongs_to :chat, class_name: chat_class.to_s
+        end
+      end
+    end
+
+    # Methods mixed into chat models to handle message persistence and
+    # provide a conversation interface.
+    module ChatMethods
+      extend ActiveSupport::Concern
+
+      def chat
+        chat = RubyLLM.chat(model: model_id)
+
+        # Load existing messages into chat
+        messages.each do |msg|
+          chat.add_message(msg.to_llm)
+        end
+
+        # Set up message persistence
+        chat.on_new_message { persist_new_message }
+            .on_end_message { |msg| persist_message_completion(msg) }
+
+        chat
+      end
+
+      def ask(message, &block)
+        message = { role: :user, content: message }
+        messages.create!(**message)
+        chat.complete(&block)
+      end
+
+      alias say ask
+
+      private
+
+      def persist_new_message
+        messages.create!
+      end
+
+      def persist_message_completion(message)
+        return unless message
+
+        messages.last.update!(
+          role: message.role,
+          content: message.content,
+          model_id: message.model_id,
+          tool_calls: message.tool_calls,
+          tool_call_id: message.tool_call_id,
+          input_tokens: message.input_tokens,
+          output_tokens: message.output_tokens
+        )
+      end
+    end
+
+    # Methods mixed into message models to handle serialization and
+    # provide a clean interface to the underlying message data.
+    module MessageMethods
+      extend ActiveSupport::Concern
+
+      def to_llm
+        RubyLLM::Message.new(
+          role: role.to_sym,
+          content: content,
+          tool_calls: tool_calls,
+          tool_call_id: tool_call_id,
+          input_tokens: input_tokens,
+          output_tokens: output_tokens,
+          model_id: model_id
+        )
+      end
+    end
+  end
+end
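
A note on the flow in `ChatMethods#chat` above: persistence is two-phase. `on_new_message` creates an empty row the moment a completion starts, and `on_end_message` fills it in once the full response arrives. A minimal sketch of the resulting lifecycle, assuming the README's `Chat`/`Message` models and no tool calls:

```ruby
chat_record = Chat.create!(model_id: 'gpt-4o-mini')

chat_record.ask "Hi there"
# 1. ask      -> messages.create!(role: :user, content: "Hi there")
# 2. complete -> on_new_message fires: messages.create! (empty assistant row)
# 3. response -> on_end_message fires: messages.last.update!(role:, content:, tokens, ...)

chat_record.messages.count #=> 2
```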
data/lib/ruby_llm/chat.rb CHANGED
@@ -1,6 +1,13 @@
 # frozen_string_literal: true
 
 module RubyLLM
+  # Represents a conversation with an AI model. Handles message history,
+  # streaming responses, and tool integration with a simple, conversational API.
+  #
+  # Example:
+  #   chat = RubyLLM.chat
+  #   chat.ask "What's the best way to learn Ruby?"
+  #   chat.ask "Can you elaborate on that?"
   class Chat
     include Enumerable
 
@@ -8,13 +15,14 @@ module RubyLLM
 
     def initialize(model: nil)
       model_id = model || RubyLLM.config.default_model
-      @model = Models.find model_id
-      @provider = Models.provider_for model_id
+      self.model = model_id
       @temperature = 0.7
       @messages = []
       @tools = {}
-
-      ensure_valid_tools
+      @on = {
+        new_message: nil,
+        end_message: nil
+      }
     end
 
     def ask(message, &block)
@@ -37,9 +45,13 @@
       self
     end
 
-    def with_model(model_id)
+    def model=(model_id)
       @model = Models.find model_id
       @provider = Models.provider_for model_id
+    end
+
+    def with_model(model_id)
+      self.model = model_id
       self
     end
 
@@ -48,14 +60,24 @@
       self
     end
 
+    def on_new_message(&block)
+      @on[:new_message] = block
+      self
+    end
+
+    def on_end_message(&block)
+      @on[:end_message] = block
+      self
+    end
+
     def each(&block)
      messages.each(&block)
     end
 
-    private
-
    def complete(&block)
-      response = @provider.complete messages, tools: @tools, temperature: @temperature, model: @model.id, &block
+      @on[:new_message]&.call
+      response = @provider.complete(messages, tools: @tools, temperature: @temperature, model: @model.id, &block)
+      @on[:end_message]&.call(response)
 
      add_message response
      if response.tool_call?
@@ -65,6 +87,14 @@
      end
    end
 
+    def add_message(message_or_attributes)
+      message = message_or_attributes.is_a?(Message) ? message_or_attributes : Message.new(message_or_attributes)
+      messages << message
+      message
+    end
+
+    private
+
    def handle_tool_calls(response, &block)
      response.tool_calls.each_value do |tool_call|
        result = execute_tool tool_call
@@ -80,12 +110,6 @@
      tool.call(args)
    end
 
-    def add_message(message_or_attributes)
-      message = message_or_attributes.is_a?(Message) ? message_or_attributes : Message.new(message_or_attributes)
-      messages << message
-      message
-    end
-
    def add_tool_result(tool_use_id, result)
      add_message(
        role: :tool,
@@ -93,13 +117,5 @@
        tool_call_id: tool_use_id
      )
    end
-
-    def ensure_valid_tools
-      tools.each_key do |name|
-        unless name.is_a?(Symbol) && tools[name].is_a?(RubyLLM::Tool)
-          raise Error, 'Tools should be of the format {<name.to_sym>: <RubyLLM::Tool>}'
-        end
-      end
-    end
  end
 end
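
The `on_new_message`/`on_end_message` hooks added here are what the Rails integration plugs into, but they work on any chat. A small illustration based only on the methods shown above (the logging is just for demonstration):

```ruby
chat = RubyLLM.chat

# Both hooks return self, so they chain like with_model and with_temperature.
chat.on_new_message { puts 'assistant reply started' }
    .on_end_message { |msg| puts "finished: #{msg&.input_tokens} in / #{msg&.output_tokens} out" }

chat.ask "Why is the sky blue?"
```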
data/lib/ruby_llm/configuration.rb CHANGED
@@ -1,6 +1,14 @@
 # frozen_string_literal: true
 
 module RubyLLM
+  # Global configuration for RubyLLM. Manages API keys, default models,
+  # and provider-specific settings.
+  #
+  # Configure via:
+  #   RubyLLM.configure do |config|
+  #     config.openai_api_key = ENV['OPENAI_API_KEY']
+  #     config.anthropic_api_key = ENV['ANTHROPIC_API_KEY']
+  #   end
   class Configuration
     attr_accessor :openai_api_key, :anthropic_api_key, :default_model, :request_timeout
 
@@ -8,17 +16,5 @@ module RubyLLM
       @request_timeout = 30
       @default_model = 'gpt-4o-mini'
     end
-
-    def provider_settings
-      @provider_settings ||= {
-        openai: ProviderSettings.new,
-        anthropic: ProviderSettings.new
-      }
-    end
-  end
-
-  # Settings specific to individual LLM providers
-  class ProviderSettings
-    attr_accessor :api_key, :api_version, :default_model, :base_url
   end
 end
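
With `ProviderSettings` gone, the entire configuration surface is the four accessors above, so a full configuration block stays small. A sketch using only what the diff shows (the timeout unit is presumably seconds, matching the default of 30):

```ruby
RubyLLM.configure do |config|
  config.openai_api_key    = ENV['OPENAI_API_KEY']
  config.anthropic_api_key = ENV['ANTHROPIC_API_KEY']
  config.default_model     = 'claude-3-5-sonnet-20241022' # falls back to 'gpt-4o-mini'
  config.request_timeout   = 60                           # default is 30
end
```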
data/lib/ruby_llm/message.rb CHANGED
@@ -1,6 +1,9 @@
 # frozen_string_literal: true
 
 module RubyLLM
+  # A single message in a chat conversation. Can represent user input,
+  # AI responses, or tool interactions. Tracks token usage and handles
+  # the complexities of tool calls and responses.
   class Message
     ROLES = %i[system user assistant tool].freeze
 
data/lib/ruby_llm/model_capabilities/anthropic.rb CHANGED
@@ -2,6 +2,7 @@
 
 module RubyLLM
   module ModelCapabilities
+    # Determines capabilities and pricing for Anthropic models
    module Anthropic
      extend self
 
data/lib/ruby_llm/model_capabilities/openai.rb CHANGED
@@ -2,6 +2,7 @@
 
 module RubyLLM
   module ModelCapabilities
+    # Determines capabilities and pricing for OpenAI models
    module OpenAI
      extend self
 
@@ -56,16 +57,15 @@ module RubyLLM
 
      private
 
-      def model_family(model_id)
+      def model_family(model_id) # rubocop:disable Metrics/CyclomaticComplexity
        case model_id
-        when /o1-2024/ then :o1
-        when /o1-mini/ then :o1_mini
-        when /gpt-4o-realtime-preview/ then :gpt4o_realtime
-        when /gpt-4o-mini-realtime/ then :gpt4o_mini_realtime
-        when /gpt-4o-mini/ then :gpt4o_mini
-        when /gpt-4o/ then :gpt4o
-        when /gpt-4-turbo/ then :gpt4_turbo
-        when /gpt-3.5/ then :gpt35
+        when /o1-2024/ then :o1
+        when /o1-mini/ then :o1_mini
+        when /gpt-4o-realtime/ then :gpt4o_realtime
+        when /gpt-4o-mini-realtime/ then :gpt4o_mini_realtime
+        when /gpt-4o-mini/ then :gpt4o_mini
+        when /gpt-4o/ then :gpt4o
+        when /gpt-4-turbo/ then :gpt4_turbo
        else :gpt35
        end
      end
@@ -96,7 +96,7 @@ module RubyLLM
          .join(' ')
      end
 
-      def apply_special_formatting(name)
+      def apply_special_formatting(name) # rubocop:disable Metrics/MethodLength
        name
          .gsub(/(\d{4}) (\d{2}) (\d{2})/, '\1\2\3')
          .gsub(/^Gpt /, 'GPT-')
data/lib/ruby_llm/model_info.rb CHANGED
@@ -3,12 +3,21 @@
 require 'time'
 
 module RubyLLM
+  # Information about an AI model's capabilities, pricing, and metadata.
+  # Used by the Models registry to help developers choose the right model
+  # for their needs.
+  #
+  # Example:
+  #   model = RubyLLM.models.find('gpt-4')
+  #   model.supports_vision? # => true
+  #   model.supports_functions? # => true
+  #   model.input_price_per_million # => 30.0
   class ModelInfo
     attr_reader :id, :created_at, :display_name, :provider, :metadata,
                 :context_window, :max_tokens, :supports_vision, :supports_functions,
                 :supports_json_mode, :input_price_per_million, :output_price_per_million
 
-    def initialize(data)
+    def initialize(data) # rubocop:disable Metrics/AbcSize,Metrics/MethodLength
      @id = data[:id]
      @created_at = data[:created_at].is_a?(String) ? Time.parse(data[:created_at]) : data[:created_at]
      @display_name = data[:display_name]
@@ -23,7 +32,7 @@ module RubyLLM
      @metadata = data[:metadata] || {}
    end
 
-    def to_h
+    def to_h # rubocop:disable Metrics/MethodLength
      {
        id: id,
        created_at: created_at.iso8601,
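
The readers above make `ModelInfo` handy for capability- and price-based filtering via the `Models` registry documented in the next file. A hedged sketch; that `chat_models` yields `ModelInfo` instances exactly like this is an assumption drawn from the surrounding doc comments:

```ruby
# Hypothetical filtering built only from the documented readers:
affordable = RubyLLM.models.chat_models.select do |model|
  model.supports_functions && model.input_price_per_million < 1.0
end
affordable.map(&:display_name)
```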
data/lib/ruby_llm/models.rb CHANGED
@@ -1,6 +1,13 @@
 # frozen_string_literal: true
 
 module RubyLLM
+  # Registry of available AI models and their capabilities. Provides a clean interface
+  # to discover and work with models from different providers.
+  #
+  # Example:
+  #   RubyLLM.models.all # All available models
+  #   RubyLLM.models.chat_models # Models that support chat
+  #   RubyLLM.models.find('claude-3') # Get info about a specific model
   module Models
     module_function
 
data/lib/ruby_llm/provider.rb CHANGED
@@ -1,11 +1,16 @@
 # frozen_string_literal: true
 
 module RubyLLM
+  # Base interface for LLM providers like OpenAI and Anthropic.
+  # Handles the complexities of API communication, streaming responses,
+  # and error handling so individual providers can focus on their unique features.
   module Provider
     def self.included(base)
       base.include(InstanceMethods)
     end
 
+    # Common functionality for all LLM providers. Implements the core provider
+    # interface so specific providers only need to implement a few key methods.
     module InstanceMethods
       def complete(messages, tools:, temperature:, model:, &block)
         payload = build_payload messages, tools: tools, temperature: temperature, model: model, stream: block_given?
data/lib/ruby_llm/providers/anthropic.rb CHANGED
@@ -2,7 +2,9 @@
 
 module RubyLLM
   module Providers
-    class Anthropic
+    # Anthropic Claude API integration. Handles the complexities of
+    # Claude's unique message format and tool calling conventions.
+    class Anthropic # rubocop:disable Metrics/ClassLength
      include Provider
 
      private
@@ -42,32 +44,33 @@ module RubyLLM
        data = response.body
        content_blocks = data['content'] || []
 
-        text_blocks = content_blocks.select { |c| c['type'] == 'text' }
-        text_content = text_blocks.map { |c| c['text'] }.join('')
+        text_content = extract_text_content(content_blocks)
+        tool_use = find_tool_use(content_blocks)
 
-        tool_use = content_blocks.find { |c| c['type'] == 'tool_use' }
+        build_message(data, text_content, tool_use)
+      end
 
-        if tool_use
-          Message.new(
-            role: :assistant,
-            content: text_content,
-            tool_calls: parse_tool_calls(tool_use),
-            input_tokens: data.dig('usage', 'input_tokens'),
-            output_tokens: data.dig('usage', 'output_tokens'),
-            model_id: data['model']
-          )
-        else
-          Message.new(
-            role: :assistant,
-            content: text_content,
-            input_tokens: data.dig('usage', 'input_tokens'),
-            output_tokens: data.dig('usage', 'output_tokens'),
-            model_id: data['model']
-          )
-        end
+      def extract_text_content(blocks)
+        text_blocks = blocks.select { |c| c['type'] == 'text' }
+        text_blocks.map { |c| c['text'] }.join('')
+      end
+
+      def find_tool_use(blocks)
+        blocks.find { |c| c['type'] == 'tool_use' }
      end
 
-      def parse_models_response(response)
+      def build_message(data, content, tool_use)
+        Message.new(
+          role: :assistant,
+          content: content,
+          tool_calls: parse_tool_calls(tool_use),
+          input_tokens: data.dig('usage', 'input_tokens'),
+          output_tokens: data.dig('usage', 'output_tokens'),
+          model_id: data['model']
+        )
+      end
+
+      def parse_models_response(response) # rubocop:disable Metrics/AbcSize,Metrics/MethodLength
        capabilities = ModelCapabilities::Anthropic.new
 
        (response.body['data'] || []).map do |model|
@@ -90,32 +93,45 @@ module RubyLLM
 
      def handle_stream(&block)
        to_json_stream do |data|
-          if data['type'] == 'content_block_delta' && data.dig('delta', 'type') == 'input_json_delta'
-            block.call(
-              Chunk.new(
-                role: :assistant,
-                model_id: data.dig('message', 'model'),
-                content: data.dig('delta', 'text'),
-                input_tokens: data.dig('message', 'usage', 'input_tokens'),
-                output_tokens: data.dig('message', 'usage', 'output_tokens') || data.dig('usage', 'output_tokens'),
-                tool_calls: { nil => ToolCall.new(id: nil, name: nil, arguments: data.dig('delta', 'partial_json')) }
-              )
-            )
-          else
-            block.call(
-              Chunk.new(
-                role: :assistant,
-                model_id: data.dig('message', 'model'),
-                content: data.dig('delta', 'text'),
-                input_tokens: data.dig('message', 'usage', 'input_tokens'),
-                output_tokens: data.dig('message', 'usage', 'output_tokens') || data.dig('usage', 'output_tokens'),
-                tool_calls: parse_tool_calls(data['content_block'])
-              )
-            )
-          end
+          block.call(build_chunk(data))
+        end
+      end
+
+      def build_chunk(data)
+        Chunk.new(
+          role: :assistant,
+          model_id: extract_model_id(data),
+          content: data.dig('delta', 'text'),
+          input_tokens: extract_input_tokens(data),
+          output_tokens: extract_output_tokens(data),
+          tool_calls: extract_tool_calls(data)
+        )
+      end
+
+      def extract_model_id(data)
+        data.dig('message', 'model')
+      end
+
+      def extract_input_tokens(data)
+        data.dig('message', 'usage', 'input_tokens')
+      end
+
+      def extract_output_tokens(data)
+        data.dig('message', 'usage', 'output_tokens') || data.dig('usage', 'output_tokens')
+      end
+
+      def extract_tool_calls(data)
+        if json_delta?(data)
+          { nil => ToolCall.new(id: nil, name: nil, arguments: data.dig('delta', 'partial_json')) }
+        else
+          parse_tool_calls(data['content_block'])
        end
      end
 
+      def json_delta?(data)
+        data['type'] == 'content_block_delta' && data.dig('delta', 'type') == 'input_json_delta'
+      end
+
      def parse_tool_calls(content_block)
        return nil unless content_block && content_block['type'] == 'tool_use'
 
@@ -141,43 +157,69 @@ module RubyLLM
      end
 
      def format_messages(messages)
-        messages.map do |msg|
-          if msg.tool_call?
-            {
-              role: 'assistant',
-              content: [
-                {
-                  type: 'text',
-                  text: msg.content
-                },
-                {
-                  type: 'tool_use',
-                  id: msg.tool_calls.values.first.id,
-                  name: msg.tool_calls.values.first.name,
-                  input: msg.tool_calls.values.first.arguments
-                }
-              ]
-            }
-          elsif msg.tool_result?
-            {
-              role: 'user',
-              content: [
-                {
-                  type: 'tool_result',
-                  tool_use_id: msg.tool_call_id,
-                  content: msg.content
-                }
-              ]
-            }
-          else
-            {
-              role: convert_role(msg.role),
-              content: msg.content
-            }
-          end
+        messages.map { |msg| format_message(msg) }
+      end
+
+      def format_message(msg)
+        if msg.tool_call?
+          format_tool_call(msg)
+        elsif msg.tool_result?
+          format_tool_result(msg)
+        else
+          format_basic_message(msg)
        end
      end
 
+      def format_tool_call(msg)
+        tool_call = msg.tool_calls.values.first
+
+        {
+          role: 'assistant',
+          content: [
+            format_text_block(msg.content),
+            format_tool_use_block(tool_call)
+          ]
+        }
+      end
+
+      def format_tool_result(msg)
+        {
+          role: 'user',
+          content: [format_tool_result_block(msg)]
+        }
+      end
+
+      def format_basic_message(msg)
+        {
+          role: convert_role(msg.role),
+          content: msg.content
+        }
+      end
+
+      def format_text_block(content)
+        {
+          type: 'text',
+          text: content
+        }
+      end
+
+      def format_tool_use_block(tool_call)
+        {
+          type: 'tool_use',
+          id: tool_call.id,
+          name: tool_call.name,
+          input: tool_call.arguments
+        }
+      end
+
+      def format_tool_result_block(msg)
+        {
+          type: 'tool_result',
+          tool_use_id: msg.tool_call_id,
+          content: msg.content
+        }
+      end
+
      def convert_role(role)
        case role
        when :tool then 'user'
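
One behavioral detail worth tracing in the refactor above: the old `parse_completion_response` only passed `tool_calls:` when a `tool_use` block existed, while the unified `build_message` passes it unconditionally. The branches stay equivalent thanks to the guard clause in `parse_tool_calls`:

```ruby
# Per the guard clause shown above:
#   return nil unless content_block && content_block['type'] == 'tool_use'
parse_tool_calls(nil)                   #=> nil
parse_tool_calls({ 'type' => 'text' })  #=> nil
# So plain-text responses produce Message.new(tool_calls: nil, ...),
# matching the old else branch.
```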
data/lib/ruby_llm/providers/openai.rb CHANGED
@@ -2,7 +2,10 @@
 
 module RubyLLM
   module Providers
-    class OpenAI
+    # OpenAI API integration. Handles chat completion, function calling,
+    # and OpenAI's unique streaming format. Supports GPT-4, GPT-3.5,
+    # and other OpenAI models.
+    class OpenAI # rubocop:disable Metrics/ClassLength
      include Provider
 
      private
@@ -25,7 +28,7 @@ module RubyLLM
        '/v1/models'
      end
 
-      def build_payload(messages, tools:, temperature:, model:, stream: false)
+      def build_payload(messages, tools:, temperature:, model:, stream: false) # rubocop:disable Metrics/MethodLength
        {
          model: model,
          messages: format_messages(messages),
@@ -50,7 +53,7 @@ module RubyLLM
        end
      end
 
-      def format_tool_calls(tool_calls)
+      def format_tool_calls(tool_calls) # rubocop:disable Metrics/MethodLength
        return nil unless tool_calls&.any?
 
        tool_calls.map do |_, tc|
@@ -65,7 +68,7 @@ module RubyLLM
        end
      end
 
-      def tool_for(tool)
+      def tool_for(tool) # rubocop:disable Metrics/MethodLength
        {
          type: 'function',
          function: {
@@ -87,7 +90,7 @@ module RubyLLM
        }.compact
      end
 
-      def parse_completion_response(response)
+      def parse_completion_response(response) # rubocop:disable Metrics/MethodLength
        data = response.body
        return if data.empty?
 
@@ -104,7 +107,7 @@ module RubyLLM
        )
      end
 
-      def parse_tool_calls(tool_calls, parse_arguments: true)
+      def parse_tool_calls(tool_calls, parse_arguments: true) # rubocop:disable Metrics/MethodLength
        return nil unless tool_calls&.any?
 
        tool_calls.to_h do |tc|
@@ -119,7 +122,7 @@ module RubyLLM
        end
      end
 
-      def parse_models_response(response)
+      def parse_models_response(response) # rubocop:disable Metrics/MethodLength
        (response.body['data'] || []).map do |model|
          model_info = begin
            Models.find(model['id'])
@@ -150,7 +153,7 @@ module RubyLLM
        end
      end
 
-      def parse_list_models_response(response)
+      def parse_list_models_response(response) # rubocop:disable Metrics/AbcSize,Metrics/MethodLength
        capabilities = ModelCapabilities::OpenAI
        (response.body['data'] || []).map do |model|
          ModelInfo.new(
data/lib/ruby_llm/railtie.rb CHANGED
@@ -3,9 +3,9 @@
 module RubyLLM
   # Rails integration for RubyLLM
   class Railtie < Rails::Railtie
-    initializer 'ruby_llm.initialize' do
-      ActiveSupport.on_load(:active_record) do
-        extend RubyLLM::ActiveRecord::ActsAs
+    initializer 'ruby_llm.active_record' do
+      ActiveSupport.on_load :active_record do
+        include RubyLLM::ActiveRecord::ActsAs
       end
     end
   end
 end
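
The `extend` → `include` switch matters because `ActsAs` is now an `ActiveSupport::Concern`: including a concern runs its hooks and installs the `class_methods do ... end` block as class-level macros, which a bare `extend` would bypass. A standalone illustration of the pattern, with generic names rather than RubyLLM's API:

```ruby
require 'active_support/concern'

module Taggable
  extend ActiveSupport::Concern

  class_methods do
    def acts_as_taggable
      puts "macro available on #{name}"
    end
  end
end

class Post
  include Taggable # the class_methods block becomes a class macro
end

Post.acts_as_taggable # prints "macro available on Post"
```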
data/lib/ruby_llm/stream_accumulator.rb CHANGED
@@ -1,6 +1,9 @@
 # frozen_string_literal: true
 
 module RubyLLM
+  # Assembles streaming responses from LLMs into complete messages.
+  # Handles the complexities of accumulating content and tool calls
+  # from partial chunks while tracking token usage.
   class StreamAccumulator
     attr_reader :content, :model_id, :tool_calls
 
@@ -49,7 +52,7 @@
       end
     end
 
-    def accumulate_tool_calls(new_tool_calls)
+    def accumulate_tool_calls(new_tool_calls) # rubocop:disable Metrics/MethodLength
      new_tool_calls.each_value do |tool_call|
        if tool_call.id
          @tool_calls[tool_call.id] = ToolCall.new(
data/lib/ruby_llm/tool.rb CHANGED
@@ -1,6 +1,8 @@
 # frozen_string_literal: true
 
 module RubyLLM
+  # Parameter definition for Tool methods. Specifies type constraints,
+  # descriptions, and whether parameters are required.
   class Parameter
     attr_reader :name, :type, :description, :required
 
@@ -12,6 +14,18 @@ module RubyLLM
     end
   end
 
+  # Base class for creating tools that AI models can use. Provides a simple
+  # interface for defining parameters and implementing tool behavior.
+  #
+  # Example:
+  #   class Calculator < RubyLLM::Tool
+  #     description "Performs arithmetic calculations"
+  #     param :expression, type: :string, desc: "Math expression to evaluate"
+  #
+  #     def execute(expression:)
+  #       eval(expression).to_s
+  #     end
+  #   end
   class Tool
     class << self
       def description(text = nil)
data/lib/ruby_llm/tool_call.rb CHANGED
@@ -1,6 +1,16 @@
 # frozen_string_literal: true
 
 module RubyLLM
+  # Represents a function call from an AI model to a Tool.
+  # Encapsulates the function name, arguments, and execution results
+  # in a clean Ruby interface.
+  #
+  # Example:
+  #   tool_call = ToolCall.new(
+  #     id: "call_123",
+  #     name: "calculator",
+  #     arguments: { expression: "2 + 2" }
+  #   )
   class ToolCall
     attr_reader :id, :name, :arguments
 
data/lib/ruby_llm/version.rb CHANGED
@@ -1,5 +1,5 @@
 # frozen_string_literal: true
 
 module RubyLLM
-  VERSION = '0.1.0.pre13'
+  VERSION = '0.1.0.pre15'
 end
data/lib/ruby_llm.rb CHANGED
@@ -7,6 +7,18 @@ require 'logger'
 require 'event_stream_parser'
 require 'securerandom'
 
+loader = Zeitwerk::Loader.for_gem
+loader.inflector.inflect(
+  'ruby_llm' => 'RubyLLM',
+  'llm' => 'LLM',
+  'openai' => 'OpenAI',
+  'api' => 'API'
+)
+loader.setup
+
+# A delightful Ruby interface to modern AI language models.
+# Provides a unified way to interact with models from OpenAI, Anthropic and others
+# with a focus on developer happiness and convention over configuration.
 module RubyLLM
   class Error < StandardError; end
 
@@ -31,40 +43,16 @@ module RubyLLM
     @logger ||= Logger.new(
       $stdout,
       progname: 'RubyLLM',
-      level: ENV['RUBY_LLM_DEBUG'] == 'true' ? Logger::DEBUG : Logger::INFO
+      level: ENV['RUBY_LLM_DEBUG'] ? Logger::DEBUG : Logger::INFO
     )
   end
 end
 
-loader = Zeitwerk::Loader.for_gem
-
-# Add lib directory to the load path
-loader.push_dir(File.expand_path('..', __dir__))
-
-# Configure custom inflections
-loader.inflector.inflect(
-  'ruby_llm' => 'RubyLLM',
-  'llm' => 'LLM',
-  'openai' => 'OpenAI',
-  'api' => 'API'
-)
-
-# Ignore Rails-specific files and specs
-loader.ignore("#{__dir__}/ruby_llm/railtie.rb")
-loader.ignore("#{__dir__}/ruby_llm/active_record")
-loader.ignore(File.expand_path('../spec', __dir__).to_s)
-
-loader.enable_reloading if ENV['RUBY_LLM_DEBUG']
-
-loader.setup
-loader.eager_load if ENV['RUBY_LLM_DEBUG']
-
 RubyLLM::Provider.register :openai, RubyLLM::Providers::OpenAI
 RubyLLM::Provider.register :anthropic, RubyLLM::Providers::Anthropic
 
-# Load Rails integration if Rails is defined
-if defined?(Rails)
+if defined?(Rails::Railtie)
   require 'ruby_llm/railtie'
   require 'ruby_llm/active_record/acts_as'
 end
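
Two behavior changes hide in this cleanup besides the loader simplification: debug logging now turns on for any non-nil `RUBY_LLM_DEBUG` value rather than only the literal string 'true' (so even `RUBY_LLM_DEBUG=false` would enable it), and the Rails hook probes `defined?(Rails::Railtie)` instead of `defined?(Rails)`, avoiding false positives when a `Rails` constant exists without the railtie machinery. The logging change in isolation:

```ruby
require 'logger'

# Before: level was DEBUG only when ENV['RUBY_LLM_DEBUG'] == 'true'.
# After:  any set value counts (ENV lookups return nil when unset).
ENV['RUBY_LLM_DEBUG'] = '1'
ENV['RUBY_LLM_DEBUG'] ? Logger::DEBUG : Logger::INFO #=> Logger::DEBUG

ENV.delete('RUBY_LLM_DEBUG')
ENV['RUBY_LLM_DEBUG'] ? Logger::DEBUG : Logger::INFO #=> Logger::INFO
```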
metadata CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: ruby_llm
 version: !ruby/object:Gem::Version
-  version: 0.1.0.pre13
+  version: 0.1.0.pre15
 platform: ruby
 authors:
 - Carmine Paolino
@@ -344,6 +344,7 @@ files:
 - bin/console
 - bin/setup
 - lib/ruby_llm.rb
+- lib/ruby_llm/active_record/acts_as.rb
 - lib/ruby_llm/chat.rb
 - lib/ruby_llm/chunk.rb
 - lib/ruby_llm/configuration.rb