ruby_llm 0.1.0.pre41 → 0.1.0.pre43

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: e6a58fa5d1286364224b46aced4e7150aa935988ca5e6a5f5a12d760da76c214
- data.tar.gz: 7d24ed72b786fea8b030191e796facd27c6ac6d45efe499ea2a25d3a5d01fab1
+ metadata.gz: b2433d0b669ef8bd499507edf33ed672d011fc1c95c2de0febd4e83d2593e0b8
+ data.tar.gz: a818a1c59220404c29c0e562c473480ac630a6fd53fadee65ab102b8955e9ec5
  SHA512:
- metadata.gz: 293c57036acd3a7b0b5fdeda7866d0bf1599f52138aeac058bb248bc53d4b687caff047ab84aa57b1b50ae034e8cfb25729bd1e027af3febd1c50ce640bebf44
- data.tar.gz: 762c75a2d0e7dc02829c2b7d7171c6fc57932656c72890ca0d1656fefc7d9d5cfa5989726ee40fa3b5d63cac81b1a2dc66c0439cb487272e70288718d993c21e
+ metadata.gz: afa3882a1850692e63f16dbc8c379a9aa71d704096f1eb2d5d88aeefa0c9c17f2585e8e215be0ed126caa0ec6bb7d07b4c12b5b6eaa5a1cf69a054b976b0d9e8
+ data.tar.gz: a8089658eab75b8200f51d39568dc876cdcc2d61d6581dbd1e6c211f487fa127c9300e0fcbb318f958b6b109f5fb43c2f7f6824acd2f426e4a0a9ae2a7023c63
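A note on reproducing these values: the SHA256/SHA512 entries above are ordinary file digests of the gem's `metadata.gz` and `data.tar.gz`. A minimal Ruby sketch (the path argument is illustrative, not part of the gem) for computing them locally:

```ruby
require 'digest'

# Compute the digests RubyGems records in checksums.yaml for a given file,
# e.g. a metadata.gz or data.tar.gz extracted from a downloaded .gem.
def gem_checksums(path)
  bytes = File.binread(path)
  {
    'SHA256' => Digest::SHA256.hexdigest(bytes),
    'SHA512' => Digest::SHA512.hexdigest(bytes)
  }
end
```

Comparing the output against the `+` lines above confirms that a download matches the published pre43 release.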
data/.rspec_status CHANGED
@@ -34,7 +34,8 @@ example_id | status | run_time |
  ./spec/ruby_llm/embeddings_spec.rb[1:1:1:2] | passed | 0.43632 seconds |
  ./spec/ruby_llm/embeddings_spec.rb[1:1:2:1] | passed | 0.65614 seconds |
  ./spec/ruby_llm/embeddings_spec.rb[1:1:2:2] | passed | 2.16 seconds |
- ./spec/ruby_llm/error_handling_spec.rb[1:1] | passed | 0.19586 seconds |
- ./spec/ruby_llm/image_generation_spec.rb[1:1:1] | passed | 12.77 seconds |
- ./spec/ruby_llm/image_generation_spec.rb[1:1:2] | passed | 18.13 seconds |
- ./spec/ruby_llm/image_generation_spec.rb[1:1:3] | passed | 0.00035 seconds |
+ ./spec/ruby_llm/error_handling_spec.rb[1:1] | passed | 0.29366 seconds |
+ ./spec/ruby_llm/image_generation_spec.rb[1:1:1] | passed | 11.61 seconds |
+ ./spec/ruby_llm/image_generation_spec.rb[1:1:2] | passed | 17.63 seconds |
+ ./spec/ruby_llm/image_generation_spec.rb[1:1:3] | passed | 8.77 seconds |
+ ./spec/ruby_llm/image_generation_spec.rb[1:1:4] | passed | 0.00319 seconds |
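Each row above is a pipe-separated `example_id | status | run_time` record, the format RSpec writes to its example status persistence file. For what it's worth, a standalone Ruby sketch (not part of the gem) that parses such rows to surface the slowest examples:

```ruby
# Parse ".rspec_status"-style rows ("example_id | status | run_time |")
# and return the `count` slowest examples, largest first.
def slowest_examples(rows, count: 3)
  rows.filter_map do |row|
    id, status, time = row.split('|').map(&:strip)
    next unless time&.end_with?('seconds')
    { id: id, status: status, seconds: time.to_f }
  end.max_by(count) { |r| r[:seconds] }
end
```

Against the pre43 rows above, the image generation examples dominate the suite's runtime.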
data/README.md CHANGED
@@ -1,6 +1,6 @@
  # RubyLLM
 
- A delightful Ruby way to work with AI. Chat in text, analyze and generate images, understand audio, and use tools through a unified interface to OpenAI, Anthropic, Google, and DeepSeek. Built for developer happiness with automatic token counting, proper streaming, and Rails integration. No wrapping your head around multiple APIs - just clean Ruby code that works.
+ A delightful Ruby way to work with AI. No configuration madness, no complex callbacks, no handler hell – just beautiful, expressive Ruby code.
 
  <p align="center">
  <img src="https://upload.wikimedia.org/wikipedia/commons/4/4d/OpenAI_Logo.svg" alt="OpenAI" height="40" width="120">
@@ -15,446 +15,164 @@ A delightful Ruby way to work with AI. Chat in text, analyze and generate images
  <p align="center">
  <a href="https://badge.fury.io/rb/ruby_llm"><img src="https://badge.fury.io/rb/ruby_llm.svg" alt="Gem Version" /></a>
  <a href="https://github.com/testdouble/standard"><img src="https://img.shields.io/badge/code_style-standard-brightgreen.svg" alt="Ruby Style Guide" /></a>
- <a href="https://rubygems.org/gems/ruby_llm"><img alt="Gem Total Downloads" src="https://img.shields.io/gem/dt/ruby_llm"></a>
+ <a href="https://rubygems.org/gems/ruby_llm"><img alt="Gem Downloads" src="https://img.shields.io/gem/dt/ruby_llm"></a>
  <a href="https://codecov.io/gh/crmne/ruby_llm"><img src="https://codecov.io/gh/crmne/ruby_llm/branch/main/graph/badge.svg" alt="codecov" /></a>
  </p>
 
  🤺 Battle tested at [💬 Chat with Work](https://chatwithwork.com)
 
- ## Features
+ ## The problem with AI libraries
 
- - 💬 **Beautiful Chat Interface** - Converse with AI models as easily as `RubyLLM.chat.ask "teach me Ruby"`
- - 📄 **PDF Analysis** - Analyze PDF documents directly with Claude models using `chat.ask "What's in this?", with: { pdf: "document.pdf" }`
- - 🎵 **Audio Analysis** - Get audio transcription and understanding with `chat.ask "what's said here?", with: { audio: "clip.wav" }`
- - 👁️ **Vision Understanding** - Let AIs analyze images with a simple `chat.ask "what's in this?", with: { image: "photo.jpg" }`
- - 🌊 **Streaming** - Real-time responses with proper Ruby streaming with `chat.ask "hello" do |chunk| puts chunk.content end`
- - 🚂 **Rails Integration** - Persist chats and messages with ActiveRecord with `acts_as_{chat|message|tool_call}`
- - 🛠️ **Tool Support** - Give AIs access to your Ruby code with `chat.with_tool(Calculator).ask "what's 2+2?"`
- - 🎨 **Paint with AI** - Create images as easily as `RubyLLM.paint "a sunset over mountains"`
- - 📊 **Embeddings** - Generate vector embeddings for your text with `RubyLLM.embed "hello"`
- - 🔄 **Multi-Provider Support** - Works with OpenAI, Anthropic, Google, and DeepSeek
- - 🎯 **Token Tracking** - Automatic usage tracking across providers
+ Every AI provider comes with its own client library, its own response format, its own conventions for streaming, and its own way of handling errors. Want to use multiple providers? Prepare to juggle incompatible APIs and bloated dependencies.
 
- ## Installation
-
- Add it to your Gemfile:
-
- ```ruby
- gem 'ruby_llm'
- ```
-
- Or install it yourself:
-
- ```bash
- gem install ruby_llm
- ```
-
- ## Configuration
-
- ```ruby
- require 'ruby_llm'
-
- # Configure your API keys
- RubyLLM.configure do |config|
-   config.openai_api_key = ENV['OPENAI_API_KEY']
-   config.anthropic_api_key = ENV['ANTHROPIC_API_KEY']
-   config.gemini_api_key = ENV['GEMINI_API_KEY']
-   config.deepseek_api_key = ENV['DEEPSEEK_API_KEY']
- end
- ```
+ RubyLLM fixes all that. One beautiful API for everything. One consistent format. Minimal dependencies — just Faraday and Zeitwerk. Because working with AI should be a joy, not a chore.
 
- ## Quick Start
-
- RubyLLM makes it dead simple to start chatting with AI models:
+ ## What makes it great
 
  ```ruby
- # Start a conversation
+ # Just ask questions
  chat = RubyLLM.chat
  chat.ask "What's the best way to learn Ruby?"
- ```
 
- ## Available Models
-
- RubyLLM gives you access to the latest models from multiple providers:
-
- ```ruby
- # List all available models
- RubyLLM.models.all
-
- # Get models by type
- chat_models = RubyLLM.models.chat_models
- embedding_models = RubyLLM.models.embedding_models
- audio_models = RubyLLM.models.audio_models
- image_models = RubyLLM.models.image_models
- ```
+ # Analyze images
+ chat.ask "What's in this image?", with: { image: "ruby_conf.jpg" }
 
- ## Having a Conversation
+ # Analyze audio recordings
+ chat.ask "Describe this meeting", with: { audio: "meeting.wav" }
 
- Conversations are simple and natural:
-
- ```ruby
- chat = RubyLLM.chat model: 'gemini-2.0-flash'
-
- # Ask questions
- response = chat.ask "What's your favorite Ruby feature?"
-
- # Multi-turn conversations just work
- chat.ask "Can you elaborate on that?"
- chat.ask "How does that compare to Python?"
-
- # Stream responses as they come
- chat.ask "Tell me a story about a Ruby programmer" do |chunk|
-   print chunk.content
- end
+ # Analyze documents
+ chat.ask "Summarize this document", with: { pdf: "contract.pdf" }
 
- # Ask about images
- chat.ask "What do you see in this image?", with: { image: "ruby_logo.png" }
+ # Generate images
+ RubyLLM.paint "a sunset over mountains in watercolor style"
 
- # Get analysis of audio content
- chat.ask "What's being said in this recording?", with: { audio: "meeting.wav" }
+ # Create vector embeddings
+ RubyLLM.embed "Ruby is elegant and expressive"
 
- # Combine multiple pieces of content
- chat.ask "Compare these diagrams", with: { image: ["diagram1.png", "diagram2.png"] }
-
- # Ask about PDFs (currently supported with Claude models)
- chat = RubyLLM.chat(model: 'claude-3-7-sonnet-20250219')
- chat.ask "Summarize this research paper", with: { pdf: "research.pdf" }
-
- # Multiple PDFs work too
- chat.ask "Compare these contracts", with: { pdf: ["contract1.pdf", "contract2.pdf"] }
-
- # Check token usage
- last_message = chat.messages.last
- puts "Conversation used #{last_message.input_tokens} input tokens and #{last_message.output_tokens} output tokens"
- ```
-
- You can provide content as local files or URLs - RubyLLM handles the rest. Vision and audio capabilities are available with compatible models. The API stays clean and consistent whether you're working with text, images, or audio.
-
- ## Image Generation
-
- Want to create AI-generated images? RubyLLM makes it super simple:
-
- ```ruby
- # Paint a picture!
- image = RubyLLM.paint "a starry night over San Francisco in Van Gogh's style"
- image.url # => "https://..."
- image.revised_prompt # Shows how DALL-E interpreted your prompt
-
- # Choose size and model
- image = RubyLLM.paint(
-   "a cyberpunk cityscape at sunset",
-   model: "dall-e-3",
-   size: "1792x1024"
- )
-
- # Set your default model
- RubyLLM.configure do |config|
-   config.default_image_model = "dall-e-3"
- end
- ```
-
- RubyLLM automatically handles all the complexities of the DALL-E API, token/credit management, and error handling, so you can focus on being creative.
-
- ## Text Embeddings
-
- Need vector embeddings for your text? RubyLLM makes it simple:
-
- ```ruby
- # Get embeddings with the default model
- RubyLLM.embed "Hello, world!"
-
- # Use a specific model
- RubyLLM.embed "Ruby is awesome!", model: "text-embedding-004"
-
- # Process multiple texts at once
- RubyLLM.embed([
-   "First document",
-   "Second document",
-   "Third document"
- ])
-
- # Configure the default model
- RubyLLM.configure do |config|
-   config.default_embedding_model = 'text-embedding-3-large'
- end
- ```
-
- ## Using Tools
-
- Give your AI assistants access to your Ruby code by creating tool classes that do one thing well:
-
- ```ruby
+ # Let AI use your code
  class Calculator < RubyLLM::Tool
-   description "Performs arithmetic calculations"
-
-   param :expression,
-     type: :string,
-     desc: "A mathematical expression to evaluate (e.g. '2 + 2')"
+   description "Performs calculations"
+   param :expression, type: :string, desc: "Math expression to evaluate"
 
    def execute(expression:)
      eval(expression).to_s
    end
  end
 
- class Search < RubyLLM::Tool
-   description "Searches documents by similarity"
-
-   param :query,
-     desc: "The search query"
-
-   param :limit,
-     type: :integer,
-     desc: "Number of results to return",
-     required: false
-
-   def initialize(repo:)
-     @repo = repo
-   end
-
-   def execute(query:, limit: 5)
-     @repo.similarity_search(query, limit:)
-   end
- end
+ chat.with_tool(Calculator).ask "What's 123 * 456?"
  ```
 
- Then use them in your conversations:
+ ## Installation
 
  ```ruby
- # Simple tools just work
- chat = RubyLLM.chat.with_tool Calculator
-
- # Tools with dependencies are just regular Ruby objects
- search = Search.new repo: Document
- chat.with_tools search, Calculator
-
- # Configure as needed
- chat.with_model('claude-3-5-sonnet-20241022')
-     .with_temperature(0.9)
+ # In your Gemfile
+ gem 'ruby_llm'
 
- chat.ask "What's 2+2?"
- # => "Let me calculate that for you. The result is 4."
+ # Then run
+ bundle install
 
- chat.ask "Find documents about Ruby performance"
- # => "I found these relevant documents about Ruby performance..."
+ # Or install it yourself
+ gem install ruby_llm
  ```
 
- Need to debug a tool? RubyLLM automatically logs all tool calls:
+ Configure with your API keys:
 
  ```ruby
- ENV['RUBY_LLM_DEBUG'] = 'true'
-
- chat.ask "What's 123 * 456?"
- # D, -- RubyLLM: Tool calculator called with: {"expression" => "123 * 456"}
- # D, -- RubyLLM: Tool calculator returned: "56088"
+ RubyLLM.configure do |config|
+   config.openai_api_key = ENV['OPENAI_API_KEY']
+   config.anthropic_api_key = ENV['ANTHROPIC_API_KEY']
+   config.gemini_api_key = ENV['GEMINI_API_KEY']
+   config.deepseek_api_key = ENV['DEEPSEEK_API_KEY'] # Optional
+ end
  ```
 
- ## Error Handling
-
- RubyLLM wraps provider errors in clear Ruby exceptions:
+ ## Have great conversations
 
  ```ruby
- begin
-   chat = RubyLLM.chat
-   chat.ask "Hello world!"
- rescue RubyLLM::UnauthorizedError
-   puts "Check your API credentials"
- rescue RubyLLM::BadRequestError => e
-   puts "Something went wrong: #{e.message}"
- rescue RubyLLM::PaymentRequiredError
-   puts "Time to top up your API credits"
- rescue RubyLLM::ServiceUnavailableError
-   puts "API service is temporarily down"
- end
- ```
+ # Start a chat with the default model (GPT-4o-mini)
+ chat = RubyLLM.chat
 
- ## Rails Integration
+ # Or specify what you want
+ chat = RubyLLM.chat(model: 'claude-3-7-sonnet-20250219')
 
- RubyLLM comes with built-in Rails support that makes it dead simple to persist your chats and messages. Just create your tables and hook it up:
+ # Simple questions just work
+ chat.ask "What's the difference between attr_reader and attr_accessor?"
 
- ```ruby
- # db/migrate/YYYYMMDDHHMMSS_create_chats.rb
- class CreateChats < ActiveRecord::Migration[8.0]
-   def change
-     create_table :chats do |t|
-       t.string :model_id
-       t.timestamps
-     end
-   end
- end
+ # Multi-turn conversations are seamless
+ chat.ask "Could you give me an example?"
 
- # db/migrate/YYYYMMDDHHMMSS_create_messages.rb
- class CreateMessages < ActiveRecord::Migration[8.0]
-   def change
-     create_table :messages do |t|
-       t.references :chat, null: false
-       t.string :role
-       t.text :content
-       t.string :model_id
-       t.integer :input_tokens
-       t.integer :output_tokens
-       t.references :tool_call
-       t.timestamps
-     end
-   end
+ # Stream responses in real-time
+ chat.ask "Tell me a story about a Ruby programmer" do |chunk|
+   print chunk.content
  end
 
- # db/migrate/YYYYMMDDHHMMSS_create_tool_calls.rb
- class CreateToolCalls < ActiveRecord::Migration[8.0]
-   def change
-     create_table :tool_calls do |t|
-       t.references :message, null: false
-       t.string :tool_call_id, null: false
-       t.string :name, null: false
-       t.jsonb :arguments, default: {}
-       t.timestamps
-     end
-
-     add_index :tool_calls, :tool_call_id
-   end
- end
+ # Understand content in multiple forms
+ chat.ask "Compare these diagrams", with: { image: ["diagram1.png", "diagram2.png"] }
+ chat.ask "Summarize this document", with: { pdf: "contract.pdf" }
+ chat.ask "What's being said?", with: { audio: "meeting.wav" }
+
+ # Need a different model mid-conversation? No problem
+ chat.with_model('gemini-2.0-flash').ask "What's your favorite algorithm?"
  ```
 
- Then in your models:
+ ## Rails integration that makes sense
 
  ```ruby
+ # app/models/chat.rb
  class Chat < ApplicationRecord
    acts_as_chat
 
-   # Optional: Add Turbo Streams support
+   # Works great with Turbo
    broadcasts_to ->(chat) { "chat_#{chat.id}" }
  end
 
+ # app/models/message.rb
  class Message < ApplicationRecord
    acts_as_message
  end
 
+ # app/models/tool_call.rb
  class ToolCall < ApplicationRecord
    acts_as_tool_call
  end
- ```
 
- That's it! Now you can use chats straight from your models:
-
- ```ruby
- # Create a new chat
- chat = Chat.create! model_id: "gpt-4o-mini"
-
- # Ask questions - messages are automatically saved
- chat.ask "What's the weather in Paris?"
-
- # Stream responses in real-time
- chat.ask "Tell me a story" do |chunk|
-   broadcast_chunk chunk
+ # In your controller
+ chat = Chat.create!(model_id: "gpt-4o-mini")
+ chat.ask("What's your favorite Ruby gem?") do |chunk|
+   Turbo::StreamsChannel.broadcast_append_to(
+     chat,
+     target: "response",
+     partial: "messages/chunk",
+     locals: { chunk: chunk }
+   )
  end
 
- # Everything is persisted automatically
- chat.messages.each do |message|
-   case message.role
-   when :user
-     puts "User: #{message.content}"
-   when :assistant
-     puts "Assistant: #{message.content}"
-   end
- end
+ # That's it - chat history is automatically saved
  ```
 
- ### Real-time Updates with Hotwire
-
- The Rails integration works great with Hotwire out of the box:
+ ## Creating tools is a breeze
 
  ```ruby
- # app/controllers/chats_controller.rb
- class ChatsController < ApplicationController
-   def show
-     @chat = Chat.find(params[:id])
-   end
-
-   def ask
-     @chat = Chat.find(params[:id])
-     @chat.ask(params[:message]) do |chunk|
-       Turbo::StreamsChannel.broadcast_append_to(
-         @chat,
-         target: "messages",
-         partial: "messages/chunk",
-         locals: { chunk: chunk }
-       )
-     end
-   end
- end
-
- # app/views/chats/show.html.erb
- <%= turbo_stream_from @chat %>
-
- <div id="messages">
-   <%= render @chat.messages %>
- </div>
-
- <%= form_with(url: ask_chat_path(@chat), local: false) do |f| %>
-   <%= f.text_area :message %>
-   <%= f.submit "Send" %>
- <% end %>
- ```
-
- ### Background Jobs
-
- The persistence works seamlessly with background jobs:
-
- ```ruby
- class ChatJob < ApplicationJob
-   def perform(chat_id, message)
-     chat = Chat.find chat_id
-
-     chat.ask(message) do |chunk|
-       # Optional: Broadcast chunks for real-time updates
-       Turbo::StreamsChannel.broadcast_append_to(
-         chat,
-         target: "messages",
-         partial: "messages/chunk",
-         locals: { chunk: chunk }
-       )
-     end
-   end
- end
- ```
-
- ### Using Tools
-
- Tools work just like they do in regular RubyLLM chats:
-
- ```ruby
- class WeatherTool < RubyLLM::Tool
-   description "Gets current weather for a location"
+ class Search < RubyLLM::Tool
+   description "Searches a knowledge base"
 
-   param :location,
-     type: :string,
-     desc: "City name or coordinates"
+   param :query, desc: "The search query"
+   param :limit, type: :integer, desc: "Max results", required: false
 
-   def execute(location:)
-     # Fetch weather data...
-     { temperature: 22, conditions: "Sunny" }
+   def execute(query:, limit: 5)
+     # Your search logic here
+     Document.search(query).limit(limit).map(&:title)
    end
  end
 
- # Use tools with your persisted chats
- chat = Chat.create! model_id: "deepseek-reasoner"
- chat.chat.with_tool WeatherTool.new
-
- # Ask about weather - tool usage is automatically saved
- chat.ask "What's the weather in Paris?"
-
- # Tool calls and results are persisted as messages
- pp chat.messages.map(&:role)
- #=> [:user, :assistant, :tool, :assistant]
+ # Let the AI use it
+ chat.with_tool(Search).ask "Find documents about Ruby 3.3 features"
  ```
 
- ## Development
-
- After checking out the repo, run `bin/setup` to install dependencies. Then, run `bin/console` for an interactive prompt.
-
- ## Contributing
+ ## Learn more
 
- Bug reports and pull requests are welcome on GitHub at https://github.com/crmne/ruby_llm.
+ Check out the guides at https://rubyllm.com for deeper dives into conversations with tools, streaming responses, embedding generations, and more.
 
  ## License
 
- Released under the MIT License. See [LICENSE](LICENSE) for details.
+ Released under the MIT License.
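One observation on the README's Calculator tool: it feeds model-supplied input straight to `eval`. A hedged, self-contained sketch (my own illustration, not the gem's API) of whitelisting the expression before evaluating it:

```ruby
# Hypothetical hardening for the README's Calculator#execute: allow only
# digits, whitespace, parentheses, and basic arithmetic operators, and
# reject everything else before it ever reaches eval.
SAFE_EXPRESSION = /\A[\d\s+\-*\/().]+\z/

def safe_calculate(expression)
  raise ArgumentError, "unsafe expression: #{expression}" unless expression.match?(SAFE_EXPRESSION)
  eval(expression).to_s
end
```

For example, `safe_calculate("123 * 456")` returns `"56088"` (the value the pre41 README's debug log showed), while `safe_calculate("system('rm -rf /')")` raises instead of executing.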
@@ -68,7 +68,7 @@ module RubyLLM
  }
  end
 
- def attach_pdf(source) # rubocop:disable Metrics/AbcSize,Metrics/MethodLength
+ def attach_pdf(source)
  source = File.expand_path(source) unless source.start_with?('http')
 
  pdf_data = {
@@ -80,19 +80,11 @@ module RubyLLM
  unless source.start_with?('http')
  raise Error, "PDF file not found: #{source}" unless File.exist?(source)
 
- # Simple check for PDF file type (could be more robust)
- unless source.downcase.end_with?('.pdf') || File.read(source, 5) == '%PDF-'
- RubyLLM.logger.warn "File may not be a valid PDF: #{source}"
- end
-
  # Preload file content for providers that need it
  pdf_data[:content] = File.read(source)
  end
 
  pdf_data
- rescue StandardError => e
- RubyLLM.logger.error "Error attaching PDF #{source}: #{e.message}"
- raise Error, "Failed to attach PDF: #{e.message}"
  end
 
  def encode_file(source)
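The hunks above remove the PDF magic-byte sanity check and the blanket `rescue StandardError`. For reference, the deleted check boils down to this standalone predicate (plain Ruby, independent of the gem's internals):

```ruby
# The validation removed in pre43: treat a file as a PDF if its name ends
# in .pdf or its first five bytes are the PDF magic marker '%PDF-'.
def looks_like_pdf?(source)
  source.downcase.end_with?('.pdf') || File.read(source, 5) == '%PDF-'
end
```

Dropping it means a mislabeled file now surfaces as a provider-side error rather than a local warning.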