llm.rb 0.3.0 → 0.3.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 9073b7495fb9bdad2deec1d2c086b6d3b554c5a440dd884108a2fa8d12f7c8a9
- data.tar.gz: 514902fc97de61dc18df8c22d51d9e86472a62e1ffb0c4ce4394b0684cddbd8a
+ metadata.gz: 3939075c064b4abfd8853c3f67b6db7df6111d340d658d4d8ad0c4d1bccc96bc
+ data.tar.gz: 0ca274d3e4b032c25730aef896df903681c28033ebb0907c965339a33aff56d1
  SHA512:
- metadata.gz: 0d0c35fa38ed3481872e29131d15e03e5a4bf0ad8a96c42ba64a5f48ed32584973d39b53ca630c966d54b6700a83a44abb1f4224c1bb9c1ca7f9e7a2d953e1c3
- data.tar.gz: 8889034558c56a2bc1ff5321cf0ca45d82ac83ac7122c741e859caed7d060b34b99824cc53d20a5add4949dd135cf65c383f8400e7c112a44110fc1d4e0d2f4d
+ metadata.gz: feaf87457b8fa5b4f756a5fe8cc1f670c8b0286a730fe00273bc99678092fe7f704d58f01ba0a0baf4072a0dcee063bc87cf88bc7cdf53125334476adbce41f6
+ data.tar.gz: 3be8b460d9b483c0e172d9159b2394ea39da7a1475aee3ab47b224303e2a251f3b04f0543402494485040998225f84342be986db8c7b8ea80df92f561d4d6d92
data/README.md CHANGED
@@ -3,7 +3,9 @@
  llm.rb is a lightweight library that provides a common interface
  and set of functionality for multiple Large Language Models (LLMs). It
  is designed to be simple, flexible, and easy to use – and it has been
- implemented with no dependencies outside Ruby's standard library.
+ implemented with zero dependencies outside Ruby's standard library. See the
+ [philosophy](#philosophy) section for more information on the design principles
+ behind llm.rb.
 
  ## Examples
 
@@ -110,7 +112,7 @@ bot.messages.each { print "[#{_1.role}] ", _1.content, "\n" }
  #### Speech
 
  Some but not all providers implement audio generation capabilities that
- can create text from speech, transcribe audio to text, or translate
+ can create speech from text, transcribe audio to text, or translate
  audio to text (usually English). The following example uses the OpenAI provider
  to create an audio file from a text prompt. The audio is then moved to
  `${HOME}/hello.mp3` as the final step. As always, consult the provider's
@@ -193,7 +195,7 @@ print res.text, "\n" # => "Good morning."
 
  #### Create
 
- Some but all LLM providers implement image generation capabilities that
+ Some but not all LLM providers implement image generation capabilities that
  can create new images from a prompt, or edit an existing image with a
  prompt. The following example uses the OpenAI provider to create an
  image of a dog on a rocket to the moon. The image is then moved to
@@ -282,6 +284,42 @@ res.urls.each.with_index do |url, index|
  end
  ```
 
+ ### Files
+
+ #### Create
+
+ Most LLM providers provide a Files API where you can upload files
+ that can be referenced from a prompt and llm.rb has first-class support
+ for this feature. The following example uses the OpenAI provider to describe
+ the contents of a PDF file after it has been uploaded. The file (an instance
+ of [LLM::Response::File](https://0x1eef.github.io/x/llm.rb/LLM/Response/File.html))
+ is passed directly to the chat method, and generally any object a prompt supports
+ can be given to the chat method.
+
+ Please also see provider-specific documentation for more provider-specific
+ examples and documentation
+ (eg
+ [LLM::Gemini::Files](https://0x1eef.github.io/x/llm.rb/LLM/Gemini/Files.html),
+ [LLM::OpenAI::Files](https://0x1eef.github.io/x/llm.rb/LLM/OpenAI/Files.html)):
+
+ ```ruby
+ #!/usr/bin/env ruby
+ require "llm"
+
+ llm = LLM.openai(ENV["KEY"])
+ bot = LLM::Chat.new(llm).lazy
+ file = llm.files.create(file: LLM::File("/documents/openbsd_is_awesome.pdf"))
+ bot.chat(file)
+ bot.chat("What is this file about?")
+ bot.messages.select(&:assistant?).each { print "[#{_1.role}] ", _1.content, "\n" }
+
+ ##
+ # [assistant] This file is about OpenBSD, a free and open-source Unix-like operating system
+ # based on the Berkeley Software Distribution (BSD). It is known for its
+ # emphasis on security, code correctness, and code simplicity. The file
+ # contains information about the features, installation, and usage of OpenBSD.
+ ```
+
  ### Embeddings
 
  #### Text
@@ -354,6 +392,22 @@ llm.rb can be installed via rubygems.org:
 
  gem install llm.rb
 
+ ## Philosophy
+
+ llm.rb was built for developers who believe that simplicity is strength.
+ It provides a clean, dependency-free interface to Large Language Models,
+ treating Ruby itself as the primary platform – not Rails or any other
+ specific framework or library. There is no hidden magic or extreme
+ metaprogramming.
+
+ Every part of llm.rb is designed to be explicit, composable, memory-safe,
+ and production-ready without compromise. No unnecessary abstractions,
+ no global configuration, and no dependencies that aren't part of standard
+ Ruby. It has been inspired in part by other languages such as Python, but
+ it is not a port of any other library.
+
+ Good software doesn’t need marketing. It just needs to work. :)
+
  ## License
 
  [BSD Zero Clause](https://choosealicense.com/licenses/0bsd/)
data/lib/llm/error.rb CHANGED
@@ -10,7 +10,7 @@ module LLM
 
  ##
  # The superclass of all HTTP protocol errors
- class BadResponse < Error
+ class ResponseError < Error
  ##
  # @return [Net::HTTPResponse]
  # Returns the response associated with an error
@@ -19,10 +19,10 @@ module LLM
 
  ##
  # HTTPUnauthorized
- Unauthorized = Class.new(BadResponse)
+ Unauthorized = Class.new(ResponseError)
 
  ##
  # HTTPTooManyRequests
- RateLimit = Class.new(BadResponse)
+ RateLimit = Class.new(ResponseError)
  end
  end
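The rename from `BadResponse` to `ResponseError` is a breaking change for callers that rescued the old constant. A minimal sketch of error handling against the new hierarchy, assuming an OpenAI key in `ENV["KEY"]` (note that `RateLimit` and `Unauthorized` remain subclasses, and the raw `Net::HTTPResponse` is available through the error's `response` attribute):

```ruby
#!/usr/bin/env ruby
require "llm"

llm = LLM.openai(ENV["KEY"])
begin
  llm.complete("Hello, world", :user)
rescue LLM::Error::RateLimit
  # HTTP 429; a subclass of ResponseError, so rescue it first
  sleep(1)
rescue LLM::Error::ResponseError => e
  # Any other unsuccessful status code
  warn "unexpected response: #{e.response.code}"
end
```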
data/lib/llm/file.rb CHANGED
@@ -41,6 +41,16 @@ class LLM::File
  def to_b64
  [File.binread(path)].pack("m0")
  end
+
+ ##
+ # @return [File]
+ # Yields an IO object suitable to be streamed
+ def with_io
+ io = File.open(path, "rb")
+ yield(io)
+ ensure
+ io.close
+ end
  end
 
  ##
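The new `with_io` helper is what lets the Gemini file upload below stream a request body from disk instead of slurping it with `File.binread`. A small usage sketch (the path is hypothetical):

```ruby
require "llm"

file = LLM::File("/tmp/example.pdf") # hypothetical path
file.with_io do |io|
  # io is opened in binary mode and is closed automatically
  # when the block exits, even if an exception is raised
  p io.read(8) # e.g. peek at the first eight bytes
end
```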
data/lib/llm/message.rb CHANGED
@@ -50,6 +50,13 @@ module LLM
  end
  alias_method :eql?, :==
 
+ ##
+ # Returns true when the message is from the LLM
+ # @return [Boolean]
+ def assistant?
+ role == "assistant" || role == "model"
+ end
+
  ##
  # Returns a string representation of the message
  # @return [String]
data/lib/llm/multipart.rb CHANGED
@@ -45,7 +45,9 @@ class LLM::Multipart
  # Returns the multipart request body
  # @return [String]
  def body
- [*parts, "--#{@boundary}--\r\n"].inject(&:<<)
+ io = StringIO.new("".b)
+ [*parts, StringIO.new("--#{@boundary}--\r\n".b)].each { IO.copy_stream(_1.tap(&:rewind), io) }
+ io.tap(&:rewind)
  end
 
  private
@@ -61,7 +63,7 @@ class LLM::Multipart
 
  def multipart_header(type:, locals:)
  if type == :file
- str = "".b
+ str = StringIO.new("".b)
  str << "--#{locals[:boundary]}" \
  "\r\n" \
  "Content-Disposition: form-data; name=\"#{locals[:key]}\";" \
@@ -70,7 +72,7 @@ class LLM::Multipart
  "Content-Type: #{locals[:content_type]}" \
  "\r\n\r\n"
  elsif type == :data
- str = "".b
+ str = StringIO.new("".b)
  str << "--#{locals[:boundary]}" \
  "\r\n" \
  "Content-Disposition: form-data; name=\"#{locals[:key]}\"" \
@@ -82,17 +84,17 @@ class LLM::Multipart
 
  def file_part(key, file, locals)
  locals = locals.merge(attributes(file))
- multipart_header(type: :file, locals:).tap do
- _1 << File.binread(file.path)
- _1 << "\r\n"
+ multipart_header(type: :file, locals:).tap do |io|
+ IO.copy_stream(file.path, io)
+ io << "\r\n"
  end
  end
 
  def data_part(key, value, locals)
  locals = locals.merge(value:)
- multipart_header(type: :data, locals:).tap do
- _1 << value.to_s
- _1 << "\r\n"
+ multipart_header(type: :data, locals:).tap do |io|
+ io << value.to_s
+ io << "\r\n"
  end
  end
  end
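Together these changes build the multipart body as a rewound `StringIO` (file contents copied in with `IO.copy_stream`) rather than one large binary String, so the OpenAI endpoints below can hand it to `Net::HTTP` as a streamed body. A minimal sketch of the consuming side, assuming a file path and request target in the style of the OpenAI provider:

```ruby
require "llm"
require "net/http"

# Hypothetical path; "purpose" mirrors the OpenAI files endpoint below
multi = LLM::Multipart.new(file: LLM::File("/tmp/haiku.txt"), purpose: "assistants")
req = Net::HTTP::Post.new("/v1/files")
req["content-type"] = multi.content_type
# body_stream makes Net::HTTP copy the IO in chunks instead of
# holding the whole encoded body in memory as a single String
req.body_stream = multi.body
```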
data/lib/llm/provider.rb CHANGED
@@ -15,8 +15,7 @@
  # @see LLM::Provider::Gemini
  # @see LLM::Provider::Ollama
  class LLM::Provider
- require_relative "http_client"
- include LLM::HTTPClient
+ require "net/http"
 
  ##
  # @param [String] secret
@@ -222,6 +221,32 @@ class LLM::Provider
  raise NotImplementedError
  end
 
+ ##
+ # Initiates a HTTP request
+ # @param [Net::HTTP] http
+ # The HTTP object to use for the request
+ # @param [Net::HTTPRequest] req
+ # The request to send
+ # @param [Proc] b
+ # A block to yield the response to (optional)
+ # @return [Net::HTTPResponse]
+ # The response from the server
+ # @raise [LLM::Error::Unauthorized]
+ # When authentication fails
+ # @raise [LLM::Error::RateLimit]
+ # When the rate limit is exceeded
+ # @raise [LLM::Error::ResponseError]
+ # When any other unsuccessful status code is returned
+ # @raise [SystemCallError]
+ # When there is a network error at the operating system level
+ def request(http, req, &b)
+ res = http.request(req, &b)
+ case res
+ when Net::HTTPOK then res
+ else error_handler.new(res).raise_error!
+ end
+ end
+
  ##
  # @param [String] provider
  # The name of the provider
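The `request` method moves here from the now-deleted `LLM::HTTPClient` module (shown at the end of this diff), so every provider endpoint funnels through one place. Each provider supplies its own `error_handler`, as the hunks below show; a rough sketch of the contract `request` relies on, using a hypothetical provider name:

```ruby
require "llm"

module MyProvider
  # Hypothetical handler illustrating the shape request expects:
  # it is constructed with the Net::HTTPResponse and must raise
  class ErrorHandler
    attr_reader :res

    def initialize(res)
      @res = res
    end

    def raise_error!
      raise LLM::Error::ResponseError.new { _1.response = res }, "Unexpected response"
    end
  end
end
```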
@@ -27,7 +27,7 @@ class LLM::Anthropic
  when Net::HTTPTooManyRequests
  raise LLM::Error::RateLimit.new { _1.response = res }, "Too many requests"
  else
- raise LLM::Error::BadResponse.new { _1.response = res }, "Unexpected response"
+ raise LLM::Error::ResponseError.new { _1.response = res }, "Unexpected response"
  end
  end
  end
@@ -28,7 +28,7 @@ module LLM
  # The embedding model to use
  # @param [Hash] params
  # Other embedding parameters
- # @raise (see LLM::HTTPClient#request)
+ # @raise (see LLM::Provider#request)
  # @return (see LLM::Provider#embed)
  def embed(input, token:, model: "voyage-2", **params)
  llm = LLM.voyageai(token)
@@ -44,7 +44,7 @@ module LLM
  # @param max_tokens The maximum number of tokens to generate
  # @param params (see LLM::Provider#complete)
  # @example (see LLM::Provider#complete)
- # @raise (see LLM::HTTPClient#request)
+ # @raise (see LLM::Provider#request)
  # @return (see LLM::Provider#complete)
  def complete(prompt, role = :user, model: "claude-3-5-sonnet-20240620", max_tokens: 1024, **params)
  params = {max_tokens:, model:}.merge!(params)
@@ -37,7 +37,7 @@ class LLM::Gemini
  # @param [LLM::File, LLM::Response::File] file The input audio
  # @param [String] model The model to use
  # @param [Hash] params Other parameters (see Gemini docs)
- # @raise (see LLM::HTTPClient#request)
+ # @raise (see LLM::Provider#request)
  # @return [LLM::Response::AudioTranscription]
  def create_transcription(file:, model: "gemini-1.5-flash", **params)
  res = @provider.complete [
@@ -61,7 +61,7 @@ class LLM::Gemini
  # @param [LLM::File, LLM::Response::File] file The input audio
  # @param [String] model The model to use
  # @param [Hash] params Other parameters (see Gemini docs)
- # @raise (see LLM::HTTPClient#request)
+ # @raise (see LLM::Provider#request)
  # @return [LLM::Response::AudioTranslation]
  def create_translation(file:, model: "gemini-1.5-flash", **params)
  res = @provider.complete [
@@ -27,12 +27,12 @@ class LLM::Gemini
  if reason == "API_KEY_INVALID"
  raise LLM::Error::Unauthorized.new { _1.response = res }, "Authentication error"
  else
- raise LLM::Error::BadResponse.new { _1.response = res }, "Unexpected response"
+ raise LLM::Error::ResponseError.new { _1.response = res }, "Unexpected response"
  end
  when Net::HTTPTooManyRequests
  raise LLM::Error::RateLimit.new { _1.response = res }, "Too many requests"
  else
- raise LLM::Error::BadResponse.new { _1.response = res }, "Unexpected response"
+ raise LLM::Error::ResponseError.new { _1.response = res }, "Unexpected response"
  end
  end
 
@@ -17,9 +17,9 @@ class LLM::Gemini
  # #!/usr/bin/env ruby
  # require "llm"
  #
- # llm = LLM.gemini(ENV["KEY"])
- # file = llm.files.create file: LLM::File("/audio/haiku.mp3")
+ # llm = LLM.gemini(ENV["KEY"])
  # bot = LLM::Chat.new(llm).lazy
+ # file = llm.files.create file: LLM::File("/audio/haiku.mp3")
  # bot.chat(file)
  # bot.chat("Describe the audio file I sent to you")
  # bot.chat("The audio file is the first message I sent to you.")
@@ -28,9 +28,9 @@ class LLM::Gemini
  # #!/usr/bin/env ruby
  # require "llm"
  #
- # llm = LLM.gemini(ENV["KEY"])
- # file = llm.files.create file: LLM::File("/audio/haiku.mp3")
+ # llm = LLM.gemini(ENV["KEY"])
  # bot = LLM::Chat.new(llm).lazy
+ # file = llm.files.create file: LLM::File("/audio/haiku.mp3")
  # bot.chat(["Describe the audio file I sent to you", file])
  # bot.messages.select(&:assistant?).each { print "[#{_1.role}]", _1.content, "\n" }
  class Files
@@ -52,7 +52,7 @@ class LLM::Gemini
  # end
  # @see https://ai.google.dev/gemini-api/docs/files Gemini docs
  # @param [Hash] params Other parameters (see Gemini docs)
- # @raise (see LLM::HTTPClient#request)
+ # @raise (see LLM::Provider#request)
  # @return [LLM::Response::FileList]
  def all(**params)
  query = URI.encode_www_form(params.merge!(key: secret))
@@ -75,16 +75,18 @@ class LLM::Gemini
  # @see https://ai.google.dev/gemini-api/docs/files Gemini docs
  # @param [File] file The file
  # @param [Hash] params Other parameters (see Gemini docs)
- # @raise (see LLM::HTTPClient#request)
+ # @raise (see LLM::Provider#request)
  # @return [LLM::Response::File]
  def create(file:, **params)
  req = Net::HTTP::Post.new(request_upload_url(file:), {})
  req["content-length"] = file.bytesize
  req["X-Goog-Upload-Offset"] = 0
  req["X-Goog-Upload-Command"] = "upload, finalize"
- req.body = File.binread(file.path)
- res = request(http, req)
- LLM::Response::File.new(res)
+ file.with_io do |io|
+ req.body_stream = io
+ res = request(http, req)
+ LLM::Response::File.new(res)
+ end
  end
 
  ##
@@ -96,7 +98,7 @@ class LLM::Gemini
  # @see https://ai.google.dev/gemini-api/docs/files Gemini docs
  # @param [#name, String] file The file to get
  # @param [Hash] params Other parameters (see Gemini docs)
- # @raise (see LLM::HTTPClient#request)
+ # @raise (see LLM::Provider#request)
  # @return [LLM::Response::File]
  def get(file:, **params)
  file_id = file.respond_to?(:name) ? file.name : file.to_s
@@ -114,7 +116,7 @@ class LLM::Gemini
  # @see https://ai.google.dev/gemini-api/docs/files Gemini docs
  # @param [#name, String] file The file to delete
  # @param [Hash] params Other parameters (see Gemini docs)
- # @raise (see LLM::HTTPClient#request)
+ # @raise (see LLM::Provider#request)
  # @return [LLM::Response::File]
  def delete(file:, **params)
  file_id = file.respond_to?(:name) ? file.name : file.to_s
@@ -34,18 +34,19 @@ class LLM::Gemini
  # @see https://ai.google.dev/gemini-api/docs/image-generation Gemini docs
  # @param [String] prompt The prompt
  # @param [Hash] params Other parameters (see Gemini docs)
- # @raise (see LLM::HTTPClient#request)
+ # @raise (see LLM::Provider#request)
  # @note
  # The prompt should make it clear you want to generate an image, or you
  # might unexpectedly receive a purely textual response. This is due to how
  # Gemini implements image generation under the hood.
  # @return [LLM::Response::Image]
  def create(prompt:, model: "gemini-2.0-flash-exp-image-generation", **params)
- req = Net::HTTP::Post.new("/v1beta/models/#{model}:generateContent?key=#{secret}", headers)
- req.body = JSON.dump({
+ req = Net::HTTP::Post.new("/v1beta/models/#{model}:generateContent?key=#{secret}", headers)
+ body = JSON.dump({
  contents: [{parts: {text: prompt}}],
  generationConfig: {responseModalities: ["TEXT", "IMAGE"]}
  }.merge!(params))
+ req.body = body
  res = request(http, req)
  LLM::Response::Image.new(res).extend(response_parser)
  end
@@ -60,17 +61,16 @@ class LLM::Gemini
  # @param [LLM::File] image The image to edit
  # @param [String] prompt The prompt
  # @param [Hash] params Other parameters (see Gemini docs)
- # @raise (see LLM::HTTPClient#request)
+ # @raise (see LLM::Provider#request)
  # @note (see LLM::Gemini::Images#create)
  # @return [LLM::Response::Image]
  def edit(image:, prompt:, model: "gemini-2.0-flash-exp-image-generation", **params)
- req = Net::HTTP::Post.new("/v1beta/models/#{model}:generateContent?key=#{secret}", headers)
- req.body = JSON.dump({
- contents: [
- {parts: [{text: prompt}, format_content(image)]}
- ],
+ req = Net::HTTP::Post.new("/v1beta/models/#{model}:generateContent?key=#{secret}", headers)
+ body = JSON.dump({
+ contents: [{parts: [{text: prompt}, format_content(image)]}],
  generationConfig: {responseModalities: ["TEXT", "IMAGE"]}
- }.merge!(params))
+ }.merge!(params)).b
+ req.body_stream = StringIO.new(body)
  res = request(http, req)
  LLM::Response::Image.new(res).extend(response_parser)
  end
@@ -49,7 +49,7 @@ module LLM
  # @param input (see LLM::Provider#embed)
  # @param model (see LLM::Provider#embed)
  # @param params (see LLM::Provider#embed)
- # @raise (see LLM::HTTPClient#request)
+ # @raise (see LLM::Provider#request)
  # @return (see LLM::Provider#embed)
  def embed(input, model: "text-embedding-004", **params)
  path = ["/v1beta/models/#{model}", "embedContent?key=#{@secret}"].join(":")
@@ -67,7 +67,7 @@ module LLM
  # @param model (see LLM::Provider#complete)
  # @param params (see LLM::Provider#complete)
  # @example (see LLM::Provider#complete)
- # @raise (see LLM::HTTPClient#request)
+ # @raise (see LLM::Provider#request)
  # @return (see LLM::Provider#complete)
  def complete(prompt, role = :user, model: "gemini-1.5-flash", **params)
  path = ["/v1beta/models/#{model}", "generateContent?key=#{@secret}"].join(":")
@@ -27,7 +27,7 @@ class LLM::Ollama
  when Net::HTTPTooManyRequests
  raise LLM::Error::RateLimit.new { _1.response = res }, "Too many requests"
  else
- raise LLM::Error::BadResponse.new { _1.response = res }, "Unexpected response"
+ raise LLM::Error::ResponseError.new { _1.response = res }, "Unexpected response"
  end
  end
  end
@@ -37,7 +37,7 @@ module LLM
  # @param input (see LLM::Provider#embed)
  # @param model (see LLM::Provider#embed)
  # @param params (see LLM::Provider#embed)
- # @raise (see LLM::HTTPClient#request)
+ # @raise (see LLM::Provider#request)
  # @return (see LLM::Provider#embed)
  def embed(input, model: "llama3.2", **params)
  params = {model:}.merge!(params)
@@ -55,7 +55,7 @@ module LLM
  # @param model (see LLM::Provider#complete)
  # @param params (see LLM::Provider#complete)
  # @example (see LLM::Provider#complete)
- # @raise (see LLM::HTTPClient#request)
+ # @raise (see LLM::Provider#request)
  # @return (see LLM::Provider#complete)
  def complete(prompt, role = :user, model: "llama3.2", **params)
  params = {model:, stream: false}.merge!(params)
@@ -31,7 +31,7 @@ class LLM::OpenAI
  # @param [String] model The model to use
  # @param [String] response_format The response format
  # @param [Hash] params Other parameters (see OpenAI docs)
- # @raise (see LLM::HTTPClient#request)
+ # @raise (see LLM::Provider#request)
  # @return [LLM::Response::Audio]
  def create_speech(input:, voice: "alloy", model: "gpt-4o-mini-tts", response_format: "mp3", **params)
  req = Net::HTTP::Post.new("/v1/audio/speech", headers)
@@ -51,13 +51,13 @@ class LLM::OpenAI
  # @param [LLM::File] file The input audio
  # @param [String] model The model to use
  # @param [Hash] params Other parameters (see OpenAI docs)
- # @raise (see LLM::HTTPClient#request)
+ # @raise (see LLM::Provider#request)
  # @return [LLM::Response::AudioTranscription]
  def create_transcription(file:, model: "whisper-1", **params)
  multi = LLM::Multipart.new(params.merge!(file:, model:))
  req = Net::HTTP::Post.new("/v1/audio/transcriptions", headers)
  req["content-type"] = multi.content_type
- req.body = multi.body
+ req.body_stream = multi.body
  res = request(http, req)
  LLM::Response::AudioTranscription.new(res).tap { _1.text = _1.body["text"] }
  end
@@ -73,13 +73,13 @@ class LLM::OpenAI
  # @param [LLM::File] file The input audio
  # @param [String] model The model to use
  # @param [Hash] params Other parameters (see OpenAI docs)
- # @raise (see LLM::HTTPClient#request)
+ # @raise (see LLM::Provider#request)
  # @return [LLM::Response::AudioTranslation]
  def create_translation(file:, model: "whisper-1", **params)
  multi = LLM::Multipart.new(params.merge!(file:, model:))
  req = Net::HTTP::Post.new("/v1/audio/translations", headers)
  req["content-type"] = multi.content_type
- req.body = multi.body
+ req.body_stream = multi.body
  res = request(http, req)
  LLM::Response::AudioTranslation.new(res).tap { _1.text = _1.body["text"] }
  end
@@ -27,7 +27,7 @@ class LLM::OpenAI
  when Net::HTTPTooManyRequests
  raise LLM::Error::RateLimit.new { _1.response = res }, "Too many requests"
  else
- raise LLM::Error::BadResponse.new { _1.response = res }, "Unexpected response"
+ raise LLM::Error::ResponseError.new { _1.response = res }, "Unexpected response"
  end
  end
  end
@@ -46,7 +46,7 @@ class LLM::OpenAI
  # end
  # @see https://platform.openai.com/docs/api-reference/files/list OpenAI docs
  # @param [Hash] params Other parameters (see OpenAI docs)
- # @raise (see LLM::HTTPClient#request)
+ # @raise (see LLM::Provider#request)
  # @return [LLM::Response::FileList]
  def all(**params)
  query = URI.encode_www_form(params)
@@ -67,13 +67,13 @@ class LLM::OpenAI
  # @param [File] file The file
  # @param [String] purpose The purpose of the file (see OpenAI docs)
  # @param [Hash] params Other parameters (see OpenAI docs)
- # @raise (see LLM::HTTPClient#request)
+ # @raise (see LLM::Provider#request)
  # @return [LLM::Response::File]
  def create(file:, purpose: "assistants", **params)
  multi = LLM::Multipart.new(params.merge!(file:, purpose:))
  req = Net::HTTP::Post.new("/v1/files", headers)
  req["content-type"] = multi.content_type
- req.body = multi.body
+ req.body_stream = multi.body
  res = request(http, req)
  LLM::Response::File.new(res)
  end
@@ -87,7 +87,7 @@ class LLM::OpenAI
  # @see https://platform.openai.com/docs/api-reference/files/get OpenAI docs
  # @param [#id, #to_s] file The file ID
  # @param [Hash] params Other parameters (see OpenAI docs)
- # @raise (see LLM::HTTPClient#request)
+ # @raise (see LLM::Provider#request)
  # @return [LLM::Response::File]
  def get(file:, **params)
  file_id = file.respond_to?(:id) ? file.id : file
@@ -107,7 +107,7 @@ class LLM::OpenAI
  # @see https://platform.openai.com/docs/api-reference/files/content OpenAI docs
  # @param [#id, #to_s] file The file ID
  # @param [Hash] params Other parameters (see OpenAI docs)
- # @raise (see LLM::HTTPClient#request)
+ # @raise (see LLM::Provider#request)
  # @return [LLM::Response::DownloadFile]
  def download(file:, **params)
  query = URI.encode_www_form(params)
@@ -126,7 +126,7 @@ class LLM::OpenAI
  # print res.deleted, "\n"
  # @see https://platform.openai.com/docs/api-reference/files/delete OpenAI docs
  # @param [#id, #to_s] file The file ID
- # @raise (see LLM::HTTPClient#request)
+ # @raise (see LLM::Provider#request)
  # @return [OpenStruct] Response body
  def delete(file:)
  file_id = file.respond_to?(:id) ? file.id : file
@@ -32,6 +32,7 @@ class LLM::OpenAI
  case content
  when Array then content.flat_map { format_content(_1, mode) }
  when URI then [{type: :image_url, image_url: {url: content.to_s}}]
+ when LLM::Response::File then [{type: :file, file: {file_id: content.id}}]
  else [{type: :text, text: content.to_s}]
  end
  elsif mode == :response
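This new branch is what lets an uploaded file participate in a conversation: a `LLM::Response::File` passed to chat is serialized as a file reference instead of being coerced to text. A sketch of the resulting flow, reusing the Files API shown above (the document path is illustrative):

```ruby
#!/usr/bin/env ruby
require "llm"

llm = LLM.openai(ENV["KEY"])
bot = LLM::Chat.new(llm).lazy
file = llm.files.create(file: LLM::File("/documents/example.pdf")) # illustrative path
bot.chat(file) # serialized as {type: :file, file: {file_id: file.id}}
bot.chat("Summarize the file in one sentence")
bot.messages.select(&:assistant?).each { print "[#{_1.role}] ", _1.content, "\n" }
```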
@@ -44,7 +44,7 @@ class LLM::OpenAI
  # @param [String] prompt The prompt
  # @param [String] model The model to use
  # @param [Hash] params Other parameters (see OpenAI docs)
- # @raise (see LLM::HTTPClient#request)
+ # @raise (see LLM::Provider#request)
  # @return [LLM::Response::Image]
  def create(prompt:, model: "dall-e-3", **params)
  req = Net::HTTP::Post.new("/v1/images/generations", headers)
@@ -63,13 +63,13 @@ class LLM::OpenAI
  # @param [File] image The image to create variations from
  # @param [String] model The model to use
  # @param [Hash] params Other parameters (see OpenAI docs)
- # @raise (see LLM::HTTPClient#request)
+ # @raise (see LLM::Provider#request)
  # @return [LLM::Response::Image]
  def create_variation(image:, model: "dall-e-2", **params)
  multi = LLM::Multipart.new(params.merge!(image:, model:))
  req = Net::HTTP::Post.new("/v1/images/variations", headers)
  req["content-type"] = multi.content_type
- req.body = multi.body
+ req.body_stream = multi.body
  res = request(http, req)
  LLM::Response::Image.new(res).extend(response_parser)
  end
@@ -85,13 +85,13 @@ class LLM::OpenAI
  # @param [String] prompt The prompt
  # @param [String] model The model to use
  # @param [Hash] params Other parameters (see OpenAI docs)
- # @raise (see LLM::HTTPClient#request)
+ # @raise (see LLM::Provider#request)
  # @return [LLM::Response::Image]
  def edit(image:, prompt:, model: "dall-e-2", **params)
  multi = LLM::Multipart.new(params.merge!(image:, prompt:, model:))
  req = Net::HTTP::Post.new("/v1/images/edits", headers)
  req["content-type"] = multi.content_type
- req.body = multi.body
+ req.body_stream = multi.body
  res = request(http, req)
  LLM::Response::Image.new(res).extend(response_parser)
  end
@@ -4,7 +4,14 @@ class LLM::OpenAI
  ##
  # The {LLM::OpenAI::Responses LLM::OpenAI::Responses} class provides a responses
  # object for interacting with [OpenAI's response API](https://platform.openai.com/docs/guides/conversation-state?api-mode=responses).
+ # The responses API is similar to the chat completions API but it can maintain
+ # conversation state across multiple requests. This is useful when you want to
+ # save bandwidth and/or not maintain the message thread by yourself.
+ #
  # @example
+ # #!/usr/bin/env ruby
+ # require "llm"
+ #
  # llm = LLM.openai(ENV["KEY"])
  # res1 = llm.responses.create "Your task is to help me with math", :developer
  # res2 = llm.responses.create "5 + 5 = ?", :user, previous_response_id: res1.id
@@ -27,7 +34,7 @@ class LLM::OpenAI
  # @param role (see LLM::Provider#complete)
  # @param model (see LLM::Provider#complete)
  # @param [Hash] params Response params
- # @raise (see LLM::HTTPClient#request)
+ # @raise (see LLM::Provider#request)
  # @return [LLM::Response::Output]
  def create(prompt, role = :user, model: "gpt-4o-mini", **params)
  params = {model:}.merge!(params)
@@ -42,7 +49,7 @@ class LLM::OpenAI
  # Get a response
  # @see https://platform.openai.com/docs/api-reference/responses/get OpenAI docs
  # @param [#id, #to_s] response Response ID
- # @raise (see LLM::HTTPClient#request)
+ # @raise (see LLM::Provider#request)
  # @return [LLM::Response::Output]
  def get(response, **params)
  response_id = response.respond_to?(:id) ? response.id : response
@@ -56,7 +63,7 @@ class LLM::OpenAI
  # Deletes a response
  # @see https://platform.openai.com/docs/api-reference/responses/delete OpenAI docs
  # @param [#id, #to_s] response Response ID
- # @raise (see LLM::HTTPClient#request)
+ # @raise (see LLM::Provider#request)
  # @return [OpenStruct] Response body
  def delete(response)
  response_id = response.respond_to?(:id) ? response.id : response
@@ -28,7 +28,7 @@ module LLM
  # @param input (see LLM::Provider#embed)
  # @param model (see LLM::Provider#embed)
  # @param params (see LLM::Provider#embed)
- # @raise (see LLM::HTTPClient#request)
+ # @raise (see LLM::Provider#request)
  # @return (see LLM::Provider#embed)
  def embed(input, model: "text-embedding-3-small", **params)
  req = Net::HTTP::Post.new("/v1/embeddings", headers)
@@ -45,7 +45,7 @@ module LLM
  # @param model (see LLM::Provider#complete)
  # @param params (see LLM::Provider#complete)
  # @example (see LLM::Provider#complete)
- # @raise (see LLM::HTTPClient#request)
+ # @raise (see LLM::Provider#request)
  # @return (see LLM::Provider#complete)
  def complete(prompt, role = :user, model: "gpt-4o-mini", **params)
  params = {model:}.merge!(params)
@@ -25,7 +25,7 @@ class LLM::VoyageAI
  when Net::HTTPTooManyRequests
  raise LLM::Error::RateLimit.new { _1.response = res }, "Too many requests"
  else
- raise LLM::Error::BadResponse.new { _1.response = res }, "Unexpected response"
+ raise LLM::Error::ResponseError.new { _1.response = res }, "Unexpected response"
  end
  end
  end
data/lib/llm/version.rb CHANGED
@@ -1,5 +1,5 @@
  # frozen_string_literal: true
 
  module LLM
- VERSION = "0.3.0"
+ VERSION = "0.3.1"
  end
@@ -66,7 +66,7 @@ RSpec.describe "LLM::Gemini::Files" do
  end
 
  it "translates the audio clip" do
- is_expected.to eq("In the name of God, the Most Gracious, the Most Merciful.\n")
+ is_expected.to eq("In the name of Allah, the Most Gracious, the Most Merciful.\n")
  end
  end
 
@@ -86,7 +86,7 @@ RSpec.describe "LLM::Gemini::Files" do
  end
 
  it "translates the audio clip" do
- is_expected.to eq("All praise is due to Allah, Lord of the Worlds.\n")
+ is_expected.to eq("All praise is due to Allah, Lord of the worlds.\n")
  end
  end
 
@@ -60,7 +60,12 @@ end
  RSpec.describe "LLM::Chat: lazy" do
  let(:described_class) { LLM::Chat }
  let(:token) { ENV["LLM_SECRET"] || "TOKEN" }
- let(:prompt) { "Keep your answers short and concise, and provide three answers to the three questions" }
+ let(:prompt) do
+ "Keep your answers short and concise, and provide three answers to the three questions" \
+ "There should be one answer per line" \
+ "An answer should be a number, for example: 5" \
+ "Nothing else"
+ end
 
  context "when given completions" do
  context "with gemini",
@@ -105,7 +110,7 @@ RSpec.describe "LLM::Chat: lazy" do
  it "maintains a conversation" do
  is_expected.to have_attributes(
  role: "assistant",
- content: "1. 5 \n2. 10 \n3. 12 "
+ content: %r|5\s*\n10\s*\n12\s*|
  )
  end
  end
@@ -167,7 +172,7 @@ RSpec.describe "LLM::Chat: lazy" do
  it "maintains a conversation" do
  is_expected.to have_attributes(
  role: "assistant",
- content: "1. 3 + 2 = 5 \n2. 5 + 5 = 10 \n3. 5 + 7 = 12"
+ content: %r|5\s*\n10\s*\n12\s*|
  )
  end
  end
@@ -47,7 +47,8 @@ RSpec.describe "LLM::OpenAI: completions" do
  subject(:response) do
  openai.complete "What is your name? What age are you?", :user, messages: [
  {role: "system", content: "Answer all of my questions"},
- {role: "system", content: "Your name is Pablo, you are 25 years old and you are my amigo"}
+ {role: "system", content: "Answer in the format: My name is <name> and I am <age> years old"},
+ {role: "system", content: "Your name is Pablo and you are 25 years old"},
  ]
  end
 
@@ -56,7 +57,7 @@ RSpec.describe "LLM::OpenAI: completions" do
  choices: [
  have_attributes(
  role: "assistant",
- content: "My name is Pablo, and I'm 25 years old! How can I help you today, amigo?"
+ content: %r|\AMy name is Pablo and I am 25 years old|
  )
  ]
  )
@@ -68,7 +69,7 @@ RSpec.describe "LLM::OpenAI: completions" do
  subject(:response) { openai.complete(URI("/foobar.exe"), :user) }
 
  it "raises an error" do
- expect { response }.to raise_error(LLM::Error::BadResponse)
+ expect { response }.to raise_error(LLM::Error::ResponseError)
  end
 
  it "includes the response" do
@@ -9,10 +9,11 @@ RSpec.describe "LLM::OpenAI::Files" do
  context "when given a successful create operation (haiku1.txt)",
  vcr: {cassette_name: "openai/files/successful_create_haiku1"} do
  subject(:file) { provider.files.create(file: LLM::File("spec/fixtures/documents/haiku1.txt")) }
- after { provider.files.delete(file:) }
 
  it "is successful" do
  expect(file).to be_instance_of(LLM::Response::File)
+ ensure
+ provider.files.delete(file:)
  end
 
  it "returns a file object" do
@@ -21,16 +22,19 @@ RSpec.describe "LLM::OpenAI::Files" do
  filename: "haiku1.txt",
  purpose: "assistants"
  )
+ ensure
+ provider.files.delete(file:)
  end
  end
 
  context "when given a successful create operation (haiku2.txt)",
  vcr: {cassette_name: "openai/files/successful_create_haiku2"} do
  subject(:file) { provider.files.create(file: LLM::File("spec/fixtures/documents/haiku2.txt")) }
- after { provider.files.delete(file:) }
 
  it "is successful" do
  expect(file).to be_instance_of(LLM::Response::File)
+ ensure
+ provider.files.delete(file:)
  end
 
  it "returns a file object" do
@@ -39,6 +43,8 @@ RSpec.describe "LLM::OpenAI::Files" do
  filename: "haiku2.txt",
  purpose: "assistants"
  )
+ ensure
+ provider.files.delete(file:)
  end
  end
 
@@ -62,10 +68,11 @@ RSpec.describe "LLM::OpenAI::Files" do
  vcr: {cassette_name: "openai/files/successful_get_haiku4"} do
  let(:file) { provider.files.create(file: LLM::File("spec/fixtures/documents/haiku4.txt")) }
  subject { provider.files.get(file:) }
- after { provider.files.delete(file:) }
 
  it "is successful" do
  is_expected.to be_instance_of(LLM::Response::File)
+ ensure
+ provider.files.delete(file:)
  end
 
  it "returns a file object" do
@@ -74,6 +81,8 @@ RSpec.describe "LLM::OpenAI::Files" do
  filename: "haiku4.txt",
  purpose: "assistants"
  )
+ ensure
+ provider.files.delete(file:)
  end
  end
 
@@ -86,10 +95,11 @@ RSpec.describe "LLM::OpenAI::Files" do
  ]
  end
  subject(:file) { provider.files.all }
- after { files.each { |file| provider.files.delete(file:) } }
 
  it "is successful" do
  expect(file).to be_instance_of(LLM::Response::FileList)
+ ensure
+ files.each { |file| provider.files.delete(file:) }
  end
 
  it "returns an array of file objects" do
@@ -107,44 +117,88 @@ RSpec.describe "LLM::OpenAI::Files" do
  )
  ]
  )
+ ensure
+ files.each { |file| provider.files.delete(file:) }
  end
  end
 
  context "when asked to describe the contents of a file",
  vcr: {cassette_name: "openai/files/describe_freebsd.sysctl.pdf"} do
- subject { bot.last_message.content }
+ subject { bot.last_message.content.downcase[0..2] }
  let(:bot) { LLM::Chat.new(provider).lazy }
  let(:file) { provider.files.create(file: LLM::File("spec/fixtures/documents/freebsd.sysctl.pdf")) }
- after { provider.files.delete(file:) }
 
  before do
  bot.respond(file)
- bot.respond("Describe the contents of the file to me")
- bot.respond("Your summary should be no more than ten words")
+ bot.respond("Is this PDF document about FreeBSD?")
+ bot.respond("Answer with yes or no. Nothing else.")
  end
 
  it "describes the document" do
- is_expected.to eq("FreeBSD system control nodes implementation and usage overview.")
+ is_expected.to eq("yes")
+ ensure
+ provider.files.delete(file:)
  end
  end
 
  context "when asked to describe the contents of a file",
  vcr: {cassette_name: "openai/files/describe_freebsd.sysctl_2.pdf"} do
- subject { bot.last_message.content }
+ subject { bot.last_message.content.downcase[0..2] }
  let(:bot) { LLM::Chat.new(provider).lazy }
  let(:file) { provider.files.create(file: LLM::File("spec/fixtures/documents/freebsd.sysctl.pdf")) }
- after { provider.files.delete(file:) }
 
  before do
  bot.respond([
- "Describe the contents of the file to me",
- "Your summary should be no more than ten words",
+ "Is this PDF document about FreeBSD?",
+ "Answer with yes or no. Nothing else.",
  file
  ])
  end
 
  it "describes the document" do
- is_expected.to eq("FreeBSD kernel system control nodes overview and implementation.")
+ is_expected.to eq("yes")
+ ensure
+ provider.files.delete(file:)
+ end
+ end
+
+ context "when asked to describe the contents of a file",
+ vcr: {cassette_name: "openai/files/describe_freebsd.sysctl_3.pdf"} do
+ subject { bot.last_message.content.downcase[0..2] }
+ let(:bot) { LLM::Chat.new(provider).lazy }
+ let(:file) { provider.files.create(file: LLM::File("spec/fixtures/documents/freebsd.sysctl.pdf")) }
+
+ before do
+ bot.chat(file)
+ bot.chat("Is this PDF document about FreeBSD?")
+ bot.chat("Answer with yes or no. Nothing else.")
+ end
+
+ it "describes the document" do
+ is_expected.to eq("yes")
+ ensure
+ provider.files.delete(file:)
+ end
+ end
+
+ context "when asked to describe the contents of a file",
+ vcr: {cassette_name: "openai/files/describe_freebsd.sysctl_4.pdf"} do
+ subject { bot.last_message.content.downcase[0..2] }
+ let(:bot) { LLM::Chat.new(provider).lazy }
+ let(:file) { provider.files.create(file: LLM::File("spec/fixtures/documents/freebsd.sysctl.pdf")) }
+
+ before do
+ bot.chat([
+ "Is this PDF document about FreeBSD?",
+ "Answer with yes or no. Nothing else.",
+ file
+ ])
+ end
+
+ it "describes the document" do
+ is_expected.to eq("yes")
+ ensure
+ provider.files.delete(file:)
  end
  end
  end
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: llm.rb
  version: !ruby/object:Gem::Version
- version: 0.3.0
+ version: 0.3.1
  platform: ruby
  authors:
  - Antar Azri
@@ -9,7 +9,7 @@ authors:
  autorequire:
  bindir: bin
  cert_chain: []
- date: 2025-04-25 00:00:00.000000000 Z
+ date: 2025-04-26 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
  name: webmock
@@ -154,7 +154,6 @@ files:
  - lib/llm/core_ext/ostruct.rb
  - lib/llm/error.rb
  - lib/llm/file.rb
- - lib/llm/http_client.rb
  - lib/llm/message.rb
  - lib/llm/mime.rb
  - lib/llm/model.rb
data/lib/llm/http_client.rb DELETED
@@ -1,34 +0,0 @@
- # frozen_string_literal: true
-
- module LLM
- ##
- # @private
- module HTTPClient
- require "net/http"
- ##
- # Initiates a HTTP request
- # @param [Net::HTTP] http
- # The HTTP object to use for the request
- # @param [Net::HTTPRequest] req
- # The request to send
- # @param [Proc] b
- # A block to yield the response to (optional)
- # @return [Net::HTTPResponse]
- # The response from the server
- # @raise [LLM::Error::Unauthorized]
- # When authentication fails
- # @raise [LLM::Error::RateLimit]
- # When the rate limit is exceeded
- # @raise [LLM::Error::BadResponse]
- # When any other unsuccessful status code is returned
- # @raise [SystemCallError]
- # When there is a network error at the operating system level
- def request(http, req, &b)
- res = http.request(req, &b)
- case res
- when Net::HTTPOK then res
- else error_handler.new(res).raise_error!
- end
- end
- end
- end