llm.rb 0.13.0 β†’ 0.14.0

This diff shows the changes between two publicly released versions of the package, as they appear in their respective public registries. It is provided for informational purposes only.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
-   metadata.gz: f83248fa3bce285d75952b726cc7d1b097e61b61fb77edd55f39168922a8614c
-   data.tar.gz: 52fa691fad7eaa2e77e24245aba97ea469688f8307f869d533fac47707aa6caf
+   metadata.gz: 587a0b7425102e44f79107bd5f0471ba2cb8f88d95fe28af5e1d334093a539e3
+   data.tar.gz: 73242cb4daa8890a1f16d578c0e6ff50d54651162d11132ce06d9fb571fec062
  SHA512:
-   metadata.gz: d17e104601295bbbd8dd5bd34fe5378e96367aacb832ddb81b7f0fdfd3fa54f39d98dff6568f3190a4d979de4382a6ebd2eba8ee2fb2cce0e67db2478be9a1df
-   data.tar.gz: d2e6bb0f507bdcf140bd7f44f9edd40c057f061a9b4bc7123a87bdf6fa40a6f16a99642965bdaf81aaaeab5e995a01587c79cd47e19c6a2b5f86e456fa4813a9
+   metadata.gz: 1f62948774f2cc389ec8b1d53886785667c95d4bd98d3f648307c07342b004c080feb40ae8baa9bda0cb6cf0d3cbeec33f084e9cd5a0a22622a07519a772c0b4
+   data.tar.gz: 92ce35a290004371ed6ac8ef11d4a41344d85170f398d69ebf2d5251a1a73fc9db79e3065ab5e8d32945070854362f415276a00ac56a4d4e916cd5752f12b5d4
data/README.md CHANGED
@@ -5,12 +5,16 @@ includes OpenAI, Gemini, Anthropic, xAI (grok), DeepSeek, Ollama, and
  LlamaCpp. The toolkit includes full support for chat, streaming, tool calling,
  audio, images, files, and structured outputs (JSON Schema).

+ The library provides a common, uniform interface for all the providers and
+ features it supports, in addition to provider-specific features as well. Keep
+ reading to find out more.
+
  ## Features

  #### General
  - ✅ A single unified interface for multiple providers
  - 📦 Zero dependencies outside Ruby's standard library
- - 🚀 Efficient API design that minimizes the number of requests made
+ - 🚀 Smart API design that minimizes the number of requests made

  #### Chat, Agents
  - 🧠 Stateless and stateful chat via completions and responses API
@@ -44,12 +48,12 @@ breaks things down by provider, so you can see exactly what’s supported where.
  | **Streaming** | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
  | **Tool Calling** | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
  | **JSON Schema / Structured Output** | ✅ | ❌ | ✅ | ❌ | ✅ | ✅* | ✅* |
- | **Audio (TTS / Transcribe / Translate)** | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ |
- | **Image Generation & Editing** | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ❌ |
- | **File Uploads** | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ |
- | **Multimodal Prompts** *(text, documents, audio, images, videos, URLs, etc)* | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
  | **Embeddings** | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ |
+ | **Multimodal Prompts** *(text, documents, audio, images, videos, URLs, etc)* | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
+ | **Files API** | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ |
  | **Models API** | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
+ | **Audio (TTS / Transcribe / Translate)** | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ |
+ | **Image Generation & Editing** | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ❌ |
  | **Local Model Support** | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ |
  | **Vector Stores (RAG)** | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
  | **Responses** | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
@@ -167,10 +171,10 @@ require "llm"
  ##
  # Objects
  llm = LLM.openai(key: ENV["KEY"])
- schema = llm.schema.object(probability: llm.schema.integer.required)
+ schema = llm.schema.object(probability: llm.schema.number.required)
  bot = LLM::Bot.new(llm, schema:)
  bot.chat "Does the earth orbit the sun?", role: :user
- bot.messages.find(&:assistant?).content! # => {probability: 1}
+ bot.messages.find(&:assistant?).content! # => {probability: 1.0}

  ##
  # Enums
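A note on the schema change above: a probability is a fraction between 0 and 1, so the field moves from `integer` to `number` (JSON Schema's floating-point type), matching the annotated reply of `{probability: 1.0}`. A minimal sketch, assuming an OpenAI key in `ENV["KEY"]`:

```ruby
llm = LLM.openai(key: ENV["KEY"])
# number permits fractional values such as 0.95;
# integer would steer the model toward whole numbers like 0 or 1
schema = llm.schema.object(probability: llm.schema.number.required)
bot = LLM::Bot.new(llm, schema:)
bot.chat "Does the earth orbit the sun?", role: :user
bot.messages.find(&:assistant?).content! # => {probability: 1.0}
```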
@@ -259,7 +263,7 @@ require "llm"
  llm = LLM.openai(key: ENV["KEY"])
  bot = LLM::Bot.new(llm)
  file = llm.files.create(file: "/books/goodread.pdf")
- bot.chat(["Tell me about this file", file])
+ bot.chat ["Tell me about this file", file]
  bot.messages.select(&:assistant?).each { print "[#{_1.role}] ", _1.content, "\n" }
  ```

@@ -477,7 +481,7 @@ end
  ##
  # Select a model
  model = llm.models.all.find { |m| m.id == "gpt-3.5-turbo" }
- bot = LLM::Bot.new(llm, model:)
+ bot = LLM::Bot.new(llm, model: model.id)
  bot.chat "Hello #{model.id} :)"
  bot.messages.select(&:assistant?).each { print "[#{_1.role}] ", _1.content, "\n" }
  ```
@@ -493,8 +497,12 @@ over or doesn't cover at all. The API reference is available at

  ### Guides

- * [An introduction to RAG with llm.rb](https://0x1eef.github.io/posts/an-introduction-to-rag-with-llm.rb/) –
-   a blog post that implements the RAG pattern in 32 lines of Ruby code
+ * [An introduction to RAG](https://0x1eef.github.io/posts/an-introduction-to-rag-with-llm.rb/) –
+   a blog post that implements the RAG pattern
+ * [How to estimate the age of a person in a photo](https://0x1eef.github.io/posts/age-estimation-with-llm.rb/) –
+   a blog post that implements an age estimation tool
+ * [How to edit an image with Gemini](https://0x1eef.github.io/posts/how-to-edit-images-with-gemini/) –
+   a blog post that implements image editing with Gemini
  * [docs/](docs/) – the docs directory contains additional guides

data/lib/llm/buffer.rb CHANGED
@@ -87,6 +87,20 @@ module LLM
        @pending.empty? and @completed.empty?
      end

+     ##
+     # @example
+     #   llm = LLM.openai(key: ENV["KEY"])
+     #   bot = LLM::Bot.new(llm, stream: $stdout)
+     #   bot.chat "Hello", role: :user
+     #   bot.messages.drain
+     # @note
+     #   This method is especially useful when using the streaming API.
+     # Drains the buffer and returns all messages as an array
+     # @return [Array<LLM::Message>]
+     def drain
+       to_a
+     end
+
      private

      def empty!
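`drain` is a readability alias for `to_a`: it forces any lazily queued requests to be sent and returns the accumulated messages. A short usage sketch, assuming an OpenAI key in `ENV["KEY"]`:

```ruby
llm = LLM.openai(key: ENV["KEY"])
bot = LLM::Bot.new(llm, stream: $stdout)
bot.chat "Hello", role: :user
messages = bot.messages.drain # same result as bot.messages.to_a
messages.each { print "[#{_1.role}] ", _1.content, "\n" }
```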
@@ -101,23 +115,26 @@ module LLM
      end

      def complete!(message, params)
-       pendings = @pending.map { _1[0] }
-       messages = [*@completed, *pendings]
+       oldparams = @pending.map { _1[1] }
+       pendings = @pending.map { _1[0] }
+       messages = [*@completed, *pendings]
        role = message.role
        completion = @provider.complete(
          message.content,
-         params.merge(role:, messages:)
+         [*oldparams, params.merge(role:, messages:)].inject({}, &:merge!)
        )
        @completed.concat([*pendings, message, *completion.choices[0]])
        @pending.clear
      end

      def respond!(message, params)
-       pendings = @pending.map { _1[0] }
-       input = [*pendings]
+       oldparams = @pending.map { _1[1] }
+       pendings = @pending.map { _1[0] }
+       messages = [*pendings]
        role = message.role
        params = [
-         params.merge(input:),
+         *oldparams,
+         params.merge(input: messages),
          @response ? {previous_response_id: @response.id} : {}
        ].inject({}, &:merge!)
        @response = @provider.responses.create(message.content, params.merge(role:))
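The `complete!` and `respond!` changes address a subtle loss of information: the buffer stores `[message, params]` pairs, but previously only the params of the final call reached the provider. Now the params of every pending message are merged, left to right, into the single request that flushes the buffer. A hedged sketch of the effect, assuming `LLM::Bot#chat` forwards extra keyword params into the pair the buffer stores:

```ruby
llm = LLM.openai(key: ENV["KEY"])
bot = LLM::Bot.new(llm)
bot.chat "You are a poet", role: :system, temperature: 0.9 # queued, no request yet
bot.chat "Write a haiku about Ruby", role: :user           # queued, no request yet
# A single request is sent here; temperature: 0.9 from the earlier
# pending call is now merged in rather than silently discarded.
bot.messages.drain
```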
data/lib/llm/file.rb CHANGED
@@ -26,7 +26,7 @@ class LLM::File
    # @return [String]
    #  Returns the MIME type of the file
    def mime_type
-     LLM::Mime[File.extname(path)]
+     LLM::Mime[path]
    end

    ##
data/lib/llm/mime.rb CHANGED
@@ -3,12 +3,16 @@
  ##
  # @private
  class LLM::Mime
+   EXTNAME = /\A\.[a-zA-Z0-9]+\z/
+   private_constant :EXTNAME
+
    ##
    # Lookup a mime type
    # @return [String, nil]
    def self.[](key)
-     key = key.respond_to?(:path) ? File.extname(key.path) : key
-     types[key] || "application/octet-stream"
+     key = key.respond_to?(:path) ? key.path : key
+     extname = (key =~ EXTNAME) ? key : File.extname(key)
+     types[extname] || "application/octet-stream"
    end

    ##
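With the `EXTNAME` guard in place, `LLM::Mime[]` accepts a bare extension, a full path, or any object that responds to `#path`, which is what lets `LLM::File#mime_type` above pass `path` through unchanged. A small sketch (it assumes these common extensions are present in the lookup table):

```ruby
LLM::Mime[".png"]                # extension lookup   # => "image/png"
LLM::Mime["/books/novel.pdf"]    # path lookup        # => "application/pdf"
LLM::Mime[File.open("demo.mp3")] # objects with #path # => "audio/mpeg"
LLM::Mime["no-extension"]        # unknown inputs     # => "application/octet-stream"
```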
data/lib/llm/provider.rb CHANGED
@@ -284,7 +284,8 @@ class LLM::Provider
          parser&.free
        end
      else
-       client.request(request, &b)
+       b ? client.request(request) { (Net::HTTPSuccess === _1) ? b.call(_1) : _1 } :
+         client.request(request)
      end
      handle_response(res)
    end
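The change to the request helper narrows when a caller's block runs: the block is now invoked only for `Net::HTTPSuccess` responses, while failures are returned untouched so that `handle_response` can raise the appropriate error. In simplified form, the dispatch behaves like this:

```ruby
res = if b
  # stream the response to the caller's block only on success;
  # pass error responses through for handle_response to raise on
  client.request(request) { |r| (Net::HTTPSuccess === r) ? b.call(r) : r }
else
  client.request(request)
end
handle_response(res)
```

One practical consequence: a streaming download (such as `files.download` below) no longer writes the body of a failed response into the caller's buffer.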
data/lib/llm/providers/anthropic/files.rb ADDED
@@ -0,0 +1,155 @@
+ # frozen_string_literal: true
+
+ class LLM::Anthropic
+   ##
+   # The {LLM::Anthropic::Files LLM::Anthropic::Files} class provides a files
+   # object for interacting with [Anthropic's Files API](https://docs.anthropic.com/en/docs/build-with-claude/files).
+   #
+   # @example
+   #   #!/usr/bin/env ruby
+   #   require "llm"
+   #
+   #   llm = LLM.anthropic(key: ENV["KEY"])
+   #   bot = LLM::Bot.new(llm)
+   #   file = llm.files.create file: "/books/goodread.pdf"
+   #   bot.chat ["Tell me about this PDF", file]
+   #   bot.messages.select(&:assistant?).each { print "[#{_1.role}]", _1.content, "\n" }
+   class Files
+     require_relative "response/file"
+     ##
+     # Returns a new Files object
+     # @param provider [LLM::Provider]
+     # @return [LLM::Anthropic::Files]
+     def initialize(provider)
+       @provider = provider
+     end
+
+     ##
+     # List all files
+     # @example
+     #   llm = LLM.anthropic(key: ENV["KEY"])
+     #   res = llm.files.all
+     #   res.each do |file|
+     #     print "id: ", file.id, "\n"
+     #   end
+     # @see https://docs.anthropic.com/en/docs/build-with-claude/files Anthropic docs
+     # @param [Hash] params Other parameters (see Anthropic docs)
+     # @raise (see LLM::Provider#request)
+     # @return [LLM::Response]
+     def all(**params)
+       query = URI.encode_www_form(params)
+       req = Net::HTTP::Get.new("/v1/files?#{query}", headers)
+       res = execute(request: req)
+       LLM::Response.new(res).extend(LLM::Anthropic::Response::Enumerable)
+     end
+
+     ##
+     # Create a file
+     # @example
+     #   llm = LLM.anthropic(key: ENV["KEY"])
+     #   res = llm.files.create file: "/documents/haiku.txt"
+     # @see https://docs.anthropic.com/en/docs/build-with-claude/files Anthropic docs
+     # @param [File, LLM::File, String] file The file
+     # @param [Hash] params Other parameters (see Anthropic docs)
+     # @raise (see LLM::Provider#request)
+     # @return [LLM::Response]
+     def create(file:, **params)
+       multi = LLM::Multipart.new(params.merge!(file: LLM.File(file)))
+       req = Net::HTTP::Post.new("/v1/files", headers)
+       req["content-type"] = multi.content_type
+       set_body_stream(req, multi.body)
+       res = execute(request: req)
+       LLM::Response.new(res).extend(LLM::Anthropic::Response::File)
+     end
+
+     ##
+     # Get a file
+     # @example
+     #   llm = LLM.anthropic(key: ENV["KEY"])
+     #   res = llm.files.get(file: "file-1234567890")
+     #   print "id: ", res.id, "\n"
+     # @see https://docs.anthropic.com/en/docs/build-with-claude/files Anthropic docs
+     # @param [#id, #to_s] file The file ID
+     # @param [Hash] params Other parameters - if any (see Anthropic docs)
+     # @raise (see LLM::Provider#request)
+     # @return [LLM::Response]
+     def get(file:, **params)
+       file_id = file.respond_to?(:id) ? file.id : file
+       query = URI.encode_www_form(params)
+       req = Net::HTTP::Get.new("/v1/files/#{file_id}?#{query}", headers)
+       res = execute(request: req)
+       LLM::Response.new(res).extend(LLM::Anthropic::Response::File)
+     end
+
+     ##
+     # Retrieve file metadata
+     # @example
+     #   llm = LLM.anthropic(key: ENV["KEY"])
+     #   res = llm.files.get_metadata(file: "file-1234567890")
+     #   print "id: ", res.id, "\n"
+     # @see https://docs.anthropic.com/en/docs/build-with-claude/files
+     # @param [#id, #to_s] file The file ID
+     # @param [Hash] params Other parameters - if any (see Anthropic docs)
+     # @raise (see LLM::Provider#request)
+     # @return [LLM::Response]
+     def get_metadata(file:, **params)
+       query = URI.encode_www_form(params)
+       file_id = file.respond_to?(:id) ? file.id : file
+       req = Net::HTTP::Get.new("/v1/files/#{file_id}?#{query}", headers)
+       res = execute(request: req)
+       LLM::Response.new(res).extend(LLM::Anthropic::Response::File)
+     end
+     alias_method :retrieve_metadata, :get_metadata
+
+     ##
+     # Delete a file
+     # @example
+     #   llm = LLM.anthropic(key: ENV["KEY"])
+     #   res = llm.files.delete(file: "file-1234567890")
+     #   print res.deleted, "\n"
+     # @see https://docs.anthropic.com/en/docs/build-with-claude/files Anthropic docs
+     # @param [#id, #to_s] file The file ID
+     # @raise (see LLM::Provider#request)
+     # @return [LLM::Response]
+     def delete(file:)
+       file_id = file.respond_to?(:id) ? file.id : file
+       req = Net::HTTP::Delete.new("/v1/files/#{file_id}", headers)
+       res = execute(request: req)
+       LLM::Response.new(res)
+     end
+
+     ##
+     # Download the contents of a file
+     # @note
+     #   You can only download files that were created by the code
+     #   execution tool. Files that you uploaded cannot be downloaded.
+     # @example
+     #   llm = LLM.anthropic(key: ENV["KEY"])
+     #   res = llm.files.download(file: "file-1234567890")
+     #   File.binwrite "program.c", res.file.read
+     #   print res.file.read, "\n"
+     # @see https://docs.anthropic.com/en/docs/build-with-claude/files Anthropic docs
+     # @param [#id, #to_s] file The file ID
+     # @param [Hash] params Other parameters (see Anthropic docs)
+     # @raise (see LLM::Provider#request)
+     # @return [LLM::Response]
+     def download(file:, **params)
+       query = URI.encode_www_form(params)
+       file_id = file.respond_to?(:id) ? file.id : file
+       req = Net::HTTP::Get.new("/v1/files/#{file_id}/content?#{query}", headers)
+       io = StringIO.new("".b)
+       res = execute(request: req) { |res| res.read_body { |chunk| io << chunk } }
+       LLM::Response.new(res).tap { _1.define_singleton_method(:file) { io } }
+     end
+
+     private
+
+     def key
+       @provider.instance_variable_get(:@key)
+     end
+
+     [:headers, :execute, :set_body_stream].each do |m|
+       define_method(m) { |*args, **kwargs, &b| @provider.send(m, *args, **kwargs, &b) }
+     end
+   end
+ end
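Taken together, the new class brings Anthropic to parity with the existing OpenAI and Gemini file APIs. An end-to-end sketch assembled from the doc comments above (response attributes such as `id` come from Anthropic's API):

```ruby
#!/usr/bin/env ruby
require "llm"

llm  = LLM.anthropic(key: ENV["KEY"])
file = llm.files.create(file: "/documents/haiku.txt")
llm.files.all.each { print "id: ", _1.id, "\n" }
meta = llm.files.get_metadata(file: file.id) # aliased as retrieve_metadata
print "created: ", meta.id, "\n"
llm.files.delete(file: file.id)
```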
data/lib/llm/providers/anthropic/format/completion_format.rb CHANGED
@@ -60,6 +60,12 @@ module LLM::Anthropic::Format
            "is not an image or PDF, and therefore not supported by the " \
            "Anthropic API"
        end
+     when LLM::Response
+       if content.file?
+         [{type: content.file_type, source: {type: :file, file_id: content.id}}]
+       else
+         prompt_error!(content)
+       end
      when String
        [{type: :text, text: content}]
      when LLM::Message
@@ -67,11 +73,15 @@
      when LLM::Function::Return
        [{type: "tool_result", tool_use_id: content.id, content: [{type: :text, text: JSON.dump(content.value)}]}]
      else
-       raise LLM::PromptError, "The given object (an instance of #{content.class}) " \
-         "is not supported by the Anthropic API"
+       prompt_error!(content)
      end
    end

+   def prompt_error!(content)
+     raise LLM::PromptError, "The given object (an instance of #{content.class}) " \
+       "is not supported by the Anthropic API"
+   end
+
    def message = @message
    def content = message.content
  end
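This formatter change is what lets an uploaded file be placed straight into a prompt: an `LLM::Response` that answers `file?` is serialized as a `file_id` source, with `file_type` selecting the content block. A short sketch that mirrors the class-level example of the new Files class:

```ruby
llm  = LLM.anthropic(key: ENV["KEY"])
bot  = LLM::Bot.new(llm)
file = llm.files.create(file: "/books/goodread.pdf") # file_type => :document
bot.chat ["Tell me about this PDF", file]
bot.messages.select(&:assistant?).each { print "[#{_1.role}] ", _1.content, "\n" }
```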
data/lib/llm/providers/anthropic/models.rb CHANGED
@@ -1,6 +1,7 @@
  # frozen_string_literal: true

  class LLM::Anthropic
+   require_relative "response/enumerable"
    ##
    # The {LLM::Anthropic::Models LLM::Anthropic::Models} class provides a model
    # object for interacting with [Anthropic's models API](https://platform.anthropic.com/docs/api-reference/models/list).
@@ -41,7 +42,7 @@ class LLM::Anthropic
        query = URI.encode_www_form(params)
        req = Net::HTTP::Get.new("/v1/models?#{query}", headers)
        res = execute(request: req)
-       LLM::Response.new(res)
+       LLM::Response.new(res).extend(LLM::Anthropic::Response::Enumerable)
      end

      private
data/lib/llm/providers/anthropic/response/enumerable.rb ADDED
@@ -0,0 +1,11 @@
+ # frozen_string_literal: true
+
+ module LLM::Anthropic::Response
+   module Enumerable
+     include ::Enumerable
+     def each(&)
+       return enum_for(:each) unless block_given?
+       data.each { yield(_1) }
+     end
+   end
+ end
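Because the module includes `::Enumerable` and delegates `each` to the response's `data` array, list responses gain the whole iterator protocol, not just `each`. For example:

```ruby
llm = LLM.anthropic(key: ENV["KEY"])
llm.models.all.each { print "id: ", _1.id, "\n" }
ids  = llm.files.all.map(&:id) # any Enumerable method works
enum = llm.models.all.each     # blockless each returns an Enumerator
```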
data/lib/llm/providers/anthropic/response/file.rb ADDED
@@ -0,0 +1,23 @@
+ # frozen_string_literal: true
+
+ module LLM::Anthropic::Response
+   module File
+     ##
+     # Always return true
+     # @return [Boolean]
+     def file? = true
+
+     ##
+     # Returns the file type referenced by a prompt
+     # @return [Symbol]
+     def file_type
+       if mime_type.start_with?("image/")
+         :image
+       elsif mime_type == "text/plain" || mime_type == "application/pdf"
+         :document
+       else
+         :container_upload
+       end
+     end
+   end
+ end
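`file_type` buckets an upload by MIME type so the formatter can emit the matching content block. A sketch, assuming the response body exposes `mime_type` as this module requires:

```ruby
res = llm.files.create(file: "/pictures/cat.png")
res.file?     # => true
res.file_type # => :image (the mime type is image/*)
# text/plain and application/pdf map to :document;
# every other mime type maps to :container_upload
```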
data/lib/llm/providers/anthropic.rb CHANGED
@@ -18,6 +18,7 @@ module LLM
      require_relative "anthropic/format"
      require_relative "anthropic/error_handler"
      require_relative "anthropic/stream_parser"
+     require_relative "anthropic/files"
      require_relative "anthropic/models"
      include Format

@@ -60,6 +61,14 @@ module LLM
        LLM::Anthropic::Models.new(self)
      end

+     ##
+     # Provides an interface to Anthropic's files API
+     # @see https://docs.anthropic.com/en/docs/build-with-claude/files Anthropic docs
+     # @return [LLM::Anthropic::Files]
+     def files
+       LLM::Anthropic::Files.new(self)
+     end
+
      ##
      # @return (see LLM::Provider#assistant_role)
      def assistant_role
@@ -80,7 +89,8 @@ module LLM
        (@headers || {}).merge(
          "Content-Type" => "application/json",
          "x-api-key" => @key,
-         "anthropic-version" => "2023-06-01"
+         "anthropic-version" => "2023-06-01",
+         "anthropic-beta" => "files-api-2025-04-14"
        )
      end

data/lib/llm/providers/gemini/files.rb CHANGED
@@ -24,6 +24,7 @@ class LLM::Gemini
    #   bot.messages.select(&:assistant?).each { print "[#{_1.role}]", _1.content, "\n" }
    class Files
      require_relative "response/file"
+     require_relative "response/files"

      ##
      # Returns a new Files object
@@ -49,7 +50,7 @@ class LLM::Gemini
        query = URI.encode_www_form(params.merge!(key: key))
        req = Net::HTTP::Get.new("/v1beta/files?#{query}", headers)
        res = execute(request: req)
-       LLM::Response.new(res)
+       LLM::Response.new(res).extend(LLM::Gemini::Response::Files)
      end

      ##
data/lib/llm/providers/gemini/models.rb CHANGED
@@ -17,6 +17,7 @@ class LLM::Gemini
    #     print "id: ", model.id, "\n"
    #   end
    class Models
+     require_relative "response/models"
      include LLM::Utils

      ##
@@ -43,7 +44,7 @@ class LLM::Gemini
        query = URI.encode_www_form(params.merge!(key: key))
        req = Net::HTTP::Get.new("/v1beta/models?#{query}", headers)
        res = execute(request: req)
-       LLM::Response.new(res)
+       LLM::Response.new(res).extend(LLM::Gemini::Response::Models)
      end

      private
data/lib/llm/providers/gemini/response/files.rb ADDED
@@ -0,0 +1,15 @@
+ # frozen_string_literal: true
+
+ module LLM::Gemini::Response
+   module Files
+     include ::Enumerable
+     def each(&)
+       return enum_for(:each) unless block_given?
+       files.each { yield(_1) }
+     end
+
+     def files
+       body.files || []
+     end
+   end
+ end
data/lib/llm/providers/gemini/response/models.rb ADDED
@@ -0,0 +1,15 @@
+ # frozen_string_literal: true
+
+ module LLM::Gemini::Response
+   module Models
+     include ::Enumerable
+     def each(&)
+       return enum_for(:each) unless block_given?
+       models.each { yield(_1) }
+     end
+
+     def models
+       body.models || []
+     end
+   end
+ end
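Both Gemini list endpoints now return enumerable responses, with `|| []` guarding against empty bodies. A short sketch (the `name` attribute follows Gemini's API, where files and models are identified by resource names):

```ruby
llm = LLM.gemini(key: ENV["KEY"])
llm.files.all.each  { print "name: ", _1.name, "\n" } # e.g. files/abc123
llm.models.all.each { print "name: ", _1.name, "\n" } # e.g. models/gemini-2.0-flash
```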
data/lib/llm/providers/openai/files.rb CHANGED
@@ -18,6 +18,7 @@ class LLM::OpenAI
    #   bot.chat ["Tell me about this PDF", file]
    #   bot.messages.select(&:assistant?).each { print "[#{_1.role}]", _1.content, "\n" }
    class Files
+     require_relative "response/enumerable"
      require_relative "response/file"

      ##
@@ -44,7 +45,7 @@ class LLM::OpenAI
        query = URI.encode_www_form(params)
        req = Net::HTTP::Get.new("/v1/files?#{query}", headers)
        res = execute(request: req)
-       LLM::Response.new(res)
+       LLM::Response.new(res).extend(LLM::OpenAI::Response::Enumerable)
      end

      ##
data/lib/llm/providers/openai/models.rb CHANGED
@@ -17,6 +17,8 @@ class LLM::OpenAI
    #     print "id: ", model.id, "\n"
    #   end
    class Models
+     require_relative "response/enumerable"
+
      ##
      # Returns a new Models object
      # @param provider [LLM::Provider]
@@ -41,7 +43,7 @@ class LLM::OpenAI
        query = URI.encode_www_form(params)
        req = Net::HTTP::Get.new("/v1/models?#{query}", headers)
        res = execute(request: req)
-       LLM::Response.new(res)
+       LLM::Response.new(res).extend(LLM::OpenAI::Response::Enumerable)
      end

      private
data/lib/llm/providers/openai/response/enumerable.rb ADDED
@@ -0,0 +1,11 @@
+ # frozen_string_literal: true
+
+ module LLM::OpenAI::Response
+   module Enumerable
+     include ::Enumerable
+     def each(&)
+       return enum_for(:each) unless block_given?
+       data.each { yield(_1) }
+     end
+   end
+ end
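The OpenAI counterpart rides on the `data` array that OpenAI's list responses share, so the one mixin covers files, models, vector stores, and search results alike:

```ruby
llm = LLM.openai(key: ENV["KEY"])
llm.models.all.each { print "id: ", _1.id, "\n" }
# any Enumerable method works; filename is an attribute of OpenAI's files API
pdfs = llm.files.all.select { _1.filename.to_s.end_with?(".pdf") }
```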
data/lib/llm/providers/openai/vector_stores.rb CHANGED
@@ -5,6 +5,8 @@ class LLM::OpenAI
    # The {LLM::OpenAI::VectorStores LLM::OpenAI::VectorStores} class provides
    # an interface for [OpenAI's vector stores API](https://platform.openai.com/docs/api-reference/vector_stores/create)
    class VectorStores
+     require_relative "response/enumerable"
+
      ##
      # @param [LLM::Provider] provider
      #  An OpenAI provider
@@ -20,7 +22,7 @@ class LLM::OpenAI
        query = URI.encode_www_form(params)
        req = Net::HTTP::Get.new("/v1/vector_stores?#{query}", headers)
        res = execute(request: req)
-       LLM::Response.new(res)
+       LLM::Response.new(res).extend(LLM::OpenAI::Response::Enumerable)
      end

      ##
@@ -93,7 +95,7 @@ class LLM::OpenAI
        req = Net::HTTP::Post.new("/v1/vector_stores/#{vector_id}/search", headers)
        req.body = JSON.dump(params.merge({query:}).compact)
        res = execute(request: req)
-       LLM::Response.new(res)
+       LLM::Response.new(res).extend(LLM::OpenAI::Response::Enumerable)
      end

      ##
@@ -108,7 +110,7 @@ class LLM::OpenAI
        query = URI.encode_www_form(params)
        req = Net::HTTP::Get.new("/v1/vector_stores/#{vector_id}/files?#{query}", headers)
        res = execute(request: req)
-       LLM::Response.new(res)
+       LLM::Response.new(res).extend(LLM::OpenAI::Response::Enumerable)
      end

      ##
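Store listings, per-store file listings, and search results are all enumerable now. A hedged sketch of a search loop; it assumes `search` accepts the store ID as `vector:` (mirroring the `vector_id` seen in the method bodies) and that each hit carries the `score` attribute OpenAI returns:

```ruby
llm   = LLM.openai(key: ENV["KEY"])
store = llm.vector_stores.all.find { _1.name == "docs" }
llm.vector_stores.search(vector: store.id, query: "What is RAG?").each do |hit|
  print "score: ", hit.score, "\n"
end
```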
data/lib/llm/version.rb CHANGED
@@ -1,5 +1,5 @@
  # frozen_string_literal: true

  module LLM
-   VERSION = "0.13.0"
+   VERSION = "0.14.0"
  end
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: llm.rb
  version: !ruby/object:Gem::Version
-   version: 0.13.0
+   version: 0.14.0
  platform: ruby
  authors:
  - Antar Azri
@@ -186,10 +186,13 @@ files:
  - lib/llm/provider.rb
  - lib/llm/providers/anthropic.rb
  - lib/llm/providers/anthropic/error_handler.rb
+ - lib/llm/providers/anthropic/files.rb
  - lib/llm/providers/anthropic/format.rb
  - lib/llm/providers/anthropic/format/completion_format.rb
  - lib/llm/providers/anthropic/models.rb
  - lib/llm/providers/anthropic/response/completion.rb
+ - lib/llm/providers/anthropic/response/enumerable.rb
+ - lib/llm/providers/anthropic/response/file.rb
  - lib/llm/providers/anthropic/stream_parser.rb
  - lib/llm/providers/deepseek.rb
  - lib/llm/providers/deepseek/format.rb
@@ -205,7 +208,9 @@ files:
  - lib/llm/providers/gemini/response/completion.rb
  - lib/llm/providers/gemini/response/embedding.rb
  - lib/llm/providers/gemini/response/file.rb
+ - lib/llm/providers/gemini/response/files.rb
  - lib/llm/providers/gemini/response/image.rb
+ - lib/llm/providers/gemini/response/models.rb
  - lib/llm/providers/gemini/stream_parser.rb
  - lib/llm/providers/llamacpp.rb
  - lib/llm/providers/ollama.rb
@@ -230,6 +235,7 @@ files:
  - lib/llm/providers/openai/response/audio.rb
  - lib/llm/providers/openai/response/completion.rb
  - lib/llm/providers/openai/response/embedding.rb
+ - lib/llm/providers/openai/response/enumerable.rb
  - lib/llm/providers/openai/response/file.rb
  - lib/llm/providers/openai/response/image.rb
  - lib/llm/providers/openai/response/moderations.rb