llm.rb 0.15.0 → 0.16.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 68bbdddd157d6df71729378a46b4759140d7fe13c44a1e14940a0a3a367d277c
- data.tar.gz: f24aa6b6042b58857ca419c0039422f678e92355f5fdfd0064d2931108338866
+ metadata.gz: c46802e2152430f164990a91499be669e928201e7793b162f8f62152349078af
+ data.tar.gz: 58269b584d08e9d236a3d85ac5728f4e4453a845ace9ebbf8c1ac22890609a1f
  SHA512:
- metadata.gz: e5043707425445ea5709f4eed33b3feab33a376a66569abe46b6ad93274ffd3044eef1099625ec75f24f5ee5fb5387493adc97332c7be1d6fa2e085d6e33281c
- data.tar.gz: 6f60c60130904dd23c67d65d50a04614c32659d5fe7af5e37ec2f9d6f25ccda1712a22bf872a7ae4c4aae49c2755cb1adff593fe1435c4231f445c40761a6477
+ metadata.gz: 9003c9cac451081ce0589f54acb2d72813e53a8708426e320e30732aa5842d99187af14f1c54bab69cfcc3bcc401529375377547a090f530aa1da140fd0a4b3f
+ data.tar.gz: 691e4ebfd80fcfa7af22f1ca26393256a69ddf6957be8960590eebddc5e20a1a30db3ba8cfc1ae65dba10491df6f2844fd72d3ca2444420cb23db357a0225e25
data/README.md CHANGED
@@ -1,7 +1,7 @@
  ## About

  llm.rb is a zero-dependency Ruby toolkit for Large Language Models that
- includes OpenAI, Gemini, Anthropic, xAI (grok), DeepSeek, Ollama, and
+ includes OpenAI, Gemini, Anthropic, xAI (Grok), DeepSeek, Ollama, and
  LlamaCpp. The toolkit includes full support for chat, streaming, tool calling,
  audio, images, files, and structured outputs (JSON Schema).

@@ -9,9 +9,13 @@ audio, images, files, and structured outputs (JSON Schema).

  #### Demo

+ This cool demo writes a new [llm-shell](https://github.com/llmrb/llm-shell#readme) command
+ with the help of [llm.rb](https://github.com/llmrb/llm#readme). <br> Similar-ish to
+ GitHub Copilot but for the terminal.
+
  <details>
- <summary>Play</summary>
- <img src="share/llm-shell/examples/demo.gif/">
+ <summary>Start demo</summary>
+ <img src="https://github.com/llmrb/llm/blob/main/share/llm-shell/examples/demo.gif?raw=true" alt="llm-shell demo" />
  </details>

  #### Guides
@@ -22,18 +26,40 @@ audio, images, files, and structured outputs (JSON Schema).
  a blog post that implements an age estimation tool
  * [How to edit an image with Gemini](https://0x1eef.github.io/posts/how-to-edit-images-with-gemini/) &ndash;
  a blog post that implements image editing with Gemini
+ * [Fast sailing with persistent connections](https://0x1eef.github.io/posts/persistent-connections-with-llm.rb/) &ndash;
+ a blog post that optimizes performance with a thread-safe connection pool

  #### Ecosystem

  * [llm-shell](https://github.com/llmrb/llm-shell) &ndash; a developer-oriented console for Large Language Model communication
  * [llm-spell](https://github.com/llmrb/llm-spell) &ndash; a utility that can correct spelling mistakes with a Large Language Model

+ #### Show code
+
+ A simple chatbot that maintains a conversation and streams
+ responses in real-time:
+
+ ```ruby
+ #!/usr/bin/env ruby
+ require "llm"
+
+ llm = LLM.openai(key: ENV["KEY"])
+ bot = LLM::Bot.new(llm, stream: $stdout)
+ loop do
+ print "> "
+ input = $stdin.gets&.chomp || break
+ bot.chat(input).flush
+ print "\n"
+ end
+ ```
+
  ## Features

  #### General
  - ✅ A single unified interface for multiple providers
  - 📦 Zero dependencies outside Ruby's standard library
  - 🚀 Smart API design that minimizes the number of requests made
+ - ♻️ Optional: per-provider, process-wide connection pool via net-http-persistent

  #### Chat, Agents
  - 🧠 Stateless and stateful chat via completions and responses API
@@ -110,13 +136,32 @@ llm = LLM.ollama(key: nil)
  llm = LLM.llamacpp(key: nil)
  ```

+ #### Persistence
+
+ The llm.rb library can maintain a process-wide connection pool
+ for each provider that is instantiated. This feature can improve
+ performance, but it is optional: the implementation depends on
+ [net-http-persistent](https://github.com/drbrain/net-http-persistent),
+ and the gem must be installed separately:
+
+ ```ruby
+ #!/usr/bin/env ruby
+ require "llm"
+
+ llm = LLM.openai(key: ENV["KEY"], persistent: true)
+ res1 = llm.responses.create "message 1"
+ res2 = llm.responses.create "message 2", previous_response_id: res1.response_id
+ res3 = llm.responses.create "message 3", previous_response_id: res2.response_id
+ print res3.output_text, "\n"
+ ```
+
  ### Conversations

  #### Completions

  > This example uses the stateless chat completions API that all
  > providers support. A similar example for OpenAI's stateful
- > responses API is available in the [docs/](docs/OPENAI.md#responses)
+ > responses API is available in the [docs/](https://0x1eef.github.io/x/llm.rb/file.OPENAI.html#responses)
  > directory.

  The following example creates an instance of
@@ -149,7 +194,8 @@ bot.messages.each { print "[#{_1.role}] ", _1.content, "\n" }
  > There Is More Than One Way To Do It (TIMTOWTDI) when you are
  > using llm.rb &ndash; and this is especially true when it
  > comes to streaming. See the streaming documentation in
- > [docs/](docs/STREAMING.md#scopes) for more details.
+ > [docs/](https://0x1eef.github.io/x/llm.rb/file.STREAMING.html#scopes)
+ > for more details.

  The following example streams the messages in a conversation
  as they are generated in real-time. The `stream` option can
@@ -170,7 +216,7 @@ bot.chat(stream: $stdout) do |prompt|
  prompt.user ["Tell me about this URL", URI(url)]
  prompt.user ["Tell me about this PDF", File.open("handbook.pdf", "rb")]
  prompt.user "Are the URL and PDF similar to each other?"
- end.to_a
+ end.flush
  ```

  ### Schema
@@ -263,6 +309,43 @@ bot.chat bot.functions.map(&:call) # report return value to the LLM
  # {stderr: "", stdout: "FreeBSD"}
  ```

+ #### Provider
+
+ The
+ [LLM::Function](https://0x1eef.github.io/x/llm.rb/LLM/Function.html)
+ class defines a local function that a provider can call on your behalf,
+ while the
+ [LLM::Tool](https://0x1eef.github.io/x/llm.rb/LLM/Tool.html)
+ class represents a tool that is defined and implemented by a provider, and we can
+ ask the provider to run that tool on our behalf. That is the primary difference
+ between a function implemented locally and a tool implemented by a provider. The
+ available tools depend on the provider, and the following example uses the
+ OpenAI provider to execute Python code on OpenAI's servers:
+
+ ```ruby
+ #!/usr/bin/env ruby
+ require "llm"
+
+ llm = LLM.openai(key: ENV["KEY"])
+ res = llm.responses.create "Run: 'print(\"hello world\")'", tools: [llm.tool(:code_interpreter)]
+ print res.output_text, "\n"
+ ```
+
+ #### Web Search
+
+ A common tool among all providers is the ability to perform a web search, and
+ the following example uses the OpenAI provider to search the web using the
+ Web Search tool. The same can be done with the Anthropic and Gemini providers:
+
+ ```ruby
+ #!/usr/bin/env ruby
+ require "llm"
+
+ llm = LLM.openai(key: ENV["KEY"])
+ res = llm.web_search(query: "summarize today's news")
+ print res.output_text, "\n"
+ ```
+
  ### Files

  #### Create
@@ -504,6 +587,23 @@ bot.chat "Hello #{model.id} :)"
  bot.messages.select(&:assistant?).each { print "[#{_1.role}] ", _1.content, "\n" }
  ```

+ ## Reviews
+
+ I supplied both Gemini and DeepSeek with the contents of [lib/](https://github.com/llmrb/llm/tree/main/lib)
+ and [README.md](https://github.com/llmrb/llm#readme) via [llm-shell](https://github.com/llmrb/llm-shell#readme).
+ Their feedback was way more positive than I could have imagined 😅. These are genuine responses though, with no
+ special prompting or engineering. I just provided them with the source code and asked for their opinion.
+
+ <details>
+ <summary>Review by Gemini</summary>
+ <img src="https://github.com/llmrb/llm/blob/main/share/llm-shell/examples/gemini.png?raw=true" alt="Gemini review" />
+ </details>
+
+ <details>
+ <summary>Review by DeepSeek</summary>
+ <img src="https://github.com/llmrb/llm/blob/main/share/llm-shell/examples/deepseek.png?raw=true" alt="DeepSeek review" />
+ </details>
+
  ## Documentation

  ### API
data/lib/llm/bot.rb CHANGED
@@ -123,5 +123,17 @@ module LLM
  .flat_map(&:functions)
  .select(&:pending?)
  end
+
+ ##
+ # @example
+ # llm = LLM.openai(key: ENV["KEY"])
+ # bot = LLM::Bot.new(llm, stream: $stdout)
+ # bot.chat("Hello", role: :user).flush
+ # Drains the buffer and returns all messages as an array
+ # @return [Array<LLM::Message>]
+ def drain
+ messages.drain
+ end
+ alias_method :flush, :drain
  end
  end
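For context, `LLM::Bot#drain` (and its `flush` alias) simply forwards to the buffer's `drain`. A minimal sketch of the streaming flow it is meant for, assuming an OpenAI key in `ENV["KEY"]`:

```ruby
#!/usr/bin/env ruby
require "llm"

llm = LLM.openai(key: ENV["KEY"])
bot = LLM::Bot.new(llm, stream: $stdout)
# Tokens stream to $stdout as they arrive; flush (alias of drain) then
# drains the buffer and returns the exchanged messages as an array.
messages = bot.chat("Hello there", role: :user).flush
messages.each { |m| print "[", m.role, "] ", m.content, "\n" }
```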
data/lib/llm/buffer.rb CHANGED
@@ -92,7 +92,8 @@ module LLM
  # llm = LLM.openai(key: ENV["KEY"])
  # bot = LLM::Bot.new(llm, stream: $stdout)
  # bot.chat "Hello", role: :user
- # bot.messages.drain
+ # bot.messages.flush
+ # @see LLM::Bot#drain
  # @note
  # This method is especially useful when using the streaming API.
  # Drains the buffer and returns all messages as an array
@@ -100,6 +101,7 @@ module LLM
  def drain
  to_a
  end
+ alias_method :flush, :drain

  private

@@ -138,7 +140,7 @@ module LLM
  @response ? {previous_response_id: @response.response_id} : {}
  ].inject({}, &:merge!)
  @response = @provider.responses.create(message.content, params.merge(role:))
- @completed.concat([*pendings, message, *@response.outputs[0]])
+ @completed.concat([*pendings, message, *@response.choices[0]])
  @pending.clear
  end
  end
data/lib/llm/client.rb ADDED
@@ -0,0 +1,37 @@
+ # frozen_string_literal: true
+
+ module LLM
+ ##
+ # @api private
+ module Client
+ private
+
+ ##
+ # @api private
+ def persistent_client
+ mutex.synchronize do
+ if clients[client_id]
+ clients[client_id]
+ else
+ require "net/http/persistent" unless defined?(Net::HTTP::Persistent)
+ client = Net::HTTP::Persistent.new(name: self.class.name)
+ client.read_timeout = timeout
+ clients[client_id] = client
+ end
+ end
+ end
+
+ ##
+ # @api private
+ def transient_client
+ client = Net::HTTP.new(host, port)
+ client.read_timeout = timeout
+ client.use_ssl = ssl
+ client
+ end
+
+ def client_id = "#{host}:#{port}:#{timeout}:#{ssl}"
+ def clients = self.class.clients
+ def mutex = self.class.mutex
+ end
+ end
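A short sketch of how the pool behaves, assuming the net-http-persistent gem is installed: the pool key is host:port:timeout:ssl, so providers that share an endpoint share one Net::HTTP::Persistent client.

```ruby
#!/usr/bin/env ruby
require "llm"

# Both instances resolve to the same pool entry because host, port,
# timeout and ssl are identical; the second reuses the open connection.
a = LLM.openai(key: ENV["KEY"], persistent: true)
b = LLM.openai(key: ENV["KEY"], persistent: true)
[a, b].each { |llm| print llm.responses.create("ping").output_text, "\n" }
```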
data/lib/llm/function.rb CHANGED
@@ -30,7 +30,7 @@
  # fn.register(System)
  # end
  class LLM::Function
- class Return < Struct.new(:id, :value)
+ class Return < Struct.new(:id, :name, :value)
  end

  ##
@@ -92,7 +92,7 @@ class LLM::Function
  # @return [LLM::Function::Return] The result of the function call
  def call
  runner = ((Class === @runner) ? @runner.new : @runner)
- Return.new(id, runner.call(**arguments))
+ Return.new(id, name, runner.call(**arguments))
  ensure
  @called = true
  end
@@ -106,7 +106,7 @@ class LLM::Function
  # bot.chat bot.functions.map(&:cancel)
  # @return [LLM::Function::Return]
  def cancel(reason: "function call cancelled")
- Return.new(id, {cancelled: true, reason:})
+ Return.new(id, name, {cancelled: true, reason:})
  ensure
  @cancelled = true
  end
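Since `LLM::Function::Return` is a plain Struct, the change is easy to see in isolation (the values below are invented for illustration):

```ruby
require "llm"

# The return value of a function call now records which function produced it;
# the Gemini formatter uses the name to build a functionResponse part.
ret = LLM::Function::Return.new("call_123", "system", {stdout: "FreeBSD"})
print ret.id, " ", ret.name, "\n" #=> call_123 system
```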
data/lib/llm/message.rb CHANGED
@@ -127,6 +127,16 @@ module LLM
  extra[:response]
  end

+ ##
+ # @note
+ # This method might return annotations for assistant messages,
+ # and it returns an empty array for non-assistant messages
+ # Returns annotations associated with the message
+ # @return [Array<LLM::Object>]
+ def annotations
+ @annotations ||= LLM::Object.from_hash(extra["annotations"] || [])
+ end
+
  ##
  # @note
  # This method returns token usage for assistant messages,
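A sketch of where these annotations come from in practice, assuming an OpenAI key: the responses API attaches URL citations to output_text parts, and the Responds changes further down copy them onto the assistant message. Field names follow OpenAI's url_citation annotations.

```ruby
#!/usr/bin/env ruby
require "llm"

llm = LLM.openai(key: ENV["KEY"])
res = llm.responses.create("What happened in tech today?", store: false, tools: [llm.tool(:web_search)])
msg = res.choices[0]
# Empty array for non-assistant messages; URL citations otherwise.
msg.annotations.each { |a| print a.title, ": ", a.url, "\n" }
```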
data/lib/llm/object.rb CHANGED
@@ -69,6 +69,12 @@ class LLM::Object < BasicObject
  to_h.dig(...)
  end

+ ##
+ # @return [Hash]
+ def slice(...)
+ to_h.slice(...)
+ end
+
  private

  def method_missing(m, *args, &b)
data/lib/llm/provider.rb CHANGED
@@ -7,6 +7,19 @@
  # @abstract
  class LLM::Provider
  require "net/http"
+ require_relative "client"
+ include LLM::Client
+
+ @@clients = {}
+ @@mutex = Mutex.new
+
+ ##
+ # @api private
+ def self.clients = @@clients
+
+ ##
+ # @api private
+ def self.mutex = @@mutex

  ##
  # @param [String, nil] key
@@ -19,11 +32,17 @@ class LLM::Provider
  # The number of seconds to wait for a response
  # @param [Boolean] ssl
  # Whether to use SSL for the connection
- def initialize(key:, host:, port: 443, timeout: 60, ssl: true)
+ # @param [Boolean] persistent
+ # Whether to use a persistent connection.
+ # Requires the net-http-persistent gem.
+ def initialize(key:, host:, port: 443, timeout: 60, ssl: true, persistent: false)
  @key = key
- @client = Net::HTTP.new(host, port)
- @client.use_ssl = ssl
- @client.read_timeout = timeout
+ @host = host
+ @port = port
+ @timeout = timeout
+ @ssl = ssl
+ @client = persistent ? persistent_client : transient_client
+ @base_uri = URI("#{ssl ? "https" : "http"}://#{host}:#{port}/")
  end

  ##
@@ -217,9 +236,46 @@ class LLM::Provider
  tap { (@headers ||= {}).merge!(headers) }
  end

+ ##
+ # @note
+ # This method might be outdated, and the {LLM::Provider#tool LLM::Provider#tool}
+ # method can be used if a tool is not found here.
+ # Returns all known tools provided by a provider.
+ # @return [String => LLM::Tool]
+ def tools
+ {}
+ end
+
+ ##
+ # @note
+ # OpenAI, Anthropic, and Gemini provide platform-tools for things
+ # like web search, and more.
+ # Returns a tool provided by a provider.
+ # @example
+ # llm = LLM.openai(key: ENV["KEY"])
+ # tools = [llm.tool(:web_search)]
+ # res = llm.responses.create("Summarize today's news", tools:)
+ # print res.output_text, "\n"
+ # @param [String, Symbol] name The name of the tool
+ # @param [Hash] options Configuration options for the tool
+ # @return [LLM::Tool]
+ def tool(name, options = {})
+ LLM::Tool.new(name, options, self)
+ end
+
+ ##
+ # Provides a web search capability
+ # @param [String] query The search query
+ # @raise [NotImplementedError]
+ # When the method is not implemented by a subclass
+ # @return [LLM::Response]
+ def web_search(query:)
+ raise NotImplementedError
+ end
+
  private

- attr_reader :client
+ attr_reader :client, :base_uri, :host, :port, :timeout, :ssl

  ##
  # The headers to include with a request
@@ -269,8 +325,9 @@ class LLM::Provider
  # When there is a network error at the operating system level
  # @return [Net::HTTPResponse]
  def execute(request:, stream: nil, stream_parser: self.stream_parser, &b)
+ args = (Net::HTTP === client) ? [request] : [URI.join(base_uri, request.path), request]
  res = if stream
- client.request(request) do |res|
+ client.request(*args) do |res|
  handler = event_handler.new stream_parser.new(stream)
  parser = LLM::EventStream::Parser.new
  parser.register(handler)
@@ -284,8 +341,8 @@ class LLM::Provider
  parser&.free
  end
  else
- b ? client.request(request) { (Net::HTTPSuccess === _1) ? b.call(_1) : _1 } :
- client.request(request)
+ b ? client.request(*args) { (Net::HTTPSuccess === _1) ? b.call(_1) : _1 } :
+ client.request(*args)
  end
  handle_response(res)
  end
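For context on the `*args` splat in `execute`: `Net::HTTP#request` takes the request object alone, while `Net::HTTP::Persistent#request` expects an absolute URI first. A minimal sketch of the two call shapes outside llm.rb (endpoint chosen for illustration):

```ruby
require "net/http"
require "net/http/persistent" # optional dependency, installed separately

req = Net::HTTP::Get.new("/v1/models")

# Transient client: the request carries only a path.
Net::HTTP.start("api.openai.com", 443, use_ssl: true) { |http| http.request(req) }

# Persistent client: the same request, addressed with an absolute URI.
persistent = Net::HTTP::Persistent.new(name: "LLM::OpenAI")
persistent.request(URI("https://api.openai.com/v1/models"), req)
```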
@@ -24,7 +24,7 @@ class LLM::Anthropic
  def format_tools(params)
  return {} unless params and params[:tools]&.any?
  tools = params[:tools]
- {tools: tools.map { _1.format(self) }}
+ {tools: tools.map { _1.respond_to?(:format) ? _1.format(self) : _1 }}
  end
  end
  end
@@ -0,0 +1,21 @@
+ # frozen_string_literal: true
+
+ module LLM::Anthropic::Response
+ ##
+ # The {LLM::Anthropic::Response::WebSearch LLM::Anthropic::Response::WebSearch}
+ # module provides methods for accessing web search results from a web search
+ # tool call made via the {LLM::Provider#web_search LLM::Provider#web_search}
+ # method.
+ module WebSearch
+ ##
+ # Returns one or more search results
+ # @return [Array<LLM::Object>]
+ def search_results
+ LLM::Object.from_hash(
+ content
+ .select { _1["type"] == "web_search_tool_result" }
+ .flat_map { |n| n.content.map { _1.slice(:title, :url) } }
+ )
+ end
+ end
+ end
@@ -15,6 +15,7 @@ module LLM
  # bot.messages.select(&:assistant?).each { print "[#{_1.role}]", _1.content, "\n" }
  class Anthropic < Provider
  require_relative "anthropic/response/completion"
+ require_relative "anthropic/response/web_search"
  require_relative "anthropic/format"
  require_relative "anthropic/error_handler"
  require_relative "anthropic/stream_parser"
@@ -83,6 +84,35 @@ module LLM
  "claude-sonnet-4-20250514"
  end

+ ##
+ # @note
+ # This method includes certain tools that require configuration
+ # through a set of options that are easier to set through the
+ # {LLM::Provider#tool LLM::Provider#tool} method.
+ # @see https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/web-search-tool Anthropic docs
+ # @return (see LLM::Provider#tools)
+ def tools
+ {
+ bash: tool(:bash, type: "bash_20250124"),
+ web_search: tool(:web_search, type: "web_search_20250305", max_uses: 5),
+ text_editor: tool(:str_replace_based_edit_tool, type: "text_editor_20250728", max_characters: 10_000)
+ }
+ end
+
+ ##
+ # A convenience method for performing a web search using the
+ # Anthropic web search tool.
+ # @example
+ # llm = LLM.anthropic(key: ENV["KEY"])
+ # res = llm.web_search(query: "summarize today's news")
+ # res.search_results.each { |item| print item.title, ": ", item.url, "\n" }
+ # @param query [String] The search query.
+ # @return [LLM::Response] The response from the LLM provider.
+ def web_search(query:)
+ complete(query, tools: [tools[:web_search]])
+ .extend(LLM::Anthropic::Response::WebSearch)
+ end
+
  private

  def headers
@@ -43,7 +43,7 @@ module LLM::Gemini::Format
  when LLM::Message
  format_content(content.content)
  when LLM::Function::Return
- [{text: JSON.dump(content.value)}]
+ [{functionResponse: {name: content.name, response: content.value}}]
  else
  prompt_error!(content)
  end
@@ -32,8 +32,9 @@ class LLM::Gemini
  # @return [Hash]
  def format_tools(params)
  return {} unless params and params[:tools]&.any?
- functions = params.delete(:tools).grep(LLM::Function)
- {tools: {functionDeclarations: functions.map { _1.format(self) }}}
+ tools = params.delete(:tools)
+ platform, functions = [tools.grep(LLM::Tool), tools.grep(LLM::Function)]
+ {tools: [*platform, {functionDeclarations: functions.map { _1.format(self) }}]}
  end
  end
  end
@@ -44,7 +44,7 @@ class LLM::Gemini
  def create(prompt:, model: "gemini-2.0-flash-exp-image-generation", **params)
  req = Net::HTTP::Post.new("/v1beta/models/#{model}:generateContent?key=#{key}", headers)
  body = JSON.dump({
- contents: [{parts: [{text: system_prompt}, {text: prompt}]}],
+ contents: [{parts: [{text: create_prompt}, {text: prompt}]}],
  generationConfig: {responseModalities: ["TEXT", "IMAGE"]}
  }.merge!(params))
  req.body = body
@@ -69,7 +69,7 @@ class LLM::Gemini
  req = Net::HTTP::Post.new("/v1beta/models/#{model}:generateContent?key=#{key}", headers)
  image = LLM.File(image)
  body = JSON.dump({
- contents: [{parts: [{text: prompt}, format.format_content(image)]}],
+ contents: [{parts: [{text: edit_prompt}, {text: prompt}, format.format_content(image)]}],
  generationConfig: {responseModalities: ["TEXT", "IMAGE"]}
  }.merge!(params)).b
  set_body_stream(req, StringIO.new(body))
@@ -94,12 +94,28 @@ class LLM::Gemini
  @provider.instance_variable_get(:@key)
  end

- def system_prompt
+ def create_prompt
  <<~PROMPT
- Your task is to generate one or more image(s) from
- text I will provide to you. Your response *MUST* include
- at least one image, and your response *MUST NOT* include
- any text or other content.
+ ## Context
+ Your task is to generate one or more image(s) based on the user's instructions.
+ The user will provide you with text only.
+
+ ## Instructions
+ 1. The model *MUST* generate image(s) based on the user text alone.
+ 2. The model *MUST NOT* generate anything else.
+ PROMPT
+ end
+
+ def edit_prompt
+ <<~PROMPT
+ ## Context
+ Your task is to edit the provided image based on the user's instructions.
+ The user will provide you with both text and an image.
+
+ ## Instructions
+ 1. The model *MUST* edit the provided image based on the user's instructions
+ 2. The model *MUST NOT* generate a new image.
+ 3. The model *MUST NOT* generate anything else.
  PROMPT
  end

@@ -13,8 +13,9 @@ module LLM::Gemini::Response
  def format_choices
  candidates.map.with_index do |choice, index|
  choice = LLM::Object.from_hash(choice)
- content = choice.content
- role, parts = content.role, content.parts
+ content = choice.content || LLM::Object.new
+ role = content.role || "model"
+ parts = content.parts || [{"text" => choice.finishReason}]
  text = parts.filter_map { _1["text"] }.join
  tools = parts.filter_map { _1["functionCall"] }
  extra = {index:, response: self, tool_calls: format_tool_calls(tools), original_tool_calls: tools}
@@ -0,0 +1,22 @@
+ # frozen_string_literal: true
+
+ module LLM::Gemini::Response
+ ##
+ # The {LLM::Gemini::Response::WebSearch LLM::Gemini::Response::WebSearch}
+ # module provides methods for accessing web search results from a web search
+ # tool call made via the {LLM::Provider#web_search LLM::Provider#web_search}
+ # method.
+ module WebSearch
+ ##
+ # Returns one or more search results
+ # @return [Array<LLM::Object>]
+ def search_results
+ LLM::Object.from_hash(
+ candidates[0]
+ .groundingMetadata
+ .groundingChunks
+ .map { {"url" => _1.web.uri, "title" => _1.web.title} }
+ )
+ end
+ end
+ end
@@ -13,7 +13,7 @@ class LLM::Gemini
  # @param [#<<] io An IO-like object
  # @return [LLM::Gemini::StreamParser]
  def initialize(io)
- @body = LLM::Object.new
+ @body = LLM::Object.from_hash({candidates: []})
  @io = io
  end

@@ -21,47 +21,64 @@ class LLM::Gemini
  # @param [Hash] chunk
  # @return [LLM::Gemini::StreamParser]
  def parse!(chunk)
- tap { merge!(chunk) }
+ tap { merge_chunk!(LLM::Object.from_hash(chunk)) }
  end

  private

- def merge!(chunk)
+ def merge_chunk!(chunk)
  chunk.each do |key, value|
- if key == "candidates"
- @body.candidates ||= []
+ if key.to_s == "candidates"
  merge_candidates!(value)
+ elsif key.to_s == "usageMetadata" &&
+ @body.usageMetadata.is_a?(LLM::Object) &&
+ value.is_a?(LLM::Object)
+ @body.usageMetadata = LLM::Object.from_hash(@body.usageMetadata.to_h.merge(value.to_h))
  else
  @body[key] = value
  end
  end
  end

- def merge_candidates!(candidates)
- candidates.each.with_index do |candidate, i|
- if @body.candidates[i].nil?
- merge_one(@body.candidates, candidate, i)
- else
- merge_two(@body.candidates, candidate, i)
+ def merge_candidates!(new_candidates_list)
+ new_candidates_list.each do |new_candidate_delta|
+ index = new_candidate_delta.index
+ @body.candidates[index] ||= LLM::Object.from_hash({content: {parts: []}})
+ existing_candidate = @body.candidates[index]
+ new_candidate_delta.each do |key, value|
+ if key.to_s == "content"
+ merge_candidate_content!(existing_candidate.content, value) if value
+ else
+ existing_candidate[key] = value # Overwrite other fields
+ end
  end
  end
  end

- def merge_one(candidates, candidate, i)
- candidate
- .dig("content", "parts")
- &.filter_map { _1["text"] }
- &.each { @io << _1 if @io.respond_to?(:<<) }
- candidates[i] = candidate
+ def merge_candidate_content!(existing_content, new_content_delta)
+ new_content_delta.each do |key, value|
+ if key.to_s == "parts"
+ existing_content.parts ||= []
+ merge_content_parts!(existing_content.parts, value) if value
+ else
+ existing_content[key] = value
+ end
+ end
  end

- def merge_two(candidates, candidate, i)
- parts = candidates[i].dig("content", "parts")
- parts&.each&.with_index do |part, j|
- if part["text"]
- target = candidate["content"]["parts"][j]
- part["text"] << target["text"]
- @io << target["text"] if @io.respond_to?(:<<)
+ def merge_content_parts!(existing_parts, new_parts_delta)
+ new_parts_delta.each do |new_part_delta|
+ if new_part_delta.text
+ last_existing_part = existing_parts.last
+ if last_existing_part&.text
+ last_existing_part.text << new_part_delta.text
+ @io << new_part_delta.text if @io.respond_to?(:<<)
+ else
+ existing_parts << new_part_delta
+ @io << new_part_delta.text if @io.respond_to?(:<<)
+ end
+ elsif new_part_delta.functionCall
+ existing_parts << new_part_delta
  end
  end
  end
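To make the merge strategy concrete, here is a hypothetical pair of streamed chunks and the shape the parser accumulates (field values invented for illustration):

```ruby
# First chunk opens candidate 0 with "Hel"; the second chunk appends "lo".
chunk1 = {"candidates" => [{"index" => 0, "content" => {"role" => "model", "parts" => [{"text" => "Hel"}]}}]}
chunk2 = {"candidates" => [{"index" => 0, "content" => {"parts" => [{"text" => "lo"}]}}]}
# parse!(chunk1) then parse!(chunk2) leaves one candidate whose single text
# part reads "Hello", and each delta is echoed to the IO object as it arrives.
```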
@@ -20,6 +20,7 @@ module LLM
  class Gemini < Provider
  require_relative "gemini/response/embedding"
  require_relative "gemini/response/completion"
+ require_relative "gemini/response/web_search"
  require_relative "gemini/error_handler"
  require_relative "gemini/format"
  require_relative "gemini/stream_parser"
@@ -125,6 +126,31 @@ module LLM
  "gemini-2.5-flash"
  end

+ ##
+ # @note
+ # This method includes certain tools that require configuration
+ # through a set of options that are easier to set through the
+ # {LLM::Provider#tool LLM::Provider#tool} method.
+ # @see https://ai.google.dev/gemini-api/docs/google-search Gemini docs
+ # @return (see LLM::Provider#tools)
+ def tools
+ {
+ google_search: tool(:google_search),
+ code_execution: tool(:code_execution),
+ url_context: tool(:url_context)
+ }
+ end
+
+ ##
+ # A convenience method for performing a web search using the
+ # Google Search tool.
+ # @param query [String] The search query.
+ # @return [LLM::Response] The response from the LLM provider.
+ def web_search(query:)
+ complete(query, tools: [tools[:google_search]])
+ .extend(LLM::Gemini::Response::WebSearch)
+ end
+
  private

  def headers
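Unlike the Anthropic and OpenAI variants, this method ships without an inline @example; usage follows the same shape (a sketch, assuming a Gemini key in ENV):

```ruby
#!/usr/bin/env ruby
require "llm"

llm = LLM.gemini(key: ENV["KEY"])
res = llm.web_search(query: "summarize today's news")
res.search_results.each { |item| print item.title, ": ", item.url, "\n" }
```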
@@ -45,7 +45,11 @@ class LLM::OpenAI
  # @return [Hash]
  def format_tools(params)
  tools = params.delete(:tools)
- (tools.nil? || tools.empty?) ? {} : {tools: tools.map { _1.format(self) }}
+ if tools.nil? || tools.empty?
+ {}
+ else
+ {tools: tools.map { _1.respond_to?(:format) ? _1.format(self) : _1 }}
+ end
  end
  end
  end
@@ -2,22 +2,35 @@

  module LLM::OpenAI::Response
  module Responds
+ def model = body.model
  def response_id = respond_to?(:response) ? response["id"] : id
- def outputs = [format_message]
- def choices = body.output
- def tools = output.select { _1.type == "function_call" }
+ def choices = [format_message]
+ def annotations = choices[0].annotations
+
+ def prompt_tokens = body.usage&.input_tokens
+ def completion_tokens = body.usage&.output_tokens
+ def total_tokens = body.usage&.total_tokens
+
+ ##
+ # Returns the aggregated text content from the response outputs.
+ # @return [String]
+ def output_text
+ choices.find(&:assistant?).content || ""
+ end

  private

  def format_message
  message = LLM::Message.new("assistant", +"", {response: self, tool_calls: []})
- choices.each.with_index do |choice, index|
+ output.each.with_index do |choice, index|
  if choice.type == "function_call"
  message.extra[:tool_calls] << format_tool(choice)
  elsif choice.content
  choice.content.each do |c|
  next unless c["type"] == "output_text"
  message.content << c["text"] << "\n"
+ next unless c["annotations"]
+ message.extra["annotations"] = [*message.extra["annotations"], *c["annotations"]]
  end
  end
  end
@@ -0,0 +1,21 @@
+ # frozen_string_literal: true
+
+ module LLM::OpenAI::Response
+ ##
+ # The {LLM::OpenAI::Response::WebSearch LLM::OpenAI::Response::WebSearch}
+ # module provides methods for accessing web search results from a web search
+ # tool call made via the {LLM::Provider#web_search LLM::Provider#web_search}
+ # method.
+ module WebSearch
+ ##
+ # Returns one or more search results
+ # @return [Array<LLM::Object>]
+ def search_results
+ LLM::Object.from_hash(
+ choices[0]
+ .annotations
+ .map { _1.slice(:title, :url) }
+ )
+ end
+ end
+ end
@@ -39,36 +39,44 @@ class LLM::OpenAI
  def merge_choices!(choices)
  choices.each do |choice|
  if @body.choices[choice["index"]]
- target = @body["choices"][choice["index"]]["message"]
+ target_message = @body["choices"][choice["index"]]["message"]
  delta = choice["delta"]
  delta.each do |key, value|
- if target[key]
- if key == "content"
- target[key] << value
- @io << value if @io.respond_to?(:<<)
- elsif key == "tool_calls"
- merge_tools!(target, value)
- else
- target[key] = value
- end
- else
+ if key == "content"
+ target_message[key] ||= +""
+ target_message[key] << value
  @io << value if @io.respond_to?(:<<)
- target[key] = value
+ elsif key == "tool_calls"
+ merge_tools!(target_message, value)
+ else
+ target_message[key] = value
  end
  end
  else
- target = {"message" => {"role" => "assistant"}}
- @body["choices"][choice["index"]] = target
- target["message"].merge!(choice["delta"])
+ message_hash = {"role" => "assistant"}
+ @body["choices"][choice["index"]] = {"message" => message_hash}
+ choice["delta"].each do |key, value|
+ if key == "content"
+ @io << value if @io.respond_to?(:<<)
+ message_hash[key] = value
+ else
+ message_hash[key] = value
+ end
+ end
  end
  end
  end

  def merge_tools!(target, tools)
+ target["tool_calls"] ||= []
  tools.each.with_index do |toola, index|
  toolb = target["tool_calls"][index]
- if toolb
- toola["function"].each { toolb["function"][_1] << _2 }
+ if toolb && toola["function"] && toolb["function"]
+ # Append to existing function arguments
+ toola["function"].each do |func_key, func_value|
+ toolb["function"][func_key] ||= +""
+ toolb["function"][func_key] << func_value
+ end
  else
  target["tool_calls"][index] = toola
  end
@@ -3,9 +3,18 @@
  class LLM::OpenAI
  ##
  # The {LLM::OpenAI::VectorStores LLM::OpenAI::VectorStores} class provides
- # an interface for [OpenAI's vector stores API](https://platform.openai.com/docs/api-reference/vector_stores/create)
+ # an interface for [OpenAI's vector stores API](https://platform.openai.com/docs/api-reference/vector_stores/create).
+ #
+ # @example
+ # llm = LLM.openai(key: ENV["OPENAI_SECRET"])
+ # files = %w(foo.pdf bar.pdf).map { llm.files.create(file: _1) }
+ # store = llm.vector_stores.create_and_poll(name: "PDF Store", file_ids: files.map(&:id))
+ # print "[-] store is ready", "\n"
+ # chunks = llm.vector_stores.search(vector: store, query: "What is Ruby?")
+ # chunks.each { |chunk| puts chunk }
  class VectorStores
  require_relative "response/enumerable"
+ PollError = Class.new(LLM::Error)

  ##
  # @param [LLM::Provider] provider
@@ -40,6 +49,14 @@ class LLM::OpenAI
  LLM::Response.new(res)
  end

+ ##
+ # Create a vector store and poll until its status is "completed"
+ # @param (see LLM::OpenAI::VectorStores#create)
+ # @return (see LLM::OpenAI::VectorStores#poll)
+ def create_and_poll(...)
+ poll(vector: create(...))
+ end
+
  ##
  # Get a vector store
  # @param [String, #id] vector The ID of the vector store
@@ -181,6 +198,27 @@ class LLM::OpenAI
  LLM::Response.new(res)
  end

+ ##
+ # Poll a vector store until its status is "completed"
+ # @param [String, #id] vector The ID of the vector store
+ # @param [Integer] attempts The current number of attempts (default: 0)
+ # @param [Integer] max The maximum number of iterations (default: 50)
+ # @raise [LLM::PollError] When the maximum number of iterations is reached
+ # @return [LLM::Response]
+ def poll(vector:, attempts: 0, max: 50)
+ if attempts == max
+ raise LLM::PollError, "vector store '#{vector.id}' has status '#{vector.status}' after #{max} attempts"
+ elsif vector.status == "expired"
+ raise LLM::PollError, "vector store '#{vector.id}' has expired"
+ elsif vector.status != "completed"
+ vector = get(vector:)
+ sleep(0.1 * (2**attempts))
+ poll(vector:, attempts: attempts + 1, max:)
+ else
+ vector
+ end
+ end
+
  private

  [:headers, :execute, :set_body_stream].each do |m|
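The back-off in `poll` doubles on every retry; a quick worked example of the schedule produced by `sleep(0.1 * (2**attempts))`:

```ruby
# attempts: 0     1     2     3     4
# sleep:    0.1s  0.2s  0.4s  0.8s  1.6s ... until "completed", or PollError at `max`.
5.times { |attempts| print(0.1 * (2**attempts), "s ") } #=> 0.1s 0.2s 0.4s 0.8s 1.6s
```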
@@ -16,6 +16,7 @@ module LLM
  class OpenAI < Provider
  require_relative "openai/response/embedding"
  require_relative "openai/response/completion"
+ require_relative "openai/response/web_search"
  require_relative "openai/error_handler"
  require_relative "openai/format"
  require_relative "openai/stream_parser"
@@ -146,6 +147,37 @@ module LLM
  "gpt-4.1"
  end

+ ##
+ # @note
+ # This method includes certain tools that require configuration
+ # through a set of options that are easier to set through the
+ # {LLM::Provider#tool LLM::Provider#tool} method.
+ # @return (see LLM::Provider#tools)
+ def tools
+ {
+ web_search: tool(:web_search),
+ file_search: tool(:file_search),
+ image_generation: tool(:image_generation),
+ code_interpreter: tool(:code_interpreter),
+ computer_use: tool(:computer_use)
+ }
+ end
+
+ ##
+ # A convenience method for performing a web search using the
+ # OpenAI web search tool.
+ # @example
+ # llm = LLM.openai(key: ENV["KEY"])
+ # res = llm.web_search(query: "summarize today's news")
+ # res.search_results.each { |item| print item.title, ": ", item.url, "\n" }
+ # @param query [String] The search query.
+ # @return [LLM::Response] The response from the LLM provider.
+ def web_search(query:)
+ responses
+ .create(query, store: false, tools: [tools[:web_search]])
+ .extend(LLM::OpenAI::Response::WebSearch)
+ end
+
  private

  def headers
data/lib/llm/tool.rb ADDED
@@ -0,0 +1,32 @@
+ # frozen_string_literal: true
+
+ ##
+ # The {LLM::Tool LLM::Tool} class represents a platform-native tool
+ # that can be activated by an LLM provider. Unlike {LLM::Function LLM::Function},
+ # these tools are pre-defined by the provider and their capabilities
+ # are already known to the underlying LLM.
+ #
+ # @example
+ # #!/usr/bin/env ruby
+ # llm = LLM.gemini ENV["KEY"]
+ # bot = LLM::Bot.new(llm, tools: [LLM.tool(:google_search)])
+ # bot.chat("Summarize today's news", role: :user)
+ # print bot.messages.find(&:assistant?).content, "\n"
+ class LLM::Tool < Struct.new(:name, :options, :provider)
+ ##
+ # @return [String]
+ def to_json(...)
+ to_h.to_json(...)
+ end
+
+ ##
+ # @return [Hash]
+ def to_h
+ case provider.class.to_s
+ when "LLM::Anthropic" then options.merge("name" => name.to_s)
+ when "LLM::Gemini" then {name => options}
+ else options.merge("type" => name.to_s)
+ end
+ end
+ alias_method :to_hash, :to_h
+ end
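A quick sketch of the provider-specific serialization, reusing the Anthropic web-search options shown earlier in this diff:

```ruby
require "llm"

llm  = LLM.anthropic(key: ENV["KEY"])
tool = llm.tool(:web_search, type: "web_search_20250305", max_uses: 5)
# Anthropic tools serialize with a "name" key; other providers get "type".
p tool.to_h #=> {type: "web_search_20250305", max_uses: 5, "name" => "web_search"}
```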
data/lib/llm/version.rb CHANGED
@@ -1,5 +1,5 @@
  # frozen_string_literal: true

  module LLM
- VERSION = "0.15.0"
+ VERSION = "0.16.1"
  end
data/lib/llm.rb CHANGED
@@ -18,6 +18,7 @@ module LLM
  require_relative "llm/function"
  require_relative "llm/eventstream"
  require_relative "llm/eventhandler"
+ require_relative "llm/tool"

  module_function

@@ -38,7 +39,7 @@ module LLM
  end

  ##
- # @param (see LLM::Provider#initialize)
+ # @param key (see LLM::Provider#initialize)
  # @return (see LLM::Ollama#initialize)
  def ollama(key: nil, **)
  require_relative "llm/providers/ollama" unless defined?(LLM::Ollama)
@@ -79,7 +80,7 @@ module LLM
  end

  ##
- # Define a function
+ # Define or get a function
  # @example
  # LLM.function(:system) do |fn|
  # fn.description "Run system command"
@@ -94,7 +95,11 @@ module LLM
  # @param [Proc] b The block to define the function
  # @return [LLM::Function] The function object
  def function(name, &b)
- functions[name.to_s] = LLM::Function.new(name, &b)
+ if block_given?
+ functions[name.to_s] = LLM::Function.new(name, &b)
+ else
+ functions[name.to_s]
+ end
  end

  ##
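The define-or-get behavior is easiest to see side by side (a small sketch that reuses the @example from the docs above):

```ruby
require "llm"

# With a block: define and register the function under its name.
LLM.function(:system) do |fn|
  fn.description "Run system command"
end

# Without a block: look up the previously registered function.
fn = LLM.function(:system)
print fn.equal?(LLM.function(:system)), "\n" #=> true
```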
data/llm.gemspec CHANGED
@@ -40,4 +40,5 @@ Gem::Specification.new do |spec|
  spec.add_development_dependency "standard", "~> 1.50"
  spec.add_development_dependency "vcr", "~> 6.0"
  spec.add_development_dependency "dotenv", "~> 2.8"
+ spec.add_development_dependency "net-http-persistent", "~> 4.0"
  end
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: llm.rb
  version: !ruby/object:Gem::Version
- version: 0.15.0
+ version: 0.16.1
  platform: ruby
  authors:
  - Antar Azri
@@ -150,6 +150,20 @@ dependencies:
  - - "~>"
  - !ruby/object:Gem::Version
  version: '2.8'
+ - !ruby/object:Gem::Dependency
+ name: net-http-persistent
+ requirement: !ruby/object:Gem::Requirement
+ requirements:
+ - - "~>"
+ - !ruby/object:Gem::Version
+ version: '4.0'
+ type: :development
+ prerelease: false
+ version_requirements: !ruby/object:Gem::Requirement
+ requirements:
+ - - "~>"
+ - !ruby/object:Gem::Version
+ version: '4.0'
  description: llm.rb is a zero-dependency Ruby toolkit for Large Language Models that
  includes OpenAI, Gemini, Anthropic, xAI (grok), DeepSeek, Ollama, and LlamaCpp.
  The toolkit includes full support for chat, streaming, tool calling, audio, images,
@@ -170,6 +184,7 @@ files:
  - lib/llm/bot/prompt/completion.rb
  - lib/llm/bot/prompt/respond.rb
  - lib/llm/buffer.rb
+ - lib/llm/client.rb
  - lib/llm/error.rb
  - lib/llm/eventhandler.rb
  - lib/llm/eventstream.rb
@@ -193,6 +208,7 @@ files:
  - lib/llm/providers/anthropic/response/completion.rb
  - lib/llm/providers/anthropic/response/enumerable.rb
  - lib/llm/providers/anthropic/response/file.rb
+ - lib/llm/providers/anthropic/response/web_search.rb
  - lib/llm/providers/anthropic/stream_parser.rb
  - lib/llm/providers/deepseek.rb
  - lib/llm/providers/deepseek/format.rb
@@ -211,6 +227,7 @@ files:
  - lib/llm/providers/gemini/response/files.rb
  - lib/llm/providers/gemini/response/image.rb
  - lib/llm/providers/gemini/response/models.rb
+ - lib/llm/providers/gemini/response/web_search.rb
  - lib/llm/providers/gemini/stream_parser.rb
  - lib/llm/providers/llamacpp.rb
  - lib/llm/providers/ollama.rb
@@ -240,6 +257,7 @@ files:
  - lib/llm/providers/openai/response/image.rb
  - lib/llm/providers/openai/response/moderations.rb
  - lib/llm/providers/openai/response/responds.rb
+ - lib/llm/providers/openai/response/web_search.rb
  - lib/llm/providers/openai/responses.rb
  - lib/llm/providers/openai/responses/stream_parser.rb
  - lib/llm/providers/openai/stream_parser.rb
@@ -257,6 +275,7 @@ files:
  - lib/llm/schema/object.rb
  - lib/llm/schema/string.rb
  - lib/llm/schema/version.rb
+ - lib/llm/tool.rb
  - lib/llm/utils.rb
  - lib/llm/version.rb
  - llm.gemspec