llm.rb 0.15.0 → 0.16.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/README.md +84 -5
- data/lib/llm/buffer.rb +1 -1
- data/lib/llm/client.rb +37 -0
- data/lib/llm/function.rb +3 -3
- data/lib/llm/message.rb +10 -0
- data/lib/llm/object.rb +6 -0
- data/lib/llm/provider.rb +65 -8
- data/lib/llm/providers/anthropic/format.rb +1 -1
- data/lib/llm/providers/anthropic/response/web_search.rb +21 -0
- data/lib/llm/providers/anthropic.rb +30 -0
- data/lib/llm/providers/gemini/format/completion_format.rb +1 -1
- data/lib/llm/providers/gemini/format.rb +3 -2
- data/lib/llm/providers/gemini/images.rb +23 -7
- data/lib/llm/providers/gemini/response/completion.rb +3 -2
- data/lib/llm/providers/gemini/response/web_search.rb +22 -0
- data/lib/llm/providers/gemini/stream_parser.rb +41 -24
- data/lib/llm/providers/gemini.rb +26 -0
- data/lib/llm/providers/openai/format.rb +5 -1
- data/lib/llm/providers/openai/response/responds.rb +17 -4
- data/lib/llm/providers/openai/response/web_search.rb +21 -0
- data/lib/llm/providers/openai/stream_parser.rb +25 -17
- data/lib/llm/providers/openai/vector_stores.rb +32 -1
- data/lib/llm/providers/openai.rb +32 -0
- data/lib/llm/tool.rb +32 -0
- data/lib/llm/version.rb +1 -1
- data/lib/llm.rb +8 -3
- data/llm.gemspec +1 -0
- metadata +20 -1
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-metadata.gz:
-data.tar.gz:
+metadata.gz: 38502ab4a41dba8177cb7b21db68f3e0dd5492323ac8b132b1775926b46ffffc
+data.tar.gz: ebe196962c43934ae979e298f80b4bc2e30a147ad0f42595eef59880be9fc01e
 SHA512:
-metadata.gz:
-data.tar.gz:
+metadata.gz: 5100d71b851771137a86e799bc2cadab360fc0ced297288b09fa701a8f434c671fe739427b889243e9704c5ae2a05b6b8761c85f0b0bea9268700bf770e80f13
+data.tar.gz: 3aed0f826c229a37d30b2f0d41678976ba00df72216588a9302320c52e22d9d1b814df0b76bca8f6e713e105d73006339121a8bb900a73ac8896cc1f7c3f0051
data/README.md
CHANGED
@@ -1,7 +1,7 @@
 ## About
 
 llm.rb is a zero-dependency Ruby toolkit for Large Language Models that
-includes OpenAI, Gemini, Anthropic, xAI (
+includes OpenAI, Gemini, Anthropic, xAI (Grok), DeepSeek, Ollama, and
 LlamaCpp. The toolkit includes full support for chat, streaming, tool calling,
 audio, images, files, and structured outputs (JSON Schema).
 
@@ -9,9 +9,13 @@ audio, images, files, and structured outputs (JSON Schema).
 
 #### Demo
 
+This cool demo writes a new [llm-shell](https://github.com/llmrb/llm-shell#readme) command
+with the help of [llm.rb](https://github.com/llmrb/llm#readme). <br> Similar-ish to
+GitHub Copilot but for the terminal.
+
 <details>
-<summary>
-<img src="share/llm-shell/examples/demo.gif
+<summary>Start demo</summary>
+<img src="https://github.com/llmrb/llm/blob/main/share/llm-shell/examples/demo.gif?raw=true" alt="llm-shell demo" />
 </details>
 
 #### Guides
@@ -22,6 +26,8 @@ audio, images, files, and structured outputs (JSON Schema).
 a blog post that implements an age estimation tool
 * [How to edit an image with Gemini](https://0x1eef.github.io/posts/how-to-edit-images-with-gemini/) –
 a blog post that implements image editing with Gemini
+* [Fast sailing with persistent connections](https://0x1eef.github.io/posts/persistent-connections-with-llm.rb/) –
+a blog post that optimizes performance with a thread-safe connection pool
 
 #### Ecosystem
 
@@ -34,6 +40,7 @@ audio, images, files, and structured outputs (JSON Schema).
 - ✅ A single unified interface for multiple providers
 - 📦 Zero dependencies outside Ruby's standard library
 - 🚀 Smart API design that minimizes the number of requests made
+- ♻️ Optional: per-provider, process-wide connection pool via net-http-persistent
 
 #### Chat, Agents
 - 🧠 Stateless and stateful chat via completions and responses API
@@ -110,13 +117,30 @@ llm = LLM.ollama(key: nil)
 llm = LLM.llamacpp(key: nil)
 ```
 
+#### Persistence
+
+The llm.rb library can maintain a process-wide connection pool
+for each provider that is instantiated. This feature can improve
+performance but it is optional, the implementation depends on
+[net-http-persistent](https://github.com/dbrain/net-http-persistent),
+and the gem should be installed separately:
+
+```ruby
+#!/usr/bin/env ruby
+require "llm"
+
+llm = LLM.openai(key: ENV["KEY"], persistent: true)
+res = llm.responses.create "Hello world"
+llm.responses.create "Adios", last_response_id: res.response_id
+```
+
 ### Conversations
 
 #### Completions
 
 > This example uses the stateless chat completions API that all
 > providers support. A similar example for OpenAI's stateful
-> responses API is available in the [docs/](
+> responses API is available in the [docs/](https://0x1eef.github.io/x/llm.rb/file.OPENAI.html#responses)
 > directory.
 
 The following example creates an instance of
@@ -149,7 +173,8 @@ bot.messages.each { print "[#{_1.role}] ", _1.content, "\n" }
 > There Is More Than One Way To Do It (TIMTOWTDI) when you are
 > using llm.rb – and this is especially true when it
 > comes to streaming. See the streaming documentation in
-> [docs/](
+> [docs/](https://0x1eef.github.io/x/llm.rb/file.STREAMING.html#scopes)
+> for more details.
 
 The following example streams the messages in a conversation
 as they are generated in real-time. The `stream` option can
@@ -263,6 +288,43 @@ bot.chat bot.functions.map(&:call) # report return value to the LLM
 # {stderr: "", stdout: "FreeBSD"}
 ```
 
+#### Provider
+
+The
+[LLM::Function](https://0x1eef.github.io/x/llm.rb/LLM/Function.html)
+class can define a local function that can be called by a provider on your behalf,
+and the
+[LLM::Tool](https://0x1eef.github.io/x/llm.rb/LLM/Tool.html)
+class represents a tool that is defined and implemented by a provider, and we can
+request that the provider call the tool on our behalf. That's the primary difference
+between a function implemented locally and a tool implemented by a provider. The
+available tools depend on the provider, and the following example uses the
+OpenAI provider to execute Python code on OpenAI's servers:
+
+```ruby
+#!/usr/bin/env ruby
+require "llm"
+
+llm = LLM.openai(key: ENV["KEY"])
+res = llm.responses.create "Run: 'print(\"hello world\")'", tools: [llm.tool(:code_interpreter)]
+print res.output_text, "\n"
+```
+
+#### Web Search
+
+A common tool among all providers is the ability to perform a web search, and
+the following example uses the OpenAI provider to search the web using the
+Web Search tool. This can also be done with the Anthropic and Gemini providers:
+
+```ruby
+#!/usr/bin/env ruby
+require "llm"
+
+llm = LLM.openai(key: ENV["KEY"])
+res = llm.web_search(query: "summarize today's news")
+print res.output_text, "\n"
+```
+
 ### Files
 
 #### Create
@@ -504,6 +566,23 @@ bot.chat "Hello #{model.id} :)"
 bot.messages.select(&:assistant?).each { print "[#{_1.role}] ", _1.content, "\n" }
 ```
 
+## Reviews
+
+I supplied both Gemini and DeepSeek with the contents of [lib/](https://github.com/llmrb/llm/tree/main/lib)
+and [README.md](https://github.com/llmrb/llm#readme) via [llm-shell](https://github.com/llmrb/llm-shell#readme).
+Their feedback was way more positive than I could have imagined 😅 These are genuine responses though, with no
+special prompting or engineering. I just provided them with the source code and asked for their opinion.
+
+<details>
+<summary>Review by Gemini</summary>
+<img src="https://github.com/llmrb/llm/blob/main/share/llm-shell/examples/gemini.png?raw=true" alt="Gemini review" />
+</details>
+
+<details>
+<summary>Review by DeepSeek</summary>
+<img src="https://github.com/llmrb/llm/blob/main/share/llm-shell/examples/deepseek.png?raw=true" alt="DeepSeek review" />
+</details>
+
 ## Documentation
 
 ### API
data/lib/llm/buffer.rb
CHANGED
@@ -138,7 +138,7 @@ module LLM
 @response ? {previous_response_id: @response.response_id} : {}
 ].inject({}, &:merge!)
 @response = @provider.responses.create(message.content, params.merge(role:))
-@completed.concat([*pendings, message, *@response.
+@completed.concat([*pendings, message, *@response.choices[0]])
 @pending.clear
 end
 end
data/lib/llm/client.rb
ADDED
@@ -0,0 +1,37 @@
+# frozen_string_literal: true
+
+module LLM
+##
+# @api private
+module Client
+private
+
+##
+# @api private
+def persistent_client
+mutex.synchronize do
+if clients[client_id]
+clients[client_id]
+else
+require "net/http/persistent" unless defined?(Net::HTTP::Persistent)
+client = Net::HTTP::Persistent.new(name: self.class.name)
+client.read_timeout = timeout
+clients[client_id] = client
+end
+end
+end
+
+##
+# @api private
+def transient_client
+client = Net::HTTP.new(host, port)
+client.read_timeout = timeout
+client.use_ssl = ssl
+client
+end
+
+def client_id = "#{host}:#{port}:#{timeout}:#{ssl}"
+def clients = self.class.clients
+def mutex = self.class.mutex
+end
+end
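The module above backs the README's `persistent: true` option: pooled clients are stored at the class level and keyed by `"#{host}:#{port}:#{timeout}:#{ssl}"`, so provider instances with identical connection settings share a single `Net::HTTP::Persistent` client. A minimal sketch of that behaviour, assuming the `KEY` environment variable and the net-http-persistent gem are available:

```ruby
#!/usr/bin/env ruby
require "llm"

# Both instances produce the same client_id (same host, port, timeout, ssl),
# so they reuse one pooled connection rather than opening a socket per request.
llm1 = LLM.openai(key: ENV["KEY"], persistent: true)
llm2 = LLM.openai(key: ENV["KEY"], persistent: true)
llm1.responses.create "Hello world"
llm2.responses.create "Hola mundo"
```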
data/lib/llm/function.rb
CHANGED
@@ -30,7 +30,7 @@
 # fn.register(System)
 # end
 class LLM::Function
-class Return < Struct.new(:id, :value)
+class Return < Struct.new(:id, :name, :value)
 end
 
 ##
@@ -92,7 +92,7 @@ class LLM::Function
 # @return [LLM::Function::Return] The result of the function call
 def call
 runner = ((Class === @runner) ? @runner.new : @runner)
-Return.new(id, runner.call(**arguments))
+Return.new(id, name, runner.call(**arguments))
 ensure
 @called = true
 end
@@ -106,7 +106,7 @@ class LLM::Function
 # bot.chat bot.functions.map(&:cancel)
 # @return [LLM::Function::Return]
 def cancel(reason: "function call cancelled")
-Return.new(id, {cancelled: true, reason:})
+Return.new(id, name, {cancelled: true, reason:})
 ensure
 @cancelled = true
 end
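`LLM::Function::Return` now carries the function's name alongside the call id and return value, which is what the Gemini formatter later in this diff uses to build its `functionResponse` payload. A rough sketch of what a completed call exposes, reusing the README's bot examples (the exact values are illustrative):

```ruby
ret = bot.functions.map(&:call).first
ret.id    # the provider-assigned tool-call id
ret.name  # the function's name, e.g. "system"
ret.value # whatever the registered runner returned
bot.chat [ret] # report the return value back to the LLM
```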
data/lib/llm/message.rb
CHANGED
@@ -127,6 +127,16 @@ module LLM
 extra[:response]
 end
 
+##
+# @note
+# This method might return annotations for assistant messages,
+# and it returns an empty array for non-assistant messages
+# Returns annotations associated with the message
+# @return [Array<LLM::Object>]
+def annotations
+@annotations ||= LLM::Object.from_hash(extra["annotations"] || [])
+end
+
 ##
 # @note
 # This method returns token usage for assistant messages,
data/lib/llm/object.rb
CHANGED
data/lib/llm/provider.rb
CHANGED
@@ -7,6 +7,19 @@
 # @abstract
 class LLM::Provider
 require "net/http"
+require_relative "client"
+include LLM::Client
+
+@@clients = {}
+@@mutex = Mutex.new
+
+##
+# @api private
+def self.clients = @@clients
+
+##
+# @api private
+def self.mutex = @@mutex
 
 ##
 # @param [String, nil] key
@@ -19,11 +32,17 @@ class LLM::Provider
 # The number of seconds to wait for a response
 # @param [Boolean] ssl
 # Whether to use SSL for the connection
-
+# @param [Boolean] persistent
+# Whether to use a persistent connection.
+# Requires the net-http-persistent gem.
+def initialize(key:, host:, port: 443, timeout: 60, ssl: true, persistent: false)
 @key = key
-@
-@
-@
+@host = host
+@port = port
+@timeout = timeout
+@ssl = ssl
+@client = persistent ? persistent_client : transient_client
+@base_uri = URI("#{ssl ? "https" : "http"}://#{host}:#{port}/")
 end
 
 ##
@@ -217,9 +236,46 @@ class LLM::Provider
 tap { (@headers ||= {}).merge!(headers) }
 end
 
+##
+# @note
+# This method might be outdated, and the {LLM::Provider#tool LLM::Provider#tool}
+# method can be used if a tool is not found here.
+# Returns all known tools provided by a provider.
+# @return [String => LLM::Tool]
+def tools
+{}
+end
+
+##
+# @note
+# OpenAI, Anthropic, and Gemini provide platform-tools for things
+# like web search, and more.
+# Returns a tool provided by a provider.
+# @example
+# llm = LLM.openai(key: ENV["KEY"])
+# tools = [llm.tool(:web_search)]
+# res = llm.responses.create("Summarize today's news", tools:)
+# print res.output_text, "\n"
+# @param [String, Symbol] name The name of the tool
+# @param [Hash] options Configuration options for the tool
+# @return [LLM::Tool]
+def tool(name, options = {})
+LLM::Tool.new(name, options, self)
+end
+
+##
+# Provides a web search capability
+# @param [String] query The search query
+# @raise [NotImplementedError]
+# When the method is not implemented by a subclass
+# @return [LLM::Response]
+def web_search(query:)
+raise NotImplementedError
+end
+
 private
 
-attr_reader :client
+attr_reader :client, :base_uri, :host, :port, :timeout, :ssl
 
 ##
 # The headers to include with a request
@@ -269,8 +325,9 @@ class LLM::Provider
 # When there is a network error at the operating system level
 # @return [Net::HTTPResponse]
 def execute(request:, stream: nil, stream_parser: self.stream_parser, &b)
+args = (Net::HTTP === client) ? [request] : [URI.join(base_uri, request.path), request]
 res = if stream
-client.request(
+client.request(*args) do |res|
 handler = event_handler.new stream_parser.new(stream)
 parser = LLM::EventStream::Parser.new
 parser.register(handler)
@@ -284,8 +341,8 @@ class LLM::Provider
 parser&.free
 end
 else
-b ? client.request(
-client.request(
+b ? client.request(*args) { (Net::HTTPSuccess === _1) ? b.call(_1) : _1 } :
+client.request(*args)
 end
 handle_response(res)
 end
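Taken together, the base class now exposes a small platform-tool interface: `tools` lists the tools a provider knows about, `tool` wraps a name and options in an `LLM::Tool`, and `web_search` is a convenience that subclasses override. A hedged sketch of how the pieces combine, with option values copied from the Anthropic defaults shown later in this diff:

```ruby
llm  = LLM.anthropic(key: ENV["KEY"])
tool = llm.tool(:web_search, type: "web_search_20250305", max_uses: 5)
res  = llm.complete("Summarize today's news", tools: [tool])
print res.choices[0].content, "\n"
```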
data/lib/llm/providers/anthropic/format.rb
CHANGED
@@ -24,7 +24,7 @@ class LLM::Anthropic
 def format_tools(params)
 return {} unless params and params[:tools]&.any?
 tools = params[:tools]
-{tools: tools.map { _1.format(self) }}
+{tools: tools.map { _1.respond_to?(:format) ? _1.format(self) : _1 }}
 end
 end
 end
data/lib/llm/providers/anthropic/response/web_search.rb
ADDED
@@ -0,0 +1,21 @@
+# frozen_string_literal: true
+
+module LLM::Anthropic::Response
+##
+# The {LLM::Anthropic::Response::WebSearch LLM::Anthropic::Response::WebSearch}
+# module provides methods for accessing web search results from a web search
+# tool call made via the {LLM::Provider#web_search LLM::Provider#web_search}
+# method.
+module WebSearch
+##
+# Returns one or more search results
+# @return [Array<LLM::Object>]
+def search_results
+LLM::Object.from_hash(
+content
+.select { _1["type"] == "web_search_tool_result" }
+.flat_map { |n| n.content.map { _1.slice(:title, :url) } }
+)
+end
+end
+end
data/lib/llm/providers/anthropic.rb
CHANGED
@@ -15,6 +15,7 @@ module LLM
 # bot.messages.select(&:assistant?).each { print "[#{_1.role}]", _1.content, "\n" }
 class Anthropic < Provider
 require_relative "anthropic/response/completion"
+require_relative "anthropic/response/web_search"
 require_relative "anthropic/format"
 require_relative "anthropic/error_handler"
 require_relative "anthropic/stream_parser"
@@ -83,6 +84,35 @@
 "claude-sonnet-4-20250514"
 end
 
+##
+# @note
+# This method includes certain tools that require configuration
+# through a set of options that are easier to set through the
+# {LLM::Provider#tool LLM::Provider#tool} method.
+# @see https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/web-search-tool Anthropic docs
+# @return (see LLM::Provider#tools)
+def tools
+{
+bash: tool(:bash, type: "bash_20250124"),
+web_search: tool(:web_search, type: "web_search_20250305", max_uses: 5),
+text_editor: tool(:str_replace_based_edit_tool, type: "text_editor_20250728", max_characters: 10_000)
+}
+end
+
+##
+# A convenience method for performing a web search using the
+# Anthropic web search tool.
+# @example
+# llm = LLM.anthropic(key: ENV["KEY"])
+# res = llm.web_search(query: "summarize today's news")
+# res.search_results.each { |item| print item.title, ": ", item.url, "\n" }
+# @param query [String] The search query.
+# @return [LLM::Response] The response from the LLM provider.
+def web_search(query:)
+complete(query, tools: [tools[:web_search]])
+.extend(LLM::Anthropic::Response::WebSearch)
+end
+
 private
 
 def headers
data/lib/llm/providers/gemini/format/completion_format.rb
CHANGED
@@ -43,7 +43,7 @@ module LLM::Gemini::Format
 when LLM::Message
 format_content(content.content)
 when LLM::Function::Return
-[{
+[{functionResponse: {name: content.name, response: content.value}}]
 else
 prompt_error!(content)
 end
data/lib/llm/providers/gemini/format.rb
CHANGED
@@ -32,8 +32,9 @@ class LLM::Gemini
 # @return [Hash]
 def format_tools(params)
 return {} unless params and params[:tools]&.any?
-
-
+tools = params.delete(:tools)
+platform, functions = [tools.grep(LLM::Tool), tools.grep(LLM::Function)]
+{tools: [*platform, {functionDeclarations: functions.map { _1.format(self) }}]}
 end
 end
 end
data/lib/llm/providers/gemini/images.rb
CHANGED
@@ -44,7 +44,7 @@ class LLM::Gemini
 def create(prompt:, model: "gemini-2.0-flash-exp-image-generation", **params)
 req = Net::HTTP::Post.new("/v1beta/models/#{model}:generateContent?key=#{key}", headers)
 body = JSON.dump({
-contents: [{parts: [{text:
+contents: [{parts: [{text: create_prompt}, {text: prompt}]}],
 generationConfig: {responseModalities: ["TEXT", "IMAGE"]}
 }.merge!(params))
 req.body = body
@@ -69,7 +69,7 @@ class LLM::Gemini
 req = Net::HTTP::Post.new("/v1beta/models/#{model}:generateContent?key=#{key}", headers)
 image = LLM.File(image)
 body = JSON.dump({
-contents: [{parts: [{text: prompt}, format.format_content(image)]}],
+contents: [{parts: [{text: edit_prompt}, {text: prompt}, format.format_content(image)]}],
 generationConfig: {responseModalities: ["TEXT", "IMAGE"]}
 }.merge!(params)).b
 set_body_stream(req, StringIO.new(body))
@@ -94,12 +94,28 @@ class LLM::Gemini
 @provider.instance_variable_get(:@key)
 end
 
-def
+def create_prompt
 <<~PROMPT
-
-
-
-
+## Context
+Your task is to generate one or more image(s) based on the user's instructions.
+The user will provide you with text only.
+
+## Instructions
+1. The model *MUST* generate image(s) based on the user text alone.
+2. The model *MUST NOT* generate anything else.
+PROMPT
+end
+
+def edit_prompt
+<<~PROMPT
+## Context
+Your task is to edit the provided image based on the user's instructions.
+The user will provide you with both text and an image.
+
+## Instructions
+1. The model *MUST* edit the provided image based on the user's instructions
+2. The model *MUST NOT* generate a new image.
+3. The model *MUST NOT* generate anything else.
 PROMPT
 end
 
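The image endpoints now prepend a fixed instruction block (`create_prompt` or `edit_prompt`) as a separate text part ahead of the user's prompt, nudging the model to only generate or only edit. A hedged usage sketch: the `images.create`/`images.edit` calls below assume the keyword style used elsewhere in the README, and the file name is illustrative:

```ruby
llm = LLM.gemini(key: ENV["KEY"])
res = llm.images.create(prompt: "A watercolor painting of a lighthouse")
res = llm.images.edit(image: "lighthouse.png", prompt: "Make the sky stormy")
```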
data/lib/llm/providers/gemini/response/completion.rb
CHANGED
@@ -13,8 +13,9 @@ module LLM::Gemini::Response
 def format_choices
 candidates.map.with_index do |choice, index|
 choice = LLM::Object.from_hash(choice)
-content = choice.content
-role
+content = choice.content || LLM::Object.new
+role = content.role || "model"
+parts = content.parts || [{"text" => choice.finishReason}]
 text = parts.filter_map { _1["text"] }.join
 tools = parts.filter_map { _1["functionCall"] }
 extra = {index:, response: self, tool_calls: format_tool_calls(tools), original_tool_calls: tools}
data/lib/llm/providers/gemini/response/web_search.rb
ADDED
@@ -0,0 +1,22 @@
+# frozen_string_literal: true
+
+module LLM::Gemini::Response
+##
+# The {LLM::Gemini::Response::WebSearch LLM::Gemini::Response::WebSearch}
+# module provides methods for accessing web search results from a web search
+# tool call made via the {LLM::Provider#web_search LLM::Provider#web_search}
+# method.
+module WebSearch
+##
+# Returns one or more search results
+# @return [Array<LLM::Object>]
+def search_results
+LLM::Object.from_hash(
+candidates[0]
+.groundingMetadata
+.groundingChunks
+.map { {"url" => _1.web.uri, "title" => _1.web.title} }
+)
+end
+end
+end
data/lib/llm/providers/gemini/stream_parser.rb
CHANGED
@@ -13,7 +13,7 @@ class LLM::Gemini
 # @param [#<<] io An IO-like object
 # @return [LLM::Gemini::StreamParser]
 def initialize(io)
-@body = LLM::Object.
+@body = LLM::Object.from_hash({candidates: []})
 @io = io
 end
 
@@ -21,47 +21,64 @@ class LLM::Gemini
 # @param [Hash] chunk
 # @return [LLM::Gemini::StreamParser]
 def parse!(chunk)
-tap {
+tap { merge_chunk!(LLM::Object.from_hash(chunk)) }
 end
 
 private
 
-def
+def merge_chunk!(chunk)
 chunk.each do |key, value|
-if key == "candidates"
-@body.candidates ||= []
+if key.to_s == "candidates"
 merge_candidates!(value)
+elsif key.to_s == "usageMetadata" &&
+@body.usageMetadata.is_a?(LLM::Object) &&
+value.is_a?(LLM::Object)
+@body.usageMetadata = LLM::Object.from_hash(@body.usageMetadata.to_h.merge(value.to_h))
 else
 @body[key] = value
 end
 end
 end
 
-def merge_candidates!(
-
-
-
-
-
+def merge_candidates!(new_candidates_list)
+new_candidates_list.each do |new_candidate_delta|
+index = new_candidate_delta.index
+@body.candidates[index] ||= LLM::Object.from_hash({content: {parts: []}})
+existing_candidate = @body.candidates[index]
+new_candidate_delta.each do |key, value|
+if key.to_s == "content"
+merge_candidate_content!(existing_candidate.content, value) if value
+else
+existing_candidate[key] = value # Overwrite other fields
+end
 end
 end
 end
 
-def
-
-.
-
-
-
+def merge_candidate_content!(existing_content, new_content_delta)
+new_content_delta.each do |key, value|
+if key.to_s == "parts"
+existing_content.parts ||= []
+merge_content_parts!(existing_content.parts, value) if value
+else
+existing_content[key] = value
+end
+end
 end
 
-def
-
-
-
-
-
-
+def merge_content_parts!(existing_parts, new_parts_delta)
+new_parts_delta.each do |new_part_delta|
+if new_part_delta.text
+last_existing_part = existing_parts.last
+if last_existing_part&.text
+last_existing_part.text << new_part_delta.text
+@io << new_part_delta.text if @io.respond_to?(:<<)
+else
+existing_parts << new_part_delta
+@io << new_part_delta.text if @io.respond_to?(:<<)
+end
+elsif new_part_delta.functionCall
+existing_parts << new_part_delta
 end
 end
 end
data/lib/llm/providers/gemini.rb
CHANGED
@@ -20,6 +20,7 @@ module LLM
 class Gemini < Provider
 require_relative "gemini/response/embedding"
 require_relative "gemini/response/completion"
+require_relative "gemini/response/web_search"
 require_relative "gemini/error_handler"
 require_relative "gemini/format"
 require_relative "gemini/stream_parser"
@@ -125,6 +126,31 @@
 "gemini-2.5-flash"
 end
 
+##
+# @note
+# This method includes certain tools that require configuration
+# through a set of options that are easier to set through the
+# {LLM::Provider#tool LLM::Provider#tool} method.
+# @see https://ai.google.dev/gemini-api/docs/google-search Gemini docs
+# @return (see LLM::Provider#tools)
+def tools
+{
+google_search: tool(:google_search),
+code_execution: tool(:code_execution),
+url_context: tool(:url_context)
+}
+end
+
+##
+# A convenience method for performing a web search using the
+# Google Search tool.
+# @param query [String] The search query.
+# @return [LLM::Response] The response from the LLM provider.
+def web_search(query:)
+complete(query, tools: [tools[:google_search]])
+.extend(LLM::Gemini::Response::WebSearch)
+end
+
 private
 
 def headers
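Unlike the Anthropic and OpenAI variants, the Gemini `web_search` method has no inline @example, so here is a hedged sketch modeled on them; `search_results` maps `groundingChunks` entries to objects with a title and url:

```ruby
llm = LLM.gemini(key: ENV["KEY"])
res = llm.web_search(query: "summarize today's news")
res.search_results.each { |item| print item.title, ": ", item.url, "\n" }
```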
data/lib/llm/providers/openai/format.rb
CHANGED
@@ -45,7 +45,11 @@ class LLM::OpenAI
 # @return [Hash]
 def format_tools(params)
 tools = params.delete(:tools)
-
+if tools.nil? || tools.empty?
+{}
+else
+{tools: tools.map { _1.respond_to?(:format) ? _1.format(self) : _1 }}
+end
 end
 end
 end
data/lib/llm/providers/openai/response/responds.rb
CHANGED
@@ -2,22 +2,35 @@
 
 module LLM::OpenAI::Response
 module Responds
+def model = body.model
 def response_id = respond_to?(:response) ? response["id"] : id
-def
-def
-
+def choices = [format_message]
+def annotations = choices[0].annotations
+
+def prompt_tokens = body.usage&.input_tokens
+def completion_tokens = body.usage&.output_tokens
+def total_tokens = body.usage&.total_tokens
+
+##
+# Returns the aggregated text content from the response outputs.
+# @return [String]
+def output_text
+choices.find(&:assistant?).content || ""
+end
 
 private
 
 def format_message
 message = LLM::Message.new("assistant", +"", {response: self, tool_calls: []})
-
+output.each.with_index do |choice, index|
 if choice.type == "function_call"
 message.extra[:tool_calls] << format_tool(choice)
 elsif choice.content
 choice.content.each do |c|
 next unless c["type"] == "output_text"
 message.content << c["text"] << "\n"
+next unless c["annotations"]
+message.extra["annotations"] = [*message.extra["annotations"], *c["annotations"]]
 end
 end
 end
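A short sketch of the accessors this module now provides on a responses-API response: token counts come from `body.usage`, `output_text` joins the `output_text` parts, and `annotations` surfaces whatever the assistant message collected (for example `url_citation` entries when the web_search tool ran):

```ruby
res = llm.responses.create "Summarize today's news", tools: [llm.tool(:web_search)]
res.output_text       # aggregated text content
res.annotations       # annotation objects, possibly empty
res.prompt_tokens     # body.usage.input_tokens
res.completion_tokens # body.usage.output_tokens
```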
data/lib/llm/providers/openai/response/web_search.rb
ADDED
@@ -0,0 +1,21 @@
+# frozen_string_literal: true
+
+module LLM::OpenAI::Response
+##
+# The {LLM::OpenAI::Response::WebSearch LLM::OpenAI::Response::WebSearch}
+# module provides methods for accessing web search results from a web search
+# tool call made via the {LLM::Provider#web_search LLM::Provider#web_search}
+# method.
+module WebSearch
+##
+# Returns one or more search results
+# @return [Array<LLM::Object>]
+def search_results
+LLM::Object.from_hash(
+choices[0]
+.annotations
+.map { _1.slice(:title, :url) }
+)
+end
+end
+end
data/lib/llm/providers/openai/stream_parser.rb
CHANGED
@@ -39,36 +39,44 @@ class LLM::OpenAI
 def merge_choices!(choices)
 choices.each do |choice|
 if @body.choices[choice["index"]]
-
+target_message = @body["choices"][choice["index"]]["message"]
 delta = choice["delta"]
 delta.each do |key, value|
-if
-
-
-@io << value if @io.respond_to?(:<<)
-elsif key == "tool_calls"
-merge_tools!(target, value)
-else
-target[key] = value
-end
-else
+if key == "content"
+target_message[key] ||= +""
+target_message[key] << value
 @io << value if @io.respond_to?(:<<)
-
+elsif key == "tool_calls"
+merge_tools!(target_message, value)
+else
+target_message[key] = value
 end
 end
 else
-
-@body["choices"][choice["index"]] =
-
+message_hash = {"role" => "assistant"}
+@body["choices"][choice["index"]] = {"message" => message_hash}
+choice["delta"].each do |key, value|
+if key == "content"
+@io << value if @io.respond_to?(:<<)
+message_hash[key] = value
+else
+message_hash[key] = value
+end
+end
 end
 end
 end
 
 def merge_tools!(target, tools)
+target["tool_calls"] ||= []
 tools.each.with_index do |toola, index|
 toolb = target["tool_calls"][index]
-if toolb
-
+if toolb && toola["function"] && toolb["function"]
+# Append to existing function arguments
+toola["function"].each do |func_key, func_value|
+toolb["function"][func_key] ||= +""
+toolb["function"][func_key] << func_value
+end
 else
 target["tool_calls"][index] = toola
 end
data/lib/llm/providers/openai/vector_stores.rb
CHANGED
@@ -3,9 +3,19 @@
 class LLM::OpenAI
 ##
 # The {LLM::OpenAI::VectorStores LLM::OpenAI::VectorStores} class provides
-# an interface for [OpenAI's vector stores API](https://platform.openai.com/docs/api-reference/vector_stores/create)
+# an interface for [OpenAI's vector stores API](https://platform.openai.com/docs/api-reference/vector_stores/create).
+#
+# @example
+# llm = LLM.openai(key: ENV["OPENAI_SECRET"])
+# files = %w(foo.pdf bar.pdf).map { llm.files.create(file: _1) }
+# store = llm.vector_stores.create(name: "PDF Store", file_ids: files.map(&:id))
+# store = llm.vector_stores.poll(vector: store)
+# print "[-] store is ready", "\n"
+# chunks = llm.vector_stores.search(vector: store, query: "What is Ruby?")
+# chunks.each { |chunk| puts chunk }
 class VectorStores
 require_relative "response/enumerable"
+PollError = Class.new(LLM::Error)
 
 ##
 # @param [LLM::Provider] provider
@@ -181,6 +191,27 @@
 LLM::Response.new(res)
 end
 
+##
+# Poll a vector store until its status is "completed"
+# @param [String, #id] vector The ID of the vector store
+# @param [Integer] attempts The current number of attempts (default: 0)
+# @param [Integer] max The maximum number of iterations (default: 50)
+# @raise [LLM::PollError] When the maximum number of iterations is reached
+# @return [LLM::Response]
+def poll(vector:, attempts: 0, max: 50)
+if attempts == max
+raise LLM::PollError, "vector store '#{vector.id}' has status '#{vector.status}' after #{max} attempts"
+elsif vector.status == "expired"
+raise LLM::PollError, "vector store '#{vector.id}' has expired"
+elsif vector.status != "completed"
+vector = get(vector:)
+sleep(0.1 * (2**attempts))
+poll(vector:, attempts: attempts + 1, max:)
+else
+vector
+end
+end
+
 private
 
 [:headers, :execute, :set_body_stream].each do |m|
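`poll` re-checks the store with exponential backoff (`0.1 * 2**attempts` seconds between attempts) and raises once `max` attempts are exhausted or the store has expired. A hedged sketch of guarding that failure path; `PollError` subclasses `LLM::Error`, so rescuing the base class is the conservative choice:

```ruby
store = llm.vector_stores.create(name: "PDF Store", file_ids: file_ids)
begin
  store = llm.vector_stores.poll(vector: store)
rescue LLM::Error => e
  warn "vector store did not become ready: #{e.message}"
end
```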
data/lib/llm/providers/openai.rb
CHANGED
@@ -16,6 +16,7 @@ module LLM
 class OpenAI < Provider
 require_relative "openai/response/embedding"
 require_relative "openai/response/completion"
+require_relative "openai/response/web_search"
 require_relative "openai/error_handler"
 require_relative "openai/format"
 require_relative "openai/stream_parser"
@@ -146,6 +147,37 @@
 "gpt-4.1"
 end
 
+##
+# @note
+# This method includes certain tools that require configuration
+# through a set of options that are easier to set through the
+# {LLM::Provider#tool LLM::Provider#tool} method.
+# @return (see LLM::Provider#tools)
+def tools
+{
+web_search: tool(:web_search),
+file_search: tool(:file_search),
+image_generation: tool(:image_generation),
+code_interpreter: tool(:code_interpreter),
+computer_use: tool(:computer_use)
+}
+end
+
+##
+# A convenience method for performing a web search using the
+# OpenAI web search tool.
+# @example
+# llm = LLM.openai(key: ENV["KEY"])
+# res = llm.web_search(query: "summarize today's news")
+# res.search_results.each { |item| print item.title, ": ", item.url, "\n" }
+# @param query [String] The search query.
+# @return [LLM::Response] The response from the LLM provider.
+def web_search(query:)
+responses
+.create(query, store: false, tools: [tools[:web_search]])
+.extend(LLM::OpenAI::Response::WebSearch)
+end
+
 private
 
 def headers
data/lib/llm/tool.rb
ADDED
@@ -0,0 +1,32 @@
+# frozen_string_literal: true
+
+##
+# The {LLM::Tool LLM::Tool} class represents a platform-native tool
+# that can be activated by an LLM provider. Unlike {LLM::Function LLM::Function},
+# these tools are pre-defined by the provider and their capabilities
+# are already known to the underlying LLM.
+#
+# @example
+# #!/usr/bin/env ruby
+# llm = LLM.gemini ENV["KEY"]
+# bot = LLM::Bot.new(llm, tools: [LLM.tool(:google_search)])
+# bot.chat("Summarize today's news", role: :user)
+# print bot.messages.find(&:assistant?).content, "\n"
+class LLM::Tool < Struct.new(:name, :options, :provider)
+##
+# @return [String]
+def to_json(...)
+to_h.to_json(...)
+end
+
+##
+# @return [Hash]
+def to_h
+case provider.class.to_s
+when "LLM::Anthropic" then options.merge("name" => name.to_s)
+when "LLM::Gemini" then {name => options}
+else options.merge("type" => name.to_s)
+end
+end
+alias_method :to_hash, :to_h
+end
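`LLM::Tool#to_h` serializes differently per provider: Anthropic tools merge a `"name"` key into the options, Gemini tools nest the options under the tool's name, and everything else (OpenAI included) merges a `"type"` key. A small illustration under that rule; the constructors use the `key:` keyword shown elsewhere in the README, and the option values are the Anthropic defaults from this diff:

```ruby
LLM.openai(key: ENV["KEY"]).tool(:web_search).to_h
# => {"type" => "web_search"}
LLM.gemini(key: ENV["KEY"]).tool(:google_search).to_h
# => {:google_search => {}}
LLM.anthropic(key: ENV["KEY"]).tool(:web_search, type: "web_search_20250305", max_uses: 5).to_h
# => {:type => "web_search_20250305", :max_uses => 5, "name" => "web_search"}
```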
data/lib/llm/version.rb
CHANGED
data/lib/llm.rb
CHANGED
@@ -18,6 +18,7 @@ module LLM
 require_relative "llm/function"
 require_relative "llm/eventstream"
 require_relative "llm/eventhandler"
+require_relative "llm/tool"
 
 module_function
 
@@ -38,7 +39,7 @@ module LLM
 end
 
 ##
-# @param (see LLM::Provider#initialize)
+# @param key (see LLM::Provider#initialize)
 # @return (see LLM::Ollama#initialize)
 def ollama(key: nil, **)
 require_relative "llm/providers/ollama" unless defined?(LLM::Ollama)
@@ -79,7 +80,7 @@ module LLM
 end
 
 ##
-# Define a function
+# Define or get a function
 # @example
 # LLM.function(:system) do |fn|
 # fn.description "Run system command"
@@ -94,7 +95,11 @@ module LLM
 # @param [Proc] b The block to define the function
 # @return [LLM::Function] The function object
 def function(name, &b)
-
+if block_given?
+functions[name.to_s] = LLM::Function.new(name, &b)
+else
+functions[name.to_s]
+end
 end
 
 ##
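`LLM.function` is now both a setter and a getter: with a block it defines and registers a function under its name, and without a block it returns the previously registered function. For example, reusing the `System` runner from the docstring above:

```ruby
LLM.function(:system) do |fn|
  fn.description "Run system command"
  fn.register(System)
end
LLM.function(:system) # later: returns the function registered above
```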
data/llm.gemspec
CHANGED
@@ -40,4 +40,5 @@ Gem::Specification.new do |spec|
 spec.add_development_dependency "standard", "~> 1.50"
 spec.add_development_dependency "vcr", "~> 6.0"
 spec.add_development_dependency "dotenv", "~> 2.8"
+spec.add_development_dependency "net-http-persistent", "~> 4.0"
 end
metadata
CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: llm.rb
 version: !ruby/object:Gem::Version
-version: 0.
+version: 0.16.0
 platform: ruby
 authors:
 - Antar Azri
@@ -150,6 +150,20 @@ dependencies:
 - - "~>"
 - !ruby/object:Gem::Version
 version: '2.8'
+- !ruby/object:Gem::Dependency
+name: net-http-persistent
+requirement: !ruby/object:Gem::Requirement
+requirements:
+- - "~>"
+- !ruby/object:Gem::Version
+version: '4.0'
+type: :development
+prerelease: false
+version_requirements: !ruby/object:Gem::Requirement
+requirements:
+- - "~>"
+- !ruby/object:Gem::Version
+version: '4.0'
 description: llm.rb is a zero-dependency Ruby toolkit for Large Language Models that
 includes OpenAI, Gemini, Anthropic, xAI (grok), DeepSeek, Ollama, and LlamaCpp.
 The toolkit includes full support for chat, streaming, tool calling, audio, images,
@@ -170,6 +184,7 @@ files:
 - lib/llm/bot/prompt/completion.rb
 - lib/llm/bot/prompt/respond.rb
 - lib/llm/buffer.rb
+- lib/llm/client.rb
 - lib/llm/error.rb
 - lib/llm/eventhandler.rb
 - lib/llm/eventstream.rb
@@ -193,6 +208,7 @@ files:
 - lib/llm/providers/anthropic/response/completion.rb
 - lib/llm/providers/anthropic/response/enumerable.rb
 - lib/llm/providers/anthropic/response/file.rb
+- lib/llm/providers/anthropic/response/web_search.rb
 - lib/llm/providers/anthropic/stream_parser.rb
 - lib/llm/providers/deepseek.rb
 - lib/llm/providers/deepseek/format.rb
@@ -211,6 +227,7 @@ files:
 - lib/llm/providers/gemini/response/files.rb
 - lib/llm/providers/gemini/response/image.rb
 - lib/llm/providers/gemini/response/models.rb
+- lib/llm/providers/gemini/response/web_search.rb
 - lib/llm/providers/gemini/stream_parser.rb
 - lib/llm/providers/llamacpp.rb
 - lib/llm/providers/ollama.rb
@@ -240,6 +257,7 @@ files:
 - lib/llm/providers/openai/response/image.rb
 - lib/llm/providers/openai/response/moderations.rb
 - lib/llm/providers/openai/response/responds.rb
+- lib/llm/providers/openai/response/web_search.rb
 - lib/llm/providers/openai/responses.rb
 - lib/llm/providers/openai/responses/stream_parser.rb
 - lib/llm/providers/openai/stream_parser.rb
@@ -257,6 +275,7 @@ files:
 - lib/llm/schema/object.rb
 - lib/llm/schema/string.rb
 - lib/llm/schema/version.rb
+- lib/llm/tool.rb
 - lib/llm/utils.rb
 - lib/llm/version.rb
 - llm.gemspec