llm.rb 0.16.3 → 1.0.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/README.md +108 -39
- data/lib/llm/buffer.rb +19 -12
- data/lib/llm/client.rb +1 -2
- data/lib/llm/function.rb +28 -17
- data/lib/llm/message.rb +5 -1
- data/lib/llm/provider.rb +28 -48
- data/lib/llm/providers/anthropic/format.rb +2 -3
- data/lib/llm/providers/anthropic.rb +12 -9
- data/lib/llm/providers/deepseek/format.rb +1 -2
- data/lib/llm/providers/gemini/format.rb +3 -4
- data/lib/llm/providers/gemini.rb +12 -9
- data/lib/llm/providers/ollama/format.rb +2 -3
- data/lib/llm/providers/ollama.rb +5 -2
- data/lib/llm/providers/openai/format.rb +1 -2
- data/lib/llm/providers/openai/responses.rb +6 -3
- data/lib/llm/providers/openai.rb +19 -12
- data/lib/llm/providers/zai.rb +74 -0
- data/lib/llm/server_tool.rb +32 -0
- data/lib/llm/tool.rb +63 -20
- data/lib/llm/version.rb +1 -1
- data/lib/llm.rb +29 -21
- data/llm.gemspec +1 -1
- metadata +7 -5
checksums.yaml
CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 143a4329539a3ac3f9ece7925f5aef3061ee4aee4562e251c1ee1b4f21ec834a
+  data.tar.gz: 538019c363e178fff1ac8afec1c76cabcd23cbc05b212641251feb072d200958
 SHA512:
-  metadata.gz:
-  data.tar.gz:
+  metadata.gz: 70da6593671516e75b0a7f0813ee2d50ab9dceddfecda072465e435bad8964aa9eb6fb539dff6ff3846872f3ff18f3648b726016e164ab3a5b852d6b2edc8068
+  data.tar.gz: 4c473fe9e13b0b9ce2219c3c2023237860e0d77df44d408a6d8bed4f747e7169e6541303cb7823aa727aaec4f578c5579b40120921231aaf6ea2b9b568435159
data/README.md
CHANGED
@@ -1,9 +1,15 @@
+> **Maintenance Notice** <br>
+> Please note that the primary author of llm.rb is pivoting away from
+> Ruby and towards [Golang](https://golang.org) for future projects.
+> Although llm.rb will be maintained for the foreseeable future, it is not
+> where my primary interests lie anymore. Thanks for understanding.
+
 ## About
 
 llm.rb is a zero-dependency Ruby toolkit for Large Language Models that
-includes OpenAI, Gemini, Anthropic, xAI (Grok),
-LlamaCpp. The toolkit includes full support for chat, streaming,
-audio, images, files, and structured outputs (JSON Schema).
+includes OpenAI, Gemini, Anthropic, xAI (Grok), [zAI](https://z.ai), DeepSeek,
+Ollama, and LlamaCpp. The toolkit includes full support for chat, streaming,
+tool calling, audio, images, files, and structured outputs (JSON Schema).
 
 ## Quick start
 
@@ -28,6 +34,8 @@ GitHub Copilot but for the terminal.
   a blog post that implements image editing with Gemini
 * [Fast sailing with persistent connections](https://0x1eef.github.io/posts/persistent-connections-with-llm.rb/) –
   a blog post that optimizes performance with a thread-safe connection pool
+* [How to build agents (with llm.rb)](https://0x1eef.github.io/posts/how-to-build-agents-with-llm.rb/) –
+  a blog post that implements agentic behavior via tools
 
 #### Ecosystem
 
@@ -87,22 +95,22 @@ While the Features section above gives you the high-level picture, the table below
 breaks things down by provider, so you can see exactly what’s supported where.
 
 
-| Feature / Provider | OpenAI | Anthropic | Gemini | DeepSeek | xAI (Grok) | Ollama | LlamaCpp |
-|--------------------------------------|:------:|:---------:|:------:|:--------:|:----------:|:------:|:--------:|
-| **Chat Completions** | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
-| **Streaming** | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
-| **Tool Calling** | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
-| **JSON Schema / Structured Output** | ✅ | ❌ | ✅ | ❌ | ✅ | ✅* | ✅* |
-| **Embeddings** | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ |
-| **Multimodal Prompts** *(text, documents, audio, images, videos, URLs, etc)* | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
-| **Files API** | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ |
-| **Models API** | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
-| **Audio (TTS / Transcribe / Translate)** | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ |
-| **Image Generation & Editing** | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ❌ |
-| **Local Model Support** | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ |
-| **Vector Stores (RAG)** | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
-| **Responses** | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
-| **Moderations** | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
+| Feature / Provider | OpenAI | Anthropic | Gemini | DeepSeek | xAI (Grok) | zAI | Ollama | LlamaCpp |
+|--------------------------------------|:------:|:---------:|:------:|:--------:|:----------:|:------:|:------:|:--------:|
+| **Chat Completions** | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
+| **Streaming** | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
+| **Tool Calling** | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ | ✅ |
+| **JSON Schema / Structured Output** | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ✅* | ✅* |
+| **Embeddings** | ✅ | ✅ | ✅ | ✅ | ❌ | ❌ | ✅ | ✅ |
+| **Multimodal Prompts** *(text, documents, audio, images, videos, URLs, etc)* | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ |
+| **Files API** | ✅ | ✅ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
+| **Models API** | ✅ | ✅ | ✅ | ✅ | ✅ | ❌ | ✅ | ✅ |
+| **Audio (TTS / Transcribe / Translate)** | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ |
+| **Image Generation & Editing** | ✅ | ❌ | ✅ | ❌ | ✅ | ❌ | ❌ | ❌ |
+| **Local Model Support** | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ✅ | ✅ |
+| **Vector Stores (RAG)** | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
+| **Responses** | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
+| **Moderations** | ✅ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ | ❌ |
 
 \* JSON Schema support in Ollama/LlamaCpp depends on the model, not the API.
 
@@ -141,7 +149,7 @@ llm = LLM.llamacpp(key: nil)
 The llm.rb library can maintain a process-wide connection pool
 for each provider that is instantiated. This feature can improve
 performance but it is optional, the implementation depends on
-[net-http-persistent](https://github.com/
+[net-http-persistent](https://github.com/drbrain/net-http-persistent),
 and the gem should be installed separately:
 
 ```ruby
@@ -155,6 +163,18 @@ res3 = llm.responses.create "message 3", previous_response_id: res2.response_id
 print res3.output_text, "\n"
 ```
 
+#### Thread Safety
+
+The llm.rb library is thread-safe and can be used in multi-threaded
+environments, but it is important to keep in mind that the
+[LLM::Provider](https://0x1eef.github.io/x/llm.rb/LLM/Provider.html)
+and
+[LLM::Bot](https://0x1eef.github.io/x/llm.rb/LLM/Bot.html)
+classes should be instantiated once per thread, and not shared
+between threads. Generally the library tries to avoid global or
+shared state, but where it exists reentrant locks are used to
+ensure thread-safety.
+
 ### Conversations
 
 #### Completions
@@ -261,13 +281,22 @@ bot.messages.find(&:assistant?).content! # => {answers: [5, 10, 11]}
 
 ### Tools
 
-####
+#### Introduction
 
 All providers support a powerful feature known as tool calling, and although
 it is a little complex to understand at first, it can be powerful for building
-agents.
-(
-
+agents. There are three main interfaces to understand: [LLM::Function](https://0x1eef.github.io/x/llm.rb/LLM/Function.html),
+[LLM::Tool](https://0x1eef.github.io/x/llm.rb/LLM/Tool.html), and
+[LLM::ServerTool](https://0x1eef.github.io/x/llm.rb/LLM/ServerTool.html).
+
+
+#### LLM::Function
+
+The following example demonstrates [LLM::Function](https://0x1eef.github.io/x/llm.rb/LLM/Function.html)
+and how it can define a local function (which happens to be a tool), and how
+a provider (such as OpenAI) can then detect when we should call the function.
+Its most notable feature is that it can act as a closure and has access to
+its surrounding scope, which can be useful in some situations.
 
 The
 [LLM::Bot#functions](https://0x1eef.github.io/x/llm.rb/LLM/Bot.html#functions-instance_method)
@@ -276,6 +305,7 @@ it will only be populated if the LLM detects a function should be called. Each function
 corresponds to an element in the "tools" array. The array is emptied after a function call,
 and potentially repopulated on the next message:
 
+
 ```ruby
 #!/usr/bin/env ruby
 require "llm"
@@ -309,13 +339,61 @@ bot.chat bot.functions.map(&:call) # report return value to the LLM
 # {stderr: "", stdout: "FreeBSD"}
 ```
 
-####
+#### LLM::Tool
+
+The [LLM::Tool](https://0x1eef.github.io/x/llm.rb/LLM/Tool.html) class can be used
+to implement a [LLM::Function](https://0x1eef.github.io/x/llm.rb/LLM/Function.html)
+as a class. Under the hood, a subclass of [LLM::Tool](https://0x1eef.github.io/x/llm.rb/LLM/Tool.html)
+wraps an instance of [LLM::Function](https://0x1eef.github.io/x/llm.rb/LLM/Function.html)
+and delegates to it.
+
+The choice between [LLM::Function](https://0x1eef.github.io/x/llm.rb/LLM/Function.html)
+and [LLM::Tool](https://0x1eef.github.io/x/llm.rb/LLM/Tool.html) is often a matter of
+preference, but each carries its own benefits. For example, [LLM::Function](https://0x1eef.github.io/x/llm.rb/LLM/Function.html)
+has the benefit of being a closure that has access to its surrounding context, and
+sometimes that is useful:
+
+```ruby
+#!/usr/bin/env ruby
+require "llm"
+
+class System < LLM::Tool
+  name "system"
+  description "Run a shell command"
+  params { |schema| schema.object(command: schema.string.required) }
+
+  def call(command:)
+    ro, wo = IO.pipe
+    re, we = IO.pipe
+    Process.wait Process.spawn(command, out: wo, err: we)
+    [wo, we].each(&:close)
+    {stderr: re.read, stdout: ro.read}
+  end
+end
+
+bot = LLM::Bot.new(llm, tools: [System])
+bot.chat "Your task is to run shell commands via a tool.", role: :system
+
+bot.chat "What is the current date?", role: :user
+bot.chat bot.functions.map(&:call) # report return value to the LLM
+
+bot.chat "What operating system am I running? (short version please!)", role: :user
+bot.chat bot.functions.map(&:call) # report return value to the LLM
+
+##
+# {stderr: "", stdout: "Thu May 1 10:01:02 UTC 2025"}
+# {stderr: "", stdout: "FreeBSD"}
+```
+
+#### Server Tools
 
 The
 [LLM::Function](https://0x1eef.github.io/x/llm.rb/LLM/Function.html)
-
-and the
+and
 [LLM::Tool](https://0x1eef.github.io/x/llm.rb/LLM/Tool.html)
+classes can define a local function or tool that can be called by
+a provider on your behalf, and the
+[LLM::ServerTool](https://0x1eef.github.io/x/llm.rb/LLM/ServerTool.html)
 class represents a tool that is defined and implemented by a provider, and we can
 request that the provider call the tool on our behalf. That's the primary difference
 between a function implemented locally and a tool implemented by a provider. The
@@ -327,7 +405,8 @@ OpenAI provider to execute Python code on OpenAI's servers:
 require "llm"
 
 llm = LLM.openai(key: ENV["KEY"])
-res = llm.responses.create "Run: 'print(\"hello world\")'",
+res = llm.responses.create "Run: 'print(\"hello world\")'",
+  tools: [llm.server_tool(:code_interpreter)]
 print res.output_text, "\n"
 ```
 
@@ -613,16 +692,6 @@ else there's the API reference. It covers classes and methods that the README glosses
 over or doesn't cover at all. The API reference is available at
 [0x1eef.github.io/x/llm.rb](https://0x1eef.github.io/x/llm.rb).
 
-### Guides
-
-* [An introduction to RAG](https://0x1eef.github.io/posts/an-introduction-to-rag-with-llm.rb/) –
-  a blog post that implements the RAG pattern
-* [How to estimate the age of a person in a photo](https://0x1eef.github.io/posts/age-estimation-with-llm.rb/) –
-  a blog post that implements an age estimation tool
-* [How to edit an image with Gemini](https://0x1eef.github.io/posts/how-to-edit-images-with-gemini/) –
-  a blog post that implements image editing with Gemini
-* [docs/](docs/) – the docs directory contains additional guides
-
 ## Install
 
 llm.rb can be installed via rubygems.org:
@@ -633,4 +702,4 @@ llm.rb can be installed via rubygems.org:
 
 [BSD Zero Clause](https://choosealicense.com/licenses/0bsd/)
 <br>
-See [LICENSE](./LICENSE)
+See [LICENSE](./LICENSE)
data/lib/llm/buffer.rb
CHANGED
@@ -48,10 +48,16 @@ module LLM
     end
 
     ##
-    # Returns the last message in the buffer
-    # @
-
-
+    # Returns the last message(s) in the buffer
+    # @param [Integer, nil] n
+    #  The number of messages to return
+    # @return [LLM::Message, Array<LLM::Message>, nil]
+    def last(n = nil)
+      if @pending.empty?
+        n.nil? ? @completed.last : @completed.last(n)
+      else
+        n.nil? ? to_a.last : to_a.last(n)
+      end
     end
 
     ##
@@ -65,19 +71,20 @@ module LLM
     alias_method :push, :<<
 
     ##
-    # @param [Integer, Range
+    # @param [Integer, Range] index
     #  The message index
     # @return [LLM::Message, nil]
     #  Returns a message, or nil
     def [](index)
-      if
-
-
-
-
-
+      if @pending.empty?
+        if Range === index
+          slice = @completed[index]
+          (slice.nil? || slice.size < index.size) ? to_a[index] : slice
+        else
+          @completed[index]
+        end
       else
-
+        to_a[index]
       end
     end
 
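The effect of the rewritten `last` and `[]` methods is that reads can be served from already-completed messages without draining the pending (lazy) queue unless necessary. A usage sketch against `bot.messages` (an `LLM::Buffer`), assuming `llm` is any provider instance; the prompt is illustrative:

```ruby
bot = LLM::Bot.new(llm)
bot.chat "Hello", role: :user
bot.messages.last     # => the most recent LLM::Message, or nil
bot.messages.last(2)  # => an Array of the two most recent messages
bot.messages[0]       # => the first message, or nil
bot.messages[0..1]    # => a Range index also works
```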
data/lib/llm/client.rb
CHANGED
@@ -9,7 +9,7 @@ module LLM
     ##
     # @api private
     def persistent_client
-
+      LLM.lock(:clients) do
         if clients[client_id]
           clients[client_id]
         else
@@ -32,6 +32,5 @@ module LLM
 
     def client_id = "#{host}:#{port}:#{timeout}:#{ssl}"
     def clients = self.class.clients
-    def mutex = self.class.mutex
   end
 end
data/lib/llm/function.rb
CHANGED
@@ -6,6 +6,7 @@
 #
 # @example example #1
 #   LLM.function(:system) do |fn|
+#     fn.name "system"
 #     fn.description "Runs system commands"
 #     fn.params do |schema|
 #       schema.object(command: schema.string.required)
@@ -16,18 +17,16 @@
 #   end
 #
 # @example example #2
-#   class System
-#
-#
+#   class System < LLM::Tool
+#     name "system"
+#     description "Runs system commands"
+#     params do |schema|
+#       schema.object(command: schema.string.required)
 #     end
-#   end
 #
-#
-#
-#   fn.params do |schema|
-#     schema.object(command: schema.string.required)
+#     def call(command:)
+#       {success: Kernel.system(command)}
 #   end
-#   fn.register(System)
 # end
 class LLM::Function
   class Return < Struct.new(:id, :name, :value)
@@ -38,11 +37,6 @@ class LLM::Function
   #  @return [String, nil]
   attr_accessor :id
 
-  ##
-  # Returns the function name
-  # @return [String]
-  attr_reader :name
-
   ##
   # Returns function arguments
   # @return [Array, nil]
@@ -56,11 +50,23 @@ class LLM::Function
     @schema = LLM::Schema.new
     @called = false
     @cancelled = false
-    yield(self)
+    yield(self) if block_given?
+  end
+
+  ##
+  # Set (or get) the function name
+  # @param [String] name The function name
+  # @return [void]
+  def name(name = nil)
+    if name
+      @name = name.to_s
+    else
+      @name
+    end
   end
 
   ##
-  # Set the function description
+  # Set (or get) the function description
   # @param [String] desc The function description
   # @return [void]
   def description(desc = nil)
@@ -72,10 +78,15 @@ class LLM::Function
   end
 
   ##
+  # Set (or get) the function parameters
   # @yieldparam [LLM::Schema] schema The schema object
   # @return [void]
   def params
-
+    if block_given?
+      @params = yield(@schema)
+    else
+      @params
+    end
   end
 
   ##
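After this change, `name`, `description`, and `params` act as combined getter/setters: called with an argument (or block) they set a value, called bare they read it back. That is what lets the same methods back both the `LLM.function` DSL and `LLM::Tool` subclasses. A small sketch using only the API shown above:

```ruby
fn = LLM.function(:system) do |fn|
  fn.name "system"
  fn.description "Runs system commands"
  fn.params { |schema| schema.object(command: schema.string.required) }
end

fn.name        # => "system"
fn.description # => "Runs system commands"
fn.params      # => the schema defined above
```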
data/lib/llm/message.rb
CHANGED
@@ -61,7 +61,7 @@ module LLM
     #  @return [Array<LLM::Function>]
     def functions
       @functions ||= tool_calls.map do |fn|
-        function =
+        function = tools.find { _1.name.to_s == fn["name"] }.dup
         function.tap { _1.id = fn.id }
         function.tap { _1.arguments = fn.arguments }
       end
@@ -170,5 +170,9 @@ module LLM
     def tool_calls
       @tool_calls ||= LLM::Object.from_hash(@extra[:tool_calls] || [])
     end
+
+    def tools
+      response&.__tools__ || []
+    end
   end
 end
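The practical upshot of the message change: when the assistant requests tool calls, each entry in `message.functions` is a duplicate of your original tool, matched by name against the tools attached to the response, with the provider's call id and arguments filled in. A sketch, reusing the `System` tool from the README and an illustrative prompt:

```ruby
bot = LLM::Bot.new(llm, tools: [System])
bot.chat "What is the current date?", role: :user
message = bot.messages.find(&:assistant?)
message.functions.each do |fn|
  # fn is a dup of the matching tool with id/arguments assigned
  fn.name       # => "system"
  fn.arguments  # => e.g. the arguments the LLM chose for the call
end
bot.chat bot.functions.map(&:call) # report return values to the LLM
```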
data/lib/llm/provider.rb
CHANGED
@@ -11,16 +11,11 @@ class LLM::Provider
   include LLM::Client
 
   @@clients = {}
-  @@mutex = Mutex.new
 
   ##
   # @api private
   def self.clients = @@clients
 
-  ##
-  # @api private
-  def self.mutex = @@mutex
-
   ##
   # @param [String, nil] key
   #  The secret key for authentication
@@ -92,57 +87,23 @@ class LLM::Provider
     raise NotImplementedError
   end
 
-  ##
-  # Starts a new lazy chat powered by the chat completions API
-  # @note
-  #  This method creates a lazy version of a
-  #  {LLM::Bot LLM::Bot} object.
-  # @param prompt (see LLM::Provider#complete)
-  # @param params (see LLM::Provider#complete)
-  # @return [LLM::Bot]
-  def chat(prompt, params = {})
-    role = params.delete(:role)
-    LLM::Bot.new(self, params).chat(prompt, role:)
-  end
-
   ##
   # Starts a new chat powered by the chat completions API
-  # @note
-  #  This method creates a non-lazy version of a
-  #  {LLM::Bot LLM::Bot} object.
   # @param prompt (see LLM::Provider#complete)
   # @param params (see LLM::Provider#complete)
-  # @raise (see LLM::Provider#complete)
   # @return [LLM::Bot]
-  def chat
+  def chat(prompt, params = {})
     role = params.delete(:role)
     LLM::Bot.new(self, params).chat(prompt, role:)
   end
 
-  ##
-  # Starts a new lazy chat powered by the responses API
-  # @note
-  #  This method creates a lazy variant of a
-  #  {LLM::Bot LLM::Bot} object.
-  # @param prompt (see LLM::Provider#complete)
-  # @param params (see LLM::Provider#complete)
-  # @raise (see LLM::Provider#complete)
-  # @return [LLM::Bot]
-  def respond(prompt, params = {})
-    role = params.delete(:role)
-    LLM::Bot.new(self, params).respond(prompt, role:)
-  end
-
   ##
   # Starts a new chat powered by the responses API
-  # @note
-  #  This method creates a non-lazy variant of a
-  #  {LLM::Bot LLM::Bot} object.
   # @param prompt (see LLM::Provider#complete)
   # @param params (see LLM::Provider#complete)
   # @raise (see LLM::Provider#complete)
   # @return [LLM::Bot]
-  def respond
+  def respond(prompt, params = {})
     role = params.delete(:role)
     LLM::Bot.new(self, params).respond(prompt, role:)
   end
@@ -238,11 +199,11 @@ class LLM::Provider
 
   ##
   # @note
-  #  This method might be outdated, and the {LLM::Provider#
+  #  This method might be outdated, and the {LLM::Provider#server_tool LLM::Provider#server_tool}
   #  method can be used if a tool is not found here.
   # Returns all known tools provided by a provider.
-  # @return [String => LLM::
-  def
+  # @return [String => LLM::ServerTool]
+  def server_tools
     {}
   end
 
@@ -253,14 +214,14 @@ class LLM::Provider
   # Returns a tool provided by a provider.
   # @example
   #   llm = LLM.openai(key: ENV["KEY"])
-  #   tools = [llm.
+  #   tools = [llm.server_tool(:web_search)]
   #   res = llm.responses.create("Summarize today's news", tools:)
   #   print res.output_text, "\n"
   # @param [String, Symbol] name The name of the tool
   # @param [Hash] options Configuration options for the tool
-  # @return [LLM::
-  def
-    LLM::
+  # @return [LLM::ServerTool]
+  def server_tool(name, options = {})
+    LLM::ServerTool.new(name, options, self)
   end
 
   ##
@@ -369,4 +330,23 @@ class LLM::Provider
     req.body_stream = io
     req["transfer-encoding"] = "chunked" unless req["content-length"]
   end
+
+  ##
+  # Resolves tools to their function representations
+  # @param [Array<LLM::Function, LLM::Tool>] tools
+  #  The tools to map
+  # @raise [TypeError]
+  #  When a tool is not recognized
+  # @return [Array<LLM::Function>]
+  def resolve_tools(tools)
+    (tools || []).map do |tool|
+      if tool.respond_to?(:function)
+        tool.function
+      elsif [LLM::Function, LLM::ServerTool, Hash].any? { _1 === tool }
+        tool
+      else
+        raise TypeError, "#{tool.class} given as a tool but it is not recognized"
+      end
+    end
+  end
 end
data/lib/llm/providers/anthropic/format.rb
CHANGED
@@ -21,9 +21,8 @@ class LLM::Anthropic
   ##
   # @param [Hash] params
   # @return [Hash]
-  def format_tools(
-    return {} unless
-    tools = params[:tools]
+  def format_tools(tools)
+    return {} unless tools&.any?
     {tools: tools.map { _1.respond_to?(:format) ? _1.format(self) : _1 }}
   end
 end
data/lib/llm/providers/anthropic.rb
CHANGED
@@ -43,7 +43,8 @@ module LLM
     # @return (see LLM::Provider#complete)
     def complete(prompt, params = {})
       params = {role: :user, model: default_model, max_tokens: 1024}.merge!(params)
-
+      tools = resolve_tools(params.delete(:tools))
+      params = [params, format_tools(tools)].inject({}, &:merge!).compact
       role, stream = params.delete(:role), params.delete(:stream)
       params[:stream] = true if stream.respond_to?(:<<) || stream == true
       req = Net::HTTP::Post.new("/v1/messages", headers)
@@ -51,7 +52,9 @@ module LLM
       body = JSON.dump({messages: [format(messages)].flatten}.merge!(params))
       set_body_stream(req, StringIO.new(body))
       res = execute(request: req, stream:)
-      LLM::Response.new(res)
+      LLM::Response.new(res)
+        .extend(LLM::Anthropic::Response::Completion)
+        .extend(Module.new { define_method(:__tools__) { tools } })
     end
 
     ##
@@ -88,14 +91,14 @@ module LLM
     # @note
     #  This method includes certain tools that require configuration
     #  through a set of options that are easier to set through the
-    #  {LLM::Provider#
+    #  {LLM::Provider#server_tool LLM::Provider#server_tool} method.
     # @see https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/web-search-tool Anthropic docs
-    # @return (see LLM::Provider#
-    def
+    # @return (see LLM::Provider#server_tools)
+    def server_tools
       {
-        bash:
-        web_search:
-        text_editor:
+        bash: server_tool(:bash, type: "bash_20250124"),
+        web_search: server_tool(:web_search, type: "web_search_20250305", max_uses: 5),
+        text_editor: server_tool(:str_replace_based_edit_tool, type: "text_editor_20250728", max_characters: 10_000)
       }
     end
 
@@ -109,7 +112,7 @@ module LLM
     # @param query [String] The search query.
     # @return [LLM::Response] The response from the LLM provider.
     def web_search(query:)
-      complete(query, tools: [
+      complete(query, tools: [server_tools[:web_search]])
         .extend(LLM::Anthropic::Response::WebSearch)
     end
 
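With the Anthropic changes in place, the convenience method and the explicit server-tool call are equivalent; a sketch, with the key name and query illustrative:

```ruby
llm = LLM.anthropic(key: ENV["KEY"])

# The convenience method ...
res = llm.web_search(query: "Ruby 3.4 release notes")

# ... is shorthand for completing with the preconfigured server tool:
res = llm.complete("Ruby 3.4 release notes", tools: [llm.server_tools[:web_search]])
```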
|
@@ -30,10 +30,9 @@ class LLM::Gemini
|
|
30
30
|
##
|
31
31
|
# @param [Hash] params
|
32
32
|
# @return [Hash]
|
33
|
-
def format_tools(
|
34
|
-
return {} unless
|
35
|
-
|
36
|
-
platform, functions = [tools.grep(LLM::Tool), tools.grep(LLM::Function)]
|
33
|
+
def format_tools(tools)
|
34
|
+
return {} unless tools&.any?
|
35
|
+
platform, functions = [tools.grep(LLM::ServerTool), tools.grep(LLM::Function)]
|
37
36
|
{tools: [*platform, {functionDeclarations: functions.map { _1.format(self) }}]}
|
38
37
|
end
|
39
38
|
end
|
data/lib/llm/providers/gemini.rb
CHANGED
@@ -67,7 +67,8 @@ module LLM
     # @return [LLM::Response]
     def complete(prompt, params = {})
       params = {role: :user, model: default_model}.merge!(params)
-
+      tools = resolve_tools(params.delete(:tools))
+      params = [params, format_schema(params), format_tools(tools)].inject({}, &:merge!).compact
       role, model, stream = [:role, :model, :stream].map { params.delete(_1) }
       action = stream ? "streamGenerateContent?key=#{@key}&alt=sse" : "generateContent?key=#{@key}"
       model.respond_to?(:id) ? model.id : model
@@ -77,7 +78,9 @@ module LLM
       body = JSON.dump({contents: format(messages)}.merge!(params))
       set_body_stream(req, StringIO.new(body))
       res = execute(request: req, stream:)
-      LLM::Response.new(res)
+      LLM::Response.new(res)
+        .extend(LLM::Gemini::Response::Completion)
+        .extend(Module.new { define_method(:__tools__) { tools } })
     end
 
     ##
@@ -130,14 +133,14 @@ module LLM
     # @note
     #  This method includes certain tools that require configuration
     #  through a set of options that are easier to set through the
-    #  {LLM::Provider#
+    #  {LLM::Provider#server_tool LLM::Provider#server_tool} method.
     # @see https://ai.google.dev/gemini-api/docs/google-search Gemini docs
-    # @return (see LLM::Provider#
-    def
+    # @return (see LLM::Provider#server_tools)
+    def server_tools
       {
-        google_search:
-        code_execution:
-        url_context:
+        google_search: server_tool(:google_search),
+        code_execution: server_tool(:code_execution),
+        url_context: server_tool(:url_context)
       }
     end
 
@@ -147,7 +150,7 @@ module LLM
     # @param query [String] The search query.
     # @return [LLM::Response] The response from the LLM provider.
     def web_search(query:)
-      complete(query, tools: [
+      complete(query, tools: [server_tools[:google_search]])
        .extend(LLM::Gemini::Response::WebSearch)
     end
 
data/lib/llm/providers/ollama/format.rb
CHANGED
@@ -21,9 +21,8 @@ class LLM::Ollama
   ##
   # @param [Hash] params
   # @return [Hash]
-  def format_tools(
-    return {} unless
-    tools = params[:tools]
+  def format_tools(tools)
+    return {} unless tools&.any?
     {tools: tools.map { _1.format(self) }}
   end
 end
data/lib/llm/providers/ollama.rb
CHANGED
@@ -60,7 +60,8 @@ module LLM
     # @return [LLM::Response]
     def complete(prompt, params = {})
       params = {role: :user, model: default_model, stream: true}.merge!(params)
-
+      tools = resolve_tools(params.delete(:tools))
+      params = [params, {format: params[:schema]}, format_tools(tools)].inject({}, &:merge!).compact
       role, stream = params.delete(:role), params.delete(:stream)
       params[:stream] = true if stream.respond_to?(:<<) || stream == true
       req = Net::HTTP::Post.new("/api/chat", headers)
@@ -68,7 +69,9 @@ module LLM
       body = JSON.dump({messages: [format(messages)].flatten}.merge!(params))
       set_body_stream(req, StringIO.new(body))
       res = execute(request: req, stream:)
-      LLM::Response.new(res)
+      LLM::Response.new(res)
+        .extend(LLM::Ollama::Response::Completion)
+        .extend(Module.new { define_method(:__tools__) { tools } })
     end
 
     ##
data/lib/llm/providers/openai/responses.rb
CHANGED
@@ -37,7 +37,8 @@ class LLM::OpenAI
   # @return [LLM::Response]
   def create(prompt, params = {})
     params = {role: :user, model: @provider.default_model}.merge!(params)
-
+    tools = resolve_tools(params.delete(:tools))
+    params = [params, format_schema(params), format_tools(tools)].inject({}, &:merge!).compact
     role, stream = params.delete(:role), params.delete(:stream)
     params[:stream] = true if stream.respond_to?(:<<) || stream == true
     req = Net::HTTP::Post.new("/v1/responses", headers)
@@ -45,7 +46,9 @@ class LLM::OpenAI
     body = JSON.dump({input: [format(messages, :response)].flatten}.merge!(params))
     set_body_stream(req, StringIO.new(body))
     res = execute(request: req, stream:, stream_parser:)
-    LLM::Response.new(res)
+    LLM::Response.new(res)
+      .extend(LLM::OpenAI::Response::Responds)
+      .extend(Module.new { define_method(:__tools__) { tools } })
   end
 
   ##
@@ -77,7 +80,7 @@ class LLM::OpenAI
 
   private
 
-  [:headers, :execute, :set_body_stream].each do |m|
+  [:headers, :execute, :set_body_stream, :resolve_tools].each do |m|
     define_method(m) { |*args, **kwargs, &b| @provider.send(m, *args, **kwargs, &b) }
   end
 
data/lib/llm/providers/openai.rb
CHANGED
@@ -65,16 +65,19 @@ module LLM
     # @return (see LLM::Provider#complete)
     def complete(prompt, params = {})
       params = {role: :user, model: default_model}.merge!(params)
-
+      tools = resolve_tools(params.delete(:tools))
+      params = [params, format_schema(params), format_tools(tools)].inject({}, &:merge!).compact
       role, stream = params.delete(:role), params.delete(:stream)
       params[:stream] = true if stream.respond_to?(:<<) || stream == true
       params[:stream_options] = {include_usage: true}.merge!(params[:stream_options] || {}) if params[:stream]
-      req = Net::HTTP::Post.new(
+      req = Net::HTTP::Post.new(completions_path, headers)
       messages = [*(params.delete(:messages) || []), Message.new(role, prompt)]
       body = JSON.dump({messages: format(messages, :complete).flatten}.merge!(params))
       set_body_stream(req, StringIO.new(body))
       res = execute(request: req, stream:)
-      LLM::Response.new(res)
+      LLM::Response.new(res)
+        .extend(LLM::OpenAI::Response::Completion)
+        .extend(Module.new { define_method(:__tools__) { tools } })
     end
 
     ##
@@ -152,15 +155,15 @@ module LLM
     # @note
     #  This method includes certain tools that require configuration
     #  through a set of options that are easier to set through the
-    #  {LLM::Provider#
-    # @return (see LLM::Provider#
-    def
+    #  {LLM::Provider#server_tool LLM::Provider#server_tool} method.
+    # @return (see LLM::Provider#server_tools)
+    def server_tools
       {
-        web_search:
-        file_search:
-        image_generation:
-        code_interpreter:
-        computer_use:
+        web_search: server_tool(:web_search),
+        file_search: server_tool(:file_search),
+        image_generation: server_tool(:image_generation),
+        code_interpreter: server_tool(:code_interpreter),
+        computer_use: server_tool(:computer_use)
       }
     end
 
@@ -175,12 +178,16 @@ module LLM
     # @return [LLM::Response] The response from the LLM provider.
     def web_search(query:)
       responses
-        .create(query, store: false, tools: [
+        .create(query, store: false, tools: [server_tools[:web_search]])
         .extend(LLM::OpenAI::Response::WebSearch)
     end
 
     private
 
+    def completions_path
+      "/v1/chat/completions"
+    end
+
     def headers
       (@headers || {}).merge(
         "Content-Type" => "application/json",
data/lib/llm/providers/zai.rb
ADDED
@@ -0,0 +1,74 @@
+# frozen_string_literal: true
+
+require_relative "openai" unless defined?(LLM::OpenAI)
+
+module LLM
+  ##
+  # The ZAI class implements a provider for [zAI](https://docs.z.ai/guides/overview/quick-start).
+  #
+  # @example
+  #   #!/usr/bin/env ruby
+  #   require "llm"
+  #
+  #   llm = LLM.zai(key: ENV["KEY"])
+  #   bot = LLM::Bot.new(llm, stream: $stdout)
+  #   bot.chat("Greetings Robot", role: :user).flush
+  class ZAI < OpenAI
+    ##
+    # @param [String] host A regional host or the default ("api.z.ai")
+    # @param key (see LLM::Provider#initialize)
+    def initialize(host: "api.z.ai", **)
+      super
+    end
+
+    ##
+    # @raise [NotImplementedError]
+    def files
+      raise NotImplementedError
+    end
+
+    ##
+    # @raise [NotImplementedError]
+    def images
+      raise NotImplementedError
+    end
+
+    ##
+    # @raise [NotImplementedError]
+    def audio
+      raise NotImplementedError
+    end
+
+    ##
+    # @raise [NotImplementedError]
+    def moderations
+      raise NotImplementedError
+    end
+
+    ##
+    # @raise [NotImplementedError]
+    def responses
+      raise NotImplementedError
+    end
+
+    ##
+    # @raise [NotImplementedError]
+    def vector_stores
+      raise NotImplementedError
+    end
+
+    ##
+    # Returns the default model for chat completions
+    # @see https://docs.z.ai/guides/llm/glm-4.5#glm-4-5-flash glm-4.5-flash
+    # @return [String]
+    def default_model
+      "glm-4.5-flash"
+    end
+
+    private
+
+    def completions_path
+      "/api/paas/v4/chat/completions"
+    end
+  end
+end
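Because zAI speaks an OpenAI-compatible chat completions protocol at a different path, the subclass only needs to override `completions_path` (and disable the endpoints zAI does not offer). Usage mirrors the other providers; the environment variable name is illustrative:

```ruby
#!/usr/bin/env ruby
require "llm"

llm = LLM.zai(key: ENV["ZAI_KEY"])
bot = LLM::Bot.new(llm, stream: $stdout)
bot.chat("Greetings Robot", role: :user).flush
```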
data/lib/llm/server_tool.rb
ADDED
@@ -0,0 +1,32 @@
+# frozen_string_literal: true
+
+##
+# The {LLM::ServerTool LLM::ServerTool} class represents a platform-native tool
+# that can be activated by an LLM provider. Unlike {LLM::Function LLM::Function},
+# these tools are pre-defined by the provider and their capabilities
+# are already known to the underlying LLM.
+#
+# @example
+#   #!/usr/bin/env ruby
+#   llm = LLM.gemini ENV["KEY"]
+#   bot = LLM::Bot.new(llm, tools: [LLM::ServerTool.new(:google_search)])
+#   bot.chat("Summarize today's news", role: :user)
+#   print bot.messages.find(&:assistant?).content, "\n"
+class LLM::ServerTool < Struct.new(:name, :options, :provider)
+  ##
+  # @return [String]
+  def to_json(...)
+    to_h.to_json(...)
+  end
+
+  ##
+  # @return [Hash]
+  def to_h
+    case provider.class.to_s
+    when "LLM::Anthropic" then options.merge("name" => name.to_s)
+    when "LLM::Gemini" then {name => options}
+    else options.merge("type" => name.to_s)
+    end
+  end
+  alias_method :to_hash, :to_h
+end
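Since `LLM::ServerTool` is a plain Struct, `to_h` is all it takes to adapt one tool definition to each provider's wire format. A sketch of the three shapes produced by the `case` above (key names are illustrative):

```ruby
gemini = LLM.gemini(key: ENV["KEY"])
LLM::ServerTool.new(:google_search, {}, gemini).to_h
# => {google_search: {}}

anthropic = LLM.anthropic(key: ENV["KEY"])
LLM::ServerTool.new(:web_search, {max_uses: 5}, anthropic).to_h
# => {max_uses: 5, "name" => "web_search"}

openai = LLM.openai(key: ENV["KEY"])
LLM::ServerTool.new(:web_search, {}, openai).to_h
# => {"type" => "web_search"}
```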
data/lib/llm/tool.rb
CHANGED
@@ -1,32 +1,75 @@
 # frozen_string_literal: true
 
 ##
-# The {LLM::Tool LLM::Tool} class represents a
-# that can be
-#
-#
-#
+# The {LLM::Tool LLM::Tool} class represents a local tool
+# that can be called by an LLM. Under the hood, it is a wrapper
+# around {LLM::Function LLM::Function} but allows the definition
+# of a function (also known as a tool) as a class.
 # @example
-#
-#
-#
-#
-#
-
+#   class System < LLM::Tool
+#     name "system"
+#     description "Runs system commands"
+#     params do |schema|
+#       schema.object(command: schema.string.required)
+#     end
+#
+#     def call(command:)
+#       {success: Kernel.system(command)}
+#     end
+#   end
+class LLM::Tool
+  ##
+  # Registers the tool as a function when inherited
+  # @param [Class] klass The subclass
+  # @return [void]
+  def self.inherited(klass)
+    LLM.lock(:inherited) do
+      klass.instance_eval { @__monitor ||= Monitor.new }
+      klass.function.register(klass)
+    end
+  end
+
   ##
+  # Returns (or sets) the tool name
+  # @param [String, nil] name The tool name
   # @return [String]
-  def
-
+  def self.name(name = nil)
+    lock do
+      function.tap { _1.name(name) }
+    end
   end
 
   ##
-  #
-
-
-
-
-
+  # Returns (or sets) the tool description
+  # @param [String, nil] desc The tool description
+  # @return [String]
+  def self.description(desc = nil)
+    lock do
+      function.tap { _1.description(desc) }
+    end
   end
-
+
+  ##
+  # Returns (or sets) tool parameters
+  # @yieldparam [LLM::Schema] schema The schema object to define parameters
+  # @return [LLM::Schema]
+  def self.params(&)
+    lock do
+      function.tap { _1.params(&) }
+    end
+  end
+
+  ##
+  # @api private
+  def self.function
+    lock do
+      @function ||= LLM::Function.new(self)
+    end
+  end
+
+  ##
+  # @api private
+  def self.lock(&)
+    @__monitor.synchronize(&)
+  end
 end
data/lib/llm/version.rb
CHANGED
-  VERSION = "0.16.3"
+  VERSION = "1.0.0"
data/lib/llm.rb
CHANGED
@@ -19,8 +19,11 @@ module LLM
   require_relative "llm/eventstream"
   require_relative "llm/eventhandler"
   require_relative "llm/tool"
+  require_relative "llm/server_tool"
 
-
+  ##
+  # Thread-safe monitors for different contexts
+  @monitors = {require: Monitor.new, clients: Monitor.new, inherited: Monitor.new}
 
   module_function
 
@@ -28,7 +31,7 @@ module LLM
   # @param (see LLM::Provider#initialize)
   # @return (see LLM::Anthropic#initialize)
   def anthropic(**)
-
+    lock(:require) { require_relative "llm/providers/anthropic" unless defined?(LLM::Anthropic) }
     LLM::Anthropic.new(**)
   end
 
@@ -36,7 +39,7 @@ module LLM
   # @param (see LLM::Provider#initialize)
   # @return (see LLM::Gemini#initialize)
   def gemini(**)
-
+    lock(:require) { require_relative "llm/providers/gemini" unless defined?(LLM::Gemini) }
     LLM::Gemini.new(**)
   end
 
@@ -44,7 +47,7 @@ module LLM
   # @param key (see LLM::Provider#initialize)
   # @return (see LLM::Ollama#initialize)
   def ollama(key: nil, **)
-
+    lock(:require) { require_relative "llm/providers/ollama" unless defined?(LLM::Ollama) }
     LLM::Ollama.new(key:, **)
   end
 
@@ -52,7 +55,7 @@ module LLM
   # @param key (see LLM::Provider#initialize)
   # @return (see LLM::LlamaCpp#initialize)
   def llamacpp(key: nil, **)
-
+    lock(:require) { require_relative "llm/providers/llamacpp" unless defined?(LLM::LlamaCpp) }
     LLM::LlamaCpp.new(key:, **)
   end
 
@@ -60,7 +63,7 @@ module LLM
   # @param key (see LLM::Provider#initialize)
   # @return (see LLM::DeepSeek#initialize)
   def deepseek(**)
-
+    lock(:require) { require_relative "llm/providers/deepseek" unless defined?(LLM::DeepSeek) }
     LLM::DeepSeek.new(**)
   end
 
@@ -68,7 +71,7 @@ module LLM
   # @param key (see LLM::Provider#initialize)
   # @return (see LLM::OpenAI#initialize)
   def openai(**)
-
+    lock(:require) { require_relative "llm/providers/openai" unless defined?(LLM::OpenAI) }
     LLM::OpenAI.new(**)
   end
 
@@ -77,12 +80,21 @@ module LLM
   # @param host (see LLM::XAI#initialize)
   # @return (see LLM::XAI#initialize)
   def xai(**)
-
+    lock(:require) { require_relative "llm/providers/xai" unless defined?(LLM::XAI) }
    LLM::XAI.new(**)
   end
 
   ##
-  #
+  # @param key (see LLM::ZAI#initialize)
+  # @param host (see LLM::ZAI#initialize)
+  # @return (see LLM::ZAI#initialize)
+  def zai(**)
+    lock(:require) { require_relative "llm/providers/zai" unless defined?(LLM::ZAI) }
+    LLM::ZAI.new(**)
+  end
+
+  ##
+  # Define a function
   # @example
   #   LLM.function(:system) do |fn|
   #     fn.description "Run system command"
@@ -93,21 +105,17 @@ module LLM
   #     system(command)
   #   end
   # end
-  # @param [Symbol]
+  # @param [Symbol] key The function name / key
   # @param [Proc] b The block to define the function
   # @return [LLM::Function] The function object
-  def function(
-
-    functions[name.to_s] = LLM::Function.new(name, &b)
-    else
-    functions[name.to_s]
-    end
+  def function(key, &b)
+    LLM::Function.new(key, &b)
   end
 
   ##
-  #
-  # @
-
-
-
+  # Provides a thread-safe lock
+  # @param [Symbol] name The name of the lock
+  # @param [Proc] & The block to execute within the lock
+  # @return [void]
+  def lock(name, &) = @monitors[name].synchronize(&)
 end
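Two behavioral notes fall out of this hunk: `LLM.function` no longer memoizes functions in a module-level registry (each call returns a fresh `LLM::Function`), and the only remaining shared state is the named monitor table behind `LLM.lock`. A sketch:

```ruby
a = LLM.function(:system) { |fn| fn.description "first" }
b = LLM.function(:system) { |fn| fn.description "second" }
a.equal?(b)   # => false; no shared registry in 1.0.0
b.description # => "second"

# The named monitors serialize access to shared state, e.g. the
# process-wide client pool used by persistent connections:
LLM.lock(:clients) { "runs while holding the :clients monitor" }
```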
data/llm.gemspec
CHANGED
@@ -10,7 +10,7 @@ Gem::Specification.new do |spec|
 
   spec.summary = <<~SUMMARY
     llm.rb is a zero-dependency Ruby toolkit for Large Language Models that
-    includes OpenAI, Gemini, Anthropic, xAI (grok), DeepSeek, Ollama, and
+    includes OpenAI, Gemini, Anthropic, xAI (grok), zAI, DeepSeek, Ollama, and
     LlamaCpp. The toolkit includes full support for chat, streaming, tool calling,
     audio, images, files, and structured outputs (JSON Schema).
   SUMMARY
metadata
CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: llm.rb
 version: !ruby/object:Gem::Version
-  version: 0.16.3
+  version: 1.0.0
 platform: ruby
 authors:
 - Antar Azri
@@ -165,7 +165,7 @@ dependencies:
   - !ruby/object:Gem::Version
     version: '4.0'
 description: llm.rb is a zero-dependency Ruby toolkit for Large Language Models that
-  includes OpenAI, Gemini, Anthropic, xAI (grok), DeepSeek, Ollama, and LlamaCpp.
+  includes OpenAI, Gemini, Anthropic, xAI (grok), zAI, DeepSeek, Ollama, and LlamaCpp.
   The toolkit includes full support for chat, streaming, tool calling, audio, images,
   files, and structured outputs (JSON Schema).
 email:
@@ -264,6 +264,7 @@ files:
 - lib/llm/providers/openai/vector_stores.rb
 - lib/llm/providers/xai.rb
 - lib/llm/providers/xai/images.rb
+- lib/llm/providers/zai.rb
 - lib/llm/response.rb
 - lib/llm/schema.rb
 - lib/llm/schema/array.rb
@@ -275,6 +276,7 @@ files:
 - lib/llm/schema/object.rb
 - lib/llm/schema/string.rb
 - lib/llm/schema/version.rb
+- lib/llm/server_tool.rb
 - lib/llm/tool.rb
 - lib/llm/utils.rb
 - lib/llm/version.rb
@@ -302,7 +304,7 @@ requirements: []
 rubygems_version: 3.6.9
 specification_version: 4
 summary: llm.rb is a zero-dependency Ruby toolkit for Large Language Models that includes
-  OpenAI, Gemini, Anthropic, xAI (grok), DeepSeek, Ollama, and LlamaCpp. The
-  includes full support for chat, streaming, tool calling, audio, images,
-  structured outputs (JSON Schema).
+  OpenAI, Gemini, Anthropic, xAI (grok), zAI, DeepSeek, Ollama, and LlamaCpp. The
+  toolkit includes full support for chat, streaming, tool calling, audio, images,
+  files, and structured outputs (JSON Schema).
 test_files: []