llm.rb 0.16.3 → 0.17.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
-   metadata.gz: 696893f9ef5355ed4433b265dc7771e76aa9c96c78036a70e698615a0c4d7bdd
-   data.tar.gz: 02f48b464173823d0696ea8d2d21ab5a147cdf946f1c3287dedddc1760a93c38
+   metadata.gz: fcb896bf97c2b2a07987e7d44088244e79bf35220478dd3ac93fc349a91c5f7c
+   data.tar.gz: e33c4f4c87d72568ac94af0f4494e6a40ade164dbc894de1c772fc1bd6c92705
  SHA512:
-   metadata.gz: 15c5549e9165a854814c853c381b36d3de6409a9260acb7a4bd2adddffd08520e3ea8f3dea999b111c6bd7706d9f241eddbcd2d103122ea64482a12ca8fa6879
-   data.tar.gz: 971c404ada0cf5ac1af10ffbd4aabd8654fa11fe69beceb6ef3e3fd8ae68ed3f9426ebc4af829beff2610aded2a77224ebe2b4f3e4c3ec4e7dfd5fe21e0fb90b
+   metadata.gz: 660dee96aa651f293818492ab74efa92893cf2f39dc43dd04c84841ca13071769cabf5ee03060406295af6759ed005e76594abb0de1b6b4d00b1a0b5f0915284
+   data.tar.gz: 89166bce1fb90718b8b99e2a8b6bea1fbe3cbbeb33c7fdcb0362febc7f969caaaab7e5072514793ebc8e911d7c98ebe9f46ec0b7d29257a299e9e99fca497a15
data/README.md CHANGED
@@ -155,6 +155,18 @@ res3 = llm.responses.create "message 3", previous_response_id: res2.response_id
  print res3.output_text, "\n"
  ```

+ #### Thread Safety
+
+ The llm.rb library is thread-safe and can be used in multi-threaded
+ environments, but it is important to keep in mind that the
+ [LLM::Provider](https://0x1eef.github.io/x/llm.rb/LLM/Provider.html)
+ and
+ [LLM::Bot](https://0x1eef.github.io/x/llm.rb/LLM/Bot.html)
+ classes should be instantiated once per thread, and not shared
+ between threads. Generally, the library avoids global or shared
+ state, but where it exists, reentrant locks are used to ensure
+ thread safety.
+
  ### Conversations

  #### Completions
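A minimal sketch of the per-thread pattern described above, assuming an OpenAI key in `ENV["KEY"]`:

```ruby
#!/usr/bin/env ruby
require "llm"

threads = 3.times.map do |i|
  Thread.new do
    # Each thread builds its own provider and bot; nothing is shared
    llm = LLM.openai(key: ENV["KEY"])
    bot = LLM::Bot.new(llm)
    bot.chat "Hello from thread #{i}", role: :user
    bot.messages.find(&:assistant?)&.content
  end
end
threads.each { print _1.value, "\n" }
```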
@@ -261,13 +273,22 @@ bot.messages.find(&:assistant?).content! # => {answers: [5, 10, 11]}

  ### Tools

- #### Functions
+ #### Introduction

  All providers support a powerful feature known as tool calling, and although
  it is a little complex to understand at first, it can be powerful for building
- agents. The following example demonstrates how we can define a local function
- (which happens to be a tool), and a provider (such as OpenAI) can then detect
- when we should call the function.
+ agents. There are three main interfaces to understand: [LLM::Function](https://0x1eef.github.io/x/llm.rb/LLM/Function.html),
+ [LLM::Tool](https://0x1eef.github.io/x/llm.rb/LLM/Tool.html), and
+ [LLM::ServerTool](https://0x1eef.github.io/x/llm.rb/LLM/ServerTool.html).
+
+
+ #### LLM::Function
+
+ The following example demonstrates how [LLM::Function](https://0x1eef.github.io/x/llm.rb/LLM/Function.html)
+ can define a local function (which happens to be a tool), and how
+ a provider (such as OpenAI) can then detect when the function should be called.
+ Its most notable feature is that it can act as a closure and has access to
+ its surrounding scope, which can be useful in some situations.

  The
  [LLM::Bot#functions](https://0x1eef.github.io/x/llm.rb/LLM/Bot.html#functions-instance_method)
@@ -276,6 +297,7 @@ it will only be populated if the LLM detects a function should be called. Each f
  corresponds to an element in the "tools" array. The array is emptied after a function call,
  and potentially repopulated on the next message:

+
  ```ruby
  #!/usr/bin/env ruby
  require "llm"
@@ -309,13 +331,61 @@ bot.chat bot.functions.map(&:call) # report return value to the LLM
  # {stderr: "", stdout: "FreeBSD"}
  ```

- #### Provider
+ #### LLM::Tool
+
+ The [LLM::Tool](https://0x1eef.github.io/x/llm.rb/LLM/Tool.html) class can be used
+ to implement an [LLM::Function](https://0x1eef.github.io/x/llm.rb/LLM/Function.html)
+ as a class. Under the hood, a subclass of [LLM::Tool](https://0x1eef.github.io/x/llm.rb/LLM/Tool.html)
+ wraps an instance of [LLM::Function](https://0x1eef.github.io/x/llm.rb/LLM/Function.html)
+ and delegates to it.
+
+ The choice between [LLM::Function](https://0x1eef.github.io/x/llm.rb/LLM/Function.html)
+ and [LLM::Tool](https://0x1eef.github.io/x/llm.rb/LLM/Tool.html) is often a matter of
+ preference but each carries its own benefits. For example, [LLM::Function](https://0x1eef.github.io/x/llm.rb/LLM/Function.html)
+ has the benefit of being a closure with access to its surrounding context,
+ and sometimes that is useful. The following example implements the same tool as a class:
+
+ ```ruby
+ #!/usr/bin/env ruby
+ require "llm"
+
+ class System < LLM::Tool
+   name "system"
+   description "Run a shell command"
+   params { |schema| schema.object(command: schema.string.required) }
+
+   def call(command:)
+     ro, wo = IO.pipe
+     re, we = IO.pipe
+     Process.wait Process.spawn(command, out: wo, err: we)
+     [wo, we].each(&:close)
+     {stderr: re.read, stdout: ro.read}
+   end
+ end
+
+ bot = LLM::Bot.new(llm, tools: [System])
+ bot.chat "Your task is to run shell commands via a tool.", role: :system
+
+ bot.chat "What is the current date?", role: :user
+ bot.chat bot.functions.map(&:call) # report return value to the LLM
+
+ bot.chat "What operating system am I running? (short version please!)", role: :user
+ bot.chat bot.functions.map(&:call) # report return value to the LLM
+
+ ##
+ # {stderr: "", stdout: "Thu May 1 10:01:02 UTC 2025"}
+ # {stderr: "", stdout: "FreeBSD"}
+ ```
+
+ #### Server Tools

  The
  [LLM::Function](https://0x1eef.github.io/x/llm.rb/LLM/Function.html)
- class can define a local function that can be called by a provider on your behalf,
- and the
+ and
  [LLM::Tool](https://0x1eef.github.io/x/llm.rb/LLM/Tool.html)
+ classes can define a local function or tool that can be called by
+ a provider on your behalf, and the
+ [LLM::ServerTool](https://0x1eef.github.io/x/llm.rb/LLM/ServerTool.html)
  class represents a tool that is defined and implemented by a provider, and we can
  request that the provider call the tool on our behalf. That's the primary difference
  between a function implemented locally and a tool implemented by a provider. The
@@ -327,7 +397,8 @@ OpenAI provider to execute Python code on OpenAI's servers:
  require "llm"

  llm = LLM.openai(key: ENV["KEY"])
- res = llm.responses.create "Run: 'print(\"hello world\")'", tools: [llm.tool(:code_interpreter)]
+ res = llm.responses.create "Run: 'print(\"hello world\")'",
+   tools: [llm.server_tool(:code_interpreter)]
  print res.output_text, "\n"
  ```

data/lib/llm/buffer.rb CHANGED
@@ -48,10 +48,16 @@ module LLM
  end

  ##
- # Returns the last message in the buffer
- # @return [LLM::Message, nil]
- def last
-   to_a[-1]
+ # Returns the last message(s) in the buffer
+ # @param [Integer, nil] n
+ #  The number of messages to return
+ # @return [LLM::Message, Array<LLM::Message>, nil]
+ def last(n = nil)
+   if @pending.empty?
+     n.nil? ? @completed.last : @completed.last(n)
+   else
+     n.nil? ? to_a.last : to_a.last(n)
+   end
  end

  ##
@@ -65,19 +71,20 @@ module LLM
  alias_method :push, :<<

  ##
- # @param [Integer, Range, #to_i] index
+ # @param [Integer, Range] index
  #  The message index
  # @return [LLM::Message, nil]
  #  Returns a message, or nil
  def [](index)
-   if index.respond_to?(:to_i)
-     @completed[index.to_i] || to_a[index.to_i]
-   elsif Range === index
-     slice = @completed[index]
-     invalidate = slice.nil? || slice.size < index.size
-     invalidate ? to_a[index] : slice
+   if @pending.empty?
+     if Range === index
+       slice = @completed[index]
+       (slice.nil? || slice.size < index.size) ? to_a[index] : slice
+     else
+       @completed[index]
+     end
    else
-     raise TypeError, "index must be an Integer or Range"
+     to_a[index]
    end
  end

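The buffer is typically reached through `LLM::Bot#messages`; a brief, illustrative sketch of the new `last(n)` and `[]` behavior, assuming an OpenAI key in `ENV["KEY"]`:

```ruby
#!/usr/bin/env ruby
require "llm"

llm = LLM.openai(key: ENV["KEY"])
bot = LLM::Bot.new(llm)
bot.chat "What is 2 + 2?", role: :user

bot.messages.last     # => the most recent LLM::Message
bot.messages.last(2)  # => an Array of the two most recent messages
bot.messages[0]       # => Integer index, as before
bot.messages[0..1]    # => Range index, served from the completed set when possible
```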
data/lib/llm/client.rb CHANGED
@@ -9,7 +9,7 @@ module LLM
  ##
  # @api private
  def persistent_client
-   mutex.synchronize do
+   LLM.lock(:clients) do
      if clients[client_id]
        clients[client_id]
      else
data/lib/llm/function.rb CHANGED
@@ -6,6 +6,7 @@
  #
  # @example example #1
  #   LLM.function(:system) do |fn|
+ #     fn.name "system"
  #     fn.description "Runs system commands"
  #     fn.params do |schema|
  #       schema.object(command: schema.string.required)
@@ -16,18 +17,16 @@
  #   end
  #
  # @example example #2
- #   class System
- #     def call(command:)
- #       {success: Kernel.system(command)}
+ #   class System < LLM::Tool
+ #     name "system"
+ #     description "Runs system commands"
+ #     params do |schema|
+ #       schema.object(command: schema.string.required)
  #     end
- #   end
  #
- #   LLM.function(:system) do |fn|
- #     fn.description "Runs system commands"
- #     fn.params do |schema|
- #       schema.object(command: schema.string.required)
+ #     def call(command:)
+ #       {success: Kernel.system(command)}
  #     end
- #     fn.register(System)
  #   end
  class LLM::Function
    class Return < Struct.new(:id, :name, :value)
@@ -38,11 +37,6 @@ class LLM::Function
  # @return [String, nil]
  attr_accessor :id

- ##
- # Returns the function name
- # @return [String]
- attr_reader :name
-
  ##
  # Returns function arguments
  # @return [Array, nil]
@@ -56,11 +50,23 @@ class LLM::Function
    @schema = LLM::Schema.new
    @called = false
    @cancelled = false
-   yield(self)
+   yield(self) if block_given?
+ end
+
+ ##
+ # Set (or get) the function name
+ # @param [String] name The function name
+ # @return [void]
+ def name(name = nil)
+   if name
+     @name = name.to_s
+   else
+     @name
+   end
  end

  ##
- # Set the function description
+ # Set (or get) the function description
  # @param [String] desc The function description
  # @return [void]
  def description(desc = nil)
@@ -72,10 +78,15 @@ class LLM::Function
  end

  ##
+ # Set (or get) the function parameters
  # @yieldparam [LLM::Schema] schema The schema object
  # @return [void]
  def params
-   @params = yield(@schema)
+   if block_given?
+     @params = yield(@schema)
+   else
+     @params
+   end
  end

  ##
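Taken together, `name`, `description`, and `params` now act as combined getters and setters; a small sketch of the resulting API, reusing the `system` function from the docstring above:

```ruby
fn = LLM.function(:system) do |fn|
  fn.name "system"
  fn.description "Runs system commands"
  fn.params { |schema| schema.object(command: schema.string.required) }
end

fn.name        # => "system"
fn.description # => "Runs system commands"
fn.params      # => the schema defined above
```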
data/lib/llm/message.rb CHANGED
@@ -61,7 +61,7 @@ module LLM
  # @return [Array<LLM::Function>]
  def functions
    @functions ||= tool_calls.map do |fn|
-     function = LLM.functions[fn.name].dup
+     function = tools.find { _1.name.to_s == fn["name"] }.dup
      function.tap { _1.id = fn.id }
      function.tap { _1.arguments = fn.arguments }
    end
@@ -170,5 +170,9 @@ module LLM
      def tool_calls
        @tool_calls ||= LLM::Object.from_hash(@extra[:tool_calls] || [])
      end
+
+     def tools
+       response&.__tools__ || []
+     end
    end
  end
data/lib/llm/provider.rb CHANGED
@@ -11,16 +11,11 @@ class LLM::Provider
  include LLM::Client

  @@clients = {}
- @@mutex = Mutex.new

  ##
  # @api private
  def self.clients = @@clients

- ##
- # @api private
- def self.mutex = @@mutex
-

  ##
  # @param [String, nil] key
  #  The secret key for authentication
@@ -238,11 +233,11 @@ class LLM::Provider

  ##
  # @note
- #  This method might be outdated, and the {LLM::Provider#tool LLM::Provider#tool}
+ #  This method might be outdated, and the {LLM::Provider#server_tool LLM::Provider#server_tool}
  #  method can be used if a tool is not found here.
  # Returns all known tools provided by a provider.
- # @return [String => LLM::Tool]
- def tools
+ # @return [String => LLM::ServerTool]
+ def server_tools
    {}
  end

@@ -253,14 +248,14 @@ class LLM::Provider
  # Returns a tool provided by a provider.
  # @example
  #   llm = LLM.openai(key: ENV["KEY"])
- #   tools = [llm.tool(:web_search)]
+ #   tools = [llm.server_tool(:web_search)]
  #   res = llm.responses.create("Summarize today's news", tools:)
  #   print res.output_text, "\n"
  # @param [String, Symbol] name The name of the tool
  # @param [Hash] options Configuration options for the tool
- # @return [LLM::Tool]
- def tool(name, options = {})
-   LLM::Tool.new(name, options, self)
+ # @return [LLM::ServerTool]
+ def server_tool(name, options = {})
+   LLM::ServerTool.new(name, options, self)
  end

  ##
@@ -369,4 +364,23 @@ class LLM::Provider
      req.body_stream = io
      req["transfer-encoding"] = "chunked" unless req["content-length"]
    end
+
+   ##
+   # Resolves tools to their function representations
+   # @param [Array<LLM::Function, LLM::Tool>] tools
+   #  The tools to map
+   # @raise [TypeError]
+   #  When a tool is not recognized
+   # @return [Array<LLM::Function>]
+   def resolve_tools(tools)
+     (tools || []).map do |tool|
+       if tool.respond_to?(:function)
+         tool.function
+       elsif [LLM::Function, LLM::ServerTool, Hash].any? { _1 === tool }
+         tool
+       else
+         raise TypeError, "#{tool.class} given as a tool but it is not recognized"
+       end
+     end
+   end
  end
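The practical effect is that a `tools:` array may mix `LLM::Tool` subclasses, `LLM::Function` objects, server tools, and raw hashes; an illustrative sketch, where `System` stands in for the `LLM::Tool` subclass from the README example:

```ruby
llm = LLM.openai(key: ENV["KEY"])
tools = [
  System,                        # LLM::Tool subclass: resolved via its .function
  llm.server_tool(:web_search),  # LLM::ServerTool: passed through as-is
  {type: "custom"}               # a raw Hash is also passed through
]
bot = LLM::Bot.new(llm, tools:)
```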
@@ -21,9 +21,8 @@ class LLM::Anthropic
    ##
    # @param [Hash] params
    # @return [Hash]
-   def format_tools(params)
-     return {} unless params and params[:tools]&.any?
-     tools = params[:tools]
+   def format_tools(tools)
+     return {} unless tools&.any?
      {tools: tools.map { _1.respond_to?(:format) ? _1.format(self) : _1 }}
    end
  end
@@ -43,7 +43,8 @@ module LLM
  # @return (see LLM::Provider#complete)
  def complete(prompt, params = {})
    params = {role: :user, model: default_model, max_tokens: 1024}.merge!(params)
-   params = [params, format_tools(params)].inject({}, &:merge!).compact
+   tools = resolve_tools(params.delete(:tools))
+   params = [params, format_tools(tools)].inject({}, &:merge!).compact
    role, stream = params.delete(:role), params.delete(:stream)
    params[:stream] = true if stream.respond_to?(:<<) || stream == true
    req = Net::HTTP::Post.new("/v1/messages", headers)
@@ -51,7 +52,9 @@ module LLM
    body = JSON.dump({messages: [format(messages)].flatten}.merge!(params))
    set_body_stream(req, StringIO.new(body))
    res = execute(request: req, stream:)
-   LLM::Response.new(res).extend(LLM::Anthropic::Response::Completion)
+   LLM::Response.new(res)
+     .extend(LLM::Anthropic::Response::Completion)
+     .extend(Module.new { define_method(:__tools__) { tools } })
  end

  ##
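The `Module.new { define_method(:__tools__) { tools } }` idiom, repeated for each provider, attaches the resolved tools to the response by capturing the local `tools` variable in a closure; a standalone illustration of the technique:

```ruby
tools = [:bash, :web_search]
response = Object.new
# Unlike def, define_method closes over the surrounding scope, so
# __tools__ can read the local variable captured at extend time
response.extend(Module.new { define_method(:__tools__) { tools } })
response.__tools__ # => [:bash, :web_search]
```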
@@ -88,14 +91,14 @@ module LLM
  # @note
  #  This method includes certain tools that require configuration
  #  through a set of options that are easier to set through the
- #  {LLM::Provider#tool LLM::Provider#tool} method.
+ #  {LLM::Provider#server_tool LLM::Provider#server_tool} method.
  # @see https://docs.anthropic.com/en/docs/agents-and-tools/tool-use/web-search-tool Anthropic docs
- # @return (see LLM::Provider#tools)
- def tools
+ # @return (see LLM::Provider#server_tools)
+ def server_tools
    {
-     bash: tool(:bash, type: "bash_20250124"),
-     web_search: tool(:web_search, type: "web_search_20250305", max_uses: 5),
-     text_editor: tool(:str_replace_based_edit_tool, type: "text_editor_20250728", max_characters: 10_000)
+     bash: server_tool(:bash, type: "bash_20250124"),
+     web_search: server_tool(:web_search, type: "web_search_20250305", max_uses: 5),
+     text_editor: server_tool(:str_replace_based_edit_tool, type: "text_editor_20250728", max_characters: 10_000)
    }
  end

@@ -109,7 +112,7 @@ module LLM
  # @param query [String] The search query.
  # @return [LLM::Response] The response from the LLM provider.
  def web_search(query:)
-   complete(query, tools: [tools[:web_search]])
+   complete(query, tools: [server_tools[:web_search]])
      .extend(LLM::Anthropic::Response::WebSearch)
  end

@@ -20,8 +20,7 @@ class LLM::DeepSeek
    ##
    # @param [Hash] params
    # @return [Hash]
-   def format_tools(params)
-     tools = params.delete(:tools)
+   def format_tools(tools)
      (tools.nil? || tools.empty?) ? {} : {tools: tools.map { _1.format(self) }}
    end
  end
@@ -30,10 +30,9 @@ class LLM::Gemini
    ##
    # @param [Hash] params
    # @return [Hash]
-   def format_tools(params)
-     return {} unless params and params[:tools]&.any?
-     tools = params.delete(:tools)
-     platform, functions = [tools.grep(LLM::Tool), tools.grep(LLM::Function)]
+   def format_tools(tools)
+     return {} unless tools&.any?
+     platform, functions = [tools.grep(LLM::ServerTool), tools.grep(LLM::Function)]
      {tools: [*platform, {functionDeclarations: functions.map { _1.format(self) }}]}
    end
  end
@@ -67,7 +67,8 @@ module LLM
  # @return [LLM::Response]
  def complete(prompt, params = {})
    params = {role: :user, model: default_model}.merge!(params)
-   params = [params, format_schema(params), format_tools(params)].inject({}, &:merge!).compact
+   tools = resolve_tools(params.delete(:tools))
+   params = [params, format_schema(params), format_tools(tools)].inject({}, &:merge!).compact
    role, model, stream = [:role, :model, :stream].map { params.delete(_1) }
    action = stream ? "streamGenerateContent?key=#{@key}&alt=sse" : "generateContent?key=#{@key}"
    model.respond_to?(:id) ? model.id : model
@@ -77,7 +78,9 @@ module LLM
    body = JSON.dump({contents: format(messages)}.merge!(params))
    set_body_stream(req, StringIO.new(body))
    res = execute(request: req, stream:)
-   LLM::Response.new(res).extend(LLM::Gemini::Response::Completion)
+   LLM::Response.new(res)
+     .extend(LLM::Gemini::Response::Completion)
+     .extend(Module.new { define_method(:__tools__) { tools } })
  end

  ##
@@ -130,14 +133,14 @@ module LLM
  # @note
  #  This method includes certain tools that require configuration
  #  through a set of options that are easier to set through the
- #  {LLM::Provider#tool LLM::Provider#tool} method.
+ #  {LLM::Provider#server_tool LLM::Provider#server_tool} method.
  # @see https://ai.google.dev/gemini-api/docs/google-search Gemini docs
- # @return (see LLM::Provider#tools)
- def tools
+ # @return (see LLM::Provider#server_tools)
+ def server_tools
    {
-     google_search: tool(:google_search),
-     code_execution: tool(:code_execution),
-     url_context: tool(:url_context)
+     google_search: server_tool(:google_search),
+     code_execution: server_tool(:code_execution),
+     url_context: server_tool(:url_context)
    }
  end

@@ -147,7 +150,7 @@ module LLM
  # @param query [String] The search query.
  # @return [LLM::Response] The response from the LLM provider.
  def web_search(query:)
-   complete(query, tools: [tools[:google_search]])
+   complete(query, tools: [server_tools[:google_search]])
      .extend(LLM::Gemini::Response::WebSearch)
  end

@@ -21,9 +21,8 @@ class LLM::Ollama
    ##
    # @param [Hash] params
    # @return [Hash]
-   def format_tools(params)
-     return {} unless params and params[:tools]&.any?
-     tools = params[:tools]
+   def format_tools(tools)
+     return {} unless tools&.any?
      {tools: tools.map { _1.format(self) }}
    end
  end
@@ -60,7 +60,8 @@ module LLM
  # @return [LLM::Response]
  def complete(prompt, params = {})
    params = {role: :user, model: default_model, stream: true}.merge!(params)
-   params = [params, {format: params[:schema]}, format_tools(params)].inject({}, &:merge!).compact
+   tools = resolve_tools(params.delete(:tools))
+   params = [params, {format: params[:schema]}, format_tools(tools)].inject({}, &:merge!).compact
    role, stream = params.delete(:role), params.delete(:stream)
    params[:stream] = true if stream.respond_to?(:<<) || stream == true
    req = Net::HTTP::Post.new("/api/chat", headers)
@@ -68,7 +69,9 @@ module LLM
    body = JSON.dump({messages: [format(messages)].flatten}.merge!(params))
    set_body_stream(req, StringIO.new(body))
    res = execute(request: req, stream:)
-   LLM::Response.new(res).extend(LLM::Ollama::Response::Completion)
+   LLM::Response.new(res)
+     .extend(LLM::Ollama::Response::Completion)
+     .extend(Module.new { define_method(:__tools__) { tools } })
  end

  ##
@@ -43,8 +43,7 @@ class LLM::OpenAI
    ##
    # @param [Hash] params
    # @return [Hash]
-   def format_tools(params)
-     tools = params.delete(:tools)
+   def format_tools(tools)
      if tools.nil? || tools.empty?
        {}
      else
@@ -37,7 +37,8 @@ class LLM::OpenAI
  # @return [LLM::Response]
  def create(prompt, params = {})
    params = {role: :user, model: @provider.default_model}.merge!(params)
-   params = [params, format_schema(params), format_tools(params)].inject({}, &:merge!).compact
+   tools = resolve_tools(params.delete(:tools))
+   params = [params, format_schema(params), format_tools(tools)].inject({}, &:merge!).compact
    role, stream = params.delete(:role), params.delete(:stream)
    params[:stream] = true if stream.respond_to?(:<<) || stream == true
    req = Net::HTTP::Post.new("/v1/responses", headers)
@@ -45,7 +46,9 @@ class LLM::OpenAI
    body = JSON.dump({input: [format(messages, :response)].flatten}.merge!(params))
    set_body_stream(req, StringIO.new(body))
    res = execute(request: req, stream:, stream_parser:)
-   LLM::Response.new(res).extend(LLM::OpenAI::Response::Responds)
+   LLM::Response.new(res)
+     .extend(LLM::OpenAI::Response::Responds)
+     .extend(Module.new { define_method(:__tools__) { tools } })
  end

  ##
@@ -77,7 +80,7 @@ class LLM::OpenAI

  private

- [:headers, :execute, :set_body_stream].each do |m|
+ [:headers, :execute, :set_body_stream, :resolve_tools].each do |m|
    define_method(m) { |*args, **kwargs, &b| @provider.send(m, *args, **kwargs, &b) }
  end

@@ -65,7 +65,8 @@ module LLM
  # @return (see LLM::Provider#complete)
  def complete(prompt, params = {})
    params = {role: :user, model: default_model}.merge!(params)
-   params = [params, format_schema(params), format_tools(params)].inject({}, &:merge!).compact
+   tools = resolve_tools(params.delete(:tools))
+   params = [params, format_schema(params), format_tools(tools)].inject({}, &:merge!).compact
    role, stream = params.delete(:role), params.delete(:stream)
    params[:stream] = true if stream.respond_to?(:<<) || stream == true
    params[:stream_options] = {include_usage: true}.merge!(params[:stream_options] || {}) if params[:stream]
@@ -74,7 +75,9 @@ module LLM
    body = JSON.dump({messages: format(messages, :complete).flatten}.merge!(params))
    set_body_stream(req, StringIO.new(body))
    res = execute(request: req, stream:)
-   LLM::Response.new(res).extend(LLM::OpenAI::Response::Completion)
+   LLM::Response.new(res)
+     .extend(LLM::OpenAI::Response::Completion)
+     .extend(Module.new { define_method(:__tools__) { tools } })
  end

  ##
@@ -152,15 +155,15 @@ module LLM
  # @note
  #  This method includes certain tools that require configuration
  #  through a set of options that are easier to set through the
- #  {LLM::Provider#tool LLM::Provider#tool} method.
- # @return (see LLM::Provider#tools)
- def tools
+ #  {LLM::Provider#server_tool LLM::Provider#server_tool} method.
+ # @return (see LLM::Provider#server_tools)
+ def server_tools
    {
-     web_search: tool(:web_search),
-     file_search: tool(:file_search),
-     image_generation: tool(:image_generation),
-     code_interpreter: tool(:code_interpreter),
-     computer_use: tool(:computer_use)
+     web_search: server_tool(:web_search),
+     file_search: server_tool(:file_search),
+     image_generation: server_tool(:image_generation),
+     code_interpreter: server_tool(:code_interpreter),
+     computer_use: server_tool(:computer_use)
    }
  end

@@ -175,7 +178,7 @@ module LLM
  # @return [LLM::Response] The response from the LLM provider.
  def web_search(query:)
    responses
-     .create(query, store: false, tools: [tools[:web_search]])
+     .create(query, store: false, tools: [server_tools[:web_search]])
      .extend(LLM::OpenAI::Response::WebSearch)
  end

@@ -0,0 +1,32 @@
+ # frozen_string_literal: true
+
+ ##
+ # The {LLM::ServerTool LLM::ServerTool} class represents a platform-native tool
+ # that can be activated by an LLM provider. Unlike {LLM::Function LLM::Function},
+ # these tools are pre-defined by the provider and their capabilities
+ # are already known to the underlying LLM.
+ #
+ # @example
+ #   #!/usr/bin/env ruby
+ #   llm = LLM.gemini ENV["KEY"]
+ #   bot = LLM::Bot.new(llm, tools: [LLM::ServerTool.new(:google_search)])
+ #   bot.chat("Summarize today's news", role: :user)
+ #   print bot.messages.find(&:assistant?).content, "\n"
+ class LLM::ServerTool < Struct.new(:name, :options, :provider)
+   ##
+   # @return [String]
+   def to_json(...)
+     to_h.to_json(...)
+   end
+
+   ##
+   # @return [Hash]
+   def to_h
+     case provider.class.to_s
+     when "LLM::Anthropic" then options.merge("name" => name.to_s)
+     when "LLM::Gemini" then {name => options}
+     else options.merge("type" => name.to_s)
+     end
+   end
+   alias_method :to_hash, :to_h
+ end
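Per the `case` expression in `to_h`, the same server tool serializes differently for each provider; an illustrative sketch of the resulting shapes:

```ruby
llm = LLM.openai(key: ENV["KEY"])
llm.server_tool(:web_search).to_h
# => {"type" => "web_search"}   (the default shape)

# For Anthropic, options are merged with a "name" key instead:
#   {"name" => "web_search"}
# and for Gemini the tool serializes as a single-key hash:
#   {web_search: {}}
```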
data/lib/llm/tool.rb CHANGED
@@ -1,32 +1,75 @@
  # frozen_string_literal: true

  ##
- # The {LLM::Tool LLM::Tool} class represents a platform-native tool
- # that can be activated by an LLM provider. Unlike {LLM::Function LLM::Function},
- # these tools are pre-defined by the provider and their capabilities
- # are already known to the underlying LLM.
- #
+ # The {LLM::Tool LLM::Tool} class represents a local tool
+ # that can be called by an LLM. Under the hood, it is a wrapper
+ # around {LLM::Function LLM::Function} but allows the definition
+ # of a function (also known as a tool) as a class.
  # @example
- #   #!/usr/bin/env ruby
- #   llm = LLM.gemini ENV["KEY"]
- #   bot = LLM::Bot.new(llm, tools: [LLM.tool(:google_search)])
- #   bot.chat("Summarize today's news", role: :user)
- #   print bot.messages.find(&:assistant?).content, "\n"
- class LLM::Tool < Struct.new(:name, :options, :provider)
+ #   class System < LLM::Tool
+ #     name "system"
+ #     description "Runs system commands"
+ #     params do |schema|
+ #       schema.object(command: schema.string.required)
+ #     end
+ #
+ #     def call(command:)
+ #       {success: Kernel.system(command)}
+ #     end
+ #   end
+ class LLM::Tool
+   ##
+   # Registers the tool as a function when inherited
+   # @param [Class] klass The subclass
+   # @return [void]
+   def self.inherited(klass)
+     LLM.lock(:inherited) do
+       klass.instance_eval { @__monitor ||= Monitor.new }
+       klass.function.register(klass)
+     end
+   end
+
    ##
+   # Returns (or sets) the tool name
+   # @param [String, nil] name The tool name
    # @return [String]
-   def to_json(...)
-     to_h.to_json(...)
+   def self.name(name = nil)
+     lock do
+       function.tap { _1.name(name) }
+     end
    end

    ##
-   # @return [Hash]
-   def to_h
-     case provider.class.to_s
-     when "LLM::Anthropic" then options.merge("name" => name.to_s)
-     when "LLM::Gemini" then {name => options}
-     else options.merge("type" => name.to_s)
+   # Returns (or sets) the tool description
+   # @param [String, nil] desc The tool description
+   # @return [String]
+   def self.description(desc = nil)
+     lock do
+       function.tap { _1.description(desc) }
      end
    end
-   alias_method :to_hash, :to_h
+
+   ##
+   # Returns (or sets) tool parameters
+   # @yieldparam [LLM::Schema] schema The schema object to define parameters
+   # @return [LLM::Schema]
+   def self.params(&)
+     lock do
+       function.tap { _1.params(&) }
+     end
+   end
+
+   ##
+   # @api private
+   def self.function
+     lock do
+       @function ||= LLM::Function.new(self)
+     end
+   end
+
+   ##
+   # @api private
+   def self.lock(&)
+     @__monitor.synchronize(&)
+   end
  end
data/lib/llm/version.rb CHANGED
@@ -1,5 +1,5 @@
  # frozen_string_literal: true

  module LLM
-   VERSION = "0.16.3"
+   VERSION = "0.17.0"
  end
data/lib/llm.rb CHANGED
@@ -19,8 +19,11 @@ module LLM
  require_relative "llm/eventstream"
  require_relative "llm/eventhandler"
  require_relative "llm/tool"
+ require_relative "llm/server_tool"

- @mutex = Mutex.new
+ ##
+ # Thread-safe monitors for different contexts
+ @monitors = { require: Monitor.new, clients: Monitor.new, inherited: Monitor.new }

  module_function

@@ -28,7 +31,7 @@ module LLM
  # @param (see LLM::Provider#initialize)
  # @return (see LLM::Anthropic#initialize)
  def anthropic(**)
-   @mutex.synchronize { require_relative "llm/providers/anthropic" unless defined?(LLM::Anthropic) }
+   lock(:require) { require_relative "llm/providers/anthropic" unless defined?(LLM::Anthropic) }
    LLM::Anthropic.new(**)
  end

@@ -36,7 +39,7 @@ module LLM
  # @param (see LLM::Provider#initialize)
  # @return (see LLM::Gemini#initialize)
  def gemini(**)
-   @mutex.synchronize { require_relative "llm/providers/gemini" unless defined?(LLM::Gemini) }
+   lock(:require) { require_relative "llm/providers/gemini" unless defined?(LLM::Gemini) }
    LLM::Gemini.new(**)
  end

@@ -44,7 +47,7 @@ module LLM
  # @param key (see LLM::Provider#initialize)
  # @return (see LLM::Ollama#initialize)
  def ollama(key: nil, **)
-   @mutex.synchronize { require_relative "llm/providers/ollama" unless defined?(LLM::Ollama) }
+   lock(:require) { require_relative "llm/providers/ollama" unless defined?(LLM::Ollama) }
    LLM::Ollama.new(key:, **)
  end

@@ -52,7 +55,7 @@ module LLM
  # @param key (see LLM::Provider#initialize)
  # @return (see LLM::LlamaCpp#initialize)
  def llamacpp(key: nil, **)
-   @mutex.synchronize { require_relative "llm/providers/llamacpp" unless defined?(LLM::LlamaCpp) }
+   lock(:require) { require_relative "llm/providers/llamacpp" unless defined?(LLM::LlamaCpp) }
    LLM::LlamaCpp.new(key:, **)
  end

@@ -60,7 +63,7 @@ module LLM
  # @param key (see LLM::Provider#initialize)
  # @return (see LLM::DeepSeek#initialize)
  def deepseek(**)
-   @mutex.synchronize { require_relative "llm/providers/deepseek" unless defined?(LLM::DeepSeek) }
+   lock(:require) { require_relative "llm/providers/deepseek" unless defined?(LLM::DeepSeek) }
    LLM::DeepSeek.new(**)
  end

@@ -68,7 +71,7 @@ module LLM
  # @param key (see LLM::Provider#initialize)
  # @return (see LLM::OpenAI#initialize)
  def openai(**)
-   @mutex.synchronize { require_relative "llm/providers/openai" unless defined?(LLM::OpenAI) }
+   lock(:require) { require_relative "llm/providers/openai" unless defined?(LLM::OpenAI) }
    LLM::OpenAI.new(**)
  end

@@ -77,12 +80,12 @@ module LLM
  # @param host (see LLM::XAI#initialize)
  # @return (see LLM::XAI#initialize)
  def xai(**)
-   @mutex.synchronize { require_relative "llm/providers/xai" unless defined?(LLM::XAI) }
+   lock(:require) { require_relative "llm/providers/xai" unless defined?(LLM::XAI) }
    LLM::XAI.new(**)
  end

  ##
- # Define or get a function
+ # Define a function
  # @example
  #   LLM.function(:system) do |fn|
  #     fn.description "Run system command"
@@ -93,21 +96,17 @@ module LLM
  #       system(command)
  #     end
  #   end
- # @param [Symbol] name The name of the function
+ # @param [Symbol] key The function name / key
  # @param [Proc] b The block to define the function
  # @return [LLM::Function] The function object
- def function(name, &b)
-   if block_given?
-     functions[name.to_s] = LLM::Function.new(name, &b)
-   else
-     functions[name.to_s]
-   end
+ def function(key, &b)
+   LLM::Function.new(key, &b)
  end

  ##
- # Returns all known functions
- # @return [Hash<String,LLM::Function>]
- def functions
-   @functions ||= {}
- end
+ # Provides a thread-safe lock
+ # @param [Symbol] name The name of the lock
+ # @param [Proc] & The block to execute within the lock
+ # @return [void]
+ def lock(name, &) = @monitors[name].synchronize(&)
  end
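One behavioral consequence: `LLM.function` no longer maintains a global registry (`LLM.functions` is gone), so a function must be passed to a bot explicitly; an illustrative sketch assuming `ENV["KEY"]`:

```ruby
llm = LLM.openai(key: ENV["KEY"])
fn = LLM.function(:system) do |fn|
  fn.name "system"
  fn.description "Run system command"
  fn.params { |schema| schema.object(command: schema.string.required) }
end

# Each call to LLM.function now returns a fresh, unregistered function
bot = LLM::Bot.new(llm, tools: [fn])
```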
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: llm.rb
  version: !ruby/object:Gem::Version
-   version: 0.16.3
+   version: 0.17.0
  platform: ruby
  authors:
  - Antar Azri
@@ -275,6 +275,7 @@ files:
  - lib/llm/schema/object.rb
  - lib/llm/schema/string.rb
  - lib/llm/schema/version.rb
+ - lib/llm/server_tool.rb
  - lib/llm/tool.rb
  - lib/llm/utils.rb
  - lib/llm/version.rb