llm.rb 5.2.1 → 5.4.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 8f9bdef0c733225e44dcf39d75e3397974122bfeb5e705a0797067242fd5c966
-  data.tar.gz: 567cc793e1e095e481abf5ef797a6fcb26a04faeed91855c234e531b78e3544a
+  metadata.gz: 0bd3ea0956fe1a9fa53bec3211dc4afe6f03c15fa67304ce5ba2c922d20abff1
+  data.tar.gz: 1aa03e4fc3eafbbbf9367deb8714f844ccd41299c666595d1d081a2db4d9d42e
 SHA512:
-  metadata.gz: 3d7e026b308228787d2f6ead8de197f847b644f7ba5bd0a1d679270a66e5c0e48c74a7d9494a86613bd061a42c2fd56ff270f73f8f3a627bc213dd7d57de788d
-  data.tar.gz: 8586a02d0345e7259f80b32e688a0ad531e28e3d15c8519781647ee1f77f23b6ecdaa9b28ed40efdea138c138511c7c4f88986ed7d646fb562e9faf7e2db687f
+  metadata.gz: 0e14b7cb29b5130b703c26369b6ecec106117e6a045bbcbeb79019c96814d9969e387c25992d2f3c96fd2ea43143ca4c48f00e91a00cc6cb3e20556145254d80
+  data.tar.gz: 3d34913dba2eab22f6f794196d59791cbadfeb71b9b4aaf588d46607112c4a0a0f741a9e8c9d308afb86d5d678a753b8a551ca66b08351581677845012c8e583
data/CHANGELOG.md CHANGED
@@ -2,8 +2,56 @@
 
 ## Unreleased
 
+Changes since `v5.4.0`.
+
+## v5.4.0
+
+Changes since `v5.3.0`.
+
+This release expands tracer support around agentic execution. It lets
+`LLM::Agent` define scoped tracers through the agent DSL and fixes concurrent
+tool execution so those scoped tracers stay attached when work crosses
+thread, task, fiber, and skill boundaries.
+
+### Change
+
+* **Add agent-scoped tracers** <br>
+  Let `LLM::Agent` classes define `tracer ...` or `tracer { ... }` so an
+  agent can carry its own tracer without replacing the provider's default
+  tracer. The resolved tracer is scoped to that agent's turns, tool loops,
+  and pending tool access. Available through the `acts_as_agent` and Sequel
+  agent plugin `tracer` DSL too.
+
+### Fix
+
+* **Preserve scoped tracers across concurrent tool work** <br>
+  Keep agent- and request-scoped tracers attached when tool execution
+  crosses `:thread`, `:task`, or `:fiber` boundaries, including skill
+  execution, so spawned work does not fall back to the provider default
+  tracer.
+
+## v5.3.0
+
 Changes since `v5.2.1`.
 
+This release deepens llm.rb's request-rewriting and tool-definition surface.
+It adds transformer lifecycle hooks to `LLM::Stream` so UIs can surface work
+like PII scrubbing before a request is sent, and it adds a more explicit
+OmniAI-style tool DSL form with `parameter` plus separate `required`
+declarations while keeping the older `param ... required: true` style working.
+
+### Change
+
+* **Add transformer stream lifecycle hooks** <br>
+  Add `on_transform` and `on_transform_finish` to
+  `LLM::Stream` so UIs can surface request rewriting work such as PII
+  scrubbing before a request is sent to the model.
+
+* **Add a separate `required` tool DSL form** <br>
+  Add `parameter` as an alias of `param` and support `required %i[...]`
+  as a separate declaration, inspired by OmniAI-style tools, while keeping
+  the existing `param ... required: true` form working too.
+
 ## v5.2.1
 
 Changes since `v5.2.0`.
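Taken together, the v5.4.0 entries read like this in practice. A minimal sketch, assuming an OpenAI key in `ENV["KEY"]`; `ReleaseAgent` and the log path are hypothetical, while the `tracer` DSL and `LLM::Tracer::Logger` appear in the README and `data/lib/llm/agent.rb` changes below:

```ruby
require "llm"

# Hypothetical agent: the tracer block is evaluated lazily against the
# agent instance, so it can build a tracer from the resolved provider (llm).
class ReleaseAgent < LLM::Agent
  model "gpt-5.4-mini"
  tracer { LLM::Tracer::Logger.new(llm, path: "logs/release-agent.log") }
end

llm = LLM.openai(key: ENV["KEY"])
agent = ReleaseAgent.new(llm)
agent.talk("Summarize the release state.") # turns and tool loops use the scoped tracer
```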
data/README.md CHANGED
@@ -4,7 +4,7 @@
 <p align="center">
   <a href="https://0x1eef.github.io/x/llm.rb?rebuild=1"><img src="https://img.shields.io/badge/docs-0x1eef.github.io-blue.svg" alt="RubyDoc"></a>
   <a href="https://opensource.org/license/0bsd"><img src="https://img.shields.io/badge/License-0BSD-orange.svg?" alt="License"></a>
-  <a href="https://github.com/llmrb/llm.rb/tags"><img src="https://img.shields.io/badge/version-5.2.1-green.svg?" alt="Version"></a>
+  <a href="https://github.com/llmrb/llm.rb/tags"><img src="https://img.shields.io/badge/version-5.4.0-green.svg?" alt="Version"></a>
 </p>
 
 ## About
@@ -26,7 +26,7 @@ execution model instead of a pile of adapters.
 
 Want to see some code? Jump to [the examples](#examples) section. <br>
 Want to see an agentic framework built on top of llm.rb? Check out [general-intelligence-systems/brute](https://github.com/general-intelligence-systems/brute). <br>
-Want a taste of what llm.rb can build? See [the screencast](#screencast).
+Want to see a self-hosted LLM environment built on llm.rb? Check out [Relay](https://github.com/llmrb/relay).
 
 ## Architecture
 
@@ -87,6 +87,7 @@ Review the release state, summarize what changed, and prepare the release.
 class Agent < LLM::Agent
   model "gpt-5.4-mini"
   skills "./skills/release"
+  tracer { LLM::Tracer::Logger.new(llm, path: "logs/release-agent.log") }
 end
 
 llm = LLM.openai(key: ENV["KEY"])
@@ -193,11 +194,22 @@ Transformers let llm.rb rewrite outgoing prompts and params before a request
 is sent to the provider. They also live on
 [`LLM::Context`](https://0x1eef.github.io/x/llm.rb/LLM/Context.html), but
 they solve a different problem from guards: instead of blocking execution,
-they can normalize or scrub what gets sent.
+they can normalize or scrub what gets sent. When a stream is present, that
+lifecycle is also exposed through
+[`LLM::Stream`](https://0x1eef.github.io/x/llm.rb/LLM/Stream.html) with
+`on_transform` and `on_transform_finish`.
 
 That makes them a good fit for things like PII scrubbing, prompt
 normalization, or request-level param injection. A transformer just needs to
-implement `call(ctx, prompt, params)` and return `[prompt, params]`.
+implement `call(ctx, prompt, params)` and return `[prompt, params]`. That
+means a transformer can scrub plain text prompts, but it can also scrub
+[`LLM::Function::Return`](https://0x1eef.github.io/x/llm.rb/LLM/Function/Return.html)
+values. In other words, you can intercept a tool call's return value and
+modify it before sending it back to the LLM.
+
+That is also a useful UI hook. A stream can surface messages like
+`Anonymizing your data...` before a scrubber runs and `Data anonymized.`
+after it finishes.
 
 ```ruby
 class ScrubPII
@@ -212,22 +224,45 @@ class ScrubPII
   def scrub(prompt)
     case prompt
     when String then prompt.gsub(EMAIL, "[REDACTED_EMAIL]")
+    when Array then prompt.map { scrub(_1) }
+    when LLM::Function::Return then on_tool_return(prompt)
     else prompt
     end
   end
+
+  def on_tool_return(result)
+    value = case result.name
+            when "lookup-customer" then scrub_value(result.value)
+            else result.value
+            end
+    LLM::Function::Return.new(result.id, result.name, value)
+  end
+
+  def scrub_value(value)
+    case value
+    when String then value.gsub(EMAIL, "[REDACTED_EMAIL]")
+    when Array then value.map { scrub_value(_1) }
+    when Hash then value.transform_values { scrub_value(_1) }
+    else value
+    end
+  end
 end
 
 ctx = LLM::Context.new(llm)
 ctx.transformer = ScrubPII.new
 ```
 
+When a stream is present, that transformer lifecycle is also exposed through
+`on_transform` and `on_transform_finish` on
+[`LLM::Stream`](https://0x1eef.github.io/x/llm.rb/LLM/Stream.html).
+
 #### LLM::Stream
 
 `LLM::Stream` is not just for printing tokens. It supports `on_content`,
-`on_reasoning_content`, `on_tool_call`, `on_tool_return`, `on_compaction`,
-and `on_compaction_finish`, which means visible output, reasoning output, tool
-execution, and context compaction can all be driven through the same
-execution path.
+`on_reasoning_content`, `on_tool_call`, `on_tool_return`, `on_transform`,
+`on_transform_finish`, `on_compaction`, and `on_compaction_finish`, which
+means visible output, reasoning output, request rewriting, tool execution,
+and context compaction can all be driven through the same execution path.
 
 ```ruby
 class Stream < LLM::Stream
@@ -477,6 +512,29 @@ loop do
 end
 ```
 
+#### Multimodal: Local Files
+
+In llm.rb, a prompt can be a string, an [`LLM::Prompt`](https://0x1eef.github.io/x/llm.rb/LLM/Prompt.html), or an array.
+When you use an array, each element can be plain text or a tagged object such as
+[`ctx.image_url(...)`](https://0x1eef.github.io/x/llm.rb/LLM/Context.html#image_url-instance_method),
+[`ctx.local_file(...)`](https://0x1eef.github.io/x/llm.rb/LLM/Context.html#local_file-instance_method),
+or [`ctx.remote_file(...)`](https://0x1eef.github.io/x/llm.rb/LLM/Context.html#remote_file-instance_method).
+Those tagged objects carry the metadata the provider adapter needs to turn one
+Ruby prompt into the provider-specific multimodal request schema.
+
+`ctx.local_file(path)` tags a local path as a `:local_file` object around
+`LLM.File(path)`. If the model understands that file type, you can include it
+directly in the prompt array instead of uploading it first through a provider
+Files API:
+
+```ruby
+require "llm"
+
+llm = LLM.openai(key: ENV["KEY"])
+ctx = LLM::Context.new(llm)
+ctx.talk ["Summarize this document.", ctx.local_file("README.md")]
+```
+
 #### Agent
 
 This example uses [`LLM::Agent`](https://0x1eef.github.io/x/llm.rb/LLM/Agent.html) directly and lets the agent manage tool execution. <br> See the [deepdive (web)](https://0x1eef.github.io/x/llm.rb/file.deepdive.html) or [deepdive (markdown)](resources/deepdive.md) for more examples.
@@ -509,6 +567,7 @@ class Agent < LLM::Agent
   model "gpt-5.4-mini"
   instructions "You are a concise release assistant."
   skills "./skills/release", "./skills/review"
+  tracer { LLM::Tracer::Logger.new(llm, path: "logs/release-agent.log") }
 end
 
 llm = LLM.openai(key: ENV["KEY"])
@@ -738,13 +797,6 @@ mcp.run do
 end
 ```
 
-## Screencast
-
-This screencast was built on an older version of llm.rb, but it still shows
-how capable the runtime can be in a real application:
-
-[![Watch the llm.rb screencast](https://img.youtube.com/vi/Jb7LNUYlCf4/maxresdefault.jpg)](https://www.youtube.com/watch?v=x1K4wMeO_QA)
-
 ## Resources
 
 - [deepdive (web)](https://0x1eef.github.io/x/llm.rb/file.deepdive.html) and
@@ -41,6 +41,11 @@ module LLM::ActiveRecord
     agent.concurrency(concurrency)
   end
 
+  def tracer(tracer = nil, &block)
+    return agent.tracer if tracer.nil? && !block
+    agent.tracer(tracer, &block)
+  end
+
   def agent
     @agent ||= Class.new(LLM::Agent)
   end
data/lib/llm/agent.rb CHANGED
@@ -115,6 +115,26 @@ module LLM
       @concurrency = concurrency
     end
 
+    ##
+    # Set or get the default tracer.
+    #
+    # When a block is provided, it is stored and evaluated lazily against the
+    # agent instance during initialization so it can build a tracer from the
+    # resolved provider.
+    #
+    # @example
+    #   class Agent < LLM::Agent
+    #     tracer { LLM::Tracer::Logger.new(llm, io: $stdout) }
+    #   end
+    #
+    # @param [LLM::Tracer, Proc, nil] tracer
+    # @yieldreturn [LLM::Tracer, nil]
+    # @return [LLM::Tracer, Proc, nil]
+    def self.tracer(tracer = nil, &block)
+      return @tracer if tracer.nil? && !block
+      @tracer = block || tracer
+    end
+
     ##
     # @param [LLM::Provider] provider
     #  A provider
@@ -131,6 +151,7 @@ module LLM
       defaults = {model: self.class.model, tools: self.class.tools, skills: self.class.skills, schema: self.class.schema}.compact
       @concurrency = params.delete(:concurrency) || self.class.concurrency
      @llm = llm
+      @tracer = resolve_option(self.class.tracer) unless self.class.tracer.nil?
       @ctx = LLM::Context.new(llm, defaults.merge({guard: true}).merge(params))
     end
 
@@ -179,7 +200,7 @@ module LLM
     ##
     # @return [Array<LLM::Function>]
     def functions
-      @ctx.functions
+      @tracer ? @llm.with_tracer(@tracer) { @ctx.functions } : @ctx.functions
     end
 
     ##
@@ -193,14 +214,14 @@ module LLM
     # @see LLM::Context#call
     # @return [Object]
     def call(...)
-      @ctx.call(...)
+      @tracer ? @llm.with_tracer(@tracer) { @ctx.call(...) } : @ctx.call(...)
     end
 
     ##
     # @see LLM::Context#wait
     # @return [Array<LLM::Function::Return>]
     def wait(...)
-      @ctx.wait(...)
+      @tracer ? @llm.with_tracer(@tracer) { @ctx.wait(...) } : @ctx.wait(...)
     end
 
     ##
@@ -257,7 +278,7 @@ module LLM
     # @return [LLM::Tracer]
     #  Returns an LLM tracer
     def tracer
-      @ctx.tracer
+      @tracer || @ctx.tracer
     end
 
     ##
@@ -371,14 +392,21 @@ module LLM
     end
 
     def run_loop(method, prompt, params)
-      max = Integer(params.delete(:tool_attempts) || 25)
-      res = @ctx.public_send(method, apply_instructions(prompt), params)
-      max.times do
-        break if @ctx.functions.empty?
-        res = @ctx.public_send(method, call_functions, params)
+      loop = proc do
+        max = Integer(params.delete(:tool_attempts) || 25)
+        res = @ctx.public_send(method, apply_instructions(prompt), params)
+        max.times do
+          break if @ctx.functions.empty?
+          res = @ctx.public_send(method, call_functions, params)
+        end
+        raise LLM::ToolLoopError, "pending tool calls remain" unless @ctx.functions.empty?
+        res
       end
-      raise LLM::ToolLoopError, "pending tool calls remain" unless @ctx.functions.empty?
-      res
+      @tracer ? @llm.with_tracer(@tracer, &loop) : loop.call
+    end
+
+    def resolve_option(option)
+      Proc === option ? instance_exec(&option) : option
     end
   end
 end
data/lib/llm/context.rb CHANGED
@@ -489,7 +489,11 @@ module LLM
 
    def transform(prompt, params)
      return [prompt, params] unless transformer
+      stream = params[:stream]
+      stream.on_transform(self, transformer) if LLM::Stream === stream
      transformer.call(self, prompt, params)
+    ensure
+      stream.on_transform_finish(self, transformer) if LLM::Stream === stream
    end
 
    def guarded_return_for(function, warning)
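For reference, the transformer contract that `transform` wraps is small: any object that implements `call(ctx, prompt, params)` and returns `[prompt, params]` qualifies, per the README section above. A minimal sketch; `InjectDefaults` is a hypothetical transformer, not part of llm.rb:

```ruby
# Hypothetical transformer: injects a request-level param unless the
# caller already set one, and passes the prompt through unchanged.
class InjectDefaults
  def call(ctx, prompt, params)
    [prompt, {temperature: 0.2}.merge(params)]
  end
end

ctx = LLM::Context.new(llm)
ctx.transformer = InjectDefaults.new
# When params[:stream] is an LLM::Stream, transform above also fires
# on_transform / on_transform_finish around this call.
```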
data/lib/llm/function.rb CHANGED
@@ -42,6 +42,33 @@ class LLM::Function
   extend LLM::Function::Registry
   prepend LLM::Function::Tracing
 
+  ##
+  # {LLM::Function::Return LLM::Function::Return} represents the result of a
+  # tool call.
+  #
+  # In llm.rb, tool execution is not complete until the requested function is
+  # answered with a return object and that return is sent back through the
+  # context. This is the object that closes that loop.
+  #
+  # The return carries:
+  # - the tool call ID
+  # - the tool name
+  # - the tool's return value
+  #
+  # That value is usually a `Hash`, but it can be any JSON-like structure your
+  # tool returns. `LLM::Function#call` produces one automatically, and
+  # `LLM::Function#cancel` produces one that represents a cancelled tool call.
+  #
+  # You can also construct one directly when you need to intercept, scrub, or
+  # synthesize a tool return before sending it back to the model.
+  #
+  # @example Returning a normal tool result
+  #   ret = LLM::Function::Return.new("call_1", "weather", {forecast: "sunny"})
+  #   ctx.talk(ret)
+  #
+  # @example Returning a tool result after rewriting its payload
+  #   value = ret.value.merge(email: "[REDACTED_EMAIL]")
+  #   ctx.talk(LLM::Function::Return.new(ret.id, ret.name, value))
   Return = Struct.new(:id, :name, :value) do
     ##
     # Returns true when the return value represents an error.
@@ -191,12 +218,12 @@ class LLM::Function
     task = case strategy
     when :task
       require "async" unless defined?(::Async)
-      Async { call }
+      Async { call! }
     when :thread
-      Thread.new { call }
+      Thread.new { call! }
     when :fiber
       Fiber.new do
-        call
+        call!
       ensure
        Fiber.yield
      end.tap(&:resume)
@@ -301,9 +328,16 @@ class LLM::Function
   # Returns a Return object with either the function result or error information.
   def call_function
     runner = ((Class === @runner) ? @runner.new : @runner)
+    runner.tracer = @tracer if runner.respond_to?(:tracer=)
     kwargs = Hash === arguments ? arguments.transform_keys(&:to_sym) : arguments
     Return.new(id, name, runner.call(**kwargs))
   rescue => ex
     Return.new(id, name, {error: true, type: ex.class.name, message: ex.message})
   end
+
+  def call!
+    llm = @tracer&.llm
+    return call unless llm.respond_to?(:with_tracer)
+    llm.with_tracer(@tracer) { call }
+  end
 end
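The new `call!` wrapper exists because a spawned thread, task, or fiber does not inherit the dynamically scoped tracer. A minimal sketch of the scoping idea, assuming a provider in `llm`; `work` is a hypothetical stand-in for a tool body:

```ruby
tracer = LLM::Tracer::Logger.new(llm, path: "logs/tools.log")
work = -> { :done } # hypothetical stand-in for a tool call

llm.with_tracer(tracer) do
  Thread.new do
    # A new thread would otherwise fall back to the provider's default
    # tracer; call! re-enters the scope, as in the diff above.
    llm.with_tracer(tracer) { work.call }
  end.join
end
```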
@@ -62,6 +62,11 @@ module LLM::Sequel
     agent.concurrency(concurrency)
   end
 
+  def tracer(tracer = nil, &block)
+    return agent.tracer if tracer.nil? && !block
+    agent.tracer(tracer, &block)
+  end
+
   def agent
     @agent ||= Class.new(LLM::Agent)
   end
data/lib/llm/skill.rb CHANGED
@@ -74,11 +74,12 @@ module LLM
     # @param [LLM::Context] ctx
     # @return [Hash]
     def call(ctx)
-      instructions, tools = self.instructions, self.tools
+      instructions, tools, tracer = self.instructions, self.tools, ctx.llm.tracer
       params = ctx.params.merge(mode: ctx.mode).reject { [:tools, :schema].include?(_1) }
       agent = Class.new(LLM::Agent) do
         instructions(instructions)
         tools(*tools)
+        tracer(tracer)
       end.new(ctx.llm, params)
       agent.messages.concat(messages_for(ctx))
       res = agent.talk("Solve the user's query.")
@@ -95,6 +96,7 @@ module LLM
       Class.new(LLM::Tool) do
         name skill.name
         description skill.description
+        attr_accessor :tracer
 
         define_method(:call) do
           skill.call(ctx)
data/lib/llm/stream.rb CHANGED
@@ -19,7 +19,7 @@ module LLM
   # The most common callback is {#on_content}, which also maps to {#<<}.
   # Providers may also call {#on_reasoning_content} and {#on_tool_call} when
   # that data is available. Runtime features such as context compaction may
-  # also emit lifecycle callbacks like {#on_compaction}.
+  # also emit lifecycle callbacks like {#on_transform} or {#on_compaction}.
   class Stream
     require_relative "stream/queue"
 
@@ -112,6 +112,24 @@ module LLM
       nil
     end
 
+    ##
+    # Called before a context transformer rewrites a prompt.
+    # @param [LLM::Context] ctx
+    # @param [#call] transformer
+    # @return [nil]
+    def on_transform(ctx, transformer)
+      nil
+    end
+
+    ##
+    # Called after a context transformer finishes rewriting a prompt.
+    # @param [LLM::Context] ctx
+    # @param [#call] transformer
+    # @return [nil]
+    def on_transform_finish(ctx, transformer)
+      nil
+    end
+
     ##
     # Called before a context compaction starts.
     # @param [LLM::Context] ctx
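A minimal sketch of the UI hook these callbacks enable, reusing the README's `ScrubPII` transformer; `ProgressStream` is hypothetical, and passing the stream via `stream:` is an assumption based on the `params[:stream]` read in `LLM::Context#transform` above:

```ruby
# Hypothetical stream that narrates transformer work for a UI.
class ProgressStream < LLM::Stream
  def on_transform(ctx, transformer)
    puts "Anonymizing your data..."
    nil
  end

  def on_transform_finish(ctx, transformer)
    puts "Data anonymized."
    nil
  end
end

ctx = LLM::Context.new(llm)
ctx.transformer = ScrubPII.new
ctx.talk("Contact me at jane@example.com", stream: ProgressStream.new)
```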
@@ -11,7 +11,8 @@ class LLM::Tool
 #   class Greeter < LLM::Tool
 #     name "greeter"
 #     description "Greets the user"
-#     param :name, String, "The user's name", required: true
+#     parameter :name, String, "The user's name"
+#     required %i[name]
 #
 #     def call(name:)
 #       puts "Hello, #{name}!"
@@ -41,6 +42,19 @@ class LLM::Tool
       end
     end
   end
+  alias_method :parameter, :param
+
+  ##
+  # Mark existing parameters as required.
+  # @param names [Array<Symbol,String>]
+  # @return [LLM::Schema::Object]
+  def required(names)
+    lock do
+      function.params.tap do |schema|
+        [*names].each { Utils.fetch(schema.properties, _1).required }
+      end
+    end
+  end
 
   ##
   # @api private
@@ -68,6 +82,10 @@ class LLM::Tool
         leaf.enum(*enum) if enum
         leaf
       end
+
+      def fetch(properties, name)
+        properties[name] || properties.fetch(name.to_s)
+      end
     end
   end
 end
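Side by side, the two DSL forms accepted after this change; `Greeter` follows the doc example in the hunk above:

```ruby
class Greeter < LLM::Tool
  name "greeter"
  description "Greets the user"

  # New OmniAI-style form: declare the parameter, then mark it required.
  parameter :name, String, "The user's name"
  required %i[name]
  # Older form, still supported:
  #   param :name, String, "The user's name", required: true

  def call(name:)
    puts "Hello, #{name}!"
  end
end
```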
@@ -114,7 +114,7 @@ module LLM
     # @param [LLM::Response] res
     # @api private
     def finish_attributes(operation, res)
-      case @provider.class.to_s
+      case @llm.class.to_s
       when "LLM::OpenAI" then openai_attributes(operation, res)
       else {}
       end
@@ -233,7 +233,7 @@
     # @param [LLM::Response] res
     # @api private
     def finish_attributes(operation, res)
-      case @provider.class.to_s
+      case @llm.class.to_s
       when "LLM::OpenAI" then openai_attributes(operation, res)
       else {}
       end
data/lib/llm/tracer.rb CHANGED
@@ -14,13 +14,17 @@ module LLM
     require_relative "tracer/langsmith"
     require_relative "tracer/null"
 
+    ##
+    # @return [LLM::Provider]
+    attr_reader :llm
+
     ##
     # @param [LLM::Provider] provider
     #  A provider
     # @param [Hash] options
     #  A hash of options
     def initialize(provider, options = {})
-      @provider = provider
+      @llm = provider
       @options = {}
     end
 
@@ -124,7 +128,7 @@ module LLM
     ##
     # @return [String]
     def inspect
-      "#<#{self.class.name}:0x#{object_id.to_s(16)} @provider=#{@provider.class} @tracer=#{@tracer.inspect}>"
+      "#<#{self.class.name}:0x#{object_id.to_s(16)} @provider=#{@llm.class} @tracer=#{@tracer.inspect}>"
     end
 
     ##
@@ -245,19 +249,19 @@ module LLM
     ##
     # @return [String]
     def provider_name
-      @provider.class.name.split("::").last.downcase
+      @llm.class.name.split("::").last.downcase
     end
 
     ##
     # @return [String]
     def provider_host
-      @provider.instance_variable_get(:@host)
+      @llm.instance_variable_get(:@host)
     end
 
     ##
     # @return [String]
     def provider_port
-      @provider.instance_variable_get(:@port)
+      @llm.instance_variable_get(:@port)
     end
   end
 end
data/lib/llm/version.rb CHANGED
@@ -1,5 +1,5 @@
 # frozen_string_literal: true
 
 module LLM
-  VERSION = "5.2.1"
+  VERSION = "5.4.0"
 end
metadata CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: llm.rb
 version: !ruby/object:Gem::Version
-  version: 5.2.1
+  version: 5.4.0
 platform: ruby
 authors:
 - Antar Azri