llm.rb 5.3.0 → 6.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 39e1632fb63f83a65c5a146ea2a2f4178d0d99d26d2a347f36d360c09ea9845d
- data.tar.gz: 04b2236d5cac243cc496b686d8d8a5097676e7bcd6973bacfa8d1f7e8d48e270
+ metadata.gz: 8cc16b548be77e0a2bb78c0e130b14f0ac8fdabb2c2f082dd2824282e5a5733b
+ data.tar.gz: 668655ba6a7d65d44b53cc1b8b33afddf3563dc216df83181fb2b7394a2847ff
  SHA512:
- metadata.gz: 1b4a68bd3b3e109a00f996f520296405ad6066b4f17c1e59da4077c2023c5fe0c95e770b9bd563a531748a16d79355f8db5fb2dcd66ac42673698ddcbea07704
- data.tar.gz: 12713c07834164f3d13d01613126488cc19e937b8f127fcdbf839d8c75f2f6b77c6207e7e3e6f515d996e35aedb6d6e7333ba563a3f4578f4c33f454d41c6088
+ metadata.gz: 2257aeec49a43c56bfc974e3fc190a850ea27c98c437c7c242efe6c645c000eebdbe2e731fcc0b287303690c1055768f2de32be6ac900391c5156260da9c2ce5
+ data.tar.gz: 8f4f8f3475ac1bd2d9ffbd8c0b413b46224936459eb0ed6bb395876b994c1ab67ec02cf670176f90b8d9b892b59ab4b67271f4b69c80fa7c094aa89310ab5c58
data/CHANGELOG.md CHANGED
@@ -1,9 +1,70 @@
  # Changelog

- ## Unreleased
+ ## v6.0.0
+
+ Changes since `v5.4.0`.
+
+ This release simplifies the ORM persistence contract around serialized
+ `data` state, removing the assumption of reserved `provider`, `model`, and
+ usage columns. Provider selection must now come from `provider:` hooks,
+ model defaults come from `context:` or agent DSL, and usage is read from the
+ serialized runtime state. Alongside this breaking change, Sequel JSON and
+ JSONB persistence is fixed, ractor-backed tools now fire tracer callbacks,
+ and `LLM::RactorError` is raised for unsupported ractor tool work.
+
+ ### Change
+
+ * **Simplify ORM persistence to serialized `data` state** <br>
+ Change the built-in ActiveRecord and Sequel wrappers to treat serialized
+ `data` as the persistence contract, instead of assuming reserved
+ `provider`, `model`, and usage columns. Provider selection must now come
+ from `provider:` hooks that resolve a real `LLM::Provider` instance, model
+ defaults come from `context:` or agent DSL, and `usage` is read from the
+ serialized runtime state.
+
+ ### Fix
+
+ * **Fix Sequel JSON and JSONB persistence** <br>
+ Load Sequel PostgreSQL JSON support when `plugin :llm` is configured with
+ `format: :json` or `:jsonb`, and wrap structured payloads correctly so
+ persisted context state can be stored in PostgreSQL JSON columns.
+
+ * **Trace ractor-backed tool callbacks** <br>
+ Make tool tracers fire `on_tool_start` and `on_tool_finish` for
+ class-based `:ractor` execution too, so ractor-backed tool calls show up
+ in tracer callbacks like the other concurrent tool paths.
+
+ * **Raise `LLM::RactorError` for unsupported ractor tool work** <br>
+ Add `LLM::RactorError` and fail fast when `:ractor` execution is requested
+ for unsupported tool types such as skill-backed tools, instead of letting
+ deeper Ruby isolation errors leak out later in execution.
+
+ ## v5.4.0

  Changes since `v5.3.0`.

+ This release expands tracer support around agentic execution. It lets
+ `LLM::Agent` define scoped tracers through the agent DSL and fixes concurrent
+ tool execution so those scoped tracers stay attached when work crosses
+ thread, task, fiber, and skill boundaries.
+
+ ### Change
+
+ * **Add agent-scoped tracers** <br>
+ Let `LLM::Agent` classes define `tracer ...` or `tracer { ... }` so an
+ agent can carry its own tracer without replacing the provider's default
+ tracer. The resolved tracer is scoped to that agent's turns, tool loops,
+ and pending tool access. Available through the `acts_as_agent` and Sequel
+ agent plugin `tracer` DSL too.
+
+ ### Fix
+
+ * **Preserve scoped tracers across concurrent tool work** <br>
+ Keep agent- and request-scoped tracers attached when tool execution
+ crosses `:thread`, `:task`, or `:fiber` boundaries, including skill
+ execution, so spawned work does not fall back to the provider default
+ tracer.
+
  ## v5.3.0

  Changes since `v5.2.1`.
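
For context, the practical schema consequence of the v6.0.0 entry above: the ORM wrappers now read and write only the serialized `data` column, so reserved `provider`, `model`, and usage columns are no longer consulted. A minimal sketch of a fresh table under the new contract; the table name and the optional `provider_name`/`model_name` columns are illustrative, not required by llm.rb:

```ruby
# Hypothetical migration for the v6.0.0 persistence contract. Only the
# serialized `data` column is read and written by the wrapper; the other
# columns are app-owned and surfaced through provider:/context: hooks.
class CreateTickets < ActiveRecord::Migration[8.0]
  def change
    create_table :tickets do |t|
      t.text :data              # serialized LLM::Context state (format: :string)
      t.string :provider_name   # optional: read by a provider: hook
      t.string :model_name      # optional: read by a context: hook
      t.timestamps
    end
  end
end
```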
data/README.md CHANGED
@@ -4,7 +4,7 @@
4
4
  <p align="center">
5
5
  <a href="https://0x1eef.github.io/x/llm.rb?rebuild=1"><img src="https://img.shields.io/badge/docs-0x1eef.github.io-blue.svg" alt="RubyDoc"></a>
6
6
  <a href="https://opensource.org/license/0bsd"><img src="https://img.shields.io/badge/License-0BSD-orange.svg?" alt="License"></a>
7
- <a href="https://github.com/llmrb/llm.rb/tags"><img src="https://img.shields.io/badge/version-5.3.0-green.svg?" alt="Version"></a>
7
+ <a href="https://github.com/llmrb/llm.rb/tags"><img src="https://img.shields.io/badge/version-6.0.0-green.svg?" alt="Version"></a>
8
8
  </p>
9
9
 
10
10
  ## About
@@ -25,7 +25,6 @@ schemas, files, and persisted state, so real systems can be built out of one coh
25
25
  execution model instead of a pile of adapters.
26
26
 
27
27
  Want to see some code? Jump to [the examples](#examples) section. <br>
28
- Want to see an agentic framework built on top of llm.rb? Check out [general-intelligence-systems/brute](https://github.com/general-intelligence-systems/brute). <br>
29
28
  Want to see a self-hosted LLM environment built on llm.rb? Check out [Relay](https://github.com/llmrb/relay).
30
29
 
31
30
  ## Architecture
@@ -87,6 +86,7 @@ Review the release state, summarize what changed, and prepare the release.
87
86
  class Agent < LLM::Agent
88
87
  model "gpt-5.4-mini"
89
88
  skills "./skills/release"
89
+ tracer { LLM::Tracer::Logger.new(llm, path: "logs/release-agent.log") }
90
90
  end
91
91
 
92
92
  llm = LLM.openai(key: ENV["KEY"])
@@ -101,20 +101,26 @@ separate agent table or a second persistence layer.
101
101
 
102
102
  `acts_as_agent` extends a model with agent capabilities: the same runtime
103
103
  surface as [`LLM::Agent`](https://0x1eef.github.io/x/llm.rb/LLM/Agent.html),
104
- because it actually wraps an `LLM::Agent`, plus persistence through a text,
105
- JSON, or JSONB-backed column on the same table.
104
+ because it actually wraps an `LLM::Agent`, plus persistence through one text,
105
+ JSON, or JSONB-backed `data` column on the same table. If your app also has
106
+ provider or model columns, provide them to llm.rb through `set_provider` and
107
+ `set_context`.
106
108
 
107
109
 
108
110
  ```ruby
109
111
  class Ticket < ApplicationRecord
110
- acts_as_agent provider: :set_provider
112
+ acts_as_agent provider: :set_provider, context: :set_context
111
113
  model "gpt-5.4-mini"
112
114
  instructions "You are a support assistant."
113
115
 
114
116
  private
115
117
 
116
118
  def set_provider
117
- { key: ENV["#{provider.upcase}_SECRET"], persistent: true }
119
+ LLM.openai(key: ENV["OPENAI_SECRET"])
120
+ end
121
+
122
+ def set_context
123
+ { mode: :responses, store: false }
118
124
  end
119
125
  end
120
126
  ```
@@ -302,7 +308,7 @@ finer sequential control across several steps before shutting the client down.
302
308
  ```ruby
303
309
  mcp = LLM::MCP.http(
304
310
  url: "https://api.githubcopilot.com/mcp/",
305
- headers: {"Authorization" => "Bearer #{ENV.fetch("GITHUB_PAT")}"}
311
+ headers: {"Authorization" => "Bearer #{ENV["GITHUB_PAT"]}"}
306
312
  ).persistent
307
313
  mcp.run do
308
314
  ctx = LLM::Context.new(llm, tools: mcp.tools)
@@ -375,7 +381,8 @@ worker.join
375
381
  Use threads, fibers, async tasks, or experimental ractors without
376
382
  rewriting your tool layer. The current `:ractor` mode is for class-based
377
383
  tools and does not support MCP tools, but mixed workloads can branch on
378
- `tool.mcp?` and choose a supported strategy per tool. `:ractor` is
384
+ `tool.mcp?` and choose a supported strategy per tool. Class-based
385
+ `:ractor` tools still emit normal tool tracer callbacks. `:ractor` is
379
386
  especially useful for CPU-bound tools, while `:task`, `:fiber`, or
380
387
  `:thread` may be a better fit for I/O-bound work.
381
388
  - **Advanced workloads are built in, not bolted on** <br>
@@ -566,6 +573,7 @@ class Agent < LLM::Agent
566
573
  model "gpt-5.4-mini"
567
574
  instructions "You are a concise release assistant."
568
575
  skills "./skills/release", "./skills/review"
576
+ tracer { LLM::Tracer::Logger.new(llm, path: "logs/release-agent.log") }
569
577
  end
570
578
 
571
579
  llm = LLM.openai(key: ENV["KEY"])
@@ -694,7 +702,7 @@ worker.join
694
702
 
695
703
  #### Sequel (ORM)
696
704
 
697
- The `plugin :llm` integration wraps [`LLM::Context`](https://0x1eef.github.io/x/llm.rb/LLM/Context.html) on a `Sequel::Model` and keeps tool execution explicit. <br> See the [deepdive (web)](https://0x1eef.github.io/x/llm.rb/file.deepdive.html) or [deepdive (markdown)](resources/deepdive.md) for more examples.
705
+ The `plugin :llm` integration wraps [`LLM::Context`](https://0x1eef.github.io/x/llm.rb/LLM/Context.html) on a `Sequel::Model` and keeps tool execution explicit. Like the ActiveRecord wrappers, its built-in persistence contract is the serialized `data` column, while `provider:` resolves a real `LLM::Provider` instance and `context:` injects defaults such as `model:`. <br> See the [deepdive (web)](https://0x1eef.github.io/x/llm.rb/file.deepdive.html) or [deepdive (markdown)](resources/deepdive.md) for more examples.
698
706
 
699
707
  ```ruby
700
708
  require "llm"
@@ -703,10 +711,20 @@ require "sequel"
703
711
  require "sequel/plugins/llm"
704
712
 
705
713
  class Context < Sequel::Model
706
- plugin :llm, provider: -> { { key: ENV["#{provider.upcase}_SECRET"], persistent: true } }
714
+ plugin :llm, provider: :set_provider, context: :set_context
715
+
716
+ private
717
+
718
+ def set_provider
719
+ LLM.openai(key: ENV["OPENAI_SECRET"])
720
+ end
721
+
722
+ def set_context
723
+ {model: "gpt-5.4-mini", mode: :responses, store: false}
724
+ end
707
725
  end
708
726
 
709
- ctx = Context.create(provider: "openai", model: "gpt-5.4-mini")
727
+ ctx = Context.create
710
728
  ctx.talk("Remember that my favorite language is Ruby")
711
729
  puts ctx.talk("What is my favorite language?").content
712
730
  ```
@@ -714,36 +732,76 @@ puts ctx.talk("What is my favorite language?").content
714
732
  #### ActiveRecord (ORM): acts_as_llm
715
733
 
716
734
  The `acts_as_llm` method wraps [`LLM::Context`](https://0x1eef.github.io/x/llm.rb/LLM/Context.html) and
717
- provides full control over tool execution. <br> See the [deepdive (web)](https://0x1eef.github.io/x/llm.rb/file.deepdive.html) or [deepdive (markdown)](resources/deepdive.md) for more examples.
735
+ provides full control over tool execution. Its built-in persistence contract is
736
+ one serialized `data` column. If your app has provider, model, or usage
737
+ columns, provide them to llm.rb through `provider:` and `context:` instead of
738
+ relying on reserved wrapper columns.
739
+
740
+ See the [deepdive (web)](https://0x1eef.github.io/x/llm.rb/file.deepdive.html) or [deepdive (markdown)](resources/deepdive.md) for more examples.
718
741
 
719
742
  ```ruby
720
743
  require "llm"
721
- require "net/http/persistent"
722
744
  require "active_record"
723
745
  require "llm/active_record"
724
746
 
725
747
  class Context < ApplicationRecord
726
- acts_as_llm provider: -> { { key: ENV["#{provider.upcase}_SECRET"], persistent: true } }
748
+ acts_as_llm provider: :set_provider, context: :set_context
749
+
750
+ private
751
+
752
+ def set_provider
753
+ LLM.openai(key: ENV["OPENAI_SECRET"])
754
+ end
755
+
756
+ def set_context
757
+ {model: "gpt-5.4-mini", mode: :responses, store: false}
758
+ end
727
759
  end
728
760
 
729
- ctx = Context.create!(provider: "openai", model: "gpt-5.4-mini")
761
+ ctx = Context.create!
730
762
  ctx.talk("Remember that my favorite language is Ruby")
731
763
  puts ctx.talk("What is my favorite language?").content
732
764
  ```
733
765
 
766
+ ```ruby
767
+ require "llm"
768
+ require "active_record"
769
+ require "llm/active_record"
770
+
771
+ class Context < ApplicationRecord
772
+ acts_as_llm provider: :set_provider, context: :set_context
773
+
774
+ # Optional application columns can still provide the provider and context.
775
+ # For example, `provider_name` and `model_name` can be normal columns.
776
+
777
+ private
778
+
779
+ def set_provider
780
+ LLM.public_send(provider_name, key: provider_key)
781
+ end
782
+
783
+ def set_context
784
+ {model: model_name, mode: :responses, store: false}
785
+ end
786
+ end
787
+ ```
788
+
734
789
  #### ActiveRecord (ORM): acts_as_agent
735
790
 
736
791
  The `acts_as_agent` method wraps [`LLM::Agent`](https://0x1eef.github.io/x/llm.rb/LLM/Agent.html) and
737
- manages tool execution for you. <br> See the [deepdive (web)](https://0x1eef.github.io/x/llm.rb/file.deepdive.html) or [deepdive (markdown)](resources/deepdive.md) for more examples.
792
+ manages tool execution for you. Like `acts_as_llm`, its built-in persistence
793
+ contract is one serialized `data` column. If your app has provider or model
794
+ columns, provide them to llm.rb through your hooks and agent DSL.
795
+
796
+ See the [deepdive (web)](https://0x1eef.github.io/x/llm.rb/file.deepdive.html) or [deepdive (markdown)](resources/deepdive.md) for more examples.
738
797
 
739
798
  ```ruby
740
799
  require "llm"
741
- require "net/http/persistent"
742
800
  require "active_record"
743
801
  require "llm/active_record"
744
802
 
745
803
  class Ticket < ApplicationRecord
746
- acts_as_agent provider: :set_provider
804
+ acts_as_agent provider: :set_provider, context: :set_context
747
805
  model "gpt-5.4-mini"
748
806
  instructions "You are a concise support assistant."
749
807
  tools SearchDocs, Escalate
@@ -752,14 +810,40 @@ class Ticket < ApplicationRecord
752
810
  private
753
811
 
754
812
  def set_provider
755
- { key: ENV["#{provider.upcase}_SECRET"], persistent: true }
813
+ LLM.openai(key: ENV["OPENAI_SECRET"])
814
+ end
815
+
816
+ def set_context
817
+ {mode: :responses, store: false}
756
818
  end
757
819
  end
758
820
 
759
- ticket = Ticket.create!(provider: "openai", model: "gpt-5.4-mini")
821
+ ticket = Ticket.create!
760
822
  puts ticket.talk("How do I rotate my API key?").content
761
823
  ```
762
824
 
825
+ ```ruby
826
+ require "llm"
827
+ require "active_record"
828
+ require "llm/active_record"
829
+
830
+ class Ticket < ApplicationRecord
831
+ acts_as_agent provider: :set_provider, context: :set_context
832
+ model "gpt-5.4-mini"
833
+ instructions "You are a concise support assistant."
834
+
835
+ private
836
+
837
+ def set_provider
838
+ LLM.public_send(provider_name, key: provider_key)
839
+ end
840
+
841
+ def set_context
842
+ {mode: :responses, store: false}
843
+ end
844
+ end
845
+ ```
846
+
763
847
  #### MCP
764
848
 
765
849
  This example uses [`LLM::MCP`](https://0x1eef.github.io/x/llm.rb/LLM/MCP.html) over HTTP so remote GitHub MCP tools run through the same `LLM::Context` tool path as local tools. It expects a GitHub token in `ENV["GITHUB_PAT"]`. See the [deepdive (web)](https://0x1eef.github.io/x/llm.rb/file.deepdive.html) or [deepdive (markdown)](resources/deepdive.md) for more examples.
@@ -771,7 +855,7 @@ require "net/http/persistent"
771
855
  llm = LLM.openai(key: ENV["KEY"])
772
856
  mcp = LLM::MCP.http(
773
857
  url: "https://api.githubcopilot.com/mcp/",
774
- headers: {"Authorization" => "Bearer #{ENV.fetch("GITHUB_PAT")}"}
858
+ headers: {"Authorization" => "Bearer #{ENV["GITHUB_PAT"]}"}
775
859
  ).persistent
776
860
 
777
861
  mcp.start
@@ -786,7 +870,7 @@ For scoped work, `mcp.run do ... end` is shorter and handles cleanup for you:
786
870
  ```ruby
787
871
  mcp = LLM::MCP.http(
788
872
  url: "https://api.githubcopilot.com/mcp/",
789
- headers: {"Authorization" => "Bearer #{ENV.fetch("GITHUB_PAT")}"}
873
+ headers: {"Authorization" => "Bearer #{ENV["GITHUB_PAT"]}"}
790
874
  ).persistent
791
875
  mcp.run do
792
876
  ctx = LLM::Context.new(llm, stream: $stdout, tools: mcp.tools)
@@ -11,7 +11,6 @@ module LLM::ActiveRecord
  # class and forwarded to an internal agent subclass.
  module ActsAsAgent
  EMPTY_HASH = LLM::ActiveRecord::ActsAsLLM::EMPTY_HASH
- DEFAULT_USAGE_COLUMNS = LLM::ActiveRecord::ActsAsLLM::DEFAULT_USAGE_COLUMNS
  DEFAULTS = LLM::ActiveRecord::ActsAsLLM::DEFAULTS
  Utils = LLM::ActiveRecord::ActsAsLLM::Utils

@@ -41,6 +40,11 @@ module LLM::ActiveRecord
  agent.concurrency(concurrency)
  end

+ def tracer(tracer = nil, &block)
+ return agent.tracer if tracer.nil? && !block
+ agent.tracer(tracer, &block)
+ end
+
  def agent
  @agent ||= Class.new(LLM::Agent)
  end
@@ -53,8 +57,6 @@ module LLM::ActiveRecord
  # @param [Class] model
  # @return [void]
  def self.extended(model)
- options = model.llm_plugin_options
- model.validates options[:provider_column], options[:model_column], presence: true
  model.include LLM::ActiveRecord::ActsAsLLM::InstanceMethods unless model.ancestors.include?(LLM::ActiveRecord::ActsAsLLM::InstanceMethods)
  model.include InstanceMethods unless model.ancestors.include?(InstanceMethods)
  model.extend ClassMethods unless model.singleton_class.ancestors.include?(ClassMethods)
@@ -72,6 +74,8 @@ module LLM::ActiveRecord
  # @option options [Proc, Symbol, LLM::Tracer, nil] :tracer
  # Optional tracer, method name, or proc that resolves to one and is
  # assigned through `llm.tracer = ...` on the resolved provider.
+ # @option options [Proc, Symbol, LLM::Provider] :provider
+ # Must resolve to an `LLM::Provider` instance for the current record.
  # @yield
  # Evaluated in the model class after the wrapper is installed, so agent
  # DSL methods such as `model`, `tools`, `schema`, `instructions`, and
@@ -79,9 +83,8 @@ module LLM::ActiveRecord
  # @return [void]
  def acts_as_agent(options = EMPTY_HASH, &block)
  options = DEFAULTS.merge(options)
- usage_columns = DEFAULT_USAGE_COLUMNS.merge(options[:usage_columns] || EMPTY_HASH)
  class_attribute :llm_plugin_options, instance_accessor: false, default: DEFAULTS unless respond_to?(:llm_plugin_options)
- self.llm_plugin_options = options.merge(usage_columns: usage_columns.freeze).freeze
+ self.llm_plugin_options = options.freeze
  extend Hooks
  class_exec(&block) if block
  end
@@ -92,11 +95,8 @@ module LLM::ActiveRecord
  # @return [LLM::Provider]
  def llm
  options = self.class.llm_plugin_options
- columns = Utils.columns(options)
- provider = self[columns[:provider_column]]
- kwargs = Utils.resolve_options(self, options[:provider], ActsAsAgent::EMPTY_HASH)
  return @llm if @llm
- @llm = LLM.method(provider).call(**kwargs)
+ @llm = Utils.resolve_provider(self, options, ActsAsAgent::EMPTY_HASH)
  @llm.tracer = Utils.resolve_option(self, options[:tracer]) if options[:tracer]
  @llm
  end
@@ -108,10 +108,9 @@ module LLM::ActiveRecord
  def ctx
  @ctx ||= begin
  options = self.class.llm_plugin_options
- columns = Utils.columns(options)
  params = Utils.resolve_options(self, options[:context], ActsAsAgent::EMPTY_HASH).dup
- params[:model] ||= self[columns[:model_column]]
  ctx = self.class.agent.new(llm, params.compact)
+ columns = Utils.columns(options)
  data = self[columns[:data_column]]
  if data.nil? || data == ""
  ctx
@@ -17,19 +17,11 @@ module LLM::ActiveRecord
  # `tracer:` can also be configured as symbols that are called on the model.
  module ActsAsLLM
  EMPTY_HASH = {}.freeze
- DEFAULT_USAGE_COLUMNS = {
- input_tokens: :input_tokens,
- output_tokens: :output_tokens,
- total_tokens: :total_tokens
- }.freeze
  DEFAULTS = {
- provider_column: :provider,
- model_column: :model,
  data_column: :data,
  format: :string,
- usage_columns: DEFAULT_USAGE_COLUMNS,
  tracer: nil,
- provider: EMPTY_HASH,
+ provider: nil,
  context: EMPTY_HASH
  }.freeze

@@ -78,28 +70,26 @@ module LLM::ActiveRecord
  # Maps wrapper options onto the record's storage columns.
  # @return [Hash]
  def self.columns(options)
- usage_columns = options[:usage_columns]
  {
- provider_column: options[:provider_column],
- model_column: options[:model_column],
- data_column: options[:data_column],
- input_tokens: usage_columns[:input_tokens],
- output_tokens: usage_columns[:output_tokens],
- total_tokens: usage_columns[:total_tokens]
+ data_column: options[:data_column]
  }.freeze
  end

+ ##
+ # Resolves the provider runtime for a record.
+ # @return [LLM::Provider]
+ def self.resolve_provider(obj, options, empty_hash)
+ provider = resolve_option(obj, options[:provider])
+ return provider if LLM::Provider === provider
+ raise ArgumentError, "provider: must resolve to an LLM::Provider instance"
+ end
+
  ##
  # Persists the runtime state and usage columns back onto the record.
  # @return [void]
  def self.save(obj, ctx, options)
  columns = self.columns(options)
- obj.assign_attributes(
- columns[:data_column] => serialize_context(ctx, options[:format]),
- columns[:input_tokens] => ctx.usage.input_tokens,
- columns[:output_tokens] => ctx.usage.output_tokens,
- columns[:total_tokens] => ctx.usage.total_tokens
- )
+ obj.assign_attributes(columns[:data_column] => serialize_context(ctx, options[:format]))
  obj.save!
  end
  end
@@ -111,8 +101,6 @@ module LLM::ActiveRecord
  # @param [Class] model
  # @return [void]
  def self.extended(model)
- options = model.llm_plugin_options
- model.validates options[:provider_column], options[:model_column], presence: true
  model.include InstanceMethods unless model.ancestors.include?(InstanceMethods)
  end
  end
@@ -128,12 +116,13 @@ module LLM::ActiveRecord
  # @option options [Proc, Symbol, LLM::Tracer, nil] :tracer
  # Optional tracer, method name, or proc that resolves to one and is
  # assigned through `llm.tracer = ...` on the resolved provider.
+ # @option options [Proc, Symbol, LLM::Provider] :provider
+ # Must resolve to an `LLM::Provider` instance for the current record.
  # @return [void]
  def acts_as_llm(options = EMPTY_HASH)
  options = DEFAULTS.merge(options)
- usage_columns = DEFAULT_USAGE_COLUMNS.merge(options[:usage_columns] || EMPTY_HASH)
  class_attribute :llm_plugin_options, instance_accessor: false, default: DEFAULTS unless respond_to?(:llm_plugin_options)
- self.llm_plugin_options = options.merge(usage_columns: usage_columns.freeze).freeze
+ self.llm_plugin_options = options.freeze
  extend Hooks
  end

@@ -228,12 +217,7 @@ module LLM::ActiveRecord
  # Returns usage from the mapped usage columns.
  # @return [LLM::Object]
  def usage
- columns = Utils.columns(self.class.llm_plugin_options)
- LLM::Object.from(
- input_tokens: self[columns[:input_tokens]] || 0,
- output_tokens: self[columns[:output_tokens]] || 0,
- total_tokens: self[columns[:total_tokens]] || 0
- )
+ ctx.usage || LLM::Object.from(input_tokens: 0, output_tokens: 0, total_tokens: 0)
  end

  ##
@@ -285,11 +269,8 @@ module LLM::ActiveRecord
  # @return [LLM::Provider]
  def llm
  options = self.class.llm_plugin_options
- columns = Utils.columns(options)
- provider = self[columns[:provider_column]]
- kwargs = Utils.resolve_options(self, options[:provider], ActsAsLLM::EMPTY_HASH)
  return @llm if @llm
- @llm = LLM.method(provider).call(**kwargs)
+ @llm = Utils.resolve_provider(self, options, ActsAsLLM::EMPTY_HASH)
  @llm.tracer = Utils.resolve_option(self, options[:tracer]) if options[:tracer]
  @llm
  end
@@ -303,7 +284,6 @@ module LLM::ActiveRecord
  options = self.class.llm_plugin_options
  columns = Utils.columns(options)
  params = Utils.resolve_options(self, options[:context], ActsAsLLM::EMPTY_HASH).dup
- params[:model] ||= self[columns[:model_column]]
  ctx = LLM::Context.new(llm, params.compact)
  data = self[columns[:data_column]]
  if data.nil? || data == ""
data/lib/llm/agent.rb CHANGED
@@ -115,6 +115,26 @@ module LLM
  @concurrency = concurrency
  end

+ ##
+ # Set or get the default tracer.
+ #
+ # When a block is provided, it is stored and evaluated lazily against the
+ # agent instance during initialization so it can build a tracer from the
+ # resolved provider.
+ #
+ # @example
+ # class Agent < LLM::Agent
+ # tracer { LLM::Tracer::Logger.new(llm, io: $stdout) }
+ # end
+ #
+ # @param [LLM::Tracer, Proc, nil] tracer
+ # @yieldreturn [LLM::Tracer, nil]
+ # @return [LLM::Tracer, Proc, nil]
+ def self.tracer(tracer = nil, &block)
+ return @tracer if tracer.nil? && !block
+ @tracer = block || tracer
+ end
+
  ##
  # @param [LLM::Provider] provider
  # A provider
@@ -131,6 +151,7 @@ module LLM
  defaults = {model: self.class.model, tools: self.class.tools, skills: self.class.skills, schema: self.class.schema}.compact
  @concurrency = params.delete(:concurrency) || self.class.concurrency
  @llm = llm
+ @tracer = resolve_option(self.class.tracer) unless self.class.tracer.nil?
  @ctx = LLM::Context.new(llm, defaults.merge({guard: true}).merge(params))
  end

@@ -179,7 +200,7 @@ module LLM
  ##
  # @return [Array<LLM::Function>]
  def functions
- @ctx.functions
+ @tracer ? @llm.with_tracer(@tracer) { @ctx.functions } : @ctx.functions
  end

  ##
@@ -193,14 +214,14 @@ module LLM
  # @see LLM::Context#call
  # @return [Object]
  def call(...)
- @ctx.call(...)
+ @tracer ? @llm.with_tracer(@tracer) { @ctx.call(...) } : @ctx.call(...)
  end

  ##
  # @see LLM::Context#wait
  # @return [Array<LLM::Function::Return>]
  def wait(...)
- @ctx.wait(...)
+ @tracer ? @llm.with_tracer(@tracer) { @ctx.wait(...) } : @ctx.wait(...)
  end

  ##
@@ -257,7 +278,7 @@ module LLM
  # @return [LLM::Tracer]
  # Returns an LLM tracer
  def tracer
- @ctx.tracer
+ @tracer || @ctx.tracer
  end

  ##
@@ -371,14 +392,21 @@ module LLM
  end

  def run_loop(method, prompt, params)
- max = Integer(params.delete(:tool_attempts) || 25)
- res = @ctx.public_send(method, apply_instructions(prompt), params)
- max.times do
- break if @ctx.functions.empty?
- res = @ctx.public_send(method, call_functions, params)
+ loop = proc do
+ max = Integer(params.delete(:tool_attempts) || 25)
+ res = @ctx.public_send(method, apply_instructions(prompt), params)
+ max.times do
+ break if @ctx.functions.empty?
+ res = @ctx.public_send(method, call_functions, params)
+ end
+ raise LLM::ToolLoopError, "pending tool calls remain" unless @ctx.functions.empty?
+ res
  end
- raise LLM::ToolLoopError, "pending tool calls remain" unless @ctx.functions.empty?
- res
+ @tracer ? @llm.with_tracer(@tracer, &loop) : loop.call
+ end
+
+ def resolve_option(option)
+ Proc === option ? instance_exec(&option) : option
  end
  end
  end
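
Beyond the inline `@example` above, the scoped-tracer behavior reads end to end like the sketch below. It assumes an OpenAI key in `ENV["KEY"]` and that `LLM::Agent.new` accepts a provider alone (parameter defaults are not shown in this hunk); the `path:` option mirrors the README examples in this diff:

```ruby
# Minimal sketch of an agent-scoped tracer. The block form is evaluated
# lazily on the agent instance, so `llm` here is the resolved provider.
class ReleaseAgent < LLM::Agent
  model "gpt-5.4-mini"
  tracer { LLM::Tracer::Logger.new(llm, path: "logs/release-agent.log") }
end

llm = LLM.openai(key: ENV["KEY"])
agent = ReleaseAgent.new(llm)
agent.talk("Summarize the release") # turns and tool loops run under the scoped tracer
llm.tracer                          # the provider's default tracer is left untouched
```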
data/lib/llm/error.rb CHANGED
@@ -63,6 +63,10 @@ module LLM
  # When a request is interrupted
  Interrupt = Class.new(Error)

+ ##
+ # When a concurrency strategy cannot execute a given tool
+ RactorError = Class.new(Error)
+
  ##
  # When a tool call cannot be mapped to a local tool
  NoSuchToolError = Class.new(Error)
@@ -15,8 +15,12 @@ class LLM::Function
  # @param [String, nil] id
  # @param [String] name
  # @param [Hash, Array, nil] arguments
+ # @param [LLM::Tracer, nil] tracer
+ # @param [Object, nil] span
  # @return [LLM::Function::Ractor::Task]
- def initialize(runner_class, id, name, arguments)
+ def initialize(runner_class, id, name, arguments, tracer: nil, span: nil)
+ @tracer = tracer
+ @span = span
  @mailbox = Ractor::Mailbox.new(build_task(runner_class, id, name, arguments))
  end

@@ -37,7 +41,9 @@ class LLM::Function
  # @return [LLM::Function::Return]
  def wait
  id, name, value = mailbox.wait
- Return.new(id, name, value)
+ result = Return.new(id, name, value)
+ @tracer&.on_tool_finish(result:, span: @span)
+ result
  end
  alias_method :value, :wait
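
For context, the tracer and span threading above pairs with the `function.rb` changes below: unsupported ractor work now fails fast with `LLM::RactorError`. A minimal sketch of handling it, assuming the `LLM::Tool` name/description DSL and the agent `concurrency` DSL seen elsewhere in this diff (`SlowTool` and the key handling are illustrative):

```ruby
require "llm"

# Hypothetical class-based, CPU-bound tool: the kind :ractor is meant for.
class SlowTool < LLM::Tool
  name "slow_tool"
  description "Sums a large range on the CPU"

  def call
    (1..10_000_000).sum
  end
end

class Agent < LLM::Agent
  model "gpt-5.4-mini"
  tools SlowTool
  concurrency :ractor
end

begin
  agent = Agent.new(LLM.openai(key: ENV["KEY"]))
  puts agent.talk("Run the slow computation").content
rescue LLM::RactorError => ex
  # Raised up front for unsupported ractor work (e.g. skill-backed tools),
  # instead of a deeper Ruby isolation error later in execution.
  warn "ractor strategy rejected: #{ex.message}"
end
```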
data/lib/llm/function.rb CHANGED
@@ -218,18 +218,22 @@ class LLM::Function
218
218
  task = case strategy
219
219
  when :task
220
220
  require "async" unless defined?(::Async)
221
- Async { call }
221
+ Async { call! }
222
222
  when :thread
223
- Thread.new { call }
223
+ Thread.new { call! }
224
224
  when :fiber
225
225
  Fiber.new do
226
- call
226
+ call!
227
227
  ensure
228
228
  Fiber.yield
229
229
  end.tap(&:resume)
230
230
  when :ractor
231
- raise ArgumentError, "Ractor concurrency only supports class-based tools" unless Class === @runner
232
- Ractor::Task.new(@runner, id, name, arguments)
231
+ raise LLM::RactorError, "Ractor concurrency only supports class-based tools" unless Class === @runner
232
+ if @runner.respond_to?(:skill?) && @runner.skill?
233
+ raise LLM::RactorError, "Ractor concurrency does not support skill-backed tools"
234
+ end
235
+ span = @tracer&.on_tool_start(id:, name:, arguments:, model:)
236
+ Ractor::Task.new(@runner, id, name, arguments, tracer: @tracer, span:)
233
237
  else
234
238
  raise ArgumentError, "Unknown strategy: #{strategy.inspect}. Expected :thread, :task, :fiber, or :ractor"
235
239
  end
@@ -328,9 +332,16 @@ class LLM::Function
328
332
  # Returns a Return object with either the function result or error information.
329
333
  def call_function
330
334
  runner = ((Class === @runner) ? @runner.new : @runner)
335
+ runner.tracer = @tracer if runner.respond_to?(:tracer=)
331
336
  kwargs = Hash === arguments ? arguments.transform_keys(&:to_sym) : arguments
332
337
  Return.new(id, name, runner.call(**kwargs))
333
338
  rescue => ex
334
339
  Return.new(id, name, {error: true, type: ex.class.name, message: ex.message})
335
340
  end
341
+
342
+ def call!
343
+ llm = @tracer&.llm
344
+ return call unless llm.respond_to?(:with_tracer)
345
+ llm.with_tracer(@tracer) { call }
346
+ end
336
347
  end
@@ -12,7 +12,6 @@ module LLM::Sequel
  module Agent
  require_relative "plugin"
  EMPTY_HASH = LLM::Sequel::Plugin::EMPTY_HASH
- DEFAULT_USAGE_COLUMNS = LLM::Sequel::Plugin::DEFAULT_USAGE_COLUMNS
  DEFAULTS = LLM::Sequel::Plugin::DEFAULTS
  Utils = LLM::Sequel::Plugin::Utils

@@ -24,11 +23,8 @@ module LLM::Sequel

  def self.configure(model, options = EMPTY_HASH, &block)
  options = DEFAULTS.merge(options)
- usage_columns = DEFAULT_USAGE_COLUMNS.merge(options[:usage_columns] || EMPTY_HASH)
- model.instance_variable_set(
- :@llm_agent_options,
- options.merge(usage_columns: usage_columns.freeze).freeze
- )
+ model.db.extension :pg_json if %i[json jsonb].include?(options[:format])
+ model.instance_variable_set(:@llm_agent_options, options.freeze)
  model.instance_exec(&block) if block
  end

@@ -62,6 +58,11 @@ module LLM::Sequel
  agent.concurrency(concurrency)
  end

+ def tracer(tracer = nil, &block)
+ return agent.tracer if tracer.nil? && !block
+ agent.tracer(tracer, &block)
+ end
+
  def agent
  @agent ||= Class.new(LLM::Agent)
  end
@@ -75,7 +76,6 @@ module LLM::Sequel
  options = self.class.llm_plugin_options
  columns = Agent::Utils.columns(options)
  params = Agent::Utils.resolve_options(self, options[:context], Agent::EMPTY_HASH).dup
- params[:model] ||= self[columns[:model_column]]
  ctx = self.class.agent.new(llm, params.compact)
  data = self[columns[:data_column]]
  if data.nil? || data == ""
@@ -17,11 +17,6 @@ module LLM::Sequel
  # can also be configured as symbols that are called on the model.
  module Plugin
  EMPTY_HASH = {}.freeze
- DEFAULT_USAGE_COLUMNS = {
- input_tokens: :input_tokens,
- output_tokens: :output_tokens,
- total_tokens: :total_tokens
- }.freeze

  ##
  # Shared helper methods for the ORM wrapper.
@@ -68,38 +63,46 @@ module LLM::Sequel
  # Maps wrapper options onto the record's storage columns.
  # @return [Hash]
  def self.columns(options)
- usage_columns = options[:usage_columns]
  {
- provider_column: options[:provider_column],
- model_column: options[:model_column],
- data_column: options[:data_column],
- input_tokens: usage_columns[:input_tokens],
- output_tokens: usage_columns[:output_tokens],
- total_tokens: usage_columns[:total_tokens]
+ data_column: options[:data_column]
  }.freeze
  end

+ ##
+ # Resolves the provider runtime for a record.
+ # @return [LLM::Provider]
+ def self.resolve_provider(obj, options, empty_hash)
+ provider = resolve_option(obj, options[:provider])
+ return provider if LLM::Provider === provider
+ raise ArgumentError, "provider: must resolve to an LLM::Provider instance"
+ end
+
  ##
  # Persists the runtime state and usage columns back onto the record.
  # @return [void]
  def self.save(obj, ctx, options)
  columns = self.columns(options)
- obj.update(
- columns[:data_column] => serialize_context(ctx, options[:format]),
- columns[:input_tokens] => ctx.usage.input_tokens,
- columns[:output_tokens] => ctx.usage.output_tokens,
- columns[:total_tokens] => ctx.usage.total_tokens
- )
+ payload = serialize_context(ctx, options[:format])
+ payload = wrap_json_payload(payload, options[:format])
+ obj.update(columns[:data_column] => payload)
+ end
+
+ ##
+ # Wraps JSON payloads for Sequel PostgreSQL adapters when needed.
+ # @return [Object]
+ def self.wrap_json_payload(payload, format)
+ case format
+ when :json then Sequel.pg_json_wrap(payload)
+ when :jsonb then Sequel.pg_jsonb_wrap(payload)
+ else payload
+ end
  end
  end
  DEFAULTS = {
- provider_column: :provider,
- model_column: :model,
  data_column: :data,
  format: :string,
- usage_columns: DEFAULT_USAGE_COLUMNS,
  tracer: nil,
- provider: EMPTY_HASH,
+ provider: nil,
  context: EMPTY_HASH
  }.freeze

@@ -134,14 +137,13 @@ module LLM::Sequel
  # @option options [Proc, Symbol, LLM::Tracer, nil] :tracer
  # Optional tracer, method name, or proc that resolves to one and is
  # assigned through `llm.tracer = ...` on the resolved provider.
+ # @option options [Proc, Symbol, LLM::Provider] :provider
+ # Must resolve to an `LLM::Provider` instance for the current record.
  # @return [void]
  def self.configure(model, options = EMPTY_HASH)
  options = DEFAULTS.merge(options)
- usage_columns = DEFAULT_USAGE_COLUMNS.merge(options[:usage_columns] || EMPTY_HASH)
- model.instance_variable_set(
- :@llm_plugin_options,
- options.merge(usage_columns: usage_columns.freeze).freeze
- )
+ model.db.extension :pg_json if %i[json jsonb].include?(options[:format])
+ model.instance_variable_set(:@llm_plugin_options, options.freeze)
  end
  end

@@ -247,12 +249,7 @@ module LLM::Sequel
  # Returns usage from the mapped usage columns.
  # @return [LLM::Object]
  def usage
- columns = Utils.columns(self.class.llm_plugin_options)
- LLM::Object.from(
- input_tokens: self[columns[:input_tokens]] || 0,
- output_tokens: self[columns[:output_tokens]] || 0,
- total_tokens: self[columns[:total_tokens]] || 0
- )
+ ctx.usage || LLM::Object.from(input_tokens: 0, output_tokens: 0, total_tokens: 0)
  end

  ##
@@ -304,11 +301,8 @@ module LLM::Sequel
  # @return [LLM::Provider]
  def llm
  options = self.class.llm_plugin_options
- columns = Utils.columns(options)
- provider = self[columns[:provider_column]]
- kwargs = Utils.resolve_options(self, options[:provider], Plugin::EMPTY_HASH)
  return @llm if @llm
- @llm = LLM.method(provider).call(**kwargs)
+ @llm = Utils.resolve_provider(self, options, Plugin::EMPTY_HASH)
  @llm.tracer = Utils.resolve_option(self, options[:tracer]) if options[:tracer]
  @llm
  end
@@ -322,7 +316,6 @@ module LLM::Sequel
  options = self.class.llm_plugin_options
  columns = Utils.columns(options)
  params = Utils.resolve_options(self, options[:context], Plugin::EMPTY_HASH).dup
- params[:model] ||= self[columns[:model_column]]
  ctx = LLM::Context.new(llm, params.compact)
  data = self[columns[:data_column]]
  if data.nil? || data == ""
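
For context, the `pg_json` loading and `wrap_json_payload` changes above are what make a JSONB-backed `data` column work. A minimal sketch, assuming a PostgreSQL database with a `contexts` table whose `data` column is `jsonb`; the connection string is illustrative:

```ruby
require "llm"
require "sequel"
require "sequel/plugins/llm"

# format: :jsonb triggers the pg_json extension, and serialized context
# state is wrapped with Sequel.pg_jsonb_wrap before being persisted.
DB = Sequel.connect(ENV["DATABASE_URL"] || "postgres://localhost/llm_demo")

class Context < Sequel::Model
  plugin :llm, provider: :set_provider, context: :set_context, format: :jsonb

  private

  def set_provider
    LLM.openai(key: ENV["OPENAI_SECRET"])
  end

  def set_context
    {model: "gpt-5.4-mini", mode: :responses, store: false}
  end
end

ctx = Context.create
ctx.talk("Remember that my favorite language is Ruby")
puts ctx.talk("What is my favorite language?").content
```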
data/lib/llm/skill.rb CHANGED
@@ -74,11 +74,12 @@ module LLM
  # @param [LLM::Context] ctx
  # @return [Hash]
  def call(ctx)
- instructions, tools = self.instructions, self.tools
+ instructions, tools, tracer = self.instructions, self.tools, ctx.llm.tracer
  params = ctx.params.merge(mode: ctx.mode).reject { [:tools, :schema].include?(_1) }
  agent = Class.new(LLM::Agent) do
  instructions(instructions)
  tools(*tools)
+ tracer(tracer)
  end.new(ctx.llm, params)
  agent.messages.concat(messages_for(ctx))
  res = agent.talk("Solve the user's query.")
@@ -95,6 +96,11 @@ module LLM
  Class.new(LLM::Tool) do
  name skill.name
  description skill.description
+ attr_accessor :tracer
+
+ define_singleton_method(:skill?) do
+ true
+ end

  define_method(:call) do
  skill.call(ctx)
@@ -114,7 +114,7 @@ module LLM
  # @param [LLM::Response] res
  # @api private
  def finish_attributes(operation, res)
- case @provider.class.to_s
+ case @llm.class.to_s
  when "LLM::OpenAI" then openai_attributes(operation, res)
  else {}
  end
@@ -233,7 +233,7 @@ module LLM
  # @param [LLM::Response] res
  # @api private
  def finish_attributes(operation, res)
- case @provider.class.to_s
+ case @llm.class.to_s
  when "LLM::OpenAI" then openai_attributes(operation, res)
  else {}
  end
data/lib/llm/tracer.rb CHANGED
@@ -14,13 +14,17 @@ module LLM
  require_relative "tracer/langsmith"
  require_relative "tracer/null"

+ ##
+ # @return [LLM::Provider]
+ attr_reader :llm
+
  ##
  # @param [LLM::Provider] provider
  # A provider
  # @param [Hash] options
  # A hash of options
  def initialize(provider, options = {})
- @provider = provider
+ @llm = provider
  @options = {}
  end

@@ -124,7 +128,7 @@ module LLM
  ##
  # @return [String]
  def inspect
- "#<#{self.class.name}:0x#{object_id.to_s(16)} @provider=#{@provider.class} @tracer=#{@tracer.inspect}>"
+ "#<#{self.class.name}:0x#{object_id.to_s(16)} @provider=#{@llm.class} @tracer=#{@tracer.inspect}>"
  end

  ##
@@ -245,19 +249,19 @@ module LLM
  ##
  # @return [String]
  def provider_name
- @provider.class.name.split("::").last.downcase
+ @llm.class.name.split("::").last.downcase
  end

  ##
  # @return [String]
  def provider_host
- @provider.instance_variable_get(:@host)
+ @llm.instance_variable_get(:@host)
  end

  ##
  # @return [String]
  def provider_port
- @provider.instance_variable_get(:@port)
+ @llm.instance_variable_get(:@port)
  end
  end
  end
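
With `llm` now a public reader, custom tracers can reach the provider they were built with. A minimal sketch of a subclass, assuming the `on_tool_start`/`on_tool_finish` keyword signatures visible at the call sites earlier in this diff; everything else about the hook surface is an assumption:

```ruby
# Minimal sketch of a custom tracer. on_tool_start returns a span object
# that is handed back to on_tool_finish, matching the call sites above.
class CountingTracer < LLM::Tracer
  def on_tool_start(id:, name:, arguments:, model:)
    @count = (@count || 0) + 1
    puts "[#{llm.class}] tool ##{@count}: #{name}(#{arguments.inspect})"
    {started_at: Time.now} # used as the span
  end

  def on_tool_finish(result:, span:)
    puts "tool finished in #{Time.now - span[:started_at]}s"
  end
end

llm = LLM.openai(key: ENV["KEY"])
llm.tracer = CountingTracer.new(llm)
```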
data/lib/llm/version.rb CHANGED
@@ -1,5 +1,5 @@
  # frozen_string_literal: true

  module LLM
- VERSION = "5.3.0"
+ VERSION = "6.0.0"
  end
data/llm.gemspec CHANGED
@@ -57,4 +57,5 @@ Gem::Specification.new do |spec|
  spec.add_development_dependency "activerecord", "~> 8.0"
  spec.add_development_dependency "sequel", "~> 5.0"
  spec.add_development_dependency "sqlite3", "~> 2.0"
+ spec.add_development_dependency "pg", "~> 1.5"
  end
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: llm.rb
  version: !ruby/object:Gem::Version
- version: 5.3.0
+ version: 6.0.0
  platform: ruby
  authors:
  - Antar Azri
@@ -236,6 +236,20 @@ dependencies:
  - - "~>"
  - !ruby/object:Gem::Version
  version: '2.0'
+ - !ruby/object:Gem::Dependency
+ name: pg
+ requirement: !ruby/object:Gem::Requirement
+ requirements:
+ - - "~>"
+ - !ruby/object:Gem::Version
+ version: '1.5'
+ type: :development
+ prerelease: false
+ version_requirements: !ruby/object:Gem::Requirement
+ requirements:
+ - - "~>"
+ - !ruby/object:Gem::Version
+ version: '1.5'
  description: |
  llm.rb is a lightweight runtime for building capable AI systems in Ruby.
  It is not just an API wrapper. llm.rb gives you one runtime for providers,