llm.rb 7.0.0 → 8.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 6c923952039095a2234eb1bd5c058a951b0d797d27577cdf7f679df59b49060b
-  data.tar.gz: 3667e0d79e44634f769dfced198dd07c1039f173cb43b72aab7d3204aa3638f8
+  metadata.gz: 4d726213f6b63342582738a133f7f82c1158934d6f25a48ae6b6c9e59a8f8262
+  data.tar.gz: 6288d177adc7a07a37368066329c882f746747d5bed9ffba7cb50d2bcbd1d98c
 SHA512:
-  metadata.gz: 655d450b2ffeb71ed9564b7c5c23a2a86e9e385de9dc1abdac18588e460cffdecd1b2da1d5ef9fc162dc3f3286b7d2c979baec3953cd1ddbdab74d1ef5b87112
-  data.tar.gz: a044fedb675c4d92eff55c210d588b68b80c7e3967188674c2de4d8f6bc69d76e8f15c18f49fb54e09a8c93dff89074304d231609337bfa3bc79c96e1f3f576b
+  metadata.gz: 4ae089f4117dc384000a70500c40ebadf48f42d1bd820d0840568b3b31b0197e51c65e9f60fe65d0e75c23aa4c7eac977be928a38969580174169bd0efe39912
+  data.tar.gz: 9653135f93b9b2b722102f055dc961346949368dab161a3cff64e99ddfc6781933a94b527151da9a24ff39451814f76c5409389f91c3692852eb17bd5d3d11f9
data/CHANGELOG.md CHANGED
@@ -2,6 +2,105 @@
 
 ## Unreleased
 
+## v8.0.0
+
+Changes since `v7.0.0`.
+
+This release adds Unix-fork concurrency for process-isolated tool
+execution, extends `LLM::Object` with `#merge` and `#delete`, and drops
+Ruby 3.2 support due to segfaults observed with the `:fork` path. It
+promotes `LLM::Pipe` to the top-level namespace and adds
+`persistent: true` on `LLM::MCP.http` for direct persistent transport
+configuration. `LLM::Function#runner` is exposed as a public API, agent
+tracer overrides are supported, fiber execution now uses `Fiber.schedule`,
+missing optional dependencies raise `LLM::LoadError` with clearer
+installation guidance, and ActiveRecord wrapper plumbing is deduplicated
+between `acts_as_llm` and `acts_as_agent`.
+
+### Breaking
+
+* **Drop Ruby 3.2 support** <br>
+  Stop supporting Ruby 3.2 due to a segfault observed with the `:fork`
+  tool concurrency strategy.
+
+### Add
+
+* **Add `LLM::Object#merge`** <br>
+  Let `LLM::Object` return a new wrapped object when merging hash-like
+  data through `#merge`.
+
+* **Add `LLM::Object#delete`** <br>
+  Let `LLM::Object` delete keys directly through `#delete`.
+
+### Change
+
+* **Add fork-based tool concurrency** <br>
+  Add `:fork` as a new concurrency strategy for `LLM::Function#spawn`,
+  `LLM::Function::Array#wait`, and `LLM::Agent.concurrency` that runs
+  class-based tools in isolated child processes. Fork-backed tools support
+  tracer callbacks, `on_interrupt`/`on_cancel` hooks, and `alive?` checks.
+  Requires the `xchan` gem for inter-process communication with `:fork`.
+  This is especially useful for tools that need process isolation, such as
+  running shell commands or handling unsafe data.
+
+* **Promote `LLM::Pipe` from MCP namespace to top-level** <br>
+  Move `LLM::MCP::Pipe` to `LLM::Pipe` so the pipe abstraction is available
+  outside MCP internals. The new class adds a `binmode:` option for binary
+  pipes. `LLM::MCP::Command` and related MCP transport code have been updated
+  to use `LLM::Pipe`.
+
+* **Allow `persistent: true` on `LLM::MCP.http`** <br>
+  Let `LLM::MCP.http(...)` enable persistent HTTP transport directly
+  through `persistent: true`, instead of requiring a separate
+  `.persistent` call after construction.
+
+* **Expose `LLM::Function#runner` as public API** <br>
+  Promote the internal runner instantiation to a public `runner` method on
+  `LLM::Function`, so callers can inspect or reuse the resolved tool instance
+  that a function wraps.
+
+* **Allow agent instance tracer overrides** <br>
+  Let `LLM::Agent.new(..., tracer: ...)` override the class-level tracer
+  for that agent instance.
+
+* **Make `:fiber` use scheduler-backed fibers** <br>
+  Change `:fiber` tool execution to use `Fiber.schedule` and require
+  `Fiber.scheduler`, instead of wrapping direct calls in raw fibers. This
+  gives `:fiber` a real cooperative concurrency model instead of acting as
+  a thin wrapper around sequential execution.
+
+* **Read stored values from zero-argument `LLM::Object` method calls** <br>
+  Let calls like `obj.delete`, `obj.fetch`, `obj.merge`, `obj.key?`,
+  `obj.dig`, `obj.slice`, or `obj.keys` return a stored value when that
+  method name exists as a key and no arguments are given.
+
+* **Harden `LLM::Object` against arbitrary key names** <br>
+  Move internal lookup logic off `LLM::Object` instances and onto the
+  singleton class instead, making stored keys like `method_missing`
+  more resilient while preserving normal dynamic field access.
+
+* **Deduplicate ActiveRecord wrapper plumbing** <br>
+  Move shared ActiveRecord wrapper defaults and utility methods into
+  `LLM::ActiveRecord`, reducing duplication between `acts_as_llm` and
+  `acts_as_agent`.
+
+* **Raise clearer errors for missing optional runtime dependencies** <br>
+  Route optional `async`, `xchan`, and `net/http/persistent` loads
+  through `LLM.require` so missing runtime gems raise `LLM::LoadError`
+  with installation guidance instead of leaking raw `LoadError`
+  exceptions.
+
+### Fix
+
+* **Avoid `RuntimeError` from `Async::Task.current` lookups** <br>
+  Check `Async::Task.current?` before reading the current Async task so
+  provider transports fall back to `Fiber.current` without raising when
+  no Async task is active.
+
+* **Serialize `LLM::Object` values correctly through `LLM.json`** <br>
+  Make `LLM::Object#to_json` call `LLM.json.dump(to_h, ...)` so
+  `LLM::Object` values serialize through the llm.rb JSON adapter.
+
 ## v7.0.0
 
 Changes since `v6.1.0`.
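The `LLM::Object` behavior described in the v8.0.0 notes above (`#merge` returning a new wrapped object, `#delete` removing keys, and zero-argument calls reading stored keys) can be sketched with a minimal stand-in wrapper. This is an illustration of the documented semantics only; `WrappedObject` is a hypothetical class, not llm.rb's hardened implementation:

```ruby
# Illustrative stand-in for the documented LLM::Object behavior.
class WrappedObject
  def initialize(data)
    @data = data.transform_keys(&:to_sym)
  end

  def to_h
    @data.dup
  end

  # #merge returns a new wrapped object rather than a plain Hash
  def merge(other)
    self.class.new(@data.merge(other.transform_keys(&:to_sym)))
  end

  # #delete removes a key directly from the wrapped data
  def delete(key)
    @data.delete(key.to_sym)
  end

  # Zero-argument calls whose name matches a stored key read that value
  def method_missing(name, *args)
    if args.empty? && @data.key?(name)
      @data[name]
    else
      super
    end
  end

  def respond_to_missing?(name, include_private = false)
    @data.key?(name) || super
  end
end

obj = WrappedObject.new("keys" => [1, 2], "name" => "demo")
merged = obj.merge(role: "tool") # new wrapper; obj is untouched
obj.keys                         # => stored value, not Hash#keys
obj.delete(:name)
```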
@@ -121,6 +220,12 @@ and `LLM::RactorError` is raised for unsupported ractor tool work.
   for unsupported tool types such as skill-backed tools, instead of letting
   deeper Ruby isolation errors leak out later in execution.
 
+* **Delegate interrupt to concurrent task implementations** <br>
+  Make `LLM::Function::Task#interrupt!` delegate to the underlying fork or
+  ractor task when it supports interruption, so `ctx.interrupt!` and
+  `task.interrupt!` work correctly for fork- and ractor-backed tool
+  execution.
+
 ## v5.4.0
 
 Changes since `v5.3.0`.
@@ -828,7 +933,7 @@ Changes since `v4.9.0`.
 
 - Add HTTP transport for MCP with `LLM::MCP::Transport::HTTP` for remote servers
 - Add JSON Schema union types (`any_of`, `all_of`, `one_of`) with parser integration
-- Add JSON Schema type array union support (e.g., `"type": ["object", "null"]`)
+- Add JSON Schema type array union support (e.g., `"type\": [\"object\", \"null\"]`)
 - Add JSON Schema type inference from `const`, `enum`, or `default` fields
 
 ### Change
data/README.md CHANGED
@@ -4,7 +4,7 @@
 <p align="center">
 <a href="https://0x1eef.github.io/x/llm.rb?rebuild=1"><img src="https://img.shields.io/badge/docs-0x1eef.github.io-blue.svg" alt="RubyDoc"></a>
 <a href="https://opensource.org/license/0bsd"><img src="https://img.shields.io/badge/License-0BSD-orange.svg?" alt="License"></a>
-<a href="https://github.com/llmrb/llm.rb/tags"><img src="https://img.shields.io/badge/version-7.0.0-green.svg?" alt="Version"></a>
+<a href="https://github.com/llmrb/llm.rb/tags"><img src="https://img.shields.io/badge/version-8.0.0-green.svg?" alt="Version"></a>
 </p>
 
 ## About
@@ -24,6 +24,11 @@ It provides one runtime for providers, agents, tools, skills, MCP servers, strea
 schemas, files, and persisted state, so real systems can be built out of one coherent
 execution model instead of a pile of adapters.
 
+It provides concurrent tool execution with multiple strategies exposed through a single
+runtime: async tasks, threads, fibers, ractors, and processes (fork). The first three
+are a good fit for I/O-bound work and the last two for CPU-bound work. Ractor support
+is experimental and comes with limitations.
+
 Want to see some code? Jump to [the examples](#examples) section. <br>
 Want to see a self-hosted LLM environment built on llm.rb? Check out [Relay](https://github.com/llmrb/relay).
 
@@ -287,8 +292,13 @@ end
 #### Concurrency
 
 Tool execution can run sequentially with `:call` or concurrently through
-`:thread`, `:task`, `:fiber`, and experimental `:ractor`, without rewriting
-your tool layer.
+`:thread`, `:task`, `:fiber`, `:fork`, and experimental `:ractor`, without
+rewriting your tool layer. Async tasks, threads, and fibers are the
+I/O-bound options. Fork and ractor are the CPU-bound options. `:fork`
+requires [`xchan.rb`](https://github.com/0x1eef/xchan.rb#readme) support,
+and `:ractor` is still experimental.
+
+`:fiber` uses `Fiber.schedule`, so it requires `Fiber.scheduler`.
 
 ```ruby
 class Agent < LLM::Agent
@@ -311,8 +321,9 @@ finer sequential control across several steps before shutting the client down.
 ```ruby
 mcp = LLM::MCP.http(
   url: "https://api.githubcopilot.com/mcp/",
-  headers: {"Authorization" => "Bearer #{ENV["GITHUB_PAT"]}"}
-).persistent
+  headers: {"Authorization" => "Bearer #{ENV["GITHUB_PAT"]}"},
+  persistent: true
+)
 mcp.run do
   ctx = LLM::Context.new(llm, tools: mcp.tools)
 end
@@ -367,13 +378,13 @@ worker.join
 Use `LLM::Agent` when you want the same stateful runtime surface as
 `LLM::Context`, but with tool loops executed automatically according to a
 configured concurrency mode such as `:call`, `:thread`, `:task`, `:fiber`,
-or experimental `:ractor` support for class-based tools. MCP tools are not
-supported by the current `:ractor` mode, but mixed tool sets can still
-route MCP tools and local tools through different strategies at runtime.
-By default, the tool attempt budget is `25`. When an agent exhausts that
-budget, it sends advisory tool errors back through the model instead of
-raising out of the runtime. Set `tool_attempts: nil` to disable that
-advisory behavior.
+`:fork`, or experimental `:ractor` support for class-based tools. MCP tools
+are not supported by the current `:ractor` mode, but mixed tool sets can
+still route MCP tools and local tools through different strategies at
+runtime. By default, the tool attempt budget is `25`. When an agent
+exhausts that budget, it sends advisory tool errors back through the model
+instead of raising out of the runtime. Set `tool_attempts: nil` to disable
+that advisory behavior.
 - **Tool calls have an explicit lifecycle** <br>
   A tool call can be executed, cancelled through
   [`LLM::Function#cancel`](https://0x1eef.github.io/x/llm.rb/LLM/Function.html#cancel-instance_method),
@@ -385,13 +396,15 @@ worker.join
   [`LLM::Context#cancel!`](https://0x1eef.github.io/x/llm.rb/LLM/Context.html#cancel-21-instance_method)
   is inspired by Go's context cancellation model.
 - **Concurrency is a first-class feature** <br>
-  Use threads, fibers, async tasks, or experimental ractors without
-  rewriting your tool layer. The current `:ractor` mode is for class-based
-  tools and does not support MCP tools, but mixed workloads can branch on
-  `tool.mcp?` and choose a supported strategy per tool. Class-based
-  `:ractor` tools still emit normal tool tracer callbacks. `:ractor` is
-  especially useful for CPU-bound tools, while `:task`, `:fiber`, or
-  `:thread` may be a better fit for I/O-bound work.
+  Use async tasks, threads, fibers, forks, or experimental ractors without
+  rewriting your tool layer. Async tasks, threads, and fibers are the
+  I/O-bound options. Fork and ractor are the CPU-bound options. `:fork`
+  requires [`xchan.rb`](https://github.com/0x1eef/xchan.rb#readme) support.
+  The current `:ractor` mode is for class-based tools, and MCP tools are
+  not supported by ractor, but mixed workloads can branch on `tool.mcp?`
+  and choose a supported strategy per tool. Class-based `:ractor` tools
+  still emit normal tool tracer callbacks. `:fiber` uses `Fiber.schedule`,
+  so it requires `Fiber.scheduler`.
 - **Advanced workloads are built in, not bolted on** <br>
   Streaming, concurrent tool execution, persistence, tracing, and MCP support
   all fit the same runtime model.
@@ -865,8 +878,9 @@ require "net/http/persistent"
 llm = LLM.openai(key: ENV["KEY"])
 mcp = LLM::MCP.http(
   url: "https://api.githubcopilot.com/mcp/",
-  headers: {"Authorization" => "Bearer #{ENV["GITHUB_PAT"]}"}
-).persistent
+  headers: {"Authorization" => "Bearer #{ENV["GITHUB_PAT"]}"},
+  persistent: true
+)
 
 mcp.start
 ctx = LLM::Context.new(llm, stream: $stdout, tools: mcp.tools)
@@ -880,8 +894,9 @@ For scoped work, `mcp.run do ... end` is shorter and handles cleanup for you:
 ```ruby
 mcp = LLM::MCP.http(
   url: "https://api.githubcopilot.com/mcp/",
-  headers: {"Authorization" => "Bearer #{ENV["GITHUB_PAT"]}"}
-).persistent
+  headers: {"Authorization" => "Bearer #{ENV["GITHUB_PAT"]}"},
+  persistent: true
+)
 mcp.run do
   ctx = LLM::Context.new(llm, stream: $stdout, tools: mcp.tools)
   ctx.talk("Pull information about my GitHub account.")
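The process-isolation idea behind the `:fork` strategy described above can be sketched in plain Ruby. llm.rb uses the `xchan` gem for inter-process communication; this sketch uses a raw pipe and `Marshal` to show the general shape of the technique, not llm.rb's actual implementation (`run_isolated` is a hypothetical helper):

```ruby
# Run a block in a forked child and ship the result (or error) back to
# the parent over a pipe. Illustrative only; llm.rb uses xchan channels.
def run_isolated
  reader, writer = IO.pipe
  pid = fork do
    reader.close
    result = begin
      {value: yield}
    rescue => ex
      {error: ex.class.name, message: ex.message}
    end
    writer.write(Marshal.dump(result))
    writer.close
    exit!(0) # skip at_exit hooks in the child
  end
  writer.close
  data = reader.read # read until the child closes its end
  reader.close
  Process.wait(pid)
  Marshal.load(data)
end

run_isolated { 21 * 2 }                      # => {value: 42}
run_isolated { raise ArgumentError, "bad" }  # error captured, parent survives
```

A crash or unsafe side effect in the block stays contained in the child process, which is the property the changelog calls out for shell-command and untrusted-data tools.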
@@ -10,10 +10,6 @@ module LLM::ActiveRecord
   # tools, schema, instructions, and concurrency are configured on the model
   # class and forwarded to an internal agent subclass.
   module ActsAsAgent
-    EMPTY_HASH = LLM::ActiveRecord::ActsAsLLM::EMPTY_HASH
-    DEFAULTS = LLM::ActiveRecord::ActsAsLLM::DEFAULTS
-    Utils = LLM::ActiveRecord::ActsAsLLM::Utils
-
     module ClassMethods
       def model(model = nil)
         return agent.model if model.nil?
@@ -96,7 +92,7 @@ module LLM::ActiveRecord
     def llm
       options = self.class.llm_plugin_options
       return @llm if @llm
-      @llm = Utils.resolve_provider(self, options, ActsAsAgent::EMPTY_HASH)
+      @llm = Utils.resolve_provider(self, options, EMPTY_HASH)
       @llm.tracer = Utils.resolve_option(self, options[:tracer]) if options[:tracer]
       @llm
     end
@@ -108,7 +104,7 @@ module LLM::ActiveRecord
     def ctx
       @ctx ||= begin
         options = self.class.llm_plugin_options
-        params = Utils.resolve_options(self, options[:context], ActsAsAgent::EMPTY_HASH).dup
+        params = Utils.resolve_options(self, options[:context], EMPTY_HASH).dup
         ctx = self.class.agent.new(llm, params.compact)
         columns = Utils.columns(options)
         data = self[columns[:data_column]]
@@ -16,84 +16,6 @@ module LLM::ActiveRecord
   # handling JSON typecasting for the model. `provider:`, `context:`, and
   # `tracer:` can also be configured as symbols that are called on the model.
   module ActsAsLLM
-    EMPTY_HASH = {}.freeze
-    DEFAULTS = {
-      data_column: :data,
-      format: :string,
-      tracer: nil,
-      provider: nil,
-      context: EMPTY_HASH
-    }.freeze
-
-    ##
-    # Shared helper methods for the ORM wrapper.
-    #
-    # These utilities keep persistence plumbing out of the wrapped model's
-    # method namespace so the injected surface stays focused on the runtime
-    # API itself.
-    # @api private
-    module Utils
-      ##
-      # Resolves a single configured option against a model instance.
-      # @return [Object]
-      def self.resolve_option(obj, option)
-        case option
-        when Proc then obj.instance_exec(&option)
-        when Symbol then obj.send(option)
-        when Hash then option.dup
-        else option
-        end
-      end
-
-      ##
-      # Resolves hash-like wrapper options against a model instance.
-      # @return [Hash]
-      def self.resolve_options(obj, option, empty_hash)
-        case option
-        when Proc, Symbol, Hash then resolve_option(obj, option)
-        else empty_hash.dup
-        end
-      end
-
-      ##
-      # Serializes the runtime into the configured storage format.
-      # @return [String, Hash]
-      def self.serialize_context(ctx, format)
-        case format
-        when :string then ctx.to_json
-        when :json, :jsonb then ctx.to_h
-        else raise ArgumentError, "Unknown format: #{format.inspect}"
-        end
-      end
-
-      ##
-      # Maps wrapper options onto the record's storage columns.
-      # @return [Hash]
-      def self.columns(options)
-        {
-          data_column: options[:data_column]
-        }.freeze
-      end
-
-      ##
-      # Resolves the provider runtime for a record.
-      # @return [LLM::Provider]
-      def self.resolve_provider(obj, options, empty_hash)
-        provider = resolve_option(obj, options[:provider])
-        return provider if LLM::Provider === provider
-        raise ArgumentError, "provider: must resolve to an LLM::Provider instance"
-      end
-
-      ##
-      # Persists the runtime state and usage columns back onto the record.
-      # @return [void]
-      def self.save(obj, ctx, options)
-        columns = self.columns(options)
-        obj.assign_attributes(columns[:data_column] => serialize_context(ctx, options[:format]))
-        obj.save!
-      end
-    end
-
     module Hooks
       ##
       # Called when hooks are extended onto an ActiveRecord model.
@@ -133,7 +55,7 @@ module LLM::ActiveRecord
     # @return [LLM::Response]
     def talk(...)
       options = self.class.llm_plugin_options
-      ctx.talk(...).tap { Utils.save(self, ctx, options) }
+      ctx.talk(...).tap { Utils.save!(self, ctx, options) }
     end
 
     ##
@@ -142,7 +64,7 @@ module LLM::ActiveRecord
     # @return [LLM::Response]
     def respond(...)
       options = self.class.llm_plugin_options
-      ctx.respond(...).tap { Utils.save(self, ctx, options) }
+      ctx.respond(...).tap { Utils.save!(self, ctx, options) }
    end
 
     ##
@@ -270,7 +192,7 @@ module LLM::ActiveRecord
     def llm
       options = self.class.llm_plugin_options
       return @llm if @llm
-      @llm = Utils.resolve_provider(self, options, ActsAsLLM::EMPTY_HASH)
+      @llm = Utils.resolve_provider(self, options, EMPTY_HASH)
       @llm.tracer = Utils.resolve_option(self, options[:tracer]) if options[:tracer]
       @llm
     end
@@ -283,7 +205,7 @@ module LLM::ActiveRecord
       @ctx ||= begin
         options = self.class.llm_plugin_options
         columns = Utils.columns(options)
-        params = Utils.resolve_options(self, options[:context], ActsAsLLM::EMPTY_HASH).dup
+        params = Utils.resolve_options(self, options[:context], EMPTY_HASH).dup
         ctx = LLM::Context.new(llm, params.compact)
         data = self[columns[:data_column]]
         if data.nil? || data == ""
@@ -1,4 +1,82 @@
 # frozen_string_literal: true
 
-require "llm/active_record/acts_as_llm"
-require "llm/active_record/acts_as_agent"
+module LLM::ActiveRecord
+  EMPTY_HASH = {}.freeze
+  DEFAULTS = {
+    data_column: :data,
+    format: :string,
+    tracer: nil,
+    provider: nil,
+    context: EMPTY_HASH
+  }.freeze
+
+  ##
+  # These utilities keep persistence plumbing out of the wrapped model's
+  # method namespace so the injected surface stays focused on the runtime
+  # API itself.
+  # @api private
+  module Utils
+    ##
+    # Resolves a single configured option against a model instance.
+    # @return [Object]
+    def self.resolve_option(obj, option)
+      case option
+      when Proc then obj.instance_exec(&option)
+      when Symbol then obj.send(option)
+      when Hash then option.dup
+      else option
+      end
+    end
+
+    ##
+    # Resolves hash-like wrapper options against a model instance.
+    # @return [Hash]
+    def self.resolve_options(obj, option, empty_hash)
+      case option
+      when Proc, Symbol, Hash then resolve_option(obj, option)
+      else empty_hash.dup
+      end
+    end
+
+    ##
+    # Serializes the runtime into the configured storage format.
+    # @return [String, Hash]
+    def self.serialize_context(ctx, format)
+      case format
+      when :string then ctx.to_json
+      when :json, :jsonb then ctx.to_h
+      else raise ArgumentError, "Unknown format: #{format.inspect}"
+      end
+    end
+
+    ##
+    # Maps wrapper options onto the record's storage columns.
+    # @return [Hash]
+    def self.columns(options)
+      {
+        data_column: options[:data_column]
+      }.freeze
+    end
+
+    ##
+    # Resolves the provider runtime for a record.
+    # @return [LLM::Provider]
+    def self.resolve_provider(obj, options, empty_hash)
+      provider = resolve_option(obj, options[:provider])
+      return provider if LLM::Provider === provider
+      raise ArgumentError, "provider: must resolve to an LLM::Provider instance"
+    end
+
+    ##
+    # Persists the runtime state and usage columns back onto the record.
+    # @return [void]
+    def self.save!(obj, ctx, options)
+      columns = self.columns(options)
+      obj.assign_attributes(columns[:data_column] => serialize_context(ctx, options[:format]))
+      obj.save!
+    end
+  end
+
+  require "llm/active_record/acts_as_llm"
+  require "llm/active_record/acts_as_agent"
+end
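The `serialize_context` helper in the hunk above dispatches on the configured storage format: `:string` columns get JSON text, `:json`/`:jsonb` columns get a `Hash` and let the database adapter typecast it. The same dispatch can be exercised standalone with a stub context (the `StubContext` struct below is illustrative, not llm.rb's `LLM::Context`):

```ruby
require "json"

# Stub standing in for a real LLM::Context; only #to_h and #to_json matter here.
StubContext = Struct.new(:messages) do
  def to_h
    {messages: messages}
  end

  def to_json(*)
    JSON.generate(to_h)
  end
end

# Same case/when dispatch as Utils.serialize_context above.
def serialize_context(ctx, format)
  case format
  when :string then ctx.to_json
  when :json, :jsonb then ctx.to_h
  else raise ArgumentError, "Unknown format: #{format.inspect}"
  end
end

ctx = StubContext.new([{role: "user", content: "hi"}])
serialize_context(ctx, :string) # JSON text for a string column
serialize_context(ctx, :jsonb)  # Hash for a jsonb column
```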
data/lib/llm/agent.rb CHANGED
@@ -106,7 +106,8 @@ module LLM
   # - `:call`: sequential calls
   # - `:thread`: concurrent threads
   # - `:task`: concurrent async tasks
-  # - `:fiber`: concurrent raw fibers
+  # - `:fiber`: concurrent scheduler-backed fibers
+  # - `:fork`: forked child processes
   # - `:ractor`: concurrent Ruby ractors for class-based tools; MCP tools are not supported,
   #   and this mode is especially useful for CPU-bound tool work
   # - `[:thread, :ractor]`: the possible concurrency strategies to wait on, in the
@@ -149,12 +150,14 @@ module LLM
   # @option params [Array<LLM::Function>, nil] :tools Defaults to nil
   # @option params [Array<String>, nil] :skills Defaults to nil
   # @option params [#to_json, nil] :schema Defaults to nil
+  # @option params [LLM::Tracer, Proc, nil] :tracer Optional tracer override for this agent instance
   # @option params [Symbol, Array<Symbol>, nil] :concurrency Defaults to the agent class concurrency
   def initialize(llm, params = {})
     defaults = {model: self.class.model, tools: self.class.tools, skills: self.class.skills, schema: self.class.schema}.compact
     @concurrency = params.delete(:concurrency) || self.class.concurrency
     @llm = llm
-    @tracer = resolve_option(self.class.tracer) unless self.class.tracer.nil?
+    tracer = params.key?(:tracer) ? params.delete(:tracer) : self.class.tracer
+    @tracer = resolve_option(tracer) unless tracer.nil?
     @ctx = LLM::Context.new(llm, defaults.merge({guard: true}).merge(params))
   end
 
@@ -395,8 +398,10 @@ module LLM
   def call_functions
     case concurrency || :call
     when :call then call(:functions)
-    when :thread, :task, :fiber, :ractor, Array then wait(concurrency)
-    else raise ArgumentError, "Unknown concurrency: #{concurrency.inspect}. Expected :call, :thread, :task, :fiber, :ractor, or an array of queued task types"
+    when :thread, :task, :fiber, :fork, :ractor, Array then wait(concurrency)
+    else raise ArgumentError, "Unknown concurrency: #{concurrency.inspect}. " \
+      "Expected :call, :thread, :task, :fiber, :fork, :ractor, " \
+      "or an array of the mentioned options"
     end
   end
 
data/lib/llm/error.rb CHANGED
@@ -78,4 +78,8 @@ module LLM
   ##
   # When {LLM::Registry} can't map a registry
   NoSuchRegistryError = Class.new(Error)
+
+  ##
+  # When an optional runtime dependency cannot be required
+  LoadError = Class.new(Error)
 end
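The new `LLM::LoadError` pairs with the `LLM.require` routing mentioned in the changelog: requiring an optional gem and converting a raw `LoadError` into a domain error carrying installation guidance. A minimal sketch of that pattern, using an illustrative `Guarded` module rather than llm.rb's exact implementation:

```ruby
# Sketch of an optional-dependency guard: a missing gem surfaces as a
# domain-specific LoadError with install guidance. Names are illustrative.
module Guarded
  Error = Class.new(StandardError)
  LoadError = Class.new(Error)

  def self.require(feature, gem_name: feature)
    Kernel.require(feature)
  rescue ::LoadError # Ruby's built-in LoadError, not Guarded::LoadError
    raise LoadError, "#{feature} is required for this feature. " \
                     "Install it with: gem install #{gem_name}"
  end
end

Guarded.require("json")    # stdlib, loads fine
# Guarded.require("xchan") # would raise Guarded::LoadError if the gem is absent
```

The `::LoadError` in the rescue clause matters: inside the module, the bare constant resolves to `Guarded::LoadError`, so the built-in exception must be referenced from the top level.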
@@ -26,7 +26,8 @@ class LLM::Function
   # Controls concurrency strategy:
   # - `:thread`: Use threads
   # - `:task`: Use async tasks (requires async gem)
-  # - `:fiber`: Use raw fibers
+  # - `:fiber`: Use scheduler-backed fibers (requires Fiber.scheduler)
+  # - `:fork`: Use forked child processes
   # - `:ractor`: Use Ruby ractors (class-based tools only; MCP tools are not supported)
   #
   # @return [LLM::Function::ThreadGroup, LLM::Function::TaskGroup, LLM::Function::FiberGroup, LLM::Function::Ractor::Group]
@@ -38,10 +39,12 @@ class LLM::Function
       ThreadGroup.new(map { |fn| fn.spawn(:thread) })
     when :fiber
       FiberGroup.new(map { |fn| fn.spawn(:fiber) })
+    when :fork
+      Fork::Group.new(map { |fn| fn.spawn(:fork) })
     when :ractor
       Ractor::Group.new(map { |fn| fn.spawn(:ractor) })
     else
-      raise ArgumentError, "Unknown strategy: #{strategy.inspect}. Expected :thread, :task, :fiber, or :ractor"
+      raise ArgumentError, "Unknown strategy: #{strategy.inspect}. Expected :thread, :task, :fiber, :fork, or :ractor"
     end
   end
 
@@ -53,7 +56,8 @@ class LLM::Function
   # Controls concurrency strategy:
   # - `:thread`: Use threads
   # - `:task`: Use async tasks (requires async gem)
-  # - `:fiber`: Use raw fibers
+  # - `:fiber`: Use scheduler-backed fibers (requires Fiber.scheduler)
+  # - `:fork`: Use forked child processes
   # - `:ractor`: Use Ruby ractors (class-based tools only; MCP tools are not supported)
   #
   # @return [Array<LLM::Function::Return>]
@@ -4,10 +4,10 @@ class LLM::Function
   ##
   # The {LLM::Function::FiberGroup} class wraps an array of
   # {Fiber} objects that are running {LLM::Function} calls
-  # concurrently using raw fibers.
+  # concurrently using scheduler-backed fibers.
   #
   # This class provides the same interface as {LLM::Function::ThreadGroup}
-  # but uses raw fibers for lightweight concurrency without the async gem.
+  # but uses scheduler-backed fibers for cooperative concurrency.
   #
   # @example
   #   llm = LLM.openai(key: ENV["KEY"])
@@ -90,10 +90,16 @@ class LLM::Function
     # order as the original fibers.
     def wait
       @fibers.map do |fiber|
-        fiber.resume if fiber.alive?
+        fiber.alive? ? scheduler.run : nil
         fiber.value
       end
     end
     alias_method :value, :wait
+
+    private
+
+    def scheduler
+      Fiber.scheduler
+    end
   end
 end
@@ -0,0 +1,67 @@
+# frozen_string_literal: true
+
+class LLM::Function
+  ##
+  # The {LLM::Function::Fork::Job} class represents a single fork-backed
+  # function call inside the child process.
+  #
+  # It is executed in the forked process and is responsible for running the
+  # resolved tool instance, handling control messages such as interrupts, and
+  # writing the final result back to the parent process.
+  class Fork::Job
+    ##
+    # @param [LLM::Function] function
+    # @param [LLM::Object] ch
+    # @return [LLM::Function::Fork::Job]
+    def initialize(function, ch)
+      @function = function
+      @ch = ch
+    end
+
+    ##
+    # @return [void]
+    def call
+      runner = @function.runner
+      controller = setup(runner)
+      @ch.result.write([:result, call!(runner)])
+    rescue => ex
+      @ch.result.write([:result, error(ex)])
+    ensure
+      controller&.kill
+      [@ch.control, @ch.result].each { _1.close unless _1.closed? }
+    end
+
+    private
+
+    def call!(runner)
+      kwargs = if Hash === @function.arguments
+        @function.arguments.transform_keys(&:to_sym)
+      else
+        @function.arguments
+      end
+      {id: @function.id, name: @function.name, value: runner.call(**kwargs)}
+    end
+
+    def error(ex)
+      {
+        id: @function.id,
+        name: @function.name,
+        value: {error: true, type: ex.class.name, message: ex.message}
+      }
+    end
+
+    def setup(runner)
+      ready = Queue.new
+      thread = Thread.new do
+        ready << true
+        kind = @ch.control.recv
+        next unless kind == :interrupt
+        hook = %i[on_cancel on_interrupt].find { runner.respond_to?(_1) }
+        runner.public_send(hook) if hook
+      rescue IOError, ArgumentError
+      end
+      ready.pop
+      thread
+    end
+  end
+end
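The `setup` method in the new file above starts a listener thread in the child that blocks on the control channel and invokes the tool's interrupt hook when an `:interrupt` message arrives. The same pattern can be shown in plain Ruby with a `Queue` standing in for the xchan channel; `SlowTool` and `with_interrupt_listener` below are illustrative names, not llm.rb API:

```ruby
# A tool that records when its interrupt hook fires.
class SlowTool
  attr_reader :interrupted

  def on_interrupt
    @interrupted = true
  end
end

# Start a listener thread that waits for an :interrupt control message
# and invokes the tool's hook, mirroring Fork::Job#setup.
def with_interrupt_listener(runner, control)
  ready = Queue.new
  thread = Thread.new do
    ready << true
    kind = control.pop # blocks, like @ch.control.recv
    runner.on_interrupt if kind == :interrupt && runner.respond_to?(:on_interrupt)
  end
  ready.pop # don't return until the listener is actually running
  thread
end

tool = SlowTool.new
control = Queue.new
listener = with_interrupt_listener(tool, control)
control << :interrupt
listener.join
```

The `ready` queue handshake matters: without it, the caller could send a control message before the listener thread has started waiting.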