ruby_llm-agents 2.1.0 → 2.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: b341c701b87662c5947fec756ba5a925759f8179700683dda02c1b30951d3213
-  data.tar.gz: bec52ce05c7958b6954b8f6df23a8a9058ac2a9f900decdb858ed108623ff5af
+  metadata.gz: 0a12fa00f68f2dcc889d91017ac8a009796089b19ab81908cdbd1e33f24f9cea
+  data.tar.gz: cb1ed8fc426ed31e51b0796e5deb7f570410a562b7d579eb0e486046d3fb4d2c
 SHA512:
-  metadata.gz: fefed6d947fb4e9dc8e85a683c72754062d22c016c1de61b9922d49ae9a54cef9e84e638cadb1000e221fdd7616bfd1764fbf47bf5ba11404126acedcebe7c03
-  data.tar.gz: f93588c36b8057878b951c8411e00c2cd94ecc6189be8d33f70cb49e7d7e81098755606fbc3dcb6b2b8dfe85efc5c043a3d1884b7ecf48fc185a4863b4d0f16d
+  metadata.gz: bf6939777a18aa2a00b213373bb3400cf17be976422bb412ef665405f5b2b2281978e025b88231816d3433c4ac2d52793aafd046372ae608a6ba6e6d70e8f771
+  data.tar.gz: 274e161fc65dd2e5f36a5c552a0e086576470505011aee2ce9713669a027e81b7e1c964e3638dca835f432b60a6a9282c2eaca72e7b2086bf469e6d6c72b35d2
data/README.md CHANGED
@@ -9,6 +9,7 @@
 > **Production-ready Rails engine for building, managing, and monitoring LLM-powered AI agents**
 
 [![Gem Version](https://badge.fury.io/rb/ruby_llm-agents.svg)](https://rubygems.org/gems/ruby_llm-agents)
+[![CI](https://github.com/adham90/ruby_llm-agents/actions/workflows/ci.yml/badge.svg)](https://github.com/adham90/ruby_llm-agents/actions/workflows/ci.yml)
 [![Ruby](https://img.shields.io/badge/ruby-%3E%3D%203.1-ruby.svg)](https://www.ruby-lang.org)
 [![Rails](https://img.shields.io/badge/rails-%3E%3D%207.0-red.svg)](https://rubyonrails.org)
 [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
@@ -26,16 +27,15 @@ Build intelligent AI agents in Ruby with a clean DSL, automatic execution tracki
 ## Show Me the Code
 
 ```ruby
-# app/agents/search_intent_agent.rb
+# Template agent — structured input via .call
 class SearchIntentAgent < ApplicationAgent
   model "gpt-4o"
   temperature 0.0
 
-  # Prompts with {placeholder} syntax - params auto-registered
   system "You are a search intent analyzer. Extract structured data from queries."
-  prompt "Extract search intent from: {query}"
+  user "Extract search intent from: {query}"
+  assistant '{"refined_query":' # Force JSON output
 
-  # Structured output with returns DSL
   returns do
     string :refined_query, description: "Cleaned search query"
     array :filters, of: :string, description: "Extracted filters"
@@ -51,15 +51,17 @@ result.duration_ms # => 850
 ```
 
 ```ruby
-# Multi-turn conversations
-result = ChatAgent.call(
-  query: "What's my name?",
-  messages: [
-    { role: :user, content: "My name is Alice" },
-    { role: :assistant, content: "Nice to meet you, Alice!" }
-  ]
-)
-# => "Your name is Alice!"
+# Conversational agent — freeform input via .ask
+class RubyExpert < ApplicationAgent
+  model "claude-sonnet-4-5-20250929"
+  system "You are a senior Ruby developer with 20 years of experience."
+end
+
+result = RubyExpert.ask("What's the difference between proc and lambda?")
+puts result.content
+
+# Stream the response
+RubyExpert.ask("Explain metaprogramming") { |chunk| print chunk.content }
 ```
 
 ```ruby
@@ -67,7 +69,7 @@ result = ChatAgent.call(
 class ReliableAgent < ApplicationAgent
   model "gpt-4o"
 
-  prompt "{query}"
+  user "{query}"
 
   on_failure do
     retries times: 3, backoff: :exponential
@@ -152,6 +154,19 @@ rails db:migrate
 
 ### Configure API Keys
 
+Configure all provider API keys in one place (v2.1+):
+
+```ruby
+# config/initializers/ruby_llm_agents.rb
+RubyLLM::Agents.configure do |config|
+  config.openai_api_key = ENV["OPENAI_API_KEY"]
+  config.anthropic_api_key = ENV["ANTHROPIC_API_KEY"]
+  config.gemini_api_key = ENV["GOOGLE_API_KEY"]
+end
+```
+
+Or use environment variables directly (auto-detected by RubyLLM):
+
 ```bash
 # .env
 OPENAI_API_KEY=sk-...
@@ -212,7 +227,7 @@ mount RubyLLM::Agents::Engine => "/agents"
 
 - **Ruby** >= 3.1.0
 - **Rails** >= 7.0
-- **RubyLLM** >= 1.11.0
+- **RubyLLM** >= 1.12.0
 
 ## Contributing
 
@@ -76,6 +76,39 @@ module RubyLLM
   instance.call(&block)
 end
 
+# Executes the agent with a freeform message as the user prompt
+#
+# Designed for conversational agents that define a persona (system +
+# optional assistant prefill) but accept freeform input at runtime.
+# Also works on template agents as an escape hatch to bypass the
+# user template.
+#
+# @param message [String] The user message to send
+# @param with [String, Array<String>, nil] Attachments (files, URLs)
+# @param kwargs [Hash] Additional options (model:, temperature:, etc.)
+# @yield [chunk] Yields chunks when streaming
+# @return [Result] The processed response
+#
+# @example Basic usage
+#   RubyExpert.ask("What is metaprogramming?")
+#
+# @example With streaming
+#   RubyExpert.ask("Explain closures") { |chunk| print chunk.content }
+#
+# @example With attachments
+#   RubyExpert.ask("What's in this image?", with: "photo.jpg")
+#
+def ask(message, with: nil, **kwargs, &block)
+  opts = kwargs.merge(_ask_message: message)
+  opts[:with] = with if with
+
+  if block
+    stream(**opts, &block)
+  else
+    call(**opts)
+  end
+end
+
 # Returns the agent type for this class
 #
 # Used by middleware to determine which tracking/budget config to use.
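For orientation, a short sketch of how the new `.ask` entry point composes with the existing `.call` API. `SearchAgent` and its param are hypothetical, assembled from the DSL shown elsewhere in this diff:

```ruby
# Minimal sketch: SearchAgent is hypothetical, not part of the gem.
class SearchAgent < ApplicationAgent
  model "gpt-4o"
  user "Search for: {query}"   # registers :query as a required param
end

# Template path: params are validated and interpolated into the template.
SearchAgent.call(query: "ruby 3.4 features")

# Escape hatch: .ask skips template param validation and sends the
# freeform message as the user prompt; kwargs still flow through.
SearchAgent.ask("What shipped in Ruby 3.4?", temperature: 0.2)

# With a block, .ask delegates to stream instead of call.
SearchAgent.ask("Summarize Ruby 3.4") { |chunk| print chunk.content }
```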
@@ -221,12 +254,13 @@ module RubyLLM
 # @param temperature [Float] Override the class-level temperature
 # @param options [Hash] Agent parameters defined via the param DSL
 def initialize(model: self.class.model, temperature: self.class.temperature, **options)
+  @ask_message = options.delete(:_ask_message)
   @model = model
   @temperature = temperature
   @options = options
   @tracked_tool_calls = []
   @pending_tool_call = nil
-  validate_required_params!
+  validate_required_params! unless @ask_message
 end
 
 # Executes the agent through the middleware pipeline
@@ -245,15 +279,21 @@ module RubyLLM
 
 # User prompt to send to the LLM
 #
-# If a class-level `prompt` DSL is defined (string template or block),
-# it will be used. Otherwise, subclasses must implement this method.
+# Resolution order:
+# 1. Subclass method override (standard Ruby dispatch; this method is never called)
+# 2. .ask(message) runtime message — bypasses template
+# 3. Class-level `user` / `prompt` template — interpolated with {placeholders}
+# 4. Inherited from superclass
+# 5. NotImplementedError
 #
 # @return [String] The user prompt
 def user_prompt
-  prompt_config = self.class.prompt_config
-  return resolve_prompt_from_config(prompt_config) if prompt_config
+  return @ask_message if @ask_message
 
-  raise NotImplementedError, "#{self.class} must implement #user_prompt or use the prompt DSL"
+  config = self.class.user_config
+  return resolve_prompt_from_config(config) if config
+
+  raise NotImplementedError, "#{self.class} must implement #user_prompt, use the `user` DSL, or call with .ask(message)"
 end
 
 # System prompt for LLM instructions
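To make that resolution order concrete, a minimal sketch; class names are hypothetical, and the `params` accessor is the one shown in the DSL docs below:

```ruby
# Hypothetical classes illustrating the user_prompt resolution order.
class TemplateAgent < ApplicationAgent
  user "Summarize: {text}"             # (3) class-level template
end

class OverrideAgent < TemplateAgent
  def user_prompt                      # (1) method override wins outright
    "Summarize briefly: #{params[:text]}"
  end
end

TemplateAgent.call(text: "...")        # interpolates the {text} template
TemplateAgent.ask("Freeform question") # (2) runtime message bypasses it
```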
@@ -269,6 +309,19 @@ module RubyLLM
   nil
 end
 
+# Assistant prefill to prime the model's response
+#
+# If a class-level `assistant` DSL is defined, it will be used.
+# Otherwise returns nil (no prefill).
+#
+# @return [String, nil] The assistant prefill, or nil for none
+def assistant_prompt
+  config = self.class.assistant_config
+  return resolve_prompt_from_config(config) if config
+
+  nil
+end
+
 # Response schema for structured output
 #
 # Delegates to the class-level schema DSL by default.
@@ -381,6 +434,7 @@ module RubyLLM
 {
   temperature: temperature,
   system_prompt: system_prompt,
+  assistant_prefill: assistant_prompt,
   schema: schema,
   messages: resolved_messages,
   tools: resolved_tools,
@@ -419,11 +473,26 @@ module RubyLLM
 
 # Resolves messages for this execution
 #
+# Returns per-call messages when provided, otherwise the class-level
+# conversation history. The assistant prefill is not appended here; it
+# is injected separately (see #resolved_assistant_prefill) so that it
+# lands after the user prompt.
+#
 # @return [Array<Hash>] Messages to apply
 def resolved_messages
-  return @options[:messages] if @options[:messages]&.any?
+  msgs = @options[:messages]&.any? ? @options[:messages] : messages
+  msgs.dup
+end
 
-  messages
+# Returns the assistant prefill message if defined
+#
+# Built from #assistant_prompt; blank prefills are treated as absent.
+#
+# @return [Hash, nil] The assistant prefill message hash, or nil
+def resolved_assistant_prefill
+  prefill = assistant_prompt
+  return nil if prefill.nil? || (prefill.is_a?(String) && prefill.empty?)
+
+  { role: :assistant, content: prefill }
 end
 
 # Returns whether streaming is enabled
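For intuition, what the prefill resolves to at runtime. `ClassifierAgent` is hypothetical, and `resolved_assistant_prefill` is internal, so the `send` is for illustration only:

```ruby
# Hypothetical agent using a prefill to force a JSON response shape.
class ClassifierAgent < ApplicationAgent
  model "gpt-4o"
  user "Classify: {text}"
  assistant '{"category":'
end

agent = ClassifierAgent.new(text: "blocks vs procs")
agent.send(:resolved_assistant_prefill) # internal API, shown for illustration
# => { role: :assistant, content: '{"category":' }
# A nil or empty prefill yields nil, and the prefill path is skipped.
```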
@@ -446,6 +515,7 @@ module RubyLLM
   timeout: self.class.timeout,
   system_prompt: system_prompt,
   user_prompt: user_prompt,
+  assistant_prompt: assistant_prompt,
   attachments: @options[:with],
   schema: schema&.class&.name,
   streaming: self.class.streaming,
@@ -546,32 +616,75 @@ module RubyLLM
 
 # Executes the LLM call
 #
+# When an assistant prefill is defined, messages are added manually
+# (user, then assistant) before calling complete, so the model
+# continues from the prefill. Otherwise, uses the standard .ask flow.
+#
 # @param client [RubyLLM::Chat] The configured client
 # @param context [Pipeline::Context] The execution context
 # @return [RubyLLM::Message] The response
 def execute_llm_call(client, context)
   timeout = self.class.timeout
-  ask_opts = {}
-  ask_opts[:with] = @options[:with] if @options[:with]
+  prefill = resolved_assistant_prefill
 
   Timeout.timeout(timeout) do
-    if streaming_enabled? && context.stream_block
-      execute_with_streaming(client, context, ask_opts)
+    if prefill
+      execute_with_prefill(client, context, prefill)
+    elsif streaming_enabled? && context.stream_block
+      execute_with_streaming(client, context)
     else
+      ask_opts = {}
+      ask_opts[:with] = @options[:with] if @options[:with]
       client.ask(user_prompt, **ask_opts)
     end
   end
 end
 
+# Executes with assistant prefill
+#
+# Manually adds the user message and assistant prefill, then calls
+# complete so the model continues from the prefill text.
+#
+# @param client [RubyLLM::Chat] The client
+# @param context [Pipeline::Context] The context
+# @param prefill [Hash] The assistant prefill message ({role:, content:})
+# @return [RubyLLM::Message] The response
+def execute_with_prefill(client, context, prefill)
+  # We use add_message + complete instead of .ask so we can insert the
+  # assistant prefill between the user message and the completion.
+  # Note: attachments (with:) are not applied on this path.
+  client.add_message(role: :user, content: user_prompt)
+  client.add_message(**prefill)
+
+  if streaming_enabled? && context.stream_block
+    first_chunk_at = nil
+    started_at = context.started_at || Time.current
+
+    response = client.complete do |chunk|
+      first_chunk_at ||= Time.current
+      context.stream_block.call(chunk)
+    end
+
+    if first_chunk_at
+      context.time_to_first_token_ms = ((first_chunk_at - started_at) * 1000).to_i
+    end
+
+    response
+  else
+    client.complete
+  end
+end
+
 # Executes with streaming enabled
 #
 # @param client [RubyLLM::Chat] The client
 # @param context [Pipeline::Context] The context
-# @param ask_opts [Hash] Options for the ask call
 # @return [RubyLLM::Message] The response
-def execute_with_streaming(client, context, ask_opts)
+def execute_with_streaming(client, context)
   first_chunk_at = nil
   started_at = context.started_at || Time.current
+  ask_opts = {}
+  ask_opts[:with] = @options[:with] if @options[:with]
 
   response = client.ask(user_prompt, **ask_opts) do |chunk|
     first_chunk_at ||= Time.current
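Roughly, the provider-visible message sequence on the prefill path looks like this (illustrative values, not actual library output):

```ruby
# Illustrative only: the messages ClassifierAgent (hypothetical, above)
# would send when a prefill is defined:
#
#   { role: :system,    content: "..." },            # system_prompt, if any
#   { role: :user,      content: "Classify: ..." },  # user_prompt
#   { role: :assistant, content: '{"category":' }    # prefill, last message
#
# client.complete asks the model to continue the final assistant message,
# so the raw response begins mid-JSON, e.g. ' "ruby", "confidence": ...}'.
```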
@@ -4,6 +4,6 @@ module RubyLLM
   module Agents
     # Current version of the RubyLLM::Agents gem
     # @return [String] Semantic version string
-    VERSION = "2.1.0"
+    VERSION = "2.2.0"
   end
 end
@@ -7,17 +7,23 @@ module RubyLLM
 #
 # Provides common configuration methods that every agent type needs:
 # - model: The LLM model to use
-# - prompt: The user prompt (string with {placeholders} or block)
 # - system: System instructions
+# - user: The user prompt (string with {placeholders})
+# - assistant: Assistant prefill (string with optional {placeholders})
 # - description: Human-readable description
 # - timeout: Request timeout
 # - returns: Structured output schema
 #
-# @example Simplified DSL
+# Two levels for defining prompts:
+# - Class-level string/heredoc for static content
+# - Instance method override for dynamic content
+#
+# @example Template agent (structured input via .call)
 #   class SearchAgent < RubyLLM::Agents::BaseAgent
 #     model "gpt-4o"
 #     system "You are a helpful search assistant."
-#     prompt "Search for: {query} (limit: {limit})"
+#     user "Search for: {query} (limit: {limit})"
+#     assistant '{"results":['
 #
 #     param :limit, default: 10 # Override auto-detected param
 #
@@ -29,10 +35,22 @@ module RubyLLM
 #     end
 #   end
 #
-# @example Dynamic prompt with block
-#   class SummaryAgent < RubyLLM::Agents::BaseAgent
-#     prompt do
-#       "Summarize in #{word_count} words: #{text}"
+# @example Conversational agent (freeform input via .ask)
+#   class RubyExpert < RubyLLM::Agents::BaseAgent
+#     model "gpt-4o"
+#     system "You are a senior Ruby developer."
+#   end
+#
+#   RubyExpert.ask("What is metaprogramming?")
+#
+# @example Dynamic prompts with method overrides
+#   class SmartAgent < RubyLLM::Agents::BaseAgent
+#     def system_prompt
+#       "You are helping #{company.name}. Today is #{Date.today}."
+#     end
+#
+#     def user_prompt
+#       "Question: #{params[:question]}"
 #     end
 #   end
 #
@@ -53,49 +71,70 @@ module RubyLLM
   @model || inherited_or_default(:model, default_model)
 end
 
-# Sets the user prompt template or block
+# Sets the user prompt template
 #
 # When a string is provided, {placeholder} syntax is used to interpolate
 # parameters. Parameters are automatically registered (as required) unless
 # already defined with `param`.
 #
-# When a block is provided, it's evaluated in the instance context at
-# execution time, allowing access to all instance methods and parameters.
-#
 # @param template [String, nil] Prompt template with {placeholder} syntax
-# @yield Block that returns the prompt string (evaluated at execution time)
-# @return [String, Proc, nil] The current prompt configuration
+# @return [String, nil] The current user prompt configuration
 #
 # @example With template string (parameters auto-detected)
-#   prompt "Search for: {query} in {category}"
+#   user "Search for: {query} in {category}"
 #   # Automatically registers :query and :category as required params
 #
-# @example With block for dynamic prompts
-#   prompt do
-#     base = "Analyze the following"
-#     base += " in #{language}" if language != "en"
-#     "#{base}: #{text}"
-#   end
+# @example Multi-line with heredoc
+#   user <<~S
+#     Search for: {query}
+#     Category: {category}
+#     Limit: {limit}
+#   S
+#
+def user(template = nil)
+  if template
+    @user_template = template
+    auto_register_params_from_template(template)
+  end
+  @user_template || @prompt_template || @prompt_block || inherited_or_default(:user_config, nil)
+end
+
+# Returns the user prompt configuration
+#
+# @return [String, Proc, nil] The user template, or nil
+def user_config
+  @user_template || @prompt_template || @prompt_block || inherited_or_default(:user_config, nil)
+end
+
+# Backward-compatible alias for `user`
 #
+# @deprecated Use `user` instead
+# @param template [String, nil] Prompt template with {placeholder} syntax
+# @yield Block that returns the prompt string (evaluated at execution time)
+# @return [String, Proc, nil] The current prompt configuration
 def prompt(template = nil, &block)
   if template
-    @prompt_template = template
+    @user_template = template
     auto_register_params_from_template(template)
   elsif block
     @prompt_block = block
   end
-  @prompt_template || @prompt_block || inherited_or_default(:prompt_config, nil)
+  @user_template || @prompt_template || @prompt_block || inherited_or_default(:user_config, nil)
 end
 
-# Returns the prompt configuration (template or block)
+# Returns the prompt configuration (alias for user_config)
 #
+# @deprecated Use `user_config` instead
 # @return [String, Proc, nil] The prompt template, block, or nil
 def prompt_config
-  @prompt_template || @prompt_block || inherited_or_default(:prompt_config, nil)
+  user_config
 end
 
 # Sets the system prompt/instructions
 #
+# When a string is provided, {placeholder} syntax is supported for
+# parameter interpolation, same as the `user` DSL.
+#
 # @param text [String, nil] System instructions for the LLM
 # @yield Block that returns the system prompt (evaluated at execution time)
 # @return [String, Proc, nil] The current system prompt
@@ -103,14 +142,18 @@ module RubyLLM
 # @example Static system prompt
 #   system "You are a helpful assistant. Be concise and accurate."
 #
-# @example Dynamic system prompt
-#   system do
-#     "You are helping #{user_name}. Their preferences: #{preferences}"
+# @example With placeholders
+#   system "You are helping {user_name} with their {task}."
+#
+# @example Dynamic system prompt (method override)
+#   def system_prompt
+#     "You are helping #{user_name}. Today is #{Date.today}."
 #   end
 #
 def system(text = nil, &block)
   if text
     @system_template = text
+    auto_register_params_from_template(text)
   elsif block
     @system_block = block
   end
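One behavioral consequence of that change: because `system` now calls `auto_register_params_from_template`, system-prompt placeholders become required params just like `user` ones. A small sketch with a hypothetical class:

```ruby
# Hypothetical agent with placeholders in both system and user templates.
class SupportAgent < ApplicationAgent
  system "You are helping {user_name} with their {task}."
  user "{question}"
end

# :user_name, :task, and :question are all auto-registered as required.
SupportAgent.call(user_name: "Alice", task: "billing", question: "Refund?")

# Omitting a system placeholder now fails validation (the exact error
# class is not shown in this diff):
SupportAgent.call(question: "Refund?")
```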
@@ -124,6 +167,39 @@ module RubyLLM
   @system_template || @system_block || inherited_or_default(:system_config, nil)
 end
 
+# Sets the assistant prefill string
+#
+# The assistant prefill is sent as the last message with the "assistant"
+# role, priming the model to continue from that point. Useful for:
+# - Forcing output format (e.g., starting with "{" for JSON)
+# - Steering the response style
+#
+# Supports {placeholder} syntax for parameter interpolation.
+#
+# @param text [String, nil] The assistant prefill text
+# @return [String, nil] The current assistant configuration
+#
+# @example Force JSON output
+#   assistant '{"category":'
+#
+# @example With placeholders
+#   assistant "Results for {query}:"
+#
+def assistant(text = nil)
+  if text
+    @assistant_template = text
+    auto_register_params_from_template(text)
+  end
+  @assistant_template || inherited_or_default(:assistant_config, nil)
+end
+
+# Returns the assistant prefill configuration
+#
+# @return [String, nil] The assistant template, or nil
+def assistant_config
+  @assistant_template || inherited_or_default(:assistant_config, nil)
+end
+
 # Sets or returns the description for this agent class
 #
 # Useful for documentation and tool registration.
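Since `assistant_config` falls through to `inherited_or_default`, prefills inherit down subclass chains like the other DSL settings. A brief sketch with hypothetical classes:

```ruby
# Hypothetical hierarchy showing prefill inheritance.
class JsonAgent < ApplicationAgent
  assistant "{"            # base prefill nudges subclasses toward JSON
end

class ProductJsonAgent < JsonAgent
  user "Describe product: {sku}"
end

ProductJsonAgent.assistant_config # => "{" (inherited from JsonAgent)

class PlainAgent < JsonAgent
  assistant "Answer: "     # subclass override replaces the inherited prefill
end

PlainAgent.assistant_config       # => "Answer: "
```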
metadata CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: ruby_llm-agents
 version: !ruby/object:Gem::Version
-  version: 2.1.0
+  version: 2.2.0
 platform: ruby
 authors:
 - adham90