ruby_llm-agents 2.0.0 → 2.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
- metadata.gz: 7f2dcacc45b93f72d6de4f0d8ac2f56d85288f9cacc5495838fbc40473714ddf
- data.tar.gz: d6e2e7d7c78d9f6443cf8f01fa43733951e552922418ba8ee7226df4977d51a1
+ metadata.gz: 0a12fa00f68f2dcc889d91017ac8a009796089b19ab81908cdbd1e33f24f9cea
+ data.tar.gz: cb1ed8fc426ed31e51b0796e5deb7f570410a562b7d579eb0e486046d3fb4d2c
 SHA512:
- metadata.gz: 3231be55f4d51f39bb8339ac878a82eef2c2ee5d2894a53356c43a7fc20fd9381141e19d1b447e7c755a6117d687a241e7d3be701792ebf5e97ce3f5e7cf4758
- data.tar.gz: d89ded2027aa6ef14494216a116fcf42a6b549128677ec24a3881a63109f06a15955d0c824d7b3ff9c8aeecaa1ebdd675099d38bb5a9dce8dd4344bf045cd15f
+ metadata.gz: bf6939777a18aa2a00b213373bb3400cf17be976422bb412ef665405f5b2b2281978e025b88231816d3433c4ac2d52793aafd046372ae608a6ba6e6d70e8f771
+ data.tar.gz: 274e161fc65dd2e5f36a5c552a0e086576470505011aee2ce9713669a027e81b7e1c964e3638dca835f432b60a6a9282c2eaca72e7b2086bf469e6d6c72b35d2
data/README.md CHANGED
@@ -9,6 +9,7 @@
 > **Production-ready Rails engine for building, managing, and monitoring LLM-powered AI agents**

 [![Gem Version](https://badge.fury.io/rb/ruby_llm-agents.svg)](https://rubygems.org/gems/ruby_llm-agents)
+ [![CI](https://github.com/adham90/ruby_llm-agents/actions/workflows/ci.yml/badge.svg)](https://github.com/adham90/ruby_llm-agents/actions/workflows/ci.yml)
 [![Ruby](https://img.shields.io/badge/ruby-%3E%3D%203.1-ruby.svg)](https://www.ruby-lang.org)
 [![Rails](https://img.shields.io/badge/rails-%3E%3D%207.0-red.svg)](https://rubyonrails.org)
 [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
@@ -26,16 +27,15 @@ Build intelligent AI agents in Ruby with a clean DSL, automatic execution tracki
 ## Show Me the Code

 ```ruby
- # app/agents/search_intent_agent.rb
+ # Template agent — structured input via .call
 class SearchIntentAgent < ApplicationAgent
   model "gpt-4o"
   temperature 0.0

-   # Prompts with {placeholder} syntax - params auto-registered
   system "You are a search intent analyzer. Extract structured data from queries."
-   prompt "Extract search intent from: {query}"
+   user "Extract search intent from: {query}"
+   assistant '{"refined_query":' # Force JSON output

-   # Structured output with returns DSL
   returns do
     string :refined_query, description: "Cleaned search query"
     array :filters, of: :string, description: "Extracted filters"
@@ -51,15 +51,17 @@ result.duration_ms # => 850
 ```

 ```ruby
- # Multi-turn conversations
- result = ChatAgent.call(
-   query: "What's my name?",
-   messages: [
-     { role: :user, content: "My name is Alice" },
-     { role: :assistant, content: "Nice to meet you, Alice!" }
-   ]
- )
- # => "Your name is Alice!"
+ # Conversational agent — freeform input via .ask
+ class RubyExpert < ApplicationAgent
+   model "claude-sonnet-4-5-20250929"
+   system "You are a senior Ruby developer with 20 years of experience."
+ end
+
+ result = RubyExpert.ask("What's the difference between proc and lambda?")
+ puts result.content
+
+ # Stream the response
+ RubyExpert.ask("Explain metaprogramming") { |chunk| print chunk.content }
 ```

 ```ruby
@@ -67,7 +69,7 @@ result = ChatAgent.call(
 class ReliableAgent < ApplicationAgent
   model "gpt-4o"

-   prompt "{query}"
+   user "{query}"

   on_failure do
     retries times: 3, backoff: :exponential
@@ -152,6 +154,19 @@ rails db:migrate

 ### Configure API Keys

+ Configure all provider API keys in one place (v2.1+):
+
+ ```ruby
+ # config/initializers/ruby_llm_agents.rb
+ RubyLLM::Agents.configure do |config|
+   config.openai_api_key = ENV["OPENAI_API_KEY"]
+   config.anthropic_api_key = ENV["ANTHROPIC_API_KEY"]
+   config.gemini_api_key = ENV["GOOGLE_API_KEY"]
+ end
+ ```
+
+ Or use environment variables directly (auto-detected by RubyLLM):
+
 ```bash
 # .env
 OPENAI_API_KEY=sk-...
@@ -174,7 +189,16 @@ This creates `app/agents/search_intent_agent.rb` with the agent class ready to c
 mount RubyLLM::Agents::Engine => "/agents"
 ```

- ![RubyLLM Agents Dashboard](screenshot.png)
+ <table>
+   <tr>
+     <td><img src="screenshots/dashboard.png" alt="Dashboard Overview" width="400"></td>
+     <td><img src="screenshots/agents.png" alt="Agent Registry" width="400"></td>
+   </tr>
+   <tr>
+     <td><img src="screenshots/executions.png" alt="Execution Log" width="400"></td>
+     <td><img src="screenshots/tenants.png" alt="Multi-Tenancy" width="400"></td>
+   </tr>
+ </table>

 ## Documentation

@@ -193,7 +217,7 @@ mount RubyLLM::Agents::Engine => "/agents"
 | [Testing Agents](https://github.com/adham90/ruby_llm-agents/wiki/Testing-Agents) | RSpec patterns, mocking, dry_run mode |
 | [Error Handling](https://github.com/adham90/ruby_llm-agents/wiki/Error-Handling) | Error types, recovery patterns |
 | [Embeddings](https://github.com/adham90/ruby_llm-agents/wiki/Embeddings) | Vector embeddings, batching, caching, preprocessing |
- | [Image Generation](https://github.com/adham90/ruby_llm-agents/wiki/Image-Generation) | Text-to-image, templates, content policy, cost tracking |
+ | [Image Generation](https://github.com/adham90/ruby_llm-agents/wiki/Image-Generation) | Text-to-image, templates, pipelines, cost tracking |
 | [Dashboard](https://github.com/adham90/ruby_llm-agents/wiki/Dashboard) | Setup, authentication, analytics |
 | [Production](https://github.com/adham90/ruby_llm-agents/wiki/Production-Deployment) | Deployment best practices, background jobs |
 | [API Reference](https://github.com/adham90/ruby_llm-agents/wiki/API-Reference) | Complete class documentation |
@@ -203,7 +227,7 @@ mount RubyLLM::Agents::Engine => "/agents"

 - **Ruby** >= 3.1.0
 - **Rails** >= 7.0
- - **RubyLLM** >= 1.0
+ - **RubyLLM** >= 1.12.0

 ## Contributing
@@ -359,14 +359,18 @@ module RubyLLM

 # Resolves model info for cost calculation
 #
+ # Uses Models.find (local registry lookup) rather than Models.resolve
+ # because cost calculation only needs pricing data, not a provider instance.
+ # Models.resolve requires API keys to instantiate the provider, which may
+ # not be available in background jobs or instrumentation contexts.
+ #
 # @param lookup_model_id [String, nil] The model identifier (defaults to self.model_id)
 # @return [Object, nil] Model info or nil
 def resolve_model_info(lookup_model_id = nil)
   lookup_model_id ||= model_id
   return nil unless lookup_model_id

-   model, _provider = RubyLLM::Models.resolve(lookup_model_id)
-   model
+   RubyLLM::Models.find(lookup_model_id)
 rescue StandardError
   nil
 end
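The swap from `Models.resolve` to `Models.find` keeps the same contract: a local lookup that swallows errors and returns `nil` so cost tracking degrades gracefully. A minimal stand-alone sketch of that contract, with a hypothetical in-memory registry (the data here is illustrative, not the gem's actual pricing source):

```ruby
# Hypothetical stand-in for a model registry: look up pricing locally,
# return nil on any failure instead of raising, so callers can treat
# "unknown model" and "lookup error" identically.
REGISTRY = {
  "gpt-4o" => { input_price: 2.5, output_price: 10.0 }
}.freeze

def find_model_info(model_id)
  return nil unless model_id

  REGISTRY.fetch(model_id) # KeyError for unknown ids is rescued below
rescue StandardError
  nil
end

find_model_info("unknown") # => nil
```

Because the method never raises, instrumentation code can call it unconditionally and simply skip cost fields when it returns `nil`.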
@@ -148,12 +148,13 @@

 <!-- Desktop Navigation -->
 <nav class="hidden md:flex items-center ml-8 gap-0.5 font-mono text-xs">
-   <% [
+   <% nav_items = [
     [ruby_llm_agents.root_path, "dashboard"],
     [ruby_llm_agents.agents_path, "agents"],
-     [ruby_llm_agents.executions_path, "executions"],
-     [ruby_llm_agents.tenants_path, "tenants"]
-   ].each do |path, label| %>
+     [ruby_llm_agents.executions_path, "executions"]
+   ]
+   nav_items << [ruby_llm_agents.tenants_path, "tenants"] if tenant_filter_enabled?
+   nav_items.each do |path, label| %>
   <% active = current_page?(path) %>
   <%= link_to label, path, class: "px-2.5 py-1 rounded transition-colors #{active ? 'text-gray-900 dark:text-gray-100 bg-gray-100 dark:bg-gray-800' : 'text-gray-400 dark:text-gray-500 hover:text-gray-700 dark:hover:text-gray-300'}" %>
   <% end %>
@@ -197,12 +198,13 @@
 class="md:hidden border-t border-gray-200 dark:border-gray-800"
 >
 <nav class="max-w-7xl mx-auto px-4 py-2 font-mono text-xs space-y-0.5">
-   <% [
+   <% mobile_nav_items = [
     [ruby_llm_agents.root_path, "dashboard"],
     [ruby_llm_agents.agents_path, "agents"],
-     [ruby_llm_agents.executions_path, "executions"],
-     [ruby_llm_agents.tenants_path, "tenants"]
-   ].each do |path, label| %>
+     [ruby_llm_agents.executions_path, "executions"]
+   ]
+   mobile_nav_items << [ruby_llm_agents.tenants_path, "tenants"] if tenant_filter_enabled?
+   mobile_nav_items.each do |path, label| %>
   <% active = current_page?(path) %>
   <%= link_to label, path, class: "block px-3 py-2 rounded transition-colors #{active ? 'text-gray-900 dark:text-gray-100 bg-gray-100 dark:bg-gray-800' : 'text-gray-400 dark:text-gray-500 hover:text-gray-700 dark:hover:text-gray-300'}" %>
   <% end %>
@@ -108,9 +108,10 @@ module RubyLlmAgents
 say "Skill files (*.md) help AI coding assistants understand how to use this gem."
 say ""
 say "Next steps:"
- say "  1. Run migrations: rails db:migrate"
- say "  2. Generate an agent: rails generate ruby_llm_agents:agent MyAgent query:required"
- say "  3. Access the dashboard at: /agents"
+ say "  1. Set your API keys in config/initializers/ruby_llm_agents.rb"
+ say "  2. Run migrations: rails db:migrate"
+ say "  3. Generate an agent: rails generate ruby_llm_agents:agent MyAgent query:required"
+ say "  4. Access the dashboard at: /agents"
 say ""
 say "Generator commands:"
 say "  rails generate ruby_llm_agents:agent CustomerSupport query:required"
@@ -5,6 +5,30 @@
 # For more information, see: https://github.com/adham90/ruby_llm-agents

 RubyLLM::Agents.configure do |config|
+   # ============================================
+   # LLM Provider API Keys
+   # ============================================
+   # Configure at least one provider. Set these in your environment
+   # or replace ENV[] calls with your keys directly.
+
+   # config.openai_api_key = ENV["OPENAI_API_KEY"]
+   # config.anthropic_api_key = ENV["ANTHROPIC_API_KEY"]
+   # config.gemini_api_key = ENV["GOOGLE_API_KEY"]
+
+   # Additional providers:
+   # config.deepseek_api_key = ENV["DEEPSEEK_API_KEY"]
+   # config.openrouter_api_key = ENV["OPENROUTER_API_KEY"]
+   # config.mistral_api_key = ENV["MISTRAL_API_KEY"]
+   # config.xai_api_key = ENV["XAI_API_KEY"]
+
+   # Custom endpoints (e.g., Azure OpenAI, local Ollama):
+   # config.openai_api_base = "https://your-resource.openai.azure.com"
+   # config.ollama_api_base = "http://localhost:11434"
+
+   # Connection settings:
+   # config.request_timeout = 120
+   # config.max_retries = 3
+
   # ============================================
   # Model Defaults
   # ============================================
@@ -60,6 +60,28 @@ module RubyLlmAgents
   )
 end

+ def suggest_config_consolidation
+   ruby_llm_initializer = File.join(destination_root, "config/initializers/ruby_llm.rb")
+   agents_initializer = File.join(destination_root, "config/initializers/ruby_llm_agents.rb")
+
+   return unless File.exist?(ruby_llm_initializer) && File.exist?(agents_initializer)
+
+   say ""
+   say "Optional: You can now consolidate your API key configuration.", :yellow
+   say ""
+   say "Move your API keys from config/initializers/ruby_llm.rb"
+   say "into config/initializers/ruby_llm_agents.rb:"
+   say ""
+   say "  RubyLLM::Agents.configure do |config|"
+   say "    config.openai_api_key = ENV['OPENAI_API_KEY']"
+   say "    config.anthropic_api_key = ENV['ANTHROPIC_API_KEY']"
+   say "    # ... rest of your agent config"
+   say "  end"
+   say ""
+   say "Then delete config/initializers/ruby_llm.rb if it only contained API keys."
+   say ""
+ end
+
 def show_post_upgrade_message
   say ""
   say "RubyLLM::Agents upgrade complete!", :green
@@ -76,6 +76,39 @@ module RubyLLM
   instance.call(&block)
 end

+ # Executes the agent with a freeform message as the user prompt
+ #
+ # Designed for conversational agents that define a persona (system +
+ # optional assistant prefill) but accept freeform input at runtime.
+ # Also works on template agents as an escape hatch to bypass the
+ # user template.
+ #
+ # @param message [String] The user message to send
+ # @param with [String, Array<String>, nil] Attachments (files, URLs)
+ # @param kwargs [Hash] Additional options (model:, temperature:, etc.)
+ # @yield [chunk] Yields chunks when streaming
+ # @return [Result] The processed response
+ #
+ # @example Basic usage
+ #   RubyExpert.ask("What is metaprogramming?")
+ #
+ # @example With streaming
+ #   RubyExpert.ask("Explain closures") { |chunk| print chunk.content }
+ #
+ # @example With attachments
+ #   RubyExpert.ask("What's in this image?", with: "photo.jpg")
+ #
+ def ask(message, with: nil, **kwargs, &block)
+   opts = kwargs.merge(_ask_message: message)
+   opts[:with] = with if with
+
+   if block
+     stream(**opts, &block)
+   else
+     call(**opts)
+   end
+ end
+
 # Returns the agent type for this class
 #
 # Used by middleware to determine which tracking/budget config to use.
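The new `ask` entry point dispatches on block presence: a block means stream, no block means a plain blocking call. That idiom can be shown in isolation with a toy class whose `call`/`stream` are stand-ins for the gem's real pipeline (names and return values here are illustrative only):

```ruby
# Toy version of the ask dispatch: the caller's block decides whether
# we stream or perform a single blocking call.
class ToyAgent
  def self.call(**opts)
    "result(#{opts[:_ask_message]})"
  end

  def self.stream(**opts, &block)
    # Simulate streaming by yielding the message in small chunks
    opts[:_ask_message].chars.each_slice(4) { |chunk| block.call(chunk.join) }
    "streamed"
  end

  def self.ask(message, **kwargs, &block)
    opts = kwargs.merge(_ask_message: message)
    block ? stream(**opts, &block) : call(**opts)
  end
end

ToyAgent.ask("hello")                         # => "result(hello)"
ToyAgent.ask("hello") { |chunk| print chunk } # prints each chunk, returns "streamed"
```

Threading the message through a reserved `_ask_message` option, as the diff does, lets the same initializer serve both template params and freeform input.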
@@ -221,12 +254,13 @@ module RubyLLM
 # @param temperature [Float] Override the class-level temperature
 # @param options [Hash] Agent parameters defined via the param DSL
 def initialize(model: self.class.model, temperature: self.class.temperature, **options)
+   @ask_message = options.delete(:_ask_message)
   @model = model
   @temperature = temperature
   @options = options
   @tracked_tool_calls = []
   @pending_tool_call = nil
-   validate_required_params!
+   validate_required_params! unless @ask_message
 end

 # Executes the agent through the middleware pipeline
@@ -245,15 +279,21 @@ module RubyLLM

 # User prompt to send to the LLM
 #
- # If a class-level `prompt` DSL is defined (string template or block),
- # it will be used. Otherwise, subclasses must implement this method.
+ # Resolution order:
+ # 1. Subclass method override (standard Ruby dispatch; this method is never called)
+ # 2. .ask(message) runtime message — bypasses template
+ # 3. Class-level `user` / `prompt` template — interpolated with {placeholders}
+ # 4. Inherited from superclass
+ # 5. NotImplementedError
 #
 # @return [String] The user prompt
 def user_prompt
-   prompt_config = self.class.prompt_config
-   return resolve_prompt_from_config(prompt_config) if prompt_config
+   return @ask_message if @ask_message
+
+   config = self.class.user_config
+   return resolve_prompt_from_config(config) if config

-   raise NotImplementedError, "#{self.class} must implement #user_prompt or use the prompt DSL"
+   raise NotImplementedError, "#{self.class} must implement #user_prompt, use the `user` DSL, or call with .ask(message)"
 end

 # System prompt for LLM instructions
@@ -269,6 +309,19 @@ module RubyLLM
   nil
 end

+ # Assistant prefill to prime the model's response
+ #
+ # If a class-level `assistant` DSL is defined, it will be used.
+ # Otherwise returns nil (no prefill).
+ #
+ # @return [String, nil] The assistant prefill, or nil for none
+ def assistant_prompt
+   config = self.class.assistant_config
+   return resolve_prompt_from_config(config) if config
+
+   nil
+ end
+
 # Response schema for structured output
 #
 # Delegates to the class-level schema DSL by default.
@@ -381,6 +434,7 @@ module RubyLLM
 {
   temperature: temperature,
   system_prompt: system_prompt,
+   assistant_prefill: assistant_prompt,
   schema: schema,
   messages: resolved_messages,
   tools: resolved_tools,
@@ -419,11 +473,26 @@ module RubyLLM

 # Resolves messages for this execution
 #
+ # Includes conversation history and assistant prefill if defined.
+ # The assistant prefill is appended as the last message so it appears
+ # after the user prompt in the conversation.
+ #
 # @return [Array<Hash>] Messages to apply
 def resolved_messages
-   return @options[:messages] if @options[:messages]&.any?
+   msgs = @options[:messages]&.any? ? @options[:messages] : messages
+   msgs.dup
+ end
+
+ # Returns the assistant prefill message if defined
+ #
+ # Called after the user prompt is sent to inject the prefill.
+ #
+ # @return [Hash, nil] The assistant prefill message hash, or nil
+ def resolved_assistant_prefill
+   prefill = assistant_prompt
+   return nil if prefill.nil? || (prefill.is_a?(String) && prefill.empty?)

-   messages
+   { role: :assistant, content: prefill }
 end

 # Returns whether streaming is enabled
@@ -446,6 +515,7 @@ module RubyLLM
   timeout: self.class.timeout,
   system_prompt: system_prompt,
   user_prompt: user_prompt,
+   assistant_prompt: assistant_prompt,
   attachments: @options[:with],
   schema: schema&.class&.name,
   streaming: self.class.streaming,
@@ -546,32 +616,75 @@ module RubyLLM

 # Executes the LLM call
 #
+ # When an assistant prefill is defined, messages are added manually
+ # (user, then assistant) before calling complete, so the model
+ # continues from the prefill. Otherwise, uses the standard .ask flow.
+ #
 # @param client [RubyLLM::Chat] The configured client
 # @param context [Pipeline::Context] The execution context
 # @return [RubyLLM::Message] The response
 def execute_llm_call(client, context)
   timeout = self.class.timeout
-   ask_opts = {}
-   ask_opts[:with] = @options[:with] if @options[:with]
+   prefill = resolved_assistant_prefill

   Timeout.timeout(timeout) do
-     if streaming_enabled? && context.stream_block
-       execute_with_streaming(client, context, ask_opts)
+     if prefill
+       execute_with_prefill(client, context, prefill)
+     elsif streaming_enabled? && context.stream_block
+       execute_with_streaming(client, context)
     else
+       ask_opts = {}
+       ask_opts[:with] = @options[:with] if @options[:with]
       client.ask(user_prompt, **ask_opts)
     end
   end
 end

+ # Executes with assistant prefill
+ #
+ # Manually adds the user message and assistant prefill, then calls
+ # complete so the model continues from the prefill text.
+ #
+ # @param client [RubyLLM::Chat] The client
+ # @param context [Pipeline::Context] The context
+ # @param prefill [Hash] The assistant prefill message ({role:, content:})
+ # @return [RubyLLM::Message] The response
+ def execute_with_prefill(client, context, prefill)
+   # We use add_message + complete instead of .ask so we can insert the
+   # assistant prefill between the user message and the completion
+   client.add_message(role: :user, content: user_prompt)
+   client.add_message(**prefill)
+
+   if streaming_enabled? && context.stream_block
+     first_chunk_at = nil
+     started_at = context.started_at || Time.current
+
+     response = client.complete do |chunk|
+       first_chunk_at ||= Time.current
+       context.stream_block.call(chunk)
+     end
+
+     if first_chunk_at
+       context.time_to_first_token_ms = ((first_chunk_at - started_at) * 1000).to_i
+     end
+
+     response
+   else
+     client.complete
+   end
+ end
+
 # Executes with streaming enabled
 #
 # @param client [RubyLLM::Chat] The client
 # @param context [Pipeline::Context] The context
- # @param ask_opts [Hash] Options for the ask call
 # @return [RubyLLM::Message] The response
- def execute_with_streaming(client, context, ask_opts)
+ def execute_with_streaming(client, context)
   first_chunk_at = nil
   started_at = context.started_at || Time.current
+   ask_opts = {}
+   ask_opts[:with] = @options[:with] if @options[:with]

   response = client.ask(user_prompt, **ask_opts) do |chunk|
     first_chunk_at ||= Time.current
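Prefill works because the message list ends with a partial assistant turn that the model must continue, which is how `'{"refined_query":'` forces JSON output. The ordering can be sketched in plain Ruby, with no RubyLLM calls (the helper name is hypothetical):

```ruby
# Build the message list for a prefilled call: system, user, then the
# partial assistant message LAST, so the model's completion continues
# the prefill text (e.g. forcing JSON by starting with '{').
def build_messages(system:, user:, prefill: nil)
  msgs = []
  msgs << { role: :system, content: system } if system
  msgs << { role: :user, content: user }
  msgs << { role: :assistant, content: prefill } if prefill && !prefill.empty?
  msgs
end

msgs = build_messages(
  system: "You are a classifier.",
  user: "Classify: ruby",
  prefill: '{"category":'
)
msgs.last # => { role: :assistant, content: '{"category":' }
```

Because an empty prefill would add a useless trailing assistant message, it is skipped, mirroring the nil/empty guard in `resolved_assistant_prefill` above.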
@@ -613,29 +726,14 @@ module RubyLLM
 input_tokens = context.input_tokens || 0
 output_tokens = context.output_tokens || 0

- input_price = extract_model_price(model_info, :input_price)
- output_price = extract_model_price(model_info, :output_price)
+ input_price = model_info.pricing&.text_tokens&.input || 0
+ output_price = model_info.pricing&.text_tokens&.output || 0

 context.input_cost = (input_tokens / 1_000_000.0) * input_price
 context.output_cost = (output_tokens / 1_000_000.0) * output_price
 context.total_cost = (context.input_cost + context.output_cost).round(6)
 end

- # Extracts price from model info (supports both hash and object access)
- #
- # @param model_info [Hash, Object] Model info
- # @param key [Symbol] The price key
- # @return [Float] The price, or 0 if not found
- def extract_model_price(model_info, key)
-   if model_info.respond_to?(key)
-     model_info.send(key) || 0
-   elsif model_info.respond_to?(:[])
-     model_info[key] || 0
-   else
-     0
-   end
- end
-
 # Finds model pricing info
 #
 # @param model_id [String] The model ID
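The pricing fields are per one million tokens, hence the division by `1_000_000.0` before multiplying. The same arithmetic as a self-contained sketch (the prices are illustrative, not any provider's actual rates):

```ruby
# Cost = tokens / 1M * price-per-million, computed per direction and summed.
def total_cost(input_tokens:, output_tokens:, input_price:, output_price:)
  input_cost = (input_tokens / 1_000_000.0) * input_price
  output_cost = (output_tokens / 1_000_000.0) * output_price
  (input_cost + output_cost).round(6)
end

# 1,000 input tokens at $2.50/M plus 500 output tokens at $10.00/M:
total_cost(input_tokens: 1_000, output_tokens: 500,
           input_price: 2.5, output_price: 10.0)
# => 0.0075
```

Rounding to six decimal places keeps sub-cent precision while avoiding float noise in stored totals.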
@@ -351,6 +351,45 @@ module RubyLLM
 #   max_value_length: 5000
 # }

+ # API key and provider attributes forwarded to RubyLLM.
+ # These let users configure everything in one place through
+ # RubyLLM::Agents.configure instead of a separate RubyLLM.configure block.
+ FORWARDED_RUBY_LLM_ATTRIBUTES = %i[
+   openai_api_key
+   anthropic_api_key
+   gemini_api_key
+   deepseek_api_key
+   openrouter_api_key
+   bedrock_api_key
+   bedrock_secret_key
+   bedrock_session_token
+   bedrock_region
+   mistral_api_key
+   perplexity_api_key
+   xai_api_key
+   gpustack_api_key
+   openai_api_base
+   openai_organization_id
+   openai_project_id
+   gemini_api_base
+   gpustack_api_base
+   ollama_api_base
+   vertexai_project_id
+   vertexai_location
+   request_timeout
+   max_retries
+ ].freeze
+
+ FORWARDED_RUBY_LLM_ATTRIBUTES.each do |attr|
+   define_method(:"#{attr}=") do |value|
+     RubyLLM.config.public_send(:"#{attr}=", value)
+   end
+
+   define_method(attr) do
+     RubyLLM.config.public_send(attr)
+   end
+ end
+
 # Attributes without validation (simple accessors)
 attr_accessor :default_model,
   :async_logging,
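The forwarded accessors above are plain `define_method` delegation: the facade stores nothing itself and reads/writes the underlying config on every access. The same pattern in miniature, with a toy target object (all names here are illustrative, not the gem's):

```ruby
# Delegate a fixed list of attributes from a facade class to one shared
# underlying config object, so users can set everything in one place.
require "ostruct"

UNDERLYING = OpenStruct.new # stand-in for RubyLLM.config

class FacadeConfig
  FORWARDED = %i[api_key request_timeout].freeze

  FORWARDED.each do |attr|
    define_method(:"#{attr}=") { |value| UNDERLYING.public_send(:"#{attr}=", value) }
    define_method(attr) { UNDERLYING.public_send(attr) }
  end
end

facade = FacadeConfig.new
facade.api_key = "sk-test"
UNDERLYING.api_key # => "sk-test"
```

Because reads also delegate, a value set directly on the underlying config is immediately visible through the facade, so the two configuration paths can never drift apart.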
@@ -4,6 +4,6 @@ module RubyLLM
 module Agents
   # Current version of the RubyLLM::Agents gem
   # @return [String] Semantic version string
-   VERSION = "2.0.0"
+   VERSION = "2.2.0"
 end
 end
@@ -7,17 +7,23 @@ module RubyLLM
 #
 # Provides common configuration methods that every agent type needs:
 # - model: The LLM model to use
- # - prompt: The user prompt (string with {placeholders} or block)
 # - system: System instructions
+ # - user: The user prompt (string with {placeholders})
+ # - assistant: Assistant prefill (string with optional {placeholders})
 # - description: Human-readable description
 # - timeout: Request timeout
 # - returns: Structured output schema
 #
- # @example Simplified DSL
+ # Two ways to define prompts:
+ # - Class-level string/heredoc for static content
+ # - Instance method override for dynamic content
+ #
+ # @example Template agent (structured input via .call)
 #   class SearchAgent < RubyLLM::Agents::BaseAgent
 #     model "gpt-4o"
 #     system "You are a helpful search assistant."
-     #     prompt "Search for: {query} (limit: {limit})"
+ #     user "Search for: {query} (limit: {limit})"
+ #     assistant '{"results":['
 #
 #     param :limit, default: 10 # Override auto-detected param
 #
@@ -29,10 +35,22 @@ module RubyLLM
 #     end
 #   end
 #
- # @example Dynamic prompt with block
- #   class SummaryAgent < RubyLLM::Agents::BaseAgent
- #     prompt do
- #       "Summarize in #{word_count} words: #{text}"
+ # @example Conversational agent (freeform input via .ask)
+ #   class RubyExpert < RubyLLM::Agents::BaseAgent
+ #     model "gpt-4o"
+ #     system "You are a senior Ruby developer."
+ #   end
+ #
+ #   RubyExpert.ask("What is metaprogramming?")
+ #
+ # @example Dynamic prompts with method overrides
+ #   class SmartAgent < RubyLLM::Agents::BaseAgent
+ #     def system_prompt
+ #       "You are helping #{company.name}. Today is #{Date.today}."
+ #     end
+ #
+ #     def user_prompt
+ #       "Question: #{params[:question]}"
 #     end
 #   end
 #
@@ -53,49 +71,70 @@ module RubyLLM
   @model || inherited_or_default(:model, default_model)
 end

- # Sets the user prompt template or block
+ # Sets the user prompt template
 #
 # When a string is provided, {placeholder} syntax is used to interpolate
 # parameters. Parameters are automatically registered (as required) unless
 # already defined with `param`.
 #
- # When a block is provided, it's evaluated in the instance context at
- # execution time, allowing access to all instance methods and parameters.
- #
 # @param template [String, nil] Prompt template with {placeholder} syntax
- # @yield Block that returns the prompt string (evaluated at execution time)
- # @return [String, Proc, nil] The current prompt configuration
+ # @return [String, nil] The current user prompt configuration
 #
 # @example With template string (parameters auto-detected)
-   #   prompt "Search for: {query} in {category}"
+ #   user "Search for: {query} in {category}"
 #   # Automatically registers :query and :category as required params
 #
- # @example With block for dynamic prompts
- #   prompt do
- #     base = "Analyze the following"
- #     base += " in #{language}" if language != "en"
- #     "#{base}: #{text}"
- #   end
+ # @example Multi-line with heredoc
+ #   user <<~S
+ #     Search for: {query}
+ #     Category: {category}
+ #     Limit: {limit}
+ #   S
+ #
+ def user(template = nil)
+   if template
+     @user_template = template
+     auto_register_params_from_template(template)
+   end
+   @user_template || @prompt_template || @prompt_block || inherited_or_default(:user_config, nil)
+ end
+
+ # Returns the user prompt configuration
+ #
+ # @return [String, Proc, nil] The user template, or nil
+ def user_config
+   @user_template || @prompt_template || @prompt_block || inherited_or_default(:user_config, nil)
+ end
+
+ # Backward-compatible alias for `user`
 #
+ # @deprecated Use `user` instead
+ # @param template [String, nil] Prompt template with {placeholder} syntax
+ # @yield Block that returns the prompt string (evaluated at execution time)
+ # @return [String, Proc, nil] The current prompt configuration
 def prompt(template = nil, &block)
   if template
-     @prompt_template = template
+     @user_template = template
     auto_register_params_from_template(template)
   elsif block
     @prompt_block = block
   end
-   @prompt_template || @prompt_block || inherited_or_default(:prompt_config, nil)
+   @user_template || @prompt_template || @prompt_block || inherited_or_default(:user_config, nil)
 end

- # Returns the prompt configuration (template or block)
+ # Returns the prompt configuration (alias for user_config)
 #
+ # @deprecated Use `user_config` instead
 # @return [String, Proc, nil] The prompt template, block, or nil
 def prompt_config
-   @prompt_template || @prompt_block || inherited_or_default(:prompt_config, nil)
+   user_config
 end

 # Sets the system prompt/instructions
 #
+ # When a string is provided, {placeholder} syntax is supported for
+ # parameter interpolation, same as the `user` DSL.
+ #
 # @param text [String, nil] System instructions for the LLM
 # @yield Block that returns the system prompt (evaluated at execution time)
 # @return [String, Proc, nil] The current system prompt
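Auto-registering params from a `{placeholder}` template comes down to scanning the string for brace-delimited names, then interpolating values at render time. A minimal sketch (the regex and helper names are assumptions for illustration, not the gem's exact implementation):

```ruby
# Extract placeholder names like {query} from a template string, and
# interpolate supplied values for them when rendering.
def placeholder_names(template)
  template.scan(/\{(\w+)\}/).flatten.map(&:to_sym).uniq
end

def render_template(template, params)
  # fetch raises KeyError for a missing param, surfacing template bugs early
  template.gsub(/\{(\w+)\}/) { params.fetch(Regexp.last_match(1).to_sym) }
end

placeholder_names("Search for: {query} in {category}") # => [:query, :category]
render_template("Search for: {query}", query: "ruby")  # => "Search for: ruby"
```

Running the same scan over the `system` and `assistant` strings is what lets all three DSLs share one param-registration path, as the hunks above describe.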
@@ -103,14 +142,18 @@ module RubyLLM
 # @example Static system prompt
 #   system "You are a helpful assistant. Be concise and accurate."
 #
- # @example Dynamic system prompt
- #   system do
- #     "You are helping #{user_name}. Their preferences: #{preferences}"
+ # @example With placeholders
+ #   system "You are helping {user_name} with their {task}."
+ #
+ # @example Dynamic system prompt (method override)
+ #   def system_prompt
+ #     "You are helping #{user_name}. Today is #{Date.today}."
 #   end
 #
 def system(text = nil, &block)
   if text
     @system_template = text
+     auto_register_params_from_template(text)
   elsif block
     @system_block = block
   end
@@ -124,6 +167,39 @@ module RubyLLM
   @system_template || @system_block || inherited_or_default(:system_config, nil)
 end

+ # Sets the assistant prefill string
+ #
+ # The assistant prefill is sent as the last message with the "assistant"
+ # role, priming the model to continue from that point. Useful for:
+ # - Forcing output format (e.g., starting with "{" for JSON)
+ # - Steering the response style
+ #
+ # Supports {placeholder} syntax for parameter interpolation.
+ #
+ # @param text [String, nil] The assistant prefill text
+ # @return [String, nil] The current assistant configuration
+ #
+ # @example Force JSON output
+ #   assistant '{"category":'
+ #
+ # @example With placeholders
+ #   assistant "Results for {query}:"
+ #
+ def assistant(text = nil)
+   if text
+     @assistant_template = text
+     auto_register_params_from_template(text)
+   end
+   @assistant_template || inherited_or_default(:assistant_config, nil)
+ end
+
+ # Returns the assistant prefill configuration
+ #
+ # @return [String, nil] The assistant template, or nil
+ def assistant_config
+   @assistant_template || inherited_or_default(:assistant_config, nil)
+ end
+
 # Sets or returns the description for this agent class
 #
 # Useful for documentation and tool registration.
@@ -25,10 +25,18 @@ module RubyLLM
 # @param execution_data [Hash] Execution attributes from instrumentation
 # @return [void]
 def perform(execution_data)
+   # Extract detail data before filtering (stored in separate table)
+   detail_data = execution_data.delete(:_detail_data) || execution_data.delete("_detail_data")
+
   # Filter to only known attributes to prevent schema mismatches
   filtered_data = filter_known_attributes(execution_data)
   execution = Execution.create!(filtered_data)

+   # Create detail record if present
+   if detail_data && detail_data.values.any? { |v| v.present? && v != {} && v != [] }
+     execution.create_detail!(detail_data)
+   end
+
   # Calculate costs if token data is available
   if execution.input_tokens && execution.output_tokens
     execution.calculate_costs!
metadata CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: ruby_llm-agents
 version: !ruby/object:Gem::Version
-   version: 2.0.0
+   version: 2.2.0
 platform: ruby
 authors:
 - adham90
@@ -29,14 +29,14 @@ dependencies:
 requirements:
 - - ">="
   - !ruby/object:Gem::Version
-     version: 1.11.0
+     version: 1.12.0
 type: :runtime
 prerelease: false
 version_requirements: !ruby/object:Gem::Requirement
   requirements:
   - - ">="
     - !ruby/object:Gem::Version
-       version: 1.11.0
+       version: 1.12.0
 - !ruby/object:Gem::Dependency
   name: csv
   requirement: !ruby/object:Gem::Requirement