rails_console_ai 0.26.0 → 0.28.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: a2194416a93ce8522de376169eb627b90c8b2477ba253f0f1b877144251f9ee6
- data.tar.gz: 65dcbeb6eef9dd2641181529aa671058e85a6cd1c5263acd51678c52c89d3567
+ metadata.gz: 50a2c9dce686cffa315e1fb5004f0ad0a9cbc67459d3a546b28ad1b95d0d1798
+ data.tar.gz: 7eea0529e3a3e4f9a4d60cf8e6b882e5dc825052c3793817013cf5cc6e617d22
  SHA512:
- metadata.gz: e4d5a1f2fe8ef6ae4593829ac1a8997eeeac652ea0d0a491f91a64b8e45a246d024a10aa269e853949e0d106779d4f1b9d671af21057bd6af802ff6a541ac15d
- data.tar.gz: ec0e0d25f7bb2605a7c4180ebc56ab56d8f590d3632c98310ec65c3aecd987d04118e7199534a8ea5a1a8023a505de9ac467ce05d01393df667c2436bcebc68c
+ metadata.gz: 4ed2a4ad47456f7e28682e0302d346fd04e3778433015855fc2199d4b1dc3077e8ec96efec5eb1b9ae049f6ed5f8c5ac6944e354488313b1f20e8f4354a0bdd1
+ data.tar.gz: c3a9d0faaada962952b9995ac0e07b79384536dc03970c0da17fe0e2eef843edea799d2dfa8698363a869d74e2aa7d86f2dbe80088234206693d86dd29ef45b4
data/CHANGELOG.md CHANGED
@@ -2,6 +2,14 @@
 
  All notable changes to this project will be documented in this file.
 
+ ## [0.28.0]
+
+ - Add `bin/smoke_model.rb` to smoke-test new models (plain, tool, parallel, cache checks)
+ - Support Claude Opus 4.7 by omitting the `temperature` parameter for models that reject it
+ - Show both estimated request tokens and total billed tokens in LLM round status
+ - Auto-upgrade to thinking model on "think harder/deeper/carefully" phrases in Slack as well as console
+ - Fix cancelled code execution state persisting into the next user turn
+
  ## [0.26.0]
 
  - Add sub-agent support
data/README.md CHANGED
@@ -352,6 +352,39 @@ end
 
  Timeout is automatically raised to 300s minimum for local models to account for slower inference.
 
+ ### Testing a new model
+
+ Before adopting a new Claude model, smoke-test it against the Anthropic or Bedrock provider with `bin/smoke_model.rb`. The script runs four checks and exits non-zero on any failure:
+
+ | check    | what it verifies                                                               |
+ | -------- | ------------------------------------------------------------------------------ |
+ | plain    | the model returns text for a basic prompt                                       |
+ | tool     | a single tool call → tool result → final answer round-trip works                |
+ | parallel | the model issues multiple tool calls in one response when asked                 |
+ | cache    | a long system prompt is written to and read from the prompt cache (with retry)  |
+
+ ```bash
+ # Anthropic — provider inferred from the `claude-` prefix
+ ANTHROPIC_API_KEY=sk-ant-... bin/smoke_model.rb --model claude-opus-4-7
+
+ # Bedrock — provider inferred from the regional `us.anthropic.` prefix.
+ # Requires the aws-sdk-bedrockruntime gem and AWS credentials in the environment.
+ bin/smoke_model.rb --model us.anthropic.claude-opus-4-7
+
+ # Bedrock in another region
+ bin/smoke_model.rb --model eu.anthropic.claude-opus-4-7 --region eu-west-1
+
+ # Subset of checks, e.g. when iterating on cache behavior
+ bin/smoke_model.rb --model claude-sonnet-4-6 --checks cache
+
+ # Force a provider when the model ID is ambiguous
+ bin/smoke_model.rb --provider anthropic --model claude-opus-4-7
+ ```
+
+ `DEBUG=1` enables the providers' raw request/response logging.
+
+ If the model rejects a parameter the gem sends by default (e.g. opus-4-7 rejects `temperature`), add the model ID to `Configuration::MODELS_WITHOUT_TEMPERATURE` in `lib/rails_console_ai/configuration.rb` so the providers omit the field.
+
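The gating that note describes can be sketched in isolation (names mirror the diff; this trimmed-down `resolved_temperature` helper is illustrative, not the gem's full implementation):

```ruby
require 'set'

# Models that reject `temperature` resolve to a nil temperature (mirrors
# Configuration::MODELS_WITHOUT_TEMPERATURE in the diff; list truncated here).
MODELS_WITHOUT_TEMPERATURE = Set.new(%w[claude-opus-4-7]).freeze

def resolved_temperature(model, configured = 0.7)
  return nil if MODELS_WITHOUT_TEMPERATURE.include?(model)
  configured
end

# Providers only serialize the field when it is non-nil, so opus-4-7
# requests simply omit it:
body = { model: 'claude-opus-4-7', max_tokens: 4_096 }
temp = resolved_temperature(body[:model])
body[:temperature] = temp unless temp.nil?

puts body.key?(:temperature)                    # false: the field is omitted
puts resolved_temperature('claude-sonnet-4-6')  # 0.7 for other models
```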
  ## Configuration
 
  ```ruby
@@ -207,6 +207,7 @@ module RailsConsoleAi
  break if input.nil?
 
  input = input.strip
+ input = input.force_encoding('UTF-8') if input.encoding == Encoding::ASCII_8BIT
  break if input.downcase == 'exit' || input.downcase == 'quit'
  next if input.empty?
 
@@ -222,8 +223,7 @@ module RailsConsoleAi
  # Add to Readline history
  Readline::HISTORY.push(input) unless input == Readline::HISTORY.to_a.last
 
- # Auto-upgrade to thinking model on "think harder" phrases
- @engine.upgrade_to_thinking_model if input =~ /think\s*harder/i
+ @engine.maybe_auto_upgrade_thinking(input)
 
  @engine.set_interactive_query(input)
  @engine.add_user_message(input)
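The new `force_encoding` guard matters because Readline can hand back the user's UTF-8 bytes tagged as binary (ASCII-8BIT), which breaks downstream string handling. A standalone illustration (the simulated input is a stand-in, not gem code):

```ruby
# Simulate Readline returning UTF-8 bytes tagged as binary.
input = "caf\xC3\xA9".dup.force_encoding(Encoding::ASCII_8BIT)

# The guard added in the diff re-tags the bytes as UTF-8:
input = input.force_encoding('UTF-8') if input.encoding == Encoding::ASCII_8BIT

puts input                  # café
puts input.encoding         # UTF-8
puts input.valid_encoding?  # true
```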
@@ -1,3 +1,5 @@
+ require 'set'
+
  module RailsConsoleAi
  class Configuration
  PROVIDERS = %i[anthropic openai local bedrock].freeze
@@ -18,6 +20,17 @@ module RailsConsoleAi
  'claude-opus-4-6' => 4_096,
  }.freeze
 
+ # Models that reject the `temperature` parameter. Configuration#resolved_temperature
+ # returns nil for these so providers can omit the field from the request.
+ MODELS_WITHOUT_TEMPERATURE = Set.new(%w[
+ claude-opus-4-7
+ anthropic.claude-opus-4-7
+ us.anthropic.claude-opus-4-7
+ eu.anthropic.claude-opus-4-7
+ jp.anthropic.claude-opus-4-7
+ global.anthropic.claude-opus-4-7
+ ]).freeze
+
  attr_accessor :provider, :api_key, :model, :thinking_model, :max_tokens,
  :auto_execute, :temperature,
  :timeout, :debug, :max_tool_rounds,
@@ -179,6 +192,13 @@ module RailsConsoleAi
  DEFAULT_MAX_TOKENS.fetch(resolved_model, 4096)
  end
 
+ # Returns nil for models that reject the `temperature` parameter (e.g. opus-4-7).
+ # Providers should use this in place of @temperature.
+ def resolved_temperature
+ return nil if MODELS_WITHOUT_TEMPERATURE.include?(resolved_model)
+ @temperature
+ end
+
  def resolved_thinking_model
  return @thinking_model if @thinking_model && !@thinking_model.empty?
 
@@ -110,6 +110,7 @@ module RailsConsoleAi
  init_interactive unless @interactive_start
  @channel.log_input(text) if @channel.respond_to?(:log_input)
  @interactive_query ||= text
+ maybe_auto_upgrade_thinking(text)
  @history << { role: :user, content: text }
 
  status = send_and_execute
@@ -249,7 +250,7 @@ module RailsConsoleAi
  output_id = @executor.store_output(result_str)
  if result_str.length > LARGE_OUTPUT_THRESHOLD
  preview = result_str[0, LARGE_OUTPUT_PREVIEW_CHARS]
- context_msg += "\n#{preview}\n\n[Output truncated at #{LARGE_OUTPUT_PREVIEW_CHARS} of #{result_str.length} chars — use recall_output tool with id #{output_id} to retrieve the full output]"
+ context_msg += "\n#{preview}\n\n[Output truncated at #{LARGE_OUTPUT_PREVIEW_CHARS} of #{result_str.length} chars — use explore_output with output_id=#{output_id} for focused queries, or recall_output to expand in place]"
  elsif !output_parts.empty?
  context_msg += "\n#{result_str}"
  end
@@ -332,7 +333,7 @@ module RailsConsoleAi
  context_msg = "Code was executed (safety override). "
  if result_str.length > LARGE_OUTPUT_THRESHOLD
  context_msg += result_str[0, LARGE_OUTPUT_PREVIEW_CHARS]
- context_msg += "\n\n[Output truncated at #{LARGE_OUTPUT_PREVIEW_CHARS} of #{result_str.length} chars — use recall_output tool with id #{output_id} to retrieve the full output]"
+ context_msg += "\n\n[Output truncated at #{LARGE_OUTPUT_PREVIEW_CHARS} of #{result_str.length} chars — use explore_output with output_id=#{output_id} for focused queries, or recall_output to expand in place]"
  else
  context_msg += result_str
  end
@@ -360,7 +361,7 @@ module RailsConsoleAi
  context_msg = "Code was executed. "
  if result_str.length > LARGE_OUTPUT_THRESHOLD
  context_msg += result_str[0, LARGE_OUTPUT_PREVIEW_CHARS]
- context_msg += "\n\n[Output truncated at #{LARGE_OUTPUT_PREVIEW_CHARS} of #{result_str.length} chars — use recall_output tool with id #{output_id} to retrieve the full output]"
+ context_msg += "\n\n[Output truncated at #{LARGE_OUTPUT_PREVIEW_CHARS} of #{result_str.length} chars — use explore_output with output_id=#{output_id} for focused queries, or recall_output to expand in place]"
  else
  context_msg += result_str
  end
@@ -450,6 +451,13 @@ module RailsConsoleAi
  parts.compact.join("\n\n")
  end
 
+ AUTO_THINK_PATTERN = /\bthink\s+(harder|deeper|hard|carefully|more\s+carefully)\b/i
+
+ def maybe_auto_upgrade_thinking(text)
+ return unless text.is_a?(String) && text =~ AUTO_THINK_PATTERN
+ upgrade_to_thinking_model
+ end
+
  def upgrade_to_thinking_model
  config = RailsConsoleAi.configuration
  current = effective_model
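The trigger phrases can be eyeballed by running the pattern from the diff against sample inputs (illustrative harness, not gem code):

```ruby
# Pattern copied from the diff: matches "think harder/hard/deeper/carefully/
# more carefully" as whole words, case-insensitively.
AUTO_THINK_PATTERN = /\bthink\s+(harder|deeper|hard|carefully|more\s+carefully)\b/i

['please think harder', 'Think  more carefully now', 'I think so'].each do |phrase|
  puts "#{phrase.inspect} => #{!!(phrase =~ AUTO_THINK_PATTERN)}"
end
# the first two would upgrade to the thinking model; plain "I think so" would not
```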
@@ -777,6 +785,7 @@ module RailsConsoleAi
  require 'rails_console_ai/tools/registry'
  tools = tools_override || Tools::Registry.new(executor: @executor, channel: @channel)
  active_system_prompt = system_prompt || context
+ @executor.reset_cancelled! if @executor
  max_rounds = RailsConsoleAi.configuration.max_tool_rounds
  total_input = 0
  total_output = 0
@@ -796,19 +805,21 @@ module RailsConsoleAi
 
  if round == 0
  @channel.display_status(" Thinking...")
- else
- if last_thinking
- last_thinking.split("\n").each do |line|
- @channel.display_thinking(" #{line}")
- end
+ elsif last_thinking
+ last_thinking.split("\n").each do |line|
+ @channel.display_thinking(" #{line}")
  end
- @channel.display_status(" #{llm_status(round, messages, total_input, last_thinking, last_tool_names)}")
  end
 
  # Trim large tool outputs between rounds to prevent context explosion.
  # The LLM can still retrieve omitted outputs via recall_output.
  messages = trim_large_outputs(messages) if round > 0
 
+ if round > 0
+ req_tokens = estimate_request_tokens(messages)
+ @channel.display_status(" #{llm_status(round, messages, req_tokens, total_input, last_thinking, last_tool_names)}")
+ end
+
  if RailsConsoleAi.configuration.debug
  debug_pre_call(round, messages, active_system_prompt, tools, total_input, total_output)
  end
@@ -903,7 +914,7 @@ module RailsConsoleAi
  tool_msg[:output_id] = output_id
  if full_text.length > LARGE_OUTPUT_THRESHOLD
  truncated = full_text[0, LARGE_OUTPUT_PREVIEW_CHARS]
- truncated += "\n\n[Output truncated at #{LARGE_OUTPUT_PREVIEW_CHARS} of #{full_text.length} chars — use recall_output tool with id #{output_id} to retrieve the full output]"
+ truncated += "\n\n[Output truncated at #{LARGE_OUTPUT_PREVIEW_CHARS} of #{full_text.length} chars — use explore_output with output_id=#{output_id} for focused queries, or recall_output to expand in place]"
  tool_msg = provider.format_tool_result(tc[:id], truncated)
  tool_msg[:output_id] = output_id
  end
@@ -1012,6 +1023,11 @@ module RailsConsoleAi
 
  # --- Formatting helpers ---
 
+ def estimate_request_tokens(messages)
+ chars = messages.sum { |m| (m[:content] || m['content']).to_s.length }
+ chars / 4
+ end
+
  def format_tokens(count)
  if count >= 1_000_000
  "#{(count / 1_000_000.0).round(1)}M"
@@ -1041,6 +1057,10 @@ module RailsConsoleAi
  when 'save_skill' then "(\"#{args['name']}\")"
  when 'delete_skill' then "(\"#{args['name']}\")"
  when 'recall_output' then "(#{args['id']})"
+ when 'explore_output'
+ task_preview = args['task'].to_s[0, 80]
+ task_preview += '...' if args['task'].to_s.length > 80
+ "(id: #{args['output_id']}, \"#{task_preview}\")"
  when 'execute_plan'
  steps = args['steps']
  steps ? "(#{steps.length} steps)" : ''
@@ -1132,9 +1152,10 @@ module RailsConsoleAi
  str.length > max ? str[0..max] + '...' : str
  end
 
- def llm_status(round, messages, tokens_so_far, last_thinking = nil, last_tool_names = [])
+ def llm_status(round, messages, req_tokens, total_billed, last_thinking = nil, last_tool_names = [])
  status = "Calling LLM (round #{round + 1}, #{messages.length} msgs"
- status += ", ~#{format_tokens(tokens_so_far)} ctx" if tokens_so_far > 0
+ status += ", ~#{format_tokens(req_tokens)} ctx" if req_tokens > 0
+ status += ", ~#{format_tokens(total_billed)} total" if total_billed > 0
  status += ")"
  if !last_thinking && last_tool_names.any?
  counts = last_tool_names.tally
@@ -1409,7 +1430,7 @@ module RailsConsoleAi
  end
 
  def trim_message(msg)
- ref = "[Output omitted — use recall_output tool with id #{msg[:output_id]} to retrieve]"
+ ref = "[Output omitted — use explore_output with output_id=#{msg[:output_id]} for focused queries, or recall_output to expand in place]"
 
  if msg[:content].is_a?(Array)
  trimmed_content = msg[:content].map do |block|
@@ -206,6 +206,10 @@ module RailsConsoleAi
  @last_cancelled
  end
 
+ def reset_cancelled!
+ @last_cancelled = false
+ end
+
  def confirm_and_execute(code)
  return nil if code.nil? || code.strip.empty?
 
@@ -51,9 +51,10 @@ module RailsConsoleAi
  body = {
  model: config.resolved_model,
  max_tokens: config.resolved_max_tokens,
- temperature: config.temperature,
  messages: format_messages(messages)
  }
+ temp = config.resolved_temperature
+ body[:temperature] = temp unless temp.nil?
  if system_prompt
  body[:system] = [
  { 'type' => 'text', 'text' => system_prompt, 'cache_control' => { 'type' => 'ephemeral' } }
@@ -41,13 +41,13 @@ module RailsConsoleAi
  private
 
  def call_api(messages, system_prompt: nil, tools: nil)
+ inference = { max_tokens: config.resolved_max_tokens }
+ temp = config.resolved_temperature
+ inference[:temperature] = temp unless temp.nil?
  params = {
  model_id: config.resolved_model,
  messages: format_messages(messages),
- inference_config: {
- max_tokens: config.resolved_max_tokens,
- temperature: config.temperature
- }
+ inference_config: inference
  }
  if system_prompt
  sys_blocks = [{ text: system_prompt }]
@@ -21,9 +21,10 @@ module RailsConsoleAi
  body = {
  model: config.resolved_model,
  max_tokens: config.resolved_max_tokens,
- temperature: config.temperature,
  messages: formatted
  }
+ temp = config.resolved_temperature
+ body[:temperature] = temp unless temp.nil?
  body[:tools] = tools.to_openai_format if tools
 
  estimated_input_tokens = estimate_tokens(formatted, system_prompt, tools)
@@ -51,9 +51,10 @@ module RailsConsoleAi
  body = {
  model: config.resolved_model,
  max_tokens: config.resolved_max_tokens,
- temperature: config.temperature,
  messages: formatted
  }
+ temp = config.resolved_temperature
+ body[:temperature] = temp unless temp.nil?
  body[:tools] = tools.to_openai_format if tools
 
  json_body = JSON.generate(body)
@@ -12,12 +12,15 @@ module RailsConsoleAi
 
  attr_reader :input_tokens, :output_tokens, :model_used
 
- def initialize(task:, agent_config:, binding_context:, parent_channel:, executor:)
+ def initialize(task:, agent_config:, binding_context:, parent_channel:, executor:,
+ output_payload: nil, output_local_name: :output)
  @task = task
  @agent_config = agent_config || {}
  @binding_context = binding_context
  @parent_channel = parent_channel
  @parent_executor = executor
+ @output_payload = output_payload
+ @output_local_name = output_local_name
  @input_tokens = 0
  @output_tokens = 0
  @model_used = nil
@@ -29,7 +32,16 @@ module RailsConsoleAi
  task_label: @agent_config['name']
  )
 
- executor = Executor.new(@binding_context, channel: channel)
+ effective_binding =
+ if @output_payload
+ b = @binding_context.eval("proc { binding }.call")
+ b.local_variable_set(@output_local_name, @output_payload)
+ b
+ else
+ @binding_context
+ end
+
+ executor = Executor.new(effective_binding, channel: channel)
  allowed_tools = @agent_config['tools'] ? Array(@agent_config['tools']) : nil
  tools = Tools::Registry.new(executor: executor, mode: :sub_agent, channel: channel, allowed_tools: allowed_tools)
  provider = build_provider
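The `proc { binding }.call` trick above derives a child binding and injects the payload as a local variable without touching the parent scope. A minimal standalone demonstration (not gem code):

```ruby
base = binding                               # the parent scope
child = base.eval('proc { binding }.call')   # fresh binding nested in base
child.local_variable_set(:output, 'hello world')

puts child.eval('output.length')             # 11: the injected local is visible
puts base.local_variable_defined?(:output)   # false: parent scope untouched
```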
@@ -6,7 +6,7 @@ module RailsConsoleAi
  attr_reader :definitions, :last_sub_agent_usage
 
  # Tools that should never be cached (side effects or user interaction)
- NO_CACHE = %w[ask_user save_memory delete_memory recall_memory execute_code execute_plan activate_skill save_skill delete_skill delegate_task].freeze
+ NO_CACHE = %w[ask_user save_memory delete_memory recall_memory execute_code execute_plan activate_skill save_skill delete_skill delegate_task explore_output].freeze
 
  def initialize(executor: nil, mode: :default, channel: nil, allowed_tools: nil)
  @executor = executor
@@ -188,7 +188,7 @@ module RailsConsoleAi
  if @executor
  register(
  name: 'recall_output',
- description: 'Retrieve a previous code execution output that was omitted or truncated. The output will be expanded in place in the conversation. Use the output id shown in the "[Output omitted]" or "[Output truncated]" placeholder.',
+ description: 'Expand a previously omitted/truncated output back into this conversation\'s context, where it will persist for the rest of the session. Prefer `explore_output` if you only need a specific answer about the output — that keeps this conversation lean. Use `recall_output` only when you need the full content alongside other context here. Use the output id shown in the "[Output omitted]" or "[Output truncated]" placeholder.',
  parameters: {
  'type' => 'object',
  'properties' => {
@@ -204,7 +204,7 @@ module RailsConsoleAi
 
  register(
  name: 'recall_outputs',
- description: 'Retrieve multiple previous code execution outputs that were omitted from the conversation. Use the output ids shown in "[Output omitted]" or "[Output truncated]" placeholders.',
+ description: 'Expand multiple previously omitted outputs back into this conversation. Prefer `explore_output` per-id for focused queries. Use the output ids shown in "[Output omitted]" or "[Output truncated]" placeholders.',
  parameters: {
  'type' => 'object',
  'properties' => {
@@ -214,6 +214,22 @@ module RailsConsoleAi
  },
  },
  handler: ->(args) { "recall_outputs handled by conversation engine" }
  )
+
+ if @mode != :sub_agent
+ register(
+ name: 'explore_output',
+ description: 'Prefer this over recall_output when you have a specific question about a large omitted/truncated output (e.g. "find the item where X", "how many match Y", "what is the value at index N", "parse the JSON and return field Z"). Spawns a sub-agent with the full output bound to the local Ruby variable `output` (a String); the sub-agent runs execute_code against it and returns a concise answer. The full output does NOT enter this conversation.',
+ parameters: {
+ 'type' => 'object',
+ 'properties' => {
+ 'output_id' => { 'type' => 'integer', 'description' => 'The output id shown in the "[Output omitted]" or "[Output truncated]" placeholder.' },
+ 'task' => { 'type' => 'string', 'description' => 'The specific question or task. Be concrete — the sub-agent only sees this task and the output.' }
+ },
+ 'required' => ['output_id', 'task']
+ },
+ handler: ->(args) { explore_output(args['output_id'].to_i, args['task']) }
+ )
+ end
  end
 
  unless @mode == :init
@@ -317,6 +333,45 @@ module RailsConsoleAi
  )
  end
 
+ EXPLORE_OUTPUT_AGENT_CONFIG = {
+ 'name' => 'output-explorer',
+ 'tools' => ['execute_code'],
+ 'max_rounds' => 8,
+ 'body' => <<~PROMPT.freeze
+ You are exploring a single chunk of captured tool output on behalf of the main assistant.
+
+ The full output is bound to the local variable `output` (a String). You do NOT see it
+ directly — it lives in Ruby memory. Use `execute_code` with Ruby to query it:
+ - `output.length`, `output.lines.count`
+ - `output[start, len]`, `output.lines[n]`
+ - `output.scan(/pattern/)`, `output.include?("...")`
+ - `JSON.parse(output)` if it looks like JSON, then drill in
+ - any other Ruby string/collection methods
+
+ Print only the specific slice or summary the task requires — never dump the whole `output`.
+ Return a concise factual answer. No preamble.
+ PROMPT
+ }.freeze
+
+ def explore_output(output_id, task)
+ require 'rails_console_ai/sub_agent'
+
+ payload = @executor.recall_output(output_id)
+ return "No output found with id #{output_id}" unless payload
+
+ sub = SubAgent.new(
+ task: task,
+ agent_config: EXPLORE_OUTPUT_AGENT_CONFIG,
+ binding_context: @executor.binding_context,
+ parent_channel: @channel,
+ executor: @executor,
+ output_payload: payload.dup
+ )
+ result = sub.run
+ @last_sub_agent_usage = { input: sub.input_tokens, output: sub.output_tokens, model: sub.model_used }
+ "Exploration result (#{sub.input_tokens + sub.output_tokens} tokens used, #{payload.length} chars explored):\n#{result}"
+ end
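The kind of focused query the output-explorer sub-agent runs against the bound `output` local looks like this (stand-in data and queries; not gem code):

```ruby
require 'json'

# Stand-in for the bound `output` local: a large JSON blob.
output = JSON.generate('items' => (1..500).map { |i| { 'id' => i, 'flag' => (i % 7).zero? } })

# Size check first, without dumping the blob.
puts output.length > 1_000

# Then a focused answer, e.g. "how many items have flag set?".
puts JSON.parse(output)['items'].count { |item| item['flag'] }  # 71
```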
+
  def delegate_task(task, agent_name = nil)
  require 'rails_console_ai/sub_agent'
  require 'rails_console_ai/agent_loader'
@@ -1,3 +1,3 @@
  module RailsConsoleAi
- VERSION = '0.26.0'.freeze
+ VERSION = '0.28.0'.freeze
  end
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: rails_console_ai
  version: !ruby/object:Gem::Version
- version: 0.26.0
+ version: 0.28.0
  platform: ruby
  authors:
  - Cortfr