agent_c 2.9
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +7 -0
- data/.rubocop.yml +10 -0
- data/.ruby-version +1 -0
- data/CLAUDE.md +21 -0
- data/README.md +360 -0
- data/Rakefile +16 -0
- data/TODO.md +105 -0
- data/agent_c.gemspec +38 -0
- data/docs/batch.md +503 -0
- data/docs/chat-methods.md +156 -0
- data/docs/cost-reporting.md +86 -0
- data/docs/pipeline-tips-and-tricks.md +453 -0
- data/docs/session-configuration.md +274 -0
- data/docs/testing.md +747 -0
- data/docs/tools.md +103 -0
- data/docs/versioned-store.md +840 -0
- data/lib/agent_c/agent/chat.rb +211 -0
- data/lib/agent_c/agent/chat_response.rb +38 -0
- data/lib/agent_c/agent/chats/anthropic_bedrock.rb +48 -0
- data/lib/agent_c/batch.rb +102 -0
- data/lib/agent_c/configs/repo.rb +90 -0
- data/lib/agent_c/context.rb +56 -0
- data/lib/agent_c/costs/data.rb +39 -0
- data/lib/agent_c/costs/report.rb +219 -0
- data/lib/agent_c/db/store.rb +162 -0
- data/lib/agent_c/errors.rb +19 -0
- data/lib/agent_c/pipeline.rb +152 -0
- data/lib/agent_c/pipelines/agent.rb +219 -0
- data/lib/agent_c/processor.rb +98 -0
- data/lib/agent_c/prompts.yml +53 -0
- data/lib/agent_c/schema.rb +71 -0
- data/lib/agent_c/session.rb +206 -0
- data/lib/agent_c/store.rb +72 -0
- data/lib/agent_c/test_helpers.rb +173 -0
- data/lib/agent_c/tools/dir_glob.rb +46 -0
- data/lib/agent_c/tools/edit_file.rb +114 -0
- data/lib/agent_c/tools/file_metadata.rb +43 -0
- data/lib/agent_c/tools/git_status.rb +30 -0
- data/lib/agent_c/tools/grep.rb +119 -0
- data/lib/agent_c/tools/paths.rb +36 -0
- data/lib/agent_c/tools/read_file.rb +94 -0
- data/lib/agent_c/tools/run_rails_test.rb +87 -0
- data/lib/agent_c/tools.rb +61 -0
- data/lib/agent_c/utils/git.rb +87 -0
- data/lib/agent_c/utils/shell.rb +58 -0
- data/lib/agent_c/version.rb +5 -0
- data/lib/agent_c.rb +32 -0
- data/lib/versioned_store/base.rb +314 -0
- data/lib/versioned_store/config.rb +26 -0
- data/lib/versioned_store/stores/schema.rb +127 -0
- data/lib/versioned_store/version.rb +5 -0
- data/lib/versioned_store.rb +5 -0
- data/template/Gemfile +9 -0
- data/template/Gemfile.lock +152 -0
- data/template/README.md +61 -0
- data/template/Rakefile +50 -0
- data/template/bin/rake +27 -0
- data/template/lib/autoload.rb +10 -0
- data/template/lib/config.rb +59 -0
- data/template/lib/pipeline.rb +19 -0
- data/template/lib/prompts.yml +57 -0
- data/template/lib/store.rb +17 -0
- data/template/test/pipeline_test.rb +221 -0
- data/template/test/test_helper.rb +18 -0
- metadata +194 -0
data/docs/batch.md
ADDED
@@ -0,0 +1,503 @@

# Batch

The `Batch` class is the primary interface for executing pipelines across multiple records. It manages task processing, workspace allocation, and provides cost reporting.

## Overview

A Batch processes multiple records through a Pipeline in either serial or parallel execution. When you call `batch.call`, it will process all added tasks and optionally yield each task after completion.

```ruby
batch = Batch.new(
  record_type: :summary,
  pipeline: MyPipeline,
  store: store_config,
  workspace: workspace_config,
  session: session_config
)

batch.add_task(record1)
batch.add_task(record2)

# Process all tasks and yield each one after completion
batch.call do |task|
  puts "Completed task #{task.id} with status: #{task.status}"
end
```

## Configuration

### Required Parameters

#### `record_type:` (Symbol)

The name of the record class defined in your Store that this batch will process.

```ruby
record_type: :summary
```

#### `pipeline:` (Class)

The Pipeline class that defines the steps to execute for each task.

```ruby
pipeline: MyPipeline
```

#### `store:` (Hash or Object)

Configuration for the VersionedStore. Can be either a hash with configuration or a store instance directly.

As a hash:
```ruby
store: {
  class: MyStore,
  config: {
    logger: Logger.new("/dev/null"),
    dir: "/path/to/store/database"
  }
}
```

As a store instance:
```ruby
store: MyStore.new(dir: "/path/to/store")
```

#### `session:` (Hash or Object)

Configuration for the AI session. Can be either a hash with configuration or a session instance.

As a hash:
```ruby
session: {
  agent_db_path: "/path/to/agent.db",
  logger: Logger.new("/dev/null"),
  i18n_path: "/path/to/prompts.yml",
  project: "MyProject",
  ruby_llm: {
    bedrock_api_key: ENV.fetch("AWS_ACCESS_KEY_ID"),
    bedrock_secret_key: ENV.fetch("AWS_SECRET_ACCESS_KEY"),
    bedrock_region: "us-west-2",
    default_model: "us.anthropic.claude-sonnet-4-5-20250929-v1:0"
  }
}
```

### Workspace Configuration

You must provide **either** `workspace:` or `repo:`, but not both.

#### `workspace:` (Hash or Object)

A single workspace for serial task processing.

```ruby
workspace: {
  dir: "/path/to/workspace",
  env: {
    RAILS_ENV: "test"
  }
}
```

Or as an object:
```ruby
workspace: store.workspace.create!(dir: "/path", env: {})
```

#### `repo:` (Hash)

Configuration for parallel processing using git worktrees.

```ruby
repo: {
  dir: "/path/to/repo",
  initial_revision: "main",
  working_subdir: "./app", # optional
  worktrees_root_dir: "/tmp/worktrees",
  worktree_branch_prefix: "batch-task",
  worktree_envs: [
    { WORKER_ID: "0" },
    { WORKER_ID: "1" }
  ]
}
```

The number of worktrees created equals the length of `worktree_envs`. Each worktree processes tasks in parallel.

### Optional Parameters

#### `git:` (Proc)

A lambda that creates Git objects. Defaults to `Utils::Git.new`.

```ruby
git: ->(dir) { MyGitWrapper.new(dir) }
```

## Methods

### `#call(&block)`

Processes all pending tasks. Optionally yields each task after it completes.

```ruby
# Process without callback
batch.call

# Process with callback after each task
batch.call do |task|
  puts "Task #{task.id} finished with status: #{task.status}"
  puts "Completed steps: #{task.completed_steps.inspect}"
end
```

The block is called after each task finishes, regardless of whether it succeeded or failed. This allows you to:
- Monitor progress in real-time
- Log task completion
- Trigger external notifications
- Update UI or progress bars (see the sketch after this list)
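
For example, a minimal progress counter built from the yielded tasks (a sketch that relies only on the `batch.call` block and the `task.done?` / `task.failed?` predicates shown later in this document; the `total` count is illustrative):

```ruby
total = 10  # illustrative: however many tasks you added
done  = 0

batch.call do |task|
  done  += 1
  status = task.done? ? "ok" : "failed"
  puts "[#{done}/#{total}] task #{task.id}: #{status}"
end
```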

### `#add_task(record)`

Adds a record to be processed. This method is idempotent - adding the same record multiple times will only create one task.

```ruby
record = batch.store.summary.create!(language: "english")
batch.add_task(record)
batch.add_task(record) # No-op, task already exists
```

### `#report`

Returns a summary report of task statuses and costs.

```ruby
puts batch.report
# =>
# Succeeded: 5
# Pending: 0
# Failed: 1
# Run cost: $2.34
# Project total cost: $45.67
#
# First 1 failed task(s):
# - ArgumentError: Cannot rewind to a step that's not been completed yet
```

### `#abort!`

Stops task processing. Useful for gracefully shutting down long-running batches.

```ruby
# Stop if any task fails
batch.call do |task|
  batch.abort! if task.failed?
end
```

```ruby
# In a signal handler
Signal.trap("INT") do
  batch.abort!
end

batch.call
```

### Accessors

#### `#store`

Returns the Store instance.

```ruby
batch.store.summary.all
```

#### `#workspaces`

Returns an array of workspace objects available to this batch.

```ruby
batch.workspaces.each do |workspace|
  puts workspace.dir
end
```

#### `#session`

Returns the Session instance.

```ruby
cost = batch.session.cost
```

## Pipeline Integration

Pipelines executed by Batch have access to several methods for defining workflow steps.

### `step(name, &block)`

Defines a custom step that executes Ruby code.

```ruby
class MyPipeline < AgentC::Pipeline
  step(:assign_data) do
    record.update!(attr_1: "value")
  end

  step(:commit_changes) do
    git.commit_all("claude: updated data")
  end
end
```

**Available in step blocks:**
- `record` - The ActiveRecord instance being processed
- `task` - The Task instance tracking this pipeline execution
- `store` - The Store instance
- `workspace` - The Workspace instance
- `session` - The Session instance
- `git` - A Git helper (if git was configured)

**Resumability:** If a pipeline is interrupted and rerun, already-completed steps are skipped. The pipeline continues from the first incomplete step.
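
A minimal sketch of what this means in practice (the `prepared_at` and `published` columns and the interruption scenario are illustrative; only the documented skip-completed-steps behavior is assumed):

```ruby
class TwoStepPipeline < AgentC::Pipeline
  step(:prepare) do
    record.update!(prepared_at: Time.now)  # hypothetical column
  end

  step(:publish) do
    # suppose the process is killed (SIGTERM, crash, ...) while this runs
    record.update!(published: true)        # hypothetical column
  end
end

batch.call  # interrupted part-way through :publish
batch.call  # re-run: :prepare is skipped, the pipeline resumes at :publish
```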

### `agent_step(name, **params)`

Defines a step that calls Claude with tools and expects structured output.

#### Using i18n (prompts.yml)

```ruby
agent_step(:analyze_code)
```

In your `prompts.yml`:
```yaml
en:
  analyze_code:
    cached_prompts:
      - "You are analyzing Ruby code."
    prompt: "Analyze this file: %{file_path}"
    tools: [read_file, dir_glob]
    response_schema:
      summary:
        type: string
        description: "A summary of the code"
```

#### Inline configuration

```ruby
agent_step(
  :analyze_code,
  prompt: "Analyze this code",
  cached_prompt: ["You are a code analyzer"],
  tools: [:read_file, :grep],
  schema: -> {
    string("summary", description: "Code summary")
  }
)
```

#### Dynamic configuration with a block

```ruby
agent_step(:process) do
  {
    prompt: "Process #{record.name}",
    tools: [:read_file],
    schema: -> { string("result") }
  }
end
```

**Result handling:** Claude's response is automatically saved to the record. If the response contains `unable_to_fulfill_request_error`, the task is marked as failed.
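
A sketch of inspecting the outcome after the batch runs, using only the task predicates documented elsewhere in this file (how the structured fields map onto record attributes is defined by your Store schema):

```ruby
batch.call do |task|
  if task.failed?
    # Claude returned unable_to_fulfill_request_error, or a step raised
    puts "Task #{task.id} failed: #{task.error_message}"
  end
end
```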

### `rewind_to!(step_name)`

Rewinds the pipeline to re-execute a specific step. This removes all steps after the specified step from the completed list and jumps back to that step.

```ruby
class MyPipeline < AgentC::Pipeline
  step(:fetch_data) do
    record.update!(data: fetch_from_api)
  end

  step(:validate_data) do
    if record.data.nil?
      # Re-fetch the data
      rewind_to!(:fetch_data)
    end
  end

  step(:process_data) do
    # This only runs if validation passed
    record.update!(processed: true)
  end
end
```

**Constraints:**
- The target step must have already been completed
- The target step name must be unique in the completed steps list

**Use cases:**
- Retry failed operations
- Implement conditional branching
- Handle rate limiting by backing up and waiting (see the sketch after this list)
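
For the rate-limiting case, two steps inside a pipeline class might look like this (`call_api` and `rate_limited?` are hypothetical helpers and the 60-second sleep is arbitrary; only `rewind_to!` itself comes from the API):

```ruby
step(:call_external_api) do
  record.update!(api_response: call_api(record))  # hypothetical helper
end

step(:check_response) do
  if rate_limited?(record.api_response)           # hypothetical helper
    sleep 60  # back off, then redo the API call
    rewind_to!(:call_external_api)
  end
end
```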

## Examples

### Basic batch processing

```ruby
pipeline_class = Class.new(AgentC::Pipeline) do
  step(:assign_attr_1) do
    record.update!(attr_1: "assigned")
  end

  step(:assign_attr_2) do
    record.update!(attr_2: "assigned")
  end
end

batch = Batch.new(
  store: store,
  workspace: workspace,
  session: session,
  record_type: :my_record,
  pipeline: pipeline_class
)

record = store.my_record.create!(attr_1: "initial")
batch.add_task(record)
batch.call

puts record.reload.attr_1 # => "assigned"
```

### Processing with callback

```ruby
batch.call do |task|
  if task.done?
    puts "✓ Task #{task.id} completed"
    puts " Steps: #{task.completed_steps.join(', ')}"
  elsif task.failed?
    puts "✗ Task #{task.id} failed"
    puts " Error: #{task.error_message}"
  end
end
```

### Parallel processing with worktrees

```ruby
batch = Batch.new(
  store: store,
  session: session,
  record_type: :my_record,
  pipeline: pipeline_class,
  repo: {
    dir: "/path/to/repo",
    initial_revision: "main",
    worktrees_root_dir: "/tmp/worktrees",
    worktree_branch_prefix: "task",
    worktree_envs: [
      { WORKER: "0" },
      { WORKER: "1" }
    ]
  }
)

10.times do |i|
  record = store.my_record.create!(name: "record-#{i}")
  batch.add_task(record)
end

# Tasks are processed across 2 worktrees in parallel
batch.call
```

### Using rewind_to! for retries

```ruby
pipeline_class = Class.new(AgentC::Pipeline) do
  step(:fetch) do
    counter[:attempts] ||= 0
    counter[:attempts] += 1
    record.update!(data: fetch_api)
  end

  step(:validate) do
    if record.data.nil? && counter[:attempts] < 3
      rewind_to!(:fetch)
    elsif record.data.nil?
      task.fail!("Failed after 3 attempts")
    end
  end

  step(:process) do
    record.update!(processed: true)
  end
end
```

### Agent step with dynamic tools

```ruby
pipeline_class = Class.new(AgentC::Pipeline) do
  agent_step(:analyze) do
    tools = [:read_file]
    tools << :run_rails_test if record.needs_testing?

    {
      prompt: "Analyze #{record.file_path}",
      tools: tools,
      schema: -> { string("result") }
    }
  end
end
```

## Error Handling

If a step raises an exception or Claude returns an error:
1. The task is marked as failed
2. The `on_failure` callbacks from the pipeline are executed
3. Processing continues with the next task

```ruby
class MyPipeline < AgentC::Pipeline
  step(:risky_operation) do
    # May raise an exception
  end

  on_failure do
    # Clean up
    git.reset_hard_all
  end
end
```

## Resuming After Failure

If your batch is interrupted (exception, SIGTERM, etc.), simply run it again. Already-completed tasks and already-completed steps within partially-completed tasks will be skipped.

```ruby
# First run - processes some tasks then crashes
batch.call

# Second run - resumes from where it left off
batch.call
```

## See Also

- [Main README](../README.md) - Complete setup and configuration
- [Pipeline Tips and Tricks](pipeline-tips-and-tricks.md) - Advanced pipeline patterns
- [Store Versioning](versioned-store.md) - Rollback and recovery
- [Cost Reporting](cost-reporting.md) - Track AI usage costs
data/docs/chat-methods.md
ADDED
@@ -0,0 +1,156 @@

# Chat Methods

**Note:** For batch processing and structured workflows, use [Batch and Pipeline](../README.md) instead. The methods below are for direct chat interactions and one-off requests.

AgentC provides several methods for interacting with LLMs, each optimized for different use cases.

## Creating Chats

```ruby
# See the [configuration](./session-configuration.md) for session args
session = Session.new(...)

chat = session.chat(
  tools: [:read_file, :edit_file],
  cached_prompts: ["You are a helpful assistant"],
  workspace_dir: Dir.pwd
)
```

## Chat.ask(message)

Basic interaction - send a message and get a response:

```ruby
chat = session.chat
response = chat.ask("Explain recursion in simple terms")
```

## Chat.get(message, schema:, confirm:, out_of:)

Get a structured response with optional confirmation:

```ruby
# Get a simple answer
answer = chat.get("What is 2 + 2?")

# Get structured response using AgentC::Schema.result
# This creates a schema that accepts either success or error responses
#
# You can make your own schema using RubyLLM::Schema, but
# this is a pretty standard approach. It will allow the LLM
# to indicate that it could not fulfill your request and
# give a reason why.
#
# The response will look like one of the following:
# Success response (just the data fields):
# {
#   name: "...",
#   email: "...",
# }
# OR error response:
# {
#   unable_to_fulfill_request_error: "some reason why it couldn't do it"
# }

schema = AgentC::Schema.result do
  string(:name, description: "Person's name")
  string(:email, description: "Person's email")
end

result = chat.get(
  "Extract the name and email from this text: 'Contact John at john@example.com'",
  schema: schema
)
# => { "name" => "John", "email" => "john@example.com" }

# If the LLM can't complete the task, it returns an error response:
# => { "unable_to_fulfill_request_error" => "No email found in the text" }
```

### Using confirm and out_of for consensus

LLMs are non-deterministic and can give different answers to the same question. The `confirm` feature asks the question multiple times and only accepts an answer when it appears at least `confirm` times out of `out_of` attempts. This gives you much higher confidence the answer isn't a hallucination or random variation.

```ruby
class YesOrNoSchema < RubyLLM::Schema
  string(:value, enum: ["yes", "no"])
end

confirmed = chat.get(
  "Is vanilla better than chocolate?",
  confirm: 2, # Need 2 matching answers
  out_of: 3,  # Out of 3 attempts max
  schema: YesOrNoSchema
)
```

## Chat.refine(message, schema:, times:)

Iteratively refine a response by having the LLM review and improve its own answer.

The refine feature asks your question, gets an answer, then asks the LLM to review that answer for accuracy and improvements. This repeats for the specified number of times. Each iteration gives the LLM a chance to catch mistakes, add detail, or improve quality.

This works because LLMs are often better at *reviewing* content than generating it perfectly the first time - like having an editor review a draft. It's especially effective for creative tasks, complex analysis, or code generation where iterative improvement leads to higher quality outputs.

```ruby
HaikuSchema = RubyLLM::Schema.object(
  haiku: RubyLLM::Schema.string
)

refined_answer = chat.refine(
  "Write a haiku about programming",
  schema: HaikuSchema,
  times: 3 # LLM reviews and refines its answer 3 times
)
```

## Session.prompt() - One-Off Requests

For single-shot requests where you don't need a persistent chat, use `session.prompt()`:

```ruby
# See the [configuration](./session-configuration.md) for session args
session = Session.new(...)

# Simple one-off request
result = session.prompt(
  prompt: "What is the capital of France?",
  schema: -> { string(:answer) }
)
# => ChatResponse with success/error status

# With tools and custom settings
result = session.prompt(
  prompt: "Read the README file and summarize it",
  schema: -> { string(:summary) },
  tools: [:read_file],
  tool_args: { workspace_dir: '/path/to/project' },
  cached_prompt: ["You are a helpful documentation assistant"]
)

if result.success?
  puts result.data['summary']
else
  puts "Error: #{result.error_message}"
end
```

This is equivalent to creating a chat, calling `get()`, and handling the response, but more concise for one-off requests.
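
For comparison, a rough long-hand sketch of the second request above, built only from the `session.chat` and `chat.get` calls shown earlier in this document (the schema description text is illustrative; note that `chat.get` returns the parsed data directly rather than a `ChatResponse`):

```ruby
chat = session.chat(
  tools: [:read_file],
  workspace_dir: '/path/to/project',
  cached_prompts: ["You are a helpful documentation assistant"]
)

schema = AgentC::Schema.result do
  string(:summary, description: "Summary of the README")
end

result = chat.get("Read the README file and summarize it", schema: schema)
# success => { "summary" => "..." }
# failure => { "unable_to_fulfill_request_error" => "..." }
```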

## Cached Prompts

To optimize token usage and reduce costs, you can use cached prompts. Cached prompts are stored in the API provider's cache and can significantly reduce the number of input tokens charged on subsequent requests.

```ruby
# Provide cached prompts that will be reused across conversations
cached_prompts = [
  "You are a helpful coding assistant specialized in Ruby.",
  "Always write idiomatic Ruby code following Ruby community best practices."
]

chat = session.chat(cached_prompts: cached_prompts)
response = chat.ask("Write a method to calculate fibonacci numbers")
```

The first request will incur cache creation costs, but subsequent requests with the same cached prompts will use significantly fewer tokens, reducing overall API costs.