rcrewai 0.2.0 → 0.3.0
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/.rubocop.yml +1 -0
- data/.rubocop_todo.yml +99 -0
- data/CHANGELOG.md +24 -0
- data/README.md +33 -1
- data/Rakefile +53 -53
- data/bin/rcrewai +3 -3
- data/docs/mcp.md +109 -0
- data/docs/superpowers/plans/2026-05-11-llm-modernization.md +2753 -0
- data/docs/superpowers/specs/2026-05-11-llm-modernization-design.md +479 -0
- data/docs/upgrading-to-0.3.md +163 -0
- data/examples/async_execution_example.rb +82 -81
- data/examples/hierarchical_crew_example.rb +68 -72
- data/examples/human_in_the_loop_example.rb +73 -74
- data/examples/mcp_example.rb +48 -0
- data/examples/native_tools_example.rb +64 -0
- data/examples/streaming_example.rb +56 -0
- data/lib/rcrewai/agent.rb +148 -287
- data/lib/rcrewai/async_executor.rb +43 -43
- data/lib/rcrewai/cli.rb +11 -11
- data/lib/rcrewai/configuration.rb +14 -9
- data/lib/rcrewai/crew.rb +56 -39
- data/lib/rcrewai/events.rb +30 -0
- data/lib/rcrewai/human_input.rb +104 -114
- data/lib/rcrewai/legacy_react_runner.rb +172 -0
- data/lib/rcrewai/llm_client.rb +1 -1
- data/lib/rcrewai/llm_clients/anthropic.rb +174 -54
- data/lib/rcrewai/llm_clients/azure.rb +23 -128
- data/lib/rcrewai/llm_clients/base.rb +11 -7
- data/lib/rcrewai/llm_clients/google.rb +159 -95
- data/lib/rcrewai/llm_clients/ollama.rb +150 -106
- data/lib/rcrewai/llm_clients/openai.rb +140 -63
- data/lib/rcrewai/mcp/client.rb +101 -0
- data/lib/rcrewai/mcp/tool_adapter.rb +59 -0
- data/lib/rcrewai/mcp/transport/http.rb +53 -0
- data/lib/rcrewai/mcp/transport/stdio.rb +55 -0
- data/lib/rcrewai/mcp.rb +8 -0
- data/lib/rcrewai/memory.rb +45 -37
- data/lib/rcrewai/pricing.rb +34 -0
- data/lib/rcrewai/process.rb +86 -95
- data/lib/rcrewai/provider_schema.rb +38 -0
- data/lib/rcrewai/sse_parser.rb +55 -0
- data/lib/rcrewai/task.rb +56 -64
- data/lib/rcrewai/tool_runner.rb +132 -0
- data/lib/rcrewai/tool_schema.rb +97 -0
- data/lib/rcrewai/tools/base.rb +98 -37
- data/lib/rcrewai/tools/code_executor.rb +71 -74
- data/lib/rcrewai/tools/email_sender.rb +70 -78
- data/lib/rcrewai/tools/file_reader.rb +38 -30
- data/lib/rcrewai/tools/file_writer.rb +40 -38
- data/lib/rcrewai/tools/pdf_processor.rb +115 -130
- data/lib/rcrewai/tools/sql_database.rb +58 -55
- data/lib/rcrewai/tools/web_search.rb +26 -25
- data/lib/rcrewai/version.rb +2 -2
- data/lib/rcrewai.rb +18 -10
- data/rcrewai.gemspec +55 -36
- metadata +86 -50
@@ -0,0 +1,2753 @@

# LLM Modernization Implementation Plan

> **For agentic workers:** REQUIRED SUB-SKILL: Use superpowers:subagent-driven-development (recommended) or superpowers:executing-plans to implement this plan task-by-task. Steps use checkbox (`- [ ]`) syntax for tracking.

**Goal:** Replace ReAct-style prompt-parsed tool calls with native function calling across all five LLM providers, add a typed streaming event model, and add an MCP client — without breaking existing user code.

**Architecture:** Dual-mode `ToolRunner` (native) + `LegacyReactRunner` (existing `USE_TOOL[]` fallback), driven from a refactored `Agent#execute_task`. A new `RCrewAI::Events` typed stream flows from per-provider SSE parsers through the runner to user-supplied sinks. MCP servers (stdio + HTTP) connect via a small JSON-RPC client and surface their tools as ordinary `Tools::Base` instances.

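The streaming flow described above can be sketched from the consumer's side. This is a standalone illustration, not the released API: the `Struct` shapes mirror the plan's `Events` module (Task 2) with abridged field lists.

```ruby
# Standalone sketch of the typed event model (field lists abridged; the real
# structs are defined in Task 2). A sink is anything responding to #call.
TextDelta     = Struct.new(:text, :agent, :iteration, keyword_init: true)
ToolCallStart = Struct.new(:tool, :args, :call_id, keyword_init: true)

log = []
sink = lambda do |event|
  case event
  when TextDelta     then log << "text:#{event.text}"
  when ToolCallStart then log << "tool:#{event.tool}"
  end
end

sink.call(TextDelta.new(text: "hi", agent: "writer", iteration: 0))
sink.call(ToolCallStart.new(tool: "web_search", args: { q: "ruby" }, call_id: "c1"))
log  # => ["text:hi", "tool:web_search"]
```

Because sinks are plain callables, fan-out to multiple consumers (progress UI, cost tracker, logger) reduces to iterating an array of them, which is what `Events.fan_out` provides in Task 2.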
**Tech Stack:** Ruby 3.0+, Faraday (existing), concurrent-ruby (existing), webmock + VCR (existing dev deps). No new gem dependencies — SSE parsing and JSON-RPC are hand-rolled (~100 LOC each).

**Spec:** `docs/superpowers/specs/2026-05-11-llm-modernization-design.md`

**Versioning:** Currently `0.2.1`. This work ships as **`0.3.0`** (minor bump, two minor breaking changes called out below).

**Phases:**
- **Phase 1 (Tasks 1-5)** — Foundation: schema DSL, events, SSE parser, pricing, new chat contract. Ships nothing user-visible yet.
- **Phase 2 (Tasks 6-10)** — Native tools + streaming across all five providers + ToolRunner + Agent/Crew refactor. **Releasable as `0.3.0-rc1`.**
- **Phase 3 (Tasks 11-12)** — MCP client + docs/examples/CHANGELOG. **Releasable as `0.3.0`.**

A phase boundary is a clean checkpoint: commit, run the full suite, optionally cut a tag.

---

## File Structure

### New files

| Path | Responsibility |
|---|---|
| `lib/rcrewai/tool_schema.rb` | Class-level DSL on `Tools::Base` + canonical JSON schema emitter |
| `lib/rcrewai/provider_schema.rb` | Reshape canonical schema → per-provider format |
| `lib/rcrewai/events.rb` | Typed event classes (`TextDelta`, `ToolCallStart`, …) + sink fan-out helper |
| `lib/rcrewai/sse_parser.rb` | Reusable SSE line/event parser used by all LLM clients and MCP HTTP transport |
| `lib/rcrewai/pricing.rb` | Per-model price table + `cost_for(model, usage)` helper |
| `lib/rcrewai/tool_runner.rb` | Native function-calling loop |
| `lib/rcrewai/legacy_react_runner.rb` | Extracted `USE_TOOL[]` loop from `agent.rb` (behavior-preserving) |
| `lib/rcrewai/mcp.rb` | MCP module entry + autoload glue |
| `lib/rcrewai/mcp/client.rb` | JSON-RPC client + handshake + `tools/list` + `tools/call` |
| `lib/rcrewai/mcp/transport/stdio.rb` | Stdio transport (Process.spawn + IO pipes) |
| `lib/rcrewai/mcp/transport/http.rb` | Streamable HTTP transport (Faraday + SSE) |
| `lib/rcrewai/mcp/tool_adapter.rb` | Wrap an MCP tool as `Tools::Base` |
| `docs/upgrading-to-0.3.md` | Migration guide |
| `docs/mcp.md` | MCP user docs |
| `examples/native_tools_example.rb` | Shows native tool calling end-to-end |
| `examples/streaming_example.rb` | Shows event streaming + cost tracking |
| `examples/mcp_example.rb` | Shows connecting to an MCP server |
| `spec/fixtures/mcp_servers/echo_server.rb` | Minimal Ruby stdio MCP server for tests |
| `spec/fixtures/llm_responses/**` | Recorded JSON / SSE bodies for client unit tests |

### Modified files

| Path | Change |
|---|---|
| `lib/rcrewai.rb` | Require new modules in dependency order |
| `lib/rcrewai/version.rb` | `"0.2.1"` → `"0.3.0"` |
| `lib/rcrewai/configuration.rb` | Add `pricing`, `ollama_native_tools`, `log_level` accessors |
| `lib/rcrewai/llm_clients/base.rb` | New `chat(messages:, tools: nil, stream: nil, **)` contract + `supports_native_tools?` |
| `lib/rcrewai/llm_clients/openai.rb` | Native tools + streaming (reference impl) |
| `lib/rcrewai/llm_clients/anthropic.rb` | Native tools + streaming + prompt-caching hook |
| `lib/rcrewai/llm_clients/google.rb` | Native tools + streaming |
| `lib/rcrewai/llm_clients/azure.rb` | Mostly inherits; auth shape only |
| `lib/rcrewai/llm_clients/ollama.rb` | Native tools (allowlist) + streaming |
| `lib/rcrewai/agent.rb` | Delegate execution to `ToolRunner` / `LegacyReactRunner`; new `tool_calls_history:` in return |
| `lib/rcrewai/crew.rb` | Accept `stream:` kwarg in `execute`; fan out to sinks |
| `lib/rcrewai/tools/*.rb` (7 files) | Add DSL declarations |
| `CHANGELOG.md` | 0.3.0 entry with breaking changes called out |
| `rcrewai.gemspec` | (no change unless we discover one; flagged in Task 12) |

---

# Phase 1 — Foundation

## Task 1: Tool Schema DSL

**Files:**
- Create: `lib/rcrewai/tool_schema.rb`
- Create: `lib/rcrewai/provider_schema.rb`
- Create: `spec/tool_schema_spec.rb`
- Create: `spec/provider_schema_spec.rb`
- Modify: `lib/rcrewai/tools/base.rb` (extend, do not break existing API)
- Modify: `lib/rcrewai.rb` (require new files)

- [ ] **Step 1.1: Write failing spec for canonical schema emission**

Create `spec/tool_schema_spec.rb`:

```ruby
# frozen_string_literal: true
require 'spec_helper'

RSpec.describe RCrewAI::ToolSchema do
  describe 'DSL on Tools::Base subclass' do
    let(:tool_class) do
      Class.new(RCrewAI::Tools::Base) do
        tool_name "demo_tool"
        description "A demo tool"
        param :query, type: :string, required: true, description: "A query"
        param :max_results, type: :integer, default: 10, description: "How many"
        param :tags, type: :array, items: { type: :string }
        param :verbose, type: :boolean, default: false
        param :mode, type: :enum, values: %w[fast slow]

        def execute(query:, max_results: 10, tags: [], verbose: false, mode: "fast")
          { query: query, max_results: max_results }
        end
      end
    end

    it 'exposes tool_name and description' do
      expect(tool_class.tool_name).to eq("demo_tool")
      expect(tool_class.description).to eq("A demo tool")
    end

    it 'emits canonical JSON schema' do
      schema = tool_class.json_schema
      expect(schema[:name]).to eq("demo_tool")
      expect(schema[:description]).to eq("A demo tool")
      expect(schema.dig(:parameters, :type)).to eq("object")
      expect(schema.dig(:parameters, :required)).to eq(["query"])
      props = schema.dig(:parameters, :properties)
      expect(props[:query]).to eq(type: "string", description: "A query")
      expect(props[:max_results]).to include(type: "integer", default: 10)
      expect(props[:tags]).to include(type: "array", items: { type: "string" })
      expect(props[:verbose]).to include(type: "boolean", default: false)
      expect(props[:mode]).to include(type: "string", enum: %w[fast slow])
    end

    it 'instance exposes json_schema' do
      expect(tool_class.new.json_schema).to eq(tool_class.json_schema)
    end
  end

  describe 'fallback when no DSL declared' do
    let(:tool_class) do
      Class.new(RCrewAI::Tools::Base) do
        def execute(**params); params; end
      end
    end

    it 'returns a permissive schema' do
      schema = tool_class.json_schema
      expect(schema[:parameters]).to eq(
        type: "object",
        additionalProperties: true
      )
    end

    it 'prints deprecation warning once per class' do
      expect { tool_class.json_schema }.to output(/no DSL declarations/).to_stderr
      expect { tool_class.json_schema }.not_to output.to_stderr
    end
  end

  describe '#execute_with_validation' do
    let(:tool_class) do
      Class.new(RCrewAI::Tools::Base) do
        tool_name "v"
        description "v"
        param :n, type: :integer, required: true
        def execute(n:); n * 2; end
      end
    end

    it 'coerces string integers' do
      expect(tool_class.new.execute_with_validation({ "n" => "7" })).to eq(14)
    end

    it 'raises ToolError on missing required' do
      expect { tool_class.new.execute_with_validation({}) }
        .to raise_error(RCrewAI::Tools::ToolError, /missing required param: n/i)
    end

    it 'raises ToolError on wrong type' do
      expect { tool_class.new.execute_with_validation({ "n" => "abc" }) }
        .to raise_error(RCrewAI::Tools::ToolError, /n must be integer/i)
    end
  end
end
```

- [ ] **Step 1.2: Run spec; expect failure**

Run: `bundle exec rspec spec/tool_schema_spec.rb`
Expected: `uninitialized constant RCrewAI::ToolSchema`

- [ ] **Step 1.3: Implement `lib/rcrewai/tool_schema.rb`**

```ruby
# frozen_string_literal: true

module RCrewAI
  module ToolSchema
    TYPE_MAP = {
      string: "string", integer: "integer", number: "number",
      boolean: "boolean", array: "array", object: "object", enum: "string"
    }.freeze

    def self.extended(base)
      base.instance_variable_set(:@params, [])
      base.instance_variable_set(:@tool_name, nil)
      base.instance_variable_set(:@description, nil)
    end

    def tool_name(name = nil)
      return @tool_name || name_default if name.nil?
      @tool_name = name.to_s
    end

    def description(desc = nil)
      return @description || "" if desc.nil?
      @description = desc.to_s
    end

    def param(name, type:, required: false, default: nil, description: nil, items: nil, values: nil, properties: nil)
      @params ||= []
      @params << {
        name: name, type: type, required: required, default: default,
        description: description, items: items, values: values, properties: properties
      }
    end

    def params
      @params || []
    end

    def json_schema
      props = {}
      required = []
      params.each do |p|
        entry = { type: TYPE_MAP.fetch(p[:type]) }
        entry[:description] = p[:description] if p[:description]
        entry[:default] = p[:default] unless p[:default].nil?
        entry[:items] = stringify_type(p[:items]) if p[:items]
        entry[:enum] = p[:values] if p[:type] == :enum
        entry[:properties] = p[:properties] if p[:properties]
        props[p[:name]] = entry
        required << p[:name].to_s if p[:required]
      end

      if params.empty?
        warn_once_no_dsl!
        return {
          name: tool_name,
          description: description,
          parameters: { type: "object", additionalProperties: true }
        }
      end

      {
        name: tool_name,
        description: description,
        parameters: {
          type: "object",
          properties: props,
          required: required
        }
      }
    end

    private

    def name_default
      name.to_s.split("::").last.gsub(/([a-z])([A-Z])/, '\1_\2').downcase
    end

    def stringify_type(h)
      return h unless h.is_a?(Hash) && h[:type].is_a?(Symbol)
      h.merge(type: TYPE_MAP.fetch(h[:type]))
    end

    def warn_once_no_dsl!
      return if @warned_no_dsl
      @warned_no_dsl = true
      Kernel.warn "[rcrewai] Tool #{name} has no DSL declarations; using permissive schema. See docs/upgrading-to-0.3.md"
    end
  end
end
```

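A quick worked example of the `name_default` fallback in the block above: when no `tool_name` is declared, the demodulized class name is snake_cased.

```ruby
# Mirrors the name_default chain above: demodulize, then snake_case.
derived = "RCrewAI::Tools::WebSearch"
          .split("::").last
          .gsub(/([a-z])([A-Z])/, '\1_\2')
          .downcase
derived  # => "web_search"
```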
- [ ] **Step 1.4: Extend `Tools::Base` to include the DSL and validation**

Modify `lib/rcrewai/tools/base.rb`. Add at top of file (after `require`s) and inside the class:

```ruby
require_relative '../tool_schema'

module RCrewAI
  module Tools
    class Base
      extend RCrewAI::ToolSchema

      def name
        @name ||= self.class.tool_name
      end

      def description
        @description ||= self.class.description
      end

      def json_schema
        self.class.json_schema
      end

      def execute_with_validation(args_hash)
        coerced = {}
        schema_params = self.class.params

        if schema_params.empty?
          # Permissive: pass through symbolized
          coerced = args_hash.transform_keys(&:to_sym)
          return execute(**coerced)
        end

        schema_params.each do |p|
          key_str = p[:name].to_s
          key_sym = p[:name].to_sym
          if args_hash.key?(key_str) || args_hash.key?(key_sym)
            # key? lookup (not ||) so an explicit false/nil value is preserved
            raw = args_hash.key?(key_str) ? args_hash[key_str] : args_hash[key_sym]
            coerced[key_sym] = coerce(raw, p[:type], p[:name])
          elsif p[:required]
            raise ToolError, "missing required param: #{p[:name]}"
          end
        end

        execute(**coerced)
      end

      private

      def coerce(value, type, name)
        case type
        when :integer
          return value if value.is_a?(Integer)
          Integer(value.to_s)
        when :number
          return value if value.is_a?(Numeric)
          Float(value.to_s)
        when :boolean
          return value if [true, false].include?(value)
          %w[true 1 yes].include?(value.to_s.downcase)
        when :string, :enum
          value.to_s
        when :array, :object
          value
        else
          value
        end
      rescue ArgumentError, TypeError
        raise ToolError, "#{name} must be #{type}, got #{value.inspect}"
      end

      # ... existing code (initialize, execute, validate_params!, etc.) UNCHANGED ...
    end
  end
end
```

Keep the `initialize`, `execute`, `validate_params!`, `self.available_tools`, `self.create_tool`, `self.list_available_tools` methods exactly as they are today — only add the new code above.

- [ ] **Step 1.5: Add require in `lib/rcrewai.rb`**

Insert `require_relative 'rcrewai/tool_schema'` immediately before `require_relative 'rcrewai/tools/base'` in `lib/rcrewai.rb`.

- [ ] **Step 1.6: Run spec; expect pass**

Run: `bundle exec rspec spec/tool_schema_spec.rb`
Expected: all green.

- [ ] **Step 1.7: Write failing spec for provider-schema reshape**

Create `spec/provider_schema_spec.rb`:

```ruby
# frozen_string_literal: true
require 'spec_helper'

RSpec.describe RCrewAI::ProviderSchema do
  let(:canonical) do
    {
      name: "search",
      description: "Search",
      parameters: {
        type: "object",
        properties: { q: { type: "string", description: "query" } },
        required: ["q"]
      }
    }
  end

  it 'reshapes for OpenAI' do
    expect(described_class.for(:openai, canonical)).to eq(
      type: "function",
      function: canonical
    )
  end

  it 'reshapes for Anthropic' do
    out = described_class.for(:anthropic, canonical)
    expect(out).to eq(
      name: "search",
      description: "Search",
      input_schema: canonical[:parameters]
    )
  end

  it 'reshapes for Google' do
    out = described_class.for(:google, canonical)
    expect(out).to eq(
      function_declarations: [{
        name: "search",
        description: "Search",
        parameters: canonical[:parameters]
      }]
    )
  end

  it 'reshapes for Ollama (same as OpenAI minus wrapper)' do
    out = described_class.for(:ollama, canonical)
    expect(out).to eq(type: "function", function: canonical)
  end

  it 'raises on unknown provider' do
    expect { described_class.for(:unknown, canonical) }
      .to raise_error(ArgumentError, /unknown provider/i)
  end
end
```

- [ ] **Step 1.8: Implement `lib/rcrewai/provider_schema.rb`**

```ruby
# frozen_string_literal: true

module RCrewAI
  module ProviderSchema
    module_function

    def for(provider, canonical)
      case provider.to_sym
      when :openai, :azure, :ollama
        { type: "function", function: canonical }
      when :anthropic
        {
          name: canonical[:name],
          description: canonical[:description],
          input_schema: canonical[:parameters]
        }
      when :google
        {
          function_declarations: [{
            name: canonical[:name],
            description: canonical[:description],
            parameters: canonical[:parameters]
          }]
        }
      else
        raise ArgumentError, "unknown provider #{provider.inspect}"
      end
    end

    def for_many(provider, canonicals)
      # `for` is a Ruby keyword, so recursive calls need an explicit receiver.
      if provider.to_sym == :google
        { function_declarations: canonicals.map { |c| self.for(:google, c)[:function_declarations].first } }
      else
        canonicals.map { |c| self.for(provider, c) }
      end
    end
  end
end
```

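For reference, the Anthropic branch above is a pure hash reshape; this standalone example traces it by hand (the `canonical` hash matches the one used in the spec).

```ruby
# The canonical schema keeps OpenAI-style :parameters; Anthropic wants the
# same object under :input_schema with name/description hoisted alongside.
canonical = {
  name: "search", description: "Search",
  parameters: { type: "object",
                properties: { q: { type: "string" } },
                required: ["q"] }
}
anthropic = {
  name: canonical[:name],
  description: canonical[:description],
  input_schema: canonical[:parameters]
}
anthropic.keys                       # => [:name, :description, :input_schema]
anthropic[:input_schema][:required]  # => ["q"]
```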
- [ ] **Step 1.9: Add require in `lib/rcrewai.rb`**

Insert `require_relative 'rcrewai/provider_schema'` immediately after the `tool_schema` require.

- [ ] **Step 1.10: Run spec; expect pass**

Run: `bundle exec rspec spec/provider_schema_spec.rb`
Expected: all green.

- [ ] **Step 1.11: Commit**

```bash
git add lib/rcrewai/tool_schema.rb lib/rcrewai/provider_schema.rb \
        lib/rcrewai/tools/base.rb lib/rcrewai.rb \
        spec/tool_schema_spec.rb spec/provider_schema_spec.rb
git commit -m "feat(tool_schema): add DSL and per-provider schema reshape"
```

---

## Task 2: Events, SSE parser, Pricing

**Files:**
- Create: `lib/rcrewai/events.rb`
- Create: `lib/rcrewai/sse_parser.rb`
- Create: `lib/rcrewai/pricing.rb`
- Create: `spec/events_spec.rb`
- Create: `spec/sse_parser_spec.rb`
- Create: `spec/pricing_spec.rb`
- Modify: `lib/rcrewai.rb` (require new files)

- [ ] **Step 2.1: Write failing spec for events**

Create `spec/events_spec.rb`:

```ruby
# frozen_string_literal: true
require 'spec_helper'

RSpec.describe RCrewAI::Events do
  it 'TextDelta carries text + agent + iteration' do
    e = described_class::TextDelta.new(text: "hi", agent: "a", iteration: 0, timestamp: Time.now)
    expect(e.text).to eq("hi")
    expect(e.agent).to eq("a")
    expect(e.iteration).to eq(0)
  end

  describe '.fan_out' do
    it 'forwards events to every sink' do
      received_a, received_b = [], []
      sink_a = ->(e) { received_a << e }
      sink_b = ->(e) { received_b << e }
      fan = described_class.fan_out([sink_a, sink_b])

      e = described_class::TextDelta.new(text: "x", agent: "a", iteration: 0, timestamp: Time.now)
      fan.call(e)
      expect(received_a).to eq([e])
      expect(received_b).to eq([e])
    end

    it 'isolates one sink raising from the others' do
      received = []
      bad = ->(_) { raise "boom" }
      good = ->(e) { received << e }
      fan = described_class.fan_out([bad, good])
      e = described_class::TextDelta.new(text: "x", agent: "a", iteration: 0, timestamp: Time.now)
      expect { fan.call(e) }.not_to raise_error
      expect(received).to eq([e])
    end
  end
end
```

- [ ] **Step 2.2: Run spec; expect failure**

Run: `bundle exec rspec spec/events_spec.rb`
Expected: `uninitialized constant RCrewAI::Events`

- [ ] **Step 2.3: Implement `lib/rcrewai/events.rb`**

```ruby
# frozen_string_literal: true

module RCrewAI
  module Events
    BaseAttrs = %i[type timestamp agent iteration].freeze

    Event = Struct.new(*BaseAttrs, keyword_init: true)
    TextDelta = Struct.new(*BaseAttrs, :text, keyword_init: true)
    TextDone = Struct.new(*BaseAttrs, :text, keyword_init: true)
    ToolCallStart = Struct.new(*BaseAttrs, :tool, :args, :call_id, keyword_init: true)
    ToolCallResult = Struct.new(*BaseAttrs, :tool, :call_id, :result, :duration_ms, keyword_init: true)
    ToolCallError = Struct.new(*BaseAttrs, :tool, :call_id, :error, keyword_init: true)
    Thinking = Struct.new(*BaseAttrs, :text, keyword_init: true)
    Usage = Struct.new(*BaseAttrs, :prompt_tokens, :completion_tokens, :total_tokens, :cost_usd, keyword_init: true)
    IterationStart = Struct.new(*BaseAttrs, :iteration_index, keyword_init: true)
    IterationEnd = Struct.new(*BaseAttrs, :finish_reason, keyword_init: true)
    Error = Struct.new(*BaseAttrs, :error, keyword_init: true)

    # Returns a Proc that forwards each event to every sink, isolating exceptions.
    def self.fan_out(sinks)
      sinks = Array(sinks).compact
      lambda do |event|
        sinks.each do |s|
          begin
            s.call(event)
          rescue StandardError => e
            Kernel.warn "[rcrewai] event sink raised: #{e.class}: #{e.message}"
          end
        end
      end
    end
  end
end
```

- [ ] **Step 2.4: Run spec; expect pass**

Run: `bundle exec rspec spec/events_spec.rb`
Expected: all green.

- [ ] **Step 2.5: Write failing spec for SSE parser**

Create `spec/sse_parser_spec.rb`:

```ruby
# frozen_string_literal: true
require 'spec_helper'

RSpec.describe RCrewAI::SSEParser do
  it 'parses a simple data event' do
    events = []
    p = described_class.new { |evt| events << evt }
    p.feed("data: hello\n\n")
    expect(events).to eq([{ event: "message", data: "hello" }])
  end

  it 'splits multi-line data with newlines preserved' do
    events = []
    p = described_class.new { |evt| events << evt }
    p.feed("data: one\ndata: two\n\n")
    expect(events.first[:data]).to eq("one\ntwo")
  end

  it 'respects event: field' do
    events = []
    p = described_class.new { |evt| events << evt }
    p.feed("event: ping\ndata: {}\n\n")
    expect(events.first[:event]).to eq("ping")
  end

  it 'handles chunked feeds across event boundary' do
    events = []
    p = described_class.new { |evt| events << evt }
    p.feed("data: par")
    p.feed("tial\n")
    p.feed("\n")
    expect(events.first[:data]).to eq("partial")
  end

  it 'ignores comment lines' do
    events = []
    p = described_class.new { |evt| events << evt }
    p.feed(": heartbeat\n\ndata: x\n\n")
    expect(events.length).to eq(1)
    expect(events.first[:data]).to eq("x")
  end
end
```

- [ ] **Step 2.6: Implement `lib/rcrewai/sse_parser.rb`**

```ruby
# frozen_string_literal: true

module RCrewAI
  # Minimal Server-Sent Events line parser per https://html.spec.whatwg.org/multipage/server-sent-events.html
  # Feed bytes via #feed(chunk); yields { event: String, data: String } per complete event.
  class SSEParser
    def initialize(&block)
      @on_event = block
      @buffer = +""
      @event = "message"
      @data_lines = []
    end

    def feed(chunk)
      @buffer << chunk
      while (idx = @buffer.index("\n"))
        line = @buffer.slice!(0, idx + 1).chomp
        if line.empty?
          dispatch
        elsif line.start_with?(":")
          # comment line, ignore
        elsif (colon = line.index(":"))
          field = line[0...colon]
          value = line[(colon + 1)..]
          value = value[1..] if value.start_with?(" ")
          handle_field(field, value)
        else
          handle_field(line, "")
        end
      end
    end

    private

    def handle_field(field, value)
      case field
      when "event" then @event = value
      when "data" then @data_lines << value
      end
    end

    def dispatch
      return if @data_lines.empty?
      @on_event.call(event: @event, data: @data_lines.join("\n"))
      @event = "message"
      @data_lines = []
    end
  end
end
```

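To make the framing rules concrete, here is a standalone, non-streaming rendition of the same rules applied to a complete buffer (the real parser above additionally handles chunks split at arbitrary byte boundaries): events end at a blank line, comment lines starting with `:` are dropped, and multiple `data:` lines join with `\n`.

```ruby
# Simplified batch version of the SSE framing implemented above; for
# illustration only: it assumes the whole stream is already in memory.
raw = "event: ping\ndata: one\ndata: two\n\n: heartbeat\n\ndata: x\n\n"

events = raw.split("\n\n").filter_map do |block|
  lines = block.split("\n").reject { |l| l.start_with?(":") }  # drop comments
  data  = lines.filter_map { |l| l[/\Adata:\s?(.*)\z/, 1] }
  next if data.empty?                                          # data-less blocks dispatch nothing
  name = lines.find { |l| l.start_with?("event:") }
  { event: name ? name.sub(/\Aevent:\s?/, "") : "message", data: data.join("\n") }
end
events  # => [{ event: "ping", data: "one\ntwo" }, { event: "message", data: "x" }]
```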
|
|
696
|
+
- [ ] **Step 2.7: Run spec; expect pass**
|
|
697
|
+
|
|
698
|
+
Run: `bundle exec rspec spec/sse_parser_spec.rb`
|
|
699
|
+
Expected: all green.
|
|
700
|
+
|
|
701
|
+
- [ ] **Step 2.8: Write failing spec for pricing**

Create `spec/pricing_spec.rb`:

```ruby
# frozen_string_literal: true

require 'spec_helper'

RSpec.describe RCrewAI::Pricing do
  it 'computes cost for a known model' do
    cost = described_class.cost_for("gpt-4o", prompt_tokens: 1_000_000, completion_tokens: 1_000_000)
    expect(cost).to be > 0
  end

  it 'returns nil for unknown model' do
    cost = described_class.cost_for("definitely-not-real", prompt_tokens: 1, completion_tokens: 1)
    expect(cost).to be_nil
  end

  it 'accepts user overrides from configuration' do
    RCrewAI.configuration.pricing = { "totally-fake" => { input: 1.0, output: 2.0 } }
    cost = described_class.cost_for("totally-fake", prompt_tokens: 1_000_000, completion_tokens: 1_000_000)
    expect(cost).to eq(3.0)
  ensure
    RCrewAI.configuration.pricing = nil
  end
end
```

- [ ] **Step 2.9: Implement `lib/rcrewai/pricing.rb`**

```ruby
# frozen_string_literal: true

module RCrewAI
  module Pricing
    # Prices in USD per 1M tokens. List prices as of 2026-05; users can override.
    DEFAULT_PRICES = {
      # OpenAI
      "gpt-4o" => { input: 2.50, output: 10.00 },
      "gpt-4o-mini" => { input: 0.15, output: 0.60 },
      "gpt-4-turbo" => { input: 10.00, output: 30.00 },
      "gpt-4" => { input: 30.00, output: 60.00 },
      "gpt-3.5-turbo" => { input: 0.50, output: 1.50 },
      # Anthropic
      "claude-opus-4-7" => { input: 15.00, output: 75.00 },
      "claude-sonnet-4-6" => { input: 3.00, output: 15.00 },
      "claude-haiku-4-5" => { input: 0.80, output: 4.00 },
      "claude-3-5-sonnet-20241022" => { input: 3.00, output: 15.00 },
      "claude-3-haiku-20240307" => { input: 0.25, output: 1.25 },
      # Google
      "gemini-1.5-pro" => { input: 1.25, output: 5.00 },
      "gemini-1.5-flash" => { input: 0.075, output: 0.30 }
    }.freeze

    module_function

    def cost_for(model, prompt_tokens:, completion_tokens:)
      table = RCrewAI.configuration.pricing || {}
      entry = table[model] || DEFAULT_PRICES[model]
      return nil unless entry

      ((prompt_tokens * entry[:input]) + (completion_tokens * entry[:output])) / 1_000_000.0
    end
  end
end
```

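As a sanity check on the arithmetic (prices are per 1M tokens, not per token): a hypothetical `gpt-4o-mini` request with 120k prompt tokens and 8k completion tokens comes to roughly $0.023. A standalone mirror of the same formula:

```ruby
# Standalone mirror of the cost_for arithmetic above (prices per 1M tokens).
prices = { "gpt-4o-mini" => { input: 0.15, output: 0.60 } }

def cost(entry, prompt_tokens:, completion_tokens:)
  ((prompt_tokens * entry[:input]) + (completion_tokens * entry[:output])) / 1_000_000.0
end

c = cost(prices["gpt-4o-mini"], prompt_tokens: 120_000, completion_tokens: 8_000)
# (120_000 * 0.15 + 8_000 * 0.60) / 1_000_000, i.e. about 0.0228 USD
```
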
- [ ] **Step 2.10: Add `pricing` accessor to Configuration**

Modify `lib/rcrewai/configuration.rb`:

```ruby
# In the attr_accessor lines near the top:
attr_accessor :pricing, :ollama_native_tools, :log_level

# In initialize, add at the bottom of the body (before load_from_env):
@pricing = nil
@ollama_native_tools = nil # nil = auto-detect via allowlist
@log_level = :info
```

- [ ] **Step 2.11: Add requires in `lib/rcrewai.rb`**

Add these in order after `require_relative 'rcrewai/configuration'`:

```ruby
require_relative 'rcrewai/events'
require_relative 'rcrewai/sse_parser'
require_relative 'rcrewai/pricing'
```

- [ ] **Step 2.12: Run spec; expect pass**

Run: `bundle exec rspec spec/pricing_spec.rb spec/events_spec.rb spec/sse_parser_spec.rb`
Expected: all green.

- [ ] **Step 2.13: Commit**

```bash
git add lib/rcrewai/events.rb lib/rcrewai/sse_parser.rb lib/rcrewai/pricing.rb \
    lib/rcrewai/configuration.rb lib/rcrewai.rb \
    spec/events_spec.rb spec/sse_parser_spec.rb spec/pricing_spec.rb
git commit -m "feat(core): add Events, SSE parser, and Pricing modules"
```

---

## Task 3: New LLMClients::Base contract

**Files:**
- Modify: `lib/rcrewai/llm_clients/base.rb`
- Create: `spec/llm_clients/base_spec.rb`

This task only updates the abstract contract — provider implementations come in later tasks. We add new optional kwargs (`tools:`, `tool_choice:`, `stream:`) and a `supports_native_tools?` method with sensible defaults. Existing callers pass nothing extra and keep working.

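For reference while reading the specs in this task and in Task 6, every provider client in this plan returns one normalized hash. A sketch of that shape, with illustrative values taken from the Task 6 fixtures:

```ruby
# Normalized chat result shape shared by all provider clients in this plan
# (values here are illustrative, drawn from the Task 6 fixtures).
result = {
  content: "Hello",     # String, or nil when the model only requested tools
  tool_calls: [         # empty array when the model answered directly
    { id: "call_1", name: "web_search", arguments: { "query" => "ruby" } }
  ],
  usage: { prompt_tokens: 50, completion_tokens: 10, total_tokens: 60 },
  finish_reason: :stop, # or :tool_calls
  model: "gpt-4o",
  provider: :openai
}
```
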
- [ ] **Step 3.1: Write failing spec**

Create `spec/llm_clients/base_spec.rb`:

```ruby
# frozen_string_literal: true

require 'spec_helper'

RSpec.describe RCrewAI::LLMClients::Base do
  let(:config) { RCrewAI.configuration.tap { |c| c.api_key = "x"; c.model = "m" } }

  it 'chat raises NotImplementedError by default' do
    expect { described_class.new(config).chat(messages: []) }
      .to raise_error(NotImplementedError)
  end

  it 'chat accepts tools and stream kwargs without ArgumentError' do
    subclass = Class.new(described_class) do
      def chat(messages:, tools: nil, tool_choice: :auto, stream: nil, **opts)
        { content: "ok", tool_calls: [], usage: {}, finish_reason: :stop, model: "m", provider: :test }
      end

      def validate_config!; end
    end
    out = subclass.new(config).chat(messages: [], tools: [{ name: "x" }], stream: ->(_) {})
    expect(out[:content]).to eq("ok")
  end

  it 'supports_native_tools? defaults to true' do
    subclass = Class.new(described_class) do
      def chat(messages:, **opts); end

      def validate_config!; end
    end
    expect(subclass.new(config).supports_native_tools?(model: "m")).to be true
  end
end
```

- [ ] **Step 3.2: Run spec; expect failure**

Run: `bundle exec rspec spec/llm_clients/base_spec.rb`
Expected: failures on the new signature.

- [ ] **Step 3.3: Update `lib/rcrewai/llm_clients/base.rb`**

Replace the `def chat` signature with the new one, and add `supports_native_tools?`:

```ruby
def chat(messages:, tools: nil, tool_choice: :auto, stream: nil, **options)
  raise NotImplementedError, "Subclasses must implement #chat method"
end

def supports_native_tools?(model: config.model)
  true
end
```

Leave everything else in the file unchanged.

- [ ] **Step 3.4: Run spec; expect pass**

Run: `bundle exec rspec spec/llm_clients/base_spec.rb`
Expected: all green.

- [ ] **Step 3.5: Run full existing suite for regressions**

Run: `bundle exec rspec`
Expected: all green. If any existing `chat` mocks/stubs break because they don't accept `tools:`/`stream:`, fix them by using `**opts` in the subclass override.

- [ ] **Step 3.6: Commit**

```bash
git add lib/rcrewai/llm_clients/base.rb spec/llm_clients/base_spec.rb
git commit -m "feat(llm_clients): new chat() contract with tools and stream kwargs"
```

---

## Task 4: Migrate built-in tools to DSL

**Files:**
- Modify: `lib/rcrewai/tools/web_search.rb`
- Modify: `lib/rcrewai/tools/file_reader.rb`
- Modify: `lib/rcrewai/tools/file_writer.rb`
- Modify: `lib/rcrewai/tools/sql_database.rb`
- Modify: `lib/rcrewai/tools/email_sender.rb`
- Modify: `lib/rcrewai/tools/code_executor.rb`
- Modify: `lib/rcrewai/tools/pdf_processor.rb`
- Create: `spec/tools/builtin_tools_schema_spec.rb`

We add 3-7 lines of DSL to each tool. The `execute` method body is unchanged.

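For orientation, the declarations below should compile to a JSON-schema-style hash. The exact output depends on the DSL added earlier in this plan; the following is a hand-built illustration of what `WebSearch.json_schema` is expected to look like, consistent with the assertions in Step 4.1 and the tool shape used in Task 6:

```ruby
# Hand-built illustration (an assumption, not the DSL's actual output) of the
# schema implied by the web_search declarations below.
schema = {
  name: "web_search",
  description: "Search the web using DuckDuckGo and return top results",
  parameters: {
    type: "object",
    properties: {
      query: { type: "string", description: "Search query" },
      max_results: { type: "integer", description: "Number of results to return (1-25)" }
    },
    required: ["query"] # max_results has a default, so it is optional
  }
}
```
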
- [ ] **Step 4.1: Write failing spec that asserts every built-in tool has a non-permissive schema**

Create `spec/tools/builtin_tools_schema_spec.rb`:

```ruby
# frozen_string_literal: true

require 'spec_helper'

RSpec.describe "built-in tool schemas" do
  RCrewAI::Tools::Base.available_tools.each do |klass|
    describe klass do
      it 'declares a tool_name' do
        expect(klass.tool_name).not_to be_empty
      end

      it 'declares a description' do
        expect(klass.description).not_to be_empty
      end

      it 'declares at least one param' do
        expect(klass.params).not_to be_empty
      end

      it 'emits a non-permissive JSON schema' do
        schema = klass.json_schema
        expect(schema.dig(:parameters, :additionalProperties)).to be_nil
        expect(schema.dig(:parameters, :properties)).to be_a(Hash)
      end
    end
  end
end
```

- [ ] **Step 4.2: Run spec; expect failures across all 7 tools**

Run: `bundle exec rspec spec/tools/builtin_tools_schema_spec.rb`
Expected: 7 × 4 = 28 failures (permissive schema, empty params).

- [ ] **Step 4.3: Migrate `web_search.rb`**

Modify `lib/rcrewai/tools/web_search.rb`. Inspect the existing `execute` signature to determine params, then add at top of class body (right after `class WebSearch < Base`):

```ruby
tool_name "web_search"
description "Search the web using DuckDuckGo and return top results"
param :query, type: :string, required: true,
      description: "Search query"
param :max_results, type: :integer, default: 10,
      description: "Number of results to return (1-25)"
```

Leave everything else unchanged.

- [ ] **Step 4.4: Migrate `file_reader.rb`**

```ruby
tool_name "file_reader"
description "Read the contents of a text file from disk"
param :path, type: :string, required: true,
      description: "Absolute or relative path to the file"
param :encoding, type: :string, default: "utf-8",
      description: "Text encoding"
```

- [ ] **Step 4.5: Migrate `file_writer.rb`**

```ruby
tool_name "file_writer"
description "Write content to a text file on disk"
param :path, type: :string, required: true, description: "Path to write to"
param :content, type: :string, required: true, description: "Content to write"
param :append, type: :boolean, default: false, description: "Append instead of overwrite"
```

- [ ] **Step 4.6: Migrate `sql_database.rb`**

```ruby
tool_name "sql_database"
description "Execute a read-only SQL query and return rows as JSON"
param :query, type: :string, required: true,
      description: "SQL query (SELECT only by default)"
param :connection_string, type: :string, required: false,
      description: "Optional override for the configured DB URL"
```

- [ ] **Step 4.7: Migrate `email_sender.rb`**

```ruby
tool_name "email_sender"
description "Send an email via configured SMTP"
param :to, type: :string, required: true, description: "Recipient address"
param :subject, type: :string, required: true, description: "Email subject"
param :body, type: :string, required: true, description: "Email body (plain text)"
param :html, type: :boolean, default: false, description: "Treat body as HTML"
```

- [ ] **Step 4.8: Migrate `code_executor.rb`**

```ruby
tool_name "code_executor"
description "Execute code in a sandboxed subprocess"
param :code, type: :string, required: true, description: "Source code to run"
param :language, type: :enum, required: true, values: %w[ruby python javascript bash],
      description: "Language of the code"
param :timeout, type: :integer, default: 30, description: "Max execution seconds"
```

- [ ] **Step 4.9: Migrate `pdf_processor.rb`**

```ruby
tool_name "pdf_processor"
description "Extract text from a PDF file"
param :path, type: :string, required: true, description: "Path to the PDF"
param :max_pages, type: :integer, default: 100, description: "Maximum pages to read"
```

- [ ] **Step 4.10: Run spec; expect pass**

Run: `bundle exec rspec spec/tools/builtin_tools_schema_spec.rb`
Expected: all green.

- [ ] **Step 4.11: Run full suite for regressions**

Run: `bundle exec rspec`
Expected: all green. If any tool spec breaks because `name` is now derived from `tool_name`, fix the assertion to use the declared name.

- [ ] **Step 4.12: Commit**

```bash
git add lib/rcrewai/tools/*.rb spec/tools/builtin_tools_schema_spec.rb
git commit -m "feat(tools): declare DSL schemas for all built-in tools"
```

---

## Task 5: Phase 1 checkpoint

- [ ] **Step 5.1: Run full suite**

Run: `bundle exec rspec`
Expected: all green.

- [ ] **Step 5.2: Confirm no behavior change for existing users**

Run an existing example end-to-end (mock or real LLM) to confirm nothing observable has changed yet:

```bash
ruby examples/async_execution_example.rb 2>&1 | head -30
```

Expected: same output as before this branch.

- [ ] **Step 5.3: Tag phase boundary**

```bash
git tag phase-1-foundation
```

Phase 1 complete. New infrastructure exists but no user-visible behavior has changed. Move to Phase 2.

---

# Phase 2 — Native tools + streaming + runner refactor

## Task 6: OpenAI native tools + streaming (reference impl)

**Files:**
- Modify: `lib/rcrewai/llm_clients/openai.rb`
- Create: `spec/llm_clients/openai_spec.rb`
- Create: `spec/fixtures/llm_responses/openai/tool_call.json`
- Create: `spec/fixtures/llm_responses/openai/stream_text.sse`
- Create: `spec/fixtures/llm_responses/openai/stream_tool_call.sse`

- [ ] **Step 6.1: Capture fixture: non-streamed tool call response**

Create `spec/fixtures/llm_responses/openai/tool_call.json`:

```json
{
  "id": "chatcmpl-xyz",
  "model": "gpt-4o",
  "choices": [{
    "index": 0,
    "message": {
      "role": "assistant",
      "content": null,
      "tool_calls": [{
        "id": "call_1",
        "type": "function",
        "function": { "name": "web_search", "arguments": "{\"query\":\"ruby\"}" }
      }]
    },
    "finish_reason": "tool_calls"
  }],
  "usage": { "prompt_tokens": 50, "completion_tokens": 10, "total_tokens": 60 }
}
```

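Note that `function.arguments` in this fixture is a JSON-encoded string inside the JSON response, so the client has to parse it a second time when normalizing. In miniature:

```ruby
require 'json'

# "arguments" arrives as a JSON string nested inside the parsed response,
# so it needs its own JSON.parse (defaulting to "{}" when absent).
msg = { "function" => { "name" => "web_search", "arguments" => "{\"query\":\"ruby\"}" } }
args = JSON.parse(msg.dig("function", "arguments") || "{}")
# => { "query" => "ruby" }
```
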
- [ ] **Step 6.2: Capture fixture: streamed text response**

Create `spec/fixtures/llm_responses/openai/stream_text.sse`:

```
data: {"id":"1","choices":[{"index":0,"delta":{"role":"assistant","content":"Hel"},"finish_reason":null}]}

data: {"id":"1","choices":[{"index":0,"delta":{"content":"lo"},"finish_reason":null}]}

data: {"id":"1","choices":[{"index":0,"delta":{},"finish_reason":"stop"}],"usage":{"prompt_tokens":5,"completion_tokens":2,"total_tokens":7}}

data: [DONE]

```

- [ ] **Step 6.3: Capture fixture: streamed tool call**

Create `spec/fixtures/llm_responses/openai/stream_tool_call.sse`:

```
data: {"id":"1","choices":[{"index":0,"delta":{"role":"assistant","tool_calls":[{"index":0,"id":"call_1","type":"function","function":{"name":"web_search","arguments":""}}]},"finish_reason":null}]}

data: {"id":"1","choices":[{"index":0,"delta":{"tool_calls":[{"index":0,"function":{"arguments":"{\"que"}}]},"finish_reason":null}]}

data: {"id":"1","choices":[{"index":0,"delta":{"tool_calls":[{"index":0,"function":{"arguments":"ry\":\"ruby\"}"}}]},"finish_reason":null}]}

data: {"id":"1","choices":[{"index":0,"delta":{},"finish_reason":"tool_calls"}],"usage":{"prompt_tokens":40,"completion_tokens":12,"total_tokens":52}}

data: [DONE]

```

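Note how the `arguments` string arrives split across deltas; the client is expected to concatenate the fragments per tool-call index before parsing. In miniature:

```ruby
require 'json'

# The three argument fragments from the fixture above, joined in arrival order,
# form one parseable JSON object.
fragments = ["", "{\"que", "ry\":\"ruby\"}"]
args = JSON.parse(fragments.join)
# => { "query" => "ruby" }
```
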
- [ ] **Step 6.4: Write failing spec**

Create `spec/llm_clients/openai_spec.rb`:

```ruby
# frozen_string_literal: true

require 'spec_helper'
require 'webmock/rspec'

RSpec.describe RCrewAI::LLMClients::OpenAI do
  let(:config) do
    RCrewAI.configuration.tap do |c|
      c.llm_provider = :openai
      c.openai_api_key = "test-key"
      c.openai_model = "gpt-4o"
    end
  end
  let(:client) { described_class.new(config) }

  describe '#chat with tools (non-streaming)' do
    it 'sends tools in OpenAI shape and returns tool_calls' do
      tool_schema = {
        name: "web_search",
        description: "Search",
        parameters: { type: "object", properties: { query: { type: "string" } }, required: ["query"] }
      }

      stub = stub_request(:post, "https://api.openai.com/v1/chat/completions")
        .with(body: hash_including(
          "model" => "gpt-4o",
          "tools" => [{ "type" => "function", "function" => tool_schema.transform_keys(&:to_s) }]
        ))
        .to_return(status: 200, body: File.read("spec/fixtures/llm_responses/openai/tool_call.json"),
                   headers: { "Content-Type" => "application/json" })

      result = client.chat(messages: [{ role: "user", content: "hi" }], tools: [tool_schema])

      expect(stub).to have_been_requested
      expect(result[:content]).to be_nil
      expect(result[:tool_calls]).to eq([{
        id: "call_1", name: "web_search", arguments: { "query" => "ruby" }
      }])
      expect(result[:finish_reason]).to eq(:tool_calls)
      expect(result[:usage]).to eq(prompt_tokens: 50, completion_tokens: 10, total_tokens: 60)
    end
  end

  describe '#chat with stream:' do
    it 'emits TextDelta events and returns final assembled result' do
      stub_request(:post, "https://api.openai.com/v1/chat/completions")
        .to_return(status: 200, body: File.read("spec/fixtures/llm_responses/openai/stream_text.sse"),
                   headers: { "Content-Type" => "text/event-stream" })

      events = []
      result = client.chat(messages: [{ role: "user", content: "hi" }],
                           stream: ->(e) { events << e })

      text_events = events.select { |e| e.is_a?(RCrewAI::Events::TextDelta) }
      expect(text_events.map(&:text)).to eq(%w[Hel lo])
      expect(result[:content]).to eq("Hello")
      expect(result[:finish_reason]).to eq(:stop)
      expect(events.last).to be_a(RCrewAI::Events::Usage)
    end

    it 'assembles streamed tool_call arguments' do
      stub_request(:post, "https://api.openai.com/v1/chat/completions")
        .to_return(status: 200, body: File.read("spec/fixtures/llm_responses/openai/stream_tool_call.sse"),
                   headers: { "Content-Type" => "text/event-stream" })

      events = []
      result = client.chat(messages: [{ role: "user", content: "search" }],
                           tools: [{ name: "web_search", description: "x",
                                     parameters: { type: "object", properties: {}, required: [] } }],
                           stream: ->(e) { events << e })

      expect(result[:tool_calls]).to eq([{ id: "call_1", name: "web_search", arguments: { "query" => "ruby" } }])
      expect(result[:finish_reason]).to eq(:tool_calls)
    end
  end

  describe '#supports_native_tools?' do
    it 'returns true for any OpenAI model' do
      expect(client.supports_native_tools?(model: "gpt-4o")).to be true
    end
  end
end
```

- [ ] **Step 6.5: Run spec; expect failure**

Run: `bundle exec rspec spec/llm_clients/openai_spec.rb`
Expected: failures (current OpenAI client doesn't accept `tools:`, no streaming).

- [ ] **Step 6.6: Rewrite `lib/rcrewai/llm_clients/openai.rb`**

Replace the file body (keeping the class name and module) with:

```ruby
# frozen_string_literal: true

require 'faraday'
require 'json'
require_relative 'base'
require_relative '../events'
require_relative '../sse_parser'
require_relative '../provider_schema'
require_relative '../pricing'

module RCrewAI
  module LLMClients
    class OpenAI < Base
      BASE_URL = 'https://api.openai.com/v1'

      def chat(messages:, tools: nil, tool_choice: :auto, stream: nil, **options)
        payload = {
          model: config.model,
          messages: messages,
          temperature: options[:temperature] || config.temperature,
          max_tokens: options[:max_tokens] || config.max_tokens
        }.compact

        if tools && !tools.empty?
          payload[:tools] = ProviderSchema.for_many(:openai, tools)
          payload[:tool_choice] = tool_choice if tool_choice != :auto
        end

        if stream
          payload[:stream] = true
          payload[:stream_options] = { include_usage: true }
          stream_chat(payload, stream)
        else
          plain_chat(payload)
        end
      end

      def supports_native_tools?(model: config.model)
        true
      end

      private

      def plain_chat(payload)
        url = "#{BASE_URL}/chat/completions"
        log_request(:post, url, payload)
        response = http_client.post(url, payload, build_headers.merge(auth_header))
        body = handle_response(response)
        normalize_non_streaming(body)
      end

      def stream_chat(payload, sink)
        url = "#{BASE_URL}/chat/completions"
        log_request(:post, url, payload)

        assembled_text = +""
        tool_calls_by_index = {}
        final_usage = nil
        finish_reason = nil

        parser = SSEParser.new do |sse|
          next if sse[:data] == "[DONE]"

          data = JSON.parse(sse[:data])
          choice = data.dig("choices", 0) || {}
          delta = choice["delta"] || {}

          if delta["content"]
            assembled_text << delta["content"]
            sink.call(Events::TextDelta.new(
              type: :text_delta, timestamp: Time.now, agent: nil, iteration: nil,
              text: delta["content"]
            ))
          end

          Array(delta["tool_calls"]).each do |tc|
            idx = tc["index"]
            tool_calls_by_index[idx] ||= { id: nil, name: nil, arguments: +"" }
            tool_calls_by_index[idx][:id] ||= tc["id"]
            tool_calls_by_index[idx][:name] ||= tc.dig("function", "name")
            tool_calls_by_index[idx][:arguments] << (tc.dig("function", "arguments") || "")
          end

          finish_reason ||= choice["finish_reason"]&.to_sym

          if data["usage"]
            final_usage = {
              prompt_tokens: data["usage"]["prompt_tokens"],
              completion_tokens: data["usage"]["completion_tokens"],
              total_tokens: data["usage"]["total_tokens"]
            }
          end
        end

        streaming_post(url, payload) do |chunk|
          parser.feed(chunk)
        end

        tool_calls = tool_calls_by_index.values.map do |tc|
          { id: tc[:id], name: tc[:name], arguments: tc[:arguments].empty? ? {} : JSON.parse(tc[:arguments]) }
        end

        if final_usage
          sink.call(Events::Usage.new(
            type: :usage, timestamp: Time.now, agent: nil, iteration: nil,
            prompt_tokens: final_usage[:prompt_tokens],
            completion_tokens: final_usage[:completion_tokens],
            total_tokens: final_usage[:total_tokens],
            cost_usd: Pricing.cost_for(config.model,
                                       prompt_tokens: final_usage[:prompt_tokens],
                                       completion_tokens: final_usage[:completion_tokens])
          ))
        end

        {
          content: assembled_text.empty? ? nil : assembled_text,
          tool_calls: tool_calls,
          usage: final_usage || {},
          finish_reason: finish_reason || :stop,
          model: config.model,
          provider: :openai
        }
      end

      def streaming_post(url, payload, &on_chunk)
        conn = Faraday.new do |f|
          f.request :json
          f.options.timeout = config.timeout
          f.adapter Faraday.default_adapter
        end
        conn.post(url) do |req|
          req.headers = build_headers.merge(auth_header)
          req.body = payload.to_json
          req.options.on_data = proc { |chunk, _| on_chunk.call(chunk) }
        end
      end

      def normalize_non_streaming(body)
        choice = body.dig("choices", 0) || {}
        msg = choice["message"] || {}
        tool_calls = Array(msg["tool_calls"]).map do |tc|
          {
            id: tc["id"],
            name: tc.dig("function", "name"),
            arguments: JSON.parse(tc.dig("function", "arguments") || "{}")
          }
        end
        {
          content: msg["content"],
          tool_calls: tool_calls,
          usage: {
            prompt_tokens: body.dig("usage", "prompt_tokens"),
            completion_tokens: body.dig("usage", "completion_tokens"),
            total_tokens: body.dig("usage", "total_tokens")
          },
          finish_reason: (choice["finish_reason"] || "stop").to_sym,
          model: body["model"] || config.model,
          provider: :openai
        }
      end

      def auth_header
        { 'Authorization' => "Bearer #{config.openai_api_key || config.api_key}" }
      end

      def validate_config!
        raise ConfigurationError, "OpenAI API key required" unless config.openai_api_key || config.api_key
        raise ConfigurationError, "Model required" unless config.model
      end
    end
  end
end
```

- [ ] **Step 6.7: Run spec; expect pass**

Run: `bundle exec rspec spec/llm_clients/openai_spec.rb`
Expected: all green.

- [ ] **Step 6.8: Commit**

```bash
git add lib/rcrewai/llm_clients/openai.rb spec/llm_clients/openai_spec.rb \
    spec/fixtures/llm_responses/openai/
git commit -m "feat(openai): native tool calling and SSE streaming"
```

---

## Task 7: ToolRunner + LegacyReactRunner

**Files:**
- Create: `lib/rcrewai/tool_runner.rb`
- Create: `lib/rcrewai/legacy_react_runner.rb`
- Create: `spec/tool_runner_spec.rb`
- Create: `spec/legacy_react_runner_spec.rb`
- Modify: `lib/rcrewai.rb`

The `LegacyReactRunner` is a straight extraction of the existing prompt-template + regex-parsing code from `agent.rb` (lines ~280-360). Behavior must be identical; we just move it to its own class.

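The control flow the ToolRunner specs below pin down can be sketched as a plain loop: call the LLM, and while it requests tools, execute them and feed the results back. The names and message threading here are assumptions for illustration, not the gem's implementation:

```ruby
# Sketch of the tool-calling loop implied by the ToolRunner specs
# (illustrative; the real runner also handles events, approval, and errors).
def run_tool_loop(llm:, tools:, messages:, max_iterations: 10)
  history = []
  iterations = 0
  while iterations < max_iterations
    iterations += 1
    resp = llm.call(messages)
    # No tool requests means the model produced its final answer.
    return { content: resp[:content], iterations: iterations, tool_calls_history: history } if resp[:tool_calls].empty?

    resp[:tool_calls].each do |tc|
      tool = tools.fetch(tc[:name])
      result = tool.call(tc[:arguments])
      history << { tool: tc[:name], args: tc[:arguments], result: result }
      messages << { role: "tool", tool_call_id: tc[:id], content: result }
    end
  end
  { content: nil, iterations: iterations, tool_calls_history: history }
end

# Stub LLM: first asks for the echo tool, then answers.
replies = [
  { content: nil, tool_calls: [{ id: "c1", name: "echo", arguments: { "msg" => "hi" } }] },
  { content: "Done.", tool_calls: [] }
]
llm = ->(_messages) { replies.shift }
tools = { "echo" => ->(args) { "echoed: #{args["msg"]}" } }

out = run_tool_loop(llm: llm, tools: tools, messages: [{ role: "user", content: "echo hi" }])
# => content "Done." after 2 iterations, with one history entry
```
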
- [ ] **Step 7.1: Write failing spec for ToolRunner happy path**

Create `spec/tool_runner_spec.rb`:

```ruby
# frozen_string_literal: true

require 'spec_helper'

class FakeTool < RCrewAI::Tools::Base
  tool_name "echo"
  description "echo args back"
  param :msg, type: :string, required: true
  def execute(msg:); "echoed: #{msg}"; end
end

RSpec.describe RCrewAI::ToolRunner do
  let(:tool) { FakeTool.new }
  let(:agent) do
    double("Agent",
           name: "a",
           memory: double("Memory", add_tool_usage: nil),
           require_approval_for_tools?: false)
  end

  context 'when LLM responds with a tool_call then a final answer' do
    let(:llm) do
      responses = [
        { content: nil, tool_calls: [{ id: "c1", name: "echo", arguments: { "msg" => "hi" } }],
          usage: {}, finish_reason: :tool_calls, model: "m", provider: :test },
        { content: "Done.", tool_calls: [], usage: {},
          finish_reason: :stop, model: "m", provider: :test }
      ]
      llm = double("LLM")
      allow(llm).to receive(:chat) { responses.shift }
      llm
    end

    it 'runs to completion in 2 iterations with tool result threaded in' do
      events = []
      runner = described_class.new(agent: agent, llm: llm, tools: [tool],
                                   event_sink: ->(e) { events << e })
      result = runner.run(messages: [{ role: "user", content: "echo hi" }])

      expect(result[:content]).to eq("Done.")
      expect(result[:iterations]).to eq(2)
      # Use `match` rather than `eq`: composed matchers like kind_of only
      # work inside `match`.
      expect(result[:tool_calls_history]).to match([
        { tool: "echo", args: { "msg" => "hi" }, result: "echoed: hi", duration_ms: kind_of(Integer) }
      ])
      types = events.map(&:class)
      expect(types).to include(RCrewAI::Events::ToolCallStart, RCrewAI::Events::ToolCallResult)
    end

    it 'records tool usage in agent memory' do
      expect(agent.memory).to receive(:add_tool_usage).with("echo", { "msg" => "hi" }, "echoed: hi")
      runner = described_class.new(agent: agent, llm: llm, tools: [tool])
      runner.run(messages: [{ role: "user", content: "echo hi" }])
    end
  end

  context 'when a tool raises' do
    let(:bad_tool) do
      Class.new(RCrewAI::Tools::Base) do
        tool_name "bad"
        description "bad"
        param :x, type: :string, required: true
        def execute(x:); raise "boom"; end
      end.new
    end

    let(:llm) do
      responses = [
        { content: nil, tool_calls: [{ id: "c1", name: "bad", arguments: { "x" => "y" } }],
          usage: {}, finish_reason: :tool_calls, model: "m", provider: :test },
        { content: "Recovered.", tool_calls: [], usage: {},
          finish_reason: :stop, model: "m", provider: :test }
      ]
      double("LLM").tap { |l| allow(l).to receive(:chat) { responses.shift } }
    end

    it 'emits ToolCallError, threads error back into messages, continues' do
      events = []
      runner = described_class.new(agent: agent, llm: llm, tools: [bad_tool],
                                   event_sink: ->(e) { events << e })
      result = runner.run(messages: [{ role: "user", content: "go" }])

      expect(result[:content]).to eq("Recovered.")
      expect(events.any? { |e| e.is_a?(RCrewAI::Events::ToolCallError) }).to be true
    end
  end

  context 'when max_iterations is reached' do
    let(:llm) do
      always_tool = {
        content: nil, tool_calls: [{ id: "c", name: "echo", arguments: { "msg" => "x" } }],
        usage: {}, finish_reason: :tool_calls, model: "m", provider: :test
      }
      double("LLM").tap { |l| allow(l).to receive(:chat).and_return(always_tool) }
    end

+
it 'stops after max_iterations and returns best-effort' do
|
|
1534
|
+
runner = described_class.new(agent: agent, llm: llm, tools: [tool], max_iterations: 3)
|
|
1535
|
+
result = runner.run(messages: [{ role: "user", content: "loop" }])
|
|
1536
|
+
expect(result[:iterations]).to eq(3)
|
|
1537
|
+
expect(result[:finish_reason]).to eq(:max_iterations)
|
|
1538
|
+
end
|
|
1539
|
+
end
|
|
1540
|
+
end
|
|
1541
|
+
```
|
|
1542
|
+
|
|
1543
|
+
- [ ] **Step 7.2: Run spec; expect failure**

Run: `bundle exec rspec spec/tool_runner_spec.rb`
Expected: `uninitialized constant RCrewAI::ToolRunner`

- [ ] **Step 7.3: Implement `lib/rcrewai/tool_runner.rb`**

```ruby
# frozen_string_literal: true

require_relative 'events'
require_relative 'provider_schema'

module RCrewAI
  class ToolRunner
    DEFAULT_MAX_ITERATIONS = 10

    def initialize(agent:, llm:, tools:, max_iterations: DEFAULT_MAX_ITERATIONS, event_sink: nil)
      @agent = agent
      @llm = llm
      @tools = tools
      @tools_by_name = tools.each_with_object({}) { |t, h| h[t.name] = t }
      @max_iterations = max_iterations
      @sink = event_sink || ->(_) {}
    end

    def run(messages:)
      msgs = messages.dup
      history = []
      iter = 0
      total_usage = { prompt_tokens: 0, completion_tokens: 0, total_tokens: 0 }

      while iter < @max_iterations
        iter += 1
        emit(Events::IterationStart, iteration: iter)

        response = @llm.chat(
          messages: msgs,
          tools: @tools.map(&:json_schema),
          stream: ->(e) { @sink.call(retag(e, iter)) }
        )
        accumulate_usage(total_usage, response[:usage])

        if response[:tool_calls].nil? || response[:tool_calls].empty?
          emit(Events::IterationEnd, finish_reason: response[:finish_reason], iteration: iter)
          return finalize(content: response[:content], history: history, iter: iter,
                          finish_reason: response[:finish_reason], usage: total_usage)
        end

        # Append the assistant tool-call message before the tool results.
        msgs << { role: "assistant", content: response[:content], tool_calls: response[:tool_calls] }

        response[:tool_calls].each do |tc|
          tool = @tools_by_name[tc[:name]]
          emit(Events::ToolCallStart, tool: tc[:name], args: tc[:arguments], call_id: tc[:id], iteration: iter)

          if tool.nil?
            err = "tool not found: #{tc[:name]}"
            emit(Events::ToolCallError, tool: tc[:name], call_id: tc[:id], error: err, iteration: iter)
            msgs << tool_result_message(tc[:id], "ERROR: #{err}")
            next
          end

          started = monotonic_ms
          begin
            result = tool.execute_with_validation(tc[:arguments] || {})
            duration = monotonic_ms - started
            @agent.memory.add_tool_usage(tc[:name], tc[:arguments], result) if @agent.respond_to?(:memory) && @agent.memory
            emit(Events::ToolCallResult, tool: tc[:name], call_id: tc[:id], result: result,
                 duration_ms: duration, iteration: iter)
            history << { tool: tc[:name], args: tc[:arguments], result: result, duration_ms: duration }
            msgs << tool_result_message(tc[:id], result.to_s)
          rescue StandardError => e
            emit(Events::ToolCallError, tool: tc[:name], call_id: tc[:id], error: e.message, iteration: iter)
            msgs << tool_result_message(tc[:id], "ERROR: #{e.message}")
          end
        end

        emit(Events::IterationEnd, finish_reason: :tool_calls, iteration: iter)
      end

      finalize(content: nil, history: history, iter: iter, finish_reason: :max_iterations, usage: total_usage)
    end

    private

    def tool_result_message(call_id, content)
      { role: "tool", tool_call_id: call_id, content: content }
    end

    def emit(klass, **attrs)
      @sink.call(klass.new(timestamp: Time.now, agent: @agent.respond_to?(:name) ? @agent.name : nil,
                           iteration: attrs.delete(:iteration), type: event_type(klass), **attrs))
    end

    # "ToolCallStart" → :tool_call_start, matching the :text_delta convention
    # used by the streaming events.
    def event_type(klass)
      klass.name.split("::").last.gsub(/([a-z\d])([A-Z])/, '\1_\2').downcase.to_sym
    end

    def retag(event, iter)
      event.agent = @agent.respond_to?(:name) ? @agent.name : nil if event.respond_to?(:agent=)
      event.iteration = iter if event.respond_to?(:iteration=) && event.iteration.nil?
      event
    end

    def accumulate_usage(total, partial)
      return unless partial.is_a?(Hash)
      total[:prompt_tokens] += partial[:prompt_tokens] || 0
      total[:completion_tokens] += partial[:completion_tokens] || 0
      total[:total_tokens] += partial[:total_tokens] || 0
    end

    def finalize(content:, history:, iter:, finish_reason:, usage:)
      { content: content, tool_calls_history: history, usage: usage, iterations: iter, finish_reason: finish_reason }
    end

    def monotonic_ms
      # ::Process, not Process: inside module RCrewAI the bare constant would
      # resolve to RCrewAI::Process (lib/rcrewai/process.rb).
      (::Process.clock_gettime(::Process::CLOCK_MONOTONIC) * 1000).to_i
    end
  end
end
```
- [ ] **Step 7.4: Run spec; expect pass**

Run: `bundle exec rspec spec/tool_runner_spec.rb`
Expected: all green.

- [ ] **Step 7.5: Extract LegacyReactRunner from agent.rb**

Read `lib/rcrewai/agent.rb` lines 280-360 (the prompt-building + `USE_TOOL[]` regex-scanning section). Copy that code into `lib/rcrewai/legacy_react_runner.rb` with this skeleton:

```ruby
# frozen_string_literal: true

require_relative 'events'

module RCrewAI
  class LegacyReactRunner
    def initialize(agent:, llm:, tools:, max_iterations: 10, event_sink: nil)
      @agent = agent
      @llm = llm
      @tools = tools
      @max_iterations = max_iterations
      @sink = event_sink || ->(_) {}
    end

    def run(messages:)
      # MOVE the existing prompt-building, LLM call, regex scanning, and
      # tool dispatch logic from Agent#execute_task verbatim. Wrap iterations
      # in Events::IterationStart / IterationEnd. Emit Events::TextDone with
      # the final content. Return shape:
      # { content:, tool_calls_history:, usage:, iterations:, finish_reason: }
    end

    # ... helper methods extracted from agent.rb (parse_tool_params, etc.) ...
  end
end
```

Fill the body by copying from `agent.rb` exactly. The goal is **behavior-preserving** extraction. Reference `Agent#use_tool` through the injected `@agent`.

- [ ] **Step 7.6: Write spec that pins legacy behavior**

Create `spec/legacy_react_runner_spec.rb`:

```ruby
# frozen_string_literal: true
require 'spec_helper'

RSpec.describe RCrewAI::LegacyReactRunner do
  let(:tool) { FakeTool.new } # defined in tool_runner_spec.rb; load helper or inline-define
  let(:agent) do
    double("Agent", name: "a", memory: double(add_tool_usage: nil),
           available_tools_description: "- echo: echo")
  end

  it 'parses USE_TOOL[name](k=v) and threads the result' do
    responses = [
      { content: "Reasoning... USE_TOOL[echo](msg=hi)\nDone", usage: {}, finish_reason: :stop,
        model: "m", provider: :test, tool_calls: [] }
    ]
    llm = double("LLM").tap { |l| allow(l).to receive(:chat) { responses.shift } }

    runner = described_class.new(agent: agent, llm: llm, tools: [tool])
    result = runner.run(messages: [{ role: "user", content: "x" }])

    expect(result[:content]).to include("Done")
    expect(result[:tool_calls_history].first[:tool]).to eq("echo")
  end
end
```

Re-define `FakeTool` here if not shared via `spec/support/`.
- [ ] **Step 7.7: Run spec; expect pass**

Run: `bundle exec rspec spec/legacy_react_runner_spec.rb`
Expected: all green.

- [ ] **Step 7.8: Add requires in `lib/rcrewai.rb`**

Add after `require_relative 'rcrewai/pricing'`:

```ruby
require_relative 'rcrewai/tool_runner'
require_relative 'rcrewai/legacy_react_runner'
```

- [ ] **Step 7.9: Commit**

```bash
git add lib/rcrewai/tool_runner.rb lib/rcrewai/legacy_react_runner.rb \
    lib/rcrewai.rb spec/tool_runner_spec.rb spec/legacy_react_runner_spec.rb
git commit -m "feat(runner): add ToolRunner and extract LegacyReactRunner"
```

---
## Task 8: Agent refactor + Crew streaming pass-through

**Files:**
- Modify: `lib/rcrewai/agent.rb`
- Modify: `lib/rcrewai/crew.rb`
- Create: `spec/agent_streaming_spec.rb`
- Modify: `spec/agent_spec.rb` (add cases for tool_calls_history and stream pass-through)

- [ ] **Step 8.1: Refactor `Agent#execute_task`**

Modify `lib/rcrewai/agent.rb`. Replace the body of `execute_task` so that, instead of building a giant prompt and regex-scanning, it picks a runner:

```ruby
def execute_task(task, stream: nil, **opts)
  llm = @llm_client
  initial_messages = build_initial_messages(task)
  sink = stream || ->(_) {}

  mode = if llm.supports_native_tools?(model: llm.config.model) && @tools.all? { |t| t.json_schema }
           :native_tools
         else
           :react_legacy
         end
  @logger.info "[rcrewai] agent=#{name} mode=#{mode} provider=#{llm.config.llm_provider}"

  runner_class = mode == :native_tools ? ToolRunner : LegacyReactRunner
  runner = runner_class.new(agent: self, llm: llm, tools: @tools,
                            max_iterations: opts.fetch(:max_iterations, 10),
                            event_sink: sink)

  result = runner.run(messages: initial_messages)
  # Preserve the existing return shape; add tool_calls_history.
  build_task_result(task, result)
end

def require_approval_for_tools?
  @require_approval_for_tools && @human_input_enabled
end

private

def build_initial_messages(task)
  # Build [{ role: "system", content: ... }, { role: "user", content: task.description }].
  # The system prompt should include role, goal, backstory, and (for native mode) NO USE_TOOL
  # instructions. LegacyReactRunner injects its own USE_TOOL prompt section as today.
  system = <<~SYS
    You are #{role}. Goal: #{goal}. #{backstory}
    You may call tools by name when needed.
  SYS
  [{ role: "system", content: system }, { role: "user", content: task.description }]
end

def build_task_result(task, runner_result)
  # Match the historical return shape of execute_task. Add :tool_calls_history.
  {
    task: task.name,
    agent: name,
    content: runner_result[:content],
    tool_calls_history: runner_result[:tool_calls_history],
    usage: runner_result[:usage],
    iterations: runner_result[:iterations],
    finish_reason: runner_result[:finish_reason]
  }
end
```

Keep `use_tool`, `available_tools_description`, human-approval helpers, and delegation methods exactly as they are. Delete only the prompt-building and `USE_TOOL[]` parsing methods that are now duplicated in `LegacyReactRunner`. (Audit: any method that LegacyReactRunner calls on `@agent` must remain.)
- [ ] **Step 8.2: Add streaming to Crew**

Modify `lib/rcrewai/crew.rb`. In `def execute(...)`:

```ruby
def execute(stream: nil, async: false, max_concurrency: 1, timeout: nil, &block)
  sinks = []
  sinks << block if block_given?
  # Accept a single callable or an array of callables. Branching on the class
  # avoids adding an Array twice (once as an object, once via concat).
  case stream
  when nil, true then nil
  when Array    then sinks.concat(stream)
  else               sinks << stream
  end
  fan = sinks.empty? ? nil : Events.fan_out(sinks)

  # ... existing logic, but each task execution passes stream: fan ...
  # e.g. agent.execute_task(task, stream: fan)
end
```

Existing callers that don't pass `stream:` are unaffected (fan is `nil`).
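`Events.fan_out` is referenced but not spelled out in this step; a minimal sketch of what it could be, assuming it just wraps a list of callables in one forwarding lambda (the error isolation is an assumption, not a documented requirement):

```ruby
# Hypothetical sketch of Events.fan_out: returns a single callable that
# forwards each event to every sink, isolating sink errors so one bad
# subscriber cannot abort the crew run.
module Events
  def self.fan_out(sinks)
    lambda do |event|
      sinks.each do |sink|
        begin
          sink.call(event)
        rescue StandardError
          # Swallow subscriber errors; streaming is best-effort.
        end
      end
    end
  end
end
```

With this shape, `fan` is itself a valid `event_sink` for the runners, so the crew-level fan-out and the agent-level sink share one interface.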
- [ ] **Step 8.3: Write streaming spec**

Create `spec/agent_streaming_spec.rb`:

```ruby
# frozen_string_literal: true
require 'spec_helper'

RSpec.describe "Agent streaming pass-through" do
  it 'forwards events from runner to user sink' do
    # Stub LLM to return a single text-only response
    fake_llm = double("LLM")
    allow(fake_llm).to receive(:supports_native_tools?).and_return(true)
    allow(fake_llm).to receive(:config).and_return(RCrewAI.configuration)
    allow(fake_llm).to receive(:chat) do |**kwargs|
      kwargs[:stream]&.call(RCrewAI::Events::TextDelta.new(
        type: :text_delta, timestamp: Time.now, agent: nil, iteration: nil, text: "hi"))
      { content: "hi", tool_calls: [], usage: {}, finish_reason: :stop, model: "m", provider: :openai }
    end

    agent = RCrewAI::Agent.new(name: "a", role: "r", goal: "g", tools: [])
    agent.instance_variable_set(:@llm_client, fake_llm)

    task = RCrewAI::Task.new(name: "t", description: "say hi", agent: agent, expected_output: "x")

    received = []
    agent.execute_task(task, stream: ->(e) { received << e })

    text_events = received.select { |e| e.is_a?(RCrewAI::Events::TextDelta) }
    expect(text_events.map(&:text)).to include("hi")
    expect(text_events.first.agent).to eq("a")
  end
end
```

- [ ] **Step 8.4: Run new + existing specs; fix regressions**

Run: `bundle exec rspec`
Expected: all green. Existing `agent_spec.rb` tests that asserted exact `USE_TOOL[]` prompt strings need updating to use a stubbed LLM that returns either `:tool_calls` (native) or the legacy text format.

- [ ] **Step 8.5: Commit**

```bash
git add lib/rcrewai/agent.rb lib/rcrewai/crew.rb \
    spec/agent_streaming_spec.rb spec/agent_spec.rb
git commit -m "feat(agent): delegate to ToolRunner/LegacyReactRunner; add stream: pass-through"
```
---

## Task 9: Anthropic native tools + streaming + caching hook

**Files:**
- Modify: `lib/rcrewai/llm_clients/anthropic.rb`
- Create: `spec/llm_clients/anthropic_spec.rb`
- Create: `spec/fixtures/llm_responses/anthropic/tool_call.json`
- Create: `spec/fixtures/llm_responses/anthropic/stream_tool_call.sse`

Anthropic differences vs. OpenAI:
- Tools live at `tools:` top-level (not wrapped in `{type: "function", function: ...}`).
- Tool schema field is `input_schema`, not `parameters`.
- Stream events use `content_block_delta` with `delta.type = "text_delta"` or `"input_json_delta"`.
- Tool calls arrive as `content` blocks with `type: "tool_use"`.
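The first two bullets amount to a mechanical schema rewrite. A standalone sketch of the mapping that `ProviderSchema.for_many(:anthropic, tools)` is assumed to perform (the helper name comes from this plan; the function below is illustrative, not the real implementation):

```ruby
# Hypothetical sketch: convert one OpenAI-style tool schema into the
# Anthropic shape. Only the field name changes: `parameters` becomes
# `input_schema`; the JSON Schema body is passed through untouched.
def to_anthropic_tool(tool_schema)
  {
    name: tool_schema[:name],
    description: tool_schema[:description],
    input_schema: tool_schema[:parameters]
  }
end

openai_style = {
  name: "web_search", description: "Search",
  parameters: { type: "object", properties: { query: { type: "string" } }, required: ["query"] }
}
anthropic_style = to_anthropic_tool(openai_style)
```

Note there is no `{type: "function", function: ...}` wrapper on the Anthropic side: the converted hashes go directly into the top-level `tools:` array.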
- [ ] **Step 9.1: Capture fixture: non-streamed tool call**

Create `spec/fixtures/llm_responses/anthropic/tool_call.json`:

```json
{
  "id": "msg_1",
  "model": "claude-sonnet-4-6",
  "role": "assistant",
  "content": [
    { "type": "tool_use", "id": "toolu_1", "name": "web_search", "input": { "query": "ruby" } }
  ],
  "stop_reason": "tool_use",
  "usage": { "input_tokens": 50, "output_tokens": 10 }
}
```

- [ ] **Step 9.2: Capture fixture: streamed tool call**

Create `spec/fixtures/llm_responses/anthropic/stream_tool_call.sse`:

```
event: message_start
data: {"type":"message_start","message":{"id":"m","model":"claude-sonnet-4-6","role":"assistant","usage":{"input_tokens":40,"output_tokens":0}}}

event: content_block_start
data: {"type":"content_block_start","index":0,"content_block":{"type":"tool_use","id":"toolu_1","name":"web_search","input":{}}}

event: content_block_delta
data: {"type":"content_block_delta","index":0,"delta":{"type":"input_json_delta","partial_json":"{\"query\":"}}

event: content_block_delta
data: {"type":"content_block_delta","index":0,"delta":{"type":"input_json_delta","partial_json":"\"ruby\"}"}}

event: content_block_stop
data: {"type":"content_block_stop","index":0}

event: message_delta
data: {"type":"message_delta","delta":{"stop_reason":"tool_use"},"usage":{"output_tokens":12}}

event: message_stop
data: {"type":"message_stop"}

```
- [ ] **Step 9.3: Write failing spec**

Create `spec/llm_clients/anthropic_spec.rb`:

```ruby
# frozen_string_literal: true
require 'spec_helper'
require 'webmock/rspec'

RSpec.describe RCrewAI::LLMClients::Anthropic do
  let(:config) do
    RCrewAI.configuration.tap do |c|
      c.llm_provider = :anthropic
      c.anthropic_api_key = "k"
      c.anthropic_model = "claude-sonnet-4-6"
    end
  end
  let(:client) { described_class.new(config) }

  it 'sends tools at top level with input_schema and parses tool_use blocks' do
    tool_schema = {
      name: "web_search", description: "Search",
      parameters: { type: "object", properties: { query: { type: "string" } }, required: ["query"] }
    }

    stub = stub_request(:post, "https://api.anthropic.com/v1/messages")
           .with(body: hash_including(
             "model" => "claude-sonnet-4-6",
             "tools" => [{ "name" => "web_search", "description" => "Search",
                           "input_schema" => { "type" => "object",
                                               "properties" => { "query" => { "type" => "string" } },
                                               "required" => ["query"] } }]
           ))
           .to_return(status: 200, body: File.read("spec/fixtures/llm_responses/anthropic/tool_call.json"),
                      headers: { "Content-Type" => "application/json" })

    result = client.chat(messages: [{ role: "user", content: "hi" }], tools: [tool_schema])

    expect(stub).to have_been_requested
    expect(result[:tool_calls]).to eq([{ id: "toolu_1", name: "web_search", arguments: { "query" => "ruby" } }])
    expect(result[:finish_reason]).to eq(:tool_calls)
    expect(result[:usage]).to eq(prompt_tokens: 50, completion_tokens: 10, total_tokens: 60)
  end

  it 'assembles streamed input_json_delta into a tool_call' do
    stub_request(:post, "https://api.anthropic.com/v1/messages")
      .to_return(status: 200,
                 body: File.read("spec/fixtures/llm_responses/anthropic/stream_tool_call.sse"),
                 headers: { "Content-Type" => "text/event-stream" })

    events = []
    result = client.chat(
      messages: [{ role: "user", content: "x" }],
      tools: [{ name: "web_search", description: "x",
                parameters: { type: "object", properties: {}, required: [] } }],
      stream: ->(e) { events << e }
    )

    expect(result[:tool_calls]).to eq([{ id: "toolu_1", name: "web_search", arguments: { "query" => "ruby" } }])
    expect(result[:finish_reason]).to eq(:tool_calls)
  end

  it 'attaches cache_control to large system blocks when cache_system: true' do
    stub_request(:post, "https://api.anthropic.com/v1/messages")
      .with(body: hash_including(
        "system" => [hash_including("cache_control" => { "type" => "ephemeral" })]
      ))
      .to_return(status: 200,
                 body: '{"content":[{"type":"text","text":"ok"}],"stop_reason":"end_turn","usage":{"input_tokens":1,"output_tokens":1}}',
                 headers: { "Content-Type" => "application/json" })

    client.chat(messages: [{ role: "system", content: "BIG SYSTEM" * 200 },
                           { role: "user", content: "hi" }],
                cache_system: true)
  end
end
```
- [ ] **Step 9.4: Rewrite `lib/rcrewai/llm_clients/anthropic.rb`**

Implement following the OpenAI shape but with Anthropic mapping. Key points:

```ruby
# In chat(...):
# - Extract the system message; if cache_system: true, wrap it as
#   [{ "type" => "text", "text" => sys, "cache_control" => { "type" => "ephemeral" } }]
# - tools: ProviderSchema.for_many(:anthropic, tools) if tools
# - For streaming: parse named SSE events ("content_block_start",
#   "content_block_delta", "message_delta", "message_stop"); accumulate
#   text deltas into assembled_text and input_json_delta into the right
#   tool_use block (index → tool_call slot).
# - normalize_non_streaming walks response["content"] and picks out
#   `tool_use` blocks → tool_calls, `text` blocks → content.
# - finish_reason mapping: "tool_use" → :tool_calls, "end_turn" → :stop, "max_tokens" → :length
```

Reuse the existing `validate_config!` and headers (x-api-key, anthropic-version). Use `RCrewAI::SSEParser` for stream parsing. Emit `Events::TextDelta` from `text_delta` deltas; emit `Events::Usage` from `message_delta.usage`.

For the full reference shape, model this client on the same skeleton as OpenAI in Task 6.6, applying the differences above.
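The `input_json_delta` accumulation is the fiddly part of the streaming path. A standalone sketch, independent of the real client (`assemble_tool_calls` is a hypothetical helper operating on already-parsed SSE `data` payloads):

```ruby
require 'json'

# Assemble Anthropic tool calls from a stream of parsed event payloads.
# Each tool_use content block is keyed by its `index`; partial_json
# fragments are concatenated and parsed once the stream is complete.
def assemble_tool_calls(events)
  blocks = {}
  events.each do |ev|
    case ev["type"]
    when "content_block_start"
      cb = ev["content_block"]
      blocks[ev["index"]] = { id: cb["id"], name: cb["name"], json: +"" } if cb["type"] == "tool_use"
    when "content_block_delta"
      blk   = blocks[ev["index"]]
      delta = ev["delta"]
      blk[:json] << delta["partial_json"] if blk && delta["type"] == "input_json_delta"
    end
  end
  blocks.values.map do |b|
    { id: b[:id], name: b[:name], arguments: b[:json].empty? ? {} : JSON.parse(b[:json]) }
  end
end
```

Parsing only after `content_block_stop`/end-of-stream matters: individual `partial_json` fragments (as in the fixture above) are not valid JSON on their own.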
- [ ] **Step 9.5: Run spec; expect pass**

Run: `bundle exec rspec spec/llm_clients/anthropic_spec.rb`
Expected: all green.

- [ ] **Step 9.6: Commit**

```bash
git add lib/rcrewai/llm_clients/anthropic.rb spec/llm_clients/anthropic_spec.rb \
    spec/fixtures/llm_responses/anthropic/
git commit -m "feat(anthropic): native tools, streaming, and prompt-caching hook"
```

---

## Task 10: Google + Azure + Ollama native tools + streaming

**Files:**
- Modify: `lib/rcrewai/llm_clients/google.rb`
- Modify: `lib/rcrewai/llm_clients/azure.rb`
- Modify: `lib/rcrewai/llm_clients/ollama.rb`
- Create: `spec/llm_clients/google_spec.rb`
- Create: `spec/llm_clients/azure_spec.rb`
- Create: `spec/llm_clients/ollama_spec.rb`
- Create: `spec/fixtures/llm_responses/google/tool_call.json`
- Create: `spec/fixtures/llm_responses/google/stream.sse`
- Create: `spec/fixtures/llm_responses/ollama/tool_call.json`

This task is repeated three times (one provider at a time). Each follows the same pattern as Tasks 6 and 9: capture fixture → write failing spec → implement → run.
### Google (Gemini)

- [ ] **Step 10.1: Implement Google client following the per-provider shape**

Gemini specifics:
- Endpoint: `https://generativelanguage.googleapis.com/v1beta/models/{model}:generateContent?key=API_KEY` (non-streaming) or `:streamGenerateContent?alt=sse&key=API_KEY` (streaming).
- Tools: top-level `tools: [{ function_declarations: [...] }]` (use `ProviderSchema.for_many(:google, tools)`, which already returns this shape from one list).
- Tool calls arrive as `candidates[].content.parts[].functionCall`.
- Usage: `usageMetadata.{promptTokenCount,candidatesTokenCount,totalTokenCount}`.
- Finish reason: `STOP` → `:stop`, `MAX_TOKENS` → `:length`, `FUNCTION_CALL`/`TOOL_USE` → `:tool_calls`.
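The bullets above can be condensed into one normalization sketch, assuming an already-parsed response hash. Note one assumption: Gemini `functionCall` parts carry no call id, so the function name doubles as the id here; the real client may choose a different scheme.

```ruby
# Hypothetical sketch: normalize a parsed Gemini generateContent response
# into the unified { content:, tool_calls:, usage:, finish_reason: } shape.
def normalize_gemini(response)
  candidate = response["candidates"].first
  parts = candidate.dig("content", "parts") || []
  calls = parts.filter_map do |p|
    fc = p["functionCall"]
    # No native call id in Gemini; reuse the name (assumption).
    { id: fc["name"], name: fc["name"], arguments: fc["args"] } if fc
  end
  usage = response["usageMetadata"] || {}
  {
    content: parts.filter_map { |p| p["text"] }.join,
    tool_calls: calls,
    usage: { prompt_tokens: usage["promptTokenCount"].to_i,
             completion_tokens: usage["candidatesTokenCount"].to_i,
             total_tokens: usage["totalTokenCount"].to_i },
    finish_reason: if calls.any? then :tool_calls
                   elsif candidate["finishReason"] == "MAX_TOKENS" then :length
                   else :stop
                   end
  }
end
```

Checking `calls.any?` first is a defensive choice: a response containing `functionCall` parts is treated as `:tool_calls` regardless of what `finishReason` string accompanies it.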
- [ ] **Step 10.2: Capture fixture + write spec mirroring OpenAI shape; run; pass; commit**

```bash
git add lib/rcrewai/llm_clients/google.rb spec/llm_clients/google_spec.rb \
    spec/fixtures/llm_responses/google/
git commit -m "feat(google): native tools and streaming"
```
### Azure

- [ ] **Step 10.3: Implement Azure client**

Azure uses the OpenAI wire format with a different base URL and auth header. Strategy: have `Azure` subclass `OpenAI` and override only:
- `BASE_URL` → built from `config.azure_endpoint` + `/openai/deployments/#{deployment}/chat/completions?api-version=#{config.api_version}`
- `auth_header` → `{ 'api-key' => config.azure_api_key }`
- `validate_config!` → require `azure_api_key`, `azure_endpoint`, `deployment_name`, `api_version`
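The subclassing strategy fits in a few lines. A toy sketch with stand-in class names (the real client inherits request building, SSE handling, and response normalization unchanged, which is the whole point of the inheritance):

```ruby
# Toy stand-ins illustrating the Azure-subclasses-OpenAI strategy:
# only URL construction and the auth header differ.
Config = Struct.new(:azure_endpoint, :deployment_name, :api_version, :azure_api_key)

class OpenAIClient
  attr_reader :config
  def initialize(config)
    @config = config
  end

  def chat_url
    "https://api.openai.com/v1/chat/completions"
  end
end

class AzureClient < OpenAIClient
  def chat_url
    "#{config.azure_endpoint}/openai/deployments/#{config.deployment_name}" \
      "/chat/completions?api-version=#{config.api_version}"
  end

  def auth_header
    # Azure authenticates with `api-key`, not `Authorization: Bearer`.
    { "api-key" => config.azure_api_key }
  end
end
```

This is exactly what the smoke-test spec below should pin: URL construction and the `api-key` header, nothing about the (inherited) wire format.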
Add a smoke-test spec asserting the URL is constructed correctly and `auth_header` uses `api-key`.

- [ ] **Step 10.4: Spec, run, commit**

```bash
git add lib/rcrewai/llm_clients/azure.rb spec/llm_clients/azure_spec.rb
git commit -m "feat(azure): inherit OpenAI native tools and streaming"
```
### Ollama

- [ ] **Step 10.5: Implement Ollama client**

Ollama specifics:
- `POST http://host:port/api/chat` with body `{ model, messages, tools, stream: bool }`.
- Non-streaming returns one JSON document with `message.content` and `message.tool_calls`.
- Streaming is **line-delimited JSON** (not SSE): feed lines through a plain JSON-per-line parser, not the SSEParser.
- Tools shape matches OpenAI (`{type:"function", function:{...}}`).
- Native-tools allowlist for `supports_native_tools?`:

```ruby
NATIVE_TOOL_MODELS = %w[
  llama3.1 llama3.1:8b llama3.1:70b llama3.1:405b
  llama3.2 llama3.2:1b llama3.2:3b
  qwen2.5 qwen2.5:7b qwen2.5:14b qwen2.5:32b qwen2.5:72b
  mistral-nemo mistral-large
  command-r command-r-plus
  firefunction-v2
].freeze

def supports_native_tools?(model: config.model)
  return RCrewAI.configuration.ollama_native_tools unless RCrewAI.configuration.ollama_native_tools.nil?
  base = model.to_s.split(":").first
  NATIVE_TOOL_MODELS.any? { |m| m == model || m.split(":").first == base }
end
```
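The "plain JSON-per-line parser" mentioned above has one subtlety worth sketching: network chunks can split or join lines arbitrarily, so a buffer is needed. A standalone sketch (class name and callback shape are illustrative, not the plan's API):

```ruby
require 'json'

# Buffering reader for newline-delimited JSON streams (Ollama's format).
# Raw network chunks are appended to a buffer; every complete line is
# parsed and yielded, while a trailing partial line waits for more data.
class JSONLinesParser
  def initialize(&on_object)
    @buffer = +""
    @on_object = on_object
  end

  # Feed one raw chunk; invokes the callback once per complete JSON line.
  def feed(chunk)
    @buffer << chunk
    while (idx = @buffer.index("\n"))
      line = @buffer.slice!(0..idx).strip
      @on_object.call(JSON.parse(line)) unless line.empty?
    end
  end
end
```

The same buffering concern exists in the SSE path; the difference is only the framing (bare `\n`-separated JSON objects here vs. `event:`/`data:` records there).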
- [ ] **Step 10.6: Spec — assert allowlist + ReAct fallback selection; run; commit**

```bash
git add lib/rcrewai/llm_clients/ollama.rb spec/llm_clients/ollama_spec.rb \
    spec/fixtures/llm_responses/ollama/
git commit -m "feat(ollama): native tools (allowlist) and streaming"
```

- [ ] **Step 10.7: Phase 2 checkpoint**

Run: `bundle exec rspec`
Expected: all green.

Run all three examples to confirm no end-to-end regressions:

```bash
ruby examples/async_execution_example.rb 2>&1 | head -30
ruby examples/human_in_the_loop_example.rb 2>&1 | head -30
ruby examples/hierarchical_crew_example.rb 2>&1 | head -30
```

Tag:

```bash
git tag phase-2-providers
```

Phase 2 complete. The gem could now ship as `0.3.0-rc1` with native tools and streaming everywhere. MCP comes next.
---
|
|
2175
|
+
|
|
2176
|
+
# Phase 3 — MCP client + docs + release
|
|
2177
|
+
|
|
2178
|
+
## Task 11: MCP client + transports + ToolAdapter
|
|
2179
|
+
|
|
2180
|
+
**Files:**
|
|
2181
|
+
- Create: `lib/rcrewai/mcp.rb`
|
|
2182
|
+
- Create: `lib/rcrewai/mcp/client.rb`
|
|
2183
|
+
- Create: `lib/rcrewai/mcp/transport/stdio.rb`
|
|
2184
|
+
- Create: `lib/rcrewai/mcp/transport/http.rb`
|
|
2185
|
+
- Create: `lib/rcrewai/mcp/tool_adapter.rb`
|
|
2186
|
+
- Create: `spec/mcp/client_spec.rb`
|
|
2187
|
+
- Create: `spec/mcp/transport/stdio_spec.rb`
|
|
2188
|
+
- Create: `spec/mcp/transport/http_spec.rb`
|
|
2189
|
+
- Create: `spec/mcp/tool_adapter_spec.rb`
|
|
2190
|
+
- Create: `spec/fixtures/mcp_servers/echo_server.rb`
|
|
2191
|
+
- Modify: `lib/rcrewai.rb` (autoload MCP module)
|
|
2192
|
+
|
|
2193
|
+
- [ ] **Step 11.1: Write the fixture MCP server**
|
|
2194
|
+
|
|
2195
|
+
Create `spec/fixtures/mcp_servers/echo_server.rb`:
|
|
2196
|
+
|
|
2197
|
+
```ruby
|
|
2198
|
+
#!/usr/bin/env ruby
|
|
2199
|
+
# Minimal stdio MCP server: implements initialize, tools/list, tools/call (for one tool: echo).
|
|
2200
|
+
require 'json'
|
|
2201
|
+
|
|
2202
|
+
$stdout.sync = true
|
|
2203
|
+
|
|
2204
|
+
loop do
|
|
2205
|
+
line = $stdin.gets
|
|
2206
|
+
break if line.nil?
|
|
2207
|
+
req = JSON.parse(line)
|
|
2208
|
+
id = req["id"]
|
|
2209
|
+
|
|
2210
|
+
case req["method"]
|
|
2211
|
+
when "initialize"
|
|
2212
|
+
puts({ jsonrpc: "2.0", id: id, result: {
|
|
2213
|
+
protocolVersion: "2024-11-05",
|
|
2214
|
+
capabilities: { tools: {} },
|
|
2215
|
+
serverInfo: { name: "echo-server", version: "0.1" }
|
|
2216
|
+
}}.to_json)
|
|
2217
|
+
when "tools/list"
|
|
2218
|
+
puts({ jsonrpc: "2.0", id: id, result: {
|
|
2219
|
+
tools: [{
|
|
2220
|
+
name: "echo",
|
|
2221
|
+
description: "Echoes its input",
|
|
2222
|
+
inputSchema: { type: "object", properties: { message: { type: "string" } }, required: ["message"] }
|
|
2223
|
+
}]
|
|
2224
|
+
}}.to_json)
|
|
2225
|
+
when "tools/call"
|
|
2226
|
+
msg = req.dig("params", "arguments", "message")
|
|
2227
|
+
puts({ jsonrpc: "2.0", id: id, result: {
|
|
2228
|
+
content: [{ type: "text", text: "echo: #{msg}" }]
|
|
2229
|
+
}}.to_json)
|
|
2230
|
+
when "notifications/initialized"
|
|
2231
|
+
# no response for notifications
|
|
2232
|
+
else
|
|
2233
|
+
puts({ jsonrpc: "2.0", id: id, error: { code: -32601, message: "method not found" } }.to_json)
|
|
2234
|
+
end
|
|
2235
|
+
end
|
|
2236
|
+
```
|
|
2237
|
+
|
|
2238
|
+
Make it executable: `chmod +x spec/fixtures/mcp_servers/echo_server.rb`.
|
|
2239
|
+
|
|
2240
|
+
- [ ] **Step 11.2: Write failing client spec**
|
|
2241
|
+
|
|
2242
|
+
Create `spec/mcp/client_spec.rb`:
|
|
2243
|
+
|
|
2244
|
+
```ruby
|
|
2245
|
+
# frozen_string_literal: true
|
|
2246
|
+
require 'spec_helper'
|
|
2247
|
+
|
|
2248
|
+
RSpec.describe RCrewAI::MCP::Client do
|
|
2249
|
+
let(:server_path) { File.expand_path("../fixtures/mcp_servers/echo_server.rb", __dir__) }
|
|
2250
|
+
|
|
2251
|
+
it 'handshakes, lists tools, and calls a tool' do
|
|
2252
|
+
client = described_class.connect(command: "ruby", args: [server_path])
|
|
2253
|
+
expect(client.server_name).to eq("echo-server")
|
|
2254
|
+
expect(client.tools.map(&:name)).to eq(["echo-server__echo"])
|
|
2255
|
+
|
|
2256
|
+
tool = client.tools.first
|
|
2257
|
+
result = tool.execute(message: "hello")
|
|
2258
|
+
expect(result).to include("echo: hello")
|
|
2259
|
+
ensure
|
|
2260
|
+
client&.close
|
|
2261
|
+
end
|
|
2262
|
+
|
|
2263
|
+
it 'with_connection auto-closes on block exit' do
|
|
2264
|
+
described_class.with_connection(command: "ruby", args: [server_path]) do |client|
|
|
2265
|
+
expect(client.tools).not_to be_empty
|
|
2266
|
+
end
|
|
2267
|
+
end
|
|
2268
|
+
|
|
2269
|
+
it 'translates MCP inputSchema to canonical JSON schema for tool adapters' do
|
|
2270
|
+
described_class.with_connection(command: "ruby", args: [server_path]) do |client|
|
|
2271
|
+
schema = client.tools.first.json_schema
|
|
2272
|
+
expect(schema[:name]).to eq("echo-server__echo")
|
|
2273
|
+
expect(schema[:parameters]).to include(
|
|
2274
|
+
type: "object",
|
|
2275
|
+
properties: { "message" => { "type" => "string" } },
|
|
2276
|
+
required: ["message"]
|
|
2277
|
+
)
|
|
2278
|
+
end
|
|
2279
|
+
end
|
|
2280
|
+
end
|
|
2281
|
+
```
|
|
2282
|
+
|
|
2283
|
+
- [ ] **Step 11.3: Implement `lib/rcrewai/mcp/transport/stdio.rb`**
|
|
2284
|
+
|
|
2285
|
+
```ruby
|
|
2286
|
+
# frozen_string_literal: true
|
|
2287
|
+
|
|
2288
|
+
require 'json'
|
|
2289
|
+
|
|
2290
|
+
module RCrewAI
|
|
2291
|
+
module MCP
|
|
2292
|
+
module Transport
|
|
2293
|
+
class Stdio
|
|
2294
|
+
def initialize(command:, args: [], env: {})
|
|
2295
|
+
@command = command
|
|
2296
|
+
@args = args
|
|
2297
|
+
@env = env
|
|
2298
|
+
@stdin = nil
|
|
2299
|
+
@stdout = nil
|
|
2300
|
+
@pid = nil
|
|
2301
|
+
end
|
|
2302
|
+
|
|
2303
|
+
def open
|
|
2304
|
+
@stdin, @stdout, _stderr_thread, wait_thr = open_pipes
|
|
2305
|
+
@pid = wait_thr.pid
|
|
2306
|
+
ObjectSpace.define_finalizer(self, self.class.finalize(@pid))
|
|
2307
|
+
end
|
|
2308
|
+
|
|
2309
|
+
def send_line(json)
|
|
2310
|
+
@stdin.write(json + "\n")
|
|
2311
|
+
@stdin.flush
|
|
2312
|
+
end
|
|
2313
|
+
|
|
2314
|
+
def recv_line
|
|
2315
|
+
@stdout.gets
|
|
2316
|
+
end
|
|
2317
|
+
|
|
2318
|
+
def close
|
|
2319
|
+
return unless @pid
|
|
2320
|
+
Process.kill("TERM", @pid) rescue nil
|
|
2321
|
+
@stdin&.close rescue nil
|
|
2322
|
+
@stdout&.close rescue nil
|
|
2323
|
+
@pid = nil
|
|
2324
|
+
end
|
|
2325
|
+
|
|
2326
|
+
def self.finalize(pid)
|
|
2327
|
+
proc { Process.kill("KILL", pid) rescue nil }
|
|
2328
|
+
end
|
|
2329
|
+
|
|
2330
|
+
private
|
|
2331
|
+
|
|
2332
|
+
def open_pipes
|
|
2333
|
+
require 'open3'
|
|
2334
|
+
stdin, stdout, stderr, wait_thr = Open3.popen3(@env, @command, *@args)
|
|
2335
|
+
stderr_thread = Thread.new { stderr.each_line { |l| Kernel.warn "[mcp-stderr] #{l}" } }
|
|
2336
|
+
[stdin, stdout, stderr_thread, wait_thr]
|
|
2337
|
+
end
|
|
2338
|
+
end
|
|
2339
|
+
end
|
|
2340
|
+
end
|
|
2341
|
+
end
|
|
2342
|
+
```
|
|
2343
|
+
|
|
2344
|
+
- [ ] **Step 11.4: Implement `lib/rcrewai/mcp/transport/http.rb`**
|
|
2345
|
+
|
|
2346
|
+
```ruby
|
|
2347
|
+
# frozen_string_literal: true
|
|
2348
|
+
|
|
2349
|
+
require 'faraday'
|
|
2350
|
+
require_relative '../../sse_parser'
|
|
2351
|
+
|
|
2352
|
+
module RCrewAI
|
|
2353
|
+
module MCP
|
|
2354
|
+
module Transport
|
|
2355
|
+
class Http
|
|
2356
|
+
def initialize(url:, headers: {})
|
|
2357
|
+
@url = url
|
|
2358
|
+
@headers = headers
|
|
2359
|
+
@queue = Queue.new
|
|
2360
|
+
@sse_thread = nil
|
|
2361
|
+
end
|
|
2362
|
+
|
|
2363
|
+
def open
|
|
2364
|
+
@http = Faraday.new(url: @url) do |f|
|
|
2365
|
+
f.adapter Faraday.default_adapter
|
|
2366
|
+
end
|
|
2367
|
+
@sse_thread = Thread.new { start_sse_stream }
|
|
2368
|
+
end
|
|
2369
|
+
|
|
2370
|
+
def send_line(json)
|
|
2371
|
+
@http.post("") do |req|
|
|
2372
|
+
req.headers.merge!(@headers).merge!("Content-Type" => "application/json")
|
|
2373
|
+
req.body = json
|
|
2374
|
+
end
|
|
2375
|
+
end
|
|
2376
|
+
|
|
2377
|
+
def recv_line
|
|
2378
|
+
@queue.pop
|
|
2379
|
+
end
|
|
2380
|
+
|
|
2381
|
+
def close
|
|
2382
|
+
@sse_thread&.kill
|
|
2383
|
+
@queue.close if @queue.respond_to?(:close)
|
|
2384
|
+
end
|
|
2385
|
+
|
|
2386
|
+
private
|
|
2387
|
+
|
|
2388
|
+
def start_sse_stream
|
|
2389
|
+
parser = SSEParser.new do |evt|
|
|
2390
|
+
@queue << evt[:data] + "\n" if evt[:event] == "message" || evt[:event].nil?
|
|
2391
|
+
end
|
|
2392
|
+
@http.get("") do |req|
|
|
2393
|
+
req.headers.merge!(@headers).merge!("Accept" => "text/event-stream")
|
|
2394
|
+
req.options.on_data = proc { |chunk, _| parser.feed(chunk) }
|
|
2395
|
+
end
|
|
2396
|
+
end
|
|
2397
|
+
end
|
|
2398
|
+
end
|
|
2399
|
+
end
|
|
2400
|
+
end
|
|
2401
|
+
```
|
|
2402
|
+
|
|
2403
|
+
- [ ] **Step 11.5: Implement `lib/rcrewai/mcp/client.rb`**
|
|
2404
|
+
|
|
2405
|
+
```ruby
|
|
2406
|
+
# frozen_string_literal: true
|
|
2407
|
+
|
|
2408
|
+
require 'json'
|
|
2409
|
+
require_relative 'transport/stdio'
|
|
2410
|
+
require_relative 'transport/http'
|
|
2411
|
+
require_relative 'tool_adapter'
|
|
2412
|
+
|
|
2413
|
+
module RCrewAI
|
|
2414
|
+
module MCP
|
|
2415
|
+
class Error < RCrewAI::Error; end
|
|
2416
|
+
|
|
2417
|
+
class Client
|
|
2418
|
+
attr_reader :server_name, :tools
|
|
2419
|
+
|
|
2420
|
+
def self.connect(**opts)
|
|
2421
|
+
new(**opts).tap(&:open)
|
|
2422
|
+
end
|
|
2423
|
+
|
|
2424
|
+
def self.with_connection(**opts)
|
|
2425
|
+
c = connect(**opts)
|
|
2426
|
+
yield c
|
|
2427
|
+
ensure
|
|
2428
|
+
c&.close
|
|
2429
|
+
end
|
|
2430
|
+
|
|
2431
|
+
def initialize(command: nil, args: [], env: {}, url: nil, headers: {})
|
|
2432
|
+
@transport = if url
|
|
2433
|
+
Transport::Http.new(url: url, headers: headers)
|
|
2434
|
+
else
|
|
2435
|
+
Transport::Stdio.new(command: command, args: args, env: env)
|
|
2436
|
+
end
|
|
2437
|
+
@request_id = 0
|
|
2438
|
+
@tools = []
|
|
2439
|
+
@server_name = nil
|
|
2440
|
+
end
|
|
2441
|
+
|
|
2442
|
+
def open
|
|
2443
|
+
@transport.open
|
|
2444
|
+
handshake
|
|
2445
|
+
load_tools
|
|
2446
|
+
end
|
|
2447
|
+
|
|
2448
|
+
def close
|
|
2449
|
+
@transport.close
|
|
2450
|
+
end
|
|
2451
|
+
|
|
2452
|
+
def call_tool(name, args)
|
|
2453
|
+
result = request("tools/call", { name: strip_prefix(name), arguments: args })
|
|
2454
|
+
result.dig("content", 0, "text") || result["content"]
|
|
2455
|
+
end
|
|
2456
|
+
|
|
2457
|
+
private
|
|
2458
|
+
|
|
2459
|
+
def handshake
|
|
2460
|
+
info = request("initialize", {
|
|
2461
|
+
protocolVersion: "2024-11-05",
|
|
2462
|
+
capabilities: { tools: {} },
|
|
2463
|
+
clientInfo: { name: "rcrewai", version: RCrewAI::VERSION }
|
|
2464
|
+
})
|
|
2465
|
+
@server_name = info.dig("serverInfo", "name") || "mcp"
|
|
2466
|
+
notify("notifications/initialized", {})
|
|
2467
|
+
end
|
|
2468
|
+
|
|
2469
|
+
def load_tools
|
|
2470
|
+
result = request("tools/list", {})
|
|
2471
|
+
@tools = Array(result["tools"]).map { |t| ToolAdapter.new(self, t, @server_name) }
|
|
2472
|
+
end
|
|
2473
|
+
|
|
2474
|
+
def request(method, params)
|
|
2475
|
+
@request_id += 1
|
|
2476
|
+
msg = { jsonrpc: "2.0", id: @request_id, method: method, params: params }
|
|
2477
|
+
@transport.send_line(msg.to_json)
|
|
2478
|
+
reply = JSON.parse(@transport.recv_line)
|
|
2479
|
+
raise Error, reply["error"]["message"] if reply["error"]
|
|
2480
|
+
reply["result"]
|
|
2481
|
+
end
|
|
2482
|
+
|
|
2483
|
+
def notify(method, params)
|
|
2484
|
+
msg = { jsonrpc: "2.0", method: method, params: params }
|
|
2485
|
+
@transport.send_line(msg.to_json)
|
|
2486
|
+
end
|
|
2487
|
+
|
|
2488
|
+
def strip_prefix(prefixed_name)
|
|
2489
|
+
prefixed_name.sub(/^#{Regexp.escape(@server_name)}__/, "")
|
|
2490
|
+
end
|
|
2491
|
+
end
|
|
2492
|
+
end
|
|
2493
|
+
end
|
|
2494
|
+
```
|
|
2495
|
+
|
|
2496
|
+
- [ ] **Step 11.6: Implement `lib/rcrewai/mcp/tool_adapter.rb`**
|
|
2497
|
+
|
|
2498
|
+
```ruby
|
|
2499
|
+
# frozen_string_literal: true
|
|
2500
|
+
|
|
2501
|
+
require_relative '../tools/base'
|
|
2502
|
+
|
|
2503
|
+
module RCrewAI
|
|
2504
|
+
module MCP
|
|
2505
|
+
class ToolAdapter < RCrewAI::Tools::Base
|
|
2506
|
+
def initialize(client, mcp_tool_descriptor, server_name)
|
|
2507
|
+
@client = client
|
|
2508
|
+
@descriptor = mcp_tool_descriptor
|
|
2509
|
+
@server_name = server_name
|
|
2510
|
+
@name = "#{server_name}__#{mcp_tool_descriptor["name"]}"
|
|
2511
|
+
@description = mcp_tool_descriptor["description"].to_s
|
|
2512
|
+
end
|
|
2513
|
+
|
|
2514
|
+
def name; @name; end
|
|
2515
|
+
def description; @description; end
|
|
2516
|
+
|
|
2517
|
+
def json_schema
|
|
2518
|
+
{
|
|
2519
|
+
name: @name,
|
|
2520
|
+
description: @description,
|
|
2521
|
+
parameters: stringify_keys(@descriptor["inputSchema"] || { "type" => "object", "additionalProperties" => true })
|
|
2522
|
+
}
|
|
2523
|
+
end
|
|
2524
|
+
|
|
2525
|
+
def execute(**args)
|
|
2526
|
+
@client.call_tool(@name, args)
|
|
2527
|
+
end
|
|
2528
|
+
|
|
2529
|
+
def execute_with_validation(args_hash)
|
|
2530
|
+
execute(**args_hash.transform_keys(&:to_sym))
|
|
2531
|
+
end
|
|
2532
|
+
|
|
2533
|
+
private
|
|
2534
|
+
|
|
2535
|
+
def stringify_keys(h)
|
|
2536
|
+
return h unless h.is_a?(Hash)
|
|
2537
|
+
h.each_with_object({}) { |(k, v), out| out[k.to_s] = stringify_keys(v) }
|
|
2538
|
+
end
|
|
2539
|
+
end
|
|
2540
|
+
end
|
|
2541
|
+
end
|
|
2542
|
+
```
|
|
2543
|
+
|
|
2544
|
+
- [ ] **Step 11.7: Implement `lib/rcrewai/mcp.rb` (module entry)**
|
|
2545
|
+
|
|
2546
|
+
```ruby
|
|
2547
|
+
# frozen_string_literal: true
|
|
2548
|
+
|
|
2549
|
+
require_relative 'mcp/client'
|
|
2550
|
+
|
|
2551
|
+
module RCrewAI
|
|
2552
|
+
module MCP
|
|
2553
|
+
end
|
|
2554
|
+
end
|
|
2555
|
+
```
|
|
2556
|
+
|
|
2557
|
+
- [ ] **Step 11.8: Add require in `lib/rcrewai.rb`**
|
|
2558
|
+
|
|
2559
|
+
Add at end of `lib/rcrewai.rb`:
|
|
2560
|
+
|
|
2561
|
+
```ruby
|
|
2562
|
+
require_relative 'rcrewai/mcp'
|
|
2563
|
+
```
|
|
2564
|
+
|
|
2565
|
+
- [ ] **Step 11.9: Run client spec; expect pass**
|
|
2566
|
+
|
|
2567
|
+
Run: `bundle exec rspec spec/mcp/client_spec.rb`
|
|
2568
|
+
Expected: all green.
|
|
2569
|
+
|
|
2570
|
+
- [ ] **Step 11.10: Write transport unit specs**
|
|
2571
|
+
|
|
2572
|
+
Create `spec/mcp/transport/stdio_spec.rb` — assert that `Stdio` spawns the subprocess, can round-trip a line, and kills the pid on `close`. Use the echo_server fixture or a one-liner ruby `-e 'puts gets'`.
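
The round-trip behavior the spec asserts can be sanity-checked with stdlib `Open3` and the one-liner child process mentioned above (a sketch of what the spec checks; the real spec exercises `Stdio` itself):

```ruby
# Round-trip a line through a child process that echoes stdin back,
# mirroring what Stdio#send_line / #recv_line must guarantee.
require 'open3'

stdin, stdout, wait_thr = Open3.popen2("ruby", "-e", "puts gets")
stdin.puts '{"jsonrpc":"2.0","id":1,"method":"ping"}'
stdin.flush
reply = stdout.gets  # the child echoes the line straight back
stdin.close
stdout.close
wait_thr.value       # reap the child; the spec would also assert kill-on-close
```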

Create `spec/mcp/transport/http_spec.rb` — use webmock to stub `GET` (SSE) + `POST` and assert round-trip behavior.

- [ ] **Step 11.11: Write end-to-end integration spec**

Create `spec/integration/mcp_end_to_end_spec.rb`:

```ruby
# frozen_string_literal: true
require 'spec_helper'

RSpec.describe "MCP end-to-end" do
  let(:server_path) { File.expand_path("../fixtures/mcp_servers/echo_server.rb", __dir__) }

  it 'lets an agent call an MCP tool via the ToolRunner' do
    RCrewAI::MCP::Client.with_connection(command: "ruby", args: [server_path]) do |client|
      # Stub the LLM to invoke the MCP echo tool then stop
      llm = double("LLM")
      allow(llm).to receive(:supports_native_tools?).and_return(true)
      allow(llm).to receive(:config).and_return(RCrewAI.configuration)
      sequence = [
        { content: nil, tool_calls: [{ id: "1", name: "echo-server__echo", arguments: { "message" => "hi" } }],
          usage: {}, finish_reason: :tool_calls, model: "m", provider: :openai },
        { content: "Done.", tool_calls: [], usage: {},
          finish_reason: :stop, model: "m", provider: :openai }
      ]
      allow(llm).to receive(:chat) { sequence.shift }

      agent = RCrewAI::Agent.new(name: "a", role: "r", goal: "g", tools: client.tools)
      agent.instance_variable_set(:@llm_client, llm)

      task = RCrewAI::Task.new(name: "t", description: "echo hi", agent: agent, expected_output: "x")
      result = agent.execute_task(task)

      expect(result[:content]).to eq("Done.")
      expect(result[:tool_calls_history].first[:result]).to include("echo: hi")
    end
  end
end
```

- [ ] **Step 11.12: Run full suite**

Run: `bundle exec rspec`
Expected: all green.

- [ ] **Step 11.13: Commit**

```bash
git add lib/rcrewai/mcp* lib/rcrewai.rb spec/mcp spec/integration/mcp_end_to_end_spec.rb \
        spec/fixtures/mcp_servers/
git commit -m "feat(mcp): client + stdio/HTTP transports + ToolAdapter"
```

---

## Task 12: Docs, CHANGELOG, examples, version bump

**Files:**
- Modify: `lib/rcrewai/version.rb`
- Modify: `CHANGELOG.md`
- Create: `docs/upgrading-to-0.3.md`
- Create: `docs/mcp.md`
- Create: `examples/native_tools_example.rb`
- Create: `examples/streaming_example.rb`
- Create: `examples/mcp_example.rb`

- [ ] **Step 12.1: Bump version**

```ruby
# lib/rcrewai/version.rb
module RCrewAI
  VERSION = "0.3.0"
end
```

- [ ] **Step 12.2: Write CHANGELOG entry**

Prepend to `CHANGELOG.md`:

```markdown
## 0.3.0 — 2026-XX-XX

### Added
- Native function calling across all five providers (OpenAI, Anthropic, Google, Azure, Ollama). Tools declare a JSON schema via the new DSL (`tool_name`, `description`, `param`) on `Tools::Base`.
- Typed streaming event model (`RCrewAI::Events::*`) covering text deltas, tool-call lifecycle, usage, and errors. Pass `stream:` to `crew.execute` or `agent.execute_task`.
- MCP (Model Context Protocol) client. Connect to stdio or HTTP MCP servers and expose their tools as ordinary RCrewAI tools.
- Per-model price table and `cost_usd` on `Events::Usage` for cost tracking.
- `Tools::Base#execute_with_validation` coerces and validates args against the DSL schema.

### Changed
- `Agent#execute_task` now delegates to `ToolRunner` (native function calling) or `LegacyReactRunner` (existing `USE_TOOL[]` parsing, used as fallback for legacy Ollama models or tools without a DSL declaration).
- `Agent#execute_task` return value now includes `tool_calls_history:`.
- `LLMClients::Base#chat` gains `tools:`, `tool_choice:`, and `stream:` keyword arguments.

### Breaking
- Subclasses of `LLMClients::Base` that override `chat` with an explicit kwarg list must add `tools: nil, tool_choice: nil, stream: nil` to the signature (or accept `**options`).
- Tools without DSL declarations now receive a permissive fallback schema and emit a one-time deprecation warning to stderr.

### Migration
- See `docs/upgrading-to-0.3.md` for step-by-step migration.
```

- [ ] **Step 12.3: Write `docs/upgrading-to-0.3.md`**

Content: full migration guide. Sections: "What you must do", "What you should do (recommended)", "What you can do (new capabilities)". Each section has before/after Ruby snippets:

- Custom tool migration (5-line DSL diff).
- Streaming adoption snippet.
- MCP adoption snippet.
- LLMClient subclass kwarg fix snippet.
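
The tool-migration "after" shape can be sketched with a hypothetical stand-in DSL. Only the method names `tool_name`, `description`, and `param` come from the CHANGELOG above; the keyword options and schema layout here are assumptions for illustration, not the gem's actual `Tools::Base` implementation:

```ruby
# Hypothetical stand-in for the Tools::Base class-level DSL, showing the
# ~5-line declaration a migrated custom tool would carry.
module ToolDSL
  def tool_name(n)
    @tool_name = n
  end

  def description(d)
    @tool_description = d
  end

  def param(name, type:, required: false, desc: nil)
    @params ||= {}
    @required ||= []
    @params[name.to_s] = { "type" => type.to_s, "description" => desc }.compact
    @required << name.to_s if required
  end

  def json_schema
    {
      name: @tool_name,
      description: @tool_description,
      parameters: { "type" => "object", "properties" => @params || {}, "required" => @required || [] }
    }
  end
end

# After migration, a custom tool declares its schema declaratively:
class WeatherTool
  extend ToolDSL
  tool_name "weather"
  description "Look up current weather"
  param :city, type: :string, required: true, desc: "City name"
end
```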

- [ ] **Step 12.4: Write `docs/mcp.md`**

Content: what is MCP, why use it, how to connect (stdio + HTTP examples), how the MCP tool name prefix works, lifecycle (connect/close/with_connection), what's not supported in 0.3 (resources, prompts, server mode, OAuth).
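
The prefix convention the doc explains is mechanical enough to show inline (it mirrors `ToolAdapter` and `Client#strip_prefix` from Task 11):

```ruby
# Tool names are namespaced as "<server_name>__<tool>", so two servers can
# both expose an "echo" tool without colliding; the prefix is stripped
# again before the tools/call request goes back to the server.
server_name = "echo-server"
prefixed = "#{server_name}__echo"
bare = prefixed.sub(/\A#{Regexp.escape(server_name)}__/, "")
```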

- [ ] **Step 12.5: Write `examples/native_tools_example.rb`**

Full runnable example showing an agent with a DSL-declared tool, OpenAI provider, no streaming. Comment headers explain each section.

- [ ] **Step 12.6: Write `examples/streaming_example.rb`**

Full runnable example using `crew.execute(stream: ...)` with multiple sinks (one prints text, one tracks cost via `Events::Usage`).
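
The multi-sink pattern can be sketched without the gem; the event hashes below are stand-ins for the typed `Events::*` objects, and the field names are assumptions for illustration:

```ruby
# Fan one stream of events out to independent sinks: one renders text
# deltas, one accumulates cost from usage events.
sinks = []

printed = +""
sinks << ->(evt) { printed << evt[:text] if evt[:type] == :text_delta }

total_cost = 0.0
sinks << ->(evt) { total_cost += evt[:cost_usd] if evt[:type] == :usage }

# The example would hand `broadcast` (or the sinks) to crew.execute(stream: ...).
broadcast = ->(evt) { sinks.each { |sink| sink.call(evt) } }

broadcast.call({ type: :text_delta, text: "Hello" })
broadcast.call({ type: :usage, cost_usd: 0.0012 })
```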

- [ ] **Step 12.7: Write `examples/mcp_example.rb`**

Full runnable example using `RCrewAI::MCP::Client.with_connection` against a public MCP server (e.g., filesystem). Include the npm install/npx command in a comment.

- [ ] **Step 12.8: Run full suite + examples smoke check**

```bash
bundle exec rspec
ruby -Ilib -e "require 'rcrewai'; puts RCrewAI::VERSION"  # expects 0.3.0
```

- [ ] **Step 12.9: Commit**

```bash
git add lib/rcrewai/version.rb CHANGELOG.md \
        docs/upgrading-to-0.3.md docs/mcp.md \
        examples/native_tools_example.rb examples/streaming_example.rb examples/mcp_example.rb
git commit -m "chore(release): docs, examples, and version bump for 0.3.0"
```

- [ ] **Step 12.10: Phase 3 / release tag**

```bash
git tag v0.3.0
```

---

# Appendix — Cross-cutting concerns

## Test fixtures organization

```
spec/fixtures/
├── llm_responses/
│   ├── openai/{tool_call.json, stream_text.sse, stream_tool_call.sse}
│   ├── anthropic/{tool_call.json, stream_tool_call.sse}
│   ├── google/{tool_call.json, stream.sse}
│   └── ollama/{tool_call.json}
└── mcp_servers/echo_server.rb
```

## CI requirements (out of scope for this plan but called out)

- `.github/workflows/ci.yml` should run `bundle exec rspec` on a Ruby 3.0–3.3 matrix.
- MCP integration test requires Ruby — already covered by the matrix.
- VCR cassettes for live-provider integration are recorded with `RECORD=true` and committed.

## What this plan does NOT change

- Async executor (`async_executor.rb`)
- Process types / hierarchical orchestration (`process.rb`)
- Memory (`memory.rb`)
- Human-in-the-loop core (`human_input.rb`) — only consumed
- CLI (`cli.rb`)

These are working today and out of scope. Touch only if a regression surfaces.