riffer 0.7.0 → 0.9.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (60)
  1. checksums.yaml +4 -4
  2. data/.agents/architecture.md +113 -0
  3. data/.agents/code-style.md +42 -0
  4. data/.agents/providers.md +46 -0
  5. data/.agents/rdoc.md +51 -0
  6. data/.agents/testing.md +56 -0
  7. data/.release-please-manifest.json +1 -1
  8. data/AGENTS.md +21 -308
  9. data/CHANGELOG.md +17 -0
  10. data/README.md +21 -112
  11. data/Rakefile +1 -1
  12. data/docs/01_OVERVIEW.md +106 -0
  13. data/docs/02_GETTING_STARTED.md +128 -0
  14. data/docs/03_AGENTS.md +226 -0
  15. data/docs/04_TOOLS.md +342 -0
  16. data/docs/05_MESSAGES.md +173 -0
  17. data/docs/06_STREAM_EVENTS.md +191 -0
  18. data/docs/07_CONFIGURATION.md +195 -0
  19. data/docs_providers/01_PROVIDERS.md +168 -0
  20. data/docs_providers/02_AMAZON_BEDROCK.md +196 -0
  21. data/docs_providers/03_ANTHROPIC.md +211 -0
  22. data/docs_providers/04_OPENAI.md +157 -0
  23. data/docs_providers/05_TEST_PROVIDER.md +163 -0
  24. data/docs_providers/06_CUSTOM_PROVIDERS.md +304 -0
  25. data/lib/riffer/agent.rb +103 -63
  26. data/lib/riffer/config.rb +20 -12
  27. data/lib/riffer/core.rb +7 -7
  28. data/lib/riffer/helpers/class_name_converter.rb +6 -3
  29. data/lib/riffer/helpers/dependencies.rb +18 -0
  30. data/lib/riffer/helpers/validations.rb +9 -0
  31. data/lib/riffer/messages/assistant.rb +23 -1
  32. data/lib/riffer/messages/base.rb +15 -0
  33. data/lib/riffer/messages/converter.rb +15 -5
  34. data/lib/riffer/messages/system.rb +8 -1
  35. data/lib/riffer/messages/tool.rb +45 -2
  36. data/lib/riffer/messages/user.rb +8 -1
  37. data/lib/riffer/messages.rb +7 -0
  38. data/lib/riffer/providers/amazon_bedrock.rb +8 -4
  39. data/lib/riffer/providers/anthropic.rb +209 -0
  40. data/lib/riffer/providers/base.rb +17 -12
  41. data/lib/riffer/providers/open_ai.rb +7 -1
  42. data/lib/riffer/providers/repository.rb +9 -4
  43. data/lib/riffer/providers/test.rb +25 -7
  44. data/lib/riffer/providers.rb +6 -0
  45. data/lib/riffer/stream_events/base.rb +13 -1
  46. data/lib/riffer/stream_events/reasoning_delta.rb +15 -1
  47. data/lib/riffer/stream_events/reasoning_done.rb +15 -1
  48. data/lib/riffer/stream_events/text_delta.rb +14 -1
  49. data/lib/riffer/stream_events/text_done.rb +14 -1
  50. data/lib/riffer/stream_events/tool_call_delta.rb +18 -11
  51. data/lib/riffer/stream_events/tool_call_done.rb +22 -12
  52. data/lib/riffer/stream_events.rb +9 -0
  53. data/lib/riffer/tool.rb +92 -25
  54. data/lib/riffer/tools/param.rb +19 -16
  55. data/lib/riffer/tools/params.rb +28 -22
  56. data/lib/riffer/tools/response.rb +90 -0
  57. data/lib/riffer/tools.rb +6 -0
  58. data/lib/riffer/version.rb +1 -1
  59. data/lib/riffer.rb +21 -21
  60. metadata +35 -1
data/docs_providers/03_ANTHROPIC.md (new file)
@@ -0,0 +1,211 @@
# Anthropic Provider

The Anthropic provider connects to Claude models via the Anthropic API.

## Installation

Add the Anthropic gem to your Gemfile:

```ruby
gem 'anthropic'
```

## Configuration

Configure your Anthropic API key:

```ruby
Riffer.configure do |config|
  config.anthropic.api_key = ENV['ANTHROPIC_API_KEY']
end
```

Or per-agent:

```ruby
class MyAgent < Riffer::Agent
  model 'anthropic/claude-haiku-4-5-20251001'
  provider_options api_key: ENV['ANTHROPIC_API_KEY']
end
```

## Supported Models

Use Anthropic model IDs in the `anthropic/model` format:

```ruby
model 'anthropic/claude-haiku-4-5-20251001'
model 'anthropic/claude-sonnet-4-5-20250929'
model 'anthropic/claude-opus-4-5-20251101'
```

## Model Options

### temperature

Controls randomness (0.0-1.0):

```ruby
model_options temperature: 0.7
```

### max_tokens

Maximum number of tokens in the response:

```ruby
model_options max_tokens: 4096
```

### top_p

Nucleus sampling parameter:

```ruby
model_options top_p: 0.95
```

### top_k

Top-k sampling parameter:

```ruby
model_options top_k: 250
```

### thinking

Enables extended thinking (reasoning) for supported models. Pass the thinking configuration hash directly, in the shape the Anthropic API expects:

```ruby
# Enable with a thinking token budget
model_options thinking: {type: "enabled", budget_tokens: 10000}
```

## Example

```ruby
Riffer.configure do |config|
  config.anthropic.api_key = ENV['ANTHROPIC_API_KEY']
end

class AssistantAgent < Riffer::Agent
  model 'anthropic/claude-haiku-4-5-20251001'
  instructions 'You are a helpful assistant.'
  model_options temperature: 0.7, max_tokens: 4096
end

agent = AssistantAgent.new
puts agent.generate("Explain quantum computing")
```

## Streaming

```ruby
agent.stream("Tell me about Claude models").each do |event|
  case event
  when Riffer::StreamEvents::TextDelta
    print event.content
  when Riffer::StreamEvents::TextDone
    puts "\n[Complete]"
  when Riffer::StreamEvents::ReasoningDelta
    print "[Thinking] #{event.content}"
  when Riffer::StreamEvents::ReasoningDone
    puts "\n[Thinking Complete]"
  when Riffer::StreamEvents::ToolCallDone
    puts "[Tool: #{event.name}]"
  end
end
```

## Tool Calling

The Anthropic provider converts tools to Anthropic's tool format:

```ruby
class WeatherTool < Riffer::Tool
  description "Gets the current weather for a location"

  params do
    required :city, String, description: "The city name"
    optional :unit, String, description: "Temperature unit (celsius or fahrenheit)"
  end

  def call(context:, city:, unit: "celsius")
    # Implementation
    "It's 22 degrees #{unit} in #{city}"
  end
end

class WeatherAgent < Riffer::Agent
  model 'anthropic/claude-haiku-4-5-20251001'
  uses_tools [WeatherTool]
end
```

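For reference, `WeatherTool` above maps to roughly the following Anthropic tool definition. This is an illustrative sketch: the snake_case tool name and exact schema details are assumptions, and the provider builds this for you.

```ruby
# Approximate Anthropic tool definition generated from WeatherTool (illustrative only)
{
  name: "weather_tool",
  description: "Gets the current weather for a location",
  input_schema: {
    type: "object",
    properties: {
      city: {type: "string", description: "The city name"},
      unit: {type: "string", description: "Temperature unit (celsius or fahrenheit)"}
    },
    required: ["city"]
  }
}
```
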
## Extended Thinking

Extended thinking lets Claude reason through complex problems before responding. It is available on Claude 3.7 Sonnet and newer models.

```ruby
class ReasoningAgent < Riffer::Agent
  model 'anthropic/claude-haiku-4-5-20251001'
  model_options thinking: {type: "enabled", budget_tokens: 10000}
end
```

When streaming with extended thinking enabled, you'll receive `ReasoningDelta` events containing the model's thought process, followed by a `ReasoningDone` event when thinking completes:

```ruby
agent.stream("Solve this complex math problem").each do |event|
  case event
  when Riffer::StreamEvents::ReasoningDelta
    # Model's internal reasoning
    print "[Thinking] #{event.content}"
  when Riffer::StreamEvents::ReasoningDone
    puts "\n[Thinking complete]"
  when Riffer::StreamEvents::TextDelta
    # Final response
    print event.content
  end
end
```

## Message Format

The provider converts Riffer messages to Anthropic format:

| Riffer Message | Anthropic Format                                                 |
| -------------- | ---------------------------------------------------------------- |
| `System`       | Added to `system` array as `{type: "text", text: ...}`           |
| `User`         | `{role: "user", content: "..."}`                                  |
| `Assistant`    | `{role: "assistant", content: [...]}` with text/tool_use blocks   |
| `Tool`         | `{role: "user", content: [{type: "tool_result", ...}]}`           |

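Put together, a short tool-using conversation converts to a request body shaped roughly like this. It is an illustrative sketch: the example values and exact block fields are assumptions, and the provider performs the conversion internally.

```ruby
# Approximate Anthropic request body built from Riffer messages (illustrative only)
{
  system: [{type: "text", text: "You are a helpful assistant."}],
  messages: [
    {role: "user", content: "What's the weather in Tokyo?"},
    {role: "assistant", content: [
      {type: "tool_use", id: "toolu_123", name: "weather_tool", input: {city: "Tokyo"}}
    ]},
    {role: "user", content: [
      {type: "tool_result", tool_use_id: "toolu_123", content: "It's 22 degrees celsius in Tokyo"}
    ]}
  ]
}
```
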
## Direct Provider Usage

```ruby
provider = Riffer::Providers::Anthropic.new(
  api_key: ENV['ANTHROPIC_API_KEY']
)

response = provider.generate_text(
  prompt: "Hello!",
  model: "claude-haiku-4-5-20251001",
  temperature: 0.7
)

puts response.content
```

### With extended thinking

```ruby
response = provider.generate_text(
  prompt: "Explain step by step how to solve a Rubik's cube",
  model: "claude-haiku-4-5-20251001",
  thinking: { type: "enabled", budget_tokens: 10000 }
)

puts response.content
```
data/docs_providers/04_OPENAI.md (new file)
@@ -0,0 +1,157 @@
# OpenAI Provider

The OpenAI provider connects to OpenAI's API for GPT models.

## Installation

Add the OpenAI gem to your Gemfile:

```ruby
gem 'openai'
```

## Configuration

Set your API key globally:

```ruby
Riffer.configure do |config|
  config.openai.api_key = ENV['OPENAI_API_KEY']
end
```

Or per-agent:

```ruby
class MyAgent < Riffer::Agent
  model 'openai/gpt-4o'
  provider_options api_key: ENV['CUSTOM_API_KEY']
end
```

## Supported Models

Use any OpenAI model in the `openai/model` format:

```ruby
model 'openai/gpt-4o'
model 'openai/gpt-4o-mini'
model 'openai/gpt-4-turbo'
model 'openai/gpt-3.5-turbo'
```

## Model Options

### temperature

Controls randomness (0.0-2.0):

```ruby
model_options temperature: 0.7
```

### max_tokens

Maximum number of tokens in the response:

```ruby
model_options max_tokens: 4096
```

### reasoning

Enables extended thinking (for supported models):

```ruby
model_options reasoning: 'medium' # 'low', 'medium', or 'high'
```

When reasoning is enabled, you'll receive `ReasoningDelta` and `ReasoningDone` events during streaming.

## Example

```ruby
Riffer.configure do |config|
  config.openai.api_key = ENV['OPENAI_API_KEY']
end

class CodeReviewAgent < Riffer::Agent
  model 'openai/gpt-4o'
  instructions 'You are a code reviewer. Provide constructive feedback.'
  model_options temperature: 0.3, reasoning: 'medium'
end

agent = CodeReviewAgent.new
puts agent.generate("Review this code: def add(a,b); a+b; end")
```

## Streaming

```ruby
agent.stream("Explain Ruby blocks").each do |event|
  case event
  when Riffer::StreamEvents::TextDelta
    print event.content
  when Riffer::StreamEvents::ReasoningDelta
    # Extended thinking content
    print "[thinking] #{event.content}"
  when Riffer::StreamEvents::ReasoningDone
    puts "\n[reasoning complete]"
  end
end
```

## Tool Calling

The OpenAI provider converts tools to OpenAI's function-calling format with strict mode:

```ruby
class CalculatorTool < Riffer::Tool
  description "Performs basic math operations"

  params do
    required :operation, String, enum: ["add", "subtract", "multiply", "divide"]
    required :a, Float, description: "First number"
    required :b, Float, description: "Second number"
  end

  def call(context:, operation:, a:, b:)
    case operation
    when "add" then a + b
    when "subtract" then a - b
    when "multiply" then a * b
    when "divide" then a / b
    end.to_s
  end
end

class MathAgent < Riffer::Agent
  model 'openai/gpt-4o'
  uses_tools [CalculatorTool]
end
```

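For reference, `CalculatorTool` corresponds roughly to a function definition like the following. This is an illustrative sketch only: the snake_case name, exact nesting, and strict-mode handling are assumptions and may differ with the provider implementation and openai gem version.

```ruby
# Approximate OpenAI function definition generated from CalculatorTool (illustrative only)
{
  type: "function",
  name: "calculator_tool",
  description: "Performs basic math operations",
  strict: true,
  parameters: {
    type: "object",
    properties: {
      operation: {type: "string", enum: ["add", "subtract", "multiply", "divide"]},
      a: {type: "number", description: "First number"},
      b: {type: "number", description: "Second number"}
    },
    required: ["operation", "a", "b"],
    additionalProperties: false
  }
}
```
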
## Message Format

The provider converts Riffer messages to OpenAI format:

| Riffer Message | OpenAI Role            |
| -------------- | ---------------------- |
| `System`       | `developer`            |
| `User`         | `user`                 |
| `Assistant`    | `assistant`            |
| `Tool`         | `function_call_output` |

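As a rough sketch, a conversation with a tool result might be sent as input items like these. The exact item shapes, including the `function_call_output` fields, are assumptions here; the provider handles the conversion internally.

```ruby
# Approximate OpenAI input items built from Riffer messages (illustrative only)
[
  {role: "developer", content: "You are a code reviewer. Provide constructive feedback."},
  {role: "user", content: "What is 2 + 2?"},
  {type: "function_call_output", call_id: "call_123", output: "4.0"}
]
```
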
## Direct Provider Usage

```ruby
provider = Riffer::Providers::OpenAI.new(api_key: ENV['OPENAI_API_KEY'])

response = provider.generate_text(
  prompt: "Hello!",
  model: "gpt-4o",
  temperature: 0.7
)

puts response.content
```
data/docs_providers/05_TEST_PROVIDER.md (new file)
@@ -0,0 +1,163 @@
# Test Provider

The Test provider is a mock provider for testing agents without making real API calls.

## Usage

No additional gems are required. Use the `test` provider identifier:

```ruby
class TestableAgent < Riffer::Agent
  model 'test/any' # The model name doesn't matter for the test provider
  instructions 'You are helpful.'
  uses_tools [MyTool]
end
```

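The examples on this page assume a simple tool class. Here is a minimal sketch; the `MyTool` name and its behavior are placeholders, following the tool API shown in the provider guides above.

```ruby
# Hypothetical tool used by TestableAgent in the examples below
class MyTool < Riffer::Tool
  description "Echoes a query back"

  params do
    required :query, String, description: "Text to echo"
  end

  def call(context:, query:)
    "You said: #{query}"
  end
end
```
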
## Stubbing Responses

Use `stub_response` to queue responses:

```ruby
# Get the provider instance from the agent
agent = TestableAgent.new
provider = agent.send(:provider_instance)

# Stub a simple text response
provider.stub_response("Hello, I'm here to help!")

# Now generate will return the stubbed response
response = agent.generate("Hi")
# => "Hello, I'm here to help!"
```

## Stubbing Tool Calls

Stub responses that trigger tool execution:

```ruby
provider.stub_response("", tool_calls: [
  {name: "my_tool", arguments: '{"query":"test"}'}
])

# Queue the response after tool execution
provider.stub_response("Based on the tool result, here's my answer.")

response = agent.generate("Use the tool")
```

## Queueing Multiple Responses

Responses are consumed in order:

```ruby
provider.stub_response("First response")
provider.stub_response("Second response")
provider.stub_response("Third response")

agent.generate("Message 1") # => "First response"
agent.generate("Message 2") # => "Second response"
agent.generate("Message 3") # => "Third response"
agent.generate("Message 4") # => "Test response" (default)
```

## Inspecting Calls

Access recorded calls for assertions:

```ruby
provider.calls
# => [
#   {messages: [...], model: "any", tools: [...], ...},
#   {messages: [...], model: "any", tools: [...], ...}
# ]

# Check what was sent
expect(provider.calls.last[:messages].last[:content]).to eq("Hi")
```

## Clearing State

Reset stubbed responses:

```ruby
provider.clear_stubs
```

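For example, you can clear stubs between tests in a Minitest teardown (a minimal sketch, assuming the setup shown in the example test below):

```ruby
def teardown
  # Make sure queued responses from one test don't leak into the next
  @provider.clear_stubs
end
```
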
## Example Test

```ruby
require 'minitest/autorun'

class MyAgentTest < Minitest::Test
  def setup
    @agent = TestableAgent.new
    @provider = @agent.send(:provider_instance)
  end

  def test_generates_response
    @provider.stub_response("Hello!")

    response = @agent.generate("Hi")

    assert_equal "Hello!", response
  end

  def test_executes_tool
    @provider.stub_response("", tool_calls: [
      {name: "weather_tool", arguments: '{"city":"Tokyo"}'}
    ])
    @provider.stub_response("The weather is sunny.")

    response = @agent.generate("What's the weather?")

    assert_equal "The weather is sunny.", response
    assert_equal 2, @provider.calls.length
  end

  def test_passes_context_to_tools
    @provider.stub_response("", tool_calls: [
      {name: "user_tool", arguments: '{}'}
    ])
    @provider.stub_response("Done.")

    @agent.generate("Do something", tool_context: {user_id: 123})

    # Tool receives the context
  end
end
```

## Streaming

The test provider also supports streaming:

```ruby
provider.stub_response("Hello world.")

events = []
agent.stream("Hi").each { |e| events << e }

# Events include TextDelta and TextDone
text_deltas = events.select { |e| e.is_a?(Riffer::StreamEvents::TextDelta) }
text_done = events.find { |e| e.is_a?(Riffer::StreamEvents::TextDone) }
```

## Initial Responses

Pass responses during initialization:

```ruby
provider = Riffer::Providers::Test.new(responses: [
  {content: "First"},
  {content: "Second"}
])
```

## Default Response

When no stubs are queued and initial responses are exhausted, the provider returns:

```ruby
{role: "assistant", content: "Test response"}
```
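
So with nothing stubbed, `generate` simply yields the default text:

```ruby
agent = TestableAgent.new
agent.generate("Anything at all") # => "Test response"
```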