riffer 0.6.1 → 0.8.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (60)
  1. checksums.yaml +4 -4
  2. data/.agents/architecture.md +113 -0
  3. data/.agents/code-style.md +42 -0
  4. data/.agents/providers.md +46 -0
  5. data/.agents/rdoc.md +51 -0
  6. data/.agents/testing.md +56 -0
  7. data/.release-please-manifest.json +1 -1
  8. data/AGENTS.md +28 -0
  9. data/CHANGELOG.md +17 -0
  10. data/README.md +26 -36
  11. data/Rakefile +1 -1
  12. data/docs/01_OVERVIEW.md +106 -0
  13. data/docs/02_GETTING_STARTED.md +128 -0
  14. data/docs/03_AGENTS.md +226 -0
  15. data/docs/04_TOOLS.md +251 -0
  16. data/docs/05_MESSAGES.md +173 -0
  17. data/docs/06_STREAM_EVENTS.md +191 -0
  18. data/docs/07_CONFIGURATION.md +195 -0
  19. data/docs_providers/01_PROVIDERS.md +168 -0
  20. data/docs_providers/02_AMAZON_BEDROCK.md +196 -0
  21. data/docs_providers/03_ANTHROPIC.md +211 -0
  22. data/docs_providers/04_OPENAI.md +157 -0
  23. data/docs_providers/05_TEST_PROVIDER.md +163 -0
  24. data/docs_providers/06_CUSTOM_PROVIDERS.md +304 -0
  25. data/lib/riffer/agent.rb +220 -57
  26. data/lib/riffer/config.rb +20 -12
  27. data/lib/riffer/core.rb +7 -7
  28. data/lib/riffer/helpers/class_name_converter.rb +6 -3
  29. data/lib/riffer/helpers/dependencies.rb +18 -0
  30. data/lib/riffer/helpers/validations.rb +9 -0
  31. data/lib/riffer/messages/assistant.rb +23 -1
  32. data/lib/riffer/messages/base.rb +15 -0
  33. data/lib/riffer/messages/converter.rb +15 -5
  34. data/lib/riffer/messages/system.rb +8 -1
  35. data/lib/riffer/messages/tool.rb +58 -4
  36. data/lib/riffer/messages/user.rb +8 -1
  37. data/lib/riffer/messages.rb +7 -0
  38. data/lib/riffer/providers/amazon_bedrock.rb +128 -13
  39. data/lib/riffer/providers/anthropic.rb +209 -0
  40. data/lib/riffer/providers/base.rb +23 -18
  41. data/lib/riffer/providers/open_ai.rb +119 -39
  42. data/lib/riffer/providers/repository.rb +9 -4
  43. data/lib/riffer/providers/test.rb +78 -24
  44. data/lib/riffer/providers.rb +6 -0
  45. data/lib/riffer/stream_events/base.rb +13 -1
  46. data/lib/riffer/stream_events/reasoning_delta.rb +15 -1
  47. data/lib/riffer/stream_events/reasoning_done.rb +15 -1
  48. data/lib/riffer/stream_events/text_delta.rb +14 -1
  49. data/lib/riffer/stream_events/text_done.rb +14 -1
  50. data/lib/riffer/stream_events/tool_call_delta.rb +35 -0
  51. data/lib/riffer/stream_events/tool_call_done.rb +40 -0
  52. data/lib/riffer/stream_events.rb +9 -0
  53. data/lib/riffer/tool.rb +120 -0
  54. data/lib/riffer/tools/param.rb +68 -0
  55. data/lib/riffer/tools/params.rb +118 -0
  56. data/lib/riffer/tools.rb +9 -0
  57. data/lib/riffer/version.rb +1 -1
  58. data/lib/riffer.rb +23 -19
  59. metadata +41 -2
  60. data/CLAUDE.md +0 -73
data/docs/05_MESSAGES.md
@@ -0,0 +1,173 @@

# Messages

Messages represent the conversation between users and the assistant. Riffer uses strongly-typed message objects to ensure consistency and type safety.

## Message Types

### System

System messages provide instructions to the LLM:

```ruby
msg = Riffer::Messages::System.new("You are a helpful assistant.")
msg.role # => :system
msg.content # => "You are a helpful assistant."
msg.to_h # => {role: :system, content: "You are a helpful assistant."}
```

System messages are typically set via agent `instructions` and automatically prepended to conversations.

### User

User messages represent input from the user:

```ruby
msg = Riffer::Messages::User.new("Hello, how are you?")
msg.role # => :user
msg.content # => "Hello, how are you?"
msg.to_h # => {role: :user, content: "Hello, how are you?"}
```

### Assistant

Assistant messages represent LLM responses, potentially including tool calls:

```ruby
# Text-only response
msg = Riffer::Messages::Assistant.new("I'm doing well, thank you!")
msg.role # => :assistant
msg.content # => "I'm doing well, thank you!"
msg.tool_calls # => []

# Response with tool calls
msg = Riffer::Messages::Assistant.new("", tool_calls: [
  {id: "call_123", call_id: "call_123", name: "weather_tool", arguments: '{"city":"Tokyo"}'}
])
msg.tool_calls # => [{id: "call_123", ...}]
msg.to_h # => {role: :assistant, content: "", tool_calls: [...]}
```

### Tool

Tool messages contain the results of tool executions:

```ruby
msg = Riffer::Messages::Tool.new(
  "The weather in Tokyo is 22°C and sunny.",
  tool_call_id: "call_123",
  name: "weather_tool"
)
msg.role # => :tool
msg.content # => "The weather in Tokyo is 22°C and sunny."
msg.tool_call_id # => "call_123"
msg.name # => "weather_tool"
msg.error? # => false

# Error result
msg = Riffer::Messages::Tool.new(
  "API rate limit exceeded",
  tool_call_id: "call_123",
  name: "weather_tool",
  error: "API rate limit exceeded",
  error_type: :execution_error
)
msg.error? # => true
msg.error # => "API rate limit exceeded"
msg.error_type # => :execution_error
```

## Using Messages with Agents

### String Prompts

The simplest way to interact with an agent:

```ruby
agent = MyAgent.new
response = agent.generate("Hello!")
```

This creates a `User` message internally.

### Message Arrays

For multi-turn conversations, pass an array of messages:

```ruby
messages = [
  {role: :user, content: "What's the weather?"},
  {role: :assistant, content: "I'll check that for you."},
  {role: :user, content: "Thanks, I meant in Tokyo specifically."}
]

response = agent.generate(messages)
```

Messages can be hashes or `Riffer::Messages::Base` objects:

```ruby
messages = [
  Riffer::Messages::User.new("Hello"),
  Riffer::Messages::Assistant.new("Hi there!"),
  Riffer::Messages::User.new("How are you?")
]

response = agent.generate(messages)
```

### Accessing Message History

After calling `generate` or `stream`, access the full conversation:

```ruby
agent = MyAgent.new
agent.generate("Hello!")

agent.messages.each do |msg|
  puts "[#{msg.role}] #{msg.content}"
end
# [system] You are a helpful assistant.
# [user] Hello!
# [assistant] Hi there! How can I help you today?
```

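Since every message implements `to_h`, the history can be dumped for persistence or logging. A minimal sketch (`serialize_history` is illustrative, not part of Riffer):

```ruby
require "json"

# Convert each message to a plain hash, then to a JSON string.
def serialize_history(messages)
  JSON.generate(messages.map(&:to_h))
end

# serialize_history(agent.messages) could then be written to a file or database.
```
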
## Tool Call Structure

Tool calls in assistant messages have this structure:

```ruby
{
  id: "item_123",               # Item identifier
  call_id: "call_456",          # Call identifier for response matching
  name: "weather_tool",         # Tool name
  arguments: '{"city":"Tokyo"}' # JSON string of arguments
}
```

When creating tool result messages, use the `call_id` as `tool_call_id`.

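For example, a small helper can pair a tool call with its result. This is a sketch; `tool_result_args` is not part of Riffer, and it assumes the hash shape shown above:

```ruby
# Build the arguments for a tool result message from one assistant tool call.
# The call_id is what links the result back to the originating call.
def tool_result_args(tool_call, output)
  {
    content: output,
    tool_call_id: tool_call[:call_id],
    name: tool_call[:name]
  }
end

call = {id: "item_123", call_id: "call_456", name: "weather_tool", arguments: '{"city":"Tokyo"}'}
args = tool_result_args(call, "22°C and sunny")
# Riffer::Messages::Tool.new(args[:content], tool_call_id: args[:tool_call_id], name: args[:name])
```
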
## Message Emission

Agents can emit messages as they're added during generation via the `on_message` callback. This is useful for persistence or real-time logging. Only agent-generated messages (Assistant, Tool) are emitted—not inputs (System, User).

See [Agents - on_message](03_AGENTS.md#on_message) for details.

## Base Class

All messages inherit from `Riffer::Messages::Base`:

```ruby
class Riffer::Messages::Base
  attr_reader :content

  def role
    raise NotImplementedError
  end

  def to_h
    {role: role, content: content}
  end
end
```

Subclasses implement `role` and optionally extend `to_h` with additional fields.

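A subclass might look like the following sketch. `Developer` is a hypothetical message type, not shipped with Riffer; `Base` is repeated from above so the snippet is self-contained:

```ruby
module Riffer
  module Messages
    class Base
      attr_reader :content

      def role
        raise NotImplementedError
      end

      def to_h
        {role: role, content: content}
      end
    end

    # Hypothetical custom message type (not part of Riffer)
    class Developer < Base
      def initialize(content)
        @content = content
      end

      def role
        :developer
      end

      def to_h
        super.merge(priority: :high) # extend to_h with additional fields
      end
    end
  end
end

msg = Riffer::Messages::Developer.new("Prefer concise answers.")
msg.to_h # => {role: :developer, content: "Prefer concise answers.", priority: :high}
```
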
data/docs/06_STREAM_EVENTS.md
@@ -0,0 +1,191 @@

# Stream Events

When streaming responses, Riffer emits typed events that represent incremental updates from the LLM.

## Using Streaming

Use `stream` instead of `generate` to receive events as they arrive:

```ruby
agent = MyAgent.new

agent.stream("Tell me a story").each do |event|
  case event
  when Riffer::StreamEvents::TextDelta
    print event.content
  when Riffer::StreamEvents::TextDone
    puts "\n[Complete]"
  when Riffer::StreamEvents::ToolCallDelta
    # Tool call being built
  when Riffer::StreamEvents::ToolCallDone
    puts "[Tool: #{event.name}]"
  end
end
```

## Event Types

### TextDelta

Emitted when incremental text content is received:

```ruby
event = Riffer::StreamEvents::TextDelta.new("Hello ")
event.role # => "assistant"
event.content # => "Hello "
event.to_h # => {role: "assistant", content: "Hello "}
```

Use this to display text in real-time as it streams.

### TextDone

Emitted when text generation is complete:

```ruby
event = Riffer::StreamEvents::TextDone.new("Hello, how can I help you?")
event.role # => "assistant"
event.content # => "Hello, how can I help you?"
event.to_h # => {role: "assistant", content: "Hello, how can I help you?"}
```

Contains the complete final text.

### ToolCallDelta

Emitted when tool call arguments are being streamed:

```ruby
event = Riffer::StreamEvents::ToolCallDelta.new(
  item_id: "item_123",
  name: "weather_tool",
  arguments_delta: '{"city":'
)
event.role # => "assistant"
event.item_id # => "item_123"
event.name # => "weather_tool"
event.arguments_delta # => '{"city":'
```

The `name` may only be present in the first delta. Accumulate `arguments_delta` to build the complete arguments.

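One way to accumulate deltas into complete calls, keyed by `item_id` (the `accumulate_tool_calls` helper is illustrative, not part of Riffer):

```ruby
# Merge a stream of ToolCallDelta-like events into complete tool calls.
def accumulate_tool_calls(events)
  calls = Hash.new { |hash, key| hash[key] = {name: nil, arguments: +""} }
  events.each do |event|
    call = calls[event.item_id]
    call[:name] ||= event.name # name may only appear in the first delta
    call[:arguments] << event.arguments_delta
  end
  calls
end
```

Each value then holds the tool name and the fully concatenated JSON arguments string, mirroring what `ToolCallDone` reports.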
### ToolCallDone

Emitted when a tool call is complete:

```ruby
event = Riffer::StreamEvents::ToolCallDone.new(
  item_id: "item_123",
  call_id: "call_456",
  name: "weather_tool",
  arguments: '{"city":"Tokyo"}'
)
event.role # => "assistant"
event.item_id # => "item_123"
event.call_id # => "call_456"
event.name # => "weather_tool"
event.arguments # => '{"city":"Tokyo"}'
```

Contains the complete tool call information.

### ReasoningDelta

Emitted when reasoning/thinking content is streamed (OpenAI with reasoning enabled):

```ruby
event = Riffer::StreamEvents::ReasoningDelta.new("Let me think about ")
event.role # => "assistant"
event.content # => "Let me think about "
```

### ReasoningDone

Emitted when reasoning is complete:

```ruby
event = Riffer::StreamEvents::ReasoningDone.new("Let me think about this step by step...")
event.role # => "assistant"
event.content # => "Let me think about this step by step..."
```

## Streaming with Tools

When an agent uses tools during streaming, the flow is:

1. Text events stream in (`TextDelta`, `TextDone`)
2. If tool calls are present: `ToolCallDelta` events, then `ToolCallDone`
3. Agent executes tools internally
4. Agent sends results back to LLM
5. More text events stream in
6. Repeat until no more tool calls

```ruby
agent.stream("What's the weather in Tokyo?").each do |event|
  case event
  when Riffer::StreamEvents::TextDelta
    print event.content
  when Riffer::StreamEvents::ToolCallDone
    puts "\n[Calling #{event.name}...]"
  when Riffer::StreamEvents::TextDone
    puts "\n"
  end
end
```

## Complete Example

```ruby
class WeatherAgent < Riffer::Agent
  model 'openai/gpt-4o'
  instructions 'You are a weather assistant.'
  uses_tools [WeatherTool]
end

agent = WeatherAgent.new
text_buffer = ""

agent.stream("What's the weather in Tokyo and New York?").each do |event|
  case event
  when Riffer::StreamEvents::TextDelta
    print event.content
    text_buffer += event.content

  when Riffer::StreamEvents::TextDone
    # Final text available
    puts "\n---"
    puts "Complete response: #{event.content}"

  when Riffer::StreamEvents::ToolCallDelta
    # Could show "typing..." indicator

  when Riffer::StreamEvents::ToolCallDone
    puts "\n[Tool: #{event.name}(#{event.arguments})]"

  when Riffer::StreamEvents::ReasoningDelta
    # Show thinking process if desired
    print "[thinking] #{event.content}"

  when Riffer::StreamEvents::ReasoningDone
    puts "\n[reasoning complete]"
  end
end
```

## Base Class

All events inherit from `Riffer::StreamEvents::Base`:

```ruby
class Riffer::StreamEvents::Base
  attr_reader :role

  def initialize(role: "assistant")
    @role = role
  end

  def to_h
    raise NotImplementedError
  end
end
```

data/docs/07_CONFIGURATION.md
@@ -0,0 +1,195 @@

# Configuration

Riffer uses a centralized configuration system for provider credentials and settings.

## Global Configuration

Use `Riffer.configure` to set up provider credentials:

```ruby
Riffer.configure do |config|
  config.openai.api_key = ENV['OPENAI_API_KEY']
  config.amazon_bedrock.region = 'us-east-1'
  config.amazon_bedrock.api_token = ENV['BEDROCK_API_TOKEN']
  config.anthropic.api_key = ENV['ANTHROPIC_API_KEY']
end
```

## Accessing Configuration

Access the current configuration via `Riffer.config`:

```ruby
Riffer.config.openai.api_key
# => "sk-..."

Riffer.config.amazon_bedrock.region
# => "us-east-1"

Riffer.config.anthropic.api_key
# => "sk-ant-..."
```

## Provider-Specific Configuration

### OpenAI

| Option    | Description         |
| --------- | ------------------- |
| `api_key` | Your OpenAI API key |

```ruby
Riffer.configure do |config|
  config.openai.api_key = ENV['OPENAI_API_KEY']
end
```

### Amazon Bedrock

| Option      | Description                                  |
| ----------- | -------------------------------------------- |
| `region`    | AWS region (e.g., `us-east-1`)               |
| `api_token` | Optional bearer token for API authentication |

```ruby
Riffer.configure do |config|
  config.amazon_bedrock.region = 'us-east-1'
  config.amazon_bedrock.api_token = ENV['BEDROCK_API_TOKEN'] # Optional
end
```

When `api_token` is not set, the provider uses standard AWS IAM authentication.

### Anthropic

| Option    | Description            |
| --------- | ---------------------- |
| `api_key` | Your Anthropic API key |

```ruby
Riffer.configure do |config|
  config.anthropic.api_key = ENV['ANTHROPIC_API_KEY']
end
```

## Agent-Level Configuration

Override global configuration at the agent level:

### provider_options

Pass options directly to the provider client:

```ruby
class MyAgent < Riffer::Agent
  model 'openai/gpt-4o'

  # Override API key for this agent only
  provider_options api_key: ENV['CUSTOM_OPENAI_KEY']
end
```

### model_options

Pass options to each LLM request:

```ruby
class MyAgent < Riffer::Agent
  model 'openai/gpt-4o'

  # These options are sent with every generate/stream call
  model_options temperature: 0.7, reasoning: 'medium'
end
```

## Common Model Options

### OpenAI

| Option        | Description                                      |
| ------------- | ------------------------------------------------ |
| `temperature` | Sampling temperature (0.0-2.0)                   |
| `max_tokens`  | Maximum tokens in response                       |
| `top_p`       | Nucleus sampling parameter                       |
| `reasoning`   | Reasoning effort level (`low`, `medium`, `high`) |

```ruby
class MyAgent < Riffer::Agent
  model 'openai/gpt-4o'
  model_options temperature: 0.7, reasoning: 'medium'
end
```

### Amazon Bedrock

| Option        | Description                |
| ------------- | -------------------------- |
| `temperature` | Sampling temperature       |
| `max_tokens`  | Maximum tokens in response |
| `top_p`       | Nucleus sampling parameter |
| `top_k`       | Top-k sampling parameter   |

```ruby
class MyAgent < Riffer::Agent
  model 'amazon_bedrock/anthropic.claude-3-sonnet-20240229-v1:0'
  model_options temperature: 0.7, max_tokens: 4096
end
```

### Anthropic

| Option        | Description                                 |
| ------------- | ------------------------------------------- |
| `temperature` | Sampling temperature                        |
| `max_tokens`  | Maximum tokens in response                  |
| `top_p`       | Nucleus sampling parameter                  |
| `top_k`       | Top-k sampling parameter                    |
| `thinking`    | Extended thinking config hash (Claude 3.7+) |

```ruby
class MyAgent < Riffer::Agent
  model 'anthropic/claude-3-5-sonnet-20241022'
  model_options temperature: 0.7, max_tokens: 4096
end

# With extended thinking (Claude 3.7+)
class ReasoningAgent < Riffer::Agent
  model 'anthropic/claude-3-7-sonnet-20250219'
  model_options thinking: {type: "enabled", budget_tokens: 10000}
end
```

## Environment Variables

Recommended pattern for managing credentials:

```ruby
# config/initializers/riffer.rb (Rails)
# or at application startup

Riffer.configure do |config|
  config.openai.api_key = ENV.fetch('OPENAI_API_KEY') { raise 'OPENAI_API_KEY not set' }

  if ENV['BEDROCK_REGION']
    config.amazon_bedrock.region = ENV['BEDROCK_REGION']
    config.amazon_bedrock.api_token = ENV['BEDROCK_API_TOKEN']
  end
end
```

## Multiple Configurations

For different environments or use cases, use agent-level overrides:

```ruby
class ProductionAgent < Riffer::Agent
  model 'openai/gpt-4o'
  provider_options api_key: ENV['PRODUCTION_OPENAI_KEY']
end

class DevelopmentAgent < Riffer::Agent
  model 'openai/gpt-4o-mini'
  provider_options api_key: ENV['DEV_OPENAI_KEY']
  model_options temperature: 0.0 # Deterministic for testing
end
```

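The choice between such classes can then be made at runtime. A minimal sketch, assuming the two agent classes above and a hypothetical `APP_ENV` variable:

```ruby
# Pick the agent class for the current environment.
def agent_class_for(env)
  env == "production" ? ProductionAgent : DevelopmentAgent
end

# agent = agent_class_for(ENV.fetch("APP_ENV", "development")).new
```
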