sage-rb 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA256:
+   metadata.gz: e5380b93f9b4d73ec594e712e1a7421d72c749c7e4fb2a0216deee2be1606aae
+   data.tar.gz: 877373c5133fcd9e29fd48fad8c0df533df8c2cfcf5881feef187b72523daea4
+ SHA512:
+   metadata.gz: 6c5d49d69fe7122f3e41308bf8f28ef25ab8e626d4de9a2f35fa68ec616c8ef3e182506790f4418a580fdf2e136ff3f9007e243cc2a172e13c1dd6a7c736ea10
+   data.tar.gz: 4542f512d8198653dc7b4fc83345830e5e83294e115ffd395abc4cf144ad81517acf2c9e794ed3295b4da9c58b13c35eb66c60b5a936a4653afce6949fc6cd09
data/README.md ADDED
@@ -0,0 +1,265 @@
+ # sage-rb
+
+ A lightweight, provider-agnostic Ruby gem for calling LLM APIs. One interface for OpenAI, Anthropic, and Ollama, with profiles to switch between models without changing your code.
+
+ ## Installation
+
+ Add to your Gemfile:
+
+ ```ruby
+ gem "sage-rb"
+ ```
+
+ Or install directly:
+
+ ```bash
+ gem install sage-rb
+ ```
+
+ ## Quick Start (Rails)
+
+ Generate the initializer:
+
+ ```bash
+ rails generate sage:install
+ ```
+
+ Edit `config/initializers/sage.rb`:
+
+ ```ruby
+ Sage.configure do |config|
+   config.provider :openai, api_key: Rails.application.credentials.dig(:openai, :api_key)
+
+   config.profile :default, provider: :openai, model: "gpt-4o"
+   config.default_profile :default
+ end
+ ```
+
+ Use it anywhere in your app:
+
+ ```ruby
+ response = Sage.complete(prompt: "Summarize this article")
+ response.content # => "The article discusses..."
+ ```
+
+ ## Quick Start (Ruby)
+
+ ```ruby
+ require "sage"
+
+ Sage.configure do |config|
+   config.provider :openai, api_key: ENV["OPENAI_API_KEY"]
+
+   config.profile :default, provider: :openai, model: "gpt-4o"
+   config.default_profile :default
+ end
+
+ response = Sage.complete(prompt: "Hello!")
+ puts response.content
+ ```
+
+ ## Configuration
+
+ ### Providers
+
+ Register providers with their credentials. sage-rb never stores credentials; it receives API keys as strings and passes them to the provider API.
+
+ ```ruby
+ Sage.configure do |config|
+   # OpenAI (or any OpenAI-compatible API)
+   config.provider :openai,
+     api_key: ENV["OPENAI_API_KEY"],
+     base_url: "https://api.openai.com/v1" # optional, this is the default
+
+   # Anthropic
+   config.provider :anthropic,
+     api_key: ENV["ANTHROPIC_API_KEY"]
+
+   # Ollama (local, no API key required)
+   config.provider :ollama,
+     endpoint: "http://localhost:11434" # optional, this is the default
+
+   # Ollama with authentication (remote deployment)
+   config.provider :ollama,
+     endpoint: "https://ollama.example.com",
+     api_key: ENV["OLLAMA_API_KEY"]
+ end
+ ```
+
+ ### Profiles
+
+ Profiles are named combinations of provider, model, and default parameters. Define them once, use them everywhere.
+
+ ```ruby
+ Sage.configure do |config|
+   config.provider :openai, api_key: ENV["OPENAI_API_KEY"]
+   config.provider :ollama, endpoint: "http://localhost:11434"
+
+   config.profile :small_brain, provider: :ollama, model: "hermes3"
+   config.profile :code_expert, provider: :openai, model: "gpt-4o",
+     temperature: 0.2, max_tokens: 4096
+   config.profile :creative, provider: :openai, model: "gpt-4o",
+     temperature: 0.9
+
+   config.default_profile :small_brain
+ end
+ ```
+
+ Use different profiles for different tasks:
+
+ ```ruby
+ Sage.complete(:code_expert, prompt: "Review this function")
+ Sage.complete(:creative, prompt: "Write a haiku about Ruby")
+ Sage.complete(prompt: "Hello") # uses the default profile (:small_brain)
+ ```
+
+ ### Environment-based defaults
+
+ ```ruby
+ config.default_profile Rails.env.production? ? :code_expert : :small_brain
+ ```
+
+ ## Usage
+
+ ### Blocking completion
+
+ Without a block, `Sage.complete` waits for the provider to finish and returns a `Sage::Response`:
+
+ ```ruby
+ response = Sage.complete(:code_expert, prompt: "Explain recursion", system: "You are a teacher")
+
+ response.content # => "Recursion is when a function calls itself..."
+ response.model   # => "gpt-4o"
+ response.usage   # => { prompt_tokens: 15, completion_tokens: 42 }
+ ```
+
+ ### Streaming completion
+
+ Pass a block to stream chunks as they arrive:
+
+ ```ruby
+ Sage.complete(:code_expert, prompt: "Explain recursion") do |chunk|
+   if chunk.done?
+     puts "\n[Done]"
+   else
+     print chunk.content
+   end
+ end
+ ```
+
+ ### Per-call parameter overrides
+
+ Override profile defaults for a single call:
+
+ ```ruby
+ # The profile sets temperature: 0.2, but this call uses 0.9
+ Sage.complete(:code_expert, prompt: "Be creative", temperature: 0.9)
+ ```
+
+ ### System prompts
+
+ ```ruby
+ Sage.complete(:default,
+   prompt: "What is 2+2?",
+   system: "You are a math tutor. Show your work."
+ )
+ ```
+
+ ## Providers Reference
+
+ | Provider  | Config key   | Required fields | Optional fields       |
+ |-----------|--------------|-----------------|-----------------------|
+ | OpenAI    | `:openai`    | `api_key`       | `base_url`            |
+ | Anthropic | `:anthropic` | `api_key`       | `base_url`            |
+ | Ollama    | `:ollama`    | —               | `endpoint`, `api_key` |
+
+ ### Provider notes
+
+ **OpenAI**: Newer models (o1, o3, gpt-4o, gpt-5) automatically use `max_completion_tokens` instead of `max_tokens`. The `base_url` option supports OpenAI-compatible APIs (Azure, local proxies).
+
+ **Anthropic**: System prompts are sent as a separate field (not in the messages array), matching the Anthropic API spec. `max_tokens` defaults to 1024 if not specified (Anthropic requires this field).
+
+ **Ollama**: Runs locally by default at `http://localhost:11434`. An API key is optional and only needed for authenticated remote deployments.
+
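The token-parameter mapping in the OpenAI note above can be sketched as a small pure function. This is illustrative only; it mirrors the heuristic described here rather than the adapter's exact internals:

```ruby
# Maps a generic :max_tokens option to the request field name the model expects.
# Newer OpenAI models (o1, o3, gpt-4o, gpt-5) take max_completion_tokens.
def token_param_for(model)
  newer = model.start_with?("o1", "o3") ||
          model.include?("gpt-4o") ||
          model.include?("gpt-5")
  newer ? :max_completion_tokens : :max_tokens
end

puts token_param_for("gpt-4o")        # => max_completion_tokens
puts token_param_for("gpt-3.5-turbo") # => max_tokens
```

Because the mapping happens inside the adapter, callers always write `max_tokens` and never need to know which field a given model requires.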
+ ## Error Handling
+
+ ```ruby
+ begin
+   Sage.complete(prompt: "Hello")
+ rescue Sage::AuthenticationError => e
+   # Invalid API key (401)
+ rescue Sage::ProviderError => e
+   # Rate limited (429), server error (500), or other provider issues
+ rescue Sage::ConnectionError => e
+   # Could not connect (e.g., Ollama not running)
+ rescue Sage::ProfileNotFound => e
+   # Referenced a profile that doesn't exist
+ rescue Sage::ProviderNotConfigured => e
+   # Profile references a provider that isn't configured
+ rescue Sage::NoDefaultProfile => e
+   # Called Sage.complete without a profile name and no default is set
+ rescue Sage::Error => e
+   # Catch-all for any sage-rb error
+ end
+ ```
+
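Since rate limiting surfaces as `Sage::ProviderError`, callers often wrap completion calls in a retry loop with exponential backoff. A self-contained sketch using stand-ins (`ProviderError` and `with_retries` below are local illustrations, not part of sage-rb):

```ruby
# Stand-in for Sage::ProviderError, so this sketch runs without the gem.
class ProviderError < StandardError; end

# Retries the given block with exponential backoff, re-raising after
# `attempts` failures.
def with_retries(attempts: 3, base_delay: 0.01)
  tries = 0
  begin
    yield
  rescue ProviderError
    tries += 1
    raise if tries >= attempts
    sleep(base_delay * (2**tries))
    retry
  end
end

calls = 0
result = with_retries do
  calls += 1
  raise ProviderError, "rate limited" if calls < 3
  "ok"
end
puts result # => ok
```

In an application the block body would be a real `Sage.complete(...)` call; everything else stays the same.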
+ ## Response Objects
+
+ ### Sage::Response
+
+ Returned by blocking `Sage.complete` calls.
+
+ ```ruby
+ response.content # String: the generated text
+ response.model   # String: the model that generated it
+ response.usage   # Hash: { prompt_tokens: Integer, completion_tokens: Integer }
+ ```
+
+ ### Sage::Chunk
+
+ Yielded during streaming `Sage.complete` calls.
+
+ ```ruby
+ chunk.content # String: a text fragment
+ chunk.done?   # Boolean: true for the final chunk
+ ```
+
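Streamed chunks concatenate into the same text a blocking call would return. A self-contained sketch (the `Chunk` struct and `fake_stream` stub below are local stand-ins with the same interface, not the gem's classes):

```ruby
# Stand-in with the same interface as Sage::Chunk.
Chunk = Struct.new(:content, :done) do
  def done? = done
end

# Local stub that yields chunks the way a streaming Sage.complete call would.
def fake_stream(&block)
  ["Hello", ", ", "world"].each { |part| block.call(Chunk.new(part, false)) }
  block.call(Chunk.new("", true))
end

buffer = +""
fake_stream do |chunk|
  buffer << chunk.content unless chunk.done?
end

puts buffer # => Hello, world
```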
+ ## Security
+
+ sage-rb never stores credentials. API keys are received as strings and passed directly to provider HTTP headers.
+
+ ### Provider URLs
+
+ The `base_url` and `endpoint` configuration options control where HTTP requests are sent. These must only come from trusted sources:
+
+ ```ruby
+ # Safe: from environment variables or Rails credentials
+ config.provider :openai,
+   api_key: ENV["OPENAI_API_KEY"],
+   base_url: ENV["OPENAI_BASE_URL"]
+
+ # Unsafe: NEVER use user input for provider URLs
+ config.provider :openai,
+   api_key: current_user.api_key,
+   base_url: params[:base_url] # DO NOT do this
+ ```
+
+ Allowing user-controlled URLs could enable Server-Side Request Forgery (SSRF) against internal services.
+
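If a deployment must accept a configurable endpoint, one mitigation is validating it against an allowlist before handing it to `config.provider`. A hedged sketch (the host list and helper name are examples only, not a sage-rb API):

```ruby
require "uri"

# Example allowlist; replace with the hosts your deployment actually trusts.
ALLOWED_HOSTS = ["api.openai.com", "ollama.internal.example.com"].freeze

# Returns the URL only if it is https and its host is explicitly allowlisted;
# returns nil otherwise.
def safe_base_url(url)
  uri = URI.parse(url)
  return nil unless uri.is_a?(URI::HTTPS) && ALLOWED_HOSTS.include?(uri.host)
  url
end

safe_base_url("https://api.openai.com/v1")       # => "https://api.openai.com/v1"
safe_base_url("http://169.254.169.254/metadata") # => nil
```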
+ ## Relationship to sage
+
+ sage-rb is a companion to [sage](https://github.com/not-emily/sage), the Go CLI and library. They share the same core concepts:
+
+ | Concept | sage (Go CLI) | sage-rb (Ruby gem) |
+ |---------|---------------|--------------------|
+ | **Providers** | Configured via `sage provider add` | Configured in initializer |
+ | **Profiles** | Configured via `sage profile add` | Configured in initializer |
+ | **Complete** | `sage complete --profile name` | `Sage.complete(:name, ...)` |
+ | **Credentials** | Encrypted in `~/.config/sage/` | From ENV vars or Rails credentials |
+ | **Streaming** | Default behavior | Pass a block to `Sage.complete` |
+
+ Both make HTTP calls directly to provider APIs. They are independent implementations; sage-rb does not require or wrap the sage Go binary.
+
+ ## License
+
+ MIT
data/lib/generators/sage/install_generator.rb ADDED
@@ -0,0 +1,13 @@
+ # frozen_string_literal: true
+
+ module Sage
+   class InstallGenerator < Rails::Generators::Base
+     source_root File.expand_path("templates", __dir__)
+
+     desc "Creates a sage-rb initializer at config/initializers/sage.rb"
+
+     def create_initializer
+       template "initializer.rb.tt", "config/initializers/sage.rb"
+     end
+   end
+ end
data/lib/sage/chunk.rb ADDED
@@ -0,0 +1,16 @@
+ # frozen_string_literal: true
+
+ module Sage
+   class Chunk
+     attr_reader :content
+
+     def initialize(content:, done: false)
+       @content = content
+       @done = done
+     end
+
+     def done?
+       @done
+     end
+   end
+ end
data/lib/sage/client.rb ADDED
@@ -0,0 +1,55 @@
+ # frozen_string_literal: true
+
+ module Sage
+   class Client
+     PROVIDERS = {}
+
+     def self.register_provider(name, klass)
+       PROVIDERS[name.to_sym] = klass
+     end
+
+     def initialize(configuration)
+       @configuration = configuration
+     end
+
+     def complete(profile_name = nil, prompt:, system: nil, **params, &block)
+       profile = resolve_profile(profile_name)
+       provider = build_provider(profile)
+       merged_params = profile.params.merge(params)
+
+       if block
+         provider.stream(model: profile.model, prompt: prompt, system: system, **merged_params, &block)
+       else
+         provider.complete(model: profile.model, prompt: prompt, system: system, **merged_params)
+       end
+     end
+
+     private
+
+     attr_reader :configuration
+
+     def resolve_profile(name)
+       name = name&.to_sym || configuration.default_profile
+
+       raise NoDefaultProfile, "No default profile configured. Call Sage.configure { |c| c.default_profile :name }" if name.nil?
+
+       profile = configuration.profiles[name]
+
+       raise ProfileNotFound, "Profile '#{name}' is not configured" if profile.nil?
+
+       profile
+     end
+
+     def build_provider(profile)
+       provider_config = configuration.providers[profile.provider]
+
+       raise ProviderNotConfigured, "Provider '#{profile.provider}' referenced by profile '#{profile.name}' is not configured" if provider_config.nil?
+
+       provider_class = PROVIDERS[profile.provider]
+
+       raise ProviderNotConfigured, "No provider adapter registered for '#{profile.provider}'" if provider_class.nil?
+
+       provider_class.new(provider_config)
+     end
+   end
+ end
data/lib/sage/configuration.rb ADDED
@@ -0,0 +1,32 @@
+ # frozen_string_literal: true
+
+ require_relative "profile"
+
+ module Sage
+   class Configuration
+     attr_reader :providers, :profiles
+
+     def initialize
+       @providers = {}
+       @profiles = {}
+       @default_profile_name = nil
+     end
+
+     def provider(name, **options)
+       @providers[name.to_sym] = options
+     end
+
+     def profile(name, provider:, model:, **params)
+       @profiles[name.to_sym] = Profile.new(name: name, provider: provider, model: model, **params)
+     end
+
+     # Acts as a getter when called with no argument, and as a setter otherwise
+     def default_profile(name = nil)
+       if name.nil?
+         @default_profile_name
+       else
+         @default_profile_name = name.to_sym
+       end
+     end
+   end
+ end
data/lib/sage/errors.rb ADDED
@@ -0,0 +1,11 @@
+ # frozen_string_literal: true
+
+ module Sage
+   class Error < StandardError; end
+   class ProfileNotFound < Error; end
+   class ProviderNotConfigured < Error; end
+   class NoDefaultProfile < Error; end
+   class ConnectionError < Error; end
+   class AuthenticationError < Error; end
+   class ProviderError < Error; end
+ end
data/lib/sage/profile.rb ADDED
@@ -0,0 +1,14 @@
+ # frozen_string_literal: true
+
+ module Sage
+   class Profile
+     attr_reader :name, :provider, :model, :params
+
+     def initialize(name:, provider:, model:, **params)
+       @name = name.to_sym
+       @provider = provider.to_sym
+       @model = model.to_s
+       @params = params
+     end
+   end
+ end
data/lib/sage/providers/anthropic.rb ADDED
@@ -0,0 +1,151 @@
+ # frozen_string_literal: true
+
+ require "net/http"
+ require "json"
+ require "uri"
+
+ module Sage
+   module Providers
+     class Anthropic < Base
+       DEFAULT_BASE_URL = "https://api.anthropic.com/v1"
+       ANTHROPIC_VERSION = "2023-06-01"
+       DEFAULT_MAX_TOKENS = 1024
+
+       def complete(model:, prompt:, system: nil, **params)
+         body = build_request_body(model, prompt, system, stream: false, **params)
+         response = post(body)
+
+         parsed = JSON.parse(response.body)
+         content = extract_content(parsed)
+         usage = parsed.fetch("usage", {})
+
+         Response.new(
+           content: content,
+           model: model,
+           usage: {
+             prompt_tokens: usage["input_tokens"] || 0,
+             completion_tokens: usage["output_tokens"] || 0
+           }
+         )
+       end
+
+       def stream(model:, prompt:, system: nil, **params, &block)
+         body = build_request_body(model, prompt, system, stream: true, **params)
+         uri = endpoint_uri
+
+         # verify_mode must go in the start options; setting it after connecting has no effect
+         Net::HTTP.start(uri.host, uri.port, use_ssl: true, verify_mode: OpenSSL::SSL::VERIFY_PEER) do |http|
+           request = build_http_request(uri, body)
+
+           http.request(request) do |response|
+             handle_error_response(response) unless response.is_a?(Net::HTTPSuccess)
+
+             current_event = nil
+
+             response.read_body do |chunk_data|
+               chunk_data.each_line do |line|
+                 line = line.strip
+
+                 if line.start_with?("event: ")
+                   current_event = line.delete_prefix("event: ")
+                   next
+                 end
+
+                 next if line.empty?
+                 next unless line.start_with?("data: ")
+
+                 if current_event == "message_stop"
+                   block.call(Chunk.new(content: "", done: true))
+                   return
+                 end
+
+                 next unless current_event == "content_block_delta"
+
+                 data = line.delete_prefix("data: ")
+                 parsed = JSON.parse(data)
+                 delta = parsed["delta"]
+
+                 next unless delta && delta["type"] == "text_delta" && !delta["text"].empty?
+
+                 block.call(Chunk.new(content: delta["text"]))
+               end
+             end
+           end
+         end
+       end
+
+       private
+
+       def build_request_body(model, prompt, system, stream:, **params)
+         max_tokens = params.delete(:max_tokens) || DEFAULT_MAX_TOKENS
+
+         body = {
+           model: model,
+           messages: [{ role: "user", content: prompt }],
+           max_tokens: max_tokens
+         }
+
+         body[:system] = system if system
+         body[:stream] = true if stream
+         body.merge!(params)
+         body
+       end
+
+       def extract_content(parsed)
+         content_blocks = parsed["content"] || []
+         text_block = content_blocks.find { |block| block["type"] == "text" }
+         text_block ? text_block["text"] : ""
+       end
+
+       def endpoint_uri
+         base = config[:base_url]
+         base = DEFAULT_BASE_URL if base.nil? || base.empty?
+         URI("#{base.chomp("/")}/messages")
+       end
+
+       def post(body)
+         uri = endpoint_uri
+         http = Net::HTTP.new(uri.host, uri.port)
+         http.use_ssl = true
+         http.verify_mode = OpenSSL::SSL::VERIFY_PEER
+         http.read_timeout = 300
+
+         request = build_http_request(uri, body)
+         response = http.request(request)
+
+         handle_error_response(response) unless response.is_a?(Net::HTTPSuccess)
+
+         response
+       end
+
+       def build_http_request(uri, body)
+         request = Net::HTTP::Post.new(uri)
+         request["Content-Type"] = "application/json"
+         request["x-api-key"] = config[:api_key]
+         request["anthropic-version"] = ANTHROPIC_VERSION
+         request.body = JSON.generate(body)
+         request
+       end
+
+       def handle_error_response(response)
+         message = extract_error_message(response)
+
+         case response.code.to_i
+         when 401
+           raise AuthenticationError, "Invalid API key: #{message}"
+         when 429
+           raise ProviderError, "Rate limited: #{message}"
+         else
+           raise ProviderError, "API error (#{response.code}): #{message}"
+         end
+       end
+
+       def extract_error_message(response)
+         parsed = JSON.parse(response.body)
+         parsed.dig("error", "message") || response.body
+       rescue JSON::ParserError
+         response.body
+       end
+     end
+   end
+ end
data/lib/sage/providers/base.rb ADDED
@@ -0,0 +1,23 @@
+ # frozen_string_literal: true
+
+ module Sage
+   module Providers
+     class Base
+       def initialize(config)
+         @config = config
+       end
+
+       def complete(model:, prompt:, system: nil, **params)
+         raise NotImplementedError, "#{self.class}#complete is not implemented"
+       end
+
+       def stream(model:, prompt:, system: nil, **params, &block)
+         raise NotImplementedError, "#{self.class}#stream is not implemented"
+       end
+
+       private
+
+       attr_reader :config
+     end
+   end
+ end
data/lib/sage/providers/ollama.rb ADDED
@@ -0,0 +1,132 @@
+ # frozen_string_literal: true
+
+ require "net/http"
+ require "json"
+ require "uri"
+
+ module Sage
+   module Providers
+     class Ollama < Base
+       DEFAULT_BASE_URL = "http://localhost:11434"
+
+       def complete(model:, prompt:, system: nil, **params)
+         body = build_request_body(model, prompt, system, stream: false, **params)
+         response = post(body)
+
+         parsed = JSON.parse(response.body)
+
+         raise ProviderError, "Ollama error: #{parsed["error"]}" if parsed["error"] && !parsed["error"].empty?
+
+         content = parsed.dig("message", "content") || ""
+
+         Response.new(
+           content: content,
+           model: model,
+           usage: {
+             prompt_tokens: parsed["prompt_eval_count"] || 0,
+             completion_tokens: parsed["eval_count"] || 0
+           }
+         )
+       rescue Errno::ECONNREFUSED
+         raise ConnectionError, "Could not connect to Ollama at #{endpoint_uri}. Is Ollama running?"
+       end
+
+       def stream(model:, prompt:, system: nil, **params, &block)
+         body = build_request_body(model, prompt, system, stream: true, **params)
+         uri = endpoint_uri
+
+         ssl = uri.scheme == "https"
+         # verify_mode must go in the start options; setting it after connecting has no effect
+         Net::HTTP.start(uri.host, uri.port, use_ssl: ssl, verify_mode: ssl ? OpenSSL::SSL::VERIFY_PEER : nil) do |http|
+           request = build_http_request(uri, body)
+
+           http.request(request) do |response|
+             handle_error_response(response) unless response.is_a?(Net::HTTPSuccess)
+
+             response.read_body do |chunk_data|
+               chunk_data.each_line do |line|
+                 line = line.strip
+                 next if line.empty?
+
+                 parsed = JSON.parse(line)
+
+                 if parsed["done"]
+                   block.call(Chunk.new(content: "", done: true))
+                   return
+                 end
+
+                 content = parsed.dig("message", "content")
+                 next if content.nil? || content.empty?
+
+                 block.call(Chunk.new(content: content))
+               end
+             end
+           end
+         end
+       rescue Errno::ECONNREFUSED
+         raise ConnectionError, "Could not connect to Ollama at #{endpoint_uri}. Is Ollama running?"
+       end
+
+       private
+
+       def build_request_body(model, prompt, system, stream:, **params)
+         messages = []
+         messages << { role: "system", content: system } if system
+         messages << { role: "user", content: prompt }
+
+         {
+           model: model,
+           messages: messages,
+           stream: stream,
+           # Sampling parameters (temperature, etc.) go under "options" in the Ollama chat API
+           options: params
+         }
+       end
+
+       def post(body)
+         uri = endpoint_uri
+         http = Net::HTTP.new(uri.host, uri.port)
+         ssl = uri.scheme == "https"
+         http.use_ssl = ssl
+         http.verify_mode = OpenSSL::SSL::VERIFY_PEER if ssl
+         http.read_timeout = 300
+
+         request = build_http_request(uri, body)
+         response = http.request(request)
+
+         handle_error_response(response) unless response.is_a?(Net::HTTPSuccess)
+
+         response
+       end
+
+       def build_http_request(uri, body)
+         request = Net::HTTP::Post.new(uri)
+         request["Content-Type"] = "application/json"
+
+         api_key = config[:api_key]
+         request["Authorization"] = "Bearer #{api_key}" if api_key && !api_key.empty?
+
+         request.body = JSON.generate(body)
+         request
+       end
+
+       def endpoint_uri
+         base = config[:endpoint]
+         base = DEFAULT_BASE_URL if base.nil? || base.empty?
+         URI("#{base.chomp("/")}/api/chat")
+       end
+
+       def handle_error_response(response)
+         message = extract_error_message(response)
+         raise ProviderError, "Ollama error (#{response.code}): #{message}"
+       end
+
+       def extract_error_message(response)
+         parsed = JSON.parse(response.body)
+         parsed["error"] || response.body
+       rescue JSON::ParserError
+         response.body
+       end
+     end
+   end
+ end
data/lib/sage/providers/openai.rb ADDED
@@ -0,0 +1,148 @@
+ # frozen_string_literal: true
+
+ require "net/http"
+ require "json"
+ require "uri"
+
+ module Sage
+   module Providers
+     class OpenAI < Base
+       DEFAULT_BASE_URL = "https://api.openai.com/v1"
+
+       def complete(model:, prompt:, system: nil, **params)
+         body = build_request_body(model, prompt, system, stream: false, **params)
+         response = post(body)
+
+         parsed = JSON.parse(response.body)
+         content = parsed.dig("choices", 0, "message", "content") || ""
+         usage = parsed.fetch("usage", {})
+
+         Response.new(
+           content: content,
+           model: model,
+           usage: {
+             prompt_tokens: usage["prompt_tokens"] || 0,
+             completion_tokens: usage["completion_tokens"] || 0
+           }
+         )
+       end
+
+       def stream(model:, prompt:, system: nil, **params, &block)
+         body = build_request_body(model, prompt, system, stream: true, **params)
+         uri = endpoint_uri
+
+         ssl = uri.scheme == "https"
+         # verify_mode must go in the start options; setting it after connecting has no effect
+         Net::HTTP.start(uri.host, uri.port, use_ssl: ssl, verify_mode: ssl ? OpenSSL::SSL::VERIFY_PEER : nil) do |http|
+           request = build_http_request(uri, body)
+
+           http.request(request) do |response|
+             handle_error_response(response) unless response.is_a?(Net::HTTPSuccess)
+
+             response.read_body do |chunk_data|
+               chunk_data.each_line do |line|
+                 line = line.strip
+                 next if line.empty?
+                 next unless line.start_with?("data: ")
+
+                 data = line.delete_prefix("data: ")
+
+                 if data == "[DONE]"
+                   block.call(Chunk.new(content: "", done: true))
+                   return
+                 end
+
+                 parsed = JSON.parse(data)
+                 content = parsed.dig("choices", 0, "delta", "content")
+                 next if content.nil? || content.empty?
+
+                 block.call(Chunk.new(content: content))
+               end
+             end
+           end
+         end
+       end
+
+       private
+
+       def build_request_body(model, prompt, system, stream:, **params)
+         messages = []
+         messages << { role: "system", content: system } if system
+         messages << { role: "user", content: prompt }
+
+         body = {
+           model: model,
+           messages: messages,
+           stream: stream
+         }
+
+         if params[:max_tokens]
+           if use_max_completion_tokens?(model)
+             body[:max_completion_tokens] = params.delete(:max_tokens)
+           else
+             body[:max_tokens] = params.delete(:max_tokens)
+           end
+         end
+
+         body.merge!(params)
+         body
+       end
+
+       def use_max_completion_tokens?(model)
+         model.start_with?("o1", "o3") ||
+           model.include?("gpt-4o") ||
+           model.include?("gpt-5")
+       end
+
+       def post(body)
+         uri = endpoint_uri
+         http = Net::HTTP.new(uri.host, uri.port)
+         ssl = uri.scheme == "https"
+         http.use_ssl = ssl
+         http.verify_mode = OpenSSL::SSL::VERIFY_PEER if ssl
+         http.read_timeout = 300
+
+         request = build_http_request(uri, body)
+         response = http.request(request)
+
+         handle_error_response(response) unless response.is_a?(Net::HTTPSuccess)
+
+         response
+       end
+
+       def build_http_request(uri, body)
+         request = Net::HTTP::Post.new(uri)
+         request["Content-Type"] = "application/json"
+         request["Authorization"] = "Bearer #{config[:api_key]}"
+         request.body = JSON.generate(body)
+         request
+       end
+
+       def endpoint_uri
+         base = config[:base_url]
+         base = DEFAULT_BASE_URL if base.nil? || base.empty?
+         URI("#{base.chomp("/")}/chat/completions")
+       end
+
+       def handle_error_response(response)
+         message = extract_error_message(response)
+
+         case response.code.to_i
+         when 401
+           raise AuthenticationError, "Invalid API key: #{message}"
+         when 429
+           raise ProviderError, "Rate limited: #{message}"
+         else
+           raise ProviderError, "API error (#{response.code}): #{message}"
+         end
+       end
+
+       def extract_error_message(response)
+         parsed = JSON.parse(response.body)
+         parsed.dig("error", "message") || response.body
+       rescue JSON::ParserError
+         response.body
+       end
+     end
+   end
+ end
data/lib/sage/railtie.rb ADDED
@@ -0,0 +1,9 @@
+ # frozen_string_literal: true
+
+ module Sage
+   class Railtie < Rails::Railtie
+     generators do
+       require "generators/sage/install_generator"
+     end
+   end
+ end
data/lib/sage/response.rb ADDED
@@ -0,0 +1,13 @@
+ # frozen_string_literal: true
+
+ module Sage
+   class Response
+     attr_reader :content, :model, :usage
+
+     def initialize(content:, model:, usage: {})
+       @content = content
+       @model = model
+       @usage = usage
+     end
+   end
+ end
data/lib/sage/version.rb ADDED
@@ -0,0 +1,5 @@
+ # frozen_string_literal: true
+
+ module Sage
+   VERSION = "0.1.0"
+ end
data/lib/sage.rb ADDED
@@ -0,0 +1,38 @@
+ # frozen_string_literal: true
+
+ require_relative "sage/version"
+ require_relative "sage/errors"
+ require_relative "sage/configuration"
+ require_relative "sage/response"
+ require_relative "sage/chunk"
+ require_relative "sage/providers/base"
+ require_relative "sage/providers/openai"
+ require_relative "sage/providers/anthropic"
+ require_relative "sage/providers/ollama"
+ require_relative "sage/client"
+
+ module Sage
+   class << self
+     def configure
+       @configuration = Configuration.new
+       yield(@configuration)
+       @configuration
+     end
+
+     def configuration
+       @configuration
+     end
+
+     def complete(profile_name = nil, **params, &block)
+       raise Error, "Sage is not configured. Call Sage.configure first." if configuration.nil?
+
+       client = Client.new(configuration)
+       client.complete(profile_name, **params, &block)
+     end
+   end
+ end
+
+ Sage::Client.register_provider(:openai, Sage::Providers::OpenAI)
+ Sage::Client.register_provider(:anthropic, Sage::Providers::Anthropic)
+ Sage::Client.register_provider(:ollama, Sage::Providers::Ollama)
+
+ require_relative "sage/railtie" if defined?(Rails::Railtie)
metadata ADDED
@@ -0,0 +1,60 @@
+ --- !ruby/object:Gem::Specification
+ name: sage-rb
+ version: !ruby/object:Gem::Version
+   version: 0.1.0
+ platform: ruby
+ authors:
+ - Emily
+ autorequire:
+ bindir: bin
+ cert_chain: []
+ date: 2026-02-13 00:00:00.000000000 Z
+ dependencies: []
+ description: A lightweight, provider-agnostic interface for calling LLM APIs (OpenAI,
+   Anthropic, Ollama) from any Ruby application.
+ email:
+ executables: []
+ extensions: []
+ extra_rdoc_files: []
+ files:
+ - README.md
+ - lib/generators/sage/install_generator.rb
+ - lib/sage.rb
+ - lib/sage/chunk.rb
+ - lib/sage/client.rb
+ - lib/sage/configuration.rb
+ - lib/sage/errors.rb
+ - lib/sage/profile.rb
+ - lib/sage/providers/anthropic.rb
+ - lib/sage/providers/base.rb
+ - lib/sage/providers/ollama.rb
+ - lib/sage/providers/openai.rb
+ - lib/sage/railtie.rb
+ - lib/sage/response.rb
+ - lib/sage/version.rb
+ homepage: https://github.com/not-emily/sage-rb
+ licenses:
+ - MIT
+ metadata:
+   homepage_uri: https://github.com/not-emily/sage-rb
+   source_code_uri: https://github.com/not-emily/sage-rb
+ post_install_message:
+ rdoc_options: []
+ require_paths:
+ - lib
+ required_ruby_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: 3.0.0
+ required_rubygems_version: !ruby/object:Gem::Requirement
+   requirements:
+   - - ">="
+     - !ruby/object:Gem::Version
+       version: '0'
+ requirements: []
+ rubygems_version: 3.5.22
+ signing_key:
+ specification_version: 4
+ summary: Unified LLM adapter for Ruby
+ test_files: []