spectre_ai 1.2.0 → 2.0.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
1
1
  ---
2
2
  SHA256:
3
- metadata.gz: 68106b39f46d2b4069e560eb8e51dcc64ed005a05d9f062919db94e628c2c5f4
4
- data.tar.gz: c35b62f8f973763c2029620a0b4608e0dd15591c991e9a13b91d427e2b5dddd7
3
+ metadata.gz: '03910c4dd38bf7a272fab91c0e9d1431d0f9bdc593abe62035271e9e07f22e89'
4
+ data.tar.gz: 0f1e927a42785d2f4735e4adf9140efad6815247fca6ce6270ac10ef286ae217
5
5
  SHA512:
6
- metadata.gz: 7c4632584286d800799a66a1b5ac2f2c5dbb6a9c35597f6c1256dffa94a9f228c2c0ff5aa1a30170a1aae93d9d26cbad42df55297468b853471a3559aa72c69a
7
- data.tar.gz: 998ab9f6d356b9f3f9cc404260ea931c35a4e80abb2b6b5f9da1757fbdbc39365bfc0041c9a9ae934f97c954d66fa9ffc8c75a8d66b4228cf5db3f26972bd2e0
6
+ metadata.gz: 48e634fedb903de30ff0acba0b1de5b725fccb2ac0882bf8890740100312c92ae48c6c182612554cdd6968b34df5f6c7958a62caf19492ae29a42d8e084e1458
7
+ data.tar.gz: b7a382fb583431eff8715147df8f5251e30d528b0a77ecc6c2ecf68b324f57f76e9c6a2634da07f55159713d9cd58684e212b75fee7f5a7e20e56b437725dd55
data/CHANGELOG.md CHANGED
@@ -198,3 +198,85 @@ Key Benefits:\
198
198
  ✅ Keeps the method signature cleaner and future-proof.\
199
199
  ✅ Ensures optional parameters are handled dynamically without cluttering the main method signature.\
200
200
  ✅ Improves consistency across OpenAI and Ollama providers.
201
+
202
+
203
+ # Changelog for Version 2.0.0
204
+
205
+ **Release Date:** 21st Sep 2025
206
+
207
+ ### New Provider: Claude (Anthropic)
208
+
209
+ - Added `Spectre::Claude` client for chat completions using the Anthropic Messages API.
210
+ - New configuration block: `Spectre.setup { |c| c.default_llm_provider = :claude; c.claude { |v| v.api_key = ENV['ANTHROPIC_API_KEY'] } }`.
211
+ - Supports `claude: { max_tokens: ... }` in args to control max tokens.
212
+
213
+ ### Structured Outputs via Tools-based JSON Schema
214
+
215
+ - Claude does not use `response_format`; instead, when `json_schema` is provided we now:
216
+ - Convert your schema into a single “virtual” tool (`tools[0]`) with `input_schema`.
217
+ - Force use of that tool by default with `tool_choice: { type: 'tool', name: <schema_name> }` (respects explicit `tool_choice` if you pass one).
218
+ - Merge your own `tools` alongside the schema tool without overriding them.
219
+ - Messages content preserves structured blocks (hashes/arrays), enabling images and other block types to be sent as-is.
220
+
221
+ ### Output Normalization (Parity with OpenAI when using json_schema)
222
+
223
+ - When a `json_schema` is provided and Claude returns a single `tool_use` with no text, we normalize the output to:
224
+ - `content: <parsed_object>` (Hash/Array), not a JSON string.
225
+ - This mirrors the behavior you get with OpenAI’s JSON schema mode, simplifying consumers.
226
+ - When no `json_schema` is provided, we return `tool_calls` (raw `tool_use` blocks) plus any text content.
227
+
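The normalization contract above can be sketched with plain result shapes (illustrative values, no API call; this only shows what `:content` holds in each mode):

```ruby
require 'json'

# Hypothetical results: what :content holds depending on whether a
# json_schema was supplied to the Claude completion call.
with_schema    = { content: { 'response' => 'Hello!' } }  # parsed Hash, not a JSON string
without_schema = {
  tool_calls: [{ 'type' => 'tool_use', 'name' => 'lookup', 'input' => { 'q' => 'ruby' } }],
  content: 'Some accompanying text'
}

# Consumers read fields directly instead of parsing a string:
greeting = with_schema[:content]['response']
```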
228
+ ### Error Handling & Stop Reasons
229
+
230
+ - `stop_reason: 'max_tokens'` → raises `"Incomplete response: The completion was cut off due to token limit."`
231
+ - `stop_reason: 'refusal'` → raises `Spectre::Claude::RefusalError`.
232
+ - Unexpected stop reasons raise an error to make issues explicit.
233
+
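A minimal caller-side sketch of handling these errors (the error class is stubbed here so the snippet runs standalone; in an app it comes from the gem, and `call_claude` is a made-up placeholder):

```ruby
# Stub of the gem's error class so this sketch is self-contained.
module Spectre
  module Claude
    class RefusalError < StandardError; end
  end
end

# Hypothetical call that triggers a refusal.
def call_claude
  raise Spectre::Claude::RefusalError, 'Content filtered'
end

result =
  begin
    call_claude
  rescue Spectre::Claude::RefusalError => e
    { error: :refused, message: e.message }
  rescue RuntimeError => e
    # Token-limit truncation and unexpected stop reasons arrive as RuntimeError.
    { error: :other, message: e.message }
  end
```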
234
+ ### Tools and tool_choice Support
235
+
236
+ - Pass-through for user-defined tools.
237
+ - Respect explicit `tool_choice`; only enforce schema tool when `json_schema` is present and no explicit choice is set.
238
+
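The merge behavior can be pictured as follows (tool names are made up; this mirrors the `generate_body` logic shown later in this diff):

```ruby
# How the request's tools array is assembled when a json_schema is given.
schema_tool = { name: 'structured_output', input_schema: { type: 'object' } }
user_tools  = [{ name: 'get_weather', input_schema: { type: 'object' } }]

tools       = [schema_tool] + user_tools                  # user tools preserved, not overridden
tool_choice = { type: 'tool', name: schema_tool[:name] }  # default; skipped if caller sets one
```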
239
+ ### Tests & DX
240
+
241
+ - Added a comprehensive RSpec suite for `Spectre::Claude::Completions`.
242
+ - Ensured spec loading works consistently across environments via `.rspec --require spec_helper` and consistent requires.
243
+ - Full suite passes locally (69 examples).
244
+
245
+ ### Notes
246
+
247
+ - Claude embeddings are not implemented (no native embeddings model).
248
+ - Behavior change (Claude only): when `json_schema` is used, `:content` returns a parsed object (not a JSON string). If you relied on a string, wrap with `JSON.generate` on the caller side.
249
+
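If a caller depended on the old string form, restoring it is a one-liner (value illustrative):

```ruby
require 'json'

# Under json_schema, :content is now a parsed object.
parsed = { 'response' => 'Hello!' }

# Callers that relied on a JSON string can re-serialize on their side:
as_string = JSON.generate(parsed)
```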
250
+
251
+
252
+ # Changelog for Version 2.0.0
253
+
254
+ **Release Date:** 21st Sep 2025
255
+
256
+ ### New Provider: Gemini (Google)
257
+
258
+ - Added `Spectre::Gemini` client for chat completions using Google’s OpenAI-compatible endpoint.
259
+ - Added `Spectre::Gemini` embeddings using Google’s OpenAI-compatible endpoint.
260
+ - New configuration block:
261
+ ```ruby
262
+ Spectre.setup do |c|
263
+ c.default_llm_provider = :gemini
264
+ c.gemini { |v| v.api_key = ENV['GEMINI_API_KEY'] }
265
+ end
266
+ ```
267
+ - Supports `gemini: { max_tokens: ... }` in args to control max tokens for completions.
268
+ - `json_schema` and `tools` are passed through in OpenAI-compatible format.
269
+
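Since Gemini is driven through the OpenAI-compatible endpoint, the assembled request body is plain OpenAI-style JSON; a sketch (mirroring `generate_body` later in this diff, values illustrative, no API call made):

```ruby
# OpenAI-compatible body shape sent to Gemini.
json_schema = { name: 'reply', schema: { type: 'object', properties: { text: { type: 'string' } } } }

body = { model: 'gemini-2.5-flash', messages: [{ role: 'user', content: 'Hi' }] }
body[:max_tokens]      = 256                                                # from gemini: { max_tokens: ... }
body[:response_format] = { type: 'json_schema', json_schema: json_schema }  # passed through as-is
```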
270
+ ### Core Wiring
271
+
272
+ - Added `:gemini` to `VALID_LLM_PROVIDERS` and provider configuration accessors.
273
+ - Updated Rails generator initializer template to include a gemini block.
274
+
275
+ ### Docs & Tests
276
+
277
+ - Updated README to include Gemini in compatibility matrix and configuration example.
278
+ - Added RSpec tests for Gemini completions and embeddings (mirroring OpenAI behavior and error handling).
279
+
280
+ ### Behavior Notes
281
+
282
+ - Gemini's OpenAI-compatible chat endpoint requires that the last message in `messages` have the role `user`. Spectre raises an `ArgumentError` if this requirement is not met, preventing 400 INVALID_ARGUMENT errors from the API.
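The check mirrors this standalone sketch (method name here is illustrative; the gem performs it inside `Spectre::Gemini::Completions.validate_messages!`):

```ruby
# Standalone sketch of the last-message validation.
def assert_ends_with_user!(messages)
  last_role = (messages.last[:role] || messages.last['role']).to_s
  return if last_role == 'user'
  raise ArgumentError, "Gemini: the last message must have role 'user'. Got '#{last_role}'."
end

assert_ends_with_user!([{ role: 'user', content: 'Hi' }])  # passes silently
```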
data/README.md CHANGED
@@ -6,14 +6,14 @@
6
6
 
7
7
  ## Compatibility
8
8
 
9
- | Feature | Compatibility |
10
- |-------------------------|----------------|
11
- | Foundation Models (LLM) | OpenAI, Ollama |
12
- | Embeddings | OpenAI, Ollama |
13
- | Vector Searching | MongoDB Atlas |
14
- | Prompt Templates | ✅ |
9
+ | Feature                 | Compatibility                  |
10
+ |-------------------------|--------------------------------|
11
+ | Foundation Models (LLM) | OpenAI, Ollama, Claude, Gemini |
12
+ | Embeddings              | OpenAI, Ollama, Gemini         |
13
+ | Vector Searching        | MongoDB Atlas                  |
14
+ | Prompt Templates        | ✅                             |
15
15
 
16
- **💡 Note:** We will first prioritize adding support for additional foundation models (Claude, Cohere, etc.), then look to add support for more vector databases (Pgvector, Pinecone, etc.). If you're looking for something a bit more extensible, we highly recommend checking out [langchainrb](https://github.com/patterns-ai-core/langchainrb).
16
+ **💡 Note:** We now support OpenAI, Ollama, Claude, and Gemini. Next, we'll add support for additional providers (e.g., Cohere) and more vector databases (Pgvector, Pinecone, etc.). If you're looking for something a bit more extensible, we highly recommend checking out [langchainrb](https://github.com/patterns-ai-core/langchainrb).
17
17
 
18
18
  ## Installation
19
19
 
@@ -49,7 +49,7 @@ This will create a file at `config/initializers/spectre.rb`, where you can set y
49
49
 
50
50
  ```ruby
51
51
  Spectre.setup do |config|
52
- config.default_llm_provider = :openai
52
+ config.default_llm_provider = :openai # or :claude, :ollama, :gemini
53
53
 
54
54
  config.openai do |openai|
55
55
  openai.api_key = ENV['OPENAI_API_KEY']
@@ -59,6 +59,14 @@ Spectre.setup do |config|
59
59
  ollama.host = ENV['OLLAMA_HOST']
60
60
  ollama.api_key = ENV['OLLAMA_API_KEY']
61
61
  end
62
+
63
+ config.claude do |claude|
64
+ claude.api_key = ENV['ANTHROPIC_API_KEY']
65
+ end
66
+
67
+ config.gemini do |gemini|
68
+ gemini.api_key = ENV['GEMINI_API_KEY']
69
+ end
62
70
  end
63
71
  ```
64
72
 
@@ -248,8 +256,76 @@ Spectre.provider_module::Completions.create(
248
256
 
249
257
  This structured format guarantees that the response adheres to the schema you’ve provided, ensuring more predictable and controlled results.
250
258
 
251
- **NOTE:** The JSON schema is different for each provider. OpenAI uses [JSON Schema](https://json-schema.org/overview/what-is-jsonschema.html), where you can specify the name of schema and schema itself. Ollama uses just plain JSON object.
252
- But you can provide OpenAI's schema to Ollama as well. We just convert it to Ollama's format.
259
+ **NOTE:** Provider differences for structured output:
260
+ - OpenAI: supports strict JSON Schema via `response_format.json_schema` (see JSON Schema docs: https://json-schema.org/overview/what-is-jsonschema.html).
261
+ - Claude (Anthropic): does not use `response_format`. Spectre converts your `json_schema` into a single "virtual" tool with `input_schema` and, by default, forces its use via `tool_choice` (you can override `tool_choice` explicitly). When the reply consists only of that `tool_use`, Spectre returns the parsed object in `:content` (Hash/Array), not a JSON string.
262
+ - Ollama: expects a plain JSON object in `format`. Spectre will convert OpenAI-style `{ name:, schema: }` automatically into the format Ollama expects.
263
+
264
+ #### Claude (Anthropic) specifics
265
+
266
+ - Configure:
267
+ ```ruby
268
+ Spectre.setup do |config|
269
+ config.default_llm_provider = :claude
270
+ config.claude { |c| c.api_key = ENV['ANTHROPIC_API_KEY'] }
271
+ end
272
+ ```
273
+
274
+ - Structured output with a schema:
275
+ ```ruby
276
+ json_schema = {
277
+ name: "completion_response",
278
+ schema: {
279
+ type: "object",
280
+ properties: { response: { type: "string" } },
281
+ required: ["response"],
282
+ additionalProperties: false
283
+ }
284
+ }
285
+
286
+ messages = [
287
+ { role: 'system', content: 'You are a helpful assistant.' },
288
+ { role: 'user', content: 'Say hello' }
289
+ ]
290
+
291
+ result = Spectre.provider_module::Completions.create(
292
+ messages: messages,
293
+ json_schema: json_schema,
294
+ claude: { max_tokens: 256 }
295
+ )
296
+
297
+ # When only the schema tool is used, Spectre returns a parsed object:
298
+ result[:content] # => { 'response' => 'Hello!' }
299
+ ```
300
+
301
+ - Optional: override tool selection
302
+ ```ruby
303
+ Spectre.provider_module::Completions.create(messages: messages, json_schema: json_schema, tool_choice: { type: 'auto' })
304
+ ```
305
+
306
+ - Note: Claude embeddings are not implemented (no native embeddings model).
307
+
308
+ #### Gemini (Google) specifics
309
+
310
+ - Chat completions use Google's OpenAI-compatible endpoint. Important: the messages array must end with a user message. If the last message is assistant/system or missing, the API returns 400 INVALID_ARGUMENT (e.g., "Please ensure that single turn requests end with a user role or the role field is empty."). Spectre validates this up front and raises an ArgumentError so you can fix the history before making an API call.
311
+ - Example:
312
+
313
+ ```ruby
314
+ # Incorrect (ends with assistant)
315
+ messages = [
316
+ { role: 'system', content: 'You are a funny assistant.' },
317
+ { role: 'user', content: 'Tell me a joke.' },
318
+ { role: 'assistant', content: "Sure, here's a joke!" }
319
+ ]
320
+
321
+ # Correct (ends with user)
322
+ messages = [
323
+ { role: 'system', content: 'You are a funny assistant.' },
324
+ { role: 'user', content: 'Tell me a joke.' },
325
+ { role: 'assistant', content: "Sure, here's a joke!" },
326
+ { role: 'user', content: 'Tell me another one.' }
327
+ ]
328
+ ```
253
329
 
254
330
  ⚙️ Function Calling (Tool Use)
255
331
 
@@ -3,7 +3,7 @@
3
3
  require 'spectre'
4
4
 
5
5
  Spectre.setup do |config|
6
- # Chose your LLM (openai, ollama)
6
+ # Chose your LLM (openai, ollama, claude, gemini)
7
7
  config.default_llm_provider = :openai
8
8
 
9
9
  config.openai do |openai|
@@ -14,4 +14,12 @@ Spectre.setup do |config|
14
14
  ollama.host = ENV['OLLAMA_HOST']
15
15
  ollama.api_key = ENV['OLLAMA_API_KEY']
16
16
  end
17
+
18
+ config.claude do |claude|
19
+ claude.api_key = ENV['ANTHROPIC_API_KEY']
20
+ end
21
+
22
+ config.gemini do |gemini|
23
+ gemini.api_key = ENV['GEMINI_API_KEY']
24
+ end
17
25
  end
@@ -0,0 +1,207 @@
1
+ # frozen_string_literal: true
2
+
3
+ require 'net/http'
4
+ require 'json'
5
+ require 'uri'
6
+
7
+ module Spectre
8
+ module Claude
9
+ class RefusalError < StandardError; end
10
+
11
+ class Completions
12
+ API_URL = 'https://api.anthropic.com/v1/messages'
13
+ DEFAULT_MODEL = 'claude-opus-4-1'
14
+ DEFAULT_TIMEOUT = 60
15
+ ANTHROPIC_VERSION = '2023-06-01'
16
+
17
+ # Class method to generate a completion based on user messages and optional tools
18
+ #
19
+ # @param messages [Array<Hash>] The conversation messages, each with a role and content
20
+ # @param model [String] The model to be used for generating completions, defaults to DEFAULT_MODEL
21
+ # @param json_schema [Hash, nil] Optional JSON Schema; when provided, it will be converted into a tool with input_schema and forced via tool_choice unless overridden
22
+ # @param tools [Array<Hash>, nil] An optional array of tool definitions for function calling
23
+ # @param tool_choice [Hash, nil] Optional tool_choice to force a specific tool use (e.g., { type: 'tool', name: 'record_summary' })
24
+ # @param args [Hash, nil] optional arguments like read_timeout and open_timeout. For Claude, max_tokens can be passed in the claude hash.
25
+ # @return [Hash] The parsed response including any tool calls or content
26
+ # @raise [APIKeyNotConfiguredError] If the API key is not set
27
+ # @raise [RuntimeError] For general API errors or unexpected issues
28
+ def self.create(messages:, model: DEFAULT_MODEL, json_schema: nil, tools: nil, tool_choice: nil, **args)
29
+ api_key = Spectre.claude_configuration&.api_key
30
+ raise APIKeyNotConfiguredError, "API key is not configured" unless api_key
31
+
32
+ validate_messages!(messages)
33
+
34
+ uri = URI(API_URL)
35
+ http = Net::HTTP.new(uri.host, uri.port)
36
+ http.use_ssl = true
37
+ http.read_timeout = args.fetch(:read_timeout, DEFAULT_TIMEOUT)
38
+ http.open_timeout = args.fetch(:open_timeout, DEFAULT_TIMEOUT)
39
+
40
+ request = Net::HTTP::Post.new(uri.path, {
41
+ 'Content-Type' => 'application/json',
42
+ 'x-api-key' => api_key,
43
+ 'anthropic-version' => ANTHROPIC_VERSION
44
+ })
45
+
46
+ max_tokens = args.dig(:claude, :max_tokens) || 1024
47
+ request.body = generate_body(messages, model, json_schema, max_tokens, tools, tool_choice).to_json
48
+ response = http.request(request)
49
+
50
+ unless response.is_a?(Net::HTTPSuccess)
51
+ raise "Claude API Error: #{response.code} - #{response.message}: #{response.body}"
52
+ end
53
+
54
+ parsed_response = JSON.parse(response.body)
55
+
56
+ handle_response(parsed_response, schema_used: !!json_schema)
57
+ rescue JSON::ParserError => e
58
+ raise "JSON Parse Error: #{e.message}"
59
+ end
60
+
61
+ private
62
+
63
+ # Validate the structure and content of the messages array.
64
+ #
65
+ # @param messages [Array<Hash>] The array of message hashes to validate.
66
+ #
67
+ # @raise [ArgumentError] if the messages array is not in the expected format or contains invalid data.
68
+ def self.validate_messages!(messages)
69
+ unless messages.is_a?(Array) && messages.all? { |msg| msg.is_a?(Hash) }
70
+ raise ArgumentError, "Messages must be an array of message hashes."
71
+ end
72
+
73
+ if messages.empty?
74
+ raise ArgumentError, "Messages cannot be empty."
75
+ end
76
+ end
77
+
78
+ # Helper method to generate the request body for Anthropic Messages API
79
+ #
80
+ # @param messages [Array<Hash>] The conversation messages, each with a role and content
81
+ # @param model [String] The model to be used for generating completions
82
+ # @param json_schema [Hash, nil] An optional JSON schema to hint structured output
83
+ # @param max_tokens [Integer] The maximum number of tokens for the completion
84
+ # @param tools [Array<Hash>, nil] An optional array of tool definitions for function calling
85
+ # @param tool_choice [Hash, nil] Optional tool_choice; when json_schema is present and no explicit choice is given, the schema tool is forced
+ # @return [Hash] The body for the API request
86
+ def self.generate_body(messages, model, json_schema, max_tokens, tools, tool_choice)
87
+ system_prompts, chat_messages = partition_system_and_chat(messages)
88
+
89
+ body = {
90
+ model: model,
91
+ max_tokens: max_tokens,
92
+ messages: chat_messages
93
+ }
94
+
95
+ # Join multiple system prompts into one. Anthropic supports a string here.
96
+ body[:system] = system_prompts.join("\n\n") unless system_prompts.empty?
97
+
98
+ # If a json_schema is provided, transform it into a "virtual" tool and force its use via tool_choice (unless already provided).
99
+ if json_schema
100
+ # Normalize schema input: accept anthropic-style { json_schema: { name:, schema:, strict: } },
101
+ # OpenAI-like { name:, schema:, strict: }, or a raw schema object.
102
+ if json_schema.is_a?(Hash) && (json_schema.key?(:json_schema) || json_schema.key?("json_schema"))
103
+ schema_payload = json_schema[:json_schema] || json_schema["json_schema"]
104
+ schema_name = (schema_payload[:name] || schema_payload["name"] || "structured_output").to_s
105
+ schema_object = schema_payload[:schema] || schema_payload["schema"] || schema_payload
106
+ else
107
+ schema_name = (json_schema.is_a?(Hash) && (json_schema[:name] || json_schema["name"])) || "structured_output"
108
+ schema_object = (json_schema.is_a?(Hash) && (json_schema[:schema] || json_schema["schema"])) || json_schema
109
+ end
110
+
111
+ schema_tool = {
112
+ name: schema_name,
113
+ description: "Return a JSON object that strictly follows the provided input_schema.",
114
+ input_schema: schema_object
115
+ }
116
+
117
+ # Merge with any user-provided tools. Prefer a single tool by default but don't drop existing tools.
118
+ existing_tools = tools || []
119
+ body[:tools] = [schema_tool] + existing_tools
120
+
121
+ # If the caller didn't specify tool_choice, force using the schema tool.
122
+ body[:tool_choice] = { type: 'tool', name: schema_name } unless tool_choice
123
+ end
124
+
125
+ body[:tools] = tools if tools && !body.key?(:tools)
126
+ body[:tool_choice] = tool_choice if tool_choice
127
+
128
+ body
129
+ end
130
+
131
+ # Normalize content for Anthropic: preserve arrays/hashes (structured blocks), stringify otherwise
132
+ def self.normalize_content(content)
133
+ case content
134
+ when Array
135
+ content
136
+ when Hash
137
+ content
138
+ else
139
+ content.to_s
140
+ end
141
+ end
142
+
143
+ # Partition system messages and convert remaining into Anthropic-compatible messages
144
+ def self.partition_system_and_chat(messages)
145
+ system_prompts = []
146
+ chat_messages = []
147
+
148
+ messages.each do |msg|
149
+ role = (msg[:role] || msg['role']).to_s
150
+ content = msg[:content] || msg['content']
151
+
152
+ case role
153
+ when 'system'
154
+ system_prompts << content.to_s
155
+ when 'user', 'assistant'
156
+ chat_messages << { role: role, content: normalize_content(content) }
157
+ else
158
+ # Unknown role, treat as user to avoid API errors
159
+ chat_messages << { role: 'user', content: normalize_content(content) }
160
+ end
161
+ end
162
+
163
+ [system_prompts, chat_messages]
164
+ end
165
+
166
+ # Handles the API response, raising errors for specific cases and returning structured content otherwise
167
+ #
168
+ # @param response [Hash] The parsed API response
169
+ # @param schema_used [Boolean] Whether the request used a JSON schema (tools-based) and needs normalization
170
+ # @return [Hash] The relevant data based on the stop_reason
171
+ def self.handle_response(response, schema_used: false)
172
+ content_blocks = response['content'] || []
173
+ stop_reason = response['stop_reason']
174
+
175
+ text_content = content_blocks.select { |b| b['type'] == 'text' }.map { |b| b['text'] }.join
176
+ tool_uses = content_blocks.select { |b| b['type'] == 'tool_use' }
177
+
178
+ if stop_reason == 'max_tokens'
179
+ raise "Incomplete response: The completion was cut off due to token limit."
180
+ end
181
+
182
+ if stop_reason == 'refusal'
183
+ raise RefusalError, "Content filtered: The model's output was blocked due to policy violations."
184
+ end
185
+
186
+ # If a json_schema was provided and Claude produced a single tool_use with no text,
187
+ # treat it as structured JSON output and return the parsed object in :content.
188
+ if schema_used && tool_uses.length == 1 && (text_content.nil? || text_content.strip.empty?)
189
+ input = tool_uses.first['input']
190
+ return({ content: input }) if input.is_a?(Hash) || input.is_a?(Array)
191
+ end
192
+
193
+ if !tool_uses.empty?
194
+ return { tool_calls: tool_uses, content: text_content }
195
+ end
196
+
197
+ # Normal end of turn
198
+ if stop_reason == 'end_turn' || stop_reason.nil?
199
+ return { content: text_content }
200
+ end
201
+
202
+ # Handle unexpected stop reasons
203
+ raise "Unexpected stop_reason: #{stop_reason}"
204
+ end
205
+ end
206
+ end
207
+ end
@@ -0,0 +1,8 @@
1
+ # frozen_string_literal: true
2
+
3
+ module Spectre
4
+ module Claude
5
+ # Require each specific client file here
6
+ require_relative 'claude/completions'
7
+ end
8
+ end
@@ -0,0 +1,120 @@
1
+ # frozen_string_literal: true
2
+
3
+ require 'net/http'
4
+ require 'json'
5
+ require 'uri'
6
+
7
+ module Spectre
8
+ module Gemini
9
+ class Completions
10
+ # Using Google's OpenAI-compatible endpoint
11
+ API_URL = 'https://generativelanguage.googleapis.com/v1beta/openai/chat/completions'
12
+ DEFAULT_MODEL = 'gemini-2.5-flash'
13
+ DEFAULT_TIMEOUT = 60
14
+
15
+ # Class method to generate a completion based on user messages and optional tools
16
+ #
17
+ # @param messages [Array<Hash>] The conversation messages, each with a role and content
18
+ # @param model [String] The model to be used for generating completions, defaults to DEFAULT_MODEL
19
+ # @param json_schema [Hash, nil] An optional JSON schema to enforce structured output (OpenAI-compatible "response_format")
20
+ # @param tools [Array<Hash>, nil] An optional array of tool definitions for function calling
21
+ # @param args [Hash, nil] optional arguments like read_timeout and open_timeout. For Gemini, max_tokens can be passed in the gemini hash.
22
+ # @return [Hash] The parsed response including any function calls or content
23
+ # @raise [APIKeyNotConfiguredError] If the API key is not set
24
+ # @raise [RuntimeError] For general API errors or unexpected issues
25
+ def self.create(messages:, model: DEFAULT_MODEL, json_schema: nil, tools: nil, **args)
26
+ api_key = Spectre.gemini_configuration&.api_key
27
+ raise APIKeyNotConfiguredError, "API key is not configured" unless api_key
28
+
29
+ validate_messages!(messages)
30
+
31
+ uri = URI(API_URL)
32
+ http = Net::HTTP.new(uri.host, uri.port)
33
+ http.use_ssl = true
34
+ http.read_timeout = args.fetch(:read_timeout, DEFAULT_TIMEOUT)
35
+ http.open_timeout = args.fetch(:open_timeout, DEFAULT_TIMEOUT)
36
+
37
+ request = Net::HTTP::Post.new(uri.path, {
38
+ 'Content-Type' => 'application/json',
39
+ 'Authorization' => "Bearer #{api_key}"
40
+ })
41
+
42
+ max_tokens = args.dig(:gemini, :max_tokens)
43
+ request.body = generate_body(messages, model, json_schema, max_tokens, tools).to_json
44
+ response = http.request(request)
45
+
46
+ unless response.is_a?(Net::HTTPSuccess)
47
+ raise "Gemini API Error: #{response.code} - #{response.message}: #{response.body}"
48
+ end
49
+
50
+ parsed_response = JSON.parse(response.body)
51
+
52
+ handle_response(parsed_response)
53
+ rescue JSON::ParserError => e
54
+ raise "JSON Parse Error: #{e.message}"
55
+ end
56
+
57
+ private
58
+
59
+ # Validate the structure and content of the messages array.
60
+ def self.validate_messages!(messages)
61
+ unless messages.is_a?(Array) && messages.all? { |msg| msg.is_a?(Hash) }
62
+ raise ArgumentError, "Messages must be an array of message hashes."
63
+ end
64
+
65
+ if messages.empty?
66
+ raise ArgumentError, "Messages cannot be empty."
67
+ end
68
+
69
+ # Gemini's OpenAI-compatible chat endpoint requires that single-turn
70
+ # and general requests end with a user message. If not, raise a clear error.
71
+ last_role = (messages.last[:role] || messages.last['role']).to_s
72
+ unless last_role == 'user'
73
+ raise ArgumentError, "Gemini: the last message must have role 'user'. Got '#{last_role}'."
74
+ end
75
+ end
76
+
77
+ # Helper method to generate the request body (OpenAI-compatible)
78
+ def self.generate_body(messages, model, json_schema, max_tokens, tools)
79
+ body = {
80
+ model: model,
81
+ messages: messages
82
+ }
83
+
84
+ body[:max_tokens] = max_tokens if max_tokens
85
+ body[:response_format] = { type: 'json_schema', json_schema: json_schema } if json_schema
86
+ body[:tools] = tools if tools
87
+
88
+ body
89
+ end
90
+
91
+ # Handles the API response, mirroring OpenAI semantics
92
+ def self.handle_response(response)
93
+ message = response.dig('choices', 0, 'message')
94
+ finish_reason = response.dig('choices', 0, 'finish_reason')
95
+
96
+ if message && message['refusal']
97
+ raise "Refusal: #{message['refusal']}"
98
+ end
99
+
100
+ if finish_reason == 'length'
101
+ raise "Incomplete response: The completion was cut off due to token limit."
102
+ end
103
+
104
+ if finish_reason == 'content_filter'
105
+ raise "Content filtered: The model's output was blocked due to policy violations."
106
+ end
107
+
108
+ if finish_reason == 'function_call' || finish_reason == 'tool_calls'
109
+ return { tool_calls: message['tool_calls'], content: message['content'] }
110
+ end
111
+
112
+ if finish_reason == 'stop'
113
+ return { content: message['content'] }
114
+ end
115
+
116
+ raise "Unexpected finish_reason: #{finish_reason}"
117
+ end
118
+ end
119
+ end
120
+ end
@@ -0,0 +1,44 @@
1
+ # frozen_string_literal: true
2
+
3
+ require 'net/http'
4
+ require 'json'
5
+ require 'uri'
6
+
7
+ module Spectre
8
+ module Gemini
9
+ class Embeddings
10
+ # Using Google's OpenAI-compatible endpoint
11
+ API_URL = 'https://generativelanguage.googleapis.com/v1beta/openai/embeddings'
12
+ DEFAULT_MODEL = 'gemini-embedding-001'
13
+ DEFAULT_TIMEOUT = 60
14
+
15
+ # Generate embeddings for text
16
+ def self.create(text, model: DEFAULT_MODEL, **args)
17
+ api_key = Spectre.gemini_configuration&.api_key
18
+ raise APIKeyNotConfiguredError, "API key is not configured" unless api_key
19
+
20
+ uri = URI(API_URL)
21
+ http = Net::HTTP.new(uri.host, uri.port)
22
+ http.use_ssl = true
23
+ http.read_timeout = args.fetch(:read_timeout, DEFAULT_TIMEOUT)
24
+ http.open_timeout = args.fetch(:open_timeout, DEFAULT_TIMEOUT)
25
+
26
+ request = Net::HTTP::Post.new(uri.path, {
27
+ 'Content-Type' => 'application/json',
28
+ 'Authorization' => "Bearer #{api_key}"
29
+ })
30
+
31
+ request.body = { model: model, input: text }.to_json
32
+ response = http.request(request)
33
+
34
+ unless response.is_a?(Net::HTTPSuccess)
35
+ raise "Gemini API Error: #{response.code} - #{response.message}: #{response.body}"
36
+ end
37
+
38
+ JSON.parse(response.body).dig('data', 0, 'embedding')
39
+ rescue JSON::ParserError => e
40
+ raise "JSON Parse Error: #{e.message}"
41
+ end
42
+ end
43
+ end
44
+ end
@@ -0,0 +1,8 @@
1
+ # frozen_string_literal: true
2
+
3
+ module Spectre
4
+ module Gemini
5
+ require_relative 'gemini/completions'
6
+ require_relative 'gemini/embeddings'
7
+ end
8
+ end
@@ -1,5 +1,5 @@
1
1
  # frozen_string_literal: true
2
2
 
3
3
  module Spectre # :nodoc:all
4
- VERSION = "1.2.0"
4
+ VERSION = "2.0.0"
5
5
  end
data/lib/spectre.rb CHANGED
@@ -5,6 +5,8 @@ require "spectre/embeddable"
5
5
  require 'spectre/searchable'
6
6
  require "spectre/openai"
7
7
  require "spectre/ollama"
8
+ require "spectre/claude"
9
+ require "spectre/gemini"
8
10
  require "spectre/logging"
9
11
  require 'spectre/prompt'
10
12
  require 'spectre/errors'
@@ -12,7 +14,9 @@ require 'spectre/errors'
12
14
  module Spectre
13
15
  VALID_LLM_PROVIDERS = {
14
16
  openai: Spectre::Openai,
15
- ollama: Spectre::Ollama
17
+ ollama: Spectre::Ollama,
18
+ claude: Spectre::Claude,
19
+ gemini: Spectre::Gemini
16
20
  # cohere: Spectre::Cohere,
17
21
  }.freeze
18
22
 
@@ -52,6 +56,16 @@ module Spectre
52
56
  yield @providers[:ollama] if block_given?
53
57
  end
54
58
 
59
+ def claude
60
+ @providers[:claude] ||= ClaudeConfiguration.new
61
+ yield @providers[:claude] if block_given?
62
+ end
63
+
64
+ def gemini
65
+ @providers[:gemini] ||= GeminiConfiguration.new
66
+ yield @providers[:gemini] if block_given?
67
+ end
68
+
55
69
  def provider_configuration
56
70
  providers[default_llm_provider] || raise("No configuration found for provider: #{default_llm_provider}")
57
71
  end
@@ -65,6 +79,14 @@ module Spectre
65
79
  attr_accessor :host, :api_key
66
80
  end
67
81
 
82
+ class ClaudeConfiguration
83
+ attr_accessor :api_key
84
+ end
85
+
86
+ class GeminiConfiguration
87
+ attr_accessor :api_key
88
+ end
89
+
68
90
  class << self
69
91
  attr_accessor :config
70
92
 
@@ -90,6 +112,14 @@ module Spectre
90
112
  config.providers[:ollama]
91
113
  end
92
114
 
115
+ def claude_configuration
116
+ config.providers[:claude]
117
+ end
118
+
119
+ def gemini_configuration
120
+ config.providers[:gemini]
121
+ end
122
+
93
123
  private
94
124
 
95
125
  def validate_llm_provider!
metadata CHANGED
@@ -1,7 +1,7 @@
1
1
  --- !ruby/object:Gem::Specification
2
2
  name: spectre_ai
3
3
  version: !ruby/object:Gem::Version
4
- version: 1.2.0
4
+ version: 2.0.0
5
5
  platform: ruby
6
6
  authors:
7
7
  - Ilya Klapatok
@@ -9,7 +9,7 @@ authors:
9
9
  autorequire:
10
10
  bindir: bin
11
11
  cert_chain: []
12
- date: 2025-01-29 00:00:00.000000000 Z
12
+ date: 2025-09-24 00:00:00.000000000 Z
13
13
  dependencies:
14
14
  - !ruby/object:Gem::Dependency
15
15
  name: rspec-rails
@@ -53,8 +53,13 @@ files:
53
53
  - lib/generators/spectre/templates/rag/user.yml.erb
54
54
  - lib/generators/spectre/templates/spectre_initializer.rb
55
55
  - lib/spectre.rb
56
+ - lib/spectre/claude.rb
57
+ - lib/spectre/claude/completions.rb
56
58
  - lib/spectre/embeddable.rb
57
59
  - lib/spectre/errors.rb
60
+ - lib/spectre/gemini.rb
61
+ - lib/spectre/gemini/completions.rb
62
+ - lib/spectre/gemini/embeddings.rb
58
63
  - lib/spectre/logging.rb
59
64
  - lib/spectre/ollama.rb
60
65
  - lib/spectre/ollama/completions.rb