spectre_ai 1.1.4 → 1.2.0

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: e92299b643fbf7d928b9c45de5ad9a504528cbb246202d56cb24d36a32030030
- data.tar.gz: 8dc5d19040a9cac2cc929a1a91b55112d0e78fa21ca8de05bb6c3544838af9ed
+ metadata.gz: 68106b39f46d2b4069e560eb8e51dcc64ed005a05d9f062919db94e628c2c5f4
+ data.tar.gz: c35b62f8f973763c2029620a0b4608e0dd15591c991e9a13b91d427e2b5dddd7
  SHA512:
- metadata.gz: 8ea61b8b0a0e23d7a5c500fd99e147942fd33538e21d38be0030aff60dda20758a87651b9460d4b4de50f986ffe068da4dd29f172dc0a25a993595eadee34fe6
- data.tar.gz: b4ef2e616f0143f677c635e6d999e155a1205a1ab90ac47fc2a8c322b95ee6f107522dc4c2d8a16e33a856f3387ffdb1e7ac85ca9f83b3a72960e9b8e6d52ca1
+ metadata.gz: 7c4632584286d800799a66a1b5ac2f2c5dbb6a9c35597f6c1256dffa94a9f228c2c0ff5aa1a30170a1aae93d9d26cbad42df55297468b853471a3559aa72c69a
+ data.tar.gz: 998ab9f6d356b9f3f9cc404260ea931c35a4e80abb2b6b5f9da1757fbdbc39365bfc0041c9a9ae934f97c954d66fa9ffc8c75a8d66b4228cf5db3f26972bd2e0
data/CHANGELOG.md CHANGED
@@ -138,4 +138,63 @@ Spectre::Openai::Completions.create(
 
  * Simplified Exception Handling for Timeouts
  * Removed explicit handling of Net::OpenTimeout and Net::ReadTimeout exceptions in both Completions and Embeddings classes.
- * Letting these exceptions propagate ensures clearer and more consistent error messages for timeout issues.
+ * Letting these exceptions propagate ensures clearer and more consistent error messages for timeout issues.
+
+
+ # Changelog for Version 1.2.0
+
+ **Release Date:** 30th Jan 2025
+
+ ### **New Features & Enhancements**
+
+ 1️⃣ **Unified Configuration for LLM Providers**
+
+ 🔧 Refactored the configuration system to provide a consistent interface for setting up OpenAI and Ollama within `config/initializers/spectre.rb`.\
+ • Developers can now seamlessly switch between OpenAI and Ollama by defining a single provider configuration block.\
+ • Ensures better modularity and simplifies adding support for future providers (Claude, Cohere, etc.).
+
+ 🔑 **Example Configuration:**
+
+ ```ruby
+ Spectre.setup do |config|
+   config.default_llm_provider = :openai
+
+   config.openai do |openai|
+     openai.api_key = ENV['OPENAI_API_KEY']
+   end
+
+   config.ollama do |ollama|
+     ollama.host = ENV['OLLAMA_HOST']
+     ollama.api_key = ENV['OLLAMA_API_KEY']
+   end
+ end
+ ```
+
+ Key Improvements:\
+ ✅ API key validation added: now properly checks whether `api_key` is missing and raises `APIKeyNotConfiguredError`.\
+ ✅ Host validation added: now checks whether `host` is missing for Ollama and raises `HostNotConfiguredError`.
+
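+ A minimal sketch of rescuing these configuration errors in application code (the handler below is illustrative, not part of the gem):
+
+ ```ruby
+ begin
+   Spectre.provider_module::Completions.create(messages: messages)
+ rescue Spectre::APIKeyNotConfiguredError, Spectre::HostNotConfiguredError => e
+   # Configuration is incomplete; surface a clear setup error instead of a raw HTTP failure
+   Rails.logger.error("Spectre configuration error: #{e.message}")
+ end
+ ```
+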
+ 2️⃣ **Added Ollama Provider Support**
+
+ 🆕 Introduced full support for Ollama, allowing users to run local LLM models efficiently.\
+ • Supports Ollama-based completions for generating text using local models like llama3.\
+ • Supports Ollama-based embeddings for generating embeddings using local models like nomic-embed-text.\
+ • Automatic JSON Schema Conversion: OpenAI’s json_schema format is now automatically translated into Ollama’s format key (see the sketch below).
+
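+ A minimal sketch of the conversion (the schema contents are hypothetical; the translation mirrors the new Ollama::Completions.generate_body):
+
+ ```ruby
+ # OpenAI-style structured-output definition: a wrapper with a :schema key
+ json_schema = {
+   name: 'person',
+   schema: { type: 'object', properties: { name: { type: 'string' } }, required: ['name'] }
+ }
+
+ # When sent to the Ollama provider, only the inner schema is forwarded as the request's `format` key
+ Spectre.provider_module::Completions.create(messages: messages, json_schema: json_schema)
+ ```
+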
+ 3️⃣ **OpenAI Interface Change: `max_tokens` Moved into `**args`**
+
+ 💡 Refactored the OpenAI completions request so that `max_tokens` is now passed as a dynamic argument inside `**args` instead of as a separate parameter.\
+ • Why? To ensure a consistent interface across different providers, making it easier to switch between them seamlessly.\
+ • Before:
+ ```ruby
+ Spectre.provider_module::Completions.create(messages: messages, max_tokens: 50)
+ ```
+ • After:
+ ```ruby
+ Spectre.provider_module::Completions.create(messages: messages, openai: { max_tokens: 50 })
+ ```
+
+ Key Benefits:\
+ ✅ Keeps the method signature cleaner and future-proof.\
+ ✅ Ensures optional parameters are handled dynamically without cluttering the main method signature.\
+ ✅ Improves consistency across the OpenAI and Ollama providers (see the sketch below).
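+
+ For instance, both provider hashes can be passed side by side; each provider reads only its own key, so switching `default_llm_provider` needs no call-site changes. (`num_predict` is Ollama's token-limit option; treat the exact option names as assumptions to verify against Ollama's docs.)
+
+ ```ruby
+ Spectre.provider_module::Completions.create(
+   messages: messages,
+   openai: { max_tokens: 50 },              # read only by the OpenAI provider
+   ollama: { options: { num_predict: 50 } } # read only by the Ollama provider
+ )
+ ```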
data/README.md CHANGED
@@ -6,14 +6,14 @@
 
  ## Compatibility
 
- | Feature                 | Compatibility |
- |-------------------------|---------------|
- | Foundation Models (LLM) | OpenAI        |
- | Embeddings              | OpenAI        |
- | Vector Searching        | MongoDB Atlas |
- | Prompt Templates        | OpenAI        |
+ | Feature                 | Compatibility  |
+ |-------------------------|----------------|
+ | Foundation Models (LLM) | OpenAI, Ollama |
+ | Embeddings              | OpenAI, Ollama |
+ | Vector Searching        | MongoDB Atlas  |
+ | Prompt Templates        |                |
 
- **💡 Note:** We will first prioritize adding support for additional foundation models (Claude, Cohere, LLaMA, etc.), then look to add support for more vector databases (Pgvector, Pinecone, etc.). If you're looking for something a bit more extensible, we highly recommend checking out [langchainrb](https://github.com/patterns-ai-core/langchainrb).
+ **💡 Note:** We will first prioritize adding support for additional foundation models (Claude, Cohere, etc.), then look to add support for more vector databases (Pgvector, Pinecone, etc.). If you're looking for something a bit more extensible, we highly recommend checking out [langchainrb](https://github.com/patterns-ai-core/langchainrb).
 
  ## Installation
 
@@ -37,24 +37,32 @@ gem install spectre_ai
 
  ## Usage
 
- ### 1. Setup
+ ### 🔧 Configuration
 
- First, you’ll need to generate the initializer to configure your OpenAI API key. Run the following command to create the initializer:
+ First, you’ll need to generate the initializer. Run the following command:
 
  ```bash
  rails generate spectre:install
  ```
 
- This will create a file at `config/initializers/spectre.rb`, where you can set your OpenAI API key:
+ This will create a file at `config/initializers/spectre.rb`, where you can set your LLM provider and configure the provider-specific settings.
 
  ```ruby
  Spectre.setup do |config|
-   config.api_key = 'your_openai_api_key'
-   config.llm_provider = :openai
+   config.default_llm_provider = :openai
+
+   config.openai do |openai|
+     openai.api_key = ENV['OPENAI_API_KEY']
+   end
+
+   config.ollama do |ollama|
+     ollama.host = ENV['OLLAMA_HOST']
+     ollama.api_key = ENV['OLLAMA_API_KEY']
+   end
  end
  ```
 
- ### 2. Enable Your Rails Model(s)
+ ### 📡 Embeddings & Vector Search
 
  #### For Embedding
 
@@ -146,6 +154,8 @@ This method sends the text to OpenAI’s API and returns the embedding vector. Y
  Spectre.provider_module::Embeddings.create("Your text here", model: "text-embedding-ada-002")
  ```
 
+ **NOTE:** Different providers accept different args for the `create` method. Please refer to the provider-specific documentation for more details.
+
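+ For instance, a sketch of the Ollama-specific options accepted by the new provider (`path` and `param_name` shown are the gem's defaults; the timeout override is illustrative):
+
+ ```ruby
+ Spectre.provider_module::Embeddings.create(
+   "Your text here",
+   model: "nomic-embed-text",
+   read_timeout: 30,                                        # request timeouts are plain args
+   ollama: { path: "api/embeddings", param_name: "prompt" } # provider-specific overrides
+ )
+ ```
+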
  ### 4. Performing Vector-Based Searches
 
  Once your model is configured as searchable, you can perform vector-based searches on the stored embeddings:
@@ -168,7 +178,7 @@ This method will:
  - **custom_result_fields:** Limit the fields returned in the search results.
  - **additional_scopes:** Apply additional MongoDB filters to the search results.
 
- ### 5. Creating Completions
+ ### 💬 Chat Completions
 
  Spectre provides an interface to create chat completions using your configured LLM provider, allowing you to create dynamic responses, messages, or other forms of text.
 
@@ -182,17 +192,14 @@ messages = [
    { role: 'user', content: "Tell me a joke." }
  ]
 
- Spectre.provider_module::Completions.create(
-   messages: messages
- )
-
+ Spectre.provider_module::Completions.create(messages: messages)
  ```
 
  This sends the request to the LLM provider’s API and returns the chat completion.
 
  **Customizing the Completion**
 
- You can customize the behavior by specifying additional parameters such as the model, maximum number of tokens, and any tools needed for function calls:
+ You can customize the behavior by specifying additional parameters, such as the model or any tools needed for function calls:
 
  ```ruby
  messages = [
@@ -204,7 +211,7 @@ messages = [
  Spectre.provider_module::Completions.create(
    messages: messages,
    model: "gpt-4",
-   max_tokens: 50
+   openai: { max_tokens: 50 }
  )
 
  ```
@@ -241,7 +248,10 @@ Spectre.provider_module::Completions.create(
 
  This structured format guarantees that the response adheres to the schema you’ve provided, ensuring more predictable and controlled results.
 
- **Using Tools for Function Calling**
+ **NOTE:** The JSON schema format differs between providers. OpenAI uses [JSON Schema](https://json-schema.org/overview/what-is-jsonschema.html), where you specify both the schema's name and the schema itself, while Ollama expects a plain JSON object.
+ You can still pass an OpenAI-style schema to Ollama; it is converted to Ollama's format automatically (see the sketch below).
+
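+ A brief sketch of the two accepted shapes (the field contents are hypothetical):
+
+ ```ruby
+ # OpenAI-style: a named wrapper with a :schema key
+ openai_style = { name: 'person', schema: { type: 'object', properties: { name: { type: 'string' } } } }
+
+ # Ollama-native: the plain schema object, used as-is
+ ollama_style = { type: 'object', properties: { name: { type: 'string' } } }
+ ```
+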
+ **⚙️ Function Calling (Tool Use)**
 
  You can incorporate tools (function calls) in your completion to handle more complex interactions such as fetching external information via API or performing calculations. Define tools using the function call format and include them in the request:
 
@@ -321,7 +331,9 @@ else
  end
  ```
 
- ### 6. Creating Dynamic Prompts
+ **NOTE:** The Completions class also supports different `**args` for different providers (see the sketch below). Please refer to the provider-specific documentation for more details.
+
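+ For instance, a sketch of the Ollama-specific `**args` accepted by the new provider (`path` defaults to `api/chat`; the option values are illustrative):
+
+ ```ruby
+ Spectre.provider_module::Completions.create(
+   messages: messages,
+   ollama: { path: 'api/chat', options: { temperature: 0.2 } } # options maps to Ollama model parameters
+ )
+ ```
+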
+ ### 🎭 Dynamic Prompt Rendering
 
  Spectre provides a system for creating dynamic prompts based on templates. You can define reusable prompt templates and render them with different parameters in your Rails app (think Ruby on Rails view partials).
 
@@ -424,7 +436,7 @@ Spectre.provider_module::Completions.create(
 
  ```
 
- ## Contributing
+ ## 📜 Contributing
 
  Bug reports and pull requests are welcome on GitHub at [https://github.com/hiremav/spectre](https://github.com/hiremav/spectre). This project is intended to be a safe, welcoming space for collaboration, and your contributions are greatly appreciated!
 
@@ -434,6 +446,6 @@ Bug reports and pull requests are welcome on GitHub at [https://github.com/hirem
  4. **Push** the branch (`git push origin my-new-feature`).
  5. **Create** a pull request.
 
- ## License
+ ## 📜 License
 
  This gem is available as open source under the terms of the MIT License.
data/lib/generators/spectre/templates/spectre_initializer.rb CHANGED
@@ -3,8 +3,15 @@
  require 'spectre'
 
  Spectre.setup do |config|
-   # Chose your LLM (openai, cohere, ollama)
-   config.llm_provider = :openai
-   # Set the API key for your chosen LLM
-   config.api_key = ENV.fetch('CHATGPT_API_TOKEN')
+   # Choose your LLM (openai, ollama)
+   config.default_llm_provider = :openai
+
+   config.openai do |openai|
+     openai.api_key = ENV['OPENAI_API_KEY']
+   end
+
+   config.ollama do |ollama|
+     ollama.host = ENV['OLLAMA_HOST']
+     ollama.api_key = ENV['OLLAMA_API_KEY']
+   end
  end
data/lib/spectre/errors.rb ADDED
@@ -0,0 +1,7 @@
+ # frozen_string_literal: true
+
+ module Spectre
+   # Define custom error classes here
+   class APIKeyNotConfiguredError < StandardError; end
+   class HostNotConfiguredError < StandardError; end
+ end
data/lib/spectre/ollama/completions.rb ADDED
@@ -0,0 +1,135 @@
+ # frozen_string_literal: true
+
+ require 'net/http'
+ require 'json'
+ require 'uri'
+
+ module Spectre
+   module Ollama
+     class Completions
+       API_PATH = 'api/chat'
+       DEFAULT_MODEL = 'llama3.1:8b'
+       DEFAULT_TIMEOUT = 60
+
+       # Class method to generate a completion based on user messages and optional tools
+       #
+       # @param messages [Array<Hash>] The conversation messages, each with a role and content
+       # @param model [String] The model to be used for generating completions, defaults to DEFAULT_MODEL
+       # @param json_schema [Hash, nil] An optional JSON schema to enforce structured output
+       # @param tools [Array<Hash>, nil] An optional array of tool definitions for function calling
+       # @param args [Hash, nil] Optional arguments such as read_timeout and open_timeout. An :ollama hash may be passed to specify the path and options.
+       # @param args.ollama.path [String, nil] The path to the Ollama API endpoint, defaults to API_PATH
+       # @param args.ollama.options [Hash, nil] Additional model parameters, such as temperature, listed in the Ollama documentation: https://github.com/ollama/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values
+       # @return [Hash] The parsed response including any function calls or content
+       # @raise [HostNotConfiguredError] If the API host is not set in the provider configuration
+       # @raise [APIKeyNotConfiguredError] If the API key is not set
+       # @raise [RuntimeError] For general API errors or unexpected issues
+       def self.create(messages:, model: DEFAULT_MODEL, json_schema: nil, tools: nil, **args)
+         api_host = Spectre.ollama_configuration.host
+         api_key = Spectre.ollama_configuration.api_key
+         raise HostNotConfiguredError, "Host is not configured" unless api_host
+         raise APIKeyNotConfiguredError, "API key is not configured" unless api_key
+
+         validate_messages!(messages)
+
+         path = args.dig(:ollama, :path) || API_PATH
+         uri = URI.join(api_host, path)
+         http = Net::HTTP.new(uri.host, uri.port)
+         http.use_ssl = true if uri.scheme == 'https'
+         http.read_timeout = args.fetch(:read_timeout, DEFAULT_TIMEOUT)
+         http.open_timeout = args.fetch(:open_timeout, DEFAULT_TIMEOUT)
+
+         request = Net::HTTP::Post.new(uri.path, {
+           'Content-Type' => 'application/json',
+           'Authorization' => "Bearer #{api_key}"
+         })
+
+         options = args.dig(:ollama, :options)
+         request.body = generate_body(messages, model, json_schema, tools, options).to_json
+         response = http.request(request)
+
+         unless response.is_a?(Net::HTTPSuccess)
+           raise "Ollama API Error: #{response.code} - #{response.message}: #{response.body}"
+         end
+
+         parsed_response = JSON.parse(response.body)
+
+         handle_response(parsed_response)
+       rescue JSON::ParserError => e
+         raise "JSON Parse Error: #{e.message}"
+       end
+
+       private
+
+       # Validate the structure and content of the messages array.
+       #
+       # @param messages [Array<Hash>] The array of message hashes to validate.
+       #
+       # @raise [ArgumentError] if the messages array is not in the expected format or contains invalid data.
+       def self.validate_messages!(messages)
+         # Check that messages is an array of hashes.
+         # This ensures the input is in the correct format for message processing.
+         unless messages.is_a?(Array) && messages.all? { |msg| msg.is_a?(Hash) }
+           raise ArgumentError, "Messages must be an array of message hashes."
+         end
+
+         # Reject an empty array, which would make an invalid request.
+         if messages.empty?
+           raise ArgumentError, "Messages cannot be empty."
+         end
+       end
+
+       # Helper method to generate the request body
+       #
+       # @param messages [Array<Hash>] The conversation messages, each with a role and content
+       # @param model [String] The model to be used for generating completions
+       # @param json_schema [Hash, nil] An optional JSON schema to enforce structured output
+       # @param tools [Array<Hash>, nil] An optional array of tool definitions for function calling
+       # @param options [Hash, nil] Additional model parameters, such as temperature, listed in the Ollama documentation: https://github.com/ollama/ollama/blob/main/docs/modelfile.md#valid-parameters-and-values
+       # @return [Hash] The body for the API request
+       def self.generate_body(messages, model, json_schema, tools, options)
+         body = {
+           model: model,
+           stream: false,
+           messages: messages
+         }
+
+         # Extract the schema if json_schema follows OpenAI's structure
+         if json_schema.is_a?(Hash) && json_schema.key?(:schema)
+           body[:format] = json_schema[:schema] # Use only the "schema" key
+         elsif json_schema.is_a?(Hash)
+           body[:format] = json_schema # Use the schema as-is if it doesn't follow OpenAI's structure
+         end
+
+         body[:tools] = tools if tools # Add the tools to the request body if provided
+         body[:options] = options if options
+
+         body
+       end
+
+       # Handles the API response, raising errors for specific cases and returning structured content otherwise
+       #
+       # @param response [Hash] The parsed API response
+       # @return [Hash] The relevant data based on the finish reason
+       def self.handle_response(response)
+         message = response.dig('message')
+         finish_reason = response.dig('done_reason')
+         done = response.dig('done')
+
+         # Check if the model made a function call
+         if message['tool_calls'] && !message['tool_calls'].empty?
+           return { tool_calls: message['tool_calls'], content: message['content'] }
+         end
+
+         # If the response finished normally, return the content
+         if done
+           return { content: message['content'] }
+         end
+
+         # Handle unexpected finish reasons
+         raise "Unexpected finish_reason: #{finish_reason}, done: #{done}, message: #{message}"
+       end
+     end
+   end
+ end
data/lib/spectre/ollama/embeddings.rb ADDED
@@ -0,0 +1,59 @@
+ # frozen_string_literal: true
+
+ require 'net/http'
+ require 'json'
+ require 'uri'
+
+ module Spectre
+   module Ollama
+     class Embeddings
+       API_PATH = 'api/embeddings'
+       DEFAULT_MODEL = 'nomic-embed-text'
+       PARAM_NAME = 'prompt'
+       DEFAULT_TIMEOUT = 60
+
+       # Class method to generate embeddings for a given text
+       #
+       # @param text [String] the text input for which embeddings are to be generated
+       # @param model [String] the model to be used for generating embeddings, defaults to DEFAULT_MODEL
+       # @param args [Hash, nil] optional arguments like read_timeout and open_timeout
+       # @param args.ollama.path [String, nil] the API path, defaults to API_PATH
+       # @param args.ollama.param_name [String, nil] the parameter key for the text input, defaults to PARAM_NAME
+       # @return [Array<Float>] the generated embedding vector
+       # @raise [HostNotConfiguredError] if the host is not set in the configuration
+       # @raise [APIKeyNotConfiguredError] if the API key is not set in the configuration
+       # @raise [RuntimeError] for API errors or invalid responses
+       # @raise [JSON::ParserError] if the response cannot be parsed as JSON
+       def self.create(text, model: DEFAULT_MODEL, **args)
+         api_host = Spectre.ollama_configuration.host
+         api_key = Spectre.ollama_configuration.api_key
+         raise HostNotConfiguredError, "Host is not configured" unless api_host
+         raise APIKeyNotConfiguredError, "API key is not configured" unless api_key
+
+         path = args.dig(:ollama, :path) || API_PATH
+         uri = URI.join(api_host, path)
+         http = Net::HTTP.new(uri.host, uri.port)
+         http.use_ssl = true if uri.scheme == 'https'
+         http.read_timeout = args.fetch(:read_timeout, DEFAULT_TIMEOUT)
+         http.open_timeout = args.fetch(:open_timeout, DEFAULT_TIMEOUT)
+
+         request = Net::HTTP::Post.new(uri.path, {
+           'Content-Type' => 'application/json',
+           'Authorization' => "Bearer #{api_key}"
+         })
+
+         param_name = args.dig(:ollama, :param_name) || PARAM_NAME
+         request.body = { model: model, param_name => text }.to_json
+         response = http.request(request)
+
+         unless response.is_a?(Net::HTTPSuccess)
+           raise "Ollama API Error: #{response.code} - #{response.message}: #{response.body}"
+         end
+
+         JSON.parse(response.body).dig('embedding')
+       rescue JSON::ParserError => e
+         raise "JSON Parse Error: #{e.message}"
+       end
+     end
+   end
+ end
data/lib/spectre/ollama.rb ADDED
@@ -0,0 +1,9 @@
+ # frozen_string_literal: true
+
+ module Spectre
+   module Ollama
+     # Require each specific client file here
+     require_relative 'ollama/embeddings'
+     require_relative 'ollama/completions'
+   end
+ end
data/lib/spectre/openai/completions.rb CHANGED
@@ -16,14 +16,13 @@ module Spectre
  # @param messages [Array<Hash>] The conversation messages, each with a role and content
  # @param model [String] The model to be used for generating completions, defaults to DEFAULT_MODEL
  # @param json_schema [Hash, nil] An optional JSON schema to enforce structured output
- # @param max_tokens [Integer] The maximum number of tokens for the completion (default: 50)
  # @param tools [Array<Hash>, nil] An optional array of tool definitions for function calling
- # @param args [Hash] Optional arguments like timeouts
+ # @param args [Hash, nil] Optional arguments such as read_timeout and open_timeout. For OpenAI, max_tokens can be passed in the :openai hash.
  # @return [Hash] The parsed response including any function calls or content
  # @raise [APIKeyNotConfiguredError] If the API key is not set
  # @raise [RuntimeError] For general API errors or unexpected issues
- def self.create(messages:, model: DEFAULT_MODEL, json_schema: nil, max_tokens: nil, tools: nil, **args)
-   api_key = Spectre.api_key
+ def self.create(messages:, model: DEFAULT_MODEL, json_schema: nil, tools: nil, **args)
+   api_key = Spectre.openai_configuration.api_key
    raise APIKeyNotConfiguredError, "API key is not configured" unless api_key
 
    validate_messages!(messages)
@@ -39,6 +38,7 @@ module Spectre
    'Authorization' => "Bearer #{api_key}"
  })
 
+ max_tokens = args.dig(:openai, :max_tokens)
  request.body = generate_body(messages, model, json_schema, max_tokens, tools).to_json
  response = http.request(request)
 
data/lib/spectre/openai/embeddings.rb CHANGED
@@ -15,12 +15,12 @@ module Spectre
  #
  # @param text [String] the text input for which embeddings are to be generated
  # @param model [String] the model to be used for generating embeddings, defaults to DEFAULT_MODEL
- # # @param args [Hash] Optional arguments like timeouts
+ # @param args [Hash] optional arguments like read_timeout and open_timeout
  # @return [Array<Float>] the generated embedding vector
  # @raise [APIKeyNotConfiguredError] if the API key is not set
  # @raise [RuntimeError] for general API errors or unexpected issues
  def self.create(text, model: DEFAULT_MODEL, **args)
-   api_key = Spectre.api_key
+   api_key = Spectre.openai_configuration.api_key
    raise APIKeyNotConfiguredError, "API key is not configured" unless api_key
 
    uri = URI(API_URL)
data/lib/spectre/version.rb CHANGED
@@ -1,5 +1,5 @@
  # frozen_string_literal: true
 
  module Spectre # :nodoc:all
-   VERSION = "1.1.4"
+   VERSION = "1.2.0"
  end
data/lib/spectre.rb CHANGED
@@ -4,16 +4,16 @@ require "spectre/version"
  require "spectre/embeddable"
  require 'spectre/searchable'
  require "spectre/openai"
+ require "spectre/ollama"
  require "spectre/logging"
  require 'spectre/prompt'
+ require 'spectre/errors'
 
  module Spectre
-   class APIKeyNotConfiguredError < StandardError; end
-
    VALID_LLM_PROVIDERS = {
      openai: Spectre::Openai,
+     ollama: Spectre::Ollama
      # cohere: Spectre::Cohere,
-     # ollama: Spectre::Ollama
    }.freeze
 
    def self.included(base)
@@ -35,25 +35,67 @@ module Spectre
      end
    end
 
+   class Configuration
+     attr_accessor :default_llm_provider, :providers
+
+     def initialize
+       @providers = {}
+     end
+
+     def openai
+       @providers[:openai] ||= OpenaiConfiguration.new
+       yield @providers[:openai] if block_given?
+     end
+
+     def ollama
+       @providers[:ollama] ||= OllamaConfiguration.new
+       yield @providers[:ollama] if block_given?
+     end
+
+     def provider_configuration
+       providers[default_llm_provider] || raise("No configuration found for provider: #{default_llm_provider}")
+     end
+   end
+
+   class OpenaiConfiguration
+     attr_accessor :api_key
+   end
+
+   class OllamaConfiguration
+     attr_accessor :host, :api_key
+   end
+
    class << self
-     attr_accessor :api_key, :llm_provider
+     attr_accessor :config
 
      def setup
-       yield self
+       self.config ||= Configuration.new
+       yield config
        validate_llm_provider!
      end
 
      def provider_module
-       VALID_LLM_PROVIDERS[llm_provider] || raise("LLM provider #{llm_provider} not supported")
+       VALID_LLM_PROVIDERS[config.default_llm_provider] || raise("LLM provider #{config.default_llm_provider} not supported")
+     end
+
+     def provider_configuration
+       config.provider_configuration
+     end
+
+     def openai_configuration
+       config.providers[:openai]
+     end
+
+     def ollama_configuration
+       config.providers[:ollama]
      end
 
      private
 
      def validate_llm_provider!
-       unless VALID_LLM_PROVIDERS.keys.include?(llm_provider)
-         raise ArgumentError, "Invalid llm_provider: #{llm_provider}. Must be one of: #{VALID_LLM_PROVIDERS.keys.join(', ')}"
+       unless VALID_LLM_PROVIDERS.keys.include?(config.default_llm_provider)
+         raise ArgumentError, "Invalid default_llm_provider: #{config.default_llm_provider}. Must be one of: #{VALID_LLM_PROVIDERS.keys.join(', ')}"
        end
      end
-
    end
  end
metadata CHANGED
@@ -1,15 +1,15 @@
  --- !ruby/object:Gem::Specification
  name: spectre_ai
  version: !ruby/object:Gem::Version
-   version: 1.1.4
+   version: 1.2.0
  platform: ruby
  authors:
  - Ilya Klapatok
  - Matthew Black
- autorequire:
+ autorequire:
  bindir: bin
  cert_chain: []
- date: 2024-12-04 00:00:00.000000000 Z
+ date: 2025-01-29 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: rspec-rails
@@ -54,7 +54,11 @@ files:
  - lib/generators/spectre/templates/spectre_initializer.rb
  - lib/spectre.rb
  - lib/spectre/embeddable.rb
+ - lib/spectre/errors.rb
  - lib/spectre/logging.rb
+ - lib/spectre/ollama.rb
+ - lib/spectre/ollama/completions.rb
+ - lib/spectre/ollama/embeddings.rb
  - lib/spectre/openai.rb
  - lib/spectre/openai/completions.rb
  - lib/spectre/openai/embeddings.rb
@@ -65,7 +69,7 @@ homepage: https://github.com/hiremav/spectre
  licenses:
  - MIT
  metadata: {}
- post_install_message:
+ post_install_message:
  rdoc_options: []
  require_paths:
  - lib
@@ -81,7 +85,7 @@ required_rubygems_version: !ruby/object:Gem::Requirement
    version: '0'
  requirements: []
  rubygems_version: 3.5.11
- signing_key:
+ signing_key:
  specification_version: 4
  summary: Spectre
  test_files: []