spectre_ai 1.0.1 → 1.1.1

checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 5a1b57e957fd8d44c84db209bc46d0c86a7be77a37d5f0f9ca726c2f2622a539
-  data.tar.gz: b09d880f7f80e918c229caba2d5754e2dcf96ca9afeb1ca3091e7db0232420c1
+  metadata.gz: c7c3acf59b77ad62e0095fb7f91aa0491a50f25d197f94c788ad5ce2bbefbf6f
+  data.tar.gz: 48b4a9dcda9a013a6dde32ca5008a7dd33a49f4d4678952b8fcbf2089d2ccca6
 SHA512:
-  metadata.gz: 91d020445d05ca703d78b6f0fda842a9f92c55b4c67d8b201accdad8fe352ccec2e4e28e36b157ae2a82f86f574d0873ddf23ff4ebbf7c15f707a2c467c47532
-  data.tar.gz: 48eb3d8f6339d999ff7386e32de81cb01008ceb3245fb9fc2e150311845779e1f5c788198c35710d20584100085974035fd1730f1c093d8634202a45d9b3756f
+  metadata.gz: be7bf9f1570bad924509a8b8ad2a4671d019d20325696c4a2587e586f4a9314e395d822857cf9a277c84fd69e1d61e1231b356405266b5147187d6ec01d7dd33
+  data.tar.gz: f80dddebe6a99f7c946a8364f62f15112a974ad6cffe5a031baf5c1a9b151f39c12b870c60945277b8c57e78e51a0f0bb7147c620af11e4a59ed1ce485bfebab
data/CHANGELOG.md CHANGED
@@ -24,4 +24,60 @@ user: |
 
 Before this change, queries or responses containing special characters might have caused YAML parsing errors. This update ensures that even complex strings are handled safely and returned in their original form.
 
-To upgrade, update your Gemfile to version 1.0.1 and run bundle install. Make sure your YAML/ERB templates do not manually escape special characters anymore, as the Prompt class will handle it automatically.
+To upgrade, update your Gemfile to version 1.0.1 and run bundle install. Make sure your YAML/ERB templates do not manually escape special characters anymore, as the Prompt class will handle it automatically.
+
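As a quick illustration of the behavior described above, assuming a `rag/user` template that interpolates a `query` local (as in the README examples in this release), values containing YAML-sensitive characters round-trip through rendering unchanged:

```ruby
# The Prompt class escapes &, <, >, quotes, and newline characters before ERB/YAML
# processing and converts them back afterwards, so the rendered prompt contains
# the original characters verbatim.
rendered = Spectre::Prompt.render(
  template: 'rag/user',
  locals: { query: %(He said "cats & dogs > birds") }
)
```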
+# Changelog for Version 1.1.0
+
+**Release Date:** [7th Oct 2024]
+
+**New Features:**
+
+* **Tool _(Function Calling)_ Integration:** Added support for a `tools` parameter to enable function calling during completions. You can now specify an array of tool definitions that the model can use to call specific functions.
+
+* **Enhanced Message Handling:** Replaced the individual prompt parameters (`user_prompt`, `system_prompt`, `assistant_prompt`) with a single `messages` array parameter, which accepts a sequence of messages with their roles and contents. This provides more flexibility in managing conversations.
+
+* **Response Validation:** Introduced a `handle_response` method to handle the different `finish_reason` cases more effectively, including content filtering and tool call handling.
+
+* **Improved Error Handling:**
+  * Added more specific error messages for refusals (`Refusal`), responses cut off by the token limit (`Incomplete response`), and content filtering (`Content filtered`).
+  * Enhanced JSON parsing error handling with more descriptive messages.
+
+* **Request Validation:** Implemented message validation to ensure the `messages` parameter is not empty and follows the required format. Raises an error if validation fails.
+
+* **Support for Structured Output:** Integrated support for a `json_schema` parameter in the request body to enforce structured output responses.
+
+* **Skip Request on Empty Messages:** The class now skips sending a request if the `messages` parameter is empty or invalid, reducing unnecessary API calls.
+
+**Breaking Changes:**
+
+* **Message Parameter Refactor:** The previous individual prompt parameters (`user_prompt`, `system_prompt`, `assistant_prompt`) have been consolidated into a single `messages` array. This may require updating any existing code that uses the old parameters; a minimal migration sketch follows below.
+
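For illustration, a minimal migration for this breaking change, using the call shapes shown in the README diff below:

```ruby
# spectre_ai 1.0.x: individual prompt keyword arguments
Spectre.provider_module::Completions.create(
  user_prompt: "Tell me a joke.",
  system_prompt: "You are a funny assistant."
)

# spectre_ai 1.1.x: a single messages array with explicit roles
Spectre.provider_module::Completions.create(
  messages: [
    { role: 'system', content: "You are a funny assistant." },
    { role: 'user', content: "Tell me a joke." }
  ]
)
```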
+**Bug Fixes:**
+
+* **API Key Check:** Improved error handling for cases when the API key is not configured, providing a more specific exception.
+
+* **Error Messages:** Enhanced error messages for various edge cases, including content filtering and incomplete responses due to token limits.
+
+**Refinements:**
+
+**Code Refactoring:**
+* Moved message validation into a dedicated `validate_messages!` method for clarity and reusability.
+* Simplified the `generate_body` method to include the `tools` and `json_schema` parameters more effectively.
+
+**Documentation:** Updated class-level documentation and method comments for better clarity and understanding of the class’s functionality and usage.
+
+This version enhances the flexibility and robustness of the Completions class, enabling more complex interactions and better error handling for different types of API responses.
+
+# Changelog for Version 1.1.1
+
+**Release Date:** [11th Oct 2024]
+
+**New Features:**
+
+* **Nested Template Support in Prompts**
+  * You can now organize your prompt files in nested directories and render them using the `Spectre::Prompt.render` method.
+  * **Example:** To render a template from a nested folder:
+    ```ruby
+    Spectre::Prompt.render(template: 'classification/intent/user', locals: { query: 'What is AI?' })
+    ```
+  * This feature allows for better organization and scalability when dealing with multiple prompt categories and complex scenarios.
data/README.md CHANGED
@@ -175,25 +175,36 @@ Spectre provides an interface to create chat completions using your configured L
 To create a simple chat completion, use the `Spectre.provider_module::Completions.create` method. You can provide a user prompt and an optional system prompt to guide the response:
 
 ```ruby
+messages = [
+  { role: 'system', content: "You are a funny assistant." },
+  { role: 'user', content: "Tell me a joke." }
+]
+
 Spectre.provider_module::Completions.create(
-  user_prompt: "Tell me a joke.",
-  system_prompt: "You are a funny assistant."
+  messages: messages
 )
+
 ```
 
 This sends the request to the LLM provider’s API and returns the chat completion.
 
 **Customizing the Completion**
 
-You can customize the behavior by specifying additional parameters such as the model or an `assistant_prompt` to provide further context for the AI’s responses:
+You can customize the behavior by specifying additional parameters such as the model, maximum number of tokens, and any tools needed for function calls:
 
 ```ruby
+messages = [
+  { role: 'system', content: "You are a funny assistant." },
+  { role: 'user', content: "Tell me a joke." },
+  { role: 'assistant', content: "Sure, here's a joke!" }
+]
+
 Spectre.provider_module::Completions.create(
-  user_prompt: "Tell me a joke.",
-  system_prompt: "You are a funny assistant.",
-  assistant_prompt: "Sure, here's a joke!",
-  model: "gpt-4"
+  messages: messages,
+  model: "gpt-4",
+  max_tokens: 50
 )
+
 ```
 
 **Using a JSON Schema for Structured Output**
@@ -214,15 +225,100 @@ json_schema = {
   }
 }
 
+messages = [
+  { role: 'system', content: "You are a knowledgeable assistant." },
+  { role: 'user', content: "What is the capital of France?" }
+]
+
 Spectre.provider_module::Completions.create(
-  user_prompt: "What is the capital of France?",
-  system_prompt: "You are a knowledgeable assistant.",
+  messages: messages,
   json_schema: json_schema
 )
+
 ```
 
 This structured format guarantees that the response adheres to the schema you’ve provided, ensuring more predictable and controlled results.
 
+**Using Tools for Function Calling**
+
+You can incorporate tools (function calls) in your completion to handle more complex interactions such as fetching external information via API or performing calculations. Define tools using the function call format and include them in the request:
+
+```ruby
+tools = [
+  {
+    type: "function",
+    function: {
+      name: "get_delivery_date",
+      description: "Get the delivery date for a customer's order.",
+      parameters: {
+        type: "object",
+        properties: {
+          order_id: { type: "string", description: "The customer's order ID." }
+        },
+        required: ["order_id"],
+        additionalProperties: false
+      }
+    }
+  }
+]
+
+messages = [
+  { role: 'system', content: "You are a helpful customer support assistant." },
+  { role: 'user', content: "Can you tell me the delivery date for my order?" }
+]
+
+Spectre.provider_module::Completions.create(
+  messages: messages,
+  tools: tools
+)
+```
+
+This setup allows the model to call specific tools (or functions) based on the user's input. The model can then generate a tool call to get necessary information and integrate it into the conversation.
+
+**Handling Responses from Completions with Tools**
+
+When tools (function calls) are included in a completion request, the response might include `tool_calls` with relevant details for executing the function.
+
+Here’s an example of how the response might look when a tool call is made:
+
+```ruby
+response = Spectre.provider_module::Completions.create(
+  messages: messages,
+  tools: tools
+)
+
+# Sample response structure when a tool call is triggered:
+# {
+#   :tool_calls=>[{
+#     "id" => "call_gqvSz1JTDfUyky7ghqY1wMoy",
+#     "type" => "function",
+#     "function" => {
+#       "name" => "get_lead_count",
+#       "arguments" => "{\"account_id\":\"acc_12312\"}"
+#     }
+#   }],
+#   :content => nil
+# }
+
+if response[:tool_calls]
+  tool_call = response[:tool_calls].first
+
+  # You can now perform the function using the provided data
+  # For example, get the lead count by account_id
+  account_id = JSON.parse(tool_call['function']['arguments'])['account_id']
+  lead_count = get_lead_count(account_id) # Assuming you have a method for this
+
+  # Respond back with the function result
+  completion_response = Spectre.provider_module::Completions.create(
+    messages: [
+      { role: 'assistant', content: "There are #{lead_count} leads for account #{account_id}." }
+    ]
+  )
+else
+  puts "Model response: #{response[:content]}"
+end
+```
+
 ### 6. Creating Dynamic Prompts
 
 Spectre provides a system for creating dynamic prompts based on templates. You can define reusable prompt templates and render them with different parameters in your Rails app (think Ruby on Rails view partials).
@@ -281,15 +377,49 @@ Spectre::Prompt.render(
 - **`template`:** The path to the prompt template file (e.g., `rag/system`).
 - **`locals`:** A hash of variables to be used inside the ERB template.
 
+**Using Nested Templates for Prompts**
+
+Spectre's `Prompt` class now supports rendering templates from nested directories. This allows you to better organize your prompt files in a structured folder hierarchy.
+
+You can organize your prompt templates in subfolders. For instance, you can have the following structure:
+
+```
+app/
+  spectre/
+    prompts/
+      rag/
+        system.yml.erb
+        user.yml.erb
+      classification/
+        intent/
+          system.yml.erb
+          user.yml.erb
+        entity/
+          system.yml.erb
+          user.yml.erb
+```
+
+To render a prompt from a nested folder, simply pass the full path to the `template` argument:
+
+```ruby
+# Rendering from a nested folder
+Spectre::Prompt.render(template: 'classification/intent/user', locals: { query: 'What is AI?' })
+```
+
+This allows for more flexibility when organizing your prompt files, particularly when dealing with complex scenarios or multiple prompt categories.
+
 **Combining Completions with Prompts**
 
 You can also combine completions and prompts like so:
 
 ```ruby
 Spectre.provider_module::Completions.create(
-  user_prompt: Spectre::Prompt.render(template: 'rag/user', locals: { query: @query, user: @user }),
-  system_prompt: Spectre::Prompt.render(template: 'rag/system')
+  messages: [
+    { role: 'system', content: Spectre::Prompt.render(template: 'rag/system') },
+    { role: 'user', content: Spectre::Prompt.render(template: 'rag/user', locals: { query: @query, user: @user }) }
+  ]
 )
+
 ```
 
 ## Contributing
@@ -10,21 +10,22 @@ module Spectre
       API_URL = 'https://api.openai.com/v1/chat/completions'
       DEFAULT_MODEL = 'gpt-4o-mini'
 
-      # Class method to generate a completion based on a user prompt
+      # Class method to generate a completion based on user messages and optional tools
       #
-      # @param user_prompt [String] the user's input to generate a completion for
-      # @param system_prompt [String] an optional system prompt to guide the AI's behavior
-      # @param assistant_prompt [String] an optional assistant prompt to provide context for the assistant's behavior
-      # @param model [String] the model to be used for generating completions, defaults to DEFAULT_MODEL
-      # @param json_schema [Hash, nil] an optional JSON schema to enforce structured output
-      # @param max_tokens [Integer] the maximum number of tokens for the completion (default: 50)
-      # @return [String] the generated completion text
-      # @raise [APIKeyNotConfiguredError] if the API key is not set
-      # @raise [RuntimeError] for general API errors or unexpected issues
-      def self.create(user_prompt:, system_prompt: "You are a helpful assistant.", assistant_prompt: nil, model: DEFAULT_MODEL, json_schema: nil, max_tokens: nil)
+      # @param messages [Array<Hash>] The conversation messages, each with a role and content
+      # @param model [String] The model to be used for generating completions, defaults to DEFAULT_MODEL
+      # @param json_schema [Hash, nil] An optional JSON schema to enforce structured output
+      # @param max_tokens [Integer] The maximum number of tokens for the completion (default: 50)
+      # @param tools [Array<Hash>, nil] An optional array of tool definitions for function calling
+      # @return [Hash] The parsed response including any function calls or content
+      # @raise [APIKeyNotConfiguredError] If the API key is not set
+      # @raise [RuntimeError] For general API errors or unexpected issues
+      def self.create(messages:, model: DEFAULT_MODEL, json_schema: nil, max_tokens: nil, tools: nil)
        api_key = Spectre.api_key
        raise APIKeyNotConfiguredError, "API key is not configured" unless api_key
 
+        validate_messages!(messages)
+
        uri = URI(API_URL)
        http = Net::HTTP.new(uri.host, uri.port)
        http.use_ssl = true
@@ -36,7 +37,7 @@ module Spectre
          'Authorization' => "Bearer #{api_key}"
        })
 
-        request.body = generate_body(user_prompt, system_prompt, assistant_prompt, model, json_schema, max_tokens).to_json
+        request.body = generate_body(messages, model, json_schema, max_tokens, tools).to_json
        response = http.request(request)
 
        unless response.is_a?(Net::HTTPSuccess)
@@ -45,18 +46,7 @@ module Spectre
 
        parsed_response = JSON.parse(response.body)
 
-        # Check if the response contains a refusal
-        if parsed_response.dig('choices', 0, 'message', 'refusal')
-          raise "Refusal: #{parsed_response.dig('choices', 0, 'message', 'refusal')}"
-        end
-
-        # Check if the finish reason is "length", indicating incomplete response
-        if parsed_response.dig('choices', 0, 'finish_reason') == "length"
-          raise "Incomplete response: The completion was cut off due to token limit."
-        end
-
-        # Return the structured output if it's included
-        parsed_response.dig('choices', 0, 'message', 'content')
+        handle_response(parsed_response)
      rescue JSON::ParserError => e
        raise "JSON Parse Error: #{e.message}"
      rescue Net::OpenTimeout, Net::ReadTimeout => e
@@ -65,40 +55,103 @@ module Spectre
 
       private
 
-      # Helper method to generate the request body
+      # Validate the structure and content of the messages array.
+      #
+      # @param messages [Array<Hash>] The array of message hashes to validate.
       #
-      # @param user_prompt [String] the user's input to generate a completion for
-      # @param system_prompt [String] an optional system prompt to guide the AI's behavior
-      # @param assistant_prompt [String] an optional assistant prompt to provide context for the assistant's behavior
-      # @param model [String] the model to be used for generating completions
-      # @param json_schema [Hash, nil] an optional JSON schema to enforce structured output
-      # @param max_tokens [Integer, nil] the maximum number of tokens for the completion
-      # @return [Hash] the body for the API request
-      def self.generate_body(user_prompt, system_prompt, assistant_prompt, model, json_schema, max_tokens)
-        messages = [
-          { role: 'system', content: system_prompt },
-          { role: 'user', content: user_prompt }
-        ]
-
-        # Add the assistant prompt if provided
-        messages << { role: 'assistant', content: assistant_prompt } if assistant_prompt
+      # @raise [ArgumentError] if the messages array is not in the expected format or contains invalid data.
+      def self.validate_messages!(messages)
+        # Check if messages is an array of hashes.
+        # This ensures that the input is in the correct format for message processing.
+        unless messages.is_a?(Array) && messages.all? { |msg| msg.is_a?(Hash) }
+          raise ArgumentError, "Messages must be an array of message hashes."
+        end
+
+        # Check if the array is empty.
+        # This prevents requests with no messages, which would be invalid.
+        if messages.empty?
+          raise ArgumentError, "Messages cannot be empty."
+        end
 
+        # Iterate through each message and perform detailed validation.
+        messages.each_with_index do |msg, index|
+          # Check if each message hash contains the required keys: :role and :content.
+          # These keys are necessary for defining the type of message and its content.
+          unless msg.key?(:role) && msg.key?(:content)
+            raise ArgumentError, "Message at index #{index} must contain both :role and :content keys."
+          end
+
+          # Check if the role is one of the allowed values: 'system', 'user', or 'assistant'.
+          # This ensures that each message has a valid role identifier.
+          unless %w[system user assistant].include?(msg[:role])
+            raise ArgumentError, "Invalid role '#{msg[:role]}' at index #{index}. Valid roles are 'system', 'user', 'assistant'."
+          end
+
+          # Check if the content is a non-empty string.
+          # This prevents empty or non-string content, which would be meaningless in a conversation.
+          unless msg[:content].is_a?(String) && !msg[:content].strip.empty?
+            raise ArgumentError, "Content for message at index #{index} must be a non-empty string."
+          end
+        end
+      end
+
+      # Helper method to generate the request body
+      #
+      # @param messages [Array<Hash>] The conversation messages, each with a role and content
+      # @param model [String] The model to be used for generating completions
+      # @param json_schema [Hash, nil] An optional JSON schema to enforce structured output
+      # @param max_tokens [Integer, nil] The maximum number of tokens for the completion
+      # @param tools [Array<Hash>, nil] An optional array of tool definitions for function calling
+      # @return [Hash] The body for the API request
+      def self.generate_body(messages, model, json_schema, max_tokens, tools)
        body = {
          model: model,
-          messages: messages,
+          messages: messages
        }
-        body['max_tokens'] = max_tokens if max_tokens
-
-        # Add the JSON schema as part of response_format if provided
-        if json_schema
-          body[:response_format] = {
-            type: 'json_schema',
-            json_schema: json_schema
-          }
-        end
+
+        body[:max_tokens] = max_tokens if max_tokens
+        body[:response_format] = { type: 'json_schema', json_schema: json_schema } if json_schema
+        body[:tools] = tools if tools # Add the tools to the request body if provided
 
        body
      end
+
+      # Handles the API response, raising errors for specific cases and returning structured content otherwise
+      #
+      # @param response [Hash] The parsed API response
+      # @return [Hash] The relevant data based on the finish reason
+      def self.handle_response(response)
+        message = response.dig('choices', 0, 'message')
+        finish_reason = response.dig('choices', 0, 'finish_reason')
+
+        # Check if the response contains a refusal
+        if message['refusal']
+          raise "Refusal: #{message['refusal']}"
+        end
+
+        # Check if the finish reason is "length", indicating incomplete response
+        if finish_reason == "length"
+          raise "Incomplete response: The completion was cut off due to token limit."
+        end
+
+        # Check if the finish reason is "content_filter", indicating policy violations
+        if finish_reason == "content_filter"
+          raise "Content filtered: The model's output was blocked due to policy violations."
+        end
+
+        # Check if the model made a function call
+        if finish_reason == "function_call" || finish_reason == "tool_calls"
+          return { tool_calls: message['tool_calls'], content: message['content'] }
+        end
+
+        # If the response finished normally, return the content
+        if finish_reason == "stop"
+          return { content: message['content'] }
+        end
+
+        # Handle unexpected finish reasons
+        raise "Unexpected finish_reason: #{finish_reason}"
+      end
     end
   end
 end
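As a sketch of what the `generate_body` helper above assembles (hypothetical values; the tool definition follows the README example in this release), only the optional keys that were actually supplied end up in the request body:

```ruby
require 'json'

messages = [{ role: 'user', content: 'Can you tell me the delivery date for my order?' }]
tools = [
  { type: 'function',
    function: {
      name: 'get_delivery_date',
      description: "Get the delivery date for a customer's order.",
      parameters: {
        type: 'object',
        properties: { order_id: { type: 'string' } },
        required: ['order_id']
      } } }
]

# Mirrors generate_body: start from model + messages, then add optional keys.
body = { model: 'gpt-4o-mini', messages: messages }
body[:max_tokens] = 50
body[:tools] = tools
# body[:response_format] = { type: 'json_schema', json_schema: json_schema } when a schema is given

puts JSON.pretty_generate(body)
```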
@@ -5,107 +5,119 @@ require 'yaml'
 
 module Spectre
   class Prompt
-    PROMPTS_PATH = File.join(Dir.pwd, 'app', 'spectre', 'prompts')
-
-    # Render a prompt by reading and rendering the YAML template
-    #
-    # @param template [String] The path to the template file, formatted as 'type/prompt' (e.g., 'rag/system')
-    # @param locals [Hash] Variables to be passed to the template for rendering
-    #
-    # @return [String] Rendered prompt
-    def self.render(template:, locals: {})
-      type, prompt = split_template(template)
-      file_path = prompt_file_path(type, prompt)
-
-      raise "Prompt file not found: #{file_path}" unless File.exist?(file_path)
-
-      # Preprocess the locals before rendering the YAML file
-      preprocessed_locals = preprocess_locals(locals)
-
-      template_content = File.read(file_path)
-      erb_template = ERB.new(template_content)
-
-      context = Context.new(preprocessed_locals)
-      rendered_prompt = erb_template.result(context.get_binding)
-
-      # YAML.safe_load returns a hash, so fetch the correct part based on the prompt
-      parsed_yaml = YAML.safe_load(rendered_prompt)[prompt]
-
-      # Convert special characters back after YAML processing
-      convert_special_chars_back(parsed_yaml)
-    rescue Errno::ENOENT
-      raise "Template file not found at path: #{file_path}"
-    rescue Psych::SyntaxError => e
-      raise "YAML Syntax Error in file #{file_path}: #{e.message}"
-    rescue StandardError => e
-      raise "Error rendering prompt for template '#{template}': #{e.message}"
-    end
+    class << self
+      attr_reader :prompts_path
 
-    private
+      def prompts_path
+        @prompts_path ||= detect_prompts_path
+      end
 
-    # Split the template parameter into type and prompt
-    #
-    # @param template [String] Template path in the format 'type/prompt' (e.g., 'rag/system')
-    # @return [Array<String, String>] An array containing the type and prompt
-    def self.split_template(template)
-      template.split('/')
-    end
+      # Render a prompt by reading and rendering the YAML template
+      #
+      # @param template [String] The path to the template file, formatted as 'folder1/folder2/prompt'
+      # @param locals [Hash] Variables to be passed to the template for rendering
+      #
+      # @return [String] Rendered prompt
+      def render(template:, locals: {})
+        path, prompt = split_template(template)
+        file_path = prompt_file_path(path, prompt)
+
+        raise "Prompt file not found: #{file_path}" unless File.exist?(file_path)
+
+        # Preprocess the locals before rendering the YAML file
+        preprocessed_locals = preprocess_locals(locals)
+
+        template_content = File.read(file_path)
+        erb_template = ERB.new(template_content)
+
+        context = Context.new(preprocessed_locals)
+        rendered_prompt = erb_template.result(context.get_binding)
+
+        # YAML.safe_load returns a hash, so fetch the correct part based on the prompt
+        parsed_yaml = YAML.safe_load(rendered_prompt)[prompt]
+
+        # Convert special characters back after YAML processing
+        convert_special_chars_back(parsed_yaml)
+      rescue Errno::ENOENT
+        raise "Template file not found at path: #{file_path}"
+      rescue Psych::SyntaxError => e
+        raise "YAML Syntax Error in file #{file_path}: #{e.message}"
+      rescue StandardError => e
+        raise "Error rendering prompt for template '#{template}': #{e.message}"
+      end
 
-    # Build the path to the desired prompt file
-    #
-    # @param type [String] Name of the prompt folder
-    # @param prompt [String] Type of prompt (e.g., 'system', 'user')
-    #
-    # @return [String] Full path to the template file
-    def self.prompt_file_path(type, prompt)
-      "#{PROMPTS_PATH}/#{type}/#{prompt}.yml.erb"
-    end
+      private
 
-    # Preprocess locals recursively to escape special characters in strings
-    #
-    # @param value [Object] The value to process (string, array, hash, etc.)
-    # @return [Object] Processed value with special characters escaped
-    def self.preprocess_locals(value)
-      case value
-      when String
-        escape_special_chars(value)
-      when Hash
-        value.transform_values { |v| preprocess_locals(v) } # Recurse into hash values
-      when Array
-        value.map { |item| preprocess_locals(item) } # Recurse into array items
-      else
-        value
+      # Detects the appropriate path for prompt templates
+      def detect_prompts_path
+        File.join(Dir.pwd, 'app', 'spectre', 'prompts')
       end
-    end
 
-    # Escape special characters in strings to avoid YAML parsing issues
-    #
-    # @param value [String] The string to process
-    # @return [String] The processed string with special characters escaped
-    def self.escape_special_chars(value)
-      value.gsub('&', '&amp;')
-           .gsub('<', '&lt;')
-           .gsub('>', '&gt;')
-           .gsub('"', '&quot;')
-           .gsub("'", '&#39;')
-           .gsub("\n", '\\n')
-           .gsub("\r", '\\r')
-           .gsub("\t", '\\t')
-    end
+      # Split the template parameter into path and prompt
+      #
+      # @param template [String] Template path in the format 'folder1/folder2/prompt'
+      # @return [Array<String, String>] An array containing the folder path and the prompt name
+      def split_template(template)
+        *path_parts, prompt = template.split('/')
+        [File.join(path_parts), prompt]
+      end
 
-    # Convert special characters back to their original form after YAML processing
-    #
-    # @param value [String] The string to process
-    # @return [String] The processed string with original special characters restored
-    def self.convert_special_chars_back(value)
-      value.gsub('&amp;', '&')
-           .gsub('&lt;', '<')
-           .gsub('&gt;', '>')
-           .gsub('&quot;', '"')
-           .gsub('&#39;', "'")
-           .gsub('\\n', "\n")
-           .gsub('\\r', "\r")
-           .gsub('\\t', "\t")
+      # Build the path to the desired prompt file
+      #
+      # @param path [String] Path to the prompt folder(s)
+      # @param prompt [String] Name of the prompt file (e.g., 'system', 'user')
+      #
+      # @return [String] Full path to the template file
+      def prompt_file_path(path, prompt)
+        File.join(prompts_path, path, "#{prompt}.yml.erb")
+      end
+
+      # Preprocess locals recursively to escape special characters in strings
+      #
+      # @param value [Object] The value to process (string, array, hash, etc.)
+      # @return [Object] Processed value with special characters escaped
+      def preprocess_locals(value)
+        case value
+        when String
+          escape_special_chars(value)
+        when Hash
+          value.transform_values { |v| preprocess_locals(v) } # Recurse into hash values
+        when Array
+          value.map { |item| preprocess_locals(item) } # Recurse into array items
+        else
+          value
+        end
+      end
+
+      # Escape special characters in strings to avoid YAML parsing issues
+      #
+      # @param value [String] The string to process
+      # @return [String] The processed string with special characters escaped
+      def escape_special_chars(value)
+        value.gsub('&', '&amp;')
+             .gsub('<', '&lt;')
+             .gsub('>', '&gt;')
+             .gsub('"', '&quot;')
+             .gsub("'", '&#39;')
+             .gsub("\n", '\\n')
+             .gsub("\r", '\\r')
+             .gsub("\t", '\\t')
+      end
+
+      # Convert special characters back to their original form after YAML processing
+      #
+      # @param value [String] The string to process
+      # @return [String] The processed string with original special characters restored
+      def convert_special_chars_back(value)
+        value.gsub('&amp;', '&')
+             .gsub('&lt;', '<')
+             .gsub('&gt;', '>')
+             .gsub('&quot;', '"')
+             .gsub('&#39;', "'")
+             .gsub('\\n', "\n")
+             .gsub('\\r', "\r")
+             .gsub('\\t', "\t")
+      end
     end
 
   # Helper class to handle the binding for ERB template rendering
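A minimal sketch of how the reworked `split_template` and `prompt_file_path` above resolve a nested template name (the prompts root defaults to `app/spectre/prompts` under the working directory, per `detect_prompts_path`):

```ruby
template = 'classification/intent/user'

# split_template: every segment but the last is the folder path,
# the last segment is the prompt name.
*path_parts, prompt = template.split('/')
path = File.join(path_parts)   # => "classification/intent"

# prompt_file_path: joined under the detected prompts root.
prompts_path = File.join(Dir.pwd, 'app', 'spectre', 'prompts')
file_path = File.join(prompts_path, path, "#{prompt}.yml.erb")
# => "<Dir.pwd>/app/spectre/prompts/classification/intent/user.yml.erb"
```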
@@ -1,5 +1,5 @@
 # frozen_string_literal: true
 
 module Spectre # :nodoc:all
-  VERSION = "1.0.1"
+  VERSION = "1.1.1"
 end
metadata CHANGED
@@ -1,7 +1,7 @@
 --- !ruby/object:Gem::Specification
 name: spectre_ai
 version: !ruby/object:Gem::Version
-  version: 1.0.1
+  version: 1.1.1
 platform: ruby
 authors:
 - Ilya Klapatok
@@ -9,7 +9,7 @@ authors:
 autorequire:
 bindir: bin
 cert_chain: []
-date: 2024-09-19 00:00:00.000000000 Z
+date: 2024-10-10 00:00:00.000000000 Z
 dependencies:
 - !ruby/object:Gem::Dependency
   name: rspec-rails