nitro_intelligence 0.0.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml ADDED
@@ -0,0 +1,7 @@
+ ---
+ SHA256:
+   metadata.gz: aa22292af25ca70dd3a19ba62830fa25606a84517af6f7bb79906ceea4d0e058
+   data.tar.gz: 07c68668426d3feb046e633203e617ede8f9ef807a3f60341c347fbb0c05b17f
+ SHA512:
+   metadata.gz: 02d7bcd08eb45f6dc14c0b4a5e5f91e0edac92369435721e838a0e623a5a26b4af33b0f851c1d02de54ca6d80a864d0b8904e9a78612e9d14aa642e8993865ba
+   data.tar.gz: e6c1bfc0896d5d11cc017e23aef1b617c3f83182c09fe8a6414202d66dfbdc5d2e13fe73bf3b5ef54e928d5f1cf944ad001781397bab7a8a5f4867e4a152cc1c
data/Rakefile ADDED
@@ -0,0 +1,6 @@
+ require "bundler/gem_tasks"
+ require "rspec/core/rake_task"
+
+ RSpec::Core::RakeTask.new(:spec)
+
+ task default: :spec
data/docs/README.md ADDED
@@ -0,0 +1,247 @@
+ # Nitro Intelligence
+
+ The entrypoint to everything AI.
+
+ This component aims to consolidate and standardize AI features implemented in the host application.
+
+ ## Configuration
+
+ `NitroIntelligence` is configured via an initializer in the host application. All external dependencies
+ (e.g. inference API credentials, observability settings) must be injected at boot time:
+
+ ```ruby
+ NitroIntelligence.configure do |config|
+   # Standard Rails integrations
+   config.logger = Rails.logger        # Logger instance
+   config.environment = Rails.env      # e.g. "production", "test"
+   config.cache_provider = Rails.cache # ActiveSupport cache store
+
+   # Inference (LLM) settings
+   config.inference_api_key = "..."          # API key for the inference service
+   config.inference_base_url = "https://..." # Base URL for the inference service
+
+   # Observability (Langfuse) settings
+   config.observability_base_url = "https://..." # Base URL for the observability service
+   config.observability_projects = [             # Array of project credential hashes
+     {
+       "slug" => "my-feature-project",
+       "id" => "project-id",
+       "public_key" => "pk-...",
+       "secret_key" => "sk-...",
+     },
+   ]
+
+   # Agent server settings (optional)
+   config.agent_server_config = {} # Hash of AgentServer keyword arguments
+ end
+ ```
+
+ ### Configuration Keys
+
+ | Key | Type | Default | Description |
+ |---|---|---|---|
+ | `logger` | `Logger` | `Logger.new($stdout)` | Logger used for diagnostic output |
+ | `environment` | `String` | `"test"` | Runtime environment name |
+ | `cache_provider` | cache store | `NullCache` | ActiveSupport-compatible cache store |
+ | `inference_api_key` | `String` | `""` | API key for the LLM inference service |
+ | `inference_base_url` | `String` | `""` | Base URL for the LLM inference service |
+ | `observability_base_url` | `String` | `""` | Base URL for the Langfuse observability service |
+ | `observability_projects` | `Array<Hash>` | `[]` | Langfuse project credentials (slug, id, public_key, secret_key) |
+ | `agent_server_config` | `Hash` | `{}` | Credentials for `AgentServer.new`. Expected keys: `base_url` (String) — HTTP base URL of the agent server; `api_key` (String) — bearer token; `user_id` (String, default: `"default-user"`) — caller identity |
+
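For reference, the `agent_server_config` hash mirrors the keyword arguments of `AgentServer.new`. A minimal sketch (the URL and token below are placeholders, not real credentials):

```ruby
# Placeholder values; the base_url and api_key shown are hypothetical.
agent_server_config = {
  base_url: "https://agent-server.example.com",
  api_key: "my-secret-token",
  user_id: "default-user", # optional; this is the default
}

# The hash is intended to be splatted into the constructor, e.g.:
#   NitroIntelligence::AgentServer.new(**agent_server_config)
agent_server_config.key?(:base_url) # => true
```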
+ ## Basic Usage
+
+ ### OpenAI API/LLM Requests
+
+ A simple LLM call can be invoked as follows:
+
+ ```ruby
+ client = NitroIntelligence::Client.new
+ result = client.chat(message: "Why is the sky blue?")
+ content = result.choices.first&.message&.content
+ ```
+
+ This component sets sensible defaults for most options, such as the model, host, and keys.
+
+ You may also use [`openai-ruby`](https://github.com/openai/openai-ruby)-compatible syntax with this wrapper by passing in the `parameters` keyword. For example:
+
+ ```ruby
+ client = NitroIntelligence::Client.new
+ client.chat(parameters: { model: "meta-llama/Llama-3.1-8B-Instruct", messages: [{ role: "user", content: "Why is the sky blue?" }] })
+ ```
+
+ #### Providing Parameters
+
+ Parameters such as `max_tokens` and `temperature` can be passed in under the `parameters` key.
+
+ ```ruby
+ client = NitroIntelligence::Client.new
+ client.chat(parameters: { model: "meta-llama/Llama-3.1-8B-Instruct", max_tokens: 1000, temperature: 0.7, messages: [{ role: "user", content: "Why is the sky blue?" }] })
+ ```
+
+ For a full list of supported parameters, see the [API reference](https://developers.openai.com/api/reference/resources/completions/methods/create).
+
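Under the hood, a bare `message:` is just shorthand for a single user turn in the `messages` array. A plain-Ruby sketch of that equivalence (the wrapping logic shown here is an illustration of the documented behavior, not the gem's exact source):

```ruby
message = "Why is the sky blue?"
parameters = { messages: [] }

# A bare message: keyword becomes a one-element, user-role messages array.
if parameters[:messages].empty? && !message.empty?
  parameters[:messages] = [{ role: "user", content: message }]
end

parameters[:messages] # => [{ role: "user", content: "Why is the sky blue?" }]
```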
+ ### Image Editing and Generation
+
+ Nitro Intelligence can be used for image editing and generation.
+
+ Basic examples of usage:
+
+ #### Image Generation
+
+ ```ruby
+ client = NitroIntelligence::Client.new
+ result = client.generate_image(message: "Create an image of a bear installing a window.")
+ ```
+
+ `result` is a `NitroIntelligence::ImageGeneration` object, and the generated image can be accessed via `result.generated_image`, which returns a `NitroIntelligence::Image`.
+
+ You may write the file to disk:
+
+ ```ruby
+ File.binwrite("my_generated_image.#{result.generated_image.file_extension}", result.generated_image.byte_string)
+ ```
+
+ #### Image Editing and Uploading Reference Images
+
+ To edit an image, provide your source image as a byte string, along with any reference images you would like to include.
+
+ ```ruby
+ client = NitroIntelligence::Client.new
+ house = File.binread("./house.jpg")
+ siding = File.binread("./siding.png")
+ result = client.generate_image(message: "Replace the siding in the image of the house with the new siding I have provided.", target_image: house, reference_images: [siding])
+ ```
+
+ #### Using Prompts
+
+ See [Observability](#observability) for more details. Basic usage looks like:
+
+ ```ruby
+ client = NitroIntelligence::Client.new(observability_project_slug: "sample-project-slug")
+ house = File.binread("./house.jpg")
+ siding = File.binread("./siding.png")
+ result = client.generate_image(target_image: house, reference_images: [siding], parameters: { prompt_name: "Sample Prompt Name" })
+ ```
+
+ #### Image Configuration
+
+ You can specify parameters such as the model to use, aspect ratio, and resolution via the `parameters` key:
+
+ ```ruby
+ client = NitroIntelligence::Client.new
+ result = client.generate_image(message: "Create an image of a bear installing a window.", parameters: { aspect_ratio: "4:3", resolution: "512" })
+ ```
+
+ ## Observability
+
+ ### Setup
+
+ If your feature is set up for observability, simply pass in the observability project slug and desired prompt name when invoking requests. It is also preferable to pass in a source and any other useful metadata for later debugging. For example:
+
+ ```ruby
+ client = NitroIntelligence::Client.new(observability_project_slug: "fake-feature-project")
+ client.chat(
+   message: "Why is the sky blue?",
+   parameters: {
+     prompt_name: "My Prompt",
+     # prompt_label: "debug",
+     # prompt_version: "v2",
+     metadata: {
+       source: self.class,
+     },
+   }
+ )
+ ```
+
+ If no `prompt_label` or `prompt_version` is provided, the `production` prompt is used by default.
+
+ ### Custom Trace Names
+
+ To provide custom trace names to the observability platform, you can pass `trace_name` in `parameters`. Example:
+
+ ```ruby
+ client = NitroIntelligence::Client.new(observability_project_slug: "fake-feature-project")
+ client.chat(
+   message: "Why is the sky blue?",
+   parameters: {
+     prompt_name: "My Prompt",
+     metadata: {
+       source: self.class,
+     },
+     trace_name: "custom-trace-name",
+   }
+ )
+ ```
+
+ ### Prompt Variables and Config
+
+ Prompts are often created with "variables". These variables can be supplied and compiled into the prompt. For example:
+
+ ```ruby
+ client = NitroIntelligence::Client.new(observability_project_slug: "fake-feature-project")
+ client.chat(
+   message: "Where is the appointment?",
+   parameters: {
+     prompt_name: "My Prompt With Variables",
+     prompt_variables: {
+       appointment_id: "1234",
+     },
+     metadata: {
+       source: self.class,
+     },
+   }
+ )
+ ```
+
+ Prompts can also be created with a "config" JSON object. This config stores structured data such as model parameters (like model name or temperature), function/tool parameters, or JSON schemas.
+
+ By default, the prompt config is merged into the `parameters` hash of your chat request, overriding any existing keys.
+
+ Consider this prompt config:
+
+ ```json
+ {
+   "model": "gpt-4o-mini"
+ }
+ ```
+
+ Invoking this request would result in "gpt-4o-mini" being used as the model, even if a model is supplied manually:
+
+ ```ruby
+ client = NitroIntelligence::Client.new(observability_project_slug: "fake-feature-project")
+ client.chat(
+   message: "Where is the appointment?",
+   parameters: {
+     model: "meta-llama/Llama-3.1-8B-Instruct", # Will not be used; overridden by the config's "gpt-4o-mini"
+     prompt_name: "My Prompt With Variables",
+     prompt_variables: {
+       appointment_id: "1234",
+     },
+     metadata: {
+       source: self.class,
+     },
+   }
+ )
+ ```
+
+ To disable the prompt config entirely, which may be useful for debugging or testing, you can supply the `prompt_config_disabled` keyword. For example:
+
+ ```ruby
+ client = NitroIntelligence::Client.new(observability_project_slug: "fake-feature-project")
+ client.chat(
+   message: "Where is the appointment?",
+   parameters: {
+     model: "meta-llama/Llama-3.1-8B-Instruct", # This will now be used, since prompt_config_disabled is true
+     prompt_name: "My Prompt With Variables",
+     prompt_variables: {
+       appointment_id: "1234",
+     },
+     prompt_config_disabled: true,
+     metadata: {
+       source: self.class,
+     },
+   }
+ )
+ ```
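The override behavior above is ordinary `Hash#merge!` semantics. A dependency-free sketch (the hash values are illustrative):

```ruby
parameters = { model: "meta-llama/Llama-3.1-8B-Instruct", temperature: 0.2 }
prompt_config = { model: "gpt-4o-mini" } # parsed from the prompt's config JSON
prompt_config_disabled = false

# The prompt config wins on key collisions, unless explicitly disabled.
parameters.merge!(prompt_config) unless prompt_config_disabled

parameters[:model]       # => "gpt-4o-mini"
parameters[:temperature] # => 0.2 (keys absent from the config survive the merge)
```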
@@ -0,0 +1,76 @@
+ module NitroIntelligence
+   class AgentServer
+     class ConfigurationError < StandardError; end
+     class ThreadInitializationError < StandardError; end
+     class RunError < StandardError; end
+
+     attr_reader :base_url, :user_id
+
+     def initialize(base_url:, api_key:, user_id: "default-user")
+       raise ConfigurationError, "base_url is required" if base_url.blank?
+       raise ConfigurationError, "api_key is required" if api_key.blank?
+       raise ConfigurationError, "user_id is required" if user_id.blank?
+
+       @base_url = base_url
+       @api_key = api_key
+       @user_id = user_id
+     end
+
+     def await_run(thread_id:, assistant_id:, messages:, context: {})
+       raise RunError, "messages cannot be empty" if messages.blank?
+
+       # All but the last message seed the thread; the last message triggers the run.
+       # (For a one-element array this yields initial_state == [] and
+       # last_message == messages.first, so no special casing is needed.)
+       *initial_state, last_message = messages
+
+       initialize_thread_if_needed(thread_id:, initial_state:)
+       trigger_run(thread_id:, assistant_id:, context:, last_message:)
+     end
+
+     private
+
+     def initialize_thread_if_needed(thread_id:, initial_state:)
+       thread_response = post(
+         path: "/threads",
+         body: {
+           threadId: thread_id.to_s,
+           ifExists: "do_nothing",
+           initial_state: { messages: initial_state },
+           user_id:,
+         }
+       )
+
+       raise ThreadInitializationError, thread_response.body if thread_response.code != 200
+
+       thread_response
+     end
+
+     def trigger_run(thread_id:, assistant_id:, last_message:, context: {})
+       run_response = post(
+         path: "/threads/#{thread_id}/runs/wait",
+         body: {
+           assistant_id:,
+           context:,
+           input: {
+             messages: [last_message],
+           },
+         }
+       )
+
+       raise RunError, run_response.body if run_response.code != 200
+
+       run_response["messages"].last["content"]
+     end
+
+     def post(path:, body:)
+       HTTParty.post(
+         "#{base_url}#{path}",
+         headers: {
+           "Content-Type" => "application/json",
+           "Authorization" => "Bearer #{@api_key}",
+         },
+         body: body.to_json
+       )
+     end
+   end
+ end
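The splat destructuring in `await_run` splits a conversation into the thread's seed state plus the triggering message. A standalone illustration:

```ruby
messages = [
  { role: "user", content: "Hi" },
  { role: "assistant", content: "Hello! How can I help?" },
  { role: "user", content: "Where is my appointment?" },
]

# Everything except the last message becomes the thread's initial state;
# the final message is what actually triggers the run.
*initial_state, last_message = messages

initial_state.length   # => 2
last_message[:content] # => "Where is my appointment?"

# With a single-element array, the splat naturally yields an empty seed:
*seed, only = [{ role: "user", content: "Hi" }]
seed # => []
```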
@@ -0,0 +1,337 @@
+ require "nitro_intelligence/trace"
+
+ module NitroIntelligence
+   class Client
+     attr_accessor :client
+
+     def initialize(observability_project_slug: nil)
+       @inference_api_key = NitroIntelligence.config.inference_api_key
+       @inference_host = NitroIntelligence.config.inference_base_url
+       @observability_host = NitroIntelligence.config.observability_base_url
+       @observability_project_slug = observability_project_slug
+       @client = OpenAI::Client.new(
+         api_key: @inference_api_key,
+         base_url: @inference_host
+       )
+       @langfuse_client = NitroIntelligence.langfuse_clients[observability_project_slug]
+     end
+
+     def chat(message: "", parameters: {})
+       default_params = CUSTOM_PARAMS.index_with { |_param| nil }
+                                     .merge({
+                                       metadata: {},
+                                       messages: [],
+                                       model: NitroIntelligence.model_catalog.default_text_model.name,
+                                       observability_project_slug:,
+                                     })
+       parameters = default_params.merge(parameters)
+
+       parameters[:messages] = [{ role: "user", content: message }] if parameters[:messages].blank? && message.present?
+
+       return chat_with_tracing(parameters:) if observability_available?
+
+       client_chat(parameters:)
+     end
+
+     # We abstract the image generation for now because of usage
+     # across various APIs: chat/completions, image/edits, image/generations.
+     # Input images should be byte strings.
+     # Returns NitroIntelligence::ImageGeneration.
+     def generate_image(
+       message: "",
+       target_image: nil,
+       reference_images: [],
+       parameters: {}
+     )
+       image_generation = NitroIntelligence::ImageGeneration.new(message:, target_image:, reference_images:) do |config|
+         config.aspect_ratio = parameters[:aspect_ratio] if parameters.key?(:aspect_ratio)
+         config.model = parameters[:model] if parameters.key?(:model)
+         config.resolution = parameters[:resolution] if parameters.key?(:resolution)
+       end
+
+       default_params = CUSTOM_PARAMS.index_with { |_param| nil }
+                                     .merge({
+                                       image_generation:,
+                                       metadata: {},
+                                       messages: image_generation.messages,
+                                       model: image_generation.config.model,
+                                       observability_project_slug:,
+                                       extra_headers: {
+                                         "Prefer" => "wait",
+                                       },
+                                       request_options: {
+                                         extra_body: {
+                                           image_config: {
+                                             aspect_ratio: image_generation.config.aspect_ratio,
+                                             image_size: image_generation.config.resolution,
+                                           },
+                                         },
+                                       },
+                                     })
+       parameters = default_params.merge(parameters)
+
+       if observability_available?
+         chat_with_tracing(parameters:)
+       else
+         chat_completion = client_chat(parameters:)
+         image_generation.parse_file(chat_completion)
+       end
+
+       image_generation
+     end
+
+     def score(trace_id:, name:, value:, id: "#{trace_id}-#{name}")
+       raise ObservabilityUnavailableError, "Observability project slug not configured" unless observability_available?
+
+       if @langfuse_client.nil?
+         raise LangfuseClientNotFoundError,
+               "No Langfuse client found for slug: #{observability_project_slug}"
+       end
+
+       @langfuse_client.create_score(
+         id:,
+         trace_id:,
+         name:,
+         value:,
+         environment: NitroIntelligence.environment
+       )
+     end
+
+     def create_dataset_item(attributes)
+       HTTParty.post("#{@observability_host}/api/public/dataset-items",
+                     body: attributes.to_json,
+                     headers: {
+                       "Content-Type" => "application/json",
+                       "Authorization" => "Basic #{observability_auth_token}",
+                     })
+     end
+
+     private
+
+     attr_reader :observability_project_slug
+
+     def current_revision
+       return @current_revision if defined?(@current_revision)
+
+       path = Rails.root.join("REVISION")
+       @current_revision = File.exist?(path) ? File.read(path).strip.presence : nil
+     end
+
+     def observability_available?
+       observability_project_slug.present?
+     end
+
+     def method_missing(method_name, *, &)
+       @client.send(method_name, *, &)
+     end
+
+     def respond_to_missing?(method_name, include_private = false)
+       @client.respond_to?(method_name, include_private) || super
+     end
+
+     def project_config
+       return @project_config if @project_config.present?
+
+       projects = NitroIntelligence.config.observability_projects
+       @project_config = projects.find { |project| project["slug"] == observability_project_slug }
+
+       if @project_config.nil?
+         raise ObservabilityProjectConfigNotFoundError,
+               "No observability project config found for slug: #{observability_project_slug}"
+       end
+
+       @project_config
+     end
+
+     def client_chat(parameters:)
+       # When requesting an OpenAI model, the OpenAI API will return a 400 because it does not ignore custom params
+       @client.chat.completions.create(**parameters.except(*NitroIntelligence.omit_params))
+     end
+
+     def chat_with_tracing(parameters:)
+       project = get_project(
+         project_id: project_config["id"],
+         observability_public_key: project_config["public_key"]
+       )
+       prompt = handle_prompt(parameters:, project_config:)
+
+       instrument_tracing(prompt:, project:, parameters:)
+     rescue ObservabilityProjectNotFoundError, LangfuseClientNotFoundError => e
+       # We should still send the request if we have problems with observability
+       NitroIntelligence.logger.warn(
+         "#{self.class} - Observability configuration provided, but could not be processed. #{e}. " \
+         "Sending request regardless."
+       )
+       client_chat(parameters:)
+     end
+
+     def handle_prompt(parameters:, project_config:)
+       return if parameters[:prompt_name].blank?
+
+       prompt_store = NitroIntelligence::PromptStore.new(
+         observability_project_slug:,
+         observability_public_key: project_config["public_key"],
+         observability_secret_key: project_config["secret_key"]
+       )
+       prompt = prompt_store.get_prompt(
+         prompt_name: parameters[:prompt_name],
+         prompt_label: parameters[:prompt_label],
+         prompt_version: parameters[:prompt_version]
+       )
+       prompt_variables = parameters[:prompt_variables] || {}
+
+       if prompt.present?
+         parameters[:messages] = prompt.interpolate(
+           messages: parameters[:messages],
+           variables: prompt_variables
+         )
+
+         parameters.merge!(prompt.config) unless parameters[:prompt_config_disabled]
+       end
+
+       prompt
+     end
+
+     def instrument_tracing(prompt:, project:, parameters:) # rubocop:disable Metrics/AbcSize, Metrics/MethodLength
+       if @langfuse_client.nil?
+         raise LangfuseClientNotFoundError,
+               "No Langfuse client found for slug: #{observability_project_slug}"
+       end
+
+       default_trace_name = project["name"]
+       input = parameters[:messages]
+       image_generation = parameters[:image_generation]
+       metadata = parameters[:metadata]
+
+       if prompt
+         metadata[:prompt_name] = prompt.name
+         metadata[:prompt_version] = prompt.version
+         default_trace_name = prompt.name
+       end
+
+       seed = parameters[:trace_seed]
+       trace_id = NitroIntelligence::Trace.create_id(seed:) if seed.present?
+
+       chat_completion = nil
+
+       Langfuse.propagate_attributes(
+         user_id: parameters[:user_id] || "default-user",
+         metadata: metadata.transform_values(&:to_s)
+       ) do
+         @langfuse_client.observe(
+           "llm-response",
+           as_type: :generation,
+           trace_id:,
+           environment: NitroIntelligence.environment.to_s,
+           input:,
+           model: parameters[:model],
+           metadata: metadata.transform_values(&:to_s)
+         ) do |generation|
+           generation.update_trace(
+             name: parameters[:trace_name] || default_trace_name,
+             release: current_revision
+           )
+
+           if prompt
+             generation.update({
+               prompt: {
+                 name: prompt.name,
+                 version: prompt.version,
+               },
+             })
+           end
+
+           chat_completion = client_chat(parameters:)
+           output = chat_completion.choices.first.message.to_h
+
+           # Handle image generation media
+           if image_generation
+             image_generation.trace_id = generation.trace_id
+             image_generation.parse_file(chat_completion)
+             handle_image_generation_uploads(input, output, image_generation)
+           end
+
+           # Handle truncating any unnecessary data before storing into trace
+           handle_truncation(input, output, chat_completion.model)
+
+           generation.model = chat_completion.model
+           generation.usage_details = {
+             prompt_tokens: chat_completion.usage.prompt_tokens,
+             completion_tokens: chat_completion.usage.completion_tokens,
+             total_tokens: chat_completion.usage.total_tokens,
+           }
+           generation.input = input
+           generation.output = output
+
+           generation.update_trace(input:, output:)
+         end
+       end
+
+       chat_completion
+     end
+
+     # Returns name and metadata
+     def get_project(project_id:, observability_public_key:)
+       cache_key = OBSERVABILITY_PROJECTS_CACHE_KEY_PREFIX + project_id
+       cached_project = NitroIntelligence.cache.read(cache_key)
+       return cached_project if cached_project.present?
+
+       target_project = get_projects(observability_public_key:).find do |project|
+         project["id"] == project_id
+       end
+
+       raise ObservabilityProjectNotFoundError, "Project with ID: #{project_id} not found" if target_project.nil?
+
+       NitroIntelligence.cache.write(cache_key, target_project, expires_in: 12.hours)
+       target_project
+     end
+
+     def get_projects(observability_public_key:)
+       response = HTTParty.get("#{@observability_host}/api/public/projects",
+                               headers: {
+                                 "Authorization" => "Basic #{observability_auth_token}",
+                               })
+       data = JSON.parse(response.body)["data"]
+       if data.nil?
+         raise(
+           ObservabilityProjectNotFoundError,
+           "No projects were found. Public key: #{observability_public_key || 'missing'}"
+         )
+       end
+       data
+     end
+
+     def handle_image_generation_uploads(input, output, image_generation)
+       # If we are doing image generation we should upload the media to observability manually
+       upload_handler = NitroIntelligence::UploadHandler.new(@observability_host, observability_auth_token)
+       upload_handler.upload(
+         image_generation.trace_id,
+         upload_queue: Queue.new(image_generation.files)
+       )
+
+       # Replace base64 strings with media references
+       upload_handler.replace_base64_with_media_references(input)
+       upload_handler.replace_base64_with_media_references(output)
+     end
+
+     def handle_truncation(_input, output, model_name)
+       model = NitroIntelligence.model_catalog.lookup_by_name(model_name)
+
+       return unless model&.omit_output_fields
+
+       model.omit_output_fields.each do |omit_output_field|
+         last_key = omit_output_field.last
+         parent_keys = omit_output_field[0...-1]
+         parent = parent_keys.empty? ? output : output.dig(*parent_keys)
+
+         parent[last_key] = "[Truncated...]" if parent.is_a?(Hash) && parent.key?(last_key)
+       end
+     end
+
+     def observability_auth_token
+       public_key = project_config["public_key"]
+       secret_key = project_config["secret_key"]
+       @observability_auth_token ||= Base64.strict_encode64("#{public_key}:#{secret_key}")
+     end
+   end
+ end
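`observability_auth_token` is standard HTTP Basic auth built from the project's key pair. A standalone sketch with hypothetical keys:

```ruby
require "base64"

public_key = "pk-lf-example" # hypothetical public key
secret_key = "sk-lf-example" # hypothetical secret key

# Basic auth: base64("public_key:secret_key"), with no trailing newline
# (hence strict_encode64 rather than encode64).
token = Base64.strict_encode64("#{public_key}:#{secret_key}")
headers = { "Authorization" => "Basic #{token}" }

Base64.decode64(token) # => "pk-lf-example:sk-lf-example"
```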
@@ -0,0 +1,25 @@
+ require "logger"
+
+ require "nitro_intelligence/null_cache"
+
+ module NitroIntelligence
+   class Configuration
+     include ActiveSupport::Configurable
+
+     config_accessor :logger, default: Logger.new($stdout)
+     config_accessor :cache_provider, default: NitroIntelligence::NullCache.new
+     config_accessor :environment, default: "test"
+     config_accessor :agent_server_config, default: {}
+     config_accessor :inference_api_key, default: ""
+     config_accessor :inference_base_url, default: ""
+     config_accessor :model_config, default: {}
+     config_accessor :observability_base_url, default: ""
+     config_accessor :observability_projects, default: []
+
+     class << self
+       def configure
+         yield config
+       end
+     end
+   end
+ end
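Here `ActiveSupport::Configurable` supplies the `config` object and its accessors, while `configure` is the usual Ruby block-configuration idiom shown in the README. A dependency-free sketch of the same pattern (`TinyConfig` and `TinyIntelligence` are made-up names for illustration, not part of the gem):

```ruby
require "logger"

# Stand-in for the ActiveSupport-backed Configuration class.
class TinyConfig
  attr_accessor :logger, :environment, :inference_api_key

  def initialize
    @logger = Logger.new($stdout)
    @environment = "test"
    @inference_api_key = ""
  end
end

module TinyIntelligence
  # Memoized module-level config, as Configurable provides for the real class.
  def self.config
    @config ||= TinyConfig.new
  end

  # Mirrors the configure entrypoint: yield the config object for mutation.
  def self.configure
    yield config
  end
end

TinyIntelligence.configure do |c|
  c.environment = "production"
  c.inference_api_key = "test-key"
end

TinyIntelligence.config.environment # => "production"
```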