omniai-llama 0.0.1 → 2.6.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
-   metadata.gz: 25c0a8a9ba448d407d0f8300d377d0e3a10a0aa1b18343540a0b32f3eb3baeb5
-   data.tar.gz: 67d4173043933e6314df901969eae71b0ab1137dfd8da4365377824eeed167dd
+   metadata.gz: b6c8f4322c62d60da848e490cf8815e518e50a9ca542c5fe37cf6efaf47db4cc
+   data.tar.gz: 713668fc41bb593495f60eac721b645bc4b5156ac6e2050c8939db8b10cc0582
  SHA512:
-   metadata.gz: 2e43f7b12d1dbcbe2b02c9a7bc206ee0c1980c231843cb7fc292774548fed69ebb492a17aa8d451374a106bcc7d24f9ec0af27cce34004459493174b74854243
-   data.tar.gz: b991b560116560bff895708f89cc28d01c20a1f477ab69625fb910249bdc7b853eb91f23666b8d1fedb7caaac5f715bc01c3cfa749ee91a0d4929d9e9ac1f3ec
+   metadata.gz: 0f39b5fab49e6aa7d7ec42dc404391c4319a8c5a59d97a43a187f611bb39940231684b07604cac1cc6caedbc0ecbe72f1088665eae0610f141db8db4ba9d5e51
+   data.tar.gz: ff42220c333e7458659d9610ba6ed39d57d08ec500674992271a52527779f0a3be7616a0f7210d6a9732cde9f54d5b6a37085ea5cc45777ae931505fd8014017
data/README.md CHANGED
@@ -34,7 +34,7 @@ Global configuration is supported for the following options:
  
  ```ruby
  OmniAI::Llama.configure do |config|
-   config.api_key = 'sk-...' # default: ENV['LLAMA_API_KEY']
+   config.api_key = 'LLM|...' # default: ENV['LLAMA_API_KEY']
  end
  ```
  
@@ -59,15 +59,13 @@ completion.content # 'The capital of Canada is Ottawa.'
  
  #### Model
  
- `model` takes an optional string (default is `gpt-4o`):
+ `model` takes an optional string (default is `Llama-4-Scout-17B-16E-Instruct-FP8`):
  
  ```ruby
- completion = client.chat('How fast is a cheetah?', model: OmniAI::Llama::Chat::Model::GPT_3_5_TURBO)
+ completion = client.chat('How fast is a cheetah?', model: OmniAI::Llama::Chat::Model::LLAMA_4_SCOUT)
  completion.content # 'A cheetah can reach speeds over 100 km/h.'
  ```
  
- [OpenAI API Reference `model`](https://platform.openai.com/docs/api-reference/chat/create#chat-create-model)
-
  #### Temperature
  
  `temperature` takes an optional float between `0.0` and `2.0` (default is `0.7`):
@@ -77,8 +75,6 @@ completion = client.chat('Pick a number between 1 and 5', temperature: 2.0)
  completion.content # '3'
  ```
  
- [OpenAI API Reference `temperature`](https://platform.openai.com/docs/api-reference/chat/create#chat-create-temperature)
-
  #### Stream
  
  `stream` takes an optional proc to stream responses in real-time chunks instead of waiting for a complete response:
@@ -90,8 +86,6 @@ end
  client.chat('Be poetic.', stream:)
  ```
  
- [OpenAI API Reference `stream`](https://platform.openai.com/docs/api-reference/chat/create#chat-create-stream)
-
  #### Format
  
  `format` takes an optional symbol (`:json`) that sets the `response_format` to `json_object`:
@@ -104,329 +98,4 @@ end
  JSON.parse(completion.content) # { "name": "Ringo" }
  ```
  
- [OpenAI API Reference `response_format`](https://platform.openai.com/docs/api-reference/chat/create#chat-create-stream)
-
  > When using JSON mode, you must also instruct the model to produce JSON yourself via a system or user message.
-
- ### Transcribe
-
- A transcription is generated by passing in a path to a file:
-
- ```ruby
- transcription = client.transcribe(file.path)
- transcription.text # '...'
- ```
-
- #### Prompt
-
- `prompt` is optional and can provide additional context for transcribing:
-
- ```ruby
- transcription = client.transcribe(file.path, prompt: '')
- transcription.text # '...'
- ```
-
- [OpenAI API Reference `prompt`](https://platform.openai.com/docs/api-reference/audio/createTranscription#audio-createtranscription-prompt)
-
- #### Format
-
- `format` is optional and supports `json`, `text`, `srt` or `vtt`:
-
- ```ruby
- transcription = client.transcribe(file.path, format: OmniAI::Transcribe::Format::TEXT)
- transcription.text # '...'
- ```
-
- [OpenAI API Reference `response_format`](https://platform.openai.com/docs/api-reference/audio/createTranscription#audio-createtranscription-response_format)
-
- #### Language
-
- `language` is optional and may improve accuracy and latency:
-
- ```ruby
- transcription = client.transcribe(file.path, language: OmniAI::Transcribe::Language::SPANISH)
- transcription.text
- ```
-
- [OpenAI API Reference `language`](https://platform.openai.com/docs/api-reference/audio/createTranscription#audio-createtranscription-language)
-
- #### Temperature
-
- `temperature` is optional and must be between 0.0 (more deterministic) and 1.0 (less deterministic):
-
- ```ruby
- transcription = client.transcribe(file.path, temperature: 0.2)
- transcription.text
- ```
-
- [OpenAI API Reference `temperature`](https://platform.openai.com/docs/api-reference/audio/createTranscription#audio-createtranscription-temperature)
-
- ### Speak
-
- Speech can be generated by passing text with a block:
-
- ```ruby
- File.open('example.ogg', 'wb') do |file|
-   client.speak('How can a clam cram in a clean cream can?') do |chunk|
-     file << chunk
-   end
- end
- ```
-
- If a block is not provided then a tempfile is returned:
-
- ```ruby
- tempfile = client.speak('Can you can a can as a canner can can a can?')
- tempfile.close
- tempfile.unlink
- ```
-
- #### Voice
-
- `voice` is optional and must be one of the supported voices:
-
- ```ruby
- client.speak('She sells seashells by the seashore.', voice: OmniAI::Llama::Speak::Voice::SHIMMER)
- ```
-
- [OpenAI API Reference `voice`](https://platform.openai.com/docs/api-reference/audio/createSpeech#audio-createspeech-voice)
-
- #### Model
-
- `model` is optional and must be either `tts-1` or `tts-1-hd` (default):
-
- ```ruby
- client.speak('I saw a kitten eating chicken in the kitchen.', format: OmniAI::Llama::Speak::Model::TTS_1)
- ```
-
- [OpenAI API Refernce `model`](https://platform.openai.com/docs/api-reference/audio/createSpeech#audio-createspeech-model)
-
- #### Speed
-
- `speed` is optional and must be between 0.25 and 0.40:
-
- ```ruby
- client.speak('How much wood would a woodchuck chuck if a woodchuck could chuck wood?', speed: 4.0)
- ```
-
- [OmniAI API Reference `speed`](https://platform.openai.com/docs/api-reference/audio/createSpeech#audio-createspeech-speed)
-
- #### Format
-
- `format` is optional and supports `MP3` (default), `OPUS`, `AAC`, `FLAC`, `WAV` or `PCM`:
-
- ```ruby
- client.speak('A pessemistic pest exists amidst us.', format: OmniAI::Llama::Speak::Format::FLAC)
- ```
-
- [OpenAI API Reference `format`](https://platform.openai.com/docs/api-reference/audio/createSpeech#audio-createspeech-response_format)
-
- ## Files
-
- ### Finding an File
-
- ```ruby
- client.files.find(id: 'file_...')
- ```
-
- ### Listing all Files
-
- ```ruby
- client.files.all
- ```
-
- ### Uploading a File
-
- #### Using a File
-
- ```ruby
- file = client.files.build(io: File.open('demo.pdf', 'wb'))
- file.save!
- ```
-
- #### Using a Path
-
- ```ruby
- file = client.files.build(io: 'demo.pdf'))
- file.save!
- ```
-
- ### Downloading a File
-
- ```ruby
- file = client.files.find(id: 'file_...')
- File.open('...', 'wb') do |file|
-   file.content do |chunk|
-     file << chunk
-   end
- end
- ```
-
- ### Destroying a File
-
- ```ruby
- client.files.destroy!('file_...')
- ```
-
- ## Assistants
-
- ### Finding an Assistant
-
- ```ruby
- client.assistants.find(id: 'asst_...')
- ```
-
- ### Listing all Assistants
-
- ```ruby
- client.assistants.all
- ```
-
- ### Creating an Assistant
-
- ```ruby
- assistant = client.assistants.build
- assistant.name = 'Ringo'
- assistant.model = OmniAI::Llama::Chat::Model::GPT_4
- assistant.description = 'The drummer for the Beatles.'
- assistant.save!
- ```
-
- ### Updating an Assistant
-
- ```ruby
- assistant = client.assistants.find(id: 'asst_...')
- assistant.name = 'George'
- assistant.model = OmniAI::Llama::Chat::Model::GPT_4
- assistant.description = 'A guitarist for the Beatles.'
- assistant.save!
- ```
-
- ### Destroying an Assistant
-
- ```ruby
- client.assistants.destroy!('asst_...')
- ```
-
- ## Threads
-
- ### Finding a Thread
-
- ```ruby
- client.threads.find(id: 'thread_...')
- ```
-
- ### Creating a Thread
-
- ```ruby
- thread = client.threads.build
- thread.metadata = { user: 'Ringo' }
- thread.save!
- ```
-
- ### Updating a Thread
-
- ```ruby
- thread = client.threads.find(id: 'thread_...')
- thread.metadata = { user: 'Ringo' }
- thread.save!
- ```
-
- ### Destroying a Threads
-
- ```ruby
- client.threads.destroy!('thread_...')
- ```
-
- ### Messages
-
- #### Finding a Message
-
- ```ruby
- thread = client.threads.find(id: 'thread_...')
- message = thread.messages.find(id: 'msg_...')
- message.save!
- ```
-
- #### Listing all Messages
-
- ```ruby
- thread = client.threads.find(id: 'thread_...')
- thread.messages.all
- ```
-
- #### Creating a Message
-
- ```ruby
- thread = client.threads.find(id: 'thread_...')
- message = thread.messages.build(role: 'user', content: 'Hello?')
- message.save!
- ```
-
- #### Updating a Message
-
- ```ruby
- thread = client.threads.find(id: 'thread_...')
- message = thread.messages.build(role: 'user', content: 'Hello?')
- message.save!
- ```
-
- ### Runs
-
- #### Finding a Run
-
- ```ruby
- thread = client.threads.find(id: 'thread_...')
- run = thread.runs.find(id: 'run_...')
- run.save!
- ```
-
- #### Listing all Runs
-
- ```ruby
- thread = client.threads.find(id: 'thread_...')
- thread.runs.all
- ```
-
- #### Creating a Run
-
- ```ruby
- run = client.runs.find(id: 'thread_...')
- run = thread.runs.build
- run.metadata = { user: 'Ringo' }
- run.save!
- ```
-
- #### Updating a Run
-
- ```ruby
- thread = client.threads.find(id: 'thread_...')
- run = thread.messages.find(id: 'run_...')
- run.metadata = { user: 'Ringo' }
- run.save!
- ```
-
- #### Polling a Run
-
- ```ruby
- run.terminated? # false
- run.poll!
- run.terminated? # true
- run.status # 'cancelled' / 'failed' / 'completed' / 'expired'
- ```
-
- #### Cancelling a Run
-
- ```ruby
- thread = client.threads.find(id: 'thread_...')
- run = thread.runs.cancel!(id: 'run_...')
- ```
-
- ### Embed
-
- Text can be converted into a vector embedding for similarity comparison usage via:
-
- ```ruby
- response = client.embed('The quick brown fox jumps over a lazy dog.')
- response.embedding # [0.0, ...]
- ```
lib/omniai/llama/chat/content_serializer.rb ADDED
@@ -0,0 +1,21 @@
+ # frozen_string_literal: true
+
+ module OmniAI
+   module Llama
+     class Chat
+       # Overrides content serialize / deserialize.
+       module ContentSerializer
+         # @param data [Hash]
+         # @param context [Context]
+         # @return [OmniAI::Chat::Text, OmniAI::Chat::ToolCall]
+         def self.deserialize(data, context:)
+           if data["tool_call"]
+             OmniAI::Chat::ToolCall.deserialize(data, context:)
+           else
+             data["text"]
+           end
+         end
+       end
+     end
+   end
+ end
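The new `ContentSerializer` replaces the deleted `MessageSerializer` (see the end of this diff): rather than building a whole `OmniAI::Chat::Message`, it deserializes only the content payload. A minimal sketch of its two branches follows; the hashes are illustrative, and only the `"tool_call"` and `"text"` keys come from the code above:

```ruby
require "omniai/llama"

serializer = OmniAI::Llama::Chat::ContentSerializer

# Plain text content deserializes to the raw string (context is unused on this branch):
serializer.deserialize({ "type" => "text", "text" => "Hello!" }, context: nil)
# => "Hello!"

# A payload carrying a "tool_call" key is instead delegated to
# OmniAI::Chat::ToolCall.deserialize, whose expected shape is defined by the
# base omniai gem rather than by this diff.
```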
lib/omniai/llama/chat/response_serializer.rb CHANGED
@@ -18,17 +18,17 @@ module OmniAI
  #   metrics: [
  #     {
  #       metric: "num_completion_tokens",
- #       value: 25,
+ #       value: 2,
  #       unit: "tokens",
  #     },
  #     {
  #       metric: "num_prompt_tokens",
- #       value: 25,
+ #       value: 3,
  #       unit: "tokens",
  #     },
  #     {
  #       metric: "num_total_tokens",
- #       value: 50,
+ #       value: 4,
  #       unit: "tokens",
  #     },
  #   ],
lib/omniai/llama/chat/stream.rb ADDED
@@ -0,0 +1,92 @@
+ # frozen_string_literal: true
+
+ module OmniAI
+   module Llama
+     class Chat
+       # A stream is used to process a series of chunks of data. It converts the following into a combined payload.
+       class Stream < OmniAI::Chat::Stream
+         # @yield [delta]
+         # @yieldparam delta [OmniAI::Chat::Delta]
+         #
+         # @return [Hash]
+         def stream!(&block)
+           @message = { "role" => "assistant" }
+           @metrics = []
+
+           @chunks.map do |chunk|
+             parser.feed(chunk) do |type, data, id|
+               process!(type, data, id, &block)
+             end
+           end
+
+           {
+             "completion_message" => @message,
+             "metrics" => @metrics,
+           }
+         end
+
+         protected
+
+         #
+         # @param data [Hash]
+         #
+         # @yield [delta]
+         # @yieldparam delta [OmniAI::Chat::Delta]
+         def process_data!(data:, &)
+           event = data["event"]
+
+           process_metrics(metrics: event["metrics"]) if event["metrics"]
+           process_delta(delta: event["delta"], &) if event["delta"]
+         end
+
+         # @param delta [Hash]
+         #
+         # @yield [delta]
+         # @yieldparam delta [OmniAI::Chat::Delta]
+         def process_delta(delta:, &block)
+           block&.call(OmniAI::Chat::Delta.new(text: delta["text"])) if delta["text"] && !delta["text"].empty?
+
+           case delta["type"]
+           when "text" then process_delta_text(delta:)
+           when "tool_call" then process_delta_tool_call(delta:)
+           end
+         end
+
+         # @param delta [Hash]
+         def process_delta_text(delta:)
+           return if delta["text"].empty?
+
+           if @message["content"]
+             @message["content"]["text"] += delta["text"]
+           else
+             @message["content"] = delta
+           end
+         end
+
+         # @param delta [Hash]
+         def process_delta_tool_call(delta:)
+           @message["tool_calls"] ||= []
+
+           latest = @message["tool_calls"][-1]
+
+           if delta["id"]
+             @message["tool_calls"] << {
+               "id" => delta["id"],
+               "function" => delta["function"],
+             }
+           else
+             latest["function"]["arguments"] ||= ""
+             latest["function"]["arguments"] += delta["function"]["arguments"]
+           end
+         end
+
+         # @param metrics [Array<Hash>]
+         def process_metrics(metrics:)
+           return unless metrics
+
+           @metrics = metrics
+         end
+       end
+     end
+   end
+ end
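`Stream#stream!` depends on state set up by the base `OmniAI::Chat::Stream` (`@chunks`, `parser`, `process!`), so a rough way to see what it accumulates is to replay a few hypothetical SSE event payloads by hand. The event shapes below are inferred from `process_data!` and `process_delta_text` above, not captured from the live API:

```ruby
# Hypothetical parsed SSE payloads, shaped like those process_data! expects.
events = [
  { "event" => { "delta" => { "type" => "text", "text" => "Hello" } } },
  { "event" => { "delta" => { "type" => "text", "text" => ", world!" } } },
  { "event" => { "metrics" => [{ "metric" => "num_total_tokens", "value" => 4, "unit" => "tokens" }] } },
]

message = { "role" => "assistant" }
metrics = []

events.each do |data|
  event = data["event"]
  metrics = event["metrics"] if event["metrics"]

  delta = event["delta"]
  next unless delta && delta["type"] == "text" && !delta["text"].empty?

  if message["content"]
    # Subsequent deltas append to the text of the first content hash.
    message["content"]["text"] += delta["text"]
  else
    # The first delta seeds the content hash itself.
    message["content"] = delta
  end
end

message
# => { "role" => "assistant", "content" => { "type" => "text", "text" => "Hello, world!" } }
```

The real `stream!` would then wrap these results as `{ "completion_message" => message, "metrics" => metrics }`, matching the combined payload the serializers expect.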
lib/omniai/llama/chat/usage_serializer.rb CHANGED
@@ -8,17 +8,17 @@ module OmniAI
  # [
  #   {
  #     metric: "num_completion_tokens",
- #     value: 25,
+ #     value: 2,
  #     unit: "tokens",
  #   },
  #   {
  #     metric: "num_prompt_tokens",
- #     value: 25,
+ #     value: 3,
  #     unit: "tokens",
  #   },
  #   {
  #     metric: "num_total_tokens",
- #     value: 50,
+ #     value: 4,
  #     unit: "tokens",
  #   },
  # ]
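This hunk only retouches the example values in the usage serializer's doc comment; the serializer body is unchanged and not shown in this diff. Purely as a sketch of how such a metrics array can be consumed (hypothetical code, not the gem's implementation):

```ruby
metrics = [
  { "metric" => "num_prompt_tokens", "value" => 3, "unit" => "tokens" },
  { "metric" => "num_completion_tokens", "value" => 2, "unit" => "tokens" },
  { "metric" => "num_total_tokens", "value" => 4, "unit" => "tokens" },
]

# Index the array by metric name, then read off the token counts.
values = metrics.to_h { |entry| [entry["metric"], entry["value"]] }

values["num_prompt_tokens"]     # => 3
values["num_completion_tokens"] # => 2
values["num_total_tokens"]      # => 4
```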
lib/omniai/llama/chat.rb CHANGED
@@ -2,7 +2,7 @@
  
  module OmniAI
    module Llama
-     # An OpenAI chat implementation.
+     # A Llama chat implementation.
      #
      # Usage:
      #
@@ -13,7 +13,6 @@ module OmniAI
      #   completion.choice.message.content # '...'
      class Chat < OmniAI::Chat
        JSON_RESPONSE_FORMAT = { type: "json_object" }.freeze
-       DEFAULT_STREAM_OPTIONS = { include_usage: ENV.fetch("OMNIAI_STREAM_USAGE", "on").eql?("on") }.freeze
  
        module Model
          LLAMA_4_SCOUT_17B_16E_INSTRUCT_FP8 = "Llama-4-Scout-17B-16E-Instruct-FP8"
@@ -30,12 +29,22 @@ module OmniAI
        CONTEXT = Context.build do |context|
          context.deserializers[:response] = ResponseSerializer.method(:deserialize)
          context.deserializers[:choice] = ChoiceSerializer.method(:deserialize)
-         context.deserializers[:message] = MessageSerializer.method(:deserialize)
+         context.deserializers[:content] = ContentSerializer.method(:deserialize)
          context.deserializers[:usage] = UsageSerializer.method(:deserialize)
        end
  
        protected
  
+       # @return [HTTP::Response]
+       def request!
+         logger&.debug("Chat#request! payload=#{payload.inspect}")
+
+         @client
+           .connection
+           .accept(stream? ? "text/event-stream" : :json)
+           .post(path, json: payload)
+       end
+
        # @return [Context]
        def context
          CONTEXT
@@ -48,7 +57,6 @@ module OmniAI
            model: @model,
            response_format: (JSON_RESPONSE_FORMAT if @format.eql?(:json)),
            stream: stream? || nil,
-           stream_options: (DEFAULT_STREAM_OPTIONS if stream?),
            temperature: @temperature,
            tools: (@tools.map(&:serialize) if @tools&.any?),
          }).compact
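Taken together with the hunk above, a simple non-streaming chat call now serializes to a body without `stream_options`. A sketch of the compacted payload for a one-message call; the `messages` shape comes from the base OmniAI serializer, and the literal hash below is illustrative:

```ruby
require "json"

# Roughly what Chat#payload produces for
# client.chat('How fast is a cheetah?', temperature: 0.7);
# nil-valued entries are dropped by the trailing #compact.
payload = {
  messages: [{ role: "user", content: "How fast is a cheetah?" }],
  model: "Llama-4-Scout-17B-16E-Instruct-FP8",
  response_format: nil, # set only when format: :json
  stream: nil,          # set only when streaming
  temperature: 0.7,
  tools: nil,           # set only when tools are registered
}.compact

puts JSON.generate(payload)
# {"messages":[{"role":"user","content":"How fast is a cheetah?"}],"model":"Llama-4-Scout-17B-16E-Instruct-FP8","temperature":0.7}
```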
lib/omniai/llama/config.rb CHANGED
@@ -2,7 +2,7 @@
  
  module OmniAI
    module Llama
-     # Configuration for OpenAI.
+     # Configuration for Llama.
      class Config < OmniAI::Config
        DEFAULT_HOST = "https://api.llama.com"
  
lib/omniai/llama/version.rb CHANGED
@@ -2,6 +2,6 @@
  
  module OmniAI
    module Llama
-     VERSION = "0.0.1"
+     VERSION = "2.6.0"
    end
  end
metadata CHANGED
@@ -1,13 +1,13 @@
  --- !ruby/object:Gem::Specification
  name: omniai-llama
  version: !ruby/object:Gem::Version
-   version: 0.0.1
+   version: 2.6.0
  platform: ruby
  authors:
  - Kevin Sylvestre
  bindir: exe
  cert_chain: []
- date: 2025-05-01 00:00:00.000000000 Z
+ date: 1980-01-02 00:00:00.000000000 Z
  dependencies:
  - !ruby/object:Gem::Dependency
    name: event_stream_parser
@@ -29,14 +29,14 @@ dependencies:
    requirements:
    - - "~>"
      - !ruby/object:Gem::Version
-       version: '2.2'
+       version: '2.6'
    type: :runtime
    prerelease: false
    version_requirements: !ruby/object:Gem::Requirement
      requirements:
      - - "~>"
        - !ruby/object:Gem::Version
-         version: '2.2'
+         version: '2.6'
  - !ruby/object:Gem::Dependency
    name: zeitwerk
    requirement: !ruby/object:Gem::Requirement
@@ -51,7 +51,7 @@ dependencies:
    - - ">="
      - !ruby/object:Gem::Version
        version: '0'
- description: An implementation of OmniAI for OpenAI
+ description: An implementation of OmniAI for Llama
  email:
  - kevin@ksylvest.com
  executables: []
@@ -63,8 +63,9 @@ files:
  - lib/omniai/llama.rb
  - lib/omniai/llama/chat.rb
  - lib/omniai/llama/chat/choice_serializer.rb
- - lib/omniai/llama/chat/message_serializer.rb
+ - lib/omniai/llama/chat/content_serializer.rb
  - lib/omniai/llama/chat/response_serializer.rb
+ - lib/omniai/llama/chat/stream.rb
  - lib/omniai/llama/chat/usage_serializer.rb
  - lib/omniai/llama/client.rb
  - lib/omniai/llama/config.rb
90
91
  - !ruby/object:Gem::Version
91
92
  version: '0'
92
93
  requirements: []
93
- rubygems_version: 3.6.6
94
+ rubygems_version: 3.6.9
94
95
  specification_version: 4
95
- summary: A generalized framework for interacting with OpenAI
96
+ summary: A generalized framework for interacting with Llama
96
97
  test_files: []
lib/omniai/llama/chat/message_serializer.rb DELETED
@@ -1,31 +0,0 @@
- # frozen_string_literal: true
-
- module OmniAI
-   module Llama
-     class Chat
-       # Overrides choice serialize / deserialize for the following payload:
-       #
-       #   {
-       #     content: {
-       #       type: "text",
-       #       text: "Hello!",
-       #     },
-       #     role: "assistant",
-       #     stop_reason: "stop",
-       #     tool_calls: [],
-       #   }
-       module MessageSerializer
-         # @param data [Hash]
-         # @param context [OmniAI::Context]
-         #
-         # @return [OmniAI::Chat::Message]
-         def self.deserialize(data, context:)
-           role = data["role"]
-           content = OmniAI::Chat::Content.deserialize(data["content"], context:)
-
-           OmniAI::Chat::Message.new(content:, role:)
-         end
-       end
-     end
-   end
- end