# Ollama AI

A Ruby gem for interacting with [Ollama](https://github.com/jmorganca/ollama)'s API that allows you to run open source AI LLMs (Large Language Models) locally.

![The image presents a llama's head merged with a red ruby gemstone against a light beige background. The red facets form both the ruby and the contours of the llama, creating a clever visual fusion.](https://raw.githubusercontent.com/gbaptista/assets/main/ollama-ai/ollama-ai-canvas.png)

> _This Gem is designed to provide low-level access to Ollama, enabling people to build abstractions on top of it. If you are interested in more high-level abstractions or more user-friendly tools, you may want to consider [Nano Bots](https://github.com/icebaker/ruby-nano-bots) 💎 🤖._

## TL;DR and Quick Start

```ruby
gem 'ollama-ai', '~> 1.0.0'
```

```ruby
require 'ollama-ai'

client = Ollama.new(
  credentials: { address: 'http://localhost:11434' },
  options: { server_sent_events: true }
)

result = client.generate(
  { model: 'dolphin-phi',
    prompt: 'Hi!' }
)
```

Result:
```ruby
[{ 'model' => 'dolphin-phi',
   'created_at' => '2024-01-06T16:53:21.357816652Z',
   'response' => 'Hello',
   'done' => false },
 { 'model' => 'dolphin-phi',
   'created_at' => '2024-01-06T16:53:21.490053654Z',
   'response' => '!',
   'done' => false },
 # ...
 { 'model' => 'dolphin-phi',
   'created_at' => '2024-01-06T16:53:24.82505599Z',
   'response' => '.',
   'done' => false },
 { 'model' => 'dolphin-phi',
   'created_at' => '2024-01-06T16:53:24.956774721Z',
   'response' => '',
   'done' => true,
   'context' =>
     [50_296, 10_057,
      # ...
      1037, 13],
   'total_duration' => 5_702_027_026,
   'load_duration' => 649_711,
   'prompt_eval_count' => 25,
   'prompt_eval_duration' => 2_227_159_000,
   'eval_count' => 39,
   'eval_duration' => 3_466_593_000 }]
```

## Index

{index}

## Setup

### Installing

Install the gem directly:

```sh
gem install ollama-ai -v 1.0.0
```

Or add it to your Gemfile:

```ruby
gem 'ollama-ai', '~> 1.0.0'
```

## Usage

### Client

Create a new client:
```ruby
require 'ollama-ai'

client = Ollama.new(
  credentials: { address: 'http://localhost:11434' },
  options: { server_sent_events: true }
)
```

### Methods

```ruby
client.generate
client.chat
client.embeddings

client.create
client.tags
client.show
client.copy
client.delete
client.pull
client.push
```

#### generate: Generate a completion

API Documentation: https://github.com/jmorganca/ollama/blob/main/docs/api.md#generate-a-completion

##### Without Streaming Events

API Documentation: https://github.com/jmorganca/ollama/blob/main/docs/api.md#generate-a-completion

```ruby
result = client.generate(
  { model: 'dolphin-phi',
    prompt: 'Hi!',
    stream: false }
)
```

Result:
```ruby
[{ 'model' => 'dolphin-phi',
   'created_at' => '2024-01-06T17:47:26.443128626Z',
   'response' =>
     "Hello! How can I assist you today? Do you have any questions or problems that you'd like help with?",
   'done' => true,
   'context' =>
     [50_296, 10_057,
      # ...
      351, 30],
   'total_duration' => 6_495_278_960,
   'load_duration' => 1_434_052_851,
   'prompt_eval_count' => 25,
   'prompt_eval_duration' => 1_938_861_000,
   'eval_count' => 23,
   'eval_duration' => 3_119_030_000 }]
```

##### Receiving Stream Events

API Documentation: https://github.com/jmorganca/ollama/blob/main/docs/api.md#generate-a-completion

Ensure that you have enabled [Server-Sent Events](#streaming-and-server-sent-events-sse) before using blocks for streaming. Passing `stream: true` is not necessary, as `true` is the [default](https://github.com/jmorganca/ollama/blob/main/docs/api.md#generate-a-completion):

```ruby
client.generate(
  { model: 'dolphin-phi',
    prompt: 'Hi!' }
) do |event, raw|
  puts event
end
```

Event:
```ruby
{ 'model' => 'dolphin-phi',
  'created_at' => '2024-01-06T17:27:29.366879586Z',
  'response' => 'Hello',
  'done' => false }
```

You can also get all the received events at once as an array:
```ruby
result = client.generate(
  { model: 'dolphin-phi',
    prompt: 'Hi!' }
)
```

Result:
```ruby
[{ 'model' => 'dolphin-phi',
   'created_at' => '2024-01-06T16:53:21.357816652Z',
   'response' => 'Hello',
   'done' => false },
 { 'model' => 'dolphin-phi',
   'created_at' => '2024-01-06T16:53:21.490053654Z',
   'response' => '!',
   'done' => false },
 # ...
 { 'model' => 'dolphin-phi',
   'created_at' => '2024-01-06T16:53:24.82505599Z',
   'response' => '.',
   'done' => false },
 { 'model' => 'dolphin-phi',
   'created_at' => '2024-01-06T16:53:24.956774721Z',
   'response' => '',
   'done' => true,
   'context' =>
     [50_296, 10_057,
      # ...
      1037, 13],
   'total_duration' => 5_702_027_026,
   'load_duration' => 649_711,
   'prompt_eval_count' => 25,
   'prompt_eval_duration' => 2_227_159_000,
   'eval_count' => 39,
   'eval_duration' => 3_466_593_000 }]
```

You can mix both as well:
```ruby
result = client.generate(
  { model: 'dolphin-phi',
    prompt: 'Hi!' }
) do |event, raw|
  puts event
end
```

#### chat: Generate a chat completion

API Documentation: https://github.com/jmorganca/ollama/blob/main/docs/api.md#generate-a-chat-completion

```ruby
result = client.chat(
  { model: 'dolphin-phi',
    messages: [
      { role: 'user', content: 'Hi! My name is Purple.' }
    ] }
) do |event, raw|
  puts event
end
```

Event:
```ruby
{ 'model' => 'dolphin-phi',
  'created_at' => '2024-01-06T18:17:22.468231988Z',
  'message' => { 'role' => 'assistant', 'content' => 'Hello' },
  'done' => false }
```

Result:
```ruby
[{ 'model' => 'dolphin-phi',
   'created_at' => '2024-01-06T18:17:22.468231988Z',
   'message' => { 'role' => 'assistant', 'content' => 'Hello' },
   'done' => false },
 { 'model' => 'dolphin-phi',
   'created_at' => '2024-01-06T18:17:22.594414415Z',
   'message' => { 'role' => 'assistant', 'content' => ' Purple' },
   'done' => false },
 # ...
 { 'model' => 'dolphin-phi',
   'created_at' => '2024-01-06T18:17:25.491597233Z',
   'message' => { 'role' => 'assistant', 'content' => '?' },
   'done' => false },
 { 'model' => 'dolphin-phi',
   'created_at' => '2024-01-06T18:17:25.578463723Z',
   'message' => { 'role' => 'assistant', 'content' => '' },
   'done' => true,
   'total_duration' => 5_274_177_696,
   'load_duration' => 1_565_325,
   'prompt_eval_count' => 30,
   'prompt_eval_duration' => 2_284_638_000,
   'eval_count' => 29,
   'eval_duration' => 2_983_962_000 }]
```

##### Back-and-Forth Conversations

API Documentation: https://github.com/jmorganca/ollama/blob/main/docs/api.md#generate-a-chat-completion

To maintain a back-and-forth conversation, you need to append the received responses to a message history and send the whole history with each new request (a loop sketch follows the example below):

```ruby
result = client.chat(
  { model: 'dolphin-phi',
    messages: [
      { role: 'user', content: 'Hi! My name is Purple.' },
      { role: 'assistant',
        content: "Hi, Purple! It's nice to meet you. I am Dolphin. How can I help you today?" },
      { role: 'user', content: "What's my name?" }
    ] }
) do |event, raw|
  puts event
end
```

Event:

```ruby
{ 'model' => 'dolphin-phi',
  'created_at' => '2024-01-06T19:07:51.05465997Z',
  'message' => { 'role' => 'assistant', 'content' => 'Your' },
  'done' => false }
```

Result:
```ruby
[{ 'model' => 'dolphin-phi',
   'created_at' => '2024-01-06T19:07:51.05465997Z',
   'message' => { 'role' => 'assistant', 'content' => 'Your' },
   'done' => false },
 { 'model' => 'dolphin-phi',
   'created_at' => '2024-01-06T19:07:51.184476541Z',
   'message' => { 'role' => 'assistant', 'content' => ' name' },
   'done' => false },
 # ...
 { 'model' => 'dolphin-phi',
   'created_at' => '2024-01-06T19:07:56.526297223Z',
   'message' => { 'role' => 'assistant', 'content' => '.' },
   'done' => false },
 { 'model' => 'dolphin-phi',
   'created_at' => '2024-01-06T19:07:56.667809424Z',
   'message' => { 'role' => 'assistant', 'content' => '' },
   'done' => true,
   'total_duration' => 12_169_557_266,
   'load_duration' => 4_486_689,
   'prompt_eval_count' => 95,
   'prompt_eval_duration' => 6_678_566_000,
   'eval_count' => 40,
   'eval_duration' => 5_483_133_000 }]
```
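
Since the gem does not keep conversation state for you, a minimal sketch of an interactive loop that maintains the history might look like this (the `history` array and the loop itself are illustrative, not part of the gem):

```ruby
require 'ollama-ai'

client = Ollama.new(
  credentials: { address: 'http://localhost:11434' },
  options: { server_sent_events: true }
)

history = []

loop do
  print '> '
  input = gets&.chomp
  break if input.nil? || input.empty?

  history << { role: 'user', content: input }

  events = client.chat({ model: 'dolphin-phi', messages: history })

  # Stitch the streamed fragments back into a single assistant message.
  answer = events.map { |event| event.dig('message', 'content') }.join
  puts answer

  history << { role: 'assistant', content: answer }
end
```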

#### embeddings: Generate Embeddings

API Documentation: https://github.com/jmorganca/ollama/blob/main/docs/api.md#generate-embeddings

```ruby
result = client.embeddings(
  { model: 'dolphin-phi',
    prompt: 'Hi!' }
)
```

Result:
```ruby
[{ 'embedding' =>
     [1.0372048616409302,
      1.0635842084884644,
      # ...
      -0.5416496396064758,
      0.051569778472185135] }]
```
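
Embeddings are usually compared by similarity. As a minimal sketch, cosine similarity can be computed in plain Ruby over the `'embedding'` arrays returned above (the `cosine_similarity` helper is illustrative, not part of the gem):

```ruby
def cosine_similarity(a, b)
  dot = a.zip(b).sum { |x, y| x * y }
  dot / (Math.sqrt(a.sum { |x| x * x }) * Math.sqrt(b.sum { |x| x * x }))
end

embedding_a = client.embeddings(
  { model: 'dolphin-phi', prompt: 'Hi!' }
).first['embedding']

embedding_b = client.embeddings(
  { model: 'dolphin-phi', prompt: 'Hello!' }
).first['embedding']

puts cosine_similarity(embedding_a, embedding_b)
```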

#### Models

##### create: Create a Model

API Documentation: https://github.com/jmorganca/ollama/blob/main/docs/api.md#create-a-model

```ruby
result = client.create(
  { name: 'mario',
    modelfile: "FROM dolphin-phi\nSYSTEM You are mario from Super Mario Bros." }
) do |event, raw|
  puts event
end
```

Event:
```ruby
{ 'status' => 'reading model metadata' }
```

Result:
```ruby
[{ 'status' => 'reading model metadata' },
 { 'status' => 'creating system layer' },
 { 'status' =>
     'using already created layer sha256:4eca7304a07a42c48887f159ef5ad82ed5a5bd30fe52db4aadae1dd938e26f70' },
 { 'status' =>
     'using already created layer sha256:876a8d805b60882d53fed3ded3123aede6a996bdde4a253de422cacd236e33d3' },
 { 'status' =>
     'using already created layer sha256:a47b02e00552cd7022ea700b1abf8c572bb26c9bc8c1a37e01b566f2344df5dc' },
 { 'status' =>
     'using already created layer sha256:f02dd72bb2423204352eabc5637b44d79d17f109fdb510a7c51455892aa2d216' },
 { 'status' =>
     'writing layer sha256:1741cf59ce26ff01ac614d31efc700e21e44dd96aed60a7c91ab3f47e440ef94' },
 { 'status' =>
     'writing layer sha256:e8bcbb2eebad88c2fa64bc32939162c064be96e70ff36aff566718fc9186b427' },
 { 'status' => 'writing manifest' },
 { 'status' => 'success' }]
```

After creation, you can use it:
```ruby
client.generate(
  { model: 'mario',
    prompt: 'Hi! Who are you?' }
) do |event, raw|
  print event['response']
end
```

> _Hello! I'm Mario, a character from the popular video game series Super Mario Bros. My goal is to rescue Princess Peach from the evil Bowser and his minions, so we can live happily ever after in the Mushroom Kingdom! 🍄🐥_
>
> _What brings you here? How can I help you on your journey?_

##### tags: List Local Models

API Documentation: https://github.com/jmorganca/ollama/blob/main/docs/api.md#list-local-models

```ruby
result = client.tags
```

Result:
```ruby
[{ 'models' =>
     [{ 'name' => 'dolphin-phi:latest',
        'modified_at' => '2024-01-06T12:20:42.778120982-03:00',
        'size' => 1_602_473_850,
        'digest' =>
          'c5761fc772409945787240af89a5cce01dd39dc52f1b7b80d080a1163e8dbe10',
        'details' =>
          { 'format' => 'gguf',
            'family' => 'phi2',
            'families' => ['phi2'],
            'parameter_size' => '3B',
            'quantization_level' => 'Q4_0' } },
      { 'name' => 'mario:latest',
        'modified_at' => '2024-01-06T16:19:11.340234644-03:00',
        'size' => 1_602_473_846,
        'digest' =>
          '582e668feaba3fcb6add3cee26046a1d6a0c940b86a692ea30d5100aec90135f',
        'details' =>
          { 'format' => 'gguf',
            'family' => 'phi2',
            'families' => ['phi2'],
            'parameter_size' => '3B',
            'quantization_level' => 'Q4_0' } }] }]
```
427
+ ```
428
+
429
+ ##### show: Show Model Information
430
+
431
+ API Documentation: https://github.com/jmorganca/ollama/blob/main/docs/api.md#show-model-information
432
+
433
+ ```ruby
434
+ result = client.show(
435
+ { name: 'dolphin-phi' }
436
+ )
437
+ ```
438
+
439
+ Result:
440
+ ```ruby
441
+ [{ 'license' =>
442
+ "MICROSOFT RESEARCH LICENSE TERMS\n" \
443
+ # ...
444
+ 'It also applies even if Microsoft knew or should have known about the possibility...',
445
+ 'modelfile' =>
446
+ "# Modelfile generated by \"ollama show\"\n" \
447
+ # ...
448
+ 'PARAMETER stop "<|im_end|>"',
449
+ 'parameters' =>
450
+ "stop <|im_start|>\n" \
451
+ 'stop <|im_end|>',
452
+ 'template' =>
453
+ "<|im_start|>system\n" \
454
+ "{{ .System }}<|im_end|>\n" \
455
+ "<|im_start|>user\n" \
456
+ "{{ .Prompt }}<|im_end|>\n" \
457
+ "<|im_start|>assistant\n",
458
+ 'system' => 'You are Dolphin, a helpful AI assistant.',
459
+ 'details' =>
460
+ { 'format' => 'gguf',
461
+ 'family' => 'phi2',
462
+ 'families' => ['phi2'],
463
+ 'parameter_size' => '3B',
464
+ 'quantization_level' => 'Q4_0' } }]
465
+ ```
466
+
467
+ ##### copy: Copy a Model
468
+
469
+ API Documentation: https://github.com/jmorganca/ollama/blob/main/docs/api.md#copy-a-model
470
+
471
+ ```ruby
472
+ result = client.copy(
473
+ { source: 'dolphin-phi',
474
+ destination: 'dolphin-phi-backup' }
475
+ )
476
+ ```
477
+
478
+ Result:
479
+ ```ruby
480
+ true
481
+ ```
482
+
483
+ If the source model does not exist:
484
+ ```ruby
485
+ begin
486
+ result = client.copy(
487
+ { source: 'purple',
488
+ destination: 'purple-backup' }
489
+ )
490
+ rescue Ollama::Errors::OllamaError => error
491
+ puts error.class # Ollama::Errors::RequestError
492
+ puts error.message # 'the server responded with status 404'
493
+
494
+ puts error.payload
495
+ # { source: 'purple',
496
+ # destination: 'purple-backup',
497
+ # ...
498
+ # }
499
+
500
+ puts error.request.inspect
501
+ # #<Faraday::ResourceNotFound response={:status=>404, :headers...
502
+ end
503
+ ```
504
+
505
+ ##### delete: Delete a Model
506
+
507
+ API Documentation: https://github.com/jmorganca/ollama/blob/main/docs/api.md#delete-a-model
508
+
509
+ ```ruby
510
+ result = client.delete(
511
+ { name: 'dolphin-phi' }
512
+ )
513
+ ```
514
+
515
+ Result:
516
+ ```ruby
517
+ true
518
+ ```
519
+
520
+ If the model does not exist:
521
+ ```ruby
522
+ begin
523
+ result = client.delete(
524
+ { name: 'dolphin-phi' }
525
+ )
526
+ rescue Ollama::Errors::OllamaError => error
527
+ puts error.class # Ollama::Errors::RequestError
528
+ puts error.message # 'the server responded with status 404'
529
+
530
+ puts error.payload
531
+ # { name: 'dolphin-phi',
532
+ # ...
533
+ # }
534
+
535
+ puts error.request.inspect
536
+ # #<Faraday::ResourceNotFound response={:status=>404, :headers...
537
+ end
538
+ ```
539
+
540
+ ##### pull: Pull a Model
541
+
542
+ API Documentation: https://github.com/jmorganca/ollama/blob/main/docs/api.md#pull-a-model
543
+
544
+ ```ruby
545
+ result = client.pull(
546
+ { name: 'dolphin-phi' }
547
+ ) do |event, raw|
548
+ puts event
549
+ end
550
+ ```
551
+
552
+ Event:
553
+ ```ruby
554
+ { 'status' => 'pulling manifest' }
555
+ ```
556
+
557
+ Result:
558
+ ```ruby
559
+ [{ 'status' => 'pulling manifest' },
560
+ { 'status' => 'pulling 4eca7304a07a',
561
+ 'digest' =>
562
+ 'sha256:4eca7304a07a42c48887f159ef5ad82ed5a5bd30fe52db4aadae1dd938e26f70',
563
+ 'total' => 1_602_463_008,
564
+ 'completed' => 1_602_463_008 },
565
+ # ...
566
+ { 'status' => 'verifying sha256 digest' },
567
+ { 'status' => 'writing manifest' },
568
+ { 'status' => 'removing any unused layers' },
569
+ { 'status' => 'success' }]
570
+ ```
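
The download events carry `total` and `completed` fields while layers are being pulled, so a rough progress indicator can be derived from them (a sketch):

```ruby
client.pull(
  { name: 'dolphin-phi' }
) do |event, raw|
  if event['total'] && event['completed']
    percent = (100.0 * event['completed'] / event['total']).round(1)
    puts "#{event['status']}: #{percent}%"
  else
    puts event['status']
  end
end
```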

##### push: Push a Model

Documentation: [API](https://github.com/jmorganca/ollama/blob/main/docs/api.md#push-a-model) and [_Publishing Your Model_](https://github.com/jmorganca/ollama/blob/main/docs/import.md#publishing-your-model-optional--early-alpha).

You need to create an account at https://ollama.ai and add your Public Key at https://ollama.ai/settings/keys.

Your keys are located in `/usr/share/ollama/.ollama/`. You may need to copy them to your user directory:

```sh
sudo cp /usr/share/ollama/.ollama/id_ed25519 ~/.ollama/
sudo cp /usr/share/ollama/.ollama/id_ed25519.pub ~/.ollama/
```

Copy your model to your user namespace:

```ruby
client.copy(
  { source: 'mario',
    destination: 'your-user/mario' }
)
```

And push it:

```ruby
result = client.push(
  { name: 'your-user/mario' }
) do |event, raw|
  puts event
end
```

Event:
```ruby
{ 'status' => 'retrieving manifest' }
```

Result:
```ruby
[{ 'status' => 'retrieving manifest' },
 { 'status' => 'pushing 4eca7304a07a',
   'digest' =>
     'sha256:4eca7304a07a42c48887f159ef5ad82ed5a5bd30fe52db4aadae1dd938e26f70',
   'total' => 1_602_463_008,
   'completed' => 1_602_463_008 },
 # ...
 { 'status' => 'pushing e8bcbb2eebad',
   'digest' =>
     'sha256:e8bcbb2eebad88c2fa64bc32939162c064be96e70ff36aff566718fc9186b427',
   'total' => 555,
   'completed' => 555 },
 { 'status' => 'pushing manifest' },
 { 'status' => 'success' }]
```

### Streaming and Server-Sent Events (SSE)

[Server-Sent Events (SSE)](https://en.wikipedia.org/wiki/Server-sent_events) is a technology that allows certain endpoints to offer streaming capabilities: partial results are delivered as they are produced, creating the impression that "the model is typing along with you" rather than delivering the entire answer all at once.

You can set up the client to use Server-Sent Events (SSE) for all supported endpoints:
```ruby
client = Ollama.new(
  credentials: { address: 'http://localhost:11434' },
  options: { server_sent_events: true }
)
```

Or, you can decide on a per-request basis:
```ruby
result = client.generate(
  { model: 'dolphin-phi',
    prompt: 'Hi!' },
  server_sent_events: true
) do |event, raw|
  puts event
end
```

With Server-Sent Events (SSE) enabled, you can use a block to receive partial results via events. This feature is particularly useful for methods that offer streaming capabilities, such as `generate`: [Receiving Stream Events](#receiving-stream-events)
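
For example, you can print fragments as they arrive and still assemble the complete answer afterwards (a small sketch based on the `generate` events shown earlier):

```ruby
buffer = String.new

client.generate(
  { model: 'dolphin-phi',
    prompt: 'Hi!' }
) do |event, raw|
  print event['response'] # stream fragments to the terminal
  buffer << event['response'] # and accumulate the full answer
end

puts "\nFull answer: #{buffer}"
```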

#### Server-Sent Events (SSE) Hang

Method calls will _hang_ until the server-sent events finish, so even without providing a block, you can obtain the final results of the received events: [Receiving Stream Events](#receiving-stream-events)
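
In other words, you can skip the block entirely and reassemble the final text from the returned events; for `generate`, something like:

```ruby
events = client.generate(
  { model: 'dolphin-phi',
    prompt: 'Hi!' }
)

# Assemble the final text from all streamed events.
puts events.map { |event| event['response'] }.join
```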

### New Functionalities and APIs

Ollama may launch a new endpoint that we haven't covered in the Gem yet. If that's the case, you may still be able to use it through the `request` method. For example, `generate` is just a wrapper for `api/generate`, which you can call directly like this:

```ruby
result = client.request(
  'api/generate',
  { model: 'dolphin-phi',
    prompt: 'Hi!' },
  request_method: 'POST', server_sent_events: true
)
```
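
The same pattern should work for any other endpoint; for instance, `api/chat` (which the `chat` method wraps) can be called directly in the same way:

```ruby
result = client.request(
  'api/chat',
  { model: 'dolphin-phi',
    messages: [{ role: 'user', content: 'Hi!' }] },
  request_method: 'POST', server_sent_events: true
)
```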

### Request Options

#### Timeout

You can set the maximum number of seconds to wait for the request to complete with the `timeout` option:

```ruby
client = Ollama.new(
  credentials: { address: 'http://localhost:11434' },
  options: { connection: { request: { timeout: 5 } } }
)
```

You can also have more fine-grained control over [Faraday's Request Options](https://lostisland.github.io/faraday/#/customization/request-options?id=request-options) if you prefer:

```ruby
client = Ollama.new(
  credentials: { address: 'http://localhost:11434' },
  options: {
    connection: {
      request: {
        timeout: 5,
        open_timeout: 5,
        read_timeout: 5,
        write_timeout: 5
      }
    }
  }
)
```
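
If a request exceeds these limits, an error is raised. How it surfaces depends on how the underlying Faraday exception is wrapped, which this README does not document, so a defensive sketch rescues both possibilities (an assumption, not confirmed behavior):

```ruby
begin
  client.generate(
    { model: 'dolphin-phi',
      prompt: 'Hi!' }
  )
rescue Ollama::Errors::RequestError => error
  puts "Request failed: #{error.message}" # possibly a wrapped timeout
rescue Faraday::TimeoutError => error
  puts "Request timed out: #{error.message}" # if the raw Faraday error escapes
end
```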

### Error Handling

#### Rescuing

```ruby
require 'ollama-ai'

begin
  client.generate(
    { model: 'dolphin-phi',
      prompt: 'Hi!' }
  )
rescue Ollama::Errors::OllamaError => error
  puts error.class # Ollama::Errors::RequestError
  puts error.message # 'the server responded with status 500'

  puts error.payload
  # { model: 'dolphin-phi',
  #   prompt: 'Hi!',
  #   ...
  # }

  puts error.request.inspect
  # #<Faraday::ServerError response={:status=>500, :headers...
end
```

#### For Short

```ruby
require 'ollama-ai/errors'

begin
  client.generate(
    { model: 'dolphin-phi',
      prompt: 'Hi!' }
  )
rescue OllamaError => error
  puts error.class # Ollama::Errors::RequestError
end
```

#### Errors

```ruby
OllamaError

BlockWithoutServerSentEventsError

RequestError
```
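
All of these live under the `Ollama::Errors` namespace, presumably with `OllamaError` as the base class (the rescue examples above show `RequestError` being caught as an `OllamaError`), so rescuing the more specific classes before the base keeps the handlers distinct (a sketch):

```ruby
begin
  client.generate(
    { model: 'dolphin-phi',
      prompt: 'Hi!' }
  ) do |event, raw|
    puts event
  end
rescue Ollama::Errors::BlockWithoutServerSentEventsError
  puts 'Enable server_sent_events to stream with a block.'
rescue Ollama::Errors::RequestError => error
  puts "The server rejected the request: #{error.message}"
rescue Ollama::Errors::OllamaError => error
  puts "Something else went wrong: #{error.message}"
end
```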

## Development

```bash
bundle
rubocop -A
```

### Purpose

This Gem is designed to provide low-level access to Ollama, enabling people to build abstractions on top of it. If you are interested in more high-level abstractions or more user-friendly tools, you may want to consider [Nano Bots](https://github.com/icebaker/ruby-nano-bots) 💎 🤖.

### Publish to RubyGems

```bash
gem build ollama-ai.gemspec

gem signin

gem push ollama-ai-1.0.0.gem
```

### Updating the README

Install [Babashka](https://babashka.org):

```sh
curl -s https://raw.githubusercontent.com/babashka/babashka/master/install | sudo bash
```

Update the `template.md` file and then:

```sh
bb tasks/generate-readme.clj
```

Trick for automatically updating the `README.md` when `template.md` changes:

```sh
sudo pacman -S inotify-tools # Arch / Manjaro
sudo apt-get install inotify-tools # Debian / Ubuntu / Raspberry Pi OS
sudo dnf install inotify-tools # Fedora / CentOS / RHEL

while inotifywait -e modify template.md; do bb tasks/generate-readme.clj; done
```

Trick for Markdown Live Preview:
```sh
pip install -U markdown_live_preview

mlp README.md -p 8076
```

## Resources and References

These resources and references may be useful throughout your learning process:

- [Ollama Official Website](https://ollama.ai)
- [Ollama GitHub](https://github.com/jmorganca/ollama)
- [Ollama API Documentation](https://github.com/jmorganca/ollama/blob/main/docs/api.md)

## Disclaimer

This is not an official Ollama project, nor is it affiliated with Ollama in any way.

This software is distributed under the [MIT License](https://github.com/gbaptista/ollama-ai/blob/main/LICENSE). This license includes a disclaimer of warranty. Moreover, the authors assume no responsibility for any damage or costs that may result from using this project. Use the Ollama AI Ruby Gem at your own risk.