ruby_llm 0.1.0.pre37 → 0.1.0.pre39

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
-   metadata.gz: 8befc508f1a0bdbaf7b113f3a2f8584f78fbe756ac07b22344af9ea90bc47f31
-   data.tar.gz: 389cb1518cacfa6448f30891d45e573536c768aa8092f8d1fbac156ff4c2d4c5
+   metadata.gz: 0c887462ad2c35c5e831991b8b6cd78095c3f286861fd4913afddd5887833f3b
+   data.tar.gz: 382a2dd05532510ed5ac8ae5deb7a81febb831e3bbd8a11376eba0ad266451c3
  SHA512:
-   metadata.gz: 67493fd05b3c9f0765b8d7bd72d295756e615ad6ab8b133181c8a472d28a64ce83e8aaedf48f556de8e9ddc1d39c59fdd4e7cdf4f0521eff9b9d34726a6b6bcd
-   data.tar.gz: e4569279dc890b62881c3d8da2cd03370fd8a87f6c20f112fe0bbce2d61aec0acba726d9d89252b724f4f4a2cc97ce383f229bddc8ea9d6c204fbcc66e71f6de
+   metadata.gz: b8c372a54f0bb94ecf6b22ead70f34a324974f4f61ad4610ba15ffee64ed8ec226fac24a74b73667bc07ea73a14fb5ee5d813759ab60f86c54c7d00cfc3905aa
+   data.tar.gz: 868fadbdaf583aa911a98cce36b6b61208a2065d92f32c31f1a2ea0908a37aa91f2047d2ea7aa77911093a7fcbb4ae10ace87f7ddafa2d39856e67a54d5bf846
@@ -3,6 +3,12 @@ name: CI
  on:
    push:
      branches: [ "main" ]
+     paths:
+       - 'lib/**'
+       - 'spec/**'
+       - 'Gemfile'
+       - 'Rakefile'
+       - 'ruby_llm.gemspec'
    pull_request:
      branches: [ "main" ]
    workflow_call:
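Net effect: because the new `paths` filter sits under the `push` trigger only, pushes to `main` that touch nothing in `lib/`, `spec/`, or the listed packaging files no longer start a CI run, while the `pull_request` and `workflow_call` triggers behave as before.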
data/.rspec_status CHANGED
@@ -1,7 +1,38 @@
- example_id                                  | status | run_time     |
- ------------------------------------------- | ------ | ------------ |
- ./spec/ruby_llm/chat_content_spec.rb[1:1:1] | passed | 3.59 seconds |
- ./spec/ruby_llm/chat_content_spec.rb[1:1:2] | passed | 1.51 seconds |
- ./spec/ruby_llm/chat_content_spec.rb[1:1:3] | passed | 2.29 seconds |
- ./spec/ruby_llm/chat_content_spec.rb[1:2:1] | passed | 2.06 seconds |
- ./spec/ruby_llm/chat_content_spec.rb[1:2:2] | passed | 2.04 seconds |
+ example_id                                         | status | run_time        |
+ -------------------------------------------------- | ------ | --------------- |
+ ./spec/ruby_llm/active_record/acts_as_spec.rb[1:1] | passed | 3.38 seconds    |
+ ./spec/ruby_llm/active_record/acts_as_spec.rb[1:2] | passed | 2.48 seconds    |
+ ./spec/ruby_llm/chat_content_spec.rb[1:1:1]        | passed | 2.74 seconds    |
+ ./spec/ruby_llm/chat_content_spec.rb[1:1:2]        | passed | 1.29 seconds    |
+ ./spec/ruby_llm/chat_content_spec.rb[1:1:3]        | passed | 2.54 seconds    |
+ ./spec/ruby_llm/chat_content_spec.rb[1:2:1]        | passed | 2.77 seconds    |
+ ./spec/ruby_llm/chat_content_spec.rb[1:2:2]        | passed | 2.1 seconds     |
+ ./spec/ruby_llm/chat_spec.rb[1:1:1:1]              | passed | 1.02 seconds    |
+ ./spec/ruby_llm/chat_spec.rb[1:1:1:2]              | passed | 3.95 seconds    |
+ ./spec/ruby_llm/chat_spec.rb[1:1:2:1]              | passed | 0.4854 seconds  |
+ ./spec/ruby_llm/chat_spec.rb[1:1:2:2]              | passed | 1.37 seconds    |
+ ./spec/ruby_llm/chat_spec.rb[1:1:3:1]              | passed | 7.34 seconds    |
+ ./spec/ruby_llm/chat_spec.rb[1:1:3:2]              | passed | 19.22 seconds   |
+ ./spec/ruby_llm/chat_spec.rb[1:1:4:1]              | passed | 3.15 seconds    |
+ ./spec/ruby_llm/chat_spec.rb[1:1:4:2]              | passed | 2.51 seconds    |
+ ./spec/ruby_llm/chat_streaming_spec.rb[1:1:1:1]    | passed | 0.91374 seconds |
+ ./spec/ruby_llm/chat_streaming_spec.rb[1:1:2:1]    | passed | 0.50088 seconds |
+ ./spec/ruby_llm/chat_streaming_spec.rb[1:1:3:1]    | passed | 5.69 seconds    |
+ ./spec/ruby_llm/chat_streaming_spec.rb[1:1:4:1]    | passed | 1.22 seconds    |
+ ./spec/ruby_llm/chat_tools_spec.rb[1:1:1:1]        | passed | 3.75 seconds    |
+ ./spec/ruby_llm/chat_tools_spec.rb[1:1:1:2]        | passed | 6.1 seconds     |
+ ./spec/ruby_llm/chat_tools_spec.rb[1:1:1:3]        | passed | 5.32 seconds    |
+ ./spec/ruby_llm/chat_tools_spec.rb[1:1:2:1]        | passed | 1.21 seconds    |
+ ./spec/ruby_llm/chat_tools_spec.rb[1:1:2:2]        | passed | 2.36 seconds    |
+ ./spec/ruby_llm/chat_tools_spec.rb[1:1:2:3]        | passed | 2.78 seconds    |
+ ./spec/ruby_llm/chat_tools_spec.rb[1:1:3:1]        | passed | 1.87 seconds    |
+ ./spec/ruby_llm/chat_tools_spec.rb[1:1:3:2]        | passed | 3.12 seconds    |
+ ./spec/ruby_llm/chat_tools_spec.rb[1:1:3:3]        | passed | 3.83 seconds    |
+ ./spec/ruby_llm/embeddings_spec.rb[1:1:1:1]        | passed | 0.33357 seconds |
+ ./spec/ruby_llm/embeddings_spec.rb[1:1:1:2]        | passed | 0.43632 seconds |
+ ./spec/ruby_llm/embeddings_spec.rb[1:1:2:1]        | passed | 0.65614 seconds |
+ ./spec/ruby_llm/embeddings_spec.rb[1:1:2:2]        | passed | 2.16 seconds    |
+ ./spec/ruby_llm/error_handling_spec.rb[1:1]        | passed | 0.19586 seconds |
+ ./spec/ruby_llm/image_generation_spec.rb[1:1:1]    | passed | 12.77 seconds   |
+ ./spec/ruby_llm/image_generation_spec.rb[1:1:2]    | passed | 18.13 seconds   |
+ ./spec/ruby_llm/image_generation_spec.rb[1:1:3]    | passed | 0.00035 seconds |
data/README.md CHANGED
@@ -20,6 +20,8 @@ A delightful Ruby way to work with AI. Chat in text, analyze and generate images
    <a href="https://codecov.io/gh/crmne/ruby_llm"><img src="https://codecov.io/gh/crmne/ruby_llm/branch/main/graph/badge.svg" alt="codecov" /></a>
  </p>
 
+ 🤺 Battle tested at [💬 Chat with Work](https://chatwithwork.com)
+
  ## Features
 
  - 💬 **Beautiful Chat Interface** - Converse with AI models as easily as `RubyLLM.chat.ask "teach me Ruby"`
@@ -23,7 +23,7 @@
      "created_at": "2023-08-21T18:16:55+02:00",
      "display_name": "Babbage 002",
      "provider": "openai",
-     "context_window": 16384,
+     "context_window": 16385,
      "max_tokens": 16384,
      "type": "chat",
      "family": "babbage",
@@ -262,7 +262,7 @@
      "created_at": "2023-08-21T18:11:41+02:00",
      "display_name": "Davinci 002",
      "provider": "openai",
-     "context_window": 16384,
+     "context_window": 16385,
      "max_tokens": 16384,
      "type": "chat",
      "family": "davinci",
@@ -287,7 +287,7 @@
      "family": "chat",
      "supports_vision": false,
      "supports_functions": true,
-     "supports_json_mode": true,
+     "supports_json_mode": false,
      "input_price_per_million": 0.27,
      "output_price_per_million": 1.1,
      "metadata": {
@@ -371,63 +371,6 @@
        "owned_by": "google"
      }
    },
-   {
-     "id": "gemini-1.0-pro",
-     "created_at": null,
-     "display_name": "Gemini 1.0 Pro",
-     "provider": "gemini",
-     "context_window": 32768,
-     "max_tokens": 4096,
-     "type": "chat",
-     "family": "gemini10_pro",
-     "supports_vision": false,
-     "supports_functions": false,
-     "supports_json_mode": false,
-     "input_price_per_million": 0.5,
-     "output_price_per_million": 1.5,
-     "metadata": {
-       "object": "model",
-       "owned_by": "google"
-     }
-   },
-   {
-     "id": "gemini-1.0-pro-001",
-     "created_at": null,
-     "display_name": "Gemini 1.0 Pro 001",
-     "provider": "gemini",
-     "context_window": 32768,
-     "max_tokens": 4096,
-     "type": "chat",
-     "family": "gemini10_pro",
-     "supports_vision": false,
-     "supports_functions": false,
-     "supports_json_mode": false,
-     "input_price_per_million": 0.5,
-     "output_price_per_million": 1.5,
-     "metadata": {
-       "object": "model",
-       "owned_by": "google"
-     }
-   },
-   {
-     "id": "gemini-1.0-pro-latest",
-     "created_at": null,
-     "display_name": "Gemini 1.0 Pro Latest",
-     "provider": "gemini",
-     "context_window": 32768,
-     "max_tokens": 4096,
-     "type": "chat",
-     "family": "gemini10_pro",
-     "supports_vision": false,
-     "supports_functions": false,
-     "supports_json_mode": false,
-     "input_price_per_million": 0.5,
-     "output_price_per_million": 1.5,
-     "metadata": {
-       "object": "model",
-       "owned_by": "google"
-     }
-   },
    {
      "id": "gemini-1.0-pro-vision-latest",
      "created_at": null,
@@ -884,44 +827,6 @@
        "owned_by": "google"
      }
    },
-   {
-     "id": "gemini-2.0-flash-lite-preview",
-     "created_at": null,
-     "display_name": "Gemini 2.0 Flash Lite Preview",
-     "provider": "gemini",
-     "context_window": 1048576,
-     "max_tokens": 8192,
-     "type": "chat",
-     "family": "gemini20_flash_lite",
-     "supports_vision": true,
-     "supports_functions": false,
-     "supports_json_mode": false,
-     "input_price_per_million": 0.075,
-     "output_price_per_million": 0.3,
-     "metadata": {
-       "object": "model",
-       "owned_by": "google"
-     }
-   },
-   {
-     "id": "gemini-2.0-flash-lite-preview-02-05",
-     "created_at": null,
-     "display_name": "Gemini 2.0 Flash Lite Preview 02 05",
-     "provider": "gemini",
-     "context_window": 1048576,
-     "max_tokens": 8192,
-     "type": "chat",
-     "family": "gemini20_flash_lite",
-     "supports_vision": true,
-     "supports_functions": false,
-     "supports_json_mode": false,
-     "input_price_per_million": 0.075,
-     "output_price_per_million": 0.3,
-     "metadata": {
-       "object": "model",
-       "owned_by": "google"
-     }
-   },
    {
      "id": "gemini-2.0-flash-mmgen-rev17",
      "created_at": null,
@@ -1093,25 +998,6 @@
        "owned_by": "google"
      }
    },
-   {
-     "id": "gemini-pro",
-     "created_at": null,
-     "display_name": "Gemini Pro",
-     "provider": "gemini",
-     "context_window": 32768,
-     "max_tokens": 4096,
-     "type": "chat",
-     "family": "other",
-     "supports_vision": false,
-     "supports_functions": false,
-     "supports_json_mode": false,
-     "input_price_per_million": 0.075,
-     "output_price_per_million": 0.3,
-     "metadata": {
-       "object": "model",
-       "owned_by": "google"
-     }
-   },
    {
      "id": "gemini-pro-vision",
      "created_at": null,
@@ -1212,7 +1098,7 @@
      "created_at": "2024-01-23T23:19:18+01:00",
      "display_name": "GPT-3.5-Turbo 0125",
      "provider": "openai",
-     "context_window": 16385,
+     "context_window": 4096,
      "max_tokens": 4096,
      "type": "chat",
      "family": "gpt35",
@@ -1231,7 +1117,7 @@
      "created_at": "2023-11-02T22:15:48+01:00",
      "display_name": "GPT-3.5-Turbo 1106",
      "provider": "openai",
-     "context_window": 16385,
+     "context_window": 4096,
      "max_tokens": 4096,
      "type": "chat",
      "family": "gpt35",
@@ -1460,7 +1346,7 @@
      "display_name": "GPT-4o 20240513",
      "provider": "openai",
      "context_window": 128000,
-     "max_tokens": 4096,
+     "max_tokens": 16384,
      "type": "chat",
      "family": "gpt4o",
      "supports_vision": true,
@@ -1650,7 +1536,7 @@
      "display_name": "GPT-4o-Mini Realtime Preview",
      "provider": "openai",
      "context_window": 128000,
-     "max_tokens": 4096,
+     "max_tokens": 16384,
      "type": "chat",
      "family": "gpt4o_mini_realtime",
      "supports_vision": true,
@@ -1669,7 +1555,7 @@
      "display_name": "GPT-4o-Mini Realtime Preview 20241217",
      "provider": "openai",
      "context_window": 128000,
-     "max_tokens": 4096,
+     "max_tokens": 16384,
      "type": "chat",
      "family": "gpt4o_mini_realtime",
      "supports_vision": true,
@@ -1688,7 +1574,7 @@
      "display_name": "GPT-4o-Realtime Preview",
      "provider": "openai",
      "context_window": 128000,
-     "max_tokens": 4096,
+     "max_tokens": 16384,
      "type": "chat",
      "family": "gpt4o_realtime",
      "supports_vision": true,
@@ -1707,7 +1593,7 @@
      "display_name": "GPT-4o-Realtime Preview 20241001",
      "provider": "openai",
      "context_window": 128000,
-     "max_tokens": 4096,
+     "max_tokens": 16384,
      "type": "chat",
      "family": "gpt4o_realtime",
      "supports_vision": true,
@@ -1726,7 +1612,7 @@
      "display_name": "GPT-4o-Realtime Preview 20241217",
      "provider": "openai",
      "context_window": 128000,
-     "max_tokens": 4096,
+     "max_tokens": 16384,
      "type": "chat",
      "family": "gpt4o_realtime",
      "supports_vision": true,
@@ -65,8 +65,8 @@ module RubyLLM
          # Determines if the model supports JSON mode
          # @param model_id [String] the model identifier
          # @return [Boolean] true if the model supports JSON mode
-         def supports_json_mode?(model_id)
-           model_id.match?(/deepseek-chat/) # Only deepseek-chat supports JSON mode
+         def supports_json_mode?(_model_id)
+           false # DeepSeek function calling is unstable
          end
 
          # Returns a formatted display name for the model
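The predicate now ignores its argument and reports JSON mode as unsupported for every DeepSeek model, not just non-`deepseek-chat` ones. A minimal before/after sketch, with the methods renamed and the module nesting dropped purely for side-by-side comparison:

```ruby
# 0.1.0.pre37: only deepseek-chat advertised JSON mode support.
def pre37_supports_json_mode?(model_id)
  model_id.match?(/deepseek-chat/)
end

# 0.1.0.pre39: JSON mode is reported as unsupported across the board.
def pre39_supports_json_mode?(_model_id)
  false
end

pre37_supports_json_mode?('deepseek-chat') # => true
pre39_supports_json_mode?('deepseek-chat') # => false
```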
@@ -15,8 +15,7 @@ module RubyLLM
            when /o1-2024/, /o3-mini/, /o3-mini-2025/ then 200_000
            when /gpt-4o/, /gpt-4o-mini/, /gpt-4-turbo/, /o1-mini/ then 128_000
            when /gpt-4-0[0-9]{3}/ then 8_192
-           when /gpt-3.5/ then 16_385
-           when /babbage-002/, /davinci-002/ then 16_384
+           when /gpt-3.5-turbo$/, /babbage-002/, /davinci-002/, /16k/ then 16_385
            else 4_096
            end
          end
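The consolidated branch changes which ids resolve to 16,385 tokens: bare `gpt-3.5-turbo` and the `16k` variants keep it, dated GPT-3.5 snapshots now fall through to the 4,096 default (matching the `context_window` edits in the model registry above), and `babbage-002`/`davinci-002` move up from 16,384. A standalone sketch of the new lookup — the wrapping method name is assumed, since only the `case` body appears in the hunk:

```ruby
# Hypothetical wrapper around the patched case expression.
def context_window_for(model_id)
  case model_id
  when /o1-2024/, /o3-mini/, /o3-mini-2025/ then 200_000
  when /gpt-4o/, /gpt-4o-mini/, /gpt-4-turbo/, /o1-mini/ then 128_000
  when /gpt-4-0[0-9]{3}/ then 8_192
  when /gpt-3.5-turbo$/, /babbage-002/, /davinci-002/, /16k/ then 16_385
  else 4_096
  end
end

context_window_for('gpt-3.5-turbo')      # => 16385 (anchored match still hits)
context_window_for('gpt-3.5-turbo-0125') # => 4096  (dated snapshots fall through)
context_window_for('gpt-3.5-turbo-16k')  # => 16385 (via /16k/)
context_window_for('babbage-002')        # => 16385 (was 16384)
```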
@@ -1,5 +1,5 @@
  # frozen_string_literal: true
 
  module RubyLLM
-   VERSION = '0.1.0.pre37'
+   VERSION = '0.1.0.pre39'
  end
data/ruby_llm.gemspec CHANGED
@@ -28,7 +28,7 @@ Gem::Specification.new do |spec|
    # Specify which files should be added to the gem when it is released.
    # The `git ls-files -z` loads the files in the RubyGem that have been added into git.
    spec.files = Dir.chdir(File.expand_path(__dir__)) do
-     `git ls-files -z`.split("\x0").reject { |f| f.match(%r{^(test|spec|features)/}) }
+     `git ls-files -z`.split("\x0").reject { |f| f.match(%r{^(test|spec|features|docs)/}) }
    end
    spec.bindir = 'exe'
    spec.executables = spec.files.grep(%r{^exe/}) { |f| File.basename(f) }
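Adding `docs` to the reject pattern is what drops the documentation tree from the packaged gem, consistent with the `files:` removals in the metadata section and the `data/docs/*` deletions below. An illustrative check of the new pattern:

```ruby
pattern = %r{^(test|spec|features|docs)/}

# Only library files survive the reject; docs/ now filters out like spec/.
['lib/ruby_llm.rb', 'docs/index.md', 'spec/spec_helper.rb'].reject { |f| f.match(pattern) }
# => ["lib/ruby_llm.rb"]
```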
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: ruby_llm
  version: !ruby/object:Gem::Version
-   version: 0.1.0.pre37
+   version: 0.1.0.pre39
  platform: ruby
  authors:
  - Carmine Paolino
@@ -111,21 +111,6 @@ files:
  - Rakefile
  - bin/console
  - bin/setup
- - docs/.gitignore
- - docs/Gemfile
- - docs/_config.yml
- - docs/_data/navigation.yml
- - docs/guides/chat.md
- - docs/guides/embeddings.md
- - docs/guides/error-handling.md
- - docs/guides/getting-started.md
- - docs/guides/image-generation.md
- - docs/guides/index.md
- - docs/guides/rails.md
- - docs/guides/streaming.md
- - docs/guides/tools.md
- - docs/index.md
- - docs/installation.md
  - lib/ruby_llm.rb
  - lib/ruby_llm/active_record/acts_as.rb
  - lib/ruby_llm/chat.rb
data/docs/.gitignore DELETED
@@ -1,7 +0,0 @@
- _site/
- .sass-cache/
- .jekyll-cache/
- .jekyll-metadata
- # Ignore folders generated by Bundler
- .bundle/
- vendor/
data/docs/Gemfile DELETED
@@ -1,11 +0,0 @@
- source 'https://rubygems.org'
-
- gem 'jekyll', '~> 4.3'
- gem 'just-the-docs', '~> 0.7.0'
- gem 'webrick', '~> 1.8'
-
- # GitHub Pages plugins
- group :jekyll_plugins do
-   gem 'jekyll-remote-theme'
-   gem 'jekyll-seo-tag'
- end
data/docs/_config.yml DELETED
@@ -1,43 +0,0 @@
1
- title: RubyLLM
2
- description: A delightful Ruby way to work with AI
3
- url: https://crmne.github.io/ruby_llm
4
- baseurl: /ruby_llm
5
- remote_theme: just-the-docs/just-the-docs
6
-
7
- # Enable search
8
- search_enabled: true
9
- search:
10
- heading_level: 2
11
- previews: 3
12
- preview_words_before: 5
13
- preview_words_after: 10
14
- tokenizer_separator: /[\s/]+/
15
- rel_url: true
16
- button: false
17
-
18
- # Navigation structure
19
- nav_external_links:
20
- - title: RubyLLM on GitHub
21
- url: https://github.com/crmne/ruby_llm
22
- hide_icon: false
23
-
24
- # Footer content
25
- footer_content: "Copyright &copy; 2025 <a href='https://paolino.me'>Carmine Paolino</a>. Distributed under an <a href=\"https://github.com/crmne/ruby_llm/tree/main/LICENSE\">MIT license.</a>"
26
-
27
- # Enable copy button on code blocks
28
- enable_copy_code_button: true
29
-
30
- # Make Anchor links show on hover
31
- heading_anchors: true
32
-
33
- # Color scheme
34
- color_scheme: light
35
-
36
- # Google Analytics
37
- ga_tracking:
38
- ga_tracking_anonymize_ip: true
39
-
40
- # Custom plugins (GitHub Pages allows these)
41
- plugins:
42
- - jekyll-remote-theme
43
- - jekyll-seo-tag
@@ -1,25 +0,0 @@
- - title: Home
-   url: /
- - title: Installation
-   url: /installation
- - title: Guides
-   url: /guides/
-   subfolderitems:
-     - title: Getting Started
-       url: /guides/getting-started
-     - title: Chat
-       url: /guides/chat
-     - title: Tools
-       url: /guides/tools
-     - title: Streaming
-       url: /guides/streaming
-     - title: Rails Integration
-       url: /guides/rails
-     - title: Image Generation
-       url: /guides/image-generation
-     - title: Embeddings
-       url: /guides/embeddings
-     - title: Error Handling
-       url: /guides/error-handling
- - title: GitHub
-   url: https://github.com/crmne/ruby_llm
data/docs/guides/chat.md DELETED
@@ -1,206 +0,0 @@
- ---
- layout: default
- title: Chat
- parent: Guides
- nav_order: 2
- permalink: /guides/chat
- ---
-
- # Chatting with AI Models
-
- RubyLLM's chat interface provides a natural way to interact with various AI models. This guide covers everything from basic chatting to advanced features like multimodal inputs and streaming responses.
-
- ## Basic Chat
-
- Creating a chat and asking questions is straightforward:
-
- ```ruby
- # Create a chat with the default model
- chat = RubyLLM.chat
-
- # Ask a question
- response = chat.ask "What's the best way to learn Ruby?"
-
- # The response is a Message object
- puts response.content
- puts "Role: #{response.role}"
- puts "Model: #{response.model_id}"
- puts "Tokens: #{response.input_tokens} input, #{response.output_tokens} output"
- ```
-
- ## Choosing Models
-
- You can specify which model to use when creating a chat:
-
- ```ruby
- # Create a chat with a specific model
- chat = RubyLLM.chat(model: 'gpt-4o-mini')
-
- # Use Claude instead
- claude_chat = RubyLLM.chat(model: 'claude-3-5-sonnet-20241022')
-
- # Or change the model for an existing chat
- chat.with_model('gemini-2.0-flash')
- ```
-
- ## Multi-turn Conversations
-
- Chats maintain conversation history automatically:
-
- ```ruby
- chat = RubyLLM.chat
-
- # Start a conversation
- chat.ask "What's your favorite programming language?"
-
- # Follow up
- chat.ask "Why do you like that language?"
-
- # Continue the conversation
- chat.ask "What are its weaknesses?"
-
- # Access the conversation history
- chat.messages.each do |message|
-   puts "#{message.role}: #{message.content[0..50]}..."
- end
- ```
-
- ## Working with Images
-
- Vision-capable models can understand images:
-
- ```ruby
- chat = RubyLLM.chat
-
- # Ask about an image (local file)
- chat.ask "What's in this image?", with: { image: "path/to/image.jpg" }
-
- # Or use an image URL
- chat.ask "Describe this picture", with: { image: "https://example.com/image.jpg" }
-
- # Include multiple images
- chat.ask "Compare these two charts", with: {
-   image: ["chart1.png", "chart2.png"]
- }
-
- # Combine text and image
- chat.ask "Is this the Ruby logo?", with: { image: "logo.png" }
- ```
-
- ## Working with Audio
-
- Models with audio capabilities can process spoken content:
-
- ```ruby
- chat = RubyLLM.chat(model: 'gpt-4o-audio-preview')
-
- # Analyze audio content
- chat.ask "What's being said in this recording?", with: {
-   audio: "meeting.wav"
- }
-
- # Ask follow-up questions about the audio
- chat.ask "Summarize the key points mentioned"
- ```
-
- ## Streaming Responses
-
- For a more interactive experience, you can stream responses as they're generated:
-
- ```ruby
- chat = RubyLLM.chat
-
- # Stream the response with a block
- chat.ask "Tell me a story about a Ruby programmer" do |chunk|
-   # Each chunk is a partial response
-   print chunk.content
-   $stdout.flush # Ensure output is displayed immediately
- end
-
- # Useful for long responses or real-time displays
- chat.ask "Write a detailed essay about programming paradigms" do |chunk|
-   add_to_ui(chunk.content) # Your method to update UI
- end
- ```
-
- ## Temperature Control
-
- Control the creativity and randomness of AI responses:
-
- ```ruby
- # Higher temperature (more creative)
- creative_chat = RubyLLM.chat.with_temperature(0.9)
- creative_chat.ask "Write a poem about Ruby programming"
-
- # Lower temperature (more deterministic)
- precise_chat = RubyLLM.chat.with_temperature(0.1)
- precise_chat.ask "Explain how Ruby's garbage collector works"
- ```
-
- ## Access Token Usage
-
- RubyLLM automatically tracks token usage for billing and quota management:
-
- ```ruby
- chat = RubyLLM.chat
- response = chat.ask "Explain quantum computing"
-
- # Check token usage
- puts "Input tokens: #{response.input_tokens}"
- puts "Output tokens: #{response.output_tokens}"
- puts "Total tokens: #{response.input_tokens + response.output_tokens}"
-
- # Estimate cost (varies by model)
- model = RubyLLM.models.find(response.model_id)
- input_cost = response.input_tokens * model.input_price_per_million / 1_000_000
- output_cost = response.output_tokens * model.output_price_per_million / 1_000_000
- puts "Estimated cost: $#{(input_cost + output_cost).round(6)}"
- ```
-
- ## Registering Event Handlers
-
- You can register callbacks for chat events:
-
- ```ruby
- chat = RubyLLM.chat
-
- # Called when a new assistant message starts
- chat.on_new_message do
-   puts "Assistant is typing..."
- end
-
- # Called when a message is complete
- chat.on_end_message do |message|
-   puts "Response complete!"
-   puts "Used #{message.input_tokens + message.output_tokens} tokens"
- end
-
- # These callbacks work with both streaming and non-streaming responses
- chat.ask "Tell me about Ruby's history"
- ```
-
- ## Multiple Parallel Chats
-
- You can maintain multiple separate chat instances:
-
- ```ruby
- # Create multiple chat instances
- ruby_chat = RubyLLM.chat
- python_chat = RubyLLM.chat
-
- # Each has its own conversation history
- ruby_chat.ask "What's great about Ruby?"
- python_chat.ask "What's great about Python?"
-
- # Continue separate conversations
- ruby_chat.ask "How does Ruby handle metaprogramming?"
- python_chat.ask "How does Python handle decorators?"
- ```
-
- ## Next Steps
-
- Now that you understand chat basics, you might want to explore:
-
- - [Using Tools]({% link guides/tools.md %}) to let AI use your Ruby code
- - [Streaming Responses]({% link guides/streaming.md %}) for real-time interactions
- - [Rails Integration]({% link guides/rails.md %}) to persist conversations in your apps