ollama-ruby 1.14.0 → 1.15.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
-   metadata.gz: d07a44fbcf79c2fcab881a2e2f2bcbcd42c18d040498e7988abb77e3aa97a6f5
-   data.tar.gz: 4ab66fbf6953ca66c3235eb9512ce38b53371d5f2178603ff678d94b30fa9551
+   metadata.gz: e83f50382169132ae2774add6050e16e36bd9af29b5aca839007c0d48f0c51c0
+   data.tar.gz: 5b1d915ea8bbf426d4f2ae08556533e1c78e4483857a8d649b96e9f668857491
  SHA512:
-   metadata.gz: '08433690e167532e8a98c81b212cb6a82a96329121f376379c56abf5ecfae2e7e11d96ba40875357b6a83796fe018db4b9a3b5148711860400cdbbe290c0e2c1'
-   data.tar.gz: bdf0161b3c9ba895ff79927bcd515f346c1582cebddc86f14d7077a16ad39c907a406d1db8a056ac0a7d1f19bdaf58468cbb4acab56d5618112fe875199da540
+   metadata.gz: 9b0e7e583bcf8baa2fa5050ba84e1b94b890ed3d2fa8b871c72bb3a4dcf15f64ecebb6e665a6e7e49dc0f08e030cf5f5db5d0874f42f7f2f917572e03ed24672
+   data.tar.gz: edae6c2a87f761b36fa0615073a999ba16451a3558a67d1d114f7cf510cdafaf44062ced275d21f1ea807c3badd0e3ae9d5fa4fc0b62cb8358bdf78bcbca9f44
data/CHANGES.md CHANGED
@@ -1,5 +1,24 @@
  # Changes

+ ## 2025-12-04 v1.15.0
+
+ - Added documentation for the `ollama_ps` executable utility in the README
+ - Implemented `usage` method in the `ollama_ps` script with `-h` flag support
+ - Enhanced the `ollama_ps` script with an improved CLI using `Tins::GO`
+ - Added support for `-f json` and `-f yaml` output formats in `ollama_ps`
+ - Refactored `ollama_ps` into `fetch_ps_models`, `interpret_models`, and
+   `ps_table` functions
+ - Implemented dynamic table headings and safe navigation (`&.`) for optional
+   model fields
+ - Added `-I IMAGE` flag to `ollama_cli` for sending images to visual models
+ - Enabled multiple image file support with repeated `-I` flag usage
+ - Integrated image handling with the `Ollama::Image` infrastructure
+ - Added debug mode (`-d`) and version info (`-i`) options to the `ollama_cli`
+   documentation
+ - Updated README.md with image support documentation and usage examples
+ - Updated the command-line usage help text to document the new `-I` option
+ - Maintained backward compatibility with existing `ollama_cli` functionality
+
  ## 2025-12-03 v1.14.0

  - Added `as_json` method to `Ollama::Image` class that returns base64 string
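The headline user-facing change is the new `-I` flag in `ollama_cli`; under the hood it maps onto the `images:` parameter of the client's `generate` call, as the `data/bin/ollama_cli` diff below shows. A minimal library-level sketch of the same flow, assuming a server at the default local address and that streamed chunks expose the API's `response` field:

```ruby
require 'ollama'

# Server address is an assumption for this sketch.
ollama = Ollama::Client.new(base_url: 'http://localhost:11434')

# ollama_cli builds one of these per -I flag via Ollama::Image.for_filename.
image = Ollama::Image.for_filename('image.jpg')

# Repeated -I flags would simply add more elements to the images: array.
ollama.generate(
  model:  'llava',
  prompt: 'Describe what you see in this image',
  images: [ image ]
) do |response|
  print response.response # assumes the chunk object exposes the API's field
end
```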
data/README.md CHANGED
@@ -365,8 +365,11 @@ Usage: ollama_cli [OPTIONS]
      if it contains %{stdin} it is substituted by stdin input
    -P VARIABLE sets prompt var %{foo} to "bar" if VARIABLE is foo=bar
    -H HANDLER the handler to use for the response, defaults to ChatStart
+   -I IMAGE image is sent to the (visual) model (can be used many times)
    -S use streaming for generation
    -T use thinking for generation
+   -d enable debug mode
+   -i display ollama server version information
    -h this help

  ```
@@ -385,8 +388,8 @@ The following environment variables can be set to customize the behavior of

  #### Debug Mode

- If the `DEBUG` environment variable is set to `1`, `ollama_cli` will print out
- the values of various variables, including the base URL, model, system prompt,
+ If `ollama_cli` is given the `-d` option, it will print out the values of
+ various variables, including the base URL, model, system prompt,
  and options. This can be useful for debugging purposes.

  #### Handler Options
@@ -406,6 +409,26 @@ The `-S` option enables streaming for generation. This allows the model to
  generate text in chunks, rather than waiting for the entire response to be
  generated.

+ #### Image Support
+
+ The `ollama_cli` tool supports sending images to visual models. This enables
+ multimodal interactions where you can provide visual context to language
+ models.
+
+ **Basic Usage:**
+ ```bash
+ ollama_cli -m llava -I image.jpg -p "Describe what you see in this image"
+ ```
+
+ **Multiple Images:**
+ ```bash
+ ollama_cli -m qwen3-vl -I image1.jpg -I image2.png -p "Compare these images"
+ ```
+
+ This feature works with visual language models such as `llava`, `llava-llama3`,
+ and `bakllava`, and it can be combined freely with the existing `ollama_cli`
+ options, following the same patterns as other features in the library.
+
  ### `ollama_browse`

  The `ollama_browse` executable is a utility for exploring model tags and their
@@ -460,7 +483,76 @@ This output shows:
  **Note:** Tags are grouped by their corresponding digests, allowing users to
  easily identify equivalent versions of a model.

- ### ollama\_chat
+ ### `ollama_ps`
+
+ The `ollama_ps` executable is a utility script that displays information about
+ running Ollama models in a formatted table. It queries the Ollama API's
+ `/api/ps` endpoint and presents the running models with detailed information,
+ including:
+
+ * Model name and ID
+ * Memory usage (size and processor allocation)
+ * Context window length
+ * Parameter count and quantization level
+ * Time until expiration
+
+ #### Usage
+
+ ```bash
+ Usage: ollama_ps [OPTIONS]
+
+   -f FORMAT output format: json, yaml, or table (default)
+   -h        this help
+ ```
+
+ #### Environment Variables
+
+ The following environment variables can be set to customize the behavior of
+ `ollama_ps`:
+
+ * `OLLAMA_URL`: The Ollama base URL.
+ * `OLLAMA_HOST`: The Ollama host (used if `OLLAMA_URL` is not set).
+
+ #### Output
+
+ The script displays a formatted table with the following columns:
+
+ * **NAME** - Model name
+ * **ID** - Truncated model digest
+ * **SIZE** - Human-readable model size
+ * **PROCESSOR** - CPU/GPU allocation percentage
+ * **CONTEXT** - Context window size
+ * **PARAMS** - Parameter count (e.g. 30.5B, 23M)
+ * **QUANT** - Quantization level (e.g. Q4_K_M, F16)
+ * **UNTIL** - Time until model expiration
+
+ Example output:
+ ```
+ ╭────────────────────┬──────────────┬──────────┬───────────┬───────────┬────────┬────────┬────────────╮
+ │ NAME               │ ID           │ SIZE     │ PROCESSOR │ CONTEXT   │ PARAMS │ QUANT  │ UNTIL      │
+ ╞════════════════════╪══════════════╪══════════╪═══════════╪═══════════╪════════╪════════╪════════════╡
+ │ qwen3-coder:latest │ 06c1097efce0 │ 28.08 GB │ 100% GPU  │ 195.31 KB │  30.5B │ Q4_K_M │ 0+23:38:37 │
+ ├────────────────────┼──────────────┼──────────┼───────────┼───────────┼────────┼────────┼────────────┤
+ │ all-minilm:latest  │ 1b226e2802db │ 43.28 MB │ 100% GPU  │ 256.00 B  │    23M │    F16 │ 0+23:38:31 │
+ ╰────────────────────┴──────────────┴──────────┴───────────┴───────────┴────────┴────────┴────────────╯
+ ```
+
+ The script supports three output formats:
+
+ * **table** (default): formatted table output, used when no `-f` flag is given
+ * **json**: JSON formatted output
+ * **yaml**: YAML formatted output
+
+ Example usage:
+ ```bash
+ ollama_ps -f json
+ ollama_ps -f yaml
+ OLLAMA_URL=http://localhost:11434 ollama_ps
+ ```
+
+ The `ollama_ps` utility is particularly useful for monitoring running models,
+ checking resource usage, and managing model lifecycles in development and
+ production environments.
+
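Under the hood `ollama_ps` is a thin wrapper around the client's `ps` method (see the `data/bin/ollama_ps` diff below). A rough library-level equivalent, assuming a server at the default local address:

```ruby
require 'ollama'

# The same /api/ps query that ollama_ps performs, issued directly.
ollama = Ollama::Client.new(base_url: 'http://localhost:11434')
result = ollama.ps

result.models.each do |model|
  # digest[0, 12] mirrors the truncated ID column of the table output.
  puts "#{model.name} #{model.digest[0, 12]} expires #{model.expires_at}"
end
```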
+ ### `ollama_chat`

  This is a chat client that allows you to connect to an Ollama server and engage
  in conversations with Large Language Models (LLMs). It can be installed using
@@ -469,6 +561,7 @@ the following command:
  ```bash
  gem install ollama-chat
  ```
+ - Set default format to **table** when no `-f` flag is provided

  Once installed, you can run `ollama_chat` from your terminal or command prompt.
  This will launch a chat interface where you can interact with an LLM.
data/bin/ollama_cli CHANGED
@@ -122,6 +122,7 @@ def usage
      if it contains %{stdin} it is substituted by stdin input
    -P VARIABLE sets prompt var %{foo} to "bar" if VARIABLE is foo=bar
    -H HANDLER the handler to use for the response, defaults to ChatStart
+   -I IMAGE image is sent to the (visual) model (can be used many times)
    -S use streaming for generation
    -T use thinking for generation
    -d enable debug mode
@@ -132,7 +133,7 @@ def usage
    exit 0
  end

- opts = go 'u:m:M:s:p:P:H:c:STdih', defaults: { ?H => 'ChatStart' }
+ opts = go 'u:m:M:s:p:P:H:c:I:STdih', defaults: { ?H => 'ChatStart' }

  opts[?h] and usage

@@ -150,6 +151,7 @@ options = if model_options.is_a?(Hash)
  end
  system = get_file_argument(opts[?s], default: ENV['OLLAMA_SYSTEM'])
  prompt = get_file_argument(opts[?p], default: ENV['OLLAMA_PROMPT'])
+ images = opts[?I].to_a.map { Ollama::Image.for_filename(_1) }

  ollama = Client.configure_with(client_config)

@@ -200,6 +202,7 @@ ollama.generate(
    system:,
    prompt:,
    options:,
+   images: images,
    stream: !!opts[?S],
    think: !!opts[?T],
    &handler
@@ -34,6 +34,26 @@
34
34
  require 'ollama'
35
35
  include Ollama
36
36
  require 'terminal-table'
37
+ require 'tins'
38
+ include Tins::GO
39
+
40
+ # The usage method displays the command-line usage information for the
41
+ # application.
42
+ #
43
+ # This method prints a formatted help message to the standard output, showing
44
+ # the available command-line options and their descriptions, and then exits the
45
+ # program. It is typically called when the user requests help or when invalid
46
+ # arguments are provided.
47
+ def usage
48
+ puts <<~EOT
49
+ Usage: #{File.basename($0)} [OPTIONS]
50
+
51
+ -f FORMAT output format: json, yaml, or table (default)
52
+ -h this help
53
+
54
+ EOT
55
+ exit 0
56
+ end
37
57
 
38
58
  # The base_url method returns the Ollama API base URL.
39
59
  #
@@ -97,44 +117,88 @@ def format_processor(size_vram, size_total)
97
117
  end
98
118
  end
99
119
 
100
- # The ps method retrieves and displays information about running models in a
101
- # formatted table.
120
+ # The fetch_ps_models method retrieves information about running models from an
121
+ # Ollama server.
102
122
  #
103
- # This method creates a new Ollama client instance, fetches the list of running
104
- # models, and presents the information in a structured table format showing
105
- # model details such as name, ID, size, processor information, context length,
106
- # parameters, quantization level, and expiration time.
123
+ # This method creates a new Ollama client instance using the provided base URL,
124
+ # executes a ps command to fetch details about currently running models, and
125
+ # returns the array of model information if any models are running.
107
126
  #
108
- # @param base_url [ String, nil ] the base URL of the Ollama API endpoint, defaults to nil
109
- def ps
127
+ # @return [ Array<Hash>, nil ] an array of model information hashes if models
128
+ # are running, or nil if no models are currently running
129
+ def fetch_ps_models
110
130
  ollama = Client.new(base_url:)
111
131
  result = ollama.ps
112
132
  models = result.models
113
133
  models.empty? and return
134
+ models
135
+ end
136
+
137
+ # The interpret_models method processes an array of model objects into a
138
+ # standardized hash format.
139
+ #
140
+ # This method transforms model data into a consistent structure with specific
141
+ # keys including name, id, size, processor, context, params, quant, and until.
142
+ #
143
+ # @param models [ Array ] an array of model objects to be processed
144
+ #
145
+ # @return [ Array<Hash> ] an array of hashes containing standardized model information
146
+ def interpret_models(models)
147
+ names = %w[ name id size processor context params quant until ]
148
+ models.map do |model|
149
+ hash = {}
150
+ names.zip(
151
+ [
152
+ model.name,
153
+ model.digest[0, 12],
154
+ format_bytes(model.size),
155
+ format_processor(model.size_vram, model.size),
156
+ format_bytes(model.context_length),
157
+ model.details&.parameter_size || 'n/a',
158
+ model.details&.quantization_level || 'n/a',
159
+ format_expiry(model.expires_at),
160
+ ]
161
+ ) do |n, v|
162
+ hash[n] = v
163
+ end
164
+ hash
165
+ end
166
+ end
114
167
 
168
+ # The ps_table method displays a formatted table of model information.
169
+ #
170
+ # This method takes an array of model hashes and presents them in a formatted
171
+ # table with aligned columns, using Unicode box-drawing characters for borders.
172
+ # It formats the size, parameters, and quantization columns to be right-aligned
173
+ # for better readability.
174
+ #
175
+ # @param models [ Array<Hash> ] an array of hashes containing model information
176
+ def ps_table(models)
115
177
  table = Terminal::Table.new
116
178
  table.style = {
117
179
  all_separators: true,
118
180
  border: :unicode_round,
119
181
  }
120
- headings = %w[ NAME ID SIZE PROCESSOR CONTEXT PARAMS QUANT UNTIL ]
182
+ headings = models.first.keys.map(&:upcase)
121
183
  table.headings = headings
122
- models.each do |model|
123
- table << [
124
- model.name,
125
- model.digest[0, 12],
126
- format_bytes(model.size),
127
- format_processor(model.size_vram, model.size),
128
- format_bytes(model.context_length),
129
- model.details&.parameter_size || 'n/a',
130
- model.details&.quantization_level || 'n/a',
131
- format_expiry(model.expires_at),
132
- ]
133
- end
184
+ models.each { |model| table << model.values }
134
185
  table.align_column(headings.index("SIZE"), :right)
135
186
  table.align_column(headings.index("PARAMS"), :right)
136
187
  table.align_column(headings.index("QUANT"), :right)
137
188
  puts table
138
189
  end
139
190
 
140
- ps
191
+ opts = go 'f:h'
192
+
193
+ opts[?h] and usage
194
+
195
+ models = fetch_ps_models or exit
196
+
197
+ case opts[?f]
198
+ when 'json'
199
+ puts JSON.pretty_generate(interpret_models(models))
200
+ when 'yaml'
201
+ YAML.dump(interpret_models(models), STDOUT)
202
+ else
203
+ ps_table(interpret_models(models))
204
+ end
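The `names.zip(values) { |n, v| ... }` idiom in `interpret_models` keeps column names and values in lockstep, and `ps_table` then derives its headings from the hash keys and its rows from the hash values, which is why the table headings became dynamic. A standalone sketch of the pattern, using sample values from the example output above:

```ruby
require 'json'

# Array#zip with a block yields each [name, value] pair instead of returning
# the zipped array, so the block can fill the hash in column order.
names  = %w[ name id size ]
values = [ 'qwen3-coder:latest', '06c1097efce0', '28.08 GB' ]

row = {}
names.zip(values) { |n, v| row[n] = v }

# The same hash drives all three output formats: keys become the (upcased)
# table headings, values the row, and JSON/YAML serialize it directly.
puts JSON.pretty_generate(row)
```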
data/lib/ollama/version.rb CHANGED
@@ -1,6 +1,6 @@
  module Ollama
    # Ollama version
-   VERSION = '1.14.0'
+   VERSION = '1.15.0'
    VERSION_ARRAY = VERSION.split('.').map(&:to_i) # :nodoc:
    VERSION_MAJOR = VERSION_ARRAY[0] # :nodoc:
    VERSION_MINOR = VERSION_ARRAY[1] # :nodoc:
data/ollama-ruby.gemspec CHANGED
@@ -1,9 +1,9 @@
  # -*- encoding: utf-8 -*-
- # stub: ollama-ruby 1.14.0 ruby lib
+ # stub: ollama-ruby 1.15.0 ruby lib

  Gem::Specification.new do |s|
    s.name = "ollama-ruby".freeze
-   s.version = "1.14.0".freeze
+   s.version = "1.15.0".freeze

    s.required_rubygems_version = Gem::Requirement.new(">= 0".freeze) if s.respond_to? :required_rubygems_version=
    s.require_paths = ["lib".freeze]
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: ollama-ruby
  version: !ruby/object:Gem::Version
-   version: 1.14.0
+   version: 1.15.0
  platform: ruby
  authors:
  - Florian Frank