ollama-client 0.2.4 → 0.2.5

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
1
1
  ---
2
2
  SHA256:
3
- metadata.gz: 9b126aae11a2fd7f0ff26e53e90e12222eb88e23fa6ecf622215c4a9317fe6ce
4
- data.tar.gz: 2656544e1ce4bfa852dc687108bdde17084773c3a1e76c53ae9ed59d77604256
3
+ metadata.gz: 687a8a4fbb73c24bbc408a902cbc94312923dd5c9a42823a2a5e13111977a6b9
4
+ data.tar.gz: 6c7992774151468a99a855671d2e2e17058a613377de970f5647411c9f627b81
5
5
  SHA512:
6
- metadata.gz: 5cbd78f768412b8e413e9222b4b0b2ef68e094280546742a0bd8191269072a6281ae4182d591e889ba3829c5b047b0e053c2410fa7734dc4cf1dfe320b527239
7
- data.tar.gz: 2f1c2eafa75910646ed2300c8ad5ab312aefe56bb03d6e4797356e219543a57532058c2b2f3fc31b773caf0e7e0d26b3a71a6f9887b1a40955f5b818ffe6e227
6
+ metadata.gz: c52ad58ee08f15b0014500ac9285c0d7a446447e72ec0df1da17b815ae249103adbf5f3a64ec66e83fc1c821e64d7297b84cd7e19d808c914c862fc3263e52ae
7
+ data.tar.gz: fad07b8161e7e1442ecfc203b4774e23ceb5075fe660adc1636a1fe81a8ac4a8e27fad80c91bcc20964a9a13e6733a1cb88612cb6d6de2b4c01b5bbbeba6eaf2
data/CHANGELOG.md CHANGED
@@ -1,6 +1,13 @@
1
1
  ## [Unreleased]
2
2
 
3
- - Add tag-triggered GitHub Actions release workflow for RubyGems publishing.
3
+ ## [0.2.5] - 2026-01-22
4
+
5
+ - Add `Ollama::DocumentLoader` for loading files as context in queries
6
+ - Enhance README with context provision methods and examples
7
+ - Improve embeddings error handling and model usage guidance
8
+ - Add comprehensive Ruby guide documentation
9
+ - Update `generate()` method with enhanced functionality and usage examples
10
+ - Improve error handling across client and embeddings modules
4
11
 
5
12
  ## [0.2.3] - 2026-01-17
6
13
 
data/README.md CHANGED
@@ -74,24 +74,225 @@ gem install ollama-client
74
74
 
75
75
  ### Primary API: `generate()`
76
76
 
77
- **`generate(prompt:, schema:)`** is the **primary and recommended method** for agent-grade usage:
77
+ **`generate(prompt:, schema: nil)`** is the **primary and recommended method** for agent-grade usage:
78
78
 
79
79
  - ✅ Stateless, explicit state injection
80
80
  - ✅ Uses `/api/generate` endpoint
81
81
  - ✅ Ideal for: agent planning, tool routing, one-shot analysis, classification, extraction
82
82
  - ✅ No implicit memory or conversation history
83
+ - ✅ Supports both structured JSON (with schema) and plain text/markdown (without schema)
83
84
 
84
85
  **This is the method you should use for hybrid agents.**
85
86
 
87
+ **Usage:**
88
+ - **With schema** (structured JSON): `generate(prompt: "...", schema: {...})`
89
+ - **Without schema** (plain text): `generate(prompt: "...")` - returns plain text/markdown
90
+
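A compact sketch of those two call shapes (the prompt text and schema are placeholders; fuller examples follow later in this README):

```ruby
require "ollama_client"

client = Ollama::Client.new

# Without a schema: returns plain text/markdown
text = client.generate(prompt: "Explain Ruby in one sentence")
puts text

# With a schema: returns a Hash validated against the schema
result = client.generate(
  prompt: "Classify the sentiment of: 'Ruby is delightful'",
  schema: {
    "type" => "object",
    "required" => ["sentiment"],
    "properties" => { "sentiment" => { "type" => "string" } }
  }
)
puts result["sentiment"]
```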
86
91
  ### Choosing the Correct API (generate vs chat)
87
92
 
88
93
  - **Use `/api/generate`** (via `Ollama::Client#generate` or `Ollama::Agent::Planner`) for **stateless planner/router** steps where you want strict, deterministic structured outputs.
89
94
  - **Use `/api/chat`** (via `Ollama::Agent::Executor`) for **stateful tool-using** workflows where the model may request tool calls across multiple turns.
90
95
 
91
96
  **Warnings:**
92
- - Dont use `generate()` for tool-calling loops (youll end up re-implementing message/tool lifecycles).
93
- - Dont use `chat()` for deterministic planners unless youre intentionally managing conversation state.
94
- - Dont let streaming output drive decisions (streaming is presentation-only).
97
+ - Don't use `generate()` for tool-calling loops (you'll end up re-implementing message/tool lifecycles).
98
+ - Don't use `chat()` for deterministic planners unless you're intentionally managing conversation state.
99
+ - Don't let streaming output drive decisions (streaming is presentation-only).
100
+
101
+ ### Providing Context to Queries
102
+
103
+ You can provide context to your queries in several ways:
104
+
105
+ **Option 1: Include context directly in the prompt (generate)**
106
+
107
+ ```ruby
108
+ require "ollama_client"
109
+
110
+ client = Ollama::Client.new
111
+
112
+ # Build prompt with context
113
+ context = "User's previous actions: search, calculate, validate"
114
+ user_query = "What should I do next?"
115
+
116
+ full_prompt = "Given this context: #{context}\n\nUser asks: #{user_query}"
117
+
118
+ result = client.generate(
119
+ prompt: full_prompt,
120
+ schema: {
121
+ "type" => "object",
122
+ "required" => ["action"],
123
+ "properties" => {
124
+ "action" => { "type" => "string" }
125
+ }
126
+ }
127
+ )
128
+ ```
129
+
130
+ **Option 2: Use system messages (chat/chat_raw)**
131
+
132
+ ```ruby
133
+ require "ollama_client"
134
+
135
+ client = Ollama::Client.new
136
+
137
+ # Provide context via system message
138
+ context = "You are analyzing market data. Current market status: Bullish. Key indicators: RSI 65, MACD positive."
139
+
140
+ response = client.chat_raw(
141
+ messages: [
142
+ { role: "system", content: context },
143
+ { role: "user", content: "What's the next trading action?" }
144
+ ],
145
+ allow_chat: true
146
+ )
147
+
148
+ puts response.message.content
149
+ ```
150
+
151
+ **Option 3: Use Planner with context parameter**
152
+
153
+ ```ruby
154
+ require "ollama_client"
155
+
156
+ client = Ollama::Client.new
157
+ planner = Ollama::Agent::Planner.new(client)
158
+
159
+ context = {
160
+ previous_actions: ["search", "calculate"],
161
+ user_preferences: "prefers conservative strategies"
162
+ }
163
+
164
+ plan = planner.run(
165
+ prompt: "Decide the next action",
166
+ context: context
167
+ )
168
+ ```
169
+
170
+ **Option 4: Load documents from directory (DocumentLoader)**
171
+
172
+ ```ruby
173
+ require "ollama_client"
174
+
175
+ client = Ollama::Client.new
176
+
177
+ # Load all documents from a directory (supports .txt, .md, .csv, .json)
178
+ loader = Ollama::DocumentLoader.new("docs/")
179
+ loader.load_all # Loads all supported files
180
+
181
+ # Get all documents as context
182
+ context = loader.to_context
183
+
184
+ # Use in your query
185
+ result = client.generate(
186
+ prompt: "Context from documents:\n#{context}\n\nQuestion: What is Ruby?",
187
+ schema: {
188
+ "type" => "object",
189
+ "required" => ["answer"],
190
+ "properties" => {
191
+ "answer" => { "type" => "string" }
192
+ }
193
+ }
194
+ )
195
+
196
+ # Or load specific files
197
+ loader.load_file("ruby_guide.md")
198
+ ruby_context = loader["ruby_guide.md"]
199
+
200
+ result = client.generate(
201
+ prompt: "Based on this documentation:\n#{ruby_context}\n\nExplain Ruby's key features."
202
+ )
203
+ ```
204
+
205
+ **Option 5: RAG-style context injection (using embeddings + DocumentLoader)**
206
+
207
+ ```ruby
208
+ require "ollama_client"
209
+
210
+ client = Ollama::Client.new
211
+
212
+ # 1. Load documents
213
+ loader = Ollama::DocumentLoader.new("docs/")
214
+ loader.load_all
215
+
216
+ # 2. When querying, find relevant context using embeddings
217
+ query = "What is Ruby?"
218
+ # (In real RAG, you'd compute embeddings and find similar docs)
219
+
220
+ # 3. Inject relevant context into prompt
221
+ relevant_context = loader["ruby_guide.md"] # Or find via similarity search
222
+
223
+ result = client.generate(
224
+ prompt: "Context: #{relevant_context}\n\nQuestion: #{query}\n\nAnswer based on the context:"
225
+ )
226
+ ```
227
+
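One way that similarity step might look, reusing the `DocumentLoader` and embeddings APIs shown elsewhere in this README (the embedding model name is an example; use any embedding model you have pulled in Ollama):

```ruby
require "ollama_client"

client = Ollama::Client.new
loader = Ollama::DocumentLoader.new("docs/")
loader.load_all

def cosine_similarity(vec1, vec2)
  dot = vec1.zip(vec2).sum { |a, b| a * b }
  dot / (Math.sqrt(vec1.sum { |x| x * x }) * Math.sqrt(vec2.sum { |x| x * x }))
end

# Embed each loaded document once
doc_embeddings = loader.files.map do |name|
  [name, client.embeddings.embed(model: "nomic-embed-text:latest", input: loader[name])]
end

# Embed the query and pick the most similar document as context
query = "What is Ruby?"
query_embedding = client.embeddings.embed(model: "nomic-embed-text:latest", input: query)
best_name, _best = doc_embeddings.max_by { |_, emb| cosine_similarity(query_embedding, emb) }

result = client.generate(
  prompt: "Context: #{loader[best_name]}\n\nQuestion: #{query}\n\nAnswer based on the context:"
)
```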
228
+ **Option 6: Multi-turn conversation with accumulated context**
229
+
230
+ ```ruby
231
+ require "ollama_client"
232
+
233
+ client = Ollama::Client.new
234
+
235
+ messages = [
236
+ { role: "system", content: "You are a helpful assistant with access to context." },
237
+ { role: "user", content: "What is Ruby?" }
238
+ ]
239
+
240
+ # First response
241
+ response1 = client.chat_raw(messages: messages, allow_chat: true)
242
+ puts response1.message.content
243
+
244
+ # Add context and continue conversation
245
+ messages << { role: "assistant", content: response1.message.content }
246
+ messages << { role: "user", content: "Tell me more about its use cases" }
247
+
248
+ response2 = client.chat_raw(messages: messages, allow_chat: true)
249
+ puts response2.message.content
250
+ ```
251
+
252
+ ### Plain Text / Markdown Responses (No JSON Schema)
253
+
254
+ For simple text or markdown responses without JSON validation, you can use either `generate()` or `chat_raw()`:
255
+
256
+ **Option 1: Using `generate()` (recommended for simple queries)**
257
+
258
+ ```ruby
259
+ require "ollama_client"
260
+
261
+ client = Ollama::Client.new
262
+
263
+ # Get plain text/markdown response (no schema required)
264
+ text_response = client.generate(
265
+ prompt: "Explain Ruby in simple terms"
266
+ )
267
+
268
+ puts text_response
269
+ # Output: Plain text or markdown explanation
270
+ ```
271
+
272
+ **Option 2: Using `chat_raw()` (for multi-turn conversations)**
273
+
274
+ ```ruby
275
+ require "ollama_client"
276
+
277
+ client = Ollama::Client.new
278
+
279
+ # Get plain text/markdown response (no format required)
280
+ response = client.chat_raw(
281
+ messages: [{ role: "user", content: "Explain Ruby in simple terms" }],
282
+ allow_chat: true
283
+ )
284
+
285
+ # Access the plain text content
286
+ text_response = response.message.content
287
+ puts text_response
288
+ # Output: Plain text or markdown explanation
289
+ ```
290
+
291
+ **When to use which:**
292
+ - **`generate()` without schema** - Simple one-shot queries, explanations, text generation
293
+ - **`generate()` with schema** - Structured JSON outputs for agents
294
+ - **`chat_raw()` without format** - Multi-turn conversations with plain text
295
+ - **`chat_raw()` with format** - Multi-turn conversations with structured outputs
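A sketch of that last variant, assuming `chat_raw()` forwards a `format:` schema the same way `chat()` does (as the bullet above implies):

```ruby
require "ollama_client"

client = Ollama::Client.new

schema = {
  "type" => "object",
  "required" => ["answer"],
  "properties" => { "answer" => { "type" => "string" } }
}

response = client.chat_raw(
  messages: [{ role: "user", content: "What is Ruby? Answer as JSON." }],
  format: schema,
  allow_chat: true
)
puts response.message.content # JSON string shaped by the schema
```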
95
296
 
96
297
  ### Scope / endpoint coverage
97
298
 
@@ -200,16 +401,36 @@ Use structured tools when you need:
200
401
  All Tool classes support serialization and deserialization:
201
402
 
202
403
  ```ruby
404
+ # Create a tool
405
+ tool = Ollama::Tool.new(
406
+ type: "function",
407
+ function: Ollama::Tool::Function.new(
408
+ name: "fetch_weather",
409
+ description: "Get weather for a city",
410
+ parameters: Ollama::Tool::Function::Parameters.new(
411
+ type: "object",
412
+ properties: {
413
+ city: Ollama::Tool::Function::Parameters::Property.new(
414
+ type: "string",
415
+ description: "The city name"
416
+ )
417
+ },
418
+ required: %w[city]
419
+ )
420
+ )
421
+ )
422
+
203
423
  # Serialize to JSON
204
424
  json = tool.to_json
205
425
 
206
426
  # Deserialize from hash
207
- tool = Ollama::Tool.from_hash(JSON.parse(json))
427
+ tool2 = Ollama::Tool.from_hash(JSON.parse(json))
208
428
 
209
429
  # Equality comparison
210
- tool1 == tool2 # Compares hash representations
430
+ tool == tool2 # Compares hash representations (returns true)
211
431
 
212
432
  # Empty check
433
+ params = Ollama::Tool::Function::Parameters.new(type: "object", properties: {})
213
434
  params.empty? # True if no properties/required fields
214
435
  ```
215
436
 
@@ -267,7 +488,23 @@ end
267
488
 
268
489
  ### Quick Start Pattern
269
490
 
270
- The basic pattern for using structured outputs:
491
+ **Option 1: Plain text/markdown (no schema)**
492
+
493
+ ```ruby
494
+ require "ollama_client"
495
+
496
+ client = Ollama::Client.new
497
+
498
+ # Simple text response - no schema needed
499
+ response = client.generate(
500
+ prompt: "Explain Ruby programming in one sentence"
501
+ )
502
+
503
+ puts response
504
+ # Output: Plain text explanation
505
+ ```
506
+
507
+ **Option 2: Structured JSON (with schema)**
271
508
 
272
509
  ```ruby
273
510
  require "ollama_client"
@@ -288,7 +525,7 @@ schema = {
288
525
  begin
289
526
  result = client.generate(
290
527
  model: "llama3.1:8b",
291
- prompt: "Your prompt here",
528
+ prompt: "Return a JSON object with field1 as a string and field2 as a number. Example: field1 could be 'example' and field2 could be 42.",
292
529
  schema: schema
293
530
  )
294
531
 
@@ -400,7 +637,18 @@ end
400
637
  **For agents, prefer `generate()` with explicit state injection:**
401
638
 
402
639
  ```ruby
640
+ # Define decision schema
641
+ decision_schema = {
642
+ "type" => "object",
643
+ "required" => ["action", "reasoning"],
644
+ "properties" => {
645
+ "action" => { "type" => "string" },
646
+ "reasoning" => { "type" => "string" }
647
+ }
648
+ }
649
+
403
650
  # ✅ GOOD: Explicit state in prompt
651
+ actions = ["search", "calculate", "validate"]
404
652
  context = "Previous actions: #{actions.join(', ')}"
405
653
  result = client.generate(
406
654
  prompt: "Given context: #{context}. Decide next action.",
@@ -408,8 +656,17 @@ result = client.generate(
408
656
  )
409
657
 
410
658
  # ❌ AVOID: Implicit conversation history
411
- messages = [{ role: "user", content: "..." }]
412
- result = client.chat(messages: messages, format: schema, allow_chat: true) # History grows silently
659
+ messages = [{ role: "user", content: "Decide the next action based on previous actions: search, calculate, validate" }]
660
+ result = client.chat(messages: messages, format: decision_schema, allow_chat: true)
661
+
662
+ # Problem: History grows silently - you must manually manage it
663
+ messages << { role: "assistant", content: result.to_json }
664
+ messages << { role: "user", content: "Now do the next step" }
665
+ result2 = client.chat(messages: messages, format: decision_schema, allow_chat: true)
666
+ # messages.size is now 3, and will keep growing with each turn
667
+ # You must manually track what's in the history
668
+ # Schema validation can become weaker with accumulated context
669
+ # Harder to reason about state in agent systems
413
670
  ```
414
671
 
415
672
  ### Example: Chat API (Advanced Use Case)
@@ -567,7 +824,7 @@ data = "Sales increased 25% this quarter, customer satisfaction is at 4.8/5"
567
824
 
568
825
  begin
569
826
  result = client.generate(
570
- prompt: "Analyze this data: #{data}",
827
+ prompt: "Analyze this data: #{data}. Return confidence as a decimal between 0 and 1 (e.g., 0.85 for 85% confidence).",
571
828
  schema: analysis_schema
572
829
  )
573
830
 
@@ -589,7 +846,8 @@ begin
589
846
 
590
847
  rescue Ollama::SchemaViolationError => e
591
848
  puts "Analysis failed validation: #{e.message}"
592
- # Could retry or use fallback logic
849
+ puts "The LLM response didn't match the schema constraints."
850
+ # Could retry with a clearer prompt or use fallback logic
593
851
  rescue Ollama::TimeoutError => e
594
852
  puts "Request timed out: #{e.message}"
595
853
  rescue Ollama::Error => e
@@ -631,6 +889,63 @@ models = client.list_models
631
889
  puts "Available models: #{models.join(', ')}"
632
890
  ```
633
891
 
892
+ ### Loading Documents from Directory (DocumentLoader)
893
+
894
+ Load files from a directory and use them as context for your queries. Supports `.txt`, `.md`, `.csv`, and `.json` files:
895
+
896
+ ```ruby
897
+ require "ollama_client"
898
+
899
+ client = Ollama::Client.new
900
+
901
+ # Load all documents from a directory
902
+ loader = Ollama::DocumentLoader.new("docs/")
903
+ loader.load_all # Loads all .txt, .md, .csv, .json files
904
+
905
+ # Get all documents as a single context string
906
+ context = loader.to_context
907
+
908
+ # Use in your query
909
+ result = client.generate(
910
+ prompt: "Context from documents:\n#{context}\n\nQuestion: What is Ruby?",
911
+ schema: {
912
+ "type" => "object",
913
+ "required" => ["answer"],
914
+ "properties" => {
915
+ "answer" => { "type" => "string" }
916
+ }
917
+ }
918
+ )
919
+
920
+ # Load specific file
921
+ ruby_guide = loader.load_file("ruby_guide.md")
922
+
923
+ # Access loaded documents
924
+ all_files = loader.files # ["ruby_guide.md", "python_intro.txt", ...]
925
+ specific_doc = loader["ruby_guide.md"]
926
+
927
+ # Load recursively from subdirectories
928
+ loader.load_all(recursive: true)
929
+
930
+ # Select documents by pattern
931
+ ruby_docs = loader.select(/ruby/)
932
+ ```
933
+
934
+ **Supported file types:**
935
+ - **`.txt`** - Plain text files
936
+ - **`.md`, `.markdown`** - Markdown files
937
+ - **`.csv`** - CSV files (converted to readable text format)
938
+ - **`.json`** - JSON files (pretty-printed)
939
+
940
+ **Example directory structure:**
941
+ ```
942
+ docs/
943
+ ├── ruby_guide.md
944
+ ├── python_intro.txt
945
+ ├── data.csv
946
+ └── config.json
947
+ ```
948
+
634
949
  ### Embeddings for RAG/Semantic Search
635
950
 
636
951
  Use embeddings for building knowledge bases and semantic search in agents:
@@ -640,21 +955,55 @@ require "ollama_client"
640
955
 
641
956
  client = Ollama::Client.new
642
957
 
643
- # Single text embedding
644
- embedding = client.embeddings.embed(
645
- model: "all-minilm",
646
- input: "What is Ruby programming?"
647
- )
648
- # Returns: [0.123, -0.456, ...] (array of floats)
958
+ # Note: You need an embedding model installed in Ollama
959
+ # Common models: nomic-embed-text, all-minilm, mxbai-embed-large
960
+ # Check available models: client.list_models
649
961
 
650
- # Multiple texts
651
- embeddings = client.embeddings.embed(
652
- model: "all-minilm",
653
- input: ["What is Ruby?", "What is Python?", "What is JavaScript?"]
654
- )
655
- # Returns: [[...], [...], [...]] (array of embedding arrays)
962
+ begin
963
+ # Single text embedding
964
+ # Note: Use the full model name with tag if needed (e.g., "nomic-embed-text:latest")
965
+ embedding = client.embeddings.embed(
966
+ model: "nomic-embed-text:latest", # Use an available embedding model
967
+ input: "What is Ruby programming?"
968
+ )
969
+ # Returns: [0.123, -0.456, ...] (array of floats)
970
+ if embedding.empty?
971
+ puts "Warning: Empty embedding returned. Check model compatibility."
972
+ else
973
+ puts "Embedding dimension: #{embedding.length}"
974
+ puts "First few values: #{embedding.first(5).map { |v| v.round(4) }}"
975
+ end
976
+
977
+ # Multiple texts
978
+ embeddings = client.embeddings.embed(
979
+ model: "nomic-embed-text:latest",
980
+ input: ["What is Ruby?", "What is Python?", "What is JavaScript?"]
981
+ )
982
+ # Returns: [[...], [...], [...]] (array of embedding arrays)
983
+ if embeddings.is_a?(Array) && embeddings.first.is_a?(Array)
984
+ puts "Number of embeddings: #{embeddings.length}"
985
+ puts "Each embedding dimension: #{embeddings.first.length}"
986
+ else
987
+ puts "Unexpected response format: #{embeddings.class}"
988
+ end
989
+
990
+ rescue Ollama::NotFoundError => e
991
+ puts "Model not found. Install an embedding model first:"
992
+ puts " ollama pull nomic-embed-text"
993
+ puts "Or check available models: client.list_models"
994
+ puts "Note: Use the full model name with tag (e.g., 'nomic-embed-text:latest')"
995
+ rescue Ollama::Error => e
996
+ puts "Error: #{e.message}"
997
+ end
656
998
 
657
999
  # Use for semantic similarity in agents
1000
+ def cosine_similarity(vec1, vec2)
1001
+ dot_product = vec1.zip(vec2).sum { |a, b| a * b }
1002
+ magnitude1 = Math.sqrt(vec1.sum { |x| x * x })
1003
+ magnitude2 = Math.sqrt(vec2.sum { |x| x * x })
1004
+ dot_product / (magnitude1 * magnitude2)
1005
+ end
1006
+
658
1007
  def find_similar(query_embedding, document_embeddings, threshold: 0.7)
659
1008
  document_embeddings.select do |doc_emb|
660
1009
  cosine_similarity(query_embedding, doc_emb) > threshold
@@ -668,18 +1017,28 @@ Load configuration from JSON files for production deployments:
668
1017
 
669
1018
  ```ruby
670
1019
  require "ollama_client"
1020
+ require "json"
671
1021
 
672
- # config.json:
673
- # {
674
- # "base_url": "http://localhost:11434",
675
- # "model": "llama3.1:8b",
676
- # "timeout": 30,
677
- # "retries": 3,
678
- # "temperature": 0.2
679
- # }
1022
+ # Create config.json file (or use an existing one)
1023
+ config_data = {
1024
+ "base_url" => "http://localhost:11434",
1025
+ "model" => "llama3.1:8b",
1026
+ "timeout" => 30,
1027
+ "retries" => 3,
1028
+ "temperature" => 0.2
1029
+ }
1030
+
1031
+ # Write config file
1032
+ File.write("config.json", JSON.pretty_generate(config_data))
680
1033
 
681
- config = Ollama::Config.load_from_json("config.json")
682
- client = Ollama::Client.new(config: config)
1034
+ # Load configuration from file
1035
+ begin
1036
+ config = Ollama::Config.load_from_json("config.json")
1037
+ client = Ollama::Client.new(config: config)
1038
+ puts "Client configured from config.json"
1039
+ rescue Ollama::Error => e
1040
+ puts "Error loading config: #{e.message}"
1041
+ end
683
1042
  ```
684
1043
 
685
1044
  ### Type-Safe Model Options
@@ -689,6 +1048,17 @@ Use the `Options` class for type-checked model parameters:
689
1048
  ```ruby
690
1049
  require "ollama_client"
691
1050
 
1051
+ client = Ollama::Client.new
1052
+
1053
+ # Define schema
1054
+ analysis_schema = {
1055
+ "type" => "object",
1056
+ "required" => ["summary"],
1057
+ "properties" => {
1058
+ "summary" => { "type" => "string" }
1059
+ }
1060
+ }
1061
+
692
1062
  # Options with validation
693
1063
  options = Ollama::Options.new(
694
1064
  temperature: 0.7,
@@ -701,11 +1071,19 @@ options = Ollama::Options.new(
701
1071
  # Will raise ArgumentError if values are out of range
702
1072
  # options.temperature = 3.0 # Error: temperature must be between 0.0 and 2.0
703
1073
 
704
- client.generate(
705
- prompt: "Analyze this data",
706
- schema: analysis_schema,
707
- options: options.to_h
1074
+ # Use with chat() - chat() accepts options parameter
1075
+ client.chat(
1076
+ messages: [{ role: "user", content: "Analyze this data" }],
1077
+ format: analysis_schema,
1078
+ options: options.to_h,
1079
+ allow_chat: true
708
1080
  )
1081
+
1082
+ # Note: generate() doesn't accept options parameter
1083
+ # For generate(), set options in config instead:
1084
+ # config = Ollama::Config.new
1085
+ # config.temperature = 0.7
1086
+ # client = Ollama::Client.new(config: config)
709
1087
  ```
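Expanding the commented note above into a runnable sketch, assuming `Ollama::Config` exposes the `temperature` setter the comment suggests:

```ruby
require "ollama_client"

# generate() reads model options from the client's config rather than a
# per-call options: argument
config = Ollama::Config.new
config.temperature = 0.7

client = Ollama::Client.new(config: config)

summary = client.generate(prompt: "Summarize Ruby in one sentence")
puts summary
```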
710
1088
 
711
1089
  ### Error Handling
@@ -713,8 +1091,22 @@ client.generate(
713
1091
  ```ruby
714
1092
  require "ollama_client"
715
1093
 
1094
+ client = Ollama::Client.new
1095
+ schema = {
1096
+ "type" => "object",
1097
+ "required" => ["result"],
1098
+ "properties" => {
1099
+ "result" => { "type" => "string" }
1100
+ }
1101
+ }
1102
+
716
1103
  begin
717
- result = client.generate(prompt: prompt, schema: schema)
1104
+ result = client.generate(
1105
+ prompt: "Return a simple result",
1106
+ schema: schema
1107
+ )
1108
+ # Success - use the result
1109
+ puts "Result: #{result['result']}"
718
1110
  rescue Ollama::NotFoundError => e
719
1111
  # 404 Not Found - model or endpoint doesn't exist
720
1112
  # The error message automatically suggests similar model names if available
data/docs/README.md CHANGED
@@ -4,8 +4,7 @@ This directory contains internal development documentation for the ollama-client
4
4
 
5
5
  ## Quick Links
6
6
 
7
- - 🚀 **[Quick Release Reference](QUICK_RELEASE.md)** - Fast release checklist
8
- - 📘 **[Complete Release Guide](GEM_RELEASE_GUIDE.md)** - Full automation setup (794 lines)
7
+ - 🚀 **[Release Guide](RELEASE_GUIDE.md)** - Complete guide for automated gem releases with MFA
9
8
 
10
9
  ## Contents
11
10
 
@@ -22,7 +21,7 @@ This directory contains internal development documentation for the ollama-client
22
21
 
23
22
  ### CI/Automation
24
23
  - **[CLOUD.md](CLOUD.md)** - Cloud agent guide for automated testing and fixes
25
- - **[GEM_RELEASE_GUIDE.md](GEM_RELEASE_GUIDE.md)** - Complete guide for automated gem releases via GitHub Actions and git tags
24
+ - **[RELEASE_GUIDE.md](RELEASE_GUIDE.md)** - Complete guide for automated gem releases via GitHub Actions with OTP/MFA
26
25
 
27
26
  ## For Users
28
27