ollama-client 0.2.6 → 0.2.7

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: a1ba22e6a669d0617be340e290bb4009023583418afb1f4952f58b616e22369e
- data.tar.gz: 33310224720f99ab82cf59b6e30166b682af648240fcecd84ecde9dc5bc88608
+ metadata.gz: 53bd3c1ff3323d004a7ba3549317ef49d3ce9698d41133a1d2150d0a5135fa80
+ data.tar.gz: 4c726b2a7fabf164c91cd51206266d250de836986d1a36726556cbefb47a6e8e
  SHA512:
- metadata.gz: 29ff7766aebd45634c3170e6a2e8bdb09c97728f6d4dc726e17e9cc404d23c53e96c669effda77044c3a2296fcd0642e2a96bb845d84d4549f759b8de20383c0
- data.tar.gz: d272a70b48a329c09c8b2ddc0baf98b61708a637eb4cbf92693af4291ccfbf42104ecdf3eef0a66b525ad45e57111c4d550baa525662c284482b15feacfcfafa
+ metadata.gz: 1cdcc10b54d2fa318daefc2736d338715b65f03cacf94c7c9303c07ec745ea510e6de0ac73c0c668b0301481663989af1368d86727ebfa1252a43edab9d7d711
+ data.tar.gz: aead7b8ae703d436866cf796e91b102fbcc21ff47e6528c2edf62b13a1bac1f98f36d1146ba3e56a7fa0792bd0aff0880be65450c05b2352336cd20b39b3f9ab
data/CHANGELOG.md CHANGED
@@ -1,5 +1,14 @@
  ## [Unreleased]

+ ## [0.2.7] - 2026-02-04
+
+ - Add MCP (Model Context Protocol) support for local and remote servers
+ - Add `Ollama::MCP::StdioClient` for local MCP servers over stdio (e.g. `npx @modelcontextprotocol/server-filesystem`)
+ - Add `Ollama::MCP::HttpClient` for remote MCP servers over HTTP (e.g. [gitmcp.io](https://gitmcp.io)/owner/repo)
+ - Add `Ollama::MCP::ToolsBridge` to expose MCP tools to `Ollama::Agent::Executor` (`client:` or `stdio_client:`)
+ - Add examples: `examples/mcp_executor.rb` (stdio), `examples/mcp_http_executor.rb` (URL)
+ - Document MCP usage and GitMCP URL in README; fix RuboCop offenses in MCP code
+
  ## [0.2.6] - 2026-01-26

  - Reorganize examples: move agent examples to separate repository, keep minimal client examples
data/README.md CHANGED
@@ -72,28 +72,165 @@ Or install it yourself as:
  gem install ollama-client
  ```

+ ## Quick Start
+
+ ### Step 1: Simple Text Generation
+
+ ```ruby
+ require "ollama_client"
+
+ client = Ollama::Client.new
+
+ # Get plain text response (no schema = plain text)
+ response = client.generate(
+   prompt: "Explain Ruby blocks in one sentence"
+ )
+
+ puts response
+ # => "Ruby blocks are anonymous functions passed to methods..."
+ ```
+
+ ### Step 2: Structured Outputs (Recommended for Agents)
+
+ ```ruby
+ require "ollama_client"
+
+ client = Ollama::Client.new
+
+ # Define JSON schema
+ schema = {
+   "type" => "object",
+   "required" => ["action", "reasoning"],
+   "properties" => {
+     "action" => { "type" => "string", "enum" => ["search", "calculate", "finish"] },
+     "reasoning" => { "type" => "string" }
+   }
+ }
+
+ # Get structured decision
+ result = client.generate(
+   prompt: "User wants weather in Paris. What should I do?",
+   schema: schema
+ )
+
+ puts result["action"]    # => "search"
+ puts result["reasoning"] # => "Need to fetch weather data..."
+ ```
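The Step 2 schema above constrains both the required keys and the `action` enum. As a quick illustration of exactly what it enforces, here is a plain-Ruby sanity check — the `valid_against?` helper is hypothetical and not part of the gem (the client performs its own schema validation when `schema:` is given):

```ruby
# Hypothetical helper -- illustrates what the Step 2 schema enforces locally.
schema = {
  "type" => "object",
  "required" => ["action", "reasoning"],
  "properties" => {
    "action" => { "type" => "string", "enum" => ["search", "calculate", "finish"] },
    "reasoning" => { "type" => "string" }
  }
}

def valid_against?(schema, hash)
  # Every required key must be present.
  missing = schema["required"].reject { |key| hash.key?(key) }
  return false unless missing.empty?

  # Present keys must match their declared type and enum (if any).
  schema["properties"].all? do |key, rules|
    next true unless hash.key?(key)

    value = hash[key]
    next false if rules["type"] == "string" && !value.is_a?(String)

    rules["enum"].nil? || rules["enum"].include?(value)
  end
end

puts valid_against?(schema, { "action" => "search", "reasoning" => "Need weather data" }) # => true
puts valid_against?(schema, { "action" => "fly" }) # => false (missing "reasoning"; "fly" not in enum)
```

A check like this is only a teaching aid; in practice you would rely on the client's built-in validation rather than re-validating by hand.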
+
+ ### Step 3: Agent Planning (Stateless)
+
+ ```ruby
+ require "ollama_client"
+
+ client = Ollama::Client.new
+ planner = Ollama::Agent::Planner.new(client)
+
+ decision_schema = {
+   "type" => "object",
+   "required" => ["action"],
+   "properties" => {
+     "action" => { "type" => "string", "enum" => ["search", "calculate", "finish"] }
+   }
+ }
+
+ plan = planner.run(
+   prompt: "Decide the next action",
+   schema: decision_schema
+ )
+
+ # Use the structured decision
+ case plan["action"]
+ when "search"
+   # Execute search
+ when "calculate"
+   # Execute calculation
+ when "finish"
+   # Task complete
+ end
+ ```
+
+ ### Step 4: Tool Calling (Stateful)
+
+ ```ruby
+ require "ollama_client"
+
+ client = Ollama::Client.new
+
+ # Define tools
+ tools = {
+   "get_weather" => ->(city:) { { city: city, temp: 22, condition: "sunny" } }
+ }
+
+ executor = Ollama::Agent::Executor.new(client, tools: tools)
+
+ answer = executor.run(
+   system: "You are a helpful assistant. Use tools when needed.",
+   user: "What's the weather in Paris?"
+ )
+
+ puts answer
+ # => "The weather in Paris is 22°C and sunny."
+ ```
+
+ **Next Steps:** See [Choosing the Correct API](#choosing-the-correct-api-generate-vs-chat) below for guidance on when to use each method.
+
  ## Usage

  **Note:** You can use `require "ollama_client"` (recommended) or `require "ollama/client"` directly. The client works with or without the global `OllamaClient` configuration module.

  ### Primary API: `generate()`

- **`generate(prompt:, schema: nil, allow_plain_text: false)`** is the **primary and recommended method** for agent-grade usage:
+ **`generate(prompt:, schema: nil, model: nil, strict: false, return_meta: false)`** is the **primary and recommended method** for agent-grade usage:

  - ✅ Stateless, explicit state injection
  - ✅ Uses `/api/generate` endpoint
  - ✅ Ideal for: agent planning, tool routing, one-shot analysis, classification, extraction
  - ✅ No implicit memory or conversation history
- - ✅ Supports both structured JSON (with schema) and plain text/markdown (with `allow_plain_text: true`)
+ - ✅ Supports both structured JSON (with schema) and plain text/markdown (without schema)

  **This is the method you should use for hybrid agents.**

  **Usage:**
  - **With schema** (structured JSON): `generate(prompt: "...", schema: {...})` - returns Hash
- - **Without schema** (plain text): `generate(prompt: "...", allow_plain_text: true)` - returns String
+ - **Without schema** (plain text): `generate(prompt: "...")` - returns String (plain text/markdown)

  ### Choosing the Correct API (generate vs chat)

+ **Decision Tree:**
+
+ ```
+ Need structured JSON output?
+ ├─ Yes → Use generate() with schema
+ │        └─ Need conversation history?
+ │           ├─ No → Use generate() directly
+ │           └─ Yes → Include context in prompt (generate() is stateless)
+
+ └─ No → Need plain text/markdown?
+    ├─ Yes → Use generate() without schema
+    │        └─ Need conversation history?
+    │           ├─ No → Use generate() directly
+    │           └─ Yes → Include context in prompt
+
+    └─ Need tool calling?
+       ├─ Yes → Use Executor (chat API with tools)
+       │        └─ Multi-step workflow with tool loops
+
+       └─ No → Use ChatSession (chat API for UI)
+                └─ Human-facing chat interface
+ ```
+
+ **Quick Reference:**
+
+ | Use Case | Method | API Endpoint | State |
+ |----------|--------|--------------|-------|
+ | Agent planning/routing | `generate()` | `/api/generate` | Stateless |
+ | Structured extraction | `generate()` | `/api/generate` | Stateless |
+ | Simple text generation | `generate()` | `/api/generate` | Stateless |
+ | Tool-calling loops | `Executor` | `/api/chat` | Stateful |
+ | UI chat interface | `ChatSession` | `/api/chat` | Stateful |
+
+ **Detailed Guidance:**
+
  - **Use `/api/generate`** (via `Ollama::Client#generate` or `Ollama::Agent::Planner`) for **stateless planner/router** steps where you want strict, deterministic structured outputs.
  - **Use `/api/chat`** (via `Ollama::Agent::Executor`) for **stateful tool-using** workflows where the model may request tool calls across multiple turns.

@@ -264,10 +401,9 @@ require "ollama_client"

  client = Ollama::Client.new

- # Get plain text/markdown response (use allow_plain_text: true to skip schema)
+ # Get plain text/markdown response (omit schema for plain text)
  text_response = client.generate(
-   prompt: "Explain Ruby in simple terms",
-   allow_plain_text: true
+   prompt: "Explain Ruby in simple terms"
  )

  puts text_response
@@ -294,8 +430,8 @@ puts text_response
  ```

  **When to use which:**
- - **`generate()` with `allow_plain_text: true`** - Simple one-shot queries, explanations, text generation
- - **`generate()` with schema** - Structured JSON outputs for agents (default, recommended)
+ - **`generate()` without schema** - Simple one-shot queries, explanations, text generation (returns plain text)
+ - **`generate()` with schema** - Structured JSON outputs for agents (recommended for agents)
  - **`chat_raw()` without format** - Multi-turn conversations with plain text
  - **`chat_raw()` with format** - Multi-turn conversations with structured outputs

@@ -303,7 +439,7 @@ puts text_response

  This gem intentionally focuses on **agent building blocks**:

- - **Supported**: `/api/generate`, `/api/chat`, `/api/tags`, `/api/ping`, `/api/embeddings`
+ - **Supported**: `/api/generate`, `/api/chat`, `/api/tags`, `/api/ping`, `/api/embed`
  - **Not guaranteed**: full endpoint parity with every Ollama release (advanced model mgmt, etc.)

  ### Agent endpoint mapping (unambiguous)
@@ -884,6 +1020,58 @@ end
  - Both methods accept `tools:` parameter (Tool object, array of Tool objects, or array of hashes)
  - For agent tool loops, use `Ollama::Agent::Executor` instead (handles tool execution automatically)

+ ### MCP support (local and remote servers)
+
+ You can connect to [Model Context Protocol](https://modelcontextprotocol.io) (MCP) servers and use their tools with the Executor.
+
+ **Remote MCP server (HTTP URL, e.g. [GitMCP](https://gitmcp.io)):**
+
+ ```ruby
+ require "ollama_client"
+
+ client = Ollama::Client.new
+
+ # Remote MCP server URL (e.g. GitMCP: https://gitmcp.io/owner/repo)
+ mcp_client = Ollama::MCP::HttpClient.new(
+   url: "https://gitmcp.io/shubhamtaywade82/agent-runtime",
+   timeout_seconds: 60
+ )
+
+ bridge = Ollama::MCP::ToolsBridge.new(client: mcp_client)
+ tools = bridge.tools_for_executor
+
+ executor = Ollama::Agent::Executor.new(client, tools: tools)
+ answer = executor.run(
+   system: "You have access to the agent-runtime docs. Use tools when the user asks about the repo.",
+   user: "What does this repo do?"
+ )
+
+ puts answer
+ mcp_client.close
+ ```
+
+ **Local MCP server (stdio, e.g. filesystem server):**
+
+ ```ruby
+ # Local MCP server via stdio (requires Node.js/npx)
+ mcp_client = Ollama::MCP::StdioClient.new(
+   command: "npx",
+   args: ["-y", "@modelcontextprotocol/server-filesystem", "/tmp"],
+   timeout_seconds: 60
+ )
+
+ bridge = Ollama::MCP::ToolsBridge.new(stdio_client: mcp_client) # or client: mcp_client
+ tools = bridge.tools_for_executor
+ # ... same executor usage
+ mcp_client.close
+ ```
+
+ - **Stdio**: `Ollama::MCP::StdioClient` — spawns a subprocess; use for local servers (e.g. `npx @modelcontextprotocol/server-filesystem`).
+ - **HTTP**: `Ollama::MCP::HttpClient` — POSTs JSON-RPC to a URL; use for remote servers (e.g. [gitmcp.io/owner/repo](https://gitmcp.io)).
+ - **Bridge**: `Ollama::MCP::ToolsBridge.new(client: mcp_client)` or `stdio_client: mcp_client`; then `tools_for_executor` for the Executor.
+ - No extra gem; implementation is self-contained.
+ - See [examples/mcp_executor.rb](examples/mcp_executor.rb) (stdio) and [examples/mcp_http_executor.rb](examples/mcp_http_executor.rb) (URL).
+
  ### Example: Data Analysis with Validation

  ```ruby
@@ -1052,42 +1240,37 @@ client = Ollama::Client.new
  # Note: You need an embedding model installed in Ollama
  # Common models: nomic-embed-text, all-minilm, mxbai-embed-large
  # Check available models: client.list_models
+ # The client uses /api/embed endpoint internally

  begin
    # Single text embedding
-   # Note: Use the full model name with tag if needed (e.g., "nomic-embed-text:latest")
+   # Note: Model name can be with or without tag (e.g., "nomic-embed-text" or "nomic-embed-text:latest")
    embedding = client.embeddings.embed(
-     model: "nomic-embed-text:latest", # Use an available embedding model
+     model: "nomic-embed-text", # Use an available embedding model
      input: "What is Ruby programming?"
    )
    # Returns: [0.123, -0.456, ...] (array of floats)
-   if embedding.empty?
-     puts "Warning: Empty embedding returned. Check model compatibility."
-   else
-     puts "Embedding dimension: #{embedding.length}"
-     puts "First few values: #{embedding.first(5).map { |v| v.round(4) }}"
-   end
+   # For nomic-embed-text, dimension is typically 768
+   puts "Embedding dimension: #{embedding.length}"
+   puts "First few values: #{embedding.first(5).map { |v| v.round(4) }}"

    # Multiple texts
    embeddings = client.embeddings.embed(
-     model: "nomic-embed-text:latest",
+     model: "nomic-embed-text",
      input: ["What is Ruby?", "What is Python?", "What is JavaScript?"]
    )
    # Returns: [[...], [...], [...]] (array of embedding arrays)
-   if embeddings.is_a?(Array) && embeddings.first.is_a?(Array)
-     puts "Number of embeddings: #{embeddings.length}"
-     puts "Each embedding dimension: #{embeddings.first.length}"
-   else
-     puts "Unexpected response format: #{embeddings.class}"
-   end
+   # Each inner array is an embedding vector for the corresponding input text
+   puts "Number of embeddings: #{embeddings.length}"
+   puts "Each embedding dimension: #{embeddings.first.length}"

  rescue Ollama::NotFoundError => e
    puts "Model not found. Install an embedding model first:"
    puts "  ollama pull nomic-embed-text"
    puts "Or check available models: client.list_models"
-   puts "Note: Use the full model name with tag (e.g., 'nomic-embed-text:latest')"
  rescue Ollama::Error => e
    puts "Error: #{e.message}"
+   # Error message includes helpful troubleshooting steps
  end

  # Use for semantic similarity in agents
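The closing comment above points at semantic similarity. A minimal pure-Ruby cosine similarity helper — hypothetical, not provided by the gem — shows how two embedding vectors returned by `embed` would be compared:

```ruby
# Hypothetical helper -- not part of ollama-client.
# Cosine similarity: dot product of the vectors divided by the
# product of their magnitudes; 1.0 means identical direction.
def cosine_similarity(a, b)
  raise ArgumentError, "vectors must have equal length" unless a.length == b.length

  dot    = a.zip(b).sum { |x, y| x * y }
  norm_a = Math.sqrt(a.sum { |x| x * x })
  norm_b = Math.sqrt(b.sum { |x| x * x })
  return 0.0 if norm_a.zero? || norm_b.zero?

  dot / (norm_a * norm_b)
end

# Identical vectors score 1.0; orthogonal vectors score 0.0.
puts cosine_similarity([1.0, 0.0], [1.0, 0.0]) # => 1.0
puts cosine_similarity([1.0, 0.0], [0.0, 1.0]) # => 0.0
```

In an agent you would embed a query and a set of documents, then rank the documents by this score to pick the most relevant context.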
@@ -0,0 +1,41 @@
+ # Release v0.2.6
+
+ ## 🎯 Major Changes
+
+ ### Test Coverage & Quality
+ - **Increased test coverage from 65.66% to 79.59%**
+ - Added comprehensive test suites for:
+   - `Ollama::DocumentLoader` (file loading, context building)
+   - `Ollama::Embeddings` (API calls, error handling)
+   - `Ollama::ChatSession` (session management)
+   - Tool classes (`Tool`, `Function`, `Parameters`, `Property`)
+
+ ### Documentation & Examples
+ - **Reorganized examples**: Moved agent examples to separate repository (`ollama-agent-examples`)
+ - Kept minimal client-focused examples in this repository
+ - Rewrote testing documentation to focus on client-only testing (transport/protocol)
+ - Added test checklist with specific test categories (G1-G3, C1-C3, A1-A2, F1-F3)
+ - Updated README with enhanced "What This Gem IS NOT" section
+
+ ### Code Quality
+ - Fixed all RuboCop offenses
+ - Improved code quality and consistency
+ - Aligned workflow with agent-runtime repository
+
+ ## 📦 Installation
+
+ ```bash
+ gem install ollama-client
+ ```
+
+ Or add to your Gemfile:
+
+ ```ruby
+ gem 'ollama-client', '~> 0.2.6'
+ ```
+
+ ## 🔗 Links
+
+ - [Full Changelog](https://github.com/shubhamtaywade82/ollama-client/blob/main/CHANGELOG.md)
+ - [Documentation](https://github.com/shubhamtaywade82/ollama-client#readme)
+ - [Examples Repository](https://github.com/shubhamtaywade82/ollama-agent-examples)
@@ -0,0 +1,325 @@
+ # Areas for Consideration
+
+ This document addresses three key areas that warrant attention in the `ollama-client` codebase.
+
+ ## ⚠️ 1. Complex Client Class
+
+ ### Current State
+
+ - **File:** `lib/ollama/client.rb`
+ - **Size:** 860 lines
+ - **RuboCop Disables:**
+   - `Metrics/ClassLength` (entire class)
+   - `Metrics/MethodLength` (multiple methods)
+   - `Metrics/ParameterLists` (multiple methods)
+   - `Metrics/AbcSize` (multiple methods)
+   - `Metrics/CyclomaticComplexity` (multiple methods)
+   - `Metrics/PerceivedComplexity` (multiple methods)
+   - `Metrics/BlockLength` (multiple methods)
+
+ ### Responsibilities
+
+ The `Client` class currently handles:
+ 1. HTTP communication with Ollama API
+ 2. JSON parsing and validation
+ 3. Schema validation
+ 4. Retry logic
+ 5. Error handling and enhancement
+ 6. Tool normalization
+ 7. Response formatting
+ 8. Streaming response parsing
+ 9. Model suggestion logic
+ 10. Response hooks/callbacks
+
+ ### Recommendations
+
+ #### Option A: Extract Service Objects (Recommended)
+
+ Break down into focused service classes:
+
+ ```ruby
+ # lib/ollama/http_client.rb
+ class HttpClient
+   # Handles all HTTP communication
+ end
+
+ # lib/ollama/response_parser.rb
+ class ResponseParser
+   # Handles JSON parsing, markdown stripping, etc.
+ end
+
+ # lib/ollama/retry_handler.rb
+ class RetryHandler
+   # Handles retry logic and error classification
+ end
+
+ # lib/ollama/client.rb (simplified)
+ class Client
+   def initialize(config: nil)
+     @config = config || default_config
+     @http_client = HttpClient.new(@config)
+     @response_parser = ResponseParser.new
+     @retry_handler = RetryHandler.new(@config)
+   end
+ end
+ ```
+
+ **Benefits:**
+ - Single Responsibility Principle
+ - Easier to test
+ - Easier to maintain
+ - Can enable RuboCop metrics
+
+ **Trade-offs:**
+ - More files to navigate
+ - Slightly more complex initialization
+
+ #### Option B: Extract Concerns into Modules
+
+ Keep single file but organize with modules:
+
+ ```ruby
+ class Client
+   include HTTPCommunication
+   include ResponseParsing
+   include RetryHandling
+   include ErrorEnhancement
+ end
+ ```
+
+ **Benefits:**
+ - Still one file
+ - Better organization
+ - Can test modules independently
+
+ **Trade-offs:**
+ - Still a large file
+ - RuboCop metrics still problematic
+
+ #### Option C: Accept Complexity (Current State)
+
+ **Rationale:**
+ - Client is the core API surface
+ - All methods are public API
+ - Breaking it up might make usage more complex
+ - Current structure is functional
+
+ **Action Items:**
+ - Document why metrics are disabled
+ - Add architectural decision record (ADR)
+ - Consider refactoring in future major version
+
+ ### Recommended Approach
+
+ **Short-term:** Document the decision (Option C)
+ **Medium-term:** Extract HTTP client (Option A, partial)
+ **Long-term:** Full service object extraction (Option A)
+
+ ---
+
+ ## ⚠️ 2. Thread Safety
+
+ ### Current State
+
+ **Global Configuration:**
+ ```ruby
+ module OllamaClient
+   @config_mutex = Mutex.new
+
+   def self.configure
+     @config_mutex.synchronize do
+       @config ||= Ollama::Config.new
+       yield(@config)
+     end
+   end
+ end
+ ```
+
+ **Issues:**
+ 1. Mutex protects global config access, but warning says "not thread-safe"
+ 2. Confusing messaging - mutex IS present but warning suggests it's not safe
+ 3. Per-client config is recommended but not enforced
+
+ ### Analysis
+
+ **What's Actually Thread-Safe:**
+ - ✅ Global config access (protected by mutex)
+ - ✅ Per-client instances (each has own config)
+ - ✅ Client methods (stateless, use instance config)
+
+ **What's NOT Thread-Safe:**
+ - ⚠️ Modifying global config while clients are using it
+ - ⚠️ Shared config objects between threads (if mutated)
+
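The per-client recommendation in the analysis above can be sketched in plain Ruby. A `Struct` stands in for `Ollama::Config` (which would be used the same way); the point is only that each thread builds and owns its own config object, so no shared state is ever mutated:

```ruby
# Illustration of the per-client pattern: each thread constructs its own
# config, so there is nothing shared to race on. Config here is a Struct
# stand-in for Ollama::Config.
Config = Struct.new(:model)

results = 4.times.map do |i|
  Thread.new do
    config = Config.new("model-#{i}") # thread-local; never shared
    config.model
  end
end.map(&:value)

puts results.sort.inspect # => ["model-0", "model-1", "model-2", "model-3"]
```

Contrast this with mutating a single global config from several threads, which is exactly the race the warning describes.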
+ ### Recommendations
+
+ #### Option A: Clarify Documentation (Recommended)
+
+ Update warnings to be more accurate:
+
+ ```ruby
+ # ⚠️ THREAD SAFETY WARNING:
+ # Global configuration access is protected by mutex, but modifying
+ # global config while clients are active can cause race conditions.
+ # For concurrent agents, prefer per-client configuration:
+ #
+ #   config = Ollama::Config.new
+ #   config.model = "llama3.1"
+ #   client = Ollama::Client.new(config: config)
+ ```
+
+ #### Option B: Make Global Config Immutable
+
+ ```ruby
+ def self.configure
+   @config_mutex.synchronize do
+     config = (@config || Ollama::Config.new).dup # mutable working copy
+     yield(config)                                # block sets values here
+     @config = config.freeze                      # publish immutable snapshot
+   end
+ end
+ ```
+
+ #### Option C: Deprecate Global Config
+
+ ```ruby
+ def self.configure
+   warn "[DEPRECATED] OllamaClient.configure is deprecated. " \
+        "Use per-client config: Ollama::Client.new(config: ...)"
+   # ... existing implementation
+ end
+ ```
+
+ ### Recommended Approach
+
+ **Immediate:** Clarify documentation (Option A)
+ **Future:** Consider deprecation path (Option C) if usage patterns show global config is rarely needed
+
+ ---
+
+ ## ⚠️ 3. Learning Curve
+
+ ### Current State
+
+ The gem provides two agent patterns:
+
+ 1. **Planner** (`Ollama::Agent::Planner`)
+    - Stateless
+    - Uses `/api/generate`
+    - Returns structured JSON
+    - For: routing, classification, decisions
+
+ 2. **Executor** (`Ollama::Agent::Executor`)
+    - Stateful
+    - Uses `/api/chat`
+    - Tool-calling loops
+    - For: multi-step workflows
+
+ ### Challenges for Beginners
+
+ 1. **Conceptual Complexity:**
+    - Two different patterns (why?)
+    - When to use which?
+    - Stateless vs stateful concepts
+
+ 2. **API Surface:**
+    - `generate()` vs `chat()` vs `chat_raw()`
+    - When to use schema vs plain text
+    - Tool calling setup
+
+ 3. **Documentation:**
+    - README is comprehensive but dense
+    - Examples are minimal
+    - Agent patterns require understanding of concepts
+
+ ### Recommendations
+
+ #### Option A: Enhanced Quick Start Guide
+
+ Create a step-by-step guide:
+
+ ```markdown
+ # Quick Start Guide
+
+ ## Step 1: Simple Text Generation
+ [Example]
+
+ ## Step 2: Structured Outputs
+ [Example]
+
+ ## Step 3: Agent Planning
+ [Example]
+
+ ## Step 4: Tool Calling
+ [Example]
+ ```
+
+ #### Option B: Decision Tree / Flowchart
+
+ ```
+ Need structured output?
+ ├─ Yes → Use generate() with schema
+ └─ No → Need conversation?
+    ├─ Yes → Use chat_raw() or ChatSession
+    └─ No → Use generate() without a schema
+ ```
+
+ #### Option C: Simplified Wrapper API
+
+ ```ruby
+ # High-level API for beginners
+ class SimpleAgent
+   def initialize(model: "llama3.1:8b")
+     @client = Ollama::Client.new
+     @model = model
+   end
+
+   def ask(question, schema: nil)
+     if schema
+       @client.generate(prompt: question, schema: schema)
+     else
+       @client.generate(prompt: question)
+     end
+   end
+ end
+ ```
+
+ #### Option D: Video Tutorials / Interactive Examples
+
+ - Video walkthrough
+ - Interactive REPL examples
+ - Common patterns cookbook
+
+ ### Recommended Approach
+
+ **Immediate:**
+ 1. Add "Quick Start" section to README (Option A)
+ 2. Add decision tree diagram (Option B)
+
+ **Short-term:**
+ 3. Create `examples/quick_start/` directory with progressive examples
+ 4. Add "Common Patterns" section
+
+ **Long-term:**
+ 5. Consider simplified API wrapper (Option C) if demand exists
+ 6. Create video tutorials if community requests
+
+ ---
+
+ ## Summary
+
+ | Area | Current State | Recommended Action | Priority |
+ |------|--------------|-------------------|----------|
+ | **Complex Client** | 860 lines, metrics disabled | Document decision, plan extraction | Medium |
+ | **Thread Safety** | Mutex present but confusing docs | Clarify documentation | High |
+ | **Learning Curve** | Comprehensive but dense | Add quick start guide + decision tree | High |
+
+ ## Next Steps
+
+ 1. ✅ Create this document (done)
+ 2. Update thread safety documentation
+ 3. Add quick start guide to README
+ 4. Create decision tree diagram
+ 5. Consider ADR for Client class architecture
+ 6. Gather user feedback on learning curve