ollama-client 0.2.4 → 0.2.6
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- checksums.yaml +4 -4
- data/CHANGELOG.md +21 -1
- data/README.md +560 -106
- data/docs/EXAMPLE_REORGANIZATION.md +412 -0
- data/docs/GETTING_STARTED.md +361 -0
- data/docs/INTEGRATION_TESTING.md +170 -0
- data/docs/NEXT_STEPS_SUMMARY.md +114 -0
- data/docs/PERSONAS.md +383 -0
- data/docs/QUICK_START.md +195 -0
- data/docs/README.md +2 -3
- data/docs/RELEASE_GUIDE.md +376 -0
- data/docs/TESTING.md +392 -170
- data/docs/TEST_CHECKLIST.md +450 -0
- data/docs/ruby_guide.md +6232 -0
- data/examples/README.md +51 -66
- data/examples/basic_chat.rb +33 -0
- data/examples/basic_generate.rb +29 -0
- data/examples/tool_calling_parsing.rb +59 -0
- data/exe/ollama-client +128 -1
- data/lib/ollama/agent/planner.rb +7 -2
- data/lib/ollama/chat_session.rb +101 -0
- data/lib/ollama/client.rb +43 -21
- data/lib/ollama/config.rb +4 -1
- data/lib/ollama/document_loader.rb +163 -0
- data/lib/ollama/embeddings.rb +42 -13
- data/lib/ollama/errors.rb +1 -0
- data/lib/ollama/personas.rb +287 -0
- data/lib/ollama/version.rb +1 -1
- data/lib/ollama_client.rb +8 -0
- metadata +31 -53
- data/docs/GEM_RELEASE_GUIDE.md +0 -794
- data/docs/GET_RUBYGEMS_SECRET.md +0 -151
- data/docs/QUICK_OTP_SETUP.md +0 -80
- data/docs/QUICK_RELEASE.md +0 -106
- data/docs/RUBYGEMS_OTP_SETUP.md +0 -199
- data/examples/advanced_complex_schemas.rb +0 -366
- data/examples/advanced_edge_cases.rb +0 -241
- data/examples/advanced_error_handling.rb +0 -200
- data/examples/advanced_multi_step_agent.rb +0 -341
- data/examples/advanced_performance_testing.rb +0 -186
- data/examples/chat_console.rb +0 -143
- data/examples/complete_workflow.rb +0 -245
- data/examples/dhan_console.rb +0 -843
- data/examples/dhanhq/README.md +0 -236
- data/examples/dhanhq/agents/base_agent.rb +0 -74
- data/examples/dhanhq/agents/data_agent.rb +0 -66
- data/examples/dhanhq/agents/orchestrator_agent.rb +0 -120
- data/examples/dhanhq/agents/technical_analysis_agent.rb +0 -252
- data/examples/dhanhq/agents/trading_agent.rb +0 -81
- data/examples/dhanhq/analysis/market_structure.rb +0 -138
- data/examples/dhanhq/analysis/pattern_recognizer.rb +0 -192
- data/examples/dhanhq/analysis/trend_analyzer.rb +0 -88
- data/examples/dhanhq/builders/market_context_builder.rb +0 -67
- data/examples/dhanhq/dhanhq_agent.rb +0 -829
- data/examples/dhanhq/indicators/technical_indicators.rb +0 -158
- data/examples/dhanhq/scanners/intraday_options_scanner.rb +0 -492
- data/examples/dhanhq/scanners/swing_scanner.rb +0 -247
- data/examples/dhanhq/schemas/agent_schemas.rb +0 -61
- data/examples/dhanhq/services/base_service.rb +0 -46
- data/examples/dhanhq/services/data_service.rb +0 -118
- data/examples/dhanhq/services/trading_service.rb +0 -59
- data/examples/dhanhq/technical_analysis_agentic_runner.rb +0 -411
- data/examples/dhanhq/technical_analysis_runner.rb +0 -420
- data/examples/dhanhq/test_tool_calling.rb +0 -538
- data/examples/dhanhq/test_tool_calling_verbose.rb +0 -251
- data/examples/dhanhq/utils/instrument_helper.rb +0 -32
- data/examples/dhanhq/utils/parameter_cleaner.rb +0 -28
- data/examples/dhanhq/utils/parameter_normalizer.rb +0 -45
- data/examples/dhanhq/utils/rate_limiter.rb +0 -23
- data/examples/dhanhq/utils/trading_parameter_normalizer.rb +0 -72
- data/examples/dhanhq_agent.rb +0 -964
- data/examples/dhanhq_tools.rb +0 -1663
- data/examples/multi_step_agent_with_external_data.rb +0 -368
- data/examples/structured_outputs_chat.rb +0 -72
- data/examples/structured_tools.rb +0 -89
- data/examples/test_dhanhq_tool_calling.rb +0 -375
- data/examples/test_tool_calling.rb +0 -160
- data/examples/tool_calling_direct.rb +0 -124
- data/examples/tool_calling_pattern.rb +0 -269
- data/exe/dhan_console +0 -4
data/docs/GETTING_STARTED.md
@@ -0,0 +1,361 @@

# Getting Started: Creating an Ollama Client

This guide shows you, step by step, how to create a client object and use all features of `ollama-client`.

## Step 1: Install and Require the Gem

### Option A: Using Bundler (Recommended)

Add to your `Gemfile`:
```ruby
gem "ollama-client"
```

Then run:
```bash
bundle install
```

### Option B: Install Directly

```bash
gem install ollama-client
```

### Step 1b: Require in Your Code

```ruby
require "ollama_client"
```

A `.env` file is loaded automatically when you require the gem (if dotenv is available).

---
## Step 2: Create a Client Object

You have several options depending on your needs:

### Option A: Basic Client (Uses Defaults)

```ruby
require "ollama_client"

# Simplest way - uses default configuration
client = Ollama::Client.new

# Defaults:
# - base_url: "http://localhost:11434"
# - model: "llama3.1:8b"
# - timeout: 20 seconds
# - retries: 2
# - temperature: 0.2
# - allow_chat: false
# - streaming_enabled: false
```

### Option B: Client with Custom Configuration

```ruby
require "ollama_client"

# Create config object
config = Ollama::Config.new

# Customize settings
config.base_url = "http://localhost:11434" # or your Ollama server URL
config.model = "qwen2.5:14b"               # or your preferred model
config.temperature = 0.1                   # Lower = more deterministic
config.timeout = 60                        # Increase for complex schemas
config.retries = 3                         # Number of retry attempts
config.allow_chat = true                   # Enable chat API (if needed)
config.streaming_enabled = true            # Enable streaming (if needed)

# Create client with custom config
client = Ollama::Client.new(config: config)
```
### Option C: Client from Environment Variables

The gem automatically loads a `.env` file. You can set these environment variables:

```bash
# In your .env file or shell environment
OLLAMA_BASE_URL=http://localhost:11434
OLLAMA_MODEL=qwen2.5:14b
OLLAMA_TEMPERATURE=0.1
```

Then in your code:

```ruby
require "ollama_client"

# Create config and read from environment
config = Ollama::Config.new
config.base_url = ENV["OLLAMA_BASE_URL"] if ENV["OLLAMA_BASE_URL"]
config.model = ENV["OLLAMA_MODEL"] if ENV["OLLAMA_MODEL"]
config.temperature = ENV["OLLAMA_TEMPERATURE"].to_f if ENV["OLLAMA_TEMPERATURE"]

client = Ollama::Client.new(config: config)
```

### Option D: Client from JSON Config File

Create a `config.json` file:

```json
{
  "base_url": "http://localhost:11434",
  "model": "llama3.1:8b",
  "timeout": 30,
  "retries": 3,
  "temperature": 0.2,
  "top_p": 0.9,
  "num_ctx": 8192
}
```

Then load it:

```ruby
require "ollama_client"

config = Ollama::Config.load_from_json("config.json")
client = Ollama::Client.new(config: config)
```

---
## Step 3: Verify Your Client Works

Test that your client can connect:

```ruby
require "ollama_client"

client = Ollama::Client.new

# List available models (verifies connection)
begin
  models = client.list_models
  puts "✅ Connected! Available models: #{models.map { |m| m['name'] }.join(', ')}"
rescue Ollama::Error => e
  puts "❌ Connection failed: #{e.message}"
  puts "   Make sure the Ollama server is running at #{client.instance_variable_get(:@config).base_url}"
end
```

---
## Step 4: Use Client Features

Once you have a client object, you can use all features:

### 4.1: Generate (Structured Outputs) - Recommended for Agents

```ruby
client = Ollama::Client.new

schema = {
  "type" => "object",
  "required" => ["action", "reasoning"],
  "properties" => {
    "action" => { "type" => "string", "enum" => ["search", "calculate", "finish"] },
    "reasoning" => { "type" => "string" }
  }
}

result = client.generate(
  prompt: "Analyze the situation and decide next action.",
  schema: schema
)

puts result["action"]    # => "search"
puts result["reasoning"] # => "User needs data..."
```

### 4.2: Generate (Plain Text)

```ruby
client = Ollama::Client.new

# Use allow_plain_text: true to skip the schema requirement
response = client.generate(
  prompt: "Explain Ruby blocks in one sentence.",
  allow_plain_text: true
)

puts response # => plain text/markdown response (String)
```

### 4.3: Chat (Human-Facing Interfaces)

```ruby
# Enable chat in config
config = Ollama::Config.new
config.allow_chat = true
client = Ollama::Client.new(config: config)

response = client.chat(
  messages: [
    { role: "user", content: "Hello!" }
  ],
  allow_chat: true
)

puts response["message"]["content"]
```

### 4.4: ChatSession (Stateful Conversations)

```ruby
config = Ollama::Config.new
config.allow_chat = true
config.streaming_enabled = true
client = Ollama::Client.new(config: config)

observer = Ollama::StreamingObserver.new do |event|
  print event.text if event.type == :token
end

chat = Ollama::ChatSession.new(
  client,
  system: "You are a helpful assistant.",
  stream: observer
)

chat.say("Hello!")
chat.say("Explain Ruby blocks")
```

### 4.5: Planner Agent (Schema-Based Planning)

```ruby
client = Ollama::Client.new

planner = Ollama::Agent::Planner.new(
  client,
  system_prompt: Ollama::Personas.get(:architect, variant: :agent)
)

# Define the decision schema
DECISION_SCHEMA = {
  "type" => "object",
  "required" => ["action", "reasoning"],
  "properties" => {
    "action" => {
      "type" => "string",
      "enum" => ["refactor", "test", "document", "defer"]
    },
    "reasoning" => {
      "type" => "string"
    }
  }
}

plan = planner.run(
  prompt: "Design a caching layer for a high-traffic API.",
  schema: DECISION_SCHEMA
)

puts plan["action"]    # => "refactor" (or one of the enum values)
puts plan["reasoning"] # => explanation string
```

### 4.6: Executor Agent (Tool-Calling)

```ruby
client = Ollama::Client.new

# Define tools (copy-paste ready)
tools = {
  "get_price"      => ->(symbol:) { { symbol: symbol, price: 24500.50, volume: 1_000_000 } },
  "get_indicators" => ->(symbol:) { { symbol: symbol, rsi: 65.5, macd: 1.2 } }
}

executor = Ollama::Agent::Executor.new(client, tools: tools)

answer = executor.run(
  system: Ollama::Personas.get(:trading, variant: :agent),
  user: "Analyze NIFTY. Get current price and technical indicators."
)

puts answer
```

### 4.7: Embeddings

```ruby
client = Ollama::Client.new

embedding = client.embeddings.embed(
  model: "all-minilm",
  input: "What is Ruby?"
)

puts embedding.length # e.g. 384 (the embedding is an Array of Floats)
```
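Embeddings are most often compared with cosine similarity. Here is a minimal sketch in plain Ruby; only the `client.embeddings.embed` call from above is assumed, the similarity math itself is standard:

```ruby
# Cosine similarity between two embedding vectors (plain Ruby, no extra gems)
def cosine_similarity(a, b)
  dot = a.zip(b).sum { |x, y| x * y }
  mag = ->(v) { Math.sqrt(v.sum { |x| x * x }) }
  dot / (mag.call(a) * mag.call(b))
end

v1 = client.embeddings.embed(model: "all-minilm", input: "What is Ruby?")
v2 = client.embeddings.embed(model: "all-minilm", input: "Tell me about the Ruby language.")

puts cosine_similarity(v1, v2) # closer to 1.0 means more similar
```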
---

## Complete Example: From Zero to Working Client

```ruby
#!/usr/bin/env ruby
# frozen_string_literal: true

# Step 1: Require the gem
require "ollama_client"

# Step 2: Create client (using environment variables from .env)
config = Ollama::Config.new
config.base_url = ENV["OLLAMA_BASE_URL"] || "http://localhost:11434"
config.model = ENV["OLLAMA_MODEL"] || "llama3.1:8b"
config.temperature = ENV["OLLAMA_TEMPERATURE"].to_f if ENV["OLLAMA_TEMPERATURE"]

client = Ollama::Client.new(config: config)

# Step 3: Use the client
begin
  result = client.generate(
    prompt: "Return a JSON object with a 'greeting' field saying hello.",
    schema: {
      "type" => "object",
      "required" => ["greeting"],
      "properties" => {
        "greeting" => { "type" => "string" }
      }
    }
  )

  puts "✅ Success!"
  puts "Response: #{result['greeting']}"
rescue Ollama::Error => e
  puts "❌ Error: #{e.message}"
end
```

---

## Configuration Options Reference

| Option | Default | Description |
|--------|---------|-------------|
| `base_url` | `"http://localhost:11434"` | Ollama server URL |
| `model` | `"llama3.1:8b"` | Default model to use |
| `timeout` | `20` | Request timeout in seconds |
| `retries` | `2` | Number of retry attempts on failure |
| `temperature` | `0.2` | Model temperature (0.0-2.0) |
| `top_p` | `0.9` | Top-p sampling parameter |
| `num_ctx` | `8192` | Context window size |
| `allow_chat` | `false` | Enable chat API (must be explicitly enabled) |
| `streaming_enabled` | `false` | Enable streaming support |
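For reference, a sketch that sets every option in the table explicitly. The `top_p` and `num_ctx` writers are assumed to exist alongside the setters shown in Option B, mirroring the JSON keys in Option D:

```ruby
config = Ollama::Config.new
config.base_url          = "http://localhost:11434"
config.model             = "llama3.1:8b"
config.timeout           = 20
config.retries           = 2
config.temperature       = 0.2
config.top_p             = 0.9   # assumed setter, mirrors the "top_p" JSON key
config.num_ctx           = 8192  # assumed setter, mirrors the "num_ctx" JSON key
config.allow_chat        = false
config.streaming_enabled = false

client = Ollama::Client.new(config: config)
```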
---

## Next Steps

- See [README.md](../README.md) for detailed API documentation
- See [PERSONAS.md](PERSONAS.md) for using personas
- See [examples/](../examples/) for complete working examples
data/docs/INTEGRATION_TESTING.md
@@ -0,0 +1,170 @@

# Integration Testing Guide

Integration tests make **actual calls** to a running Ollama server to verify the client works end-to-end with real models.

## Quick Start

```bash
# Run all integration tests (requires a running Ollama server)
bundle exec rspec --tag integration

# Run with custom configuration
OLLAMA_URL=http://localhost:11434 \
OLLAMA_MODEL=llama3.1:8b \
bundle exec rspec --tag integration
```

## Prerequisites

1. **Ollama server running** (default: `http://localhost:11434`)

   ```bash
   # Start Ollama server
   ollama serve
   ```

2. **At least one model installed**

   ```bash
   # Install a model
   ollama pull llama3.1:8b
   ```

3. **Optional: an embedding model** (for embedding tests)

   ```bash
   ollama pull nomic-embed-text
   ```

## Test Coverage

Integration tests verify:

### ✅ Core Client Methods
- `#list_models` - Lists available models
- `#generate` - Structured JSON output with schema
- `#generate` - Plain text output without schema
- `#generate` - Complex schemas with enums
- `#chat_raw` - Chat messages and responses
- `#chat_raw` - Conversation history
- `#chat_raw` - Tool calling (if the model supports it)

### ✅ Embeddings
- Single text embeddings
- Multiple text embeddings
- Error handling for missing/unsupported models

### ✅ Agent Components
- `Ollama::Agent::Planner` - Planning decisions
- `Ollama::Agent::Executor` - Tool execution loops (if the model supports tools)

### ✅ Chat Session
- Session management
- Conversation state
- Message history

### ✅ Error Handling
- `NotFoundError` for non-existent models
- Proper error propagation
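For example, an integration spec for `#list_models` might look like the following sketch. This is hypothetical spec code, assuming the `:integration` tag and the `OLLAMA_URL` convention used throughout this guide:

```ruby
# spec/integration/list_models_spec.rb (hypothetical sketch)
require "ollama_client"

RSpec.describe "Ollama::Client#list_models", :integration do
  let(:config) do
    Ollama::Config.new.tap do |c|
      c.base_url = ENV["OLLAMA_URL"] || "http://localhost:11434"
    end
  end
  let(:client) { Ollama::Client.new(config: config) }

  it "returns the models installed on the server" do
    models = client.list_models
    expect(models).to be_an(Array)
    expect(models).to all(include("name"))
  end
end
```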
## Running Tests

### Run All Integration Tests
```bash
bundle exec rspec --tag integration
```

### Run Specific Test File
```bash
bundle exec rspec spec/integration/ollama_client_integration_spec.rb --tag integration
```

### Run Specific Test
```bash
bundle exec rspec spec/integration/ollama_client_integration_spec.rb:32 --tag integration
```

### With Environment Variables
```bash
# Custom Ollama URL
OLLAMA_URL=http://remote-server:11434 bundle exec rspec --tag integration

# Custom model
OLLAMA_MODEL=llama3.2:3b bundle exec rspec --tag integration

# Custom embedding model
OLLAMA_EMBEDDING_MODEL=nomic-embed-text bundle exec rspec --tag integration

# All together
OLLAMA_URL=http://localhost:11434 \
OLLAMA_MODEL=llama3.1:8b \
OLLAMA_EMBEDDING_MODEL=nomic-embed-text \
bundle exec rspec --tag integration
```

## Test Behavior

### Automatic Skipping
- Tests automatically skip if the Ollama server is not available
- Tests skip if required models are not installed
- Tests skip if models don't support certain features (e.g., tool calling)
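The skip behavior could be implemented with a helper along these lines. This is a hypothetical sketch, not the gem's actual spec setup:

```ruby
# spec/support/integration_helper.rb (hypothetical sketch)
require "net/http"

# Probe the server's /api/tags endpoint to see if Ollama is reachable
def ollama_available?(url = ENV["OLLAMA_URL"] || "http://localhost:11434")
  Net::HTTP.get_response(URI("#{url}/api/tags")).is_a?(Net::HTTPSuccess)
rescue StandardError
  false
end

RSpec.configure do |config|
  # Skip any example tagged :integration when no server is reachable
  config.before(:each, :integration) do
    skip "Ollama server not available" unless ollama_available?
  end
end
```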
### Expected Results
- **Passing**: Client correctly communicates with Ollama
- **Pending/Skipped**: Expected when models/features unavailable
- **Failing**: Indicates actual client issues (rare)

## Differences from Unit Tests

| Aspect | Unit Tests | Integration Tests |
|--------|-----------|-------------------|
| **HTTP Calls** | Mocked (WebMock) | Real HTTP calls |
| **Ollama Server** | Not required | Required |
| **Speed** | Fast (~0.1s) | Slower (~15s) |
| **Reliability** | 100% deterministic | Depends on server/model |
| **Coverage** | Transport layer | End-to-end |
| **Run Command** | `bundle exec rspec` | `bundle exec rspec --tag integration` |
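For contrast, a unit test stubs the HTTP layer instead of hitting a server. Here is a sketch using WebMock; it is hypothetical and assumes `#list_models` issues `GET /api/tags` (the same endpoint the `curl` check below uses) and returns the parsed models array, consistent with the verification snippet in the Getting Started guide:

```ruby
# Unit-test style: no Ollama server needed, WebMock intercepts the request
require "json"
require "webmock/rspec"
require "ollama_client"

RSpec.describe "Ollama::Client#list_models (unit)" do
  it "parses the stubbed /api/tags response" do
    stub_request(:get, "http://localhost:11434/api/tags")
      .to_return(
        status: 200,
        body: { models: [{ name: "llama3.1:8b" }] }.to_json,
        headers: { "Content-Type" => "application/json" }
      )

    models = Ollama::Client.new.list_models
    expect(models.first["name"]).to eq("llama3.1:8b")
  end
end
```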
## Best Practices

1. **Run unit tests first** - Ensure client code is correct
2. **Run integration tests** - Verify real server communication
3. **Use appropriate models** - Some models support tools, others don't
4. **Handle skips gracefully** - Missing models/features are expected
5. **Check server availability** - Tests skip if Ollama is down

## Troubleshooting

### "Ollama server not available"
- Start Ollama: `ollama serve`
- Check URL: `OLLAMA_URL=http://localhost:11434`
- Verify connection: `curl http://localhost:11434/api/tags`

### "Model not found"
- Install model: `ollama pull llama3.1:8b`
- Set model: `OLLAMA_MODEL=your-model`

### "Empty embedding returned"
- Install embedding model: `ollama pull nomic-embed-text`
- Verify the model supports embeddings
- Set model: `OLLAMA_EMBEDDING_MODEL=nomic-embed-text`

### "HTTP 400: Bad Request" (tool calling)
- Some models don't support tool calling
- The test will skip automatically
- Try a different model that supports tools

## CI/CD Integration

Integration tests are **optional** and can be run separately:

```yaml
# Example GitHub Actions steps
- name: Run Unit Tests
  run: bundle exec rspec

- name: Run Integration Tests (if Ollama available)
  run: bundle exec rspec --tag integration
  if: env.OLLAMA_URL != ''
```

## Summary

Integration tests verify the client works with **real Ollama servers** and **real models**. They complement unit tests by ensuring end-to-end functionality while gracefully handling missing models or unsupported features.
data/docs/NEXT_STEPS_SUMMARY.md
@@ -0,0 +1,114 @@

# Next Steps Summary

## ✅ Completed

### 1. Testing Documentation
- ✅ **Rewrote `docs/TESTING.md`** - Focuses on client-only testing (transport/protocol, not agent behavior)
- ✅ **Created `docs/TEST_CHECKLIST.md`** - Comprehensive checklist with test categories (G1-G3, C1-C3, A1-A2, F1-F3)

### 2. Example Reorganization
- ✅ **Created `docs/EXAMPLE_REORGANIZATION.md`** - Complete proposal for separating examples
- ✅ **Created `docs/MIGRATION_CHECKLIST.md`** - Detailed migration tracking

### 3. Minimal Examples Created
- ✅ **`examples/basic_generate.rb`** - Basic `/generate` usage with schema
- ✅ **`examples/basic_chat.rb`** - Basic `/chat` usage
- ✅ **`examples/tool_calling_parsing.rb`** - Tool-call parsing (no execution)

### 4. Documentation Updates
- ✅ **Updated `examples/README.md`** - Reflects minimal examples only
- ✅ **Updated main `README.md`** - Enhanced "What This Gem IS NOT" section, updated examples section
- ✅ **Updated all repository links** - Point to `shubhamtaywade82/ollama-agent-examples`

## 📋 Remaining Tasks

### Phase 2: Create Separate Repository

**Action Required:** Set up the `ollama-agent-examples` repository structure

1. Repository: https://github.com/shubhamtaywade82/ollama-agent-examples
2. Initialize with a README that links back to `ollama-client`
3. Set up the repository structure:
   ```
   ollama-agent-examples/
   ├── README.md
   ├── basic/
   ├── trading/
   │   └── dhanhq/
   ├── coding/
   ├── rag/
   ├── advanced/
   └── tools/
   ```

### Phase 3: Migrate Examples

**Files to Move:** (see `docs/MIGRATION_CHECKLIST.md` for the complete list)

- All files in the `examples/dhanhq/` directory
- `dhan_console.rb`, `dhanhq_agent.rb`, `dhanhq_tools.rb`
- `multi_step_agent_*.rb` files
- `advanced_*.rb` files
- `test_tool_calling.rb`, `tool_calling_direct.rb`, `tool_calling_pattern.rb`
- `chat_console.rb`, `chat_session_example.rb`, `ollama_chat.rb`
- `complete_workflow.rb`, `structured_outputs_chat.rb`, `personas_example.rb`
- `structured_tools.rb`
- `ollama-api.md` (if example-related)

**Files to Keep:**
- ✅ `basic_generate.rb`
- ✅ `basic_chat.rb`
- ✅ `tool_calling_parsing.rb`
- ✅ `tool_dto_example.rb`

### Phase 4: Clean Up

1. Remove moved examples from `ollama-client/examples/`
2. Verify the minimal examples work
3. Update any CI/CD that references examples
4. Test migrated examples in the new location

## 📚 Documentation Created

1. **`docs/TESTING.md`** - Client-only testing guide
2. **`docs/TEST_CHECKLIST.md`** - Test checklist with categories
3. **`docs/EXAMPLE_REORGANIZATION.md`** - Example reorganization proposal
4. **`docs/MIGRATION_CHECKLIST.md`** - Migration tracking checklist
5. **`docs/NEXT_STEPS_SUMMARY.md`** - This file

## 🎯 Key Principles Established

### Testing Boundaries
- ✅ Test the transport layer only
- ✅ Test protocol correctness
- ✅ Test schema enforcement
- ✅ Test tool-call parsing
- ❌ Do NOT test agent loops, tool execution, or convergence logic

### Example Boundaries
- ✅ Keep minimal client usage examples
- ✅ Focus on transport/protocol demonstration
- ❌ Move all agent behavior examples
- ❌ Move all tool execution examples
- ❌ Move all domain-specific examples

## 🔗 Repository Links

- **Main Repository:** https://github.com/shubhamtaywade82/ollama-client
- **Examples Repository:** https://github.com/shubhamtaywade82/ollama-agent-examples

## 📝 Next Actions

1. **Set up the `ollama-agent-examples` repository** structure
2. **Copy agent examples** to the new repository
3. **Organize examples** by category (trading, coding, rag, advanced, tools)
4. **Remove migrated examples** from `ollama-client`
5. **Test everything** works in both repositories

## ✨ Benefits Achieved

- ✅ Clear separation of concerns
- ✅ Client stays focused on the transport layer
- ✅ Examples can evolve independently
- ✅ Users won't confuse the client with an agent
- ✅ Easier maintenance and contribution