ollama-client 0.2.1 → 0.2.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (60)
  1. checksums.yaml +4 -4
  2. data/CHANGELOG.md +8 -0
  3. data/README.md +220 -12
  4. data/docs/CLOUD.md +29 -0
  5. data/docs/CONSOLE_IMPROVEMENTS.md +256 -0
  6. data/docs/FEATURES_ADDED.md +145 -0
  7. data/docs/HANDLERS_ANALYSIS.md +190 -0
  8. data/docs/README.md +37 -0
  9. data/docs/SCHEMA_FIXES.md +147 -0
  10. data/docs/TEST_UPDATES.md +107 -0
  11. data/examples/README.md +92 -0
  12. data/examples/advanced_complex_schemas.rb +6 -3
  13. data/examples/advanced_multi_step_agent.rb +13 -7
  14. data/examples/chat_console.rb +143 -0
  15. data/examples/complete_workflow.rb +14 -4
  16. data/examples/dhan_console.rb +843 -0
  17. data/examples/dhanhq/agents/base_agent.rb +0 -2
  18. data/examples/dhanhq/agents/orchestrator_agent.rb +1 -2
  19. data/examples/dhanhq/agents/technical_analysis_agent.rb +67 -49
  20. data/examples/dhanhq/analysis/market_structure.rb +44 -28
  21. data/examples/dhanhq/analysis/pattern_recognizer.rb +64 -47
  22. data/examples/dhanhq/analysis/trend_analyzer.rb +6 -8
  23. data/examples/dhanhq/dhanhq_agent.rb +296 -99
  24. data/examples/dhanhq/indicators/technical_indicators.rb +3 -5
  25. data/examples/dhanhq/scanners/intraday_options_scanner.rb +360 -255
  26. data/examples/dhanhq/scanners/swing_scanner.rb +118 -84
  27. data/examples/dhanhq/schemas/agent_schemas.rb +2 -2
  28. data/examples/dhanhq/services/data_service.rb +5 -7
  29. data/examples/dhanhq/services/trading_service.rb +0 -3
  30. data/examples/dhanhq/technical_analysis_agentic_runner.rb +217 -84
  31. data/examples/dhanhq/technical_analysis_runner.rb +216 -162
  32. data/examples/dhanhq/test_tool_calling.rb +538 -0
  33. data/examples/dhanhq/test_tool_calling_verbose.rb +251 -0
  34. data/examples/dhanhq/utils/trading_parameter_normalizer.rb +12 -17
  35. data/examples/dhanhq_agent.rb +159 -116
  36. data/examples/dhanhq_tools.rb +1158 -251
  37. data/examples/multi_step_agent_with_external_data.rb +368 -0
  38. data/examples/structured_tools.rb +89 -0
  39. data/examples/test_dhanhq_tool_calling.rb +375 -0
  40. data/examples/test_tool_calling.rb +160 -0
  41. data/examples/tool_calling_direct.rb +124 -0
  42. data/examples/tool_dto_example.rb +94 -0
  43. data/exe/dhan_console +4 -0
  44. data/exe/ollama-client +1 -1
  45. data/lib/ollama/agent/executor.rb +116 -15
  46. data/lib/ollama/client.rb +118 -55
  47. data/lib/ollama/config.rb +36 -0
  48. data/lib/ollama/dto.rb +187 -0
  49. data/lib/ollama/embeddings.rb +77 -0
  50. data/lib/ollama/options.rb +104 -0
  51. data/lib/ollama/response.rb +121 -0
  52. data/lib/ollama/tool/function/parameters/property.rb +72 -0
  53. data/lib/ollama/tool/function/parameters.rb +101 -0
  54. data/lib/ollama/tool/function.rb +78 -0
  55. data/lib/ollama/tool.rb +60 -0
  56. data/lib/ollama/version.rb +1 -1
  57. data/lib/ollama_client.rb +3 -0
  58. metadata +31 -3
  59. /data/{PRODUCTION_FIXES.md → docs/PRODUCTION_FIXES.md} +0 -0
  60. /data/{TESTING.md → docs/TESTING.md} +0 -0
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
 ---
 SHA256:
-  metadata.gz: 6b1ed7c3f2d09174f4127c92b0b22eef90fc3deb895dc86183d296a6a70cc3b4
-  data.tar.gz: f2d0737f5cbb77557fa736fb45f7a5c7c94fd939d1005c53d68e82d229844283
+  metadata.gz: 908909c57f4d7067168bcdcecce27471ea2587d6452aecb68aca2c25a8547839
+  data.tar.gz: fcbf8ee83cba0f05bcae0892e103ae1201a7db1b045c25848caf0d8a71ca891d
 SHA512:
-  metadata.gz: 04dcc1b56d68d715727f5f51bd9511d1c4b633369435e4f95008b2bb826d988d00df6fa307dc900fb0c8365b6f4bc0e111e17b4c74deb9c25360bfa36f6148cd
-  data.tar.gz: 8ea4bef981e5601dddd258a1f0dab481e76601f765bddc9783098bedc144cd1d0ceeb9e30b185541fa688ea8519fc796356a9ffe6abbc437a7dae6d93b757852
+  metadata.gz: 67800e85b23a59fd9717ec76a2d27a591538d1813846213cb095d853d41547d6da3bbb10b8096a05b500bec66576a2e0f9268a5ad8738a9c0176b8f2051a99eb
+  data.tar.gz: 1bb186cb330c6fec601ed909af0fa9ef9bee786827edeb9195955ed25e7d0e8ab1d2eb81c302e47806cf304014ad847c207cc05f7b881392f936413419158714
data/CHANGELOG.md CHANGED
@@ -1,5 +1,13 @@
 ## [Unreleased]
 
+- Add tag-triggered GitHub Actions release workflow for RubyGems publishing.
+
+## [0.2.3] - 2026-01-17
+
+- Add per-call `model:` override for `Ollama::Client#generate`.
+- Document `generate` model override usage in README.
+- Add spec to cover per-call `model:` in 404 error path.
+
 ## [0.2.0] - 2026-01-12
 
 - Add `Ollama::Agent::Planner` (stateless `/api/generate`)
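The 0.2.3 entries above add a per-call `model:` override to `Ollama::Client#generate`. A minimal sketch of that usage (model name and prompt are illustrative, mirroring the README example later in this diff):

```ruby
require "ollama_client"

client = Ollama::Client.new

# The model: argument overrides the client's configured default for this call only.
result = client.generate(
  model: "llama3.1:8b",
  prompt: "Summarize this changelog entry in one sentence."
)
```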
data/README.md CHANGED
@@ -36,15 +36,15 @@ This keeps it **clean and future-proof**.
 
 ## 🔒 Guarantees
 
-| Guarantee | Yes |
-| -------------------------------------- | --- |
-| Client requests are explicit | ✅ |
-| Planner is stateless (no hidden memory)| ✅ |
-| Executor is stateful (explicit messages)| ✅ |
-| Retry bounded | ✅ |
-| Schema validated (when schema provided)| ✅ |
-| Tools run in Ruby (not in the LLM) | ✅ |
-| Streaming is display-only (Executor) | ✅ |
+| Guarantee | Yes |
+| ---------------------------------------- | --- |
+| Client requests are explicit | ✅ |
+| Planner is stateless (no hidden memory) | ✅ |
+| Executor is stateful (explicit messages) | ✅ |
+| Retry bounded | ✅ |
+| Schema validated (when schema provided) | ✅ |
+| Tools run in Ruby (not in the LLM) | ✅ |
+| Streaming is display-only (Executor) | ✅ |
 
 **Non-negotiable safety rule:** the **LLM never executes side effects**. It may request a tool call; **your Ruby code** executes the tool.
 
@@ -97,8 +97,8 @@ gem install ollama-client
 
 This gem intentionally focuses on **agent building blocks**:
 
-- **Supported**: `/api/generate`, `/api/chat`, `/api/tags`, `/api/ping`
-- **Not guaranteed**: full endpoint parity with every Ollama release (embeddings, advanced model mgmt, etc.)
+- **Supported**: `/api/generate`, `/api/chat`, `/api/tags`, `/api/ping`, `/api/embeddings`
+- **Not guaranteed**: full endpoint parity with every Ollama release (advanced model mgmt, etc.)
 
 ### Agent endpoint mapping (unambiguous)
 
@@ -130,6 +130,8 @@ puts plan
 
 ### Executor Agent (tool loop, /api/chat)
 
+**Simple approach (auto-inferred schemas):**
+
 ```ruby
 require "ollama_client"
 require "json"
@@ -151,6 +153,68 @@ answer = executor.run(
 puts answer
 ```
 
+**Structured approach (explicit schemas with Tool classes):**
+
+```ruby
+require "ollama_client"
+
+# Define explicit tool schema
+location_prop = Ollama::Tool::Function::Parameters::Property.new(
+  type: "string",
+  description: "The city name"
+)
+
+params = Ollama::Tool::Function::Parameters.new(
+  type: "object",
+  properties: { city: location_prop },
+  required: %w[city]
+)
+
+function = Ollama::Tool::Function.new(
+  name: "fetch_weather",
+  description: "Get weather for a city",
+  parameters: params
+)
+
+tool = Ollama::Tool.new(type: "function", function: function)
+
+# Associate tool schema with callable
+tools = {
+  "fetch_weather" => {
+    tool: tool,
+    callable: ->(city:) { { city: city, forecast: "sunny" } }
+  }
+}
+
+executor = Ollama::Agent::Executor.new(client, tools: tools)
+```
+
+Use structured tools when you need:
+- Explicit control over parameter types and descriptions
+- Enum constraints on parameters
+- Better documentation for complex tools
+- Serialization/deserialization (JSON storage, API responses)
+
+**DTO (Data Transfer Object) functionality:**
+
+All Tool classes support serialization and deserialization:
+
+```ruby
+# Serialize to JSON
+json = tool.to_json
+
+# Deserialize from hash
+tool = Ollama::Tool.from_hash(JSON.parse(json))
+
+# Equality comparison
+tool1 == tool2 # Compares hash representations
+
+# Empty check
+params.empty? # True if no properties/required fields
+```
+
+See `examples/tool_dto_example.rb` for complete DTO usage examples.
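A hedged sketch of how the DTO helpers above could round-trip the structured tool from the previous example; the file name is illustrative, and the registry shape, `client`, and `tool` are assumed from that example:

```ruby
require "ollama_client"
require "json"

# Persist the hand-built schema so it does not have to be reconstructed next time.
File.write("fetch_weather.tool.json", tool.to_json)

# Later: restore it and check that nothing drifted (== compares hash representations).
restored = Ollama::Tool.from_hash(JSON.parse(File.read("fetch_weather.tool.json")))
raise "tool schema drifted" unless restored == tool

# Re-attach the Ruby callable and hand the registry to the Executor as before.
tools = {
  "fetch_weather" => {
    tool: restored,
    callable: ->(city:) { { city: city, forecast: "sunny" } }
  }
}
executor = Ollama::Agent::Executor.new(client, tools: tools)
```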
+
 ### Streaming (Executor only; presentation-only)
 
 Streaming is treated as **presentation**, not control. The agent buffers the full assistant message and only
@@ -223,6 +287,7 @@ schema = {
 # 2. Call the LLM with your schema
 begin
   result = client.generate(
+    model: "llama3.1:8b",
     prompt: "Your prompt here",
     schema: schema
   )
@@ -407,6 +472,67 @@ rescue Ollama::Error => e
 end
 ```
 
+### Example: Tool Calling (Direct API Usage)
+
+For tool calling, use `chat_raw()` to access `tool_calls` from the response:
+
+```ruby
+require "ollama_client"
+
+client = Ollama::Client.new
+
+# Define tool using Tool classes
+tool = Ollama::Tool.new(
+  type: "function",
+  function: Ollama::Tool::Function.new(
+    name: "get_current_weather",
+    description: "Get the current weather for a location",
+    parameters: Ollama::Tool::Function::Parameters.new(
+      type: "object",
+      properties: {
+        location: Ollama::Tool::Function::Parameters::Property.new(
+          type: "string",
+          description: "The location to get the weather for, e.g. San Francisco, CA"
+        ),
+        temperature_unit: Ollama::Tool::Function::Parameters::Property.new(
+          type: "string",
+          description: "The unit to return the temperature in",
+          enum: %w[celsius fahrenheit]
+        )
+      },
+      required: %w[location temperature_unit]
+    )
+  )
+)
+
+# Create message
+message = Ollama::Agent::Messages.user("What is the weather today in Paris?")
+
+# Use chat_raw() to get full response with tool_calls
+response = client.chat_raw(
+  model: "llama3.1:8b",
+  messages: [message],
+  tools: tool, # Pass Tool object directly (or array of Tool objects)
+  allow_chat: true
+)
+
+# Access tool_calls from response
+tool_calls = response.dig("message", "tool_calls")
+if tool_calls && !tool_calls.empty?
+  tool_calls.each do |call|
+    name = call.dig("function", "name")
+    args = call.dig("function", "arguments")
+    puts "Tool: #{name}, Args: #{args}"
+  end
+end
+```
+
+**Note:**
+- `chat()` returns only the content (for simple use cases)
+- `chat_raw()` returns the full response with `message.tool_calls` (for tool calling)
+- Both methods accept `tools:` parameter (Tool object, array of Tool objects, or array of hashes)
+- For agent tool loops, use `Ollama::Agent::Executor` instead (handles tool execution automatically)
+
 ### Example: Data Analysis with Validation
 
 ```ruby
@@ -505,6 +631,83 @@ models = client.list_models
 puts "Available models: #{models.join(', ')}"
 ```
 
+### Embeddings for RAG/Semantic Search
+
+Use embeddings for building knowledge bases and semantic search in agents:
+
+```ruby
+require "ollama_client"
+
+client = Ollama::Client.new
+
+# Single text embedding
+embedding = client.embeddings.embed(
+  model: "all-minilm",
+  input: "What is Ruby programming?"
+)
+# Returns: [0.123, -0.456, ...] (array of floats)
+
+# Multiple texts
+embeddings = client.embeddings.embed(
+  model: "all-minilm",
+  input: ["What is Ruby?", "What is Python?", "What is JavaScript?"]
+)
+# Returns: [[...], [...], [...]] (array of embedding arrays)
+
+# Use for semantic similarity in agents
+def find_similar(query_embedding, document_embeddings, threshold: 0.7)
+  document_embeddings.select do |doc_emb|
+    cosine_similarity(query_embedding, doc_emb) > threshold
+  end
+end
+```
+
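The `find_similar` helper above calls a `cosine_similarity` function that the snippet does not define and that is not part of the gem's documented API; one plain-Ruby sketch that makes the example self-contained:

```ruby
# Cosine similarity between two embedding vectors (plain Ruby, no gem API involved).
def cosine_similarity(a, b)
  dot    = a.zip(b).sum { |x, y| x * y }
  norm_a = Math.sqrt(a.sum { |x| x * x })
  norm_b = Math.sqrt(b.sum { |x| x * x })
  return 0.0 if norm_a.zero? || norm_b.zero?

  dot / (norm_a * norm_b)
end
```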
+### Configuration from JSON
+
+Load configuration from JSON files for production deployments:
+
+```ruby
+require "ollama_client"
+
+# config.json:
+# {
+#   "base_url": "http://localhost:11434",
+#   "model": "llama3.1:8b",
+#   "timeout": 30,
+#   "retries": 3,
+#   "temperature": 0.2
+# }
+
+config = Ollama::Config.load_from_json("config.json")
+client = Ollama::Client.new(config: config)
+```
+
+### Type-Safe Model Options
+
+Use the `Options` class for type-checked model parameters:
+
+```ruby
+require "ollama_client"
+
+# Options with validation
+options = Ollama::Options.new(
+  temperature: 0.7,
+  top_p: 0.95,
+  top_k: 40,
+  num_ctx: 4096,
+  seed: 42
+)
+
+# Will raise ArgumentError if values are out of range
+# options.temperature = 3.0 # Error: temperature must be between 0.0 and 2.0
+
+client.generate(
+  prompt: "Analyze this data",
+  schema: analysis_schema,
+  options: options.to_h
+)
+```
+
 ### Error Handling
 
 ```ruby
@@ -675,7 +878,12 @@ ruby examples/advanced_multi_step_agent.rb
 
 After checking out the repo, run `bin/setup` to install dependencies. Then, run `rake spec` to run the tests. You can also run `bin/console` for an interactive prompt that will allow you to experiment.
 
-To install this gem onto your local machine, run `bundle exec rake install`. To release a new version, update the version number in `version.rb`, and then run `bundle exec rake release`, which will create a git tag for the version, push git commits and the created tag, and push the `.gem` file to [rubygems.org](https://rubygems.org).
+To install this gem onto your local machine, run `bundle exec rake install`.
+
+To release a new version, update `lib/ollama/version.rb` and `CHANGELOG.md`, then commit. You can:
+
+- Run `bundle exec rake release` locally to create the tag, push commits/tags, and publish to [rubygems.org](https://rubygems.org).
+- Push a tag `vX.Y.Z` to trigger the GitHub Actions release workflow, which builds and publishes the gem using the `RUBYGEMS_API_KEY` secret.
 
 ## Contributing
 
data/docs/CLOUD.md ADDED
@@ -0,0 +1,29 @@
+# Cloud Agent Guide
+
+This repository is a Ruby gem. It has no database and does not require
+application secrets for the default test suite.
+
+## Required Commands
+- `bundle install`
+- `bundle exec rubocop`
+- `bundle exec rspec`
+
+## Agent Prompt Template
+You are operating on a Ruby gem repository.
+Task:
+1. Run `bundle exec rubocop`.
+2. Fix all RuboCop offenses.
+3. Re-run RuboCop until clean.
+4. Run `bundle exec rspec`.
+5. Fix all failing specs.
+6. Re-run RSpec until green.
+
+Rules:
+- Do not skip failures.
+- Do not change public APIs without reporting.
+- Do not bump gem version unless explicitly told.
+- Stop if blocked and explain why.
+
+## Guardrails
+- Keep API surface stable and backward compatible.
+- Update specs when behavior changes.
data/docs/CONSOLE_IMPROVEMENTS.md ADDED
@@ -0,0 +1,256 @@
+# Console Improvements
+
+## Overview
+
+Enhanced the interactive console experiences (`chat_console.rb` and `dhan_console.rb`) to provide better user feedback and cleaner output formatting.
+
+## Chat Console (`chat_console.rb`)
+
+### Problem
+
+When the LLM was processing a response, the `llm>` prompt appeared immediately, making it look like it was waiting for user input. This caused confusion where users would start typing, thinking it was a prompt.
+
+### Solution
+
+Added a **thinking indicator** that shows while waiting for the API response:
+
+**Before:**
+```
+you> Hi
+llm> [cursor blinks here - looks like a prompt!]
+[user types something]
+[then response appears]
+```
+
+**After:**
+```
+you> Hi
+... [thinking indicator in cyan]
+llm> Hey! How can I help you?
+you> [clear prompt for next input]
+```
+
+### Implementation
+
+```ruby
+def chat_response(client, messages, config)
+  content = +""
+  prompt_printed = false
+
+  # Show thinking indicator
+  print "#{COLOR_LLM}...#{COLOR_RESET}"
+  $stdout.flush
+
+  client.chat_raw(...) do |chunk|
+    token = chunk.dig("message", "content").to_s
+    next if token.empty?
+
+    # Replace thinking indicator with llm> on first token
+    unless prompt_printed
+      print "\r#{LLM_PROMPT}"
+      prompt_printed = true
+    end
+
+    content << token
+    print token
+    $stdout.flush
+  end
+
+  puts
+  content
+end
+```
+
+**Key Changes:**
+- `\r` (carriage return) replaces `...` with `llm>` when first token arrives
+- `$stdout.flush` ensures immediate visual feedback
+- Clear visual state: thinking → responding → ready for input
+
+## DhanHQ Console (`dhan_console.rb`)
+
+### Problem
+
+Tool results were displayed as raw JSON dumps, making it hard to quickly understand:
+- **Which tool** was called
+- **What data** was retrieved
+- **Key information** from the response
+
+**Before:**
+```
+Tool Results:
+  - get_live_ltp
+    {
+      "action": "get_live_ltp",
+      "params": {
+        "security_id": "13",
+        "symbol": "NIFTY",
+        "exchange_segment": "IDX_I"
+      },
+      "result": {
+        "security_id": "13",
+        "exchange_segment": "IDX_I",
+        "ltp": 25694.35,
+        "ltp_data": {
+          "last_price": 25694.35
+        },
+        "symbol": "NIFTY"
+      }
+    }
+```
+
+### Solution
+
+Implemented **formatted, human-readable tool results** that extract and display key information:
+
+**After:**
+```
+🔧 Tool Called: get_live_ltp
+   → NIFTY (IDX_I)
+   → Last Price: ₹25694.35
+
+llm> The current price of NIFTY is 25694.35.
+```
+
+### Formatted Output by Tool
+
+#### 1. Live LTP (`get_live_ltp`)
+```
+🔧 Tool Called: get_live_ltp
+   → NIFTY (IDX_I)
+   → Last Price: ₹25694.35
+```
+
+#### 2. Market Quote (`get_market_quote`)
+```
+🔧 Tool Called: get_market_quote
+   → RELIANCE
+   → LTP: ₹1457.9
+   → OHLC: O:1458.8 H:1480.0 L:1455.1 C:1457.9
+   → Volume: 17167161
+```
+
+#### 3. Historical Data (`get_historical_data`)
+
+**Regular data:**
+```
+🔧 Tool Called: get_historical_data
+   → Historical data: 30 records
+   → Interval: 5
+```
+
+**With indicators:**
+```
+🔧 Tool Called: get_historical_data
+   → Technical Indicators:
+   → Current Price: ₹25694.35
+   → RSI(14): 56.32
+   → MACD: 12.45
+   → SMA(20): ₹25680.12
+   → SMA(50): ₹25550.45
+```
+
+#### 4. Option Chain (`get_option_chain`)
+```
+🔧 Tool Called: get_option_chain
+   → Spot: ₹25694.35
+   → Strikes: 15
+   → (Filtered: ATM/OTM/ITM with both CE & PE)
+```
+
+#### 5. Expiry List (`get_expiry_list`)
+```
+🔧 Tool Called: get_expiry_list
+   → Available expiries: 12
+   → Next expiry: 2026-01-20
+   → Expiries: 2026-01-20, 2026-01-27, 2026-02-03, 2026-02-10, 2026-02-17...
+```
+
+#### 6. Find Instrument (`find_instrument`)
+```
+🔧 Tool Called: find_instrument
+   → Found: NIFTY
+   → Security ID: 13
+   → Exchange: IDX_I
+```
+
+### Implementation
+
+```ruby
+def print_formatted_result(tool_name, content)
+  result = content.dig("result") || content
+
+  case tool_name
+  when "get_live_ltp"
+    print_ltp_result(result)
+  when "get_market_quote"
+    print_quote_result(result)
+  when "get_historical_data"
+    print_historical_result(result, content)
+  # ... other tool formatters
+  end
+end
+```
+
+Each tool has a dedicated formatter that:
+1. Extracts key information from the result
+2. Formats it in a human-readable way
+3. Uses color coding for better readability
+4. Shows relevant details without overwhelming the user
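The dispatcher above elides the individual formatters ("# ... other tool formatters"). As an illustration only, not the shipped `dhan_console.rb` code, the `print_ltp_result` branch could look roughly like this, using the result keys visible in the raw JSON example earlier:

```ruby
# Illustrative sketch of one formatter branch (not the actual dhan_console.rb code).
# Keys mirror the raw "result" hash shown in the Before example above;
# the "🔧 Tool Called: ..." header is assumed to be printed by the caller.
def print_ltp_result(result)
  puts "   → #{result['symbol']} (#{result['exchange_segment']})"
  puts "   → Last Price: ₹#{result['ltp']}"
end
```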
+
+## Benefits
+
+### Chat Console
+✅ **Clear visual feedback** - Users know when the LLM is thinking vs responding
+✅ **No confusion** - Thinking indicator prevents accidental typing
+✅ **Better UX** - Immediate response to user input
+
+### DhanHQ Console
+✅ **Instant comprehension** - See what tool was called at a glance
+✅ **Key data highlighted** - Important values (price, volume) are prominent
+✅ **Less noise** - No JSON clutter, just the facts
+✅ **Consistent formatting** - Each tool type has predictable output
+✅ **Professional appearance** - Clean, organized display with icons
+
+## Configuration
+
+Both consoles support environment variables:
+
+**Chat Console:**
+```bash
+OLLAMA_BASE_URL=http://localhost:11434
+OLLAMA_MODEL=llama3.1:8b
+OLLAMA_TEMPERATURE=0.7
+OLLAMA_SYSTEM="You are a helpful assistant"
+```
+
+**DhanHQ Console:**
+```bash
+# All chat console vars, plus:
+DHANHQ_CLIENT_ID=your_client_id
+DHANHQ_ACCESS_TOKEN=your_token
+SHOW_PLAN=true              # Show planning step
+ALLOW_NO_TOOL_OUTPUT=false  # Require tool calls
+```
+
+## Testing
+
+```bash
+# Test chat console with thinking indicator
+ruby examples/chat_console.rb
+
+# Test DhanHQ console with formatted results
+ruby examples/dhan_console.rb
+```
+
+Try queries like:
+- "What is NIFTY price?" → See formatted LTP
+- "Get RELIANCE quote" → See formatted quote with OHLC
+- "Show me historical data for NIFTY" → See record count
+- "Get option chain for NIFTY" → See filtered strikes
+
+## Related Files
+
+- `examples/chat_console.rb` - Simple chat with thinking indicator
+- `examples/dhan_console.rb` - Market data with formatted tool results
+- `examples/dhanhq_tools.rb` - Underlying DhanHQ tool implementations
+- `docs/TESTING.md` - Testing guide