ollama_chat 0.0.42 → 0.0.44

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
-   metadata.gz: b1136200c7d140c7f9509710887e6e0c490336046306a7447ef61dc11f523622
-   data.tar.gz: d9275c4b1b86ba2ac94e7a4d6956c87899481abd34d1471e6d77be7c21ef810b
+   metadata.gz: d0c9a525754d546b6d7cd0fa88e20c4767bd1dae51313d1f2c2409c06762f4b0
+   data.tar.gz: fc9ddde83495908f965c9225246b6668b01c40fe7391fb83c54beb3ce46321d2
  SHA512:
-   metadata.gz: 8dee06aacfff1af99076ae81d509f913a61f539071553f6f86af96b3ee0fb3d1c63f39307d85525ff0f14a382ef44bc35132e7930b10589f062b6a40973cc8e3
-   data.tar.gz: b0388f8ac5e679f3ffec9e28008a4f9558d2b09c3c73ebedd58b6821fb6cca62dd25aa16712c29ff36c5117cfbd77bf55f5c48b2412993aa70e650fae1326217
+   metadata.gz: 97dc5c42b59b3f0d7abf50a0647ff3a3c79681e88e4e9b61c37f706fadc450731ed0240795362e6a061f07ada56502934f5992e796c602b01c27f3b20e6c575f
+   data.tar.gz: 34c1f4dded9b54155b407913290354707cb63104b3ea126ca438a245d03458ee868962068f85398954cf6be0ff17f826bdc627985168817d55642cb231732459
data/CHANGES.md CHANGED
@@ -1,5 +1,57 @@
  # Changes
 
+ ## 2025-12-10 v0.0.44
+
+ - Fixed `stream` option in `spec/ollama_chat/follow_chat_spec.rb` from `on?
+   true` to `on?: true`
+ - Extracted `prepare_last_message` method to handle content and thinking text
+   formatting with markdown and annotation support
+ - Introduced `display_output` method that uses `use_pager` from `MessageList`
+   to handle large outputs gracefully
+ - Modified `FollowChat#process_response` to conditionally call
+   `display_formatted_terminal_output` based on `@chat.stream.on?`
+ - Added `use_pager` method to `MessageList` that wraps output blocks with pager
+   context using `Kramdown::ANSI::Pager`
+ - Updated conditional logic in `follow_chat.rb` to properly distinguish between
+   streaming and non-streaming display modes
+ - Updated `kramdown-ansi` dependency version from `~> 0.2` to `~> 0.3` in
+   `Rakefile` and `ollama_chat.gemspec`
+ - Added `truncate_for_terminal` method to `OllamaChat::FollowChat` class that
+   limits text to a specified number of lines
+ - Modified `display_formatted_terminal_output` to use `truncate_for_terminal`
+   when processing content and thinking text
+ - Updated spec file to expose the `FollowChat` instance for testing
+ - Added comprehensive tests for the new `truncate_for_terminal` method covering
+   various line count scenarios
+ - The method handles edge cases like negative and zero line counts by returning
+   the last line
+ - Uses `Tins::Terminal.lines` as default maximum lines parameter
+ - The implementation ensures terminal output stays within display limits while
+   preserving content integrity
+
+ ## 2025-12-09 v0.0.43
+
+ - Added retry logic in `interact_with_user` method to handle
+   `Ollama::Errors::BadRequestError` when in think mode
+ - Introduced `think_loud` switch with associated UI commands and logic in
+   `chat.rb`
+ - Implemented `OllamaChat::ThinkControl` module in
+   `lib/ollama_chat/think_control.rb` with methods `think`, `choose_think_mode`,
+   `think?`, and `think_show`
+ - Updated `ollama-ruby` dependency from version **1.14** to **1.16**
+ - Simplified think mode handling and updated related tests
+ - Added string modes support for think feature allowing values `"low"`,
+   `"medium"`, `"high"`
+ - Modified `FollowChat` to conditionally append thinking annotations based on
+   `think_loud.on?`
+ - Updated documentation comments to follow new tagging conventions for method
+   returns and attribute accessors
+ - Updated `default_config.yml` to set `think_loud: true` by default
+ - Modified information display to include `think_loud.show`
+ - Adjusted tests to mock `think_loud` and verify annotation handling
+ - Updated `follow_chat_spec.rb` to stub `think_loud?` instead of
+   `think_loud.on?`
+
  ## 2025-12-03 v0.0.42
 
  - Updated `ollama-ruby` gem dependency from version **1.7** to **1.14**
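
The truncation behaviour summarized above is easy to verify in isolation. The following is a standalone re-implementation for illustration only, not the gem's code verbatim; the `Tins::Terminal.lines` default is replaced here by an explicit argument:

    # Illustrative sketch of FollowChat#truncate_for_terminal's behaviour.
    def truncate_for_terminal(text, max_lines:)
      max_lines = max_lines.clamp(1..) # negative and zero counts become 1
      lines = text.lines
      return text if lines.size <= max_lines
      lines[-max_lines..].join
    end

    text = ('A'..'Z').to_a.join("\n")          # 26 lines
    truncate_for_terminal(text, max_lines: 5)  # => "V\nW\nX\nY\nZ"
    truncate_for_terminal(text, max_lines: 0)  # => "Z" (clamped to the last line)
    truncate_for_terminal(text, max_lines: 42) # => text, unchanged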
data/README.md CHANGED
@@ -170,7 +170,8 @@ The following commands can be given inside the chat, if prefixed by a `/`:
  /info                     show information for current session
  /config                   output current configuration ("/Users/flori/.config/ollama_chat/config.yml")
  /document_policy          pick a scan policy for document references
- /think                    enable ollama think setting for models
+ /think                    choose ollama think mode setting for models
+ /think_loud               enable to think out loud instead of silently
  /import source            import the source's content
  /summarize [n] source     summarize the source's content in n words
  /embedding                toggle embedding paused or not
data/Rakefile CHANGED
@@ -37,7 +37,7 @@ GemHadar do
  )
 
  dependency 'excon', '~> 1.0'
- dependency 'ollama-ruby', '~> 1.14'
+ dependency 'ollama-ruby', '~> 1.16'
  dependency 'documentrix', '~> 0.0', '>= 0.0.2'
  dependency 'unix_socks', '~> 0.1'
  dependency 'rss', '~> 0.3'
@@ -45,7 +45,7 @@ GemHadar do
  dependency 'redis', '~> 5.0'
  dependency 'mime-types', '~> 3.0'
  dependency 'reverse_markdown', '~> 3.0'
- dependency 'kramdown-ansi', '~> 0.2'
+ dependency 'kramdown-ansi', '~> 0.3'
  dependency 'complex_config', '~> 0.22', '>= 0.22.2'
  dependency 'tins', '~> 1.47'
  dependency 'search_ui', '~> 0.0'
data/lib/ollama_chat/chat.rb CHANGED
@@ -40,6 +40,7 @@ class OllamaChat::Chat
    include OllamaChat::SourceFetching
    include OllamaChat::WebSearching
    include OllamaChat::Dialog
+   include OllamaChat::ThinkControl
    include OllamaChat::Information
    include OllamaChat::MessageOutput
    include OllamaChat::Clipboard
@@ -80,6 +81,7 @@ class OllamaChat::Chat
      @opts = go 'f:u:m:s:c:C:D:MESVh', argv
      @opts[?h] and exit usage
      @opts[?V] and exit version
+     @messages = OllamaChat::MessageList.new(self)
      @ollama_chat_config = OllamaChat::OllamaChatConfig.new(@opts[?f])
      self.config = @ollama_chat_config.config
      setup_switches(config)
@@ -98,9 +100,9 @@ class OllamaChat::Chat
      @document_policy = config.document_policy
      @model = choose_model(@opts[?m], config.model.name)
      @model_options = Ollama::Options[config.model.options]
+     @think = config.think
      model_system = pull_model_unless_present(@model, @model_options)
      embedding_enabled.set(config.embedding.enabled && !@opts[?E])
-     @messages = OllamaChat::MessageList.new(self)
      if @opts[?c]
        messages.load_conversation(@opts[?c])
      else
@@ -305,7 +307,10 @@ class OllamaChat::Chat
        choose_document_policy
        :next
      when %r(^/think$)
-       think.toggle
+       choose_think_mode
+       :next
+     when %r(^/think_loud$)
+       think_loud.toggle
        :next
      when %r(^/import\s+(.+))
        @parse_content = false
@@ -601,14 +606,27 @@ class OllamaChat::Chat
        messages:,
        voice: (@current_voice if voice.on?)
      )
-     ollama.chat(
-       model: @model,
-       messages:,
-       options: @model_options,
-       stream: stream.on?,
-       think: think.on?,
-       &handler
-     )
+     begin
+       retried = false
+       ollama.chat(
+         model: @model,
+         messages:,
+         options: @model_options,
+         stream: stream.on?,
+         think:,
+         &handler
+       )
+     rescue Ollama::Errors::BadRequestError
+       if think? && !retried
+         STDOUT.puts "#{bold('Error')}: in think mode, switch thinking off and retry."
+         sleep 1
+         @think = false
+         retried = true
+         retry
+       else
+         raise
+       end
+     end
      if embedding.on? && !records.empty?
        STDOUT.puts "", records.map { |record|
          link = if record.source =~ %r(\Ahttps?://)
@@ -686,8 +704,6 @@ class OllamaChat::Chat
    #
    # @param document_list [Array<String>] List of document paths or URLs to process
    #
-   # @return [void]
-   #
    # @example Adding local files
    #   add_documents_from_argv(['/path/to/file1.txt', '/path/to/file2.pdf'])
    #
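
The `begin`/`rescue`/`retry` block added to `chat.rb` above implements a retry-once fallback: the first `Ollama::Errors::BadRequestError` raised in think mode switches thinking off and retries; any further failure is re-raised. A minimal self-contained sketch of the same pattern, with a hypothetical `ApiError` standing in for the Ollama error class:

    # Retry-once fallback pattern; ApiError is a stand-in for
    # Ollama::Errors::BadRequestError.
    class ApiError < StandardError; end

    def call_api(think:)
      raise ApiError, 'think unsupported by this model' if think
      "ok (think=#{think.inspect})"
    end

    think   = true
    retried = false
    begin
      result = call_api(think:)
    rescue ApiError
      raise if !think || retried
      think   = false # drop think mode and try exactly once more
      retried = true
      retry
    end
    result # => "ok (think=false)"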
data/lib/ollama_chat/follow_chat.rb CHANGED
@@ -68,7 +68,13 @@ class OllamaChat::FollowChat
      if response&.message&.role == 'assistant'
        ensure_assistant_response_exists
        update_last_message(response)
-       display_formatted_terminal_output
+       if @chat.stream.on?
+         display_formatted_terminal_output
+       else
+         if display_output
+           display_formatted_terminal_output
+         end
+       end
        @say.call(response)
      end
 
@@ -79,6 +85,25 @@ class OllamaChat::FollowChat
 
    private
 
+   # The truncate_for_terminal method processes text to fit within a specified
+   # number of lines.
+   #
+   # This method takes a text string and trims it to ensure it doesn't exceed
+   # the maximum number of lines allowed for terminal display. If the text
+   # exceeds the limit, only the last N lines are retained, where N equals the
+   # maximum lines parameter.
+   #
+   # @param text [ String ] the text content to be processed
+   # @param max_lines [ Integer ] the maximum number of lines allowed (defaults to terminal lines)
+   #
+   # @return [ String ] the text truncated to fit within the specified line limit
+   def truncate_for_terminal(text, max_lines: Tins::Terminal.lines)
+     max_lines = max_lines.clamp(1..)
+     lines = text.lines
+     return text if lines.size <= max_lines
+     lines[-max_lines..-1].join('')
+   end
+
    # The ensure_assistant_response_exists method ensures that the last message
    # in the conversation is from the assistant role.
    #
@@ -91,7 +116,7 @@ class OllamaChat::FollowChat
      @messages << Message.new(
        role: 'assistant',
        content: '',
-       thinking: ('' if @chat.think.on?)
+       thinking: ('' if @chat.think?)
      )
      @user = message_type(@messages.last.images) + " " +
        bold { color(111) { 'assistant:' } }
@@ -106,11 +131,51 @@ class OllamaChat::FollowChat
    # and thinking
    def update_last_message(response)
      @messages.last.content << response.message&.content
-     if @chat.think.on? and response_thinking = response.message&.thinking.full?
+     if @chat.think_loud? and response_thinking = response.message&.thinking.full?
        @messages.last.thinking << response_thinking
      end
    end
 
+   # The prepare_last_message method processes and formats content and thinking
+   # annotations for display.
+   #
+   # This method prepares the final content and thinking text by applying
+   # appropriate formatting based on the chat's markdown and think loud
+   # settings. It handles parsing of content through Kramdown::ANSI when
+   # markdown is enabled, and applies annotation formatting to both content
+   # and thinking text according to the chat's configuration.
+   #
+   # @return [Array<String, String>] an array containing the processed content
+   #   and thinking text
+   # @return [Array<String, nil>] an array containing the processed content and
+   #   nil if thinking is disabled
+   def prepare_last_message
+     content, thinking = @messages.last.content, @messages.last.thinking
+     if @chat.markdown.on?
+       content = talk_annotate { truncate_for_terminal @chat.kramdown_ansi_parse(content) }
+       if @chat.think_loud?
+         thinking = think_annotate { truncate_for_terminal @chat.kramdown_ansi_parse(thinking) }
+       end
+     else
+       content = talk_annotate { content }
+       @chat.think? and thinking = think_annotate { thinking }
+     end
+     return content, thinking
+   end
+
+   # The last_message_with_user method constructs a formatted message array by
+   # combining user information, newline characters, thinking annotations, and
+   # content for display in the terminal output.
+   #
+   # @return [ Array ] an array containing the user identifier, newline
+   #   character, thinking annotation (if present), and content formatted for
+   #   terminal display
+   def last_message_with_user
+     content, thinking = prepare_last_message
+     [ @user, ?\n, thinking, content ]
+   end
+
    # The display_formatted_terminal_output method formats and outputs the
    # terminal content by processing the last message's content and thinking,
    # then prints it to the output. It handles markdown parsing and annotation
@@ -119,19 +184,21 @@ class OllamaChat::FollowChat
    # thinking modes are enabled to determine how to process and display the
    # content.
    def display_formatted_terminal_output
-     content, thinking = @messages.last.content, @messages.last.thinking
-     if @chat.markdown.on?
-       content = talk_annotate { @chat.kramdown_ansi_parse(content) }
-       if @chat.think.on?
-         thinking = think_annotate { @chat.kramdown_ansi_parse(content) }
-       end
-     else
-       content = talk_annotate { content }
-       @chat.think.on? and thinking = think_annotate { @messages.last.thinking.full? }
+     @output.print(*([ clear_screen, move_home, *last_message_with_user ].compact))
+   end
+
+   # The display_output method shows the last message in the conversation.
+   #
+   # This method hands the last message to the messages object's use_pager
+   # method, which pages the output when it would exceed the terminal's
+   # capacity. It is typically used to provide feedback to the user about the
+   # last response from the assistant.
+   #
+   # @return [ nil, String ] the pager command or nil if no paging was
+   #   performed.
+   def display_output
+     @messages.use_pager do |output|
+       output.print(*last_message_with_user)
      end
-     @output.print(*([
-       clear_screen, move_home, @user, ?\n, thinking, content
-     ].compact))
    end
 
    # The eval_stats method processes response statistics and formats them into a
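
Taken together, these hunks give `FollowChat` two display paths: streamed responses are redrawn in the terminal on every chunk, while complete responses are first offered to the pager via `display_output` and only redrawn in the terminal when a pager was actually used. The sketch below assumes, as the `@return` tag suggests, that `display_output` prints directly and returns `nil` when no paging is needed; symbols stand in for the real method calls:

    # Illustrative control flow only; the real methods are in the hunks above.
    def display_path(streaming:, pager_used:)
      if streaming
        :redraw_in_terminal     # display_formatted_terminal_output per chunk
      elsif pager_used
        :redraw_in_terminal     # restore the output after the pager exits
      else
        :printed_by_pager_block # display_output already wrote to the terminal
      end
    end

    display_path(streaming: true,  pager_used: false) # => :redraw_in_terminal
    display_path(streaming: false, pager_used: true)  # => :redraw_in_terminal
    display_path(streaming: false, pager_used: false) # => :printed_by_pager_block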
data/lib/ollama_chat/information.rb CHANGED
@@ -99,7 +99,8 @@ module OllamaChat::Information
      end
      markdown.show
      stream.show
-     think.show
+     think_show
+     think_loud.show
      location.show
      voice.show
      if @voice.on?
@@ -136,7 +137,8 @@ module OllamaChat::Information
      /info                     show information for current session
      /config                   output current configuration (#{@ollama_chat_config.filename.to_s.inspect})
      /document_policy          pick a scan policy for document references
-     /think                    enable ollama think setting for models
+     /think                    choose ollama think mode setting for models
+     /think_loud               enable to think out loud instead of silently
      /import source            import the source's content
      /summarize [n] source     summarize the source's content in n words
      /embedding                toggle embedding paused or not
data/lib/ollama_chat/message_format.rb CHANGED
@@ -34,7 +34,7 @@ module OllamaChat::MessageFormat
    def think_annotate(&block)
      string = block.()
      string.to_s.size == 0 and return
-     if @chat.think.on?
+     if @chat.think?
        "💭\n#{string}\n"
      end
    end
@@ -48,7 +48,7 @@ module OllamaChat::MessageFormat
    def talk_annotate(&block)
      string = block.()
      string.to_s.size == 0 and return
-     if @chat.think.on?
+     if @chat.think?
        "💬\n#{string}\n"
      else
        string
data/lib/ollama_chat/message_list.rb CHANGED
@@ -290,6 +290,22 @@ class OllamaChat::MessageList
      end.to_s
    end
 
+   # The use_pager method wraps the given block with a pager context.
+   # If the output would exceed the terminal's line capacity, it pipes the content
+   # through an appropriate pager command (like 'less' or 'more').
+   #
+   # @yield A block that yields an IO object to write output to
+   # @yieldparam [IO] the IO object to write to
+   def use_pager
+     command = determine_pager_command
+     output_buffer = StringIO.new
+     yield output_buffer
+     messages = output_buffer.string
+     Kramdown::ANSI::Pager.pager(command:, lines: messages.count(?\n)) do |output|
+       output.puts messages
+     end
+   end
+
    private
 
    # The config method provides access to the chat configuration object.
@@ -311,22 +327,6 @@ class OllamaChat::MessageList
      OllamaChat::EnvConfig::PAGER?
    end
 
-   # The use_pager method wraps the given block with a pager context.
-   # If the output would exceed the terminal's line capacity, it pipes the content
-   # through an appropriate pager command (like 'less' or 'more').
-   #
-   # @yield A block that yields an IO object to write output to
-   # @yieldparam [IO] the IO object to write to
-   def use_pager
-     command = determine_pager_command
-     output_buffer = StringIO.new
-     yield output_buffer
-     messages = output_buffer.string
-     Kramdown::ANSI::Pager.pager(command:, lines: messages.count(?\n)) do |output|
-       output.puts messages
-     end
-   end
-
    # The message_text_for method generates formatted text representation of a
    # message including its role, content, thinking annotations, and associated
    # images.
@@ -344,7 +344,7 @@ class OllamaChat::MessageList
      when 'system' then 213
      else 210
      end
-     thinking = if @chat.think.on?
+     thinking = if @chat.think?
        think_annotate do
          message.thinking.full? { @chat.markdown.on? ? @chat.kramdown_ansi_parse(_1) : _1 }
        end
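
`use_pager` buffers everything the block writes into a `StringIO`, counts the newlines, and lets `Kramdown::ANSI::Pager` decide whether that many lines warrant a pager. The same buffer-then-count technique in a dependency-free sketch; the fixed threshold stands in for the terminal-height check the real pager performs:

    require 'stringio'

    PAGER_THRESHOLD = 25 # stand-in for the terminal height

    def with_buffered_output
      buffer = StringIO.new
      yield buffer                 # collect the block's output first
      content = buffer.string
      if content.count("\n") > PAGER_THRESHOLD
        [ :page, content ]         # would hand off to a pager command here
      else
        print content              # short enough, print directly
        [ :printed, content ]
      end
    end

    with_buffered_output { |out| out.puts 'hello', 'world' } # => [:printed, ...]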
data/lib/ollama_chat/ollama_chat_config/default_config.yml CHANGED
@@ -42,6 +42,7 @@ markdown: true
  stream: true
  document_policy: importing
  think: false
+ think_loud: true
  embedding:
    enabled: true
    paused: false
data/lib/ollama_chat/switches.rb CHANGED
@@ -40,8 +40,6 @@ module OllamaChat::Switches
 
    # The show method outputs the current value of the message to standard
    # output.
-   #
-   # @return [ void ]
    def show
      STDOUT.puts @msg[value]
    end
@@ -137,17 +135,17 @@ module OllamaChat::Switches
    # @return [ OllamaChat::Switches::Switch ] the stream switch instance
    attr_reader :stream
 
-   # The think method returns the current state of the thinking switch.
-   #
-   # @return [ OllamaChat::Switches::Switch ] the thinking switch instance
-   attr_reader :think
-
    # The markdown attribute reader returns the markdown switch object.
    # The voice reader returns the voice switch instance.
    #
    # @return [ OllamaChat::Switches::Switch ] the markdown switch instance
    attr_reader :markdown
 
+   # The think_loud method returns the current state of the think loud switch.
+   #
+   # @return [ OllamaChat::Switches::Switch ] the think loud switch instance
+   attr_reader :think_loud
+
    # The voice reader returns the voice switch instance.
    #
    # @return [ OllamaChat::Switches::Switch ] the voice switch instance
@@ -193,11 +191,11 @@ module OllamaChat::Switches
      }
    )
 
-   @think = Switch.new(
-     value: config.think,
+   @think_loud = Switch.new(
+     value: config.think_loud,
      msg: {
-       true => "Thinking enabled.",
-       false => "Thinking disabled.",
+       true => "Thinking out loud, show thinking annotations.",
+       false => "Thinking silently, don't show thinking annotations.",
      }
    )
 
data/lib/ollama_chat/think_control.rb ADDED
@@ -0,0 +1,64 @@
+ # A module that provides thinking control functionality for OllamaChat.
+ #
+ # The ThinkControl module encapsulates methods for managing the 'think' mode
+ # setting in OllamaChat sessions. It handles the selection of different
+ # thinking modes, checking the current state, and displaying the current
+ # think mode status.
+ module OllamaChat::ThinkControl
+   # The think method returns the current state of the think mode.
+   #
+   # @return [ true, false, String ] the think mode
+   attr_reader :think
+
+   # The choose_think_mode method presents a menu to select a think mode.
+   #
+   # This method displays available think modes to the user and sets the
+   # selected mode as the current think mode for the chat session.
+   def choose_think_mode
+     think_modes = %w[ off on low medium high [EXIT] ]
+     case chosen = OllamaChat::Utils::Chooser.choose(think_modes)
+     when '[EXIT]', nil
+       STDOUT.puts "Exiting chooser."
+     when 'off'
+       @think = false
+     when 'on'
+       @think = true
+     when 'low', 'medium', 'high'
+       @think = chosen
+     end
+   end
+
+   # The think? method checks if the think mode is enabled.
+   #
+   # @return [ TrueClass, FalseClass ] true if think mode is enabled, false
+   #   otherwise
+   def think?
+     !!think
+   end
+
+   # The think_mode method returns the current think mode status as a string.
+   #
+   # @return [ String ] returns 'enabled' if think mode is true, the think mode
+   #   value if it's a string, or 'disabled' if think mode is false or nil
+   def think_mode
+     think == true ? 'enabled' : think || 'disabled'
+   end
+
+   # The think_show method displays the current think mode status.
+   #
+   # This method checks the current think mode setting and outputs a message
+   # indicating whether think mode is enabled, disabled, or set to a specific
+   # mode level (low, medium, high).
+   def think_show
+     STDOUT.puts "Think mode is #{bold(think_mode)}."
+   end
+
+   # The think_loud? method checks if both think mode and think loud mode are
+   # enabled.
+   #
+   # @return [ TrueClass, FalseClass ] true if think mode is enabled and think
+   #   loud mode is on, false otherwise
+   def think_loud?
+     think? && think_loud.on?
+   end
+ end
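
`@think` is deliberately tri-state: `false`, `true`, or one of the strings `"low"`, `"medium"`, `"high"`, and is handed to the API's `think` parameter unchanged (the string modes are what require the `ollama-ruby` ~> 1.16 bump). The mapping logic in isolation, extracted as lambdas for a quick check:

    think_p    = ->(think) { !!think }  # mirrors ThinkControl#think?
    think_mode = ->(think) { think == true ? 'enabled' : think || 'disabled' }

    [false, true, 'low', 'medium', 'high'].each do |value|
      puts "think=#{value.inspect} think?=#{think_p.(value)} mode=#{think_mode.(value)}"
    end
    # think=false think?=false mode=disabled
    # think=true  think?=true  mode=enabled
    # think="low" think?=true  mode=low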
data/lib/ollama_chat/utils/cache_fetcher.rb CHANGED
@@ -19,8 +19,6 @@ class OllamaChat::Utils::CacheFetcher
    # The initialize method sets up the cache instance variable for the object.
    #
    # @param cache [ Object ] the cache object to be stored
-   #
-   # @return [ void ]
    def initialize(cache)
      @cache = cache
    end
data/lib/ollama_chat/version.rb CHANGED
@@ -1,6 +1,6 @@
  module OllamaChat
    # OllamaChat version
-   VERSION = '0.0.42'
+   VERSION = '0.0.44'
    VERSION_ARRAY = VERSION.split('.').map(&:to_i) # :nodoc:
    VERSION_MAJOR = VERSION_ARRAY[0] # :nodoc:
    VERSION_MINOR = VERSION_ARRAY[1] # :nodoc:
data/lib/ollama_chat.rb CHANGED
@@ -25,6 +25,7 @@ require 'ollama_chat/parsing'
  require 'ollama_chat/source_fetching'
  require 'ollama_chat/web_searching'
  require 'ollama_chat/dialog'
+ require 'ollama_chat/think_control'
  require 'ollama_chat/information'
  require 'ollama_chat/message_output'
  require 'ollama_chat/clipboard'
data/ollama_chat.gemspec CHANGED
@@ -1,9 +1,9 @@
  # -*- encoding: utf-8 -*-
- # stub: ollama_chat 0.0.42 ruby lib
+ # stub: ollama_chat 0.0.44 ruby lib
 
  Gem::Specification.new do |s|
    s.name = "ollama_chat".freeze
-   s.version = "0.0.42".freeze
+   s.version = "0.0.44".freeze
 
    s.required_rubygems_version = Gem::Requirement.new(">= 0".freeze) if s.respond_to? :required_rubygems_version=
    s.require_paths = ["lib".freeze]
@@ -12,8 +12,8 @@ Gem::Specification.new do |s|
    s.description = "The app provides a command-line interface (CLI) to an Ollama AI model,\nallowing users to engage in text-based conversations and generate\nhuman-like responses. Users can import data from local files or web pages,\nwhich are then processed through three different modes: fully importing the\ncontent into the conversation context, summarizing the information for\nconcise reference, or storing it in an embedding vector database for later\nretrieval based on the conversation.\n".freeze
    s.email = "flori@ping.de".freeze
    s.executables = ["ollama_chat".freeze, "ollama_chat_send".freeze]
- s.extra_rdoc_files = ["README.md".freeze, "lib/ollama_chat.rb".freeze, "lib/ollama_chat/chat.rb".freeze, "lib/ollama_chat/clipboard.rb".freeze, "lib/ollama_chat/conversation.rb".freeze, "lib/ollama_chat/dialog.rb".freeze, "lib/ollama_chat/document_cache.rb".freeze, "lib/ollama_chat/env_config.rb".freeze, "lib/ollama_chat/follow_chat.rb".freeze, "lib/ollama_chat/history.rb".freeze, "lib/ollama_chat/information.rb".freeze, "lib/ollama_chat/kramdown_ansi.rb".freeze, "lib/ollama_chat/message_format.rb".freeze, "lib/ollama_chat/message_list.rb".freeze, "lib/ollama_chat/message_output.rb".freeze, "lib/ollama_chat/model_handling.rb".freeze, "lib/ollama_chat/ollama_chat_config.rb".freeze, "lib/ollama_chat/parsing.rb".freeze, "lib/ollama_chat/server_socket.rb".freeze, "lib/ollama_chat/source_fetching.rb".freeze, "lib/ollama_chat/switches.rb".freeze, "lib/ollama_chat/utils.rb".freeze, "lib/ollama_chat/utils/cache_fetcher.rb".freeze, "lib/ollama_chat/utils/chooser.rb".freeze, "lib/ollama_chat/utils/fetcher.rb".freeze, "lib/ollama_chat/utils/file_argument.rb".freeze, "lib/ollama_chat/version.rb".freeze, "lib/ollama_chat/vim.rb".freeze, "lib/ollama_chat/web_searching.rb".freeze]
- s.files = [".utilsrc".freeze, "CHANGES.md".freeze, "Gemfile".freeze, "README.md".freeze, "Rakefile".freeze, "bin/ollama_chat".freeze, "bin/ollama_chat_send".freeze, "config/searxng/settings.yml".freeze, "docker-compose.yml".freeze, "lib/ollama_chat.rb".freeze, "lib/ollama_chat/chat.rb".freeze, "lib/ollama_chat/clipboard.rb".freeze, "lib/ollama_chat/conversation.rb".freeze, "lib/ollama_chat/dialog.rb".freeze, "lib/ollama_chat/document_cache.rb".freeze, "lib/ollama_chat/env_config.rb".freeze, "lib/ollama_chat/follow_chat.rb".freeze, "lib/ollama_chat/history.rb".freeze, "lib/ollama_chat/information.rb".freeze, "lib/ollama_chat/kramdown_ansi.rb".freeze, "lib/ollama_chat/message_format.rb".freeze, "lib/ollama_chat/message_list.rb".freeze, "lib/ollama_chat/message_output.rb".freeze, "lib/ollama_chat/model_handling.rb".freeze, "lib/ollama_chat/ollama_chat_config.rb".freeze, "lib/ollama_chat/ollama_chat_config/default_config.yml".freeze, "lib/ollama_chat/parsing.rb".freeze, "lib/ollama_chat/server_socket.rb".freeze, "lib/ollama_chat/source_fetching.rb".freeze, "lib/ollama_chat/switches.rb".freeze, "lib/ollama_chat/utils.rb".freeze, "lib/ollama_chat/utils/cache_fetcher.rb".freeze, "lib/ollama_chat/utils/chooser.rb".freeze, "lib/ollama_chat/utils/fetcher.rb".freeze, "lib/ollama_chat/utils/file_argument.rb".freeze, "lib/ollama_chat/version.rb".freeze, "lib/ollama_chat/vim.rb".freeze, "lib/ollama_chat/web_searching.rb".freeze, "ollama_chat.gemspec".freeze, "redis/redis.conf".freeze, "spec/assets/api_show.json".freeze, "spec/assets/api_tags.json".freeze, "spec/assets/api_version.json".freeze, "spec/assets/conversation.json".freeze, "spec/assets/duckduckgo.html".freeze, "spec/assets/example.atom".freeze, "spec/assets/example.csv".freeze, "spec/assets/example.html".freeze, "spec/assets/example.pdf".freeze, "spec/assets/example.ps".freeze, "spec/assets/example.rb".freeze, "spec/assets/example.rss".freeze, "spec/assets/example.xml".freeze, "spec/assets/example_with_quote.html".freeze, "spec/assets/kitten.jpg".freeze, "spec/assets/prompt.txt".freeze, "spec/assets/searxng.json".freeze, "spec/ollama_chat/chat_spec.rb".freeze, "spec/ollama_chat/clipboard_spec.rb".freeze, "spec/ollama_chat/follow_chat_spec.rb".freeze, "spec/ollama_chat/information_spec.rb".freeze, "spec/ollama_chat/kramdown_ansi_spec.rb".freeze, "spec/ollama_chat/message_list_spec.rb".freeze, "spec/ollama_chat/message_output_spec.rb".freeze, "spec/ollama_chat/model_handling_spec.rb".freeze, "spec/ollama_chat/parsing_spec.rb".freeze, "spec/ollama_chat/server_socket_spec.rb".freeze, "spec/ollama_chat/source_fetching_spec.rb".freeze, "spec/ollama_chat/switches_spec.rb".freeze, "spec/ollama_chat/utils/cache_fetcher_spec.rb".freeze, "spec/ollama_chat/utils/fetcher_spec.rb".freeze, "spec/ollama_chat/utils/file_argument_spec.rb".freeze, "spec/ollama_chat/web_searching_spec.rb".freeze, "spec/spec_helper.rb".freeze, "tmp/.keep".freeze]
+ s.extra_rdoc_files = ["README.md".freeze, "lib/ollama_chat.rb".freeze, "lib/ollama_chat/chat.rb".freeze, "lib/ollama_chat/clipboard.rb".freeze, "lib/ollama_chat/conversation.rb".freeze, "lib/ollama_chat/dialog.rb".freeze, "lib/ollama_chat/document_cache.rb".freeze, "lib/ollama_chat/env_config.rb".freeze, "lib/ollama_chat/follow_chat.rb".freeze, "lib/ollama_chat/history.rb".freeze, "lib/ollama_chat/information.rb".freeze, "lib/ollama_chat/kramdown_ansi.rb".freeze, "lib/ollama_chat/message_format.rb".freeze, "lib/ollama_chat/message_list.rb".freeze, "lib/ollama_chat/message_output.rb".freeze, "lib/ollama_chat/model_handling.rb".freeze, "lib/ollama_chat/ollama_chat_config.rb".freeze, "lib/ollama_chat/parsing.rb".freeze, "lib/ollama_chat/server_socket.rb".freeze, "lib/ollama_chat/source_fetching.rb".freeze, "lib/ollama_chat/switches.rb".freeze, "lib/ollama_chat/think_control.rb".freeze, "lib/ollama_chat/utils.rb".freeze, "lib/ollama_chat/utils/cache_fetcher.rb".freeze, "lib/ollama_chat/utils/chooser.rb".freeze, "lib/ollama_chat/utils/fetcher.rb".freeze, "lib/ollama_chat/utils/file_argument.rb".freeze, "lib/ollama_chat/version.rb".freeze, "lib/ollama_chat/vim.rb".freeze, "lib/ollama_chat/web_searching.rb".freeze]
+ s.files = [".utilsrc".freeze, "CHANGES.md".freeze, "Gemfile".freeze, "README.md".freeze, "Rakefile".freeze, "bin/ollama_chat".freeze, "bin/ollama_chat_send".freeze, "config/searxng/settings.yml".freeze, "docker-compose.yml".freeze, "lib/ollama_chat.rb".freeze, "lib/ollama_chat/chat.rb".freeze, "lib/ollama_chat/clipboard.rb".freeze, "lib/ollama_chat/conversation.rb".freeze, "lib/ollama_chat/dialog.rb".freeze, "lib/ollama_chat/document_cache.rb".freeze, "lib/ollama_chat/env_config.rb".freeze, "lib/ollama_chat/follow_chat.rb".freeze, "lib/ollama_chat/history.rb".freeze, "lib/ollama_chat/information.rb".freeze, "lib/ollama_chat/kramdown_ansi.rb".freeze, "lib/ollama_chat/message_format.rb".freeze, "lib/ollama_chat/message_list.rb".freeze, "lib/ollama_chat/message_output.rb".freeze, "lib/ollama_chat/model_handling.rb".freeze, "lib/ollama_chat/ollama_chat_config.rb".freeze, "lib/ollama_chat/ollama_chat_config/default_config.yml".freeze, "lib/ollama_chat/parsing.rb".freeze, "lib/ollama_chat/server_socket.rb".freeze, "lib/ollama_chat/source_fetching.rb".freeze, "lib/ollama_chat/switches.rb".freeze, "lib/ollama_chat/think_control.rb".freeze, "lib/ollama_chat/utils.rb".freeze, "lib/ollama_chat/utils/cache_fetcher.rb".freeze, "lib/ollama_chat/utils/chooser.rb".freeze, "lib/ollama_chat/utils/fetcher.rb".freeze, "lib/ollama_chat/utils/file_argument.rb".freeze, "lib/ollama_chat/version.rb".freeze, "lib/ollama_chat/vim.rb".freeze, "lib/ollama_chat/web_searching.rb".freeze, "ollama_chat.gemspec".freeze, "redis/redis.conf".freeze, "spec/assets/api_show.json".freeze, "spec/assets/api_tags.json".freeze, "spec/assets/api_version.json".freeze, "spec/assets/conversation.json".freeze, "spec/assets/duckduckgo.html".freeze, "spec/assets/example.atom".freeze, "spec/assets/example.csv".freeze, "spec/assets/example.html".freeze, "spec/assets/example.pdf".freeze, "spec/assets/example.ps".freeze, "spec/assets/example.rb".freeze, "spec/assets/example.rss".freeze, "spec/assets/example.xml".freeze, "spec/assets/example_with_quote.html".freeze, "spec/assets/kitten.jpg".freeze, "spec/assets/prompt.txt".freeze, "spec/assets/searxng.json".freeze, "spec/ollama_chat/chat_spec.rb".freeze, "spec/ollama_chat/clipboard_spec.rb".freeze, "spec/ollama_chat/follow_chat_spec.rb".freeze, "spec/ollama_chat/information_spec.rb".freeze, "spec/ollama_chat/kramdown_ansi_spec.rb".freeze, "spec/ollama_chat/message_list_spec.rb".freeze, "spec/ollama_chat/message_output_spec.rb".freeze, "spec/ollama_chat/model_handling_spec.rb".freeze, "spec/ollama_chat/parsing_spec.rb".freeze, "spec/ollama_chat/server_socket_spec.rb".freeze, "spec/ollama_chat/source_fetching_spec.rb".freeze, "spec/ollama_chat/switches_spec.rb".freeze, "spec/ollama_chat/utils/cache_fetcher_spec.rb".freeze, "spec/ollama_chat/utils/fetcher_spec.rb".freeze, "spec/ollama_chat/utils/file_argument_spec.rb".freeze, "spec/ollama_chat/web_searching_spec.rb".freeze, "spec/spec_helper.rb".freeze, "tmp/.keep".freeze]
  s.homepage = "https://github.com/flori/ollama_chat".freeze
  s.licenses = ["MIT".freeze]
  s.rdoc_options = ["--title".freeze, "OllamaChat - A command-line interface (CLI) for interacting with an Ollama AI model.".freeze, "--main".freeze, "README.md".freeze]
@@ -33,7 +33,7 @@ Gem::Specification.new do |s|
  s.add_development_dependency(%q<simplecov>.freeze, [">= 0".freeze])
  s.add_development_dependency(%q<context_spook>.freeze, [">= 0".freeze])
  s.add_runtime_dependency(%q<excon>.freeze, ["~> 1.0".freeze])
- s.add_runtime_dependency(%q<ollama-ruby>.freeze, ["~> 1.14".freeze])
+ s.add_runtime_dependency(%q<ollama-ruby>.freeze, ["~> 1.16".freeze])
  s.add_runtime_dependency(%q<documentrix>.freeze, ["~> 0.0".freeze, ">= 0.0.2".freeze])
  s.add_runtime_dependency(%q<unix_socks>.freeze, ["~> 0.1".freeze])
  s.add_runtime_dependency(%q<rss>.freeze, ["~> 0.3".freeze])
@@ -41,7 +41,7 @@ Gem::Specification.new do |s|
  s.add_runtime_dependency(%q<redis>.freeze, ["~> 5.0".freeze])
  s.add_runtime_dependency(%q<mime-types>.freeze, ["~> 3.0".freeze])
  s.add_runtime_dependency(%q<reverse_markdown>.freeze, ["~> 3.0".freeze])
- s.add_runtime_dependency(%q<kramdown-ansi>.freeze, ["~> 0.2".freeze])
+ s.add_runtime_dependency(%q<kramdown-ansi>.freeze, ["~> 0.3".freeze])
  s.add_runtime_dependency(%q<complex_config>.freeze, ["~> 0.22".freeze, ">= 0.22.2".freeze])
  s.add_runtime_dependency(%q<tins>.freeze, ["~> 1.47".freeze])
  s.add_runtime_dependency(%q<search_ui>.freeze, ["~> 0.0".freeze])
data/spec/ollama_chat/chat_spec.rb CHANGED
@@ -312,7 +312,8 @@ describe OllamaChat::Chat, protect_env: true do
      Streaming|
      Location|
      Document\ policy|
-     Thinking|
+     Think\ mode|
+     Thinking\ out\ loud|
      Voice\ output|
      Currently\ selected\ search\ engine|
      Conversation\ length
data/spec/ollama_chat/follow_chat_spec.rb CHANGED
@@ -8,11 +8,12 @@ describe OllamaChat::FollowChat do
    end
 
    let :chat do
-     double('Chat', markdown: double(on?: false), think: double(on?: false), debug: false)
+     double('Chat', markdown: double(on?: false), think_loud?: true,
+       think?: false, debug: false, stream: double(on?: true))
    end
 
    let :follow_chat do
-     described_class.new(chat:, messages:, output:)
+     described_class.new(chat:, messages:, output:).expose
    end
 
    let :output do
@@ -45,4 +46,33 @@ describe OllamaChat::FollowChat do
      expect(output).to receive(:puts).with("", /eval_duration/)
      follow_chat.call(response)
    end
+
+   context '#truncate_for_terminal' do
+     it 'can truncate text for 5 lines' do
+       text = (?A..?Z).to_a.join(?\n)
+       expect(follow_chat.truncate_for_terminal(text, max_lines: 5)).to eq(
+         (?V..?Z).to_a.join(?\n)
+       )
+     end
+
+     it 'can truncate text for -1 lines' do
+       text = (?A..?Z).to_a.join(?\n)
+       expect(follow_chat.truncate_for_terminal(text, max_lines: -1)).to eq(?Z)
+     end
+
+     it 'can truncate text for 0 lines' do
+       text = (?A..?Z).to_a.join(?\n)
+       expect(follow_chat.truncate_for_terminal(text, max_lines: 0)).to eq(?Z)
+     end
+
+     it 'can truncate text for 1 lines' do
+       text = (?A..?Z).to_a.join(?\n)
+       expect(follow_chat.truncate_for_terminal(text, max_lines: 1)).to eq(?Z)
+     end
+
+     it 'can truncate text for 42 lines' do
+       text = (?A..?Z).to_a.join(?\n)
+       expect(follow_chat.truncate_for_terminal(text, max_lines: 42)).to eq(text)
+     end
+   end
  end
data/spec/ollama_chat/message_list_spec.rb CHANGED
@@ -103,7 +103,7 @@ describe OllamaChat::MessageList do
 
    it 'shows nothing when the last message is by the assistant' do
      list = described_class.new(chat)
-     allow(chat).to receive(:think).and_return(double(on?: false))
+     allow(chat).to receive(:think?).and_return(false)
      allow(chat).to receive(:markdown).and_return(double(on?: false))
      list << Ollama::Message.new(role: 'assistant', content: 'hello')
      expect(STDOUT).to receive(:puts).
@@ -119,7 +119,7 @@ describe OllamaChat::MessageList do
    end
 
    it "shows last N messages when N is larger than available messages" do
-     allow(chat).to receive(:think).and_return(double(on?: false))
+     allow(chat).to receive(:think?).and_return(false)
      allow(chat).to receive(:markdown).and_return(double(on?: false))
      list = described_class.new(chat)
      list << Ollama::Message.new(role: 'system', content: 'hello')
@@ -139,8 +139,7 @@ describe OllamaChat::MessageList do
    it 'can show last message' do
      expect(chat).to receive(:markdown).
        and_return(double(on?: true)).at_least(:once)
-     expect(chat).to receive(:think).
-       and_return(double(on?: false)).at_least(:once)
+     expect(chat).to receive(:think?).and_return(false).at_least(:once)
      expect(STDOUT).to receive(:puts).
        with("📨 \e[1m\e[38;5;213msystem\e[0m\e[0m:\nhello\n")
      list.show_last
@@ -149,8 +148,7 @@ describe OllamaChat::MessageList do
    it 'can list conversations without thinking' do
      expect(chat).to receive(:markdown).
        and_return(double(on?: true)).at_least(:once)
-     expect(chat).to receive(:think).
-       and_return(double(on?: false)).at_least(:once)
+     expect(chat).to receive(:think?).and_return(false).at_least(:once)
      list << Ollama::Message.new(role: 'user', content: 'world')
      expect(STDOUT).to receive(:puts).
        with(
@@ -163,8 +161,7 @@ describe OllamaChat::MessageList do
    it 'can list conversations with thinking' do
      expect(chat).to receive(:markdown).
        and_return(double(on?: true)).at_least(:once)
-     expect(chat).to receive(:think).
-       and_return(double(on?: true)).at_least(:once)
+     expect(chat).to receive(:think?).and_return(true).at_least(:once)
      expect(STDOUT).to receive(:puts).
        with(
          "📨 \e[1m\e[38;5;213msystem\e[0m\e[0m:\n" \
@@ -190,8 +187,7 @@ describe OllamaChat::MessageList do
    it 'can list conversations' do
      expect(chat).to receive(:markdown).
        and_return(double(on?: true)).at_least(:once)
-     expect(chat).to receive(:think).
-       and_return(double(on?: false)).at_least(:once)
+     expect(chat).to receive(:think?).and_return(false).at_least(:once)
      list << Ollama::Message.new(role: 'user', content: 'world')
      list.list_conversation
    end
metadata CHANGED
@@ -1,7 +1,7 @@
  --- !ruby/object:Gem::Specification
  name: ollama_chat
  version: !ruby/object:Gem::Version
-   version: 0.0.42
+   version: 0.0.44
  platform: ruby
  authors:
  - Florian Frank
@@ -141,14 +141,14 @@ dependencies:
      requirements:
      - - "~>"
        - !ruby/object:Gem::Version
-         version: '1.14'
+         version: '1.16'
    type: :runtime
    prerelease: false
    version_requirements: !ruby/object:Gem::Requirement
      requirements:
      - - "~>"
        - !ruby/object:Gem::Version
-         version: '1.14'
+         version: '1.16'
  - !ruby/object:Gem::Dependency
    name: documentrix
    requirement: !ruby/object:Gem::Requirement
@@ -259,14 +259,14 @@ dependencies:
      requirements:
      - - "~>"
        - !ruby/object:Gem::Version
-         version: '0.2'
+         version: '0.3'
    type: :runtime
    prerelease: false
    version_requirements: !ruby/object:Gem::Requirement
      requirements:
      - - "~>"
        - !ruby/object:Gem::Version
-         version: '0.2'
+         version: '0.3'
  - !ruby/object:Gem::Dependency
    name: complex_config
    requirement: !ruby/object:Gem::Requirement
@@ -406,6 +406,7 @@ extra_rdoc_files:
  - lib/ollama_chat/server_socket.rb
  - lib/ollama_chat/source_fetching.rb
  - lib/ollama_chat/switches.rb
+ - lib/ollama_chat/think_control.rb
  - lib/ollama_chat/utils.rb
  - lib/ollama_chat/utils/cache_fetcher.rb
  - lib/ollama_chat/utils/chooser.rb
@@ -445,6 +446,7 @@ files:
  - lib/ollama_chat/server_socket.rb
  - lib/ollama_chat/source_fetching.rb
  - lib/ollama_chat/switches.rb
+ - lib/ollama_chat/think_control.rb
  - lib/ollama_chat/utils.rb
  - lib/ollama_chat/utils/cache_fetcher.rb
  - lib/ollama_chat/utils/chooser.rb