ollama_agent 0.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (37)
  1. checksums.yaml +7 -0
  2. data/.cursor/.gitignore +1 -0
  3. data/.cursor/skills/ollama-agent-patterns/SKILL.md +132 -0
  4. data/.cursor/skills/ollama-agent-patterns/reference.md +428 -0
  5. data/.env.example +27 -0
  6. data/CHANGELOG.md +5 -0
  7. data/CODE_OF_CONDUCT.md +10 -0
  8. data/LICENSE.txt +21 -0
  9. data/README.md +147 -0
  10. data/Rakefile +12 -0
  11. data/exe/ollama_agent +13 -0
  12. data/lib/ollama_agent/agent.rb +146 -0
  13. data/lib/ollama_agent/agent_prompt.rb +44 -0
  14. data/lib/ollama_agent/cli.rb +73 -0
  15. data/lib/ollama_agent/console.rb +136 -0
  16. data/lib/ollama_agent/diff_path_validator.rb +141 -0
  17. data/lib/ollama_agent/ollama_connection.rb +14 -0
  18. data/lib/ollama_agent/patch_support.rb +78 -0
  19. data/lib/ollama_agent/repo_list.rb +50 -0
  20. data/lib/ollama_agent/ruby_index/builder.rb +115 -0
  21. data/lib/ollama_agent/ruby_index/extractor_visitor.rb +81 -0
  22. data/lib/ollama_agent/ruby_index/formatter.rb +65 -0
  23. data/lib/ollama_agent/ruby_index/index.rb +51 -0
  24. data/lib/ollama_agent/ruby_index/naming.rb +27 -0
  25. data/lib/ollama_agent/ruby_index.rb +17 -0
  26. data/lib/ollama_agent/ruby_index_tool_support.rb +52 -0
  27. data/lib/ollama_agent/ruby_search_modes.rb +9 -0
  28. data/lib/ollama_agent/sandboxed_tools.rb +216 -0
  29. data/lib/ollama_agent/think_param.rb +27 -0
  30. data/lib/ollama_agent/timeout_param.rb +20 -0
  31. data/lib/ollama_agent/tool_arguments.rb +26 -0
  32. data/lib/ollama_agent/tool_content_parser.rb +44 -0
  33. data/lib/ollama_agent/tools_schema.rb +78 -0
  34. data/lib/ollama_agent/version.rb +5 -0
  35. data/lib/ollama_agent.rb +11 -0
  36. data/sig/ollama_agent.rbs +4 -0
  37. metadata +182 -0
data/README.md ADDED
# ollama_agent

Version: 0.1.0

Ruby gem that runs a **CLI coding agent** against a local [Ollama](https://ollama.com) model. It exposes tools to **list files**, **read files**, **search the tree** (ripgrep or grep), and **apply unified diffs** so the model can make small, reviewable edits.

## Features

- Tool `list_files` – list project files.
- Tool `read_file` – read file contents.
- Tool `search_code` – search code with ripgrep or grep.
- Tool `edit_file` – apply unified diffs safely.
- CLI built with Thor, entry point `exe/ollama_agent`.

## Requirements

- Ruby ≥ 3.2
- **Local:** Ollama running and a capable tool-calling model, **or**
- **Ollama Cloud:** API key and a cloud-capable model name (see below)

## Installation

From RubyGems (when published) or from this repository:

```bash
bundle install
```

## Usage

From the project you want the agent to modify (set the working directory accordingly):

```bash
bundle exec ruby exe/ollama_agent ask "Update README.md to reflect the current codebase"
```

From this repository after `bundle install`, `ruby exe/ollama_agent` (without `bundle exec`) also works: the executable adds `lib` to the load path and loads `bundler/setup` when a `Gemfile` is present.

Apply proposed patches without interactive confirmation:

```bash
bundle exec ruby exe/ollama_agent ask -y "Your task"
```

Long-running models (slow local inference):

```bash
bundle exec ruby exe/ollama_agent ask --timeout 300 "Your task"
```

Interactive REPL:

```bash
bundle exec ruby exe/ollama_agent ask --interactive
```

With a **thinking-capable** model, enable reasoning output:

```bash
OLLAMA_AGENT_THINK=true bundle exec ruby exe/ollama_agent ask -i
# or
bundle exec ruby exe/ollama_agent ask -i --think true
```

The CLI uses **ANSI colors** on a TTY (banner, prompt, patch prompts). **Assistant replies** are rendered as **Markdown** (headings, lists, bold, code fences) via `tty-markdown` when stdout is a TTY and **`NO_COLOR`** is unset. Disable Markdown rendering with **`OLLAMA_AGENT_MARKDOWN=0`**. Disable all colors with **`NO_COLOR`** or **`OLLAMA_AGENT_COLOR=0`**.

When **thinking** is enabled, internal reasoning is shown in a **framed, dim** block labeled **Thinking**; the user-facing reply is labeled **Assistant** in green when the model returns both fields. Thinking text is **plain dim** by default (so it stays visually separate from the reply). Set **`OLLAMA_AGENT_THINKING_MARKDOWN=1`** to render thinking through Markdown too (muted colors).

### Ollama Cloud

[Ollama Cloud](https://docs.ollama.com/cloud) uses the same HTTP API as the local server, with HTTPS and a Bearer API key. The **ollama-client** gem sends `Authorization: Bearer <api_key>` when `Ollama::Config#api_key` is set (HTTPS is used when the URL scheme is `https`).

1. Create a key at [ollama.com/settings/keys](https://ollama.com/settings/keys).
2. Point the agent at the cloud host and pass the key (same env names as ollama-client's docs):

```bash
export OLLAMA_BASE_URL="https://ollama.com"
export OLLAMA_API_KEY="your_key"
export OLLAMA_AGENT_MODEL="gpt-oss:120b-cloud" # example; pick a cloud model from `ollama list` / the catalog
bundle exec ruby exe/ollama_agent ask "Your task"
```

### Environment

| Variable | Purpose |
|----------|---------|
| `OLLAMA_BASE_URL` | Ollama API base URL (default from ollama-client: `http://localhost:11434`; use `https://ollama.com` for cloud) |
| `OLLAMA_API_KEY` | API key for Ollama Cloud (`https://ollama.com`); optional for local HTTP |
| `OLLAMA_AGENT_MODEL` | Model name (overrides default from ollama-client) |
| `OLLAMA_AGENT_ROOT` | Project root (defaults to current working directory) |
| `OLLAMA_AGENT_DEBUG` | Set to `1` to print validation diagnostics on stderr |
| `OLLAMA_AGENT_MAX_TURNS` | Max chat rounds with tool calls (default: 64) |
| `OLLAMA_AGENT_TIMEOUT` | HTTP read/open timeout in seconds for Ollama requests (default **120**; use `ask --timeout` / `-t` to override per run) |
| `OLLAMA_AGENT_PARSE_TOOL_JSON` | Set to `1` to run tools parsed from JSON lines in assistant text (fallback when the model does not emit native tool calls) |
| `NO_COLOR` | Set (any value) to disable ANSI colors (see [no-color.org](https://no-color.org/)) |
| `OLLAMA_AGENT_COLOR` | Set to `0` to disable colors even on a TTY |
| `OLLAMA_AGENT_MARKDOWN` | Set to `0` to disable Markdown formatting of assistant replies (plain text only) |
| `OLLAMA_AGENT_THINKING_MARKDOWN` | Set to `1` to render **thinking** text with Markdown (muted); default is plain dim text inside the Thinking frame |
| `OLLAMA_AGENT_THINK` | Model **thinking** mode for compatible models: `true` / `false`, or `high` / `medium` / `low` (see ollama-client `think:`). Empty = omit (server default). |
| `OLLAMA_AGENT_INDEX_REBUILD` | Set to `1` to drop the cached Prism Ruby index before the next symbol search in this process. The index is rebuilt when this value **changes** (e.g. unset → `1`), not on every tool call while it stays `1`. |
| `OLLAMA_AGENT_RUBY_INDEX_MAX_FILES` | Max `.rb` files to parse per index build (default **5000**) |
| `OLLAMA_AGENT_RUBY_INDEX_MAX_FILE_BYTES` | Skip Ruby files larger than this many bytes (default **512000**) |
| `OLLAMA_AGENT_RUBY_INDEX_MAX_LINES` | Max result lines for `search_code` class/module/method modes (default **200**) |
| `OLLAMA_AGENT_RUBY_INDEX_MAX_CHARS` | Max characters of index output per search (default **60000**) |
| `OLLAMA_AGENT_MAX_READ_FILE_BYTES` | Max bytes for a **full** `read_file` (no line range); larger files return an error (default **2097152**, 2 MiB). Line-range reads stream and are not limited by this cap. |
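The accepted `OLLAMA_AGENT_THINK` values in the table above can be normalized roughly like this (a hypothetical helper for illustration; the gem's own `ThinkParam` may differ in detail):

```ruby
# Hypothetical normalizer for OLLAMA_AGENT_THINK-style values.
# nil or "" => nil (omit the parameter so the server default applies).
def resolve_think(raw)
  value = raw.to_s.strip.downcase
  return nil if value.empty?
  return true if value == "true"
  return false if value == "false"

  value if %w[high medium low].include?(value) # unknown values fall through to nil
end
```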

## Troubleshooting

- **Use a tool-capable model** — Set `OLLAMA_AGENT_MODEL` to a model that supports function/tool calling (e.g. a recent coder-tuned variant). If the model only prints `{"name": "read_file", ...}` in plain text, tools never run unless you enable `OLLAMA_AGENT_PARSE_TOOL_JSON=1`.
- **Malformed diffs** — Headers must look like `git diff`: `--- a/file` then `+++ b/file` then a unified hunk line starting with `@@` (not legacy `--- N,M ----`). Do not put commas after path tokens. The gem normalizes some mistakes and runs `patch --dry-run` before applying.
- **Request timeouts** — The agent defaults to a **120s** HTTP timeout (longer than ollama-client's 30s). If you still hit `Ollama::TimeoutError`, raise it with `OLLAMA_AGENT_TIMEOUT=300`, `bundle exec ruby exe/ollama_agent ask --timeout 300 "..."`, or `-t 300`. Ensure the variable name is exactly `OLLAMA_AGENT_TIMEOUT` (a misspelled name such as `vOLLAMA_AGENT_TIMEOUT` is silently ignored).
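A well-formed patch in this format looks like plain `git diff` output. The example below uses illustrative file names and content; note how a new Markdown bullet whose text begins with `-` still gets the `+` diff opcode first:

```diff
--- a/README.md
+++ b/README.md
@@ -1,3 +1,4 @@
 # Example

-Old sentence.
+New sentence.
+- a new bullet (diff opcode "+", then bullet text starting with "-")
```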

## How it works

1. The CLI starts `OllamaAgent::Agent`, which loops on `Ollama::Client#chat` with tool definitions.
2. Tools are executed in-process under a **path sandbox** (`OLLAMA_AGENT_ROOT`).
3. **`search_code`** defaults to **ripgrep/grep** (`mode` omitted or `text`). For Ruby, use `mode` **`method`**, **`class`**, **`module`**, or **`constant`** to query a **Prism** parse index (built lazily on first use). **`read_file`** accepts optional **`start_line`** / **`end_line`** (1-based, inclusive) to read only part of a file.
4. Patches are validated and checked with **`patch --dry-run`** before you confirm (unless `-y`).
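The line-range read in step 3 can be sketched with the Ruby stdlib alone (a hypothetical helper for illustration, not the gem's actual `read_file` implementation):

```ruby
# Return lines start_line..end_line (1-based, inclusive) without
# loading the whole file into memory, as a range-limited read would.
def read_lines(path, start_line, end_line)
  out = []
  File.foreach(path).with_index(1) do |line, n|
    break if n > end_line          # stop early; no need to scan the rest
    out << line if n >= start_line
  end
  out.join
end
```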

## Development

```bash
bundle exec rspec
bundle exec rubocop
```

### CI and RubyGems release

- **CI** — [`.github/workflows/main.yml`](.github/workflows/main.yml) runs **RSpec** and **RuboCop** on pushes to `main` / `master` and on pull requests (Ruby **3.3.4** and **3.2.0**).
- **Release** — [`.github/workflows/release.yml`](.github/workflows/release.yml) runs on tags `v*`. It checks that the tag matches `OllamaAgent::VERSION` in [`lib/ollama_agent/version.rb`](lib/ollama_agent/version.rb), builds with `gem build ollama_agent.gemspec`, and pushes to RubyGems.

Repository **secrets** (Settings → Secrets and variables → Actions):

| Secret | Purpose |
|--------|---------|
| `RUBYGEMS_API_KEY` | RubyGems API key with **push** scope |
| `RUBYGEMS_OTP_SECRET` | Base32 secret for **TOTP** (RubyGems MFA); the workflow uses `rotp` to generate a one-time code for `gem push` |
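The TOTP step boils down to RFC 6238 over the Base32 secret; a stdlib-only sketch of what `rotp` computes from `RUBYGEMS_OTP_SECRET` (not the workflow's actual code, which uses the `rotp` gem):

```ruby
require "openssl"

# Minimal TOTP (RFC 6238: SHA-1, 6 digits, 30-second step).
def base32_decode(str)
  alphabet = "ABCDEFGHIJKLMNOPQRSTUVWXYZ234567"
  bits = str.delete("=").chars.map { |c| alphabet.index(c).to_s(2).rjust(5, "0") }.join
  bits.scan(/.{8}/).map { |b| b.to_i(2).chr }.join
end

def totp(base32_secret, time = Time.now.to_i)
  counter = [time / 30].pack("Q>")                      # 8-byte big-endian step counter
  hmac = OpenSSL::HMAC.digest("SHA1", base32_decode(base32_secret), counter)
  offset = hmac[-1].ord & 0x0f                          # dynamic truncation (RFC 4226)
  code = (hmac[offset, 4].unpack1("N") & 0x7fffffff) % 1_000_000
  format("%06d", code)
end
```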

Release steps:

1. Bump `OllamaAgent::VERSION` in `lib/ollama_agent/version.rb` and commit to `main`.
2. Tag: `git tag v0.1.0` (must match the version string) and `git push origin v0.1.0`.

## License

MIT. See [LICENSE.txt](LICENSE.txt).
data/Rakefile ADDED
# frozen_string_literal: true

require "bundler/gem_tasks"
require "rspec/core/rake_task"

RSpec::Core::RakeTask.new(:spec)

require "rubocop/rake_task"

RuboCop::RakeTask.new

task default: %i[spec rubocop]
data/exe/ollama_agent ADDED
#!/usr/bin/env ruby
# frozen_string_literal: true

gem_root = File.expand_path("..", __dir__)
$LOAD_PATH.unshift(File.join(gem_root, "lib"))

# From a checkout, load the Gemfile so ollama-client and other deps resolve without `bundle exec`.
gemfile = File.join(gem_root, "Gemfile")
require "bundler/setup" if File.file?(gemfile)

require "ollama_agent"

OllamaAgent::CLI.start(ARGV)
data/lib/ollama_agent/agent.rb ADDED
# frozen_string_literal: true

require_relative "agent_prompt"
require_relative "console"
require_relative "ollama_connection"
require_relative "tools_schema"
require_relative "sandboxed_tools"
require_relative "think_param"
require_relative "timeout_param"
require_relative "tool_content_parser"

module OllamaAgent
  # Runs a tool-calling loop against Ollama: read files, search, apply unified diffs.
  class Agent
    include SandboxedTools

    MAX_TURNS = 64
    # ollama-client defaults to 30s; multi-turn tool chats often need longer on local hardware.
    DEFAULT_HTTP_TIMEOUT = 120

    attr_reader :client, :root

    # rubocop:disable Metrics/ParameterLists -- CLI and tests pass explicit dependencies
    def initialize(client: nil, model: nil, root: nil, confirm_patches: true, http_timeout: nil, think: nil)
      @model = model || default_model
      @root = File.expand_path(root || ENV.fetch("OLLAMA_AGENT_ROOT", Dir.pwd))
      @confirm_patches = confirm_patches
      @http_timeout_override = http_timeout
      @think = think
      @client = client || build_default_client
    end
    # rubocop:enable Metrics/ParameterLists

    def run(query)
      messages = [
        { role: "system", content: system_prompt },
        { role: "user", content: query }
      ]

      execute_agent_turns(messages)
    end

    private

    def execute_agent_turns(messages)
      max_turns.times do
        message = chat_assistant_message(messages)
        tool_calls = tool_calls_from(message)

        messages << message.to_h
        return if tool_calls.empty?

        append_tool_results(messages, tool_calls)
      end

      warn "ollama_agent: maximum tool rounds (#{max_turns}) reached" if ENV["OLLAMA_AGENT_DEBUG"] == "1"
    end

    def tool_calls_from(message)
      calls = message.tool_calls || []
      return calls unless calls.empty? && ToolContentParser.enabled?

      ToolContentParser.synthetic_calls(message.content)
    end

    def max_turns
      Integer(ENV.fetch("OLLAMA_AGENT_MAX_TURNS", MAX_TURNS.to_s))
    rescue ArgumentError, TypeError
      MAX_TURNS
    end

    def chat_assistant_message(messages)
      response = @client.chat(**chat_request_args(messages))

      message = response.message
      raise Error, "Empty assistant message" if message.nil?

      announce_assistant_content(message)
      message
    end

    def chat_request_args(messages)
      args = {
        messages: messages,
        tools: TOOLS,
        model: @model,
        options: { temperature: 0.2 }
      }
      th = resolve_think
      args[:think] = th unless th.nil?
      args
    end

    def announce_assistant_content(message)
      Console.puts_assistant_message(message)
    end

    def resolve_think
      ThinkParam.resolve(@think)
    end

    def default_model
      ENV["OLLAMA_AGENT_MODEL"] || Ollama::Config.new.model
    end

    def build_default_client
      config = Ollama::Config.new
      @http_timeout_seconds = resolved_http_timeout_seconds
      config.timeout = @http_timeout_seconds
      OllamaConnection.apply_env_to_config(config)
      Ollama::Client.new(config: config)
    end

    def resolved_http_timeout_seconds
      parsed = TimeoutParam.parse_positive(@http_timeout_override)
      return parsed if parsed

      parsed = TimeoutParam.parse_positive(ENV.fetch("OLLAMA_AGENT_TIMEOUT", nil))
      return parsed if parsed

      DEFAULT_HTTP_TIMEOUT
    end

    def system_prompt
      AgentPrompt.text
    end

    def append_tool_results(messages, tool_calls)
      tool_calls.each do |tool_call|
        result = execute_tool(tool_call.name, tool_call.arguments || {})
        messages << tool_message(tool_call, result)
      end
    end

    def tool_message(tool_call, result)
      msg = {
        role: "tool",
        name: tool_call.name,
        content: result.to_s
      }
      id = tool_call.id
      msg[:tool_call_id] = id if id && !id.to_s.empty?
      msg
    end
  end
end
data/lib/ollama_agent/agent_prompt.rb ADDED
# frozen_string_literal: true

module OllamaAgent
  # System prompt for the coding agent (kept separate to keep Agent small and testable).
  module AgentPrompt
    def self.text
      <<~PROMPT
        You are a coding assistant with tools: list_files, read_file, search_code, edit_file.
        Work only under the project root. Briefly state your plan, then use tools.

        Large Ruby codebases: use search_code with mode "method", "class", "module", or "constant" to locate definitions
        via the Prism index (substring match on names), then read_file with start_line/end_line for only the lines you need.
        Use search_code mode "text" (default) for ripgrep-style matches in any file type.

        Do not paste JSON tool calls or {"name": ...} blocks in your reply text. Tools run only when the host
        receives native tool calls from the model API—not from prose. Never put commas after --- or +++ file lines.

        For README or documentation updates that should reflect the codebase:
        1) list_files on "." or "lib" (and read ollama_agent.gemspec if present) to see structure.
        2) read_file every file you will change before editing (e.g. README.md, lib/ollama_agent.rb).
        3) edit_file last with a unified diff in `git diff` / patch(1) form: `--- a/<path>` then `+++ b/<same path>` (no
        trailing commas). The next line must be a unified hunk header starting with `@@` (two at-signs), e.g.
        `@@ -12,5 +12,5 @@`, then unchanged lines prefixed with a space, `-` removed, `+` added. Never use legacy lines like
        `--- 2,1 ----`. Do not append editor markers such as `*** End Patch` or `*** Begin Patch`—only what `git diff`
        would print; those markers are not valid patch input.

        Markdown bullets: in unified diff, the first character of each line is the opcode. A line starting with `-` is a
        removal from the old file—not a bullet. To add a bullet line `- item` to the file, the diff line must start with
        `+` then the rest: `+ - item` (plus, space, dash, …). Same for any new line whose text begins with `-`.

        Do not paste, paraphrase, or echo any sample diff from this system message—there is none. Every `-` and `+` line
        must match real text from your read_file results (or your intended replacement for those exact lines). Never
        invent hunks from memory or placeholders.

        Never put @@ before the +++ line for the same file. When the task is done, reply with a brief summary and stop
        calling tools.

        When the API exposes separate reasoning ("thinking") and main message text ("content"): put internal planning only
        in thinking; put the full user-visible reply (greetings, explanations, summaries) in content so the host can style
        them as the assistant message.
      PROMPT
    end
  end
end
data/lib/ollama_agent/cli.rb ADDED
# frozen_string_literal: true

require "thor"

require_relative "agent"

module OllamaAgent
  # Thor CLI for single-shot and interactive agent sessions.
  class CLI < Thor
    desc "ask [QUERY]", "Run a natural-language task (reads, search, patch)"
    method_option :model, type: :string, desc: "Ollama model (default: OLLAMA_AGENT_MODEL or ollama-client default)"
    method_option :interactive, type: :boolean, aliases: "-i", desc: "Interactive REPL"
    method_option :yes, type: :boolean, aliases: "-y", desc: "Apply patches without confirmation"
    method_option :root, type: :string, desc: "Project root (default: OLLAMA_AGENT_ROOT or cwd)"
    method_option :timeout, type: :numeric, aliases: "-t", desc: "HTTP timeout seconds (default 120)"
    method_option :think, type: :string, desc: "Thinking mode: true|false|high|medium|low (see OLLAMA_AGENT_THINK)"
    def ask(query = nil)
      agent = build_agent

      if options[:interactive]
        start_interactive(agent)
      elsif query
        agent.run(query)
      else
        puts Console.error_line("Error: provide a QUERY or use --interactive")
        exit 1
      end
    end

    private

    def build_agent
      Agent.new(
        model: options[:model],
        root: options[:root],
        confirm_patches: !options[:yes],
        http_timeout: options[:timeout],
        think: options[:think]
      )
    end

    def start_interactive(agent)
      puts Console.welcome_banner("Ollama Agent (type 'exit' to quit)")
      use_readline = interactive_readline_usable?

      loop do
        input = interactive_readline_line(use_readline)
        break if input.nil?

        line = input.chomp
        break if line == "exit"

        agent.run(line)
      end
    end

    def interactive_readline_usable?
      require "readline"
      true
    rescue LoadError
      false
    end

    def interactive_readline_line(use_readline)
      if use_readline
        Readline.readline(Console.prompt_prefix, true)
      else
        print Console.prompt_prefix
        $stdin.gets
      end
    end
  end
end
data/lib/ollama_agent/console.rb ADDED
# frozen_string_literal: true

module OllamaAgent
  # ANSI styling for TTY output. Respects https://no-color.org/ via NO_COLOR.
  # Assistant replies use tty-markdown when enabled (headings, lists, bold, code blocks).
  module Console
    module_function

    # Muted tty-markdown palette so "Thinking" stays visually distinct from the main reply
    # (default TTY::Markdown theme uses cyan/yellow like normal assistant output).
    THINKING_MARKDOWN_THEME = {
      em: :bright_black,
      header: %i[bright_black bold],
      hr: :bright_black,
      link: %i[bright_black underline],
      list: :bright_black,
      strong: %i[bright_black bold],
      table: :bright_black,
      quote: :bright_black,
      image: :bright_black,
      note: :bright_black,
      comment: :bright_black
    }.freeze

    THINKING_FRAME_WIDTH = 44

    def color_enabled?
      $stdout.tty? && ENV["NO_COLOR"].to_s.empty? && ENV["OLLAMA_AGENT_COLOR"] != "0"
    end

    def markdown_enabled?
      $stdout.tty? && ENV["NO_COLOR"].to_s.empty? && ENV["OLLAMA_AGENT_MARKDOWN"] != "0"
    end

    # Thinking uses dim plain text by default so it stays visually separate from the main reply.
    # Set OLLAMA_AGENT_THINKING_MARKDOWN=1 to render thinking through tty-markdown (muted theme).
    def thinking_markdown_enabled?
      markdown_enabled? && ENV["OLLAMA_AGENT_THINKING_MARKDOWN"] == "1"
    end

    def style(text, *codes)
      return text.to_s unless color_enabled?

      t = text.to_s
      return t if t.empty? || codes.flatten.compact.empty?

      "\e[#{codes.flatten.compact.join(";")}m#{t}\e[0m"
    end

    def bold(text) = style(text, 1)
    def dim(text) = style(text, 2)
    def cyan(text) = style(text, 36)
    def green(text) = style(text, 32)
    def yellow(text) = style(text, 33)
    def red(text) = style(text, 31)
    def magenta(text) = style(text, 35)

    def welcome_banner(text)
      bold(cyan(text))
    end

    def prompt_prefix
      cyan("> ")
    end

    def assistant_output(text)
      green(text)
    end

    # Renders Markdown to the terminal (bold, lists, fenced code) when enabled; otherwise plain green text.
    def format_assistant(text)
      return assistant_output(text) unless markdown_enabled?

      markdown_parse(text) || assistant_output(text)
    end

    def format_thinking(text)
      line = thinking_frame_line
      header = "#{magenta(bold("Thinking"))}\n#{line}\n"
      body = if thinking_markdown_enabled?
               markdown_parse(text, thinking: true) || dim(text.to_s)
             else
               dim(text.to_s)
             end
      "#{header}#{body}\n#{line}"
    end

    def assistant_reply_heading
      bold(green("Assistant"))
    end

    def thinking_frame_line
      dim("-" * THINKING_FRAME_WIDTH)
    end

    class << self
      private

      def markdown_parse(text, thinking: false)
        require "tty-markdown"
        theme = thinking ? THINKING_MARKDOWN_THEME : {}
        TTY::Markdown.parse(text.to_s, theme: theme)
      rescue LoadError, StandardError
        nil
      end

      def write_assistant_reply(content, thinking_present)
        puts if thinking_present
        puts assistant_reply_heading if thinking_present
        puts format_assistant(content)
      end
    end

    # Prints thinking (if any) then main content; duck-types #thinking and #content.
    def puts_assistant_message(message)
      t = message.thinking
      c = message.content
      thinking_present = t && !t.to_s.empty?

      puts format_thinking(t) if thinking_present
      write_assistant_reply(c, thinking_present) if c && !c.to_s.empty?
    end

    def patch_title(text)
      bold(yellow(text))
    end

    def apply_prompt(text)
      yellow(text)
    end

    def error_line(text)
      red(text)
    end
  end
end
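For reference, the escape sequences `Console.style` emits compose like this (a standalone re-creation for illustration, not a require of the gem):

```ruby
# Mirror of Console.style's output shape: SGR codes joined by ";", reset at the end.
def style(text, *codes)
  "\e[#{codes.flatten.compact.join(";")}m#{text}\e[0m"
end

style("Assistant", 1, 32)  # bold (1) + green (32)
```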