elelem 0.1.3 → 0.2.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
checksums.yaml CHANGED
@@ -1,7 +1,7 @@
  ---
  SHA256:
- metadata.gz: 9d4a99b7addd5861f402c297ecfd19bc52e8edc6bba89422c2dec195a8cdcd13
- data.tar.gz: d663221598cb4a843879ae191d6b7d8f62318be15d121803f58f752cecc79bbc
+ metadata.gz: 7d165866e64423e182a5497407deba0249b4c73ca4fc5c3af36a979925aade9f
+ data.tar.gz: 4b8f6b384a901781514bd085c106672489676b065eab7c2c6057ffff49842b71
  SHA512:
- metadata.gz: cdc7ccfb94de895a32f5be4cc310f23ca781119628fcbec4bd95a50df2c5b0eb3caa474c3a9058b6120c744e56f53001eb8c67b09cad7934f9b03292202e3c88
- data.tar.gz: 9732e722f6d1c040d4c497b6826bd53ac425ad129befdd37da0e7ecd60b9bf4e6c96464a44f89a4cd118ca79b5b675ac6190ebf157c21bf4d06aa8ced13507f9
+ metadata.gz: 356ff6e3bbadda54bc3ae9663d67879ea3fc1cd52418eaaa228f8aa1b7a6bf2a9db847dfa482e840fb0f772130753d5d417814d649e75f5c530bca467ef6f2df
+ data.tar.gz: 67313588f14536711acf61e566394466a34513cf52259c286e522648ccc1f47fcc898f4f2856a0072bf79cb63462948931c202bf972b7c29e94494aa3efb4a0f
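For reference, the digests above cover the gem's inner archives (`metadata.gz` and `data.tar.gz`). A minimal sketch of recomputing the SHA256 values from a locally downloaded gem file; the file name is an illustrative assumption, not part of this diff:

```ruby
require "digest"
require "rubygems/package"

# Recompute the SHA256 digests shown above from a downloaded gem.
# "elelem-0.2.1.gem" is an assumed local path.
File.open("elelem-0.2.1.gem", "rb") do |io|
  Gem::Package::TarReader.new(io).each do |entry|
    next unless %w[metadata.gz data.tar.gz].include?(entry.full_name)
    puts "#{entry.full_name}: #{Digest::SHA256.hexdigest(entry.read)}"
  end
end
```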
data/CHANGELOG.md CHANGED
@@ -1,5 +1,35 @@
  ## [Unreleased]

+ ## [0.2.1] - 2025-10-15
+
+ ### Fixed
+ - Added missing `exe/llm-ollama` and `exe/llm-openai` files to gemspec
+ - These executables were added in 0.2.0 but not included in the packaged gem
+
+ ## [0.2.0] - 2025-10-15
+
+ ### Added
+ - New `llm-ollama` executable - minimal coding agent with streaming support for Ollama
+ - New `llm-openai` executable - minimal coding agent for OpenAI/compatible APIs
+ - Memory feature for persistent context storage and retrieval
+ - Web fetch tool for retrieving and analyzing web content
+ - Streaming responses with real-time token display
+ - Visual "thinking" progress indicators with dots during reasoning phase
+
+ ### Changed
+ - **BREAKING**: Migrated from custom Net::HTTP implementation to `net-llm` gem
+ - API client now uses `Net::Llm::Ollama` for better reliability and maintainability
+ - Removed direct dependencies on `net-http` and `uri` (now transitive through net-llm)
+ - Maps Ollama's `thinking` field to internal `reasoning` field
+ - Maps Ollama's `done_reason` to internal `finish_reason`
+ - Improved system prompt for better agent behavior
+ - Enhanced error handling and logging
+
+ ### Fixed
+ - Response processing for Ollama's native message format
+ - Tool argument parsing to handle both string and object formats
+ - Safe navigation operator usage to prevent nil errors
+
  ## [0.1.2] - 2025-08-14

  ### Fixed
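The 0.2.0 "Changed" entries above describe remapping Ollama's streaming fields onto the gem's internal names (`thinking` → `reasoning`, `done_reason` → `finish_reason`). A minimal sketch of that kind of mapping; the method name and hash layout are illustrative assumptions, not elelem's actual internals:

```ruby
# Illustrative sketch only: the changelog describes this field remapping,
# but normalize_ollama_chunk and its keys are assumptions, not the gem's API.
def normalize_ollama_chunk(chunk)
  message = chunk["message"] || {}
  {
    "role"          => message["role"],
    "content"       => message["content"],
    "reasoning"     => message["thinking"],   # Ollama's "thinking" field
    "tool_calls"    => message["tool_calls"],
    "finish_reason" => chunk["done_reason"]   # Ollama's "done_reason" field
  }.compact
end
```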
data/README.md CHANGED
@@ -58,100 +58,6 @@ After checking out the repo, run `bin/setup` to install dependencies. Then, run

  To install this gem onto your local machine, run `bundle exec rake install`. To release a new version, update the version number in `version.rb`, and then run `bundle exec rake release`, which will create a git tag for the version, push git commits and the created tag, and push the `.gem` file to [rubygems.org](https://rubygems.org).

- REPL State Diagram
-
- ```
- ┌─────────────────┐
- │ START/INIT │
- └─────────┬───────┘
-
- v
- ┌─────────────────┐
- ┌────▶│ IDLE (Prompt) │◄────┐
- │ │ Shows "> " │ │
- │ └─────────┬───────┘ │
- │ │ │
- │ │ User input │
- │ v │
- │ ┌─────────────────┐ │
- │ │ PROCESSING │ │
- │ │ INPUT │ │
- │ └─────────┬───────┘ │
- │ │ │
- │ │ API call │
- │ v │
- │ ┌─────────────────┐ │
- │ │ STREAMING │ │
- │ ┌──▶│ RESPONSE │─────┤
- │ │ └─────────┬───────┘ │
- │ │ │ │ done=true
- │ │ │ Parse chunk │
- │ │ v │
- │ │ ┌─────────────────┐ │
- │ │ │ MESSAGE TYPE │ │
- │ │ │ ROUTING │ │
- │ │ └─────┬─┬─┬───────┘ │
- │ │ │ │ │ │
- ┌────────┴─┴─────────┘ │ └─────────────┴──────────┐
- │ │ │
- v v v
- ┌─────────────┐ ┌─────────────┐ ┌─────────────┐
- │ THINKING │ │ TOOL │ │ CONTENT │
- │ STATE │ │ EXECUTION │ │ OUTPUT │
- │ │ │ STATE │ │ STATE │
- └─────────────┘ └─────┬───────┘ └─────────────┘
- │ │ │
- │ │ done=false │
- └───────────────────┼──────────────────────────┘
-
- v
- ┌─────────────────┐
- │ CONTINUE │
- │ STREAMING │
- └─────────────────┘
-
- └─────────────────┐
-
- ┌─────────────────┐ │
- │ ERROR STATE │ │
- │ (Exception) │ │
- └─────────────────┘ │
- ▲ │
- │ Invalid response │
- └────────────────────────────┘
-
- EXIT CONDITIONS:
- ┌─────────────────────────┐
- │ • User enters "" │
- │ • User enters "exit" │
- │ • EOF (Ctrl+D) │
- │ • nil input │
- └─────────────────────────┘
-
- v
- ┌─────────────────────────┐
- │ TERMINATE │
- └─────────────────────────┘
- ```
-
- Key Transitions:
-
- 1. IDLE → PROCESSING: User enters any non-empty, non-"exit" input
- 2. PROCESSING → STREAMING: API call initiated to Ollama
- 3. STREAMING → MESSAGE ROUTING: Each chunk received is parsed
- 4. MESSAGE ROUTING → States: Based on message content:
- - thinking → THINKING STATE
- - tool_calls → TOOL EXECUTION STATE
- - content → CONTENT OUTPUT STATE
- - Invalid format → ERROR STATE
- 5. All States → IDLE: When done=true from API response
- 6. TOOL EXECUTION → STREAMING: Sets done=false to continue conversation
- 7. Any State → TERMINATE: On exit conditions
-
- The REPL operates as a continuous loop where the primary flow is IDLE → PROCESSING → STREAMING →
- back to IDLE, with the streaming phase potentially cycling through multiple message types before
- completion.
-
  ## Contributing

  Bug reports and pull requests are welcome on GitHub at https://github.com/xlgmokha/elelem.
data/exe/llm-ollama ADDED
@@ -0,0 +1,358 @@
+ #!/usr/bin/env ruby
+
+ =begin
+ Fast, correct, autonomous - Pick two
+
+ PURPOSE:
+
+ This script is a minimal coding agent written in Ruby. It is intended to
+ assist me (a software engineer and computer science student) with writing,
+ editing, and managing code and text files from the command line. It acts
+ as a direct interface to an LLM, providing it with a simple text-based
+ UI and access to the local filesystem.
+
+ DESIGN PRINCIPLES:
+
+ - Follows the Unix philosophy: simple, composable, minimal.
+ - Convention over configuration.
+ - Avoids unnecessary defensive checks or complexity.
+ - Assumes a mature and responsible LLM that behaves like a capable engineer.
+ - Designed for my workflow and preferences.
+ - Efficient and minimal like aider - https://aider.chat/
+ - UX like Claude Code - https://docs.claude.com/en/docs/claude-code/overview
+
+ SYSTEM ASSUMPTIONS:
+
+ - This script is used on a Linux system with the following tools: Alacritty, tmux, Bash, and Vim.
+ - It is always run inside a Git repository.
+ - All project work is assumed to be version-controlled with Git.
+ - Git is expected to be available and working; no checks are necessary.
+
+ SCOPE:
+
+ - This program operates only on code and plain-text files.
+ - It does not need to support binary files.
+ - The LLM has full access to execute system commands.
+ - There are no sandboxing, permission, or validation layers.
+ - Execution is not restricted or monitored — responsibility is delegated to the LLM.
+
+ CONFIGURATION:
+
+ - Avoid adding configuration options unless absolutely necessary.
+ - Prefer hard-coded values that can be changed later if needed.
+ - Only introduce environment variables after repeated usage proves them worthwhile.
+
+ UI EXPECTATIONS:
+
+ - The TUI must remain simple, fast, and predictable.
+ - No mouse support or complex UI components are required.
+ - Interaction is strictly keyboard-driven.
+
+ CODING STANDARDS FOR LLM:
+
+ - Do not add error handling or logging unless it is essential for functionality.
+ - Keep methods short and single-purpose.
+ - Use descriptive, conventional names.
+ - Stick to Ruby's standard library whenever possible.
+
+ HELPFUL LINKS:
+
+ - https://www.anthropic.com/engineering/effective-context-engineering-for-ai-agents
+ - https://www.anthropic.com/engineering/writing-tools-for-agents
+ - https://simonwillison.net/2025/Sep/30/designing-agentic-loops/
+
+ =end
+
+ require "bundler/inline"
+
+ gemfile do
+   source "https://rubygems.org"
+
+   gem "fileutils", "~> 1.0"
+   gem "json", "~> 2.0"
+   gem "net-llm", "~> 0.4"
+   gem "open3", "~> 0.1"
+   gem "ostruct", "~> 0.1"
+   gem "reline", "~> 0.1"
+   gem "set", "~> 1.0"
+   gem "uri", "~> 1.0"
+ end
+
+ STDOUT.set_encoding(Encoding::UTF_8)
+ STDERR.set_encoding(Encoding::UTF_8)
+
+ OLLAMA_HOST = ENV["OLLAMA_HOST"] || "localhost:11434"
+ OLLAMA_MODEL = ENV["OLLAMA_MODEL"] || "gpt-oss:latest"
+ SYSTEM_PROMPT = "You are a reasoning coding and system agent."
+
+ def build_tool(name, description, properties, required = [])
+   {
+     type: "function",
+     function: {
+       name: name,
+       description: description,
+       parameters: {
+         type: "object",
+         properties: properties,
+         required: required
+       }
+     }
+   }
+ end
+
+ EXEC_TOOL = build_tool("execute", "Execute shell commands. Returns stdout, stderr, and exit code. Use for: checking system state, running tests, managing services. Common Unix tools available: git, bash, grep, etc. Tip: Check exit_status in response to determine success.", { cmd: { type: "string" }, args: { type: "array", items: { type: "string" } }, env: { type: "object", additionalProperties: { type: "string" } }, cwd: { type: "string" }, stdin: { type: "string" } }, ["cmd"])
+ GREP_TOOL = build_tool("grep", "Search all git-tracked files using git grep. Returns file paths with matching line numbers. Use this to discover where code/configuration exists before reading files. Examples: search 'def method_name' to find method definitions. Much faster than reading multiple files.", { query: { type: "string" } }, ["query"])
+ LS_TOOL = build_tool("list", "List all git-tracked files in the repository, optionally filtered by path. Use this to explore project structure or find files in a directory. Returns relative paths from repo root. Tip: Use this before reading if you need to discover what files exist.", { path: { type: "string" } })
+ PATCH_TOOL = build_tool("patch", "Apply a unified diff patch via 'git apply'. Use this for surgical edits to existing files rather than rewriting entire files. Generates proper git diffs. Format: standard unified diff with --- and +++ headers. Tip: More efficient than write for small changes to large files.", { diff: { type: "string" } }, ["diff"])
+ READ_TOOL = build_tool("read", "Read complete contents of a file. Requires exact file path. Use grep or list first if you don't know the path. Best for: understanding existing code, reading config files, reviewing implementation details. Tip: For large files, grep first to confirm relevance.", { path: { type: "string" } }, ["path"])
+ WRITE_TOOL = build_tool("write", "Write complete file contents (overwrites existing files). Creates parent directories automatically. Best for: creating new files, replacing entire file contents. For small edits to existing files, consider using patch instead.", { path: { type: "string" }, content: { type: "string" } }, ["path", "content"])
+
+ TOOLS = {
+   read: [GREP_TOOL, LS_TOOL, READ_TOOL],
+   write: [PATCH_TOOL, WRITE_TOOL],
+   execute: [EXEC_TOOL]
+ }
+
+ trap("INT") do
+   puts "\nExiting."
+   exit
+ end
+
+ def run_exec(command, args: [], env: {}, cwd: Dir.pwd, stdin: nil)
+   stdout, stderr, status = Open3.capture3(env, command, *args, chdir: cwd, stdin_data: stdin)
+   {
+     "exit_status" => status.exitstatus,
+     "stdout" => stdout.to_s,
+     "stderr" => stderr.to_s
+   }
+ end
+
+ def expand_path(path)
+   Pathname.new(path).expand_path
+ end
+
+ def read_file(path)
+   full_path = expand_path(path)
+   full_path.exist? ? { content: full_path.read } : { error: "File not found: #{path}" }
+ end
+
+ def write_file(path, content)
+   full_path = expand_path(path)
+   FileUtils.mkdir_p(full_path.dirname)
+   { bytes_written: full_path.write(content) }
+ end
+
+ def run_tool(name, args)
+   case name
+   when "execute" then run_exec(args["cmd"], args: args["args"] || [], env: args["env"] || {}, cwd: args["cwd"] || Dir.pwd, stdin: args["stdin"])
+   when "grep" then run_exec("git", args: ["grep", "-nI", args["query"]])
+   when "list" then run_exec("git", args: args["path"] ? ["ls-files", "--", args["path"]] : ["ls-files"])
+   when "patch" then run_exec("git", args: ["apply", "--index", "--whitespace=nowarn", "-p1"], stdin: args["diff"])
+   when "read" then read_file(args["path"])
+   when "write" then write_file(args["path"], args["content"])
+   else
+     { error: "Unknown tool", name: name, args: args }
+   end
+ end
+
+ def format_tool_call(name, args)
+   case name
+   when "execute" then "execute(#{args["cmd"]})"
+   when "grep" then "grep(#{args["query"]})"
+   when "list" then "list(#{args["path"] || '.'})"
+   when "patch" then "patch(#{args["diff"].lines.count} lines)"
+   when "read" then "read(#{args["path"]})"
+   when "write" then "write(#{args["path"]})"
+   else
+     "▶ #{name}(#{args.to_s[0...70]})"
+   end
+ end
+
+ def system_prompt_for(mode)
+   base = "You are a reasoning coding and system agent."
+
+   case mode.sort # Symbols sort alphabetically, so :execute sorts before :read and :write
+   when [:read]
+     "#{base}\n\nRead and analyze. Understand before suggesting action."
+   when [:write]
+     "#{base}\n\nWrite clean, thoughtful code."
+   when [:execute]
+     "#{base}\n\nUse shell commands creatively to understand and manipulate the system."
+   when [:read, :write]
+     "#{base}\n\nFirst understand, then build solutions that integrate well."
+   when [:execute, :read]
+     "#{base}\n\nUse commands to deeply understand the system."
+   when [:execute, :write]
+     "#{base}\n\nCreate and execute freely. Have fun. Be kind."
+   when [:execute, :read, :write]
+     "#{base}\n\nYou have all tools. Use them wisely."
+   else
+     base
+   end
+ end
+
+ def tools_for(modes)
+   modes.map { |mode| TOOLS[mode] }.flatten
+ end
+
+ def prune_context(messages, keep_recent: 5)
+   return messages if messages.length <= keep_recent + 1
+
+   default_context + messages.last(keep_recent)
+ end
+
+ def execute_turn(client, messages, tools:)
+   turn_context = []
+
+   loop do
+     content = ""
+     tool_calls = nil
+     role = "assistant"
+     first_content = true
+
+     print "Thinking..."
+     client.chat(messages + turn_context, tools) do |chunk|
+       if chunk["message"]
+         msg = chunk["message"]
+         role = msg["role"] if msg["role"]
+
+         if msg["thinking"] && !msg["thinking"].empty?
+           print "."
+         end
+
+         if msg["content"] && !msg["content"].empty?
+           if first_content
+             print "\r\e[KAssistant> "
+             first_content = false
+           end
+           print msg["content"]
+           $stdout.flush
+           content += msg["content"]
+         end
+
+         tool_calls = msg["tool_calls"] if msg["tool_calls"]
+       end
+     end
+     puts
+
+     turn_context << { role: role, content: content, tool_calls: tool_calls }.compact
+
+     if tool_calls
+       tool_calls.each do |call|
+         name = call.dig("function", "name")
+         args_raw = call.dig("function", "arguments")
+
+         begin
+           args = args_raw.is_a?(String) ? JSON.parse(args_raw) : args_raw
+         rescue JSON::ParserError => e
+           turn_context << {
+             role: "tool",
+             content: JSON.dump({
+               error: "Invalid JSON in arguments: #{e.message}",
+               received: args_raw
+             })
+           }
+           next
+         end
+
+         puts "Tool> #{format_tool_call(name, args)}"
+         result = run_tool(name, args)
+         turn_context << { role: "tool", content: JSON.dump(result) }
+       end
+       next
+     end
+
+     return { role: "assistant", content: content } unless content.strip.empty?
+   end
+ end
+
+ def dump_context(messages)
+   puts JSON.pretty_generate(messages)
+ end
+
+ def print_status(mode, messages)
+   puts "Mode: #{mode.inspect}"
+   puts "Tools: #{tools_for(mode).map { |x| x.dig(:function, :name) }}"
+ end
+
+ def strip_ansi(text)
+   text.gsub(/^Script started.*?\n/, '')
+       .gsub(/\nScript done.*$/, '')
+       .gsub(/\e\[[0-9;]*[a-zA-Z]/, '') # Standard ANSI codes
+       .gsub(/\e\[\?[0-9]+[hl]/, '') # Bracketed paste mode
+       .gsub(/[\b]/, '') # Backspace chars
+       .gsub(/\r/, '') # Carriage returns
+ end
+
+ def start_shell
+   Tempfile.create do |file|
+     system("script -q #{file.path}")
+     { role: "user", content: strip_ansi(File.read(file.path)) }
+   end
+ end
+
+ def ask?(text)
+   input = Reline.readline(text, true)&.strip
+   exit if input.nil? || input.downcase == "exit"
+
+   input
+ end
+
+ def print_help
+   puts <<~HELP
+     /chmod - (+|-)rwx auto build plan
+     /clear
+     /context
+     /exit
+     /help
+     /shell
+     /status
+   HELP
+ end
+
+ def default_context
+   [{ role: "system", content: SYSTEM_PROMPT }]
+ end
+
+ def main
+   client = Net::Llm::Ollama.new(
+     host: OLLAMA_HOST,
+     model: OLLAMA_MODEL
+   )
+
+   messages = default_context
+   mode = Set.new([:read])
+
+   loop do
+     input = ask?("User> ")
+     if input.start_with?("/")
+       case input
+       when "/chmod +r" then mode.add(:read)
+       when "/chmod +w" then mode.add(:write)
+       when "/chmod +x" then mode.add(:execute)
+       when "/chmod -r" then mode.delete(:read)
+       when "/chmod -w" then mode.delete(:write)
+       when "/chmod -x" then mode.delete(:execute)
+       when "/clear" then messages = default_context
+       when "/compact" then messages = prune_context(messages, keep_recent: 10)
+       when "/context" then dump_context(messages)
+       when "/exit" then exit
+       when "/help" then print_help
+       when "/mode auto" then mode = Set[:read, :write, :execute]
+       when "/mode build" then mode = Set[:read, :write]
+       when "/mode plan" then mode = Set[:read]
+       when "/mode verify" then mode = Set[:read, :execute]
+       when "/mode" then print_status(mode, messages)
+       when "/shell" then messages << start_shell
+       else
+         print_help
+       end
+     else
+       messages[0] = { role: "system", content: system_prompt_for(mode) }
+       messages << { role: "user", content: input }
+       messages << execute_turn(client, messages, tools: tools_for(mode))
+     end
+   end
+ end
+
+ main
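For context, the helpers in `exe/llm-ollama` compose into a one-shot call as easily as into the REPL loop above. The following is an illustrative sketch only, not part of the released script; it assumes the constants and methods defined in the file, and the prompt text is invented:

```ruby
# Hypothetical non-interactive use of the helpers defined above:
# send a single prompt with read-only tools and print the reply.
client = Net::Llm::Ollama.new(host: OLLAMA_HOST, model: OLLAMA_MODEL)

messages = default_context
messages[0] = { role: "system", content: system_prompt_for(Set[:read]) }
messages << { role: "user", content: "Summarize what this repository does." }

reply = execute_turn(client, messages, tools: tools_for(Set[:read]))
puts reply[:content]
```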