elelem 0.2.1 → 0.4.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
data/exe/llm-openai DELETED
@@ -1,339 +0,0 @@
- #!/usr/bin/env ruby
-
- =begin
- Fast, correct, autonomous - Pick two
-
- PURPOSE:
-
- This script is a minimal coding agent written in Ruby. It is intended to
- assist me (a software engineer and computer science student) with writing,
- editing, and managing code and text files from the command line. It acts
- as a direct interface to an LLM, providing it with a simple text-based
- UI and access to the local filesystem.
-
- DESIGN PRINCIPLES:
-
- - Follows the Unix philosophy: simple, composable, minimal.
- - Convention over configuration.
- - Avoids unnecessary defensive checks or complexity.
- - Assumes a mature and responsible LLM that behaves like a capable engineer.
- - Designed for my workflow and preferences.
- - Efficient and minimal like aider - https://aider.chat/
- - UX like Claude Code - https://docs.claude.com/en/docs/claude-code/overview
-
- SYSTEM ASSUMPTIONS:
-
- - This script is used on a Linux system with the following tools: Alacritty, tmux, Bash, and Vim.
- - It is always run inside a Git repository.
- - All project work is assumed to be version-controlled with Git.
- - Git is expected to be available and working; no checks are necessary.
-
- SCOPE:
-
- - This program operates only on code and plain-text files.
- - It does not need to support binary files.
- - The LLM has full access to execute system commands.
- - There are no sandboxing, permission, or validation layers.
- - Execution is not restricted or monitored — responsibility is delegated to the LLM.
-
- CONFIGURATION:
-
- - Avoid adding configuration options unless absolutely necessary.
- - Prefer hard-coded values that can be changed later if needed.
- - Only introduce environment variables after repeated usage proves them worthwhile.
-
- UI EXPECTATIONS:
-
- - The TUI must remain simple, fast, and predictable.
- - No mouse support or complex UI components are required.
- - Interaction is strictly keyboard-driven.
-
- CODING STANDARDS FOR LLM:
-
- - Do not add error handling or logging unless it is essential for functionality.
- - Keep methods short and single-purpose.
- - Use descriptive, conventional names.
- - Stick to Ruby's standard library whenever possible.
-
- HELPFUL LINKS:
-
- - https://www.anthropic.com/engineering/effective-context-engineering-for-ai-agents
- - https://www.anthropic.com/engineering/writing-tools-for-agents
- - https://simonwillison.net/2025/Sep/30/designing-agentic-loops/
-
- =end
-
- require "bundler/inline"
-
- gemfile do
-   source "https://rubygems.org"
-
-   gem "fileutils", "~> 1.0"
-   gem "json", "~> 2.0"
-   gem "net-llm", "0.3.1"
-   gem "open3", "~> 0.1"
-   gem "ostruct", "~> 0.1"
-   gem "reline", "~> 0.1"
-   gem "set", "~> 1.0"
-   gem "uri", "~> 1.0"
- end
-
- # Stdlib used below: Pathname in expand_path, Tempfile in start_shell.
- require "pathname"
- require "tempfile"
-
- STDOUT.set_encoding(Encoding::UTF_8)
- STDERR.set_encoding(Encoding::UTF_8)
-
- API_KEY = ENV["OPENAI_API_KEY"] or abort("Set OPENAI_API_KEY")
- SYSTEM_PROMPT = "You are a reasoning coding and system agent."
-
- def build_tool(name, description, properties, required = [])
-   {
-     type: "function",
-     function: {
-       name: name,
-       description: description,
-       parameters: {
-         type: "object",
-         properties: properties,
-         required: required
-       }
-     }
-   }
- end
-
- EXEC_TOOL = build_tool("execute", "Execute shell commands. Returns stdout, stderr, and exit code. Use for: checking system state, running tests, managing services. Common Unix tools available: git, bash, grep, etc. Tip: Check exit_status in response to determine success.", { cmd: { type: "string" }, args: { type: "array", items: { type: "string" } }, env: { type: "object", additionalProperties: { type: "string" } }, cwd: { type: "string" }, stdin: { type: "string" } }, ["cmd"])
- GREP_TOOL = build_tool("grep", "Search all git-tracked files using git grep. Returns file paths with matching line numbers. Use this to discover where code/configuration exists before reading files. Examples: search 'def method_name' to find method definitions. Much faster than reading multiple files.", { query: { type: "string" } }, ["query"])
- LS_TOOL = build_tool("list", "List all git-tracked files in the repository, optionally filtered by path. Use this to explore project structure or find files in a directory. Returns relative paths from repo root. Tip: Use this before reading if you need to discover what files exist.", { path: { type: "string" } })
- PATCH_TOOL = build_tool("patch", "Apply a unified diff patch via 'git apply'. Use this for surgical edits to existing files rather than rewriting entire files. Generates proper git diffs. Format: standard unified diff with --- and +++ headers. Tip: More efficient than write for small changes to large files.", { diff: { type: "string" } }, ["diff"])
- READ_TOOL = build_tool("read", "Read complete contents of a file. Requires exact file path. Use grep or list first if you don't know the path. Best for: understanding existing code, reading config files, reviewing implementation details. Tip: For large files, grep first to confirm relevance.", { path: { type: "string" } }, ["path"])
- WRITE_TOOL = build_tool("write", "Write complete file contents (overwrites existing files). Creates parent directories automatically. Best for: creating new files, replacing entire file contents. For small edits to existing files, consider using patch instead.", { path: { type: "string" }, content: { type: "string" } }, ["path", "content"])
-
- TOOLS = {
-   read: [GREP_TOOL, LS_TOOL, READ_TOOL],
-   write: [PATCH_TOOL, WRITE_TOOL],
-   execute: [EXEC_TOOL]
- }
-
- trap("INT") do
-   puts "\nExiting."
-   exit
- end
-
- def run_exec(command, args: [], env: {}, cwd: Dir.pwd, stdin: nil)
-   stdout, stderr, status = Open3.capture3(env, command, *args, chdir: cwd, stdin_data: stdin)
-   {
-     "exit_status" => status.exitstatus,
-     "stdout" => stdout.to_s,
-     "stderr" => stderr.to_s
-   }
- end
-
- def expand_path(path)
-   Pathname.new(path).expand_path
- end
-
- def read_file(path)
-   full_path = expand_path(path)
-   full_path.exist? ? { content: full_path.read } : { error: "File not found: #{path}" }
- end
-
- def write_file(path, content)
-   full_path = expand_path(path)
-   FileUtils.mkdir_p(full_path.dirname)
-   { bytes_written: full_path.write(content) }
- end
-
- def run_tool(name, args)
-   case name
-   when "execute" then run_exec(args["cmd"], args: args["args"] || [], env: args["env"] || {}, cwd: args["cwd"] || Dir.pwd, stdin: args["stdin"])
-   when "grep" then run_exec("git", args: ["grep", "-nI", args["query"]])
-   when "list" then run_exec("git", args: args["path"] ? ["ls-files", "--", args["path"]] : ["ls-files"])
-   when "patch" then run_exec("git", args: ["apply", "--index", "--whitespace=nowarn", "-p1"], stdin: args["diff"])
-   when "read" then read_file(args["path"])
-   when "write" then write_file(args["path"], args["content"])
-   else
-     { error: "Unknown tool", name: name, args: args }
-   end
- end
-
- def format_tool_call(name, args)
-   case name
-   when "execute" then "execute(#{args["cmd"]})"
-   when "grep" then "grep(#{args["query"]})"
-   when "list" then "list(#{args["path"] || '.'})"
-   when "patch" then "patch(#{args["diff"].lines.count} lines)"
-   when "read" then "read(#{args["path"]})"
-   when "write" then "write(#{args["path"]})"
-   else
-     "▶ #{name}(#{args.to_s[0...70]})"
-   end
- end
-
- def system_prompt_for(mode)
-   base = "You are a reasoning coding and system agent."
-
-   # mode.sort yields the symbols in alphabetical order (:execute, :read, :write),
-   # so the when patterns must be written in that order to match.
-   case mode.sort
-   when [:read]
-     "#{base}\n\nRead and analyze. Understand before suggesting action."
-   when [:write]
-     "#{base}\n\nWrite clean, thoughtful code."
-   when [:execute]
-     "#{base}\n\nUse shell commands creatively to understand and manipulate the system."
-   when [:read, :write]
-     "#{base}\n\nFirst understand, then build solutions that integrate well."
-   when [:execute, :read]
-     "#{base}\n\nUse commands to deeply understand the system."
-   when [:execute, :write]
-     "#{base}\n\nCreate and execute freely. Have fun. Be kind."
-   when [:execute, :read, :write]
-     "#{base}\n\nYou have all tools. Use them wisely."
-   else
-     base
-   end
- end
-
- def tools_for(modes)
-   modes.map { |mode| TOOLS[mode] }.flatten
- end
-
- def prune_context(messages, keep_recent: 5)
-   return messages if messages.length <= keep_recent + 1
-
-   default_context + messages.last(keep_recent)
- end
-
- def execute_turn(client, messages, tools:)
-   turn_context = []
-
-   loop do
-     puts "Thinking..."
-     response = client.chat(messages + turn_context, tools)
-     abort "API Error #{response['code']}: #{response['body']}" if response["code"]
-     message = response.dig("choices", 0, "message")
-     turn_context << message
-
-     if message["tool_calls"]
-       message["tool_calls"].each do |call|
-         name = call.dig("function", "name")
-         # args = JSON.parse(call.dig("function", "arguments"))
-         begin
-           args = JSON.parse(call.dig("function", "arguments"))
-         rescue JSON::ParserError => e
-           # Feed the error back to the LLM as a tool result
-           turn_context << {
-             role: "tool",
-             tool_call_id: call["id"],
-             content: JSON.dump({
-               error: "Invalid JSON in arguments: #{e.message}",
-               received: call.dig("function", "arguments")
-             })
-           }
-           next # Continue the loop, giving the LLM a chance to correct itself
-         end
-
-         puts "Tool> #{format_tool_call(name, args)}"
-         result = run_tool(name, args)
-         turn_context << { role: "tool", tool_call_id: call["id"], content: JSON.dump(result) }
-       end
-       next
-     end
-
-     if message["content"] && !message["content"].strip.empty?
-       puts "\nAssistant>\n#{message['content']}"
-
-       unless message["tool_calls"]
-         return { role: "assistant", content: message["content"] }
-       end
-     end
-   end
- end
-
- def dump_context(messages)
-   puts JSON.pretty_generate(messages)
- end
-
- def print_status(mode, messages)
-   puts "Mode: #{mode.inspect}"
-   puts "Tools: #{tools_for(mode).map { |x| x.dig(:function, :name) }}"
- end
-
- def strip_ansi(text)
-   text.gsub(/^Script started.*?\n/, '')
-       .gsub(/\nScript done.*$/, '')
-       .gsub(/\e\[[0-9;]*[a-zA-Z]/, '') # Standard ANSI codes
-       .gsub(/\e\[\?[0-9]+[hl]/, '')    # Bracketed paste mode
-       .gsub(/[\b]/, '')                # Backspace chars
-       .gsub(/\r/, '')                  # Carriage returns
- end
-
- def start_shell
-   Tempfile.create do |file|
-     system("script -q #{file.path}")
-     { role: "user", content: strip_ansi(File.read(file.path)) }
-   end
- end
-
- def ask?(text)
-   input = Reline.readline(text, true)&.strip
-   exit if input.nil? || input.downcase == "exit"
-
-   input
- end
-
- def print_help
-   puts <<~HELP
-     /chmod (+|-)(r|w|x)
-     /clear
-     /compact
-     /context
-     /exit
-     /help
-     /mode [auto|build|plan|verify]
-     /shell
-   HELP
- end
-
- def default_context
-   [{ role: "system", content: SYSTEM_PROMPT }]
- end
-
- def main
-   client = Net::Llm::OpenAI.new(
-     api_key: API_KEY,
-     base_url: ENV["BASE_URL"] || "https://api.openai.com/v1",
-     model: ENV["MODEL"] || "gpt-4o-mini"
-   )
-
-   messages = default_context
-   mode = Set.new([:read])
-
-   loop do
-     input = ask?("User> ")
-     if input.start_with?("/")
-       case input
-       when "/chmod +r" then mode.add(:read)
-       when "/chmod +w" then mode.add(:write)
-       when "/chmod +x" then mode.add(:execute)
-       when "/chmod -r" then mode.delete(:read)
-       when "/chmod -w" then mode.delete(:write)
-       when "/chmod -x" then mode.delete(:execute)
-       when "/clear" then messages = default_context
-       when "/compact" then messages = prune_context(messages, keep_recent: 10)
-       when "/context" then dump_context(messages)
-       when "/exit" then exit
-       when "/help" then print_help
-       when "/mode auto" then mode = Set[:read, :write, :execute]
-       when "/mode build" then mode = Set[:read, :write]
-       when "/mode plan" then mode = Set[:read]
-       when "/mode verify" then mode = Set[:read, :execute]
-       when "/mode" then print_status(mode, messages)
-       when "/shell" then messages << start_shell
-       else
-         print_help
-       end
-     else
-       messages[0] = { role: "system", content: system_prompt_for(mode) }
-       messages << { role: "user", content: input }
-       messages << execute_turn(client, messages, tools: tools_for(mode))
-     end
-   end
- end
-
- main
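
The core of the deleted script above is an OpenAI-style function-calling loop: each tool is declared with build_tool, and every tool invocation is answered with a role: "tool" message keyed by the call id. The sketch below shows the data shapes involved; the call id and arguments are illustrative, not taken from the script.

    require "json"

    # What build_tool("read", "Read complete contents of a file.",
    #                 { path: { type: "string" } }, ["path"]) expands to.
    read_tool = {
      type: "function",
      function: {
        name: "read",
        description: "Read complete contents of a file.",
        parameters: {
          type: "object",
          properties: { path: { type: "string" } },
          required: ["path"]
        }
      }
    }

    # How execute_turn answers a tool call from the model: one role: "tool"
    # message per call, keyed by that call's id.
    call  = { "id" => "call_1", "function" => { "name" => "read", "arguments" => '{"path":"README.md"}' } }
    args  = JSON.parse(call.dig("function", "arguments"))
    reply = { role: "tool", tool_call_id: call["id"], content: JSON.dump({ content: "..." }) }
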
data/lib/elelem/api.rb DELETED
@@ -1,48 +0,0 @@
- # frozen_string_literal: true
-
- require "net/llm"
-
- module Elelem
-   class Api
-     attr_reader :configuration, :client
-
-     def initialize(configuration)
-       @configuration = configuration
-       @client = Net::Llm::Ollama.new(
-         host: configuration.host,
-         model: configuration.model
-       )
-     end
-
-     def chat(messages, &block)
-       tools = configuration.tools.to_h
-       client.chat(messages, tools) do |chunk|
-         normalized = normalize_ollama_response(chunk)
-         block.call(normalized) if normalized
-       end
-     end
-
-     private
-
-     def normalize_ollama_response(chunk)
-       return done_response(chunk) if chunk["done"]
-
-       normalize_message(chunk["message"])
-     end
-
-     def done_response(chunk)
-       { "done" => true, "finish_reason" => chunk["done_reason"] || "stop" }
-     end
-
-     def normalize_message(message)
-       return nil unless message
-
-       {}.tap do |result|
-         result["role"] = message["role"] if message["role"]
-         result["content"] = message["content"] if message["content"]
-         result["reasoning"] = message["thinking"] if message["thinking"]
-         result["tool_calls"] = message["tool_calls"] if message["tool_calls"]
-       end.then { |r| r.empty? ? nil : r }
-     end
-   end
- end
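
To make the normalization above concrete, here is a sketch of the mapping Api#normalize_ollama_response performs on Ollama streaming chunks; the chunk values are invented for illustration.

    # A content chunk: "thinking" is renamed to "reasoning", empty results are dropped.
    chunk = { "message" => { "role" => "assistant", "content" => "Hi", "thinking" => "..." }, "done" => false }
    # normalizes to:
    # { "role" => "assistant", "content" => "Hi", "reasoning" => "..." }

    # The terminal chunk collapses to a done marker with a finish reason.
    last = { "done" => true, "done_reason" => "stop" }
    # normalizes to:
    # { "done" => true, "finish_reason" => "stop" }
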
@@ -1,84 +0,0 @@
- # frozen_string_literal: true
-
- module Elelem
-   class Configuration
-     attr_reader :host, :model, :token, :debug
-
-     def initialize(host:, model:, token:, debug: false)
-       @host = host
-       @model = model
-       @token = token
-       @debug = debug
-     end
-
-     def tui
-       @tui ||= TUI.new($stdin, $stdout)
-     end
-
-     def api
-       @api ||= Api.new(self)
-     end
-
-     def logger
-       @logger ||= Logger.new("#{Time.now.strftime("%Y-%m-%d")}-elelem.log").tap do |logger|
-         if debug
-           logger.level = :debug
-         else
-           logger.level = ENV.fetch("LOG_LEVEL", "warn")
-         end
-         logger.formatter = ->(severity, datetime, progname, message) {
-           timestamp = datetime.strftime("%H:%M:%S.%3N")
-           "[#{timestamp}] #{severity.ljust(5)} #{message.to_s.strip}\n"
-         }
-       end
-     end
-
-     def conversation
-       @conversation ||= Conversation.new.tap do |conversation|
-         resources = mcp_clients.map do |client|
-           client.resources.map do |resource|
-             resource["uri"]
-           end
-         end.flatten
-         conversation.add(role: :tool, content: resources)
-       end
-     end
-
-     def tools
-       @tools ||= Tools.new(self,
-         [
-           Toolbox::Exec.new(self),
-           Toolbox::File.new(self),
-           Toolbox::Web.new(self),
-           Toolbox::Prompt.new(self),
-           Toolbox::Memory.new(self),
-         ] + mcp_tools
-       )
-     end
-
-     def cleanup
-       @mcp_clients&.each(&:shutdown)
-     end
-
-     private
-
-     def mcp_tools
-       @mcp_tools ||= mcp_clients.map do |client|
-         client.tools.map do |tool|
-           Toolbox::MCP.new(client, tui, tool)
-         end
-       end.flatten
-     end
-
-     def mcp_clients
-       @mcp_clients ||= begin
-         config = Pathname.pwd.join(".mcp.json")
-         return [] unless config.exist?
-
-         JSON.parse(config.read).map do |_key, value|
-           MCPClient.new(self, [value["command"]] + value["args"])
-         end
-       end
-     end
-   end
- end
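
Configuration#mcp_clients above expects a .mcp.json file in the working directory whose top-level keys name MCP servers and whose values carry a command plus an args array. A minimal sketch of such a file, written from Ruby so the example stays in the document's language; the server name and command are illustrative.

    require "json"

    File.write(".mcp.json", JSON.pretty_generate({
      "filesystem" => {
        "command" => "npx",
        "args" => ["-y", "@modelcontextprotocol/server-filesystem", "."]
      }
    }))
    # Each entry becomes MCPClient.new(configuration, ["npx", "-y", ...]).
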
@@ -1,136 +0,0 @@
- # frozen_string_literal: true
-
- module Elelem
-   class MCPClient
-     attr_reader :tools, :resources
-
-     def initialize(configuration, command = [])
-       @configuration = configuration
-       @stdin, @stdout, @stderr, @worker = Open3.popen3(*command, pgroup: true)
-
-       # 1. Send initialize request
-       send_request(
-         method: "initialize",
-         params: {
-           protocolVersion: "2025-06-08",
-           capabilities: {
-             tools: {}
-           },
-           clientInfo: {
-             name: "Elelem",
-             version: Elelem::VERSION
-           }
-         }
-       )
-
-       # 2. Send initialized notification (optional for some MCP servers)
-       send_notification(method: "notifications/initialized")
-
-       # 3. Now we can request tools
-       @tools = send_request(method: "tools/list")&.dig("tools") || []
-       @resources = send_request(method: "resources/list")&.dig("resources") || []
-     end
-
-     def connected?
-       return false unless @worker&.alive?
-       return false unless @stdin && !@stdin.closed?
-       return false unless @stdout && !@stdout.closed?
-
-       begin
-         Process.getpgid(@worker.pid)
-         true
-       rescue Errno::ESRCH
-         false
-       end
-     end
-
-     def call(name, arguments = {})
-       send_request(
-         method: "tools/call",
-         params: {
-           name: name,
-           arguments: arguments
-         }
-       )
-     end
-
-     def shutdown
-       return unless connected?
-
-       configuration.logger.debug("Shutting down MCP client")
-
-       [@stdin, @stdout, @stderr].each do |stream|
-         stream&.close unless stream&.closed?
-       end
-
-       return unless @worker&.alive?
-
-       begin
-         Process.kill("TERM", @worker.pid)
-         # Give it 2 seconds to terminate gracefully
-         Timeout.timeout(2) { @worker.value }
-       rescue Timeout::Error
-         # Force kill if it doesn't respond
-         begin
-           Process.kill("KILL", @worker.pid)
-         rescue StandardError
-           nil
-         end
-       rescue Errno::ESRCH
-         # Process already dead
-       end
-     end
-
-     private
-
-     attr_reader :stdin, :stdout, :stderr, :worker, :configuration
-
-     def send_request(method:, params: {})
-       return {} unless connected?
-
-       request = {
-         jsonrpc: "2.0",
-         id: Time.now.to_i,
-         method: method
-       }
-       request[:params] = params unless params.empty?
-       configuration.logger.debug(JSON.pretty_generate(request))
-
-       @stdin.puts(JSON.generate(request))
-       @stdin.flush
-
-       response_line = @stdout.gets&.strip
-       return {} if response_line.nil? || response_line.empty?
-
-       response = JSON.parse(response_line)
-       configuration.logger.debug(JSON.pretty_generate(response))
-
-       if response["error"]
-         configuration.logger.error(response["error"]["message"])
-         { error: response["error"]["message"] }
-       else
-         response["result"]
-       end
-     end
-
-     def send_notification(method:, params: {})
-       return unless connected?
-
-       notification = {
-         jsonrpc: "2.0",
-         method: method
-       }
-       notification[:params] = params unless params.empty?
-       configuration.logger.debug("Sending notification: #{JSON.pretty_generate(notification)}")
-       @stdin.puts(JSON.generate(notification))
-       @stdin.flush
-
-       response_line = @stdout.gets&.strip
-       return {} if response_line.nil? || response_line.empty?
-
-       response = JSON.parse(response_line)
-       configuration.logger.debug(JSON.pretty_generate(response))
-       response
-     end
-   end
- end
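
MCPClient above speaks newline-delimited JSON-RPC 2.0 over the child process's stdin/stdout. The sketch below just prints the handshake frames it would produce rather than talking to a real server; the ids and client version are illustrative (the code uses Time.now.to_i and Elelem::VERSION), while the protocol version is the one from the code.

    require "json"

    init = {
      jsonrpc: "2.0", id: 1, method: "initialize",
      params: {
        protocolVersion: "2025-06-08",
        capabilities: { tools: {} },
        clientInfo: { name: "Elelem", version: "0.2.1" }
      }
    }
    initialized = { jsonrpc: "2.0", method: "notifications/initialized" } # no id: a notification
    list_tools  = { jsonrpc: "2.0", id: 2, method: "tools/list" }

    # One JSON object per line; the client reads each reply back with gets.
    [init, initialized, list_tools].each { |frame| puts JSON.generate(frame) }
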
@@ -1,23 +0,0 @@
- # frozen_string_literal: true
-
- module Elelem
-   module States
-     class Idle
-       def run(agent)
-         agent.logger.debug("Idling...")
-         agent.tui.say("#{Dir.pwd} (#{agent.model}) [#{git_branch}]", colour: :magenta, newline: true)
-         input = agent.tui.prompt("モ ")
-         agent.quit if input.nil? || input.empty? || input == "exit" || input == "quit"
-
-         agent.conversation.add(role: :user, content: input)
-         agent.transition_to(Working)
-       end
-
-       private
-
-       def git_branch
-         `git branch --no-color --show-current --no-abbrev`.strip
-       end
-     end
-   end
- end
@@ -1,19 +0,0 @@
- # frozen_string_literal: true
-
- module Elelem
-   module States
-     module Working
-       class Error < State
-         def initialize(agent, error_message)
-           super(agent, "X", :red)
-           @error_message = error_message
-         end
-
-         def process(_message)
-           agent.tui.say("\nTool execution failed: #{@error_message}", colour: :red)
-           Waiting.new(agent)
-         end
-       end
-     end
-   end
- end
@@ -1,19 +0,0 @@
- # frozen_string_literal: true
-
- module Elelem
-   module States
-     module Working
-       class Executing < State
-         def process(message)
-           if message["tool_calls"]&.any?
-             message["tool_calls"].each do |tool_call|
-               agent.conversation.add(role: :tool, content: agent.execute(tool_call))
-             end
-           end
-
-           Thinking.new(agent, "*", :yellow)
-         end
-       end
-     end
-   end
- end
@@ -1,26 +0,0 @@
- # frozen_string_literal: true
-
- module Elelem
-   module States
-     module Working
-       class State
-         attr_reader :agent
-
-         def initialize(agent, icon, colour)
-           @agent = agent
-
-           agent.logger.debug("#{display_name}...")
-           agent.tui.show_progress("#{display_name}...", icon, colour: colour)
-         end
-
-         def run(message)
-           process(message)
-         end
-
-         def display_name
-           self.class.name.split("::").last
-         end
-       end
-     end
-   end
- end
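
The Working states above follow a simple hand-off convention: #run delegates to #process, and #process returns the next state object (Executing hands back a Thinking state, Error hands back a Waiting state). A toy sketch of that pattern with stand-in classes, since the real Thinking and Waiting states are not part of this diff:

    # Stand-in states, not the gem's real ones: each #process returns the next state.
    class Collecting
      def run(message) = process(message)
      def process(message) = message.empty? ? Finished.new : Collecting.new
    end

    class Finished
      def run(_message) = self
      def process(_message) = self
    end

    # A driver just keeps reassigning until it reaches a terminal state.
    state = Collecting.new
    state = state.run("tool output")   # => another Collecting
    state = state.run("")              # => Finished
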