elelem 0.1.3 → 0.2.1

This diff shows the content of publicly available package versions released to one of the supported registries, and is provided for informational purposes only.
data/exe/llm-openai ADDED
@@ -0,0 +1,339 @@
+ #!/usr/bin/env ruby
+
+ =begin
+ Fast, correct, autonomous - Pick two
+
+ PURPOSE:
+
+ This script is a minimal coding agent written in Ruby. It is intended to
+ assist me (a software engineer and computer science student) with writing,
+ editing, and managing code and text files from the command line. It acts
+ as a direct interface to an LLM, providing it with a simple text-based
+ UI and access to the local filesystem.
+
+ DESIGN PRINCIPLES:
+
+ - Follows the Unix philosophy: simple, composable, minimal.
+ - Convention over configuration.
+ - Avoids unnecessary defensive checks or complexity.
+ - Assumes a mature and responsible LLM that behaves like a capable engineer.
+ - Designed for my workflow and preferences.
+ - Efficient and minimal, like aider - https://aider.chat/
+ - UX like Claude Code - https://docs.claude.com/en/docs/claude-code/overview
+
+ SYSTEM ASSUMPTIONS:
+
+ - This script is used on a Linux system with the following tools: Alacritty, tmux, Bash, and Vim.
+ - It is always run inside a Git repository.
+ - All project work is assumed to be version-controlled with Git.
+ - Git is expected to be available and working; no checks are necessary.
+
+ SCOPE:
+
+ - This program operates only on code and plain-text files.
+ - It does not need to support binary files.
+ - The LLM has full access to execute system commands.
+ - There are no sandboxing, permission, or validation layers.
+ - Execution is not restricted or monitored; responsibility is delegated to the LLM.
+
+ CONFIGURATION:
+
+ - Avoid adding configuration options unless absolutely necessary.
+ - Prefer hard-coded values that can be changed later if needed.
+ - Only introduce environment variables after repeated usage proves them worthwhile.
+
+ UI EXPECTATIONS:
+
+ - The TUI must remain simple, fast, and predictable.
+ - No mouse support or complex UI components are required.
+ - Interaction is strictly keyboard-driven.
+
+ CODING STANDARDS FOR LLM:
+
+ - Do not add error handling or logging unless it is essential for functionality.
+ - Keep methods short and single-purpose.
+ - Use descriptive, conventional names.
+ - Stick to Ruby's standard library whenever possible.
+
+ HELPFUL LINKS:
+
+ - https://www.anthropic.com/engineering/effective-context-engineering-for-ai-agents
+ - https://www.anthropic.com/engineering/writing-tools-for-agents
+ - https://simonwillison.net/2025/Sep/30/designing-agentic-loops/
+
+ =end
+
+ require "bundler/inline"
+
+ gemfile do
+   source "https://rubygems.org"
+
+   gem "fileutils", "~> 1.0"
+   gem "json", "~> 2.0"
+   gem "net-llm", "0.3.1"
+   gem "open3", "~> 0.1"
+   gem "ostruct", "~> 0.1"
+   gem "reline", "~> 0.1"
+   gem "set", "~> 1.0"
+   gem "uri", "~> 1.0"
+ end
+
+ # Pathname and Tempfile are used below but are not listed in the gemfile,
+ # so require them explicitly.
+ require "pathname"
+ require "tempfile"
+
+ STDOUT.set_encoding(Encoding::UTF_8)
+ STDERR.set_encoding(Encoding::UTF_8)
+
+ API_KEY = ENV["OPENAI_API_KEY"] or abort("Set OPENAI_API_KEY")
+ SYSTEM_PROMPT = "You are a reasoning coding and system agent."
+
+ def build_tool(name, description, properties, required = [])
+   {
+     type: "function",
+     function: {
+       name: name,
+       description: description,
+       parameters: {
+         type: "object",
+         properties: properties,
+         required: required
+       }
+     }
+   }
+ end
+
+ EXEC_TOOL = build_tool("execute", "Execute shell commands. Returns stdout, stderr, and exit code. Use for: checking system state, running tests, managing services. Common Unix tools available: git, bash, grep, etc. Tip: Check exit_status in response to determine success.", { cmd: { type: "string" }, args: { type: "array", items: { type: "string" } }, env: { type: "object", additionalProperties: { type: "string" } }, cwd: { type: "string" }, stdin: { type: "string" } }, ["cmd"])
+ GREP_TOOL = build_tool("grep", "Search all git-tracked files using git grep. Returns file paths with matching line numbers. Use this to discover where code/configuration exists before reading files. Examples: search 'def method_name' to find method definitions. Much faster than reading multiple files.", { query: { type: "string" } }, ["query"])
+ LS_TOOL = build_tool("list", "List all git-tracked files in the repository, optionally filtered by path. Use this to explore project structure or find files in a directory. Returns relative paths from repo root. Tip: Use this before reading if you need to discover what files exist.", { path: { type: "string" } })
+ PATCH_TOOL = build_tool("patch", "Apply a unified diff patch via 'git apply'. Use this for surgical edits to existing files rather than rewriting entire files. Generates proper git diffs. Format: standard unified diff with --- and +++ headers. Tip: More efficient than write for small changes to large files.", { diff: { type: "string" } }, ["diff"])
+ READ_TOOL = build_tool("read", "Read complete contents of a file. Requires exact file path. Use grep or list first if you don't know the path. Best for: understanding existing code, reading config files, reviewing implementation details. Tip: For large files, grep first to confirm relevance.", { path: { type: "string" } }, ["path"])
+ WRITE_TOOL = build_tool("write", "Write complete file contents (overwrites existing files). Creates parent directories automatically. Best for: creating new files, replacing entire file contents. For small edits to existing files, consider using patch instead.", { path: { type: "string" }, content: { type: "string" } }, ["path", "content"])
+
+ TOOLS = {
+   read: [GREP_TOOL, LS_TOOL, READ_TOOL],
+   write: [PATCH_TOOL, WRITE_TOOL],
+   execute: [EXEC_TOOL]
+ }
+
+ trap("INT") do
+   puts "\nExiting."
+   exit
+ end
+
+ def run_exec(command, args: [], env: {}, cwd: Dir.pwd, stdin: nil)
+   stdout, stderr, status = Open3.capture3(env, command, *args, chdir: cwd, stdin_data: stdin)
+   {
+     "exit_status" => status.exitstatus,
+     "stdout" => stdout.to_s,
+     "stderr" => stderr.to_s
+   }
+ end
+
+ def expand_path(path)
+   Pathname.new(path).expand_path
+ end
+
+ def read_file(path)
+   full_path = expand_path(path)
+   full_path.exist? ? { content: full_path.read } : { error: "File not found: #{path}" }
+ end
+
+ def write_file(path, content)
+   full_path = expand_path(path)
+   FileUtils.mkdir_p(full_path.dirname)
+   { bytes_written: full_path.write(content) }
+ end
+
+ def run_tool(name, args)
+   case name
+   when "execute" then run_exec(args["cmd"], args: args["args"] || [], env: args["env"] || {}, cwd: args["cwd"] || Dir.pwd, stdin: args["stdin"])
+   when "grep" then run_exec("git", args: ["grep", "-nI", args["query"]])
+   when "list" then run_exec("git", args: args["path"] ? ["ls-files", "--", args["path"]] : ["ls-files"])
+   when "patch" then run_exec("git", args: ["apply", "--index", "--whitespace=nowarn", "-p1"], stdin: args["diff"])
+   when "read" then read_file(args["path"])
+   when "write" then write_file(args["path"], args["content"])
+   else
+     { error: "Unknown tool", name: name, args: args }
+   end
+ end
+
+ def format_tool_call(name, args)
+   case name
+   when "execute" then "execute(#{args["cmd"]})"
+   when "grep" then "grep(#{args["query"]})"
+   when "list" then "list(#{args["path"] || '.'})"
+   when "patch" then "patch(#{args["diff"].lines.count} lines)"
+   when "read" then "read(#{args["path"]})"
+   when "write" then "write(#{args["path"]})"
+   else
+     "▶ #{name}(#{args.to_s[0...70]})"
+   end
+ end
+
+ def system_prompt_for(mode)
+   base = "You are a reasoning coding and system agent."
+
+   # Set#sort returns the symbols in alphabetical order, so the when
+   # clauses must be listed in that order to match.
+   case mode.sort
+   when [:read]
+     "#{base}\n\nRead and analyze. Understand before suggesting action."
+   when [:write]
+     "#{base}\n\nWrite clean, thoughtful code."
+   when [:execute]
+     "#{base}\n\nUse shell commands creatively to understand and manipulate the system."
+   when [:read, :write]
+     "#{base}\n\nFirst understand, then build solutions that integrate well."
+   when [:execute, :read]
+     "#{base}\n\nUse commands to deeply understand the system."
+   when [:execute, :write]
+     "#{base}\n\nCreate and execute freely. Have fun. Be kind."
+   when [:execute, :read, :write]
+     "#{base}\n\nYou have all tools. Use them wisely."
+   else
+     base
+   end
+ end
+
+ def tools_for(modes)
+   modes.map { |mode| TOOLS[mode] }.flatten
+ end
+
+ def prune_context(messages, keep_recent: 5)
+   return messages if messages.length <= keep_recent + 1
+
+   default_context + messages.last(keep_recent)
+ end
+
+ def execute_turn(client, messages, tools:)
+   turn_context = []
+
+   loop do
+     puts "Thinking..."
+     response = client.chat(messages + turn_context, tools)
+     abort "API Error #{response['code']}: #{response['body']}" if response["code"]
+     message = response.dig("choices", 0, "message")
+     turn_context << message
+
+     if message["tool_calls"]
+       message["tool_calls"].each do |call|
+         name = call.dig("function", "name")
+         begin
+           args = JSON.parse(call.dig("function", "arguments"))
+         rescue JSON::ParserError => e
+           # Feed the error back to the LLM as a tool result
+           turn_context << {
+             role: "tool",
+             tool_call_id: call["id"],
+             content: JSON.dump({
+               error: "Invalid JSON in arguments: #{e.message}",
+               received: call.dig("function", "arguments")
+             })
+           }
+           next # Continue the loop, giving the LLM a chance to correct itself
+         end
+
+         puts "Tool> #{format_tool_call(name, args)}"
+         result = run_tool(name, args)
+         turn_context << { role: "tool", tool_call_id: call["id"], content: JSON.dump(result) }
+       end
+       next
+     end
+
+     if message["content"] && !message["content"].strip.empty?
+       puts "\nAssistant>\n#{message['content']}"
+
+       unless message["tool_calls"]
+         return { role: "assistant", content: message["content"] }
+       end
+     end
+   end
+ end
+
+ def dump_context(messages)
+   puts JSON.pretty_generate(messages)
+ end
+
+ def print_status(mode, messages)
+   puts "Mode: #{mode.inspect}"
+   puts "Tools: #{tools_for(mode).map { |x| x.dig(:function, :name) }}"
+ end
+
+ def strip_ansi(text)
+   text.gsub(/^Script started.*?\n/, '')
+       .gsub(/\nScript done.*$/, '')
+       .gsub(/\e\[[0-9;]*[a-zA-Z]/, '') # Standard ANSI codes
+       .gsub(/\e\[\?[0-9]+[hl]/, '')    # Bracketed paste mode
+       .gsub(/[\b]/, '')                # Backspace chars
+       .gsub(/\r/, '')                  # Carriage returns
+ end
+
+ def start_shell
+   Tempfile.create do |file|
+     system("script -q #{file.path}")
+     { role: "user", content: strip_ansi(File.read(file.path)) }
+   end
+ end
+
+ def ask?(text)
+   input = Reline.readline(text, true)&.strip
+   exit if input.nil? || input.downcase == "exit"
+
+   input
+ end
+
+ def print_help
+   puts <<~HELP
+     /chmod (+|-)(r|w|x)
+     /clear
+     /compact
+     /context
+     /exit
+     /help
+     /mode [auto|build|plan|verify]
+     /shell
+   HELP
+ end
+
+ def default_context
+   [{ role: "system", content: SYSTEM_PROMPT }]
+ end
+
+ def main
+   client = Net::Llm::OpenAI.new(
+     api_key: API_KEY,
+     base_url: ENV["BASE_URL"] || "https://api.openai.com/v1",
+     model: ENV["MODEL"] || "gpt-4o-mini"
+   )
+
+   messages = default_context
+   mode = Set.new([:read])
+
+   loop do
+     input = ask?("User> ")
+     if input.start_with?("/")
+       case input
+       when "/chmod +r" then mode.add(:read)
+       when "/chmod +w" then mode.add(:write)
+       when "/chmod +x" then mode.add(:execute)
+       when "/chmod -r" then mode.delete(:read)
+       when "/chmod -w" then mode.delete(:write)
+       when "/chmod -x" then mode.delete(:execute)
+       when "/clear" then messages = default_context
+       when "/compact" then messages = prune_context(messages, keep_recent: 10)
+       when "/context" then dump_context(messages)
+       when "/exit" then exit
+       when "/help" then print_help
+       when "/mode auto" then mode = Set[:read, :write, :execute]
+       when "/mode build" then mode = Set[:read, :write]
+       when "/mode plan" then mode = Set[:read]
+       when "/mode verify" then mode = Set[:read, :execute]
+       when "/mode" then print_status(mode, messages)
+       when "/shell" then messages << start_shell
+       else
+         print_help
+       end
+     else
+       messages[0] = { role: "system", content: system_prompt_for(mode) }
+       messages << { role: "user", content: input }
+       messages << execute_turn(client, messages, tools: tools_for(mode))
+     end
+   end
+ end
+
+ main
data/lib/elelem/agent.rb CHANGED
@@ -20,27 +20,33 @@ module Elelem
 def repl
 loop do
 current_state.run(self)
+ sleep 0.1
 end
 end

 def transition_to(next_state)
- logger.debug("Transition to: #{next_state.class.name}")
+ if @current_state
+ logger.info("AGENT: #{@current_state.class.name.split('::').last} -> #{next_state.class.name.split('::').last}")
+ else
+ logger.info("AGENT: Starting in #{next_state.class.name.split('::').last}")
+ end
 @current_state = next_state
 end

 def execute(tool_call)
- logger.debug("Execute: #{tool_call}")
- configuration.tools.execute(tool_call)
+ tool_name = tool_call.dig("function", "name")
+ logger.debug("TOOL: Full call - #{tool_call}")
+ result = configuration.tools.execute(tool_call)
+ logger.debug("TOOL: Result (#{result.length} chars)") if result
+ result
 end

 def quit
- logger.debug("Exiting...")
 cleanup
 exit
 end

 def cleanup
- logger.debug("Cleaning up agent...")
 configuration.cleanup
 end
data/lib/elelem/api.rb CHANGED
@@ -1,35 +1,48 @@
 # frozen_string_literal: true

+ require "net/llm"
+
 module Elelem
 class Api
- attr_reader :configuration
+ attr_reader :configuration, :client

 def initialize(configuration)
 @configuration = configuration
+ @client = Net::Llm::Ollama.new(
+ host: configuration.host,
+ model: configuration.model
+ )
 end

 def chat(messages, &block)
- body = {
- messages: messages,
- model: configuration.model,
- stream: true,
- keep_alive: "5m",
- options: { temperature: 0.1 },
- tools: configuration.tools.to_h
- }
- configuration.logger.debug(JSON.pretty_generate(body))
- json_body = body.to_json
-
- req = Net::HTTP::Post.new(configuration.uri)
- req["Content-Type"] = "application/json"
- req.body = json_body
- req["Authorization"] = "Bearer #{configuration.token}" if configuration.token
-
- configuration.http.request(req) do |response|
- raise response.inspect unless response.code == "200"
-
- response.read_body(&block)
+ tools = configuration.tools.to_h
+ client.chat(messages, tools) do |chunk|
+ normalized = normalize_ollama_response(chunk)
+ block.call(normalized) if normalized
 end
 end
+
+ private
+
+ def normalize_ollama_response(chunk)
+ return done_response(chunk) if chunk["done"]
+
+ normalize_message(chunk["message"])
+ end
+
+ def done_response(chunk)
+ { "done" => true, "finish_reason" => chunk["done_reason"] || "stop" }
+ end
+
+ def normalize_message(message)
+ return nil unless message
+
+ {}.tap do |result|
+ result["role"] = message["role"] if message["role"]
+ result["content"] = message["content"] if message["content"]
+ result["reasoning"] = message["thinking"] if message["thinking"]
+ result["tool_calls"] = message["tool_calls"] if message["tool_calls"]
+ end.then { |r| r.empty? ? nil : r }
+ end
 end
 end
@@ -11,13 +11,6 @@ module Elelem
 @debug = debug
 end

- def http
- @http ||= Net::HTTP.new(uri.host, uri.port).tap do |h|
- h.read_timeout = 3_600
- h.open_timeout = 10
- end
- end
-
 def tui
 @tui ||= TUI.new($stdin, $stdout)
 end
@@ -27,15 +20,19 @@ module Elelem
 end

 def logger
- @logger ||= Logger.new(debug ? "elelem.log" : "/dev/null").tap do |logger|
- logger.formatter = ->(_, _, _, message) { "#{message.to_s.strip}\n" }
+ @logger ||= Logger.new("#{Time.now.strftime("%Y-%m-%d")}-elelem.log").tap do |logger|
+ if debug
+ logger.level = :debug
+ else
+ logger.level = ENV.fetch("LOG_LEVEL", "warn")
+ end
+ logger.formatter = ->(severity, datetime, progname, message) {
+ timestamp = datetime.strftime("%H:%M:%S.%3N")
+ "[#{timestamp}] #{severity.ljust(5)} #{message.to_s.strip}\n"
+ }
 end
 end

- def uri
- @uri ||= URI("#{scheme}://#{host}/api/chat")
- end
-
 def conversation
 @conversation ||= Conversation.new.tap do |conversation|
 resources = mcp_clients.map do |client|
@@ -48,7 +45,15 @@ module Elelem
 end

 def tools
- @tools ||= Tools.new(self, [Toolbox::Bash.new(self)] + mcp_tools)
+ @tools ||= Tools.new(self,
+ [
+ Toolbox::Exec.new(self),
+ Toolbox::File.new(self),
+ Toolbox::Web.new(self),
+ Toolbox::Prompt.new(self),
+ Toolbox::Memory.new(self),
+ ] + mcp_tools
+ )
 end

 def cleanup
@@ -57,10 +62,6 @@ module Elelem

 private

- def scheme
- host.match?(/\A(?:localhost|127\.0\.0\.1|0\.0\.0\.0)(:\d+)?\z/) ? "http" : "https"
- end
-
 def mcp_tools
 @mcp_tools ||= mcp_clients.map do |client|
 client.tools.map do |tool|
@@ -11,7 +11,7 @@ module Elelem
 end
 end

- Waiting.new(agent)
+ Thinking.new(agent, "*", :yellow)
 end
 end
 end
@@ -5,8 +5,8 @@ module Elelem
 module Working
 class Thinking < State
 def process(message)
- if message["thinking"] && !message["thinking"]&.empty?
- agent.tui.say(message["thinking"], colour: :gray, newline: false)
+ if message["reasoning"] && !message["reasoning"]&.empty?
+ agent.tui.say(message["reasoning"], colour: :gray, newline: false)
 self
 else
 Waiting.new(agent).process(message)
@@ -9,13 +9,13 @@ module Elelem
 end

 def process(message)
- state_for(message)&.process(message)
+ state_for(message)&.process(message) || self
 end

 private

 def state_for(message)
- if message["thinking"] && !message["thinking"].empty?
+ if message["reasoning"] && !message["reasoning"].empty?
 Thinking.new(agent, "*", :yellow)
 elsif message["tool_calls"]&.any?
 Executing.new(agent, ">", :magenta)
@@ -5,28 +5,49 @@ module Elelem
 module Working
 class << self
 def run(agent)
- done = false
 state = Waiting.new(agent)

 loop do
- agent.api.chat(agent.conversation.history) do |chunk|
- response = JSON.parse(chunk)
- message = normalize(response["message"] || {})
- done = response["done"]
+ streaming_done = false
+ finish_reason = nil

- agent.logger.debug("#{state.display_name}: #{message}")
- state = state.run(message)
+ agent.api.chat(agent.conversation.history) do |message|
+ if message["done"]
+ streaming_done = true
+ next
+ end
+
+ if message["finish_reason"]
+ finish_reason = message["finish_reason"]
+ agent.logger.debug("Working: finish_reason = #{finish_reason}")
+ end
+
+ new_state = state.run(message)
+ if new_state.class != state.class
+ agent.logger.info("STATE: #{state.display_name} -> #{new_state.display_name}")
+ end
+ state = new_state
 end

- break if state.nil?
- break if done && agent.conversation.history.last[:role] != :tool
+ # Only exit when task is actually complete, not just streaming done
+ if finish_reason == "stop"
+ agent.logger.debug("Working: Task complete, exiting to Idle")
+ break
+ elsif finish_reason == "tool_calls"
+ agent.logger.debug("Working: Tool calls finished, continuing conversation")
+ # Continue loop to process tool results
+ elsif streaming_done && finish_reason.nil?
+ agent.logger.debug("Working: Streaming done but no finish_reason, continuing")
+ # Continue for cases where finish_reason comes in separate chunk
+ end
 end

 agent.transition_to(States::Idle.new)
- end
-
- def normalize(message)
- message.reject { |_key, value| value.empty? }
+ rescue StandardError => e
+ agent.logger.error(e)
+ agent.conversation.add(role: :tool, content: e.message)
+ agent.tui.say(e.message, colour: :red, newline: true)
+ agent.transition_to(States::Idle.new)
 end
 end
 end
@@ -1,7 +1,18 @@
- **Shell Master** bash>code; compose>write; pipe everything; /proc/sys native; automate fast; streams/transforms; POSIX+GNU; man(1) first; no cleverness.
+ You are a self-improving coding assistant with the ability to modify your own source code.

- Time: `<%= Time.now.strftime("%Y-%m-%d %H:%M:%S") %>`
- Project Directory: `<%= Dir.pwd %>`
- System Info: `<%= `uname -a`.strip %>`
+ **Core Behavior**: Think via `thinking`, Execute via `tool_calls`, Talk via `content`.

- Ready to hack.
+ **Your Mission**: Continuously evolve to match the intelligence of advanced AI assistants.
+
+ **Guidance**: Read `COMMANDMENTS.md` for self-improvement principles using `file action=read path=COMMANDMENTS.md`
+
+ **Tools**:
+ - `exec command="shell command"` - Run commands/tests
+ - `file action=read|write path=filepath content=...` - Read/write files
+ - `web action=search|fetch query=... url=...` - Internet access
+ - `memory action=store|retrieve|search key=... content=...` - Persistent memory
+ - `prompt question="..."` - Ask user questions
+
+ Context: <%= Time.now.strftime("%Y-%m-%d %H:%M:%S") %> | <%= Dir.pwd %> | <%= `uname -a`.strip %>
+
+ Focus on the user's request and continuously improve your capabilities.
@@ -2,15 +2,18 @@

 module Elelem
 module Toolbox
- class Bash < ::Elelem::Tool
+ class Exec < ::Elelem::Tool
 attr_reader :tui

 def initialize(configuration)
 @tui = configuration.tui
- super("bash", "Run commands in /bin/bash -c. Full access to filesystem, network, processes, and all Unix tools.", {
+ super("exec", "Execute shell commands with pipe support", {
 type: "object",
 properties: {
- command: { type: "string" }
+ command: {
+ type: "string",
+ description: "Shell command to execute (supports pipes, redirects, etc.)"
+ }
 },
 required: ["command"]
 })
@@ -20,7 +23,8 @@ module Elelem
 command = args["command"]
 output_buffer = []

- Open3.popen3("/bin/bash", "-c", command) do |stdin, stdout, stderr, wait_thread|
+ tui.say(command, newline: true)
+ Open3.popen3(command) do |stdin, stdout, stderr, wait_thread|
 stdin.close
 streams = [stdout, stderr]