hey-cli-python-1.0.0.tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
@@ -0,0 +1,21 @@
1
+ MIT License
2
+
3
+ Copyright (c) 2026 Mohit S.
4
+
5
+ Permission is hereby granted, free of charge, to any person obtaining a copy
6
+ of this software and associated documentation files (the "Software"), to deal
7
+ in the Software without restriction, including without limitation the rights
8
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9
+ copies of the Software, and to permit persons to whom the Software is
10
+ furnished to do so, subject to the following conditions:
11
+
12
+ The above copyright notice and this permission notice shall be included in all
13
+ copies or substantial portions of the Software.
14
+
15
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21
+ SOFTWARE.
@@ -0,0 +1,97 @@
1
+ Metadata-Version: 2.4
2
+ Name: hey-cli-python
3
+ Version: 1.0.0
4
+ Summary: A secure, zero-bloat CLI companion that turns natural language and error logs into executable commands.
5
+ Author: Mohit S.
6
+ Project-URL: Homepage, https://github.com/sinsniwal/hey-cli
7
+ Project-URL: Repository, https://github.com/sinsniwal/hey-cli
8
+ Project-URL: Issues, https://github.com/sinsniwal/hey-cli/issues
9
+ Keywords: cli,llm,bash,terminal,ollama,sysadmin
10
+ Classifier: Development Status :: 5 - Production/Stable
11
+ Classifier: Environment :: Console
12
+ Classifier: Intended Audience :: Developers
13
+ Classifier: License :: OSI Approved :: MIT License
14
+ Classifier: Operating System :: MacOS
15
+ Classifier: Operating System :: POSIX :: Linux
16
+ Classifier: Operating System :: Microsoft :: Windows
17
+ Classifier: Programming Language :: Python :: 3.9
18
+ Classifier: Programming Language :: Python :: 3.10
19
+ Classifier: Programming Language :: Python :: 3.11
20
+ Classifier: Programming Language :: Python :: 3.12
21
+ Requires-Python: >=3.9
22
+ Description-Content-Type: text/markdown
23
+ License-File: LICENSE
24
+ Requires-Dist: pydantic>=2.0.0
25
+ Requires-Dist: ollama>=0.1.0
26
+ Requires-Dist: rich>=13.0.0
27
+ Dynamic: license-file
28
+
29
+ <div align="center">
30
+ <h1>hey-cli 🤖</h1>
31
+ <p><strong>A zero-bloat, privacy-first, locally-hosted CLI agent powered by Ollama.</strong></p>
32
+
33
+ <img src="https://img.shields.io/badge/Python-3.9+-blue.svg" alt="Python Version" />
34
+ <img src="https://img.shields.io/badge/License-MIT-green.svg" alt="License" />
35
+ <img src="https://img.shields.io/badge/Ollama-Local-orange.svg" alt="Ollama Local" />
36
+ </div>
37
+
38
+ <br>
39
+
40
+ `hey` isn't just an LLM wrapper. It's a context-aware system agent that bridges the gap between human language and POSIX shell utilities, running natively on your host machine.
41
+
42
+ > Ask it to parse your error logs, debug Docker, clear DNS caches, or execute complex file maneuvers—all while executing safely behind a dynamic zero-trust governance matrix.
43
+
44
+ ## 🚀 Why `hey-cli` over Copilot/ChatGPT?
45
+ 1. **Total Privacy**: Your code and system logs never leave your machine. All context gathering and reasoning happens locally via [Ollama](https://ollama.com).
46
+ 2. **True Cross-Platform Skills**: Under the hood, `hey-cli`'s "Skills Engine" detects if you're on macOS (BSD), Ubuntu (GNU), Windows (PowerShell), or Arch Linux, and actively refuses to generate incompatible flags like `xargs -d` on Mac.
47
+ 3. **Agentic Execution**: Ask "is docker running?" and `hey` will silently execute `docker info` in the background, read the stdout, analyze it, and return a plain English answer.
48
+ 4. **Security Governance**: Built-in AST-level parsing. Safe commands (like `git status`) auto-run. Destructive commands (`rm -rf`, `-exec delete`) require explicit typed confirmation.
49
+
50
+ ## 📦 Installation
51
+
52
+ **Prerequisite:** You must have [Python 3.9+](https://www.python.org/downloads/) installed.
53
+
54
+ ### macOS & Linux
55
+ Paste this snippet into your terminal to auto-install `pipx` and `ollama`, pull the required language model, and install `hey-cli`.
56
+ ```bash
57
+ curl -sL https://raw.githubusercontent.com/sinsniwal/hey-cli/main/install.sh | bash
58
+ ```
59
+
60
+ ### Windows (PowerShell)
61
+ Paste this into your PowerShell terminal:
62
+ ```powershell
63
+ Invoke-WebRequest -Uri "https://raw.githubusercontent.com/sinsniwal/hey-cli/main/install.ps1" -OutFile "$env:TEMP\hey_install.ps1"; & "$env:TEMP\hey_install.ps1"
64
+ ```
65
+
66
+ ## 🛠️ Usage
67
+
68
+ Simply type `hey` followed by your objective.
69
+
70
+ **Context-Gathering (Zero-Trust)**
71
+ ```bash
72
+ hey is my docker daemon running?
73
+ ```
74
+ *`hey` will silently run `systemctl is-active docker` or `docker info`, see that it failed to connect to the socket, and explain the situation.*
75
+
76
+ **Execution (Governance Protected)**
77
+ ```bash
78
+ hey forcefully delete all .pyc files
79
+ ```
80
+ *`hey` parses the generated `find . -name "*.pyc" -exec rm -f {} +` command, detects `rm` and `-exec` triggers, and pauses execution until you explicitly type `rm` to authorize.*
81
+
82
+ **Debugging Logs**
83
+ ```bash
84
+ npm run build 2>&1 | hey what is causing this webpack error?
85
+ ```
86
+
87
+ ## 🛡️ Governance Matrix
88
+ Safety is a first-class citizen. `hey-cli` maintains a local governance database (`~/.hey-rules.json`):
89
+ - **Never List**: Things like `rm -rf /` and `mkfs` are permanently blocked before execution.
90
+ - **Explicit Confirm**: High-risk ops (`truncate`, `drop`, `rm`) require typing exact keyword verification.
91
+ - **Y/N Confirm**: Moderate risk ops requiring a quick `y`.
92
+ - **Allowed List**: Safe diagnostics like `cat`, `ls`, `grep` auto-run natively.
93
+
94
+ ## 🤝 Adding OS Skills
95
+ Is `hey` generating incorrect shell semantics for your niche operating system?
96
+
97
+ You can make it permanently smarter without touching code! Simply open `hey_cli/skills/` and create a markdown file for your OS containing explicit English instructions (e.g., "Do not use apt on Alpine, use apk add"). The engine dynamically parses these `.md` rule files at runtime. Pull requests are welcome!
@@ -0,0 +1,69 @@
1
+ <div align="center">
2
+ <h1>hey-cli 🤖</h1>
3
+ <p><strong>A zero-bloat, privacy-first, locally-hosted CLI agent powered by Ollama.</strong></p>
4
+
5
+ <img src="https://img.shields.io/badge/Python-3.9+-blue.svg" alt="Python Version" />
6
+ <img src="https://img.shields.io/badge/License-MIT-green.svg" alt="License" />
7
+ <img src="https://img.shields.io/badge/Ollama-Local-orange.svg" alt="Ollama Local" />
8
+ </div>
9
+
10
+ <br>
11
+
12
+ `hey` isn't just an LLM wrapper. It's a context-aware system agent that bridges the gap between human language and POSIX shell utilities, running natively on your host machine.
13
+
14
+ > Ask it to parse your error logs, debug Docker, clear DNS caches, or execute complex file maneuvers—all while executing safely behind a dynamic zero-trust governance matrix.
15
+
16
+ ## 🚀 Why `hey-cli` over Copilot/ChatGPT?
17
+ 1. **Total Privacy**: Your code and system logs never leave your machine. All context gathering and reasoning happens locally via [Ollama](https://ollama.com).
18
+ 2. **True Cross-Platform Skills**: Under the hood, `hey-cli`'s "Skills Engine" detects if you're on macOS (BSD), Ubuntu (GNU), Windows (PowerShell), or Arch Linux, and actively refuses to generate incompatible flags like `xargs -d` on Mac.
19
+ 3. **Agentic Execution**: Ask "is docker running?" and `hey` will silently execute `docker info` in the background, read the stdout, analyze it, and return a plain English answer.
20
+ 4. **Security Governance**: Built-in AST-level parsing. Safe commands (like `git status`) auto-run. Destructive commands (`rm -rf`, `-exec delete`) require explicit typed confirmation.
21
+
22
+ ## 📦 Installation
23
+
24
+ **Prerequisite:** You must have [Python 3.9+](https://www.python.org/downloads/) installed.
25
+
26
+ ### macOS & Linux
27
+ Paste this snippet into your terminal to auto-install `pipx` and `ollama`, pull the required language model, and install `hey-cli`.
28
+ ```bash
29
+ curl -sL https://raw.githubusercontent.com/sinsniwal/hey-cli/main/install.sh | bash
30
+ ```
31
+
32
+ ### Windows (PowerShell)
33
+ Paste this into your PowerShell terminal:
34
+ ```powershell
35
+ Invoke-WebRequest -Uri "https://raw.githubusercontent.com/sinsniwal/hey-cli/main/install.ps1" -OutFile "$env:TEMP\hey_install.ps1"; & "$env:TEMP\hey_install.ps1"
36
+ ```
37
+
38
+ ## 🛠️ Usage
39
+
40
+ Simply type `hey` followed by your objective.
41
+
42
+ **Context-Gathering (Zero-Trust)**
43
+ ```bash
44
+ hey is my docker daemon running?
45
+ ```
46
+ *`hey` will silently run `systemctl is-active docker` or `docker info`, see that it failed to connect to the socket, and explain the situation.*
47
+
48
+ **Execution (Governance Protected)**
49
+ ```bash
50
+ hey forcefully delete all .pyc files
51
+ ```
52
+ *`hey` parses the generated `find . -name "*.pyc" -exec rm -f {} +` command, detects `rm` and `-exec` triggers, and pauses execution until you explicitly type `rm` to authorize.*
53
+
54
+ **Debugging Logs**
55
+ ```bash
56
+ npm run build 2>&1 | hey what is causing this webpack error?
57
+ ```
58
+
59
+ ## 🛡️ Governance Matrix
60
+ Safety is a first-class citizen. `hey-cli` maintains a local governance database (`~/.hey-rules.json`):
61
+ - **Never List**: Things like `rm -rf /` and `mkfs` are permanently blocked before execution.
62
+ - **Explicit Confirm**: High-risk ops (`truncate`, `drop`, `rm`) require typing exact keyword verification.
63
+ - **Y/N Confirm**: Moderate risk ops requiring a quick `y`.
64
+ - **Allowed List**: Safe diagnostics like `cat`, `ls`, `grep` auto-run natively.
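The four tiers above map directly onto keys of `~/.hey-rules.json`. A minimal sketch of that file, mirroring the defaults the package ships with (`DEFAULT_RULES` in the governance module):

```json
{
  "config": { "default_level": 1, "model": "gpt-oss:20b-cloud" },
  "never": ["rm -rf /", "mkfs", "dropdb", "chmod -R 777"],
  "require_confirmation": ["docker run", "docker build", "npm publish", "git push", "kubectl delete"],
  "allowed": ["ls", "cat", "pwd", "grep", "find", "echo", "tail"],
  "high_risk_keywords": ["reset", "delete", "drop", "truncate", "prune", "rm", "-exec", ">"]
}
```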
65
+
66
+ ## 🤝 Adding OS Skills
67
+ Is `hey` generating incorrect shell semantics for your niche operating system?
68
+
69
+ You can make it permanently smarter without touching code! Simply open `hey_cli/skills/` and create a markdown file for your OS containing explicit English instructions (e.g., "Do not use apt on Alpine, use apk add"). The engine dynamically parses these `.md` rule files at runtime. Pull requests are welcome!
@@ -0,0 +1 @@
1
+ __version__ = "1.0.0"
@@ -0,0 +1,81 @@
1
+ import argparse
2
+ import sys
3
+ import os
4
+
5
+ from .governance import GovernanceEngine
6
+ from .llm import generate_command
7
+ from .history import HistoryManager
8
+ from .runner import CommandRunner
9
+ from rich.console import Console
10
+
11
+ console = Console()
12
+
13
+ def main():
14
+ parser = argparse.ArgumentParser(
15
+ description="hey-cli: a secure, zero-bloat CLI companion.",
16
+ formatter_class=argparse.RawTextHelpFormatter
17
+ )
18
+
19
+ parser.add_argument("objective", nargs="*", help="Goal or task description")
20
+ parser.add_argument("--level", type=int, choices=[0, 1, 2, 3], default=None,
21
+ help="0: Dry-Run\n1: Supervised (Default)\n2: Unrestricted (Danger)\n3: Troubleshooter")
22
+ parser.add_argument("--init", action="store_true", help="Initialize ~/.hey-rules.json")
23
+ parser.add_argument("--clear", action="store_true", help="Clear conversational memory history")
24
+ parser.add_argument("--check-cache", type=str, help="Check local cache for instant fix")
25
+
26
+ args = parser.parse_args()
27
+
28
+ gov = GovernanceEngine()
29
+ history_mgr = HistoryManager()
30
+
31
+ if args.clear:
32
+ history_mgr.clear()
33
+ console.print("[dim]Conversational history wiped clean.[/dim]")
34
+ sys.exit(0)
35
+
36
+ active_level = args.level
37
+ if active_level is None:
38
+ active_level = gov.rules.get("config", {}).get("default_level", 1)
39
+
40
+ model_name = gov.rules.get("config", {}).get("model", "gpt-oss:20b-cloud")
41
+
42
+ if args.init:
43
+ if gov.init_rules():
44
+ console.print(f"Initialized security rules at {gov.rules_path}")
45
+ else:
46
+ console.print(f"Rules already exist at {gov.rules_path}")
47
+ sys.exit(0)
48
+
49
+ if args.check_cache:
50
+ sys.exit(0)
51
+
52
+ piped_data = ""
53
+ if not sys.stdin.isatty():
54
+ try:
55
+ piped_data = sys.stdin.read()
56
+ sys.stdin = open('/dev/tty')
57
+ except Exception:
58
+ pass
59
+
60
+ objective = " ".join(args.objective).strip()
61
+ if not objective and not piped_data:
62
+ parser.print_help()
63
+ sys.exit(1)
64
+
65
+ # Build complete user message for saving later
66
+ user_prompt = objective
67
+ if piped_data:
68
+ user_prompt += f"\n\n[Piped Data]:\n{piped_data}"
69
+
70
+ console.print("[bold yellow]●[/bold yellow] Thinking...")
71
+ past_messages = history_mgr.load()
72
+ response = generate_command(objective, context=piped_data, model_name=model_name, history=past_messages)
73
+
74
+ # Save the user query to history IMMEDIATELY
75
+ history_mgr.append("user", user_prompt)
76
+
77
+ runner = CommandRunner(governance=gov, level=active_level, model_name=model_name, history_mgr=history_mgr)
78
+ runner.execute_flow(response, objective)
79
+
80
+ if __name__ == "__main__":
81
+ main()
@@ -0,0 +1,85 @@
1
+ import json
2
+ import os
3
+ from pathlib import Path
4
+
5
+ DEFAULT_RULES = {
6
+ "config": {
7
+ "default_level": 1,
8
+ "model": "gpt-oss:20b-cloud"
9
+ },
10
+ "never": [
11
+ "rm -rf /",
12
+ "mkfs",
13
+ "dropdb",
14
+ "chmod -R 777"
15
+ ],
16
+ "require_confirmation": [
17
+ "docker run",
18
+ "docker build",
19
+ "npm publish",
20
+ "git push",
21
+ "kubectl delete"
22
+ ],
23
+ "allowed": [
24
+ "ls", "cat", "pwd", "grep", "find", "echo", "tail"
25
+ ],
26
+ "high_risk_keywords": [
27
+ "reset", "delete", "drop", "truncate", "prune", "rm", "-exec", ">"
28
+ ]
29
+ }
30
+
31
+ class Action(str):
32
+ BLOCKED = "blocked"
33
+ EXPLICIT_CONFIRM = "explicit_confirm"
34
+ YN_CONFIRM = "yn_confirm"
35
+ PROCEED = "proceed"
36
+
37
+ class GovernanceEngine:
38
+ def __init__(self, rules_path: str = "~/.hey-rules.json"):
39
+ self.rules_path = Path(os.path.expanduser(rules_path))
40
+ self.rules = self._load()
41
+
42
+ def _load(self):
43
+ if not self.rules_path.exists():
44
+ return DEFAULT_RULES
45
+ with open(self.rules_path, "r") as f:
46
+ return json.load(f)
47
+
48
+ def init_rules(self):
49
+ if not self.rules_path.exists():
50
+ with open(self.rules_path, "w") as f:
51
+ json.dump(DEFAULT_RULES, f, indent=2)
52
+ return True
53
+ return False
54
+
55
+ def evaluate(self, command: str) -> tuple[str, str]:
56
+ """
57
+ Evaluate a command against the security matrix.
58
+ Returns a tuple: (Action, Reason/Keyword)
59
+ """
60
+ if not command:
61
+ return Action.BLOCKED, "Empty command"
62
+
63
+ # 1. Never List
64
+ for never_cmd in self.rules.get("never", []):
65
+ if never_cmd in command:
66
+ return Action.BLOCKED, f"Matches never list: {never_cmd}"
67
+
68
+ # 2. High Risk
69
+ for hr_kw in self.rules.get("high_risk_keywords", []):
70
+ if hr_kw in command:
71
+ return Action.EXPLICIT_CONFIRM, hr_kw
72
+
73
+ # 3. Require Confirmation
74
+ for req_cmd in self.rules.get("require_confirmation", []):
75
+ if req_cmd in command:
76
+ return Action.YN_CONFIRM, req_cmd
77
+
78
+ # 4. Allowed List
79
+ for allow_cmd in self.rules.get("allowed", []):
80
+ # Check if it's the start of the command (e.g. 'ls -la')
81
+ if command.strip().startswith(allow_cmd):
82
+ return Action.PROCEED, "Allowed command"
83
+
84
+ # Default behavior: if not explicitly allowed, still require y/N confirm for safety
85
+ return Action.YN_CONFIRM, "Command not explicitly in allowed list"
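The precedence `evaluate` applies can be condensed into a dependency-free sketch: never-list beats high-risk keywords, which beat y/N confirmation, which beats the allow-list. The rule values are copied from `DEFAULT_RULES` above, and the matching is the same plain substring/prefix check the engine uses:

```python
# Condensed sketch of GovernanceEngine.evaluate's precedence order.
RULES = {
    "never": ["rm -rf /", "mkfs", "dropdb", "chmod -R 777"],
    "high_risk_keywords": ["delete", "drop", "truncate", "rm", "-exec", ">"],
    "require_confirmation": ["docker run", "git push", "kubectl delete"],
    "allowed": ["ls", "cat", "pwd", "grep", "find", "echo", "tail"],
}

def evaluate(command: str) -> tuple[str, str]:
    if not command:
        return ("blocked", "Empty command")
    for never_cmd in RULES["never"]:          # 1. Never list
        if never_cmd in command:
            return ("blocked", f"Matches never list: {never_cmd}")
    for kw in RULES["high_risk_keywords"]:    # 2. High risk
        if kw in command:
            return ("explicit_confirm", kw)
    for req in RULES["require_confirmation"]: # 3. y/N confirmation
        if req in command:
            return ("yn_confirm", req)
    if any(command.strip().startswith(ok) for ok in RULES["allowed"]):
        return ("proceed", "Allowed command")  # 4. Allow list
    return ("yn_confirm", "Command not explicitly in allowed list")
```

Note the trade-off in the substring approach: because `>` is a high-risk keyword, any command containing a redirect escalates to explicit confirmation, which is safe but occasionally noisy.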
@@ -0,0 +1,32 @@
1
+ import json
2
+ import os
3
+ from pathlib import Path
4
+
5
+ class HistoryManager:
6
+ def __init__(self, path: str = "~/.hey-history.json", max_messages: int = 15):
7
+ self.path = Path(os.path.expanduser(path))
8
+ self.max_messages = max_messages
9
+
10
+ def load(self) -> list:
11
+ if not self.path.exists():
12
+ return []
13
+ try:
14
+ with open(self.path, "r") as f:
15
+ return json.load(f)
16
+ except Exception:
17
+ return []
18
+
19
+ def append(self, role: str, content: str):
20
+ history = self.load()
21
+ history.append({"role": role, "content": content})
22
+
23
+ # Enforce rolling window
24
+ if len(history) > self.max_messages:
25
+ history = history[-self.max_messages:]
26
+
27
+ with open(self.path, "w") as f:
28
+ json.dump(history, f, indent=2)
29
+
30
+ def clear(self):
31
+ if self.path.exists():
32
+ self.path.unlink()
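The rolling window in `append` can be exercised in isolation. A sketch of the same reload-append-truncate-rewrite cycle (the temp path is illustrative):

```python
import json
import tempfile
from pathlib import Path

# Sketch of HistoryManager's rolling window: each append reloads the file,
# adds the new message, and keeps only the newest max_messages entries.
def append_with_window(path: Path, role: str, content: str, max_messages: int = 15) -> list:
    history = json.loads(path.read_text()) if path.exists() else []
    history.append({"role": role, "content": content})
    history = history[-max_messages:]  # enforce the rolling window
    path.write_text(json.dumps(history, indent=2))
    return history

# Twenty appends against a cap of fifteen leave only the last fifteen messages.
path = Path(tempfile.mkdtemp()) / "hey-history.json"
for i in range(20):
    history = append_with_window(path, "user", f"msg {i}")
```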
@@ -0,0 +1,130 @@
1
+ import json
2
+ import os
3
+ import platform
4
+ import ollama
5
+ from .models import CommandResponse, TroubleshootResponse
6
+
7
+ DEFAULT_MODEL = "gpt-oss:20b-cloud"
8
+
9
+ SYSTEM_PROMPT = r"""You are hey-cli, an autonomous, minimalist CLI companion and terminal expert.
10
+ Your primary goal is to turn natural language objectives and error logs into actionable shell commands.
11
+ Your user intends to execute the command you provide.
12
+ Do NOT output markdown blocks or conversational text outside the required JSON schema.
13
+ Only output valid JSON matching the requested schema exactly.
14
+ You MUST provide "command", "explanation", and "needs_context" fields in your JSON output.
15
+ WARNING: Ensure any quotes inside your command (e.g. echo 'text') are single quotes, or properly escaped double quotes, to maintain valid JSON string structure.
16
+ CRITICAL PARSING RULE: If the user provides a specific filename, directory name, string, or port, you MUST preserve it EXACTLY as written. Do not autocorrect spelling, abbreviate, or drop extensions (e.g., if asked to make 'temporarily', do not output 'temporay').
17
+
18
+ IMPORTANT AGENTIC INSTRUCTION:
19
+ If the user asks ANY question about their system state, files, or environment (e.g., "is docker running?", "what is my IP?", "explain this folder"), you MUST set `needs_context = true` and target a bash command to silently gather the data.
20
+ ONLY set `needs_context = false` when you are providing the FINAL answer.
21
+ If your final answer is an explanation or simply answering a question, leave the `command` field empty `""` and put a high-quality Markdown response in the `explanation` field. Do NOT write bash `echo` or `printf` statements.
22
+ If your final answer requires an action to be ran (e.g., "start docker", "delete the folder"), put the executable bash string in `command`.
23
+ CRITICAL JSON REQUIREMENT: If your bash command contains any backslashes (e.g. for regex like `\.` or escaping spaces), you MUST double-escape them (`\\\\.`) so the output remains valid JSON!
24
+ """
25
+
26
+ from .skills import get_compiled_skills
27
+
28
+ def get_system_context() -> str:
29
+ os_name = platform.system()
30
+ os_release = platform.release()
31
+ arch = platform.machine()
32
+ shell = os.environ.get("SHELL", "unknown")
33
+
34
+ skills_block = f"\n\n{get_compiled_skills()}"
35
+
36
+ return f"Operating System: {os_name} {os_release} ({arch})\nCurrent Shell: {shell}{skills_block}"
37
+
38
+ TROUBLESHOOT_PROMPT = r"""You are acting as an iterative troubleshooter.
39
+ You will be provided with an objective, the previous commands attempted, and the stdout/stderr.
40
+ Determine the next command to run to resolve the issue, OR if the issue is resolved, indicate it.
41
+ Keep your explanation brief and chill. If a file or tests do not exist, do not try to aggressively brute-force create configurations. Just explain the situation and set is_resolved=True to gracefully stop.
42
+ """
43
+
44
+ def generate_command(prompt: str, context: str = "", model_name: str = DEFAULT_MODEL, history: list = None) -> CommandResponse:
45
+ content = prompt
46
+ if context:
47
+ content = f"Context (e.g. error logs or piped data):\n{context}\n\nObjective:\n{prompt}"
48
+
49
+ try:
50
+ sys_context = f"--- ENVIRONMENT ---\n{get_system_context()}\n-------------------\n"
51
+ msgs = [{"role": "system", "content": SYSTEM_PROMPT + "\n\n" + sys_context}]
52
+ if history:
53
+ msgs.extend(history)
54
+ msgs.append({"role": "user", "content": content})
55
+
56
+ response = ollama.chat(
57
+ model=model_name,
58
+ messages=msgs,
59
+ format="json",
60
+ options={"temperature": 0.0}
61
+ )
62
+
63
+ content = response["message"]["content"]
64
+ # In case ollama returns markdown format code block for JSON
65
+ if content.startswith("```json"):
66
+ content = content[7:-3].strip()
67
+ elif content.startswith("```"):
68
+ content = content[3:-3].strip()
69
+
70
+ data = json.loads(content)
71
+ return CommandResponse(**data)
72
+ except Exception as e:
73
+ raw_val = content if 'content' in locals() else "None"
74
+
75
+ # Check if the error was caused by a safety refusal schema validation failure
76
+ if "refusal" in raw_val.lower() or "sorry" in raw_val.lower():
77
+ return CommandResponse(
78
+ command="",
79
+ explanation=f"LLM Safety Trigger: The model refused to generate this command.\n\nRaw output: {raw_val.strip()}",
80
+ needs_context=False
81
+ )
82
+
83
+ # Fallback empty block on failure
84
+ return CommandResponse(
85
+ command="",
86
+ explanation=f"Error generating command from LLM: {str(e)}\nRaw Output:\n{raw_val}"
87
+ )
88
+
89
+ def generate_troubleshoot_step(objective: str, history: list, model_name: str = DEFAULT_MODEL) -> TroubleshootResponse:
90
+ history_text = "\n".join([
91
+ f"Cmd: {h['cmd']}\nExit: {h['exit_code']}\nOut/Err:\n{h['output']}"
92
+ for h in history
93
+ ])
94
+
95
+ content = f"Objective:\n{objective}\n\nHistory of execution:\n{history_text}\n\nAnalyze the specific error and provide the NEXT logical command to test or fix. Re-read logs carefully."
96
+
97
+ try:
98
+ sys_context = f"--- ENVIRONMENT ---\n{get_system_context()}\n-------------------\n"
99
+ response = ollama.chat(
100
+ model=model_name,
101
+ messages=[
102
+ {"role": "system", "content": SYSTEM_PROMPT + "\n" + TROUBLESHOOT_PROMPT + "\n\n" + sys_context},
103
+ {"role": "user", "content": content}
104
+ ],
105
+ format="json",
106
+ options={"temperature": 0.0}
107
+ )
108
+
109
+ content = response["message"]["content"].strip()
110
+ if not content:
111
+ return TroubleshootResponse(
112
+ command=None,
113
+ explanation="LLM returned empty JSON object.",
114
+ is_resolved=False
115
+ )
116
+
117
+ if content.startswith("```json"):
118
+ content = content[7:-3].strip()
119
+ elif content.startswith("```"):
120
+ content = content[3:-3].strip()
121
+
122
+ data = json.loads(content)
123
+ return TroubleshootResponse(**data)
124
+ except Exception as e:
125
+ raw_val = content if 'content' in locals() else "None"
126
+ return TroubleshootResponse(
127
+ command=None,
128
+ explanation=f"Error analyzing execution: {str(e)}\nRaw Output:\n{raw_val}",
129
+ is_resolved=False
130
+ )
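The fence-stripping fallback applied before `json.loads` in both `generate_command` and `generate_troubleshoot_step` can be factored into one small helper. This sketch mirrors the slicing above, which assumes a wrapped answer also ends with a closing fence:

```python
# Some models wrap their JSON answer in a markdown code block despite
# format="json"; strip the leading ```json / ``` and the trailing ```.
def strip_json_fences(content: str) -> str:
    content = content.strip()
    if content.startswith("```json"):
        content = content[7:-3].strip()  # drop "```json" and trailing "```"
    elif content.startswith("```"):
        content = content[3:-3].strip()
    return content
```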
@@ -0,0 +1,12 @@
1
+ from pydantic import BaseModel, ConfigDict
2
+ from typing import Optional
3
+
4
+ class CommandResponse(BaseModel):
5
+ command: str
6
+ explanation: str = ""
7
+ needs_context: bool = False
8
+
9
+ class TroubleshootResponse(BaseModel):
10
+ command: Optional[str] = None
11
+ explanation: str = ""
12
+ is_resolved: bool = False
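A dependency-free stand-in for the models above, showing how a typical LLM JSON payload maps onto the response fields and their defaults (`dataclass` replaces pydantic's `BaseModel` purely for illustration; it skips pydantic's type coercion and validation):

```python
import json
from dataclasses import dataclass

# Mirrors CommandResponse: "command" is required, the rest have defaults.
@dataclass
class CommandResponse:
    command: str
    explanation: str = ""
    needs_context: bool = False

# An illustrative payload in the shape the system prompt requests.
raw = '{"command": "docker info", "explanation": "Check daemon status", "needs_context": true}'
resp = CommandResponse(**json.loads(raw))
```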
@@ -0,0 +1,195 @@
1
+ import subprocess
2
+ import sys
3
+ from typing import Optional
4
+
5
+ from .governance import GovernanceEngine, Action
6
+ from .llm import CommandResponse, TroubleshootResponse, generate_troubleshoot_step, generate_command
7
+ from rich.console import Console
8
+ from rich.markdown import Markdown
9
+
10
+ class CommandRunner:
11
+ def __init__(self, governance: GovernanceEngine, level: int = 1, model_name: str = "gpt-oss:20b-cloud", history_mgr=None):
12
+ self.gov = governance
13
+ self.level = level
14
+ self.model_name = model_name
15
+ self.history_mgr = history_mgr
16
+ self.console = Console()
17
+
18
+ def run_command(self, cmd: str) -> tuple[int, str]:
19
+ """Executes a command and returns exit code and combined output."""
20
+ try:
21
+ result = subprocess.run(
22
+ cmd,
23
+ shell=True,
24
+ stdout=subprocess.PIPE,
25
+ stderr=subprocess.STDOUT,
26
+ text=True
27
+ )
28
+ return result.returncode, result.stdout
29
+ except Exception as e:
30
+ return -1, str(e)
31
+
32
+ def _prompt_user(self, prompt: str) -> bool:
33
+ try:
34
+ sys.stdout.write(f"{prompt} ")
35
+ sys.stdout.flush()
36
+ ans = sys.stdin.readline().strip().lower()
37
+ return ans in ('y', 'yes')
38
+ except KeyboardInterrupt:
39
+ print("\nAborted.")
40
+ return False
41
+
42
+ def _prompt_exact(self, prompt: str, expected_match: str) -> bool:
43
+ try:
44
+ sys.stdout.write(f"\033[93m{prompt}\033[0m ")
45
+ sys.stdout.flush()
46
+ ans = sys.stdin.readline().strip()
47
+ return ans == expected_match
48
+ except KeyboardInterrupt:
49
+ print("\nAborted.")
50
+ return False
51
+
52
+ def _check_governance(self, cmd: str) -> bool:
53
+ """Run command through governance and output proper CLI prompts."""
54
+ action, reason = self.gov.evaluate(cmd)
55
+
56
+ if action == Action.BLOCKED:
57
+ self.console.print(f"[bold red]● [BLOCKED][/bold red] {reason}")
58
+ return False
59
+
60
+ elif action == Action.EXPLICIT_CONFIRM:
61
+ self.console.print(f"[bold yellow]● [WARNING] High risk command detected![/bold yellow]")
62
+ if not self._prompt_exact(f"Type '{reason}' to confirm execution:", reason):
63
+ self.console.print("[dim]Confirmation failed. Aborted.[/dim]")
64
+ return False
65
+ return True
66
+
67
+ elif action == Action.YN_CONFIRM:
68
+ if self.level >= 2:
69
+ if self.level == 2:
70
+ self.console.print(f"[dim]● Auto-approving (Level {self.level}): {reason}[/dim]")
71
+ return True
72
+ if not self._prompt_user("Execute this command? [y/N]:"):
73
+ self.console.print("[dim]Aborted.[/dim]")
74
+ return False
75
+ return True
76
+
77
+ elif action == Action.PROCEED:
78
+ # Completely silent for PROCEED
79
+ return True
80
+
81
+ return False
82
+
83
+ def execute_flow(self, initial_response: CommandResponse, original_objective: str):
84
+ current_response = initial_response
85
+ current_context = ""
86
+
87
+ # Micro-Agent Context Gathering Loop (Active for Level 1, 2)
88
+ if self.level in (1, 2):
89
+ iteration = 0
90
+ executed_commands = set()
91
+ while current_response.needs_context and iteration < 5:
92
+ cmd_stripped = current_response.command.strip()
93
+ if cmd_stripped in executed_commands:
94
+ self.console.print(f"[bold red]●[/bold red] Loop detected on: [bold]{cmd_stripped}[/bold]. Forcing final answer.")
95
+ current_context += f"\n\n[System]: You already ran '{cmd_stripped}'. Stop gathering context and provide the final answer with needs_context=false."
96
+ current_response = generate_command(original_objective, context=current_context, model_name=self.model_name)
97
+ iteration += 1
98
+ continue
99
+
100
+ executed_commands.add(cmd_stripped)
101
+ self.console.print(f"[bold cyan]●[/bold cyan] [bold]{current_response.command}[/bold]")
102
+ if not self._check_governance(current_response.command):
103
+ self.console.print(f"[bold red]●[/bold red] Context gathering blocked by governance. Reverting to manual answer.")
104
+ break
105
+
106
+ code, out = self.run_command(current_response.command)
107
+ clean_out = out.strip()
108
+ if len(clean_out) > 5000:
109
+ clean_out = clean_out[:5000] + "\n...[Output Truncated]"
110
+
111
+ current_context += f"\n\n[Output of {current_response.command}]:\n{clean_out}"
112
+
113
+ self.console.print("[bold dim]● Analyzing output...[/bold dim]")
114
+ current_response = generate_command(original_objective, context=current_context, model_name=self.model_name)
115
+ iteration += 1
116
+
117
+ cmd = current_response.command.strip() if current_response.command else ""
118
+
119
+ # Save final outcome to history
120
+ if self.history_mgr:
121
+ if not cmd or cmd.startswith("echo ") or cmd.startswith("printf "):
122
+ self.history_mgr.append("assistant", current_response.explanation)
123
+ else:
124
+ self.history_mgr.append("assistant", current_response.model_dump_json())
125
+
126
+ # Level 0 = Dry Run
127
+ if self.level == 0:
128
+ self.console.print("[dim]● Dry run (Level 0). Exiting without execution.[/dim]")
129
+ return
130
+
131
+ if not cmd:
132
+ self.console.print("\n[bold green]● TASK RESULT:[/bold green]")
133
+ self.console.print(Markdown(current_response.explanation))
134
+ return
135
+
136
+ # Action mode (command generated)
137
+ if self.level != 3:
138
+ if current_response.explanation:
139
+ self.console.print("\n[bold green]● TASK RESULT:[/bold green]")
140
+ self.console.print(Markdown(current_response.explanation))
141
+ self.console.print(f"Command: [bold yellow]{cmd}[/bold yellow]\n")
142
+
143
+ # Levels 1 and 2
144
+ if self.level in (1, 2):
145
+ if self._check_governance(cmd):
146
+ self.console.print(f"[bold green]● Running:[/bold green] {cmd}")
147
+ code, out = self.run_command(cmd)
148
+ if out.strip():
149
+ print(out.strip())
150
+ sys.exit(code)
151
+ else:
152
+ sys.exit(1)
153
+
154
+ # Level 3 = Iterative Troubleshooter
155
+ if self.level == 3:
156
+ history = []
157
+ current_cmd = cmd
158
+
159
+ print("\033[96m[hey]\033[0m Iterative Troubleshooter Started:")
160
+
161
+ tr = None
162
+ for iteration in range(1, 6): # Max 5 iterations
163
+ if not self._check_governance(current_cmd):
164
+ print(" \033[91m-> Blocked by governance.\033[0m")
165
+ break
166
+
167
+ print(f" \033[93mStep {iteration}\033[0m: \033[1m{current_cmd}\033[0m", end=" ")
168
+ code, out = self.run_command(current_cmd)
169
+
170
+ clean_out = out.strip()
171
+ if clean_out:
172
+ print(f"\n \033[90m> {clean_out[:200]}{'...' if len(clean_out) > 200 else ''}\033[0m")
173
+ else:
174
+ print(f" \033[92m(done)\033[0m")
175
+
176
+ history.append({
177
+ "cmd": current_cmd,
178
+ "exit_code": code,
179
+ "output": clean_out
180
+ })
181
+
182
+ tr = generate_troubleshoot_step(original_objective, history, model_name=self.model_name)
183
+ if tr.is_resolved:
184
+ print(f"\n\033[92m[hey] {tr.explanation}\033[0m")
185
+ sys.exit(0)
186
+
187
+ current_cmd = tr.command
188
+ if not current_cmd:
189
+ print(f"\n\033[92m[hey] {tr.explanation}\033[0m")
190
+ sys.exit(0)
191
+
192
+ if tr:
193
+ print(f"\n\033[93m[hey] Pausing after 5 steps. Final assessment:\033[0m")
194
+ print(f"\033[92m{tr.explanation}\033[0m")
195
+ sys.exit(0)
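The `run_command` helper above is small enough to lift out and exercise on its own: `shell=True` so pipes and redirects work, stderr folded into stdout, returning `(exit_code, combined_output)`:

```python
import subprocess

# Same shape as CommandRunner.run_command: combined output, -1 on failure.
def run_command(cmd: str) -> tuple[int, str]:
    try:
        result = subprocess.run(
            cmd,
            shell=True,
            stdout=subprocess.PIPE,
            stderr=subprocess.STDOUT,  # merge stderr into stdout
            text=True,
        )
        return result.returncode, result.stdout
    except Exception as e:
        return -1, str(e)

code, out = run_command("echo hello")
```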
@@ -0,0 +1,115 @@
+ """
+ Modular skills registry for hey-cli.
+ Dynamically loads OS-specific and universal shell heuristics from markdown files
+ in the skills/ directory and injects them into the LLM system prompt.
+
+ Add new skills by creating/editing .md files in hey_cli/skills/:
+ - shell.md → Universal rules (applied on ALL platforms)
+ - darwin.md → macOS specific
+ - ubuntu_debian.md → Ubuntu/Debian specific
+ - fedora_rhel.md → Fedora/RHEL/CentOS specific
+ - arch_linux.md → Arch/Manjaro specific
+ - windows_powershell.md → Windows PowerShell
+ - windows_wsl.md → WSL specific
+ - alpine.md → Alpine Linux specific
+ - freebsd.md → FreeBSD specific
+ - opensuse.md → openSUSE/SLES specific
+ - chromeos.md → ChromeOS Crostini specific
+ """
+
+ import platform
+ from pathlib import Path
+
+ # Map platform.system() output to skill file basenames
+ OS_SKILL_MAP = {
+     "Darwin": ["darwin"],
+     "Linux": [],  # Resolved dynamically below based on distro detection
+     "Windows": ["windows_powershell"],
+     "FreeBSD": ["freebsd"],
+ }
+
+ # Linux distro detection → skill file mapping
+ LINUX_DISTRO_MAP = {
+     "ubuntu": "ubuntu_debian",
+     "debian": "ubuntu_debian",
+     "pop": "ubuntu_debian",  # Pop!_OS
+     "mint": "ubuntu_debian",  # Linux Mint
+     "fedora": "fedora_rhel",
+     "rhel": "fedora_rhel",
+     "centos": "fedora_rhel",
+     "rocky": "fedora_rhel",  # Rocky Linux
+     "alma": "fedora_rhel",  # AlmaLinux
+     "arch": "arch_linux",
+     "manjaro": "arch_linux",
+     "endeavouros": "arch_linux",
+     "alpine": "alpine",
+     "opensuse": "opensuse",
+     "sles": "opensuse",
+     "suse": "opensuse",
+     "chromeos": "chromeos",
+ }
+
+ SKILLS_DIR = Path(__file__).parent / "skills"
+
+
+ def _detect_linux_distro() -> str:
+     """Detect Linux distribution from /etc/os-release."""
+     try:
+         with open("/etc/os-release", "r") as f:
+             content = f.read().lower()
+             for key, skill_file in LINUX_DISTRO_MAP.items():
+                 if key in content:
+                     return skill_file
+     except FileNotFoundError:
+         pass
+
+     # Fallback: check for WSL
+     try:
+         with open("/proc/version", "r") as f:
+             if "microsoft" in f.read().lower():
+                 return "windows_wsl"
+     except FileNotFoundError:
+         pass
+
+     # Default to ubuntu_debian as most common
+     return "ubuntu_debian"
+
+
+ def _load_skill_file(filename: str) -> str:
+     """Load a skill markdown file and return its content as plain text."""
+     filepath = SKILLS_DIR / f"{filename}.md"
+     if filepath.exists():
+         return filepath.read_text(encoding="utf-8").strip()
+     return ""
+
+
+ def get_compiled_skills() -> str:
+     """
+     Compiles a formatted string of applicable operational skills
+     based on host OS and distro. Always includes shell.md (universal).
+     """
+     os_name = platform.system()
+
+     # Always load universal shell skills
+     sections = []
+     shell_content = _load_skill_file("shell")
+     if shell_content:
+         sections.append(shell_content)
+
+     # Load OS-specific skills
+     os_files = OS_SKILL_MAP.get(os_name, [])
+
+     # For Linux, detect the specific distro
+     if os_name == "Linux":
+         distro_file = _detect_linux_distro()
+         os_files = [distro_file]
+
+     for skill_file in os_files:
+         content = _load_skill_file(skill_file)
+         if content:
+             sections.append(content)
+
+     if not sections:
+         return ""
+
+     return "### OPERATIONAL SKILLS & HEURISTICS\n\n" + "\n\n---\n\n".join(sections)
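The compiled prompt section is a header followed by horizontal-rule-separated sheets. A self-contained sketch of that assembly, with inline strings standing in for the contents of `skills/shell.md` and `skills/darwin.md` (the real files ship with the package; these strings are made up):

```python
# Inline stand-ins for skill files loaded by _load_skill_file().
sections = [
    "## Universal Shell Skills\nAlways quote variable expansions.",
    "## macOS (Darwin)\nBSD userland: use `sed -i ''`; `xargs -d` does not exist.",
]

# Same join as get_compiled_skills(): a header, then sections split by rules.
compiled = "### OPERATIONAL SKILLS & HEURISTICS\n\n" + "\n\n---\n\n".join(sections)
print(compiled)
```

Because the separator is plain markdown (`---`), the resulting string can be injected verbatim into an LLM system prompt without further escaping.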
@@ -0,0 +1,97 @@
+ Metadata-Version: 2.4
+ Name: hey-cli-python
+ Version: 1.0.0
+ Summary: A secure, zero-bloat CLI companion that turns natural language and error logs into executable commands.
+ Author: Mohit S.
+ Project-URL: Homepage, https://github.com/sinsniwal/hey-cli
+ Project-URL: Repository, https://github.com/sinsniwal/hey-cli
+ Project-URL: Issues, https://github.com/sinsniwal/hey-cli/issues
+ Keywords: cli,llm,bash,terminal,ollama,sysadmin
+ Classifier: Development Status :: 5 - Production/Stable
+ Classifier: Environment :: Console
+ Classifier: Intended Audience :: Developers
+ Classifier: License :: OSI Approved :: MIT License
+ Classifier: Operating System :: MacOS
+ Classifier: Operating System :: POSIX :: Linux
+ Classifier: Operating System :: Microsoft :: Windows
+ Classifier: Programming Language :: Python :: 3.9
+ Classifier: Programming Language :: Python :: 3.10
+ Classifier: Programming Language :: Python :: 3.11
+ Classifier: Programming Language :: Python :: 3.12
+ Requires-Python: >=3.9
+ Description-Content-Type: text/markdown
+ License-File: LICENSE
+ Requires-Dist: pydantic>=2.0.0
+ Requires-Dist: ollama>=0.1.0
+ Requires-Dist: rich>=13.0.0
+ Dynamic: license-file
+
+ <div align="center">
+   <h1>hey-cli 🤖</h1>
+   <p><strong>A zero-bloat, privacy-first, locally-hosted CLI agent powered by Ollama.</strong></p>
+
+   <img src="https://img.shields.io/badge/Python-3.9+-blue.svg" alt="Python Version" />
+   <img src="https://img.shields.io/badge/License-MIT-green.svg" alt="License" />
+   <img src="https://img.shields.io/badge/Ollama-Local-orange.svg" alt="Ollama Local" />
+ </div>
+
+ <br>
+
+ `hey` isn't just an LLM wrapper. It's a context-aware system agent designed to bridge the gap between human language and POSIX shell utilities natively on your host machine.
+
+ > Ask it to parse your error logs, debug Docker, clear DNS caches, or execute complex file maneuvers—all while running safely behind a dynamic zero-trust governance matrix.
+
+ ## 🚀 Why `hey-cli` over Copilot/ChatGPT?
+ 1. **Total Privacy**: Your code and system logs never leave your machine. All context gathering and reasoning happens locally via [Ollama](https://ollama.com).
+ 2. **True Cross-Platform Skills**: Under the hood, `hey-cli`'s "Skills Engine" detects whether you're on macOS (BSD), Ubuntu (GNU), Windows (PowerShell), or Arch Linux, and actively refuses to generate incompatible flags like `xargs -d` on Mac.
+ 3. **Agentic Execution**: Ask "is docker running?" and `hey` will silently execute `docker info` in the background, read its stdout, analyze it, and return a plain-English answer.
+ 4. **Security Governance**: Built-in AST-level parsing. Safe commands (like `git status`) auto-run. Destructive commands (`rm -rf`, `-exec delete`) require explicit typed confirmation.
+
+ ## 📦 Installation
+
+ **Prerequisite:** You must have [Python 3.9+](https://www.python.org/downloads/) installed.
+
+ ### macOS & Linux
+ Paste this snippet into your terminal to auto-install `pipx` and `ollama`, pull the required language model, and build `hey-cli` natively.
+ ```bash
+ curl -sL https://raw.githubusercontent.com/sinsniwal/hey-cli/main/install.sh | bash
+ ```
+
+ ### Windows (PowerShell)
+ Paste this into your PowerShell terminal:
+ ```powershell
+ Invoke-WebRequest -Uri "https://raw.githubusercontent.com/sinsniwal/hey-cli/main/install.ps1" -OutFile "$env:TEMP\hey_install.ps1"; & "$env:TEMP\hey_install.ps1"
+ ```
+
+ ## 🛠️ Usage
+
+ Simply type `hey` followed by your objective.
+
+ **Context-Gathering (Zero-Trust)**
+ ```bash
+ hey is my docker daemon running?
+ ```
+ *`hey` will silently run `systemctl is-active docker` or `docker info`, see that it failed to connect to the socket, and explain the situation.*
+
+ **Execution (Governance Protected)**
+ ```bash
+ hey forcefully delete all .pyc files
+ ```
+ *`hey` parses the generated `find . -name "*.pyc" -exec rm -f {} +` command, detects the `rm` and `-exec` triggers, and pauses execution until you explicitly type `rm` to authorize.*
+
+ **Debugging Logs**
+ ```bash
+ npm run build 2>&1 | hey what is causing this webpack error?
+ ```
+
+ ## 🛡️ Governance Matrix
+ Safety is a first-class citizen. `hey-cli` maintains a local governance database (`~/.hey-rules.json`):
+ - **Never List**: Things like `rm -rf /` and `mkfs` are permanently blocked at the compiler level.
+ - **Explicit Confirm**: High-risk ops (`truncate`, `drop`, `rm`) require typing the exact keyword to verify.
+ - **Y/N Confirm**: Moderate-risk ops require a quick `y`.
+ - **Allowed List**: Safe diagnostics like `cat`, `ls`, `grep` auto-run natively.
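The four tiers can be sketched as a tiny classifier. This is a simplified stand-in for the package's `GovernanceEngine`, with a made-up in-memory rule table rather than the real `~/.hey-rules.json`:

```python
import shlex
from enum import Enum

class Action(Enum):
    PROCEED = "auto-run"
    CONFIRM = "y/n"
    EXPLICIT_CONFIRM = "type the keyword"
    BLOCKED = "never"

NEVER = ("rm -rf /", "mkfs")                    # permanently blocked patterns
EXPLICIT = {"rm", "truncate", "drop", "-exec"}  # require typed keyword
ALLOWED = {"cat", "ls", "grep", "git"}          # safe diagnostics

def evaluate(command: str):
    """Return (tier, triggering keyword) for a shell command."""
    if any(pattern in command for pattern in NEVER):
        return Action.BLOCKED, None
    tokens = shlex.split(command)
    for tok in tokens:
        if tok in EXPLICIT:
            return Action.EXPLICIT_CONFIRM, tok
    if tokens[0] in ALLOWED:
        return Action.PROCEED, None
    return Action.CONFIRM, None

print(evaluate("ls -la"))                                 # (Action.PROCEED, None)
print(evaluate('find . -name "*.pyc" -exec rm -f {} +'))  # (Action.EXPLICIT_CONFIRM, '-exec')
print(evaluate("rm -rf /"))                               # (Action.BLOCKED, None)
```

Note the ordering: the never-list is checked before tokenizing, so a blocked pattern can never be downgraded to a mere keyword confirmation, and anything that matches no tier falls through to the safe default of a y/n prompt.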
+
+ ## 🤝 Adding OS Skills
+ Is `hey` generating incorrect shell semantics for your niche operating system?
+
+ You can make it permanently smarter without touching code! Simply open `hey_cli/skills/` and create a markdown file for your OS containing explicit English instructions (e.g. "Do not use apt on Alpine; use apk add"). The engine dynamically loads `.md` rulesets at runtime. Pull requests are warmly welcomed!
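For instance, a hypothetical `hey_cli/skills/alpine.md` could look like this (contents illustrative, not shipped verbatim with the package):

```markdown
# Alpine Linux
- The package manager is `apk`, not `apt`: use `apk add curl`, never `apt install curl`.
- BusyBox userland: `grep -P` and many GNU long flags are unavailable.
- Services are managed by OpenRC: `rc-service nginx restart`, not `systemctl restart nginx`.
```

Because the file is plain English, the rules double as documentation for human readers of the repository.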
@@ -0,0 +1,18 @@
+ LICENSE
+ README.md
+ pyproject.toml
+ hey_cli/__init__.py
+ hey_cli/cli.py
+ hey_cli/governance.py
+ hey_cli/history.py
+ hey_cli/llm.py
+ hey_cli/models.py
+ hey_cli/runner.py
+ hey_cli/skills.py
+ hey_cli_python.egg-info/PKG-INFO
+ hey_cli_python.egg-info/SOURCES.txt
+ hey_cli_python.egg-info/dependency_links.txt
+ hey_cli_python.egg-info/entry_points.txt
+ hey_cli_python.egg-info/requires.txt
+ hey_cli_python.egg-info/top_level.txt
+ tests/test_cli.py
@@ -0,0 +1,2 @@
+ [console_scripts]
+ hey = hey_cli.cli:main
@@ -0,0 +1,3 @@
+ pydantic>=2.0.0
+ ollama>=0.1.0
+ rich>=13.0.0
@@ -0,0 +1,41 @@
+ [build-system]
+ requires = ["setuptools>=61.0"]
+ build-backend = "setuptools.build_meta"
+
+ [project]
+ name = "hey-cli-python"
+ version = "1.0.0"
+ description = "A secure, zero-bloat CLI companion that turns natural language and error logs into executable commands."
+ readme = "README.md"
+ requires-python = ">=3.9"
+ authors = [
+     {name = "Mohit S."}
+ ]
+ keywords = ["cli", "llm", "bash", "terminal", "ollama", "sysadmin"]
+ classifiers = [
+     "Development Status :: 5 - Production/Stable",
+     "Environment :: Console",
+     "Intended Audience :: Developers",
+     "License :: OSI Approved :: MIT License",
+     "Operating System :: MacOS",
+     "Operating System :: POSIX :: Linux",
+     "Operating System :: Microsoft :: Windows",
+     "Programming Language :: Python :: 3.9",
+     "Programming Language :: Python :: 3.10",
+     "Programming Language :: Python :: 3.11",
+     "Programming Language :: Python :: 3.12",
+ ]
+ dependencies = [
+     "pydantic>=2.0.0",
+     "ollama>=0.1.0",
+     "rich>=13.0.0"
+ ]
+ urls.Homepage = "https://github.com/sinsniwal/hey-cli"
+ urls.Repository = "https://github.com/sinsniwal/hey-cli"
+ urls.Issues = "https://github.com/sinsniwal/hey-cli/issues"
+
+ [project.scripts]
+ hey = "hey_cli.cli:main"
+
+ [tool.setuptools]
+ packages = ["hey_cli"]
@@ -0,0 +1,4 @@
+ [egg_info]
+ tag_build =
+ tag_date = 0
+
@@ -0,0 +1,42 @@
1
+ import pytest
2
+ import os
3
+ import platform
4
+ from hey_cli.llm import generate_command
5
+ from hey_cli.governance import GovernanceEngine, Action
6
+
7
+ def test_governance_engine_safe_command():
8
+ gov = GovernanceEngine()
9
+ # Explicitly test an allowed command
10
+ action, keyword = gov.evaluate("ls -la")
11
+ assert action == Action.PROCEED
12
+
13
+ def test_governance_engine_unsafe_command():
14
+ gov = GovernanceEngine()
15
+ # Explicitly test a blocked command
16
+ action, keyword = gov.evaluate("rm -rf /")
17
+ assert action == Action.BLOCKED
18
+
19
+ def test_governance_engine_explicit_confirm():
20
+ gov = GovernanceEngine()
21
+ # Explicit test for dangerous keywords
22
+ action, keyword = gov.evaluate("find . -name temp -exec rm -f {} +")
23
+ assert action == Action.EXPLICIT_CONFIRM
24
+ assert keyword in ["-exec", "rm"]
25
+
26
+ def test_system_prompt_structure():
27
+ from hey_cli.llm import SYSTEM_PROMPT
28
+ assert "CRITICAL PARSING RULE:" in SYSTEM_PROMPT
29
+ assert "needs_context" in SYSTEM_PROMPT
30
+
31
+ def test_skill_compiler():
32
+ from hey_cli.skills import get_compiled_skills
33
+ skills = get_compiled_skills()
34
+ assert isinstance(skills, str)
35
+ # The universal shell sheet is always loaded
36
+ assert "Universal Shell Skills" in skills
37
+ # The OS-specific sheet should be loaded dynamically
38
+ os_name = platform.system()
39
+ if os_name == "Windows":
40
+ assert "Windows" in skills
41
+ elif os_name == "Darwin":
42
+ assert "macOS" in skills