codegpt-ai 1.0.0 → 1.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md ADDED
@@ -0,0 +1,210 @@
+# CodeGPT
+
+**Your local AI assistant hub. One command. No cloud. No API keys.**
+
+```
+npm i -g codegpt-ai
+```
+
+Then type `ai`.
+
+```
+██████╗ ██████╗ ██████╗ ███████╗ ██████╗ ██████╗ ████████╗
+██╔════╝██╔═══██╗██╔══██╗██╔════╝██╔════╝ ██╔══██╗╚══██╔══╝
+██║ ██║ ██║██║ ██║█████╗ ██║ ███╗██████╔╝ ██║
+██║ ██║ ██║██║ ██║██╔══╝ ██║ ██║██╔═══╝ ██║
+╚██████╗╚██████╔╝██████╔╝███████╗╚██████╔╝██║ ██║
+╚═════╝ ╚═════╝ ╚═════╝ ╚══════╝ ╚═════╝ ╚═╝ ╚═╝
+```
+
+## What is it?
+
+A fully-featured AI chat CLI that runs locally on your machine using [Ollama](https://ollama.com). No API keys needed. No cloud. No subscription.
+
+- **80+ slash commands** — chat, code, search, export, and more
+- **8 AI agents** — coder, debugger, reviewer, architect, pentester, explainer, optimizer, researcher
+- **29 AI tool integrations** — Claude, Codex, Gemini, Copilot, Cline, and more
+- **Multi-AI system** — swarm, vote, race, team chat with multiple AIs
+- **AI training** — train your own custom Ollama models from conversations
+- **Security** — PIN lock, audit log, sandboxed tools, shell blocklist
+- **Works anywhere** — Windows, macOS, Linux, Termux (Android)
+
+## Install
+
+### npm (recommended)
+```bash
+npm i -g codegpt-ai
+ai
+```
+
+Works with just Node.js. If Python is installed, you get the full 80+ command experience. Without Python, you get a lightweight Node.js chat client.
+
+### pip
+```bash
+pip install -e git+https://github.com/CCguvycu/codegpt.git#egg=codegpt
+ai
+```
+
+### PowerShell (Windows)
+```powershell
+irm https://raw.githubusercontent.com/CCguvycu/codegpt/main/install.ps1 | iex
+```
+
+### Termux (Android)
+```bash
+curl -sL https://raw.githubusercontent.com/CCguvycu/codegpt/main/install-termux.sh | bash
+```
+
+## Requirements
+
+- **Node.js 16+** (for npm install) OR **Python 3.10+** (for pip install)
+- **Ollama** — install from [ollama.com](https://ollama.com), then `ollama pull llama3.2`
+- Or connect to a remote Ollama server with `/connect IP`
+
+## Quick Start
+
+```bash
+# Install
+npm i -g codegpt-ai
+
+# Pull a model
+ollama pull llama3.2
+
+# Start chatting
+ai
+```
+
+Type `/help` inside for all commands.
+
+## Features
+
+### Chat
+| Command | Description |
+|---------|-------------|
+| `/new` | New conversation |
+| `/save` | Save conversation |
+| `/load` | Load saved conversation |
+| `/model` | Switch model |
+| `/persona` | Switch personality (hacker, teacher, roast, architect, minimal) |
+| `/think` | Toggle deep reasoning mode |
+| `/file path` | Read a file into context |
+| `/run` | Execute last code block |
+| `/compact` | Summarize conversation to save context |
+
+### AI Agents
+```
+/agent coder build a REST API
+/agent debugger why does this crash
+/agent reviewer check this code
+/agent architect design a microservices system
+```
+
+8 specialized agents: `coder` `debugger` `researcher` `reviewer` `architect` `pentester` `explainer` `optimizer`
+
+### Multi-AI
+```
+/all what's the best database?        # All 8 agents answer in parallel
+/vote Flask or FastAPI?               # Agents vote with consensus
+/swarm build a CLI password manager   # 6-agent pipeline
+/team claude codex                    # Group chat with 2 AIs + you
+/race explain recursion               # Race all models for speed
+```
+
+### AI Tools (29 integrations)
+```
+/tools                            # See all tools
+/claude                           # Launch Claude Code
+/codex                            # Launch Codex
+/gemini                           # Launch Gemini CLI
+/split claude codex               # Side-by-side split screen
+/grid claude codex gemini cline   # 2x2 grid
+```
+
+All tools auto-install on first use. Sandboxed for security.
+
+### AI Training
+```
+/rate good             # Rate a response
+/train collect         # Save chat as training data
+/train build mymodel   # Create custom Ollama model
+/model mymodel         # Use your trained model
+```
+
+### Remote Connect
+```
+/connect 192.168.1.100   # Connect to PC's Ollama
+/qr                      # Show QR code to connect from phone
+/server                  # Check connection status
+```
+
+### Security
+```
+/pin-set    # Set login PIN
+/audit      # View security log
+/security   # Security dashboard
+```
+
+### Integrations
+```
+/github repos     # Your GitHub repos
+/spotify play     # Control Spotify
+/weather London   # Get weather
+/sysinfo          # System info
+```
+
+## CLI Args (non-interactive)
+
+```bash
+ai --ask "explain kubernetes"
+ai --agent coder "build a flask app"
+ai --team claude codex "discuss auth"
+ai --tools
+ai --models
+ai --status
+ai doctor
+ai update
+echo "what is rust?" | ai
+```
+
+## Aliases
+
+Type faster with 30+ shortcuts:
+
+| Short | Full | Short | Full |
+|-------|------|-------|------|
+| `/q` | `/quit` | `/a` | `/all` |
+| `/n` | `/new` | `/sw` | `/swarm` |
+| `/s` | `/save` | `/t` | `/think` |
+| `/m` | `/model` | `/h` | `/help` |
+| `/f` | `/file` | `/con` | `/connect` |
+
+## Architecture
+
+```
+codegpt/
+  chat.py       6,200 lines   Main CLI (Python)
+  bin/chat.js     300 lines   Node.js fallback
+  bin/ai.js        50 lines   Entry point (routes to Python or Node)
+  ai_cli/                     Package (updater, doctor)
+  app.py                      TUI app (Textual)
+  bot.py                      Telegram bot
+  web.py                      Web app (Flask PWA)
+  server.py                   Backend API
+```
+
+## Data
+
+All data is stored locally at `~/.codegpt/`:
+- `profiles/` — user profile
+- `memory/` — persistent AI memories
+- `security/` — PIN hash, audit log
+- `training/` — training data, custom models
+- `chats/` — saved conversations
+
+## License
+
+MIT
+
+## Author
+
+Built by [ArukuX](https://github.com/CCguvycu)
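The "CLI Args (non-interactive)" section in the README above accepts both a `--ask` flag and piped stdin (`echo "what is rust?" | ai`). As a minimal sketch of how a CLI can support both input paths — not CodeGPT's actual implementation; `get_prompt` is a hypothetical helper:

```python
import sys

def get_prompt(argv):
    """Resolve the prompt: --ask flag first, then piped stdin,
    else None (meaning: start the interactive REPL)."""
    if "--ask" in argv:
        i = argv.index("--ask")
        if i + 1 < len(argv):
            return argv[i + 1]
    if not sys.stdin.isatty():
        # Input was piped in, e.g. `echo "what is rust?" | ai`
        return sys.stdin.read().strip()
    return None
```

Checking `--ask` before `isatty()` lets an explicit flag win even inside a pipeline.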
package/ai_cli/updater.py CHANGED
@@ -148,18 +148,64 @@ def force_update():
         return
 
     asset = exe_assets[0]
+
+    # Find checksum file in release assets
+    sha_assets = [a for a in release.get("assets", []) if a["name"].endswith(".sha256")]
+    expected_hash = None
+    if sha_assets:
+        try:
+            sha_resp = requests.get(sha_assets[0]["browser_download_url"], timeout=10)
+            # Scan the checksum file (e.g. certutil output) for a 64-char hex line
+            lines = sha_resp.text.strip().splitlines()
+            for line in lines:
+                line = line.strip().replace(" ", "")
+                if len(line) == 64 and all(c in "0123456789abcdef" for c in line.lower()):
+                    expected_hash = line.lower()
+                    break
+        except Exception:
+            pass
+
     console.print(f" Downloading {asset['name']} ({latest_tag})...")
+    if expected_hash:
+        console.print(f" Expected SHA256: {expected_hash[:16]}...")
+    else:
+        console.print("[yellow] WARNING: No checksum file found. Cannot verify integrity.[/]")
 
     try:
         resp = requests.get(asset["browser_download_url"], stream=True, timeout=60)
         resp.raise_for_status()
 
-        # Download to temp file
+        # Download to temp file and compute hash
+        import hashlib as _hashlib
+        sha256 = _hashlib.sha256()
         tmp = tempfile.NamedTemporaryFile(delete=False, suffix=".exe")
         for chunk in resp.iter_content(chunk_size=8192):
             tmp.write(chunk)
+            sha256.update(chunk)
         tmp.close()
 
+        actual_hash = sha256.hexdigest().lower()
+        console.print(f" Actual SHA256: {actual_hash[:16]}...")
+
+        # Verify checksum if available
+        if expected_hash and actual_hash != expected_hash:
+            console.print(Panel(
+                Text(
+                    "CHECKSUM MISMATCH — download may be tampered with.\n"
+                    f"Expected: {expected_hash}\n"
+                    f"Got: {actual_hash}\n\n"
+                    "Update aborted for your safety.",
+                    style="bold red"
+                ),
+                title="[bold red]SECURITY ALERT[/]",
+                border_style="red",
+            ))
+            os.unlink(tmp.name)
+            return
+
+        if expected_hash:
+            console.print("[green] Checksum verified.[/]")
+
         if _is_frozen():
             # Replace the running exe
             current_exe = sys.executable
@@ -172,7 +218,7 @@ def force_update():
             shutil.move(tmp.name, current_exe)
 
             console.print(Panel(
-                Text(f"Updated: v{current} -> {latest_tag}\nRestart to use the new version.", style="green"),
+                Text(f"Updated: v{current} -> {latest_tag}\nChecksum: {actual_hash[:16]}...\nRestart to use the new version.", style="green"),
                 border_style="green",
             ))
         else:
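The updater change above does two things: it scans a `.sha256` release asset for the expected digest, and it hashes the download chunk-by-chunk while writing it to disk. Both steps can be sketched in isolation — hypothetical helper names, with the checksum-file parsing mirroring the scan-for-a-64-hex-char-line approach in the diff:

```python
import hashlib

def extract_sha256(text):
    """Find the first 64-char hex line in a checksum file
    (tolerates certutil-style output with spaces between groups)."""
    for line in text.strip().splitlines():
        s = line.strip().replace(" ", "").lower()
        if len(s) == 64 and all(ch in "0123456789abcdef" for ch in s):
            return s
    return None

def verify_stream(chunks, expected_hex):
    """Hash an iterable of byte chunks (a streamed download) and
    compare against the expected digest."""
    h = hashlib.sha256()
    for chunk in chunks:
        h.update(chunk)
    actual = h.hexdigest()
    return actual == expected_hex.lower(), actual
```

Hashing each chunk as it is written (as the diff does inside the `iter_content` loop) avoids re-reading the temp file after the download completes.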
package/bin/ai.js CHANGED
@@ -19,12 +19,8 @@ function findPython() {
 }
 
 const python = findPython();
-if (!python) {
-  console.error("Python not found. Install from https://python.org");
-  process.exit(1);
-}
 
-// Find chat.py — check npm package dir first, then common locations
+// Find chat.py
 const locations = [
   path.join(__dirname, "..", "chat.py"),
   path.join(process.env.HOME || process.env.USERPROFILE, "codegpt", "chat.py"),
@@ -39,17 +35,22 @@ for (const loc of locations) {
   }
 }
 
-if (!chatPy) {
-  console.error("CodeGPT not found. Run: codegpt-setup");
-  process.exit(1);
+// If Python + chat.py found, use full Python CLI
+if (python && chatPy) {
+  const args = [chatPy, ...process.argv.slice(2)];
+  const child = spawn(python, args, {
+    stdio: "inherit",
+    cwd: path.dirname(chatPy),
+    env: { ...process.env, PYTHONUTF8: "1" },
+  });
+  child.on("exit", (code) => process.exit(code || 0));
+} else {
+  // Fallback: Node.js chat client (no Python needed)
+  if (!python) {
+    console.log(" Python not found — using Node.js mode.");
+    console.log(" Install Python for the full 80+ command experience.\n");
+  } else {
+    console.log(" chat.py not found — using Node.js mode.\n");
+  }
+  require("./chat.js");
 }
-
-// Pass all args through to Python
-const args = [chatPy, ...process.argv.slice(2)];
-const child = spawn(python, args, {
-  stdio: "inherit",
-  cwd: path.dirname(chatPy),
-  env: { ...process.env, PYTHONUTF8: "1" },
-});
-
-child.on("exit", (code) => process.exit(code || 0));
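The reworked `bin/ai.js` above now degrades gracefully instead of exiting: Python plus a locatable `chat.py` gets the full CLI, anything else gets the bundled Node client. The same decision logic, sketched in Python with a hypothetical `choose_mode` helper (the real entry point is JavaScript):

```python
import shutil
from pathlib import Path

def choose_mode(script_candidates):
    """Pick the runtime the way bin/ai.js does: the full Python CLI when
    both an interpreter and chat.py can be found, else the Node fallback."""
    python = shutil.which("python3") or shutil.which("python")
    chat_py = next((p for p in script_candidates if Path(p).exists()), None)
    if python and chat_py:
        return ("python", python, str(chat_py))
    return ("node", None, None)
```

Probing a short list of candidate paths and falling back, rather than hard-failing, is what lets the npm install work with Node.js alone.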
package/bin/chat.js ADDED
@@ -0,0 +1,361 @@
+#!/usr/bin/env node
+/**
+ * CodeGPT — Node.js CLI (no Python required)
+ * Fallback when Python isn't installed. Connects to Ollama directly.
+ */
+
+const readline = require("readline");
+const http = require("http");
+const os = require("os");
+const fs = require("fs");
+const path = require("path");
+
+// --- Config ---
+const HOME = os.homedir();
+const CONFIG_DIR = path.join(HOME, ".codegpt");
+const PROFILE_FILE = path.join(CONFIG_DIR, "profiles", "cli_profile.json");
+const HISTORY_FILE = path.join(CONFIG_DIR, "node_history.json");
+const URL_FILE = path.join(CONFIG_DIR, "ollama_url");
+
+let OLLAMA_HOST = process.env.OLLAMA_URL || "http://localhost:11434";
+if (OLLAMA_HOST.includes("/api/chat")) OLLAMA_HOST = OLLAMA_HOST.replace("/api/chat", "");
+
+let MODEL = "llama3.2";
+let SYSTEM = "You are a helpful AI assistant. Be concise and technical.";
+let messages = [];
+let history = [];
+let totalTokens = 0;
+let startTime = Date.now();
+
+// --- Colors ---
+const c = {
+  cyan: (s) => `\x1b[36m${s}\x1b[0m`,
+  green: (s) => `\x1b[32m${s}\x1b[0m`,
+  yellow: (s) => `\x1b[33m${s}\x1b[0m`,
+  red: (s) => `\x1b[31m${s}\x1b[0m`,
+  dim: (s) => `\x1b[2m${s}\x1b[0m`,
+  bold: (s) => `\x1b[1m${s}\x1b[0m`,
+  white: (s) => `\x1b[37m${s}\x1b[0m`,
+};
+
+// --- Helpers ---
+function ensureDir(p) {
+  if (!fs.existsSync(p)) fs.mkdirSync(p, { recursive: true });
+}
+
+function loadProfile() {
+  try {
+    if (fs.existsSync(PROFILE_FILE)) return JSON.parse(fs.readFileSync(PROFILE_FILE, "utf8"));
+  } catch {}
+  return { name: "", model: "llama3.2", persona: "default", total_sessions: 0 };
+}
+
+function saveProfile(profile) {
+  ensureDir(path.dirname(PROFILE_FILE));
+  fs.writeFileSync(PROFILE_FILE, JSON.stringify(profile, null, 2));
+}
+
+function loadSavedUrl() {
+  try {
+    if (fs.existsSync(URL_FILE)) {
+      const url = fs.readFileSync(URL_FILE, "utf8").trim();
+      if (url) return url.replace("/api/chat", "");
+    }
+  } catch {}
+  return null;
+}
+
+// --- Ollama API ---
+function ollamaRequest(endpoint, body) {
+  return new Promise((resolve, reject) => {
+    const url = new URL(endpoint, OLLAMA_HOST);
+    const data = JSON.stringify(body);
+    const req = http.request(url, {
+      method: "POST",
+      headers: { "Content-Type": "application/json" },
+      timeout: 120000,
+    }, (res) => {
+      let result = "";
+      res.on("data", (chunk) => result += chunk);
+      res.on("end", () => {
+        try { resolve(JSON.parse(result)); } catch { resolve(result); }
+      });
+    });
+    req.on("error", reject);
+    req.on("timeout", () => { req.destroy(); reject(new Error("timeout")); });
+    req.write(data);
+    req.end();
+  });
+}
+
+function ollamaGet(endpoint) {
+  return new Promise((resolve, reject) => {
+    const url = new URL(endpoint, OLLAMA_HOST);
+    http.get(url, { timeout: 5000 }, (res) => {
+      let data = "";
+      res.on("data", (chunk) => data += chunk);
+      res.on("end", () => {
+        try { resolve(JSON.parse(data)); } catch { resolve(null); }
+      });
+    }).on("error", reject).on("timeout", function() { this.destroy(); reject(new Error("timeout")); });
+  });
+}
+
+async function streamChat(msgs) {
+  return new Promise((resolve, reject) => {
+    const url = new URL("/api/chat", OLLAMA_HOST);
+    const body = JSON.stringify({
+      model: MODEL,
+      messages: [{ role: "system", content: SYSTEM }, ...msgs],
+      stream: true,
+    });
+
+    const req = http.request(url, {
+      method: "POST",
+      headers: { "Content-Type": "application/json" },
+      timeout: 120000,
+    }, (res) => {
+      let full = "";
+      process.stdout.write(`\n ${c.green("AI")} > `);
+
+      res.on("data", (chunk) => {
+        const lines = chunk.toString().split("\n").filter(Boolean);
+        for (const line of lines) {
+          try {
+            const obj = JSON.parse(line);
+            if (obj.message?.content) {
+              process.stdout.write(obj.message.content);
+              full += obj.message.content;
+            }
+            if (obj.eval_count) totalTokens += obj.eval_count;
+          } catch {}
+        }
+      });
+
+      res.on("end", () => {
+        process.stdout.write("\n\n");
+        resolve(full);
+      });
+    });
+
+    req.on("error", (e) => {
+      console.log(c.red(`\n Error: ${e.message}`));
+      resolve("");
+    });
+    req.on("timeout", () => { req.destroy(); resolve(""); });
+    req.write(body);
+    req.end();
+  });
+}
+
+async function getModels() {
+  try {
+    const data = await ollamaGet("/api/tags");
+    return data?.models?.map((m) => m.name) || [];
+  } catch {
+    return [];
+  }
+}
+
+// --- UI ---
+const LOGO = `
+ ${c.cyan("██████╗ ██████╗ ██████╗ ███████╗")}${c.white(" ██████╗ ██████╗ ████████╗")}
+ ${c.cyan("██╔════╝██╔═══██╗██╔══██╗██╔════╝")}${c.white("██╔════╝ ██╔══██╗╚══██╔══╝")}
+ ${c.cyan("██║ ██║ ██║██║ ██║█████╗ ")}${c.white("██║ ███╗██████╔╝ ██║ ")}
+ ${c.cyan("██║ ██║ ██║██║ ██║██╔══╝ ")}${c.white("██║ ██║██╔═══╝ ██║ ")}
+ ${c.cyan("╚██████╗╚██████╔╝██████╔╝███████╗")}${c.white("╚██████╔╝██║ ██║ ")}
+ ${c.cyan(" ╚═════╝ ╚═════╝ ╚═════╝ ╚══════╝")}${c.white(" ╚═════╝ ╚═╝ ╚═╝ ")}
+ ${c.dim(" Your Local AI Assistant — Node.js Edition")}
+`;
+
+const COMMANDS = {
+  "/help": "Show commands",
+  "/model": "Switch model (/model name)",
+  "/models": "List available models",
+  "/new": "Start new conversation",
+  "/history": "Show conversation",
+  "/connect": "Connect to remote Ollama (/connect IP)",
+  "/server": "Show current server",
+  "/clear": "Clear screen",
+  "/quit": "Exit",
+};
+
+function printHeader() {
+  console.clear();
+  console.log(LOGO);
+  const elapsed = Math.floor((Date.now() - startTime) / 1000 / 60);
+  console.log(c.dim(` ${MODEL} | ${messages.length} msgs | ${totalTokens} tok | ${elapsed}m | ${OLLAMA_HOST}`));
+  console.log();
+}
+
+function printHelp() {
+  console.log(c.bold("\n Commands:"));
+  for (const [cmd, desc] of Object.entries(COMMANDS)) {
+    console.log(`  ${c.cyan(cmd.padEnd(14))} ${c.dim(desc)}`);
+  }
+  console.log();
+}
+
+// --- Main ---
+async function main() {
+  // Load saved URL
+  const savedUrl = loadSavedUrl();
+  if (savedUrl) OLLAMA_HOST = savedUrl;
+
+  // Load profile
+  const profile = loadProfile();
+  if (profile.model) MODEL = profile.model;
+
+  // Check Ollama
+  let models = await getModels();
+  if (!models.length) {
+    // Try common IPs
+    for (const ip of ["http://192.168.1.237:11434", "http://10.0.2.2:11434"]) {
+      OLLAMA_HOST = ip;
+      models = await getModels();
+      if (models.length) break;
+    }
+  }
+
+  printHeader();
+
+  if (!models.length) {
+    console.log(c.yellow(" No Ollama server found."));
+    console.log(c.dim(" Use /connect IP to connect to a remote server."));
+    console.log(c.dim(" Or install Ollama: https://ollama.com\n"));
+  } else {
+    const hour = new Date().getHours();
+    const greeting = hour < 12 ? "Good morning" : hour < 18 ? "Good afternoon" : "Good evening";
+    const name = profile.name || "there";
+    console.log(c.bold(` ${greeting}, ${name}.\n`));
+  }
+
+  // Update session count
+  profile.total_sessions = (profile.total_sessions || 0) + 1;
+  saveProfile(profile);
+
+  // REPL
+  const rl = readline.createInterface({
+    input: process.stdin,
+    output: process.stdout,
+    prompt: ` ${c.cyan(">")} `,
+    historySize: 100,
+  });
+
+  rl.prompt();
+
+  rl.on("line", async (line) => {
+    const input = line.trim();
+    if (!input) { rl.prompt(); return; }
+
+    if (input.startsWith("/")) {
+      const cmd = input.split(" ")[0].toLowerCase();
+      const args = input.slice(cmd.length).trim();
+
+      switch (cmd) {
+        case "/quit":
+        case "/q":
+        case "/exit":
+          const elapsed = Math.floor((Date.now() - startTime) / 1000);
+          console.log(c.dim(`\n ${elapsed}s | ${messages.length} msgs | ${totalTokens} tok`));
+          process.exit(0);
+
+        case "/help":
+        case "/h":
+          printHelp();
+          break;
+
+        case "/model":
+        case "/m":
+          if (args) {
+            MODEL = args;
+            profile.model = MODEL;
+            saveProfile(profile);
+            console.log(c.green(` Model: ${MODEL}`));
+          } else {
+            console.log(c.dim(` Current: ${MODEL}`));
+          }
+          break;
+
+        case "/models":
+          const mods = await getModels();
+          if (mods.length) {
+            console.log(c.bold("\n Models:"));
+            mods.forEach((m) => console.log(`  ${m === MODEL ? c.green("* " + m) : c.dim("  " + m)}`));
+            console.log();
+          } else {
+            console.log(c.red(" Ollama not reachable."));
+          }
+          break;
+
+        case "/new":
+        case "/n":
+          messages = [];
+          printHeader();
+          console.log(c.dim(" New conversation.\n"));
+          break;
+
+        case "/history":
+          if (!messages.length) { console.log(c.dim(" No messages.\n")); break; }
+          messages.forEach((m) => {
+            const tag = m.role === "user" ? c.cyan("You") : c.green("AI");
+            console.log(` ${tag} > ${m.content.slice(0, 200)}\n`);
+          });
+          break;
+
+        case "/connect":
+        case "/con":
+          if (args) {
+            let url = args;
+            if (!url.startsWith("http")) url = "http://" + url;
+            if (!url.split("//")[1].includes(":")) url += ":11434";
+            OLLAMA_HOST = url;
+            const test = await getModels();
+            if (test.length) {
+              models = test;
+              ensureDir(CONFIG_DIR);
+              fs.writeFileSync(URL_FILE, OLLAMA_HOST + "/api/chat");
+              console.log(c.green(` Connected: ${OLLAMA_HOST} (${test.length} models)`));
+            } else {
+              console.log(c.red(` Cannot reach ${OLLAMA_HOST}`));
+            }
+          } else {
+            console.log(c.dim(" Usage: /connect 192.168.1.237"));
+          }
+          break;
+
+        case "/server":
+        case "/srv":
+          const test2 = await getModels();
+          const status = test2.length ? c.green("connected") : c.red("offline");
+          console.log(` ${c.dim("Server:")} ${OLLAMA_HOST} ${status}`);
+          break;
+
+        case "/clear":
+        case "/c":
+          printHeader();
+          break;
+
+        default:
+          console.log(c.dim(` Unknown: ${cmd}. Type /help`));
+      }
+
+      rl.prompt();
+      return;
+    }
+
+    // Regular message
+    messages.push({ role: "user", content: input });
+    const response = await streamChat(messages);
+    if (response) {
+      messages.push({ role: "assistant", content: response });
+    } else {
+      messages.pop();
+    }
+    rl.prompt();
+  });
+
+  rl.on("close", () => process.exit(0));
+}

+main().catch(console.error);
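`streamChat` in the Node client above consumes Ollama's `/api/chat` stream, which arrives as newline-delimited JSON objects carrying `message.content` fragments and, on the final object, an `eval_count`. The accumulation logic can be sketched offline in Python against canned stream lines (hypothetical `collect_stream` helper):

```python
import json

def collect_stream(ndjson_lines):
    """Accumulate the assistant reply and token count from
    Ollama-style NDJSON stream lines, as streamChat does."""
    full, tokens = "", 0
    for line in ndjson_lines:
        line = line.strip()
        if not line:
            continue
        try:
            obj = json.loads(line)
        except json.JSONDecodeError:
            continue  # drop unparseable fragments, like the Node client's bare catch
        full += obj.get("message", {}).get("content", "")
        tokens += obj.get("eval_count", 0)
    return full, tokens

# Canned stream: two content fragments, then the final stats object
sample = [
    '{"message": {"role": "assistant", "content": "Hel"}}',
    '{"message": {"role": "assistant", "content": "lo"}}',
    '{"done": true, "eval_count": 2}',
]
```

One caveat the sketch shares with the Node client: a TCP chunk boundary can split a JSON line in half, and silently dropping the unparseable fragments loses that text; buffering partial lines across chunks would be more robust.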