navada-edge-cli 3.2.0 → 3.4.0

package/README.md CHANGED
@@ -1,67 +1,96 @@
- # navada-edge-cli
+ # NAVADA Edge CLI
 
- An AI-powered terminal agent for the **NAVADA Edge Network**. Full computer access, distributed infrastructure control, and conversational AI — all from your terminal.
+ An AI-powered terminal agent. Install it, type naturally, and control your computer — files, shell, Docker, cloud services, and distributed infrastructure — all through conversation.
 
- ```bash
+ ```
  npm install -g navada-edge-cli
  ```
 
- ## Executive Summary
+ ```
+ navada> deploy the API to production and check disk space on all nodes
+ [shell] docker push registry:5000/my-api:latest
+ [lucas_ssh] {"node":"ec2","command":"df -h"}
+
+ NAVADA
+ Deployed my-api to production. EC2: 45% disk used (12G/26G).
+ HP: 32% used. Oracle: 61% used. All nodes healthy.
+ ```
+
+ ---
+
+ ## Table of Contents
+
+ - [What is a Terminal?](#what-is-a-terminal)
+ - [What is a CLI?](#what-is-a-cli)
+ - [NAVADA Edge CLI](#navada-edge-cli-1)
+ - [Installation](#installation)
+ - [Quick Start](#quick-start)
+ - [AI Providers](#ai-providers)
+ - [Commands](#commands)
+ - [Architecture](#architecture)
+ - [Edge Network](#edge-network)
+ - [The Vision](#the-vision)
+ - [License](#license)
+
+ ---
+
+ ## What is a Terminal?
+
+ A terminal is a text-based interface to your computer. Before graphical desktops existed, the terminal was the only way to interact with a machine. You type commands; the computer executes them and prints the result.
+
+ Every operating system has one:
+
+ | OS | Terminal |
+ |---|---|
+ | **Windows** | PowerShell, Command Prompt, Windows Terminal |
+ | **macOS** | Terminal.app, iTerm2 |
+ | **Linux** | GNOME Terminal, Konsole, any shell emulator |
+
+ The terminal gives you direct, unfiltered access to your system. When you type `ls` (or `dir` on Windows), the operating system lists files. When you type `node server.js`, it starts a process. There is no button, no menu, no intermediary. You speak to the machine and it responds.
+
+ This matters because nearly every server in the world is managed through a terminal. Cloud infrastructure, Docker containers, CI/CD pipelines, database administration -- all of it happens through text commands. The graphical interface is a layer on top; the terminal is the foundation underneath.
+
+ The NAVADA Edge CLI lives in this space. It turns the terminal from a command executor into a conversational AI agent -- one that understands what you want to do and has the tools to do it.
 
- **NAVADA Edge CLI** is a production-grade terminal tool that gives developers and infrastructure teams an AI agent with full system access, connected to a distributed computing network.
+ ---
 
- ### Install
+ ## What is a CLI?
+
+ A CLI (Command-Line Interface) is a program designed to run inside a terminal. Instead of clicking buttons in a window, you type commands and receive text output.
 
  ```bash
- npm install -g navada-edge-cli
- navada
+ # A GUI: click File > Open > navigate to folder > click file
+ # A CLI: one command
+ cat /etc/hosts
  ```
 
- Or run with Docker:
+ CLIs differ from GUIs (Graphical User Interfaces) in several important ways:
+
+ **Speed.** A CLI command executes immediately. No loading screens, no animations, no waiting for a window to render. You type, it runs.
+
+ **Composability.** CLI commands can be chained together. The output of one becomes the input of another. This is called piping, and it is the foundation of the Unix philosophy:
 
  ```bash
- docker run -it navada-edge-cli:3.0.2
+ docker ps | grep navada | wc -l
  ```
 
- **The problem:** Managing distributed infrastructure across multiple cloud providers, on-prem servers, and services requires jumping between terminals, dashboards, and APIs. AI assistants answer questions but can't execute.
+ That single line lists running containers, filters for NAVADA services, and counts them. Three tools, one pipeline, instant result.
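The same filter-and-count idea carries over to any language. A rough Python equivalent of that pipeline, shown only for illustration (the sample `docker ps` output below is invented):

```python
# Rough Python analogue of `docker ps | grep navada | wc -l`.
# The sample output is made up for illustration.
sample = """CONTAINER ID  IMAGE                  STATUS
a1b2c3d4e5f6  navada-edge-cli:3.4.0  Up 2 hours
f6e5d4c3b2a1  postgres:16            Up 5 hours
0badc0ffee00  navada-mcp:1.0         Up 1 hour"""

def count_matches(text: str, needle: str) -> int:
    # grep: keep matching lines; wc -l: count them
    return sum(1 for line in text.splitlines() if needle in line)

print(count_matches(sample, "navada"))  # 2
```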
 
- **The solution:** One CLI that talks naturally, executes commands locally and remotely, and unifies access to your entire infrastructure through a conversational AI agent with tool use, streaming, and smart model routing.
+ **Automation.** Anything you type in a CLI can be put in a script. A script is just a file full of commands. This is how infrastructure-as-code works: instead of manually clicking through cloud dashboards, you write scripts that configure servers, deploy applications, and manage databases automatically.
 
- **Key differentiators:**
- - **Free tier included** — Grok AI with 30 RPM, no API key needed, just install and go
- - **Multi-provider AI** — Anthropic (Claude Sonnet 4), OpenAI (GPT-4o/mini), Grok, Qwen Coder — all with streaming
- - **Smart routing** — auto-picks the best model: code queries to Qwen, complex to Claude, general to Grok
- - **Conversational AI agent** — type naturally, the agent uses tools to execute (not just answer)
- - **Full computer access** — shell, files, processes, Python execution on your local machine
- - **Distributed network** — 4 physical nodes connected via Tailscale VPN, managed from one terminal
- - **Two sub-agents** — Lucas CTO (infrastructure) and Claude CoS (communications + automation)
- - **Cloud-native** — Cloudflare (R2, Flux AI, Stream, DNS), Azure (n8n), private Docker registry
- - **58 slash commands** — direct access when you need precision
- - **3 learning modes** — `/learn python`, `/learn csharp`, `/learn node` — interactive tutor mode
- - **Split-panel TUI** — session info, token usage, rate limits, provider status
- - **4 themes** — dark, crow (achromatic), matrix, light
- - **Docker-first** — runs as a container with `restart: always`, ships with Dockerfile
- - **Cross-platform** — Windows, macOS, Linux
- - **Zero config to start** — install and start talking, free tier works immediately
+ **Reproducibility.** A CLI command is text. It can be documented, version-controlled, shared, and re-run identically. A series of GUI clicks cannot.
 
- **Who it's for:** DevOps engineers, infrastructure teams, AI developers, and anyone who manages distributed systems and wants an AI copilot in their terminal.
+ Developers, system administrators, and infrastructure engineers prefer CLIs because they are faster, scriptable, and precise. The NAVADA Edge CLI takes this further by adding an AI agent layer -- instead of memorising hundreds of commands, you describe what you want in plain English and the agent executes the right tools.
 
- **Pricing:** The CLI is free and open source (MIT). Free tier (Grok) works out of the box. Add your own API key for full agent mode (Anthropic, OpenAI, or HuggingFace).
+ ---
 
- ```
- ╭─────────────────────────────────────────────────────────╮
- │ ███╗ ██╗ █████╗ ██╗ ██╗ █████╗ ██████╗ █████╗ │
- │ ██╔██╗ ██║███████║██║ ██║███████║██║ ██║███████║ │
- │ ██║ ╚████║██║ ██║ ╚████╔╝ ██║ ██║██████╔╝██║ ██║ │
- │ E D G E N E T W O R K v3.0.2 │
- ╰─────────────────────────────────────────────────────────╯
- ```
+ ## NAVADA Edge CLI
 
- ## What is this?
+ NAVADA Edge CLI is an AI-powered operating system layer that runs in your terminal. It is the first interface to the NAVADA Edge Network -- a distributed computing platform built for AI workloads, infrastructure management, and developer tooling.
 
- NAVADA Edge CLI is a **conversational AI agent** that runs in your terminal. It has full access to your local machine (files, shell, processes) and connects to the NAVADA Edge Network — a distributed infrastructure spanning 4 physical nodes, Cloudflare, Azure, and AI services.
+ ### What it does
 
- **Just type naturally.** No commands needed.
+ The CLI wraps your terminal in a conversational AI agent. It has tools for file operations, shell execution, Docker management, remote SSH, database queries, cloud services, image generation, and more. You can use slash commands for precision or type naturally and let the agent figure out what to do.
 
  ```
  navada> what images are in the docker registry?
@@ -73,235 +102,499 @@ navada> what images are in the docker registry?
  ```
 
  ```
- navada> check disk space on EC2
- [lucas_ssh] {"node":"ec2","command":"df -h"}
+ navada> create a Python script that fetches weather data and save it to weather.py
+ [write_file] {"path":"weather.py","content":"import requests\n..."}
 
  NAVADA
- EC2 disk usage: 45% used (12G/26G). /dev/xvda1 mounted at /.
+ Written: /Users/you/weather.py
  ```
 
+ ### How it works under the hood
+
+ The CLI is a Node.js application that maintains a conversation loop with an AI provider. When you type a message, it goes through this pipeline:
+
+ 1. **Input parsing** -- slash commands are routed directly to handlers; natural language goes to the AI agent
+ 2. **Provider routing** -- the agent selects the appropriate AI provider based on your configuration (NAVADA free tier, Anthropic, OpenAI, Google, NVIDIA, or HuggingFace)
+ 3. **Tool execution** -- the AI can call tools (shell, file I/O, network, Docker, MCP) and receive results, then continue reasoning
+ 4. **Streaming** -- responses stream token-by-token to your terminal in real time
+ 5. **Context management** -- conversation history (last 40 turns) is maintained so the agent remembers what you discussed
+
+ The agent uses a tool-use loop: the AI decides which tool to call, the CLI executes it locally, sends the result back, and the AI continues until the task is complete. This means the agent can chain multiple operations in a single response.
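Step 1 of the pipeline can be sketched as follows. This is an illustration, not the CLI's actual source; `handlers` and `ask_agent` are stand-ins:

```python
# Sketch of input parsing: slash commands dispatch directly to a handler,
# anything else goes to the AI agent. Names here are illustrative.
def route(user_input: str, handlers: dict) -> str:
    text = user_input.strip()
    if text.startswith("/"):
        name = text.split()[0][1:]           # "/status" -> "status"
        args = text[len(name) + 1:].strip()  # remainder after the command
        handler = handlers.get(name)
        return handler(args) if handler else f"unknown command: /{name}"
    return ask_agent(text)                   # natural language -> AI agent

def ask_agent(text: str) -> str:
    # Placeholder for the provider call described in step 2
    return f"[agent] {text}"

handlers = {"status": lambda args: "[status] all nodes up"}
print(route("/status", handlers))      # [status] all nodes up
print(route("list files", handlers))   # [agent] list files
```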
+
+ ---
+
+ ## Installation
+
+ Requires **Node.js 18+**.
+
+ ### Global install (recommended)
+
+ ```bash
+ npm install -g navada-edge-cli
+ navada
  ```
- navada> create a file called hello.txt with "Hello from NAVADA"
- [write_file] {"path":"hello.txt","content":"Hello from NAVADA"}
 
- NAVADA
- ✓ Written: /Users/you/hello.txt
+ This adds the `navada` command to your PATH. Run it from anywhere.
+
+ ### Local / project install
+
+ ```bash
+ npm install navada-edge-cli
+ npx navada
  ```
 
- ## Network Architecture
+ Useful when you want a specific version pinned to a project, or when you do not have permission to install globally.
+
+ ### Verify installation
+
+ ```bash
+ navada --version
+ ```
 
- ![NAVADA Edge Network](https://cdn.jsdelivr.net/npm/navada-edge-sdk@1.2.0/network.svg)
+ ---
 
  ## Quick Start
 
  ```bash
- # Install
+ # 1. Install
  npm install -g navada-edge-cli
 
- # Start the CLI
+ # 2. Launch
  navada
 
- # Login with your API key (Anthropic recommended for full agent mode)
- /login sk-ant-your-key-here
+ # 3. Start talking (free tier, no key needed)
+ navada> hello, what can you do?
 
- # Start talking
- hello, what can you do?
+ # 4. Try some commands
+ navada> /status
+ navada> /help
+ navada> list files in my home directory
+ navada> create a file called test.txt with "Hello from NAVADA"
+ ```
+
+ The free tier uses GPT-4o-mini via the NAVADA Edge server. No API key required -- install and go.
+
+ To unlock full agent mode with tool use, add your own API key:
+
+ ```bash
+ navada> /login sk-ant-your-anthropic-key
+ ```
+
+ To connect to the NAVADA Edge Network for distributed infrastructure access:
+
+ ```bash
+ navada> /onboard
+ navada> /edge login nv_edge_your_key_here
  ```
 
- The CLI accepts **any** API key:
+ ---
 
- | Key Type | Prefix | What You Get |
- |----------|--------|-------------|
- | **Anthropic** | `sk-ant-...` | Full agent with tool use — recommended |
- | **OpenAI** | `sk-...` | GPT-4o chat |
- | **HuggingFace** | `hf_...` | Qwen Coder (FREE) |
- | **NAVADA Edge** | `nv_edge_...` | MCP server access + all network tools |
+ ## AI Providers
 
- ## Features
+ The CLI supports 6 AI providers. Each is activated by logging in with the corresponding API key.
 
- ### AI Agent with Tool Use
+ | Provider | Key Prefix | Model | Cost | Tool Use |
+ |---|---|---|---|---|
+ | **NAVADA Free Tier** | (none needed) | GPT-4o-mini | Free (30 RPM) | No |
+ | **Anthropic** | `sk-ant-...` | Claude Sonnet 4 | Paid | Yes (full agent) |
+ | **OpenAI** | `sk-...` | GPT-4o | Paid | Yes |
+ | **Google Gemini** | `AIza...` | Gemini 2.0 Flash | Free | No |
+ | **NVIDIA** | `nvapi-...` | Llama, DeepSeek, Mistral + 5 more | Free | No |
+ | **HuggingFace** | `hf_...` | Qwen Coder 32B | Free | No |
 
- The CLI is powered by Claude Sonnet 4 with 13 tools. It doesn't just answer questions — it **executes tasks**:
+ ### Login with each provider
 
- | Tool | What it does |
- |------|-------------|
- | `shell` | Run any command on your local machine |
- | `read_file` | Read files from your filesystem |
- | `write_file` | Create or modify files |
- | `list_files` | Browse directories |
- | `system_info` | CPU, RAM, disk, hostname, OS |
- | `network_status` | Ping all NAVADA Edge nodes |
- | `lucas_exec` | Run bash on EC2 via Lucas CTO |
- | `lucas_ssh` | SSH to any node via Lucas |
- | `lucas_docker` | Docker exec in remote containers |
- | `mcp_call` | Call any of 18 MCP server tools |
- | `docker_registry` | Query the private Docker registry |
- | `send_email` | Send email via SMTP or MCP |
- | `generate_image` | Generate images (Flux FREE / DALL-E) |
+ ```bash
+ # NAVADA free tier (default, no login needed)
+ navada
+
+ # Anthropic: full agent with tool use (recommended)
+ navada> /login sk-ant-api03-xxxxx
+
+ # OpenAI: GPT-4o with tool use
+ navada> /login sk-xxxxx
+
+ # Google Gemini: free, fast
+ navada> /login AIzaSyxxxxx
+
+ # NVIDIA: 8 free models (Llama, DeepSeek, Mistral, etc.)
+ navada> /login nvapi-xxxxx
+ navada> /nvidia models
+
+ # HuggingFace: Qwen Coder 32B, free
+ navada> /login hf_xxxxx
+ ```
+
+ ### NVIDIA models
 
- ### Two AI Agents
+ NVIDIA provides 8 models for free via [build.nvidia.com](https://build.nvidia.com):
 
- | Agent | Role | Access |
- |-------|------|--------|
- | **Lucas CTO** | Infrastructure agent on EC2 | bash, SSH, Docker, deploy, file operations |
- | **Claude CoS** | Chief of Staff | Email, image gen, SMS, research, automation |
+ | Model | ID |
+ |---|---|
+ | Meta Llama 3.3 70B | `llama-3.3-70b` |
+ | Meta Llama 3.1 8B | `llama-3.1-8b` |
+ | DeepSeek R1 | `deepseek-r1` |
+ | Mistral Large 2 | `mistral-large` |
+ | Code Llama 70B | `codellama-70b` |
+ | Google Gemma 2 27B | `gemma-2-27b` |
+ | Microsoft Phi 3 Medium 128K | `phi-3-medium` |
+ | NVIDIA Nemotron 70B | `nemotron-70b` |
 
+ ```bash
+ navada> /model deepseek-r1
+ navada> /nvidia chat explain Docker networking
  ```
- navada> /agents
- ● Lucas CTO ONLINE (MCP http://100.x.x.x:8820)
- Tools: bash, ssh, docker_exec, deploy, read_file, write_file
 
- Claude CoS ONLINE (Dashboard http://100.x.x.x:7900)
- Controls: Telegram, email, SMS, image gen, R2, cost tracking
+ ### Model selection
 
- ● MCP Server ONLINE (http://100.x.x.x:8811)
- Version: 1.0.0
+ ```bash
+ navada> /model # show all available models
+ navada> /model auto # smart routing (picks best per query)
+ navada> /model claude # always use Claude Sonnet 4
+ navada> /model gpt-4o # always use GPT-4o
+ navada> /model deepseek-r1 # always use DeepSeek R1
  ```
 
- ### 45+ Slash Commands
+ ---
+
+ ## Commands
+
+ 75 commands organised by category. Use `/help` inside the CLI for the full list.
+
+ ### AI
+
+ | Command | Description |
+ |---|---|
+ | `/chat <msg>` | Chat with NAVADA Edge AI agent |
+ | `/qwen <prompt>` | Qwen Coder 32B (free via HuggingFace) |
+ | `/yolo detect <img>` | Object detection on an image |
+ | `/yolo model` | Show YOLO model info |
+ | `/image <prompt>` | Generate image (Flux, free) |
+ | `/image --dalle <prompt>` | Generate image (DALL-E 3) |
+ | `/model [name]` | Show or set default AI model |
+ | `/research <query>` | RAG search via MCP server |
+ | `/retry` | Resend the last message to the AI |
+ | `/tokens` | Show session token usage and cost |
+ | `/clear` | Clear conversation history and reset session |
+ | `/save [name]` | Save current conversation to disk |
+ | `/load <name>` | Load a saved conversation |
+ | `/conversations` | List all saved conversations |
+
+ ### NVIDIA
+
+ | Command | Description |
+ |---|---|
+ | `/nvidia login <key>` | Set NVIDIA API key |
+ | `/nvidia models` | List all available NVIDIA models |
+ | `/nvidia model <name>` | Set default NVIDIA model |
+ | `/nvidia chat <msg>` | Chat with selected NVIDIA model |
+ | `/nvidia status` | Test NVIDIA API connection |
 
- Use `/commands` for direct control, or just type naturally for the AI agent.
+ ### NETWORK
 
- **Network**
  | Command | Description |
- |---------|-------------|
- | `/status` | Ping all nodes + cloud services |
+ |---|---|
+ | `/status` | Ping all nodes and cloud services |
  | `/nodes` | Show node configuration |
- | `/doctor` | Validate all connections |
- | `/ping` | Quick all-nodes ping |
- | `/metrics` | CPU/RAM/disk for all nodes |
- | `/health` | Deep health check |
  | `/dashboard` | Command Dashboard status |
+ | `/doctor` | Validate all service connections |
+ | `/metrics` | CPU, RAM, disk for all nodes |
+ | `/health` | Deep health check |
+ | `/ping` | Quick all-nodes ping |
+ | `/opencode` | OpenCode status on all nodes |
+
+ ### AGENTS
 
- **Agents**
  | Command | Description |
- |---------|-------------|
+ |---|---|
  | `/agents` | Show Lucas CTO + Claude CoS status |
- | `/claude <msg>` | Send message to Claude CoS |
- | `/lucas exec <cmd>` | Run bash on EC2 via Lucas |
- | `/lucas ssh <node> <cmd>` | SSH to node via Lucas |
- | `/lucas docker <ctr> <cmd>` | Docker exec via Lucas |
- | `/lucas deploy <name> <node>` | Deploy container |
+ | `/claude <msg>` | Send message to Claude CoS agent |
+ | `/lucas exec <cmd>` | Run bash on EC2 via Lucas CTO |
+ | `/lucas ssh <node> <cmd>` | SSH to any node via Lucas |
+ | `/lucas docker <ctr> <cmd>` | Docker exec on remote container |
+ | `/lucas deploy <name> <node>` | Deploy container to a node |
+ | `/lucas status` | Lucas network status |
+ | `/lucas files <dir>` | List files on remote node |
+ | `/lucas read <file>` | Read file on remote node |
+
+ ### DOCKER
 
- **Docker**
  | Command | Description |
- |---------|-------------|
- | `/registry` | List images in private registry |
+ |---|---|
+ | `/registry` | List images in private Docker registry |
  | `/registry tags <image>` | List tags for an image |
- | `/deploy <name> <node>` | Deploy container to node |
+ | `/deploy <name> <node>` | Deploy container to a node |
  | `/logs <container>` | View container logs |
 
- **AI**
+ ### MCP
+
  | Command | Description |
- |---------|-------------|
- | `/chat <msg>` | Chat with NAVADA Edge AI |
- | `/qwen <prompt>` | Qwen Coder 32B (FREE) |
- | `/yolo detect <img>` | Object detection |
- | `/image <prompt>` | Generate image (Flux FREE) |
- | `/image --dalle <prompt>` | Generate with DALL-E 3 |
- | `/model` | Show/set AI model |
- | `/research <query>` | RAG search via MCP |
-
- **Cloudflare**
+ |---|---|
+ | `/mcp tools` | List all MCP server tools |
+ | `/mcp call <tool> [json]` | Call an MCP tool directly |
+
+ ### CLOUDFLARE
+
  | Command | Description |
- |---------|-------------|
- | `/r2 ls [prefix]` | List R2 objects |
- | `/r2 upload <key> <file>` | Upload to R2 |
- | `/dns` | List DNS records |
+ |---|---|
+ | `/r2 ls [prefix]` | List R2 storage objects |
+ | `/r2 buckets` | List R2 buckets |
+ | `/r2 upload <key> <file>` | Upload file to R2 |
+ | `/r2 delete <key>` | Delete R2 object |
+ | `/r2 url <key>` | Get public URL for R2 object |
+ | `/dns` | List Cloudflare DNS records |
  | `/dns create <type> <name> <val>` | Create DNS record |
  | `/tunnel` | List Cloudflare tunnels |
- | `/stream` | List Stream videos |
- | `/flux <prompt>` | Generate image (FREE) |
- | `/trace <url>` | Trace through WAF |
+ | `/stream` | List Cloudflare Stream videos |
+ | `/flux <prompt>` | Generate image (free Cloudflare AI) |
+ | `/trace <url>` | Trace request through Cloudflare WAF |
+
+ ### DATABASE
+
+ | Command | Description |
+ |---|---|
+ | `/db <sql>` | Query PostgreSQL |
+
+ ### EDGE
+
+ | Command | Description |
+ |---|---|
+ | `/edge login <key>` | Connect with NAVADA Edge API key |
+ | `/edge status` | Check Edge Network connection |
+ | `/edge logout` | Disconnect from Edge Network |
+ | `/edge tier` | Show current tier and limits |
+ | `/onboard` | Open Edge Portal to create account |
+
+ ### TASKS
+
+ | Command | Description |
+ |---|---|
+ | `/tasks` | List tasks |
+ | `/tasks create <title>` | Create a task |
+ | `/tasks done <id>` | Mark task complete |
+ | `/tasks delete <id>` | Delete a task |
+
+ ### KEYS
+
+ | Command | Description |
+ |---|---|
+ | `/keys` | List API keys |
+ | `/keys create [name]` | Create an API key |
+ | `/keys delete <key>` | Delete an API key |
+
+ ### AZURE
+
+ | Command | Description |
+ |---|---|
+ | `/n8n` | Azure n8n health check |
+ | `/n8n restart` | Restart Azure n8n |
+
+ ### LEARNING
+
+ | Command | Description |
+ |---|---|
+ | `/learn python` | Enter Python learning mode |
+ | `/learn csharp` | Enter C# learning mode |
+ | `/learn node` | Enter Node.js learning mode |
+ | `/learn off` | Exit learning mode |
+
+ ### SANDBOX
 
- **Database**
  | Command | Description |
- |---------|-------------|
- | `/db <sql>` | Query Postgres |
+ |---|---|
+ | `/sandbox run <lang>` | Run code with syntax highlighting |
+ | `/sandbox exec <file>` | Execute a file in the sandbox |
+ | `/sandbox highlight <file>` | Syntax-highlight a file |
+ | `/sandbox demo` | Run a demo to test colors |
+
+ ### SYSTEM
 
- **System**
  | Command | Description |
- |---------|-------------|
+ |---|---|
+ | `/help` | Show all commands |
  | `/config` | Show all configuration |
- | `/login <key>` | Set API key (auto-detects type) |
+ | `/login <key>` | Set API key (auto-detects provider) |
+ | `/init <key> <value>` | Set a config value |
  | `/setup` | Guided onboarding wizard |
- | `/theme [name]` | Switch theme (dark/crow/matrix/light) |
- | `/history` | Command history |
- | `/alias <name> <cmd>` | Create shortcut |
+ | `/theme [name]` | Switch theme (dark, crow, matrix, light) |
+ | `/history [search]` | Command history |
+ | `/alias <name> <cmd>` | Create command shortcut |
  | `/watch <cmd> <sec>` | Repeat command on interval |
- | `/export <file>` | Save output to file |
- | `/pipe` | Copy output to clipboard |
- | `/email <to> <subj> <body>` | Send email (SMTP/MCP) |
- | `/email setup` | Configure SMTP (Hotmail/Gmail/Zoho) |
- | `/serve [port]` | Start mobile HTTP UI |
- | `/version` | Version + tier info |
- | `/upgrade` | Check for updates |
- | `/clear` | Clear screen |
+ | `/export <file>` | Save last output to file |
+ | `/pipe` | Copy last output to clipboard |
+ | `/email <to> <subj> <body>` | Send email (SMTP or MCP) |
+ | `/email setup` | Configure SMTP provider |
+ | `/activity` | Recent activity log |
+ | `/version` | Version and tier info |
+ | `/upgrade` | Check for CLI updates |
  | `/exit` | Exit CLI |
 
- ### 4 Themes
432
+ ---
243
433
 
244
- ```
245
- navada> /theme crow # achromatic, minimal
246
- navada> /theme matrix # green terminal
247
- navada> /theme dark # default
248
- navada> /theme light # blue + white
249
- ```
434
+ ## Architecture
435
+
436
+ ![NAVADA Edge Network Architecture](architecture.svg)
250
437
 
251
- ### Mobile Access
438
+ ### Agent routing
439
+
440
+ When you type a message, the CLI determines how to handle it:
252
441
 
253
442
  ```
254
- navada> /serve 7800
255
- MOBILE SERVER
256
- Local http://localhost:7800
257
- Network http://100.x.x.x:7800
258
- [QR code displayed]
443
+ User Input
444
+ |
445
+ +-- Starts with "/" --> Slash command handler (direct execution)
446
+ |
447
+ +-- Natural language --> AI Agent pipeline:
448
+ |
449
+ 1. Build message array (system prompt + conversation history + user message)
450
+ 2. Select provider:
451
+ | - Anthropic key set? --> Claude Sonnet 4 (with tool definitions)
452
+ | - OpenAI key set? --> GPT-4o (with tool definitions)
453
+ | - Gemini key set? --> Gemini 2.0 Flash
454
+ | - NVIDIA key set? --> Selected NVIDIA model
455
+ | - HF token set? --> Qwen Coder 32B
456
+ | - No key? --> NAVADA free tier (GPT-4o-mini via Edge server)
457
+ |
458
+ 3. Stream response token-by-token to terminal
459
+ 4. If AI requests tool use --> execute tool --> return result --> continue
460
+ 5. Update conversation history (40-turn sliding window)
259
461
  ```
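The provider-selection step amounts to a prefix check on the stored API key. A minimal sketch of that order, using the key prefixes from the AI Providers table (illustrative names, not the CLI's real internals):

```python
# Illustrative sketch of provider selection by key prefix.
# `sk-ant-` must be checked before the more general `sk-` prefix.
def select_provider(api_key: str = "") -> str:
    if not api_key:
        return "navada-free-tier (GPT-4o-mini)"
    if api_key.startswith("sk-ant-"):
        return "anthropic (Claude Sonnet 4, tools)"
    if api_key.startswith("sk-"):
        return "openai (GPT-4o, tools)"
    if api_key.startswith("AIza"):
        return "gemini (Gemini 2.0 Flash)"
    if api_key.startswith("nvapi-"):
        return "nvidia (selected model)"
    if api_key.startswith("hf_"):
        return "huggingface (Qwen Coder 32B)"
    return "navada-free-tier (GPT-4o-mini)"  # unrecognised key -> free tier
```

The ordering matters: an Anthropic key also starts with `sk-`, so the more specific prefix is tested first.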
 
- Opens a web UI on your phone — same commands, same network access.
+ ### Tool execution loop
 
- ### Email
+ The agent (Anthropic and OpenAI providers) supports tool use. The AI model decides which tools to call based on the user's request. Available tools:
 
- ```bash
- # Configure SMTP (Hotmail, Gmail, Zoho)
- navada> /email setup
+ | Tool | Scope | Function |
+ |---|---|---|
+ | `shell` | Local | Run any shell command on your machine |
+ | `read_file` | Local | Read files from your filesystem |
+ | `write_file` | Local | Create or modify files |
+ | `list_files` | Local | Browse directories |
+ | `system_info` | Local | CPU, RAM, disk, hostname, OS |
+ | `python_exec` | Local | Execute Python code |
+ | `python_pip` | Local | Install Python packages |
+ | `python_script` | Local | Run a Python script file |
+ | `sandbox_run` | Local | Run code with syntax highlighting |
+ | `network_status` | Network | Ping all NAVADA Edge nodes |
+ | `lucas_exec` | Remote | Run bash on EC2 via Lucas CTO |
+ | `lucas_ssh` | Remote | SSH to any network node |
+ | `lucas_docker` | Remote | Docker exec in remote containers |
+ | `mcp_call` | Remote | Call any of 18 MCP server tools |
+ | `docker_registry` | Remote | Query the private Docker registry |
+ | `send_email` | Remote | Send email via SMTP or MCP |
+ | `generate_image` | Remote | Generate images (Flux or DALL-E) |
+ | `founder_info` | Local | Information about the NAVADA founder |
 
- # Send directly
- navada> /email user@example.com "Subject" "Body text"
+ The execution loop works like this:
 
- # Or ask the agent naturally
- navada> email lee the network status report
- ```
+ 1. User sends message
+ 2. AI analyses the message and decides to call a tool (e.g., `shell` with `docker ps`)
+ 3. CLI executes the tool locally and captures the output
+ 4. Output is sent back to the AI as a tool result
+ 5. AI reads the result and either calls another tool or writes a final response
+ 6. If the AI calls another tool, go back to step 3 (up to 10 iterations)
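The six steps above can be sketched as a loop. This is an illustration only; `call_model` and `run_tool` are placeholders for the real provider call and tool runner:

```python
# Sketch of the tool-use loop: ask the model, run any requested tool,
# feed the result back, repeat until a final answer or the iteration cap.
def agent_turn(messages, call_model, run_tool, max_iterations=10):
    for _ in range(max_iterations):
        reply = call_model(messages)       # AI decides: tool call or final text
        if reply.get("tool") is None:
            return reply["text"]           # final response for the user
        result = run_tool(reply["tool"], reply.get("args", {}))
        # Append the tool result so the AI can continue reasoning
        messages.append({"role": "tool", "name": reply["tool"], "content": result})
    return "stopped: tool-use iteration limit reached"

# Toy run: the model requests one `shell` call, then answers.
script = [{"tool": "shell", "args": {"cmd": "docker ps"}},
          {"tool": None, "text": "2 containers running"}]
print(agent_turn([], lambda msgs: script.pop(0),
                 lambda name, args: "CONTAINER ID ..."))  # 2 containers running
```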
+
+ ### Rate limiting
 
- ### Persistent History
+ The free tier enforces a sliding-window rate limit of 30 requests per minute. The CLI tracks this in-memory per session. When the limit is reached, the CLI suggests upgrading to a provider with your own API key.
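A sliding window like this is typically a deque of recent request timestamps; a minimal sketch of the idea (not the CLI's actual code):

```python
import time
from collections import deque

# Sliding-window limiter: allow a request only if fewer than `limit`
# requests happened within the last `window_seconds`.
class SlidingWindowLimiter:
    def __init__(self, limit=30, window_seconds=60.0):
        self.limit = limit
        self.window = window_seconds
        self.timestamps = deque()  # request times still inside the window

    def allow(self, now=None) -> bool:
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.limit:
            self.timestamps.append(now)
            return True
        return False  # over the limit: caller should back off or upgrade
```

Because the window slides, capacity frees up continuously as old requests age out, rather than resetting on a fixed minute boundary.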
 
- Commands survive restarts. Search with `/history docker` to find previous Docker commands.
+ ### Conversation context
 
- ### Docker
+ The CLI maintains a sliding window of the last 40 conversation turns (20 exchanges). This allows the agent to reference earlier parts of the conversation without exceeding token limits. Conversations can be saved to disk with `/save` and restored with `/load`.
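Trimming to the last 40 turns is, in essence, a list slice over the message history; a sketch of the idea:

```python
# Keep only the most recent MAX_TURNS turns before each provider call.
MAX_TURNS = 40

def trim_history(history: list) -> list:
    # Each item is one turn (a user or assistant message)
    return history[-MAX_TURNS:]

history = [{"role": "user", "content": f"msg {i}"} for i in range(50)]
history = trim_history(history)
print(len(history))              # 40
print(history[0]["content"])     # msg 10
```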
+
+ ### Streaming
+
+ All providers support streaming. Responses appear token-by-token as the AI generates them, rather than waiting for the full response. This is implemented using Server-Sent Events (SSE) for each provider's API.
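In SSE, each event arrives as a `data:` line over a long-lived HTTP response. A sketch of how a client might turn those lines into tokens -- the `token` field and the OpenAI-style `[DONE]` terminator are illustrative, since each provider's payload format differs:

```python
import json

# Parse SSE lines into tokens: keep "data: " lines, stop at "[DONE]".
def iter_sse_tokens(lines):
    for raw in lines:
        line = raw.strip()
        if not line.startswith("data: "):
            continue  # ignore blank keep-alives, comments, other fields
        payload = line[len("data: "):]
        if payload == "[DONE]":
            return
        yield json.loads(payload).get("token", "")

stream = [
    'data: {"token": "Hel"}',
    'data: {"token": "lo"}',
    "data: [DONE]",
]
print("".join(iter_sse_tokens(stream)))  # Hello
```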
508
+
509
+ ---
510
+
511
+ ## Edge Network
512
+
513
+ The NAVADA Edge Network is a distributed computing platform that the CLI connects to. It consists of physical servers, cloud VMs, and services connected via Tailscale VPN.
514
+
515
+ ### What it provides
516
+
517
+ - **AI chat** -- free-tier GPT-4o-mini via the Edge server (no API key required)
518
+ - **MCP server** -- 18 tools accessible via JSON-RPC (file operations, network, email, image gen, cloud)
519
+ - **Lucas CTO** -- autonomous infrastructure agent running on EC2 (bash, SSH, Docker, deploy)
520
+ - **Docker registry** -- private container registry for deploying to any node
521
+ - **Command Dashboard** -- real-time status, metrics, and activity logs
522
+
523
+ ### Connecting to the Edge Network
281
524
 
282
525
  ```bash
283
- # Run in Docker
284
- docker run -it --env-file .env navada-edge-cli
526
+ # 1. Create an account at the Edge Portal
527
+ navada> /onboard
528
+
529
+ # 2. Generate an API key in the portal dashboard
530
+ # 3. Connect your CLI
531
+ navada> /edge login nv_edge_your_key_here
285
532
 
286
- # With mobile access
287
- docker run -it -p 7800:7800 --env-file .env navada-edge-cli --serve
533
+ # 4. Verify connection
534
+ navada> /edge status
535
+ navada> /doctor
288
536
  ```
289
537
 
538
+ ### API key tiers
539
+
540
+ | Tier | Requests/day | Tokens/day | Edge tasks | Max runtime |
541
+ |---|---|---|---|---|
542
+ | **Free** | 100 | 50K | 10 | 5 minutes |
543
+ | **Pro** | Coming soon | -- | -- | -- |
544
+ | **Enterprise** | Coming soon | -- | -- | -- |
545
+
546
### Portal

The NAVADA Edge Portal is a web application where users create accounts, generate API keys, and monitor their usage. Visit [portal.navada-edge-server.uk](https://portal.navada-edge-server.uk) or run `/onboard` from the CLI.

---

## The Vision

This is the early stage of an AI-powered operating system.

Today, the CLI is a terminal agent -- you install it, type naturally, and it executes tasks on your machine using AI. But the terminal is just the first interface layer. The architecture is designed for what comes next.

### Where this is going

**Local LLM execution.** The CLI currently routes to cloud AI providers. The next step is running models locally on NVIDIA GPUs in Docker containers: instead of paying per token to a cloud API, your home server runs Llama, Mistral, or DeepSeek, and the CLI transparently routes each request to whichever backend is fastest -- local GPU or cloud API.
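
As a sketch of what "route to whichever is fastest" could mean (hypothetical -- the backend names and the idea of probing latencies are illustrative, not shipped behaviour):

```javascript
// Pick the backend with the lowest measured latency (in milliseconds).
// In practice the numbers would come from periodic health checks
// against each endpoint rather than being passed in directly.
function pickBackend(latencies) {
  return Object.entries(latencies).sort(([, a], [, b]) => a - b)[0][0];
}

// Example probe results: a local GPU answers far faster than a cloud API.
console.log(pickBackend({ "local-gpu": 12, "cloud-api": 180 })); // "local-gpu"
```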

**Edge compute offloading.** Today, `/lucas exec` sends a command to one remote node. In the future, you will be able to offload long-running tasks (ML training, batch processing, video encoding) to any node in the Edge Network: the CLI submits a task, the network schedules it on an available node, and you are notified when it completes.

**Sub-agents.** Lucas CTO is the first sub-agent -- an autonomous infrastructure manager. More are planned: a security agent for vulnerability scanning, a data agent for ETL pipelines, a monitoring agent for alerting. Each runs in its own container and communicates via MCP.

**agent.md customisation.** Every user will be able to define their own `agent.md` file -- a plain-text configuration that shapes the AI's personality, tools, and behaviour. Your agent becomes uniquely yours: different system prompts, different tool sets, different priorities. This is the NAVADA moat -- an AI operating system that adapts to each user, not a one-size-fits-all chatbot.
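
The format is still to be defined; purely as an illustration, such a file might look like:

```
# agent.md -- hypothetical example (format not yet finalised)

## Personality
Terse, engineering-first. Prefer diffs over prose.

## Tools
shell, docker, lucas_ssh

## Priorities
1. Never touch production without confirmation
2. Prefer the local GPU over the cloud API
```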

**Multi-device.** The CLI already supports mobile access via `/serve`. The vision is a unified agent layer across terminal, web, and mobile -- same context, same tools, same conversation -- wherever you are.

### The operating system analogy

An OS manages hardware resources and provides an interface for users to interact with them. NAVADA Edge does the same for AI and distributed compute:

- **Hardware layer** -- physical servers, GPUs, VMs connected via Tailscale
- **Service layer** -- Docker containers, MCP servers, databases, cloud APIs
- **Agent layer** -- AI that understands your intent and orchestrates the services
- **Interface layer** -- CLI today; web portal, mobile, and API tomorrow

The CLI is the shell. The agent is the kernel. The Edge Network is the hardware. Everything else is a service.

---

## Configuration

All configuration is stored in `~/.navada/config.json`. Set values interactively:

```bash
navada> /login sk-ant-your-key       # API key (auto-detects provider)
navada> /init asus 100.x.x.x         # Set node IP
navada> /init mcp http://x:8811      # MCP server endpoint
navada> /theme crow                  # Theme (dark, crow, matrix, light)
navada> /alias s status              # Create shortcut
```

Environment variables are also supported:

```env
ANTHROPIC_API_KEY=sk-ant-...
NAVADA_ASUS=100.x.x.x
NAVADA_MCP=http://100.x.x.x:8811
NAVADA_REGISTRY=http://100.x.x.x:5000
NAVADA_LUCAS=http://100.x.x.x:8820
```
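
For reference, a sketch of what `~/.navada/config.json` might contain. The field names here are illustrative guesses mirroring the environment variables above, not a documented schema:

```json
{
  "apiKey": "sk-ant-...",
  "asus": "100.x.x.x",
  "mcp": "http://100.x.x.x:8811",
  "registry": "http://100.x.x.x:5000",
  "lucas": "http://100.x.x.x:8820",
  "theme": "crow",
  "aliases": { "s": "status" }
}
```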
 
---

## NAVADA Edge Ecosystem

| Package | Install | Purpose |
|---|---|---|
| **navada-edge-sdk** | `npm i navada-edge-sdk` | SDK for Node.js applications |
| **navada-edge-cli** | `npm i -g navada-edge-cli` | AI agent in your terminal |
| **Edge Portal** | [portal.navada-edge-server.uk](https://portal.navada-edge-server.uk) | Account management and API keys |
| **MCP Server** | `POST /mcp` | JSON-RPC tool server (18 tools) |

---
 
## Telemetry

The CLI reports anonymous usage events (install, session start, command counts) to the NAVADA Edge Dashboard for monitoring. No personal data or API keys are transmitted. Telemetry requires a configured dashboard endpoint.

---

## Built by

**Leslie (Lee) Akpareva** -- Principal AI Consultant, MBA, MA. 17+ years in enterprise IT, insurance, and AI infrastructure.

- GitHub: [github.com/leeakpareva](https://github.com/leeakpareva)
- Website: [navada-lab.space](https://www.navada-lab.space)

---

## License

MIT -- Leslie Akpareva / NAVADA