navada-edge-cli 4.0.0 → 4.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -1,135 +1,53 @@
1
1
  # NAVADA Edge CLI
2
2
 
3
- An AI-powered terminal agent. Install it, type naturally, and control your computer — files, shell, Docker, cloud services, and distributed infrastructure all through conversation.
3
+ An AI agent that lives in your terminal. It learns who you are, remembers your conversations, and has full access to your machine — files, shell, Python, code execution, and more. Type naturally and it does the work.
4
4
 
5
5
  ```
6
6
  npm install -g navada-edge-cli
7
7
  ```
8
8
 
9
9
  ```
10
- navada> deploy the API to production and check disk space on all nodes
11
- [shell] docker push registry:5000/my-api:latest
12
- [lucas_ssh] {"node":"ec2","command":"df -h"}
10
+ navada> create a Python script that analyses my CSV data and generate a chart
11
+ [write_file] {"path":"analyse.py","content":"import pandas as pd..."}
12
+ [python_exec] {"code":"..."}
13
13
 
14
14
  NAVADA
15
- Deployed my-api to production. EC2: 45% disk used (12G/26G).
16
- HP: 32% used. Oracle: 61% used. All nodes healthy.
15
+ Done. Created analyse.py and ran it. Chart saved to output.png.
16
+ Your CSV has 1,247 rows across 8 columns. Revenue peaked in March.
17
17
  ```
18
18
 
19
19
  ---
20
20
 
21
21
  ## Table of Contents
22
22
 
23
- - [What is a Terminal?](#what-is-a-terminal)
24
- - [What is a CLI?](#what-is-a-cli)
25
- - [NAVADA Edge CLI](#navada-edge-cli-1)
23
+ - [What is This?](#what-is-this)
26
24
  - [Installation](#installation)
27
25
  - [Quick Start](#quick-start)
26
+ - [How It Works](#how-it-works)
27
+ - [Memory System](#memory-system)
28
+ - [Tools](#tools)
29
+ - [Skills](#skills)
28
30
  - [AI Providers](#ai-providers)
31
+ - [Automation Pipeline](#automation-pipeline)
32
+ - [Configuration](#configuration)
29
33
  - [Commands](#commands)
30
- - [Architecture](#architecture)
31
- - [Edge Network](#edge-network)
32
34
  - [The Vision](#the-vision)
35
+ - [Built By](#built-by)
33
36
  - [License](#license)
34
37
 
35
38
  ---
36
39
 
37
- ## What is a Terminal?
40
+ ## What is This?
38
41
 
39
- A terminal is a text-based interface to your computer. Before graphical desktops existed, the terminal was the only way to interact with a machine. You type commands, the computer executes them, and prints the result.
42
+ A terminal is the text interface to your computer. Every server, every cloud platform, every CI/CD pipeline runs on terminals. The NAVADA Edge CLI turns your terminal into a conversational AI agent, one that understands what you want and has the tools to do it.
40
43
 
41
- Every operating system has one:
44
+ **What makes it different:**
42
45
 
43
- | OS | Terminal |
44
- |---|---|
45
- | **Windows** | PowerShell, Command Prompt, Windows Terminal |
46
- | **macOS** | Terminal.app, iTerm2 |
47
- | **Linux** | GNOME Terminal, Konsole, any shell emulator |
48
-
49
- The terminal gives you direct, unfiltered access to your system. When you type `ls` (or `dir` on Windows), the operating system lists files. When you type `node server.js`, it starts a process. There is no button, no menu, no intermediary. You speak to the machine and it responds.
50
-
51
- This matters because every server in the world runs on terminals. Cloud infrastructure, Docker containers, CI/CD pipelines, database administration -- all of it happens through text commands. The graphical interface is a layer on top; the terminal is the foundation underneath.
52
-
53
- The NAVADA Edge CLI lives in this space. It turns the terminal from a command executor into a conversational AI agent -- one that understands what you want to do and has the tools to do it.
54
-
55
- ---
56
-
57
- ## What is a CLI?
58
-
59
- A CLI (Command-Line Interface) is a program designed to run inside a terminal. Instead of clicking buttons in a window, you type commands and receive text output.
60
-
61
- ```bash
62
- # A GUI: click File > Open > navigate to folder > click file
63
- # A CLI: one command
64
- cat /etc/hosts
65
- ```
66
-
67
- CLIs differ from GUIs (Graphical User Interfaces) in several important ways:
68
-
69
- **Speed.** A CLI command executes immediately. No loading screens, no animations, no waiting for a window to render. You type, it runs.
70
-
71
- **Composability.** CLI commands can be chained together. The output of one becomes the input of another. This is called piping, and it is the foundation of Unix philosophy:
72
-
73
- ```bash
74
- docker ps | grep navada | wc -l
75
- ```
76
-
77
- That single line lists running containers, filters for NAVADA services, and counts them. Three tools, one pipeline, instant result.
78
-
79
- **Automation.** Anything you type in a CLI can be put in a script. A script is just a file full of commands. This is how infrastructure-as-code works: instead of manually clicking through cloud dashboards, you write scripts that configure servers, deploy applications, and manage databases automatically.
80
-
81
- **Reproducibility.** A CLI command is text. It can be documented, version-controlled, shared, and re-run identically. A series of GUI clicks cannot.
82
-
83
- Developers, system administrators, and infrastructure engineers prefer CLIs because they are faster, scriptable, and precise. The NAVADA Edge CLI takes this further by adding an AI agent layer -- instead of memorising hundreds of commands, you describe what you want in plain English and the agent executes the right tools.
84
-
85
- ---
86
-
87
- ## NAVADA Edge CLI
88
-
89
- NAVADA Edge CLI is an AI-powered operating system layer that runs in your terminal. It is the first interface to the NAVADA Edge Network -- a distributed computing platform built for AI workloads, infrastructure management, and developer tooling.
90
-
91
- ### Two parts, one CLI
92
-
93
- The NAVADA Edge CLI has two distinct modes of operation:
94
-
95
- **Part 1: Standalone CLI** -- install from npm, runs on your machine, no account needed. Full AI agent with file operations, shell access, Python execution, code sandbox, and 6 AI providers. The free tier works out of the box. When it runs out, add your own API key. This is a complete, independent developer tool.
96
-
97
- **Part 2: Cloud Compute** -- opt-in. Sign up at the Edge Portal, generate an API key, and run tasks 24/7 on AWS or Azure. Your laptop can be closed while jobs run in the cloud. Monitoring, output streaming, and task management built into the same CLI.
98
-
99
- Part 1 works without Part 2. Part 2 extends Part 1.
100
-
101
- ### What it does
102
-
103
- The CLI wraps your terminal in a conversational AI agent. It has tools for file operations, shell execution, Docker management, remote SSH, database queries, cloud services, image generation, and more. You can use slash commands for precision or type naturally and let the agent figure out what to do.
104
-
105
- ```
106
- navada> what images are in the docker registry?
107
- [docker_registry] {}
108
-
109
- NAVADA
110
- The private registry contains: my-api, navada-command-dashboard,
111
- navada-edge-mcp, navada-opencode-serve, navada-opencode-server.
112
- ```
113
-
114
- ```
115
- navada> create a Python script that fetches weather data and save it to weather.py
116
- [write_file] {"path":"weather.py","content":"import requests\n..."}
117
-
118
- NAVADA
119
- Written: /Users/you/weather.py
120
- ```
121
-
122
- ### How it works under the hood
123
-
124
- The CLI is a Node.js application that maintains a conversation loop with an AI provider. When you type a message, it goes through this pipeline:
125
-
126
- 1. **Input parsing** -- slash commands are routed directly to handlers; natural language goes to the AI agent
127
- 2. **Provider routing** -- the agent selects the appropriate AI provider based on your configuration (NAVADA free tier, Anthropic, OpenAI, Google, NVIDIA, or HuggingFace)
128
- 3. **Tool execution** -- the AI can call tools (shell, file I/O, network, Docker, MCP) and receive results, then continue reasoning
129
- 4. **Streaming** -- responses stream token-by-token to your terminal in real time
130
- 5. **Context management** -- conversation history (last 40 turns) is maintained so the agent remembers what you discussed
131
-
132
- The agent uses a tool-use loop: the AI decides which tool to call, the CLI executes it locally, sends the result back, and the AI continues until the task is complete. This means the agent can chain multiple operations in a single response.
46
+ - **3-tier memory** — the agent remembers you across sessions. Your preferences, your projects, your patterns. It gets better the more you use it.
47
+ - **Full tool access** — shell commands, file operations, Python execution, screenshots, web search. The agent doesn't just talk — it acts.
48
+ - **Bring your own model** — use NVIDIA (free), Anthropic, OpenAI, Google Gemini, or HuggingFace. All providers get the same tools.
49
+ - **Automation pipeline** — describe what you want automated and the NAVADA team sets it up on 24/7 cloud infrastructure.
50
+ - **soul.md + guardrails** — configure who the agent thinks you are and what boundaries it respects. Your agent, your rules.
133
51
 
134
52
  ---
135
53
 
@@ -137,28 +55,18 @@ The agent uses a tool-use loop: the AI decides which tool to call, the CLI execu
137
55
 
138
56
  Requires **Node.js 18+**.
139
57
 
140
- ### Global install (recommended)
141
-
142
58
  ```bash
143
59
  npm install -g navada-edge-cli
144
60
  navada
145
61
  ```
146
62
 
147
- This adds the `navada` command to your PATH. Run it from anywhere.
63
+ That's it. The free tier uses NVIDIA AI models — no API key required. Install and start talking.
148
64
 
149
- ### Local / project install
150
-
151
- ```bash
152
- npm install navada-edge-cli
153
- npx navada
154
- ```
155
-
156
- Useful when you want a specific version pinned to a project, or when you do not have permission to install globally.
157
-
158
- ### Verify installation
65
+ ### Verify
159
66
 
160
67
  ```bash
161
68
  navada --version
69
+ # navada-edge-cli v4.2.0
162
70
  ```
163
71
 
164
72
  ---
@@ -166,488 +74,384 @@ navada --version
166
74
  ## Quick Start
167
75
 
168
76
  ```bash
169
- # 1. Install
77
+ # Install and launch
170
78
  npm install -g navada-edge-cli
171
-
172
- # 2. Launch
173
79
  navada
174
80
 
175
- # 3. Start talking (free tier, no key needed)
176
- navada> hello, what can you do?
81
+ # The agent is ready. Just type:
82
+ navada> what files are in my current directory?
83
+ navada> create a Node.js server that returns "hello world" on port 3000
84
+ navada> explain what this Python script does
85
+ navada> take a screenshot of my screen
177
86
 
178
- # 4. Try some commands
179
- navada> /status
180
- navada> /help
181
- navada> list files in my home directory
182
- navada> create a file called test.txt with "Hello from NAVADA"
87
+ # Run the setup wizard for full personalisation
88
+ navada> /setup
183
89
  ```
184
90
 
185
- The free tier uses Grok via the NAVADA Edge server. No API key required -- install and go. File operations (create, read, edit, delete folders and files) work on the free tier without any AI provider.
186
-
187
- To unlock full agent mode with tool use, add your own API key:
91
+ ### First-time setup
188
92
 
189
93
  ```bash
190
- navada> /login sk-ant-your-anthropic-key
94
+ navada> /setup
191
95
  ```
192
96
 
193
- To connect to the NAVADA Edge Network for distributed infrastructure access:
97
+ This walks you through:
98
+ 1. **API key** — free tier works without one, or add your own for unlimited access
99
+ 2. **soul.md** — tell the agent who you are, what you do, what stack you use
100
+ 3. **guardrail.md** — set safety boundaries (what the agent can and can't do)
101
+ 4. **Theme** — dark, crow, matrix, or light
194
102
 
195
- ```bash
196
- navada> /onboard
197
- navada> /edge login nv_edge_your_key_here
198
- ```
103
+ After setup, the agent knows you. It loads your soul.md on every interaction and respects your guardrails.
199
104
 
200
105
  ---
201
106
 
202
- ## AI Providers
107
+ ## How It Works
203
108
 
204
- The CLI supports 6 AI providers. Each is activated by logging in with the corresponding API key.
109
+ When you type a message, here's what happens:
205
110
 
206
- | Provider | Key Prefix | Model | Cost | Tool Use |
207
- |---|---|---|---|---|
208
- | **NAVADA Free Tier** | (none needed) | Grok (via Edge server) | Free (30 RPM) | File ops only |
209
- | **Anthropic** | `sk-ant-...` | Claude Sonnet 4 | Paid | Yes (full agent) |
210
- | **OpenAI** | `sk-...` | GPT-4o | Paid | Yes |
211
- | **Google Gemini** | `AIza...` | Gemini 2.0 Flash | Free | No |
212
- | **NVIDIA** | `nvapi-...` | Llama, DeepSeek, Mistral + 5 more | Free | No |
213
- | **HuggingFace** | `hf_...` | Qwen Coder 32B | Free | No |
111
+ ```
112
+ You type: "create a React component for a login form"
113
+ |
114
+ v
115
+ Input Parser
116
+ +-- Starts with / --> Slash command (direct execution)
117
+ +-- Natural language --> AI Agent pipeline
118
+ |
119
+ v
120
+ Memory Injection
121
+ +-- Tier 1: Working memory (current conversation + summary)
122
+ +-- Tier 2: Recent episodes (what you discussed yesterday)
123
+ +-- Tier 3: Knowledge (you prefer TypeScript, use Next.js)
124
+ |
125
+ v
126
+ Provider Selection (NVIDIA free, Anthropic, OpenAI, Gemini, etc.)
127
+ |
128
+ v
129
+ Tool Execution Loop
130
+ +-- AI decides: write_file --> create LoginForm.tsx
131
+ +-- AI decides: shell --> npm install @shadcn/ui
132
+ +-- AI reads results, continues reasoning
133
+ +-- Repeat until task is complete (up to 10 iterations)
134
+ |
135
+ v
136
+ Response streams token-by-token to your terminal
137
+ |
138
+ v
139
+ Auto-extract: saves preferences and facts to memory
140
+ ```
214
141
 
215
- ### Login with each provider
142
+ The agent uses a **tool-use loop**: the AI decides which tool to call, the CLI executes it on your machine, sends the result back, and the AI continues. This means it can chain multiple operations — read a file, modify it, run tests, fix errors — all in one response.
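As a rough sketch in plain Node.js, the loop looks like this. The mock model, tool names, and message shapes below are illustrative assumptions, not the CLI's internal API:

```javascript
// Illustrative tool-use loop. The AI decides which tool to call, the
// CLI executes it locally, sends the result back, and the AI continues
// until it produces a final answer (capped at 10 iterations).
const tools = {
  write_file: ({ path, content }) => `Written: ${path} (${content.length} bytes)`,
};

// Stand-in for a real provider call: asks for one tool, then finishes.
function mockModel(messages) {
  const toolResults = messages.filter((m) => m.role === "tool");
  if (toolResults.length === 0) {
    return { toolCall: { name: "write_file", args: { path: "hello.txt", content: "hi" } } };
  }
  return { text: `Done after ${toolResults.length} tool call(s).` };
}

function runAgent(userMessage, model, maxIterations = 10) {
  const messages = [{ role: "user", content: userMessage }];
  for (let i = 0; i < maxIterations; i++) {
    const reply = model(messages);
    if (reply.toolCall) {
      const { name, args } = reply.toolCall;
      const result = tools[name](args);                       // execute locally
      messages.push({ role: "tool", name, content: result }); // feed result back
      continue;                                               // model keeps reasoning
    }
    return reply.text; // final answer (streamed in the real CLI)
  }
  return "(stopped: iteration limit reached)";
}

console.log(runAgent("create hello.txt", mockModel)); // Done after 1 tool call(s).
```

The iteration cap is what keeps a confused model from looping forever: after 10 tool calls without a final answer, the loop bails out.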
216
143
 
217
- ```bash
218
- # NAVADA free tier (default, no login needed)
219
- navada
144
+ ---
220
145
 
221
- # Anthropic — full agent with tool use (recommended)
222
- navada> /login sk-ant-api03-xxxxx
146
+ ## Memory System
223
147
 
224
- # OpenAI GPT-4o with tool use
225
- navada> /login sk-xxxxx
148
+ The agent has a 3-tier memory that works invisibly. You never manage it — it just gets smarter.
226
149
 
227
- # Google Gemini free, fast
228
- navada> /login AIzaSyxxxxx
150
+ ### Tier 1: Working Memory (in-session)
229
151
 
230
- # NVIDIA 8 free models (Llama, DeepSeek, Mistral, etc.)
231
- navada> /login nvapi-xxxxx
232
- navada> /nvidia models
152
+ - Last 20 full messages kept in buffer
153
+ - Older messages compressed into rolling summary
154
+ - Auto-summarises every 15 turns to prevent context overflow
155
+ - Dies when you close the terminal (but auto-saved as episode)
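A minimal sketch of that buffer, assuming a fixed-size message window and a stand-in summariser (the real summary is produced by an AI call):

```javascript
// Sketch of Tier 1 working memory: keep the most recent messages
// verbatim, fold evicted ones into a rolling summary. The summariser
// here is a stand-in; the real one is an AI summarisation call.
class WorkingMemory {
  constructor(maxMessages = 20) {
    this.maxMessages = maxMessages;
    this.summary = "";
    this.messages = [];
  }
  add(message) {
    this.messages.push(message);
    if (this.messages.length > this.maxMessages) {
      const evicted = this.messages.splice(0, this.messages.length - this.maxMessages);
      this.summary += evicted.join("; ") + "; "; // stand-in for AI summarisation
    }
  }
  context() {
    return { summary: this.summary.trim(), recent: this.messages };
  }
}

const mem = new WorkingMemory(3); // tiny window for demonstration
["a", "b", "c", "d", "e"].forEach((m) => mem.add(m));
console.log(mem.context()); // recent: ["c", "d", "e"], summary: "a; b;"
```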
233
156
 
234
- # HuggingFace Qwen Coder 32B, free
235
- navada> /login hf_xxxxx
236
- ```
157
+ ### Tier 2: Episodic Memory (cross-session)
237
158
 
238
- ### NVIDIA models
159
+ - Every meaningful session is auto-saved as an episode
160
+ - Episodes have timestamps, tags, and summaries
161
+ - "What did we work on yesterday?" — the agent actually knows
162
+ - Stored in `~/.navada/memory/episodes/`
239
163
 
240
- NVIDIA provides 8 models for free via [build.nvidia.com](https://build.nvidia.com):
164
+ ### Tier 3: Semantic Knowledge (permanent)
241
165
 
242
- | Model | ID |
243
- |---|---|
244
- | Meta Llama 3.3 70B | `llama-3.3-70b` |
245
- | Meta Llama 3.1 8B | `llama-3.1-8b` |
246
- | DeepSeek R1 | `deepseek-r1` |
247
- | Mistral Large 2 | `mistral-large` |
248
- | Code Llama 70B | `codellama-70b` |
249
- | Google Gemma 2 27B | `gemma-2-27b` |
250
- | Microsoft Phi 3 Medium 128K | `phi-3-medium` |
251
- | NVIDIA Nemotron 70B | `nemotron-70b` |
166
+ - Facts, preferences, skills, people — auto-extracted from conversations
167
+ - TF-IDF search (pure JavaScript, zero dependencies)
168
+ - "I prefer TypeScript" saved permanently, used in every future session
169
+ - Stored in `~/.navada/memory/knowledge.json`
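A simplified, dependency-free TF-IDF scorer in the same spirit (illustrative only: no stemming, so "prefers" and "preference" count as distinct terms):

```javascript
// Simplified TF-IDF search over stored facts, zero dependencies.
// Illustrative scorer only; the CLI's actual ranking may differ.
function tokenize(text) {
  return text.toLowerCase().match(/[a-z0-9]+/g) || [];
}

function tfidfSearch(query, docs) {
  const N = docs.length;
  const docTokens = docs.map(tokenize);
  const df = {}; // document frequency per term
  for (const tokens of docTokens) {
    for (const t of new Set(tokens)) df[t] = (df[t] || 0) + 1;
  }
  return docs
    .map((doc, i) => {
      const tokens = docTokens[i];
      let score = 0;
      for (const q of tokenize(query)) {
        const tf = tokens.filter((t) => t === q).length / tokens.length;
        const idf = Math.log((N + 1) / ((df[q] || 0) + 1)) + 1; // smoothed IDF
        score += tf * idf;
      }
      return { doc, score };
    })
    .sort((a, b) => b.score - a.score);
}

const knowledge = [
  "prefers TypeScript over JavaScript",
  "works on a Next.js dashboard project",
  "uses pandas for CSV analysis",
];
console.log(tfidfSearch("typescript preference", knowledge)[0].doc);
// "prefers TypeScript over JavaScript"
```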
252
170
 
253
- ```bash
254
- navada> /model deepseek-r1
255
- navada> /nvidia chat explain Docker networking
256
- ```
171
+ ### How it feels
257
172
 
258
- ### Model selection
173
+ **Session 1:**
174
+ ```
175
+ navada> I'm a frontend developer, I use React and TypeScript
176
+ NAVADA: Got it. I'll keep that in mind.
177
+ ```
259
178
 
260
- ```bash
261
- navada> /model # show all available models
262
- navada> /model auto # smart routing (picks best per query)
263
- navada> /model claude # always use Claude Sonnet 4
264
- navada> /model gpt-4o # always use GPT-4o
265
- navada> /model deepseek-r1 # always use DeepSeek R1
179
+ **Session 2 (next day):**
180
+ ```
181
+ navada> help me build a component
182
+ NAVADA: Since you use React with TypeScript, here's a typed functional component...
266
183
  ```
267
184
 
268
- ---
185
+ The agent loaded your preference from Tier 3 knowledge. You never asked it to remember — it just did.
269
186
 
270
- ## Commands
187
+ ---
271
188
 
272
- 72 commands (91 with aliases) organised by category. Use `/help` inside the CLI for the full list.
189
+ ## Tools
273
190
 
274
- ### AI
191
+ The agent has 16 tools across 7 categories. Every tool works with every AI provider.
275
192
 
276
- | Command | Description |
193
+ ### Bash
194
+ | Tool | What it does |
277
195
  |---|---|
278
- | `/chat <msg>` | Chat with NAVADA Edge AI agent |
279
- | `/qwen <prompt>` | Qwen Coder 32B (free via HuggingFace) |
280
- | `/yolo detect <img>` | Object detection on an image |
281
- | `/yolo model` | Show YOLO model info |
282
- | `/image <prompt>` | Generate image (Flux, free) |
283
- | `/image --dalle <prompt>` | Generate image (DALL-E 3) |
284
- | `/model [name]` | Show or set default AI model |
285
- | `/research <query>` | RAG search via MCP server |
286
- | `/retry` | Resend the last message to the AI |
287
- | `/tokens` | Show session token usage and cost |
288
- | `/clear` | Clear conversation history and reset session |
289
- | `/save [name]` | Save current conversation to disk |
290
- | `/load <name>` | Load a saved conversation |
291
- | `/conversations` | List all saved conversations |
292
-
293
- ### NVIDIA
294
-
295
- | Command | Description |
196
+ | `shell` | Execute any shell command (bash, PowerShell, cmd) |
197
+ | `python_exec` | Run Python code inline |
198
+ | `python_pip` | Install Python packages |
199
+ | `python_script` | Run .py script files |
200
+ | `sandbox_run` | Sandboxed code execution (JavaScript, Python, TypeScript) |
201
+
202
+ ### System
203
+ | Tool | What it does |
296
204
  |---|---|
297
- | `/nvidia login <key>` | Set NVIDIA API key |
298
- | `/nvidia models` | List all available NVIDIA models |
299
- | `/nvidia model <name>` | Set default NVIDIA model |
300
- | `/nvidia chat <msg>` | Chat with selected NVIDIA model |
301
- | `/nvidia status` | Test NVIDIA API connection |
205
+ | `read_file` | Read any file on your machine |
206
+ | `write_file` | Create or edit files |
207
+ | `list_files` | Browse directories |
208
+ | `system_info` | CPU, RAM, disk, OS, hostname |
302
209
 
303
- ### NETWORK
304
-
305
- | Command | Description |
210
+ ### Data
211
+ | Tool | What it does |
306
212
  |---|---|
307
- | `/status` | Ping all nodes and cloud services |
308
- | `/nodes` | Show node configuration |
309
- | `/dashboard` | Command Dashboard status |
310
- | `/doctor` | Validate all service connections |
311
- | `/metrics` | CPU, RAM, disk for all nodes |
312
- | `/health` | Deep health check |
313
- | `/ping` | Quick all-nodes ping |
314
- | `/opencode` | OpenCode status on all nodes |
315
-
316
- ### AGENTS
213
+ | `web_search` | Search the web for information |
317
214
 
318
- | Command | Description |
215
+ ### Communication
216
+ | Tool | What it does |
319
217
  |---|---|
320
- | `/agents` | Show Lucas CTO + Claude CoS status |
321
- | `/claude <msg>` | Send message to Claude CoS agent |
322
- | `/lucas exec <cmd>` | Run bash on EC2 via Lucas CTO |
323
- | `/lucas ssh <node> <cmd>` | SSH to any node via Lucas |
324
- | `/lucas docker <ctr> <cmd>` | Docker exec on remote container |
325
- | `/lucas deploy <name> <node>` | Deploy container to a node |
326
- | `/lucas status` | Lucas network status |
327
- | `/lucas files <dir>` | List files on remote node |
328
- | `/lucas read <file>` | Read file on remote node |
329
-
330
- ### DOCKER
218
+ | `automation_request` | Submit automation requests (emails, campaigns, builds) |
331
219
 
332
- | Command | Description |
220
+ ### Memory
221
+ | Tool | What it does |
333
222
  |---|---|
334
- | `/registry` | List images in private Docker registry |
335
- | `/registry tags <image>` | List tags for an image |
336
- | `/deploy <name> <node>` | Deploy container to a node |
337
- | `/logs <container>` | View container logs |
223
+ | `save_memory` | Explicitly save something to long-term knowledge |
224
+ | `recall_memory` | Search across all memory tiers |
338
225
 
339
- ### MCP
340
-
341
- | Command | Description |
226
+ ### Perception
227
+ | Tool | What it does |
342
228
  |---|---|
343
- | `/mcp tools` | List all MCP server tools |
344
- | `/mcp call <tool> [json]` | Call an MCP tool directly |
345
-
346
- ### CLOUDFLARE
229
+ | `screenshot` | Capture your screen |
230
+ | `describe_image` | AI-powered image analysis |
347
231
 
348
- | Command | Description |
232
+ ### Info
233
+ | Tool | What it does |
349
234
  |---|---|
350
- | `/r2 ls [prefix]` | List R2 storage objects |
351
- | `/r2 buckets` | List R2 buckets |
352
- | `/r2 upload <key> <file>` | Upload file to R2 |
353
- | `/r2 delete <key>` | Delete R2 object |
354
- | `/r2 url <key>` | Get public URL for R2 object |
355
- | `/dns` | List Cloudflare DNS records |
356
- | `/dns create <type> <name> <val>` | Create DNS record |
357
- | `/tunnel` | List Cloudflare tunnels |
358
- | `/stream` | List Cloudflare Stream videos |
359
- | `/flux <prompt>` | Generate image (free Cloudflare AI) |
360
- | `/trace <url>` | Trace request through Cloudflare WAF |
361
-
362
- ### DATABASE
235
+ | `founder_info` | About the NAVADA founder |
363
236
 
364
- | Command | Description |
365
- |---|---|
366
- | `/db <sql>` | Query PostgreSQL |
237
+ ---
367
238
 
368
- ### FILES
239
+ ## Skills
369
240
 
370
- | Command | Description |
371
- |---|---|
372
- | `/read <path>` | Read a file (with line numbers) |
373
- | `/write <path> <content>` | Write content to a file |
374
- | `/edit <path> <search> -> <replace>` | Find and replace in a file |
375
- | `/delete <path>` | Delete a file or empty directory |
376
- | `/ls [path]` | List files and directories |
377
- | `/mkdir <path>` | Create a directory |
378
- | `/touch <path>` | Create an empty file |
241
+ The agent can perform complex, multi-step tasks. These are not commands — just describe what you need.
379
242
 
380
- File operations also work via natural language on any tier (no API key needed):
243
+ **Code Generation** — write, debug, refactor code in any language. Full project scaffolds with configs.
381
244
 
382
- ```
383
- navada> create a folder on my desktop called MyProject
384
- Created folder: C:\Users\you\Desktop\MyProject
385
- ```
245
+ **Data Analysis** — process CSV, JSON, Excel with Python/pandas. Generate charts and visualisations.
386
246
 
387
- ### EDGE (Part 2 -- Cloud Compute)
247
+ **Automation** — submit requests for 24/7 automation: marketing emails, scheduled reports, data pipelines.
388
248
 
389
- | Command | Description |
390
- |---|---|
391
- | `/edge login <key>` | Connect with NAVADA Edge API key |
392
- | `/edge status` | Check Edge Network connection |
393
- | `/edge logout` | Disconnect from Edge Network |
394
- | `/edge tier` | Show current tier and limits |
395
- | `/edge setup` | Create agent.md and sub-agents directory |
396
- | `/onboard` | Open Edge Portal to create account |
397
- | `/offload <command>` | Run a task 24/7 on the cloud |
398
- | `/sessions` | View your cloud task sessions |
399
- | `/attach <session-id>` | Stream output from a running task |
400
- | `/kill <session-id>` | Stop a running cloud task |
401
-
402
- ### TASKS
249
+ **DevOps** Docker builds, git workflows, package management, CI/CD pipelines.
403
250
 
404
- | Command | Description |
405
- |---|---|
406
- | `/tasks` | List tasks |
407
- | `/tasks create <title>` | Create a task |
408
- | `/tasks done <id>` | Mark task complete |
409
- | `/tasks delete <id>` | Delete a task |
251
+ **Research** — web search, technical docs, competitive analysis, summarisation.
410
252
 
411
- ### KEYS
253
+ **Content** — blog posts, emails, documentation, diagrams (Mermaid, SVG, HTML).
412
254
 
413
- | Command | Description |
414
- |---|---|
415
- | `/keys` | List API keys |
416
- | `/keys create [name]` | Create an API key |
417
- | `/keys delete <key>` | Delete an API key |
255
+ **Learning** — interactive tutorials: `/learn python`, `/learn node`, `/learn csharp`. Code review and architecture design.
418
256
 
419
- ### AZURE
257
+ ---
420
258
 
421
- | Command | Description |
422
- |---|---|
423
- | `/n8n` | Azure n8n health check |
424
- | `/n8n restart` | Restart Azure n8n |
259
+ ## AI Providers
425
260
 
426
- ### LEARNING
261
+ Bring your own model. All providers get full tool access.
427
262
 
428
- | Command | Description |
429
- |---|---|
430
- | `/learn python` | Enter Python learning mode |
431
- | `/learn csharp` | Enter C# learning mode |
432
- | `/learn node` | Enter Node.js learning mode |
433
- | `/learn off` | Exit learning mode |
263
+ | Provider | Key Prefix | Models | Cost | How to activate |
264
+ |---|---|---|---|---|
265
+ | **NVIDIA (default)** | `nvapi-...` | Llama 3.3, DeepSeek R1, Mistral + 5 more | Free (rate limited) | Default — no key needed |
266
+ | **Anthropic** | `sk-ant-...` | Claude Sonnet 4 | Paid | `/login sk-ant-xxx` |
267
+ | **OpenAI** | `sk-...` | GPT-4o, GPT-4o-mini | Paid | `/login sk-xxx` |
268
+ | **Google Gemini** | `AIza...` | Gemini 2.0 Flash | Free | `/login AIzaxxx` |
269
+ | **HuggingFace** | `hf_...` | Qwen Coder 32B | Free | `/login hf_xxx` |
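The key prefixes in the table map one-to-one to providers, which is presumably how `/login` auto-detects them. A hypothetical sketch (note that `sk-ant-` must be tested before the more general `sk-`):

```javascript
// Key-prefix provider detection, using the prefixes from the table.
// The function itself is a hypothetical sketch of what /login might do.
function detectProvider(key) {
  if (key.startsWith("sk-ant-")) return "anthropic"; // before the general "sk-"
  if (key.startsWith("sk-")) return "openai";
  if (key.startsWith("AIza")) return "gemini";
  if (key.startsWith("nvapi-")) return "nvidia";
  if (key.startsWith("hf_")) return "huggingface";
  return "unknown";
}

console.log(detectProvider("sk-ant-api03-xxxxx")); // anthropic
console.log(detectProvider("nvapi-xxxxx"));        // nvidia
```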
434
270
 
435
- ### SANDBOX
271
+ ### Unlimited access
436
272
 
437
- | Command | Description |
438
- |---|---|
439
- | `/sandbox run <lang>` | Run code with syntax highlighting |
440
- | `/sandbox exec <file>` | Execute a file in the sandbox |
441
- | `/sandbox highlight <file>` | Syntax-highlight a file |
442
- | `/sandbox demo` | Run a demo to test colors |
273
+ The free NVIDIA tier has rate limits. For unlimited access:
443
274
 
444
- ### SYSTEM
275
+ ```bash
276
+ navada> /register # Create account with name + email
277
+ navada> /setup # Complete soul.md + guardrails
278
+ ```
445
279
 
446
- | Command | Description |
447
- |---|---|
448
- | `/help` | Show all commands |
449
- | `/config` | Show all configuration |
450
- | `/login <key>` | Set API key (auto-detects provider) |
451
- | `/init <key> <value>` | Set a config value |
452
- | `/setup` | Guided onboarding wizard |
453
- | `/theme [name]` | Switch theme (dark, crow, matrix, light) |
454
- | `/history [search]` | Command history |
455
- | `/alias <name> <cmd>` | Create command shortcut |
456
- | `/watch <cmd> <sec>` | Repeat command on interval |
457
- | `/export <file>` | Save last output to file |
458
- | `/pipe` | Copy last output to clipboard |
459
- | `/email <to> <subj> <body>` | Send email (SMTP or MCP) |
460
- | `/email setup` | Configure SMTP provider |
461
- | `/activity` | Recent activity log |
462
- | `/version` | Version and tier info |
463
- | `/upgrade` | Check for CLI updates |
464
- | `/audit` | Security and compliance audit (30 checks) |
465
- | `/exit` | Exit CLI |
280
+ This unlocks full access including automation requests and priority support.
466
281
 
467
282
  ---
468
283
 
469
- ## Architecture
284
+ ## Automation Pipeline
470
285
 
471
- ![NAVADA Edge Network Architecture](architecture.svg)
286
+ Describe what you want automated. The NAVADA team reviews your request and sets it up on 24/7 cloud infrastructure.
472
287
 
473
- ### Agent routing
288
+ ```bash
289
+ navada> /automate Send a weekly marketing email to my leads every Monday
290
+ ```
474
291
 
475
- When you type a message, the CLI determines how to handle it:
292
+ ### How it works
476
293
 
477
- ```
478
- User Input
479
- |
480
- +-- Starts with "/" --> Slash command handler (direct execution)
481
- |
482
- +-- Natural language --> AI Agent pipeline:
483
- |
484
- 1. Build message array (system prompt + conversation history + user message)
485
- 2. Select provider:
486
- | - Anthropic key set? --> Claude Sonnet 4 (with tool definitions)
487
- | - OpenAI key set? --> GPT-4o (with tool definitions)
488
- | - Gemini key set? --> Gemini 2.0 Flash
489
- | - NVIDIA key set? --> Selected NVIDIA model
490
- | - HF token set? --> Qwen Coder 32B
491
- | - No key? --> NAVADA free tier (GPT-4o-mini via Edge server)
492
- |
493
- 3. Stream response token-by-token to terminal
494
- 4. If AI requests tool use --> execute tool --> return result --> continue
495
- 5. Update conversation history (40-turn sliding window)
496
- ```
294
+ 1. **You submit** — via `/automate` or by asking the agent naturally
295
+ 2. **Request queued** — goes to the NAVADA Edge operations team
296
+ 3. **Team reviews** — Lee (founder) personally reviews and configures your automation
297
+ 4. **You get notified** — email confirmation when it's live
298
+ 5. **Track metrics** — view status and results on your dashboard
497
299
 
498
- ### Tool execution loop
300
+ ### What can be automated
499
301
 
500
- The agent (Anthropic and OpenAI providers) supports tool use. The AI model decides which tools to call based on the user's request. Available tools:
302
+ - Marketing email campaigns and newsletters
303
+ - Scheduled reports and data exports
304
+ - Application builds and deployments
305
+ - Data scraping and processing pipelines
306
+ - Custom recurring tasks
501
307
 
- | Tool | Scope | Function |
- |---|---|---|
- | `shell` | Local | Run any shell command on your machine |
- | `read_file` | Local | Read files from your filesystem |
- | `write_file` | Local | Create or modify files |
- | `edit_file` | Local | Find and replace text in a file |
- | `delete_file` | Local | Delete a file or empty directory |
- | `list_files` | Local | Browse directories |
- | `system_info` | Local | CPU, RAM, disk, hostname, OS |
- | `python_exec` | Local | Execute Python code |
- | `python_pip` | Local | Install Python packages |
- | `python_script` | Local | Run a Python script file |
- | `sandbox_run` | Local | Run code with syntax highlighting |
- | `founder_info` | Local | Accurate answers about the NAVADA founder (CV-grounded) |
- | `network_status` | Network | Ping all NAVADA Edge nodes |
- | `lucas_exec` | Remote | Run bash on EC2 via Lucas CTO |
- | `lucas_ssh` | Remote | SSH to any network node |
- | `lucas_docker` | Remote | Docker exec in remote containers |
- | `mcp_call` | Remote | Call any of 18 MCP server tools |
- | `docker_registry` | Remote | Query the private Docker registry |
- | `send_email` | Remote | Send email via SMTP or MCP |
- | `generate_image` | Remote | Generate images (Flux or DALL-E) |
-
- The execution loop works like this:
-
- 1. User sends message
- 2. AI analyses the message and decides to call a tool (e.g., `shell` with `docker ps`)
- 3. CLI executes the tool locally and captures the output
- 4. Output is sent back to the AI as a tool result
- 5. AI reads the result and either calls another tool or writes a final response
- 6. If the AI calls another tool, go back to step 3 (up to 10 iterations)
-
- ### Rate limiting
-
- The free tier enforces a sliding-window rate limit of 30 requests per minute. The CLI tracks this in-memory per session. When the limit is reached, the CLI suggests upgrading to a provider with your own API key.
-
- ### Conversation context
-
- The CLI maintains a sliding window of the last 40 conversation turns (20 exchanges). This allows the agent to reference earlier parts of the conversation without exceeding token limits. Conversations can be saved to disk with `/save` and restored with `/load`.
-
- ### Streaming
-
- All providers support streaming. Responses appear token-by-token as the AI generates them, rather than waiting for the full response. This is implemented using Server-Sent Events (SSE) for each provider's API.
+ ```bash
+ navada> /automate --type marketing Build me a weekly newsletter template
+ navada> /automate --type data --schedule daily Scrape competitor prices from 3 sites
+ navada> /requests    # View all your submitted requests
+ ```
 
 ---

- ## Edge Network
-
- The NAVADA Edge Network is a distributed computing platform that the CLI connects to. It consists of physical servers, cloud VMs, and services connected via Tailscale VPN.
+ ## Configuration

- ### What it provides
+ All config lives in `~/.navada/`:

- - **AI chat** -- free-tier GPT-4o-mini via the Edge server (no API key required)
- - **MCP server** -- 18 tools accessible via JSON-RPC (file operations, network, email, image gen, cloud)
- - **Lucas CTO** -- autonomous infrastructure agent running on EC2 (bash, SSH, Docker, deploy)
- - **Docker registry** -- private container registry for deploying to any node
- - **Command Dashboard** -- real-time status, metrics, and activity logs
+ ```
+ ~/.navada/
+ +-- config.json         # API keys, theme, model preferences
+ +-- soul.md             # Your identity (who you are, your goals)
+ +-- guardrail.md        # Safety boundaries (what the agent can do)
+ +-- agent.md            # Custom agent personality (optional)
+ +-- README.md           # Your workspace quickstart
+ +-- agents/             # Custom sub-agent personas
+ +-- memory/
+ |   +-- knowledge.json  # Tier 3: permanent knowledge base
+ |   +-- episodes/       # Tier 2: session summaries
+ +-- conversations/      # Saved full conversations
+ +-- requests/           # Local automation request queue
+ ```
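+ 
+ `soul.md`, `guardrail.md`, and `agent.md` are plain Markdown you edit by hand; `config.json` is managed by the CLI. As an illustrative sketch only — the field names below are hypothetical, not the package's documented schema (run `/config` to see your actual settings) — a `config.json` could look like:
+ 
+ ```json
+ {
+   "apiKey": "sk-ant-...",
+   "theme": "crow",
+   "model": "your-preferred-model"
+ }
+ ```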
 
- ### Connecting to the Edge Network
+ ### Key commands

 ```bash
- # 1. Create an account at the Edge Portal
- navada> /onboard
-
- # 2. Generate an API key in the portal dashboard
- # 3. Connect your CLI
- navada> /edge login nv_edge_your_key_here
-
- # 4. Verify connection
- navada> /edge status
- navada> /doctor
+ navada> /setup              # Full setup wizard
+ navada> /soul               # View your soul.md
+ navada> /soul edit          # Edit in your default editor
+ navada> /guardrails         # View boundaries
+ navada> /guardrails edit    # Edit boundaries
+ navada> /tools              # Show all agent tools
+ navada> /skills             # Show what the agent can do
+ navada> /config             # Show all settings
+ navada> /login <key>        # Set API key (auto-detects provider)
+ navada> /theme crow         # Switch theme
 ```
 
- ### API key tiers
-
- | Tier | Cloud Compute | Pricing |
- |---|---|---|
- | **Free** | 3 sessions, 1 concurrent, 5 min max | Free |
- | **Starter** | 10 sessions, 3 concurrent, 15 min max | TBC |
- | **Pro** | 50 sessions, 5 concurrent, 1 hour max | TBC |
- | **Enterprise** | 500 sessions, 10 concurrent, 4 hour max | TBC |
+ ---

- ### Portal
+ ## Commands

- The NAVADA Edge Portal is a web application where users create accounts, generate API keys, and monitor their usage. Visit [portal.navada-edge-server.uk](https://portal.navada-edge-server.uk) or run `/onboard` from the CLI.
+ 68 commands organised by category. Use `/help` inside the CLI for the full list.

- ---
+ ### Core

- ## The Vision
+ | Command | Description |
+ |---|---|
+ | `/help` | Show all commands |
+ | `/setup` | Full onboarding wizard |
+ | `/tools` | Show agent tools by category |
+ | `/skills` | Show what the agent can do |
+ | `/login <key>` | Set API key |
+ | `/register` | Create NAVADA Edge account |
+ | `/config` | Show configuration |

- This is the early stage of an AI-powered operating system.
+ ### AI & Chat

- Today, the CLI is a terminal agent -- you install it, type naturally, and it executes tasks on your machine using AI. But the terminal is just the first interface layer. The architecture is designed for what comes next.
+ | Command | Description |
+ |---|---|
+ | `/chat <msg>` | Chat with the agent |
+ | `/model [name]` | Show or set AI model |
+ | `/nvidia models` | List free NVIDIA models |
+ | `/clear` | Reset conversation |
+ | `/save [name]` | Save conversation |
+ | `/load <name>` | Load saved conversation |
+ | `/retry` | Resend last message |
+ | `/tokens` | Session token usage |
 
- ### What is built today
+ ### Automation

- **Part 1: Standalone CLI.** Install from npm and go. Full local tools (file CRUD, shell, Python, sandbox), 6 AI providers, conversation history, agent.md customisation, sub-agents, learning modes, code sandbox. Works completely independently with no account or sign-up required. Free tier included, or bring your own API key.
+ | Command | Description |
+ |---|---|
+ | `/automate <desc>` | Submit automation request |
+ | `/requests` | View your requests |

- **Part 2: Cloud Compute.** Sign up at the Edge Portal, generate an API key, and offload tasks to AWS (EC2) for 24/7 execution. Tasks run when your laptop is closed. Monitor sessions, stream output, kill tasks -- all from the CLI. Authenticated via `nv_edge_` API keys with tier-based limits.
+ ### Edge Network

- **agent.md customisation.** Every user can define their own `agent.md` file at `~/.navada/agent.md` -- a plain-text configuration that shapes the AI's personality, tools, and behaviour. Your agent becomes uniquely yours. Sub-agents live in `~/.navada/agents/` and can be switched mid-session with `/agent use <name>`.
+ | Command | Description |
+ |---|---|
+ | `/edge login <key>` | Connect to NAVADA Edge |
+ | `/edge status` | Check connection |
+ | `/edge tier` | Show tier and limits |
+ | `/edge usage` | View usage stats |
+ | `/onboard` | Open portal in browser |

- **Knowledge skills.** The CLI uses Python-based knowledge skills for grounded, accurate responses. Instead of the AI hallucinating, factual data is baked into Python scripts that the agent calls as tools. No RAG infrastructure needed -- just a Python file and a prompt.
+ ### Identity

- ### Where this is going
+ | Command | Description |
+ |---|---|
+ | `/soul` | View soul.md |
+ | `/soul edit` | Edit soul.md |
+ | `/guardrails` | View guardrail.md |
+ | `/guardrails edit` | Edit guardrail.md |
+ | `/agent list` | List sub-agents |
+ | `/agent use <name>` | Activate sub-agent |

- **Azure compute node.** A second cloud region for redundancy and lower latency. The CLI will route tasks to the nearest available node.
+ ### Learning

- **Local LLM execution.** Running models locally on NVIDIA GPUs via Docker containers. Instead of paying per token to a cloud API, your home server runs Llama, Mistral, or DeepSeek locally. The CLI routes to whichever is fastest -- local GPU or cloud API -- transparently.
+ | Command | Description |
+ |---|---|
+ | `/learn python` | Python interactive tutor |
+ | `/learn node` | Node.js interactive tutor |
+ | `/learn csharp` | C# interactive tutor |
+ | `/learn off` | Exit learning mode |

- **More sub-agents.** Lucas CTO is the first sub-agent -- an autonomous infrastructure manager. More are planned: a security agent for vulnerability scanning, a data agent for ETL pipelines, a monitoring agent for alerting. Each runs in its own container and communicates via MCP.
+ ### System

- **Multi-device.** The CLI supports mobile access via `/serve`. The vision is a unified agent layer across terminal, web, and mobile -- same context, same tools, same conversation -- wherever you are.
+ | Command | Description |
+ |---|---|
+ | `/theme [name]` | Switch theme |
+ | `/history` | Command history |
+ | `/alias <name> <cmd>` | Create shortcut |
+ | `/export <file>` | Save output to file |
+ | `/version` | Version info |
+ | `/upgrade` | Check for updates |
 
- ### The operating system analogy
+ ---

- An OS manages hardware resources and provides an interface for users to interact with them. NAVADA Edge does the same thing for AI and distributed compute:
+ ## The Vision

- - **Hardware layer** -- physical servers, GPUs, VMs connected via Tailscale
- - **Service layer** -- Docker containers, MCP servers, databases, cloud APIs
- - **Agent layer** -- AI that understands your intent and orchestrates the services
- - **Interface layer** -- CLI today, web portal, mobile, and API tomorrow
+ This is the early stage of an AI-powered operating system layer.

- The CLI is the shell. The agent is the kernel. The Edge Network is the hardware. Everything else is a service.
+ Today, the CLI is a terminal agent — install it, talk naturally, and it works on your machine. But the terminal is just the first interface.

- ---
+ **Where this is going:**

- ## Configuration
+ - **Memory that compounds** — the longer you use it, the better it understands you. Your agent becomes uniquely yours.
+ - **Automation marketplace** — submit what you need, get it running on cloud infrastructure. Email campaigns, data pipelines, scheduled reports — all managed for you.
+ - **Local LLM execution** — run models on your own GPU instead of cloud APIs. Same interface, zero cost.
+ - **Sub-agents** — specialised personas for security, data engineering, marketing, DevOps. Switch between them with `/agent use`.
+ - **Multi-device** — same agent, same memory, across terminal, web, and mobile.

- All configuration is stored in `~/.navada/config.json`.
+ ### The operating system analogy

- ```bash
- navada> /login sk-ant-your-key # API key (auto-detects provider)
- navada> /init asus 100.x.x.x # Set node IP
- navada> /init mcp http://x:8811 # MCP server endpoint
- navada> /theme crow # Theme (dark, crow, matrix, light)
- navada> /alias s status # Create shortcut
- ```
+ An OS manages hardware and provides an interface. NAVADA Edge does the same for AI:

- Environment variables are also supported:
+ - **Hardware layer** — your machine + cloud infrastructure
+ - **Memory layer** — 3-tier system that persists across sessions
+ - **Agent layer** — AI that understands intent and orchestrates tools
+ - **Interface layer** — CLI today, web and mobile tomorrow

- ```
- ANTHROPIC_API_KEY=sk-ant-...
- NAVADA_ASUS=100.x.x.x
- NAVADA_MCP=http://100.x.x.x:8811
- NAVADA_DASHBOARD=http://100.x.x.x:7900
- NAVADA_REGISTRY=http://100.x.x.x:5000
- NAVADA_LUCAS=http://100.x.x.x:8820
- ```
+ The CLI is the shell. The agent is the kernel. Memory is the filesystem. Everything else is a service.

 ---
 
@@ -655,45 +459,15 @@ NAVADA_LUCAS=http://100.x.x.x:8820

 | Package | Install | Purpose |
 |---|---|---|
- | **navada-edge-sdk** | `npm i navada-edge-sdk` | SDK for Node.js applications -- build on top of the Edge Network |
 | **navada-edge-cli** | `npm i -g navada-edge-cli` | AI agent in your terminal |
- | **Edge Portal** | [portal.navada-edge-server.uk](https://portal.navada-edge-server.uk) | Account management, API keys, usage dashboard |
- | **Edge Compute** | via CLI `/offload` | 24/7 task execution on AWS/Azure |
- | **MCP Server** | `POST /mcp` | JSON-RPC tool server (18 tools) |
-
- ### SDK usage
-
- The CLI ships with the NAVADA Edge SDK as a dependency. You can also install it independently to build your own applications:
-
- ```bash
- npm install navada-edge-sdk
- ```
-
- ```javascript
- const navada = require('navada-edge-sdk');
-
- // Configure
- navada.init({ mcpApiKey: 'nv_edge_your_key' });
-
- // Use
- const status = await navada.network.ping();
- const result = await navada.mcp.call('server_status');
- const image = await navada.cloudflare.flux.generate('a sunset over mountains');
- ```
-
- The SDK provides programmatic access to the same services the CLI uses: network nodes, MCP tools, Cloudflare (R2, Flux, Stream, DNS), AI providers, Docker registry, and PostgreSQL.
-
- ---
-
- ## Telemetry
-
- The CLI reports anonymous usage events (install, session start, command counts) to the NAVADA Edge Dashboard for monitoring. No personal data or API keys are transmitted. Telemetry requires a configured dashboard endpoint.
+ | **navada-edge-sdk** | `npm i navada-edge-sdk` | SDK for Node.js applications |
+ | **Edge Portal** | [portal.navada-edge-server.uk](https://portal.navada-edge-server.uk) | Account management and API keys |

 ---

- ## Built by
+ ## Built By

- **Leslie (Lee) Akpareva** -- Principal AI Consultant, MBA, MA. 17+ years in enterprise IT, insurance, and AI infrastructure.
+ **Leslie (Lee) Akpareva** — Principal AI Consultant, MBA, MA. 17+ years in enterprise IT, insurance, and AI infrastructure.

 - GitHub: [github.com/leeakpareva](https://github.com/leeakpareva)
 - Website: [navada-lab.space](https://www.navada-lab.space)
@@ -702,4 +476,4 @@ The CLI reports anonymous usage events (install, session start, command counts)

 ## License

- MIT -- Leslie Akpareva / NAVADA
+ MIT — Leslie Akpareva / NAVADA