navada-edge-cli 3.0.2 → 3.1.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/Dockerfile CHANGED
@@ -1,3 +1,25 @@
  FROM node:22-slim
- RUN npm install -g navada-edge-cli@2.0.0
- ENTRYPOINT ["navada"]
+
+ LABEL maintainer="Leslie Akpareva <leeakpareva@hotmail.com>"
+ LABEL org.opencontainers.image.title="navada-edge-cli"
+ LABEL org.opencontainers.image.description="NAVADA Edge CLI — interactive terminal agent"
+
+ # Install Python for python tools
+ RUN apt-get update && apt-get install -y --no-install-recommends python3 python3-pip && \
+     rm -rf /var/lib/apt/lists/*
+
+ WORKDIR /app
+
+ # Copy package files and install
+ COPY package.json package-lock.json ./
+ RUN npm ci --production
+
+ # Copy source
+ COPY bin/ bin/
+ COPY lib/ lib/
+ COPY LICENSE README.md ./
+
+ # Config directory
+ RUN mkdir -p /root/.navada
+
+ ENTRYPOINT ["node", "bin/navada.js"]
package/README.md CHANGED
@@ -10,34 +10,50 @@ npm install -g navada-edge-cli
  
  **NAVADA Edge CLI** is a production-grade terminal tool that gives developers and infrastructure teams an AI agent with full system access, connected to a distributed computing network.
  
+ ### Install
+
+ ```bash
+ npm install -g navada-edge-cli
+ navada
+ ```
+
+ Or run with Docker:
+
+ ```bash
+ docker run -it navada-edge-cli:3.0.2
+ ```
+
  **The problem:** Managing distributed infrastructure across multiple cloud providers, on-prem servers, and services requires jumping between terminals, dashboards, and APIs. AI assistants answer questions but can't execute.
  
- **The solution:** One CLI that talks naturally, executes commands locally and remotely, and unifies access to your entire infrastructure through a conversational AI agent with 13 built-in tools.
+ **The solution:** One CLI that talks naturally, executes commands locally and remotely, and unifies access to your entire infrastructure through a conversational AI agent with tool use, streaming, and smart model routing.
  
  **Key differentiators:**
+ - **Free tier included** — Grok AI with 30 RPM, no API key needed, just install and go
+ - **Multi-provider AI** — Anthropic (Claude Sonnet 4), OpenAI (GPT-4o/mini), Grok, Qwen Coder — all with streaming
+ - **Smart routing** — auto-picks the best model: code queries to Qwen, complex to Claude, general to Grok
  - **Conversational AI agent** — type naturally, the agent uses tools to execute (not just answer)
- - **Full computer access** — shell, files, processes on your local machine
+ - **Full computer access** — shell, files, processes, Python execution on your local machine
  - **Distributed network** — 4 physical nodes connected via Tailscale VPN, managed from one terminal
  - **Two sub-agents** — Lucas CTO (infrastructure) and Claude CoS (communications + automation)
  - **Cloud-native** — Cloudflare (R2, Flux AI, Stream, DNS), Azure (n8n), private Docker registry
- - **5 AI models** — Claude Sonnet 4, GPT-4o, Qwen Coder (FREE), YOLO v8, Flux (FREE)
- - **45+ slash commands** — direct access when you need precision
+ - **58 slash commands** — direct access when you need precision
+ - **3 learning modes** — `/learn python`, `/learn csharp`, `/learn node` — interactive tutor mode
+ - **Split-panel TUI** — session info, token usage, rate limits, provider status
  - **4 themes** — dark, crow (achromatic), matrix, light
- - **Mobile access** — HTTP server with web UI, accessible from phone
+ - **Docker-first** — runs as a container with `restart: always`, ships with Dockerfile
  - **Cross-platform** — Windows, macOS, Linux
- - **Docker ready** — run as a container with `docker run -it navada-edge-cli`
- - **Zero config to start** — install, login with any API key, start talking
+ - **Zero config to start** — install and start talking, free tier works immediately
  
  **Who it's for:** DevOps engineers, infrastructure teams, AI developers, and anyone who manages distributed systems and wants an AI copilot in their terminal.
  
- **Pricing:** The CLI is free and open source (MIT). AI features require your own API key (Anthropic, OpenAI, or HuggingFace — Qwen is free). Network features require access to a NAVADA Edge Network instance.
+ **Pricing:** The CLI is free and open source (MIT). Free tier (Grok) works out of the box. Add your own API key for full agent mode (Anthropic, OpenAI, or HuggingFace).
  
  ```
  ╭─────────────────────────────────────────────────────────╮
  │ ███╗ ██╗ █████╗ ██╗ ██╗ █████╗ ██████╗ █████╗ │
  │ ██╔██╗ ██║███████║██║ ██║███████║██║ ██║███████║ │
  │ ██║ ╚████║██║ ██║ ╚████╔╝ ██║ ██║██████╔╝██║ ██║ │
- │ E D G E   N E T W O R K   v2.1.1
+ │ E D G E   N E T W O R K   v3.0.2
  ╰─────────────────────────────────────────────────────────╯
  ```
  
@@ -1,10 +1,16 @@
  services:
    cli:
      build: .
-     image: navada-edge-cli:2.0.0
+     image: navada-edge-cli:3.1.0
      container_name: navada-edge-cli
-     env_file: .env
+     restart: always
      stdin_open: true
      tty: true
-     ports:
-       - "${SERVE_PORT:-7800}:7800"
+     volumes:
+       - cli-config:/root/.navada
+     environment:
+       - NODE_ENV=production
+       - TERM=xterm-256color
+
+ volumes:
+   cli-config:
package/lib/agent.js CHANGED
@@ -17,15 +17,53 @@ const config = require('./config');
  const IDENTITY = {
    name: 'NAVADA Edge',
    role: 'AI Infrastructure Agent',
-   personality: `You are NAVADA Edge — an AI agent that operates inside the user's terminal.
+   personality: `You are NAVADA Edge — an AI agent built by Lee Akpareva (Principal AI Consultant, MBA, MA) that operates inside the user's terminal.
  You are professional, technical, concise, and helpful. You speak with authority about distributed systems, Docker, AI, and cloud infrastructure.
- You have full access to the user's computer: you can read/write files, run shell commands, manage processes, and connect to the NAVADA Edge Network.
- You can invoke two sub-agents:
- - Lucas CTO: runs bash, SSH, Docker commands on remote NAVADA Edge nodes
- - Claude CoS: sends emails, generates images, manages the network
- When users ask you to do something, DO it — don't just describe how. Use your tools.
- When you don't have a tool for something, say so clearly and suggest an alternative.
+ You have FULL ACCESS to the user's computer, and you CAN and SHOULD use your tools to execute tasks:
+ - shell: run ANY bash, PowerShell, or system command on the user's machine
+ - read_file / write_file / list_files: full filesystem access: create, read, modify any file
+ - python_exec / python_pip / python_script: run Python code directly
+ - sandbox_run: run code with syntax-highlighted output
+ - system_info: check CPU, RAM, disk, OS
+ You also connect to the NAVADA Edge Network (4 nodes via Tailscale VPN):
+ - lucas_exec / lucas_ssh / lucas_docker: execute commands on remote nodes (EC2, HP, Oracle)
+ - mcp_call: access 18 MCP tools on the ASUS server
+ - docker_registry: manage the private Docker registry
+ - send_email / generate_image: communications and AI image generation
+ - founder_info: information about Lee Akpareva, the creator of NAVADA
+ When users ask you to DO something — DO IT. Use write_file to create files. Use shell to run commands. Never say "I can't" when you have a tool for it.
  Keep responses short. Code blocks when needed. No fluff.`,
+   founder: {
+     name: 'Leslie (Lee) Akpareva',
+     title: 'Principal AI Consultant & Founder, NAVADA Edge Network',
+     qualifications: 'MBA (Business Administration), MA (International Relations), EF8 Alumni',
+     experience: '17+ years in enterprise IT, insurance, and AI infrastructure',
+     current_role: 'AI Project Lead at Generali UK (insurance)',
+     expertise: [
+       'AI/ML infrastructure and deployment',
+       'Distributed systems and edge computing',
+       'Docker containerisation and orchestration',
+       'Cloud architecture (AWS, Azure, Oracle, Cloudflare)',
+       'MCP (Model Context Protocol) server development',
+       'Full-stack development (Node.js, Python, React, Next.js)',
+       'Enterprise IT transformation and automation',
+     ],
+     projects: [
+       'NAVADA Edge Network — distributed AI home server (4 nodes, 25+ containers)',
+       'NAVADA Edge SDK + CLI — npm packages for network access',
+       'WorldMonitor — OSINT intelligence dashboard',
+       'Lucas CTO — autonomous infrastructure agent',
+       'OpenCode — open source coding assistant',
+       'NAVADA Robotics — robotics and hardware',
+       'Raven Terminal — security terminal',
+     ],
+     contact: {
+       email: 'leeakpareva@hotmail.com',
+       github: 'github.com/leeakpareva',
+       website: 'navada-lab.space',
+       phone: '+447935237704',
+     },
+   },
  };
  
  function getSystemPrompt() {
@@ -194,6 +232,11 @@ const localTools = {
      }
    },
  },
+
+ founderInfo: {
+   description: 'Get information about the NAVADA Edge founder',
+   execute: () => JSON.stringify(IDENTITY.founder, null, 2),
+ },
  };
  
  // ---------------------------------------------------------------------------
@@ -592,11 +635,13 @@ function detectIntent(message) {
  async function chat(userMessage, conversationHistory = []) {
    const anthropicKey = config.get('anthropicKey') || process.env.ANTHROPIC_API_KEY || '';
    const openaiKey = config.get('openaiKey') || process.env.OPENAI_API_KEY || '';
+   const nvidiaKey = config.get('nvidiaKey') || process.env.NVIDIA_API_KEY || '';
    const apiKey = config.getApiKey() || '';
  
    // Determine which provider to use
    const effectiveAnthropicKey = anthropicKey || (apiKey.startsWith('sk-ant') ? apiKey : '');
    const effectiveOpenAIKey = openaiKey || (apiKey.startsWith('sk-') && !apiKey.startsWith('sk-ant') ? apiKey : '');
+   const effectiveNvidiaKey = nvidiaKey || (apiKey.startsWith('nvapi-') ? apiKey : '');
  
    const modelPref = config.getModel();
    const intent = detectIntent(userMessage);
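The effective-key logic above infers the provider from the key's prefix: `sk-ant` keys go to Anthropic, other `sk-` keys to OpenAI, and `nvapi-` keys to NVIDIA. The same precedence can be restated as a standalone function (an illustrative sketch, not code from the package; the name `detectProvider` is invented here):

```javascript
'use strict';

// Hypothetical restatement of the prefix checks above (not the package's code).
// Order matters: 'sk-ant' must be tested before the generic 'sk-' prefix,
// otherwise every Anthropic key would be misread as an OpenAI key.
function detectProvider(apiKey) {
  if (!apiKey) return 'grok-free';             // no key: free tier
  if (apiKey.startsWith('sk-ant')) return 'anthropic';
  if (apiKey.startsWith('nvapi-')) return 'nvidia';
  if (apiKey.startsWith('sk-')) return 'openai';
  return 'grok-free';                          // unrecognized key: fall back to free tier
}

console.log(detectProvider('sk-ant-api03-xyz')); // anthropic
console.log(detectProvider('nvapi-xyz'));        // nvidia
console.log(detectProvider('sk-proj-xyz'));      // openai
```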
@@ -604,10 +649,11 @@ async function chat(userMessage, conversationHistory = []) {
    // Track active provider for UI
    if (effectiveAnthropicKey) sessionState.provider = 'Anthropic';
    else if (effectiveOpenAIKey) sessionState.provider = 'OpenAI';
+   else if (effectiveNvidiaKey) sessionState.provider = 'NVIDIA';
    else sessionState.provider = 'Grok (free)';
  
    // No personal key — use free tier
-   if (!effectiveAnthropicKey && !effectiveOpenAIKey) {
+   if (!effectiveAnthropicKey && !effectiveOpenAIKey && !effectiveNvidiaKey) {
      if (intent === 'code' && navada.config.hfToken) {
        try {
          const r = await navada.ai.huggingface.qwen(userMessage);
@@ -617,6 +663,21 @@ async function chat(userMessage, conversationHistory = []) {
      return grokChat(userMessage, conversationHistory);
    }
  
+   // NVIDIA key — route to NVIDIA
+   if (effectiveNvidiaKey && (!effectiveAnthropicKey || modelPref?.startsWith('nvidia') || modelPref?.startsWith('llama') || modelPref?.startsWith('deepseek') || modelPref?.startsWith('mistral') || modelPref?.startsWith('gemma') || modelPref?.startsWith('nemotron'))) {
+     const { streamNvidia } = require('./commands/nvidia');
+     const nvidiaModel = config.get('nvidiaModel') || 'llama-3.3-70b';
+     sessionState.provider = 'NVIDIA';
+     sessionState.model = nvidiaModel;
+     const messages = [
+       ...conversationHistory.map(m => ({ role: m.role, content: typeof m.content === 'string' ? m.content : JSON.stringify(m.content) })),
+       { role: 'user', content: userMessage },
+     ];
+     process.stdout.write(ui.dim(' NAVADA > '));
+     const result = await streamNvidia(effectiveNvidiaKey, messages, nvidiaModel);
+     return result.content;
+   }
+
    // OpenAI key — route to OpenAI
    if (effectiveOpenAIKey && (!effectiveAnthropicKey || modelPref === 'gpt-4o' || modelPref === 'gpt-4o-mini')) {
      return openAIChat(effectiveOpenAIKey, userMessage, conversationHistory);
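The chain above routes each request to NVIDIA, OpenAI, or Anthropic depending on which keys exist and the saved model preference. The "smart routing" idea described in the package README (code queries to a code model, complex queries to a stronger model, everything else to the free tier) could be sketched as a pure function like this; the heuristics and names (`classifyQuery`, `routeModel`) are invented for illustration and are not the package's actual `detectIntent` logic:

```javascript
'use strict';

// Illustrative routing sketch: classify a query, then pick a model tier.
// Keyword lists and thresholds here are made up for the example.
function classifyQuery(message) {
  if (/\b(code|function|bug|regex|compile|refactor)\b/i.test(message)) return 'code';
  if (message.length > 400 || /\b(architecture|design|trade-?off)\b/i.test(message)) return 'complex';
  return 'general';
}

function routeModel(message) {
  const intent = classifyQuery(message);
  if (intent === 'code') return 'qwen';      // code queries -> Qwen Coder
  if (intent === 'complex') return 'claude'; // complex queries -> Claude
  return 'grok';                             // everything else -> free tier
}

console.log(routeModel('write a regex to match IPv4 addresses')); // qwen
console.log(routeModel('hello there'));                           // grok
```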
@@ -711,6 +772,16 @@ async function chat(userMessage, conversationHistory = []) {
      description: 'Run a Python script file.',
      input_schema: { type: 'object', properties: { path: { type: 'string', description: 'Path to .py file' } }, required: ['path'] },
    },
+   {
+     name: 'sandbox_run',
+     description: 'Run code in an isolated sandbox with syntax highlighting. Supports javascript, python, typescript. Returns colored output.',
+     input_schema: { type: 'object', properties: { code: { type: 'string', description: 'Code to execute' }, language: { type: 'string', description: 'javascript, python, or typescript' } }, required: ['code'] },
+   },
+   {
+     name: 'founder_info',
+     description: 'Get information about Lee Akpareva, founder of NAVADA Edge Network. Use when asked about the creator, founder, Lee, or who made NAVADA.',
+     input_schema: { type: 'object', properties: {} },
+   },
  ];
  
  const messages = [
@@ -782,6 +853,14 @@ async function executeTool(name, input) {
    case 'python_exec': return localTools.pythonExec.execute(input.code);
    case 'python_pip': return localTools.pythonPip.execute(input.package);
    case 'python_script': return localTools.pythonScript.execute(input.path);
+   case 'sandbox_run': {
+     const { runCode, displayCode, displayOutput } = require('./commands/sandbox');
+     displayCode(input.code, input.language);
+     const result = runCode(input.code, input.language);
+     displayOutput(result);
+     return result.error ? `Error (exit ${result.exitCode}): ${result.error}` : result.output;
+   }
+   case 'founder_info': return localTools.founderInfo.execute();
    default: return `Unknown tool: ${name}`;
  }
  } catch (e) {
@@ -106,24 +106,36 @@ module.exports = function(reg) {
  
  reg('model', 'Show/set default AI model', (args) => {
    if (args[0]) {
-     const valid = ['auto', 'claude', 'gpt-4o', 'gpt-4o-mini', 'qwen'];
+     const valid = ['auto', 'claude', 'gpt-4o', 'gpt-4o-mini', 'qwen', 'nvidia', 'llama-3.3-70b', 'llama-3.1-8b', 'mistral-large', 'gemma-2-27b', 'codellama-70b', 'deepseek-r1', 'phi-3-medium', 'nemotron-70b'];
      if (!valid.includes(args[0])) { console.log(ui.error(`Invalid model. Options: ${valid.join(', ')}`)); return; }
      config.setModel(args[0]);
+     // If it's an NVIDIA model name, also set it as the nvidia model
+     const nvidiaModels = ['llama-3.3-70b', 'llama-3.1-8b', 'mistral-large', 'gemma-2-27b', 'codellama-70b', 'deepseek-r1', 'phi-3-medium', 'nemotron-70b'];
+     if (nvidiaModels.includes(args[0])) config.set('nvidiaModel', args[0]);
      console.log(ui.success(`Model set to: ${args[0]}`));
    } else {
      console.log(ui.header('AI MODELS'));
      const current = config.getModel();
      console.log(ui.label('Current', current));
      console.log('');
-     console.log(ui.dim('Available:'));
-     console.log(ui.label('auto', 'Claude (Anthropic) with tool use default'));
-     console.log(ui.label('claude', 'Claude Sonnet 4 via Anthropic API'));
-     console.log(ui.label('gpt-4o', 'OpenAI GPT-4o (requires OPENAI_API_KEY)'));
-     console.log(ui.label('qwen', 'Qwen Coder 32B (FREE via HuggingFace)'));
+     console.log(ui.dim('Core providers:'));
+     console.log(ui.label('auto', 'Smart routing picks best provider per query'));
+     console.log(ui.label('claude', 'Claude Sonnet 4 (Anthropic) — full agent + tools'));
+     console.log(ui.label('gpt-4o', 'GPT-4o (OpenAI) — tool use + streaming'));
+     console.log(ui.label('qwen', 'Qwen Coder 32B (HuggingFace FREE)'));
      console.log('');
-     console.log(ui.dim('Set with: /model claude'));
+     console.log(ui.dim('NVIDIA models (FREE via build.nvidia.com):'));
+     console.log(ui.label('llama-3.3-70b', 'Meta Llama 3.3 70B'));
+     console.log(ui.label('deepseek-r1', 'DeepSeek R1'));
+     console.log(ui.label('mistral-large', 'Mistral Large 2'));
+     console.log(ui.label('codellama-70b', 'Code Llama 70B'));
+     console.log(ui.label('gemma-2-27b', 'Google Gemma 2 27B'));
+     console.log(ui.label('nemotron-70b', 'NVIDIA Nemotron 70B'));
+     console.log('');
+     console.log(ui.dim('Set: /model deepseek-r1'));
+     console.log(ui.dim('NVIDIA key: /login nvapi-your-key (free at build.nvidia.com)'));
    }
- }, { category: 'AI', subs: ['auto', 'claude', 'gpt-4o', 'gpt-4o-mini', 'qwen'] });
+ }, { category: 'AI', subs: ['auto', 'claude', 'gpt-4o', 'gpt-4o-mini', 'qwen', 'nvidia', 'llama-3.3-70b', 'deepseek-r1', 'mistral-large', 'codellama-70b', 'gemma-2-27b', 'nemotron-70b'] });
  
  reg('research', 'RAG search via MCP', async (args) => {
    const query = args.join(' ');
18
18
  require('./setup'),
19
19
  require('./system'),
20
20
  require('./learn'),
21
+ require('./sandbox'),
22
+ require('./nvidia'),
21
23
  ];
22
24
 
23
25
  function loadAll() {
@@ -0,0 +1,241 @@
+ 'use strict';
+
+ const https = require('https');
+ const ui = require('../ui');
+ const config = require('../config');
+
+ // ---------------------------------------------------------------------------
+ // NVIDIA API — free tier models via build.nvidia.com
+ // ---------------------------------------------------------------------------
+ const NVIDIA_MODELS = {
+   'llama-3.3-70b': { id: 'meta/llama-3.3-70b-instruct', name: 'Llama 3.3 70B', free: true },
+   'llama-3.1-8b': { id: 'meta/llama-3.1-8b-instruct', name: 'Llama 3.1 8B', free: true },
+   'mistral-large': { id: 'mistralai/mistral-large-2-instruct', name: 'Mistral Large 2', free: true },
+   'gemma-2-27b': { id: 'google/gemma-2-27b-it', name: 'Gemma 2 27B', free: true },
+   'codellama-70b': { id: 'meta/codellama-70b-instruct', name: 'Code Llama 70B', free: true },
+   'deepseek-r1': { id: 'deepseek-ai/deepseek-r1', name: 'DeepSeek R1', free: true },
+   'phi-3-medium': { id: 'microsoft/phi-3-medium-128k-instruct', name: 'Phi 3 Medium 128K', free: true },
+   'nemotron-70b': { id: 'nvidia/llama-3.1-nemotron-70b-instruct', name: 'Nemotron 70B', free: true },
+ };
+
+ const DEFAULT_NVIDIA_MODEL = 'llama-3.3-70b';
+
+ // ---------------------------------------------------------------------------
+ // NVIDIA API chat (streaming)
+ // ---------------------------------------------------------------------------
+ function streamNvidia(apiKey, messages, modelKey = DEFAULT_NVIDIA_MODEL) {
+   const modelInfo = NVIDIA_MODELS[modelKey] || NVIDIA_MODELS[DEFAULT_NVIDIA_MODEL];
+
+   return new Promise((resolve, reject) => {
+     const body = JSON.stringify({
+       model: modelInfo.id,
+       messages: [
+         { role: 'system', content: 'You are NAVADA, an AI infrastructure agent. Keep responses concise and technical. Use code blocks with language tags when showing code.' },
+         ...messages,
+       ],
+       max_tokens: 4096,
+       stream: true,
+       temperature: 0.7,
+     });
+
+     const req = https.request('https://integrate.api.nvidia.com/v1/chat/completions', {
+       method: 'POST',
+       headers: {
+         'Authorization': `Bearer ${apiKey}`,
+         'Content-Type': 'application/json',
+         'Content-Length': Buffer.byteLength(body),
+       },
+       timeout: 120000,
+     }, (res) => {
+       if (res.statusCode !== 200) {
+         let data = '';
+         res.on('data', c => data += c);
+         res.on('end', () => {
+           if (res.statusCode === 401) reject(new Error('Invalid NVIDIA API key. Get one free at https://build.nvidia.com'));
+           else if (res.statusCode === 429) reject(new Error('NVIDIA rate limit reached. Wait a moment and try again.'));
+           else reject(new Error(`NVIDIA API error ${res.statusCode}: ${data.slice(0, 200)}`));
+         });
+         return;
+       }
+
+       let buffer = '';
+       let fullContent = '';
+
+       res.on('data', (chunk) => {
+         buffer += chunk.toString();
+         const lines = buffer.split('\n');
+         buffer = lines.pop();
+
+         for (const line of lines) {
+           if (!line.startsWith('data: ')) continue;
+           const data = line.slice(6).trim();
+           if (data === '[DONE]') continue;
+           try {
+             const event = JSON.parse(data);
+             const delta = event.choices?.[0]?.delta?.content || '';
+             if (delta) {
+               process.stdout.write(delta);
+               fullContent += delta;
+             }
+           } catch {}
+         }
+       });
+
+       res.on('end', () => {
+         if (fullContent) process.stdout.write('\n');
+         resolve({ content: fullContent, model: modelInfo.name, streamed: true });
+       });
+     });
+
+     req.on('error', reject);
+     req.on('timeout', () => { req.destroy(); reject(new Error('NVIDIA API timeout')); });
+     req.write(body);
+     req.end();
+   });
+ }
+
+ // Non-streaming fallback
+ async function chatNvidia(apiKey, messages, modelKey = DEFAULT_NVIDIA_MODEL) {
+   const modelInfo = NVIDIA_MODELS[modelKey] || NVIDIA_MODELS[DEFAULT_NVIDIA_MODEL];
+
+   return new Promise((resolve, reject) => {
+     const body = JSON.stringify({
+       model: modelInfo.id,
+       messages: [
+         { role: 'system', content: 'You are NAVADA, an AI infrastructure agent. Keep responses concise and technical. Use code blocks with language tags when showing code.' },
+         ...messages,
+       ],
+       max_tokens: 4096,
+       temperature: 0.7,
+     });
+
+     const req = https.request('https://integrate.api.nvidia.com/v1/chat/completions', {
+       method: 'POST',
+       headers: {
+         'Authorization': `Bearer ${apiKey}`,
+         'Content-Type': 'application/json',
+         'Content-Length': Buffer.byteLength(body),
+       },
+       timeout: 120000,
+     }, (res) => {
+       let data = '';
+       res.on('data', c => data += c);
+       res.on('end', () => {
+         if (res.statusCode !== 200) {
+           reject(new Error(`NVIDIA API error ${res.statusCode}: ${data.slice(0, 200)}`));
+           return;
+         }
+         try {
+           const parsed = JSON.parse(data);
+           resolve({ content: parsed.choices?.[0]?.message?.content || '', model: modelInfo.name });
+         } catch (e) { reject(e); }
+       });
+     });
+
+     req.on('error', reject);
+     req.on('timeout', () => { req.destroy(); reject(new Error('Timeout')); });
+     req.write(body);
+     req.end();
+   });
+ }
+
+ // ---------------------------------------------------------------------------
+ // Command registration
+ // ---------------------------------------------------------------------------
+ module.exports = function(reg) {
+
+   reg('nvidia', 'NVIDIA AI models (free tier via build.nvidia.com)', async (args) => {
+     const sub = args[0];
+
+     if (!sub || sub === 'help') {
+       console.log(ui.header('NVIDIA AI'));
+       console.log(ui.dim('Free AI models via NVIDIA build.nvidia.com'));
+       console.log('');
+       console.log(ui.cmd('nvidia login <key>', 'Set your NVIDIA API key'));
+       console.log(ui.cmd('nvidia models', 'List available models'));
+       console.log(ui.cmd('nvidia model <name>', 'Set default NVIDIA model'));
+       console.log(ui.cmd('nvidia chat <msg>', 'Chat with NVIDIA model'));
+       console.log(ui.cmd('nvidia status', 'Check API connection'));
+       console.log('');
+       console.log(ui.dim('Get a free key: https://build.nvidia.com'));
+       console.log(ui.dim('Then: /nvidia login nvapi-xxxx'));
+       return;
+     }
+
+     if (sub === 'login') {
+       const key = args[1];
+       if (!key) { console.log(ui.error('Usage: /nvidia login nvapi-your-key-here')); return; }
+       if (!key.startsWith('nvapi-')) { console.log(ui.error('NVIDIA keys start with nvapi-')); return; }
+       config.set('nvidiaKey', key);
+       console.log(ui.success('NVIDIA API key saved'));
+       console.log(ui.dim('Test it: /nvidia status'));
+       return;
+     }
+
+     if (sub === 'models') {
+       console.log(ui.header('NVIDIA MODELS'));
+       const currentModel = config.get('nvidiaModel') || DEFAULT_NVIDIA_MODEL;
+       for (const [key, info] of Object.entries(NVIDIA_MODELS)) {
+         const active = key === currentModel ? ' ◄' : '';
+         console.log(ui.label(key.padEnd(18), `${info.name}${info.free ? ' (FREE)' : ''}${active}`));
+       }
+       console.log('');
+       console.log(ui.dim(`Current: ${currentModel}`));
+       console.log(ui.dim('Set: /nvidia model deepseek-r1'));
+       return;
+     }
+
+     if (sub === 'model') {
+       const model = args[1];
+       if (!model) { console.log(ui.error('Usage: /nvidia model llama-3.3-70b')); return; }
+       if (!NVIDIA_MODELS[model]) {
+         console.log(ui.error(`Unknown model: ${model}`));
+         console.log(ui.dim(`Available: ${Object.keys(NVIDIA_MODELS).join(', ')}`));
+         return;
+       }
+       config.set('nvidiaModel', model);
+       console.log(ui.success(`NVIDIA model set to: ${NVIDIA_MODELS[model].name}`));
+       return;
+     }
+
+     if (sub === 'status') {
+       const key = config.get('nvidiaKey');
+       if (!key) { console.log(ui.error('No NVIDIA key set. /nvidia login nvapi-your-key')); return; }
+       console.log(ui.dim('Testing NVIDIA API...'));
+       try {
+         const result = await chatNvidia(key, [{ role: 'user', content: 'Say "NVIDIA connected" in exactly 2 words.' }]);
+         console.log(ui.success(`NVIDIA API connected — ${result.model}`));
+         console.log(ui.dim(`Response: ${result.content}`));
+       } catch (e) {
+         console.log(ui.error(e.message));
+       }
+       return;
+     }
+
+     if (sub === 'chat') {
+       const key = config.get('nvidiaKey');
+       if (!key) { console.log(ui.error('No NVIDIA key set. /nvidia login nvapi-your-key')); return; }
+       const msg = args.slice(1).join(' ');
+       if (!msg) { console.log(ui.dim('Usage: /nvidia chat explain Docker networking')); return; }
+
+       const model = config.get('nvidiaModel') || DEFAULT_NVIDIA_MODEL;
+       process.stdout.write(ui.dim(` NAVADA [${NVIDIA_MODELS[model].name}] > `));
+       try {
+         await streamNvidia(key, [{ role: 'user', content: msg }], model);
+       } catch (e) {
+         console.log(ui.error(e.message));
+       }
+       return;
+     }
+
+     console.log(ui.dim('Unknown subcommand. Try /nvidia help'));
+
+   }, { category: 'AI', aliases: ['nv'], subs: ['login', 'models', 'model', 'chat', 'status', 'help'] });
+
+ };
+
+ // Export for agent integration
+ module.exports.streamNvidia = streamNvidia;
+ module.exports.chatNvidia = chatNvidia;
+ module.exports.NVIDIA_MODELS = NVIDIA_MODELS;
+ module.exports.DEFAULT_NVIDIA_MODEL = DEFAULT_NVIDIA_MODEL;
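`streamNvidia` above consumes OpenAI-style server-sent events: it buffers raw chunks, splits on newlines, keeps the trailing partial line for the next chunk, and extracts `choices[0].delta.content` from each `data:` line. That buffering step can be isolated as a pure function (a sketch for illustration; `parseSSEChunk` is not part of the module, which does this inline in the response handler):

```javascript
'use strict';

// Pure sketch of the SSE delta extraction used in streamNvidia above.
// Feed raw chunks in; get back content deltas plus the unconsumed buffer tail.
function parseSSEChunk(buffer, chunk) {
  buffer += chunk;
  const lines = buffer.split('\n');
  const rest = lines.pop(); // last element may be a partial line; carry it over
  const deltas = [];
  for (const line of lines) {
    if (!line.startsWith('data: ')) continue;
    const data = line.slice(6).trim();
    if (data === '[DONE]') continue;
    try {
      const event = JSON.parse(data);
      const delta = event.choices?.[0]?.delta?.content || '';
      if (delta) deltas.push(delta);
    } catch {} // ignore malformed JSON, as the module does
  }
  return { deltas, buffer: rest };
}

// A network chunk boundary can fall mid-line; the buffer bridges the gap.
let state = parseSSEChunk('', 'data: {"choices":[{"delta":{"content":"Hel"}}]}\ndata: {"choi');
state = parseSSEChunk(state.buffer, 'ces":[{"delta":{"content":"lo"}}]}\ndata: [DONE]\n');
```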
@@ -0,0 +1,323 @@
1
+ 'use strict';
2
+
3
+ const { execSync } = require('child_process');
4
+ const fs = require('fs');
5
+ const path = require('path');
6
+ const os = require('os');
7
+ const chalk = require('chalk');
8
+ const ui = require('../ui');
9
+ const config = require('../config');
10
+
11
+ // ---------------------------------------------------------------------------
12
+ // Syntax highlighter — colorizes code output for the terminal
13
+ // ---------------------------------------------------------------------------
14
+ const LANG_KEYWORDS = {
15
+ javascript: /\b(const|let|var|function|return|if|else|for|while|class|import|export|from|async|await|new|this|typeof|instanceof|try|catch|throw|switch|case|break|continue|default|yield|delete|in|of|do|void|with)\b/g,
16
+ python: /\b(def|class|return|if|elif|else|for|while|import|from|as|try|except|finally|raise|with|yield|lambda|pass|break|continue|not|and|or|is|in|True|False|None|self|print|async|await|nonlocal|global|assert|del)\b/g,
17
+ typescript: /\b(const|let|var|function|return|if|else|for|while|class|import|export|from|async|await|new|this|typeof|instanceof|try|catch|throw|switch|case|break|continue|default|yield|interface|type|enum|implements|extends|abstract|readonly|private|public|protected|static|override|declare|namespace|module|keyof|infer)\b/g,
18
+ csharp: /\b(using|namespace|class|public|private|protected|static|void|int|string|bool|float|double|var|new|return|if|else|for|foreach|while|try|catch|throw|switch|case|break|continue|async|await|override|virtual|abstract|interface|struct|enum|null|true|false|this|base|readonly|const|ref|out|in|params|yield|delegate|event|lock|checked|unchecked|fixed|sizeof|typeof|is|as|where|select|from|orderby|group|join|let|into)\b/g,
19
+ go: /\b(func|return|if|else|for|range|switch|case|break|continue|var|const|type|struct|interface|map|chan|select|defer|go|import|package|fallthrough|default|nil|true|false|make|len|cap|append|copy|delete|new|panic|recover|close|iota)\b/g,
20
+ rust: /\b(fn|let|mut|return|if|else|for|while|loop|match|struct|enum|impl|trait|pub|use|mod|crate|self|super|async|await|move|ref|where|type|const|static|unsafe|extern|dyn|macro_rules|as|in|break|continue|true|false|None|Some|Ok|Err|Box|Vec|String|Option|Result)\b/g,
21
+ };
22
+
23
+ const STRING_RE = /(["'`])(?:(?!\1|\\).|\\.)*?\1/g;
24
+ const NUMBER_RE = /\b(\d+\.?\d*(?:e[+-]?\d+)?)\b/gi;
25
+ const COMMENT_LINE_RE = /(\/\/.*$|#(?!!).*$)/gm;
26
+ const COMMENT_BLOCK_RE = /(\/\*[\s\S]*?\*\/)/g;
27
+ const FUNC_CALL_RE = /\b([a-zA-Z_]\w*)\s*\(/g;
28
+ const DECORATOR_RE = /(@\w+)/g;
29
+
30
+ function highlight(code, lang = 'javascript') {
31
+ const langKey = lang.toLowerCase().replace(/^(js|node)$/, 'javascript').replace(/^(ts)$/, 'typescript').replace(/^(py)$/, 'python').replace(/^(cs)$/, 'csharp').replace(/^(rs)$/, 'rust');
32
+
33
+ // Placeholder system to avoid double-coloring
34
+ const placeholders = [];
35
+ const ph = (styled) => { placeholders.push(styled); return `\x00PH${placeholders.length - 1}\x00`; };
36
+
37
+ let result = code;
38
+
39
+ // 1. Comments first (highest priority)
40
+ result = result.replace(COMMENT_BLOCK_RE, (m) => ph(chalk.gray.italic(m)));
41
+ result = result.replace(COMMENT_LINE_RE, (m) => ph(chalk.gray.italic(m)));
42
+
43
+ // 2. Strings
44
+ result = result.replace(STRING_RE, (m) => ph(chalk.green(m)));
45
+
46
+ // 3. Decorators
47
+ result = result.replace(DECORATOR_RE, (m) => ph(chalk.yellow(m)));
48
+
49
+ // 4. Function calls
50
+ result = result.replace(FUNC_CALL_RE, (m, name) => ph(chalk.yellow(name)) + '(');
51
+
52
+ // 5. Numbers
53
+ result = result.replace(NUMBER_RE, (m) => ph(chalk.magenta(m)));
54
+
55
+ // 6. Keywords
56
+ const keywordRe = LANG_KEYWORDS[langKey] || LANG_KEYWORDS.javascript;
57
+ result = result.replace(keywordRe, (m) => ph(chalk.cyan.bold(m)));
58
+
59
+ // Restore placeholders
60
+ result = result.replace(/\x00PH(\d+)\x00/g, (_, i) => placeholders[parseInt(i)]);
61
+
62
+ return result;
63
+ }
64
+
+ // ---------------------------------------------------------------------------
+ // Sandbox — isolated code execution with colored output
+ // ---------------------------------------------------------------------------
+ const SANDBOX_DIR = path.join(os.tmpdir(), 'navada-sandbox');
+
+ function ensureSandbox() {
+ if (!fs.existsSync(SANDBOX_DIR)) fs.mkdirSync(SANDBOX_DIR, { recursive: true });
+ }
+
+ function detectLang(code, lang) {
+ if (lang) return lang.toLowerCase();
+ if (/\bdef\s+\w+|import\s+\w+|print\s*\(/.test(code)) return 'python';
+ if (/\bfunc\s+\w+|package\s+\w+|fmt\./.test(code)) return 'go';
+ if (/\bfn\s+\w+|let\s+mut\b|use\s+std/.test(code)) return 'rust';
+ if (/\busing\s+System|namespace\s+|Console\./.test(code)) return 'csharp';
+ return 'javascript';
+ }
+
+ function runCode(code, lang) {
+ ensureSandbox();
+ const detected = detectLang(code, lang);
+
+ let file, cmd;
+ switch (detected) {
+ case 'python':
+ case 'py':
+ file = path.join(SANDBOX_DIR, `run_${Date.now()}.py`);
+ fs.writeFileSync(file, code);
+ cmd = `${process.platform === 'win32' ? 'python' : 'python3'} "${file}"`;
+ break;
+
+ case 'javascript':
+ case 'js':
+ case 'node':
+ file = path.join(SANDBOX_DIR, `run_${Date.now()}.js`);
+ fs.writeFileSync(file, code);
+ cmd = `node "${file}"`;
+ break;
+
+ case 'typescript':
+ case 'ts':
+ file = path.join(SANDBOX_DIR, `run_${Date.now()}.ts`);
+ fs.writeFileSync(file, code);
+ cmd = `npx tsx "${file}" 2>/dev/null || npx ts-node "${file}"`;
+ break;
+
+ default:
+ return { lang: detected, error: `Language "${detected}" not supported for execution. Supported: javascript, python, typescript` };
+ }
+
+ try {
+ const output = execSync(cmd, {
+ timeout: 30000,
+ encoding: 'utf-8',
+ cwd: SANDBOX_DIR,
+ stdio: ['pipe', 'pipe', 'pipe'],
+ env: { ...process.env, NODE_NO_WARNINGS: '1' },
+ });
+ // Clean up
+ try { fs.unlinkSync(file); } catch {}
+ return { lang: detected, output: output.trim(), exitCode: 0 };
+ } catch (e) {
+ try { fs.unlinkSync(file); } catch {}
+ return { lang: detected, output: (e.stdout || '').trim(), error: (e.stderr || e.message || '').trim(), exitCode: e.status || 1 };
+ }
+ }
+
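One caveat worth noting about `detectLang`: its heuristics are checked in a fixed order, with Python first, so a JavaScript ES-module `import` line is misdetected as Python unless an explicit `lang` argument is supplied. A self-contained copy of the function (taken verbatim from the diff above) demonstrating this:

```javascript
// Verbatim copy of detectLang from the diff, to illustrate check ordering.
function detectLang(code, lang) {
  if (lang) return lang.toLowerCase();
  if (/\bdef\s+\w+|import\s+\w+|print\s*\(/.test(code)) return 'python';
  if (/\bfunc\s+\w+|package\s+\w+|fmt\./.test(code)) return 'go';
  if (/\bfn\s+\w+|let\s+mut\b|use\s+std/.test(code)) return 'rust';
  if (/\busing\s+System|namespace\s+|Console\./.test(code)) return 'csharp';
  return 'javascript';
}

console.log(detectLang('def greet():\n    pass'));          // python
console.log(detectLang('import fs from "node:fs";'));        // python (misdetected)
console.log(detectLang('import fs from "node:fs";', 'js'));  // js (explicit lang wins)
```

This is why the `/run` command accepts a language prefix (`/run py ...`) rather than relying on detection alone.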
+ // ---------------------------------------------------------------------------
+ // Display — code + output with colors
+ // ---------------------------------------------------------------------------
+ function displayCode(code, lang) {
+ const detected = detectLang(code, lang);
+ const lines = code.split('\n');
+ const gutterWidth = String(lines.length).length;
+
+ console.log('');
+ console.log(ui.dim(`─── ${detected.toUpperCase()} ${'─'.repeat(40)}`));
+ console.log('');
+
+ lines.forEach((line, i) => {
+ const num = chalk.gray(String(i + 1).padStart(gutterWidth) + ' │ ');
+ const colored = highlight(line, detected);
+ console.log(' ' + num + colored);
+ });
+
+ console.log('');
+ }
+
+ function displayOutput(result) {
+ if (result.error && !result.output) {
+ console.log(ui.dim('─── ERROR ' + '─'.repeat(38)));
+ console.log('');
+ result.error.split('\n').forEach(line => {
+ console.log(' ' + chalk.red(line));
+ });
+ } else {
+ console.log(ui.dim('─── OUTPUT ' + '─'.repeat(37)));
+ console.log('');
+ if (result.output) {
+ result.output.split('\n').forEach(line => {
+ console.log(' ' + chalk.white(line));
+ });
+ }
+ if (result.error) {
+ console.log('');
+ console.log(ui.dim('─── STDERR ' + '─'.repeat(37)));
+ result.error.split('\n').forEach(line => {
+ console.log(' ' + chalk.yellow(line));
+ });
+ }
+ }
+ console.log('');
+ const status = result.exitCode === 0 ? chalk.green('✓ exit 0') : chalk.red(`✗ exit ${result.exitCode}`);
+ console.log(' ' + status);
+ console.log('');
+ }
+
+ // ---------------------------------------------------------------------------
+ // Command registration
+ // ---------------------------------------------------------------------------
+ module.exports = function(reg) {
+
+ reg('sandbox', 'Run code in a sandbox with syntax highlighting', async (args) => {
+ if (!args.length) {
+ console.log(ui.header('CODING SANDBOX'));
+ console.log(ui.dim('Run code with syntax-highlighted output'));
+ console.log('');
+ console.log(ui.cmd('sandbox run <lang>', 'Enter code to run (opens multi-line input)'));
+ console.log(ui.cmd('sandbox exec <file>', 'Run a file in the sandbox'));
+ console.log(ui.cmd('sandbox highlight <file>', 'Syntax-highlight a file (no execution)'));
+ console.log(ui.cmd('sandbox demo', 'Run a demo to test colors'));
+ console.log('');
+ console.log(ui.dim('Languages: javascript, python, typescript'));
+ console.log(ui.dim('Shorthand: /run <code> — quick-run a one-liner'));
+ return;
+ }
+
+ const sub = args[0];
+
+ if (sub === 'demo') {
+ // Demo all language highlighting
+ const demos = {
+ javascript: `const greet = (name) => {
+ // Welcome message
+ const time = new Date().getHours();
+ if (time < 12) return \`Good morning, \${name}!\`;
+ return \`Hello, \${name}!\`;
+ };
+
+ console.log(greet("NAVADA"));
+ console.log("Version:", 3.02);`,
+
+ python: `def fibonacci(n):
+ """Generate fibonacci sequence"""
+ a, b = 0, 1
+ result = []
+ for _ in range(n):
+ result.append(a)
+ a, b = b, a + b
+ return result
+
+ # Print first 10 numbers
+ print(fibonacci(10))
+ print(f"Sum: {sum(fibonacci(10))}")`,
+ };
+
+ for (const [lang, code] of Object.entries(demos)) {
+ displayCode(code, lang);
+ const result = runCode(code, lang);
+ displayOutput(result);
+ }
+ return;
+ }
+
+ if (sub === 'highlight' && args[1]) {
+ const filePath = path.resolve(args.slice(1).join(' '));
+ if (!fs.existsSync(filePath)) { console.log(ui.error(`File not found: ${filePath}`)); return; }
+ const code = fs.readFileSync(filePath, 'utf-8');
+ const ext = path.extname(filePath).slice(1);
+ displayCode(code, ext);
+ return;
+ }
+
+ if (sub === 'exec' && args[1]) {
+ const filePath = path.resolve(args.slice(1).join(' '));
+ if (!fs.existsSync(filePath)) { console.log(ui.error(`File not found: ${filePath}`)); return; }
+ const code = fs.readFileSync(filePath, 'utf-8');
+ const ext = path.extname(filePath).slice(1);
+ displayCode(code, ext);
+ const result = runCode(code, ext);
+ displayOutput(result);
+ return;
+ }
+
+ if (sub === 'run') {
+ const lang = args[1] || 'javascript';
+ console.log(ui.dim(`Enter ${lang} code (type END on a new line to execute):`));
+ console.log('');
+
+ // Collect multi-line input
+ const readline = require('readline');
+ const rl = readline.createInterface({ input: process.stdin, output: process.stdout, prompt: chalk.gray(' > ') });
+ const lines = [];
+
+ return new Promise((resolve) => {
+ rl.prompt();
+ rl.on('line', (line) => {
+ if (line.trim() === 'END') {
+ rl.close();
+ const code = lines.join('\n');
+ displayCode(code, lang);
+ const result = runCode(code, lang);
+ displayOutput(result);
+ resolve();
+ } else {
+ lines.push(line);
+ rl.prompt();
+ }
+ });
+ });
+ }
+
+ // Quick run — rest of args is code
+ const code = args.join(' ');
+ const lang = detectLang(code);
+ displayCode(code, lang);
+ const result = runCode(code, lang);
+ displayOutput(result);
+
+ }, { category: 'CODE', aliases: ['sb'], subs: ['run', 'exec', 'highlight', 'demo'] });
+
+ // Quick run shortcut
+ reg('run', 'Quick-run code in the sandbox', async (args) => {
+ if (!args.length) { console.log(ui.dim('Usage: /run console.log("hello") or /run py print("hello")')); return; }
+
+ // Check for language prefix
+ const langShorts = { js: 'javascript', py: 'python', ts: 'typescript', node: 'javascript' };
+ let lang = null;
+ let code = args.join(' ');
+
+ if (langShorts[args[0]]) {
+ lang = langShorts[args[0]];
+ code = args.slice(1).join(' ');
+ }
+
+ const detected = detectLang(code, lang);
+ displayCode(code, detected);
+ const result = runCode(code, detected);
+ displayOutput(result);
+
+ }, { category: 'CODE' });
+
+ };
+
+ // Export for agent tool use
+ module.exports.highlight = highlight;
+ module.exports.runCode = runCode;
+ module.exports.displayCode = displayCode;
+ module.exports.displayOutput = displayOutput;
@@ -65,10 +65,15 @@ module.exports = function(reg) {
  }, { category: 'SYSTEM' });
 
  // --- /login ---
- reg('login', 'Set API key (Anthropic, OpenAI, NAVADA Edge, or HuggingFace)', (args) => {
+ reg('login', 'Set API key (Anthropic, OpenAI, NVIDIA, NAVADA Edge, or HuggingFace)', (args) => {
  if (!args[0]) {
  console.log(ui.dim('Usage: /login <api-key>'));
- console.log(ui.dim('Accepts: sk-ant-... (Anthropic) | sk-... (OpenAI) | nv_edge_... (NAVADA) | hf_... (HuggingFace)'));
+ console.log(ui.dim('Accepts:'));
+ console.log(ui.dim(' sk-ant-... Anthropic (full agent + tool use)'));
+ console.log(ui.dim(' sk-... OpenAI (GPT-4o)'));
+ console.log(ui.dim(' nvapi-... NVIDIA (Llama, Mistral, DeepSeek — FREE)'));
+ console.log(ui.dim(' nv_edge_... NAVADA Edge (MCP + Dashboard)'));
+ console.log(ui.dim(' hf_... HuggingFace (Qwen Coder — FREE)'));
  return;
  }
  const key = args[0].trim();
@@ -77,6 +82,10 @@ module.exports = function(reg) {
  if (key.startsWith('sk-ant')) {
  config.set('anthropicKey', key);
  console.log(ui.success('Anthropic key saved — full agent mode with tool use enabled'));
+ } else if (key.startsWith('nvapi-')) {
+ config.set('nvidiaKey', key);
+ console.log(ui.success('NVIDIA key saved — Llama, Mistral, DeepSeek, Gemma enabled'));
+ console.log(ui.dim('/nvidia models to see all available models'));
  } else if (key.startsWith('hf_')) {
  config.set('hfToken', key);
  navada.init({ hfToken: key });
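The `/login` hunk above dispatches on key prefix, and branch order matters: `sk-ant` must be tested before any generic `sk-` check or Anthropic keys would be classified as OpenAI keys. A standalone sketch of that dispatch, returning provider names only (the real command also persists the key via `config.set`; the generic `sk-` OpenAI branch is not visible in this hunk, so its placement at the end here is an assumption):

```javascript
// Hypothetical standalone version of the /login prefix dispatch.
// Specific prefixes are tested before the generic 'sk-' catch-all.
function detectProvider(key) {
  const k = key.trim();
  if (k.startsWith('sk-ant')) return 'anthropic';
  if (k.startsWith('nvapi-')) return 'nvidia';
  if (k.startsWith('nv_edge_')) return 'navada';
  if (k.startsWith('hf_')) return 'huggingface';
  if (k.startsWith('sk-')) return 'openai';   // assumed fallback branch
  return null;                                 // unrecognized key format
}

console.log(detectProvider('sk-ant-abc123'));  // anthropic, not openai
console.log(detectProvider('nvapi-xyz'));      // nvidia
```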
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "navada-edge-cli",
- "version": "3.0.2",
+ "version": "3.1.0",
  "description": "Interactive CLI for the NAVADA Edge Network — explore nodes, agents, Cloudflare, AI, Docker, and MCP from your terminal",
  "main": "lib/cli.js",
  "bin": {