@pensar/apex 0.0.1 → 0.0.2

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/README.md CHANGED
@@ -6,13 +6,12 @@
 
   **Pensar Apex** is an AI-powered penetration testing CLI tool that enables you to use an AI agent to perform comprehensive black box testing.
 
- ## Quick Start
+ ## Installation
 
   ### Prerequisites
 
- - **Bun** v1.0+ (required - [install from bun.sh](https://bun.sh))
  - **nmap** (required for network scanning)
- - **Anthropic API Key** (get one at [console.anthropic.com](https://console.anthropic.com/))
+ - **API Key** for your chosen AI provider
 
  #### Install nmap
 
@@ -35,117 +34,64 @@ sudo dnf install -y nmap
  ```
 
  Windows:
-
  Download installer from `https://nmap.org/download.html` and ensure `nmap` is on your PATH.
 
- ### Installation
-
- #### Install Bun First
-
- If you don't have Bun installed:
+ ### Install Apex
 
  ```bash
- curl -fsSL https://bun.sh/install | bash
- ```
-
- #### Option 1: Global Installation (Recommended)
-
- ```bash
- # Install globally with bun
- bun install -g @pensar/apex
-
- # Or with npm (still requires bun to run)
  npm install -g @pensar/apex
  ```
 
- #### Option 2: Local Development
-
- ```bash
- # Clone the repository
- git clone https://github.com/your-org/apex.git
- cd apex
-
- # Install dependencies
- npm install
- # or
- bun install
-
- # Build the project
- npm run build
- # or
- bun run build
- ```
-
  ### Configuration
 
- Set your Anthropic API key as an environment variable:
+ Set your AI provider API key as an environment variable:
 
  ```bash
  export ANTHROPIC_API_KEY="your-api-key-here"
- ```
-
- To make it permanent, add it to your shell profile (`~/.bashrc`, `~/.zshrc`, etc.):
-
- ```bash
- echo 'export ANTHROPIC_API_KEY="your-api-key-here"' >> ~/.zshrc
- source ~/.zshrc
+ # or for other providers:
+ # export OPENAI_API_KEY="your-api-key-here"
+ # export AWS_ACCESS_KEY_ID="..." and AWS_SECRET_ACCESS_KEY="..."
  ```
 
  ## Usage
 
- ### Running Pensar
-
- If installed globally:
+ Run Apex:
 
  ```bash
  pensar
  ```
 
- If running locally:
+ ## AI Provider Support
 
- ```bash
- npm start
- # or
- bun start
- ```
+ Apex supports **OpenAI**, **Anthropic**, **AWS Bedrock**, and **vLLM** (local models). **Anthropic models provide the best performance** and are recommended for optimal results.
 
- ### Run in Container (Kali-based)
+ ## Kali Linux Container (Recommended)
 
- If you prefer a preconfigured environment with `nmap` and common pentest tools, use the included container setup.
+ For **best performance**, run Apex in the included Kali Linux container with preconfigured pentest tools:
 
  ```bash
  cd container
- cp env.example .env # add your ANTHROPIC_API_KEY and others
+ cp env.example .env # add your API keys
  docker compose up --build -d
  docker compose exec kali-apex bash
  ```
 
- Inside the container:
+ Inside the container, run:
 
  ```bash
- cd ~/app
- bun install
- bun run build
- node build/index.js
- # or
  pensar
  ```
 
- Notes:
+ **Note:** On Linux hosts, consider using `network_mode: host` in `docker-compose.yml` for comprehensive network scanning.
 
- - The host repo is mounted into the container at `/home/ctf/app`.
- - On Linux, consider `network_mode: host` in `container/docker-compose.yml` for comprehensive scanning.
+ ## vLLM Local Model Support
 
- ### vLLM (Local model) Support
+ To use a local vLLM server:
 
- You can run against a local vLLM server by setting a custom model and a base URL:
-
- 1. Set `LOCAL_MODEL_URL` to your vLLM HTTP endpoint (e.g., `http://localhost:8000/v1`):
+ 1. Set the vLLM endpoint:
 
  ```bash
  export LOCAL_MODEL_URL="http://localhost:8000/v1"
  ```
 
- 2. In the Models screen, enter your model name in the "Custom local model (vLLM)" input and press Enter. This will select a local provider model with the ID you entered.
-
- That’s it—no other keys required for local. The app will route via OpenAI-compatible API to `LOCAL_MODEL_URL` when the selected provider is `local`.
+ 2. In the Apex Models screen, enter your model name in the "Custom local model (vLLM)" input.
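The removed README text mentions that local models are reached via an OpenAI-compatible API at `LOCAL_MODEL_URL`. As a hedged illustration of what that routing implies (the function name `buildChatRequest` and the request shape are assumptions for this sketch, not Apex internals), a client would build requests against the standard `/chat/completions` path of that base URL:

```javascript
// Illustrative sketch only: constructs an OpenAI-compatible chat request
// against a vLLM base URL such as LOCAL_MODEL_URL. Apex's real routing
// code is not part of this diff.
function buildChatRequest(baseUrl, model, prompt) {
  return {
    // strip a trailing slash so the path joins cleanly
    url: `${baseUrl.replace(/\/$/, "")}/chat/completions`,
    body: {
      model,
      messages: [{ role: "user", content: prompt }],
    },
  };
}

const req = buildChatRequest("http://localhost:8000/v1", "my-local-model", "hello");
console.log(req.url); // http://localhost:8000/v1/chat/completions
```

Because vLLM exposes the standard OpenAI HTTP surface, any model name entered in the "Custom local model (vLLM)" input is simply passed through as the `model` field.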
package/bin/pensar.js CHANGED
@@ -4,7 +4,9 @@
   * Pensar - AI-Powered Penetration Testing CLI
   *
   * This is the main entry point for the Pensar CLI tool.
- * It launches the OpenTUI-based terminal interface.
+ * It supports:
+ * - Default (no args): Launches the OpenTUI-based terminal interface
+ * - benchmark command: Runs the benchmark CLI
   */
 
  import { fileURLToPath } from "url";
@@ -13,8 +15,56 @@ import { dirname, join } from "path";
  const __filename = fileURLToPath(import.meta.url);
  const __dirname = dirname(__filename);
 
- // Path to the main application
- const appPath = join(__dirname, "..", "build", "index.js");
+ // Get command-line arguments (skip node/bun and script path)
+ const args = process.argv.slice(2);
+ const command = args[0];
 
- // Import and run the application directly with Bun
- await import(appPath);
+ // Handle different commands
+ if (command === "benchmark") {
+   // Run benchmark CLI
+   const benchmarkPath = join(__dirname, "..", "build", "benchmark.js");
+
+   // Remove "benchmark" from args and pass the rest to benchmark script
+   process.argv = [process.argv[0], benchmarkPath, ...args.slice(1)];
+
+   // Import and run benchmark
+   await import(benchmarkPath);
+ } else if (command === "--help" || command === "-h") {
+   // Show help
+   console.log("Pensar - AI-Powered Penetration Testing CLI");
+   console.log();
+   console.log("Usage:");
+   console.log("  pensar             Launch the TUI (Terminal User Interface)");
+   console.log("  pensar benchmark   Run the benchmark CLI");
+   console.log();
+   console.log("Options:");
+   console.log("  -h, --help         Show this help message");
+   console.log();
+   console.log("Benchmark Usage:");
+   console.log("  pensar benchmark <repo-path> [options] [branch1 branch2 ...]");
+   console.log();
+   console.log("Benchmark Options:");
+   console.log("  --all-branches     Test all branches in the repository");
+   console.log("  --limit <number>   Limit the number of branches to test");
+   console.log("  --skip <number>    Skip the first N branches");
+   console.log(
+     "  --model <model>    Specify the AI model to use (default: claude-sonnet-4-5)"
+   );
+   console.log();
+   console.log("Examples:");
+   console.log("  pensar");
+   console.log("  pensar benchmark /path/to/vulnerable-app");
+   console.log("  pensar benchmark /path/to/app main develop");
+   console.log("  pensar benchmark /path/to/app --all-branches --limit 3");
+   console.log("  pensar benchmark /path/to/app --model gpt-4o");
+ } else if (args.length === 0) {
+   // No command specified, run the TUI
+   const appPath = join(__dirname, "..", "build", "index.js");
+   await import(appPath);
+ } else {
+   // Unknown command
+   console.error(`Error: Unknown command '${command}'`);
+   console.error();
+   console.error("Run 'pensar --help' for usage information");
+   process.exit(1);
+ }
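The pensar.js changes above replace an unconditional TUI launch with a first-argument dispatch. The branching logic can be sketched as a pure function (the name `dispatch` and the returned action objects are illustrative, not from the package; the real entry point performs the imports and exits directly):

```javascript
// Sketch of the dispatch pattern introduced in bin/pensar.js:
// the first CLI argument selects a mode, mirroring the if/else chain above.
function dispatch(args) {
  const command = args[0];
  if (command === "benchmark") {
    // pensar.js rewrites process.argv and imports build/benchmark.js here
    return { action: "benchmark", rest: args.slice(1) };
  } else if (command === "--help" || command === "-h") {
    return { action: "help" };
  } else if (args.length === 0) {
    // No command: pensar.js imports build/index.js (the TUI)
    return { action: "tui" };
  }
  // Anything else is rejected with exit code 1
  return { action: "error", message: `Unknown command '${command}'` };
}

console.log(dispatch([]).action);                     // tui
console.log(dispatch(["benchmark", "/repo"]).action); // benchmark
console.log(dispatch(["bogus"]).action);              // error
```

Note that the unknown-command branch is only reachable for non-empty, non-benchmark, non-help arguments, since the empty-args case is checked first among the remaining branches.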