auto-api-discovery 1.0.0 → 1.0.3

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/LICENSE CHANGED
@@ -1,21 +1,21 @@
- MIT License
-
- Copyright (c) 2026 Anooj Shete
-
- Permission is hereby granted, free of charge, to any person obtaining a copy
- of this software and associated documentation files (the "Software"), to deal
- in the Software without restriction, including without limitation the rights
- to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
- copies of the Software, and to permit persons to whom the Software is
- furnished to do so, subject to the following conditions:
-
- The above copyright notice and this permission notice shall be included in all
- copies or substantial portions of the Software.
-
- THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
- IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
- FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
- AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
- LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
- OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
- SOFTWARE.
+ MIT License
+
+ Copyright (c) 2026 Anooj Shete
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
package/README.md CHANGED
@@ -1,79 +1,52 @@
- # 🕸️ ApiGen
+ # ApiGen (auto-api-discovery)

- > **Automated API Discovery System with HITL Authentication**
+ [![NPM Version](https://img.shields.io/npm/v/auto-api-discovery)](https://www.npmjs.com/package/auto-api-discovery)
+ [![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](https://opensource.org/licenses/MIT)

- ![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)
- ![TypeScript](https://img.shields.io/badge/TypeScript-007ACC?logo=typescript&logoColor=white)
- ![Node.js](https://img.shields.io/badge/Node.js-43853D?logo=node.js&logoColor=white)
- ![Playwright](https://img.shields.io/badge/Playwright-2EAD33?logo=playwright&logoColor=white)
+ A CLI tool for automated API discovery. ApiGen intercepts browser network traffic (XHR, Fetch, GraphQL) to automatically build OpenAPI 3.0 specifications.

- ---
+ ## Installation

- ## 🚀 What is it?
+ Install the package globally via npm:

- Modern web applications often rely on undocumented, "hidden" internal APIs. Reverse-engineering these endpoints manually by staring at the network tab in DevTools is a tedious, error-prone, and highly time-consuming process.
-
- **ApiGen** solves this pain point by automating the discovery and documentation of these hidden APIs. It seamlessly intercepts network traffic (XHR, Fetch, GraphQL), gracefully handles complex authentication barriers via a Human-in-the-Loop (HITL) system, recursively maps internal paths automatically via headless spidering, and outputs a strict, structurally sound OpenAPI 3.0 representation.
-
- ---
-
- ## 🏗️ Architecture
-
- ApiGen utilizes a **Hybrid API Discovery Approach** consisting of:
-
- 1. **Passive Recording (Capture Mode):** Uses a headful instance of Playwright. You navigate a target web application as a real user—bypassing CAPTCHAs, 2FA, and complex login flows. Under the hood, ApiGen intercepts all network requests/responses and logs them securely into a local SQLite database. Upon closing the browser, it actively exports your authenticated session state into `.apigen-session.json`.
- 2. **Auto-Spidering (Crawl Mode):** Takes the previously captured authenticated session state and launches a headless Playwright Breadth-First-Search (BFS) spider. It organically navigates the application just like an authenticated user would, discovering and hitting protected routes, intelligently avoiding generic WAF traps using randomized latency offsets, and feeding new underlying API traffic deeply into the local SQLite store.
- 3. **OpenAPI Generation (Export):** A local robust schema inference engine processes thousands of SQLite records. It aggressively deduplicates dynamic URLs (folding Object IDs and UUIDs into neat path parameters), accurately infers nested JSON payload structures recursively, and compiles them to a strict, standard OpenAPI 3.0 json specification document.
-
- ---
-
- ## ⚙️ Tech Stack
-
- - **[Node.js](https://nodejs.org/en/) & [TypeScript](https://www.typescriptlang.org/)** - For robust type-safe runtime execution.
- - **[Playwright](https://playwright.dev/)** - For complete DOM navigation, session persistence, and native underlying browser CDP network interception workflows.
- - **[Better-SQLite3](https://github.com/WiseLibs/better-sqlite3)** - Highly scalable, local WAL-mode database used as the high-throughput primary storage layer.
- - **[Commander.js](https://github.com/tj/commander.js/)** - For building the intuitive Command Line Interface.
- - **[Chalk](https://github.com/chalk/chalk)** - For beautiful and clear terminal logging feedback.
-
- ---
-
- ## 🛠️ Getting Started
+ ```bash
+ npm install -g auto-api-discovery
+ ```

- ### Installation
+ *(Note: The `postinstall` script will automatically download the Chromium binaries required by Playwright.)*

- Clone the repository and install dependencies:
+ Alternatively, you can run it directly via `npx`:

  ```bash
- git clone https://github.com/anoojshete/auto-api-discovery.git
- cd auto-api-discovery
- npm install
- npx playwright install chromium
- npm run build
+ npx auto-api-discovery capture https://example.com
  ```

- *(Note: The project leverages `npm run apigen` as the execution entrypoint hooked to `ts-node src/index.ts`)*
+ ## Usage

- ### Usage
+ Once installed, the `apigen` CLI becomes available.

- #### 1. Capture Mode
- Boot into target application interactively, bypass authentications, and capture underlying APIs. When you close the browser, your cookies are saved securely to your local working directory.
+ ### 1. Capture API Traffic
+ Launch an interactive browser. You can manually log in, navigate the app, and bypass captchas. ApiGen intercepts the underlying API requests and saves them to a local SQLite database.

  ```bash
- npm run apigen capture https://example.com
+ apigen capture https://example.com
  ```

- #### 2. Crawl Mode
- Trigger the automated, authenticated headless spider to deeply map internal features and capture the underlying APIs organically mapping to those components.
+ ### 2. Crawl Connected Pages (Auto-Spidering)
+ Use your previously captured authenticated session to automatically run a headless crawler. It discovers and logs additional API endpoints in the background.

  ```bash
- # Options: -d / --depth, -p / --pages
- npm run apigen crawl https://example.com --depth 3 --pages 50
+ apigen crawl https://example.com --depth 3 --pages 50
  ```

- #### 3. Export OpenAPI Schema
- Transform your densely populated local SQLite database into a unified, natively grouped and inferred OpenAPI 3.0 Document.
+ ### 3. Export OpenAPI Schema
+ Convert the recorded API traffic into a unified OpenAPI 3.0 specification document. Dynamic IDs and UUIDs are automatically folded into path parameters.

  ```bash
- # Options: -b / --base-url
- npm run apigen export ./openapi.json --base-url https://api.example.com
+ apigen export ./openapi.json --base-url https://api.example.com
  ```
+
+ ## How It Works
+ - Uses **Playwright** to proxy network requests and manage sessions.
+ - Stores metadata and traffic locally using **Better-SQLite3**.
+ - Automatically infers JSON payload structures and route schemas.
package/dist/crawler.js CHANGED
@@ -51,7 +51,6 @@ async function startCrawler(targetUrl, maxDepth = 2, maxPages = 50) {
  try {
  browser = await playwright_1.chromium.launch({ headless: true });
  const context = await browser.newContext();
- // Load cookies if available to run fully authenticated
  if (fs.existsSync(SESSION_FILE)) {
  try {
  const cookies = JSON.parse(fs.readFileSync(SESSION_FILE, 'utf-8'));
@@ -63,7 +62,6 @@ async function startCrawler(targetUrl, maxDepth = 2, maxPages = 50) {
  }
  }
  const page = await context.newPage();
- // Attach the EXACT same interceptor from Milestone 1
  (0, interceptor_1.attachInterceptor)(page);
  let parsedTargetUrl;
  try {
@@ -84,7 +82,6 @@ async function startCrawler(targetUrl, maxDepth = 2, maxPages = 50) {
  if (!current)
  break;
  const { url: currentUrl, depth } = current;
- // Normalize URL (strip pure hash fragments to avoid duplicates)
  let normalizedUrl = currentUrl;
  try {
  const pureUrl = new URL(currentUrl);
@@ -99,28 +96,24 @@ async function startCrawler(targetUrl, maxDepth = 2, maxPages = 50) {
  try {
  await page.goto(normalizedUrl, { waitUntil: 'domcontentloaded', timeout: 15000 });
  pagesProcessed++;
- // Random delay to avoid aggressive WAF blocks
  await randomDelay(500, 1500);
  if (depth < maxDepth) {
- // Extract all <a> tags and map their absolute URLs
  const links = await page.$$eval('a', anchors => anchors.map(a => a.href));
  for (const link of links) {
  if (!link)
  continue;
  try {
  const parsedLink = new URL(link);
- // Filter out external links strictly based on target domain boundaries
  if (parsedLink.hostname === domain || parsedLink.hostname.endsWith(`.${domain}`)) {
  parsedLink.hash = '';
  const nextUrl = parsedLink.toString();
- // Add new links to queue
  if (!visited.has(nextUrl)) {
+ visited.add(nextUrl);
  queue.push({ url: nextUrl, depth: depth + 1 });
  }
  }
  }
  catch (err) {
- // Ignore invalid or weird internal hrefs (`javascript:`, etc)
  }
  }
  }
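
Aside from the removed comments, the one behavioral change in `dist/crawler.js` is the added `visited.add(nextUrl)` line: a link is now marked as visited at the moment it is enqueued, so a URL discovered on several pages is only pushed onto the BFS queue once. A minimal sketch of that dedup pattern (a simplified stand-in, not the package's exact crawler code):

```js
// Illustrative sketch of the enqueue-time dedup added in 1.0.3
// (simplified stand-in, not the package's exact crawler code).
const visited = new Set();
const queue = [{ url: 'https://example.com/', depth: 0 }];

function enqueueLinks(links, depth, maxDepth) {
  if (depth >= maxDepth) return;
  for (const link of links) {
    if (!link) continue;
    try {
      const parsed = new URL(link);
      parsed.hash = ''; // fold #fragment variants into one queue entry
      const nextUrl = parsed.toString();
      if (!visited.has(nextUrl)) {
        visited.add(nextUrl); // the new line: mark at enqueue time so duplicates are never queued
        queue.push({ url: nextUrl, depth: depth + 1 });
      }
    } catch {
      // ignore javascript:, mailto:, and other non-URL hrefs
    }
  }
}

enqueueLinks(['https://example.com/a', 'https://example.com/a#top'], 0, 2);
console.log(queue.length); // 2: the seed plus one deduplicated link
```
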
package/dist/db.js CHANGED
@@ -7,11 +7,9 @@ exports.insertEndpoint = insertEndpoint;
  exports.getAllEndpoints = getAllEndpoints;
  const better_sqlite3_1 = __importDefault(require("better-sqlite3"));
  const path_1 = __importDefault(require("path"));
- // Initialize db
  const dbPath = path_1.default.resolve(process.cwd(), 'apigen.db');
  const db = new better_sqlite3_1.default(dbPath);
  db.pragma('journal_mode = WAL');
- // Create table
  db.exec(`
  CREATE TABLE IF NOT EXISTS endpoints (
  id TEXT PRIMARY KEY,
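
For context, `dist/db.js` opens a single better-sqlite3 database (`apigen.db` in the working directory) in WAL mode and creates the `endpoints` table at startup; the hunk above is truncated after the `id` column, and only comments were removed in this release. A minimal sketch of that storage pattern, with every column beyond `id` assumed purely for illustration:

```js
// Minimal sketch of the WAL-mode SQLite storage layer shown above.
// Columns beyond `id` are assumptions; the real CREATE TABLE is truncated in this hunk.
const Database = require('better-sqlite3');
const path = require('path');

const db = new Database(path.resolve(process.cwd(), 'apigen.db'));
db.pragma('journal_mode = WAL'); // write-ahead logging: reads do not block the capture writes

db.exec(`
  CREATE TABLE IF NOT EXISTS endpoints (
    id TEXT PRIMARY KEY,
    method TEXT,   -- assumed column
    url TEXT       -- assumed column
  )
`);

// Prepared-statement insert, the usual better-sqlite3 pattern for a hot write path.
const insert = db.prepare('INSERT OR REPLACE INTO endpoints (id, method, url) VALUES (?, ?, ?)');
insert.run('demo-id', 'GET', 'https://example.com/api/items');
```
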
package/dist/index.js CHANGED
@@ -66,7 +66,6 @@ program
  await page.goto(url, { waitUntil: 'domcontentloaded' });
  console.log(chalk_1.default.green('Navigation complete. Intercepting API traffic...'));
  console.log(chalk_1.default.gray('Terminal output shows real-time capture. Close the browser window to exit.'));
- // Save cookies safely in a loop to guarantee they are captured before exit
  const sessionFile = path.resolve(process.cwd(), '.apigen-session.json');
  let isRunning = true;
  browser.on('disconnected', () => {
@@ -74,14 +73,12 @@ program
  console.log(chalk_1.default.yellow('\nBrowser closed. Session saved. Exiting apigen gracefully...'));
  process.exit(0);
  });
- // Periodically sync the cookies securely to avoid missing them if context is torn down quickly
  while (isRunning) {
  try {
  const cookies = await context.cookies();
  fs.writeFileSync(sessionFile, JSON.stringify(cookies, null, 2), 'utf-8');
  }
  catch (e) {
- // Might hit target closed error silently during exit
  }
  await new Promise(resolve => setTimeout(resolve, 2000));
  }
@@ -103,7 +100,6 @@ program
  process.exit(0);
  }
  console.log(chalk_1.default.blue(`Found ${endpoints.length} raw endpoints. Generating schema map...`));
- // Process URLs and inference schemas
  const schemaMap = (0, schema_engine_1.generateSchemaMap)(endpoints);
  const finalMap = Object.values(schemaMap);
  console.log(chalk_1.default.green(`Folded into ${finalMap.length} unique routes.`));
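
These `dist/index.js` hunks only remove comments; the capture command's behavior is unchanged: the process exits when the browser disconnects, and while it is open the current cookies are rewritten to `.apigen-session.json` every two seconds. A stripped-down sketch of that session-sync loop with Playwright (illustrative, simplified from the command handler shown above):

```js
// Sketch of the cookie-sync loop retained in dist/index.js (only its comments were removed in 1.0.3).
const { chromium } = require('playwright');
const fs = require('fs');
const path = require('path');

async function captureSession(url) {
  const browser = await chromium.launch({ headless: false });
  const context = await browser.newContext();
  const page = await context.newPage();
  await page.goto(url, { waitUntil: 'domcontentloaded' });

  const sessionFile = path.resolve(process.cwd(), '.apigen-session.json');
  let isRunning = true;
  browser.on('disconnected', () => {
    isRunning = false; // user closed the window; the last snapshot on disk is the saved session
  });

  while (isRunning) {
    try {
      const cookies = await context.cookies();
      fs.writeFileSync(sessionFile, JSON.stringify(cookies, null, 2), 'utf-8');
    } catch {
      // the context can already be torn down while the browser is closing; skip this tick
    }
    await new Promise(resolve => setTimeout(resolve, 2000)); // resync every two seconds
  }
}

captureSession('https://example.com').catch(console.error);
```
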
@@ -7,25 +7,21 @@ exports.attachInterceptor = attachInterceptor;
  const crypto_1 = require("crypto");
  const chalk_1 = __importDefault(require("chalk"));
  const db_1 = require("./db");
- const IGNORED_RESOURCE_TYPES = new Set(['image', 'stylesheet', 'font', 'media', 'script', 'document']);
  const TARGET_RESOURCE_TYPES = new Set(['xhr', 'fetch']);
  function attachInterceptor(page) {
  page.on('response', async (response) => {
  const request = response.request();
  const resourceType = request.resourceType();
- // 1. Crucial Filter: ONLY capture traffic if it is XHR, Fetch etc.
  if (!TARGET_RESOURCE_TYPES.has(resourceType)) {
  return;
  }
  const url = request.url();
- // Ignore tracking domains / static extensions
  if (url.includes('google-analytics.com') ||
  url.includes('googletagmanager.com') ||
  url.match(/\.(png|jpg|jpeg|gif|css|woff2?|js|ico|svg)$/i)) {
  return;
  }
  const method = request.method();
- // Ignore preflight requests
  if (method === 'OPTIONS')
  return;
  try {
@@ -33,17 +29,15 @@ function attachInterceptor(page) {
  const headers = request.headers();
  let reqBodyParsed = null;
  let resBodyParsed = null;
- // Parse request post body
  const postData = request.postData();
  if (postData) {
  try {
  reqBodyParsed = JSON.parse(postData);
  }
  catch {
- reqBodyParsed = postData; // Fallback to raw string
+ reqBodyParsed = postData;
  }
  }
- // Parse response body (JSON, text)
  const contentType = response.headers()['content-type'] || '';
  if (contentType.includes('application/json') || contentType.includes('text/')) {
  try {
@@ -63,16 +57,20 @@ function attachInterceptor(page) {
  else {
  resBodyParsed = "[Binary or Unsupported Content]";
  }
- // Extract basic path pattern
+ let finalUrl = url;
+ if (reqBodyParsed && reqBodyParsed.operationName && finalUrl.includes('/graphql')) {
+ const separator = finalUrl.includes('?') ? '&' : '?';
+ finalUrl = `${finalUrl}${separator}op=${reqBodyParsed.operationName}`;
+ }
  let pathPattern = '/';
  try {
- pathPattern = new URL(url).pathname;
+ pathPattern = new URL(finalUrl).pathname;
  }
  catch { }
  const data = {
  id: (0, crypto_1.randomUUID)(),
  method,
- url,
+ url: finalUrl,
  path_pattern: pathPattern,
  request_headers: headers,
  request_body: reqBodyParsed,
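
The hunks above (their headers reference `attachInterceptor`, the interceptor module) add GraphQL awareness: when a captured request body carries an `operationName` and the URL contains `/graphql`, the operation name is appended as an `op=` query parameter before the URL and path pattern are stored, so distinct GraphQL operations no longer collapse into a single `/graphql` entry. A standalone sketch of that folding step, where `foldGraphqlOperation` is a hypothetical helper name and not part of the package's API:

```js
// Hypothetical helper mirroring the GraphQL URL-folding added to the interceptor in 1.0.3.
function foldGraphqlOperation(url, requestBody) {
  if (requestBody && requestBody.operationName && url.includes('/graphql')) {
    const separator = url.includes('?') ? '&' : '?';
    return `${url}${separator}op=${requestBody.operationName}`;
  }
  return url;
}

// Two operations against the same endpoint now produce distinguishable stored URLs.
console.log(foldGraphqlOperation('https://example.com/graphql', { operationName: 'GetUser' }));
// -> https://example.com/graphql?op=GetUser
console.log(foldGraphqlOperation('https://example.com/graphql', { operationName: 'ListOrders' }));
// -> https://example.com/graphql?op=ListOrders
```
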
@@ -26,7 +26,7 @@ function mapSchemaToOpenAPI(customSchema) {
  }
  return { type: 'object', properties };
  }
- return {}; // fallback
+ return {};
  }
  function extractPathParams(path) {
  const matches = path.match(/\{([^}]+)\}/g);
@@ -77,7 +77,6 @@ function generateOpenAPI(customSchema, baseUrl) {
  }
  };
  }
- // Default response if empty
  if (Object.keys(operation.responses).length === 0) {
  operation.responses['200'] = { description: 'Success' };
  }
@@ -44,7 +44,6 @@ function inferSchema(data) {
  if (Array.isArray(data)) {
  if (data.length === 0)
  return ['any'];
- // Infer schema of the object inside the array
  return [inferSchema(data[0])];
  }
  if (typeof data === 'object') {
@@ -74,7 +73,7 @@ function generateSchemaMap(endpoints) {
  try {
  bodyData = JSON.parse(bodyData);
  }
- catch { } // Leave as string if parsing fails
+ catch { }
  }
  if (bodyData && typeof bodyData === 'object') {
  const schema = inferSchema(bodyData);
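
The generator and schema-engine hunks above only strip comments; the recursive inference logic is unchanged: an empty array maps to `['any']`, a non-empty array to the schema of its first element, and objects are walked key by key. A plausible reconstruction for reference, where only the array branches are taken verbatim from the hunk and the null, object, and primitive branches are assumed:

```js
// Plausible reconstruction of the recursive inferSchema helper, for reference.
// The array branches match the hunk above; the null, object, and primitive branches are assumed.
function inferSchema(data) {
  if (data === null) return 'null';                  // assumed
  if (Array.isArray(data)) {
    if (data.length === 0) return ['any'];
    return [inferSchema(data[0])];                   // element schema inferred from the first item
  }
  if (typeof data === 'object') {
    const schema = {};
    for (const key of Object.keys(data)) schema[key] = inferSchema(data[key]); // assumed
    return schema;
  }
  return typeof data;                                // 'string' | 'number' | 'boolean' (assumed)
}

console.log(inferSchema({ id: 1, tags: ['a'], profile: { active: true } }));
// -> { id: 'number', tags: [ 'string' ], profile: { active: 'boolean' } }
```
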
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
  "name": "auto-api-discovery",
- "version": "1.0.0",
+ "version": "1.0.3",
  "description": "API discovery automation CLI",
  "main": "dist/index.js",
  "bin": {
@@ -12,8 +12,9 @@
  "scripts": {
  "build": "tsc",
  "dev": "ts-node src/index.ts",
+ "test": "tsc --noEmit",
  "prepublishOnly": "npm run build",
- "postinstall": "playwright install"
+ "postinstall": "playwright install chromium"
  },
  "keywords": [
  "api",
@@ -36,4 +37,4 @@
  "ts-node": "^10.9.2",
  "typescript": "^5.3.3"
  }
- }
+ }