llm-scanner 0.1.3 → 0.1.4

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +16 -3
  2. package/package.json +1 -1
package/README.md CHANGED
@@ -16,16 +16,29 @@ npm install -g llm-scanner
 
 **Step 2 — Add your OpenAI key**
 
-llm-scanner uses OpenAI internally to judge whether your AI passed or
-failed each attack. Create a .env file in the folder you run scans from:
+Get your API key from https://platform.openai.com/api-keys
+
+llm-scanner uses a separate AI judge to evaluate whether your AI
+passed or failed each attack. This judge runs on your machine using
+your own OpenAI API key — so your endpoint data never leaves your
+environment.
+
+This is what powers the PASS/FAIL results in your report.
+
+Option 1 — Save it (recommended):
 ```bash
 echo 'OPENAI_API_KEY=sk-your-key-here' > .env
 ```
-Or export it directly:
+
+Option 2 — Quick test:
 ```bash
 export OPENAI_API_KEY=sk-your-key-here
 ```
 
+If you use export, make sure you run aisec in the same terminal session.
+
+> Note: The judge uses gpt-4o-mini. A full scan costs less than $0.02.
+
 **Step 3 — Run your first scan**
 ```bash
 aisec scan \
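Taken together, the two options in the updated README amount to the following shell session (a sketch only; `sk-your-key-here` is the README's placeholder, not a real key):

```shell
# Option 1: persist the key in a .env file in the directory you scan from,
# so every future run picks it up.
echo 'OPENAI_API_KEY=sk-your-key-here' > .env
cat .env

# Option 2: export the key for the current terminal session only.
# aisec must then be run from this same session.
export OPENAI_API_KEY=sk-your-key-here
printenv OPENAI_API_KEY
```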
package/package.json CHANGED
@@ -1,6 +1,6 @@
 {
   "name": "llm-scanner",
-  "version": "0.1.3",
+  "version": "0.1.4",
   "description": "Scan your AI app for prompt injection vulnerabilities before hackers do",
   "main": "./dist/index.js",
   "bin": {