@sandrobuilds/tracerney 0.9.9 → 0.9.11

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (2)
  1. package/README.md +59 -20
  2. package/package.json +1 -1
package/README.md CHANGED
@@ -5,13 +5,13 @@
  [![npm version](https://badge.fury.io/js/tracerney.svg)](https://www.npmjs.com/package/tracerney)
  [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
 
- Tracerney is a production-ready security middleware that sits between your application and LLM providers. It detects and blocks prompt injection attacks with **three hardened layers**:
+ Tracerney is a lightweight, free SDK for detecting prompt injection attacks. It runs 100% locally with no dependencies and no data collection.
 
- 1. **Layer 1 (Vanguard)**: Regex patterns with Unicode normalization — <2ms, blocks known attacks
- 2. **Layer 2 (Sentinel)**: Backend LLM verification for novel attacks, with rate limiting to prevent cost spikes
- 3. **Layer 3 (Jitter)**: Random response delays to mask which layer blocked an attack
-
- **Zero data leaves your infrastructure**: you control the backend endpoints.
+ **Free SDK includes:**
+ - **Layer 1 (Pattern Detection)**: 238 embedded attack patterns with Unicode normalization
+ - <2ms detection latency per prompt
+ - Zero network overhead — all detection is local
+ - Works offline; no backend required
 
  ## 🚀 Quick Start
 
@@ -20,18 +20,41 @@ Tracerney is a production-ready security middleware that sits between your appli
  npm install @sandrobuilds/tracerney
  ```
 
- **Setup (30 seconds) - RECOMMENDED approach:**
+ **Simplest Setup (Free SDK, Pattern Detection):**
+ ```typescript
+ import { Tracerney } from '@sandrobuilds/tracerney';
+
+ const tracer = new Tracerney({
+   allowedTools: ['search', 'calculator'],
+ });
+
+ // Check if a prompt is suspicious
+ const result = await tracer.scanPrompt(userInput);
+
+ console.log(result);
+ // {
+ //   suspicious: true,     // Layer 1 detected pattern match
+ //   patternName: "Ignore Instructions",
+ //   severity: "CRITICAL",
+ //   blocked: false        // No backend verification yet
+ // }
+
+ if (result.suspicious) {
+   console.log(`⚠️ Suspicious: ${result.patternName}`);
+   // Handle the suspicious prompt (log, rate-limit, etc.)
+ }
+ ```
+
+ **Advanced Setup (with Backend LLM Verification):**
  ```typescript
  import { Tracerney, ShieldBlockError } from '@sandrobuilds/tracerney';
 
- // Just provide your domain URL — paths are auto-constructed!
  const shield = new Tracerney({
-   baseUrl: 'http://localhost:3000', // Automatically creates all endpoints
+   baseUrl: 'http://localhost:3000', // Backend with LLM Sentinel
    allowedTools: ['search', 'calculator'],
    apiKey: process.env.TRACERNY_API_KEY,
  });
 
- // Wrap your LLM call (detects at Layer 1 + Layer 2)
  try {
    const response = await shield.wrap(() =>
      openai.chat.completions.create({
@@ -43,13 +66,12 @@ try {
  } catch (error) {
    if (error instanceof ShieldBlockError) {
      console.error('🛡️ Attack blocked:', error.event.blockReason);
-     // Layer 1 hit: error.event.metadata.patternName
-     // Layer 2 hit: "LLM Sentinel" detected novel attack
+     // Only throws if LLM Sentinel confirms attack
    }
  }
  ```
 
- That's it! Your LLM is now protected.
+ Start with the simple setup. Add backend verification when ready!
 
  ## Philosophy
 
@@ -304,17 +326,34 @@ try {
  ### Scanning Prompts Pre-LLM
 
- For defense-in-depth, scan prompts before calling the LLM:
+ Check if a prompt is suspicious before calling the LLM:
 
  ```typescript
- try {
-   shield.scanPrompt(userInput);
-   // Safe to proceed
- } catch (err) {
-   if (err instanceof ShieldBlockError) {
-     console.error('Blocked at edge:', err.event);
+ const result = await shield.scanPrompt(userInput);
+
+ if (result.suspicious) {
+   console.log(`⚠️ Suspicious: ${result.patternName}`);
+   console.log(`Severity: ${result.severity}`);
+   // Log, rate-limit, or notify security team
+
+   if (result.blocked) {
+     // Only true if LLM Sentinel confirmed (requires backend)
+     return res.status(403).json({ error: 'Request blocked' });
    }
  }
+
+ // Safe to call LLM
+ const response = await openai.chat.completions.create({...});
+ ```
+
+ **Result object:**
+ ```typescript
+ interface ScanResult {
+   suspicious: boolean;   // Layer 1 detected pattern match
+   patternName?: string;  // e.g., "Ignore Instructions"
+   severity?: string;     // "low" | "medium" | "high" | "critical"
+   blocked: boolean;      // true only if LLM Sentinel confirmed (requires backend)
+ }
  ```
 
  ### Updating Allowed Tools
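The non-throwing `scanPrompt` flow introduced in this README diff (Layer 1 regex matching over an NFKC-normalized prompt, returning a result object instead of raising) can be approximated with a short local sketch. This is an illustration, not tracerney's implementation: the two patterns, their names, and severities are hypothetical stand-ins for the 238 embedded patterns the README mentions.

```typescript
// Hypothetical sketch of Layer 1-style local pattern detection.
// Pattern list, names, and severities are illustrative, not tracerney's.
interface ScanResult {
  suspicious: boolean;
  patternName?: string;
  severity?: string;
  blocked: boolean; // always false without a backend Sentinel
}

const PATTERNS: Array<{ name: string; severity: string; regex: RegExp }> = [
  { name: "Ignore Instructions", severity: "CRITICAL", regex: /ignore\s+(all\s+)?previous\s+instructions/i },
  { name: "Role Override", severity: "high", regex: /you\s+are\s+now\s+an?\s+/i },
];

function scanPromptLocal(prompt: string): ScanResult {
  // NFKC normalization folds Unicode look-alikes (e.g., fullwidth letters)
  // into their canonical forms before matching, as the README describes.
  const normalized = prompt.normalize("NFKC");
  for (const p of PATTERNS) {
    if (p.regex.test(normalized)) {
      return { suspicious: true, patternName: p.name, severity: p.severity, blocked: false };
    }
  }
  return { suspicious: false, blocked: false };
}
```

Note that, like the SDK's result object, this never sets `blocked: true` on its own; confirming and blocking an attack is left to the optional backend verification step.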
package/package.json CHANGED
@@ -1,6 +1,6 @@
  {
    "name": "@sandrobuilds/tracerney",
-   "version": "0.9.9",
+   "version": "0.9.11",
    "description": "Transparent proxy runtime sentinel for prompt injection defense",
    "type": "module",
    "main": "dist/index.js",