llmverify 1.0.2 → 1.4.0

package/CHANGELOG.md CHANGED
@@ -5,6 +5,139 @@ All notable changes to llmverify will be documented in this file.
  The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
  and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+ ## [1.4.0] - 2024-12-04
+
+ ### Added - Enterprise Features
+
+ **Enhanced Error Handling:**
+ - 20+ standardized error codes (LLMVERIFY_1001 format)
+ - Error severity levels (low, medium, high, critical)
+ - Error metadata with actionable suggestions
+ - Recoverable/non-recoverable classification
+ - JSON serialization support
+
+ **Logging & Audit System:**
+ - Structured logging to `~/.llmverify/logs/*.jsonl`
+ - Request ID tracking with UUID
+ - Automatic PII sanitization in logs
+ - Log rotation (10MB max, keep 10 files)
+ - Audit trail to `~/.llmverify/audit/*.jsonl`
+ - SHA-256 content hashing
+ - Compliance-ready audit exports
+ - Log statistics & analytics
+
+ **Baseline Drift Detection:**
+ - Baseline metrics storage (`~/.llmverify/baseline/baseline.json`)
+ - Running averages for latency, content length, and risk score
+ - Risk distribution tracking
+ - Engine score tracking
+ - Drift detection with a 20% threshold
+ - Drift history tracking
+ - CLI commands: `baseline:stats`, `baseline:reset`, `baseline:drift`
+
+ **Plugin System:**
+ - Extensible rule system for custom verification
+ - Plugin registry with enable/disable
+ - Priority-based execution
+ - Category-based filtering
+ - Built-in helpers: blacklist, regex, length validator, keyword detector
+ - `use()` API for plugin registration
+
+ **Security Hardening:**
+ - Input validation with size limits
+ - Safe regex execution with timeout protection
+ - PII sanitization utilities
+ - Rate limiter class
+ - XSS prevention (HTML escaping)
+ - Injection detection
+ - URL validation
+
+ ### Changed
+ - `verify()` now integrates logging, audit, baseline tracking, and plugins
+ - Enhanced input validation with better error messages
+ - Improved error handling throughout the codebase
+
+ ### API Additions
+ - `ErrorCode`, `ErrorSeverity`, `getErrorMetadata()`
+ - `Logger`, `getLogger()`, `LogLevel`
+ - `AuditLogger`, `getAuditLogger()`
+ - `BaselineStorage`, `getBaselineStorage()`
+ - `Plugin`, `use()`, `createPlugin()`
+ - `RateLimiter`, `sanitizeForLogging()`, `safeRegexTest()`
+
+ ### Documentation
+ - Complete implementation of enterprise features
+ - All APIs exported and documented
+ - CLI commands for baseline management
+
+ ## [1.3.1] - 2024-12-04
+
+ ### Added
+ - **Complete API Reference Documentation** (`docs/API-REFERENCE.md`)
+   - Comprehensive programmatic API documentation
+   - All functions with parameters, return types, and examples
+   - TypeScript type definitions
+   - Best practices and error handling
+ - **JSON Schema for verify() Output** (`schema/verify-result.schema.json`)
+   - Formal JSON Schema (draft-07) for VerifyResult
+   - Complete type definitions and validation rules
+   - Example outputs for reference
+   - Machine-readable schema for validation tools
+ - **Enhanced Documentation**
+   - Added schema directory to npm package
+   - Improved API discoverability
+
+ ### Changed
+ - Package now includes the `schema/` directory in published files
+ - Enhanced type safety with a formal JSON schema
+
+ ### Documentation
+ - Complete API reference with all functions documented
+ - JSON schema for programmatic validation
+ - TypeScript type definitions reference
+ - Best practices guide
+
+ ## [1.3.0] - 2024-12-04
+
+ ### Added
+ - **HTTP Server Mode**: New `llmverify-serve` command starts a long-running HTTP API server
+   - Default port 9009, configurable via the `--port` flag
+   - RESTful endpoints: `/verify`, `/check-input`, `/check-pii`, `/classify`, `/health`
+   - Full CORS support for local development
+   - Graceful shutdown handling
+ - **IDE Integration**: Comprehensive guide for Windsurf, Cursor, VS Code, and custom IDEs
+   - Example code for TypeScript, JavaScript, and Python
+   - System prompt templates for AI assistants
+   - Production deployment guidelines
+ - **Server Endpoints**:
+   - `POST /verify` - Main verification endpoint (accepts `text` or `content`)
+   - `POST /check-input` - Input safety check for prompt injection
+   - `POST /check-pii` - PII detection and redaction
+   - `POST /classify` - Output classification with intent and hallucination risk
+   - `GET /health` - Health check with version info
+   - `GET /` - API documentation endpoint
+ - **Enhanced CLI**:
+   - Improved `--output json` mode for scripting
+   - Better error messages and validation
+   - Exit codes for CI/CD integration (0=low, 1=moderate, 2=high/critical)
+
+ ### Changed
+ - Updated package.json to include the Express.js dependency
+ - Added the `bin/llmverify-serve.js` executable
+ - Enhanced README with server mode documentation and IDE integration examples
+ - Improved API response format with a consistent structure across all endpoints
+
+ ### Fixed
+ - CLI now properly handles the `--file` and `--json` flags
+ - Better error handling for missing or invalid input
+
+ ### Documentation
+ - Added a comprehensive server mode section to the README
+ - Added an IDE integration guide with examples for multiple languages
+ - Added production deployment best practices
+ - Added API response format documentation
+ - Updated CLI usage examples
+
  ## [1.0.0] - 2025-12-02
 
  ### Added
package/README.md CHANGED
@@ -110,6 +110,339 @@ console.log(response.llmverify.health); // 'stable' | 'degraded' | 'unstable'
 
  ---
 
+ ## 🆕 Enterprise Features (v1.4.0)
+
+ **NEW in v1.4.0**: Production-grade monitoring, logging, and extensibility.
+
+ ### Enhanced Error Handling
+ ```typescript
+ import { verify, ErrorCode } from 'llmverify';
+
+ try {
+   const result = await verify({ content });
+ } catch (error) {
+   console.log(error.code); // LLMVERIFY_1003
+   console.log(error.metadata.suggestion); // Actionable fix
+ }
+ ```
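The shape of these errors (a stable `code`, a severity level, and metadata with a suggestion) can be pictured with a small standalone sketch. The class name and field layout below are illustrative assumptions, not llmverify's actual implementation:

```typescript
// Hypothetical sketch of a coded error, mirroring the documented
// LLMVERIFY_1001-style codes and severity levels (assumed shape).
type Severity = 'low' | 'medium' | 'high' | 'critical';

class VerifyError extends Error {
  constructor(
    public code: string, // e.g. 'LLMVERIFY_1003'
    public severity: Severity,
    public metadata: { suggestion: string; recoverable: boolean },
    message: string,
  ) {
    super(message);
  }

  // JSON serialization, as the changelog lists for v1.4.0 errors
  toJSON() {
    return { code: this.code, severity: this.severity, ...this.metadata, message: this.message };
  }
}

const err = new VerifyError(
  'LLMVERIFY_1003',
  'high',
  { suggestion: 'Reduce input size below the configured limit', recoverable: true },
  'Input exceeds size limit',
);
console.log(err.toJSON().code); // LLMVERIFY_1003
```

A caller can then branch on `code` and surface `metadata.suggestion` to the user, which is the pattern the `try/catch` example above relies on.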
+
+ ### Logging & Audit Trails
+ ```typescript
+ import { getLogger, getAuditLogger } from 'llmverify';
+
+ const logger = getLogger({ level: 'info' });
+ const requestId = logger.startRequest();
+ logger.info('Processing request', { userId: '123' });
+
+ // Compliance-ready audit trail
+ const auditLogger = getAuditLogger();
+ // Automatically logs all verifications to ~/.llmverify/audit/
+ ```
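The changelog notes that each audit entry carries a SHA-256 hash of the verified content. With Node's built-in `crypto` module, that kind of content hashing looks like this (a standalone sketch, not llmverify's code):

```typescript
import { createHash } from 'crypto';

// Hash the verified content so the audit trail can later prove
// exactly which bytes were checked (tamper-evident record).
function contentHash(content: string): string {
  return createHash('sha256').update(content, 'utf8').digest('hex');
}

console.log(contentHash('hello'));
// 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
```

Because the hash is deterministic, re-hashing stored content and comparing against the audit record detects any modification after the fact.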
+
+ ### Baseline Drift Detection
+ ```typescript
+ import { getBaselineStorage } from 'llmverify';
+
+ const storage = getBaselineStorage();
+ const stats = storage.getStatistics();
+ console.log(`Baseline: ${stats.sampleCount} samples`);
+
+ // Automatic drift detection (20% threshold)
+ // CLI: npx llmverify baseline:stats
+ ```
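Conceptually, drift detection compares a new value of a metric against its running baseline average and flags relative deviations beyond the 20% threshold. A self-contained sketch of that idea (not llmverify's implementation):

```typescript
// Running-average baseline with a relative drift threshold (sketch).
class Baseline {
  private sum = 0;
  private count = 0;

  add(value: number): void {
    this.sum += value;
    this.count += 1;
  }

  get average(): number {
    return this.count === 0 ? 0 : this.sum / this.count;
  }

  // True when `value` deviates from the baseline average by more
  // than `threshold` (default 20%, matching the changelog).
  isDrift(value: number, threshold = 0.2): boolean {
    if (this.count === 0 || this.average === 0) return false;
    return Math.abs(value - this.average) / this.average > threshold;
  }
}

const latency = new Baseline();
[100, 105, 95, 100].forEach((ms) => latency.add(ms));
console.log(latency.average);      // 100
console.log(latency.isDrift(130)); // true  (30% above baseline)
console.log(latency.isDrift(110)); // false (10% above baseline)
```

The same running-average structure would apply per metric (latency, content length, risk score), with detected drifts appended to a history file.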
+
+ ### Plugin System
+ ```typescript
+ import { use, createPlugin } from 'llmverify';
+
+ const customRule = createPlugin({
+   id: 'my-rule',
+   name: 'Custom Verification Rule',
+   execute: async (context) => ({
+     findings: [],
+     score: 0
+   })
+ });
+
+ use(customRule);
+ // Now all verify() calls include your custom rule
+ ```
+
+ ### Security Utilities
+ ```typescript
+ import { RateLimiter, sanitizeForLogging, safeRegexTest } from 'llmverify';
+
+ const limiter = new RateLimiter(100, 60000); // 100 req/min
+ if (!limiter.isAllowed(userId)) {
+   throw new Error('Rate limit exceeded');
+ }
+
+ // PII-safe logging
+ const safe = sanitizeForLogging(content); // Removes emails, phones, SSNs
+ ```
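The `RateLimiter(100, 60000)` constructor above suggests a per-key, fixed-window limiter: at most `maxRequests` calls per `windowMs` milliseconds. A minimal standalone sketch of those assumed semantics (not llmverify's actual class):

```typescript
// Fixed-window, per-key rate limiter - a sketch of the assumed
// RateLimiter(maxRequests, windowMs) semantics, not llmverify's code.
class SimpleRateLimiter {
  private windows = new Map<string, { start: number; count: number }>();

  constructor(private maxRequests: number, private windowMs: number) {}

  isAllowed(key: string, now: number = Date.now()): boolean {
    const w = this.windows.get(key);
    if (!w || now - w.start >= this.windowMs) {
      // Start a fresh window for this key
      this.windows.set(key, { start: now, count: 1 });
      return true;
    }
    if (w.count < this.maxRequests) {
      w.count += 1;
      return true;
    }
    return false; // Over budget for the current window
  }
}

const demo = new SimpleRateLimiter(2, 1000); // 2 requests per second
console.log(demo.isAllowed('user-1', 0));    // true
console.log(demo.isAllowed('user-1', 10));   // true
console.log(demo.isAllowed('user-1', 20));   // false
console.log(demo.isAllowed('user-1', 1000)); // true (new window)
```

A fixed window is the simplest choice; sliding-window or token-bucket variants smooth out the burst allowed at each window boundary.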
+
+ **See [CHANGELOG.md](CHANGELOG.md) for the complete v1.4.0 feature list.**
+
+ ---
+
+ ## 📛 Show Your Badge
+
+ Display the "Built with llmverify" badge on your project to show you're using AI verification!
+
+ ### Generate Your Badge
+
+ ```bash
+ npx llmverify badge --name "My Project" --url "https://myproject.com"
+ ```
+
+ ### Add to Your README
+
+ ```markdown
+ [![Built with llmverify](https://img.shields.io/badge/Built_with-llmverify-blue)](https://github.com/subodhkc/llmverify-npm)
+ ```
+
+ ### Programmatic Badge Generation
+
+ ```typescript
+ import { generateBadgeForProject } from 'llmverify';
+
+ const { markdown, html } = generateBadgeForProject('My Project', 'https://myproject.com');
+ console.log(markdown); // Copy to README.md
+ ```
+
+ **Badge Features:**
+ - ✅ Verified signature for authenticity
+ - ✅ Markdown and HTML formats
+ - ✅ Customizable project name and URL
+ - ✅ Downloadable badge image in `/assets/badge.svg`
+
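The "verified signature" feature is exposed through `generateBadgeSignature()` in the badge module. One plausible construction is a keyed hash (HMAC) over the badge fields; the secret name and field layout below are assumptions for illustration only, not llmverify's actual scheme:

```typescript
import { createHmac } from 'crypto';

// Illustrative only: derive a deterministic signature over the badge
// fields so that tampering with any field invalidates it. The secret
// and field order here are assumptions, not llmverify's real scheme.
function signBadge(projectName: string, verifiedDate: string, version: string, secret: string): string {
  return createHmac('sha256', secret)
    .update(`${projectName}|${verifiedDate}|${version}`)
    .digest('hex');
}

const sig = signBadge('My Project', '2024-12-04', '1.4.0', 'demo-secret');

// Re-deriving with the same fields verifies; changing any field fails.
console.log(sig === signBadge('My Project', '2024-12-04', '1.4.0', 'demo-secret')); // true
console.log(sig === signBadge('Other Project', '2024-12-04', '1.4.0', 'demo-secret')); // false
```

Verification then reduces to recomputing the HMAC and comparing, which is the shape of the exported `verifyBadgeSignature()` helper.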
+ ---
+
+ ## Server Mode — Run llmverify in Your IDE
+
+ **NEW in v1.3.0**: Start a long-running HTTP server for seamless IDE integration.
+
+ ### Quick Start
+
+ ```bash
+ # Start the server (default port 9009)
+ npx llmverify-serve
+
+ # Or specify a custom port
+ npx llmverify-serve --port=8080
+ ```
+
+ The server will start at `http://localhost:9009` with the following endpoints:
+
+ | Endpoint | Method | Description |
+ |----------|--------|-------------|
+ | `/health` | GET | Health check |
+ | `/verify` | POST | Verify AI output (main endpoint) |
+ | `/check-input` | POST | Check input for prompt injection |
+ | `/check-pii` | POST | Detect and redact PII |
+ | `/classify` | POST | Classify output intent and hallucination risk |
+
+ ### API Usage Examples
+
+ #### Verify AI Output
+
+ ```bash
+ curl -X POST http://localhost:9009/verify \
+   -H "Content-Type: application/json" \
+   -d '{"text": "Your AI output here"}'
+ ```
+
+ ```javascript
+ // Node.js / JavaScript
+ const response = await fetch('http://localhost:9009/verify', {
+   method: 'POST',
+   headers: { 'Content-Type': 'application/json' },
+   body: JSON.stringify({ text: 'Your AI output here' })
+ });
+ const result = await response.json();
+ console.log(result.result.risk.level); // "low" | "moderate" | "high" | "critical"
+ ```
+
+ ```python
+ # Python
+ import requests
+
+ response = requests.post('http://localhost:9009/verify', json={
+     'text': 'Your AI output here'
+ })
+ result = response.json()
+ print(result['result']['risk']['level'])
+ ```
+
+ #### Check Input Safety
+
+ ```bash
+ curl -X POST http://localhost:9009/check-input \
+   -H "Content-Type: application/json" \
+   -d '{"text": "User input to check"}'
+ ```
+
+ #### Detect PII
+
+ ```bash
+ curl -X POST http://localhost:9009/check-pii \
+   -H "Content-Type: application/json" \
+   -d '{"text": "Contact me at john@example.com"}'
+ ```
+
+ ### IDE Integration Guide
+
+ #### Windsurf / Cursor / VS Code
+
+ Add this to your AI assistant's system prompt or rules:
+
+ ````markdown
+ ## AI Output Verification
+
+ Before responding to the user, verify your output using the local llmverify server:
+
+ 1. Start the server: `npx llmverify-serve` (runs on http://localhost:9009)
+ 2. POST your response to http://localhost:9009/verify
+ 3. Check the risk level in the response
+ 4. If risk is "high" or "critical", revise your response
+
+ Example verification code:
+ ```javascript
+ const verifyOutput = async (text) => {
+   const response = await fetch('http://localhost:9009/verify', {
+     method: 'POST',
+     headers: { 'Content-Type': 'application/json' },
+     body: JSON.stringify({ text })
+   });
+   return await response.json();
+ };
+
+ // Before sending response to user
+ const verification = await verifyOutput(myResponse);
+ if (verification.result.risk.level === 'critical') {
+   // Revise response or alert user
+ }
+ ```
+ ````
+
+ #### Custom IDE Integration
+
+ ```typescript
+ // Add to your IDE extension or AI agent
+ import fetch from 'node-fetch';
+
+ async function verifyAIOutput(text: string) {
+   try {
+     const response = await fetch('http://localhost:9009/verify', {
+       method: 'POST',
+       headers: { 'Content-Type': 'application/json' },
+       body: JSON.stringify({ text })
+     });
+
+     const result = await response.json();
+
+     if (!result.success) {
+       console.error('Verification failed:', result.error);
+       return null;
+     }
+
+     return {
+       riskLevel: result.result.risk.level,
+       action: result.result.risk.action,
+       findings: result.result.findings,
+       safe: result.result.risk.level === 'low'
+     };
+   } catch (error) {
+     console.error('Failed to connect to llmverify server:', error);
+     return null;
+   }
+ }
+
+ // Usage in your AI workflow
+ const aiResponse = await generateAIResponse(userPrompt);
+ const verification = await verifyAIOutput(aiResponse);
+
+ if (verification && !verification.safe) {
+   console.warn(`AI output has ${verification.riskLevel} risk`);
+   // Handle accordingly - revise, flag, or block
+ }
+ ```
+
+ #### GitHub Copilot / AI Assistants
+
+ For AI assistants that support custom tools or MCP servers, you can integrate llmverify as a verification step:
+
+ ```json
+ {
+   "tools": [
+     {
+       "name": "verify_output",
+       "description": "Verify AI output for safety, PII, and hallucinations",
+       "endpoint": "http://localhost:9009/verify",
+       "method": "POST",
+       "required": ["text"]
+     }
+   ]
+ }
+ ```
+
+ ### Server Response Format
+
+ All endpoints return JSON with this structure:
+
+ ```typescript
+ {
+   success: boolean;
+   result?: {
+     risk: {
+       level: "low" | "moderate" | "high" | "critical";
+       action: "allow" | "review" | "block";
+       score: number; // 0-1
+     };
+     findings: Array<{
+       category: string;
+       severity: string;
+       message: string;
+     }>;
+     // ... additional fields
+   };
+   error?: string;
+   version: string;
+ }
+ ```
+
+ ### Production Deployment
+
+ For production use, consider:
+
+ 1. **Authentication**: Add API key middleware
+ 2. **Rate Limiting**: Use `express-rate-limit`
+ 3. **HTTPS**: Deploy behind a reverse proxy (nginx, Caddy)
+ 4. **Monitoring**: Add logging and health checks
+ 5. **Scaling**: Run multiple instances with load balancing
+
+ Example with authentication:
+
+ ```typescript
+ import express from 'express';
+ import { startServer } from 'llmverify/dist/server';
+
+ const app = express();
+
+ // Add API key middleware
+ // NOTE: for this check to guard llmverify's endpoints, it must run in
+ // front of them - mount it in the app that serves the llmverify routes,
+ // or enforce the key in a reverse proxy ahead of startServer().
+ app.use((req, res, next) => {
+   const apiKey = req.headers['x-api-key'];
+   if (apiKey !== process.env.LLMVERIFY_API_KEY) {
+     return res.status(401).json({ error: 'Unauthorized' });
+   }
+   next();
+ });
+
+ startServer(9009);
+ ```
+
+ ---
+
  ## Why llmverify?
 
  | Problem | Solution |
@@ -423,6 +756,10 @@ export default async function handler(req, res) {
  ### Quick Start Commands
 
  ```bash
+ # ★ Start HTTP server for IDE integration (NEW in v1.3.0)
+ npx llmverify-serve                 # Default port 9009
+ npx llmverify-serve --port=8080     # Custom port
+
  # ★ Interactive setup wizard (first-time users)
  npx llmverify wizard
 
@@ -438,6 +775,9 @@ npx llmverify verify "Your AI output here"
  # From file
  npx llmverify verify --file output.txt
 
+ # JSON output (for scripting)
+ npx llmverify verify "Your AI output" --output json
+
  # JSON validation
  npx llmverify verify --json '{"status": "ok"}'
  ```
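The CLI's documented exit codes (0=low, 1=moderate, 2=high/critical) make risk gating in CI straightforward. This hypothetical TypeScript helper mirrors that mapping for a wrapper script; the function name is illustrative, not part of llmverify's API:

```typescript
// Mirrors the CLI's documented exit-code convention:
// 0 = low risk, 1 = moderate, 2 = high or critical.
type RiskLevel = 'low' | 'moderate' | 'high' | 'critical';

const exitCodes: Record<RiskLevel, number> = {
  low: 0,
  moderate: 1,
  high: 2,
  critical: 2,
};

function riskToExitCode(level: RiskLevel): number {
  return exitCodes[level];
}

console.log(riskToExitCode('low'));      // 0
console.log(riskToExitCode('critical')); // 2
// A CI wrapper would parse the `--output json` result and then call
// process.exit(riskToExitCode(level)) to fail the pipeline on high risk.
```

In plain shell, checking `$?` after `npx llmverify verify` achieves the same gating without a wrapper.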
@@ -889,8 +1229,11 @@ MIT License. See [LICENSE](LICENSE) for details.
  ## Documentation
 
  - [GETTING-STARTED.md](docs/GETTING-STARTED.md) - Beginner-friendly guide for students
+ - [API-REFERENCE.md](docs/API-REFERENCE.md) - Complete programmatic API documentation
+ - [SERVER-MODE.md](docs/SERVER-MODE.md) - HTTP server mode guide for IDE integration
  - [ALGORITHMS.md](docs/ALGORITHMS.md) - How each engine computes scores
  - [LIMITATIONS.md](docs/LIMITATIONS.md) - What llmverify can and cannot do
+ - [JSON Schema](schema/verify-result.schema.json) - Formal schema for verify() output
 
  ## Links
 
@@ -0,0 +1,8 @@
+ #!/usr/bin/env node
+
+ /**
+  * llmverify-serve CLI entry point
+  * Starts the HTTP server for IDE integration
+  */
+
+ require('../dist/server.js');
@@ -0,0 +1,58 @@
+ /**
+  * Badge Generator and Verification
+  *
+  * Generate "Built with llmverify" badges for verified applications
+  *
+  * @module badge/generator
+  */
+ /**
+  * Badge configuration
+  */
+ export interface BadgeConfig {
+   projectName: string;
+   projectUrl?: string;
+   verifiedDate: string;
+   version: string;
+ }
+ /**
+  * Badge verification data
+  */
+ export interface BadgeVerification {
+   projectName: string;
+   verifiedDate: string;
+   version: string;
+   signature: string;
+   valid: boolean;
+ }
+ /**
+  * Generate badge verification signature
+  */
+ export declare function generateBadgeSignature(config: BadgeConfig): string;
+ /**
+  * Verify badge signature
+  */
+ export declare function verifyBadgeSignature(projectName: string, verifiedDate: string, version: string, signature: string): boolean;
+ /**
+  * Generate badge markdown
+  */
+ export declare function generateBadgeMarkdown(config: BadgeConfig): string;
+ /**
+  * Generate badge HTML
+  */
+ export declare function generateBadgeHTML(config: BadgeConfig): string;
+ /**
+  * Extract badge verification from markdown/HTML
+  */
+ export declare function extractBadgeVerification(content: string): BadgeVerification | null;
+ /**
+  * CLI helper to generate badge
+  */
+ export declare function generateBadgeForProject(projectName: string, projectUrl?: string, version?: string): {
+   markdown: string;
+   html: string;
+   signature: string;
+ };
+ /**
+  * Save badge to file
+  */
+ export declare function saveBadgeToFile(outputPath: string, projectName: string, projectUrl?: string): void;