llmverify 1.0.2 → 1.3.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
package/CHANGELOG.md CHANGED
@@ -5,6 +5,74 @@ All notable changes to llmverify will be documented in this file.
  The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
  and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

+ ## [1.3.1] - 2024-12-04
+
+ ### Added
+ - **Complete API Reference Documentation** (`docs/API-REFERENCE.md`)
+   - Comprehensive programmatic API documentation
+   - All functions with parameters, return types, and examples
+   - TypeScript type definitions
+   - Best practices and error handling
+ - **JSON Schema for verify() Output** (`schema/verify-result.schema.json`)
+   - Formal JSON Schema (draft-07) for VerifyResult
+   - Complete type definitions and validation rules
+   - Example outputs for reference
+   - Machine-readable schema for validation tools
+ - **Enhanced Documentation**
+   - Added schema directory to npm package
+   - Improved API discoverability
+
+ ### Changed
+ - Package now includes `schema/` directory in published files
+ - Enhanced type safety with formal JSON schema
+
+ ### Documentation
+ - Complete API reference with all functions documented
+ - JSON schema for programmatic validation
+ - TypeScript type definitions reference
+ - Best practices guide
+
+ ## [1.3.0] - 2024-12-04
+
+ ### Added
+ - **HTTP Server Mode**: New `llmverify-serve` command starts a long-running HTTP API server
+   - Default port 9009, configurable via `--port` flag
+   - RESTful endpoints: `/verify`, `/check-input`, `/check-pii`, `/classify`, `/health`
+   - Full CORS support for local development
+   - Graceful shutdown handling
+ - **IDE Integration**: Comprehensive guide for Windsurf, Cursor, VS Code, and custom IDEs
+   - Example code for TypeScript, JavaScript, Python
+   - System prompt templates for AI assistants
+   - Production deployment guidelines
+ - **Server Endpoints**:
+   - `POST /verify` - Main verification endpoint (accepts `text` or `content`)
+   - `POST /check-input` - Input safety check for prompt injection
+   - `POST /check-pii` - PII detection and redaction
+   - `POST /classify` - Output classification with intent and hallucination risk
+   - `GET /health` - Health check with version info
+   - `GET /` - API documentation endpoint
+ - **Enhanced CLI**:
+   - Improved `--output json` mode for scripting
+   - Better error messages and validation
+   - Exit codes for CI/CD integration (0=low, 1=moderate, 2=high/critical)
+
+ ### Changed
+ - Updated package.json to include Express.js dependency
+ - Added `bin/llmverify-serve.js` executable
+ - Enhanced README with server mode documentation and IDE integration examples
+ - Improved API response format with consistent structure across all endpoints
+
+ ### Fixed
+ - CLI now properly handles `--file` and `--json` flags
+ - Better error handling for missing or invalid input
+
+ ### Documentation
+ - Added comprehensive server mode section to README
+ - Added IDE integration guide with examples for multiple languages
+ - Added production deployment best practices
+ - Added API response format documentation
+ - Updated CLI usage examples
+
  ## [1.0.0] - 2025-12-02

  ### Added
package/README.md CHANGED
@@ -110,6 +110,231 @@ console.log(response.llmverify.health); // 'stable' | 'degraded' | 'unstable'

  ---

+ ## Server Mode — Run llmverify in Your IDE
+
+ **NEW in v1.3.0**: Start a long-running HTTP server for seamless IDE integration.
+
+ ### Quick Start
+
+ ```bash
+ # Start the server (default port 9009)
+ npx llmverify-serve
+
+ # Or specify a custom port
+ npx llmverify-serve --port=8080
+ ```
+
+ The server will start at `http://localhost:9009` with the following endpoints:
+
+ | Endpoint | Method | Description |
+ |----------|--------|-------------|
+ | `/health` | GET | Health check |
+ | `/verify` | POST | Verify AI output (main endpoint) |
+ | `/check-input` | POST | Check input for prompt injection |
+ | `/check-pii` | POST | Detect and redact PII |
+ | `/classify` | POST | Classify output intent and hallucination risk |
+
+ ### API Usage Examples
+
+ #### Verify AI Output
+
+ ```bash
+ curl -X POST http://localhost:9009/verify \
+   -H "Content-Type: application/json" \
+   -d '{"text": "Your AI output here"}'
+ ```
+
+ ```javascript
+ // Node.js / JavaScript
+ const response = await fetch('http://localhost:9009/verify', {
+   method: 'POST',
+   headers: { 'Content-Type': 'application/json' },
+   body: JSON.stringify({ text: 'Your AI output here' })
+ });
+ const result = await response.json();
+ console.log(result.result.risk.level); // "low" | "moderate" | "high" | "critical"
+ ```
+
+ ```python
+ # Python
+ import requests
+
+ response = requests.post('http://localhost:9009/verify', json={
+     'text': 'Your AI output here'
+ })
+ result = response.json()
+ print(result['result']['risk']['level'])
+ ```
+
+ #### Check Input Safety
+
+ ```bash
+ curl -X POST http://localhost:9009/check-input \
+   -H "Content-Type: application/json" \
+   -d '{"text": "User input to check"}'
+ ```
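Since every endpoint returns the common response shape described under "Server Response Format", a caller can gate user input on the `/check-input` result. A minimal Python sketch using only the standard library (the helper names are illustrative, not part of the package; it assumes a local `npx llmverify-serve` instance on the default port, and the fail-closed handling of `success: false` is a policy choice, not documented package behavior):

```python
import json
import urllib.request

LLMVERIFY_URL = "http://localhost:9009"  # default llmverify-serve port

def check_input(text: str) -> dict:
    """POST user input to /check-input and return the parsed JSON response."""
    req = urllib.request.Request(
        f"{LLMVERIFY_URL}/check-input",
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def is_input_safe(response: dict) -> bool:
    """Accept input only when verification succeeded and risk is low or moderate."""
    if not response.get("success"):
        return False  # fail closed on server-side errors (assumption, not package behavior)
    return response["result"]["risk"]["level"] in ("low", "moderate")
```

Whether `moderate` input should be accepted is likewise a policy decision for the integrating application.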
+
+ #### Detect PII
+
+ ```bash
+ curl -X POST http://localhost:9009/check-pii \
+   -H "Content-Type: application/json" \
+   -d '{"text": "Contact me at john@example.com"}'
+ ```
+
+ ### IDE Integration Guide
+
+ #### Windsurf / Cursor / VS Code
+
+ Add this to your AI assistant's system prompt or rules:
+
+ ````markdown
+ ## AI Output Verification
+
+ Before responding to the user, verify your output using the local llmverify server:
+
+ 1. Start the server: `npx llmverify-serve` (runs on http://localhost:9009)
+ 2. POST your response to http://localhost:9009/verify
+ 3. Check the risk level in the response
+ 4. If risk is "high" or "critical", revise your response
+
+ Example verification code:
+ ```javascript
+ const verifyOutput = async (text) => {
+   const response = await fetch('http://localhost:9009/verify', {
+     method: 'POST',
+     headers: { 'Content-Type': 'application/json' },
+     body: JSON.stringify({ text })
+   });
+   return await response.json();
+ };
+
+ // Before sending response to user
+ const verification = await verifyOutput(myResponse);
+ if (verification.result.risk.level === 'critical') {
+   // Revise response or alert user
+ }
+ ```
+ ````
+
+ #### Custom IDE Integration
+
+ ```typescript
+ // Add to your IDE extension or AI agent
+ import fetch from 'node-fetch';
+
+ async function verifyAIOutput(text: string) {
+   try {
+     const response = await fetch('http://localhost:9009/verify', {
+       method: 'POST',
+       headers: { 'Content-Type': 'application/json' },
+       body: JSON.stringify({ text })
+     });
+
+     const result = await response.json();
+
+     if (!result.success) {
+       console.error('Verification failed:', result.error);
+       return null;
+     }
+
+     return {
+       riskLevel: result.result.risk.level,
+       action: result.result.risk.action,
+       findings: result.result.findings,
+       safe: result.result.risk.level === 'low'
+     };
+   } catch (error) {
+     console.error('Failed to connect to llmverify server:', error);
+     return null;
+   }
+ }
+
+ // Usage in your AI workflow
+ const aiResponse = await generateAIResponse(userPrompt);
+ const verification = await verifyAIOutput(aiResponse);
+
+ if (verification && !verification.safe) {
+   console.warn(`AI output has ${verification.riskLevel} risk`);
+   // Handle accordingly - revise, flag, or block
+ }
+ ```
+
+ #### GitHub Copilot / AI Assistants
+
+ For AI assistants that support custom tools or MCP servers, you can integrate llmverify as a verification step:
+
+ ```json
+ {
+   "tools": [
+     {
+       "name": "verify_output",
+       "description": "Verify AI output for safety, PII, and hallucinations",
+       "endpoint": "http://localhost:9009/verify",
+       "method": "POST",
+       "required": ["text"]
+     }
+   ]
+ }
+ ```
+
+ ### Server Response Format
+
+ All endpoints return JSON with this structure:
+
+ ```typescript
+ {
+   success: boolean;
+   result?: {
+     risk: {
+       level: "low" | "moderate" | "high" | "critical";
+       action: "allow" | "review" | "block";
+       score: number; // 0-1
+     };
+     findings: Array<{
+       category: string;
+       severity: string;
+       message: string;
+     }>;
+     // ... additional fields
+   };
+   error?: string;
+   version: string;
+ }
+ ```
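The structure above maps directly onto a small client-side helper. A hedged Python sketch (the function name and the sample payload, including its finding message, are illustrative only) that flattens a response into the fields most callers need:

```python
def risk_summary(response: dict) -> dict:
    """Flatten a llmverify server response into the fields most callers need."""
    if not response.get("success"):
        raise RuntimeError(response.get("error", "verification failed"))
    result = response["result"]
    return {
        "level": result["risk"]["level"],    # "low" | "moderate" | "high" | "critical"
        "action": result["risk"]["action"],  # "allow" | "review" | "block"
        "score": result["risk"]["score"],    # 0-1
        "messages": [f["message"] for f in result.get("findings", [])],
    }

# Hypothetical response shaped like the structure documented above
sample = {
    "success": True,
    "version": "1.3.1",
    "result": {
        "risk": {"level": "moderate", "action": "review", "score": 0.42},
        "findings": [
            {"category": "pii", "severity": "medium", "message": "Email address detected"}
        ],
    },
}
print(risk_summary(sample)["action"])  # → review
```

Raising on `success: false` keeps error handling explicit at the call site; a caller that prefers soft failure can catch the exception and fall back to a blocking default.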
+
+ ### Production Deployment
+
+ For production use, consider:
+
+ 1. **Authentication**: Add API key middleware
+ 2. **Rate Limiting**: Use `express-rate-limit`
+ 3. **HTTPS**: Deploy behind a reverse proxy (nginx, Caddy)
+ 4. **Monitoring**: Add logging and health checks
+ 5. **Scaling**: Run multiple instances with load balancing
+
+ Example with authentication:
+
+ ```typescript
+ import express from 'express';
+ import { startServer } from 'llmverify/dist/server';
+
+ const app = express();
+
+ // Add API key middleware
+ app.use((req, res, next) => {
+   const apiKey = req.headers['x-api-key'];
+   if (apiKey !== process.env.LLMVERIFY_API_KEY) {
+     return res.status(401).json({ error: 'Unauthorized' });
+   }
+   next();
+ });
+
+ startServer(9009);
+ ```
+
+ ---
+
  ## Why llmverify?

  | Problem | Solution |
@@ -423,6 +648,10 @@ export default async function handler(req, res) {
  ### Quick Start Commands

  ```bash
+ # ★ Start HTTP server for IDE integration (NEW in v1.3.0)
+ npx llmverify-serve              # Default port 9009
+ npx llmverify-serve --port=8080  # Custom port
+
  # ★ Interactive setup wizard (first-time users)
  npx llmverify wizard

@@ -438,6 +667,9 @@ npx llmverify verify "Your AI output here"
  # From file
  npx llmverify verify --file output.txt

+ # JSON output (for scripting)
+ npx llmverify verify "Your AI output" --output json
+
  # JSON validation
  npx llmverify verify --json '{"status": "ok"}'
  ```
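The v1.3.0 changelog notes CI-oriented exit codes (0=low, 1=moderate, 2=high/critical), which pairs naturally with the `--output json` scripting mode above. A hedged Python wrapper sketch for CI pipelines (`check_ai_output` is an illustrative name, not part of the package; the mapping mirrors the changelog):

```python
import subprocess

# Exit-code convention documented in the v1.3.0 changelog:
# 0 = low risk, 1 = moderate, 2 = high/critical
RISK_BY_EXIT_CODE = {0: "low", 1: "moderate", 2: "high/critical"}

def check_ai_output(text: str) -> str:
    """Run the llmverify CLI and translate its exit code into a risk label."""
    proc = subprocess.run(
        ["npx", "llmverify", "verify", text, "--output", "json"],
        capture_output=True,
        text=True,
    )
    return RISK_BY_EXIT_CODE.get(proc.returncode, "unknown")
```

A pipeline might fail the build whenever the returned label is `high/critical`, and surface `moderate` results as warnings.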
@@ -889,8 +1121,11 @@ MIT License. See [LICENSE](LICENSE) for details.
  ## Documentation

  - [GETTING-STARTED.md](docs/GETTING-STARTED.md) - Beginner-friendly guide for students
+ - [API-REFERENCE.md](docs/API-REFERENCE.md) - Complete programmatic API documentation
+ - [SERVER-MODE.md](docs/SERVER-MODE.md) - HTTP server mode guide for IDE integration
  - [ALGORITHMS.md](docs/ALGORITHMS.md) - How each engine computes scores
  - [LIMITATIONS.md](docs/LIMITATIONS.md) - What llmverify can and cannot do
+ - [JSON Schema](schema/verify-result.schema.json) - Formal schema for verify() output

  ## Links

package/bin/llmverify-serve.js ADDED
@@ -0,0 +1,8 @@
+ #!/usr/bin/env node
+
+ /**
+  * llmverify-serve CLI entry point
+  * Starts the HTTP server for IDE integration
+  */
+
+ require('../dist/server.js');