@lockllm/sdk 1.0.1 → 1.2.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (51)
  1. package/CHANGELOG.md +223 -5
  2. package/README.md +272 -40
  3. package/dist/client.d.ts +1 -1
  4. package/dist/client.d.ts.map +1 -1
  5. package/dist/errors.d.ts +56 -1
  6. package/dist/errors.d.ts.map +1 -1
  7. package/dist/errors.js +132 -2
  8. package/dist/errors.js.map +1 -1
  9. package/dist/errors.mjs +127 -1
  10. package/dist/index.d.ts +6 -5
  11. package/dist/index.d.ts.map +1 -1
  12. package/dist/index.js +10 -1
  13. package/dist/index.js.map +1 -1
  14. package/dist/index.mjs +3 -2
  15. package/dist/scan.d.ts +20 -5
  16. package/dist/scan.d.ts.map +1 -1
  17. package/dist/scan.js +59 -5
  18. package/dist/scan.js.map +1 -1
  19. package/dist/scan.mjs +59 -5
  20. package/dist/types/common.d.ts +116 -0
  21. package/dist/types/common.d.ts.map +1 -1
  22. package/dist/types/errors.d.ts +39 -0
  23. package/dist/types/errors.d.ts.map +1 -1
  24. package/dist/types/scan.d.ts +118 -3
  25. package/dist/types/scan.d.ts.map +1 -1
  26. package/dist/utils/proxy-headers.d.ts +24 -0
  27. package/dist/utils/proxy-headers.d.ts.map +1 -0
  28. package/dist/utils/proxy-headers.js +234 -0
  29. package/dist/utils/proxy-headers.js.map +1 -0
  30. package/dist/utils/proxy-headers.mjs +229 -0
  31. package/dist/utils.d.ts +24 -0
  32. package/dist/utils.d.ts.map +1 -1
  33. package/dist/utils.js +28 -0
  34. package/dist/utils.js.map +1 -1
  35. package/dist/utils.mjs +27 -0
  36. package/dist/wrappers/anthropic-wrapper.d.ts +10 -1
  37. package/dist/wrappers/anthropic-wrapper.d.ts.map +1 -1
  38. package/dist/wrappers/anthropic-wrapper.js +17 -2
  39. package/dist/wrappers/anthropic-wrapper.js.map +1 -1
  40. package/dist/wrappers/anthropic-wrapper.mjs +17 -2
  41. package/dist/wrappers/generic-wrapper.d.ts +5 -0
  42. package/dist/wrappers/generic-wrapper.d.ts.map +1 -1
  43. package/dist/wrappers/generic-wrapper.js +12 -1
  44. package/dist/wrappers/generic-wrapper.js.map +1 -1
  45. package/dist/wrappers/generic-wrapper.mjs +12 -1
  46. package/dist/wrappers/openai-wrapper.d.ts +10 -1
  47. package/dist/wrappers/openai-wrapper.d.ts.map +1 -1
  48. package/dist/wrappers/openai-wrapper.js +17 -2
  49. package/dist/wrappers/openai-wrapper.js.map +1 -1
  50. package/dist/wrappers/openai-wrapper.mjs +17 -2
  51. package/package.json +2 -2
package/CHANGELOG.md CHANGED
@@ -1,12 +1,230 @@
  # Changelog
 
- ## [1.1.0] - 2026-01-16
+ ## [1.2.0] - 2026-02-21
+
+ ### Added
+
+ #### PII Detection and Redaction
+ Protect sensitive personal information in prompts before they reach AI providers. When enabled, LockLLM detects emails, phone numbers, SSNs, credit card numbers, and other PII entities. Choose how to handle detected PII with the `piiAction` option:
+
+ - **`block`** - Reject requests containing PII entirely. Throws a `PIIDetectedError` with entity types and count.
+ - **`strip`** - Automatically redact PII from prompts before forwarding to the AI provider. The redacted text is available via `redacted_input` in the scan response.
+ - **`allow_with_warning`** - Allow requests through but include PII metadata in the response for logging.
+
+ PII detection is opt-in and disabled by default.
+
+ ```typescript
+ // Strip PII from prompts before they are forwarded to the AI provider
+ const openai = createOpenAI({
+   apiKey: process.env.LOCKLLM_API_KEY,
+   proxyOptions: {
+     piiAction: 'strip' // Automatically redact PII before sending to AI
+   }
+ });
+
+ // Handle PII errors when using block mode (piiAction: 'block')
+ try {
+   await openai.chat.completions.create({ ... });
+ } catch (error) {
+   if (error instanceof PIIDetectedError) {
+     console.log(error.pii_details.entity_types); // ['email', 'phone_number']
+     console.log(error.pii_details.entity_count); // 3
+   }
+ }
+ ```
+
+ #### Scan API PII Support
+ The scan endpoint now accepts a `piiAction` option alongside existing scan options:
+
+ ```typescript
+ const result = await lockllm.scan(
+   { input: 'My email is test@example.com' },
+   { piiAction: 'block', scanAction: 'block' }
+ );
+
+ if (result.pii_result) {
+   console.log(result.pii_result.detected); // true
+   console.log(result.pii_result.entity_types); // ['email']
+   console.log(result.pii_result.entity_count); // 1
+   console.log(result.pii_result.redacted_input); // 'My email is [EMAIL]' (strip mode only)
+ }
+ ```
+
+ #### Enhanced Proxy Response Metadata
+ Proxy responses now include additional fields for better observability (see the sketch after this list):
+
+ - **PII detection metadata** - `pii_detected` object with detection status, entity types, count, and action taken
+ - **Blocked status** - `blocked` flag when a request was rejected by security checks
+ - **Sensitivity and label** - `sensitivity` level used and numeric `label` (0 = safe, 1 = unsafe)
+ - **Decoded detail fields** - `scan_detail`, `policy_detail`, and `abuse_detail` automatically decoded from base64 response headers
+ - **Extended routing metadata** - `estimated_original_cost`, `estimated_routed_cost`, `estimated_input_tokens`, `estimated_output_tokens`, and `routing_fee_reason`
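+
+ A minimal sketch of reading these fields, assuming they surface on the object returned by the `parseProxyMetadata` helper introduced in 1.1.0 (exact shapes may differ):
+
+ ```typescript
+ import { parseProxyMetadata } from '@lockllm/sdk';
+
+ const metadata = parseProxyMetadata(response.headers);
+ if (metadata.blocked) {
+   console.warn('Request was rejected by security checks');
+ }
+ if (metadata.pii_detected?.detected) {
+   // Entity types, count, and the action that was taken
+   console.log(metadata.pii_detected.entity_types, metadata.pii_detected.action);
+ }
+ console.log(metadata.sensitivity, metadata.label); // e.g. 'high', 0 (safe)
+ console.log(metadata.scan_detail); // decoded from the base64 response header
+ ```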
+
+ #### Sensitivity Header Support
+ You can now set the detection sensitivity level via `proxyOptions` or `buildLockLLMHeaders`:
+
+ ```typescript
+ const openai = createOpenAI({
+   apiKey: process.env.LOCKLLM_API_KEY,
+   proxyOptions: {
+     sensitivity: 'high' // 'low', 'medium', or 'high'
+   }
+ });
+ ```
+
+ ### Notes
+ - PII detection is opt-in. Existing integrations continue to work without changes.
+ - All new types (`PIIAction`, `PIIResult`, `PIIDetectedError`, `PIIDetectedErrorData`) are fully exported for TypeScript users.
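+
+ For example, a minimal import sketch (names as listed above):
+
+ ```typescript
+ import { PIIDetectedError, type PIIAction, type PIIResult } from '@lockllm/sdk';
+
+ const action: PIIAction = 'strip'; // e.g. 'block', 'strip', or 'allow_with_warning'
+ ```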
+
+ ---
+
+ ## [1.1.0] - 2026-02-18
+
+ ### Added
+
+ #### Custom Content Policy Enforcement
+ You can now enforce your own content rules on top of LockLLM's built-in security. Create custom policies in the [dashboard](https://www.lockllm.com/policies), and the SDK will automatically check prompts against them. When a policy is violated, you'll get a `PolicyViolationError` with the exact policy name, violated categories, and details.
+
+ ```typescript
+ try {
+   await openai.chat.completions.create({ ... });
+ } catch (error) {
+   if (error instanceof PolicyViolationError) {
+     console.log(error.violated_policies);
+     // [{ policy_name: "No competitor mentions", violated_categories: [...] }]
+   }
+ }
+ ```
+
+ #### AI Abuse Detection
+ Protect your endpoints from automated misuse. When enabled, LockLLM detects bot-generated content, repetitive prompts, and resource exhaustion attacks. If abuse is detected, you'll get an `AbuseDetectedError` with confidence scores and detailed indicator breakdowns.
+
+ ```typescript
+ const openai = createOpenAI({
+   apiKey: process.env.LOCKLLM_API_KEY,
+   proxyOptions: {
+     abuseAction: 'block' // Opt-in: block abusive requests
+   }
+ });
+ ```
+
+ #### Credit Balance Awareness
+ The SDK now returns a dedicated `InsufficientCreditsError` when your balance is too low for a request. The error includes your `current_balance` and the `estimated_cost`, so you can handle billing gracefully in your application.
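+
+ A minimal handling sketch (field names per the description above):
+
+ ```typescript
+ import { InsufficientCreditsError } from '@lockllm/sdk';
+
+ try {
+   await openai.chat.completions.create({ ... });
+ } catch (error) {
+   if (error instanceof InsufficientCreditsError) {
+     console.log(error.current_balance, error.estimated_cost);
+     // e.g. pause requests or prompt the user to top up
+   }
+ }
+ ```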
+
+ #### Scan Modes and Actions
+ Control exactly what gets checked and what happens when threats are found:
+
+ - **Scan modes** - Choose `normal` (core security only), `policy_only` (custom policies only), or `combined` (both)
+ - **Actions per detection type** - Set `block` or `allow_with_warning` independently for core scans, custom policies, and abuse detection
+ - **Abuse detection** is opt-in - disabled by default; enable it with `abuseAction`
+
+ ```typescript
+ const result = await lockllm.scan(
+   { input: userPrompt, mode: 'combined', sensitivity: 'high' },
+   { scanAction: 'block', policyAction: 'allow_with_warning', abuseAction: 'block' }
+ );
+ ```
+
+ #### Proxy Options on All Wrappers
+ All wrapper functions (`createOpenAI`, `createAnthropic`, `createGroq`, etc.) now accept a `proxyOptions` parameter so you can configure security behavior at initialization time instead of per-request:
+
+ ```typescript
+ const openai = createOpenAI({
+   apiKey: process.env.LOCKLLM_API_KEY,
+   proxyOptions: {
+     scanMode: 'combined',
+     scanAction: 'block',
+     policyAction: 'block',
+     routeAction: 'auto', // Enable intelligent routing
+     cacheResponse: true, // Enable response caching
+     cacheTTL: 3600 // Cache for 1 hour
+   }
+ });
+ ```
+
+ #### Intelligent Routing
+ Let LockLLM automatically select the best model for each request based on task type and complexity. Set `routeAction: 'auto'` to enable, or `routeAction: 'custom'` to use your own routing rules from the dashboard.
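+
+ For instance, a sketch using the values named above:
+
+ ```typescript
+ const openai = createOpenAI({
+   apiKey: process.env.LOCKLLM_API_KEY,
+   proxyOptions: {
+     routeAction: 'custom' // Apply your own routing rules from the dashboard
+   }
+ });
+ ```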
+
+ #### Response Caching
+ Reduce costs by caching identical LLM responses. Enabled by default in proxy mode - disable it with `cacheResponse: false` or customize the TTL with `cacheTTL`.
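+
+ For example, opting out (a sketch using the option named above):
+
+ ```typescript
+ const openai = createOpenAI({
+   apiKey: process.env.LOCKLLM_API_KEY,
+   proxyOptions: {
+     cacheResponse: false // Or keep caching on and tune cacheTTL instead
+   }
+ });
+ ```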
+
+ #### Universal Proxy Mode
+ Access 200+ models without configuring individual provider API keys using `getUniversalProxyURL()`. Uses LockLLM credits instead of BYOK (bring your own key).
+
+ ```typescript
+ import { getUniversalProxyURL } from '@lockllm/sdk';
+ const url = getUniversalProxyURL();
+ // 'https://api.lockllm.com/v1/proxy/chat/completions'
+ ```
+
+ #### Proxy Response Metadata
+ New utilities to read detailed metadata from proxy responses - scan results, routing decisions, cache status, and credit usage:
+
+ ```typescript
+ import { parseProxyMetadata } from '@lockllm/sdk';
+ const metadata = parseProxyMetadata(response.headers);
+ // metadata.safe, metadata.routing, metadata.cache_status, metadata.credits_deducted, etc.
+ ```
+
+ #### Expanded Scan Response
+ Scan responses now include richer data when using advanced features (see the sketch after this list):
+ - `policy_warnings` - Which custom policies were violated and why
+ - `scan_warning` - Injection details when using `allow_with_warning`
+ - `abuse_warnings` - Abuse indicators when abuse detection is enabled
+ - `routing` - Task type, complexity score, and selected model when routing is enabled
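+
+ A minimal sketch of reading these fields (names per the list above; exact shapes may differ):
+
+ ```typescript
+ const result = await lockllm.scan(
+   { input: userPrompt, mode: 'combined' },
+   { scanAction: 'allow_with_warning', policyAction: 'allow_with_warning' }
+ );
+
+ if (result.policy_warnings) console.log(result.policy_warnings);
+ if (result.scan_warning) console.log(result.scan_warning);
+ if (result.routing) console.log(result.routing); // task type, complexity, selected model
+ ```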
+
+ ### Changed
+ - The scan API is fully backward compatible - existing code works without changes. Internally, scan configuration is now sent via HTTP headers for better compatibility and caching behavior.
+
+ ### Notes
+ - All new features are opt-in. Existing integrations continue to work without any changes.
+ - Custom policies, abuse detection, and routing are configured in the [LockLLM dashboard](https://www.lockllm.com/dashboard).
+
+ ---
+
+ ## [1.0.1] - 2026-01-16
+
+ ### Changed
+
+ #### Flexible SDK Installation
+ - **Optional Provider SDKs**: Provider SDKs (OpenAI, Anthropic, Cohere, etc.) are no longer required dependencies. Install only what you need:
+   - Using OpenAI? Just install the `openai` package
+   - Using Anthropic? Just install the `@anthropic-ai/sdk` package
+   - Using Cohere? Just install the `cohere-ai` package
+   - Mix and match any providers your application uses
+ - **Smaller Bundle Sizes**: Your application only includes the provider SDKs you actually use, reducing package size and installation time
+ - **Pay-As-You-Go Dependencies**: No need to install SDKs for providers you don't use
+
+ ### Benefits
+ - Faster installation with fewer dependencies
+ - Smaller `node_modules` folder
+ - More control over your project dependencies
+ - No unused packages taking up disk space
+
+ ### Migration Guide
+ If you're upgrading from v1.0.0 and using provider wrappers, simply install the provider SDKs you need:
+
+ ```bash
+ # For OpenAI (GPT models, DALL-E, etc.)
+ npm install openai
+
+ # For Anthropic (Claude models)
+ npm install @anthropic-ai/sdk
+
+ # For Cohere (Command, Embed models)
+ npm install cohere-ai
+
+ # Install only what you use!
+ ```
+
+ The SDK will work out of the box once you install the provider packages you need.
+
+ ## [1.0.0] - 2026-01-16
 
  ### Added
 
  #### Universal Provider Support
- - **Generic Wrapper Factory**: Added `createClient()` function to create clients for any LLM provider using their official SDK
- - **OpenAI-Compatible Helper**: Added `createOpenAICompatible()` for easy integration with OpenAI-compatible providers
+ - **Generic Wrapper Factory**: Added `createClient()` function to create clients for any of the 17 supported providers using their official SDK
+ - **OpenAI-Compatible Helper**: Added `createOpenAICompatible()` for easy integration with OpenAI-compatible providers (Groq, DeepSeek, Mistral, Perplexity, etc.)
  - **15 New Provider Wrappers**: Pre-configured factory functions for all remaining providers:
    - `createGroq()` - Groq (fast inference)
    - `createDeepSeek()` - DeepSeek (reasoning models)
@@ -30,7 +248,7 @@
  - **Type Export**: Added `ProviderName` type export for better TypeScript support
 
  #### Examples
- - **`examples/wrapper-generic.ts`**: Comprehensive example showing three ways to integrate with any provider
+ - **`examples/wrapper-generic.ts`**: Comprehensive example showing three ways to integrate with any of the 17 supported providers
  - **`examples/wrapper-all-providers.ts`**: Complete example demonstrating all 17 providers
 
  #### Documentation
@@ -76,6 +294,6 @@ const client = new OpenAI({
  ```
 
  ### Notes
- - All 15+ providers are now fully supported with multiple integration options
+ - All 17+ providers are now fully supported with multiple integration options
  - Zero breaking changes - existing code continues to work
  - Backward compatible with v1.0.0