@sssxyd/face-liveness-detector 0.2.2
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/LICENSE +21 -0
- package/README.md +530 -0
- package/dist/index.esm.js +2175 -0
- package/dist/index.esm.js.map +1 -0
- package/dist/index.js +2183 -0
- package/dist/index.js.map +1 -0
- package/dist/types/__tests__/config.test.d.ts +5 -0
- package/dist/types/__tests__/config.test.d.ts.map +1 -0
- package/dist/types/__tests__/enums.test.d.ts +5 -0
- package/dist/types/__tests__/enums.test.d.ts.map +1 -0
- package/dist/types/__tests__/event-emitter.test.d.ts +5 -0
- package/dist/types/__tests__/event-emitter.test.d.ts.map +1 -0
- package/dist/types/__tests__/face-detection-engine.test.d.ts +7 -0
- package/dist/types/__tests__/face-detection-engine.test.d.ts.map +1 -0
- package/dist/types/config.d.ts +15 -0
- package/dist/types/config.d.ts.map +1 -0
- package/dist/types/enums.d.ts +43 -0
- package/dist/types/enums.d.ts.map +1 -0
- package/dist/types/event-emitter.d.ts +48 -0
- package/dist/types/event-emitter.d.ts.map +1 -0
- package/dist/types/exports.d.ts +11 -0
- package/dist/types/exports.d.ts.map +1 -0
- package/dist/types/face-frontal-checker.d.ts +168 -0
- package/dist/types/face-frontal-checker.d.ts.map +1 -0
- package/dist/types/image-quality-checker.d.ts +65 -0
- package/dist/types/image-quality-checker.d.ts.map +1 -0
- package/dist/types/index.d.ts +200 -0
- package/dist/types/index.d.ts.map +1 -0
- package/dist/types/library-loader.d.ts +26 -0
- package/dist/types/library-loader.d.ts.map +1 -0
- package/dist/types/types.d.ts +146 -0
- package/dist/types/types.d.ts.map +1 -0
- package/package.json +77 -0
package/LICENSE
ADDED

MIT License

Copyright (c) 2024 Face Liveness Team

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
package/README.md
ADDED

# Face Detection Engine

A framework-agnostic, TypeScript-based npm package for face liveness detection. This engine provides core face detection, liveness verification, and anti-spoofing capabilities without any UI framework dependencies.

## Features

- 🎯 **Framework Agnostic** - Works with any JavaScript framework or vanilla JS
- 🧠 **Intelligent Liveness Detection** - Action-based and silent liveness detection modes
- 🔍 **Face Quality Checks** - Comprehensive image quality and face frontality analysis
- 🚀 **High Performance** - Optimized detection loop with RequestAnimationFrame
- 📱 **Mobile Friendly** - Built-in mobile device adaptation
- ♿ **Event-Driven Architecture** - Easy integration with TypeScript/JavaScript applications
- 🛡️ **Anti-Spoofing** - Real-time anti-spoofing detection
- 📊 **Detailed Debugging** - Rich debug information for troubleshooting

## Installation

```bash
npm install @sssxyd/face-liveness-detector @vladmandic/human @techstark/opencv-js
```

## Quick Start - Using Local Model Files (Recommended)

To improve performance and reduce external dependencies, you can download and use local copies of the model files:

### Step 1: Download Model Files

```bash
# Copy Human.js models locally
node copy-human-models.js

# Download TensorFlow.js WASM files
node download-tensorflow-wasm.js
```

This will create:
- `public/models/` - Human.js face detection models
- `public/wasm/` - TensorFlow.js WASM backend files
### Step 2: Initialize Engine with Local Files

```typescript
import FaceDetectionEngine from '@sssxyd/face-liveness-detector'

// Configure to use local model files
const engine = new FaceDetectionEngine({
  human_model_path: '/models',   // Path to downloaded models
  tensorflow_wasm_path: '/wasm', // Path to WASM files
  min_face_ratio: 0.5,
  max_face_ratio: 0.9,
  liveness_action_count: 1,
  liveness_action_list: ['blink']
})

// Initialize and start detection
await engine.initialize()
const videoElement = document.getElementById('video') as HTMLVideoElement
await engine.startDetection(videoElement)
```

### Step 3: Serve Static Files

Make sure your web server serves the `public/` directory:

```typescript
// Express.js example
app.use(express.static('public'))
```

## Quick Start - Using Default CDN Files

If you prefer not to host local files, the engine automatically falls back to CDN sources:

```typescript
import FaceDetectionEngine from '@sssxyd/face-liveness-detector'

// No need to specify paths - uses CDN by default
const engine = new FaceDetectionEngine({
  min_face_ratio: 0.5,
  max_face_ratio: 0.9,
  liveness_action_count: 1,
  liveness_action_list: ['blink']
})

await engine.initialize()
const videoElement = document.getElementById('video') as HTMLVideoElement
await engine.startDetection(videoElement)
```
## Configuration

### FaceDetectionEngineConfig

```typescript
interface FaceDetectionEngineConfig {
  // ========== Resource Paths ==========
  human_model_path?: string        // Path to human.js models (default: undefined)
  tensorflow_wasm_path?: string    // Path to TensorFlow WASM files (default: undefined)

  // ========== Detection Settings ==========
  video_width?: number             // Width of the video stream (default: 640)
  video_height?: number            // Height of the video stream (default: 640)
  video_mirror?: boolean           // Mirror video horizontally (default: true)
  video_load_timeout?: number      // Timeout for loading video stream in ms (default: 5000)
  detection_frame_delay?: number   // Delay between detection frames in ms (default: 100)
  error_retry_delay?: number       // Delay before retrying after an error in ms (default: 200)

  // ========== Collection Settings ==========
  silent_detect_count?: number     // Number of silent detections to collect (default: 3)
  min_face_ratio?: number          // Minimum face size ratio (default: 0.5)
  max_face_ratio?: number          // Maximum face size ratio (default: 0.9)
  min_face_frontal?: number        // Minimum face frontality (default: 0.9)
  min_image_quality?: number       // Minimum image quality (default: 0.8)
  min_live_score?: number          // Minimum live score (default: 0.5)
  min_real_score?: number          // Minimum anti-spoofing score (default: 0.85)
  suspected_frauds_count?: number  // Number of suspected frauds to detect (default: 3)
  face_frontal_features?: {        // Face frontal features
    yaw_threshold: number          // Yaw angle threshold in degrees (default: 3)
    pitch_threshold: number        // Pitch angle threshold in degrees (default: 4)
    roll_threshold: number         // Roll angle threshold in degrees (default: 2)
  }
  image_quality_features?: {       // Image quality features
    require_full_face_in_bounds: boolean // Require face completely within bounds (default: true)
    use_opencv_enhancement: boolean      // Use OpenCV enhancement for quality detection (default: true)
    min_laplacian_variance: number       // Minimum Laplacian variance for blur detection (default: 100)
    min_gradient_sharpness: number       // Minimum gradient sharpness for blur detection (default: 0.3)
    min_blur_score: number               // Minimum blur score for blur detection (default: 0.6)
  }

  // ========== Liveness Settings ==========
  liveness_action_list?: LivenessAction[] // List of liveness actions to detect (default: [BLINK, MOUTH_OPEN, NOD])
  liveness_action_count?: number          // Number of liveness actions to perform (default: 1)
  liveness_action_randomize?: boolean     // Whether to randomize liveness actions (default: true)
  liveness_verify_timeout?: number        // Timeout for liveness verification in ms (default: 60000)
  min_mouth_open_percent?: number         // Minimum mouth open percentage for detection (default: 0.2)
}
```
## API Reference

### Methods

#### `initialize(): Promise<void>`
Loads and initializes the detection libraries. Must be called before starting detection.

```typescript
await engine.initialize()
```

#### `startDetection(videoElement): Promise<void>`
Starts face detection on a video element.

```typescript
const videoElement = document.getElementById('video') as HTMLVideoElement
await engine.startDetection(videoElement)
```

#### `stopDetection(success?: boolean): void`
Stops the detection process.

```typescript
engine.stopDetection(true) // true to display the best captured image
```

#### `updateConfig(config): void`
Updates the configuration at runtime.

```typescript
engine.updateConfig({
  min_face_ratio: 0.6,
  liveness_action_count: 2
})
```

#### `getConfig(): FaceDetectionEngineConfig`
Returns the current configuration.

```typescript
const config = engine.getConfig()
```

#### `getStatus(): Object`
Returns the engine status.

```typescript
const { isReady, isDetecting, isInitializing } = engine.getStatus()
```
### Events

The engine uses a TypeScript event emitter pattern:

#### `detector-loaded`
Emitted when the engine has finished initialization.

```typescript
engine.on('detector-loaded', () => {
  console.log('Ready to start detection')
})
```

#### `face-detected`
Emitted when a face frame has been detected, with silent liveness scores.

```typescript
engine.on('face-detected', (data) => {
  console.log(`Quality: ${data.quality}, Frontal: ${data.frontal}`)
  console.log(`Real: ${data.real}, Live: ${data.live}`)
})
```

#### `status-prompt`
Emitted with status update prompts.

```typescript
engine.on('status-prompt', (data: StatusPromptData) => {
  console.log(`Code: ${data.code}, Message: ${data.message}`)
})
```

#### `action-prompt`
Emitted when an action liveness request is issued.

```typescript
engine.on('action-prompt', (data: ActionPromptData) => {
  console.log(`Action: ${data.action}, Status: ${data.status}`)
})
```

#### `detector-finish`
Emitted when liveness detection completes (successfully or not).

```typescript
engine.on('detector-finish', (data) => {
  console.log('Detection finished:', {
    success: data.success,
    bestQuality: data.bestQualityScore,
    silentPassed: data.silentPassedCount,
    actionsPassed: data.actionPassedCount,
    frameImage: data.bestFrameImage,
    faceImage: data.bestFaceImage
  })
})
```

#### `detector-error`
Emitted when an error occurs during detection.

```typescript
engine.on('detector-error', (error: ErrorData) => {
  console.error(`Error [${error.code}]: ${error.message}`)
})
```

#### `detector-debug`
Emits debug information (useful during development).

```typescript
engine.on('detector-debug', (debug: DebugData) => {
  console.log(`[${debug.level}] ${debug.stage}: ${debug.message}`)
})
```
## Enumerations

### LivenessAction
```typescript
enum LivenessAction {
  BLINK = 'blink',
  MOUTH_OPEN = 'mouth_open',
  NOD = 'nod'
}
```

### PromptCode
```typescript
enum PromptCode {
  NO_FACE = 'NO_FACE',
  MULTIPLE_FACE = 'MULTIPLE_FACE',
  FACE_TOO_SMALL = 'FACE_TOO_SMALL',
  FACE_TOO_LARGE = 'FACE_TOO_LARGE',
  FACE_NOT_FRONTAL = 'FACE_NOT_FRONTAL',
  BLURRY_IMAGE = 'BLURRY_IMAGE',
  LOW_QUALITY = 'LOW_QUALITY',
  FRAME_DETECTED = 'FRAME_DETECTED'
}
```
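In a UI layer, prompt codes like these are typically mapped to user-facing guidance. A minimal sketch — the enum values come from the package, but the message strings and the `promptMessage` helper are illustrative, not part of the API:

```typescript
// Illustrative mapping from the documented PromptCode values to UI text.
// The messages are examples only; localize or reword as needed.
const promptMessages: Record<string, string> = {
  NO_FACE: 'No face detected - please face the camera',
  MULTIPLE_FACE: 'Multiple faces detected - only one person should be in frame',
  FACE_TOO_SMALL: 'Move closer to the camera',
  FACE_TOO_LARGE: 'Move further from the camera',
  FACE_NOT_FRONTAL: 'Please look straight at the camera',
  BLURRY_IMAGE: 'Hold still - the image is blurry',
  LOW_QUALITY: 'Improve lighting for a clearer image',
  FRAME_DETECTED: 'Face captured'
}

// Falls back to a generic message for codes added in future versions.
function promptMessage(code: string): string {
  return promptMessages[code] ?? `Unknown prompt: ${code}`
}
```

Such a lookup would typically be driven from the `status-prompt` event handler.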
### ErrorCode
```typescript
enum ErrorCode {
  ENGINE_NOT_INITIALIZED = 'ENGINE_NOT_INITIALIZED',
  CAMERA_ACCESS_DENIED = 'CAMERA_ACCESS_DENIED',
  STREAM_ACQUISITION_FAILED = 'STREAM_ACQUISITION_FAILED',
  DETECTION_ERROR = 'DETECTION_ERROR',
  // ... more error codes
}
```
## Advanced Usage

### Complete Integration Example with Events

```typescript
import FaceDetectionEngine from '@sssxyd/face-liveness-detector'

const engine = new FaceDetectionEngine({
  human_model_path: '/models',
  tensorflow_wasm_path: '/wasm',
  min_face_ratio: 0.5,
  max_face_ratio: 0.9,
  liveness_action_count: 1,
  liveness_action_list: ['blink']
})

// Listen for events
engine.on('detector-loaded', () => {
  console.log('Engine is ready')
})

engine.on('face-detected', (data) => {
  console.log('Frame detected:', data)
})

engine.on('detector-finish', (data) => {
  console.log('Liveness verification complete:', {
    success: data.success,
    qualityScore: data.bestQualityScore,
    frameImage: data.bestFrameImage,
    faceImage: data.bestFaceImage
  })
})

engine.on('detector-error', (error) => {
  console.error('Detection error:', error.message)
})

engine.on('detector-debug', (debug) => {
  console.log(`[${debug.stage}] ${debug.message}`, debug.details)
})

// Initialize
await engine.initialize()

// Start detection with a video element
const videoElement = document.getElementById('video') as HTMLVideoElement
await engine.startDetection(videoElement)

// Stop detection
engine.stopDetection()
```
### Custom Configuration with Local Models

```typescript
const engine = new FaceDetectionEngine({
  // Use local model files
  human_model_path: '/models',
  tensorflow_wasm_path: '/wasm',

  // Require higher quality
  min_face_ratio: 0.6,
  max_face_ratio: 0.85,
  min_face_frontal: 0.95,
  min_image_quality: 0.9,

  // Multiple actions
  liveness_action_count: 3,
  liveness_action_list: ['blink', 'mouth_open', 'nod'],
  liveness_verify_timeout: 120000 // 2 minutes
})
```
### Dynamic Configuration Updates

```typescript
engine.on('status-prompt', (data) => {
  if (data.code === PromptCode.FACE_TOO_SMALL) {
    // Make requirements more lenient if faces are small
    engine.updateConfig({ min_face_ratio: 0.4 })
  }
})
```
### Exporting Results

```typescript
let resultImage = null
let resultData = null

engine.on('detector-finish', async (data) => {
  resultImage = data.bestFrameImage // Base64-encoded image
  resultData = {
    success: data.success,
    quality: data.bestQualityScore,
    timestamp: new Date()
  }

  // Send to server
  await fetch('/api/verify-liveness', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      image: resultImage,
      metadata: resultData
    })
  })
})
```
## Downloading and Hosting Model Files

To avoid CDN dependencies and improve performance, you can download the model files locally:

### Available Download Scripts

Two scripts are provided in the root directory:

#### 1. Copy Human.js Models

```bash
node copy-human-models.js
```

**What it does:**
- Copies face detection models from `node_modules/@vladmandic/human/models`
- Saves them to the `public/models/` directory
- Copies both `.json` and `.bin` model files
- Shows file size and progress

#### 2. Download TensorFlow.js WASM Files

```bash
node download-tensorflow-wasm.js
```

**What it does:**
- Downloads the TensorFlow.js WASM backend files
- Saves them to the `public/wasm/` directory
- Downloads 4 critical files:
  - `tf-backend-wasm.min.js`
  - `tfjs-backend-wasm.wasm`
  - `tfjs-backend-wasm-simd.wasm`
  - `tfjs-backend-wasm-threaded-simd.wasm`
- **Supports multiple CDN sources** with automatic fallback:
  1. unpkg.com (primary)
  2. cdn.jsdelivr.net (backup)
  3. esm.sh (fallback)
  4. cdn.esm.sh (last resort)
### Configuration to Use Local Files

Once downloaded, configure the engine to use these local files:

```typescript
const engine = new FaceDetectionEngine({
  // Use local files instead of CDN
  human_model_path: '/models',
  tensorflow_wasm_path: '/wasm',

  // ... rest of configuration
})
```
## Browser Requirements

- Modern browsers with WebRTC support (Chrome, Firefox, Edge, Safari 11+)
- HTTPS required for getUserMedia
- WebGL or WASM backend support
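These requirements can be checked up front, before initializing the engine, so users get a clear message instead of a camera failure. A minimal sketch — the `unmetRequirements` helper and its `EnvLike` shape are ours, not part of the package; the flags are separated out so the logic is testable outside a browser:

```typescript
// Hypothetical pre-flight check for the requirements listed above.
interface EnvLike {
  isSecureContext: boolean  // HTTPS or localhost
  hasGetUserMedia: boolean  // WebRTC camera access
  hasWebGL: boolean         // GPU backend
  hasWasm: boolean          // WASM backend
}

// Returns a human-readable list of unmet requirements (empty = supported).
function unmetRequirements(env: EnvLike): string[] {
  const problems: string[] = []
  if (!env.isSecureContext) problems.push('HTTPS (secure context) is required for getUserMedia')
  if (!env.hasGetUserMedia) problems.push('WebRTC / getUserMedia is not available')
  if (!env.hasWebGL && !env.hasWasm) problems.push('Neither a WebGL nor a WASM backend is available')
  return problems
}

// In a browser, the flags would be derived roughly like this:
// const env: EnvLike = {
//   isSecureContext: window.isSecureContext,
//   hasGetUserMedia: !!navigator.mediaDevices?.getUserMedia,
//   hasWebGL: !!document.createElement('canvas').getContext('webgl'),
//   hasWasm: typeof WebAssembly === 'object'
// }
```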
## Performance Tips

1. **Adjust detection frame delay** - Higher delay = lower CPU usage but slower detection
   ```typescript
   engine.updateConfig({ detection_frame_delay: 200 })
   ```

2. **Reduce canvas size** - Smaller canvases process faster
   ```typescript
   engine.updateConfig({
     video_width: 480,
     video_height: 480
   })
   ```

3. **Optimize light conditions** - Better lighting = better detection
   - Avoid backlighting
   - Ensure the face is well-lit

4. **Monitor debug output** - Use debug events to identify bottlenecks
   ```typescript
   engine.on('detector-debug', (debug) => {
     if (debug.stage === 'detection') {
       console.time(debug.message)
     }
   })
   ```
## Troubleshooting

### "Camera access denied"
- Ensure HTTPS is used (or localhost for development)
- Check browser permissions
- The user must grant camera access

### "Video loading timeout"
- Check the internet connection
- Verify that the model files are accessible
- Increase `video_load_timeout`

### Poor detection accuracy
- Ensure good lighting
- Keep the face centered in the frame
- The face should fill 50-90% of the frame
- The face should be frontal (not tilted)
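The 50-90% guidance above corresponds to the `min_face_ratio` and `max_face_ratio` settings. A small sketch of how such a ratio check can be expressed — the helpers are illustrative; the engine's internal measurement may differ:

```typescript
// Illustrative face-size check mirroring min_face_ratio / max_face_ratio:
// compare the face box's larger side against the frame's smaller side.
function faceRatio(faceW: number, faceH: number, frameW: number, frameH: number): number {
  return Math.max(faceW, faceH) / Math.min(frameW, frameH)
}

// Defaults match the documented config defaults (0.5 and 0.9).
function faceSizeOk(ratio: number, minRatio = 0.5, maxRatio = 0.9): boolean {
  return ratio >= minRatio && ratio <= maxRatio
}
```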
### High CPU usage
- Increase `detection_frame_delay`
- Reduce `video_width` and `video_height`

## License

MIT

## Support

For issues and questions, please visit: https://github.com/sssxyd/face-liveness-detector/issues