@sssxyd/face-liveness-detector 0.4.0-alpha.9 → 0.4.1-beta.1

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (38)
  1. package/README.en.md +115 -146
  2. package/README.md +21 -52
  3. package/dist/index.esm.js +2218 -3077
  4. package/dist/index.esm.js.map +1 -1
  5. package/dist/index.js +2218 -3077
  6. package/dist/index.js.map +1 -1
  7. package/dist/types/browser_utils.d.ts +3 -3
  8. package/dist/types/browser_utils.d.ts.map +1 -1
  9. package/dist/types/config.d.ts.map +1 -1
  10. package/dist/types/dlp-color-wheel-detector.d.ts +76 -0
  11. package/dist/types/dlp-color-wheel-detector.d.ts.map +1 -0
  12. package/dist/types/face-detection-engine.d.ts +59 -20
  13. package/dist/types/face-detection-engine.d.ts.map +1 -1
  14. package/dist/types/face-detection-state.d.ts +1 -5
  15. package/dist/types/face-detection-state.d.ts.map +1 -1
  16. package/dist/types/face-frontal-calculator.d.ts +3 -3
  17. package/dist/types/face-frontal-calculator.d.ts.map +1 -1
  18. package/dist/types/image-quality-calculator.d.ts +10 -23
  19. package/dist/types/image-quality-calculator.d.ts.map +1 -1
  20. package/dist/types/motion-liveness-detector.d.ts +258 -124
  21. package/dist/types/motion-liveness-detector.d.ts.map +1 -1
  22. package/dist/types/optical-distortion-detector.d.ts +116 -0
  23. package/dist/types/optical-distortion-detector.d.ts.map +1 -0
  24. package/dist/types/screen-capture-detector.d.ts +74 -115
  25. package/dist/types/screen-capture-detector.d.ts.map +1 -1
  26. package/dist/types/screen-corners-contour-detector.d.ts +78 -0
  27. package/dist/types/screen-corners-contour-detector.d.ts.map +1 -0
  28. package/dist/types/screen-flicker-detector.d.ts +103 -0
  29. package/dist/types/screen-flicker-detector.d.ts.map +1 -0
  30. package/dist/types/screen-moire-pattern-detect.d.ts.map +1 -1
  31. package/dist/types/screen-response-time-detector.d.ts +70 -0
  32. package/dist/types/screen-response-time-detector.d.ts.map +1 -0
  33. package/dist/types/screen-rgb-emission-detect.d.ts.map +1 -1
  34. package/dist/types/types.d.ts +8 -54
  35. package/dist/types/types.d.ts.map +1 -1
  36. package/dist/types/video-frame-collector.d.ts +111 -0
  37. package/dist/types/video-frame-collector.d.ts.map +1 -0
  38. package/package.json +2 -1
package/README.en.md CHANGED
@@ -1,6 +1,6 @@
  <div align="center">

- > **Languages / 语言:** [English](#) · [中文](./README.md)
+ > **Languages:** [English](#) · [中文](./README.md)

  # Face Liveness Detection Engine

@@ -22,26 +22,26 @@
  <table>
  <tr>
- <td>💯 <strong>Pure Frontend</strong><br/>Zero backend dependency, all processing runs locally in the browser</td>
- <td>🔬 <strong>Hybrid AI Solution</strong><br/>Deep fusion of TensorFlow + OpenCV</td>
+ <td>💯 <strong>Pure Frontend</strong><br/>Zero backend dependencies, all processing runs locally in browser</td>
+ <td>🔬 <strong>Hybrid AI Solution</strong><br/>Deep integration of TensorFlow + OpenCV</td>
  </tr>
  <tr>
- <td>🧠 <strong>Dual Liveness Verification</strong><br/>Silent detection + action recognition (blink, mouth open, nod)</td>
- <td>⚡ <strong>Event-Driven Architecture</strong><br/>100% TypeScript, seamless integration with any framework</td>
+ <td>🧠 <strong>Dual Liveness Verification</strong><br/>Silent detection + Action recognition (blink, mouth open, nod)</td>
+ <td>⚡ <strong>Event-Driven Architecture</strong><br/>100% TypeScript, seamlessly integrates with any framework</td>
  </tr>
  <tr>
- <td>🎯 <strong>Multi-Dimensional Analysis</strong><br/>Quality, face frontalness, motion score, screen detection</td>
- <td>🛡️ <strong>Multi-Dimensional Anti-Spoofing</strong><br/>Photo, screen video, moire pattern, RGB emission detection</td>
+ <td>🎯 <strong>Multi-Dimensional Analysis</strong><br/>Quality, frontality, motion score, screen detection</td>
+ <td>🛡️ <strong>Multi-Layer Anti-Spoofing</strong><br/>Photo motion detection, screen temporal analysis, contour boundary detection</td>
  </tr>
  </table>

  ---

- ## 🚀 Online Demo
+ ## 🚀 Live Demo

  <div align="center">

- **[👉 Live Demo](https://face.lowtechsoft.com/) | Scan QR code for quick testing**
+ **[👉 Try Live Demo](https://face.lowtechsoft.com/) | Scan with phone for quick test**

  [![Face Liveness Detection Demo QR Code](https://raw.githubusercontent.com/sssxyd/face-liveness-detector/main/demos/vue-demo/vue-demo.png)](https://face.lowtechsoft.com/)

@@ -51,11 +51,12 @@

  ## 🧬 Core Algorithm Design

- | Detection Module | Technology | Documentation |
+ | Detection Module | Technical Solution | Documentation |
  |---------|--------|--------|
- | **Face Recognition** | Human.js BlazeFace + FaceMesh | 468 facial feature points + expression recognition |
- | **Motion Detection** | Multi-dimensional motion analysis | [Motion Detection Algorithm](./docs/MOTION_DETECTION_ALGORITHM.md) - optical flow, keypoint variance, facial region changes |
- | **Screen Detection** | Three-dimensional feature fusion | [Screen Capture Detection](./docs/SCREEN_CAPTURE_DETECTION_ALGORITHM.md) - moire patterns, RGB emission, color features |
+ | **Face Recognition** | Human.js BlazeFace + FaceMesh | 468 facial landmarks + expression recognition |
+ | **Motion Liveness Detection** | 6-Indicator Voting System | [Motion Detection Algorithm](./docs/MOTION_DETECTION_ALGORITHM.md) - Optical flow, keypoint variance, eye/mouth movement, facial area changes |
+ | **Screen Capture Detection** | 4-Dimensional Temporal Analysis | [Screen Capture Detection Algorithm](./docs/SCREEN_CAPTURE_DETECTION_ALGORITHM.md) - Screen flicker, response time, DLP color wheel, optical distortion |
+ | **Screen Contour Detection** | Canny Edge + Contour Analysis | [Screen Contour Detection Algorithm](./docs/SCREEN_CORNERS_CONTOUR_DETECTION_ALGORITHM.md) - Single-frame rectangular boundary detection |

  ---

@@ -81,21 +82,21 @@ pnpm add @sssxyd/face-liveness-detector @vladmandic/human @techstark/opencv-js
  </details>

  > 📝 **Why three packages?**
- > `@vladmandic/human` and `@techstark/opencv-js` are peer dependencies that must be installed separately to avoid bundling large libraries and reduce the final bundle size.
+ > `@vladmandic/human` and `@techstark/opencv-js` are peer dependencies that need to be installed separately to avoid bundling large libraries, reducing final bundle size.

  ---

  ## ⚠️ Required Configuration Steps

- ### 1️⃣ Fix OpenCV.js ESM Compatibility Issue
+ ### 1️⃣ Fix OpenCV.js ESM Compatibility

- `@techstark/opencv-js` contains an incompatible UMD format that **must be patched**.
+ `@techstark/opencv-js` contains incompatible UMD format, **patch script must be applied**.

- **Reference:**
- - Issue: [TechStark/opencv-js#44](https://github.com/TechStark/opencv-js/issues/44)
+ **References:**
+ - Issue details: [TechStark/opencv-js#44](https://github.com/TechStark/opencv-js/issues/44)
  - Patch script: [patch-opencv.js](https://github.com/sssxyd/face-liveness-detector/tree/main/demos/vue-demo/scripts/patch-opencv.js)

- **Setup Method (Recommended):** Add to `package.json` as `postinstall` hook
+ **Setup (Recommended):** Add to `package.json` postinstall hook

  ```json
  {
@@ -107,13 +108,13 @@ pnpm add @sssxyd/face-liveness-detector @vladmandic/human @techstark/opencv-js

  ### 2️⃣ Download Human.js Model Files

- `@vladmandic/human` requires model files and TensorFlow WASM backend, **otherwise it will not load**.
+ `@vladmandic/human` requires model files and TensorFlow WASM backend, **otherwise it won't load**.

  **Download Scripts:**
  - Model copy: [copy-models.js](https://github.com/sssxyd/face-liveness-detector/tree/main/demos/vue-demo/scripts/copy-models.js)
  - WASM download: [download-wasm.js](https://github.com/sssxyd/face-liveness-detector/tree/main/demos/vue-demo/scripts/download-wasm.js)

- **Setup Method (Recommended):** Configure as `postinstall` hook
+ **Setup (Recommended):** Configure as postinstall hook

  ```json
  {
@@ -139,30 +140,24 @@ const engine = new FaceDetectionEngine({
  tensorflow_wasm_path: '/wasm',
  tensorflow_backend: 'auto',

- // Detection settings (recommend ≥720p, otherwise screen detection accuracy decreases)
- detect_video_ideal_width: 1920,
- detect_video_ideal_height: 1080,
+ // Detection settings
+ detect_video_ideal_width: 1280,
+ detect_video_ideal_height: 720,
  detect_video_mirror: true,
  detect_video_load_timeout: 5000,
- detect_frame_delay: 100,

  // Collection quality requirements
- collect_min_collect_count: 3, // Minimum 3 face images collected
- collect_min_face_ratio: 0.5, // Face occupies 50%+
- collect_max_face_ratio: 0.9, // Face occupies <90%
- collect_min_face_frontal: 0.9, // Face frontalness 90%
+ collect_min_collect_count: 3, // Collect at least 3 faces
+ collect_min_face_ratio: 0.5, // Face ratio 50%+
+ collect_max_face_ratio: 0.9, // Face ratio below 90%
+ collect_min_face_frontal: 0.9, // Face frontality 90%
  collect_min_image_quality: 0.5, // Image quality 50%+

  // Liveness detection settings
- action_liveness_action_count: 1, // Requires 1 action
+ action_liveness_action_count: 1, // Require 1 action
  action_liveness_action_list: [LivenessAction.BLINK, LivenessAction.MOUTH_OPEN, LivenessAction.NOD],
  action_liveness_action_randomize: true,
  action_liveness_verify_timeout: 60000,
-
- // Anti-spoofing settings
- motion_liveness_min_motion_score: 0.15,
- motion_liveness_strict_photo_detection: false,
- screen_capture_confidence_threshold: 0.7,
  })

  // Listen to core events
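
Read together, the `+` lines in the hunk above amount to roughly the following 0.4.1 quick-start configuration. This is a sketch for orientation only: the import path and the `human_model_path` value are taken from other hunks in this README, and anything not shown in the diff keeps its library default.

```typescript
import { FaceDetectionEngine, LivenessAction } from '@sssxyd/face-liveness-detector'

// Sketch of the updated quick-start config; option names and values come from
// the "+" lines above, comments note what was removed in this release.
const engine = new FaceDetectionEngine({
  human_model_path: '/models',
  tensorflow_wasm_path: '/wasm',
  tensorflow_backend: 'auto',

  // New default capture size (previously 1920x1080)
  detect_video_ideal_width: 1280,
  detect_video_ideal_height: 720,
  detect_video_mirror: true,
  detect_video_load_timeout: 5000,
  // detect_frame_delay was dropped from the quick start in this version

  collect_min_collect_count: 3,
  collect_min_face_ratio: 0.5,
  collect_max_face_ratio: 0.9,
  collect_min_face_frontal: 0.9,
  collect_min_image_quality: 0.5,

  action_liveness_action_count: 1,
  action_liveness_action_list: [LivenessAction.BLINK, LivenessAction.MOUTH_OPEN, LivenessAction.NOD],
  action_liveness_action_randomize: true,
  action_liveness_verify_timeout: 60000,
  // motion_liveness_* and screen_capture_* tuning options no longer appear here;
  // per the notes later in this README they are handled internally now.
})
```
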
@@ -176,7 +171,7 @@ engine.on('detector-loaded', (data) => {
  })

  engine.on('detector-info', (data) => {
- // Real-time per-frame data
+ // Real-time data per frame
  console.log({
  status: data.code,
  quality: (data.imageQuality * 100).toFixed(1) + '%',
@@ -192,13 +187,13 @@ engine.on('detector-action', (data) => {
  })

  engine.on('detector-finish', (data) => {
- // Detection complete
+ // Detection completed
  if (data.success) {
  console.log('✅ Liveness verification passed!', {
- silentPassed: data.silentPassedCount,
- actionsCompleted: data.actionPassedCount,
- bestQuality: (data.bestQualityScore * 100).toFixed(1) + '%',
- totalTime: (data.totalTime / 1000).toFixed(2) + 's'
+ 'Silent passed': data.silentPassedCount,
+ 'Actions completed': data.actionPassedCount,
+ 'Best quality': (data.bestQualityScore * 100).toFixed(1) + '%',
+ 'Total time': (data.totalTime / 1000).toFixed(2) + 's'
  })
  } else {
  console.log('❌ Liveness verification failed')
@@ -219,10 +214,10 @@ async function startLivenessDetection() {
  const videoEl = document.getElementById('video') as HTMLVideoElement
  await engine.startDetection(videoEl)

- // Detection runs automatically until complete or manually stopped
+ // Detection runs automatically to completion or manual stop
  // engine.stopDetection(true) // Stop and show best image
  } catch (error) {
- console.error('Detection startup failed:', error)
+ console.error('Detection start failed:', error)
  }
  }

@@ -242,16 +237,23 @@ startLivenessDetection()
  | `tensorflow_wasm_path` | `string` | TensorFlow WASM files directory | `undefined` |
  | `tensorflow_backend` | `'auto' \| 'webgl' \| 'wasm'` | TensorFlow backend engine | `'auto'` |

+ ### Debug Mode Configuration
+
+ | Option | Type | Description | Default |
+ |-----|------|------|--------|
+ | `debug_mode` | `boolean` | Enable debug mode | `false` |
+ | `debug_log_level` | `'info' \| 'warn' \| 'error'` | Debug log minimum level | `'info'` |
+ | `debug_log_stages` | `string[]` | Debug log stage filter (undefined=all) | `undefined` |
+ | `debug_log_throttle` | `number` | Debug log throttle interval (ms) | `100` |
+
  ### Video Detection Settings

  | Option | Type | Description | Default |
  |-----|------|------|--------|
- | `detect_video_ideal_width` | `number` | Video width (pixels) | `1920` |
- | `detect_video_ideal_height` | `number` | Video height (pixels) | `1080` |
- | `detect_video_mirror` | `boolean` | Horizontal flip video | `true` |
+ | `detect_video_ideal_width` | `number` | Video width (pixels) | `1280` |
+ | `detect_video_ideal_height` | `number` | Video height (pixels) | `720` |
+ | `detect_video_mirror` | `boolean` | Horizontally flip video | `true` |
  | `detect_video_load_timeout` | `number` | Load timeout (ms) | `5000` |
- | `detect_frame_delay` | `number` | Frame delay (ms) | `100` |
- | `detect_error_retry_delay` | `number` | Error retry delay (ms) | `200` |

  ### Face Collection Quality Requirements

@@ -260,10 +262,10 @@ startLivenessDetection()
  | `collect_min_collect_count` | `number` | Minimum collection count | `3` |
  | `collect_min_face_ratio` | `number` | Minimum face ratio (0-1) | `0.5` |
  | `collect_max_face_ratio` | `number` | Maximum face ratio (0-1) | `0.9` |
- | `collect_min_face_frontal` | `number` | Minimum frontalness (0-1) | `0.9` |
+ | `collect_min_face_frontal` | `number` | Minimum frontality (0-1) | `0.9` |
  | `collect_min_image_quality` | `number` | Minimum image quality (0-1) | `0.5` |

- ### Face Frontalness Parameters
+ ### Face Frontality Parameters

  | Option | Type | Description | Default |
  |-----|------|------|--------|
@@ -275,7 +277,7 @@ startLivenessDetection()

  | Option | Type | Description | Default |
  |-----|------|------|--------|
- | `require_full_face_in_bounds` | `boolean` | Face fully in bounds | `false` |
+ | `require_full_face_in_bounds` | `boolean` | Face fully within bounds | `false` |
  | `min_laplacian_variance` | `number` | Minimum blur detection value | `40` |
  | `min_gradient_sharpness` | `number` | Minimum sharpness | `0.15` |
  | `min_blur_score` | `number` | Minimum blur score | `0.6` |
@@ -285,7 +287,7 @@ startLivenessDetection()
  | Option | Type | Description | Default |
  |-----|------|------|--------|
  | `action_liveness_action_list` | `LivenessAction[]` | Action list | `[BLINK, MOUTH_OPEN, NOD]` |
- | `action_liveness_action_count` | `number` | Actions to complete | `1` |
+ | `action_liveness_action_count` | `number` | Number of actions to complete | `1` |
  | `action_liveness_action_randomize` | `boolean` | Randomize action order | `true` |
  | `action_liveness_verify_timeout` | `number` | Timeout (ms) | `60000` |
  | `action_liveness_min_mouth_open_percent` | `number` | Minimum mouth open ratio (0-1) | `0.2` |
@@ -294,44 +296,11 @@ startLivenessDetection()

  | Option | Type | Description | Default |
  |-----|------|------|--------|
- | `motion_liveness_min_motion_score` | `number` | Minimum motion score (0-1) | `0.15` |
- | `motion_liveness_min_keypoint_variance` | `number` | Minimum keypoint variance (0-1) | `0.02` |
- | `motion_liveness_frame_buffer_size` | `number` | Frame buffer size | `5` |
- | `motion_liveness_eye_aspect_ratio_threshold` | `number` | Blink detection threshold | `0.15` |
- | `motion_liveness_motion_consistency_threshold` | `number` | Consistency threshold (0-1) | `0.3` |
- | `motion_liveness_min_optical_flow_threshold` | `number` | Minimum optical flow magnitude (0-1) | `0.02` |
  | `motion_liveness_strict_photo_detection` | `boolean` | Strict photo detection mode | `false` |

- ### Screen Capture Detection
-
- | Option | Type | Description | Default |
- |-----|------|------|--------|
- | `screen_capture_confidence_threshold` | `number` | Confidence threshold (0-1) | `0.7` |
- | `screen_capture_detection_strategy` | `string` | Detection strategy | `'adaptive'` |
- | `screen_moire_pattern_threshold` | `number` | Moire pattern threshold (0-1) | `0.65` |
- | `screen_moire_pattern_enable_dct` | `boolean` | Enable DCT analysis | `true` |
- | `screen_moire_pattern_enable_edge_detection` | `boolean` | Enable edge detection | `true` |
-
- ### Screen Color Features
-
- | Option | Type | Description | Default |
- |-----|------|------|--------|
- | `screen_color_saturation_threshold` | `number` | Saturation threshold (%) | `40` |
- | `screen_color_rgb_correlation_threshold` | `number` | RGB correlation threshold (0-1) | `0.75` |
- | `screen_color_pixel_entropy_threshold` | `number` | Entropy threshold (0-8) | `6.5` |
- | `screen_color_gradient_smoothness_threshold` | `number` | Smoothness threshold (0-1) | `0.7` |
- | `screen_color_confidence_threshold` | `number` | Confidence threshold (0-1) | `0.65` |
-
- ### Screen RGB Emission Detection
+ > **Note**: Motion liveness detection uses built-in 6-indicator voting algorithm. Other parameters are internally optimized and require no manual configuration. See [Motion Detection Algorithm Documentation](./docs/MOTION_DETECTION_ALGORITHM.md).

- | Option | Type | Description | Default |
- |-----|------|------|--------|
- | `screen_rgb_low_freq_start_percent` | `number` | Low frequency start (0-1) | `0.15` |
- | `screen_rgb_low_freq_end_percent` | `number` | Low frequency end (0-1) | `0.35` |
- | `screen_rgb_energy_score_weight` | `number` | Energy weight | `0.40` |
- | `screen_rgb_asymmetry_score_weight` | `number` | Asymmetry weight | `0.40` |
- | `screen_rgb_difference_factor_weight` | `number` | Difference weight | `0.20` |
- | `screen_rgb_confidence_threshold` | `number` | Confidence threshold (0-1) | `0.65` |
+ > **Note**: Screen capture detection uses built-in 4-dimensional cascade algorithm (screen flicker, response time, DLP color wheel, optical distortion) and screen contour detection. All parameters are internally optimized and require no manual configuration. See [Screen Capture Detection Algorithm Documentation](./docs/SCREEN_CAPTURE_DETECTION_ALGORITHM.md) and [Screen Contour Detection Algorithm Documentation](./docs/SCREEN_CORNERS_CONTOUR_DETECTION_ALGORITHM.md).

  ---

@@ -340,14 +309,14 @@ startLivenessDetection()
  ### Core Methods

  #### `initialize(): Promise<void>`
- Load and initialize the detection library. **Must be called before using other functions.**
+ Load and initialize detection libraries. **Must be called before using other features.**

  ```typescript
  await engine.initialize()
  ```

  #### `startDetection(videoElement): Promise<void>`
- Start face detection on a video element.
+ Start face detection on video element.

  ```typescript
  const videoEl = document.getElementById('video') as HTMLVideoElement
@@ -355,7 +324,7 @@ await engine.startDetection(videoEl)
  ```

  #### `stopDetection(success?: boolean): void`
- Stop the detection process.
+ Stop detection process.

  ```typescript
  engine.stopDetection(true) // true: show best detection image
@@ -389,18 +358,18 @@ const state = engine.getEngineState()

  ## 📡 Event System

- The engine uses **TypeScript event emitter pattern** with full type safety.
+ The engine uses **TypeScript event emitter pattern**, all events are type-safe.

  ### Event List

  <table>
  <tr>
  <td><strong>detector-loaded</strong></td>
- <td>Engine initialization complete</td>
+ <td>Engine initialization completed</td>
  </tr>
  <tr>
  <td><strong>detector-info</strong></td>
- <td>Real-time per-frame detection data</td>
+ <td>Real-time detection data per frame</td>
  </tr>
  <tr>
  <td><strong>detector-action</strong></td>
@@ -408,15 +377,15 @@ The engine uses **TypeScript event emitter pattern** with full type safety.
  </tr>
  <tr>
  <td><strong>detector-finish</strong></td>
- <td>Detection complete (success/failure)</td>
+ <td>Detection completed (success/failure)</td>
  </tr>
  <tr>
  <td><strong>detector-error</strong></td>
- <td>Error occurred</td>
+ <td>Triggered when error occurs</td>
  </tr>
  <tr>
  <td><strong>detector-debug</strong></td>
- <td>Debug information (development)</td>
+ <td>Debug information (for development)</td>
  </tr>
  </table>

@@ -429,7 +398,7 @@ The engine uses **TypeScript event emitter pattern** with full type safety.
  ```typescript
  interface DetectorLoadedEventData {
  success: boolean // Whether initialization succeeded
- error?: string // Error message (if failed)
+ error?: string // Error message (on failure)
  opencv_version?: string // OpenCV.js version
  human_version?: string // Human.js version
  }
@@ -451,7 +420,7 @@ engine.on('detector-loaded', (data) => {

  ### 📊 detector-info

- **Returns real-time detection data per frame (high-frequency event)**
+ **Returns real-time detection data per frame (high frequency event)**

  ```typescript
  interface DetectorInfoEventData {
@@ -460,7 +429,7 @@ interface DetectorInfoEventData {
  message: string // Status message
  faceCount: number // Number of faces detected
  faceRatio: number // Face ratio (0-1)
- faceFrontal: number // Face frontalness (0-1)
+ faceFrontal: number // Face frontality (0-1)
  imageQuality: number // Image quality score (0-1)
  motionScore: number // Motion score (0-1)
  keypointVariance: number // Keypoint variance (0-1)
@@ -472,7 +441,7 @@ interface DetectorInfoEventData {
  **Detection Status Codes:**
  ```typescript
  enum DetectionCode {
- VIDEO_NO_FACE = 'VIDEO_NO_FACE', // No face detected in video
+ VIDEO_NO_FACE = 'VIDEO_NO_FACE', // No face detected
  MULTIPLE_FACE = 'MULTIPLE_FACE', // Multiple faces detected
  FACE_TOO_SMALL = 'FACE_TOO_SMALL', // Face too small
  FACE_TOO_LARGE = 'FACE_TOO_LARGE', // Face too large
@@ -488,12 +457,12 @@ enum DetectionCode {
  ```typescript
  engine.on('detector-info', (data) => {
  console.log({
- status: data.code,
- silentPassed: data.passed ? '✅' : '❌',
- quality: `${(data.imageQuality * 100).toFixed(1)}%`,
- frontal: `${(data.faceFrontal * 100).toFixed(1)}%`,
- motion: `${(data.motionScore * 100).toFixed(1)}%`,
- screen: `${(data.screenConfidence * 100).toFixed(1)}%`
+ 'Detection status': data.code,
+ 'Silent passed': data.passed ? '✅' : '❌',
+ 'Image quality': `${(data.imageQuality * 100).toFixed(1)}%`,
+ 'Face frontality': `${(data.faceFrontal * 100).toFixed(1)}%`,
+ 'Motion score': `${(data.motionScore * 100).toFixed(1)}%`,
+ 'Screen capture': `${(data.screenConfidence * 100).toFixed(1)}%`
  })
  })
  ```
@@ -528,7 +497,7 @@ enum LivenessActionStatus {
  engine.on('detector-action', (data) => {
  const actionLabels = {
  'blink': 'Blink',
- 'mouth_open': 'Open mouth',
+ 'mouth_open': 'Open Mouth',
  'nod': 'Nod'
  }

@@ -553,14 +522,14 @@ engine.on('detector-action', (data) => {

  ### ✅ detector-finish

- **Detection process complete (success or failure)**
+ **Detection process completed (success or failure)**

  ```typescript
  interface DetectorFinishEventData {
  success: boolean // Whether verification passed
- silentPassedCount: number // Silent detection passes
- actionPassedCount: number // Actions completed
- totalTime: number // Total elapsed time (ms)
+ silentPassedCount: number // Silent detection pass count
+ actionPassedCount: number // Actions completed count
+ totalTime: number // Total time elapsed (milliseconds)
  bestQualityScore: number // Best image quality (0-1)
  bestFrameImage: string | null // Base64 frame image
  bestFaceImage: string | null // Base64 face image
@@ -572,10 +541,10 @@ interface DetectorFinishEventData {
  engine.on('detector-finish', (data) => {
  if (data.success) {
  console.log('🎉 Liveness verification successful!', {
- silentPassed: `${data.silentPassedCount} times`,
- actionsCompleted: `${data.actionPassedCount} times`,
- bestQuality: `${(data.bestQualityScore * 100).toFixed(1)}%`,
- totalTime: `${(data.totalTime / 1000).toFixed(2)}s`
+ 'Silent passed': `${data.silentPassedCount} times`,
+ 'Actions completed': `${data.actionPassedCount} times`,
+ 'Best quality': `${(data.bestQualityScore * 100).toFixed(1)}%`,
+ 'Total time': `${(data.totalTime / 1000).toFixed(2)}s`
  })

  // Upload result to server
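
The `// Upload result to server` context line above is where an integration would typically post the verification outcome. A hedged sketch follows; only the field names come from `DetectorFinishEventData`, while the endpoint and payload layout are invented for illustration.

```typescript
// Hypothetical upload helper. The /api/face-verify endpoint and the payload
// shape are illustrative; the input fields mirror DetectorFinishEventData.
async function uploadVerificationResult(result: {
  success: boolean
  bestQualityScore: number
  bestFaceImage: string | null
  totalTime: number
}): Promise<void> {
  const response = await fetch('/api/face-verify', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      passed: result.success,
      quality: result.bestQualityScore,
      faceImage: result.bestFaceImage, // Base64 face crop, may be null
      elapsedMs: result.totalTime,
    }),
  })
  if (!response.ok) {
    throw new Error(`Upload failed with HTTP ${response.status}`)
  }
}
```
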
@@ -673,24 +642,24 @@ enum LivenessAction {
  ### LivenessActionStatus
  ```typescript
  enum LivenessActionStatus {
- STARTED = 'started', // Prompt started
- COMPLETED = 'completed', // Successfully recognized
- TIMEOUT = 'timeout' // Recognition timeout
+ STARTED = 'started', // Action prompt started
+ COMPLETED = 'completed', // Action successfully recognized
+ TIMEOUT = 'timeout' // Action recognition timeout
  }
  ```

  ### DetectionCode
  ```typescript
  enum DetectionCode {
- VIDEO_NO_FACE = 'VIDEO_NO_FACE', // No face in video
- MULTIPLE_FACE = 'MULTIPLE_FACE', // Multiple faces
- FACE_TOO_SMALL = 'FACE_TOO_SMALL', // Face below minimum size
- FACE_TOO_LARGE = 'FACE_TOO_LARGE', // Face above maximum size
- FACE_NOT_FRONTAL = 'FACE_NOT_FRONTAL', // Face angle not frontal
- FACE_NOT_REAL = 'FACE_NOT_REAL', // Suspected spoofing
+ VIDEO_NO_FACE = 'VIDEO_NO_FACE', // No face detected in video
+ MULTIPLE_FACE = 'MULTIPLE_FACE', // Multiple faces detected
+ FACE_TOO_SMALL = 'FACE_TOO_SMALL', // Face size below minimum threshold
+ FACE_TOO_LARGE = 'FACE_TOO_LARGE', // Face size above maximum threshold
+ FACE_NOT_FRONTAL = 'FACE_NOT_FRONTAL', // Face angle not frontal enough
+ FACE_NOT_REAL = 'FACE_NOT_REAL', // Suspected spoofing detected
  FACE_NOT_LIVE = 'FACE_NOT_LIVE', // Liveness score below threshold
- FACE_LOW_QUALITY = 'FACE_LOW_QUALITY', // Image quality below threshold
- FACE_CHECK_PASS = 'FACE_CHECK_PASS' // All checks passed ✅
+ FACE_LOW_QUALITY = 'FACE_LOW_QUALITY', // Image quality below minimum
+ FACE_CHECK_PASS = 'FACE_CHECK_PASS' // All detection checks passed ✅
  }
  ```
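
Because `detector-info` reports one of these codes on every frame (the `code` field shown earlier), a typical integration maps them to user-facing hints. A sketch, assuming `DetectionCode` is exported from the package root as the type listings suggest; the hint wording is made up here and is not part of the library:

```typescript
import { DetectionCode } from '@sssxyd/face-liveness-detector'

// Illustrative mapping from per-frame status codes to on-screen guidance.
const detectionHints: Record<DetectionCode, string> = {
  [DetectionCode.VIDEO_NO_FACE]: 'Please face the camera',
  [DetectionCode.MULTIPLE_FACE]: 'Make sure only one person is in view',
  [DetectionCode.FACE_TOO_SMALL]: 'Move closer to the camera',
  [DetectionCode.FACE_TOO_LARGE]: 'Move slightly further away',
  [DetectionCode.FACE_NOT_FRONTAL]: 'Look straight at the camera',
  [DetectionCode.FACE_NOT_REAL]: 'Possible spoofing detected',
  [DetectionCode.FACE_NOT_LIVE]: 'Hold still for a moment',
  [DetectionCode.FACE_LOW_QUALITY]: 'Try better lighting',
  [DetectionCode.FACE_CHECK_PASS]: 'Looking good, keep still',
}

// Usage with the engine from the Quick Start:
// engine.on('detector-info', (data) => { hintEl.textContent = detectionHints[data.code] })
```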
 
@@ -699,7 +668,7 @@ enum DetectionCode {
  enum ErrorCode {
  DETECTOR_NOT_INITIALIZED = 'DETECTOR_NOT_INITIALIZED', // Engine not initialized
  CAMERA_ACCESS_DENIED = 'CAMERA_ACCESS_DENIED', // Camera permission denied
- STREAM_ACQUISITION_FAILED = 'STREAM_ACQUISITION_FAILED', // Failed to get video stream
+ STREAM_ACQUISITION_FAILED = 'STREAM_ACQUISITION_FAILED', // Failed to acquire video stream
  SUSPECTED_FRAUDS_DETECTED = 'SUSPECTED_FRAUDS_DETECTED' // Spoofing/fraud detected
  }
  ```
@@ -717,7 +686,7 @@ For comprehensive examples and advanced usage patterns, refer to the official de
  - ✅ Complete Vue 3 + TypeScript integration
  - ✅ Real-time detection result visualization
  - ✅ Dynamic configuration panel
- - ✅ Complete event handling examples
+ - ✅ Complete handling of all engine events
  - ✅ Real-time debug panel
  - ✅ Responsive mobile + desktop UI
  - ✅ Error handling and user feedback
@@ -731,17 +700,17 @@ npm install
  npm run dev
  ```

- Then open the local URL shown in your browser.
+ Then open the displayed local URL in your browser.

  ---

- ## 📥 Deploying Model Files Locally
+ ## 📥 Local Deployment of Model Files

- ### Why Deploy Locally?
+ ### Why Local Deployment?

- - 🚀 **Improve Performance** - Avoid CDN latency
- - 🔒 **Privacy Protection** - Complete offline operation
- - 🌐 **Network Independence** - No external dependencies
+ - 🚀 **Improved Performance** - Avoid CDN delays
+ - 🔒 **Privacy Protection** - Fully offline operation
+ - 🌐 **Network Independence** - No external connection dependencies

  ### Available Scripts

@@ -756,8 +725,8 @@ node copy-models.js
  **Features:**
  - Copy models from `node_modules/@vladmandic/human/models`
  - Save to `public/models/` directory
- - Include `.json` and `.bin` model files
- - Automatically display file sizes and progress
+ - Includes `.json` and `.bin` model files
+ - Automatically shows file sizes and progress

  #### 2️⃣ Download TensorFlow WASM Files

@@ -768,12 +737,12 @@ node download-wasm.js
  **Features:**
  - Automatically download TensorFlow.js WASM backend
  - Save to `public/wasm/` directory
- - Download 4 essential files:
+ - Download 4 key files:
  - `tf-backend-wasm.min.js`
  - `tfjs-backend-wasm.wasm`
  - `tfjs-backend-wasm-simd.wasm`
  - `tfjs-backend-wasm-threaded-simd.wasm`
- - **Intelligent multi-CDN sources** with automatic fallback:
+ - **Smart Multi-CDN Sources** with automatic fallback:
  1. unpkg.com (recommended)
  2. cdn.jsdelivr.net
  3. esm.sh
@@ -789,7 +758,7 @@ const engine = new FaceDetectionEngine({
  human_model_path: '/models',
  tensorflow_wasm_path: '/wasm',

- // Other configuration...
+ // Other configurations...
  })
  ```

@@ -810,16 +779,16 @@ Configure `postinstall` hook in `package.json` for automatic download:
  ## 🌐 Browser Compatibility

  | Browser | Version | Support | Notes |
- |--------|---------|---------|-------|
- | Chrome | 60+ | ✅ | Full support |
- | Firefox | 55+ | ✅ | Full support |
- | Safari | 11+ | ✅ | Full support |
- | Edge | 79+ | ✅ | Full support |
+ |--------|------|------|------|
+ | Chrome | 60+ | ✅ | Fully supported |
+ | Firefox | 55+ | ✅ | Fully supported |
+ | Safari | 11+ | ✅ | Fully supported |
+ | Edge | 79+ | ✅ | Fully supported |

  **System Requirements:**

- - 📱 Modern browser supporting **WebRTC**
- - 🔒 **HTTPS environment** (localhost available for development)
+ - 📱 Modern browsers with **WebRTC** support
+ - 🔒 **HTTPS environment** (localhost allowed for development)
  - ⚙️ **WebGL** or **WASM** backend support
  - 📹 **User authorization** - Camera permission required
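
The system requirements listed above (WebRTC, HTTPS, camera permission) can be verified up front, before `startDetection` is ever called. A small sketch using only standard browser APIs; the messages are illustrative:

```typescript
// Pre-flight check for the requirements listed above.
async function checkEnvironment(): Promise<string | null> {
  if (!window.isSecureContext) {
    return 'HTTPS (or localhost during development) is required for camera access'
  }
  if (!navigator.mediaDevices?.getUserMedia) {
    return 'This browser does not support WebRTC camera capture'
  }
  try {
    // Triggers the permission prompt; release the stream again immediately
    const stream = await navigator.mediaDevices.getUserMedia({ video: true })
    stream.getTracks().forEach((track) => track.stop())
    return null // environment looks usable
  } catch {
    return 'Camera permission was denied'
  }
}
```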