@sssxyd/face-liveness-detector 0.4.0-alpha.8 → 0.4.0

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (38)
  1. package/README.en.md +118 -144
  2. package/README.md +23 -49
  3. package/dist/index.esm.js +3172 -2315
  4. package/dist/index.esm.js.map +1 -1
  5. package/dist/index.js +3172 -2315
  6. package/dist/index.js.map +1 -1
  7. package/dist/types/browser_utils.d.ts +3 -3
  8. package/dist/types/browser_utils.d.ts.map +1 -1
  9. package/dist/types/config.d.ts.map +1 -1
  10. package/dist/types/dlp-color-wheel-detector.d.ts +76 -0
  11. package/dist/types/dlp-color-wheel-detector.d.ts.map +1 -0
  12. package/dist/types/face-detection-engine.d.ts +113 -16
  13. package/dist/types/face-detection-engine.d.ts.map +1 -1
  14. package/dist/types/face-detection-state.d.ts +5 -2
  15. package/dist/types/face-detection-state.d.ts.map +1 -1
  16. package/dist/types/face-frontal-calculator.d.ts +3 -3
  17. package/dist/types/face-frontal-calculator.d.ts.map +1 -1
  18. package/dist/types/image-quality-calculator.d.ts +10 -23
  19. package/dist/types/image-quality-calculator.d.ts.map +1 -1
  20. package/dist/types/motion-liveness-detector.d.ts +34 -19
  21. package/dist/types/motion-liveness-detector.d.ts.map +1 -1
  22. package/dist/types/optical-distortion-detector.d.ts +116 -0
  23. package/dist/types/optical-distortion-detector.d.ts.map +1 -0
  24. package/dist/types/screen-capture-detector.d.ts +74 -115
  25. package/dist/types/screen-capture-detector.d.ts.map +1 -1
  26. package/dist/types/screen-corners-contour-detector.d.ts +78 -0
  27. package/dist/types/screen-corners-contour-detector.d.ts.map +1 -0
  28. package/dist/types/screen-flicker-detector.d.ts +103 -0
  29. package/dist/types/screen-flicker-detector.d.ts.map +1 -0
  30. package/dist/types/screen-moire-pattern-detect.d.ts.map +1 -1
  31. package/dist/types/screen-response-time-detector.d.ts +70 -0
  32. package/dist/types/screen-response-time-detector.d.ts.map +1 -0
  33. package/dist/types/screen-rgb-emission-detect.d.ts.map +1 -1
  34. package/dist/types/types.d.ts +9 -51
  35. package/dist/types/types.d.ts.map +1 -1
  36. package/dist/types/video-frame-collector.d.ts +111 -0
  37. package/dist/types/video-frame-collector.d.ts.map +1 -0
  38. package/package.json +2 -1
package/README.en.md CHANGED
@@ -1,6 +1,6 @@
  <div align="center">

- > **Languages / 语言:** [English](#) · [中文](./README.md)
+ > **Languages:** [English](#) · [中文](./README.md)

  # Face Liveness Detection Engine

@@ -22,26 +22,26 @@

  <table>
  <tr>
- <td>💯 <strong>Pure Frontend</strong><br/>Zero backend dependency, all processing runs locally in the browser</td>
- <td>🔬 <strong>Hybrid AI Solution</strong><br/>Deep fusion of TensorFlow + OpenCV</td>
+ <td>💯 <strong>Pure Frontend</strong><br/>Zero backend dependencies, all processing runs locally in browser</td>
+ <td>🔬 <strong>Hybrid AI Solution</strong><br/>Deep integration of TensorFlow + OpenCV</td>
  </tr>
  <tr>
- <td>🧠 <strong>Dual Liveness Verification</strong><br/>Silent detection + action recognition (blink, mouth open, nod)</td>
- <td>⚡ <strong>Event-Driven Architecture</strong><br/>100% TypeScript, seamless integration with any framework</td>
+ <td>🧠 <strong>Dual Liveness Verification</strong><br/>Silent detection + Action recognition (blink, mouth open, nod)</td>
+ <td>⚡ <strong>Event-Driven Architecture</strong><br/>100% TypeScript, seamlessly integrates with any framework</td>
  </tr>
  <tr>
- <td>🎯 <strong>Multi-Dimensional Analysis</strong><br/>Quality, face frontalness, motion score, screen detection</td>
- <td>🛡️ <strong>Multi-Dimensional Anti-Spoofing</strong><br/>Photo, screen video, moire pattern, RGB emission detection</td>
+ <td>🎯 <strong>Multi-Dimensional Analysis</strong><br/>Quality, frontality, motion score, screen detection</td>
+ <td>🛡️ <strong>Multi-Layer Anti-Spoofing</strong><br/>Photo motion detection, screen temporal analysis, contour boundary detection</td>
  </tr>
  </table>

  ---

- ## 🚀 Online Demo
+ ## 🚀 Live Demo

  <div align="center">

- **[👉 Live Demo](https://face.lowtechsoft.com/) | Scan QR code for quick testing**
+ **[👉 Try Live Demo](https://face.lowtechsoft.com/) | Scan with phone for quick test**

  [![Face Liveness Detection Demo QR Code](https://raw.githubusercontent.com/sssxyd/face-liveness-detector/main/demos/vue-demo/vue-demo.png)](https://face.lowtechsoft.com/)

@@ -51,11 +51,12 @@

  ## 🧬 Core Algorithm Design

- | Detection Module | Technology | Documentation |
+ | Detection Module | Technical Solution | Documentation |
  |---------|--------|--------|
- | **Face Recognition** | Human.js BlazeFace + FaceMesh | 468 facial feature points + expression recognition |
- | **Motion Detection** | Multi-dimensional motion analysis | [Motion Detection Algorithm](./docs/MOTION_DETECTION_ALGORITHM.md) - optical flow, keypoint variance, facial region changes |
- | **Screen Detection** | Three-dimensional feature fusion | [Screen Capture Detection](./docs/SCREEN_CAPTURE_DETECTION_ALGORITHM.md) - moire patterns, RGB emission, color features |
+ | **Face Recognition** | Human.js BlazeFace + FaceMesh | 468 facial landmarks + expression recognition |
+ | **Motion Liveness Detection** | 6-Indicator Voting System | [Motion Detection Algorithm](./docs/MOTION_DETECTION_ALGORITHM.md) - Optical flow, keypoint variance, eye/mouth movement, facial area changes |
+ | **Screen Capture Detection** | 4-Dimensional Temporal Analysis | [Screen Capture Detection Algorithm](./docs/SCREEN_CAPTURE_DETECTION_ALGORITHM.md) - Screen flicker, response time, DLP color wheel, optical distortion |
+ | **Screen Contour Detection** | Canny Edge + Contour Analysis | [Screen Contour Detection Algorithm](./docs/SCREEN_CORNERS_CONTOUR_DETECTION_ALGORITHM.md) - Single-frame rectangular boundary detection |

  ---
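The new Screen Contour Detection row names Canny edge detection plus contour analysis as its building blocks. As an illustration only (not the package's internal implementation; the helper name, thresholds, and area cutoff below are assumptions), the general idea can be sketched with `@techstark/opencv-js` once the OpenCV.js runtime has finished initializing:

```typescript
import cv from '@techstark/opencv-js'

// Illustration of "Canny edge + contour analysis" on a single frame:
// edge map -> external contours -> 4-vertex polygon approximation.
function hasScreenLikeQuad(frame: ImageData): boolean {
  const src = cv.matFromImageData(frame)
  const gray = new cv.Mat()
  const edges = new cv.Mat()
  const contours = new cv.MatVector()
  const hierarchy = new cv.Mat()
  let found = false

  try {
    cv.cvtColor(src, gray, cv.COLOR_RGBA2GRAY)
    cv.Canny(gray, edges, 50, 150)
    cv.findContours(edges, contours, hierarchy, cv.RETR_EXTERNAL, cv.CHAIN_APPROX_SIMPLE)

    for (let i = 0; i < contours.size(); i++) {
      const contour = contours.get(i)
      const approx = new cv.Mat()
      // A large, roughly rectangular (4-vertex) contour is a hint of a screen bezel.
      cv.approxPolyDP(contour, approx, 0.02 * cv.arcLength(contour, true), true)
      if (approx.rows === 4 && cv.contourArea(approx) > 0.2 * frame.width * frame.height) {
        found = true
      }
      approx.delete()
      contour.delete()
    }
  } finally {
    src.delete(); gray.delete(); edges.delete(); contours.delete(); hierarchy.delete()
  }
  return found
}
```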
 
@@ -81,21 +82,21 @@ pnpm add @sssxyd/face-liveness-detector @vladmandic/human @techstark/opencv-js
  </details>

  > 📝 **Why three packages?**
- > `@vladmandic/human` and `@techstark/opencv-js` are peer dependencies that must be installed separately to avoid bundling large libraries and reduce the final bundle size.
+ > `@vladmandic/human` and `@techstark/opencv-js` are peer dependencies that need to be installed separately to avoid bundling large libraries, reducing final bundle size.

  ---

  ## ⚠️ Required Configuration Steps

- ### 1️⃣ Fix OpenCV.js ESM Compatibility Issue
+ ### 1️⃣ Fix OpenCV.js ESM Compatibility

- `@techstark/opencv-js` contains an incompatible UMD format that **must be patched**.
+ `@techstark/opencv-js` contains incompatible UMD format, **patch script must be applied**.

- **Reference:**
- - Issue: [TechStark/opencv-js#44](https://github.com/TechStark/opencv-js/issues/44)
+ **References:**
+ - Issue details: [TechStark/opencv-js#44](https://github.com/TechStark/opencv-js/issues/44)
  - Patch script: [patch-opencv.js](https://github.com/sssxyd/face-liveness-detector/tree/main/demos/vue-demo/scripts/patch-opencv.js)

- **Setup Method (Recommended):** Add to `package.json` as `postinstall` hook
+ **Setup (Recommended):** Add to `package.json` postinstall hook

  ```json
  {
@@ -107,13 +108,13 @@ pnpm add @sssxyd/face-liveness-detector @vladmandic/human @techstark/opencv-js

  ### 2️⃣ Download Human.js Model Files

- `@vladmandic/human` requires model files and TensorFlow WASM backend, **otherwise it will not load**.
+ `@vladmandic/human` requires model files and TensorFlow WASM backend, **otherwise it won't load**.

  **Download Scripts:**
  - Model copy: [copy-models.js](https://github.com/sssxyd/face-liveness-detector/tree/main/demos/vue-demo/scripts/copy-models.js)
  - WASM download: [download-wasm.js](https://github.com/sssxyd/face-liveness-detector/tree/main/demos/vue-demo/scripts/download-wasm.js)

- **Setup Method (Recommended):** Configure as `postinstall` hook
+ **Setup (Recommended):** Configure as postinstall hook

  ```json
  {
@@ -139,30 +140,28 @@ const engine = new FaceDetectionEngine({
  tensorflow_wasm_path: '/wasm',
  tensorflow_backend: 'auto',

- // Detection settings (recommend ≥720p, otherwise screen detection accuracy decreases)
- detect_video_ideal_width: 1920,
- detect_video_ideal_height: 1080,
+ // Detection settings (≥720p recommended, otherwise screen detection accuracy drops)
+ detect_video_ideal_width: 1280,
+ detect_video_ideal_height: 720,
  detect_video_mirror: true,
  detect_video_load_timeout: 5000,
- detect_frame_delay: 100,
+ detect_frame_delay: 120,

  // Collection quality requirements
- collect_min_collect_count: 3, // Minimum 3 face images collected
- collect_min_face_ratio: 0.5, // Face occupies 50%+
- collect_max_face_ratio: 0.9, // Face occupies <90%
- collect_min_face_frontal: 0.9, // Face frontalness 90%
+ collect_min_collect_count: 3, // Collect at least 3 faces
+ collect_min_face_ratio: 0.5, // Face ratio 50%+
+ collect_max_face_ratio: 0.9, // Face ratio below 90%
+ collect_min_face_frontal: 0.9, // Face frontality 90%
  collect_min_image_quality: 0.5, // Image quality 50%+

  // Liveness detection settings
- action_liveness_action_count: 1, // Requires 1 action
+ action_liveness_action_count: 1, // Require 1 action
  action_liveness_action_list: [LivenessAction.BLINK, LivenessAction.MOUTH_OPEN, LivenessAction.NOD],
  action_liveness_action_randomize: true,
  action_liveness_verify_timeout: 60000,

- // Anti-spoofing settings
- motion_liveness_min_motion_score: 0.15,
+ // Anti-spoofing settings (motion and screen detection use built-in optimized algorithms, usually no adjustment needed)
  motion_liveness_strict_photo_detection: false,
- screen_capture_confidence_threshold: 0.7,
  })

  // Listen to core events
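Pulling the `+` lines of the hunk above together, a minimal 0.4.0 setup can lean on the new defaults and only supply the asset paths. A sketch, assuming the package root exports `FaceDetectionEngine` as it is used throughout this README:

```typescript
import { FaceDetectionEngine } from '@sssxyd/face-liveness-detector'

// Only the asset paths are project-specific; the commented values are the 0.4.0
// defaults shown in the configuration tables further down in this README.
const engine = new FaceDetectionEngine({
  human_model_path: '/models',        // Human.js models copied by copy-models.js
  tensorflow_wasm_path: '/wasm',      // TensorFlow WASM backend from download-wasm.js
  // detect_video_ideal_width: 1280,  // default changed from 1920 in the alpha
  // detect_video_ideal_height: 720,  // default changed from 1080 in the alpha
  // detect_frame_delay: 120,         // default changed from 100 ms in the alpha
})
```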
@@ -176,7 +175,7 @@ engine.on('detector-loaded', (data) => {
  })

  engine.on('detector-info', (data) => {
- // Real-time per-frame data
+ // Real-time data per frame
  console.log({
  status: data.code,
  quality: (data.imageQuality * 100).toFixed(1) + '%',
@@ -192,13 +191,13 @@ engine.on('detector-action', (data) => {
  })

  engine.on('detector-finish', (data) => {
- // Detection complete
+ // Detection completed
  if (data.success) {
  console.log('✅ Liveness verification passed!', {
- silentPassed: data.silentPassedCount,
- actionsCompleted: data.actionPassedCount,
- bestQuality: (data.bestQualityScore * 100).toFixed(1) + '%',
- totalTime: (data.totalTime / 1000).toFixed(2) + 's'
+ 'Silent passed': data.silentPassedCount,
+ 'Actions completed': data.actionPassedCount,
+ 'Best quality': (data.bestQualityScore * 100).toFixed(1) + '%',
+ 'Total time': (data.totalTime / 1000).toFixed(2) + 's'
  })
  } else {
  console.log('❌ Liveness verification failed')
@@ -219,10 +218,10 @@ async function startLivenessDetection() {
  const videoEl = document.getElementById('video') as HTMLVideoElement
  await engine.startDetection(videoEl)

- // Detection runs automatically until complete or manually stopped
+ // Detection runs automatically to completion or manual stop
  // engine.stopDetection(true) // Stop and show best image
  } catch (error) {
- console.error('Detection startup failed:', error)
+ console.error('Detection start failed:', error)
  }
  }

@@ -242,16 +241,24 @@ startLivenessDetection()
  | `tensorflow_wasm_path` | `string` | TensorFlow WASM files directory | `undefined` |
  | `tensorflow_backend` | `'auto' \| 'webgl' \| 'wasm'` | TensorFlow backend engine | `'auto'` |

+ ### Debug Mode Configuration
+
+ | Option | Type | Description | Default |
+ |-----|------|------|--------|
+ | `debug_mode` | `boolean` | Enable debug mode | `false` |
+ | `debug_log_level` | `'info' \| 'warn' \| 'error'` | Debug log minimum level | `'info'` |
+ | `debug_log_stages` | `string[]` | Debug log stage filter (undefined=all) | `undefined` |
+ | `debug_log_throttle` | `number` | Debug log throttle interval (ms) | `100` |
+
  ### Video Detection Settings

  | Option | Type | Description | Default |
  |-----|------|------|--------|
- | `detect_video_ideal_width` | `number` | Video width (pixels) | `1920` |
- | `detect_video_ideal_height` | `number` | Video height (pixels) | `1080` |
- | `detect_video_mirror` | `boolean` | Horizontal flip video | `true` |
+ | `detect_video_ideal_width` | `number` | Video width (pixels) | `1280` |
+ | `detect_video_ideal_height` | `number` | Video height (pixels) | `720` |
+ | `detect_video_mirror` | `boolean` | Horizontally flip video | `true` |
  | `detect_video_load_timeout` | `number` | Load timeout (ms) | `5000` |
- | `detect_frame_delay` | `number` | Frame delay (ms) | `100` |
- | `detect_error_retry_delay` | `number` | Error retry delay (ms) | `200` |
+ | `detect_frame_delay` | `number` | Delay between frames (ms) | `120` |

  ### Face Collection Quality Requirements
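The debug options added in the hunk above pair naturally with the existing `detector-debug` event. A sketch with illustrative values (the event payload shape is not spelled out in this diff, so it is simply logged):

```typescript
const engine = new FaceDetectionEngine({
  human_model_path: '/models',
  tensorflow_wasm_path: '/wasm',

  // New in 0.4.0; defaults per the table above are false / 'info' / undefined / 100.
  debug_mode: true,
  debug_log_level: 'warn',      // only emit warn and error entries
  debug_log_stages: undefined,  // undefined = log every stage
  debug_log_throttle: 250,      // at most one entry per 250 ms
})

engine.on('detector-debug', (entry) => {
  // Payload structure is not documented in this diff; inspect it during development.
  console.debug('[face-liveness debug]', entry)
})
```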
 
@@ -260,10 +267,10 @@ startLivenessDetection()
  | `collect_min_collect_count` | `number` | Minimum collection count | `3` |
  | `collect_min_face_ratio` | `number` | Minimum face ratio (0-1) | `0.5` |
  | `collect_max_face_ratio` | `number` | Maximum face ratio (0-1) | `0.9` |
- | `collect_min_face_frontal` | `number` | Minimum frontalness (0-1) | `0.9` |
+ | `collect_min_face_frontal` | `number` | Minimum frontality (0-1) | `0.9` |
  | `collect_min_image_quality` | `number` | Minimum image quality (0-1) | `0.5` |

- ### Face Frontalness Parameters
+ ### Face Frontality Parameters

  | Option | Type | Description | Default |
  |-----|------|------|--------|
@@ -275,7 +282,7 @@ startLivenessDetection()

  | Option | Type | Description | Default |
  |-----|------|------|--------|
- | `require_full_face_in_bounds` | `boolean` | Face fully in bounds | `false` |
+ | `require_full_face_in_bounds` | `boolean` | Face fully within bounds | `false` |
  | `min_laplacian_variance` | `number` | Minimum blur detection value | `40` |
  | `min_gradient_sharpness` | `number` | Minimum sharpness | `0.15` |
  | `min_blur_score` | `number` | Minimum blur score | `0.6` |
@@ -285,7 +292,7 @@
  | Option | Type | Description | Default |
  |-----|------|------|--------|
  | `action_liveness_action_list` | `LivenessAction[]` | Action list | `[BLINK, MOUTH_OPEN, NOD]` |
- | `action_liveness_action_count` | `number` | Actions to complete | `1` |
+ | `action_liveness_action_count` | `number` | Number of actions to complete | `1` |
  | `action_liveness_action_randomize` | `boolean` | Randomize action order | `true` |
  | `action_liveness_verify_timeout` | `number` | Timeout (ms) | `60000` |
  | `action_liveness_min_mouth_open_percent` | `number` | Minimum mouth open ratio (0-1) | `0.2` |
@@ -294,44 +301,11 @@ startLivenessDetection()

  | Option | Type | Description | Default |
  |-----|------|------|--------|
- | `motion_liveness_min_motion_score` | `number` | Minimum motion score (0-1) | `0.15` |
- | `motion_liveness_min_keypoint_variance` | `number` | Minimum keypoint variance (0-1) | `0.02` |
- | `motion_liveness_frame_buffer_size` | `number` | Frame buffer size | `5` |
- | `motion_liveness_eye_aspect_ratio_threshold` | `number` | Blink detection threshold | `0.15` |
- | `motion_liveness_motion_consistency_threshold` | `number` | Consistency threshold (0-1) | `0.3` |
- | `motion_liveness_min_optical_flow_threshold` | `number` | Minimum optical flow magnitude (0-1) | `0.02` |
  | `motion_liveness_strict_photo_detection` | `boolean` | Strict photo detection mode | `false` |

- ### Screen Capture Detection
-
- | Option | Type | Description | Default |
- |-----|------|------|--------|
- | `screen_capture_confidence_threshold` | `number` | Confidence threshold (0-1) | `0.7` |
- | `screen_capture_detection_strategy` | `string` | Detection strategy | `'adaptive'` |
- | `screen_moire_pattern_threshold` | `number` | Moire pattern threshold (0-1) | `0.65` |
- | `screen_moire_pattern_enable_dct` | `boolean` | Enable DCT analysis | `true` |
- | `screen_moire_pattern_enable_edge_detection` | `boolean` | Enable edge detection | `true` |
-
- ### Screen Color Features
-
- | Option | Type | Description | Default |
- |-----|------|------|--------|
- | `screen_color_saturation_threshold` | `number` | Saturation threshold (%) | `40` |
- | `screen_color_rgb_correlation_threshold` | `number` | RGB correlation threshold (0-1) | `0.75` |
- | `screen_color_pixel_entropy_threshold` | `number` | Entropy threshold (0-8) | `6.5` |
- | `screen_color_gradient_smoothness_threshold` | `number` | Smoothness threshold (0-1) | `0.7` |
- | `screen_color_confidence_threshold` | `number` | Confidence threshold (0-1) | `0.65` |
+ > **Note**: Motion liveness detection uses built-in 6-indicator voting algorithm. Other parameters are internally optimized and require no manual configuration. See [Motion Detection Algorithm Documentation](./docs/MOTION_DETECTION_ALGORITHM.md).

- ### Screen RGB Emission Detection
-
- | Option | Type | Description | Default |
- |-----|------|------|--------|
- | `screen_rgb_low_freq_start_percent` | `number` | Low frequency start (0-1) | `0.15` |
- | `screen_rgb_low_freq_end_percent` | `number` | Low frequency end (0-1) | `0.35` |
- | `screen_rgb_energy_score_weight` | `number` | Energy weight | `0.40` |
- | `screen_rgb_asymmetry_score_weight` | `number` | Asymmetry weight | `0.40` |
- | `screen_rgb_difference_factor_weight` | `number` | Difference weight | `0.20` |
- | `screen_rgb_confidence_threshold` | `number` | Confidence threshold (0-1) | `0.65` |
+ > **Note**: Screen capture detection uses built-in 4-dimensional cascade algorithm (screen flicker, response time, DLP color wheel, optical distortion) and screen contour detection. All parameters are internally optimized and require no manual configuration. See [Screen Capture Detection Algorithm Documentation](./docs/SCREEN_CAPTURE_DETECTION_ALGORITHM.md) and [Screen Contour Detection Algorithm Documentation](./docs/SCREEN_CORNERS_CONTOUR_DETECTION_ALGORITHM.md).

  ---
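For projects coming from 0.4.0-alpha.x, the rows deleted in the hunk above are the options that no longer appear in the public configuration. A hedged migration sketch that strips them from an existing config object before handing it to the 0.4.0 engine (the key list is copied from the removed tables; the helper itself is not part of the package):

```typescript
// Options documented for 0.4.0-alpha.x but dropped from the README's public
// configuration in 0.4.0, per the tables removed in this hunk (plus
// detect_error_retry_delay from the Video Detection Settings table).
const OPTIONS_REMOVED_IN_0_4_0 = [
  'motion_liveness_min_motion_score',
  'motion_liveness_min_keypoint_variance',
  'motion_liveness_frame_buffer_size',
  'motion_liveness_eye_aspect_ratio_threshold',
  'motion_liveness_motion_consistency_threshold',
  'motion_liveness_min_optical_flow_threshold',
  'screen_capture_confidence_threshold',
  'screen_capture_detection_strategy',
  'screen_moire_pattern_threshold',
  'screen_moire_pattern_enable_dct',
  'screen_moire_pattern_enable_edge_detection',
  'screen_color_saturation_threshold',
  'screen_color_rgb_correlation_threshold',
  'screen_color_pixel_entropy_threshold',
  'screen_color_gradient_smoothness_threshold',
  'screen_color_confidence_threshold',
  'screen_rgb_low_freq_start_percent',
  'screen_rgb_low_freq_end_percent',
  'screen_rgb_energy_score_weight',
  'screen_rgb_asymmetry_score_weight',
  'screen_rgb_difference_factor_weight',
  'screen_rgb_confidence_threshold',
  'detect_error_retry_delay',
]

// Remove the retired keys from an alpha-era config before constructing the engine.
function migrateAlphaConfig(alphaConfig: Record<string, unknown>): Record<string, unknown> {
  const migrated: Record<string, unknown> = { ...alphaConfig }
  for (const key of OPTIONS_REMOVED_IN_0_4_0) {
    delete migrated[key]
  }
  return migrated
}
```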
 
@@ -340,14 +314,14 @@ startLivenessDetection()
  ### Core Methods

  #### `initialize(): Promise<void>`
- Load and initialize the detection library. **Must be called before using other functions.**
+ Load and initialize detection libraries. **Must be called before using other features.**

  ```typescript
  await engine.initialize()
  ```

  #### `startDetection(videoElement): Promise<void>`
- Start face detection on a video element.
+ Start face detection on video element.

  ```typescript
  const videoEl = document.getElementById('video') as HTMLVideoElement
@@ -355,7 +329,7 @@ await engine.startDetection(videoEl)
  ```

  #### `stopDetection(success?: boolean): void`
- Stop the detection process.
+ Stop detection process.

  ```typescript
  engine.stopDetection(true) // true: show best detection image
@@ -389,18 +363,18 @@ const state = engine.getEngineState()

  ## 📡 Event System

- The engine uses **TypeScript event emitter pattern** with full type safety.
+ The engine uses **TypeScript event emitter pattern**, all events are type-safe.

  ### Event List

  <table>
  <tr>
  <td><strong>detector-loaded</strong></td>
- <td>Engine initialization complete</td>
+ <td>Engine initialization completed</td>
  </tr>
  <tr>
  <td><strong>detector-info</strong></td>
- <td>Real-time per-frame detection data</td>
+ <td>Real-time detection data per frame</td>
  </tr>
  <tr>
  <td><strong>detector-action</strong></td>
@@ -408,15 +382,15 @@ The engine uses **TypeScript event emitter pattern** with full type safety.
  </tr>
  <tr>
  <td><strong>detector-finish</strong></td>
- <td>Detection complete (success/failure)</td>
+ <td>Detection completed (success/failure)</td>
  </tr>
  <tr>
  <td><strong>detector-error</strong></td>
- <td>Error occurred</td>
+ <td>Triggered when error occurs</td>
  </tr>
  <tr>
  <td><strong>detector-debug</strong></td>
- <td>Debug information (development)</td>
+ <td>Debug information (for development)</td>
  </tr>
  </table>

@@ -429,7 +403,7 @@ The engine uses **TypeScript event emitter pattern** with full type safety.
  ```typescript
  interface DetectorLoadedEventData {
  success: boolean // Whether initialization succeeded
- error?: string // Error message (if failed)
+ error?: string // Error message (on failure)
  opencv_version?: string // OpenCV.js version
  human_version?: string // Human.js version
  }
@@ -451,7 +425,7 @@ engine.on('detector-loaded', (data) => {

  ### 📊 detector-info

- **Returns real-time detection data per frame (high-frequency event)**
+ **Returns real-time detection data per frame (high frequency event)**

  ```typescript
  interface DetectorInfoEventData {
@@ -460,7 +434,7 @@ interface DetectorInfoEventData {
  message: string // Status message
  faceCount: number // Number of faces detected
  faceRatio: number // Face ratio (0-1)
- faceFrontal: number // Face frontalness (0-1)
+ faceFrontal: number // Face frontality (0-1)
  imageQuality: number // Image quality score (0-1)
  motionScore: number // Motion score (0-1)
  keypointVariance: number // Keypoint variance (0-1)
@@ -472,7 +446,7 @@ interface DetectorInfoEventData {
  **Detection Status Codes:**
  ```typescript
  enum DetectionCode {
- VIDEO_NO_FACE = 'VIDEO_NO_FACE', // No face detected in video
+ VIDEO_NO_FACE = 'VIDEO_NO_FACE', // No face detected
  MULTIPLE_FACE = 'MULTIPLE_FACE', // Multiple faces detected
  FACE_TOO_SMALL = 'FACE_TOO_SMALL', // Face too small
  FACE_TOO_LARGE = 'FACE_TOO_LARGE', // Face too large
@@ -488,12 +462,12 @@ enum DetectionCode {
  ```typescript
  engine.on('detector-info', (data) => {
  console.log({
- status: data.code,
- silentPassed: data.passed ? '✅' : '❌',
- quality: `${(data.imageQuality * 100).toFixed(1)}%`,
- frontal: `${(data.faceFrontal * 100).toFixed(1)}%`,
- motion: `${(data.motionScore * 100).toFixed(1)}%`,
- screen: `${(data.screenConfidence * 100).toFixed(1)}%`
+ 'Detection status': data.code,
+ 'Silent passed': data.passed ? '✅' : '❌',
+ 'Image quality': `${(data.imageQuality * 100).toFixed(1)}%`,
+ 'Face frontality': `${(data.faceFrontal * 100).toFixed(1)}%`,
+ 'Motion score': `${(data.motionScore * 100).toFixed(1)}%`,
+ 'Screen capture': `${(data.screenConfidence * 100).toFixed(1)}%`
  })
  })
  ```
@@ -528,7 +502,7 @@ enum LivenessActionStatus {
  engine.on('detector-action', (data) => {
  const actionLabels = {
  'blink': 'Blink',
- 'mouth_open': 'Open mouth',
+ 'mouth_open': 'Open Mouth',
  'nod': 'Nod'
  }

@@ -553,14 +527,14 @@ engine.on('detector-action', (data) => {

  ### ✅ detector-finish

- **Detection process complete (success or failure)**
+ **Detection process completed (success or failure)**

  ```typescript
  interface DetectorFinishEventData {
  success: boolean // Whether verification passed
- silentPassedCount: number // Silent detection passes
- actionPassedCount: number // Actions completed
- totalTime: number // Total elapsed time (ms)
+ silentPassedCount: number // Silent detection pass count
+ actionPassedCount: number // Actions completed count
+ totalTime: number // Total time elapsed (milliseconds)
  bestQualityScore: number // Best image quality (0-1)
  bestFrameImage: string | null // Base64 frame image
  bestFaceImage: string | null // Base64 face image
@@ -572,10 +546,10 @@ interface DetectorFinishEventData {
  engine.on('detector-finish', (data) => {
  if (data.success) {
  console.log('🎉 Liveness verification successful!', {
- silentPassed: `${data.silentPassedCount} times`,
- actionsCompleted: `${data.actionPassedCount} times`,
- bestQuality: `${(data.bestQualityScore * 100).toFixed(1)}%`,
- totalTime: `${(data.totalTime / 1000).toFixed(2)}s`
+ 'Silent passed': `${data.silentPassedCount} times`,
+ 'Actions completed': `${data.actionPassedCount} times`,
+ 'Best quality': `${(data.bestQualityScore * 100).toFixed(1)}%`,
+ 'Total time': `${(data.totalTime / 1000).toFixed(2)}s`
  })

  // Upload result to server
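The `// Upload result to server` placeholder in the handler above might be filled in roughly like this. A sketch, assuming a hypothetical `/api/liveness/verify` endpoint that accepts the Base64 strings as-is:

```typescript
// Hypothetical upload helper; endpoint, payload shape, and error handling are
// application-specific and not defined by this package.
async function uploadResult(data: DetectorFinishEventData): Promise<void> {
  if (!data.success || !data.bestFaceImage) return

  const response = await fetch('/api/liveness/verify', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      faceImage: data.bestFaceImage,    // Base64 face crop
      frameImage: data.bestFrameImage,  // Base64 full frame (may be null)
      qualityScore: data.bestQualityScore,
      totalTime: data.totalTime,
    }),
  })
  if (!response.ok) {
    throw new Error(`Liveness upload failed: ${response.status}`)
  }
}
```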
@@ -673,24 +647,24 @@ enum LivenessAction {
  ### LivenessActionStatus
  ```typescript
  enum LivenessActionStatus {
- STARTED = 'started', // Prompt started
- COMPLETED = 'completed', // Successfully recognized
- TIMEOUT = 'timeout' // Recognition timeout
+ STARTED = 'started', // Action prompt started
+ COMPLETED = 'completed', // Action successfully recognized
+ TIMEOUT = 'timeout' // Action recognition timeout
  }
  ```

  ### DetectionCode
  ```typescript
  enum DetectionCode {
- VIDEO_NO_FACE = 'VIDEO_NO_FACE', // No face in video
- MULTIPLE_FACE = 'MULTIPLE_FACE', // Multiple faces
- FACE_TOO_SMALL = 'FACE_TOO_SMALL', // Face below minimum size
- FACE_TOO_LARGE = 'FACE_TOO_LARGE', // Face above maximum size
- FACE_NOT_FRONTAL = 'FACE_NOT_FRONTAL', // Face angle not frontal
- FACE_NOT_REAL = 'FACE_NOT_REAL', // Suspected spoofing
+ VIDEO_NO_FACE = 'VIDEO_NO_FACE', // No face detected in video
+ MULTIPLE_FACE = 'MULTIPLE_FACE', // Multiple faces detected
+ FACE_TOO_SMALL = 'FACE_TOO_SMALL', // Face size below minimum threshold
+ FACE_TOO_LARGE = 'FACE_TOO_LARGE', // Face size above maximum threshold
+ FACE_NOT_FRONTAL = 'FACE_NOT_FRONTAL', // Face angle not frontal enough
+ FACE_NOT_REAL = 'FACE_NOT_REAL', // Suspected spoofing detected
  FACE_NOT_LIVE = 'FACE_NOT_LIVE', // Liveness score below threshold
- FACE_LOW_QUALITY = 'FACE_LOW_QUALITY', // Image quality below threshold
- FACE_CHECK_PASS = 'FACE_CHECK_PASS' // All checks passed ✅
+ FACE_LOW_QUALITY = 'FACE_LOW_QUALITY', // Image quality below minimum
+ FACE_CHECK_PASS = 'FACE_CHECK_PASS' // All detection checks passed ✅
  }
  ```

@@ -699,7 +673,7 @@ enum DetectionCode {
  enum ErrorCode {
  DETECTOR_NOT_INITIALIZED = 'DETECTOR_NOT_INITIALIZED', // Engine not initialized
  CAMERA_ACCESS_DENIED = 'CAMERA_ACCESS_DENIED', // Camera permission denied
- STREAM_ACQUISITION_FAILED = 'STREAM_ACQUISITION_FAILED', // Failed to get video stream
+ STREAM_ACQUISITION_FAILED = 'STREAM_ACQUISITION_FAILED', // Failed to acquire video stream
  SUSPECTED_FRAUDS_DETECTED = 'SUSPECTED_FRAUDS_DETECTED' // Spoofing/fraud detected
  }
  ```
@@ -717,7 +691,7 @@ For comprehensive examples and advanced usage patterns, refer to the official de
  - ✅ Complete Vue 3 + TypeScript integration
  - ✅ Real-time detection result visualization
  - ✅ Dynamic configuration panel
- - ✅ Complete event handling examples
+ - ✅ Complete handling of all engine events
  - ✅ Real-time debug panel
  - ✅ Responsive mobile + desktop UI
  - ✅ Error handling and user feedback
@@ -731,17 +705,17 @@ npm install
  npm run dev
  ```

- Then open the local URL shown in your browser.
+ Then open the displayed local URL in your browser.

  ---

- ## 📥 Deploying Model Files Locally
+ ## 📥 Local Deployment of Model Files

- ### Why Deploy Locally?
+ ### Why Local Deployment?

- - 🚀 **Improve Performance** - Avoid CDN latency
- - 🔒 **Privacy Protection** - Complete offline operation
- - 🌐 **Network Independence** - No external dependencies
+ - 🚀 **Improved Performance** - Avoid CDN delays
+ - 🔒 **Privacy Protection** - Fully offline operation
+ - 🌐 **Network Independence** - No external connection dependencies

  ### Available Scripts

@@ -756,8 +730,8 @@ node copy-models.js
  **Features:**
  - Copy models from `node_modules/@vladmandic/human/models`
  - Save to `public/models/` directory
- - Include `.json` and `.bin` model files
- - Automatically display file sizes and progress
+ - Includes `.json` and `.bin` model files
+ - Automatically shows file sizes and progress

  #### 2️⃣ Download TensorFlow WASM Files

@@ -768,12 +742,12 @@ node download-wasm.js
  **Features:**
  - Automatically download TensorFlow.js WASM backend
  - Save to `public/wasm/` directory
- - Download 4 essential files:
+ - Download 4 key files:
  - `tf-backend-wasm.min.js`
  - `tfjs-backend-wasm.wasm`
  - `tfjs-backend-wasm-simd.wasm`
  - `tfjs-backend-wasm-threaded-simd.wasm`
- - **Intelligent multi-CDN sources** with automatic fallback:
+ - **Smart Multi-CDN Sources** with automatic fallback:
  1. unpkg.com (recommended)
  2. cdn.jsdelivr.net
  3. esm.sh
@@ -789,7 +763,7 @@ const engine = new FaceDetectionEngine({
  human_model_path: '/models',
  tensorflow_wasm_path: '/wasm',

- // Other configuration...
+ // Other configurations...
  })
  ```

@@ -810,16 +784,16 @@ Configure `postinstall` hook in `package.json` for automatic download:
  ## 🌐 Browser Compatibility

  | Browser | Version | Support | Notes |
- |--------|---------|---------|-------|
- | Chrome | 60+ | ✅ | Full support |
- | Firefox | 55+ | ✅ | Full support |
- | Safari | 11+ | ✅ | Full support |
- | Edge | 79+ | ✅ | Full support |
+ |--------|------|------|------|
+ | Chrome | 60+ | ✅ | Fully supported |
+ | Firefox | 55+ | ✅ | Fully supported |
+ | Safari | 11+ | ✅ | Fully supported |
+ | Edge | 79+ | ✅ | Fully supported |

  **System Requirements:**

- - 📱 Modern browser supporting **WebRTC**
- - 🔒 **HTTPS environment** (localhost available for development)
+ - 📱 Modern browsers with **WebRTC** support
+ - 🔒 **HTTPS environment** (localhost allowed for development)
  - ⚙️ **WebGL** or **WASM** backend support
  - 📹 **User authorization** - Camera permission required