@sssxyd/face-liveness-detector 0.4.0-alpha.7 → 0.4.0-alpha.8
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- package/README.en.md +844 -0
- package/README.md +544 -393
- package/dist/types/types.d.ts +82 -0
- package/dist/types/types.d.ts.map +1 -1
- package/package.json +1 -1
- package/README.zh-Hans.md +0 -695
package/README.en.md
ADDED
@@ -0,0 +1,844 @@

<div align="center">

> **Languages / 语言:** [English](#) · [中文](./README.md)

# Face Liveness Detection Engine

<p>
<strong>Pure Frontend Real-time Face Liveness Detection Solution Based on TensorFlow + OpenCV</strong>
</p>

<p>
<img alt="TypeScript" src="https://img.shields.io/badge/TypeScript-5.0+-3178c6?logo=typescript">
<img alt="NPM Package" src="https://img.shields.io/npm/v/@sssxyd/face-liveness-detector?label=npm&color=cb3837">
<img alt="License" src="https://img.shields.io/badge/license-MIT-green">
</p>

</div>

---

## ✨ Features

<table>
<tr>
<td>💯 <strong>Pure Frontend</strong><br/>Zero backend dependency; all processing runs locally in the browser</td>
<td>🔬 <strong>Hybrid AI Solution</strong><br/>Deep fusion of TensorFlow + OpenCV</td>
</tr>
<tr>
<td>🧠 <strong>Dual Liveness Verification</strong><br/>Silent detection + action recognition (blink, mouth open, nod)</td>
<td>⚡ <strong>Event-Driven Architecture</strong><br/>100% TypeScript; integrates seamlessly with any framework</td>
</tr>
<tr>
<td>🎯 <strong>Multi-Dimensional Analysis</strong><br/>Quality, face frontalness, motion score, screen detection</td>
<td>🛡️ <strong>Multi-Dimensional Anti-Spoofing</strong><br/>Photo, screen video, moire pattern, and RGB emission detection</td>
</tr>
</table>

---

## 🚀 Online Demo

<div align="center">

**[👉 Live Demo](https://face.lowtechsoft.com/) | Scan the QR code for quick testing**

[](https://face.lowtechsoft.com/)

</div>

---
## 🧬 Core Algorithm Design

| Detection Module | Technology | Details |
|------------------|------------|---------|
| **Face Recognition** | Human.js BlazeFace + FaceMesh | 468 facial feature points + expression recognition |
| **Motion Detection** | Multi-dimensional motion analysis | [Motion Detection Algorithm](./docs/MOTION_DETECTION_ALGORITHM.md) - optical flow, keypoint variance, facial region changes |
| **Screen Detection** | Three-dimensional feature fusion | [Screen Capture Detection](./docs/SCREEN_CAPTURE_DETECTION_ALGORITHM.md) - moire patterns, RGB emission, color features |

---
## 📦 Installation Guide

### Quick Install (3 packages)

```bash
npm install @sssxyd/face-liveness-detector @vladmandic/human @techstark/opencv-js
```

<details>
<summary><strong>Other Package Managers</strong></summary>

```bash
# Yarn
yarn add @sssxyd/face-liveness-detector @vladmandic/human @techstark/opencv-js

# pnpm
pnpm add @sssxyd/face-liveness-detector @vladmandic/human @techstark/opencv-js
```

</details>

> 📝 **Why three packages?**
> `@vladmandic/human` and `@techstark/opencv-js` are peer dependencies and must be installed separately; this keeps the large libraries out of this package's bundle and reduces the final bundle size.

---
## ⚠️ Required Configuration Steps

### 1️⃣ Fix the OpenCV.js ESM Compatibility Issue

`@techstark/opencv-js` ships in an incompatible UMD format that **must be patched**.

**Reference:**
- Issue: [TechStark/opencv-js#44](https://github.com/TechStark/opencv-js/issues/44)
- Patch script: [patch-opencv.js](https://github.com/sssxyd/face-liveness-detector/tree/main/demos/vue-demo/scripts/patch-opencv.js)

**Setup Method (Recommended):** Add a `postinstall` hook to `package.json`

```json
{
  "scripts": {
    "postinstall": "node patch-opencv.cjs"
  }
}
```

### 2️⃣ Download Human.js Model Files

`@vladmandic/human` requires its model files and the TensorFlow WASM backend, **otherwise it will not load**.

**Download Scripts:**
- Model copy: [copy-models.js](https://github.com/sssxyd/face-liveness-detector/tree/main/demos/vue-demo/scripts/copy-models.js)
- WASM download: [download-wasm.js](https://github.com/sssxyd/face-liveness-detector/tree/main/demos/vue-demo/scripts/download-wasm.js)

**Setup Method (Recommended):** Configure a `postinstall` hook

```json
{
  "scripts": {
    "postinstall": "node scripts/copy-models.js && node scripts/download-wasm.js"
  }
}
```

---
## 🎯 Quick Start

### Basic Example

```typescript
import FaceDetectionEngine, { LivenessAction } from '@sssxyd/face-liveness-detector'

// Initialize engine
const engine = new FaceDetectionEngine({
  // Resource path configuration
  human_model_path: '/models',
  tensorflow_wasm_path: '/wasm',
  tensorflow_backend: 'auto',

  // Detection settings (recommend ≥720p, otherwise screen detection accuracy decreases)
  detect_video_ideal_width: 1920,
  detect_video_ideal_height: 1080,
  detect_video_mirror: true,
  detect_video_load_timeout: 5000,
  detect_frame_delay: 100,

  // Collection quality requirements
  collect_min_collect_count: 3,    // Collect at least 3 face images
  collect_min_face_ratio: 0.5,     // Face occupies 50%+ of the frame
  collect_max_face_ratio: 0.9,     // Face occupies less than 90%
  collect_min_face_frontal: 0.9,   // Face frontalness 90%+
  collect_min_image_quality: 0.5,  // Image quality 50%+

  // Liveness detection settings
  action_liveness_action_count: 1, // Requires 1 action
  action_liveness_action_list: [LivenessAction.BLINK, LivenessAction.MOUTH_OPEN, LivenessAction.NOD],
  action_liveness_action_randomize: true,
  action_liveness_verify_timeout: 60000,

  // Anti-spoofing settings
  motion_liveness_min_motion_score: 0.15,
  motion_liveness_strict_photo_detection: false,
  screen_capture_confidence_threshold: 0.7,
})

// Listen to core events
engine.on('detector-loaded', (data) => {
  if (data.success) {
    console.log('✅ Engine ready', {
      opencv: data.opencv_version,
      human: data.human_version
    })
  }
})

engine.on('detector-info', (data) => {
  // Real-time per-frame data
  console.log({
    status: data.code,
    quality: (data.imageQuality * 100).toFixed(1) + '%',
    frontal: (data.faceFrontal * 100).toFixed(1) + '%',
    motion: (data.motionScore * 100).toFixed(1) + '%',
    screen: (data.screenConfidence * 100).toFixed(1) + '%'
  })
})

engine.on('detector-action', (data) => {
  // Action prompt
  console.log(`Please perform action: ${data.action} (${data.status})`)
})

engine.on('detector-finish', (data) => {
  // Detection complete
  if (data.success) {
    console.log('✅ Liveness verification passed!', {
      silentPassed: data.silentPassedCount,
      actionsCompleted: data.actionPassedCount,
      bestQuality: (data.bestQualityScore * 100).toFixed(1) + '%',
      totalTime: (data.totalTime / 1000).toFixed(2) + 's'
    })
  } else {
    console.log('❌ Liveness verification failed')
  }
})

engine.on('detector-error', (error) => {
  console.error(`❌ Error [${error.code}]: ${error.message}`)
})

// Start detection
async function startLivenessDetection() {
  try {
    // Initialize libraries
    await engine.initialize()

    // Get video element and start detection
    const videoEl = document.getElementById('video') as HTMLVideoElement
    await engine.startDetection(videoEl)

    // Detection runs automatically until complete or manually stopped
    // engine.stopDetection(true) // Stop and show best image
  } catch (error) {
    console.error('Detection startup failed:', error)
  }
}

// Start when ready
startLivenessDetection()
```

---
## ⚙️ Detailed Configuration Reference

### Resource Path Configuration

| Option | Type | Description | Default |
|--------|------|-------------|---------|
| `human_model_path` | `string` | Human.js model files directory | `undefined` |
| `tensorflow_wasm_path` | `string` | TensorFlow WASM files directory | `undefined` |
| `tensorflow_backend` | `'auto' \| 'webgl' \| 'wasm'` | TensorFlow backend engine | `'auto'` |

### Video Detection Settings

| Option | Type | Description | Default |
|--------|------|-------------|---------|
| `detect_video_ideal_width` | `number` | Video width (pixels) | `1920` |
| `detect_video_ideal_height` | `number` | Video height (pixels) | `1080` |
| `detect_video_mirror` | `boolean` | Horizontally flip the video | `true` |
| `detect_video_load_timeout` | `number` | Load timeout (ms) | `5000` |
| `detect_frame_delay` | `number` | Frame delay (ms) | `100` |
| `detect_error_retry_delay` | `number` | Error retry delay (ms) | `200` |

### Face Collection Quality Requirements

| Option | Type | Description | Default |
|--------|------|-------------|---------|
| `collect_min_collect_count` | `number` | Minimum collection count | `3` |
| `collect_min_face_ratio` | `number` | Minimum face ratio (0-1) | `0.5` |
| `collect_max_face_ratio` | `number` | Maximum face ratio (0-1) | `0.9` |
| `collect_min_face_frontal` | `number` | Minimum frontalness (0-1) | `0.9` |
| `collect_min_image_quality` | `number` | Minimum image quality (0-1) | `0.5` |
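
Taken together, the requirements above form a per-frame collection gate: a frame counts toward `collect_min_collect_count` only when every threshold holds. An illustrative sketch of how the thresholds combine (the `FrameStats` shape and `isCollectable` helper are hypothetical, not part of the library's API):

```typescript
// Hypothetical per-frame measurements, mirroring the options above.
interface FrameStats {
  faceRatio: number     // 0-1, face area relative to the frame
  faceFrontal: number   // 0-1
  imageQuality: number  // 0-1
}

// Defaults from the table above.
const thresholds = {
  collect_min_face_ratio: 0.5,
  collect_max_face_ratio: 0.9,
  collect_min_face_frontal: 0.9,
  collect_min_image_quality: 0.5,
}

// A frame is collectable only when every quality requirement holds.
function isCollectable(s: FrameStats, t = thresholds): boolean {
  return (
    s.faceRatio >= t.collect_min_face_ratio &&
    s.faceRatio <= t.collect_max_face_ratio &&
    s.faceFrontal >= t.collect_min_face_frontal &&
    s.imageQuality >= t.collect_min_image_quality
  )
}
```

With `collect_min_collect_count: 3`, detection keeps running until at least three frames pass such a gate.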

### Face Frontalness Parameters

| Option | Type | Description | Default |
|--------|------|-------------|---------|
| `yaw_threshold` | `number` | Yaw angle threshold (degrees) | `3` |
| `pitch_threshold` | `number` | Pitch angle threshold (degrees) | `4` |
| `roll_threshold` | `number` | Roll angle threshold (degrees) | `2` |
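
Frontalness is bounded per axis: the head counts as frontal only while yaw, pitch, and roll each stay within their thresholds. A minimal illustration of how such angle thresholds combine (the `isFrontal` helper is hypothetical; the engine's internal scoring may differ):

```typescript
// Head pose angles in degrees; names mirror the options above.
interface HeadPose { yaw: number; pitch: number; roll: number }

const angleDefaults = { yaw_threshold: 3, pitch_threshold: 4, roll_threshold: 2 }

// Frontal only if every axis stays within its threshold.
function isFrontal(p: HeadPose, t = angleDefaults): boolean {
  return (
    Math.abs(p.yaw) <= t.yaw_threshold &&
    Math.abs(p.pitch) <= t.pitch_threshold &&
    Math.abs(p.roll) <= t.roll_threshold
  )
}
```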

### Image Quality Parameters

| Option | Type | Description | Default |
|--------|------|-------------|---------|
| `require_full_face_in_bounds` | `boolean` | Require the whole face inside the frame | `false` |
| `min_laplacian_variance` | `number` | Minimum Laplacian variance (blur detection) | `40` |
| `min_gradient_sharpness` | `number` | Minimum sharpness | `0.15` |
| `min_blur_score` | `number` | Minimum blur score | `0.6` |

### Liveness Detection Settings

| Option | Type | Description | Default |
|--------|------|-------------|---------|
| `action_liveness_action_list` | `LivenessAction[]` | Action list | `[BLINK, MOUTH_OPEN, NOD]` |
| `action_liveness_action_count` | `number` | Number of actions to complete | `1` |
| `action_liveness_action_randomize` | `boolean` | Randomize action order | `true` |
| `action_liveness_verify_timeout` | `number` | Timeout (ms) | `60000` |
| `action_liveness_min_mouth_open_percent` | `number` | Minimum mouth open ratio (0-1) | `0.2` |

### Motion Liveness Detection (Anti-Photo Attack)

| Option | Type | Description | Default |
|--------|------|-------------|---------|
| `motion_liveness_min_motion_score` | `number` | Minimum motion score (0-1) | `0.15` |
| `motion_liveness_min_keypoint_variance` | `number` | Minimum keypoint variance (0-1) | `0.02` |
| `motion_liveness_frame_buffer_size` | `number` | Frame buffer size | `5` |
| `motion_liveness_eye_aspect_ratio_threshold` | `number` | Blink detection threshold | `0.15` |
| `motion_liveness_motion_consistency_threshold` | `number` | Consistency threshold (0-1) | `0.3` |
| `motion_liveness_min_optical_flow_threshold` | `number` | Minimum optical flow magnitude (0-1) | `0.02` |
| `motion_liveness_strict_photo_detection` | `boolean` | Strict photo detection mode | `false` |

### Screen Capture Detection

| Option | Type | Description | Default |
|--------|------|-------------|---------|
| `screen_capture_confidence_threshold` | `number` | Confidence threshold (0-1) | `0.7` |
| `screen_capture_detection_strategy` | `string` | Detection strategy | `'adaptive'` |
| `screen_moire_pattern_threshold` | `number` | Moire pattern threshold (0-1) | `0.65` |
| `screen_moire_pattern_enable_dct` | `boolean` | Enable DCT analysis | `true` |
| `screen_moire_pattern_enable_edge_detection` | `boolean` | Enable edge detection | `true` |

### Screen Color Features

| Option | Type | Description | Default |
|--------|------|-------------|---------|
| `screen_color_saturation_threshold` | `number` | Saturation threshold (%) | `40` |
| `screen_color_rgb_correlation_threshold` | `number` | RGB correlation threshold (0-1) | `0.75` |
| `screen_color_pixel_entropy_threshold` | `number` | Entropy threshold (0-8) | `6.5` |
| `screen_color_gradient_smoothness_threshold` | `number` | Smoothness threshold (0-1) | `0.7` |
| `screen_color_confidence_threshold` | `number` | Confidence threshold (0-1) | `0.65` |

### Screen RGB Emission Detection

| Option | Type | Description | Default |
|--------|------|-------------|---------|
| `screen_rgb_low_freq_start_percent` | `number` | Low-frequency band start (0-1) | `0.15` |
| `screen_rgb_low_freq_end_percent` | `number` | Low-frequency band end (0-1) | `0.35` |
| `screen_rgb_energy_score_weight` | `number` | Energy score weight | `0.40` |
| `screen_rgb_asymmetry_score_weight` | `number` | Asymmetry score weight | `0.40` |
| `screen_rgb_difference_factor_weight` | `number` | Difference factor weight | `0.20` |
| `screen_rgb_confidence_threshold` | `number` | Confidence threshold (0-1) | `0.65` |
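
The three weights sum to 1.0, so the RGB-emission confidence is effectively a weighted average of the energy, asymmetry, and difference sub-scores, compared against `screen_rgb_confidence_threshold`. A sketch of that arithmetic (illustrative only; the sub-score names are taken from the option names above):

```typescript
// Sub-scores in 0-1; weights are the defaults from the table above.
function rgbEmissionConfidence(
  energy: number, asymmetry: number, difference: number,
  w = { energy: 0.40, asymmetry: 0.40, difference: 0.20 }
): number {
  return energy * w.energy + asymmetry * w.asymmetry + difference * w.difference
}

const confidence = rgbEmissionConfidence(0.8, 0.7, 0.5)  // 0.32 + 0.28 + 0.10 ≈ 0.70
const flaggedAsScreen = confidence >= 0.65               // above the default threshold
```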

---

## 🛠️ API Method Reference

### Core Methods

#### `initialize(): Promise<void>`
Loads and initializes the detection libraries. **Must be called before any other function.**

```typescript
await engine.initialize()
```

#### `startDetection(videoElement): Promise<void>`
Starts face detection on a video element.

```typescript
const videoEl = document.getElementById('video') as HTMLVideoElement
await engine.startDetection(videoEl)
```

#### `stopDetection(success?: boolean): void`
Stops the detection process.

```typescript
engine.stopDetection(true) // true: show the best detection image
```

#### `updateConfig(config): void`
Dynamically updates the configuration at runtime.

```typescript
engine.updateConfig({
  collect_min_face_ratio: 0.6,
  action_liveness_action_count: 0
})
```

#### `getOptions(): FaceDetectionEngineOptions`
Returns the current configuration object.

```typescript
const config = engine.getOptions()
```

#### `getEngineState(): EngineState`
Returns the current engine state.

```typescript
const state = engine.getEngineState()
```

---

## 📡 Event System

The engine uses a **typed TypeScript event-emitter pattern** with full type safety.
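
Concretely, the pattern maps each event name to its payload type, so `on` only accepts a handler matching that payload. A simplified sketch of the idea (not the library's actual implementation; the event map here is abbreviated):

```typescript
// Map each event name to its payload type (abbreviated).
interface EngineEvents {
  'detector-loaded': { success: boolean }
  'detector-error': { code: string; message: string }
}

class TypedEmitter<E> {
  private handlers: { [K in keyof E]?: Array<(data: E[K]) => void> } = {}

  on<K extends keyof E>(event: K, handler: (data: E[K]) => void): void {
    const list = this.handlers[event] ?? []
    list.push(handler)
    this.handlers[event] = list
  }

  emit<K extends keyof E>(event: K, data: E[K]): void {
    this.handlers[event]?.forEach(h => h(data))
  }
}

const bus = new TypedEmitter<EngineEvents>()
bus.on('detector-loaded', d => console.log(d.success)) // d is fully typed
```

A mistyped event name or a handler with the wrong payload shape fails at compile time rather than at runtime.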

### Event List

<table>
<tr>
<td><strong>detector-loaded</strong></td>
<td>Engine initialization complete</td>
</tr>
<tr>
<td><strong>detector-info</strong></td>
<td>Real-time per-frame detection data</td>
</tr>
<tr>
<td><strong>detector-action</strong></td>
<td>Action liveness prompt and status</td>
</tr>
<tr>
<td><strong>detector-finish</strong></td>
<td>Detection complete (success/failure)</td>
</tr>
<tr>
<td><strong>detector-error</strong></td>
<td>Error occurred</td>
</tr>
<tr>
<td><strong>detector-debug</strong></td>
<td>Debug information (development)</td>
</tr>
</table>

---

### 📋 detector-loaded

**Triggered when engine initialization completes**

```typescript
interface DetectorLoadedEventData {
  success: boolean        // Whether initialization succeeded
  error?: string          // Error message (if failed)
  opencv_version?: string // OpenCV.js version
  human_version?: string  // Human.js version
}
```

**Example:**
```typescript
engine.on('detector-loaded', (data) => {
  if (data.success) {
    console.log('✅ Engine ready')
    console.log(`OpenCV ${data.opencv_version} | Human.js ${data.human_version}`)
  } else {
    console.error('❌ Initialization failed:', data.error)
  }
})
```

---

### 📊 detector-info

**Returns real-time detection data per frame (high-frequency event)**

```typescript
interface DetectorInfoEventData {
  passed: boolean          // Whether silent detection passed
  code: DetectionCode      // Detection status code
  message: string          // Status message
  faceCount: number        // Number of faces detected
  faceRatio: number        // Face ratio (0-1)
  faceFrontal: number      // Face frontalness (0-1)
  imageQuality: number     // Image quality score (0-1)
  motionScore: number      // Motion score (0-1)
  keypointVariance: number // Keypoint variance (0-1)
  motionType: string       // Detected motion type
  screenConfidence: number // Screen capture confidence (0-1)
}
```

**Detection Status Codes:**
```typescript
enum DetectionCode {
  VIDEO_NO_FACE = 'VIDEO_NO_FACE',       // No face detected in video
  MULTIPLE_FACE = 'MULTIPLE_FACE',       // Multiple faces detected
  FACE_TOO_SMALL = 'FACE_TOO_SMALL',     // Face too small
  FACE_TOO_LARGE = 'FACE_TOO_LARGE',     // Face too large
  FACE_NOT_FRONTAL = 'FACE_NOT_FRONTAL', // Face not frontal enough
  FACE_NOT_REAL = 'FACE_NOT_REAL',       // Suspected spoofing
  FACE_NOT_LIVE = 'FACE_NOT_LIVE',       // Liveness score too low
  FACE_LOW_QUALITY = 'FACE_LOW_QUALITY', // Image quality too low
  FACE_CHECK_PASS = 'FACE_CHECK_PASS'    // All checks passed ✅
}
```

**Example:**
```typescript
engine.on('detector-info', (data) => {
  console.log({
    status: data.code,
    silentPassed: data.passed ? '✅' : '❌',
    quality: `${(data.imageQuality * 100).toFixed(1)}%`,
    frontal: `${(data.faceFrontal * 100).toFixed(1)}%`,
    motion: `${(data.motionScore * 100).toFixed(1)}%`,
    screen: `${(data.screenConfidence * 100).toFixed(1)}%`
  })
})
```
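
Because `detector-info` fires for every processed frame, driving DOM updates straight from the handler can be wasteful. A small leading-edge throttle (an illustrative helper, not part of the library) caps the UI update rate:

```typescript
// Returns a wrapper that invokes fn at most once per intervalMs.
function throttle<T extends unknown[]>(fn: (...args: T) => void, intervalMs: number) {
  let last = 0
  return (...args: T) => {
    const now = Date.now()
    if (now - last >= intervalMs) {
      last = now
      fn(...args)
    }
  }
}

// Update the UI at most every 250 ms despite per-frame events.
const renderInfo = throttle((data: { imageQuality: number }) => {
  console.log(`quality: ${(data.imageQuality * 100).toFixed(1)}%`)
}, 250)
```

Registering the throttled handler with `engine.on('detector-info', renderInfo)` keeps the per-frame cost low.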

---

### 👤 detector-action

**Action liveness prompt and recognition status**

```typescript
interface DetectorActionEventData {
  action: LivenessAction       // Action to perform
  status: LivenessActionStatus // Action status
}

enum LivenessAction {
  BLINK = 'blink',           // Blink
  MOUTH_OPEN = 'mouth_open', // Open mouth
  NOD = 'nod'                // Nod
}

enum LivenessActionStatus {
  STARTED = 'started',     // Prompt started
  COMPLETED = 'completed', // Successfully recognized
  TIMEOUT = 'timeout'      // Recognition timeout
}
```

**Example:**
```typescript
engine.on('detector-action', (data) => {
  const actionLabels = {
    'blink': 'Blink',
    'mouth_open': 'Open mouth',
    'nod': 'Nod'
  }

  switch (data.status) {
    case 'started':
      console.log(`👤 Please perform: ${actionLabels[data.action]}`)
      // Show UI prompt
      break
    case 'completed':
      console.log(`✅ Recognized: ${actionLabels[data.action]}`)
      // Update progress bar
      break
    case 'timeout':
      console.log(`⏱️ Timeout: ${actionLabels[data.action]}`)
      // Show retry prompt
      break
  }
})
```

---

### ✅ detector-finish

**Detection process complete (success or failure)**

```typescript
interface DetectorFinishEventData {
  success: boolean              // Whether verification passed
  silentPassedCount: number     // Silent detection passes
  actionPassedCount: number     // Actions completed
  totalTime: number             // Total elapsed time (ms)
  bestQualityScore: number      // Best image quality (0-1)
  bestFrameImage: string | null // Base64 frame image
  bestFaceImage: string | null  // Base64 face image
}
```

**Example:**
```typescript
engine.on('detector-finish', (data) => {
  if (data.success) {
    console.log('🎉 Liveness verification successful!', {
      silentPassed: `${data.silentPassedCount} times`,
      actionsCompleted: `${data.actionPassedCount} times`,
      bestQuality: `${(data.bestQualityScore * 100).toFixed(1)}%`,
      totalTime: `${(data.totalTime / 1000).toFixed(2)}s`
    })

    // Upload result to server
    if (data.bestFrameImage) {
      uploadToServer({
        image: data.bestFrameImage,
        quality: data.bestQualityScore,
        timestamp: new Date()
      })
    }
  } else {
    console.log('❌ Verification failed, please retry')
  }
})
```
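
For callers that prefer `async/await` over callbacks, the finish event can be wrapped in a Promise. A hedged sketch that assumes only the `on` method shown above (`waitForFinish` and the narrowed interfaces are illustrative, not library APIs):

```typescript
// Minimal view of the engine surface this helper relies on.
interface FinishData { success: boolean; bestFrameImage: string | null }
interface FinishSource {
  on(event: 'detector-finish', handler: (data: FinishData) => void): void
}

// Resolves with the finish payload; rejects when verification fails.
function waitForFinish(engine: FinishSource): Promise<FinishData> {
  return new Promise((resolve, reject) => {
    engine.on('detector-finish', data => {
      data.success ? resolve(data) : reject(new Error('liveness verification failed'))
    })
  })
}
```

`const result = await waitForFinish(engine)` then lets the upload logic live in ordinary sequential code. Note the sketch registers a persistent listener; if the engine exposes a one-shot subscription, that would be preferable.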

---

### ⚠️ detector-error

**Error occurred during detection**

```typescript
interface DetectorErrorEventData {
  code: ErrorCode // Error code
  message: string // Error message
}

enum ErrorCode {
  DETECTOR_NOT_INITIALIZED = 'DETECTOR_NOT_INITIALIZED',
  CAMERA_ACCESS_DENIED = 'CAMERA_ACCESS_DENIED',
  STREAM_ACQUISITION_FAILED = 'STREAM_ACQUISITION_FAILED',
  SUSPECTED_FRAUDS_DETECTED = 'SUSPECTED_FRAUDS_DETECTED'
}
```

**Example:**
```typescript
engine.on('detector-error', (error) => {
  const errorMessages: Record<string, string> = {
    'DETECTOR_NOT_INITIALIZED': 'Engine not initialized',
    'CAMERA_ACCESS_DENIED': 'Camera permission denied',
    'STREAM_ACQUISITION_FAILED': 'Failed to acquire camera stream',
    'SUSPECTED_FRAUDS_DETECTED': 'Spoofing detected'
  }

  console.error(`❌ Error [${error.code}]: ${errorMessages[error.code] || error.message}`)
  showUserErrorPrompt(errorMessages[error.code])
})
```

---

### 🐛 detector-debug

**Debug information for development and troubleshooting**

```typescript
interface DetectorDebugEventData {
  level: 'info' | 'warn' | 'error' // Log level
  stage: string                    // Processing stage
  message: string                  // Debug message
  details?: Record<string, any>    // Additional details
  timestamp: number                // Unix timestamp
}
```

**Example:**
```typescript
engine.on('detector-debug', (debug) => {
  const time = new Date(debug.timestamp).toLocaleTimeString()
  const prefix = `[${time}] [${debug.stage}]`

  if (debug.level === 'error') {
    console.error(`${prefix} ❌ ${debug.message}`, debug.details)
  } else {
    console.log(`${prefix} ℹ️ ${debug.message}`)
  }
})
```

---

## 📖 Type Definitions

### LivenessAction
```typescript
enum LivenessAction {
  BLINK = 'blink',           // Blink
  MOUTH_OPEN = 'mouth_open', // Open mouth
  NOD = 'nod'                // Nod
}
```

### LivenessActionStatus
```typescript
enum LivenessActionStatus {
  STARTED = 'started',     // Prompt started
  COMPLETED = 'completed', // Successfully recognized
  TIMEOUT = 'timeout'      // Recognition timeout
}
```

### DetectionCode
```typescript
enum DetectionCode {
  VIDEO_NO_FACE = 'VIDEO_NO_FACE',       // No face in video
  MULTIPLE_FACE = 'MULTIPLE_FACE',       // Multiple faces
  FACE_TOO_SMALL = 'FACE_TOO_SMALL',     // Face below minimum size
  FACE_TOO_LARGE = 'FACE_TOO_LARGE',     // Face above maximum size
  FACE_NOT_FRONTAL = 'FACE_NOT_FRONTAL', // Face angle not frontal
  FACE_NOT_REAL = 'FACE_NOT_REAL',       // Suspected spoofing
  FACE_NOT_LIVE = 'FACE_NOT_LIVE',       // Liveness score below threshold
  FACE_LOW_QUALITY = 'FACE_LOW_QUALITY', // Image quality below threshold
  FACE_CHECK_PASS = 'FACE_CHECK_PASS'    // All checks passed ✅
}
```

### ErrorCode
```typescript
enum ErrorCode {
  DETECTOR_NOT_INITIALIZED = 'DETECTOR_NOT_INITIALIZED',   // Engine not initialized
  CAMERA_ACCESS_DENIED = 'CAMERA_ACCESS_DENIED',           // Camera permission denied
  STREAM_ACQUISITION_FAILED = 'STREAM_ACQUISITION_FAILED', // Failed to get video stream
  SUSPECTED_FRAUDS_DETECTED = 'SUSPECTED_FRAUDS_DETECTED'  // Spoofing/fraud detected
}
```

---
## 🎓 Advanced Usage & Examples

### Complete Vue 3 Demo Project

For comprehensive examples and advanced usage patterns, refer to the official demo project.

The **[Vue Demo Project](https://github.com/sssxyd/face-liveness-detector/tree/main/demos/vue-demo/)** includes:

- ✅ Complete Vue 3 + TypeScript integration
- ✅ Real-time detection result visualization
- ✅ Dynamic configuration panel
- ✅ Complete event handling examples
- ✅ Real-time debug panel
- ✅ Responsive mobile + desktop UI
- ✅ Error handling and user feedback
- ✅ Result export and image capture

**Quick Start Demo:**

```bash
cd demos/vue-demo
npm install
npm run dev
```

Then open the local URL shown in your browser.

---

## 📥 Deploying Model Files Locally

### Why Deploy Locally?

- 🚀 **Better Performance** - Avoids CDN latency
- 🔒 **Privacy Protection** - Fully offline operation
- 🌐 **Network Independence** - No external dependencies

### Available Scripts

Two download scripts are provided in the project root:

#### 1️⃣ Copy Human.js Models

```bash
node copy-models.js
```

**Features:**
- Copies models from `node_modules/@vladmandic/human/models`
- Saves them to the `public/models/` directory
- Includes the `.json` and `.bin` model files
- Automatically displays file sizes and progress

#### 2️⃣ Download TensorFlow WASM Files

```bash
node download-wasm.js
```

**Features:**
- Automatically downloads the TensorFlow.js WASM backend
- Saves it to the `public/wasm/` directory
- Downloads 4 essential files:
  - `tf-backend-wasm.min.js`
  - `tfjs-backend-wasm.wasm`
  - `tfjs-backend-wasm-simd.wasm`
  - `tfjs-backend-wasm-threaded-simd.wasm`
- **Multiple CDN sources** with automatic fallback:
  1. unpkg.com (recommended)
  2. cdn.jsdelivr.net
  3. esm.sh
  4. cdn.esm.sh

### Configure the Project to Use Local Files

After downloading, specify the local paths when initializing the engine:

```typescript
const engine = new FaceDetectionEngine({
  // Use local files instead of a CDN
  human_model_path: '/models',
  tensorflow_wasm_path: '/wasm',

  // Other configuration...
})
```

### Automated Setup (Recommended)

Configure a `postinstall` hook in `package.json` so the files are fetched automatically:

```json
{
  "scripts": {
    "postinstall": "node scripts/copy-models.js && node scripts/download-wasm.js"
  }
}
```

---

## 🌐 Browser Compatibility

| Browser | Version | Support | Notes |
|---------|---------|---------|-------|
| Chrome  | 60+     | ✅      | Full support |
| Firefox | 55+     | ✅      | Full support |
| Safari  | 11+     | ✅      | Full support |
| Edge    | 79+     | ✅      | Full support |

**System Requirements:**

- 📱 Modern browser supporting **WebRTC**
- 🔒 **HTTPS environment** (localhost works for development)
- ⚙️ **WebGL** or **WASM** backend support
- 📹 **User authorization** - Camera permission required
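
These requirements can be verified up front, before constructing the engine, so users see a clear message instead of a camera failure. An illustrative pure check (the `EnvFlags` shape is hypothetical; in a browser the flags would come from `navigator.mediaDevices`, `window.isSecureContext`, and WebGL/WASM probes):

```typescript
// Environment capabilities, gathered by the caller.
interface EnvFlags {
  hasWebRTC: boolean       // e.g. !!navigator.mediaDevices?.getUserMedia
  isSecureContext: boolean // e.g. window.isSecureContext
  hasWebGLOrWasm: boolean  // result of a backend probe
}

// Returns human-readable reasons the environment is unsuitable.
function missingRequirements(env: EnvFlags): string[] {
  const missing: string[] = []
  if (!env.hasWebRTC) missing.push('WebRTC (camera capture) is unavailable')
  if (!env.isSecureContext) missing.push('page must be served over HTTPS (or localhost)')
  if (!env.hasWebGLOrWasm) missing.push('neither WebGL nor WASM backend is available')
  return missing
}
```

An empty result means all checks passed and the engine can be constructed.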

---

## 📄 License

[MIT License](./LICENSE) - Free to use and modify

## 🤝 Contributing

Issues and Pull Requests are welcome!

---

<div align="center">

**[⬆ Back to Top](#face-liveness-detection-engine)**

Made with ❤️ by [sssxyd](https://github.com/sssxyd)

</div>