pyfaceau 1.0.3__cp312-cp312-win_amd64.whl
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- pyfaceau/__init__.py +19 -0
- pyfaceau/alignment/__init__.py +0 -0
- pyfaceau/alignment/calc_params.py +671 -0
- pyfaceau/alignment/face_aligner.py +352 -0
- pyfaceau/alignment/numba_calcparams_accelerator.py +244 -0
- pyfaceau/cython_histogram_median.cp312-win_amd64.pyd +0 -0
- pyfaceau/cython_rotation_update.cp312-win_amd64.pyd +0 -0
- pyfaceau/detectors/__init__.py +0 -0
- pyfaceau/detectors/pfld.py +128 -0
- pyfaceau/detectors/retinaface.py +352 -0
- pyfaceau/download_weights.py +134 -0
- pyfaceau/features/__init__.py +0 -0
- pyfaceau/features/histogram_median_tracker.py +335 -0
- pyfaceau/features/pdm.py +269 -0
- pyfaceau/features/triangulation.py +64 -0
- pyfaceau/parallel_pipeline.py +462 -0
- pyfaceau/pipeline.py +1083 -0
- pyfaceau/prediction/__init__.py +0 -0
- pyfaceau/prediction/au_predictor.py +434 -0
- pyfaceau/prediction/batched_au_predictor.py +269 -0
- pyfaceau/prediction/model_parser.py +337 -0
- pyfaceau/prediction/running_median.py +318 -0
- pyfaceau/prediction/running_median_fallback.py +200 -0
- pyfaceau/processor.py +270 -0
- pyfaceau/refinement/__init__.py +12 -0
- pyfaceau/refinement/svr_patch_expert.py +361 -0
- pyfaceau/refinement/targeted_refiner.py +362 -0
- pyfaceau/utils/__init__.py +0 -0
- pyfaceau/utils/cython_extensions/cython_histogram_median.c +35391 -0
- pyfaceau/utils/cython_extensions/cython_histogram_median.pyx +316 -0
- pyfaceau/utils/cython_extensions/cython_rotation_update.c +32262 -0
- pyfaceau/utils/cython_extensions/cython_rotation_update.pyx +211 -0
- pyfaceau/utils/cython_extensions/setup.py +47 -0
- pyfaceau-1.0.3.data/scripts/pyfaceau_gui.py +302 -0
- pyfaceau-1.0.3.dist-info/METADATA +466 -0
- pyfaceau-1.0.3.dist-info/RECORD +40 -0
- pyfaceau-1.0.3.dist-info/WHEEL +5 -0
- pyfaceau-1.0.3.dist-info/entry_points.txt +3 -0
- pyfaceau-1.0.3.dist-info/licenses/LICENSE +40 -0
- pyfaceau-1.0.3.dist-info/top_level.txt +1 -0
+++ pyfaceau-1.0.3.dist-info/METADATA
@@ -0,0 +1,466 @@
Metadata-Version: 2.4
Name: pyfaceau
Version: 1.0.3
Summary: Pure Python OpenFace 2.2 AU extraction with CLNF landmark refinement
Home-page: https://github.com/johnwilsoniv/face-analysis
Author: John Wilson
Author-email:
License: CC BY-NC 4.0
Project-URL: Homepage, https://github.com/johnwilsoniv/face-analysis
Project-URL: Documentation, https://github.com/johnwilsoniv/face-analysis/tree/main/S0%20PyfaceAU
Project-URL: Repository, https://github.com/johnwilsoniv/face-analysis
Project-URL: Bug Tracker, https://github.com/johnwilsoniv/face-analysis/issues
Keywords: facial-action-units,openface,computer-vision,facial-analysis,emotion-recognition
Classifier: Development Status :: 5 - Production/Stable
Classifier: Intended Audience :: Science/Research
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Scientific/Engineering :: Image Recognition
Classifier: License :: Other/Proprietary License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Operating System :: OS Independent
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: numpy>=1.20.0
Requires-Dist: opencv-python>=4.5.0
Requires-Dist: pandas>=1.3.0
Requires-Dist: onnxruntime>=1.10.0
Requires-Dist: scipy>=1.7.0
Requires-Dist: scikit-learn>=1.0.0
Requires-Dist: tqdm>=4.62.0
Requires-Dist: pyfhog>=0.1.0
Requires-Dist: Cython>=0.29.0
Provides-Extra: dev
Requires-Dist: pytest>=7.0.0; extra == "dev"
Requires-Dist: black>=22.0.0; extra == "dev"
Requires-Dist: flake8>=4.0.0; extra == "dev"
Provides-Extra: accel
Requires-Dist: onnxruntime-coreml>=1.10.0; extra == "accel"
Dynamic: home-page
Dynamic: license-file
Dynamic: requires-python

# pyfaceau - Action Unit Generation based on Python and OpenFace 2.2

[Python 3.10+](https://www.python.org/downloads/)
[License: CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/)

---

## Overview

pyfaceau is a Python reimplementation of the [OpenFace 2.2](https://github.com/TadasBaltrusaitis/OpenFace) Facial Action Unit extraction pipeline. It achieves **r = 0.92 correlation** with the original C++ implementation while requiring **no C++ compilation** and running on any platform.

### Key Features

- **100% Python** - No C++ compilation required
- **Easy Installation** - `pip install` and go
- **High Accuracy** - r = 0.92 overall
- **High Performance** - 50-100 FPS with parallel processing (6-10x speedup)
- **Multi-Core Support** - Automatic parallelization across CPU cores
- **Modular** - Use individual components independently
- **17 Action Units** - Full AU extraction (AU01, AU02, AU04, etc.)

---

## Quick Start

### Installation

#### Option 1: Install from PyPI (Recommended)

```bash
# Install pyfaceau
pip install pyfaceau

# Download model weights (14MB)
python -m pyfaceau.download_weights

# Or manually download from GitHub:
# https://github.com/johnwilsoniv/face-analysis/tree/main/S0%20PyfaceAU/weights
```

#### Option 2: Install from Source

```bash
# Clone the repository
git clone https://github.com/johnwilsoniv/face-analysis.git
cd "face-analysis/S0 PyfaceAU"

# Install in development mode
pip install -e .

# Model weights are included in the repository
```

### Basic Usage

#### High-Performance Mode (Recommended - 50-100 FPS)

```python
from pyfaceau import ParallelAUPipeline

# Initialize the parallel pipeline
pipeline = ParallelAUPipeline(
    retinaface_model='weights/retinaface_mobilenet025_coreml.onnx',
    pfld_model='weights/pfld_cunjian.onnx',
    pdm_file='weights/In-the-wild_aligned_PDM_68.txt',
    au_models_dir='path/to/AU_predictors',
    triangulation_file='weights/tris_68_full.txt',
    num_workers=6,   # Adjust based on CPU cores
    batch_size=30
)

# Process a video
results = pipeline.process_video(
    video_path='input.mp4',
    output_csv='results.csv'
)

print(f"Processed {len(results)} frames")
# Typical throughput: ~28-50 FPS depending on CPU core count
```
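A reasonable default for `num_workers` can be derived from the machine itself. This is a convenience sketch, not part of the pyfaceau API; reserving a couple of cores for the main process is an assumption, not a library requirement:

```python
import os

def pick_num_workers(reserve: int = 2) -> int:
    """Pick a worker count: total CPUs minus a small reserve, at least 1."""
    total = os.cpu_count() or 1
    return max(1, total - reserve)

print(pick_num_workers())  # e.g. 6 on an 8-core machine
```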
#### Standard Mode (4.6 FPS)

```python
from pyfaceau import FullPythonAUPipeline

# Initialize the standard pipeline
pipeline = FullPythonAUPipeline(
    retinaface_model='weights/retinaface_mobilenet025_coreml.onnx',
    pfld_model='weights/pfld_cunjian.onnx',
    pdm_file='weights/In-the-wild_aligned_PDM_68.txt',
    au_models_dir='path/to/AU_predictors',
    triangulation_file='weights/tris_68_full.txt',
    use_calc_params=True,
    use_coreml=True,   # macOS only
    verbose=False
)

# Process a video
results = pipeline.process_video(
    video_path='input.mp4',
    output_csv='results.csv'
)
```

### Example Output

```csv
frame,success,AU01_r,AU02_r,AU04_r,AU06_r,AU12_r,...
0,True,0.60,0.90,0.00,1.23,2.45,...
1,True,0.55,0.85,0.00,1.20,2.50,...
```
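The CSV loads directly into pandas for downstream analysis. A small sketch (the inline string stands in for a real `results.csv`; column names follow the header shown above):

```python
import io
import pandas as pd

# In practice: df = pd.read_csv('results.csv')
csv_text = """frame,success,AU01_r,AU02_r
0,True,0.60,0.90
1,True,0.55,0.85
"""
df = pd.read_csv(io.StringIO(csv_text))

# Keep only frames where the face was detected, then summarize one AU
ok = df[df['success']]
print(ok['AU01_r'].mean())  # → 0.575
```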

---

## Architecture

pyfaceau replicates the complete OpenFace 2.2 AU extraction pipeline:

```
Video Input
    ↓
Face Detection (RetinaFace ONNX)
    ↓
Landmark Detection (PFLD 68-point)
    ↓
3D Pose Estimation (Python implementation of CalcParams, 98% fidelity)
    ↓
Face Alignment
    ↓
HOG Feature Extraction (PyFHOG)
    ↓
Geometric Features (PDM reconstruction)
    ↓
Running Median Tracking (Cython-optimized)
    ↓
AU Prediction (17 SVR models)
    ↓
Output: 17 AU intensities
```
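Conceptually, the stages above reduce to a per-frame loop like the following. This is a schematic sketch with hypothetical stage functions, not the actual pyfaceau API:

```python
def process_frame(frame, state):
    """Schematic of the per-frame AU pipeline; every call is a placeholder."""
    bbox = detect_face(frame)                       # RetinaFace
    landmarks = detect_landmarks(frame, bbox)       # PFLD, 68 points
    pose = calc_params(landmarks)                   # 3D pose (CalcParams)
    aligned = align_face(frame, landmarks, pose)    # 112x112 aligned face
    hog = extract_hog(aligned)                      # PyFHOG features
    geom = pdm_features(landmarks)                  # PDM geometric features
    geom = state.median_tracker.update(geom)        # running-median smoothing
    return predict_aus(hog, geom)                   # 17 SVR models
```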

---

## Custom Components & Innovations

pyfaceau includes several novel components that can be used independently in other projects:

### Python-based CalcParams - 3D Pose Estimation

A pure Python implementation of OpenFace's CalcParams algorithm for 3D head pose estimation. Achieves 99.45% correlation with the C++ reference implementation.

```python
from pyfaceau.alignment import CalcParams

# Initialize with the PDM model
calc_params = CalcParams(pdm_file='weights/In-the-wild_aligned_PDM_68.txt')

# Estimate 3D pose from 2D landmarks
params_local, params_global, detected_landmarks = calc_params.estimate_pose(
    landmarks_2d,  # 68x2 array of detected landmarks
    img_width,
    img_height
)

# Extract pose parameters
tx, ty = params_global[4], params_global[5]  # Translation
rx, ry, rz = params_global[1:4]              # Rotation (radians)
scale = params_global[0]                     # Scale factor
```

**Use cases:** Head pose tracking, gaze estimation, facial alignment
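For reporting, the global rotation parameters can be converted to degrees or composed into a rotation matrix. A sketch; the `Rx @ Ry @ Rz` composition order is an assumption borrowed from OpenFace's Euler conventions and should be verified against your CalcParams output:

```python
import numpy as np

def euler_to_rotation_matrix(rx, ry, rz):
    """Compose a 3x3 rotation matrix from Euler angles in radians.
    Assumes an Rx @ Ry @ Rz order (an assumption; verify per convention)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rx @ Ry @ Rz

R = euler_to_rotation_matrix(0.1, -0.05, 0.2)
print(np.degrees([0.1, -0.05, 0.2]))  # head pose in degrees, for reporting
```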

### CLNF Landmark Refinement

Constrained Local Neural Fields (CLNF) refinement using SVR patch experts for improved landmark accuracy. Particularly effective for challenging poses and expressions.

```python
from pyfaceau.detectors import CLNFRefiner

# Initialize the refiner
refiner = CLNFRefiner(
    pdm_file='weights/In-the-wild_aligned_PDM_68.txt',
    patch_expert_file='weights/svr_patches_0.25_general.txt'
)

# Refine landmarks
refined_landmarks = refiner.refine_landmarks(
    frame,
    initial_landmarks,
    face_bbox,
    num_iterations=5
)
```

**Use cases:** Landmark tracking, facial feature extraction, expression analysis

### Cython Histogram Median Tracker (260x speedup)

High-performance running median tracking for temporal smoothing of geometric features. Implements OpenFace's histogram-based median algorithm in optimized Cython.

```python
from pyfaceau.features import HistogramMedianTracker

# Initialize the tracker
tracker = HistogramMedianTracker(
    num_features=136,   # 68 landmarks x 2 (x, y)
    history_length=120
)

# Update with a new frame
smoothed_features = tracker.update(current_features)
```

**Use cases:** Temporal smoothing, noise reduction, video feature tracking
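The underlying idea — bucket each feature's history into a fixed histogram and walk the bins to the middle count — can be sketched in plain Python. This is an illustrative reimplementation, not the Cython code; the value range and bin count are hypothetical parameters:

```python
import numpy as np

class SimpleHistogramMedian:
    """Approximate running median over a fixed value range via a histogram.
    Illustrative only; lo/hi/bins are hypothetical, not pyfaceau's."""

    def __init__(self, lo=-1.0, hi=1.0, bins=200):
        self.lo, self.hi, self.bins = lo, hi, bins
        self.counts = np.zeros(bins, dtype=np.int64)
        self.n = 0

    def add(self, x):
        # Clip into range, then map to a bin index
        idx = int((np.clip(x, self.lo, self.hi) - self.lo)
                  / (self.hi - self.lo) * (self.bins - 1))
        self.counts[idx] += 1
        self.n += 1

    def median(self):
        # Walk bins until half the samples are covered; return the bin center
        target = (self.n + 1) // 2
        cum = np.cumsum(self.counts)
        idx = int(np.searchsorted(cum, target))
        return self.lo + (idx + 0.5) * (self.hi - self.lo) / self.bins

m = SimpleHistogramMedian()
for v in [0.1, 0.2, 0.3, 0.4, 0.5]:
    m.add(v)
print(m.median())  # ≈ 0.295; close to the true median 0.3, to bin resolution
```

The payoff is O(bins) worst case per query instead of re-sorting the history, which is what makes the Cython version fast enough for per-frame use.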

### Batched AU Predictor

Optimized AU prediction using batch processing for HOG features. Reduces overhead when processing multiple frames.

```python
from pyfaceau.prediction import BatchedAUPredictor

# Initialize the predictor
predictor = BatchedAUPredictor(
    au_models_dir='weights/AU_predictors',
    batch_size=30
)

# Predict AUs for multiple frames
au_results = predictor.predict_batch(
    hog_features_list,   # List of HOG feature arrays
    geom_features_list   # List of geometric feature arrays
)
```

**Use cases:** Video processing, batch AU extraction, real-time analysis

### OpenFace22 Face Aligner

Pure Python implementation of OpenFace 2.2's face alignment algorithm. Produces pixel-perfect aligned faces matching the C++ implementation.

```python
from pyfaceau.alignment import OpenFace22FaceAligner

# Initialize the aligner
aligner = OpenFace22FaceAligner(
    pdm_file='weights/In-the-wild_aligned_PDM_68.txt',
    triangulation_file='weights/tris_68_full.txt'
)

# Align a face for AU extraction
aligned_face = aligner.align_face(
    frame,
    landmarks_2d,
    tx, ty, rz  # From CalcParams
)
```

**Output:** 112x112 RGB aligned face, ready for HOG extraction

**Use cases:** Face normalization, AU extraction preprocessing, facial feature analysis

---

## Supported Action Units

pyfaceau extracts 17 Facial Action Units:

**Dynamic AUs (11):**
- AU01 - Inner Brow Raiser
- AU02 - Outer Brow Raiser
- AU05 - Upper Lid Raiser
- AU09 - Nose Wrinkler
- AU15 - Lip Corner Depressor
- AU17 - Chin Raiser
- AU20 - Lip Stretcher
- AU23 - Lip Tightener
- AU25 - Lips Part
- AU26 - Jaw Drop
- AU45 - Blink

**Static AUs (6):**
- AU04 - Brow Lowerer
- AU06 - Cheek Raiser
- AU07 - Lid Tightener
- AU10 - Upper Lip Raiser
- AU12 - Lip Corner Puller
- AU14 - Dimpler
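For labeling output columns, the code-to-name mapping above can be kept as a small lookup table. A convenience snippet, not part of the pyfaceau API:

```python
# FACS names for the 17 AUs pyfaceau reports (from the lists above)
AU_NAMES = {
    'AU01': 'Inner Brow Raiser',    'AU02': 'Outer Brow Raiser',
    'AU04': 'Brow Lowerer',         'AU05': 'Upper Lid Raiser',
    'AU06': 'Cheek Raiser',         'AU07': 'Lid Tightener',
    'AU09': 'Nose Wrinkler',        'AU10': 'Upper Lip Raiser',
    'AU12': 'Lip Corner Puller',    'AU14': 'Dimpler',
    'AU15': 'Lip Corner Depressor', 'AU17': 'Chin Raiser',
    'AU20': 'Lip Stretcher',        'AU23': 'Lip Tightener',
    'AU25': 'Lips Part',            'AU26': 'Jaw Drop',
    'AU45': 'Blink',
}

def describe(column: str) -> str:
    """Map a CSV column like 'AU12_r' to a human-readable label."""
    return AU_NAMES.get(column.split('_')[0], column)

print(describe('AU12_r'))  # → Lip Corner Puller
```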

---

## Requirements

### Python Dependencies

```
python >= 3.10
numpy >= 1.20.0
opencv-python >= 4.5.0
pandas >= 1.3.0
scipy >= 1.7.0
onnxruntime >= 1.10.0
pyfhog >= 0.1.0
```

### Model Files

Download the OpenFace 2.2 AU predictor models:
- Available from: [OpenFace repository](https://github.com/TadasBaltrusaitis/OpenFace)
- Place in: `AU_predictors/` directory
- Required: 17 `.dat` files (AU_1_dynamic_intensity_comb.dat, etc.)
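Before starting a long run, it is worth a quick sanity check that the predictor directory is populated. A sketch; the exact `.dat` filenames vary by AU and are not enumerated here:

```python
from pathlib import Path

def check_au_models(au_dir='AU_predictors', expected=17):
    """Return the .dat files found in au_dir; warn if fewer than expected."""
    files = sorted(Path(au_dir).glob('*.dat'))
    if len(files) < expected:
        print(f'Warning: found {len(files)}/{expected} AU model files in {au_dir}')
    return files
```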

---

## Project Structure

```
S0 PyfaceAU/
├── pyfaceau/                  # Core library
│   ├── pipeline.py            # Full AU extraction pipeline
│   ├── detectors/             # Face and landmark detection
│   ├── alignment/             # Face alignment and pose estimation
│   ├── features/              # HOG and geometric features
│   ├── prediction/            # AU prediction and running median
│   └── utils/                 # Utilities and Cython extensions
├── weights/                   # Model weights
├── tests/                     # Test suite
├── examples/                  # Usage examples
└── docs/                      # Documentation
```

---

## Advanced Usage

### Process a Single Frame

```python
from pyfaceau import FullPythonAUPipeline
import cv2

pipeline = FullPythonAUPipeline(...)

# Read a frame
frame = cv2.imread('image.jpg')

# Process (requires landmarks and pose from a CSV or detector)
aligned = pipeline.aligner.align_face(frame, landmarks, tx, ty, rz)
hog_features = pipeline.extract_hog(aligned)
aus = pipeline.predict_aus(hog_features, geom_features)
```

### Use Individual Components

```python
# Face detection only
from pyfaceau.detectors import ONNXRetinaFaceDetector
detector = ONNXRetinaFaceDetector('weights/retinaface_mobilenet025_coreml.onnx')
faces = detector.detect_faces(frame)

# Landmark detection only
from pyfaceau.detectors import CunjianPFLDDetector
landmarker = CunjianPFLDDetector('weights/pfld_cunjian.onnx')
landmarks, conf = landmarker.detect_landmarks(frame, bbox)

# Face alignment only
from pyfaceau.alignment import OpenFace22FaceAligner
aligner = OpenFace22FaceAligner('weights/In-the-wild_aligned_PDM_68.txt')
aligned = aligner.align_face(frame, landmarks, tx, ty, rz)
```

---

## Citation

If you use pyfaceau in your research, please cite:

```bibtex
@article{wilson2025splitface,
  title={A Split-Face Computer Vision/Machine Learning Assessment of Facial Paralysis Using Facial Action Units},
  author={Wilson IV, John and Rosenberg, Joshua and Gray, Mingyang L and Razavi, Christopher R},
  journal={Facial Plastic Surgery \& Aesthetic Medicine},
  year={2025},
  publisher={Mary Ann Liebert, Inc.}
}
```

Also cite the original OpenFace:

```bibtex
@inproceedings{baltrusaitis2018openface,
  title={OpenFace 2.0: Facial behavior analysis toolkit},
  author={Baltru{\v{s}}aitis, Tadas and Zadeh, Amir and Lim, Yao Chong and Morency, Louis-Philippe},
  booktitle={2018 13th IEEE International Conference on Automatic Face \& Gesture Recognition (FG 2018)},
  pages={59--66},
  year={2018},
  organization={IEEE}
}
```

---

## Acknowledgments

- **OpenFace** - Original C++ implementation by Tadas Baltrusaitis
- **PyFHOG** - HOG feature extraction library
- **RetinaFace** - Face detection model
- **PFLD** - Landmark detection by Cunjian Chen

---

## Support

- **Issues:** https://github.com/johnwilsoniv/face-analysis/issues
- **Documentation:** [docs/](docs/)
- **Examples:** [examples/](examples/)

---

**Built for the facial behavior research community**

+++ pyfaceau-1.0.3.dist-info/RECORD
@@ -0,0 +1,40 @@
pyfaceau/__init__.py,sha256=zlulUCRRF_Px6d6-y2HVldKRbqoqhLtNr3qeHXcgNVA,521
pyfaceau/cython_histogram_median.cp312-win_amd64.pyd,sha256=YYhT5T9I_czuyQzeeuw-6upW-P_wi1vL25B3rudKAiU,182272
pyfaceau/cython_rotation_update.cp312-win_amd64.pyd,sha256=4clouNzAGFzgT-viUy2evQ7F3jD-njxfXQlocFxJVJk,161792
pyfaceau/download_weights.py,sha256=Mwjl6SOeLF-_DV504HFpBhOhWiiNvkNpHK-7PT76YvA,4167
pyfaceau/parallel_pipeline.py,sha256=CUE_fHVlg_BcGLj9e-citaqEKfV304TyLH6mOxlNajE,16676
pyfaceau/pipeline.py,sha256=d9wkQZHz6THEleGz3ASEvEzhwGc6rogqb05GiwMmL6A,42990
pyfaceau/processor.py,sha256=zdhCQ1yHvYkfOlgqlfMJZp_pUgtukTpH04mT_UcV4S8,9194
pyfaceau/alignment/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
pyfaceau/alignment/calc_params.py,sha256=-_ON-ABDHT0SSGUHsbaP6n-URPPIXB4TtAoX0h5ZQWA,25819
pyfaceau/alignment/face_aligner.py,sha256=obujWILyree0SH48SDH1GJBdqiLEzyfz07siA98Z-2U,13857
pyfaceau/alignment/numba_calcparams_accelerator.py,sha256=AJ8yiINdtxrfAW3kmZzXLiDxC4dzE--PqWqjY2UiiV8,7885
pyfaceau/detectors/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
pyfaceau/detectors/pfld.py,sha256=PzYHrEJCQwAXtIF2OvBpZYWzc23wiC9eSGhjTCBzLwo,4752
pyfaceau/detectors/retinaface.py,sha256=_N9WMwfSzJ2FMPhc_lgfTGTDWf4ot68EOlzIzIk3OXE,14103
pyfaceau/features/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
pyfaceau/features/histogram_median_tracker.py,sha256=qzRGQBJue8IEPOOQxgSAHNc52Dt3gHky5nQtcwIp4jA,12152
pyfaceau/features/pdm.py,sha256=Dp8pXhjeTe_f2Mnt7xYOAOwfJiXcdOFxG1M-RBjDjVE,9094
pyfaceau/features/triangulation.py,sha256=0yvjxmh46tc1H1Y7AtM1oI1fburKAYPF8a9tvt4LJrA,2059
pyfaceau/prediction/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
pyfaceau/prediction/au_predictor.py,sha256=Ff4jcAsxaGArT0s3pcubOvjZlg9eKxZ743qvS4bAWjI,15490
pyfaceau/prediction/batched_au_predictor.py,sha256=zmCJGKGiz6qBTh1dJI8PWlIeQUet-4NDy9gwyrd4hA4,9492
pyfaceau/prediction/model_parser.py,sha256=g966F0JbiSOzauANl1hoLqLiFl8tmzrw4gSSYOLMRnI,12699
pyfaceau/prediction/running_median.py,sha256=XpxLddoO3N108RKKzSWYBkZ1aEbcbfQ9OW5wGdQqba8,11359
pyfaceau/prediction/running_median_fallback.py,sha256=F6jxg-Kend_W38pnY6hnioedeV_jWhXflI-PHJLereY,6555
pyfaceau/refinement/__init__.py,sha256=GrfCBuaiCNC1CP8-82GOm_L5qGYHm0Lqd_iW0PLb3sc,464
pyfaceau/refinement/svr_patch_expert.py,sha256=zBJeq_ixz02M9GMG_4kM0n3l-HQ91c5-xPN04d9PnkQ,12681
pyfaceau/refinement/targeted_refiner.py,sha256=4-d0ihhj6r0cVRIERhrPXZRtkNT8Tz7jg7L4hXaew9E,13035
pyfaceau/utils/__init__.py,sha256=47DEQpj8HBSa-_TImW-5JCeuQeRkm5NMpJWZG3hSuFU,0
pyfaceau/utils/cython_extensions/cython_histogram_median.c,sha256=GfB6Dv3UZi9IZphOTbFGdKuefbWxUDxT8A3lhI3CvbA,1453037
pyfaceau/utils/cython_extensions/cython_histogram_median.pyx,sha256=ZquDcBxQzeQvfZL7sjAJ6vDwd9BJYbT7vI-gyd1K06c,10772
pyfaceau/utils/cython_extensions/cython_rotation_update.c,sha256=OkvawBBJ_3xuUgJaUbpaNAc8JD1Het0HfwNIYXUE6eA,1279833
pyfaceau/utils/cython_extensions/cython_rotation_update.pyx,sha256=7n2VXIBfU62HPGglYPocVgSm-Z3Li6Rl1Q6Uw6Xznr8,7122
pyfaceau/utils/cython_extensions/setup.py,sha256=lu92FxUCw7SlmCYwhaILidnwPXLGqP21k8g15WUW2Ts,1231
pyfaceau-1.0.3.data/scripts/pyfaceau_gui.py,sha256=mdQrSA7QN7essogtnZ7WmK2n0NBB9DEP88YyTHcwJxs,9431
pyfaceau-1.0.3.dist-info/licenses/LICENSE,sha256=te63_pyBiqyEbWaoSxaVJkRWPlGgEMtjgm2x01wyR4E,1786
pyfaceau-1.0.3.dist-info/METADATA,sha256=5t--39vio3KIvGACWpR8NUF-9_VpmFMZtE7-ud1Lhh0,13453
pyfaceau-1.0.3.dist-info/WHEEL,sha256=8UP9x9puWI0P1V_d7K2oMTBqfeLNm21CTzZ_Ptr0NXU,101
pyfaceau-1.0.3.dist-info/entry_points.txt,sha256=nMKRSf4lVJh-w2ooCCWmql618EgI0njNAP-Kv4Am9Wk,86
pyfaceau-1.0.3.dist-info/top_level.txt,sha256=DpF-CfqlMNwwdGnlgEmuHOg9wTXm9PrVURpv5ttXr2o,9
pyfaceau-1.0.3.dist-info/RECORD,,
+++ pyfaceau-1.0.3.dist-info/licenses/LICENSE
@@ -0,0 +1,40 @@
Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)

Copyright (c) 2025 John Wilson IV, MD

This work is licensed under the Creative Commons Attribution-NonCommercial 4.0
International License. To view a copy of this license, visit:
https://creativecommons.org/licenses/by-nc/4.0/

YOU ARE FREE TO:
- Share: copy and redistribute the material in any medium or format
- Adapt: remix, transform, and build upon the material

UNDER THE FOLLOWING TERMS:
- Attribution: You must give appropriate credit, provide a link to the license,
  and indicate if changes were made.
- NonCommercial: You may not use the material for commercial purposes.
- No additional restrictions: You may not apply legal terms or technological
  measures that legally restrict others from doing anything the license permits.

NOTICES:
You do not have to comply with the license for elements of the material in the
public domain or where your use is permitted by an applicable exception or limitation.

No warranties are given. The license may not give you all of the permissions
necessary for your intended use. For example, other rights such as publicity,
privacy, or moral rights may limit how you use the material.

================================================================================

COMMERCIAL LICENSING:

If you wish to use pyfaceau for commercial purposes, you must obtain a separate
commercial license. See COMMERCIAL-LICENSE.md for details or contact:
[Your contact information - email/website]

Commercial use includes, but is not limited to:
- Use in a commercial product or service
- Use to provide commercial services
- Use in a for-profit organization
- Any use where you or your organization receives monetary compensation

+++ pyfaceau-1.0.3.dist-info/top_level.txt
@@ -0,0 +1 @@
pyfaceau