sports2d 0.8.25.tar.gz → 0.8.26.tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (38)
  1. sports2d-0.8.26/.github/workflows/sync_to_hf.yml.bak +19 -0
  2. sports2d-0.8.26/Content/huggingface_demo.png +0 -0
  3. {sports2d-0.8.25/sports2d.egg-info → sports2d-0.8.26}/PKG-INFO +19 -6
  4. {sports2d-0.8.25 → sports2d-0.8.26}/README.md +18 -5
  5. {sports2d-0.8.25 → sports2d-0.8.26}/Sports2D/Demo/Config_demo.toml +8 -5
  6. {sports2d-0.8.25 → sports2d-0.8.26}/Sports2D/Sports2D.py +2 -0
  7. {sports2d-0.8.25 → sports2d-0.8.26}/Sports2D/process.py +184 -139
  8. {sports2d-0.8.25 → sports2d-0.8.26/sports2d.egg-info}/PKG-INFO +19 -6
  9. {sports2d-0.8.25 → sports2d-0.8.26}/sports2d.egg-info/SOURCES.txt +2 -0
  10. {sports2d-0.8.25 → sports2d-0.8.26}/.github/workflows/continuous-integration.yml +0 -0
  11. {sports2d-0.8.25 → sports2d-0.8.26}/.github/workflows/joss_pdf.yml +0 -0
  12. {sports2d-0.8.25 → sports2d-0.8.26}/.github/workflows/publish-on-release.yml +0 -0
  13. {sports2d-0.8.25 → sports2d-0.8.26}/.gitignore +0 -0
  14. {sports2d-0.8.25 → sports2d-0.8.26}/CITATION.cff +0 -0
  15. {sports2d-0.8.25 → sports2d-0.8.26}/Content/Demo_plots.png +0 -0
  16. {sports2d-0.8.25 → sports2d-0.8.26}/Content/Demo_results.png +0 -0
  17. {sports2d-0.8.25 → sports2d-0.8.26}/Content/Demo_terminal.png +0 -0
  18. {sports2d-0.8.25 → sports2d-0.8.26}/Content/Person_selection.png +0 -0
  19. {sports2d-0.8.25 → sports2d-0.8.26}/Content/Video_tuto_Sports2D_Colab.png +0 -0
  20. {sports2d-0.8.25 → sports2d-0.8.26}/Content/joint_convention.png +0 -0
  21. {sports2d-0.8.25 → sports2d-0.8.26}/Content/paper.bib +0 -0
  22. {sports2d-0.8.25 → sports2d-0.8.26}/Content/paper.md +0 -0
  23. {sports2d-0.8.25 → sports2d-0.8.26}/Content/sports2d_blender.gif +0 -0
  24. {sports2d-0.8.25 → sports2d-0.8.26}/Content/sports2d_opensim.gif +0 -0
  25. {sports2d-0.8.25 → sports2d-0.8.26}/LICENSE +0 -0
  26. {sports2d-0.8.25 → sports2d-0.8.26}/Sports2D/Demo/Calib_demo.toml +0 -0
  27. {sports2d-0.8.25 → sports2d-0.8.26}/Sports2D/Demo/demo.mp4 +0 -0
  28. {sports2d-0.8.25 → sports2d-0.8.26}/Sports2D/Sports2D.ipynb +0 -0
  29. {sports2d-0.8.25 → sports2d-0.8.26}/Sports2D/Utilities/__init__.py +0 -0
  30. {sports2d-0.8.25 → sports2d-0.8.26}/Sports2D/Utilities/common.py +0 -0
  31. {sports2d-0.8.25 → sports2d-0.8.26}/Sports2D/Utilities/tests.py +0 -0
  32. {sports2d-0.8.25 → sports2d-0.8.26}/Sports2D/__init__.py +0 -0
  33. {sports2d-0.8.25 → sports2d-0.8.26}/pyproject.toml +0 -0
  34. {sports2d-0.8.25 → sports2d-0.8.26}/setup.cfg +0 -0
  35. {sports2d-0.8.25 → sports2d-0.8.26}/sports2d.egg-info/dependency_links.txt +0 -0
  36. {sports2d-0.8.25 → sports2d-0.8.26}/sports2d.egg-info/entry_points.txt +0 -0
  37. {sports2d-0.8.25 → sports2d-0.8.26}/sports2d.egg-info/requires.txt +0 -0
  38. {sports2d-0.8.25 → sports2d-0.8.26}/sports2d.egg-info/top_level.txt +0 -0
@@ -0,0 +1,19 @@
+ name: Sync to Hugging Face Space
+ on:
+   push:
+     branches: [ main ]
+ jobs:
+   sync:
+     runs-on: ubuntu-latest
+     steps:
+       - uses: actions/checkout@v4
+       - name: Push to Hugging Face Space
+         run: |
+           git clone https://${{ secrets.HUGGINGFACE_TOKEN }}@huggingface.co/spaces/DavidPagnon/sports2d
+           cd sports2d
+           git config --global user.name "DavidPagnon"
+           git config --global user.email "contact@david-pagnon.com"
+           cp -r ../Sports2D/* .
+           git add .
+           git commit -m "Sync from GitHub"
+           git push https://${{ secrets.HUGGINGFACE_TOKEN }}@huggingface.co/spaces/DavidPagnon/sports2d
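The workflow above (shipped disabled, with a `.bak` extension) clones the Space, copies the sources in, and pushes back. A minimal sketch of the same sequence with Python's `subprocess`, for local testing; the token and paths are placeholders, not values from this package:

```python
import subprocess

TOKEN = "hf_xxx"  # placeholder: a real Hugging Face token with write access
SPACE = f"https://{TOKEN}@huggingface.co/spaces/DavidPagnon/sports2d"

def run(*cmd, cwd=None):
    # Run one step, raising on failure (mirrors the workflow's `run: |` shell steps)
    subprocess.run(cmd, cwd=cwd, check=True)

run("git", "clone", SPACE, "sports2d_space")
run("git", "config", "user.name", "DavidPagnon", cwd="sports2d_space")
run("git", "config", "user.email", "contact@david-pagnon.com", cwd="sports2d_space")
run("cp", "-r", "../Sports2D/.", ".", cwd="sports2d_space")  # adjust the source path as needed
run("git", "add", ".", cwd="sports2d_space")
run("git", "commit", "-m", "Sync from GitHub", cwd="sports2d_space")
run("git", "push", SPACE, cwd="sports2d_space")
```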
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: sports2d
- Version: 0.8.25
+ Version: 0.8.26
  Summary: Compute 2D human pose and angles from a video or a webcam.
  Author-email: David Pagnon <contact@david-pagnon.com>
  Maintainer-email: David Pagnon <contact@david-pagnon.com>
@@ -40,6 +40,8 @@ Dynamic: license-file
  [![License](https://img.shields.io/badge/License-BSD_3--Clause-blue.svg)](https://opensource.org/licenses/BSD-3-Clause)
  \
  [![Discord](https://img.shields.io/discord/1183750225471492206?logo=Discord&label=Discord%20community)](https://discord.com/invite/4mXUdSFjmt)
+ [![Hugging Face Space](https://img.shields.io/badge/HuggingFace-Sports2D-yellow?logo=huggingface)](https://huggingface.co/spaces/DavidPagnon/sports2d)
+

  <!-- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://bit.ly/Sports2D_Colab)-->

@@ -52,7 +54,7 @@ Dynamic: license-file
  </br>

  > **`Announcements:`**
- > - Compensate for floor angle, floor height, depth perspective effects, generate a calibration file **New in v0.9!**
+ > - Compensate for floor angle, floor height, depth perspective effects, generate a calibration file **New in v0.8.25!**
  > - Select only the persons you want to analyze **New in v0.8!**
  > - MarkerAugmentation and Inverse Kinematics for accurate 3D motion with OpenSim. **New in v0.7!**
  > - Any detector and pose estimation model can be used. **New in v0.6!**
@@ -80,7 +82,7 @@ https://github.com/user-attachments/assets/2ce62012-f28c-4e23-b3b8-f68931bacb77
  <!-- https://github.com/user-attachments/assets/1c6e2d6b-d0cf-4165-864e-d9f01c0b8a0e -->

  `Warning:` Angle estimation is only as good as the pose estimation algorithm, i.e., it is not perfect.\
- `Warning:` Results are acceptable only if the persons move in the 2D plane (sagittal or frontal plane). The persons need to be filmed as parallel as possible to the motion plane.\
+ `Warning:` Results are acceptable only if the persons move in the 2D plane (sagittal or frontal). The persons need to be filmed as parallel as possible to the motion plane.\
  If you need 3D research-grade markerless joint kinematics, consider using several cameras with **[Pose2Sim](https://github.com/perfanalytics/pose2sim)**.

  <!--`Warning:` Google Colab does not follow the European GDPR requirements regarding data privacy. [Install locally](#installation) if this matters.-->
@@ -90,7 +92,8 @@ If you need 3D research-grade markerless joint kinematics, consider using severa

  ## Contents
  1. [Installation and Demonstration](#installation-and-demonstration)
-    1. [Installation](#installation)
+    1. [Test it on Hugging face](#test-it-on-hugging-face)
+    1. [Local installation](#local-installation)
        1. [Quick install](#quick-install)
        2. [Full install](#full-install)
     2. [Demonstration](#demonstration)
@@ -119,7 +122,16 @@ If you need 3D research-grade markerless joint kinematics, consider using severa

  ## Installation and Demonstration

- ### Installation
+
+ ### Test it on Hugging face
+
+ Test an online, limited version [on Hugging Face](https://huggingface.co/spaces/DavidPagnon/sports2d): [![Hugging Face Space](https://img.shields.io/badge/HuggingFace-Sports2D-yellow?logo=huggingface)](https://huggingface.co/spaces/DavidPagnon/sports2d)
+
+ <img src="Content/huggingface_demo.png" width="760">
+
+
+
+ ### Local installation

  <!--- OPTION 0: **Use Colab** \
  User-friendly (but full) version, also works on a phone or a tablet.\
@@ -424,7 +436,7 @@ sports2d --video_input demo.mp4 other_video.mp4 --time_range 1.2 2.7 0 3.5
  sports2d --calculate_angles false
  ```
  - Flip angles when the person faces the other side.\
-   **N.B.:** *We consider that the person looks to the right if their toe keypoint is to the right of their heel. This is not always true when the person is sprinting, especially in the swing phase. Set it to false if you want timeseries to be continuous even when the participant switches their stance.*
+   **N.B.: Set to false when sprinting.** *We consider that each limb "looks" to the right if the toe keypoint is to the right of the heel one. This is not always true, particularly during the swing phase of sprinting. Set it to false if you want timeseries to be continuous even when the participant switches their stance.*
  ```cmd
  sports2d --flip_left_right true # Default
  ```
@@ -525,6 +537,7 @@ sports2d --help
  'calib_file': ["", "path to calibration file. '' if not specified, eg no calibration file"],
  'save_calib': ["", "save calibration file. true if not specified"],
  'feet_on_floor': ["", "offset marker augmentation results so that feet are at floor level. true if not specified"],
+ 'distortions': ["", "camera distortion coefficients [k1, k2, p1, p2, k3] or 'from_calib'. [0.0, 0.0, 0.0, 0.0, 0.0] if not specified"],
  'use_simple_model': ["", "IK 10+ times faster, but no muscles or flexible spine, no patella. false if not specified"],
  'close_to_zero_speed_m': ["","Sum for all keypoints: about 50 px/frame or 0.2 m/frame"],
  'tracking_mode': ["", "'sports2d' or 'deepsort'. 'deepsort' is slower, harder to parametrize but can be more robust if correctly tuned"],
@@ -12,6 +12,8 @@
  [![License](https://img.shields.io/badge/License-BSD_3--Clause-blue.svg)](https://opensource.org/licenses/BSD-3-Clause)
  \
  [![Discord](https://img.shields.io/discord/1183750225471492206?logo=Discord&label=Discord%20community)](https://discord.com/invite/4mXUdSFjmt)
+ [![Hugging Face Space](https://img.shields.io/badge/HuggingFace-Sports2D-yellow?logo=huggingface)](https://huggingface.co/spaces/DavidPagnon/sports2d)
+

  <!-- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://bit.ly/Sports2D_Colab)-->

@@ -24,7 +26,7 @@
  </br>

  > **`Announcements:`**
- > - Compensate for floor angle, floor height, depth perspective effects, generate a calibration file **New in v0.9!**
+ > - Compensate for floor angle, floor height, depth perspective effects, generate a calibration file **New in v0.8.25!**
  > - Select only the persons you want to analyze **New in v0.8!**
  > - MarkerAugmentation and Inverse Kinematics for accurate 3D motion with OpenSim. **New in v0.7!**
  > - Any detector and pose estimation model can be used. **New in v0.6!**
@@ -52,7 +54,7 @@ https://github.com/user-attachments/assets/2ce62012-f28c-4e23-b3b8-f68931bacb77
  <!-- https://github.com/user-attachments/assets/1c6e2d6b-d0cf-4165-864e-d9f01c0b8a0e -->

  `Warning:` Angle estimation is only as good as the pose estimation algorithm, i.e., it is not perfect.\
- `Warning:` Results are acceptable only if the persons move in the 2D plane (sagittal or frontal plane). The persons need to be filmed as parallel as possible to the motion plane.\
+ `Warning:` Results are acceptable only if the persons move in the 2D plane (sagittal or frontal). The persons need to be filmed as parallel as possible to the motion plane.\
  If you need 3D research-grade markerless joint kinematics, consider using several cameras with **[Pose2Sim](https://github.com/perfanalytics/pose2sim)**.

  <!--`Warning:` Google Colab does not follow the European GDPR requirements regarding data privacy. [Install locally](#installation) if this matters.-->
@@ -62,7 +64,8 @@ If you need 3D research-grade markerless joint kinematics, consider using severa

  ## Contents
  1. [Installation and Demonstration](#installation-and-demonstration)
-    1. [Installation](#installation)
+    1. [Test it on Hugging face](#test-it-on-hugging-face)
+    1. [Local installation](#local-installation)
        1. [Quick install](#quick-install)
        2. [Full install](#full-install)
     2. [Demonstration](#demonstration)
@@ -91,7 +94,16 @@ If you need 3D research-grade markerless joint kinematics, consider using severa

  ## Installation and Demonstration

- ### Installation
+
+ ### Test it on Hugging face
+
+ Test an online, limited version [on Hugging Face](https://huggingface.co/spaces/DavidPagnon/sports2d): [![Hugging Face Space](https://img.shields.io/badge/HuggingFace-Sports2D-yellow?logo=huggingface)](https://huggingface.co/spaces/DavidPagnon/sports2d)
+
+ <img src="Content/huggingface_demo.png" width="760">
+
+
+
+ ### Local installation

  <!--- OPTION 0: **Use Colab** \
  User-friendly (but full) version, also works on a phone or a tablet.\
@@ -396,7 +408,7 @@ sports2d --video_input demo.mp4 other_video.mp4 --time_range 1.2 2.7 0 3.5
  sports2d --calculate_angles false
  ```
  - Flip angles when the person faces the other side.\
-   **N.B.:** *We consider that the person looks to the right if their toe keypoint is to the right of their heel. This is not always true when the person is sprinting, especially in the swing phase. Set it to false if you want timeseries to be continuous even when the participant switches their stance.*
+   **N.B.: Set to false when sprinting.** *We consider that each limb "looks" to the right if the toe keypoint is to the right of the heel one. This is not always true, particularly during the swing phase of sprinting. Set it to false if you want timeseries to be continuous even when the participant switches their stance.*
  ```cmd
  sports2d --flip_left_right true # Default
  ```
@@ -497,6 +509,7 @@ sports2d --help
  'calib_file': ["", "path to calibration file. '' if not specified, eg no calibration file"],
  'save_calib': ["", "save calibration file. true if not specified"],
  'feet_on_floor': ["", "offset marker augmentation results so that feet are at floor level. true if not specified"],
+ 'distortions': ["", "camera distortion coefficients [k1, k2, p1, p2, k3] or 'from_calib'. [0.0, 0.0, 0.0, 0.0, 0.0] if not specified"],
  'use_simple_model': ["", "IK 10+ times faster, but no muscles or flexible spine, no patella. false if not specified"],
  'close_to_zero_speed_m': ["","Sum for all keypoints: about 50 px/frame or 0.2 m/frame"],
  'tracking_mode': ["", "'sports2d' or 'deepsort'. 'deepsort' is slower, harder to parametrize but can be more robust if correctly tuned"],
@@ -106,13 +106,16 @@ to_meters = true
  make_c3d = true
  save_calib = true

+ # Compensate for camera horizon
+ floor_angle = 'auto' # float, 'from_kinematics', 'from_calib', or 'auto' # 'auto' is equivalent to 'from_kinematics', ie angle calculated from foot contacts. 'from_calib' calculates it from a toml calibration file. Use float to manually specify it in degrees
+ xy_origin = ['auto'] # [px_x,px_y], or ['from kinematics'], ['from_calib'], or ['auto']. # BETWEEN BRACKETS! # ['auto'] is equivalent to ['from_kinematics'], ie origin estimated at first foot contact, direction is direction of motion. ['from_calib'] calculates it from a calibration file. Use [px_x,px_y] to manually specify it in pixels (px_y points downwards)
+
  # Compensate for perspective effects, which make the further limb look smaller. 1-2% coordinate error at 10 m, less if the camera is further away
  perspective_value = 10 # Either camera-to-person distance (m), or focal length (px), or field-of-view (degrees or radians), or '' if perspective_unit=='from_calib'
  perspective_unit = 'distance_m' # 'distance_m', 'f_px', 'fov_deg', 'fov_rad', or 'from_calib'

- # Compensate for camera horizon
- floor_angle = 'auto' # float, 'from_kinematics', 'from_calib', or 'auto' # 'auto' is equivalent to 'from_kinematics', ie angle calculated from foot contacts. 'from_calib' calculates it from a toml calibration file. Use float to manually specify it in degrees
- xy_origin = ['auto'] # [px_x,px_y], or ['from kinematics'], ['from_calib'], or ['auto']. # BETWEEN BRACKETS! # ['auto'] is equivalent to ['from_kinematics'], ie origin estimated at first foot contact, direction is direction of motion. ['from_calib'] calculates it from a calibration file. Use [px_x,px_y] to manually specify it in pixels (px_y points downwards)
+ # Optional distortion coefficients
+ distortions = [0.0, 0.0, 0.0, 0.0, 0.0] # [k1, k2, p1, p2, k3] or 'from_calib' (not implemented yet)

  # Optional calibration file
  calib_file = '' # Calibration file in the Pose2Sim toml format, or '' if not available
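The five new coefficients follow OpenCV's distortion model ([k1, k2, k3] radial, [p1, p2] tangential). A minimal sketch of undistorting pixel coordinates with `cv2.undistortPoints`; the camera matrix and coefficient values below are made-up placeholders, not values from Sports2D:

```python
import numpy as np
import cv2

# Placeholder intrinsics and distortion coefficients, for illustration only
K = np.array([[1500.0,    0.0, 960.0],   # fx,  0, cx
              [   0.0, 1500.0, 540.0],   #  0, fy, cy
              [   0.0,    0.0,   1.0]])
dist = np.array([0.1, -0.05, 0.001, 0.001, 0.0])  # [k1, k2, p1, p2, k3]

# Distorted pixel coordinates, shape (N, 1, 2) as OpenCV expects
pts = np.array([[[100.0, 200.0]], [[960.0, 540.0]]])

# P=K re-projects the undistorted normalized coordinates back to pixel units
undistorted = cv2.undistortPoints(pts, K, dist, P=K)
print(undistorted.squeeze())
```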
@@ -130,7 +133,7 @@ joint_angles = ['Right ankle', 'Left ankle', 'Right knee', 'Left knee', 'Right h
  segment_angles = ['Right foot', 'Left foot', 'Right shank', 'Left shank', 'Right thigh', 'Left thigh', 'Pelvis', 'Trunk', 'Shoulders', 'Head', 'Right arm', 'Left arm', 'Right forearm', 'Left forearm']

  # Processing parameters
- flip_left_right = true # Same angles whether the participant faces left/right. Set it to false if you want timeseries to be continuous even when the participant switches their stance.
+ flip_left_right = false # Same angles whether the participant faces left/right. Set it to false if you want timeseries to be continuous even when the participant switches their stance.
  correct_segment_angles_with_floor_angle = true # If the camera is tilted, corrects segment angles as regards to the floor angle. Set to false if it is the floor which is actually tilted


@@ -212,7 +215,7 @@ use_custom_logging = false # if integrated in an API that already has logging
  #
  # Check your model hierarchy with: for pre, _, node in RenderTree(model):
  #                                      print(f'{pre}{node.name} id={node.id}')
- [pose.CUSTOM]
+ [[pose.CUSTOM]]
  name = "Hip"
  id = 19
  [[pose.CUSTOM.children]]
@@ -197,6 +197,7 @@ DEFAULT_CONFIG = {'base': {'video_input': ['demo.mp4'],
      'save_calib': True,
      'perspective_value': 10.0,
      'perspective_unit': 'distance_m',
+     'distortions': [0.0, 0.0, 0.0, 0.0, 0.0],
      'floor_angle': 'auto',
      'xy_origin': ['auto'],
      'calib_file': '',
@@ -311,6 +312,7 @@ CONFIG_HELP = {'config': ["C", "path to a toml configuration file"],
      'calib_file': ["", "path to calibration file. '' if not specified, eg no calibration file"],
      'save_calib': ["", "save calibration file. true if not specified"],
      'feet_on_floor': ["", "offset marker augmentation results so that feet are at floor level. true if not specified"],
+     'distortions': ["", "camera distortion coefficients [k1, k2, p1, p2, k3] or 'from_calib'. [0.0, 0.0, 0.0, 0.0, 0.0] if not specified"],
      'use_simple_model': ["", "IK 10+ times faster, but no muscles or flexible spine, no patella. false if not specified"],
      'close_to_zero_speed_m': ["","Sum for all keypoints: about 50 px/frame or 0.2 m/frame"],
      'tracking_mode': ["", "'sports2d' or 'deepsort'. 'deepsort' is slower, harder to parametrize but can be more robust if correctly tuned"],
@@ -242,7 +242,7 @@ def setup_model_class_mode(pose_model, mode, config_dict={}):
      try: # from Config.toml
          from anytree.importer import DictImporter
          model_name = pose_model.upper()
-         pose_model = DictImporter().import_(config_dict.get('pose').get(pose_model))
+         pose_model = DictImporter().import_(config_dict.get('pose').get(pose_model)[0])
          if pose_model.id == 'None':
              pose_model.id = None
          logging.info(f"Using model {model_name} for pose estimation.")
@@ -798,7 +798,6 @@ def make_mot_with_angles(angles, time, mot_path):
  def pose_plots(trc_data_unfiltered, trc_data, person_id, show=True):
      '''
      Displays trc filtered and unfiltered data for comparison
-     ⚠ Often crashes on the third window...

      INPUTS:
      - trc_data_unfiltered: pd.DataFrame. The unfiltered trc data
@@ -809,23 +808,26 @@ def pose_plots(trc_data_unfiltered, trc_data, person_id, show=True):
      OUTPUT:
      - matplotlib window with tabbed figures for each keypoint
      '''
-
+
      os_name = platform.system()
-     if os_name == 'Windows':
-         mpl.use('qt5agg') # windows
      mpl.rc('figure', max_open_warning=0)
+     if show:
+         if os_name == 'Windows':
+             mpl.use('qt5agg') # windows
+         pw = plotWindow()
+         pw.MainWindow.setWindowTitle('Person'+ str(person_id) + ' coordinates')
+     else:
+         mpl.use('Agg') # Otherwise fails on Hugging-face
+         figures_list = []

      keypoints_names = trc_data.columns[1::3]
-
-     pw = plotWindow()
-     pw.MainWindow.setWindowTitle('Person'+ str(person_id) + ' coordinates') # Main title
-
      for id, keypoint in enumerate(keypoints_names):
          f = plt.figure()
-         if os_name == 'Windows':
-             f.canvas.manager.window.setWindowTitle(keypoint + ' Plot') # windows
-         elif os_name == 'Darwin': # macOS
-             f.canvas.manager.set_window_title(keypoint + ' Plot') # mac
+         if show:
+             if os_name == 'Windows':
+                 f.canvas.manager.window.setWindowTitle(keypoint + ' Plot')
+             elif os_name == 'Darwin':
+                 f.canvas.manager.set_window_title(keypoint + ' Plot')

          axX = plt.subplot(211)
          plt.plot(trc_data_unfiltered.iloc[:,0], trc_data_unfiltered.iloc[:,id*3+1], label='unfiltered')
@@ -840,18 +842,21 @@ def pose_plots(trc_data_unfiltered, trc_data, person_id, show=True):
          axY.set_xlabel('Time (seconds)')
          axY.set_ylabel(keypoint+' Y')

-         pw.addPlot(keypoint, f)
+         if show:
+             pw.addPlot(keypoint, f)
+         else:
+             figures_list.append((keypoint, f))

      if show:
          pw.show()
-
-     return pw
-
+         return pw
+     else:
+         return figures_list
+

  def angle_plots(angle_data_unfiltered, angle_data, person_id, show=True):
      '''
      Displays angle filtered and unfiltered data for comparison
-     ⚠ Often crashes on the third window...

      INPUTS:
      - angle_data_unfiltered: pd.DataFrame. The unfiltered angle data
@@ -862,21 +867,24 @@ def angle_plots(angle_data_unfiltered, angle_data, person_id, show=True):
      '''

      os_name = platform.system()
-     if os_name == 'Windows':
-         mpl.use('qt5agg') # windows
      mpl.rc('figure', max_open_warning=0)
+     if show:
+         if os_name == 'Windows':
+             mpl.use('qt5agg') # windows
+         pw = plotWindow()
+         pw.MainWindow.setWindowTitle('Person'+ str(person_id) + ' angles')
+     else:
+         mpl.use('Agg') # Otherwise fails on Hugging-face
+         figures_list = []

      angles_names = angle_data.columns[1:]
-
-     pw = plotWindow()
-     pw.MainWindow.setWindowTitle('Person'+ str(person_id) + ' angles') # Main title
-
      for id, angle in enumerate(angles_names):
          f = plt.figure()
-         if os_name == 'Windows':
-             f.canvas.manager.window.setWindowTitle(angle + ' Plot') # windows
-         elif os_name == 'Darwin': # macOS
-             f.canvas.manager.set_window_title(angle + ' Plot') # mac
+         if show:
+             if os_name == 'Windows':
+                 f.canvas.manager.window.setWindowTitle(angle + ' Plot') # windows
+             elif os_name == 'Darwin': # macOS
+                 f.canvas.manager.set_window_title(angle + ' Plot') # mac

          ax = plt.subplot(111)
          plt.plot(angle_data_unfiltered.iloc[:,0], angle_data_unfiltered.iloc[:,id+1], label='unfiltered')
@@ -886,12 +894,16 @@ def angle_plots(angle_data_unfiltered, angle_data, person_id, show=True):
          ax.set_ylabel(angle+' (°)')
          plt.legend()

-         pw.addPlot(angle, f)
-
+         if show:
+             pw.addPlot(angle, f)
+         else:
+             figures_list.append((angle, f))
+
      if show:
          pw.show()
-
-     return pw
+         return pw
+     else:
+         return figures_list


  def get_personIDs_with_highest_scores(all_frames_scores, nb_persons_to_detect):
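Both plotting functions now switch to matplotlib's non-interactive `Agg` backend when figures are only saved, the usual way to render without a display (e.g., on a Hugging Face Space). A self-contained sketch of the pattern, independent of the package's `plotWindow` class:

```python
import matplotlib as mpl

def make_figure(show=False):
    # The backend must be selected before pyplot creates any figure
    if not show:
        mpl.use('Agg')  # off-screen rendering: no GUI toolkit or display required
    import matplotlib.pyplot as plt

    fig = plt.figure()
    plt.plot([0, 1, 2], [0, 1, 4], label='demo')
    plt.legend()

    if show:
        plt.show()  # needs a GUI backend and a display
    else:
        fig.savefig('demo.png', dpi=fig.dpi)  # works on a headless server
        plt.close(fig)

make_figure(show=False)
```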
@@ -1306,7 +1318,7 @@ def compute_floor_line(trc_data, score_data, keypoint_names = ['LBigToe', 'RBigT
      trc_data_kpt_trim = trc_data_kpt.iloc[start:end].reset_index(drop=True)
      score_data_kpt_trim = score_data_kpt.iloc[start:end].reset_index(drop=True)

-     # Compute speeds
+     # Compute euclidean speed
      speeds = np.linalg.norm(trc_data_kpt_trim.diff(), axis=1)

      # Remove speeds with low confidence
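The renamed comment matches what the line actually does: the row-wise L2 norm of frame-to-frame differences. A tiny standalone check on made-up trajectory data:

```python
import numpy as np
import pandas as pd

# Hypothetical (x, y) trajectory of one keypoint, in pixels, one row per frame
traj = pd.DataFrame({'x': [100.0, 102.0, 105.0, 105.0],
                     'y': [200.0, 200.0, 204.0, 204.0]})

# diff() gives per-frame displacement; the row-wise norm gives speed in px/frame
speeds = np.linalg.norm(traj.diff(), axis=1)
print(speeds)  # [nan, 2.0, 5.0, 0.0] -- the first frame has no predecessor
```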
@@ -1450,7 +1462,8 @@ def get_floor_params(floor_angle='auto', xy_origin=['auto'],
      except:
          floor_angle_kin = 0
          xy_origin_kin = cam_width/2, cam_height/2
-         logging.warning(f'Could not estimate the floor angle and xy_origin from person {0}. Make sure that the full body is visible. Using floor angle = 0° and xy_origin = [{cam_width/2}, {cam_height/2}] px.')
+         gait_direction = 1
+         logging.warning(f'Could not estimate the floor angle, xy_origin, and visible from person {0}. Make sure that the full body is visible. Using floor angle = 0°, xy_origin = [{cam_width/2}, {cam_height/2}] px, and visible_side = right.')

      # Determine final floor angle estimation
      if floor_angle == 'from_calib':
@@ -1578,7 +1591,7 @@ def process_fun(config_dict, video_file, time_range, frame_rate, result_dir):

      # Base parameters
      video_dir = Path(config_dict.get('base').get('video_dir'))
-
+
      nb_persons_to_detect = config_dict.get('base').get('nb_persons_to_detect')
      if nb_persons_to_detect != 'all':
          try:
@@ -1825,6 +1838,7 @@ def process_fun(config_dict, video_file, time_range, frame_rate, result_dir):
      keypoints_names = [node.name for _, _, node in RenderTree(pose_model) if node.id!=None]
      t0 = 0
      tf = (cap.get(cv2.CAP_PROP_FRAME_COUNT)-1) / fps if cap.get(cv2.CAP_PROP_FRAME_COUNT)>0 else float('inf')
+     kpt_id_max = max(keypoints_ids)+1

      # Set up pose tracker
      try:
@@ -1913,60 +1927,64 @@ def process_fun(config_dict, video_file, time_range, frame_rate, result_dir):
          if video_file == "webcam":
              out_vid.write(frame)

-         # Detect poses
-         keypoints, scores = pose_tracker(frame)
-
-         # Non maximum suppression (at pose level, not detection, and only using likely keypoints)
-         frame_shape = frame.shape
-         mask_scores = np.mean(scores, axis=1) > 0.2
-
-         likely_keypoints = np.where(mask_scores[:, np.newaxis, np.newaxis], keypoints, np.nan)
-         likely_scores = np.where(mask_scores[:, np.newaxis], scores, np.nan)
-         likely_bboxes = bbox_xyxy_compute(frame_shape, likely_keypoints, padding=0)
-         score_likely_bboxes = np.nanmean(likely_scores, axis=1)
-
-         valid_indices = np.where(~np.isnan(score_likely_bboxes))[0]
-         if len(valid_indices) > 0:
-             valid_bboxes = likely_bboxes[valid_indices]
-             valid_scores = score_likely_bboxes[valid_indices]
-             keep_valid = nms(valid_bboxes, valid_scores, nms_thr=0.45)
-             keep = valid_indices[keep_valid]
-         else:
-             keep = []
-         keypoints, scores = likely_keypoints[keep], likely_scores[keep]
-
-         # # Debugging: display detected keypoints on the frame
-         # colors = [(255,0,0), (0,255,0), (0,0,255), (255,255,0), (255,0,255), (0,255,255), (128,0,0), (0,128,0), (0,0,128), (128,128,0), (128,0,128), (0,128,128)]
-         # bboxes = likely_bboxes[keep]
-         # for person_idx in range(len(keypoints)):
-         #     for kpt_idx, kpt in enumerate(keypoints[person_idx]):
-         #         if not np.isnan(kpt).any():
-         #             cv2.circle(frame, (int(kpt[0]), int(kpt[1])), 3, colors[person_idx%len(colors)], -1)
-         #     if not np.isnan(bboxes[person_idx]).any():
-         #         cv2.rectangle(frame, (int(bboxes[person_idx][0]), int(bboxes[person_idx][1])), (int(bboxes[person_idx][2]), int(bboxes[person_idx][3])), colors[person_idx%len(colors)], 1)
-         # cv2.imshow(f'{video_file} Sports2D', frame)
-
-         # Track poses across frames
-         if tracking_mode == 'deepsort':
-             keypoints, scores = sort_people_deepsort(keypoints, scores, deepsort_tracker, frame, frame_count)
-         if tracking_mode == 'sports2d':
-             if 'prev_keypoints' not in locals(): prev_keypoints = keypoints
-             prev_keypoints, keypoints, scores = sort_people_sports2d(prev_keypoints, keypoints, scores=scores, max_dist=max_distance)
-         else:
-             pass
-
-         # # Debugging: display detected keypoints on the frame
-         # colors = [(255,0,0), (0,255,0), (0,0,255), (255,255,0), (255,0,255), (0,255,255), (128,0,0), (0,128,0), (0,0,128), (128,128,0), (128,0,128), (0,128,128)]
-         # for person_idx in range(len(keypoints)):
-         #     for kpt_idx, kpt in enumerate(keypoints[person_idx]):
-         #         if not np.isnan(kpt).any():
-         #             cv2.circle(frame, (int(kpt[0]), int(kpt[1])), 3, colors[person_idx%len(colors)], -1)
-         # # if not np.isnan(bboxes[person_idx]).any():
-         # #     cv2.rectangle(frame, (int(bboxes[person_idx][0]), int(bboxes[person_idx][1])), (int(bboxes[person_idx][2]), int(bboxes[person_idx][3])), colors[person_idx%len(colors)], 1)
-         # cv2.imshow(f'{video_file} Sports2D', frame)
-         # # if (cv2.waitKey(1) & 0xFF) == ord('q') or (cv2.waitKey(1) & 0xFF) == 27:
-         # #     break
-         # # input()
+         try: # Frames with no detection cause errors on MacOS CoreMLExecutionProvider
+             # Detect poses
+             keypoints, scores = pose_tracker(frame)
+
+             # Non maximum suppression (at pose level, not detection, and only using likely keypoints)
+             frame_shape = frame.shape
+             mask_scores = np.mean(scores, axis=1) > 0.2
+
+             likely_keypoints = np.where(mask_scores[:, np.newaxis, np.newaxis], keypoints, np.nan)
+             likely_scores = np.where(mask_scores[:, np.newaxis], scores, np.nan)
+             likely_bboxes = bbox_xyxy_compute(frame_shape, likely_keypoints, padding=0)
+             score_likely_bboxes = np.nanmean(likely_scores, axis=1)
+
+             valid_indices = np.where(~np.isnan(score_likely_bboxes))[0]
+             if len(valid_indices) > 0:
+                 valid_bboxes = likely_bboxes[valid_indices]
+                 valid_scores = score_likely_bboxes[valid_indices]
+                 keep_valid = nms(valid_bboxes, valid_scores, nms_thr=0.45)
+                 keep = valid_indices[keep_valid]
+             else:
+                 keep = []
+             keypoints, scores = likely_keypoints[keep], likely_scores[keep]
+
+             # # Debugging: display detected keypoints on the frame
+             # colors = [(255,0,0), (0,255,0), (0,0,255), (255,255,0), (255,0,255), (0,255,255), (128,0,0), (0,128,0), (0,0,128), (128,128,0), (128,0,128), (0,128,128)]
+             # bboxes = likely_bboxes[keep]
+             # for person_idx in range(len(keypoints)):
+             #     for kpt_idx, kpt in enumerate(keypoints[person_idx]):
+             #         if not np.isnan(kpt).any():
+             #             cv2.circle(frame, (int(kpt[0]), int(kpt[1])), 3, colors[person_idx%len(colors)], -1)
+             #     if not np.isnan(bboxes[person_idx]).any():
+             #         cv2.rectangle(frame, (int(bboxes[person_idx][0]), int(bboxes[person_idx][1])), (int(bboxes[person_idx][2]), int(bboxes[person_idx][3])), colors[person_idx%len(colors)], 1)
+             # cv2.imshow(f'{video_file} Sports2D', frame)
+
+             # Track poses across frames
+             if tracking_mode == 'deepsort':
+                 keypoints, scores = sort_people_deepsort(keypoints, scores, deepsort_tracker, frame, frame_count)
+             if tracking_mode == 'sports2d':
+                 if 'prev_keypoints' not in locals(): prev_keypoints = keypoints
+                 prev_keypoints, keypoints, scores = sort_people_sports2d(prev_keypoints, keypoints, scores=scores, max_dist=max_distance)
+             else:
+                 pass
+
+             # # Debugging: display detected keypoints on the frame
+             # colors = [(255,0,0), (0,255,0), (0,0,255), (255,255,0), (255,0,255), (0,255,255), (128,0,0), (0,128,0), (0,0,128), (128,128,0), (128,0,128), (0,128,128)]
+             # for person_idx in range(len(keypoints)):
+             #     for kpt_idx, kpt in enumerate(keypoints[person_idx]):
+             #         if not np.isnan(kpt).any():
+             #             cv2.circle(frame, (int(kpt[0]), int(kpt[1])), 3, colors[person_idx%len(colors)], -1)
+             # # if not np.isnan(bboxes[person_idx]).any():
+             # #     cv2.rectangle(frame, (int(bboxes[person_idx][0]), int(bboxes[person_idx][1])), (int(bboxes[person_idx][2]), int(bboxes[person_idx][3])), colors[person_idx%len(colors)], 1)
+             # cv2.imshow(f'{video_file} Sports2D', frame)
+             # # if (cv2.waitKey(1) & 0xFF) == ord('q') or (cv2.waitKey(1) & 0xFF) == 27:
+             # #     break
+             # # input()
+         except:
+             keypoints = np.full((1,kpt_id_max,2), fill_value=np.nan)
+             scores = np.full((1,kpt_id_max), fill_value=np.nan)

      # Process coordinates and compute angles
      valid_X, valid_Y, valid_scores = [], [], []
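The wrapped block suppresses duplicate skeletons with an `nms` call on pose-level bounding boxes before tracking. For reference, a generic greedy IoU-based NMS looks like the sketch below (a reimplementation for illustration, not necessarily the package's own `nms` function):

```python
import numpy as np

def nms(boxes, scores, nms_thr=0.45):
    """Greedy IoU-based non-maximum suppression; returns indices of kept boxes."""
    x1, y1, x2, y2 = boxes[:, 0], boxes[:, 1], boxes[:, 2], boxes[:, 3]
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # Intersection of the top-scoring box with the remaining ones
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= nms_thr]  # drop lower-scored overlapping boxes
    return np.array(keep, dtype=int)

# Two heavily overlapping detections (likely the same person) and one distinct one
boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [50, 50, 60, 60]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # [0 2] -- the second box overlaps the first too much
```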
@@ -2058,6 +2076,10 @@ def process_fun(config_dict, video_file, time_range, frame_rate, result_dir):
          if (cv2.waitKey(1) & 0xFF) == ord('q') or (cv2.waitKey(1) & 0xFF) == 27:
              break

+         # # Debugging
+         # img_output_path = img_output_dir / f'{video_file_stem}_frame{frame_nb:06d}.png'
+         # cv2.imwrite(str(img_output_path), img)
+
          all_frames_X.append(np.array(valid_X))
          all_frames_X_flipped.append(np.array(valid_X_flipped))
          all_frames_Y.append(np.array(valid_Y))
@@ -2260,12 +2282,20 @@ def process_fun(config_dict, video_file, time_range, frame_rate, result_dir):
      if not to_meters and (show_plots or save_plots):
          pw = pose_plots(trc_data_unfiltered_i, trc_data_i, i, show=show_plots)
          if save_plots:
-             for n, f in enumerate(pw.figure_handles):
-                 dpi = pw.canvases[i].figure.dpi
-                 f.set_size_inches(1280/dpi, 720/dpi)
-                 title = pw.tabs.tabText(n)
-                 plot_path = plots_output_dir / (pose_output_path.stem + f'_person{i:02d}_px_{title.replace(" ","_").replace("/","_")}.png')
-                 f.savefig(plot_path, dpi=dpi, bbox_inches='tight')
+             if show_plots:
+                 for n, f in enumerate(pw.figure_handles):
+                     dpi = pw.canvases[n].figure.dpi
+                     f.set_size_inches(1280/dpi, 720/dpi)
+                     title = pw.tabs.tabText(n)
+                     plot_path = plots_output_dir / (pose_output_path.stem + f'_person{i:02d}_px_{title.replace(" ","_").replace("/","_")}.png')
+                     f.savefig(plot_path, dpi=dpi, bbox_inches='tight')
+             else: # Tabbed plots not used
+                 for title, f in pw:
+                     dpi = f.dpi
+                     f.set_size_inches(1280/dpi, 720/dpi)
+                     plot_path = plots_output_dir / (pose_output_path.stem + f'_person{i:02d}_px_{title.replace(" ","_").replace("/","_")}.png')
+                     f.savefig(plot_path, dpi=dpi, bbox_inches='tight')
+                     plt.close(f)
          logging.info(f'Pose plots (px) saved in {plots_output_dir}.')

      all_frames_X_processed[:,idx_person,:], all_frames_Y_processed[:,idx_person,:] = all_frames_X_person_filt, all_frames_Y_person_filt
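Both saving branches pin the output to 1280×720 px through `set_size_inches`: matplotlib sizes figures in inches, so pixels divided by dpi gives inches. A standalone sketch of just that conversion (note that `bbox_inches='tight'`, used above, crops the final image, so its dimensions may deviate slightly):

```python
import matplotlib
matplotlib.use('Agg')  # render off-screen
import matplotlib.pyplot as plt

fig = plt.figure()
plt.plot([0, 1, 2], [3, 1, 2])

# Target 1280x720 px: pixel size / dpi = size in inches
dpi = fig.dpi
fig.set_size_inches(1280 / dpi, 720 / dpi)
fig.savefig('plot_1280x720.png', dpi=dpi)  # exactly 1280x720 px without tight bbox
plt.close(fig)
```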
@@ -2334,41 +2364,40 @@ def process_fun(config_dict, video_file, time_range, frame_rate, result_dir):
      message = get_correction_message(xy_origin)
      logging.info(f'Floor level: {cy:.2f} px (from the top of the image), gait starting at {cx:.2f} px in the {direction_person0} direction for the first person. Corrected using {message}\n')

+     # Prepare calibration data
+     R90z = np.array([[0.0, -1.0, 0.0],
+                      [1.0, 0.0, 0.0],
+                      [0.0, 0.0, 1.0]])
+     R270x = np.array([[1.0, 0.0, 0.0],
+                       [0.0, 0.0, 1.0],
+                       [0.0, -1.0, 0.0]])
+
+     calib_file_path = output_dir / f'{video_file_stem}_Sports2D_calib.toml'
+
+     # name, size, distortions
+     N = [video_file_stem]
+     S = [[cam_width, cam_height]]
+     D = [[0.0, 0.0, 0.0, 0.0]]
+
+     # Intrinsics
+     f = height_px / first_person_height * distance_m
+     cu = cam_width/2
+     cv = cam_height/2
+     K = np.array([[[f, 0.0, cu], [0.0, f, cv], [0.0, 0.0, 1.0]]])
+
+     # Extrinsics
+     Rfloory = np.array([[np.cos(floor_angle_estim), 0.0, np.sin(floor_angle_estim)],
+                         [0.0, 1.0, 0.0],
+                         [-np.sin(floor_angle_estim), 0.0, np.cos(floor_angle_estim)]])
+     R_world = R90z @ Rfloory @ R270x
+     T_world = R90z @ np.array([-(cx-cu)/f*distance_m, -distance_m, (cy-cv)/f*distance_m])
+
+     R_cam, T_cam = world_to_camera_persp(R_world, T_world)
+     Tvec_cam = T_cam.reshape(1,3).tolist()
+     Rvec_cam = cv2.Rodrigues(R_cam)[0].reshape(1,3).tolist()

      # Save calibration file
      if save_calib and not calib_file:
-         R90z = np.array([[0.0, -1.0, 0.0],
-                          [1.0, 0.0, 0.0],
-                          [0.0, 0.0, 1.0]])
-         R270x = np.array([[1.0, 0.0, 0.0],
-                           [0.0, 0.0, 1.0],
-                           [0.0, -1.0, 0.0]])
-
-         calib_file_path = output_dir / f'{video_file_stem}_Sports2D_calib.toml'
-
-         # name, size, distortions
-         N = [video_file_stem]
-         S = [[cam_width, cam_height]]
-         D = [[0.0, 0.0, 0.0, 0.0]]
-
-         # Intrinsics
-         f = height_px / first_person_height * distance_m
-         cu = cam_width/2
-         cv = cam_height/2
-         K = np.array([[[f, 0.0, cu], [0.0, f, cv], [0.0, 0.0, 1.0]]])
-
-         # Extrinsics
-         Rfloory = np.array([[np.cos(floor_angle_estim), 0.0, np.sin(floor_angle_estim)],
-                             [0.0, 1.0, 0.0],
-                             [-np.sin(floor_angle_estim), 0.0, np.cos(floor_angle_estim)]])
-         R_world = R90z @ Rfloory @ R270x
-         T_world = R90z @ np.array([-(cx-cu)/f*distance_m, -distance_m, (cy-cv)/f*distance_m])
-
-         R_cam, T_cam = world_to_camera_persp(R_world, T_world)
-         Tvec_cam = T_cam.reshape(1,3).tolist()
-         Rvec_cam = cv2.Rodrigues(R_cam)[0].reshape(1,3).tolist()
-
-         # Write calibration file
          toml_write(calib_file_path, N, S, D, K, Rvec_cam, Tvec_cam)
          logging.info(f'Calibration saved to {calib_file_path}.')
@@ -2417,12 +2446,20 @@ def process_fun(config_dict, video_file, time_range, frame_rate, result_dir):
      if to_meters and (show_plots or save_plots):
          pw = pose_plots(trc_data_unfiltered_m_i, trc_data_m_i, i, show=show_plots)
          if save_plots:
-             for n, f in enumerate(pw.figure_handles):
-                 dpi = pw.canvases[i].figure.dpi
-                 f.set_size_inches(1280/dpi, 720/dpi)
-                 title = pw.tabs.tabText(n)
-                 plot_path = plots_output_dir / (pose_output_path_m.stem + f'_person{i:02d}_m_{title.replace(" ","_").replace("/","_")}.png')
-                 f.savefig(plot_path, dpi=dpi, bbox_inches='tight')
+             if show_plots:
+                 for n, f in enumerate(pw.figure_handles):
+                     dpi = pw.canvases[n].figure.dpi
+                     f.set_size_inches(1280/dpi, 720/dpi)
+                     title = pw.tabs.tabText(n)
+                     plot_path = plots_output_dir / (pose_output_path.stem + f'_person{i:02d}_m_{title.replace(" ","_").replace("/","_")}.png')
+                     f.savefig(plot_path, dpi=dpi, bbox_inches='tight')
+             else: # Tabbed plots not used
+                 for title, f in pw:
+                     dpi = f.dpi
+                     f.set_size_inches(1280/dpi, 720/dpi)
+                     plot_path = plots_output_dir / (pose_output_path.stem + f'_person{i:02d}_m_{title.replace(" ","_").replace("/","_")}.png')
+                     f.savefig(plot_path, dpi=dpi, bbox_inches='tight')
+                     plt.close(f)
          logging.info(f'Pose plots (m) saved in {plots_output_dir}.')

      # Write to trc file
@@ -2553,12 +2590,20 @@ def process_fun(config_dict, video_file, time_range, frame_rate, result_dir):
      if show_plots or save_plots:
          pw = angle_plots(all_frames_angles_person, angle_data, i, show=show_plots) # i = current person
          if save_plots:
-             for n, f in enumerate(pw.figure_handles):
-                 dpi = pw.canvases[i].figure.dpi
-                 f.set_size_inches(1280/dpi, 720/dpi)
-                 title = pw.tabs.tabText(n)
-                 plot_path = plots_output_dir / (pose_output_path_m.stem + f'_person{i:02d}_ang_{title.replace(" ","_").replace("/","_")}.png')
-                 f.savefig(plot_path, dpi=dpi, bbox_inches='tight')
+             if show_plots:
+                 for n, f in enumerate(pw.figure_handles):
+                     dpi = pw.canvases[n].figure.dpi
+                     f.set_size_inches(1280/dpi, 720/dpi)
+                     title = pw.tabs.tabText(n)
+                     plot_path = plots_output_dir / (pose_output_path.stem + f'_person{i:02d}_ang_{title.replace(" ","_").replace("/","_")}.png')
+                     f.savefig(plot_path, dpi=dpi, bbox_inches='tight')
+             else: # Tabbed plots not used
+                 for title, f in pw:
+                     dpi = f.dpi
+                     f.set_size_inches(1280/dpi, 720/dpi)
+                     plot_path = plots_output_dir / (pose_output_path.stem + f'_person{i:02d}_ang_{title.replace(" ","_").replace("/","_")}.png')
+                     f.savefig(plot_path, dpi=dpi, bbox_inches='tight')
+                     plt.close(f)
          logging.info(f'Pose plots (m) saved in {plots_output_dir}.')


@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: sports2d
- Version: 0.8.25
+ Version: 0.8.26
  Summary: Compute 2D human pose and angles from a video or a webcam.
  Author-email: David Pagnon <contact@david-pagnon.com>
  Maintainer-email: David Pagnon <contact@david-pagnon.com>
@@ -40,6 +40,8 @@ Dynamic: license-file
  [![License](https://img.shields.io/badge/License-BSD_3--Clause-blue.svg)](https://opensource.org/licenses/BSD-3-Clause)
  \
  [![Discord](https://img.shields.io/discord/1183750225471492206?logo=Discord&label=Discord%20community)](https://discord.com/invite/4mXUdSFjmt)
+ [![Hugging Face Space](https://img.shields.io/badge/HuggingFace-Sports2D-yellow?logo=huggingface)](https://huggingface.co/spaces/DavidPagnon/sports2d)
+

  <!-- [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://bit.ly/Sports2D_Colab)-->

@@ -52,7 +54,7 @@ Dynamic: license-file
  </br>

  > **`Announcements:`**
- > - Compensate for floor angle, floor height, depth perspective effects, generate a calibration file **New in v0.9!**
+ > - Compensate for floor angle, floor height, depth perspective effects, generate a calibration file **New in v0.8.25!**
  > - Select only the persons you want to analyze **New in v0.8!**
  > - MarkerAugmentation and Inverse Kinematics for accurate 3D motion with OpenSim. **New in v0.7!**
  > - Any detector and pose estimation model can be used. **New in v0.6!**
@@ -80,7 +82,7 @@ https://github.com/user-attachments/assets/2ce62012-f28c-4e23-b3b8-f68931bacb77
  <!-- https://github.com/user-attachments/assets/1c6e2d6b-d0cf-4165-864e-d9f01c0b8a0e -->

  `Warning:` Angle estimation is only as good as the pose estimation algorithm, i.e., it is not perfect.\
- `Warning:` Results are acceptable only if the persons move in the 2D plane (sagittal or frontal plane). The persons need to be filmed as parallel as possible to the motion plane.\
+ `Warning:` Results are acceptable only if the persons move in the 2D plane (sagittal or frontal). The persons need to be filmed as parallel as possible to the motion plane.\
  If you need 3D research-grade markerless joint kinematics, consider using several cameras with **[Pose2Sim](https://github.com/perfanalytics/pose2sim)**.

  <!--`Warning:` Google Colab does not follow the European GDPR requirements regarding data privacy. [Install locally](#installation) if this matters.-->
@@ -90,7 +92,8 @@ If you need 3D research-grade markerless joint kinematics, consider using severa

  ## Contents
  1. [Installation and Demonstration](#installation-and-demonstration)
-    1. [Installation](#installation)
+    1. [Test it on Hugging face](#test-it-on-hugging-face)
+    1. [Local installation](#local-installation)
        1. [Quick install](#quick-install)
        2. [Full install](#full-install)
     2. [Demonstration](#demonstration)
@@ -119,7 +122,16 @@ If you need 3D research-grade markerless joint kinematics, consider using severa

  ## Installation and Demonstration

- ### Installation
+
+ ### Test it on Hugging face
+
+ Test an online, limited version [on Hugging Face](https://huggingface.co/spaces/DavidPagnon/sports2d): [![Hugging Face Space](https://img.shields.io/badge/HuggingFace-Sports2D-yellow?logo=huggingface)](https://huggingface.co/spaces/DavidPagnon/sports2d)
+
+ <img src="Content/huggingface_demo.png" width="760">
+
+
+
+ ### Local installation

  <!--- OPTION 0: **Use Colab** \
  User-friendly (but full) version, also works on a phone or a tablet.\
@@ -424,7 +436,7 @@ sports2d --video_input demo.mp4 other_video.mp4 --time_range 1.2 2.7 0 3.5
  sports2d --calculate_angles false
  ```
  - Flip angles when the person faces the other side.\
-   **N.B.:** *We consider that the person looks to the right if their toe keypoint is to the right of their heel. This is not always true when the person is sprinting, especially in the swing phase. Set it to false if you want timeseries to be continuous even when the participant switches their stance.*
+   **N.B.: Set to false when sprinting.** *We consider that each limb "looks" to the right if the toe keypoint is to the right of the heel one. This is not always true, particularly during the swing phase of sprinting. Set it to false if you want timeseries to be continuous even when the participant switches their stance.*
  ```cmd
  sports2d --flip_left_right true # Default
  ```
@@ -525,6 +537,7 @@ sports2d --help
  'calib_file': ["", "path to calibration file. '' if not specified, eg no calibration file"],
  'save_calib': ["", "save calibration file. true if not specified"],
  'feet_on_floor': ["", "offset marker augmentation results so that feet are at floor level. true if not specified"],
+ 'distortions': ["", "camera distortion coefficients [k1, k2, p1, p2, k3] or 'from_calib'. [0.0, 0.0, 0.0, 0.0, 0.0] if not specified"],
  'use_simple_model': ["", "IK 10+ times faster, but no muscles or flexible spine, no patella. false if not specified"],
  'close_to_zero_speed_m': ["","Sum for all keypoints: about 50 px/frame or 0.2 m/frame"],
  'tracking_mode': ["", "'sports2d' or 'deepsort'. 'deepsort' is slower, harder to parametrize but can be more robust if correctly tuned"],
@@ -6,11 +6,13 @@ pyproject.toml
  .github/workflows/continuous-integration.yml
  .github/workflows/joss_pdf.yml
  .github/workflows/publish-on-release.yml
+ .github/workflows/sync_to_hf.yml.bak
  Content/Demo_plots.png
  Content/Demo_results.png
  Content/Demo_terminal.png
  Content/Person_selection.png
  Content/Video_tuto_Sports2D_Colab.png
+ Content/huggingface_demo.png
  Content/joint_convention.png
  Content/paper.bib
  Content/paper.md