sports2d 0.8.1.tar.gz → 0.8.2.tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- {sports2d-0.8.1 → sports2d-0.8.2}/PKG-INFO +3 -4
- {sports2d-0.8.1 → sports2d-0.8.2}/README.md +2 -3
- {sports2d-0.8.1 → sports2d-0.8.2}/Sports2D/Demo/Config_demo.toml +0 -1
- {sports2d-0.8.1 → sports2d-0.8.2}/Sports2D/Sports2D.py +1 -1
- {sports2d-0.8.1 → sports2d-0.8.2}/Sports2D/process.py +17 -11
- {sports2d-0.8.1 → sports2d-0.8.2}/pyproject.toml +1 -1
- {sports2d-0.8.1 → sports2d-0.8.2}/sports2d.egg-info/PKG-INFO +3 -4
- {sports2d-0.8.1 → sports2d-0.8.2}/LICENSE +0 -0
- {sports2d-0.8.1 → sports2d-0.8.2}/Sports2D/Demo/demo.mp4 +0 -0
- {sports2d-0.8.1 → sports2d-0.8.2}/Sports2D/Utilities/__init__.py +0 -0
- {sports2d-0.8.1 → sports2d-0.8.2}/Sports2D/Utilities/common.py +0 -0
- {sports2d-0.8.1 → sports2d-0.8.2}/Sports2D/Utilities/filter.py +0 -0
- {sports2d-0.8.1 → sports2d-0.8.2}/Sports2D/Utilities/tests.py +0 -0
- {sports2d-0.8.1 → sports2d-0.8.2}/Sports2D/__init__.py +0 -0
- {sports2d-0.8.1 → sports2d-0.8.2}/setup.cfg +0 -0
- {sports2d-0.8.1 → sports2d-0.8.2}/sports2d.egg-info/SOURCES.txt +0 -0
- {sports2d-0.8.1 → sports2d-0.8.2}/sports2d.egg-info/dependency_links.txt +0 -0
- {sports2d-0.8.1 → sports2d-0.8.2}/sports2d.egg-info/entry_points.txt +0 -0
- {sports2d-0.8.1 → sports2d-0.8.2}/sports2d.egg-info/requires.txt +0 -0
- {sports2d-0.8.1 → sports2d-0.8.2}/sports2d.egg-info/top_level.txt +0 -0
{sports2d-0.8.1 → sports2d-0.8.2}/PKG-INFO

@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: sports2d
-Version: 0.8.1
+Version: 0.8.2
 Summary: Compute 2D human pose and angles from a video or a webcam.
 Author-email: David Pagnon <contact@david-pagnon.com>
 License: BSD 3-Clause License

@@ -122,7 +122,7 @@ https://github.com/user-attachments/assets/6a444474-4df1-4134-af0c-e9746fa433ad

 `Warning:` Angle estimation is only as good as the pose estimation algorithm, i.e., it is not perfect.\
 `Warning:` Results are acceptable only if the persons move in the 2D plane (sagittal or frontal plane). The persons need to be filmed as parallel as possible to the motion plane.\
-If you need 3D research-grade markerless joint kinematics, consider using several cameras
+If you need 3D research-grade markerless joint kinematics, consider using several cameras with **[Pose2Sim](https://github.com/perfanalytics/pose2sim)**.

 <!--`Warning:` Google Colab does not follow the European GDPR requirements regarding data privacy. [Install locally](#installation) if this matters.-->

@@ -238,7 +238,6 @@ The Demo video is voluntarily challenging to demonstrate the robustness of the p
 - One person walking in the sagittal plane
 - One person doing jumping jacks in the frontal plane. This person then performs a flip while being backlit, both of which are challenging for the pose detection algorithm
 - One tiny person flickering in the background who needs to be ignored
-- The first person is starting high and ending low on the image, which messes up the automatic floor angle calculation. You can set it up manually with the parameter `--floor_angle 0`

 <br>

@@ -509,7 +508,7 @@ sports2d --help
 'device': ["", "Device for pose estimatino can be 'auto', 'openvino', 'onnxruntime', 'opencv'"],
 'to_meters': ["M", "convert pixels to meters. true if not specified"],
 'make_c3d': ["", "Convert trc to c3d file. true if not specified"],
-'floor_angle': ["", "angle of the floor. 'auto' if not specified"],
+'floor_angle': ["", "angle of the floor (degrees). 'auto' if not specified"],
 'xy_origin': ["", "origin of the xy plane. 'auto' if not specified"],
 'calib_file': ["", "path to calibration file. '' if not specified, eg no calibration file"],
 'save_calib': ["", "save calibration file. true if not specified"],
{sports2d-0.8.1 → sports2d-0.8.2}/README.md

@@ -50,7 +50,7 @@ https://github.com/user-attachments/assets/6a444474-4df1-4134-af0c-e9746fa433ad

 `Warning:` Angle estimation is only as good as the pose estimation algorithm, i.e., it is not perfect.\
 `Warning:` Results are acceptable only if the persons move in the 2D plane (sagittal or frontal plane). The persons need to be filmed as parallel as possible to the motion plane.\
-If you need 3D research-grade markerless joint kinematics, consider using several cameras
+If you need 3D research-grade markerless joint kinematics, consider using several cameras with **[Pose2Sim](https://github.com/perfanalytics/pose2sim)**.

 <!--`Warning:` Google Colab does not follow the European GDPR requirements regarding data privacy. [Install locally](#installation) if this matters.-->

@@ -166,7 +166,6 @@ The Demo video is voluntarily challenging to demonstrate the robustness of the p
 - One person walking in the sagittal plane
 - One person doing jumping jacks in the frontal plane. This person then performs a flip while being backlit, both of which are challenging for the pose detection algorithm
 - One tiny person flickering in the background who needs to be ignored
-- The first person is starting high and ending low on the image, which messes up the automatic floor angle calculation. You can set it up manually with the parameter `--floor_angle 0`

 <br>

@@ -437,7 +436,7 @@ sports2d --help
 'device': ["", "Device for pose estimatino can be 'auto', 'openvino', 'onnxruntime', 'opencv'"],
 'to_meters': ["M", "convert pixels to meters. true if not specified"],
 'make_c3d': ["", "Convert trc to c3d file. true if not specified"],
-'floor_angle': ["", "angle of the floor. 'auto' if not specified"],
+'floor_angle': ["", "angle of the floor (degrees). 'auto' if not specified"],
 'xy_origin': ["", "origin of the xy plane. 'auto' if not specified"],
 'calib_file': ["", "path to calibration file. '' if not specified, eg no calibration file"],
 'save_calib': ["", "save calibration file. true if not specified"],
{sports2d-0.8.1 → sports2d-0.8.2}/Sports2D/Demo/Config_demo.toml

@@ -93,7 +93,6 @@ tracking_mode = 'sports2d' # 'sports2d' or 'deepsort'. 'deepsort' is slower, har
 # More robust in crowded scenes but tricky to parametrize. More information there: https://github.com/levan92/deep_sort_realtime/blob/master/deep_sort_realtime/deepsort_tracker.py#L51
 # Requires `pip install torch torchvision torchreid gdown tensorboard`

-
 # Processing parameters
 keypoint_likelihood_threshold = 0.3 # Keypoints whose likelihood is lower will not be taken into account
 average_likelihood_threshold = 0.5 # Person will be ignored if average likelihood of good keypoints is lower than this value
{sports2d-0.8.1 → sports2d-0.8.2}/Sports2D/Sports2D.py

@@ -252,7 +252,7 @@ CONFIG_HELP = {'config': ["C", "path to a toml configuration file"],
 'device': ["", "Device for pose estimatino can be 'auto', 'openvino', 'onnxruntime', 'opencv'"],
 'to_meters': ["M", "convert pixels to meters. true if not specified"],
 'make_c3d': ["", "Convert trc to c3d file. true if not specified"],
-'floor_angle': ["", "angle of the floor. 'auto' if not specified"],
+'floor_angle': ["", "angle of the floor (degrees). 'auto' if not specified"],
 'xy_origin': ["", "origin of the xy plane. 'auto' if not specified"],
 'calib_file': ["", "path to calibration file. '' if not specified, eg no calibration file"],
 'save_calib': ["", "save calibration file. true if not specified"],
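The only change here is the help text: `floor_angle` is now documented as being expressed in degrees ('auto' remains the default). As the removed demo note above mentions, it can still be forced from the command line with `--floor_angle 0` when the automatic estimate is off. Purely as an illustration of what such a correction could look like (this is not Sports2D's implementation), a floor angle in degrees can be applied as a 2D rotation that levels the floor before pixel-to-meter scaling:

```python
# Hypothetical illustration only, NOT Sports2D's code: level pixel coordinates
# by rotating them by the negative of a floor angle given in degrees.
import numpy as np

def level_coordinates(points_px, floor_angle_deg):
    """Rotate (x, y) pixel points by -floor_angle_deg so the floor appears horizontal."""
    theta = np.deg2rad(-floor_angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    return np.asarray(points_px, dtype=float) @ rot.T

# Example: points lying on a floor tilted by 5 degrees are brought back to horizontal
points = [[100, 400], [300, 417.5]]
print(level_coordinates(points, 5))
```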
{sports2d-0.8.1 → sports2d-0.8.2}/Sports2D/process.py

@@ -56,6 +56,7 @@ import sys
 import logging
 import json
 import ast
+import copy
 import shutil
 import os
 from importlib.metadata import version
@@ -1350,10 +1351,15 @@ def process_fun(config_dict, video_file, time_range, frame_rate, result_dir):
 close_to_zero_speed_px = config_dict.get('kinematics').get('close_to_zero_speed_px')
 close_to_zero_speed_m = config_dict.get('kinematics').get('close_to_zero_speed_m')
 if do_ik or use_augmentation:
-
-
-
-
+    try:
+        if use_augmentation:
+            from Pose2Sim.markerAugmentation import augment_markers_all
+        if do_ik:
+            from Pose2Sim.kinematics import kinematics_all
+    except ImportError:
+        logging.error("OpenSim package is not installed. Please install it to use inverse kinematics or marker augmentation features (see 'Full install' section of the documentation).")
+        raise ImportError("OpenSim package is not installed. Please install it to use inverse kinematics or marker augmentation features (see 'Full install' section of the documentation).")
+
 # Create a Pose2Sim dictionary and fill in missing keys
 recursivedict = lambda: defaultdict(recursivedict)
 Pose2Sim_config_dict = recursivedict()
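The Pose2Sim imports for marker augmentation and inverse kinematics are now wrapped in a try/except, so a missing OpenSim install fails with an actionable message instead of a raw traceback. A minimal, package-agnostic sketch of the same optional-dependency guard (the package name below is a placeholder, not Sports2D's API):

```python
# Generic optional-dependency guard; package and function names are placeholders.
import importlib
import logging

def require_optional(package_name):
    """Import an optional package on demand, failing with a clear message if absent."""
    try:
        return importlib.import_module(package_name)
    except ImportError as err:
        msg = (f"{package_name} is not installed. Install it to use this feature "
               "(see the 'Full install' section of the documentation).")
        logging.error(msg)
        raise ImportError(msg) from err

# Only pay the import cost (and only require the dependency) when the feature is used:
# opensim = require_optional('opensim')
```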
@@ -1428,7 +1434,7 @@ def process_fun(config_dict, video_file, time_range, frame_rate, result_dir):
 backend=backend, device=device)

 except (json.JSONDecodeError, TypeError):
-    logging.warning("
+    logging.warning("Invalid mode. Must be 'lightweight', 'balanced', 'performance', or '''{dictionary}''' of parameters within triple quotes. Make sure input_sizes are within square brackets.")
     logging.warning('Using the default "balanced" mode.')
     mode = 'balanced'

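The previously truncated warning now spells out what `mode` accepts: one of the presets 'lightweight', 'balanced', 'performance', or a dictionary of pose-estimator parameters passed as a string in triple quotes. A rough sketch of validating such a value (illustrative only; Sports2D's actual parsing uses json/ast as hinted by the exception types, and the parameter name below is assumed):

```python
# Illustrative sketch only: accept either a preset name or a dict passed as a string.
# Not Sports2D's parsing code; 'det_input_size' is an assumed parameter name.
import ast
import logging

PRESETS = {'lightweight', 'balanced', 'performance'}

def resolve_mode(mode):
    """Return a preset name or a parameter dict parsed from a string."""
    if isinstance(mode, str) and mode in PRESETS:
        return mode
    try:
        params = ast.literal_eval(mode) if isinstance(mode, str) else dict(mode)
        if not isinstance(params, dict):
            raise ValueError("custom mode must be a dictionary of parameters")
        return params
    except (ValueError, SyntaxError, TypeError):
        logging.warning('Invalid mode. Using the default "balanced" mode.')
        return 'balanced'

resolve_mode("{'det_input_size': [1280, 1280]}")  # -> parameter dict
resolve_mode('performance')                        # -> 'performance'
```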
@@ -1458,6 +1464,7 @@ def process_fun(config_dict, video_file, time_range, frame_rate, result_dir):
 keypoints_ids = [node.id for _, _, node in RenderTree(pose_model) if node.id!=None]
 keypoints_names = [node.name for _, _, node in RenderTree(pose_model) if node.id!=None]
 t0 = 0
+print(keypoints_names, keypoints_ids)

 # Set up pose tracker
 try:
@@ -1469,7 +1476,7 @@ def process_fun(config_dict, video_file, time_range, frame_rate, result_dir):
 if tracking_mode not in ['deepsort', 'sports2d']:
     logging.warning(f"Tracking mode {tracking_mode} not recognized. Using sports2d method.")
     tracking_mode = 'sports2d'
-logging.info(f'
+logging.info(f'Pose tracking set up for "{pose_model_name}" model.')
 logging.info(f'Mode: {mode}.\n')
 logging.info(f'Persons are detected every {det_frequency} frames and tracked inbetween. Tracking is done with {tracking_mode}.')
 if tracking_mode == 'deepsort': logging.info(f'Deepsort parameters: {deepsort_params}.')
@@ -1604,8 +1611,8 @@ def process_fun(config_dict, video_file, time_range, frame_rate, result_dir):
 # Draw keypoints and skeleton
 if show_realtime_results:
     img = frame.copy()
-    cv2.putText(img, f"Press 'q' to
-    cv2.putText(img, f"Press 'q' to
+    cv2.putText(img, f"Press 'q' to stop", (cam_width-int(600*fontSize), cam_height-20), cv2.FONT_HERSHEY_SIMPLEX, fontSize+0.2, (255,255,255), thickness+1, cv2.LINE_AA)
+    cv2.putText(img, f"Press 'q' to stop", (cam_width-int(600*fontSize), cam_height-20), cv2.FONT_HERSHEY_SIMPLEX, fontSize+0.2, (0,0,255), thickness, cv2.LINE_AA)
     img = draw_bounding_box(img, valid_X, valid_Y, colors=colors, fontSize=fontSize, thickness=thickness)
     img = draw_keypts(img, valid_X, valid_Y, valid_scores, cmap_str='RdYlGn')
     img = draw_skel(img, valid_X, valid_Y, pose_model)
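The "Press 'q' to stop" label is now drawn in two passes: a slightly thicker white pass underneath and a red pass on top, a common way of keeping overlay text legible over arbitrary video content. A self-contained sketch of the same two-pass technique (frame size, position and font scale are arbitrary here):

```python
# Two-pass text rendering with OpenCV: a thicker light pass, then the colored pass.
# Self-contained illustration; image size, font scale and colors are arbitrary.
import cv2
import numpy as np

img = np.zeros((120, 480, 3), dtype=np.uint8)
text, org, scale, thickness = "Press 'q' to stop", (20, 70), 1.0, 2
cv2.putText(img, text, org, cv2.FONT_HERSHEY_SIMPLEX, scale, (255, 255, 255), thickness + 1, cv2.LINE_AA)  # white outline
cv2.putText(img, text, org, cv2.FONT_HERSHEY_SIMPLEX, scale, (0, 0, 255), thickness, cv2.LINE_AA)          # red fill (BGR)
cv2.imwrite("overlay_demo.png", img)
```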
@@ -2002,7 +2009,7 @@ def process_fun(config_dict, video_file, time_range, frame_rate, result_dir):
 all_frames_angles_processed = all_frames_angles_processed[:,selected_persons,:]

 # Reorder keypoints ids
-pose_model_with_new_ids = pose_model
+pose_model_with_new_ids = copy.deepcopy(pose_model)
 new_id = 0
 for node in PreOrderIter(pose_model_with_new_ids):
     if node.id!=None:
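Combined with the new `import copy`, the id renumbering now operates on an independent copy of the skeleton tree rather than mutating `pose_model` in place through an alias. A small sketch of the aliasing pitfall being avoided, using anytree (which the surrounding `PreOrderIter`/`RenderTree` calls already come from); node names and ids below are illustrative:

```python
# Why deepcopy matters here: plain assignment only creates a second name for the
# same tree, so renumbering would also alter the original model. Illustrative only.
import copy
from anytree import Node, PreOrderIter

root = Node("Hip", id=19)
Node("Knee", id=13, parent=root)

aliased = root                      # same object: changes through it would leak back
renumbered = copy.deepcopy(root)    # independent copy: safe to renumber
for new_id, node in enumerate(PreOrderIter(renumbered)):
    node.id = new_id

print([n.id for n in PreOrderIter(root)])        # [19, 13] (original unchanged)
print([n.id for n in PreOrderIter(renumbered)])  # [0, 1]
```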
@@ -2020,7 +2027,7 @@ def process_fun(config_dict, video_file, time_range, frame_rate, result_dir):
 img = frame.copy()
 img = draw_bounding_box(img, valid_X, valid_Y, colors=colors, fontSize=fontSize, thickness=thickness)
 img = draw_keypts(img, valid_X, valid_Y, valid_scores, cmap_str='RdYlGn')
-img = draw_skel(img, valid_X, valid_Y,
+img = draw_skel(img, valid_X, valid_Y, pose_model_with_new_ids)
 if calculate_angles:
     img = draw_angles(img, valid_X, valid_Y, valid_angles, valid_X_flipped, new_keypoints_ids, new_keypoints_names, angle_names, display_angle_values_on=display_angle_values_on, colors=colors, fontSize=fontSize, thickness=thickness)

@@ -2098,7 +2105,6 @@ def process_fun(config_dict, video_file, time_range, frame_rate, result_dir):
 Pose2Sim_config_dict['project']['participant_mass'] = masses
 Pose2Sim_config_dict['pose']['pose_model'] = pose_model_name.upper()
 Pose2Sim_config_dict = to_dict(Pose2Sim_config_dict)
-print(Pose2Sim_config_dict)

 # Marker augmentation
 if use_augmentation:
{sports2d-0.8.1 → sports2d-0.8.2}/sports2d.egg-info/PKG-INFO

@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: sports2d
-Version: 0.8.1
+Version: 0.8.2
 Summary: Compute 2D human pose and angles from a video or a webcam.
 Author-email: David Pagnon <contact@david-pagnon.com>
 License: BSD 3-Clause License

@@ -122,7 +122,7 @@ https://github.com/user-attachments/assets/6a444474-4df1-4134-af0c-e9746fa433ad

 `Warning:` Angle estimation is only as good as the pose estimation algorithm, i.e., it is not perfect.\
 `Warning:` Results are acceptable only if the persons move in the 2D plane (sagittal or frontal plane). The persons need to be filmed as parallel as possible to the motion plane.\
-If you need 3D research-grade markerless joint kinematics, consider using several cameras
+If you need 3D research-grade markerless joint kinematics, consider using several cameras with **[Pose2Sim](https://github.com/perfanalytics/pose2sim)**.

 <!--`Warning:` Google Colab does not follow the European GDPR requirements regarding data privacy. [Install locally](#installation) if this matters.-->

@@ -238,7 +238,6 @@ The Demo video is voluntarily challenging to demonstrate the robustness of the p
 - One person walking in the sagittal plane
 - One person doing jumping jacks in the frontal plane. This person then performs a flip while being backlit, both of which are challenging for the pose detection algorithm
 - One tiny person flickering in the background who needs to be ignored
-- The first person is starting high and ending low on the image, which messes up the automatic floor angle calculation. You can set it up manually with the parameter `--floor_angle 0`

 <br>

@@ -509,7 +508,7 @@ sports2d --help
 'device': ["", "Device for pose estimatino can be 'auto', 'openvino', 'onnxruntime', 'opencv'"],
 'to_meters': ["M", "convert pixels to meters. true if not specified"],
 'make_c3d': ["", "Convert trc to c3d file. true if not specified"],
-'floor_angle': ["", "angle of the floor. 'auto' if not specified"],
+'floor_angle': ["", "angle of the floor (degrees). 'auto' if not specified"],
 'xy_origin': ["", "origin of the xy plane. 'auto' if not specified"],
 'calib_file': ["", "path to calibration file. '' if not specified, eg no calibration file"],
 'save_calib': ["", "save calibration file. true if not specified"],