sports2d 0.8.26.tar.gz → 0.8.27.tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (40)
  1. sports2d-0.8.27/Content/huggingface_demo.png +0 -0
  2. {sports2d-0.8.26/sports2d.egg-info → sports2d-0.8.27}/PKG-INFO +16 -16
  3. {sports2d-0.8.26 → sports2d-0.8.27}/README.md +14 -14
  4. {sports2d-0.8.26 → sports2d-0.8.27}/Sports2D/Demo/Config_demo.toml +1 -1
  5. {sports2d-0.8.26 → sports2d-0.8.27}/Sports2D/process.py +10 -166
  6. {sports2d-0.8.26 → sports2d-0.8.27}/pyproject.toml +1 -1
  7. {sports2d-0.8.26 → sports2d-0.8.27/sports2d.egg-info}/PKG-INFO +16 -16
  8. {sports2d-0.8.26 → sports2d-0.8.27}/sports2d.egg-info/SOURCES.txt +0 -1
  9. sports2d-0.8.27/sports2d.egg-info/requires.txt +2 -0
  10. sports2d-0.8.26/Content/huggingface_demo.png +0 -0
  11. sports2d-0.8.26/Sports2D/Sports2D.ipynb +0 -3114
  12. sports2d-0.8.26/sports2d.egg-info/requires.txt +0 -2
  13. {sports2d-0.8.26 → sports2d-0.8.27}/.github/workflows/continuous-integration.yml +0 -0
  14. {sports2d-0.8.26 → sports2d-0.8.27}/.github/workflows/joss_pdf.yml +0 -0
  15. {sports2d-0.8.26 → sports2d-0.8.27}/.github/workflows/publish-on-release.yml +0 -0
  16. {sports2d-0.8.26 → sports2d-0.8.27}/.github/workflows/sync_to_hf.yml.bak +0 -0
  17. {sports2d-0.8.26 → sports2d-0.8.27}/.gitignore +0 -0
  18. {sports2d-0.8.26 → sports2d-0.8.27}/CITATION.cff +0 -0
  19. {sports2d-0.8.26 → sports2d-0.8.27}/Content/Demo_plots.png +0 -0
  20. {sports2d-0.8.26 → sports2d-0.8.27}/Content/Demo_results.png +0 -0
  21. {sports2d-0.8.26 → sports2d-0.8.27}/Content/Demo_terminal.png +0 -0
  22. {sports2d-0.8.26 → sports2d-0.8.27}/Content/Person_selection.png +0 -0
  23. {sports2d-0.8.26 → sports2d-0.8.27}/Content/Video_tuto_Sports2D_Colab.png +0 -0
  24. {sports2d-0.8.26 → sports2d-0.8.27}/Content/joint_convention.png +0 -0
  25. {sports2d-0.8.26 → sports2d-0.8.27}/Content/paper.bib +0 -0
  26. {sports2d-0.8.26 → sports2d-0.8.27}/Content/paper.md +0 -0
  27. {sports2d-0.8.26 → sports2d-0.8.27}/Content/sports2d_blender.gif +0 -0
  28. {sports2d-0.8.26 → sports2d-0.8.27}/Content/sports2d_opensim.gif +0 -0
  29. {sports2d-0.8.26 → sports2d-0.8.27}/LICENSE +0 -0
  30. {sports2d-0.8.26 → sports2d-0.8.27}/Sports2D/Demo/Calib_demo.toml +0 -0
  31. {sports2d-0.8.26 → sports2d-0.8.27}/Sports2D/Demo/demo.mp4 +0 -0
  32. {sports2d-0.8.26 → sports2d-0.8.27}/Sports2D/Sports2D.py +0 -0
  33. {sports2d-0.8.26 → sports2d-0.8.27}/Sports2D/Utilities/__init__.py +0 -0
  34. {sports2d-0.8.26 → sports2d-0.8.27}/Sports2D/Utilities/common.py +0 -0
  35. {sports2d-0.8.26 → sports2d-0.8.27}/Sports2D/Utilities/tests.py +0 -0
  36. {sports2d-0.8.26 → sports2d-0.8.27}/Sports2D/__init__.py +0 -0
  37. {sports2d-0.8.26 → sports2d-0.8.27}/setup.cfg +0 -0
  38. {sports2d-0.8.26 → sports2d-0.8.27}/sports2d.egg-info/dependency_links.txt +0 -0
  39. {sports2d-0.8.26 → sports2d-0.8.27}/sports2d.egg-info/entry_points.txt +0 -0
  40. {sports2d-0.8.26 → sports2d-0.8.27}/sports2d.egg-info/top_level.txt +0 -0
--- sports2d-0.8.26/sports2d.egg-info/PKG-INFO
+++ sports2d-0.8.27/PKG-INFO
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: sports2d
-Version: 0.8.26
+Version: 0.8.27
 Summary: Compute 2D human pose and angles from a video or a webcam.
 Author-email: David Pagnon <contact@david-pagnon.com>
 Maintainer-email: David Pagnon <contact@david-pagnon.com>
@@ -23,7 +23,7 @@ Requires-Python: >=3.9
 Description-Content-Type: text/markdown
 License-File: LICENSE
 Requires-Dist: imageio_ffmpeg
-Requires-Dist: Pose2Sim>=0.10.38
+Requires-Dist: Pose2Sim>=0.10.40
 Dynamic: license-file
 
 
@@ -100,23 +100,23 @@ If you need 3D research-grade markerless joint kinematics, consider using severa
 1. [Run the demo](#run-the-demo)
 2. [Visualize in OpenSim](#visualize-in-opensim)
 3. [Visualize in Blender](#visualize-in-blender)
-3. [Play with the parameters](#play-with-the-parameters)
-1. [Run on a custom video or on a webcam](#run-on-a-custom-video-or-on-a-webcam)
-2. [Run for a specific time range](#run-for-a-specific-time-range)
-3. [Select the persons you are interested in](#select-the-persons-you-are-interested-in)
-4. [Get coordinates in meters](#get-coordinates-in-meters)
-5. [Run inverse kinematics](#run-inverse-kinematics)
-6. [Run on several videos at once](#run-on-several-videos-at-once)
-7. [Use the configuration file or run within Python](#use-the-configuration-file-or-run-within-python)
-8. [Get the angles the way you want](#get-the-angles-the-way-you-want)
-9. [Customize your output](#customize-your-output)
-10. [Use a custom pose estimation model](#use-a-custom-pose-estimation-model)
-11. [All the parameters](#all-the-parameters)
-2. [Go further](#go-further)
+2. [Play with the parameters](#play-with-the-parameters)
+1. [Run on a custom video or on a webcam](#run-on-a-custom-video-or-on-a-webcam)
+2. [Run for a specific time range](#run-for-a-specific-time-range)
+3. [Select the persons you are interested in](#select-the-persons-you-are-interested-in)
+4. [Get coordinates in meters](#get-coordinates-in-meters)
+5. [Run inverse kinematics](#run-inverse-kinematics)
+6. [Run on several videos at once](#run-on-several-videos-at-once)
+7. [Use the configuration file or run within Python](#use-the-configuration-file-or-run-within-python)
+8. [Get the angles the way you want](#get-the-angles-the-way-you-want)
+9. [Customize your output](#customize-your-output)
+10. [Use a custom pose estimation model](#use-a-custom-pose-estimation-model)
+11. [All the parameters](#all-the-parameters)
+3. [Go further](#go-further)
 1. [Too slow for you?](#too-slow-for-you)
 3. [Run inverse kinematics](#run-inverse-kinematics)
 4. [How it works](#how-it-works)
-3. [How to cite and how to contribute](#how-to-cite-and-how-to-contribute)
+4. [How to cite and how to contribute](#how-to-cite-and-how-to-contribute)
 
 <br>
 
--- sports2d-0.8.26/README.md
+++ sports2d-0.8.27/README.md
@@ -72,23 +72,23 @@ If you need 3D research-grade markerless joint kinematics, consider using severa
 1. [Run the demo](#run-the-demo)
 2. [Visualize in OpenSim](#visualize-in-opensim)
 3. [Visualize in Blender](#visualize-in-blender)
-3. [Play with the parameters](#play-with-the-parameters)
-1. [Run on a custom video or on a webcam](#run-on-a-custom-video-or-on-a-webcam)
-2. [Run for a specific time range](#run-for-a-specific-time-range)
-3. [Select the persons you are interested in](#select-the-persons-you-are-interested-in)
-4. [Get coordinates in meters](#get-coordinates-in-meters)
-5. [Run inverse kinematics](#run-inverse-kinematics)
-6. [Run on several videos at once](#run-on-several-videos-at-once)
-7. [Use the configuration file or run within Python](#use-the-configuration-file-or-run-within-python)
-8. [Get the angles the way you want](#get-the-angles-the-way-you-want)
-9. [Customize your output](#customize-your-output)
-10. [Use a custom pose estimation model](#use-a-custom-pose-estimation-model)
-11. [All the parameters](#all-the-parameters)
-2. [Go further](#go-further)
+2. [Play with the parameters](#play-with-the-parameters)
+1. [Run on a custom video or on a webcam](#run-on-a-custom-video-or-on-a-webcam)
+2. [Run for a specific time range](#run-for-a-specific-time-range)
+3. [Select the persons you are interested in](#select-the-persons-you-are-interested-in)
+4. [Get coordinates in meters](#get-coordinates-in-meters)
+5. [Run inverse kinematics](#run-inverse-kinematics)
+6. [Run on several videos at once](#run-on-several-videos-at-once)
+7. [Use the configuration file or run within Python](#use-the-configuration-file-or-run-within-python)
+8. [Get the angles the way you want](#get-the-angles-the-way-you-want)
+9. [Customize your output](#customize-your-output)
+10. [Use a custom pose estimation model](#use-a-custom-pose-estimation-model)
+11. [All the parameters](#all-the-parameters)
+3. [Go further](#go-further)
 1. [Too slow for you?](#too-slow-for-you)
 3. [Run inverse kinematics](#run-inverse-kinematics)
 4. [How it works](#how-it-works)
-3. [How to cite and how to contribute](#how-to-cite-and-how-to-contribute)
+4. [How to cite and how to contribute](#how-to-cite-and-how-to-contribute)
 
 <br>
 
--- sports2d-0.8.26/Sports2D/Demo/Config_demo.toml
+++ sports2d-0.8.27/Sports2D/Demo/Config_demo.toml
@@ -89,7 +89,7 @@ det_frequency = 4 # Run person detection only every N frames, and inbetwee
 device = 'auto' # 'auto', 'CPU', 'CUDA', 'MPS', 'ROCM'
 backend = 'auto' # 'auto', 'openvino', 'onnxruntime', 'opencv'
 tracking_mode = 'sports2d' # 'sports2d' or 'deepsort'. 'deepsort' is slower, harder to parametrize but can be more robust if correctly tuned
-# deepsort_params = """{'max_age':30, 'n_init':3, 'max_cosine_distance':0.3, 'max_iou_distance':0.8, 'embedder_gpu': True, embedder':'torchreid'}""" # """{dictionary between 3 double quotes}"""
+# deepsort_params = """{'max_age':30, 'n_init':3, 'max_cosine_distance':0.3, 'max_iou_distance':0.8, 'embedder_gpu': True, 'embedder':'torchreid'}""" # """{dictionary between 3 double quotes}"""
 # More robust in crowded scenes but tricky to parametrize. More information there: https://github.com/levan92/deep_sort_realtime/blob/master/deep_sort_realtime/deepsort_tracker.py#L51
 # Requires `pip install torch torchvision torchreid gdown tensorboard`
 
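The one-character fix above adds the quote that was missing before `embedder` in the commented-out `deepsort_params` example. A minimal sketch of why that matters, assuming the string is parsed as a Python dict literal (Sports2D uses `ast.literal_eval` for similar triple-quoted dictionary strings; this check itself is not Sports2D code):

```python
import ast

# Shortened versions of the two config strings; the 0.8.26 one is not a
# valid Python literal because 'embedder' lacks its opening quote.
broken = "{'max_age':30, 'embedder_gpu': True, embedder':'torchreid'}"
fixed = "{'max_age':30, 'embedder_gpu': True, 'embedder':'torchreid'}"

try:
    ast.literal_eval(broken)
except (SyntaxError, ValueError) as e:
    print(f"0.8.26 value fails to parse: {e}")

params = ast.literal_eval(fixed)  # parses cleanly with the 0.8.27 fix
print(params['embedder'])         # torchreid
```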
--- sports2d-0.8.26/Sports2D/process.py
+++ sports2d-0.8.27/Sports2D/process.py
@@ -75,29 +75,32 @@ import matplotlib as mpl
 import matplotlib.pyplot as plt
 from matplotlib.widgets import Slider, Button
 from matplotlib import patheffects
-
-from rtmlib import PoseTracker, BodyWithFeet, Wholebody, Body, Hand, Custom
 from rtmlib.tools.object_detection.post_processings import nms
 
 from Sports2D.Utilities.common import *
 from Pose2Sim.common import *
 from Pose2Sim.skeletons import *
 from Pose2Sim.calibration import toml_write
+from Pose2Sim.poseEstimation import setup_model_class_mode, setup_backend_device, setup_pose_tracker
 from Pose2Sim.triangulation import indices_of_first_last_non_nan_chunks
 from Pose2Sim.personAssociation import *
 from Pose2Sim.filtering import *
 
-# Silence numpy "RuntimeWarning: Mean of empty slice"
-import warnings
+os.environ['KMP_DUPLICATE_LIB_OK'] = 'TRUE'
+np.set_printoptions(legacy='1.21') # otherwise prints np.float64(3.0) rather than 3.0
+import warnings # Silence numpy and CoreML warnings
 warnings.filterwarnings("ignore", category=RuntimeWarning, message="Mean of empty slice")
 warnings.filterwarnings("ignore", category=RuntimeWarning, message="All-NaN slice encountered")
 warnings.filterwarnings("ignore", category=RuntimeWarning, message="invalid value encountered in scalar divide")
+warnings.filterwarnings("ignore", message=".*Input.*has a dynamic shape.*but the runtime shape.*has zero elements.*")
+
 
 # Not safe, but to be used until OpenMMLab/RTMlib's SSL certificates are updated
 import ssl
 ssl._create_default_https_context = ssl._create_unverified_context
 
 
+CORRECTION_2D_TO_3D = 1.063 # Corrective factor for height calculation: segments do not perfectly lie in the 2D plane and look shorter than in 3D
 DEFAULT_MASS = 70
 DEFAULT_HEIGHT = 1.7
 
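Among the new setup lines, `np.set_printoptions(legacy='1.21')` targets NumPy 2.x's verbose scalar repr, as the inline comment says. A quick illustrative sketch of the behavior it restores (not code from the diff; output comments assume NumPy >= 2.0):

```python
import numpy as np

x = np.float64(3.0)
print(repr(x))                      # NumPy >= 2.0 prints: np.float64(3.0)

np.set_printoptions(legacy='1.21')  # the option added in 0.8.27
print(repr(x))                      # prints: 3.0, the pre-2.0 style
```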
@@ -206,165 +209,6 @@ def setup_video(video_file_path, vid_output_path, save_vid):
     return cap, out_vid, cam_width, cam_height, fps
 
 
-def setup_model_class_mode(pose_model, mode, config_dict={}):
-    '''
-    Set up the pose model class and mode for the pose tracker.
-    '''
-
-    if pose_model.upper() in ('HALPE_26', 'BODY_WITH_FEET'):
-        model_name = 'HALPE_26'
-        ModelClass = BodyWithFeet # 26 keypoints(halpe26)
-        logging.info(f"Using HALPE_26 model (body and feet) for pose estimation in {mode} mode.")
-    elif pose_model.upper() in ('COCO_133', 'WHOLE_BODY', 'WHOLE_BODY_WRIST'):
-        model_name = 'COCO_133'
-        ModelClass = Wholebody
-        logging.info(f"Using COCO_133 model (body, feet, hands, and face) for pose estimation in {mode} mode.")
-    elif pose_model.upper() in ('COCO_17', 'BODY'):
-        model_name = 'COCO_17'
-        ModelClass = Body
-        logging.info(f"Using COCO_17 model (body) for pose estimation in {mode} mode.")
-    elif pose_model.upper() =='HAND':
-        model_name = 'HAND_21'
-        ModelClass = Hand
-        logging.info(f"Using HAND_21 model for pose estimation in {mode} mode.")
-    elif pose_model.upper() =='FACE':
-        model_name = 'FACE_106'
-        logging.info(f"Using FACE_106 model for pose estimation in {mode} mode.")
-    elif pose_model.upper() =='ANIMAL':
-        model_name = 'ANIMAL2D_17'
-        logging.info(f"Using ANIMAL2D_17 model for pose estimation in {mode} mode.")
-    else:
-        model_name = pose_model.upper()
-        logging.info(f"Using model {model_name} for pose estimation in {mode} mode.")
-    try:
-        pose_model = eval(model_name)
-    except:
-        try: # from Config.toml
-            from anytree.importer import DictImporter
-            model_name = pose_model.upper()
-            pose_model = DictImporter().import_(config_dict.get('pose').get(pose_model)[0])
-            if pose_model.id == 'None':
-                pose_model.id = None
-            logging.info(f"Using model {model_name} for pose estimation.")
-        except:
-            raise NameError(f'{pose_model} not found in skeletons.py nor in Config.toml')
-
-    # Manually select the models if mode is a dictionary rather than 'lightweight', 'balanced', or 'performance'
-    if not mode in ['lightweight', 'balanced', 'performance'] or 'ModelClass' not in locals():
-        try:
-            from functools import partial
-            try:
-                mode = ast.literal_eval(mode)
-            except: # if within single quotes instead of double quotes when run with sports2d --mode """{dictionary}"""
-                mode = mode.strip("'").replace('\n', '').replace(" ", "").replace(",", '", "').replace(":", '":"').replace("{", '{"').replace("}", '"}').replace('":"/',':/').replace('":"\\',':\\')
-                mode = re.sub(r'"\[([^"]+)",\s?"([^"]+)\]"', r'[\1,\2]', mode) # changes "[640", "640]" to [640,640]
-                mode = json.loads(mode)
-            det_class = mode.get('det_class')
-            det = mode.get('det_model')
-            det_input_size = mode.get('det_input_size')
-            pose_class = mode.get('pose_class')
-            pose = mode.get('pose_model')
-            pose_input_size = mode.get('pose_input_size')
-
-            ModelClass = partial(Custom,
-                det_class=det_class, det=det, det_input_size=det_input_size,
-                pose_class=pose_class, pose=pose, pose_input_size=pose_input_size)
-            logging.info(f"Using model {model_name} with the following custom parameters: {mode}.")
-
-            if pose_class == 'RTMO' and model_name != 'COCO_17':
-                logging.warning("RTMO currently only supports 'Body' pose_model. Switching to 'Body'.")
-                pose_model = eval('COCO_17')
-
-        except (json.JSONDecodeError, TypeError):
-            logging.warning("Invalid mode. Must be 'lightweight', 'balanced', 'performance', or '''{dictionary}''' of parameters within triple quotes. Make sure input_sizes are within square brackets.")
-            logging.warning('Using the default "balanced" mode.')
-            mode = 'balanced'
-
-    return pose_model, ModelClass, mode
-
-
-def setup_backend_device(backend='auto', device='auto'):
-    '''
-    Set up the backend and device for the pose tracker based on the availability of hardware acceleration.
-    TensorRT is not supported by RTMLib yet: https://github.com/Tau-J/rtmlib/issues/12
-
-    If device and backend are not specified, they are automatically set up in the following order of priority:
-    1. GPU with CUDA and ONNXRuntime backend (if CUDAExecutionProvider is available)
-    2. GPU with ROCm and ONNXRuntime backend (if ROCMExecutionProvider is available, for AMD GPUs)
-    3. GPU with MPS or CoreML and ONNXRuntime backend (for macOS systems)
-    4. CPU with OpenVINO backend (default fallback)
-    '''
-
-    if device!='auto' and backend!='auto':
-        device = device.lower()
-        backend = backend.lower()
-
-    if device=='auto' or backend=='auto':
-        if device=='auto' and backend!='auto' or device!='auto' and backend=='auto':
-            logging.warning(f"If you set device or backend to 'auto', you must set the other to 'auto' as well. Both device and backend will be determined automatically.")
-
-        try:
-            import torch
-            import onnxruntime as ort
-            if torch.cuda.is_available() == True and 'CUDAExecutionProvider' in ort.get_available_providers():
-                device = 'cuda'
-                backend = 'onnxruntime'
-                logging.info(f"\nValid CUDA installation found: using ONNXRuntime backend with GPU.")
-            elif torch.cuda.is_available() == True and 'ROCMExecutionProvider' in ort.get_available_providers():
-                device = 'rocm'
-                backend = 'onnxruntime'
-                logging.info(f"\nValid ROCM installation found: using ONNXRuntime backend with GPU.")
-            else:
-                raise
-        except:
-            try:
-                import onnxruntime as ort
-                if 'MPSExecutionProvider' in ort.get_available_providers() or 'CoreMLExecutionProvider' in ort.get_available_providers():
-                    device = 'mps'
-                    backend = 'onnxruntime'
-                    logging.info(f"\nValid MPS installation found: using ONNXRuntime backend with GPU.")
-                else:
-                    raise
-            except:
-                device = 'cpu'
-                backend = 'openvino'
-                logging.info(f"\nNo valid CUDA installation found: using OpenVINO backend with CPU.")
-
-    return backend, device
-
-
-def setup_pose_tracker(ModelClass, det_frequency, mode, tracking, backend, device):
-    '''
-    Set up the RTMLib pose tracker with the appropriate model and backend.
-    If CUDA is available, use it with ONNXRuntime backend; else use CPU with openvino
-
-    INPUTS:
-    - ModelClass: class. The RTMlib model class to use for pose detection (Body, BodyWithFeet, Wholebody)
-    - det_frequency: int. The frequency of pose detection (every N frames)
-    - mode: str. The mode of the pose tracker ('lightweight', 'balanced', 'performance')
-    - tracking: bool. Whether to track persons across frames with RTMlib tracker
-    - backend: str. The backend to use for pose detection (onnxruntime, openvino, opencv)
-    - device: str. The device to use for pose detection (cpu, cuda, rocm, mps)
-
-    OUTPUTS:
-    - pose_tracker: PoseTracker. The initialized pose tracker object
-    '''
-
-    backend, device = setup_backend_device(backend=backend, device=device)
-
-    # Initialize the pose tracker with Halpe26 model
-    pose_tracker = PoseTracker(
-        ModelClass,
-        det_frequency=det_frequency,
-        mode=mode,
-        backend=backend,
-        device=device,
-        tracking=tracking,
-        to_openpose=False)
-
-    return pose_tracker
-
-
 def flip_left_right_direction(person_X, L_R_direction_idx, keypoints_names, keypoints_ids):
     '''
     Flip the points to the right or left for more consistent angle calculation
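These three helpers were not dropped: the import hunk above now pulls `setup_model_class_mode`, `setup_backend_device`, and `setup_pose_tracker` from `Pose2Sim.poseEstimation` instead of defining them in `Sports2D/process.py`. Assuming the signatures shown in the removed code are unchanged on the Pose2Sim side (>=0.10.40, the new pinned minimum), call sites stay the same; a hedged sketch with example argument values:

```python
from Pose2Sim.poseEstimation import setup_model_class_mode, setup_backend_device, setup_pose_tracker

# Same call pattern as the code removed above; the argument values here are
# illustrative, not taken from the diff.
pose_model, ModelClass, mode = setup_model_class_mode('body_with_feet', 'balanced')
backend, device = setup_backend_device(backend='auto', device='auto')
pose_tracker = setup_pose_tracker(ModelClass, det_frequency=4, mode=mode,
                                  tracking=False, backend=backend, device=device)
```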
@@ -1959,7 +1803,7 @@ def process_fun(config_dict, video_file, time_range, frame_rate, result_dir):
 # cv2.circle(frame, (int(kpt[0]), int(kpt[1])), 3, colors[person_idx%len(colors)], -1)
 # if not np.isnan(bboxes[person_idx]).any():
 # cv2.rectangle(frame, (int(bboxes[person_idx][0]), int(bboxes[person_idx][1])), (int(bboxes[person_idx][2]), int(bboxes[person_idx][3])), colors[person_idx%len(colors)], 1)
-# cv2.imshow(f'{video_file} Sports2D', frame)
+# cv2.imshow(f'{video_file} Sports2D', frame)
 
 # Track poses across frames
 if tracking_mode == 'deepsort':
@@ -2308,8 +2152,8 @@ def process_fun(config_dict, video_file, time_range, frame_rate, result_dir):
 if to_meters and save_pose:
     logging.info('\nConverting pose to meters:')
 
-    # Compute height in px of the first person
-    height_px = compute_height(trc_data[0].iloc[:,1:], new_keypoints_names,
+    # Compute height of the first person in pixels
+    height_px = CORRECTION_2D_TO_3D * compute_height(trc_data[0].iloc[:,1:], new_keypoints_names,
         fastest_frames_to_remove_percent=fastest_frames_to_remove_percent, close_to_zero_speed=close_to_zero_speed_px, large_hip_knee_angles=large_hip_knee_angles, trimmed_extrema_percent=trimmed_extrema_percent)
 
 # Compute distance from camera to compensate for perspective effects
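The only functional change here is applying the new `CORRECTION_2D_TO_3D` factor (1.063, defined in the import hunk above) to the estimated pixel height before the meter conversion. Back-of-the-envelope effect, with an illustrative number rather than one from the diff:

```python
CORRECTION_2D_TO_3D = 1.063  # constant added in 0.8.27

measured_height_px = 850.0   # hypothetical compute_height() result
corrected_px = CORRECTION_2D_TO_3D * measured_height_px
print(corrected_px)          # 903.55: ~6.3% taller, compensating segments
                             # that lean out of the image plane
```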
--- sports2d-0.8.26/pyproject.toml
+++ sports2d-0.8.27/pyproject.toml
@@ -34,7 +34,7 @@ classifiers = [
 urls = {Homepage = "https://github.com/davidpagnon/Sports2D", "Bug Tracker" = "https://github.com/davidpagnon/Sports2D/issues"}
 dependencies = [
     "imageio_ffmpeg",
-    "Pose2Sim>=0.10.38"
+    "Pose2Sim>=0.10.40"
 ]
 
 [tool.setuptools_scm]
--- sports2d-0.8.26/PKG-INFO
+++ sports2d-0.8.27/sports2d.egg-info/PKG-INFO
@@ -1,6 +1,6 @@
 Metadata-Version: 2.4
 Name: sports2d
-Version: 0.8.26
+Version: 0.8.27
 Summary: Compute 2D human pose and angles from a video or a webcam.
 Author-email: David Pagnon <contact@david-pagnon.com>
 Maintainer-email: David Pagnon <contact@david-pagnon.com>
@@ -23,7 +23,7 @@ Requires-Python: >=3.9
 Description-Content-Type: text/markdown
 License-File: LICENSE
 Requires-Dist: imageio_ffmpeg
-Requires-Dist: Pose2Sim>=0.10.38
+Requires-Dist: Pose2Sim>=0.10.40
 Dynamic: license-file
 
 
@@ -100,23 +100,23 @@ If you need 3D research-grade markerless joint kinematics, consider using severa
 1. [Run the demo](#run-the-demo)
 2. [Visualize in OpenSim](#visualize-in-opensim)
 3. [Visualize in Blender](#visualize-in-blender)
-3. [Play with the parameters](#play-with-the-parameters)
-1. [Run on a custom video or on a webcam](#run-on-a-custom-video-or-on-a-webcam)
-2. [Run for a specific time range](#run-for-a-specific-time-range)
-3. [Select the persons you are interested in](#select-the-persons-you-are-interested-in)
-4. [Get coordinates in meters](#get-coordinates-in-meters)
-5. [Run inverse kinematics](#run-inverse-kinematics)
-6. [Run on several videos at once](#run-on-several-videos-at-once)
-7. [Use the configuration file or run within Python](#use-the-configuration-file-or-run-within-python)
-8. [Get the angles the way you want](#get-the-angles-the-way-you-want)
-9. [Customize your output](#customize-your-output)
-10. [Use a custom pose estimation model](#use-a-custom-pose-estimation-model)
-11. [All the parameters](#all-the-parameters)
-2. [Go further](#go-further)
+2. [Play with the parameters](#play-with-the-parameters)
+1. [Run on a custom video or on a webcam](#run-on-a-custom-video-or-on-a-webcam)
+2. [Run for a specific time range](#run-for-a-specific-time-range)
+3. [Select the persons you are interested in](#select-the-persons-you-are-interested-in)
+4. [Get coordinates in meters](#get-coordinates-in-meters)
+5. [Run inverse kinematics](#run-inverse-kinematics)
+6. [Run on several videos at once](#run-on-several-videos-at-once)
+7. [Use the configuration file or run within Python](#use-the-configuration-file-or-run-within-python)
+8. [Get the angles the way you want](#get-the-angles-the-way-you-want)
+9. [Customize your output](#customize-your-output)
+10. [Use a custom pose estimation model](#use-a-custom-pose-estimation-model)
+11. [All the parameters](#all-the-parameters)
+3. [Go further](#go-further)
 1. [Too slow for you?](#too-slow-for-you)
 3. [Run inverse kinematics](#run-inverse-kinematics)
 4. [How it works](#how-it-works)
-3. [How to cite and how to contribute](#how-to-cite-and-how-to-contribute)
+4. [How to cite and how to contribute](#how-to-cite-and-how-to-contribute)
 
 <br>
 
--- sports2d-0.8.26/sports2d.egg-info/SOURCES.txt
+++ sports2d-0.8.27/sports2d.egg-info/SOURCES.txt
@@ -18,7 +18,6 @@ Content/paper.bib
 Content/paper.md
 Content/sports2d_blender.gif
 Content/sports2d_opensim.gif
-Sports2D/Sports2D.ipynb
 Sports2D/Sports2D.py
 Sports2D/__init__.py
 Sports2D/process.py
--- /dev/null
+++ sports2d-0.8.27/sports2d.egg-info/requires.txt
@@ -0,0 +1,2 @@
+imageio_ffmpeg
+Pose2Sim>=0.10.40