sports2d 0.6.2__tar.gz → 0.7.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (23)
  1. {sports2d-0.6.2 → sports2d-0.7.0}/PKG-INFO +315 -190
  2. {sports2d-0.6.2 → sports2d-0.7.0}/README.md +312 -189
  3. {sports2d-0.6.2 → sports2d-0.7.0}/Sports2D/Demo/Config_demo.toml +49 -28
  4. {sports2d-0.6.2 → sports2d-0.7.0}/Sports2D/Sports2D.py +40 -22
  5. {sports2d-0.6.2 → sports2d-0.7.0}/Sports2D/Utilities/common.py +124 -3
  6. {sports2d-0.6.2 → sports2d-0.7.0}/Sports2D/Utilities/skeletons.py +6 -8
  7. {sports2d-0.6.2 → sports2d-0.7.0}/Sports2D/Utilities/tests.py +16 -8
  8. {sports2d-0.6.2 → sports2d-0.7.0}/Sports2D/process.py +221 -72
  9. {sports2d-0.6.2 → sports2d-0.7.0}/setup.cfg +3 -1
  10. {sports2d-0.6.2 → sports2d-0.7.0}/sports2d.egg-info/PKG-INFO +315 -190
  11. {sports2d-0.6.2 → sports2d-0.7.0}/sports2d.egg-info/requires.txt +2 -0
  12. {sports2d-0.6.2 → sports2d-0.7.0}/LICENSE +0 -0
  13. {sports2d-0.6.2 → sports2d-0.7.0}/Sports2D/Demo/demo.mp4 +0 -0
  14. {sports2d-0.6.2 → sports2d-0.7.0}/Sports2D/Utilities/__init__.py +0 -0
  15. {sports2d-0.6.2 → sports2d-0.7.0}/Sports2D/Utilities/filter.py +0 -0
  16. {sports2d-0.6.2 → sports2d-0.7.0}/Sports2D/__init__.py +0 -0
  17. {sports2d-0.6.2 → sports2d-0.7.0}/pyproject.toml +0 -0
  18. {sports2d-0.6.2 → sports2d-0.7.0}/setup.py +0 -0
  19. {sports2d-0.6.2 → sports2d-0.7.0}/sports2d.egg-info/SOURCES.txt +0 -0
  20. {sports2d-0.6.2 → sports2d-0.7.0}/sports2d.egg-info/dependency_links.txt +0 -0
  21. {sports2d-0.6.2 → sports2d-0.7.0}/sports2d.egg-info/entry_points.txt +0 -0
  22. {sports2d-0.6.2 → sports2d-0.7.0}/sports2d.egg-info/not-zip-safe +0 -0
  23. {sports2d-0.6.2 → sports2d-0.7.0}/sports2d.egg-info/top_level.txt +0 -0
@@ -1,6 +1,6 @@
  Metadata-Version: 2.2
  Name: sports2d
- Version: 0.6.2
+ Version: 0.7.0
  Summary: Detect pose and compute 2D joint angles from a video.
  Home-page: https://github.com/davidpagnon/Sports2D
  Author: David Pagnon
@@ -33,11 +33,13 @@ Requires-Dist: opencv-python
  Requires-Dist: matplotlib
  Requires-Dist: PyQt5
  Requires-Dist: statsmodels
+ Requires-Dist: c3d
  Requires-Dist: rtmlib
  Requires-Dist: openvino
  Requires-Dist: tqdm
  Requires-Dist: imageio_ffmpeg
  Requires-Dist: deep-sort-realtime
+ Requires-Dist: Pose2Sim

  [![Continuous integration](https://github.com/davidpagnon/sports2d/actions/workflows/continuous-integration.yml/badge.svg?branch=main)](https://github.com/davidpagnon/sports2d/actions/workflows/continuous-integration.yml)
@@ -63,14 +65,15 @@ Requires-Dist: deep-sort-realtime
  > **`Announcement:`\
  > Complete rewriting of the code!** Run `pip install sports2d -U` to get the latest version.
+ > - MarkerAugmentation and Inverse Kinematics for accurate 3D motion with OpenSim. **New in v0.7!**
+ > - Any detector and pose estimation model can be used. **New in v0.6!**
+ > - Results in meters rather than pixels. **New in v0.5!**
  > - Faster, more accurate
  > - Works from a webcam
- > - Results in meters rather than pixels. **New in v0.5!**
  > - Better visualization output
  > - More flexible, easier to run
- > - Batch process multiple videos at once
- >
- > Note: Colab version broken for now. I'll fix it in the next few weeks.
+
+ ***N.B.:*** As always, I am more than happy to welcome contributions (see [How to contribute](#how-to-contribute-and-to-do-list))!
  <!--User-friendly Colab version released! (and latest issues fixed, too)\
  Works on any smartphone!**\
  [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://bit.ly/Sports2D_Colab)-->
@@ -92,12 +95,26 @@ If you need 3D research-grade markerless joint kinematics, consider using severa
  ## Contents
  1. [Installation and Demonstration](#installation-and-demonstration)
  1. [Installation](#installation)
+ 1. [Quick install](#quick-install)
+ 2. [Full install](#full-install)
  2. [Demonstration](#demonstration)
+ 1. [Run the demo](#run-the-demo)
+ 2. [Visualize in OpenSim](#visualize-in-opensim)
+ 3. [Visualize in Blender](#visualize-in-blender)
  3. [Play with the parameters](#play-with-the-parameters)
+ 1. [Run on a custom video or on a webcam](#run-on-a-custom-video-or-on-a-webcam)
+ 2. [Run for a specific time range](#run-for-a-specific-time-range)
+ 3. [Get coordinates in meters](#get-coordinates-in-meters)
+ 4. [Run inverse kinematics](#run-inverse-kinematics)
+ 5. [Run on several videos at once](#run-on-several-videos-at-once)
+ 6. [Use the configuration file or run within Python](#use-the-configuration-file-or-run-within-python)
+ 7. [Get the angles the way you want](#get-the-angles-the-way-you-want)
+ 8. [Customize your output](#customize-your-output)
+ 9. [Use a custom pose estimation model](#use-a-custom-pose-estimation-model)
+ 10. [All the parameters](#all-the-parameters)
  2. [Go further](#go-further)
  1. [Too slow for you?](#too-slow-for-you)
- 2. [What you need is what you get](#what-you-need-is-what-you-get)
- 3. [All the parameters](#all-the-parameters)
+ 3. [Run inverse kinematics](#run-inverse-kinematics)
  4. [How it works](#how-it-works)
  3. [How to cite and how to contribute](#how-to-cite-and-how-to-contribute)

@@ -115,33 +132,54 @@ If you need 3D research-grade markerless joint kinematics, consider using severa

  -->

- - OPTION 1: **Quick install** \
- Open a terminal. Type `python -V` to make sure python >=3.8 <=3.11 is installed. If not, install it [from there](https://www.python.org/downloads/). Run:
- ``` cmd
- pip install sports2d
- ```
+ #### Quick install
+
+ > N.B.: Full install is required for OpenSim inverse kinematics.
+
+ Open a terminal. Type `python -V` to make sure python >=3.10 <=3.11 is installed. If not, install it [from there](https://www.python.org/downloads/).
+
+ Run:
+ ``` cmd
+ pip install sports2d
+ ```

- - OPTION 2: **Safer install with Anaconda**\
- Install [Miniconda](https://docs.conda.io/en/latest/miniconda.html):\
- Open an Anaconda prompt and create a virtual environment by typing:
- ``` cmd
- conda create -n Sports2D python=3.9 -y
- conda activate Sports2D
- pip install sports2d
+ Alternatively, build from source to test the latest changes:
+ ``` cmd
+ git clone https://github.com/davidpagnon/sports2d.git
+ cd sports2d
+ pip install .
+ ```
+
+ <br>
+
+ #### Full install
+
+ > Only needed if you want to run inverse kinematics (`--do_ik True`).
+
+ - Install Anaconda or [Miniconda](https://docs.conda.io/en/latest/miniconda.html):\
+ Open an Anaconda prompt and create a virtual environment:
+ ``` cmd
+ conda create -n Sports2D python=3.10 -y
+ conda activate Sports2D
+ ```
+ - **Install OpenSim**:\
+ Install the OpenSim Python API (if you do not want to install via conda, refer [to this page](https://opensimconfluence.atlassian.net/wiki/spaces/OpenSim/pages/53085346/Scripting+in+Python#ScriptinginPython-SettingupyourPythonscriptingenvironment(ifnotusingconda))):
+ ```
+ conda install -c opensim-org opensim -y
  ```
+
+ - **Install Sports2D with Pose2Sim**:
+ ``` cmd
+ pip install sports2d pose2sim
+ ```

- - OPTION 3: **Build from source and test the last changes**\
- Open a terminal in the directory of your choice and clone the Sports2D repository.
- ``` cmd
- git clone https://github.com/davidpagnon/sports2d.git
- cd sports2d
- pip install .
- ```

  <br>

  ### Demonstration

+ #### Run the demo:
+
  Just open a command line and run:
  ``` cmd
  sports2d
@@ -166,213 +204,218 @@ The Demo video is voluntarily challenging to demonstrate the robustness of the p

  <br>

- ### Play with the parameters

- For a full list of the available parameters, see [this section](#all-the-parameters) of the documentation, check the [Config_Demo.toml](https://github.com/davidpagnon/Sports2D/blob/main/Sports2D/Demo/Config_demo.toml) file, or type:
- ``` cmd
- sports2d --help
- ```
+ #### Visualize in Blender
+
+ 1. **Install the Pose2Sim_Blender add-on.**\
+ Follow instructions on the [Pose2Sim_Blender](https://github.com/davidpagnon/Pose2Sim_Blender) add-on page.
+ 2. **Open your point coordinates.**\
+ **Add Markers**: open your trc file (e.g., `coords_m.trc`) from your `result_dir` folder.
+
+ This will optionally create **an animated rig** based on the motion of the captured person.
+ 3. **Open your animated skeleton:**\
+ Make sure you first set `--do_ik True` ([full install](#full-install) required). See the [inverse kinematics](#run-inverse-kinematics) section for more details.
+ - **Add Model**: Open your scaled model (e.g., `Model_Pose2Sim_LSTM.osim`).
+ - **Add Motion**: Open your motion file (e.g., `angles.mot`). Make sure the skeleton is selected in the outliner.
+
+ The OpenSim skeleton is not rigged yet. **[Feel free to contribute!](https://github.com/perfanalytics/pose2sim/issues/40)**
+
+ <img src="Content/sports2d_blender.gif" width="760">
+
  <br>

- #### Run on custom video with default parameters:
- ``` cmd
- sports2d --video_input path_to_video.mp4
- ```

- #### Run on webcam with default parameters:
- ``` cmd
- sports2d --video_input webcam
- ```
+ #### Visualize in OpenSim
+
+ 1. Install **[OpenSim GUI](https://simtk.org/frs/index.php?group_id=91)**.
+ 2. **Visualize point coordinates:**\
+ **File -> Preview experimental data:** Open your trc file (e.g., `coords_m.trc`) from your `result_dir` folder.
+ 3. **Visualize angles:**\
+ To open an animated model and run further biomechanical analysis, make sure you first set `--do_ik True` ([full install](#full-install) required). See the [inverse kinematics](#run-inverse-kinematics) section for more details.
+ - **File -> Open Model:** Open your scaled model (e.g., `Model_Pose2Sim_LSTM.osim`).
+ - **File -> Load Motion:** Open your motion file (e.g., `angles.mot`).
+
+ <img src="Content/sports2d_opensim.gif" width="760">
+
  <br>

- #### Get coordinates in meters rather than in pixels:

- <!-- You either need to provide a calibration file, or simply the height of a person (Note that the latter will not take distortions into account, and that it will be less accurate for motion in the frontal plane).\-->
- Just provide the height of the analyzed person (and their ID in case of multiple person detection).\
- The floor angle and the origin of the xy axis are computed automatically from gait. If you analyze another type of motion, you can manually specify them.\
- Note that it does not take distortions into account, and that it will be less accurate for motions in the frontal plane.

- ``` cmd
- sports2d --to_meters True --calib_file calib_demo.toml
- ```
- ``` cmd
- sports2d --to_meters True --person_height 1.65 --calib_on_person_id 2
- ```
- ``` cmd
- sports2d --to_meters True --person_height 1.65 --calib_on_person_id 2 --floor_angle 0 --xy_origin 0 940
- ```
+ ### Play with the parameters
+
+ For a full list of the available parameters, see [this section](#all-the-parameters) of the documentation, check the [Config_Demo.toml](https://github.com/davidpagnon/Sports2D/blob/main/Sports2D/Demo/Config_demo.toml) file, or type `sports2d --help`. All non-specified parameters are set to their default values.
+
  <br>

- #### Run with custom parameters (all non specified are set to default):
- ``` cmd
- sports2d --video_input demo.mp4 other_video.mp4
- ```
- ``` cmd
- sports2d --show_graphs False --time_range 1.2 2.7 --result_dir path_to_result_dir --slowmo_factor 4
- ```
- ``` cmd
- sports2d --multiperson false --pose_model Body --mode lightweight --det_frequency 50
- ```
- ``` cmd
- sports2d --tracking_mode deepsort --deepsort_params """{'max_age':30, 'n_init':3, 'nms_max_overlap':0.8, 'max_cosine_distance':0.3, 'nn_budget':200, 'max_iou_distance':0.8, 'embedder_gpu': True}"""
- ```
+
+ #### Run on a custom video or on a webcam:
+ ``` cmd
+ sports2d --video_input path_to_video.mp4
+ ```
+
+ ``` cmd
+ sports2d --video_input webcam
+ ```
+
  <br>

- #### Run with a toml configuration file:
- ``` cmd
- sports2d --config Config_demo.toml
- ```
+
+ #### Run for a specific time range:
+ ```cmd
+ sports2d --time_range 1.2 2.7
+ ```
+
  <br>

- #### Run within a Python script:
- ``` python
- from Sports2D import Sports2D; Sports2D.process('Config_demo.toml')
- ```
- ``` python
- from Sports2D import Sports2D; Sports2D.process(config_dict)
- ```
+
+ #### Get coordinates in meters:
+ > **N.B.:** Depth is estimated from a neutral pose.
+
+ <!-- You either need to provide a calibration file, or simply the height of a person (Note that the latter will not take distortions into account, and that it will be less accurate for motion in the frontal plane).\-->
+ You may need to convert pixel coordinates to meters.\
+ Just provide the height of the reference person (and their ID in case of multiple person detection).
+
+ You can also specify whether the visible side of the person is left, right, front, or back. Set it to 'auto' to determine it automatically (this only works for motion in the sagittal plane), or to 'none' to keep 2D instead of 3D coordinates (for example, if the person first goes right, then left).
+
+ The floor angle and the origin of the xy axis are computed automatically from gait. If you analyze another type of motion, you can manually specify them. Note that `y` points down.\
+ Also note that distortions are not taken into account, and that results will be less accurate for motions in the frontal plane.
+
+ <!-- ``` cmd
+ sports2d --to_meters True --calib_file calib_demo.toml
+ ``` -->
+ ``` cmd
+ sports2d --to_meters True --px_to_m_person_height 1.65 --px_to_m_from_person_id 2
+ ```
+ ``` cmd
+ sports2d --to_meters True --px_to_m_person_height 1.65 --px_to_m_from_person_id 2 `
+ --visible_side front none auto --floor_angle 0 --xy_origin 0 940
+ ```
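The conversion above boils down to scaling by the reference person's height, recentering on the chosen origin, and flipping `y` so it points up. A minimal sketch of the idea (illustrative only — the function and variable names are not Sports2D's, and the floor-angle handling is simplified):

```python
import numpy as np

def px_to_m(coords_px, person_height_m, person_height_px, xy_origin_px, floor_angle_rad=0.0):
    """Convert pixel coordinates to meters using a known person height.

    coords_px: (N, 2) array of [x, y] pixel coordinates; y points down in images.
    """
    scale = person_height_m / person_height_px        # meters per pixel
    centered = (np.asarray(coords_px, float) - xy_origin_px) * scale
    centered[:, 1] *= -1                              # flip y so it points up
    c, s = np.cos(floor_angle_rad), np.sin(floor_angle_rad)
    rot = np.array([[c, -s], [s, c]])                 # undo the floor inclination
    return centered @ rot.T

# A 1.65 m person spanning 550 px, with the xy origin at pixel (0, 940)
coords_m = px_to_m([[0, 940], [0, 390]], 1.65, 550, np.array([0, 940]))
```

Since depth is estimated from a neutral pose rather than measured, this stays a planar approximation, which is why the frontal-plane caveat above applies.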

  <br>

- ## Go further

- ### Too slow for you?
+ #### Run inverse kinematics:
+ > N.B.: [Full install](#full-install) required.

- **Quick fixes:**
- - Use ` --save_vid false --save_img false --show_realtime_results false`: Will not save images or videos, and will not display the results in real time.
- - Use `--mode lightweight`: Will use a lighter version of RTMPose, which is faster but less accurate.\
- Note that any detection and pose models can be used (first [deploy them with MMPose](https://mmpose.readthedocs.io/en/latest/user_guides/how_to_deploy.html#onnx) if you do not have their .onnx or .zip files), with the following formalism:
- ```
- --mode """{'det_class':'YOLOX',
- 'det_model':'https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/yolox_nano_8xb8-300e_humanart-40f6f0d0.zip',
- 'det_input_size':[416,416],
- 'pose_class':'RTMPose',
- 'pose_model':'https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-t_simcc-body7_pt-body7_420e-256x192-026a1439_20230504.zip',
- 'pose_input_size':[192,256]}"""
- ```
- - Use `--det_frequency 50`: Will detect poses only every 50 frames, and track keypoints in between, which is faster.
- - Use `--multiperson false`: Can be used if one single person is present in the video. Otherwise, persons' IDs may be mixed up.
- - Use `--load_trc <path_to_file_px.trc>`: Will use pose estimation results from a file. Useful if you want to use different parameters for pixel to meter conversion or angle calculation without running detection and pose estimation all over.
- - Use `--tracking_mode sports2d`: Will use the default Sports2D tracker. Unlike DeepSort, it is faster, does not require any parametrization, and is as good in non-crowded scenes.
+ > **N.B.:** The person needs to be moving on a single plane for the whole selected time range.

- <br>
+ OpenSim inverse kinematics lets you enforce joint constraints and joint angle limits, keep bone lengths constant throughout the motion, and optionally enforce equal segment lengths on the left and right sides. Overall, it gives more biomechanically accurate results. It also opens the door to computing joint torques, muscle forces, ground reaction forces, and more, for example [with Moco](https://opensim-org.github.io/opensim-moco-site/).

- **Use your GPU**:\
- Will be much faster, with no impact on accuracy. However, the installation takes about 6 GB of additional storage space.
+ This is done via [Pose2Sim](https://github.com/perfanalytics/pose2sim).\
+ Model scaling is done according to the mean of the segment lengths, across a subset of frames. We remove the 10% fastest frames (potential outliers), the frames where the speed is 0 (person probably out of frame), the frames where the average knee and hip flexion angles are above 45° (pose estimation is not precise when the person is crouching), and the 20% most extreme segment values after the previous operations (potential outliers). All these parameters can be edited in your Config.toml file.
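The frame-selection logic described above can be sketched as follows (a simplified illustration of the filtering steps, not Pose2Sim's actual code; the parameter names mirror the configuration keys listed further down in this README):

```python
import numpy as np

def robust_segment_length(lengths, speeds, hip_knee_flexion,
                          fastest_frames_to_remove_percent=0.1,
                          large_hip_knee_angles=45,
                          trimmed_extrema_percent=0.2):
    """Mean segment length over the 'good' frames, following the steps above."""
    lengths = np.asarray(lengths, float)
    speeds = np.asarray(speeds, float)
    angles = np.asarray(hip_knee_flexion, float)

    keep = speeds > 0                                        # person probably out of frame
    keep &= speeds < np.quantile(speeds, 1 - fastest_frames_to_remove_percent)
    keep &= angles <= large_hip_knee_angles                  # crouched poses are imprecise

    vals = np.sort(lengths[keep])
    trim = int(len(vals) * trimmed_extrema_percent / 2)      # trim both tails
    return vals[trim:len(vals) - trim].mean()
```

The trimmed mean at the end is what makes a single mis-detected frame mostly harmless to the scaled model.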

- 1. Run `nvidia-smi` in a terminal. If this results in an error, your GPU is probably not compatible with CUDA. If not, note the "CUDA version": it is the latest version your driver is compatible with (more information [on this post](https://stackoverflow.com/questions/60987997/why-torch-cuda-is-available-returns-false-even-after-installing-pytorch-with)).
+ ```cmd
+ sports2d --time_range 1.2 2.7 `
+ --do_ik true `
+ --px_to_m_from_person_id 1 --px_to_m_person_height 1.65 `
+ --visible_side front auto
+ ```

- Then go to the [ONNXruntime requirement page](https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), note the latest compatible CUDA and cuDNN requirements. Next, go to the [pyTorch website](https://pytorch.org/get-started/previous-versions/) and install the latest version that satisfies these requirements (beware that torch 2.4 ships with cuDNN 9, while torch 2.3 installs cuDNN 8). For example:
- ``` cmd
- pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
- ```
+ You can optionally use the LSTM marker augmentation to improve the quality of the output motion.\
+ You can also optionally give the participants their proper masses. Mass has no influence on motion, only on forces (if you decide to further pursue kinetics analysis).

- <!-- > ***Note:*** Issues were reported with the default command. However, this has been tested and works:
- `pip install torch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1 --index-url https://download.pytorch.org/whl/cu118` -->
+ ```cmd
+ sports2d --time_range 1.2 2.7 `
+ --do_ik true --use_augmentation True `
+ --px_to_m_from_person_id 1 --px_to_m_person_height 1.65 `
+ --visible_side front left --participant_mass 67.0 55.0
+ ```

- 2. Finally, install ONNX Runtime with GPU support:
- ```
- pip install onnxruntime-gpu
- ```
+ <br>

- 3. Check that everything went well within Python with these commands:
- ``` bash
- python -c 'import torch; print(torch.cuda.is_available())'
- python -c 'import onnxruntime as ort; print(ort.get_available_providers())'
- # Should print "True ['CUDAExecutionProvider', ...]"
- ```
- <!-- print(f'torch version: {torch.__version__}, cuda version: {torch.version.cuda}, cudnn version: {torch.backends.cudnn.version()}, onnxruntime version: {ort.__version__}') -->
+
+ #### Run on several videos at once:
+ ``` cmd
+ sports2d --video_input demo.mp4 other_video.mp4
+ ```
+ Analyze all videos over the same time range:
+ ```cmd
+ sports2d --video_input demo.mp4 other_video.mp4 --time_range 1.2 2.7
+ ```
+ Use a different time range for each video:
+ ```cmd
+ sports2d --video_input demo.mp4 other_video.mp4 --time_range 1.2 2.7 0 3.5
+ ```
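When several videos are given, the flat `--time_range` list is interpreted pairwise — roughly like this (an illustrative sketch, not the actual argument-parsing code):

```python
def time_ranges_per_video(videos, time_range):
    """Map a flat [start, end, start, end, ...] list onto the videos."""
    if not time_range:
        return {v: None for v in videos}                  # whole video
    if len(time_range) == 2:                              # same range for all videos
        return {v: tuple(time_range) for v in videos}
    if len(time_range) == 2 * len(videos):                # one range per video
        pairs = zip(time_range[::2], time_range[1::2])
        return dict(zip(videos, pairs))
    raise ValueError("time_range must hold one pair, or one pair per video")

ranges = time_ranges_per_video(['demo.mp4', 'other_video.mp4'], [1.2, 2.7, 0, 3.5])
# → {'demo.mp4': (1.2, 2.7), 'other_video.mp4': (0, 3.5)}
```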

  <br>

- ### What you need is what you get

- #### Analyze a fraction of your video:
- ```cmd
- sports2d --time_range 1.2 2.7
+ #### Use the configuration file or run within Python:
+
+ - Run with a configuration file:
+ ``` cmd
+ sports2d --config Config_demo.toml
  ```
+ - Run within Python:
+ ``` python
+ from Sports2D import Sports2D; Sports2D.process('Config_demo.toml')
+ ```
+ - Run within Python with a dictionary (for example, `config_dict = toml.load('Config_demo.toml')`):
+ ``` python
+ from Sports2D import Sports2D; Sports2D.process(config_dict)
+ ```
+
  <br>

- #### Customize your output:
- - Choose whether you want video, images, trc pose file, angle mot file, real-time display, and plots:
- ```cmd
- sports2d --save_vid false --save_img true --save_pose false --save_angles true --show_realtime_results false --show_graphs false
- ```
+
+ #### Get the angles the way you want:
+
  - Choose which angles you need:
  ```cmd
  sports2d --joint_angles 'right knee' 'left knee' --segment_angles None
  ```
  - Choose where to display the angles: either as a list on the upper-left of the image, or near the joint/segment, or both:
  ```cmd
- sports2d --display_angle_values_on body
+ sports2d --display_angle_values_on body # OR none, or list
  ```
  - You can also decide not to calculate and display angles at all:
  ```cmd
  sports2d --calculate_angles false
  ```
+ - To run **inverse kinematics with OpenSim**, check [this section](#run-inverse-kinematics).
+
  <br>

- #### Run on several videos at once:
- You can individualize (or not) the parameters.
- ```cmd
- sports2d --video_input demo.mp4 other_video.mp4 --time_range 1.2 2.7
+
+ #### Customize your output:
+ - Only analyze the most prominent person:
+ ``` cmd
+ sports2d --multiperson false
  ```
+ - Choose whether you want video, images, trc pose file, angle mot file, real-time display, and plots:
  ```cmd
- sports2d --video_input demo.mp4 other_video.mp4 --time_range 1.2 2.7 0 3.5
+ sports2d --save_vid false --save_img true `
+ --save_pose false --save_angles true `
+ --show_realtime_results false --show_graphs false
+ ```
+ - Save results to a custom directory and specify the slow-motion factor:
+ ``` cmd
+ sports2d --result_dir path_to_result_dir --slowmo_factor 4
  ```
-
- <!--
- <br>
-
- ### Constrain results to a biomechanical model
-
- > Why + image\
- > Add explanation in "how it works" section
-
- #### Installation
- You will need to install OpenSim via conda, which makes installation slightly more complicated.
-
- 1. **Install Anaconda or [Miniconda](https://docs.conda.io/en/latest/miniconda.html).**
-
- Once installed, open an Anaconda prompt and create a virtual environment:
- ```
- conda create -n Sports2D python=3.9 -y
- conda activate Sports2D
- ```
-
- 2. **Install OpenSim**:\
- Install the OpenSim Python API (if you do not want to install via conda, refer [to this page](https://opensimconfluence.atlassian.net/wiki/spaces/OpenSim/pages/53085346/Scripting+in+Python#ScriptinginPython-SettingupyourPythonscriptingenvironment(ifnotusingconda))):
- ```
- conda install -c opensim-org opensim -y
- ```
-
- 3. **Install Sports2D**:\
- Open a terminal.
- ``` cmd
- pip install sports2d
- ```
- <br>
-
- #### Usage
-
- Need person doing a 2D motion. If not, trim the video with `--time_range` option.
-
- ```cmd
- sports2d --time_range 1.2 2.7 --ik true --person_orientation front none left
- ```

  <br>

- #### Visualize the results
- - The simplest option is to use OpenSim GUI
- - If you want to see the skeleton overlay on the video, you can install the Pose2Sim Blender plugin.

- -->
+ #### Use a custom pose estimation model:
+ - Retrieve hand motion:
+ ``` cmd
+ sports2d --pose_model WholeBody
+ ```
+ - Use any custom (deployed) MMPose model:
+ ``` cmd
+ sports2d --pose_model BodyWithFeet `
+ --mode """{'det_class':'YOLOX', `
+ 'det_model':'https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/yolox_m_8xb8-300e_humanart-c2c7a14a.zip', `
+ 'det_input_size':[640, 640], `
+ 'pose_class':'RTMPose', `
+ 'pose_model':'https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-m_simcc-body7_pt-body7-halpe26_700e-256x192-4d3e73dd_20230605.zip', `
+ 'pose_input_size':[192,256]}"""
+ ```

  <br>


- ### All the parameters
+ #### All the parameters

  For a full list of the available parameters, have a look at the [Config_Demo.toml](https://github.com/davidpagnon/Sports2D/blob/main/Sports2D/Demo/Config_demo.toml) file or type:

@@ -381,11 +424,12 @@ sports2d --help
  ```

  ```
- ['config': "C", "path to a toml configuration file"],
+ 'config': ["C", "path to a toml configuration file"],

  'video_input': ["i", "webcam, or video_path.mp4, or video1_path.avi video2_path.mp4 ... Beware that images won't be saved if paths contain non ASCII characters"],
- 'person_height': ["H", "height of the person in meters. 1.70 if not specified"],
- 'load_trc': ["", "load trc file to avaid running pose estimation again. false if not specified"],
+ 'px_to_m_person_height': ["H", "height of the person in meters. 1.70 if not specified"],
+ 'visible_side': ["", "front, back, left, right, auto, or none. 'front none auto' if not specified. If 'auto', will be either left or right depending on the direction of the motion. If 'none', no IK for this person"],
+ 'load_trc_px': ["", "load trc file to avoid running pose estimation again. false if not specified"],
  'compare': ["", "visually compare motion with trc file. false if not specified"],
  'webcam_id': ["w", "webcam ID. 0 if not specified"],
  'time_range': ["t", "start_time end_time. In seconds. Whole video if not specified. start_time1 end_time1 start_time2 end_time2 ... if multiple videos with different time ranges"],
@@ -403,26 +447,28 @@ sports2d --help
  'save_angles': ["A", "save angles as mot files. true if not specified"],
  'slowmo_factor': ["", "slow-motion factor. For a video recorded at 240 fps and exported to 30 fps, it would be 240/30 = 8. 1 if not specified"],
  'pose_model': ["p", "only body_with_feet is available for now. body_with_feet if not specified"],
- 'mode': ["m", "light, balanced, or performance. balanced if not specified"],
+ 'mode': ["m", 'light, balanced, performance, or a """{dictionary within triple quotes}""". balanced if not specified. Use a dictionary to specify your own detection and/or pose estimation models (more about it in the documentation).'],
  'det_frequency': ["f", "run person detection only every N frames, and in between track previously detected bounding boxes. keypoint detection is still run on all frames.\n\
- Equal to or greater than 1, can be as high as you want in simple uncrowded cases. Much faster, but might be less accurate. 1 if not specified: detection runs on all frames"],
- 'to_meters': ["M", "convert pixels to meters. true if not specified"],
-
+ Equal to or greater than 1, can be as high as you want in simple uncrowded cases. Much faster, but might be less accurate. 1 if not specified: detection runs on all frames"],
  'backend': ["", "Backend for pose estimation can be 'auto', 'cpu', 'cuda', 'mps' (for MacOS), or 'rocm' (for AMD GPUs)"],
  'device': ["", "Device for pose estimation can be 'auto', 'openvino', 'onnxruntime', 'opencv'"],
- 'calib_on_person_id': ["", "person ID to calibrate on. 0 if not specified"],
+ 'to_meters': ["M", "convert pixels to meters. true if not specified"],
+ 'make_c3d': ["", "Convert trc to c3d file. true if not specified"],
+ 'px_to_m_from_person_id': ["", "person ID to calibrate on. 0 if not specified"],
  'floor_angle': ["", "angle of the floor. 'auto' if not specified"],
  'xy_origin': ["", "origin of the xy plane. 'auto' if not specified"],
  'calib_file': ["", "path to calibration file. '' if not specified, eg no calibration file"],
  'save_calib': ["", "save calibration file. true if not specified"],
  'do_ik': ["", "do inverse kinematics. false if not specified"],
- 'osim_setup_path': ["", "path to OpenSim setup. '../OpenSim_setup' if not specified"],
- 'person_orientation': ["", "front, back, left, right, auto, or none. 'front none left' if not specified. If 'auto', will be either left or right depending on the direction of the motion."],
+ 'use_augmentation': ["", "Use LSTM marker augmentation. false if not specified"],
+ 'use_contacts_muscles': ["", "Use model with contact spheres and muscles. false if not specified"],
+ 'participant_mass': ["", "mass of the participant in kg or none. Defaults to 70 if not provided. No influence on kinematics (motion), only on kinetics (forces)"],
  'close_to_zero_speed_m': ["","Sum for all keypoints: about 50 px/frame or 0.2 m/frame"],
  'multiperson': ["", "multiperson involves tracking: will be faster if set to false. true if not specified"],
  'tracking_mode': ["", "sports2d or rtmlib. sports2d is generally much more accurate and comparable in speed. sports2d if not specified"],
  'deepsort_params': ["", 'Deepsort tracking parameters: """{dictionary between 3 double quotes}""". \n\
- More information there: https://github.com/levan92/deep_sort_realtime/blob/master/deep_sort_realtime/deepsort_tracker.py#L51'],
+ Default: max_age:30, n_init:3, nms_max_overlap:0.8, max_cosine_distance:0.3, nn_budget:200, max_iou_distance:0.8, embedder_gpu: True\n\
+ More information here: https://github.com/levan92/deep_sort_realtime/blob/master/deep_sort_realtime/deepsort_tracker.py#L51'],
  'input_size': ["", "width, height. 1280, 720 if not specified. Lower resolution will be faster but less precise"],
  'keypoint_likelihood_threshold': ["", "detected keypoints are not retained if likelihood is below this threshold. 0.3 if not specified"],
  'average_likelihood_threshold': ["", "detected persons are not retained if average keypoint likelihood is below this threshold. 0.5 if not specified"],
@@ -444,12 +490,90 @@ sports2d --help
  'sigma_kernel': ["", "sigma of the gaussian filter. 1 if not specified"],
  'nb_values_used': ["", "number of values used for the loess filter. 5 if not specified"],
  'kernel_size': ["", "kernel size of the median filter. 3 if not specified"],
+ 'osim_setup_path': ["", "path to OpenSim setup. '../OpenSim_setup' if not specified"],
+ 'right_left_symmetry': ["", "right left symmetry. true if not specified"],
+ 'default_height': ["", "default height for scaling. 1.70 if not specified"],
+ 'remove_individual_scaling_setup': ["", "remove individual scaling setup files generated during scaling. true if not specified"],
+ 'remove_individual_ik_setup': ["", "remove individual IK setup files generated during IK. true if not specified"],
+ 'fastest_frames_to_remove_percent': ["", "Frames with high speed are considered as outliers. Defaults to 0.1"],
+ 'close_to_zero_speed_m': ["","Sum for all keypoints: about 50 px/frame or 0.2 m/frame"],
+ 'large_hip_knee_angles': ["", "Hip and knee angles below this value are considered as imprecise and ignored. Defaults to 45"],
+ 'trimmed_extrema_percent': ["", "Proportion of the most extreme segment values to remove before calculating their mean. Defaults to 50"],
  'use_custom_logging': ["", "use custom logging. false if not specified"]
  ```
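As an aside, the triple-quoted dictionary strings passed to options such as `--deepsort_params` are plain Python literals, so they can be checked before use with `ast.literal_eval` (a minimal sketch; the values below are the defaults listed in the option description above):

```python
import ast

# String as it would appear between the triple double quotes on the command line
deepsort_params = ("{'max_age':30, 'n_init':3, 'nms_max_overlap':0.8, "
                   "'max_cosine_distance':0.3, 'nn_budget':200, "
                   "'max_iou_distance':0.8, 'embedder_gpu': True}")

# literal_eval safely parses literals without executing arbitrary code
params = ast.literal_eval(deepsort_params)
print(params['max_age'], params['embedder_gpu'])  # → 30 True
```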
 
  <br>
 
 
+ ## Go further
+
+ ### Too slow for you?
+
+ **Quick fixes:**
+ - Use `--save_vid false --save_img false --show_realtime_results false`: Will not save images or videos, and will not display the results in real time.
+ - Use `--mode lightweight`: Will use a lighter version of RTMPose, which is faster but less accurate.\
+ Note that any detection and pose models can be used (first [deploy them with MMPose](https://mmpose.readthedocs.io/en/latest/user_guides/how_to_deploy.html#onnx) if you do not have their .onnx or .zip files), with the following syntax:
+ ```
+ --mode """{'det_class':'YOLOX',
+ 'det_model':'https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/yolox_nano_8xb8-300e_humanart-40f6f0d0.zip',
+ 'det_input_size':[416,416],
+ 'pose_class':'RTMPose',
+ 'pose_model':'https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-t_simcc-body7_pt-body7_420e-256x192-026a1439_20230504.zip',
+ 'pose_input_size':[192,256]}"""
+ ```
+ - Use `--det_frequency 50`: Will detect poses only every 50 frames and track keypoints in between, which is faster.
+ - Use `--multiperson false`: Can be used if only one person is present in the video; otherwise, persons' IDs may get mixed up.
+ - Use `--load_trc_px <path_to_file_px.trc>`: Will use pose estimation results from a file. Useful if you want to use different parameters for pixel-to-meter conversion or angle calculation without running detection and pose estimation all over again.
+ - Make sure you use `--tracking_mode sports2d`: Will use the default Sports2D tracker. Unlike DeepSort, it is faster, does not require any parametrization, and performs just as well in non-crowded scenes.
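The `--det_frequency` trick can be illustrated with a tiny sketch (a hypothetical helper, not Sports2D's actual code): the expensive person detector only runs on every Nth frame, and keypoints are tracked from the last detection in between.

```python
def detection_frames(n_frames, det_frequency):
    """Frames on which the (slow) person detector would run;
    on the other frames, keypoints are tracked instead."""
    return [f for f in range(n_frames) if f % det_frequency == 0]

# With --det_frequency 50 on a 500-frame video, the detector
# runs 10 times instead of 500.
print(len(detection_frames(500, 50)))  # → 10
```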
+
+ <br>
+
+ **Use your GPU**:\
+ Will be much faster, with no impact on accuracy. However, the installation takes about 6 GB of additional storage space.
+
+ 1. Run `nvidia-smi` in a terminal. If this results in an error, your GPU is probably not compatible with CUDA. Otherwise, note the "CUDA version": it is the latest version your driver is compatible with (more information [on this post](https://stackoverflow.com/questions/60987997/why-torch-cuda-is-available-returns-false-even-after-installing-pytorch-with)).
+
+ Then go to the [ONNXruntime requirement page](https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements) and note the latest compatible CUDA and cuDNN requirements. Next, go to the [PyTorch website](https://pytorch.org/get-started/previous-versions/) and install the latest version that satisfies these requirements (beware that torch 2.4 ships with cuDNN 9, while torch 2.3 installs cuDNN 8). For example:
+ ``` cmd
+ pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
+ ```
+
+ <!-- > ***Note:*** Issues were reported with the default command. However, this has been tested and works:
+ `pip install torch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1 --index-url https://download.pytorch.org/whl/cu118` -->
+
+ 2. Finally, install ONNX Runtime with GPU support:
+ ```
+ pip install onnxruntime-gpu
+ ```
+
+ 3. Check that everything went well within Python with these commands:
+ ``` bash
+ python -c 'import torch; print(torch.cuda.is_available())'
+ python -c 'import onnxruntime as ort; print(ort.get_available_providers())'
+ # Should print "True ['CUDAExecutionProvider', ...]"
+ ```
+ <!-- print(f'torch version: {torch.__version__}, cuda version: {torch.version.cuda}, cudnn version: {torch.backends.cudnn.version()}, onnxruntime version: {ort.__version__}') -->
+
+ <br>
+
+
+
+
+
+
+ <!--
+
+ VIDEO THERE
+
+ -->
+
+
+ <br>
+
+
+
+
+
  ### How it works
 
  Sports2D:
@@ -471,7 +595,7 @@ Sports2D:
 
  4. **Chooses the right persons to keep.** In single-person mode, only keeps the person with the highest average scores over the sequence. In multi-person mode, only retrieves the keypoints with high enough confidence, and only keeps the persons with high enough average confidence over each frame.
 
- 4. **Converts the pixel coordinates to meters.** The user can provide a calibration file, or simply the size of a specified person. The floor angle and the coordinate origin can either be detected automatically from the gait sequence, or be manually specified.
+ 4. **Converts the pixel coordinates to meters.** The user can provide a calibration file, or simply the size of a specified person. The floor angle and the coordinate origin can either be detected automatically from the gait sequence, or be manually specified. The depth coordinates are set to normative values, depending on whether the person is going left, right, facing the camera, or looking away.
 
  5. **Computes the selected joint and segment angles**, and flips them on the left/right side if the respective foot is pointing to the left/right.
 
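The pixel-to-meter step can be sketched as follows (a minimal hypothetical helper, not Sports2D's actual code, assuming a known person height and a detected floor angle): scale from the person's height, rotate the floor tilt away, and flip the image y-axis, which points down.

```python
import numpy as np

def px_to_m(points_px, person_height_px, person_height_m,
            floor_angle_deg=0.0, origin_px=(0.0, 0.0)):
    """Hypothetical sketch of a pixel-to-meter conversion."""
    scale = person_height_m / person_height_px      # meters per pixel
    a = np.radians(floor_angle_deg)
    rot = np.array([[np.cos(a), np.sin(a)],
                    [-np.sin(a), np.cos(a)]])       # undo the floor tilt
    pts = (np.asarray(points_px, dtype=float) - origin_px) @ rot.T
    pts[:, 1] *= -1                                 # image y points down; flip it
    return pts * scale

# A 1.70 m person spanning 850 px: head-to-toe distance comes out at 1.70 m
head, toe = px_to_m([[100, 0], [100, 850]], 850, 1.70)
print(head[1] - toe[1])  # ≈ 1.70
```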
@@ -538,7 +662,8 @@ If you use Sports2D, please cite [Pagnon, 2024](https://joss.theoj.org/papers/10
 
  ### How to contribute
  I would happily welcome any proposal for new features, code improvement, and more!\
- If you want to contribute to Sports2D, please follow [this guide](https://docs.github.com/en/get-started/quickstart/contributing-to-projects) on how to fork, modify and push code, and submit a pull request. I would appreciate it if you provided as much useful information as possible about how you modified the code, and a rationale for why you're making this pull request. Please also specify on which operating system and on which python version you have tested the code.
+ If you want to contribute to Sports2D or Pose2Sim, please see [this issue](https://github.com/perfanalytics/pose2sim/issues/40).\
+ You will be offered a to-do list, but please feel free to propose your own ideas and improvements.
 
  *Here is a to-do list: feel free to complete it:*
  - [x] Compute **segment angles**.
@@ -548,7 +673,7 @@ If you want to contribute to Sports2D, please follow [this guide](https://docs.g
  - [x] Handle sudden **changes of direction**.
  - [x] **Batch processing** for the analysis of multiple videos at once.
  - [x] Option to only save one person (with the highest average score, or with the most frames and fastest speed)
- - [x] Run again without pose estimation with the option `--load_trc` for px .trc file.
+ - [x] Run again without pose estimation with the option `--load_trc_px` for px .trc file.
  - [x] **Convert positions to meters** by providing the person height, a calibration file, or 3D points [to click on the image](https://stackoverflow.com/questions/74248955/how-to-display-the-coordinates-of-the-points-clicked-on-the-image-in-google-cola)
  - [x] Support any detection and/or pose estimation model.