sports2d 0.6.1__tar.gz → 0.6.3__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (23)
  1. {sports2d-0.6.1 → sports2d-0.6.3}/PKG-INFO +306 -182
  2. {sports2d-0.6.1 → sports2d-0.6.3}/README.md +302 -180
  3. {sports2d-0.6.1 → sports2d-0.6.3}/Sports2D/Demo/Config_demo.toml +40 -22
  4. {sports2d-0.6.1 → sports2d-0.6.3}/Sports2D/Sports2D.py +39 -13
  5. {sports2d-0.6.1 → sports2d-0.6.3}/Sports2D/Utilities/common.py +477 -20
  6. {sports2d-0.6.1 → sports2d-0.6.3}/Sports2D/Utilities/skeletons.py +7 -8
  7. {sports2d-0.6.1 → sports2d-0.6.3}/Sports2D/Utilities/tests.py +3 -3
  8. {sports2d-0.6.1 → sports2d-0.6.3}/Sports2D/process.py +162 -326
  9. {sports2d-0.6.1 → sports2d-0.6.3}/setup.cfg +4 -2
  10. {sports2d-0.6.1 → sports2d-0.6.3}/sports2d.egg-info/PKG-INFO +306 -182
  11. {sports2d-0.6.1 → sports2d-0.6.3}/sports2d.egg-info/requires.txt +3 -1
  12. {sports2d-0.6.1 → sports2d-0.6.3}/LICENSE +0 -0
  13. {sports2d-0.6.1 → sports2d-0.6.3}/Sports2D/Demo/demo.mp4 +0 -0
  14. {sports2d-0.6.1 → sports2d-0.6.3}/Sports2D/Utilities/__init__.py +0 -0
  15. {sports2d-0.6.1 → sports2d-0.6.3}/Sports2D/Utilities/filter.py +0 -0
  16. {sports2d-0.6.1 → sports2d-0.6.3}/Sports2D/__init__.py +0 -0
  17. {sports2d-0.6.1 → sports2d-0.6.3}/pyproject.toml +0 -0
  18. {sports2d-0.6.1 → sports2d-0.6.3}/setup.py +0 -0
  19. {sports2d-0.6.1 → sports2d-0.6.3}/sports2d.egg-info/SOURCES.txt +0 -0
  20. {sports2d-0.6.1 → sports2d-0.6.3}/sports2d.egg-info/dependency_links.txt +0 -0
  21. {sports2d-0.6.1 → sports2d-0.6.3}/sports2d.egg-info/entry_points.txt +0 -0
  22. {sports2d-0.6.1 → sports2d-0.6.3}/sports2d.egg-info/not-zip-safe +0 -0
  23. {sports2d-0.6.1 → sports2d-0.6.3}/sports2d.egg-info/top_level.txt +0 -0
@@ -1,6 +1,6 @@
  Metadata-Version: 2.2
  Name: sports2d
- Version: 0.6.1
+ Version: 0.6.3
  Summary: Detect pose and compute 2D joint angles from a video.
  Home-page: https://github.com/davidpagnon/Sports2D
  Author: David Pagnon
@@ -33,10 +33,12 @@ Requires-Dist: opencv-python
  Requires-Dist: matplotlib
  Requires-Dist: PyQt5
  Requires-Dist: statsmodels
- Requires-Dist: rtmlib_pose2sim
+ Requires-Dist: c3d
+ Requires-Dist: rtmlib
  Requires-Dist: openvino
  Requires-Dist: tqdm
  Requires-Dist: imageio_ffmpeg
+ Requires-Dist: deep-sort-realtime


  [![Continuous integration](https://github.com/davidpagnon/sports2d/actions/workflows/continuous-integration.yml/badge.svg?branch=main)](https://github.com/davidpagnon/sports2d/actions/workflows/continuous-integration.yml)
@@ -70,6 +72,8 @@ Requires-Dist: imageio_ffmpeg
  > - Batch process multiple videos at once
  >
  > Note: Colab version broken for now. I'll fix it in the next few weeks.
+
+ ***N.B.:*** As always, I am more than happy to welcome contributions (see [How to contribute](#how-to-contribute-and-to-do-list))!
  <!--User-friendly Colab version released! (and latest issues fixed, too)\
  Works on any smartphone!**\
  [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://bit.ly/Sports2D_Colab)-->
@@ -91,12 +95,26 @@ If you need 3D research-grade markerless joint kinematics, consider using severa
  ## Contents
  1. [Installation and Demonstration](#installation-and-demonstration)
  1. [Installation](#installation)
+ 1. [Quick install](#quick-install)
+ 2. [Full install](#full-install)
  2. [Demonstration](#demonstration)
+ 1. [Run the demo](#run-the-demo)
+ 2. [Visualize in OpenSim](#visualize-in-opensim)
+ 3. [Visualize in Blender](#visualize-in-blender)
  3. [Play with the parameters](#play-with-the-parameters)
+ 1. [Run on a custom video or on a webcam](#run-on-a-custom-video-or-on-a-webcam)
+ 2. [Run for a specific time range](#run-for-a-specific-time-range)
+ 3. [Get coordinates in meters](#get-coordinates-in-meters)
+ 4. [Run inverse kinematics](#run-inverse-kinematics)
+ 5. [Run on several videos at once](#run-on-several-videos-at-once)
+ 6. [Use the configuration file or run within Python](#use-the-configuration-file-or-run-within-python)
+ 7. [Get the angles the way you want](#get-the-angles-the-way-you-want)
+ 8. [Customize your output](#customize-your-output)
+ 9. [Use a custom pose estimation model](#use-a-custom-pose-estimation-model)
+ 10. [All the parameters](#all-the-parameters)
  2. [Go further](#go-further)
  1. [Too slow for you?](#too-slow-for-you)
- 2. [What you need is what you get](#what-you-need-is-what-you-get)
- 3. [All the parameters](#all-the-parameters)
+ 3. [Run inverse kinematics](#run-inverse-kinematics)
  4. [How it works](#how-it-works)
  3. [How to cite and how to contribute](#how-to-cite-and-how-to-contribute)

@@ -114,33 +132,54 @@ If you need 3D research-grade markerless joint kinematics, consider using severa

  -->

- - OPTION 1: **Quick install** \
- Open a terminal. Type `python -V` to make sure python >=3.8 <=3.11 is installed. If not, install it [from there](https://www.python.org/downloads/). Run:
- ``` cmd
- pip install sports2d
- ```
+ #### Quick install
+
+ > N.B.: Full install is required for OpenSim inverse kinematics.

- - OPTION 2: **Safer install with Anaconda**\
- Install [Miniconda](https://docs.conda.io/en/latest/miniconda.html):\
- Open an Anaconda prompt and create a virtual environment by typing:
- ``` cmd
- conda create -n Sports2D python=3.9 -y
- conda activate Sports2D
- pip install sports2d
+ Open a terminal. Type `python -V` to make sure python >=3.10 <=3.11 is installed. If not, install it [from there](https://www.python.org/downloads/).
+
+ Run:
+ ``` cmd
+ pip install sports2d
+ ```
+
+ Alternatively, build from source to test the last changes:
+ ``` cmd
+ git clone https://github.com/davidpagnon/sports2d.git
+ cd sports2d
+ pip install .
+ ```
+
+ <br>
+
+ #### Full install
+
+ > Only needed if you want to run inverse kinematics (`--do_ik True`).
+
+ - Install Anaconda or [Miniconda](https://docs.conda.io/en/latest/miniconda.html):\
+ Open an Anaconda prompt and create a virtual environment:
+ ``` cmd
+ conda create -n Sports2D python=3.10 -y
+ conda activate Sports2D
+ ```
+ - **Install OpenSim**:\
+ Install the OpenSim Python API (if you do not want to install via conda, refer [to this page](https://opensimconfluence.atlassian.net/wiki/spaces/OpenSim/pages/53085346/Scripting+in+Python#ScriptinginPython-SettingupyourPythonscriptingenvironment(ifnotusingconda))):
+ ```
+ conda install -c opensim-org opensim -y
  ```
+
+ - **Install Sports2D with Pose2Sim**:
+ ``` cmd
+ pip install sports2d pose2sim
+ ```

- - OPTION 3: **Build from source and test the last changes**\
- Open a terminal in the directory of your choice and clone the Sports2D repository.
- ``` cmd
- git clone https://github.com/davidpagnon/sports2d.git
- cd sports2d
- pip install .
- ```

  <br>

  ### Demonstration

+ #### Run the demo:
+
  Just open a command line and run:
  ``` cmd
  sports2d
@@ -165,222 +204,224 @@ The Demo video is voluntarily challenging to demonstrate the robustness of the p

  <br>

+
+ #### Visualize in Blender
+
+ 1. **Install the Pose2Sim_Blender add-on.**\
+ Follow the instructions on the [Pose2Sim_Blender](https://github.com/davidpagnon/Pose2Sim_Blender) add-on page.
+ 2. **Open your point coordinates.**\
+ **Add Markers**: open your trc file (e.g., `coords_m.trc`) from your `result_dir` folder.
+
+ This will optionally create **an animated rig** based on the motion of the captured person.
+ 3. **Open your animated skeleton:**\
+ Make sure you first set `--do_ik True` ([full install](#full-install) required). See the [inverse kinematics](#run-inverse-kinematics) section for more details.
+ - **Add Model**: Open your scaled model (e.g., `Model_Pose2Sim_LSTM.osim`).
+ - **Add Motion**: Open your motion file (e.g., `angles.mot`). Make sure the skeleton is selected in the outliner.
+
+ The OpenSim skeleton is not rigged yet. **[Feel free to contribute!](https://github.com/perfanalytics/pose2sim/issues/40)**
+
+ <!-- IMAGE HERE
+ -->
+
+
+
+ <br>
+
+
+ #### Visualize in OpenSim
+
+ 1. Install the **[OpenSim GUI](https://simtk.org/frs/index.php?group_id=91)**.
+ 2. **Visualize point coordinates:**\
+ **File -> Preview experimental data:** Open your trc file (e.g., `coords_m.trc`) from your `result_dir` folder.
+ 3. **Visualize angles:**\
+ To open an animated model and run further biomechanical analysis, make sure you first set `--do_ik True` ([full install](#full-install) required). See the [inverse kinematics](#run-inverse-kinematics) section for more details.
+ - **File -> Open Model:** Open your scaled model (e.g., `Model_Pose2Sim_LSTM.osim`).
+ - **File -> Load Motion:** Open your motion file (e.g., `angles.mot`).
+
+ <br>
+
+ <!-- IMAGE HERE
+ -->
+
+
+
  ### Play with the parameters

- For a full list of the available parameters, see [this section](#all-the-parameters) of the documentation, check the [Config_Demo.toml](https://github.com/davidpagnon/Sports2D/blob/main/Sports2D/Demo/Config_demo.toml) file, or type:
+ For a full list of the available parameters, see [this section](#all-the-parameters) of the documentation, check the [Config_Demo.toml](https://github.com/davidpagnon/Sports2D/blob/main/Sports2D/Demo/Config_demo.toml) file, or type `sports2d --help`. All non-specified parameters are set to their default values.
+
+ <br>
+
+
+ #### Run on a custom video or on a webcam:
  ``` cmd
- sports2d --help
+ sports2d --video_input path_to_video.mp4
+ ```
+
+ ``` cmd
+ sports2d --video_input webcam
  ```
+
  <br>

- #### Run on custom video with default parameters:
- ``` cmd
- sports2d --video_input path_to_video.mp4
- ```

- #### Run on webcam with default parameters:
- ``` cmd
- sports2d --video_input webcam
- ```
+ #### Run for a specific time range:
+ ```cmd
+ sports2d --time_range 1.2 2.7
+ ```
+
  <br>

- #### Get coordinates in meters rather than in pixels:
+
+ #### Get coordinates in meters:

  <!-- You either need to provide a calibration file, or simply the height of a person (Note that the latter will not take distortions into account, and that it will be less accurate for motion in the frontal plane).\-->
- Just provide the height of the analyzed person (and their ID in case of multiple person detection).\
+ You may need to convert pixel coordinates to meters.\
+ Just provide the height of the reference person (and their ID in case of multiple person detection).\
  The floor angle and the origin of the xy axis are computed automatically from gait. If you analyze another type of motion, you can manually specify them.\
  Note that it does not take distortions into account, and that it will be less accurate for motions in the frontal plane.

- ``` cmd
- sports2d --to_meters True --calib_file calib_demo.toml
- ```
- ``` cmd
- sports2d --to_meters True --person_height 1.65 --calib_on_person_id 2
- ```
- ``` cmd
- sports2d --to_meters True --person_height 1.65 --calib_on_person_id 2 --floor_angle 0 --xy_origin 0 940
- ```
- <br>
+ ``` cmd
+ sports2d --to_meters True --calib_file calib_demo.toml
+ ```
+ ``` cmd
+ sports2d --to_meters True --px_to_m_person_height 1.65 --px_to_m_from_person_id 2
+ ```
+ ``` cmd
+ sports2d --to_meters True --px_to_m_person_height 1.65 --px_to_m_from_person_id 2 --floor_angle 0 --xy_origin 0 940
+ ```
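To give a feel for what this pixel-to-meter conversion amounts to, here is a minimal, hypothetical Python sketch; the function, its arguments, and the axis handling are illustrative assumptions, not Sports2D's actual implementation (which can also estimate the floor angle and origin from gait):

``` python
import numpy as np

def px_to_m(points_px, person_height_m, person_height_px,
            floor_angle_deg=0.0, xy_origin_px=(0.0, 0.0)):
    # Scale factor in meters per pixel, derived from a known person height.
    scale = person_height_m / person_height_px
    # Rotate by the floor angle so the x axis follows the ground.
    theta = np.radians(floor_angle_deg)
    rot = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
    pts = np.asarray(points_px, dtype=float) - np.asarray(xy_origin_px, dtype=float)
    pts[:, 1] *= -1  # image y axis points down, world y axis points up
    return (pts @ rot.T) * scale

# Example: a 1.65 m person spanning ~480 px gives a scale of ~0.0034 m/px.
```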

- #### Run with custom parameters (all non specified are set to default):
- ``` cmd
- sports2d --video_input demo.mp4 other_video.mp4
- ```
- ``` cmd
- sports2d --show_graphs False --time_range 1.2 2.7 --result_dir path_to_result_dir --slowmo_factor 4
- ```
- ``` cmd
- sports2d --multiperson false --pose_model Body --mode lightweight --det_frequency 50
- ```
  <br>

- #### Run with a toml configuration file:
- ``` cmd
- sports2d --config Config_demo.toml
- ```
- <br>

- #### Run within a Python script:
- ``` python
- from Sports2D import Sports2D; Sports2D.process('Config_demo.toml')
- ```
- ``` python
- from Sports2D import Sports2D; Sports2D.process(config_dict)
- ```
+ #### Run inverse kinematics:
+ > N.B.: [Full install](#full-install) required.

- <br>
+ > N.B.: The person needs to be moving on a single plane for the whole selected time range.

- ## Go further
+ Analyzed persons can show their left, right, front, or back side. If you want to ignore a certain person, set `--visible_side none`.

- ### Too slow for you?

- **Quick fixes:**
- - Use ` --save_vid false --save_img false --show_realtime_results false`: Will not save images or videos, and will not display the results in real time.
- - Use `--mode lightweight`: Will use a lighter version of RTMPose, which is faster but less accurate.\
- Note that any detection and pose models can be used (first [deploy them with MMPose](https://mmpose.readthedocs.io/en/latest/user_guides/how_to_deploy.html#onnx) if you do not have their .onnx or .zip files), with the following formalism:
- ```
- --mode """{'det_class':'YOLOX',
- 'det_model':'https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/yolox_nano_8xb8-300e_humanart-40f6f0d0.zip',
- 'det_input_size':[416,416],
- 'pose_class':'RTMPose',
- 'pose_model':'https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-t_simcc-body7_pt-body7_420e-256x192-026a1439_20230504.zip',
- 'pose_input_size':[192,256]}"""
- ```
- - Use `--det_frequency 50`: Will detect poses only every 50 frames, and track keypoints in between, which is faster.
- - Use `--multiperson false`: Can be used if one single person is present in the video. Otherwise, persons' IDs may be mixed up.
- - Use `--load_trc <path_to_file_px.trc>`: Will use pose estimation results from a file. Useful if you want to use different parameters for pixel to meter conversion or angle calculation without running detection and pose estimation all over.

- <br>

- **Use your GPU**:\
- Will be much faster, with no impact on accuracy. However, the installation takes about 6 GB of additional storage space.
+ Why IK?
+ Add section in how it works

- 1. Run `nvidia-smi` in a terminal. If this results in an error, your GPU is probably not compatible with CUDA. If not, note the "CUDA version": it is the latest version your driver is compatible with (more information [on this post](https://stackoverflow.com/questions/60987997/why-torch-cuda-is-available-returns-false-even-after-installing-pytorch-with)).

- Then go to the [ONNXruntime requirement page](https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), note the latest compatible CUDA and cuDNN requirements. Next, go to the [pyTorch website](https://pytorch.org/get-started/previous-versions/) and install the latest version that satisfies these requirements (beware that torch 2.4 ships with cuDNN 9, while torch 2.3 installs cuDNN 8). For example:
- ``` cmd
- pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
- ```

- <!-- > ***Note:*** Issues were reported with the default command. However, this has been tested and works:
- `pip install torch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1 --index-url https://download.pytorch.org/whl/cu118` -->
+ ```cmd
+ sports2d --time_range 1.2 2.7 --do_ik true --visible_side front left
+ ```

- 2. Finally, install ONNX Runtime with GPU support:
- ```
- pip install onnxruntime-gpu
- ```
+ ```cmd
+ sports2d --time_range 1.2 2.7 --do_ik true --visible_side front left --use_augmentation True
+ ```
+
+ <br>

- 3. Check that everything went well within Python with these commands:
- ``` bash
- python -c 'import torch; print(torch.cuda.is_available())'
- python -c 'import onnxruntime as ort; print(ort.get_available_providers())'
- # Should print "True ['CUDAExecutionProvider', ...]"
- ```
- <!-- print(f'torch version: {torch.__version__}, cuda version: {torch.version.cuda}, cudnn version: {torch.backends.cudnn.version()}, onnxruntime version: {ort.__version__}') -->
+
+ #### Run on several videos at once:
+ ``` cmd
+ sports2d --video_input demo.mp4 other_video.mp4
+ ```
+ All videos analyzed with the same time range:
+ ```cmd
+ sports2d --video_input demo.mp4 other_video.mp4 --time_range 1.2 2.7
+ ```
+ Different time ranges for each video:
+ ```cmd
+ sports2d --video_input demo.mp4 other_video.mp4 --time_range 1.2 2.7 0 3.5
+ ```

  <br>

- ### What you need is what you get

- #### Analyze a fraction of your video:
- ```cmd
- sports2d --time_range 1.2 2.7
+ #### Use the configuration file or run within Python:
+
+ - Run with a configuration file:
+ ``` cmd
+ sports2d --config Config_demo.toml
  ```
+ - Run within Python:
+ ``` python
+ from Sports2D import Sports2D; Sports2D.process('Config_demo.toml')
+ ```
+ - Run within Python with a dictionary (for example, `config_dict = toml.load('Config_demo.toml')`):
+ ``` python
+ from Sports2D import Sports2D; Sports2D.process(config_dict)
+ ```
+
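As a hedged illustration of the dictionary workflow above, the snippet below loads the demo configuration, optionally edits it, and passes it to `Sports2D.process`; the commented-out section and key names are assumptions, so check Config_demo.toml for the actual layout before editing values programmatically:

``` python
import toml
from Sports2D import Sports2D

# Load the demo configuration into a dictionary.
config_dict = toml.load('Config_demo.toml')

# Tweak settings before processing if needed; the keys below are only examples,
# see Config_demo.toml for the real section and key names.
# config_dict['project']['video_input'] = 'path_to_video.mp4'
# config_dict['project']['time_range'] = [1.2, 2.7]

Sports2D.process(config_dict)
```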
  <br>

- #### Customize your output:
- - Choose whether you want video, images, trc pose file, angle mot file, real-time display, and plots:
- ```cmd
- sports2d --save_vid false --save_img true --save_pose false --save_angles true --show_realtime_results false --show_graphs false
- ```
+
+ #### Get the angles the way you want:
+
  - Choose which angles you need:
  ```cmd
  sports2d --joint_angles 'right knee' 'left knee' --segment_angles None
  ```
  - Choose where to display the angles: either as a list on the upper-left of the image, or near the joint/segment, or both:
  ```cmd
- sports2d --display_angle_values_on body
+ sports2d --display_angle_values_on body # OR none, or list
  ```
  - You can also decide not to calculate and display angles at all:
  ```cmd
  sports2d --calculate_angles false
  ```
+ - To run **inverse kinematics with OpenSim**, check [this section](#run-inverse-kinematics)
+
  <br>

- #### Run on several videos at once:
- You can individualize (or not) the parameters.
- ```cmd
- sports2d --video_input demo.mp4 other_video.mp4 --time_range 1.2 2.7
+
+ #### Customize your output:
+ - Only analyze the most prominent person:
+ ``` cmd
+ sports2d --multiperson false
  ```
+ - Choose whether you want video, images, trc pose file, angle mot file, real-time display, and plots:
  ```cmd
- sports2d --video_input demo.mp4 other_video.mp4 --time_range 1.2 2.7 0 3.5
+ sports2d --save_vid false --save_img true --save_pose false --save_angles true --show_realtime_results false --show_graphs false
+ ```
+ - Save results to a custom directory, specify the slow-motion factor:
+ ``` cmd
+ sports2d --result_dir path_to_result_dir
  ```
-
- <!--
- <br>
-
- ### Constrain results to a biomechanical model
-
- > Why + image\
- > Add explanation in "how it works" section
-
- #### Installation
- You will need to install OpenSim via conda, which makes installation slightly more complicated.
-
- 1. **Install Anaconda or [Miniconda](https://docs.conda.io/en/latest/miniconda.html).**
-
- Once installed, open an Anaconda prompt and create a virtual environment:
- ```
- conda create -n Sports2D python=3.9 -y
- conda activate Sports2D
- ```
-
- 2. **Install OpenSim**:\
- Install the OpenSim Python API (if you do not want to install via conda, refer [to this page](https://opensimconfluence.atlassian.net/wiki/spaces/OpenSim/pages/53085346/Scripting+in+Python#ScriptinginPython-SettingupyourPythonscriptingenvironment(ifnotusingconda))):
- ```
- conda install -c opensim-org opensim -y
- ```
-
- 3. **Install Sports2D**:\
- Open a terminal.
- ``` cmd
- pip install sports2d
- ```
- <br>
-
- #### Usage
-
- Need person doing a 2D motion. If not, trim the video with `--time_range` option.
-
- ```cmd
- sports2d --time_range 1.2 2.7 --ik true --person_orientation front none left
- ```

  <br>

- #### Visualize the results
- - The simplest option is to use OpenSim GUI
- - If you want to see the skeleton overlay on the video, you can install the Pose2Sim Blender plugin.

- -->
+ #### Use a custom pose estimation model:
+ - Retrieve hand motion:
+ ``` cmd
+ sports2d --pose_model WholeBody
+ ```
+ - Use any custom (deployed) MMPose model:
+ ``` cmd
+ sports2d --pose_model BodyWithFeet :
+ --mode """{'det_class':'YOLOX',
+ 'det_model':'https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/yolox_m_8xb8-300e_humanart-c2c7a14a.zip',
+ 'det_input_size':[640, 640],
+ 'pose_class':'RTMPose',
+ 'pose_model':'https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-m_simcc-body7_pt-body7-halpe26_700e-256x192-4d3e73dd_20230605.zip',
+ 'pose_input_size':[192,256]}"""
+ ```

  <br>


- ### All the parameters
+ #### All the parameters

- Have a look at the [Config_Demo.toml](https://github.com/davidpagnon/Sports2D/blob/main/Sports2D/Demo/Config_demo.toml) file or type for a full list of the available parameters:
+ For a full list of the available parameters, have a look at the [Config_Demo.toml](https://github.com/davidpagnon/Sports2D/blob/main/Sports2D/Demo/Config_demo.toml) file or type:

  ``` cmd
  sports2d --help
  ```

  ```
- ['config': "C", "path to a toml configuration file"],
-
+ 'config': ["C", "path to a toml configuration file"],
  'video_input': ["i", "webcam, or video_path.mp4, or video1_path.avi video2_path.mp4 ... Beware that images won't be saved if paths contain non ASCII characters"],
- 'person_height': ["H", "height of the person in meters. 1.70 if not specified"],
- 'load_trc': ["", "load trc file to avaid running pose estimation again. false if not specified"],
+ 'px_to_m_person_height': ["H", "height of the person in meters. 1.70 if not specified"],
+ 'visible_side': ["", "front, back, left, right, auto, or none. 'front auto' if not specified. If 'auto', will be either left or right depending on the direction of the motion. If 'none', no IK for this person"],
+ 'load_trc_px': ["", "load trc file to avoid running pose estimation again. false if not specified"],
  'compare': ["", "visually compare motion with trc file. false if not specified"],
  'webcam_id': ["w", "webcam ID. 0 if not specified"],
  'time_range': ["t", "start_time end_time. In seconds. Whole video if not specified. start_time1 end_time1 start_time2 end_time2 ... if multiple videos with different time ranges"],
@@ -398,23 +439,27 @@ sports2d --help
  'save_angles': ["A", "save angles as mot files. true if not specified"],
  'slowmo_factor': ["", "slow-motion factor. For a video recorded at 240 fps and exported to 30 fps, it would be 240/30 = 8. 1 if not specified"],
  'pose_model': ["p", "only body_with_feet is available for now. body_with_feet if not specified"],
- 'mode': ["m", "light, balanced, or performance. balanced if not specified"],
+ 'mode': ["m", 'light, balanced, performance, or a """{dictionary within triple quote}""". balanced if not specified. Use a dictionary to specify your own detection and/or pose estimation models (more about this in the documentation).'],
  'det_frequency': ["f", "run person detection only every N frames, and in between track previously detected bounding boxes. keypoint detection is still run on all frames.\n\
- Equal to or greater than 1, can be as high as you want in simple uncrowded cases. Much faster, but might be less accurate. 1 if not specified: detection runs on all frames"],
- 'to_meters': ["M", "convert pixels to meters. true if not specified"],
-
+ Equal to or greater than 1, can be as high as you want in simple uncrowded cases. Much faster, but might be less accurate. 1 if not specified: detection runs on all frames"],
  'backend': ["", "Backend for pose estimation can be 'auto', 'cpu', 'cuda', 'mps' (for MacOS), or 'rocm' (for AMD GPUs)"],
  'device': ["", "Device for pose estimation can be 'auto', 'openvino', 'onnxruntime', 'opencv'"],
- 'calib_on_person_id': ["", "person ID to calibrate on. 0 if not specified"],
+ 'to_meters': ["M", "convert pixels to meters. true if not specified"],
+ 'make_c3d': ["", "Convert trc to c3d file. true if not specified"],
+ 'px_to_m_from_person_id': ["", "person ID to calibrate on. 0 if not specified"],
  'floor_angle': ["", "angle of the floor. 'auto' if not specified"],
  'xy_origin': ["", "origin of the xy plane. 'auto' if not specified"],
  'calib_file': ["", "path to calibration file. '' if not specified, eg no calibration file"],
  'save_calib': ["", "save calibration file. true if not specified"],
  'do_ik': ["", "do inverse kinematics. false if not specified"],
- 'osim_setup_path': ["", "path to OpenSim setup. '../OpenSim_setup' if not specified"],
- 'person_orientation': ["", "front, back, left, right, auto, or none. 'front none left' if not specified. If 'auto', will be either left or right depending on the direction of the motion."],
+ 'use_augmentation': ["", "Use LSTM marker augmentation. false if not specified"],
+ 'use_contacts_muscles': ["", "Use model with contact spheres and muscles. false if not specified"],
  'close_to_zero_speed_m': ["","Sum for all keypoints: about 50 px/frame or 0.2 m/frame"],
- 'multiperson': ["", "multiperson involves tracking: will be faster if set to false. true if not specified"], 'tracking_mode': ["", "sports2d or rtmlib. sports2d is generally much more accurate and comparable in speed. sports2d if not specified"],
+ 'multiperson': ["", "multiperson involves tracking: will be faster if set to false. true if not specified"],
+ 'tracking_mode': ["", "sports2d or rtmlib. sports2d is generally much more accurate and comparable in speed. sports2d if not specified"],
+ 'deepsort_params': ["", 'Deepsort tracking parameters: """{dictionary between 3 double quotes}""". \n\
+ Default: max_age:30, n_init:3, nms_max_overlap:0.8, max_cosine_distance:0.3, nn_budget:200, max_iou_distance:0.8, embedder_gpu: True\n\
+ More information here: https://github.com/levan92/deep_sort_realtime/blob/master/deep_sort_realtime/deepsort_tracker.py#L51'],
  'input_size': ["", "width, height. 1280, 720 if not specified. Lower resolution will be faster but less precise"],
  'keypoint_likelihood_threshold': ["", "detected keypoints are not retained if likelihood is below this threshold. 0.3 if not specified"],
  'average_likelihood_threshold': ["", "detected persons are not retained if average keypoint likelihood is below this threshold. 0.5 if not specified"],
@@ -436,12 +481,90 @@ sports2d --help
  'sigma_kernel': ["", "sigma of the gaussian filter. 1 if not specified"],
  'nb_values_used': ["", "number of values used for the loess filter. 5 if not specified"],
  'kernel_size': ["", "kernel size of the median filter. 3 if not specified"],
+ 'osim_setup_path': ["", "path to OpenSim setup. '../OpenSim_setup' if not specified"],
+ 'right_left_symmetry': ["", "right left symmetry. true if not specified"],
+ 'default_height': ["", "default height for scaling. 1.70 if not specified"],
+ 'remove_individual_scaling_setup': ["", "remove individual scaling setup files generated during scaling. true if not specified"],
+ 'remove_individual_ik_setup': ["", "remove individual IK setup files generated during IK. true if not specified"],
+ 'fastest_frames_to_remove_percent': ["", "Frames with high speed are considered as outliers. Defaults to 0.1"],
+ 'close_to_zero_speed_m': ["","Sum for all keypoints: about 50 px/frame or 0.2 m/frame"],
+ 'large_hip_knee_angles': ["", "Hip and knee angles below this value are considered as imprecise and ignored. Defaults to 45"],
+ 'trimmed_extrema_percent': ["", "Proportion of the most extreme segment values to remove before calculating their mean. Defaults to 50"],
  'use_custom_logging': ["", "use custom logging. false if not specified"]
  ```

  <br>


+ ## Go further
+
+ ### Too slow for you?
+
+ **Quick fixes:**
+ - Use ` --save_vid false --save_img false --show_realtime_results false`: Will not save images or videos, and will not display the results in real time.
+ - Use `--mode lightweight`: Will use a lighter version of RTMPose, which is faster but less accurate.\
+ Note that any detection and pose models can be used (first [deploy them with MMPose](https://mmpose.readthedocs.io/en/latest/user_guides/how_to_deploy.html#onnx) if you do not have their .onnx or .zip files), with the following formalism:
+ ```
+ --mode """{'det_class':'YOLOX',
+ 'det_model':'https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/yolox_nano_8xb8-300e_humanart-40f6f0d0.zip',
+ 'det_input_size':[416,416],
+ 'pose_class':'RTMPose',
+ 'pose_model':'https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-t_simcc-body7_pt-body7_420e-256x192-026a1439_20230504.zip',
+ 'pose_input_size':[192,256]}"""
+ ```
+ - Use `--det_frequency 50`: Will detect poses only every 50 frames, and track keypoints in between, which is faster.
+ - Use `--multiperson false`: Can be used if one single person is present in the video. Otherwise, persons' IDs may be mixed up.
+ - Use `--load_trc_px <path_to_file_px.trc>`: Will use pose estimation results from a file. Useful if you want to use different parameters for pixel to meter conversion or angle calculation without running detection and pose estimation all over.
+ - Make sure you use `--tracking_mode sports2d`: Will use the default Sports2D tracker. Unlike DeepSort, it is faster, does not require any parametrization, and is as good in non-crowded scenes.
+
+ <br>
+
+ **Use your GPU**:\
+ Will be much faster, with no impact on accuracy. However, the installation takes about 6 GB of additional storage space.
+
+ 1. Run `nvidia-smi` in a terminal. If this results in an error, your GPU is probably not compatible with CUDA. If not, note the "CUDA version": it is the latest version your driver is compatible with (more information [on this post](https://stackoverflow.com/questions/60987997/why-torch-cuda-is-available-returns-false-even-after-installing-pytorch-with)).
+
+ Then go to the [ONNXruntime requirement page](https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), note the latest compatible CUDA and cuDNN requirements. Next, go to the [pyTorch website](https://pytorch.org/get-started/previous-versions/) and install the latest version that satisfies these requirements (beware that torch 2.4 ships with cuDNN 9, while torch 2.3 installs cuDNN 8). For example:
+ ``` cmd
+ pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
+ ```
+
+ <!-- > ***Note:*** Issues were reported with the default command. However, this has been tested and works:
+ `pip install torch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1 --index-url https://download.pytorch.org/whl/cu118` -->
+
+ 2. Finally, install ONNX Runtime with GPU support:
+ ```
+ pip install onnxruntime-gpu
+ ```
+
+ 3. Check that everything went well within Python with these commands:
+ ``` bash
+ python -c 'import torch; print(torch.cuda.is_available())'
+ python -c 'import onnxruntime as ort; print(ort.get_available_providers())'
+ # Should print "True ['CUDAExecutionProvider', ...]"
+ ```
+ <!-- print(f'torch version: {torch.__version__}, cuda version: {torch.version.cuda}, cudnn version: {torch.backends.cudnn.version()}, onnxruntime version: {ort.__version__}') -->
+
+ <br>
+
+
+
+
+
+
+ <!--
+
+ VIDEO THERE
+
+ -->
+
+
+ <br>
+
+
+
+
+
  ### How it works

  Sports2D:
@@ -459,7 +582,7 @@ Sports2D:

  2. **Sets up pose estimation with RTMLib.** It can be run in lightweight, balanced, or performance mode, and for faster inference, keypoints can be tracked instead of detected for a certain number of frames. Any RTMPose model can be used.

- 3. **Tracks people** so that their IDs are consistent across frames. A person is associated to another in the next frame when they are at a small distance. IDs remain consistent even if the person disappears from a few frames. This carefully crafted `sports2d` tracker runs at a comparable speed as the RTMlib one but is much more robust. The user can still choose the RTMLib method if they need it by specifying it in the Config.toml file.
+ 3. **Tracks people** so that their IDs are consistent across frames. A person is associated with another in the next frame when they are at a small distance. IDs remain consistent even if the person disappears for a few frames. We crafted a `sports2d` tracker which gives good results and runs in real time, but it is also possible to use `deepsort` in particularly challenging situations.

  4. **Chooses the right persons to keep.** In single-person mode, only keeps the person with the highest average scores over the sequence. In multi-person mode, only retrieves the keypoints with high enough confidence, and only keeps the persons with high enough average confidence over each frame.
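For intuition about the distance-based association described in step 3, here is a simplified, hypothetical sketch of greedy nearest-neighbour ID assignment between two consecutive frames; it is not the actual Sports2D tracker, which also keeps IDs alive when a person disappears for a few frames:

``` python
import numpy as np

def assign_ids(prev_centers, prev_ids, new_centers, max_dist=100.0, next_id=0):
    # Each new detection inherits the ID of the closest unmatched previous
    # detection if it lies within max_dist pixels; otherwise it gets a fresh ID.
    new_ids, used = [], set()
    for c in new_centers:
        dists = [np.linalg.norm(np.asarray(c) - np.asarray(p)) if i not in used else np.inf
                 for i, p in enumerate(prev_centers)]
        best = int(np.argmin(dists)) if dists else -1
        if best >= 0 and dists[best] < max_dist:
            new_ids.append(prev_ids[best])
            used.add(best)
        else:
            new_ids.append(next_id)
            next_id += 1
    return new_ids, next_id
```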
 
@@ -530,7 +653,8 @@ If you use Sports2D, please cite [Pagnon, 2024](https://joss.theoj.org/papers/10

  ### How to contribute
  I would happily welcome any proposal for new features, code improvement, and more!\
- If you want to contribute to Sports2D, please follow [this guide](https://docs.github.com/en/get-started/quickstart/contributing-to-projects) on how to fork, modify and push code, and submit a pull request. I would appreciate it if you provided as much useful information as possible about how you modified the code, and a rationale for why you're making this pull request. Please also specify on which operating system and on which python version you have tested the code.
+ If you want to contribute to Sports2D or Pose2Sim, please see [this issue](https://github.com/perfanalytics/pose2sim/issues/40).\
+ You will find a to-do list there, but please feel absolutely free to propose your own ideas and improvements.

  *Here is a to-do list: feel free to complete it:*
  - [x] Compute **segment angles**.
@@ -540,7 +664,7 @@ If you want to contribute to Sports2D, please follow [this guide](https://docs.g
  - [x] Handle sudden **changes of direction**.
  - [x] **Batch processing** for the analysis of multiple videos at once.
  - [x] Option to only save one person (with the highest average score, or with the most frames and fastest speed)
- - [x] Run again without pose estimation with the option `--load_trc` for px .trc file.
+ - [x] Run again without pose estimation with the option `--load_trc_px` for px .trc file.
  - [x] **Convert positions to meters** by providing the person height, a calibration file, or 3D points [to click on the image](https://stackoverflow.com/questions/74248955/how-to-display-the-coordinates-of-the-points-clicked-on-the-image-in-google-cola)
  - [x] Support any detection and/or pose estimation model.