sports2d 0.6.2.tar.gz → 0.6.3.tar.gz
This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
- {sports2d-0.6.2 → sports2d-0.6.3}/PKG-INFO +299 -183
- {sports2d-0.6.2 → sports2d-0.6.3}/README.md +297 -182
- {sports2d-0.6.2 → sports2d-0.6.3}/Sports2D/Demo/Config_demo.toml +36 -21
- {sports2d-0.6.2 → sports2d-0.6.3}/Sports2D/Sports2D.py +35 -13
- {sports2d-0.6.2 → sports2d-0.6.3}/Sports2D/Utilities/common.py +106 -0
- {sports2d-0.6.2 → sports2d-0.6.3}/Sports2D/Utilities/skeletons.py +7 -8
- {sports2d-0.6.2 → sports2d-0.6.3}/Sports2D/Utilities/tests.py +3 -3
- {sports2d-0.6.2 → sports2d-0.6.3}/Sports2D/process.py +126 -53
- {sports2d-0.6.2 → sports2d-0.6.3}/setup.cfg +2 -1
- {sports2d-0.6.2 → sports2d-0.6.3}/sports2d.egg-info/PKG-INFO +299 -183
- {sports2d-0.6.2 → sports2d-0.6.3}/sports2d.egg-info/requires.txt +1 -0
- {sports2d-0.6.2 → sports2d-0.6.3}/LICENSE +0 -0
- {sports2d-0.6.2 → sports2d-0.6.3}/Sports2D/Demo/demo.mp4 +0 -0
- {sports2d-0.6.2 → sports2d-0.6.3}/Sports2D/Utilities/__init__.py +0 -0
- {sports2d-0.6.2 → sports2d-0.6.3}/Sports2D/Utilities/filter.py +0 -0
- {sports2d-0.6.2 → sports2d-0.6.3}/Sports2D/__init__.py +0 -0
- {sports2d-0.6.2 → sports2d-0.6.3}/pyproject.toml +0 -0
- {sports2d-0.6.2 → sports2d-0.6.3}/setup.py +0 -0
- {sports2d-0.6.2 → sports2d-0.6.3}/sports2d.egg-info/SOURCES.txt +0 -0
- {sports2d-0.6.2 → sports2d-0.6.3}/sports2d.egg-info/dependency_links.txt +0 -0
- {sports2d-0.6.2 → sports2d-0.6.3}/sports2d.egg-info/entry_points.txt +0 -0
- {sports2d-0.6.2 → sports2d-0.6.3}/sports2d.egg-info/not-zip-safe +0 -0
- {sports2d-0.6.2 → sports2d-0.6.3}/sports2d.egg-info/top_level.txt +0 -0
@@ -1,6 +1,6 @@
 Metadata-Version: 2.2
 Name: sports2d
-Version: 0.6.2
+Version: 0.6.3
 Summary: Detect pose and compute 2D joint angles from a video.
 Home-page: https://github.com/davidpagnon/Sports2D
 Author: David Pagnon
@@ -33,6 +33,7 @@ Requires-Dist: opencv-python
 Requires-Dist: matplotlib
 Requires-Dist: PyQt5
 Requires-Dist: statsmodels
+Requires-Dist: c3d
 Requires-Dist: rtmlib
 Requires-Dist: openvino
 Requires-Dist: tqdm
@@ -71,6 +72,8 @@ Requires-Dist: deep-sort-realtime
 > - Batch process multiple videos at once
 >
 > Note: Colab version broken for now. I'll fix it in the next few weeks.
+
+***N.B.:*** As always, I am more than happy to welcome contributions (see [How to contribute](#how-to-contribute-and-to-do-list))!
 <!--User-friendly Colab version released! (and latest issues fixed, too)\
 Works on any smartphone!**\
 [](https://bit.ly/Sports2D_Colab)-->
@@ -92,12 +95,26 @@ If you need 3D research-grade markerless joint kinematics, consider using severa
 ## Contents
 1. [Installation and Demonstration](#installation-and-demonstration)
    1. [Installation](#installation)
+      1. [Quick install](#quick-install)
+      2. [Full install](#full-install)
    2. [Demonstration](#demonstration)
+      1. [Run the demo](#run-the-demo)
+      2. [Visualize in OpenSim](#visualize-in-opensim)
+      3. [Visualize in Blender](#visualize-in-blender)
   3. [Play with the parameters](#play-with-the-parameters)
+      1. [Run on a custom video or on a webcam](#run-on-a-custom-video-or-on-a-webcam)
+      2. [Run for a specific time range](#run-for-a-specific-time-range)
+      3. [Get coordinates in meters](#get-coordinates-in-meters)
+      4. [Run inverse kinematics](#run-inverse-kinematics)
+      5. [Run on several videos at once](#run-on-several-videos-at-once)
+      6. [Use the configuration file or run within Python](#use-the-configuration-file-or-run-within-python)
+      7. [Get the angles the way you want](#get-the-angles-the-way-you-want)
+      8. [Customize your output](#customize-your-output)
+      9. [Use a custom pose estimation model](#use-a-custom-pose-estimation-model)
+      10. [All the parameters](#all-the-parameters)
 2. [Go further](#go-further)
    1. [Too slow for you?](#too-slow-for-you)
-
-   3. [All the parameters](#all-the-parameters)
+   3. [Run inverse kinematics](#run-inverse-kinematics)
    4. [How it works](#how-it-works)
 3. [How to cite and how to contribute](#how-to-cite-and-how-to-contribute)
 
@@ -115,33 +132,54 @@ If you need 3D research-grade markerless joint kinematics, consider using severa
 
 -->
 
-
-
-
-pip install sports2d
-```
+#### Quick install
+
+> N.B.: Full install is required for OpenSim inverse kinematics.
 
--
-
-
-
-
-
+Open a terminal. Type `python -V` to make sure python >=3.10 <=3.11 is installed. If not, install it [from there](https://www.python.org/downloads/).
+
+Run:
+``` cmd
+pip install sports2d
+```
+
+Alternatively, build from source to test the last changes:
+``` cmd
+git clone https://github.com/davidpagnon/sports2d.git
+cd sports2d
+pip install .
+```
+
+<br>
+
+#### Full install
+
+> Only needed if you want to run inverse kinematics (`--do_ik True`).
+
+- Install Anaconda or [Miniconda](https://docs.conda.io/en/latest/miniconda.html):\
+Open an Anaconda prompt and create a virtual environment:
+``` cmd
+conda create -n Sports2D python=3.10 -y
+conda activate Sports2D
+```
+- **Install OpenSim**:\
+Install the OpenSim Python API (if you do not want to install via conda, refer [to this page](https://opensimconfluence.atlassian.net/wiki/spaces/OpenSim/pages/53085346/Scripting+in+Python#ScriptinginPython-SettingupyourPythonscriptingenvironment(ifnotusingconda))):
+```
+conda install -c opensim-org opensim -y
 ```
+
+- **Install Sports2D with Pose2Sim**:
+``` cmd
+pip install sports2d pose2sim
+```
 
-- OPTION 3: **Build from source and test the last changes**\
-Open a terminal in the directory of your choice and clone the Sports2D repository.
-``` cmd
-git clone https://github.com/davidpagnon/sports2d.git
-cd sports2d
-pip install .
-```
 
 <br>
 
 ### Demonstration
 
+#### Run the demo:
+
 Just open a command line and run:
 ``` cmd
 sports2d
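The Quick install step in the diff above asks you to check `python -V` by hand before installing; as an illustration only, the same >=3.10 <=3.11 gate can be sketched as a small helper (hypothetical, not part of Sports2D):

```python
import sys

def python_version_ok(version_info=sys.version_info):
    """Return True if the interpreter matches the documented
    Sports2D requirement: python >=3.10 and <=3.11."""
    major, minor = version_info[0], version_info[1]
    return (major, minor) in [(3, 10), (3, 11)]
```

Running this before `pip install sports2d` mirrors the manual `python -V` check.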
@@ -166,213 +204,211 @@ The Demo video is voluntarily challenging to demonstrate the robustness of the p
 
 <br>
 
+
+#### Visualize in Blender
+
+1. **Install the Pose2Sim_Blender add-on.**\
+Follow instructions on the [Pose2Sim_Blender](https://github.com/davidpagnon/Pose2Sim_Blender) add-on page.
+2. **Open your point coordinates.**\
+**Add Markers**: open your trc file (e.g., `coords_m.trc`) from your `result_dir` folder.
+
+This will optionally create **an animated rig** based on the motion of the captured person.
+3. **Open your animated skeleton:**\
+Make sure you first set `--do_ik True` ([full install](#full-install) required). See [inverse kinematics](#run-inverse-kinematics) section for more details.
+- **Add Model**: Open your scaled model (e.g., `Model_Pose2Sim_LSTM.osim`).
+- **Add Motion**: Open your motion file (e.g., `angles.mot`). Make sure the skeleton is selected in the outliner.
+
+The OpenSim skeleton is not rigged yet. **[Feel free to contribute!](https://github.com/perfanalytics/pose2sim/issues/40)**
+
+<!-- IMAGE ICI
+-->
+
+
+<br>
+
+
+#### Visualize in OpenSim
+
+1. Install **[OpenSim GUI](https://simtk.org/frs/index.php?group_id=91)**.
+2. **Visualize point coordinates:**\
+**File -> Preview experimental data:** Open your trc file (e.g., `coords_m.trc`) from your `result_dir` folder.
+3. **Visualize angles:**\
+To open an animated model and run further biomechanical analysis, make sure you first set `--do_ik True` ([full install](#full-install) required). See [inverse kinematics](#run-inverse-kinematics) section for more details.
+- **File -> Open Model:** Open your scaled model (e.g., `Model_Pose2Sim_LSTM.osim`).
+- **File -> Load Motion:** Open your motion file (e.g., `angles.mot`).
+
+<br>
+
+<!-- IMAGE ICI
+-->
+
+
 ### Play with the parameters
 
-For a full list of the available parameters, see [this section](#all-the-parameters) of the documentation, check the [Config_Demo.toml](https://github.com/davidpagnon/Sports2D/blob/main/Sports2D/Demo/Config_demo.toml) file, or type
+For a full list of the available parameters, see [this section](#all-the-parameters) of the documentation, check the [Config_Demo.toml](https://github.com/davidpagnon/Sports2D/blob/main/Sports2D/Demo/Config_demo.toml) file, or type `sports2d --help`. All non-specified parameters are set to default values.
+
+<br>
+
+
+#### Run on a custom video or on a webcam:
 ``` cmd
-sports2d --
+sports2d --video_input path_to_video.mp4
+```
+
+``` cmd
+sports2d --video_input webcam
 ```
+
 <br>
 
-#### Run on custom video with default parameters:
-``` cmd
-sports2d --video_input path_to_video.mp4
-```
 
-#### Run
-
-
-
+#### Run for a specific time range:
+```cmd
+sports2d --time_range 1.2 2.7
+```
+
 <br>
 
-
+
+#### Get coordinates in meters:
 
 <!-- You either need to provide a calibration file, or simply the height of a person (Note that the latter will not take distortions into account, and that it will be less accurate for motion in the frontal plane).\-->
-
+You may need to convert pixel coordinates to meters.\
+Just provide the height of the reference person (and their ID in case of multiple person detection).\
 The floor angle and the origin of the xy axis are computed automatically from gait. If you analyze another type of motion, you can manually specify them.\
 Note that it does not take distortions into account, and that it will be less accurate for motions in the frontal plane.
 
-
-
-
-
-
-
-
-
-<br>
+``` cmd
+sports2d --to_meters True --calib_file calib_demo.toml
+```
+``` cmd
+sports2d --to_meters True --px_to_m_person_height 1.65 --px_to_m_from_person_id 2
+```
+``` cmd
+sports2d --to_meters True --px_to_m_person_height 1.65 --px_to_m_from_person_id 2 --floor_angle 0 --xy_origin 0 940
+```
 
-#### Run with custom parameters (all non specified are set to default):
-``` cmd
-sports2d --video_input demo.mp4 other_video.mp4
-```
-``` cmd
-sports2d --show_graphs False --time_range 1.2 2.7 --result_dir path_to_result_dir --slowmo_factor 4
-```
-``` cmd
-sports2d --multiperson false --pose_model Body --mode lightweight --det_frequency 50
-```
-``` cmd
-sports2d --tracking_mode deepsort --deepsort_params """{'max_age':30, 'n_init':3, 'nms_max_overlap':0.8, 'max_cosine_distance':0.3, 'nn_budget':200, 'max_iou_distance':0.8, 'embedder_gpu': True}"""
-```
 <br>
 
-#### Run with a toml configuration file:
-``` cmd
-sports2d --config Config_demo.toml
-```
-<br>
 
-#### Run
-
-from Sports2D import Sports2D; Sports2D.process('Config_demo.toml')
-```
-``` python
-from Sports2D import Sports2D; Sports2D.process(config_dict)
-```
+#### Run inverse kinematics:
+> N.B.: [Full install](#full-install) required.
 
-
+> N.B.: The person needs to be moving on a single plane for the whole selected time range.
 
-
+Analyzed persons can be showing their left, right, front, or back side. If you want to ignore a certain person, set `--visible_side none`.
 
-### Too slow for you?
 
-**Quick fixes:**
-- Use ` --save_vid false --save_img false --show_realtime_results false`: Will not save images or videos, and will not display the results in real time.
-- Use `--mode lightweight`: Will use a lighter version of RTMPose, which is faster but less accurate.\
-Note that any detection and pose models can be used (first [deploy them with MMPose](https://mmpose.readthedocs.io/en/latest/user_guides/how_to_deploy.html#onnx) if you do not have their .onnx or .zip files), with the following formalism:
-```
---mode """{'det_class':'YOLOX',
-'det_model':'https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/yolox_nano_8xb8-300e_humanart-40f6f0d0.zip',
-'det_input_size':[416,416],
-'pose_class':'RTMPose',
-'pose_model':'https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-t_simcc-body7_pt-body7_420e-256x192-026a1439_20230504.zip',
-'pose_input_size':[192,256]}"""
-```
-- Use `--det_frequency 50`: Will detect poses only every 50 frames, and track keypoints in between, which is faster.
-- Use `--multiperson false`: Can be used if one single person is present in the video. Otherwise, persons' IDs may be mixed up.
-- Use `--load_trc <path_to_file_px.trc>`: Will use pose estimation results from a file. Useful if you want to use different parameters for pixel to meter conversion or angle calculation without running detection and pose estimation all over.
-- Use `--tracking_mode sports2d`: Will use the default Sports2D tracker. Unlike DeepSort, it is faster, does not require any parametrization, and is as good in non-crowded scenes.
 
-<br>
-
+Why IK?
+Add section in how it works
 
-1. Run `nvidia-smi` in a terminal. If this results in an error, your GPU is probably not compatible with CUDA. If not, note the "CUDA version": it is the latest version your driver is compatible with (more information [on this post](https://stackoverflow.com/questions/60987997/why-torch-cuda-is-available-returns-false-even-after-installing-pytorch-with)).
 
-Then go to the [ONNXruntime requirement page](https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), note the latest compatible CUDA and cuDNN requirements. Next, go to the [pyTorch website](https://pytorch.org/get-started/previous-versions/) and install the latest version that satisfies these requirements (beware that torch 2.4 ships with cuDNN 9, while torch 2.3 installs cuDNN 8). For example:
-``` cmd
-pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
-```
 
+```cmd
+sports2d --time_range 1.2 2.7 --do_ik true --visible_side front left
+```
 
-```
+```cmd
+sports2d --time_range 1.2 2.7 --do_ik true --visible_side front left --use_augmentation True
+```
 
+<br>
+
+
+#### Run on several videos at once:
+``` cmd
+sports2d --video_input demo.mp4 other_video.mp4
+```
+All videos analyzed with the same time range.
+```cmd
+sports2d --video_input demo.mp4 other_video.mp4 --time_range 1.2 2.7
+```
+Different time ranges for each video.
+```cmd
+sports2d --video_input demo.mp4 other_video.mp4 --time_range 1.2 2.7 0 3.5
+```
 
 <br>
 
-### What you need is what you get
 
-####
-
-
+#### Use the configuration file or run within Python:
+
+- Run with a configuration file:
+``` cmd
+sports2d --config Config_demo.toml
 ```
+- Run within Python:
+``` python
+from Sports2D import Sports2D; Sports2D.process('Config_demo.toml')
+```
+- Run within Python with a dictionary (for example, `config_dict = toml.load('Config_demo.toml')`):
+``` python
+from Sports2D import Sports2D; Sports2D.process(config_dict)
+```
+
 <br>
 
-
-
-
-sports2d --save_vid false --save_img true --save_pose false --save_angles true --show_realtime_results false --show_graphs false
-```
+
+#### Get the angles the way you want:
+
 - Choose which angles you need:
 ```cmd
 sports2d --joint_angles 'right knee' 'left knee' --segment_angles None
 ```
 - Choose where to display the angles: either as a list on the upper-left of the image, or near the joint/segment, or both:
 ```cmd
-sports2d --display_angle_values_on body
+sports2d --display_angle_values_on body # OR none, or list
 ```
 - You can also decide not to calculate and display angles at all:
 ```cmd
 sports2d --calculate_angles false
 ```
+- To run **inverse kinematics with OpenSim**, check [this section](#run-inverse-kinematics)
+
 <br>
 
-
-
-
-
+
+#### Customize your output:
+- Only analyze the most prominent person:
+``` cmd
+sports2d --multiperson false
 ```
+- Choose whether you want video, images, trc pose file, angle mot file, real-time display, and plots:
 ```cmd
-sports2d --
+sports2d --save_vid false --save_img true --save_pose false --save_angles true --show_realtime_results false --show_graphs false
+```
+- Save results to a custom directory, specify the slow-motion factor:
+``` cmd
+sports2d --result_dir path_to_result_dir
 ```
-
-<!--
-<br>
-
-### Constrain results to a biomechanical model
-
-> Why + image\
-> Add explanation in "how it works" section
-
-#### Installation
-You will need to install OpenSim via conda, which makes installation slightly more complicated.
-
-1. **Install Anaconda or [Miniconda](https://docs.conda.io/en/latest/miniconda.html).**
-
-Once installed, open an Anaconda prompt and create a virtual environment:
-```
-conda create -n Sports2D python=3.9 -y
-conda activate Sports2D
-```
-
-2. **Install OpenSim**:\
-Install the OpenSim Python API (if you do not want to install via conda, refer [to this page](https://opensimconfluence.atlassian.net/wiki/spaces/OpenSim/pages/53085346/Scripting+in+Python#ScriptinginPython-SettingupyourPythonscriptingenvironment(ifnotusingconda))):
-```
-conda install -c opensim-org opensim -y
-```
-
-3. **Install Sports2D**:\
-Open a terminal.
-``` cmd
-pip install sports2d
-```
-<br>
-
-#### Usage
-
-Need person doing a 2D motion. If not, trim the video with `--time_range` option.
-
-```cmd
-sports2d --time_range 1.2 2.7 --ik true --person_orientation front none left
-```
 
 <br>
 
-#### Visualize the results
-- The simplest option is to use OpenSim GUI
-- If you want to see the skeleton overlay on the video, you can install the Pose2Sim Blender plugin.
 
-
+#### Use a custom pose estimation model:
+- Retrieve hand motion:
+``` cmd
+sports2d --pose_model WholeBody
+```
+- Use any custom (deployed) MMPose model
+``` cmd
+sports2d --pose_model BodyWithFeet :
+--mode """{'det_class':'YOLOX',
+'det_model':'https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/yolox_m_8xb8-300e_humanart-c2c7a14a.zip',
+'det_input_size':[640, 640],
+'pose_class':'RTMPose',
+'pose_model':'https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-m_simcc-body7_pt-body7-halpe26_700e-256x192-4d3e73dd_20230605.zip',
+'pose_input_size':[192,256]}"""
+```
 
 <br>
 
 
-
+#### All the parameters
 
 For a full list of the available parameters, have a look at the [Config_Demo.toml](https://github.com/davidpagnon/Sports2D/blob/main/Sports2D/Demo/Config_demo.toml) file or type:
 
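The "Get coordinates in meters" changes above describe the conversion as: scale by the reference person's height, place an xy origin, and account for the floor angle. As an illustrative sketch only (assuming the person's height in pixels is already known; this is not Sports2D's actual implementation, which derives these quantities from gait):

```python
import math

def px_to_m(x_px, y_px, person_height_m, person_height_px,
            xy_origin=(0.0, 0.0), floor_angle_deg=0.0):
    """Convert an image point to meters: scale by the reference person's
    height, shift to the chosen origin, flip y (image y grows downward),
    then rotate so the floor becomes horizontal."""
    scale = person_height_m / person_height_px   # meters per pixel
    x = (x_px - xy_origin[0]) * scale
    y = (xy_origin[1] - y_px) * scale            # flip the image y axis
    a = math.radians(floor_angle_deg)
    return (x * math.cos(a) + y * math.sin(a),
            -x * math.sin(a) + y * math.cos(a))

# e.g. a 1.80 m person spanning 900 px, origin at pixel (0, 940)
x_m, y_m = px_to_m(100, 540, 1.8, 900, xy_origin=(0, 940))
```

As the README notes, such a planar conversion ignores lens distortion and is less accurate for motion in the frontal plane.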
@@ -381,11 +417,11 @@ sports2d --help
 ```
 
 ```
-
-
+'config': ["C", "path to a toml configuration file"],
 'video_input': ["i", "webcam, or video_path.mp4, or video1_path.avi video2_path.mp4 ... Beware that images won't be saved if paths contain non ASCII characters"],
-'
-'
+'px_to_m_person_height': ["H", "height of the person in meters. 1.70 if not specified"],
+'visible_side': ["", "front, back, left, right, auto, or none. 'front auto' if not specified. If 'auto', will be either left or right depending on the direction of the motion. If 'none', no IK for this person"],
+'load_trc_px': ["", "load trc file to avoid running pose estimation again. false if not specified"],
 'compare': ["", "visually compare motion with trc file. false if not specified"],
 'webcam_id': ["w", "webcam ID. 0 if not specified"],
 'time_range': ["t", "start_time end_time. In seconds. Whole video if not specified. start_time1 end_time1 start_time2 end_time2 ... if multiple videos with different time ranges"],
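The option listing in this hunk pairs each parameter name with a short flag and a help string (`'name': ["flag", "help"]`). Purely as an illustration of how such a table could drive a command-line parser (this is not Sports2D's actual code), with two entries copied from the listing:

```python
import argparse

# Subset of the option table shown in the diff above
CLI_TABLE = {
    'webcam_id': ["w", "webcam ID. 0 if not specified"],
    'to_meters': ["M", "convert pixels to meters. true if not specified"],
}

def build_parser(table):
    """Turn a {name: [short_flag, help]} table into an argparse parser."""
    parser = argparse.ArgumentParser(prog="sports2d")
    for name, (short, help_text) in table.items():
        flags = [f"-{short}", f"--{name}"] if short else [f"--{name}"]
        parser.add_argument(*flags, help=help_text)
    return parser

args = build_parser(CLI_TABLE).parse_args(["-w", "1", "--to_meters", "true"])
```

Empty short flags (`""`) simply yield a long option only, matching entries like `'visible_side'` above.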
@@ -403,26 +439,27 @@ sports2d --help
 'save_angles': ["A", "save angles as mot files. true if not specified"],
 'slowmo_factor': ["", "slow-motion factor. For a video recorded at 240 fps and exported to 30 fps, it would be 240/30 = 8. 1 if not specified"],
 'pose_model': ["p", "only body_with_feet is available for now. body_with_feet if not specified"],
-'mode': ["m",
+'mode': ["m", 'light, balanced, performance, or a """{dictionary within triple quote}""". balanced if not specified. Use a dictionary to specify your own detection and/or pose estimation models (more about this in the documentation).'],
 'det_frequency': ["f", "run person detection only every N frames, and inbetween track previously detected bounding boxes. keypoint detection is still run on all frames.\n\
-
-'to_meters': ["M", "convert pixels to meters. true if not specified"],
-
+Equal to or greater than 1, can be as high as you want in simple uncrowded cases. Much faster, but might be less accurate. 1 if not specified: detection runs on all frames"],
 'backend': ["", "Backend for pose estimation can be 'auto', 'cpu', 'cuda', 'mps' (for MacOS), or 'rocm' (for AMD GPUs)"],
 'device': ["", "Device for pose estimation can be 'auto', 'openvino', 'onnxruntime', 'opencv'"],
-'
+'to_meters': ["M", "convert pixels to meters. true if not specified"],
+'make_c3d': ["", "Convert trc to c3d file. true if not specified"],
+'px_to_m_from_person_id': ["", "person ID to calibrate on. 0 if not specified"],
 'floor_angle': ["", "angle of the floor. 'auto' if not specified"],
 'xy_origin': ["", "origin of the xy plane. 'auto' if not specified"],
 'calib_file': ["", "path to calibration file. '' if not specified, eg no calibration file"],
 'save_calib': ["", "save calibration file. true if not specified"],
 'do_ik': ["", "do inverse kinematics. false if not specified"],
-'
-'
+'use_augmentation': ["", "Use LSTM marker augmentation. false if not specified"],
+'use_contacts_muscles': ["", "Use model with contact spheres and muscles. false if not specified"],
 'close_to_zero_speed_m': ["","Sum for all keypoints: about 50 px/frame or 0.2 m/frame"],
 'multiperson': ["", "multiperson involves tracking: will be faster if set to false. true if not specified"],
 'tracking_mode': ["", "sports2d or rtmlib. sports2d is generally much more accurate and comparable in speed. sports2d if not specified"],
 'deepsort_params': ["", 'Deepsort tracking parameters: """{dictionary between 3 double quotes}""". \n\
-
+Default: max_age:30, n_init:3, nms_max_overlap:0.8, max_cosine_distance:0.3, nn_budget:200, max_iou_distance:0.8, embedder_gpu: True\n\
+More information here: https://github.com/levan92/deep_sort_realtime/blob/master/deep_sort_realtime/deepsort_tracker.py#L51'],
 'input_size': ["", "width, height. 1280, 720 if not specified. Lower resolution will be faster but less precise"],
 'keypoint_likelihood_threshold': ["", "detected keypoints are not retained if likelihood is below this threshold. 0.3 if not specified"],
 'average_likelihood_threshold': ["", "detected persons are not retained if average keypoint likelihood is below this threshold. 0.5 if not specified"],
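`--deepsort_params` takes a Python-dict literal wrapped in triple quotes. One safe way such a string could be parsed is `ast.literal_eval`, which accepts only literals so arbitrary code cannot run (a sketch of the parsing idea, not necessarily how Sports2D does it; the default values are those listed in the help text above):

```python
import ast

raw = ("{'max_age':30, 'n_init':3, 'nms_max_overlap':0.8, "
       "'max_cosine_distance':0.3, 'nn_budget':200, "
       "'max_iou_distance':0.8, 'embedder_gpu': True}")

# literal_eval evaluates Python literals only (no function calls, no names
# other than True/False/None), so a malicious string raises instead of running
deepsort_params = ast.literal_eval(raw)
```

The triple double quotes on the command line exist only to get the single-quoted dict through the shell intact; by the time it is parsed it is an ordinary string like `raw`.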
@@ -444,12 +481,90 @@ sports2d --help
 'sigma_kernel': ["", "sigma of the gaussian filter. 1 if not specified"],
 'nb_values_used': ["", "number of values used for the loess filter. 5 if not specified"],
 'kernel_size': ["", "kernel size of the median filter. 3 if not specified"],
+'osim_setup_path': ["", "path to OpenSim setup. '../OpenSim_setup' if not specified"],
+'right_left_symmetry': ["", "right left symmetry. true if not specified"],
+'default_height': ["", "default height for scaling. 1.70 if not specified"],
+'remove_individual_scaling_setup': ["", "remove individual scaling setup files generated during scaling. true if not specified"],
+'remove_individual_ik_setup': ["", "remove individual IK setup files generated during IK. true if not specified"],
+'fastest_frames_to_remove_percent': ["", "Frames with high speed are considered as outliers. Defaults to 0.1"],
+'close_to_zero_speed_m': ["","Sum for all keypoints: about 50 px/frame or 0.2 m/frame"],
+'large_hip_knee_angles': ["", "Hip and knee angles below this value are considered as imprecise and ignored. Defaults to 45"],
+'trimmed_extrema_percent': ["", "Proportion of the most extreme segment values to remove before calculating their mean. Defaults to 50"],
 'use_custom_logging': ["", "use custom logging. false if not specified"]
 ```
 
 <br>
 
 
+## Go further
+
+### Too slow for you?
+
+**Quick fixes:**
+- Use ` --save_vid false --save_img false --show_realtime_results false`: Will not save images or videos, and will not display the results in real time.
+- Use `--mode lightweight`: Will use a lighter version of RTMPose, which is faster but less accurate.\
+Note that any detection and pose models can be used (first [deploy them with MMPose](https://mmpose.readthedocs.io/en/latest/user_guides/how_to_deploy.html#onnx) if you do not have their .onnx or .zip files), with the following formalism:
+```
+--mode """{'det_class':'YOLOX',
+'det_model':'https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/yolox_nano_8xb8-300e_humanart-40f6f0d0.zip',
+'det_input_size':[416,416],
+'pose_class':'RTMPose',
+'pose_model':'https://download.openmmlab.com/mmpose/v1/projects/rtmposev1/onnx_sdk/rtmpose-t_simcc-body7_pt-body7_420e-256x192-026a1439_20230504.zip',
+'pose_input_size':[192,256]}"""
+```
+- Use `--det_frequency 50`: Will detect poses only every 50 frames, and track keypoints in between, which is faster.
+- Use `--multiperson false`: Can be used if one single person is present in the video. Otherwise, persons' IDs may be mixed up.
+- Use `--load_trc_px <path_to_file_px.trc>`: Will use pose estimation results from a file. Useful if you want to use different parameters for pixel to meter conversion or angle calculation without running detection and pose estimation all over.
+- Make sure you use `--tracking_mode sports2d`: Will use the default Sports2D tracker. Unlike DeepSort, it is faster, does not require any parametrization, and is as good in non-crowded scenes.
+
+<br>
+
+**Use your GPU**:\
+Will be much faster, with no impact on accuracy. However, the installation takes about 6 GB of additional storage space.
+
+1. Run `nvidia-smi` in a terminal. If this results in an error, your GPU is probably not compatible with CUDA. If not, note the "CUDA version": it is the latest version your driver is compatible with (more information [on this post](https://stackoverflow.com/questions/60987997/why-torch-cuda-is-available-returns-false-even-after-installing-pytorch-with)).
+
+Then go to the [ONNXruntime requirement page](https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html#requirements), note the latest compatible CUDA and cuDNN requirements. Next, go to the [pyTorch website](https://pytorch.org/get-started/previous-versions/) and install the latest version that satisfies these requirements (beware that torch 2.4 ships with cuDNN 9, while torch 2.3 installs cuDNN 8). For example:
+``` cmd
+pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124
+```
+
+<!-- > ***Note:*** Issues were reported with the default command. However, this has been tested and works:
+`pip install torch==2.3.1 torchvision==0.18.1 torchaudio==2.3.1 --index-url https://download.pytorch.org/whl/cu118` -->
+
+2. Finally, install ONNX Runtime with GPU support:
+```
+pip install onnxruntime-gpu
+```
+
+3. Check that everything went well within Python with these commands:
+``` bash
+python -c 'import torch; print(torch.cuda.is_available())'
+python -c 'import onnxruntime as ort; print(ort.get_available_providers())'
+# Should print "True ['CUDAExecutionProvider', ...]"
+```
+<!-- print(f'torch version: {torch.__version__}, cuda version: {torch.version.cuda}, cudnn version: {torch.backends.cudnn.version()}, onnxruntime version: {ort.__version__}') -->
+
+<br>
+
+
+<!--
+VIDEO THERE
+-->
+
+<br>
+
+
 ### How it works
 
 Sports2D:
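`--det_frequency 50` in the "Too slow for you?" section above means the (slow) person detector runs on one frame in fifty, while tracking bridges the frames in between and keypoint estimation still runs everywhere. A toy scheduler illustrating that idea (assumed behavior for illustration, not the actual Sports2D loop):

```python
def detection_frames(n_frames, det_frequency):
    """Frame indices where full person detection runs; on every other
    frame the previously detected bounding boxes are tracked instead,
    and only keypoint estimation is performed."""
    if det_frequency < 1:
        raise ValueError("det_frequency must be >= 1")
    return [i for i in range(n_frames) if i % det_frequency == 0]
```

With `det_frequency=1` (the default) detection runs on all frames; raising it trades a little accuracy in crowded scenes for a large speedup.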
@@ -538,7 +653,8 @@ If you use Sports2D, please cite [Pagnon, 2024](https://joss.theoj.org/papers/10
 
 ### How to contribute
 I would happily welcome any proposal for new features, code improvement, and more!\
-If you want to contribute to Sports2D, please
+If you want to contribute to Sports2D or Pose2Sim, please see [this issue](https://github.com/perfanalytics/pose2sim/issues/40).\
+You will be proposed a to-do list, but please feel absolutely free to propose your own ideas and improvements.
 
 *Here is a to-do list: feel free to complete it:*
 - [x] Compute **segment angles**.
@@ -548,7 +664,7 @@ If you want to contribute to Sports2D, please follow [this guide](https://docs.g
 - [x] Handle sudden **changes of direction**.
 - [x] **Batch processing** for the analysis of multiple videos at once.
 - [x] Option to only save one person (with the highest average score, or with the most frames and fastest speed)
-- [x] Run again without pose estimation with the option `--
+- [x] Run again without pose estimation with the option `--load_trc_px` for px .trc file.
 - [x] **Convert positions to meters** by providing the person height, a calibration file, or 3D points [to click on the image](https://stackoverflow.com/questions/74248955/how-to-display-the-coordinates-of-the-points-clicked-on-the-image-in-google-cola)
 - [x] Support any detection and/or pose estimation model.
 