kinemotion 0.2.0__tar.gz → 0.4.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release.



Files changed (34)
  1. {kinemotion-0.2.0 → kinemotion-0.4.0}/.gitignore +2 -0
  2. {kinemotion-0.2.0 → kinemotion-0.4.0}/CLAUDE.md +67 -46
  3. {kinemotion-0.2.0 → kinemotion-0.4.0}/PKG-INFO +35 -85
  4. {kinemotion-0.2.0 → kinemotion-0.4.0}/README.md +34 -84
  5. kinemotion-0.4.0/docs/ERRORS_FINDINGS.md +260 -0
  6. kinemotion-0.4.0/docs/FRAMERATE.md +747 -0
  7. kinemotion-0.4.0/docs/IMPLEMENTATION_PLAN.md +795 -0
  8. kinemotion-0.4.0/docs/IMU_METADATA_PRESERVATION.md +124 -0
  9. {kinemotion-0.2.0 → kinemotion-0.4.0}/docs/PARAMETERS.md +244 -332
  10. kinemotion-0.4.0/docs/VALIDATION_PLAN.md +706 -0
  11. {kinemotion-0.2.0 → kinemotion-0.4.0}/pyproject.toml +1 -1
  12. kinemotion-0.4.0/src/kinemotion/cli.py +20 -0
  13. {kinemotion-0.2.0 → kinemotion-0.4.0}/src/kinemotion/dropjump/analysis.py +14 -2
  14. {kinemotion-0.2.0/src/kinemotion → kinemotion-0.4.0/src/kinemotion/dropjump}/cli.py +41 -80
  15. {kinemotion-0.2.0 → kinemotion-0.4.0}/src/kinemotion/dropjump/kinematics.py +23 -7
  16. {kinemotion-0.2.0 → kinemotion-0.4.0}/.tool-versions +0 -0
  17. {kinemotion-0.2.0 → kinemotion-0.4.0}/LICENSE +0 -0
  18. {kinemotion-0.2.0 → kinemotion-0.4.0}/examples/programmatic_usage.py +0 -0
  19. {kinemotion-0.2.0 → kinemotion-0.4.0}/src/kinemotion/__init__.py +0 -0
  20. {kinemotion-0.2.0 → kinemotion-0.4.0}/src/kinemotion/core/__init__.py +0 -0
  21. {kinemotion-0.2.0 → kinemotion-0.4.0}/src/kinemotion/core/filtering.py +0 -0
  22. {kinemotion-0.2.0 → kinemotion-0.4.0}/src/kinemotion/core/pose.py +0 -0
  23. {kinemotion-0.2.0 → kinemotion-0.4.0}/src/kinemotion/core/smoothing.py +0 -0
  24. {kinemotion-0.2.0 → kinemotion-0.4.0}/src/kinemotion/core/video_io.py +0 -0
  25. {kinemotion-0.2.0 → kinemotion-0.4.0}/src/kinemotion/dropjump/__init__.py +0 -0
  26. {kinemotion-0.2.0 → kinemotion-0.4.0}/src/kinemotion/dropjump/debug_overlay.py +0 -0
  27. {kinemotion-0.2.0 → kinemotion-0.4.0}/tests/__init__.py +0 -0
  28. {kinemotion-0.2.0 → kinemotion-0.4.0}/tests/test_adaptive_threshold.py +0 -0
  29. {kinemotion-0.2.0 → kinemotion-0.4.0}/tests/test_aspect_ratio.py +0 -0
  30. {kinemotion-0.2.0 → kinemotion-0.4.0}/tests/test_com_estimation.py +0 -0
  31. {kinemotion-0.2.0 → kinemotion-0.4.0}/tests/test_contact_detection.py +0 -0
  32. {kinemotion-0.2.0 → kinemotion-0.4.0}/tests/test_filtering.py +0 -0
  33. {kinemotion-0.2.0 → kinemotion-0.4.0}/tests/test_kinematics.py +0 -0
  34. {kinemotion-0.2.0 → kinemotion-0.4.0}/tests/test_polyorder.py +0 -0
@@ -60,3 +60,5 @@ Thumbs.db
  *.mp4
  *.jpeg
  *.jpg
+
+ .claude/settings.local.json*
@@ -11,12 +11,14 @@ Kinemotion: Video-based kinematic analysis tool for athletic performance. Analyz
  ### Dependencies

  Managed with `uv` and `asdf`:
+
  - Python version: 3.12.7 (specified in `.tool-versions`)
  - **Important**: MediaPipe requires Python 3.12 or earlier (no 3.13 support yet)
  - Install dependencies: `uv sync`
  - Run CLI: `kinemotion dropjump-analyze <video.mp4>`

  **Production dependencies:**
+
  - click: CLI framework
  - opencv-python: Video processing
  - mediapipe: Pose detection and tracking
@@ -24,6 +26,7 @@ Managed with `uv` and `asdf`:
  - scipy: Signal processing (Savitzky-Golay filter)

  **Development dependencies:**
+
  - pytest: Testing framework
  - black: Code formatting
  - ruff: Fast Python linter
@@ -45,20 +48,22 @@ Managed with `uv` and `asdf`:

  ### Module Structure

- ```
+ ```text
  src/kinemotion/
  ├── __init__.py
- ├── cli.py # Click-based CLI entry point
+ ├── cli.py # Main CLI entry point (registers subcommands)
  ├── core/ # Shared functionality across all jump types
  │   ├── __init__.py
  │   ├── pose.py # MediaPipe Pose integration + CoM
  │   ├── smoothing.py # Savitzky-Golay landmark smoothing
- │   └── filtering.py # Outlier rejection + bilateral filtering
+ │   ├── filtering.py # Outlier rejection + bilateral filtering
+ │   └── video_io.py # Video processing (VideoProcessor class)
  └── dropjump/ # Drop jump specific analysis
      ├── __init__.py
+     ├── cli.py # Drop jump CLI command (dropjump-analyze)
      ├── analysis.py # Ground contact state detection
      ├── kinematics.py # Drop jump metrics calculations
-     └── video_io.py # Video processing and debug overlay rendering
+     └── debug_overlay.py # Debug video overlay rendering

  tests/
  ├── test_adaptive_threshold.py # Adaptive threshold tests
@@ -70,14 +75,25 @@ tests/
  └── test_polyorder.py # Polynomial order tests

  docs/
- └── PARAMETERS.md # Comprehensive guide to all CLI parameters
+ ├── PARAMETERS.md # Comprehensive guide to all CLI parameters
+ └── IMPLEMENTATION_PLAN.md # Implementation plan and fix guide
  ```

  **Design Rationale:**
+
  - `core/` contains shared code reusable across different jump types (CMJ, squat jumps, etc.)
- - `dropjump/` contains drop jump specific logic and metrics
- - Future jump types (CMJ, squat) will be sibling modules to `dropjump/`
- - Single CLI with subcommands for different analysis types
+ - `dropjump/` contains drop jump specific logic, metrics, and CLI command
+ - Each jump type module contains its own CLI command definition
+ - Main `cli.py` is just an entry point that registers subcommands from each module
+ - Future jump types (CMJ, squat) will be sibling modules to `dropjump/` with their own cli.py
+ - Single CLI group with subcommands for different analysis types
+
+ **CLI Architecture:**
+
+ - `src/kinemotion/cli.py` (20 lines): Main CLI group + command registration
+ - `src/kinemotion/dropjump/cli.py` (358 lines): Complete dropjump-analyze command
+ - Commands registered using Click's `cli.add_command()` pattern
+ - Modular design allows easy addition of new jump type analysis commands

  ### Analysis Pipeline

@@ -123,7 +139,7 @@ docs/
  - **Configurable thresholds**: CLI flags allow tuning for different video qualities and athletes
  - **Calibrated jump height**: Position-based measurement with drop height calibration for accuracy
    - Optional `--drop-height` parameter uses known drop box height to calibrate measurements
-   - Achieves ~88% accuracy (vs 71% with kinematic-only method)
+   - **⚠️ Accuracy claim unvalidated** - theoretical benefit estimated, not empirically tested
    - Fallback to empirically-corrected kinematic formula when no calibration provided
  - **Aspect ratio preservation**: Output video ALWAYS matches source video dimensions
    - Handles SAR (Sample Aspect Ratio) metadata from mobile videos
@@ -162,6 +178,7 @@ The codebase enforces strict code quality standards using multiple tools:
  ### When Contributing Code

  Always run before committing:
+
  ```bash
  # Format code
  uv run black src/
@@ -177,17 +194,18 @@ uv run pytest
  ```

  Or run all checks at once:
+
  ```bash
  uv run ruff check && uv run mypy src/kinemotion && uv run pytest
  ```

  ## Critical Implementation Details

- ### Aspect Ratio Preservation & SAR Handling (dropjump/video_io.py)
+ ### Aspect Ratio Preservation & SAR Handling (core/video_io.py)

  **IMPORTANT**: The tool preserves the exact aspect ratio of the source video, including SAR (Sample Aspect Ratio) metadata. No dimensions are hardcoded.

- #### VideoProcessor (`dropjump/video_io.py:15-110`)
+ #### VideoProcessor (`core/video_io.py:15-110`)

  - Reads the **first actual frame** to get true encoded dimensions (not OpenCV properties)
  - Critical for mobile videos with rotation metadata
@@ -206,13 +224,14 @@ if ret:
  ```

  **Never do this:**
+
  ```python
  # Wrong - may return incorrect dimensions with rotated videos
  self.width = int(self.cap.get(cv2.CAP_PROP_FRAME_WIDTH))
  self.height = int(self.cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
  ```

- #### DebugOverlayRenderer (`dropjump/video_io.py:130-330`)
+ #### DebugOverlayRenderer (`dropjump/debug_overlay.py`)

  - Creates output video with **display dimensions** (respecting SAR)
  - Resizes frames from encoded dimensions to display dimensions if needed (INTER_LANCZOS4)
@@ -230,12 +249,14 @@ self.height = int(self.cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
  Instead of simple frame-to-frame differences, velocity is computed as the derivative of the smoothed position trajectory using Savitzky-Golay filter:

  **Advantages:**
+
  - **Smoother velocity curves**: Eliminates noise from frame-to-frame jitter
  - **More accurate threshold crossings**: Clean transitions without false positives
  - **Better interpolation**: Smoother velocity gradient for sub-frame precision
  - **Consistent with smoothing**: Uses same polynomial fit as position smoothing

  **Implementation:**
+
  ```python
  # OLD: Simple differences (noisy)
  velocities = np.abs(np.diff(foot_positions, prepend=foot_positions[0]))
@@ -245,6 +266,7 @@ velocities = savgol_filter(positions, window_length=5, polyorder=2, deriv=1, del
  ```

  **Key Function:**
+
  - `compute_velocity_from_derivative()`: Computes first derivative using Savitzky-Golay filter

  #### Sub-Frame Interpolation Algorithm
@@ -252,9 +274,11 @@ velocities = savgol_filter(positions, window_length=5, polyorder=2, deriv=1, del
  At 30fps, each frame represents 33.3ms. Contact events (landing, takeoff) rarely occur exactly at frame boundaries. Sub-frame interpolation estimates the exact moment between frames when velocity crosses the threshold.

  **Algorithm:**
+
  1. Calculate smooth velocity using derivative: `v = derivative(smooth_position)`
  2. Find frames where velocity crosses threshold (e.g., from 0.025 to 0.015, threshold 0.020)
  3. Use linear interpolation to find exact crossing point:
+
  ```python
  # If v[10] = 0.025 and v[11] = 0.015, threshold = 0.020
  t = (0.020 - 0.025) / (0.015 - 0.025) = 0.5
@@ -262,11 +286,13 @@ At 30fps, each frame represents 33.3ms. Contact events (landing, takeoff) rarely
  ```

  **Key Functions:**
+
  - `interpolate_threshold_crossing()`: Linear interpolation of velocity crossing
  - `find_interpolated_phase_transitions()`: Returns fractional frame indices for all phases

  **Accuracy Improvement:**
- ```
+
+ ```text
  30fps without interpolation: ±33ms (1 frame on each boundary)
  30fps with interpolation: ±10ms (sub-frame precision)
  60fps without interpolation: ±17ms
@@ -274,6 +300,7 @@ At 30fps, each frame represents 33.3ms. Contact events (landing, takeoff) rarely
  ```

  **Velocity Comparison:**
+
  ```python
  # Frame-to-frame differences: noisy, discontinuous jumps
  v_simple = [0.01, 0.03, 0.02, 0.04, 0.02, 0.01]  # Jittery
@@ -283,6 +310,7 @@ v_deriv = [0.015, 0.022, 0.025, 0.024, 0.018, 0.012]  # Smooth
  ```

  **Example:**
+
  ```python
  # Integer frames: contact from frame 49 to 53 (5 frames = 168ms at 30fps)
  # With derivative velocity: contact from 49.0 to 53.0 (4 frames = 135ms)
@@ -298,12 +326,14 @@ v_deriv = [0.015, 0.022, 0.025, 0.024, 0.018, 0.012]  # Smooth
  Acceleration (second derivative) reveals characteristic patterns at contact events:

  **Physical Patterns:**
+
  - **Landing impact**: Large acceleration spike as feet decelerate on impact
  - **Takeoff**: Acceleration change as body transitions from static to upward motion
  - **In flight**: Constant acceleration (gravity ≈ -9.81 m/s²)
  - **On ground**: Near-zero acceleration (stationary position)

  **Implementation:**
+
  ```python
  # Compute acceleration using Savitzky-Golay second derivative
  acceleration = savgol_filter(positions, window=5, polyorder=2, deriv=2, delta=1.0)
@@ -317,6 +347,7 @@ takeoff_frame = np.argmax(accel_change[search_window])
  ```

  **Key Functions:**
+
  - `compute_acceleration_from_derivative()`: Computes second derivative using Savitzky-Golay
  - `refine_transition_with_curvature()`: Searches for acceleration patterns near transitions
  - `find_interpolated_phase_transitions_with_curvature()`: Combines velocity + curvature
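The second-derivative pattern described in this hunk can be sketched with `scipy.signal.savgol_filter`. The helper below is an illustrative stand-in for `compute_acceleration_from_derivative()`, not the package's actual implementation: a flat (stationary) segment yields near-zero acceleration, while a parabolic fall yields a constant second derivative.

```python
import numpy as np
from scipy.signal import savgol_filter


def acceleration_from_positions(positions, window_length=5, polyorder=2):
    """Second derivative of a position trajectory via Savitzky-Golay (sketch)."""
    return savgol_filter(
        positions, window_length=window_length, polyorder=polyorder, deriv=2, delta=1.0
    )


# Synthetic trajectory: stationary at 0.5, then a parabolic fall after frame 20
t = np.arange(40, dtype=float)
positions = np.where(t < 20, 0.5, 0.5 + 0.01 * (t - 20) ** 2)

accel = acceleration_from_positions(positions)
# Near-zero acceleration while stationary; constant ~0.02 units/frame² in the fall
```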
@@ -330,11 +361,13 @@ Curvature analysis refines velocity-based estimates through blending:
  3. **Blending**: 70% curvature-based + 30% velocity-based

  **Why Blending?**
+
  - Velocity is reliable for coarse timing
  - Curvature provides fine detail but can be noisy at boundaries
  - Blending prevents large deviations while incorporating physical insights

  **Algorithm:**
+
  ```python
  # 1. Get velocity-based estimate
  velocity_estimate = 49.0  # from interpolation
@@ -349,6 +382,7 @@ blend = 0.7 * 47.2 + 0.3 * 49.0  # = 47.74
  ```

  **Accuracy Improvement:**
+
  ```python
  # Example: Landing detection
  # Velocity only: frame 49.0 (when velocity drops below threshold)
@@ -357,6 +391,7 @@ blend = 0.7 * 47.2 + 0.3 * 49.0  # = 47.74
  ```

  **Optional Feature:**
+
  - Enabled by default (`--use-curvature`, default: True)
  - Can be disabled with `--no-curvature` flag for pure velocity-based detection
  - Negligible performance impact (reuses smoothed trajectory)
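The two estimators in the hunks above — the linear threshold crossing and the 70/30 blend — can be sketched in a few lines. These helpers are illustrative only; the package's real functions are `interpolate_threshold_crossing()` and the curvature-blending logic described in CLAUDE.md.

```python
def interpolate_crossing(v_before, v_after, threshold):
    """Fractional offset in [0, 1] where velocity crosses the threshold."""
    return (threshold - v_before) / (v_after - v_before)


def blend_estimates(curvature_frame, velocity_frame, curvature_weight=0.7):
    """Blend curvature- and velocity-based frame estimates (70/30 by default)."""
    return curvature_weight * curvature_frame + (1.0 - curvature_weight) * velocity_frame


# Worked numbers from the notes: v[10] = 0.025, v[11] = 0.015, threshold 0.020
offset = interpolate_crossing(0.025, 0.015, 0.020)  # 0.5 → crossing at frame 10.5
blended = blend_estimates(47.2, 49.0)               # 0.7 * 47.2 + 0.3 * 49.0 = 47.74
```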
@@ -374,12 +409,13 @@ Always convert to Python `int()` in `to_dict()` method:
  ```

  **Never do this:**
+
  ```python
  # Wrong - will fail with "Object of type int64 is not JSON serializable"
  "contact_start_frame": self.contact_start_frame
  ```

- ### Video Codec Handling (dropjump/video_io.py:78-94)
+ ### Video Codec Handling (dropjump/debug_overlay.py)

  - Primary codec: H.264 (avc1) - better quality, smaller file size
  - Fallback codec: MPEG-4 (mp4v) - broader compatibility
@@ -393,6 +429,7 @@ OpenCV and NumPy use different dimension ordering:
  - **OpenCV VideoWriter size**: `(width, height)` tuple

  Example:
+
  ```python
  frame.shape  # (1080, 1920, 3) - height first
  cv2.VideoWriter(..., (1920, 1080))  # width first
407
444
  1. Update `DropJumpMetrics` class in `dropjump/kinematics.py:10-19`
408
445
  2. Add calculation logic in `calculate_drop_jump_metrics()` function
409
446
  3. Update `to_dict()` method for JSON serialization (remember to convert NumPy types to Python types)
410
- 4. Optionally add visualization in `DebugOverlayRenderer.render_frame()` in `dropjump/video_io.py:96`
447
+ 4. Optionally add visualization in `DebugOverlayRenderer.render_frame()` in `dropjump/debug_overlay.py`
411
448
  5. Add tests in `tests/test_kinematics.py`
412
449
 
413
450
  ### Modifying Contact Detection Logic
414
451
 
415
452
  Edit `detect_ground_contact()` in `dropjump/analysis.py:14`. Key parameters:
453
+
416
454
  - `velocity_threshold`: Tune for different surface/athlete combinations (default: 0.02)
417
455
  - `min_contact_frames`: Adjust for frame rate and contact duration expectations (default: 3)
418
456
  - `visibility_threshold`: Minimum landmark visibility score (default: 0.5)
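The interplay of `velocity_threshold` and `min_contact_frames` can be sketched as follows. This is a simplified stand-in for `detect_ground_contact()` (the real function also checks landmark visibility): frames below the velocity threshold are contact candidates, and runs shorter than `min_contact_frames` are rejected as noise.

```python
import numpy as np


def detect_contact(velocities, velocity_threshold=0.02, min_contact_frames=3):
    """Boolean mask of ground-contact frames (illustrative sketch)."""
    slow = np.abs(np.asarray(velocities, dtype=float)) < velocity_threshold
    contact = np.zeros(len(slow), dtype=bool)
    i = 0
    while i < len(slow):
        if slow[i]:
            j = i
            while j < len(slow) and slow[j]:
                j += 1
            if j - i >= min_contact_frames:  # temporal filter: drop short runs
                contact[i:j] = True
            i = j
        else:
            i += 1
    return contact


v = [0.05, 0.01, 0.05, 0.01, 0.01, 0.01, 0.01, 0.05]
mask = detect_contact(v)
# The single low-velocity frame at index 1 is rejected; the 4-frame run (3-6) is kept
```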
@@ -420,6 +458,7 @@ Edit `detect_ground_contact()` in `dropjump/analysis.py:14`. Key parameters:
  ### Adjusting Smoothing

  Modify `smooth_landmarks()` in `core/smoothing.py:9`:
+
  - `window_length`: Controls smoothing strength (must be odd, default: 5)
  - `polyorder`: Polynomial order for Savitzky-Golay filter (default: 2)
@@ -427,11 +466,10 @@ Modify `smooth_landmarks()` in `core/smoothing.py:9`:
  **IMPORTANT**: See `docs/PARAMETERS.md` for comprehensive guide on all CLI parameters.

- Quick reference:
- - **use-com**: Use center of mass tracking instead of feet (↑ accuracy by 3-5%)
- - **adaptive-threshold**: Auto-calibrate velocity threshold from baseline (↑ accuracy by 2-3%)
+ Quick reference for `dropjump-analyze`:
+
  - **smoothing-window**: Trajectory smoothness (↑ for noisy video)
- - **velocity-threshold**: Contact sensitivity (↓ to detect brief contacts) - ignored if adaptive-threshold enabled
+ - **velocity-threshold**: Contact sensitivity (↓ to detect brief contacts)
  - **min-contact-frames**: Temporal filter (↑ to remove false contacts)
  - **visibility-threshold**: Landmark confidence (↓ for occluded landmarks)
  - **detection-confidence**: Pose detection strictness (MediaPipe)
@@ -439,7 +477,10 @@ Quick reference:
  - **drop-height**: Drop box height in meters for calibration (e.g., 0.40 for 40cm)
  - **use-curvature**: Enable trajectory curvature analysis (default: enabled)

+ **Note**: Drop jump analysis always uses foot-based tracking with fixed velocity thresholds because typical drop jump videos are ~3 seconds long without a stationary baseline period. The `--use-com` and `--adaptive-threshold` options (available in `core/` modules) require longer videos (~5+ seconds) with 3 seconds of standing baseline, making them suitable for future jump types like CMJ (countermovement jump) but not drop jumps.
+
  The detailed guide includes:
+
  - How each parameter works internally
  - Frame rate considerations
  - Scenario-based recommended settings
@@ -449,6 +490,7 @@ The detailed guide includes:
  ### Working with Different Video Formats

  The tool handles various video formats and aspect ratios:
+
  - 16:9 landscape (1920x1080)
  - 4:3 standard (640x480)
  - 9:16 portrait (1080x1920)
@@ -484,6 +526,7 @@ uv run pytest -v
  ### Code Quality

  All code passes:
+
  - ✅ **Type checking**: Full mypy strict mode compliance
  - ✅ **Linting**: ruff checks with comprehensive rule sets
  - ✅ **Tests**: 25/25 tests passing
@@ -499,6 +542,7 @@ All code passes:
  ### Video Dimension Issues

  If output video has wrong aspect ratio:
+
  1. Check `VideoProcessor` is reading first frame correctly
  2. Verify `DebugOverlayRenderer` receives correct width/height from `VideoProcessor`
  3. Check that `write_frame()` validation is enabled (should raise error if dimensions mismatch)
@@ -507,6 +551,7 @@ If output video has wrong aspect ratio:
  ### JSON Serialization Errors

  If you see "Object of type X is not JSON serializable":
+
  1. Check `kinematics.py` `to_dict()` method
  2. Ensure all NumPy types are converted to Python types with `int()` or `float()`
  3. Run `tests/test_kinematics.py::test_metrics_to_dict` to verify
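The NumPy-to-Python conversion that step 2 asks for can be sketched generically. The `to_jsonable` helper below is illustrative, not the package's `to_dict()` method: `json.dumps` rejects `np.int64`, so scalars are coerced to plain `int`/`float` first.

```python
import json

import numpy as np


def to_jsonable(value):
    """Convert NumPy scalar types to plain Python before json.dumps (sketch)."""
    if isinstance(value, np.integer):
        return int(value)
    if isinstance(value, np.floating):
        return float(value)
    return value


metrics = {"contact_start_frame": np.int64(49), "jump_height_m": np.float64(0.31)}

# json.dumps(metrics) would raise: Object of type int64 is not JSON serializable
payload = json.dumps({k: to_jsonable(v) for k, v in metrics.items()})
```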
@@ -514,6 +559,7 @@ If you see "Object of type X is not JSON serializable":
  ### Video Codec Issues

  If output video won't play:
+
  1. Try different output format: `.avi` instead of `.mp4`
  2. Check OpenCV codec support: `cv2.getBuildInformation()`
  3. DebugOverlayRenderer will fallback from H.264 to MPEG-4 automatically
@@ -521,6 +567,7 @@ If output video won't play:
  ### Type Checking Issues

  If mypy reports errors:
+
  1. Ensure all function signatures have complete type annotations (parameters and return types)
  2. For numpy types, use explicit casts: `int()`, `float()` when converting to Python types
  3. For third-party libraries without stubs (cv2, mediapipe, scipy), use `# type: ignore` comments sparingly
@@ -565,38 +612,12 @@ uv run kinemotion dropjump-analyze video.mp4 \
  uv run kinemotion dropjump-analyze jump.mp4 \
    --output debug.mp4 \
    --json-output metrics.json
-
- # Use center of mass tracking for improved accuracy (3-5% gain)
- uv run kinemotion dropjump-analyze video.mp4 \
-   --use-com \
-   --output debug.mp4 \
-   --json-output metrics.json
-
- # Full analysis with CoM tracking and calibration
- uv run kinemotion dropjump-analyze video.mp4 \
-   --use-com \
-   --drop-height 0.40 \
-   --output debug_com.mp4 \
-   --json-output metrics.json
-
- # Adaptive threshold for auto-calibration (2-3% accuracy gain)
- uv run kinemotion dropjump-analyze video.mp4 \
-   --adaptive-threshold \
-   --output debug.mp4 \
-   --json-output metrics.json
-
- # Maximum accuracy: CoM + adaptive threshold + calibration (~93-96%)
- uv run kinemotion dropjump-analyze video.mp4 \
-   --adaptive-threshold \
-   --use-com \
-   --drop-height 0.40 \
-   --output debug_max.mp4 \
-   --json-output metrics.json
  ```

  ## MCP Server Configuration

  The repository includes MCP server configuration in `.mcp.json`:
+
  - **web-search**: DuckDuckGo search via @dannyboy2042/freebird-mcp
  - **sequential**: Sequential thinking via @smithery-ai/server-sequential-thinking
  - **context7**: Library documentation via @upstash/context7-mcp
@@ -1,6 +1,6 @@
  Metadata-Version: 2.4
  Name: kinemotion
- Version: 0.2.0
+ Version: 0.4.0
  Summary: Video-based kinematic analysis for athletic performance
  Project-URL: Homepage, https://github.com/feniix/kinemotion
  Project-URL: Repository, https://github.com/feniix/kinemotion
@@ -33,9 +33,7 @@ A video-based kinematic analysis tool for athletic performance. Analyzes side-vi
  ## Features

  - **Automatic pose tracking** using MediaPipe Pose landmarks
- - **Center of mass (CoM) tracking** - biomechanical CoM estimation for 3-5% accuracy improvement
- - **Adaptive velocity thresholding** - auto-calibrates from video baseline for 2-3% additional accuracy
- - **Ground contact detection** based on velocity and position (feet or CoM)
+ - **Ground contact detection** based on foot velocity and position
  - **Derivative-based velocity** - smooth velocity calculation from position trajectory
  - **Trajectory curvature analysis** - acceleration patterns for refined event detection
  - **Sub-frame interpolation** - precise timing beyond frame boundaries for improved accuracy
@@ -44,13 +42,25 @@ A video-based kinematic analysis tool for athletic performance. Analyzes side-vi
  - Ground contact time (ms)
  - Flight time (ms)
  - Jump height (m) - with optional calibration using drop box height
- - **Calibrated measurements** - use known drop height for ~88% accuracy (vs 71% uncalibrated)
-   - With CoM tracking: potential for 91-93% accuracy
-   - With adaptive thresholding + CoM: potential for 93-96% accuracy
+ - **Calibrated measurements** - use known drop height for theoretically improved accuracy (⚠️ accuracy claims unvalidated)
  - **JSON output** for easy integration with other tools
  - **Optional debug video** with visual overlays showing contact states and landmarks
  - **Configurable parameters** for smoothing, thresholds, and detection

+ **Note**: Drop jump analysis uses foot-based tracking with fixed velocity thresholds. Center of mass (CoM) tracking and adaptive thresholding (available in `core/` modules) require longer videos (~5+ seconds) with a 3-second standing baseline, making them unsuitable for typical drop jump videos (~3 seconds). These features may be available in future jump types like CMJ (countermovement jump).
+
+ ## Validation Status
+
+ ⚠️ **IMPORTANT**: This tool's accuracy has **not been validated** against gold standard measurements (force plates, 3D motion capture). All accuracy claims and improvement estimates are theoretical and based on algorithmic considerations, not empirical testing.
+
+ The tool provides consistent measurements and may be useful for:
+
+ - Tracking relative changes in an individual athlete over time
+ - Comparing similar jumps under controlled conditions
+ - Exploratory analysis and research
+
+ For clinical, research, or performance assessment requiring validated accuracy, this tool should be compared against validated measurement systems before use.
+
  ## Setup

  ### Prerequisites
@@ -67,13 +77,13 @@ asdf plugin add python
  asdf plugin add uv
  ```

- 2. **Install versions specified in `.tool-versions`**:
+ 1. **Install versions specified in `.tool-versions`**:

  ```bash
  asdf install
  ```

- 3. **Install project dependencies using uv**:
+ 1. **Install project dependencies using uv**:

  ```bash
  uv sync
@@ -120,46 +130,11 @@ kinemotion dropjump-analyze drop-jump.mp4 \
    --output debug.mp4
  ```

- ### Center of Mass Tracking (Improved Accuracy)
-
- Use CoM tracking for 3-5% accuracy improvement:
-
- ```bash
- # Basic CoM tracking
- kinemotion dropjump-analyze video.mp4 --use-com
-
- # CoM tracking with calibration for maximum accuracy
- kinemotion dropjump-analyze drop-jump.mp4 \
-   --use-com \
-   --drop-height 0.40 \
-   --output debug_com.mp4 \
-   --json-output metrics.json
- ```
-
- ### Adaptive Thresholding (Auto-Calibration)
-
- Auto-calibrate velocity threshold from video baseline for 2-3% accuracy improvement:
-
- ```bash
- # Basic adaptive thresholding
- kinemotion dropjump-analyze video.mp4 --adaptive-threshold
-
- # Combined with CoM for maximum accuracy
- kinemotion dropjump-analyze video.mp4 \
-   --adaptive-threshold \
-   --use-com \
-   --drop-height 0.40 \
-   --output debug.mp4 \
-   --json-output metrics.json
- ```
-
  ### Full Example (Maximum Accuracy)

  ```bash
- # With all accuracy improvements enabled (~93-96% accuracy)
+ # With all accuracy improvements enabled
  kinemotion dropjump-analyze jump.mp4 \
-   --adaptive-threshold \
-   --use-com \
    --outlier-rejection \
    --drop-height 0.40 \
    --output debug.mp4 \
@@ -169,8 +144,6 @@ kinemotion dropjump-analyze jump.mp4 \

  # Alternative: With experimental bilateral filter
  kinemotion dropjump-analyze jump.mp4 \
-   --adaptive-threshold \
-   --use-com \
    --outlier-rejection \
    --bilateral-filter \
    --drop-height 0.40 \
@@ -183,6 +156,7 @@ kinemotion dropjump-analyze jump.mp4 \
  > **📖 For detailed explanations of all parameters, see [docs/PARAMETERS.md](docs/PARAMETERS.md)**
  >
  > This section provides a quick reference. The full guide includes:
+ >
  > - How each parameter works internally
  > - When and why to adjust them
  > - Scenario-based recommendations
@@ -270,38 +244,10 @@ kinemotion dropjump-analyze jump.mp4 \
  - `--drop-height <float>` (optional)
    - Height of drop box/platform in meters (e.g., 0.40 for 40cm)
    - Enables calibrated jump height measurement using known drop height
-   - Improves accuracy from ~71% to ~88%
+   - Theoretically improves accuracy (⚠️ unvalidated - requires empirical validation)
    - Only applicable for drop jumps (box → drop → landing → jump)
    - **Tip**: Measure your box height accurately for best results

- ### Tracking Method
-
- - `--use-com / --use-feet` (default: --use-feet)
-   - Choose between center of mass (CoM) or foot-based tracking
-   - **CoM tracking** (`--use-com`): Uses biomechanical CoM estimation with Dempster's body segment parameters
-     - Head: 8%, Trunk: 50%, Thighs: 20%, Legs: 10%, Feet: 3% of body mass
-     - Tracks true body movement instead of foot position
-     - Reduces error from foot dorsiflexion/plantarflexion during flight
-     - **Accuracy improvement**: +3-5% over foot-based tracking
-   - **Foot tracking** (`--use-feet`): Traditional method using average ankle/heel positions
-     - Faster, simpler, well-tested baseline method
-   - **Tip**: Use `--use-com` for maximum accuracy, especially for drop jumps
-
- ### Velocity Threshold Mode
-
- - `--adaptive-threshold / --fixed-threshold` (default: --fixed-threshold)
-   - Choose between adaptive or fixed velocity threshold for contact detection
-   - **Adaptive threshold** (`--adaptive-threshold`): Auto-calibrates from video baseline
-     - Analyzes first 3 seconds of video (assumed relatively stationary)
-     - Computes noise floor as 95th percentile of baseline velocity
-     - Sets threshold as 1.5× noise floor (bounded: 0.005-0.05)
-     - Adapts to camera distance, lighting, frame rate, and compression artifacts
-     - **Accuracy improvement**: +2-3% by eliminating manual tuning
-   - **Fixed threshold** (`--fixed-threshold`): Uses `--velocity-threshold` value (default: 0.02)
-     - Consistent, predictable behavior
-     - Requires manual tuning for optimal results
-   - **Tip**: Use `--adaptive-threshold` for varying video conditions or when unsure of optimal threshold
-
  ### Trajectory Analysis

  - `--use-curvature / --no-curvature` (default: --use-curvature)
@@ -337,6 +283,7 @@ kinemotion dropjump-analyze jump.mp4 \
  ```

  **Fields**:
+
  - `jump_height_m`: Primary jump height measurement (calibrated if --drop-height provided, otherwise corrected kinematic)
  - `jump_height_kinematic_m`: Kinematic estimate from flight time: h = (g × t²) / 8
  - `jump_height_trajectory_normalized`: Position-based measurement in normalized coordinates (0-1 range)
@@ -348,6 +295,7 @@ kinemotion dropjump-analyze jump.mp4 \
  ### Debug Video

  The debug video includes:
+
  - **Green circle**: Average foot position when on ground
  - **Red circle**: Average foot position when in air
  - **Yellow circles**: Individual foot landmarks (ankles, heels)
@@ -363,6 +311,7 @@ The debug video includes:
  **Symptoms**: Erratic landmark positions, missing detections, incorrect contact states

  **Solutions**:
+
  1. **Check video quality**: Ensure the athlete is clearly visible in profile view
  2. **Increase smoothing**: Use `--smoothing-window 7` or higher
  3. **Adjust detection confidence**: Try `--detection-confidence 0.6` or `--tracking-confidence 0.6`
@@ -373,6 +322,7 @@ The debug video includes:
  **Symptoms**: "No frames processed" error or all null landmarks

  **Solutions**:
+
  1. **Verify video format**: OpenCV must be able to read the video
  2. **Check framing**: Ensure full body is visible in side view
  3. **Lower confidence thresholds**: Try `--detection-confidence 0.3 --tracking-confidence 0.3`
@@ -383,6 +333,7 @@ The debug video includes:
  **Symptoms**: Wrong ground contact times, flight phases not detected

  **Solutions**:
+
  1. **Generate debug video**: Visualize contact states to diagnose the issue
  2. **Adjust velocity threshold**:
     - If missing contacts: decrease to `--velocity-threshold 0.01`
@@ -395,8 +346,9 @@ The debug video includes:
  **Symptoms**: Unrealistic jump height values

  **Solutions**:
+
  1. **Use calibration**: For drop jumps, add `--drop-height` parameter with box height in meters (e.g., `--drop-height 0.40`)
-    - This improves accuracy from ~71% to ~88%
+    - Expected to improve accuracy (⚠️ unvalidated)
  2. **Verify flight time detection**: Check `flight_start_frame` and `flight_end_frame` in JSON
  3. **Compare measurements**: JSON output includes both `jump_height_m` (primary) and `jump_height_kinematic_m` (kinematic-only)
  4. **Check for drop jump detection**: If doing a drop jump, ensure first phase is elevated enough (>5% of frame height)
@@ -406,18 +358,15 @@ The debug video includes:
  **Symptoms**: Cannot write debug video or corrupted output

  **Solutions**:
+
  1. **Install additional codecs**: Ensure OpenCV has proper video codec support
  2. **Try different output format**: Use `.avi` extension instead of `.mp4`
  3. **Check output path**: Ensure write permissions for output directory

  ## How It Works

- 1. **Pose Tracking**: MediaPipe extracts 2D pose landmarks (13 points: feet, ankles, knees, hips, shoulders, nose) from each frame
- 2. **Position Calculation**: Two methods available:
-    - **Foot-based** (default): Averages ankle, heel, and foot index positions
-    - **CoM-based** (--use-com): Biomechanical center of mass using Dempster's body segment parameters
-      - Head: 8%, Trunk: 50%, Thighs: 20%, Legs: 10%, Feet: 3% of body mass
-      - Weighted average reduces error from foot movement artifacts
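For reference, the removed CoM option amounted to a mass-weighted average of segment positions. A minimal sketch using the fractions listed above (they sum to 0.91 because arms are omitted, so the average renormalizes by the total weight; names are illustrative, not the package's API):

```python
# Segment mass fractions as listed above (arms omitted, hence sum = 0.91)
SEGMENT_WEIGHTS = {
    "head": 0.08,
    "trunk": 0.50,
    "thighs": 0.20,
    "legs": 0.10,
    "feet": 0.03,
}

def center_of_mass_y(segment_y: dict[str, float]) -> float:
    """Vertical CoM as a mass-weighted average of segment vertical positions."""
    total = sum(SEGMENT_WEIGHTS[name] for name in segment_y)
    return sum(SEGMENT_WEIGHTS[name] * y for name, y in segment_y.items()) / total

# If every segment sits at the same height, the CoM is at that height
com = center_of_mass_y({k: 0.5 for k in SEGMENT_WEIGHTS})
```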
+ 1. **Pose Tracking**: MediaPipe extracts 2D pose landmarks (foot points: ankles, heels, foot indices) from each frame
+ 2. **Position Calculation**: Averages ankle, heel, and foot index positions to determine foot location
  3. **Smoothing**: Savitzky-Golay filter reduces tracking jitter while preserving motion dynamics
  4. **Contact Detection**: Analyzes vertical position velocity to identify ground contact vs. flight phases
  5. **Phase Identification**: Finds continuous ground contact and flight periods
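The contact-detection step (4) can be sketched numpy-only. The per-frame threshold units are an assumption here, and the function is illustrative rather than the package's internal API:

```python
import numpy as np

def detect_ground_contact(y: np.ndarray, threshold: float = 0.02) -> np.ndarray:
    """Classify frames as ground contact (True) or flight (False).

    y: smoothed vertical foot position in normalized image coordinates (0-1).
    threshold: per-frame velocity below which the foot counts as stationary
    (the CLI's --velocity-threshold plays this role).
    """
    velocity = np.gradient(y)  # central-difference per-frame velocity
    return np.abs(velocity) < threshold

# Synthetic trace: on the ground, a brief flight arc, then grounded again
y = np.full(30, 0.8)
y[10:20] = 0.8 - 0.1 * np.sin(np.linspace(0.0, np.pi, 10))
contact = detect_ground_contact(y)
```

Note that vertical velocity passes through zero at the flight apex, so isolated mid-flight frames can be misclassified as contact; step 5's search for continuous phases is what makes the final contact/flight segmentation robust.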
@@ -438,13 +387,14 @@ The debug video includes:
  - Ground contact time = contact phase duration (using fractional frames)
  - Flight time = flight phase duration (using fractional frames)
  - Jump height = calibrated position-based measurement (if --drop-height provided)
-   - Fallback: corrected kinematic estimate (g × t²) / 8 × 1.35
+   - Fallback: kinematic estimate (g × t²) / 8 with optional empirical correction factor (⚠️ unvalidated)
 
  ## Development

  ### Code Quality Standards

  This project enforces strict code quality standards:
+
  - **Type safety**: Full mypy strict mode compliance with complete type annotations
  - **Linting**: Comprehensive ruff checks (pycodestyle, pyflakes, isort, pep8-naming, etc.)
  - **Formatting**: Black code style
@@ -492,7 +442,7 @@ See [CLAUDE.md](CLAUDE.md) for detailed development guidelines.
  ## Limitations

  - **2D Analysis**: Only analyzes motion in the camera's view plane
- - **Calibration accuracy**: With drop height calibration, achieves ~88% accuracy; without calibration ~71% accuracy
+ - **Validation Status**: ⚠️ Accuracy has not been validated against gold-standard measurements (force plates, 3D motion capture)
  - **Side View Required**: Must film from the side to accurately track vertical motion
  - **Single Athlete**: Designed for analyzing one athlete at a time
  - **Timing precision**: