kinemotion 0.1.0__tar.gz → 0.4.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.

Potentially problematic release.



Files changed (37)
  1. {kinemotion-0.1.0 → kinemotion-0.4.0}/.gitignore +2 -0
  2. {kinemotion-0.1.0 → kinemotion-0.4.0}/CLAUDE.md +130 -51
  3. {kinemotion-0.1.0 → kinemotion-0.4.0}/PKG-INFO +119 -33
  4. {kinemotion-0.1.0 → kinemotion-0.4.0}/README.md +114 -28
  5. kinemotion-0.4.0/docs/ERRORS_FINDINGS.md +260 -0
  6. kinemotion-0.4.0/docs/FRAMERATE.md +747 -0
  7. kinemotion-0.4.0/docs/IMPLEMENTATION_PLAN.md +795 -0
  8. kinemotion-0.4.0/docs/IMU_METADATA_PRESERVATION.md +124 -0
  9. kinemotion-0.4.0/docs/PARAMETERS.md +1313 -0
  10. kinemotion-0.4.0/docs/VALIDATION_PLAN.md +706 -0
  11. {kinemotion-0.1.0 → kinemotion-0.4.0}/pyproject.toml +7 -7
  12. kinemotion-0.4.0/src/kinemotion/__init__.py +3 -0
  13. kinemotion-0.4.0/src/kinemotion/cli.py +20 -0
  14. kinemotion-0.4.0/src/kinemotion/core/__init__.py +40 -0
  15. kinemotion-0.4.0/src/kinemotion/core/filtering.py +345 -0
  16. kinemotion-0.4.0/src/kinemotion/core/pose.py +221 -0
  17. {kinemotion-0.1.0/src/dropjump → kinemotion-0.4.0/src/kinemotion/core}/smoothing.py +144 -0
  18. kinemotion-0.4.0/src/kinemotion/core/video_io.py +122 -0
  19. kinemotion-0.4.0/src/kinemotion/dropjump/__init__.py +29 -0
  20. kinemotion-0.1.0/src/dropjump/contact_detection.py → kinemotion-0.4.0/src/kinemotion/dropjump/analysis.py +95 -4
  21. {kinemotion-0.1.0/src → kinemotion-0.4.0/src/kinemotion}/dropjump/cli.py +98 -31
  22. kinemotion-0.1.0/src/dropjump/video_io.py → kinemotion-0.4.0/src/kinemotion/dropjump/debug_overlay.py +49 -140
  23. {kinemotion-0.1.0/src → kinemotion-0.4.0/src/kinemotion}/dropjump/kinematics.py +27 -8
  24. kinemotion-0.4.0/tests/test_adaptive_threshold.py +193 -0
  25. {kinemotion-0.1.0 → kinemotion-0.4.0}/tests/test_aspect_ratio.py +2 -1
  26. kinemotion-0.4.0/tests/test_com_estimation.py +165 -0
  27. {kinemotion-0.1.0 → kinemotion-0.4.0}/tests/test_contact_detection.py +1 -1
  28. kinemotion-0.4.0/tests/test_filtering.py +391 -0
  29. {kinemotion-0.1.0 → kinemotion-0.4.0}/tests/test_kinematics.py +2 -2
  30. kinemotion-0.4.0/tests/test_polyorder.py +149 -0
  31. kinemotion-0.1.0/docs/PARAMETERS.md +0 -622
  32. kinemotion-0.1.0/src/dropjump/__init__.py +0 -3
  33. kinemotion-0.1.0/src/dropjump/pose_tracker.py +0 -74
  34. {kinemotion-0.1.0 → kinemotion-0.4.0}/.tool-versions +0 -0
  35. {kinemotion-0.1.0 → kinemotion-0.4.0}/LICENSE +0 -0
  36. {kinemotion-0.1.0 → kinemotion-0.4.0}/examples/programmatic_usage.py +0 -0
  37. {kinemotion-0.1.0 → kinemotion-0.4.0}/tests/__init__.py +0 -0
@@ -60,3 +60,5 @@ Thumbs.db
  *.mp4
  *.jpeg
  *.jpg
+
+ .claude/settings.local.json*
@@ -4,19 +4,21 @@ This file provides guidance to Claude Code (claude.ai/code) when working with co

  ## Repository Purpose

- Kinemetry: Video-based kinematic analysis tool for athletic performance. Analyzes drop-jump videos to estimate ground contact time, flight time, and jump height by tracking athlete's feet using MediaPipe pose tracking and advanced kinematics.
+ Kinemotion: Video-based kinematic analysis tool for athletic performance. Analyzes drop-jump videos to estimate ground contact time, flight time, and jump height by tracking athlete's movement using MediaPipe pose tracking and advanced kinematics. Supports both foot-based tracking (traditional) and center of mass (CoM) tracking for improved accuracy.

  ## Project Setup

  ### Dependencies

  Managed with `uv` and `asdf`:
+
  - Python version: 3.12.7 (specified in `.tool-versions`)
  - **Important**: MediaPipe requires Python 3.12 or earlier (no 3.13 support yet)
  - Install dependencies: `uv sync`
- - Run CLI: `kinemetry dropjump-analyze <video.mp4>`
+ - Run CLI: `kinemotion dropjump-analyze <video.mp4>`

  **Production dependencies:**
+
  - click: CLI framework
  - opencv-python: Video processing
  - mediapipe: Pose detection and tracking
@@ -24,6 +26,7 @@ Managed with `uv` and `asdf`:
  - scipy: Signal processing (Savitzky-Golay filter)

  **Development dependencies:**
+
  - pytest: Testing framework
  - black: Code formatting
  - ruff: Fast Python linter
@@ -31,64 +34,103 @@ Managed with `uv` and `asdf`:

  ### Development Commands

- - **Run tool**: `uv run kinemetry dropjump-analyze <video_path>`
+ - **Run tool**: `uv run kinemotion dropjump-analyze <video_path>`
  - **Install/sync deps**: `uv sync`
  - **Run tests**: `uv run pytest`
  - **Run specific test**: `uv run pytest tests/test_aspect_ratio.py -v`
  - **Format code**: `uv run black src/`
  - **Lint code**: `uv run ruff check`
  - **Auto-fix lint issues**: `uv run ruff check --fix`
- - **Type check**: `uv run mypy src/dropjump`
- - **Run all checks**: `uv run ruff check && uv run mypy src/dropjump && uv run pytest`
+ - **Type check**: `uv run mypy src/kinemotion`
+ - **Run all checks**: `uv run ruff check && uv run mypy src/kinemotion && uv run pytest`

  ## Architecture

  ### Module Structure

- ```
- src/dropjump/
- ├── cli.py                # Click-based CLI entry point
- ├── pose_tracker.py       # MediaPipe Pose integration
- ├── smoothing.py          # Savitzky-Golay landmark smoothing
- ├── contact_detection.py  # Ground contact state detection
- ├── kinematics.py         # Metric calculations (contact time, flight time, jump height)
- └── video_io.py           # Video processing and debug overlay rendering
+ ```text
+ src/kinemotion/
+ ├── __init__.py
+ ├── cli.py                # Main CLI entry point (registers subcommands)
+ ├── core/                 # Shared functionality across all jump types
+ │   ├── __init__.py
+ │   ├── pose.py           # MediaPipe Pose integration + CoM
+ │   ├── smoothing.py      # Savitzky-Golay landmark smoothing
+ │   ├── filtering.py      # Outlier rejection + bilateral filtering
+ │   └── video_io.py       # Video processing (VideoProcessor class)
+ └── dropjump/             # Drop jump specific analysis
+     ├── __init__.py
+     ├── cli.py            # Drop jump CLI command (dropjump-analyze)
+     ├── analysis.py       # Ground contact state detection
+     ├── kinematics.py     # Drop jump metrics calculations
+     └── debug_overlay.py  # Debug video overlay rendering

  tests/
+ ├── test_adaptive_threshold.py  # Adaptive threshold tests
+ ├── test_aspect_ratio.py        # Aspect ratio preservation tests
+ ├── test_com_estimation.py      # Center of mass estimation tests
  ├── test_contact_detection.py   # Contact detection unit tests
+ ├── test_filtering.py           # Advanced filtering tests
  ├── test_kinematics.py          # Metrics calculation tests
- └── test_aspect_ratio.py        # Aspect ratio preservation tests
+ └── test_polyorder.py           # Polynomial order tests

  docs/
- └── PARAMETERS.md               # Comprehensive guide to all CLI parameters
+ ├── PARAMETERS.md               # Comprehensive guide to all CLI parameters
+ └── IMPLEMENTATION_PLAN.md      # Implementation plan and fix guide
  ```

+ **Design Rationale:**
+
+ - `core/` contains shared code reusable across different jump types (CMJ, squat jumps, etc.)
+ - `dropjump/` contains drop jump specific logic, metrics, and CLI command
+ - Each jump type module contains its own CLI command definition
+ - Main `cli.py` is just an entry point that registers subcommands from each module
+ - Future jump types (CMJ, squat) will be sibling modules to `dropjump/` with their own cli.py
+ - Single CLI group with subcommands for different analysis types
+
+ **CLI Architecture:**
+
+ - `src/kinemotion/cli.py` (20 lines): Main CLI group + command registration
+ - `src/kinemotion/dropjump/cli.py` (358 lines): Complete dropjump-analyze command
+ - Commands registered using Click's `cli.add_command()` pattern
+ - Modular design allows easy addition of new jump type analysis commands
+
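For context, the `cli.add_command()` registration pattern referenced in the CLI Architecture bullets above looks roughly like the following minimal sketch. It is illustrative only — the command and function names are placeholders, not the package's actual `cli.py` contents:

```python
# Minimal sketch of a Click group with a registered subcommand.
# Hypothetical example only; not the actual kinemotion cli.py.
import click


@click.group()
def cli() -> None:
    """Video-based kinematic analysis toolkit."""


@click.command(name="dropjump-analyze")
@click.argument("video_path", type=click.Path(exists=True))
def dropjump_analyze(video_path: str) -> None:
    """Analyze a drop-jump video and print metrics."""
    click.echo(f"Analyzing {video_path}...")


# Each jump-type module exposes its command; the main group registers it.
cli.add_command(dropjump_analyze)

if __name__ == "__main__":
    cli()
```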
  ### Analysis Pipeline

- 1. **Pose Tracking** (pose_tracker.py): MediaPipe extracts foot landmarks (ankles, heels, foot indices) from each frame
- 2. **Smoothing** (smoothing.py): Savitzky-Golay filter reduces jitter while preserving dynamics
- 3. **Contact Detection** (contact_detection.py): Analyzes vertical foot velocity to classify ground contact vs. flight
- 4. **Phase Identification**: Finds continuous ground contact and flight periods
+ 1. **Pose Tracking** (core/pose.py): MediaPipe extracts body landmarks from each frame
+    - Foot landmarks: ankles, heels, foot indices (for traditional foot-based tracking)
+    - Body landmarks: nose, shoulders, hips, knees (for CoM-based tracking)
+    - Total 13 landmarks tracked per frame
+ 2. **Center of Mass Estimation** (core/pose.py): Optional biomechanical CoM calculation
+    - Uses Dempster's body segment parameters for accurate weight distribution:
+      - Head: 8%, Trunk: 50%, Thighs: 20%, Legs: 10%, Feet: 3%
+    - Weighted average of segment positions for physics-based tracking
+    - More accurate than foot tracking as it tracks true body movement
+    - Reduces error from foot dorsiflexion/plantarflexion during flight
+ 3. **Smoothing** (core/smoothing.py): Savitzky-Golay filter reduces jitter while preserving dynamics
+ 4. **Contact Detection** (dropjump/analysis.py): Analyzes vertical position velocity to classify ground contact vs. flight
+    - Works with either foot positions or CoM positions
+ 5. **Phase Identification**: Finds continuous ground contact and flight periods
     - Automatically detects drop jumps vs regular jumps
     - For drop jumps: identifies standing on box → drop → ground contact → jump
- 5. **Sub-Frame Interpolation** (contact_detection.py): Estimates exact transition times
-    - Computes velocity from Savitzky-Golay derivative (smoothing.py)
+ 6. **Sub-Frame Interpolation** (dropjump/analysis.py): Estimates exact transition times
+    - Computes velocity from Savitzky-Golay derivative (core/smoothing.py)
     - Linear interpolation of smooth velocity to find threshold crossings
     - Returns fractional frame indices (e.g., 48.78 instead of 49)
     - Reduces timing error from ±33ms to ±10ms at 30fps (60-70% improvement)
     - Eliminates false threshold crossings from velocity noise
- 6. **Trajectory Curvature Analysis** (contact_detection.py): Refines transitions
+ 7. **Trajectory Curvature Analysis** (dropjump/analysis.py): Refines transitions
     - Computes acceleration (second derivative) using Savitzky-Golay filter
     - Detects landing events by acceleration spikes (impact deceleration)
     - Identifies takeoff events by acceleration changes
     - Blends curvature-based refinement with velocity-based estimates (70/30)
     - Provides independent validation based on physical motion patterns
- 7. **Metrics Calculation** (kinematics.py):
+ 8. **Metrics Calculation** (dropjump/kinematics.py):
     - Ground contact time from phase duration (using fractional frames)
     - Flight time from phase duration (using fractional frames)
     - Jump height from position tracking with optional calibration
     - Fallback: kinematic estimate from flight time: h = (g × t²) / 8
- 7. **Output**: JSON metrics + optional debug video overlay
+ 9. **Output**: JSON metrics + optional debug video overlay with visualizations
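To make the weighted-average idea in the Center of Mass Estimation step above concrete, here is a minimal illustrative sketch. The segment weights come from the list above; the landmark-to-segment mapping and function name are assumptions for illustration, not the package's actual `core/pose.py` implementation:

```python
# Illustrative CoM estimate from per-frame landmark y-positions (normalized
# image coordinates). Segment weights as quoted above; the mapping of
# landmarks to segments below is a simplifying assumption.
SEGMENT_WEIGHTS = {"head": 0.08, "trunk": 0.50, "thighs": 0.20, "legs": 0.10, "feet": 0.03}


def estimate_com_y(y: dict[str, float]) -> float:
    """Weighted average of segment midpoints built from landmark y-positions."""
    segments = {
        "head": y["nose"],
        "trunk": (y["shoulder"] + y["hip"]) / 2.0,
        "thighs": (y["hip"] + y["knee"]) / 2.0,
        "legs": (y["knee"] + y["ankle"]) / 2.0,
        "feet": y["ankle"],
    }
    total_weight = sum(SEGMENT_WEIGHTS.values())  # the weights above sum to 0.91
    return sum(SEGMENT_WEIGHTS[s] * pos for s, pos in segments.items()) / total_weight


com_y = estimate_com_y({"nose": 0.20, "shoulder": 0.30, "hip": 0.50, "knee": 0.70, "ankle": 0.90})
```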

  ### Key Design Decisions

@@ -97,7 +139,7 @@ docs/
  - **Configurable thresholds**: CLI flags allow tuning for different video qualities and athletes
  - **Calibrated jump height**: Position-based measurement with drop height calibration for accuracy
    - Optional `--drop-height` parameter uses known drop box height to calibrate measurements
-   - Achieves ~88% accuracy (vs 71% with kinematic-only method)
+   - **⚠️ Accuracy claim unvalidated** - theoretical benefit estimated, not empirically tested
    - Fallback to empirically-corrected kinematic formula when no calibration provided
  - **Aspect ratio preservation**: Output video ALWAYS matches source video dimensions
    - Handles SAR (Sample Aspect Ratio) metadata from mobile videos
@@ -116,7 +158,7 @@ The codebase enforces strict code quality standards using multiple tools:
  - `disallow_incomplete_defs`: Partial type hints not allowed
  - `warn_return_any`: Warns on Any return types
  - Third-party stubs: Ignores missing imports for cv2, mediapipe, scipy
- - Run with: `uv run mypy src/dropjump`
+ - Run with: `uv run mypy src/kinemotion`

  ### Linting with ruff

@@ -136,6 +178,7 @@ The codebase enforces strict code quality standards using multiple tools:
  ### When Contributing Code

  Always run before committing:
+
  ```bash
  # Format code
  uv run black src/
@@ -144,24 +187,25 @@ uv run black src/
  uv run ruff check --fix

  # Type check
- uv run mypy src/dropjump
+ uv run mypy src/kinemotion

  # Run tests
  uv run pytest
  ```

  Or run all checks at once:
+
  ```bash
- uv run ruff check && uv run mypy src/dropjump && uv run pytest
+ uv run ruff check && uv run mypy src/kinemotion && uv run pytest
  ```

  ## Critical Implementation Details

- ### Aspect Ratio Preservation & SAR Handling (video_io.py)
+ ### Aspect Ratio Preservation & SAR Handling (core/video_io.py)

  **IMPORTANT**: The tool preserves the exact aspect ratio of the source video, including SAR (Sample Aspect Ratio) metadata. No dimensions are hardcoded.

- #### VideoProcessor (`video_io.py:15-110`)
+ #### VideoProcessor (`core/video_io.py:15-110`)

  - Reads the **first actual frame** to get true encoded dimensions (not OpenCV properties)
  - Critical for mobile videos with rotation metadata
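The first-frame approach described in the two bullets above can be sketched with generic OpenCV calls roughly as follows — an illustrative snippet, not the package's actual `VideoProcessor` code:

```python
# Read the first decoded frame to get the true encoded dimensions, then rewind.
# Generic OpenCV sketch; not the actual VideoProcessor implementation.
import cv2

cap = cv2.VideoCapture("video.mp4")
ret, frame = cap.read()
if ret:
    height, width = frame.shape[:2]       # NumPy shape is (height, width, channels)
    cap.set(cv2.CAP_PROP_POS_FRAMES, 0)   # rewind so analysis starts at frame 0
else:
    raise RuntimeError("Could not read first frame")
```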
@@ -180,13 +224,14 @@ if ret:
  ```

  **Never do this:**
+
  ```python
  # Wrong - may return incorrect dimensions with rotated videos
  self.width = int(self.cap.get(cv2.CAP_PROP_FRAME_WIDTH))
  self.height = int(self.cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
  ```

- #### DebugOverlayRenderer (`video_io.py:130-330`)
+ #### DebugOverlayRenderer (`dropjump/debug_overlay.py`)

  - Creates output video with **display dimensions** (respecting SAR)
  - Resizes frames from encoded dimensions to display dimensions if needed (INTER_LANCZOS4)
@@ -204,12 +249,14 @@ self.height = int(self.cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
  Instead of simple frame-to-frame differences, velocity is computed as the derivative of the smoothed position trajectory using Savitzky-Golay filter:

  **Advantages:**
+
  - **Smoother velocity curves**: Eliminates noise from frame-to-frame jitter
  - **More accurate threshold crossings**: Clean transitions without false positives
  - **Better interpolation**: Smoother velocity gradient for sub-frame precision
  - **Consistent with smoothing**: Uses same polynomial fit as position smoothing

  **Implementation:**
+
  ```python
  # OLD: Simple differences (noisy)
  velocities = np.abs(np.diff(foot_positions, prepend=foot_positions[0]))
@@ -219,6 +266,7 @@ velocities = savgol_filter(positions, window_length=5, polyorder=2, deriv=1, del
  ```

  **Key Function:**
+
  - `compute_velocity_from_derivative()`: Computes first derivative using Savitzky-Golay filter

  #### Sub-Frame Interpolation Algorithm
@@ -226,9 +274,11 @@ velocities = savgol_filter(positions, window_length=5, polyorder=2, deriv=1, del
  At 30fps, each frame represents 33.3ms. Contact events (landing, takeoff) rarely occur exactly at frame boundaries. Sub-frame interpolation estimates the exact moment between frames when velocity crosses the threshold.

  **Algorithm:**
+
  1. Calculate smooth velocity using derivative: `v = derivative(smooth_position)`
  2. Find frames where velocity crosses threshold (e.g., from 0.025 to 0.015, threshold 0.020)
  3. Use linear interpolation to find exact crossing point:
+
  ```python
  # If v[10] = 0.025 and v[11] = 0.015, threshold = 0.020
  t = (0.020 - 0.025) / (0.015 - 0.025) = 0.5
@@ -236,11 +286,13 @@ At 30fps, each frame represents 33.3ms. Contact events (landing, takeoff) rarely
  ```

  **Key Functions:**
+
  - `interpolate_threshold_crossing()`: Linear interpolation of velocity crossing
  - `find_interpolated_phase_transitions()`: Returns fractional frame indices for all phases

  **Accuracy Improvement:**
- ```
+
+ ```text
  30fps without interpolation: ±33ms (1 frame on each boundary)
  30fps with interpolation: ±10ms (sub-frame precision)
  60fps without interpolation: ±17ms
@@ -248,6 +300,7 @@ At 30fps, each frame represents 33.3ms. Contact events (landing, takeoff) rarely
  ```

  **Velocity Comparison:**
+
  ```python
  # Frame-to-frame differences: noisy, discontinuous jumps
  v_simple = [0.01, 0.03, 0.02, 0.04, 0.02, 0.01] # Jittery
@@ -257,6 +310,7 @@ v_deriv = [0.015, 0.022, 0.025, 0.024, 0.018, 0.012] # Smooth
  ```

  **Example:**
+
  ```python
  # Integer frames: contact from frame 49 to 53 (5 frames = 168ms at 30fps)
  # With derivative velocity: contact from 49.0 to 53.0 (4 frames = 135ms)
@@ -272,12 +326,14 @@ v_deriv = [0.015, 0.022, 0.025, 0.024, 0.018, 0.012] # Smooth
  Acceleration (second derivative) reveals characteristic patterns at contact events:

  **Physical Patterns:**
+
  - **Landing impact**: Large acceleration spike as feet decelerate on impact
  - **Takeoff**: Acceleration change as body transitions from static to upward motion
  - **In flight**: Constant acceleration (gravity ≈ -9.81 m/s²)
  - **On ground**: Near-zero acceleration (stationary position)

  **Implementation:**
+
  ```python
  # Compute acceleration using Savitzky-Golay second derivative
  acceleration = savgol_filter(positions, window=5, polyorder=2, deriv=2, delta=1.0)
@@ -291,6 +347,7 @@ takeoff_frame = np.argmax(accel_change[search_window])
  ```

  **Key Functions:**
+
  - `compute_acceleration_from_derivative()`: Computes second derivative using Savitzky-Golay
  - `refine_transition_with_curvature()`: Searches for acceleration patterns near transitions
  - `find_interpolated_phase_transitions_with_curvature()`: Combines velocity + curvature
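Pulling the derivative-based velocity, acceleration, and threshold-crossing interpolation described above into one self-contained sketch (illustrative only; the function names and parameters here are assumptions, not the package's exact API):

```python
# Illustrative sketch of derivative-based velocity/acceleration plus linear
# threshold-crossing interpolation; not the package's actual functions.
import numpy as np
from scipy.signal import savgol_filter


def smooth_derivatives(positions: np.ndarray, window: int = 5, polyorder: int = 2):
    """Velocity and acceleration from a position trajectory via Savitzky-Golay."""
    velocity = savgol_filter(positions, window, polyorder, deriv=1, delta=1.0)
    acceleration = savgol_filter(positions, window, polyorder, deriv=2, delta=1.0)
    return velocity, acceleration


def interpolate_crossing(speed: np.ndarray, i: int, threshold: float) -> float:
    """Fractional frame index where speed crosses threshold between frames i and i+1."""
    t = (threshold - speed[i]) / (speed[i + 1] - speed[i])
    return i + t


positions = np.array([0.50, 0.48, 0.45, 0.41, 0.36, 0.30, 0.26, 0.24, 0.23, 0.23])
velocity, acceleration = smooth_derivatives(positions)

# Matches the worked example above: speed drops from 0.025 to 0.015 past 0.020.
frame = interpolate_crossing(np.array([0.025, 0.015]), 0, 0.020)  # -> 0.5
```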
@@ -304,11 +361,13 @@ Curvature analysis refines velocity-based estimates through blending:
  3. **Blending**: 70% curvature-based + 30% velocity-based

  **Why Blending?**
+
  - Velocity is reliable for coarse timing
  - Curvature provides fine detail but can be noisy at boundaries
  - Blending prevents large deviations while incorporating physical insights

  **Algorithm:**
+
  ```python
  # 1. Get velocity-based estimate
  velocity_estimate = 49.0 # from interpolation
@@ -323,6 +382,7 @@ blend = 0.7 * 47.2 + 0.3 * 49.0 # = 47.74
  ```

  **Accuracy Improvement:**
+
  ```python
  # Example: Landing detection
  # Velocity only: frame 49.0 (when velocity drops below threshold)
@@ -331,6 +391,7 @@ blend = 0.7 * 47.2 + 0.3 * 49.0 # = 47.74
  ```

  **Optional Feature:**
+
  - Enabled by default (`--use-curvature`, default: True)
  - Can be disabled with `--no-curvature` flag for pure velocity-based detection
  - Negligible performance impact (reuses smoothed trajectory)
@@ -348,12 +409,13 @@ Always convert to Python `int()` in `to_dict()` method:
  ```

  **Never do this:**
+
  ```python
  # Wrong - will fail with "Object of type int64 is not JSON serializable"
  "contact_start_frame": self.contact_start_frame
  ```

- ### Video Codec Handling (video_io.py:78-94)
+ ### Video Codec Handling (dropjump/debug_overlay.py)

  - Primary codec: H.264 (avc1) - better quality, smaller file size
  - Fallback codec: MPEG-4 (mp4v) - broader compatibility
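The primary/fallback behaviour described above amounts to trying one FourCC and falling back to the other if the writer fails to open. A generic OpenCV sketch of that idea (not the package's renderer code) might look like:

```python
# Try H.264 (avc1) first, fall back to MPEG-4 (mp4v) if the writer cannot open.
# Generic OpenCV sketch; not the actual DebugOverlayRenderer code.
import cv2


def open_writer(path: str, fps: float, width: int, height: int) -> cv2.VideoWriter:
    for codec in ("avc1", "mp4v"):
        fourcc = cv2.VideoWriter_fourcc(*codec)
        writer = cv2.VideoWriter(path, fourcc, fps, (width, height))  # (width, height) order
        if writer.isOpened():
            return writer
    raise RuntimeError("No supported codec available for output video")
```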
@@ -367,6 +429,7 @@ OpenCV and NumPy use different dimension ordering:
  - **OpenCV VideoWriter size**: `(width, height)` tuple

  Example:
+
  ```python
  frame.shape # (1080, 1920, 3) - height first
  cv2.VideoWriter(..., (1920, 1080)) # width first
@@ -378,39 +441,46 @@ Always be careful with dimension ordering to avoid squashed/stretched videos.

  ### Adding New Metrics

- 1. Update `DropJumpMetrics` class in `kinematics.py:10-19`
+ 1. Update `DropJumpMetrics` class in `dropjump/kinematics.py:10-19`
  2. Add calculation logic in `calculate_drop_jump_metrics()` function
  3. Update `to_dict()` method for JSON serialization (remember to convert NumPy types to Python types)
- 4. Optionally add visualization in `DebugOverlayRenderer.render_frame()` in `video_io.py:96`
+ 4. Optionally add visualization in `DebugOverlayRenderer.render_frame()` in `dropjump/debug_overlay.py`
  5. Add tests in `tests/test_kinematics.py`

  ### Modifying Contact Detection Logic

- Edit `detect_ground_contact()` in `contact_detection.py:14`. Key parameters:
+ Edit `detect_ground_contact()` in `dropjump/analysis.py:14`. Key parameters:
+
  - `velocity_threshold`: Tune for different surface/athlete combinations (default: 0.02)
  - `min_contact_frames`: Adjust for frame rate and contact duration expectations (default: 3)
  - `visibility_threshold`: Minimum landmark visibility score (default: 0.5)

  ### Adjusting Smoothing

- Modify `smooth_landmarks()` in `smoothing.py:9`:
+ Modify `smooth_landmarks()` in `core/smoothing.py:9`:
+
  - `window_length`: Controls smoothing strength (must be odd, default: 5)
  - `polyorder`: Polynomial order for Savitzky-Golay filter (default: 2)

  ### Parameter Tuning

- **IMPORTANT**: See `docs/PARAMETERS.md` for comprehensive guide on all 7 CLI parameters.
+ **IMPORTANT**: See `docs/PARAMETERS.md` for comprehensive guide on all CLI parameters.
+
+ Quick reference for `dropjump-analyze`:

- Quick reference:
  - **smoothing-window**: Trajectory smoothness (↑ for noisy video)
  - **velocity-threshold**: Contact sensitivity (↓ to detect brief contacts)
  - **min-contact-frames**: Temporal filter (↑ to remove false contacts)
- - **visibility-threshold**: Landmark confidence (↓ for occluded feet)
+ - **visibility-threshold**: Landmark confidence (↓ for occluded landmarks)
  - **detection-confidence**: Pose detection strictness (MediaPipe)
  - **tracking-confidence**: Tracking persistence (MediaPipe)
  - **drop-height**: Drop box height in meters for calibration (e.g., 0.40 for 40cm)
+ - **use-curvature**: Enable trajectory curvature analysis (default: enabled)
+
+ **Note**: Drop jump analysis always uses foot-based tracking with fixed velocity thresholds because typical drop jump videos are ~3 seconds long without a stationary baseline period. The `--use-com` and `--adaptive-threshold` options (available in `core/` modules) require longer videos (~5+ seconds) with 3 seconds of standing baseline, making them suitable for future jump types like CMJ (countermovement jump) but not drop jumps.

  The detailed guide includes:
+
  - How each parameter works internally
  - Frame rate considerations
  - Scenario-based recommended settings
@@ -420,6 +490,7 @@ The detailed guide includes:
  ### Working with Different Video Formats

  The tool handles various video formats and aspect ratios:
+
  - 16:9 landscape (1920x1080)
  - 4:3 standard (640x480)
  - 9:16 portrait (1080x1920)
@@ -448,14 +519,17 @@ uv run pytest -v

  - **Aspect ratio preservation**: 4 tests covering 16:9, 4:3, 9:16, and validation
  - **Contact detection**: 3 tests for ground contact detection and phase identification
+ - **Center of mass estimation**: 6 tests for CoM calculation, biomechanical weights, and fallback behavior
+ - **Adaptive thresholding**: 10 tests for auto-calibration, noise adaptation, bounds checking, and edge cases
  - **Kinematics**: 2 tests for metrics calculation and JSON serialization

  ### Code Quality

  All code passes:
+
  - ✅ **Type checking**: Full mypy strict mode compliance
  - ✅ **Linting**: ruff checks with comprehensive rule sets
- - ✅ **Tests**: 9/9 tests passing
+ - ✅ **Tests**: 25/25 tests passing
  - ✅ **Formatting**: Black code style

  ## Troubleshooting
@@ -468,6 +542,7 @@ All code passes:
  ### Video Dimension Issues

  If output video has wrong aspect ratio:
+
  1. Check `VideoProcessor` is reading first frame correctly
  2. Verify `DebugOverlayRenderer` receives correct width/height from `VideoProcessor`
  3. Check that `write_frame()` validation is enabled (should raise error if dimensions mismatch)
@@ -476,6 +551,7 @@ If output video has wrong aspect ratio:
  ### JSON Serialization Errors

  If you see "Object of type X is not JSON serializable":
+
  1. Check `kinematics.py` `to_dict()` method
  2. Ensure all NumPy types are converted to Python types with `int()` or `float()`
  3. Run `tests/test_kinematics.py::test_metrics_to_dict` to verify
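The conversion in step 2 boils down to casting NumPy scalars before serializing; a minimal illustrative sketch (field names are placeholders, not the actual `DropJumpMetrics` schema):

```python
# Cast NumPy scalar types to built-in Python types before json.dumps().
# Field names here are illustrative, not the actual metrics schema.
import json

import numpy as np

contact_start_frame = np.int64(49)
flight_time = np.float64(0.42)

metrics = {
    "contact_start_frame": int(contact_start_frame),  # np.int64 -> int
    "flight_time_s": float(flight_time),              # np.float64 -> float
}
print(json.dumps(metrics))  # works; passing the raw np.int64 would raise TypeError
```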
@@ -483,6 +559,7 @@ If you see "Object of type X is not JSON serializable":
  ### Video Codec Issues

  If output video won't play:
+
  1. Try different output format: `.avi` instead of `.mp4`
  2. Check OpenCV codec support: `cv2.getBuildInformation()`
  3. DebugOverlayRenderer will fallback from H.264 to MPEG-4 automatically
@@ -490,48 +567,49 @@ If output video won't play:
  ### Type Checking Issues

  If mypy reports errors:
+
  1. Ensure all function signatures have complete type annotations (parameters and return types)
  2. For numpy types, use explicit casts: `int()`, `float()` when converting to Python types
  3. For third-party libraries without stubs (cv2, mediapipe, scipy), use `# type: ignore` comments sparingly
  4. Check `pyproject.toml` under `[tool.mypy]` for configuration
- 5. Run `uv run mypy src/dropjump` to verify fixes
+ 5. Run `uv run mypy src/kinemotion` to verify fixes

  ## CLI Usage Examples

  ```bash
  # Show main command help
- uv run kinemetry --help
+ uv run kinemotion --help

  # Show subcommand help
- uv run kinemetry dropjump-analyze --help
+ uv run kinemotion dropjump-analyze --help

  # Basic analysis (JSON to stdout)
- uv run kinemetry dropjump-analyze video.mp4
+ uv run kinemotion dropjump-analyze video.mp4

  # Save metrics to file
- uv run kinemetry dropjump-analyze video.mp4 --json-output results.json
+ uv run kinemotion dropjump-analyze video.mp4 --json-output results.json

  # Generate debug video
- uv run kinemetry dropjump-analyze video.mp4 --output debug.mp4
+ uv run kinemotion dropjump-analyze video.mp4 --output debug.mp4

  # Drop jump with calibration (40cm box)
- uv run kinemetry dropjump-analyze video.mp4 --drop-height 0.40
+ uv run kinemotion dropjump-analyze video.mp4 --drop-height 0.40

  # Custom parameters for noisy video
- uv run kinemetry dropjump-analyze video.mp4 \
+ uv run kinemotion dropjump-analyze video.mp4 \
    --smoothing-window 7 \
    --velocity-threshold 0.01 \
    --min-contact-frames 5

  # Full analysis with calibration and all outputs
- uv run kinemetry dropjump-analyze video.mp4 \
+ uv run kinemotion dropjump-analyze video.mp4 \
    --output debug.mp4 \
    --json-output metrics.json \
    --drop-height 0.40 \
    --smoothing-window 7

  # Regular jump (no calibration, uses corrected kinematic method)
- uv run kinemetry dropjump-analyze jump.mp4 \
+ uv run kinemotion dropjump-analyze jump.mp4 \
    --output debug.mp4 \
    --json-output metrics.json
  ```
@@ -539,6 +617,7 @@ uv run kinemetry dropjump-analyze jump.mp4 \
  ## MCP Server Configuration

  The repository includes MCP server configuration in `.mcp.json`:
+
  - **web-search**: DuckDuckGo search via @dannyboy2042/freebird-mcp
  - **sequential**: Sequential thinking via @smithery-ai/server-sequential-thinking
  - **context7**: Library documentation via @upstash/context7-mcp