calibrate-suite 0.1.0__py3-none-any.whl

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
Files changed (47)
  1. calibrate_suite-0.1.0.dist-info/METADATA +761 -0
  2. calibrate_suite-0.1.0.dist-info/RECORD +47 -0
  3. calibrate_suite-0.1.0.dist-info/WHEEL +5 -0
  4. calibrate_suite-0.1.0.dist-info/entry_points.txt +3 -0
  5. calibrate_suite-0.1.0.dist-info/licenses/LICENSE +201 -0
  6. calibrate_suite-0.1.0.dist-info/top_level.txt +4 -0
  7. fleet_server/__init__.py +32 -0
  8. fleet_server/app.py +377 -0
  9. fleet_server/config.py +91 -0
  10. fleet_server/templates/error.html +57 -0
  11. fleet_server/templates/index.html +137 -0
  12. fleet_server/templates/viewer.html +490 -0
  13. fleet_server/utils.py +178 -0
  14. gui/__init__.py +2 -0
  15. gui/assets/2d-or-3d-fleet-upload.png +0 -0
  16. gui/assets/2d_3d_overlay_output.jpg +0 -0
  17. gui/assets/3d-or-2d-overlay_page.png +0 -0
  18. gui/assets/3d-or-2d-record-page.png +0 -0
  19. gui/assets/3d_3d_overlay_output.png +0 -0
  20. gui/assets/3d_or_2d_calibrate-page.png +0 -0
  21. gui/assets/GUI_homepage.png +0 -0
  22. gui/assets/hardware_setup.jpeg +0 -0
  23. gui/assets/single_lidar_calibrate_page.png +0 -0
  24. gui/assets/single_lidar_output.png +0 -0
  25. gui/assets/single_lidar_record_page.png +0 -0
  26. gui/assets/virya.jpg +0 -0
  27. gui/main.py +23 -0
  28. gui/widgets/calibrator_widget.py +977 -0
  29. gui/widgets/extractor_widget.py +561 -0
  30. gui/widgets/home_widget.py +117 -0
  31. gui/widgets/recorder_widget.py +127 -0
  32. gui/widgets/single_lidar_widget.py +673 -0
  33. gui/widgets/three_d_calib_widget.py +87 -0
  34. gui/widgets/two_d_calib_widget.py +86 -0
  35. gui/widgets/uploader_widget.py +151 -0
  36. gui/widgets/validator_widget.py +614 -0
  37. gui/windows/main_window.py +56 -0
  38. gui/windows/main_window_ui.py +65 -0
  39. rviz_configs/2D-3D.rviz +183 -0
  40. rviz_configs/3D-3D.rviz +184 -0
  41. rviz_configs/default_calib.rviz +167 -0
  42. utils/__init__.py +13 -0
  43. utils/calibration_common.py +23 -0
  44. utils/cli_calibrate.py +53 -0
  45. utils/cli_fleet_server.py +64 -0
  46. utils/data_extractor_common.py +87 -0
  47. utils/gui_helpers.py +25 -0
Metadata-Version: 2.4
Name: calibrate-suite
Version: 0.1.0
Summary: Comprehensive camera-lidar calibration suite with GUI and fleet management
Home-page: https://github.com/Opera5/calibrate-suite
Author: Jamih
Author-email: Jamih <ajamiu46@gmail.com>
License: MIT
Project-URL: Homepage, https://github.com/Opera5/calibrate-suite
Project-URL: Repository, https://github.com/Opera5/calibrate-suite.git
Keywords: calibration,camera,lidar,computer-vision
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Requires-Python: >=3.8
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: numpy>=1.20.0
Requires-Dist: scipy>=1.7.0
Requires-Dist: scikit-learn>=1.0.0
Requires-Dist: Flask==2.3.3
Requires-Dist: requests==2.31.0
Requires-Dist: python-dotenv==1.0.0
Requires-Dist: pyyaml>=6.0
Provides-Extra: cv
Requires-Dist: opencv-python>=4.5.0; extra == "cv"
Provides-Extra: 3d
Requires-Dist: open3d>=0.13.0; extra == "3d"
Provides-Extra: visualization
Requires-Dist: matplotlib>=3.3.0; extra == "visualization"
Provides-Extra: gui
Requires-Dist: PyQt5>=5.15.0; extra == "gui"
Requires-Dist: pyqtgraph>=0.13.0; extra == "gui"
Provides-Extra: full
Requires-Dist: opencv-python>=4.5.0; extra == "full"
Requires-Dist: open3d>=0.13.0; extra == "full"
Requires-Dist: matplotlib>=3.3.0; extra == "full"
Requires-Dist: PyQt5>=5.15.0; extra == "full"
Requires-Dist: pyqtgraph>=0.13.0; extra == "full"
Provides-Extra: dev
Requires-Dist: pytest>=7.0; extra == "dev"
Requires-Dist: pytest-cov>=4.0; extra == "dev"
Requires-Dist: black>=23.0; extra == "dev"
Requires-Dist: flake8>=6.0; extra == "dev"
Dynamic: author
Dynamic: home-page
Dynamic: license-file
Dynamic: requires-python

# Calibrate-Suite: Multi-Modal Sensor Calibration

A comprehensive suite for LiDAR-camera extrinsic calibration using AprilTag fiducials (for the 2D-3D method) and a plane board (for the 3D-3D method). Provides three calibration frameworks with CLI tools, a REST API, and a PyQt5 GUI for desktop and fleet-scale deployments (the fleet server is a dummy demonstration).

---

## 📋 Table of Contents

1. [Project Overview](#project-overview)
2. [Features](#features)
3. [Installation](#installation)
4. [Dependencies & Conflicts](#dependencies--conflicts)
5. [Quick Start](#quick-start)
6. [Calibration Frameworks](#calibration-frameworks)
7. [Command-Line Tools](#command-line-tools)
8. [Data Formats](#data-formats)
9. [Cautions & Best Practices](#cautions--best-practices)
10. [Troubleshooting](#troubleshooting)
11. [FAQ](#faq)
12. [Version](#version)

---

## 🎯 Project Overview

**Calibrate-Suite** provides production-ready tools for calibrating the extrinsic transformation between LiDAR and camera sensors on robotic platforms. The package includes:

- **Three calibration frameworks** targeting different sensor configurations (LiDAR-only, 3D-3D, and 2D-3D AprilTag-based)
- **Desktop GUI** (PyQt5) for interactive calibration workflows
- **REST API** (Flask) as a dummy illustration of distributed fleet management
- **Command-line tools** for automated batch processing
- **Multiple installation methods** supporting both pip and ROS2 environments

**Supports:** Ubuntu 20.04+, Python 3.8+, ROS2 Humble (optional)

### Hardware Setup

![Hardware Setup](src/gui/assets/hardware_setup.jpeg)

**Pattern Generator:** I used the [Calib.io Pattern Generator](https://calib.io/pages/camera-calibration-pattern-generator) for the AprilGrid.

---

## ✨ Features

### Core Calibration
- ✅ **LiDAR-Camera extrinsic calibration** using AprilTag fiducials for 2D-3D and a plane board for 3D-3D
- ✅ **Three frameworks** for different sensor setups and use cases
- ✅ **Automatic corner normalization** in single-pass calibration (2D-3D)
- ✅ **Robust quality filtering** with per-pose metrics
- ✅ **Comprehensive error reporting** (reprojection, translational, rotational)
- ✅ **Multi-pose support** (20+ poses used)

### Desktop GUI
- ✅ **Tab-based interface** for all frameworks and tools
- ✅ **Real-time data visualization** with 3D point cloud overlay
- ✅ **Interactive parameter tuning** with live metrics
- ✅ **Web-based 3D viewer** for browser access (the dummy server)
- ✅ **PCD file browser** and upload functionality

### REST API (dummy Fleet Server)
- ✅ **Calibration management** endpoints
- ✅ **Result persistence** with YAML/JSON export
- ✅ **Metrics dashboard** for fleet monitoring
- ✅ **File upload/download** for distributed workflows
- ✅ **Browser-based interface** for web access

### Data Processing
- ✅ **Automatic data extraction** from ROS2 bags
- ✅ **Synchronized sensor capture** (camera + LiDAR)
- ✅ **Point cloud filtering** and preprocessing
- ✅ **AprilTag detection** and corner localization
- ✅ **Board segmentation** and quality assessment

### Deployment
- ✅ **pip installation** for standard Python environments
- ✅ **ROS2 colcon build** for local deployments on ROS2
- ✅ **System package compatibility** (no Qt/RViz conflicts)
- ✅ **Docker support** ready
- ✅ **Minimal dependencies** with optional extras

---

## 📦 Installation

### Prerequisites
```bash
# Python 3.8 or higher
python3 --version

# System packages (Ubuntu 22.04)
sudo apt update
```

### Installation Options

**Option 1: ROS2 + System Packages (Recommended for ROS users)**
```bash
# Install system dependencies
sudo apt install python3-opencv python3-open3d python3-pyqt5 python3-yaml

# Build with ROS2
cd ~/calib_ws
colcon build --packages-select calibrate-suite
source install/setup.bash

# Launch after installation
calibrate-suite   # GUI
fleet-server      # REST API
```

**Option 2: pip - Core Only (Minimal)**
```bash
pip install calibrate-suite
```

**Option 3: pip - With Computer Vision**
```bash
pip install 'calibrate-suite[cv]'
```

**Option 4: pip - With 3D Processing**
```bash
pip install 'calibrate-suite[3d]'
```

**Option 5: pip - With GUI**
```bash
pip install 'calibrate-suite[gui]'
```

**Option 6: pip - Full Installation (Isolated venv recommended)**
```bash
python3 -m venv calib_env
source calib_env/bin/activate
pip install 'calibrate-suite[full]'

# Launch commands after installation
calibrate-suite   # GUI
fleet-server      # REST API (optional parameters: --host 0.0.0.0 --port 8080 --debug)
```

### Verification
```bash
# Check installation and verify commands are available
which calibrate-suite
which fleet-server

# Quick test of GUI
calibrate-suite --version

# Quick test of server
fleet-server --version
```
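If you prefer verifying from Python (e.g., in a CI script), a stdlib-only sketch works too; `calibrate-suite` below is simply the distribution name from this package's metadata:

```python
# Stdlib-only check that a distribution is installed (useful in CI scripts).
from importlib import metadata

def installed_version(dist_name: str) -> str:
    """Return the installed version string, or 'not installed'."""
    try:
        return metadata.version(dist_name)
    except metadata.PackageNotFoundError:
        return "not installed"

print(installed_version("calibrate-suite"))
```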

---

## 🔧 Dependencies & Conflicts

### Core Dependencies (Always Required)
| Package | Version | Purpose |
|---------|---------|---------|
| numpy | ≥1.20.0 | Numerical computing |
| scipy | ≥1.7.0 | Scientific functions |
| scikit-learn | ≥1.0.0 | Machine learning algorithms |
| Flask | 2.3.3 | REST API framework |
| requests | 2.31.0 | HTTP client |
| python-dotenv | 1.0.0 | Environment configuration |
| pyyaml | ≥6.0 | YAML parsing |

### Optional Dependencies (Feature-Specific)
| Extra | Packages | When to Use |
|-------|----------|------------|
| `[cv]` | opencv-python ≥4.5.0 | Computer vision, AprilTag detection |
| `[3d]` | open3d ≥0.13.0 | 3D point cloud processing |
| `[visualization]` | matplotlib ≥3.3.0 | Result visualization |
| `[gui]` | PyQt5 ≥5.15.0 | Desktop GUI application |
| `[full]` | All packages | Complete feature set |
| `[dev]` | pytest, black, flake8 | Development & testing |

### Known Conflicts & Solutions

**Issue: OpenCV conflicts with system Qt/RViz**
```
Problem:  pip opencv-python can conflict with system OpenCV
Solution: Use the system package instead
  sudo apt install python3-opencv
  pip install calibrate-suite   # without [cv]
```

**Issue: PyQt5 conflicts in ROS2 environment**
```
Problem:  pip PyQt5 may conflict with ROS2 system packages
Solution: Use system PyQt5
  sudo apt install python3-pyqt5
  pip install calibrate-suite   # without [gui]
```

**Issue: NumPy 2.x compatibility**
```
Problem:  Some packages are compiled against NumPy 1.x
Solution: Pin the NumPy version if needed
  pip install 'numpy<2' 'calibrate-suite[full]'
```
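A related defensive pattern: probe for the optional extras at runtime so code can degrade gracefully instead of crashing on a missing import. This is a generic sketch, not code from the package itself:

```python
# Sketch: detect which optional extras are importable at runtime.
from importlib.util import find_spec

OPTIONAL_FEATURES = {
    "cv": "cv2",                   # opencv-python
    "3d": "open3d",
    "visualization": "matplotlib",
    "gui": "PyQt5",
}

def available_extras() -> dict:
    """Map each extra to True/False depending on import availability."""
    return {extra: find_spec(module) is not None
            for extra, module in OPTIONAL_FEATURES.items()}

print(available_extras())
```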

---

## 🚀 Quick Start

### GUI Application (Recommended)
```bash
# Launch desktop application
calibrate-suite

# Tabs available:
# - Home: Quick navigation
# - Recorder: Capture ROS2 bag data
# - Extractor: Extract frames and point clouds
# - Single LiDAR: LiDAR-only calibration
# - 3D-3D: Dense 3D point cloud calibration
# - 2D-3D: AprilTag-based calibration (recommended)
# - Calibrator: Advanced settings and batch processing
```

**GUI Workflow Tabs:**

| Home | Record |
|---|---|
| ![Home Interface](src/gui/assets/GUI_homepage.png) | ![Record Page](src/gui/assets/3d-or-2d-record-page.png) |

### Command-Line (Quick Calibration)
```bash
cd /home/jamih/calib_ws/src/calibrate-suite/src

# Launching the calibrator suite from the terminal
# Method 1: Using entry point (if installed)
calibrate-suite

# Method 2: Direct Python execution
python3 -m gui.main

# Method 3: Direct script execution
python3 gui/main.py

# 2D-3D calibration (AprilTag-based, recommended)
python3 2d_3d_calibrator.py --data_root ../calib_data/multimodal_captures

# View results
cat ../calib_data/multimodal_captures/final_extrinsic.yaml
```

### Data Processing Pipeline
```bash
# 1. Extract synchronized data from ROS2 bag
python3 2d_3d_data_extractor.py \
    --data_root ../calib_data/multimodal_captures \
    --output ../calib_data/multimodal_captures \
    --num-poses 20

# 2. Run calibration
python3 2d_3d_calibrator.py --data_root ../calib_data/multimodal_captures

# 3. Visualize results
python3 visualize_alignment.py --data_root ../calib_data/multimodal_captures
```

---

## 📐 Calibration Frameworks

### Framework 1: Single LiDAR Calibration

**Purpose:** Point cloud self-registration and intrinsic validation

**Features:**
- Point cloud pair registration using point-to-plane ICP
- FPFH-RANSAC global registration for difficult alignments
- Hybrid initialization (centroid multi-start + FPFH-RANSAC)
- Multi-scale ICP pyramid (coarse → medium → fine)
- GICP final refinement for sub-millimeter accuracy
- Statistical outlier removal
- Adaptive voxel-based downsampling
- Fitness score and RMSE metrics

**Input:**
- Source and target PCD files (`.pcd`)

**Output:**
- Transformation matrix (4×4)
- Fitness score (0-1)
- RMSE error (meters)
- Aligned point cloud visualization

![Single LiDAR Output](src/gui/assets/single_lidar_output.png)

**When to Use:**
- Sensor self-calibration and intrinsic validation
- Point cloud registration debugging
- Temporal drift assessment
- LiDAR-to-LiDAR pose estimation

**Note:**
- When you run single-LiDAR capture in bag mode, RViz may look unstable; click the pause/play button and it will stabilize.

**GUI Usage:**
```
GUI → Single LiDAR Tab → Upload PCD files → Select method → Run → View metrics
```

| Record | Calibrate |
|---|---|
| ![Single LiDAR Record](src/gui/assets/single_lidar_record_page.png) | ![Single LiDAR Calibrate](src/gui/assets/single_lidar_calibrate_page.png) |
---

### Framework 2: 3D-3D Camera-LiDAR Calibration

**Purpose:** Dense point cloud correspondence calibration

**Features:**
- 3D point cloud extraction from both camera and LiDAR
- DBSCAN clustering for board segmentation
- Centroid-based initial estimation
- Least-squares iterative refinement
- Per-pose quality metrics (centroid error, normal error, point-to-plane RMSE)
- Cloud overlay visualization

**Input:**
- Camera point clouds (depth+image-to-3D conversion)
- LiDAR point clouds (PCD files)
- Synchronized multi-pose captures

**Output:**
```yaml
final_extrinsic.yaml:
  rotation_matrix: 3×3
  translation_vector: [x, y, z]
  metrics:
    centroid_error_m: float
    normal_error_deg: float
    rmse_error_m: float
    per_pose_quality: [...]
```

![3D-3D Calibration Output](src/gui/assets/3d_3d_overlay_output.png)

**When to Use:**
- Both sensors provide dense 3D data
- High-precision applications required
- Full point cloud validation needed
- Stereo camera + LiDAR setups

**GUI Usage:**
```
GUI → 3D-3D Tab → Select data directory → Configure parameters → Run → View metrics
```
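The "centroid-based initial estimation" and "least-squares iterative refinement" steps reduce to a rigid least-squares fit between matched 3D points. A minimal NumPy sketch of that core (the Kabsch/SVD solution; the package's actual pipeline additionally does DBSCAN segmentation and per-pose quality filtering):

```python
import numpy as np

def rigid_fit(src: np.ndarray, dst: np.ndarray):
    """Least-squares rigid transform (R, t) mapping src -> dst (Kabsch/SVD)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)   # centroid initialization
    H = (src - c_src).T @ (dst - c_dst)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Fix a possible reflection so R is a proper rotation (det = +1)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = c_dst - R @ c_src
    return R, t

# Toy usage: recover a known rotation/translation from matched points
rng = np.random.default_rng(0)
src = rng.normal(size=(50, 3))
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -0.2, 1.0])
dst = src @ R_true.T + t_true
R, t = rigid_fit(src, dst)
```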

---

### Framework 3: 2D-3D Camera-LiDAR Calibration (Recommended) ⭐

**Purpose:** AprilTag fiducial-based corner correspondence calibration

**Features:**
- AprilTag detection in camera images (grid size configurable; 8×11 used here)
- 2D corner localization in pixel coordinates
- LiDAR corner extraction via PCA and DBSCAN
- 8-permutation search for optimal corner matching
- Automatic corner index normalization in a single pass
- Quality filtering (best N poses by board quality)
- Huber loss optimization with outlier rejection
- Comprehensive error metrics per pose
- Reprojection and translational error reporting
- Convergence guarantees with verification pass

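For intuition, the Huber loss used above is quadratic for small residuals and linear for large ones, which is what keeps a few bad corner matches from dominating the fit. A standalone sketch (the 1.5 px delta mirrors the `huber_delta` default referenced under Troubleshooting):

```python
import numpy as np

def huber(residuals: np.ndarray, delta: float = 1.5) -> np.ndarray:
    """Huber loss: quadratic below `delta` (pixels here), linear above it."""
    r = np.abs(residuals)
    quadratic = 0.5 * r**2
    linear = delta * (r - 0.5 * delta)
    return np.where(r <= delta, quadratic, linear)

# A 30 px outlier grows only linearly instead of quadratically
print(huber(np.array([0.5, 1.5, 30.0])))
```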
**Input:**
- 20+ synchronized multi-pose captures
- Camera images (arbitrary resolution, e.g., 1920×1440)
- Point cloud scans (e.g., Velodyne, RoarIndar, etc.)
- AprilTag board (8×11 grid; any tag size or grid)

**Output:**
```yaml
final_extrinsic.yaml:
  method: camera_pose_iterative_quality_filtered
  rotation_matrix: 3×3
  translation_vector: [x, y, z]
  euler_angles_deg: [roll, pitch, yaw]
  reprojection_error_metrics: {...}
  fitness_scores: {...}
  quality_distribution: {...}
  calibration_info: {...}
```

![2D-3D Calibration Output](src/gui/assets/2d_3d_overlay_output.jpg)

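Consuming the result downstream usually means combining `rotation_matrix` and `translation_vector` into one homogeneous transform. A sketch against the schema above (field names taken from this README; the helper names are illustrative, not an official package API):

```python
import numpy as np
import yaml

def extrinsic_to_matrix(data: dict) -> np.ndarray:
    """Build a 4x4 homogeneous transform from the YAML fields above."""
    T = np.eye(4)
    T[:3, :3] = np.asarray(data["rotation_matrix"], dtype=float)
    T[:3, 3] = np.asarray(data["translation_vector"], dtype=float)
    return T

def load_extrinsic(path: str) -> np.ndarray:
    """Read final_extrinsic.yaml and return the 4x4 transform."""
    with open(path) as f:
        return extrinsic_to_matrix(yaml.safe_load(f))
```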
**When to Use:**
- AprilTag board is available
- Corner-based correspondences required
- Multi-pose dataset available (20+ poses)
- Need comprehensive error metrics
- Want automatic normalization and single-pass completion
- Standard camera+LiDAR robotics applications

**GUI Usage:**
```
GUI → 2D-3D Tab → Select data directory → Configure quality filter → Run → View metrics
```

| GUI Interface | Alignment Visualization |
|---|---|
| ![2D-3D Calibration](src/gui/assets/3d_or_2d_calibrate-page.png) | ![Alignment Overlay](src/gui/assets/3d-or-2d-overlay_page.png) |

---

## 💻 Command-Line Tools

### Data Extraction Tools
```bash
# Synchronized extraction from ROS2 bag
cd src
python3 3d_3d_data_extractor.py \
    --bag path-to-bag \
    --output ../calib_data/captures \
    --num-poses 20 \
    --sync-tolerance 100

# Extract camera images only (use 3d_3d_data_extractor.py for the 3D-3D mode)
python3 2d_3d_data_extractor.py \
    --bag path-to-bag \
    --type camera \
    --output ../calib_data/camera_images

# Extract LiDAR point clouds only
python3 2d_3d_data_extractor.py \
    --bag path-to-bag \
    --type lidar \
    --output ../calib_data/lidar_clouds

# Single-LiDAR data extraction
python3 single_data_extractor.py \
    --bag path-to-bag
```
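The `--sync-tolerance 100` pairing can be pictured as nearest-timestamp matching with a cutoff in milliseconds. A simplified illustration of that idea (the extractor's real logic may differ; the function name is hypothetical):

```python
def match_nearest(cam_stamps, lidar_stamps, tolerance_ms=100.0):
    """Pair each camera stamp (ms) with the nearest LiDAR stamp within
    tolerance; stamps with no close partner are dropped."""
    pairs = []
    for c in cam_stamps:
        nearest = min(lidar_stamps, key=lambda l: abs(l - c))
        if abs(nearest - c) <= tolerance_ms:
            pairs.append((c, nearest))
    return pairs

# The 450 ms camera frame has no LiDAR scan within 100 ms, so it is dropped
print(match_nearest([0.0, 110.0, 450.0], [5.0, 100.0, 700.0]))
```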

### Calibration Tools
```bash
# 2D-3D calibration (AprilTag-based, recommended)
python3 2d_3d_calibrator.py \
    --data_root ../calib_data/multimodal_captures \
    --quality-filter 0.05 \
    --output final_extrinsic.yaml

# 3D-3D calibration (dense point cloud)
python3 3d_3d_calibrator.py \
    --camera_pcds ../calib_data/camera_pcd \
    --lidar_pcds ../calib_data/lidar_pcd \
    --output final_extrinsic.yaml

# Single-LiDAR registration
python3 single_lidar_calib.py \
    --source source.pcd \
    --target target.pcd \
    --output registration.yaml
```

### Visualization Tools
```bash
# Generate 3D-3D mode alignment overlays
python3 3d_3d_overlay.py \
    --data_root ../calib_data/multimodal_captures \
    --output alignment_viz/

# Overlay AprilTag detection for 2D-3D
python3 2d_3d_overlay.py \
    --data_root ../calib_data/multimodal_captures \
    --pose 1

# Generate/verify corner detections
python3 camtag_det.py \
    --data_root ../calib_data/multimodal_captures

python3 lidtag_det.py \
    --data_root ../calib_data/multimodal_captures
```

---

## 📊 Data Formats

### Directory Structure (Package-Created)

When using calibrate-suite, the following directory structure is created (the paths can be changed in the GUI):

```
calib_data/
└── multimodal_captures/              # Main calibration dataset
    ├── camera_intrinsics.yaml        # Camera K, D matrices
    ├── final_extrinsic.yaml          # Output extrinsic transformation
    ├── calibration_metrics.json      # Per-pose metrics
    ├── pose_1/
    │   ├── frame.jpg                 # Original camera image
    │   ├── anot_image.jpg            # Annotated with corner detections
    │   ├── board_plane.pcd           # Filtered/segmented LiDAR cloud
    │   ├── detections.yaml           # Camera and LiDAR corner detections
    │   └── metrics.yaml              # Per-pose quality metrics
    ├── pose_2/
    │   └── [same structure]
    └── alignment_viz/                # Visualization outputs
        ├── pose_1_overlay.jpg        # Corner reprojection overlay
        ├── pose_1_cloud.jpg          # Point cloud projection
        └── ...
```

### Board Geometry Configuration
```yaml
board_type: "AprilTag"
grid_dimensions: [8, 11]   # 8×11 tag grid; adjust to match your board
tag_size_m: 0.04           # Individual tag width
board_width_m: 0.609       # Total board width
board_height_m: 0.457      # Total board height
tag_spacing_m: 0.005       # Gap between tags (if any)
```
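A quick sanity check on a config like the one above is that the tag grid's footprint actually fits on the physical board. A hypothetical helper, assuming the first grid dimension counts rows (spanning the board height):

```python
def grid_footprint(n_tags: int, tag_size_m: float, tag_spacing_m: float) -> float:
    """Edge length spanned by n_tags tags plus the gaps between them."""
    return n_tags * tag_size_m + (n_tags - 1) * tag_spacing_m

rows, cols = 8, 11
height = grid_footprint(rows, 0.04, 0.005)   # 8 * 0.04 + 7 * 0.005  = 0.355
width = grid_footprint(cols, 0.04, 0.005)    # 11 * 0.04 + 10 * 0.005 = 0.49
assert height <= 0.457 and width <= 0.609    # grid fits, leaving margins
```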

---

## ⚠️ Cautions & Best Practices

### Data Collection Best Practices

✅ **DO:**
- Capture 20+ poses with the board at varying angles (0°-60° from camera normal)
- Ensure consistent, even lighting throughout the capture sequence
- Keep the AprilTag board planar and undamaged
- Verify sensor time synchronization (<100 ms offset)
- Calibrate camera intrinsics separately beforehand (if you are unsure of them)
- Document environmental conditions (lighting, reflectance, temperature)
- Perform a test calibration with a subset before the full dataset

✗ **DON'T:**
- Use fewer than 15 poses (insufficient statistical sampling; fewer may work, but results degrade)
- Allow the board to contact sensor brackets or cables (occlusion/deformation)
- Capture images with severe motion blur
- Mix board orientations or rotate the frame mid-capture
- Accept sparse LiDAR coverage or extreme reflectance outliers on the board
- Assume intrinsic parameters are accurate without verification
- Recalibrate unnecessarily (extrinsics typically remain stable)

### Quality Assessment

**Warning Signs:**
- High standard deviation in the error distribution (outliers present)
- Asymmetric error distribution (systematic bias exists)
- Rotational error > 60° (check mechanical alignment)
- All 20 poses accepted without filtering (quality filter too lenient)
- Non-identity corner permutations after normalization (check board orientation)

### AprilTag Board Setup

The calibrate-suite works with **any AprilTag grid configuration**, not limited to 8×11:
- Grid size: Configurable (e.g., 4×4, 5×5, 8×8)
- Tag size: Any size (adjust `config/board_geometry.yaml`)
- Board material: Any flat surface with AprilTags
- Spacing: Regular grid required for the PCA-based board frame

**Board must be:**
- Planar (flat, no warping)
- Rigidly mounted (no vibration)
- Visible to both camera and LiDAR simultaneously
- Not occluded or partially obscured

---

## 🔧 Troubleshooting

| Symptom | Probable Cause | Solution |
|---|---|---|
| **AprilTag not detected** | Poor lighting or high exposure time | Improve illumination, reduce exposure, check camera focus |
| **Few LiDAR corners detected (<2)** | Low point cloud density or reflectance | Move board closer, ensure LiDAR line-of-sight, check surface reflectance |
| **High reprojection error (>100 px)** | Time sync offset or incorrect intrinsics | Verify camera sync (<100 ms), recalibrate camera intrinsics |
| **Rotation error >90°** | Multiple outlier poses not filtered | Increase the quality_filter parameter (0.1 m), manually review poses |
| **Calibration fails to converge** | Dominant outliers or bad permutation search | Reduce huber_delta (1.5 px), check board orientation consistency |
| **Corner indices not normalized** | Permutation search failed to find a match | Verify board geometry in config, check corner detection quality |
| **All poses marked "poor"** | Board completely out of focus or occluded | Check camera focus, ensure board visibility, verify synchronization |
| **Memory error during processing** | Large point clouds not downsampled | Reduce voxel size or enable downsampling in config |
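When chasing the "high reprojection error" row, it helps to recompute the error yourself for a suspect pose. A minimal NumPy pinhole sketch (no lens distortion, which a real pipeline would also apply; all names here are illustrative):

```python
import numpy as np

def reprojection_errors(P_lidar, uv_detected, K, R, t):
    """Per-corner pixel error after projecting LiDAR-frame corners into the
    image with extrinsics (R, t) and pinhole intrinsics K."""
    P_cam = P_lidar @ R.T + t            # LiDAR frame -> camera frame
    proj = P_cam @ K.T                   # pinhole projection
    uv = proj[:, :2] / proj[:, 2:3]      # perspective divide
    return np.linalg.norm(uv - uv_detected, axis=1)

# Toy self-consistency check with identity extrinsics
K = np.array([[1000.0, 0.0, 960.0], [0.0, 1000.0, 720.0], [0.0, 0.0, 1.0]])
pts = np.array([[0.1, 0.0, 2.0], [0.0, 0.1, 2.0]])
uv = (pts @ K.T)[:, :2] / pts[:, 2:3]
errs = reprojection_errors(pts, uv, K, np.eye(3), np.zeros(3))
```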

### Common Command Errors

```bash
# Error: "ModuleNotFoundError: No module named 'cv2'"
# Solution: Install OpenCV
sudo apt install python3-opencv
# OR
pip install 'calibrate-suite[cv]'

# Error: "No module named 'open3d'"
# Solution: Install Open3D
sudo apt install python3-open3d
# OR
pip install 'calibrate-suite[3d]'

# Error: "PyQt5 import failed" (GUI)
# Solution: Install PyQt5
sudo apt install python3-pyqt5
# OR
pip install 'calibrate-suite[gui]'

# Error: "YAML parsing failed"
# Solution: Verify YAML format and reinstall
sudo apt install python3-yaml
pip install 'pyyaml>=6.0'
```

### REST API Server (Fleet Management)

> **Note:** The fleet server is a **dummy reference implementation** for demonstration purposes. It provides basic REST endpoints for uploading calibration results from multiple robots. You can test the upload functionality from the GUI by pointing it to your own server URL.

```bash
# Launching through the terminal
cd /home/jamih/calib_ws/src/calibrate-suite/src

# Method 1: Using entry point (if installed)
fleet-server

# Method 2: With custom host/port
fleet-server --host 0.0.0.0 --port 8080 --debug

# Method 3: Direct Python execution
python3 -m fleet_server.app

# Method 4: Direct script execution
python3 fleet_server/app.py

# Access dashboard: http://localhost:5000
# API endpoints:    http://localhost:5000/api/...
```

**Testing from GUI:**
1. In the calibrate-suite GUI, go to the Fleet tab
2. Enter your server URL (e.g., `http://localhost:5000` or your custom server address)
3. Upload calibration results to your own fleet management backend

![Fleet Upload Interface](src/gui/assets/2d-or-3d-fleet-upload.png)
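Uploading programmatically to a server like this is just an HTTP POST of the calibration result. A sketch of building the request body; the endpoint path, field names, and helper are illustrative only, so check your server's actual routes (e.g., in `fleet_server/app.py`):

```python
import json

def build_upload_payload(robot_id: str, extrinsic: dict) -> str:
    """JSON body for posting one robot's calibration result.
    Field names are hypothetical, not the package's fixed API."""
    return json.dumps({
        "robot_id": robot_id,
        "rotation_matrix": extrinsic["rotation_matrix"],
        "translation_vector": extrinsic["translation_vector"],
    })

payload = build_upload_payload("robot-01", {
    "rotation_matrix": [[1, 0, 0], [0, 1, 0], [0, 0, 1]],
    "translation_vector": [0.1, 0.2, 0.3],
})
# Then, e.g. (illustrative endpoint):
# requests.post("http://localhost:5000/api/upload", data=payload,
#               headers={"Content-Type": "application/json"})
```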

---

### Performance Optimization

If processing is slow:
```bash
# Reduce point cloud density:
# increase --voxel-size for faster processing (default 0.005)
# and widen --max-correspondence-dist to enlarge the search radius
python3 2d_3d_calibrator.py \
    --data_root ../calib_data/multimodal_captures \
    --voxel-size 0.01 \
    --max-correspondence-dist 0.2

# Reduce the number of poses (use the best 15 instead of 20)
python3 2d_3d_calibrator.py \
    --data_root ../calib_data/multimodal_captures \
    --max-poses 15
```

---

## ❓ FAQ

**Q: Which calibration framework should I use?**
A: Framework 3 (2D-3D AprilTag-based) is recommended for standard camera+LiDAR setups. Use Framework 2 for stereo+LiDAR or dense depth+LiDAR. Use Framework 1 for LiDAR self-calibration only.

**Q: Why do I need 20 poses minimum?**
A: Statistical robustness and noise reduction. Fewer poses provide insufficient sampling of the transformation space and make results sensitive to outliers.

**Q: Can I use a different board configuration?**
A: Yes. Modify `config/board_geometry.yaml` with your grid size and tag dimensions. The corner detection adapts to any regular AprilTag grid.

**Q: How do I integrate the calibration into my ROS2 pipeline?**
A: Load `final_extrinsic.yaml` into your tf2 broadcaster or calibration node. See the ROS2 documentation for tf2 static transforms.
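For the tf2 integration question: static transforms take a quaternion rather than a rotation matrix, and `scipy` (already a core dependency) handles the conversion. A sketch with example values only:

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Convert a YAML rotation matrix into the (x, y, z, w) quaternion that a
# ROS 2 tf2 static transform broadcaster expects.
R = np.array([[0.0, -1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])   # example: 90° about Z
qx, qy, qz, qw = Rotation.from_matrix(R).as_quat()
```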

**Q: What if I have a different LiDAR type?**
A: The point cloud processing is format-agnostic. As long as you have synchronized PCD files, the calibration works with any LiDAR (Velodyne, RoarIndar, OS1, etc.).

**Q: Can I run this in Docker?**
A: Yes. The package is Docker-ready. A Dockerfile and compose configuration are available in the repository.

**Q: How often should I recalibrate?**
A: Extrinsics typically remain stable for months. Recalibrate only if: (1) sensors were physically moved, (2) mechanical stress/vibration occurred, or (3) recalibration validation fails.

**Q: Does the package support real-time calibration?**
A: The current implementation is batch-based. Real-time streaming calibration is planned for future releases.

**Q: Can I export calibration for other tools?**
A: Yes. The output is standard YAML with a rotation matrix and translation vector, compatible with ROS2 tf2, OpenCV, and most robotics frameworks.

**Q: What is the typical calibration accuracy?**
A: Mean reprojection error is typically 30-50 px depending on sensor quality and data collection. Rotational error is < 5° for rigid platforms.

**Q: How do I handle multiple sensors on the same platform?**
A: Run separate calibrations for each camera+LiDAR pair. Combine results using platform geometry (e.g., manual mounting offsets).

---

## 📄 Version

**Package Version:** 0.1.0  
**License:** MIT