eridian 0.1.0__tar.gz

This diff represents the content of publicly available package versions that have been released to one of the supported registries. The information contained in this diff is provided for informational purposes only and reflects changes between package versions as they appear in their respective public registries.
eridian-0.1.0/LICENSE ADDED
@@ -0,0 +1,21 @@
+ MIT License
+
+ Copyright (c) 2025 Eeman Majumder
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
eridian-0.1.0/MANIFEST.in ADDED
@@ -0,0 +1,4 @@
+ include LICENSE
+ include README.md
+ include requirements.txt
+ recursive-include assets *.jpg *.png
eridian-0.1.0/PKG-INFO ADDED
@@ -0,0 +1,240 @@
+ Metadata-Version: 2.4
+ Name: eridian
+ Version: 0.1.0
+ Summary: Real-time 3D world reconstruction from a single camera
+ Author: Eeman Majumder
+ License-Expression: MIT
+ Project-URL: Homepage, https://github.com/Eeman1113/Eridian.
+ Project-URL: Repository, https://github.com/Eeman1113/Eridian.
+ Keywords: 3d,depth-estimation,point-cloud,monocular,reconstruction,slam
+ Classifier: Development Status :: 3 - Alpha
+ Classifier: Topic :: Scientific/Engineering :: Image Processing
+ Classifier: Topic :: Scientific/Engineering :: Visualization
+ Classifier: Programming Language :: Python :: 3
+ Requires-Python: >=3.10
+ Description-Content-Type: text/markdown
+ License-File: LICENSE
+ Requires-Dist: numpy
+ Requires-Dist: opencv-python
+ Requires-Dist: torch
+ Requires-Dist: transformers
+ Requires-Dist: pyvista
+ Requires-Dist: pillow
+ Dynamic: license-file
+
+ # Eridian
+
+ **Real-time 3D world reconstruction from a single camera.**
+
+ Eridian turns any webcam into a spatial scanner. It watches what you see, understands how far away everything is, tracks how you move, and builds a colored 3D map of your surroundings — all in real time, on a laptop, with no special hardware.
+
+ ![Python 3.10+](https://img.shields.io/badge/python-3.10%2B-blue)
+ ![Platform](https://img.shields.io/badge/platform-macOS%20%7C%20Linux-lightgrey)
+ ![License](https://img.shields.io/badge/license-MIT-green)
+
+ ---
+
+ ## Demo
+
+ ![Eridian 4-panel view](assets/demo_4panel.jpg)
+
+ > **Top-left:** Live camera feed | **Top-right:** Metric depth map | **Bottom-left:** Optical flow tracking | **Bottom-right:** Accumulated 3D point cloud
+
+ https://github.com/Eeman1113/Eridian./raw/main/output_video/eridian_demo.mp4
+
+ ---
+
+ ## What it does
+
+ Eridian takes a flat 2D video stream and reconstructs the 3D structure of the world from it. Every frame goes through four stages:
+
+ ### 1. Metric Depth Estimation
+
+ ![Depth map](assets/panel_depth.jpg)
+
+ A neural network ([Depth Anything V2 Metric](https://huggingface.co/depth-anything/Depth-Anything-V2-Metric-Indoor-Small-hf)) estimates the real-world distance in meters from the camera to every pixel in the frame. This isn't relative "closer vs. farther" — it outputs actual metric depth (e.g., "this wall is 2.3 meters away"). The depth is then smoothed with a bilateral filter to reduce noise while keeping sharp edges, and temporally stabilized so it doesn't flicker between frames.
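+
+ A minimal sketch of this stage, assuming the Hugging Face `transformers` depth-estimation pipeline; the filter parameters and EMA weight below are illustrative guesses, not Eridian's exact values:
+
+ ```python
+ import cv2
+ import numpy as np
+ from PIL import Image
+ from transformers import pipeline
+
+ # First model in the fallback chain documented below
+ depth_pipe = pipeline(
+     "depth-estimation",
+     model="depth-anything/Depth-Anything-V2-Metric-Indoor-Small-hf",
+ )
+
+ prev_depth = None  # previous smoothed map, for temporal stabilization
+
+ def estimate_depth(frame_bgr, alpha=0.7):
+     """Return a smoothed metric depth map (meters) for one BGR frame.
+     Sketch only: parameters are illustrative, not Eridian's exact values."""
+     global prev_depth
+     rgb = Image.fromarray(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
+     depth = depth_pipe(rgb)["predicted_depth"].squeeze().numpy().astype(np.float32)
+     depth = cv2.resize(depth, (frame_bgr.shape[1], frame_bgr.shape[0]))
+     # Edge-preserving smoothing: reduces noise, keeps depth discontinuities
+     depth = cv2.bilateralFilter(depth, d=5, sigmaColor=0.5, sigmaSpace=5.0)
+     # Exponential moving average so the map doesn't flicker between frames
+     if prev_depth is not None:
+         depth = alpha * depth + (1.0 - alpha) * prev_depth
+     prev_depth = depth
+     return depth
+ ```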
+
+ ### 2. Camera Motion Tracking
+
+ ![Optical flow](assets/panel_features.jpg)
+
+ Eridian tracks hundreds of corner features across consecutive frames using Lucas-Kanade optical flow. Each tracked point is lifted into 3D using the depth map, creating a set of known 3D-to-2D correspondences. These are fed into a PnP (Perspective-n-Point) solver that computes exactly how the camera moved between frames — both direction and distance, in real meters. A forward-backward consistency check eliminates bad tracks before they can corrupt the pose.
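+
+ The same chain can be sketched with OpenCV primitives; the intrinsic matrix `K`, the thresholds, and the guard values here are illustrative assumptions:
+
+ ```python
+ import cv2
+ import numpy as np
+
+ def track_pose(prev_gray, gray, depth, K, fb_thresh=1.0):
+     """Camera motion between two frames: GFTT + LK flow + PnP (a sketch)."""
+     # 1. Corner features in the previous frame
+     pts0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
+                                    qualityLevel=0.01, minDistance=8)
+     if pts0 is None:
+         return None, None
+     # 2. Lucas-Kanade flow, forward then backward for the consistency check
+     pts1, st, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts0, None)
+     back, st_b, _ = cv2.calcOpticalFlowPyrLK(gray, prev_gray, pts1, None)
+     fb_err = np.linalg.norm(pts0 - back, axis=2).ravel()
+     good = (st.ravel() == 1) & (st_b.ravel() == 1) & (fb_err < fb_thresh)
+     p0 = pts0[good].reshape(-1, 2)
+     p1 = pts1[good].reshape(-1, 2)
+     # 3. Lift previous-frame features to 3D using the metric depth map
+     fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
+     z = depth[p0[:, 1].astype(int), p0[:, 0].astype(int)]
+     ok_z = z > 0
+     obj = np.stack([(p0[ok_z, 0] - cx) * z[ok_z] / fx,
+                     (p0[ok_z, 1] - cy) * z[ok_z] / fy,
+                     z[ok_z]], axis=1).astype(np.float32)
+     if len(obj) < 6:
+         return None, None
+     # 4. PnP + RANSAC: rvec/tvec describe the inter-frame camera motion
+     ok, rvec, tvec, _ = cv2.solvePnPRansac(obj, p1[ok_z].astype(np.float32),
+                                            K, None)
+     return (rvec, tvec) if ok else (None, None)
+ ```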
+
+ ### 3. Intelligent Point Cloud Accumulation
+
+ ![3D Point Cloud](assets/panel_pointcloud.jpg)
+
+ Not every frame contributes to the 3D map. A keyframe system detects when the camera has moved enough (>8 cm or >5 degrees) to justify adding new geometry. When a keyframe fires, each pixel is back-projected from 2D into 3D world coordinates using the depth and the accumulated camera pose. Three quality filters run before any point is accepted (see the sketch after this list):
+
+ - **Depth edge rejection** — Removes "flying pixels" at object boundaries where depth is unreliable (detected via Sobel gradients)
+ - **Grazing angle rejection** — Removes points on surfaces viewed at steep angles (>75 degrees), where depth accuracy degrades
+ - **Voxel averaging** — Instead of keeping random points, all points within each 3 cm voxel are averaged together, producing cleaner surfaces
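+
+ A minimal sketch of the back-projection, keyframe gate, and voxel averaging under a pinhole camera model; the dictionary-based voxel store is an illustrative implementation, not necessarily Eridian's, and the edge and grazing-angle filters are omitted for brevity:
+
+ ```python
+ import numpy as np
+
+ VOXEL = 0.03  # 3 cm voxel size, from the filter list above
+
+ def is_keyframe(t_delta, rot_delta_deg):
+     """Keyframe gate: fire when the camera moved >8 cm or rotated >5 deg."""
+     return np.linalg.norm(t_delta) > 0.08 or rot_delta_deg > 5.0
+
+ def backproject(depth, color, K, cam_to_world):
+     """Lift every pixel into 3D world coordinates (pinhole model).
+     Edge and grazing-angle rejection are omitted for brevity."""
+     h, w = depth.shape
+     fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
+     u, v = np.meshgrid(np.arange(w), np.arange(h))
+     z = depth.ravel()
+     pts_cam = np.stack([(u.ravel() - cx) * z / fx,
+                         (v.ravel() - cy) * z / fy, z], axis=1)
+     pts_world = pts_cam @ cam_to_world[:3, :3].T + cam_to_world[:3, 3]
+     return pts_world, color.reshape(-1, 3)
+
+ def voxel_average(points, colors, store):
+     """Average all points that land in the same 3 cm voxel (dict-based)."""
+     keys = np.floor(points / VOXEL).astype(np.int64)
+     for key, p, c in zip(map(tuple, keys), points, colors):
+         n, sp, sc = store.get(key, (0, 0.0, 0.0))
+         store[key] = (n + 1, sp + p, sc + c)
+     # Each voxel's representative point/color is the running mean
+     return [(sp / n, sc / n) for n, sp, sc in store.values()]
+ ```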
+
+ ### 4. Live 3D Visualization
+
+ The accumulated point cloud is rendered in real time alongside the camera feed, depth map, and feature tracking view. The result is a colored 3D map that grows as you move the camera around.
+
+ ---
+
+ ## The bigger picture: spatial understanding for the visually impaired
+
+ Eridian is a building block toward something much more important.
+
+ **700 million people worldwide live with significant vision impairment.** For them, understanding the 3D layout of an unfamiliar room — where the furniture is, how far the doorway is, whether there's a step down ahead — requires either memorization, a cane, or another person.
+
+ A phone camera combined with real-time 3D reconstruction changes that equation fundamentally:
+
+ **Spatial awareness from a phone.** Eridian's depth estimation and 3D mapping pipeline runs on a single camera — the same one in every smartphone. This means a blind person's phone could continuously build a 3D model of their surroundings as they move through a space.
+
+ **What this enables (with further development):**
+
+ - **Obstacle detection and distance warnings** — "There's a table 1.5 meters ahead, slightly to your left." The metric depth map already provides this information at every pixel, every frame.
+
+ - **Room layout narration** — By accumulating the 3D map over time, the system can describe the overall structure of a space: "You're in a rectangular room, about 4 by 6 meters. The door is behind you to the right. There's a couch along the left wall."
+
+ - **Path planning** — The 3D point cloud can be analyzed to find clear walking paths and identify obstacles that a cane might miss — like a low table or an open cabinet door at head height.
+
+ - **Spatial memory** — Unlike a cane that only senses the immediate moment, a persistent 3D map remembers the entire space. If you've already scanned a room, the system knows what's there even when the camera isn't pointing at it.
+
+ - **Indoor navigation** — Combined with visual place recognition, the accumulated 3D maps could enable turn-by-turn navigation inside buildings where GPS doesn't work.
+
+ **Why single-camera monocular reconstruction matters for this mission:**
+
+ Existing spatial sensing tools for the visually impaired (like LiDAR-equipped devices) are expensive and limited to specific hardware. Eridian's approach works with any camera — including the $50 phone in someone's pocket. By solving the hard problems of monocular depth estimation and visual odometry in software, the hardware barrier drops to zero.
+
+ The current system is a proof of concept. The depth estimation is accurate enough to detect obstacles. The pose tracking is robust enough to build coherent maps. The filtering pipeline produces clean enough geometry to reason about room structure. What remains is building the accessibility layer on top: the natural language descriptions, the haptic feedback, the audio cues that translate a 3D point cloud into spatial understanding for someone who can't see it.
+
+ **Eridian is the perception layer. The next step is making it speak.**
+
+ ---
+
+ ## Pipeline at a glance
+
+ ![Early in the scan](assets/demo_early.jpg)
+ *Early in the scan — depth map is active, point cloud is starting to form*
+
+ ![Full reconstruction](assets/demo_late.jpg)
+ *After scanning — dense point cloud with room geometry visible*
+
+ ---
+
+ ## Install
+
+ ### Option 1: pip install (recommended)
+
+ ```bash
+ pip install eridian
+ eridian                 # launch with webcam
+ eridian --test          # run on test video
+ eridian --video v.mp4   # any video file
+ ```
+
+ ### Option 2: clone and run
+
+ ```bash
+ git clone https://github.com/Eeman1113/Eridian..git
+ cd Eridian.
+ ./run.sh
+ ```
+
+ `run.sh` handles everything: it creates a virtualenv, installs dependencies, runs component tests, and launches the mapper.
+
+ ### Manual setup
+
+ ```bash
+ python3 -m venv venv
+ source venv/bin/activate
+ pip install -r requirements.txt
+ pip install -e .
+ python test_components.py  # verify depth model + PLY export
+ python main.py             # launch
+ ```
+
+ ### Use as a library
+
+ ```python
+ from eridian import DepthEstimator, PointCloud, PoseEstimator
+
+ depth_est = DepthEstimator()
+ depth_map = depth_est.estimate(frame)
+ ```
+
+ ### Test mode (no camera needed)
+
+ ```bash
+ eridian --test                   # process data/video.mp4 headless
+ eridian --video path/to/vid.mp4  # any video file
+ python render_video.py           # render 4-panel demo video
+ ```
+
+ If no camera is detected, Eridian automatically falls back to `data/video.mp4`.
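+
+ A sketch of that fallback behavior (the helper name is hypothetical; Eridian's `CameraCapture` also handles auto-reconnect, which is omitted here):
+
+ ```python
+ import cv2
+
+ def open_capture(fallback="data/video.mp4"):
+     """Try the default webcam; fall back to the bundled test video.
+     Sketch only: auto-reconnect logic is not shown."""
+     cap = cv2.VideoCapture(0)  # index 0 = default camera
+     if not cap.isOpened():
+         cap.release()
+         cap = cv2.VideoCapture(fallback)
+     return cap
+ ```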
+
+ ## Requirements
+
+ - Python 3.10 – 3.13
+ - A webcam (built-in, USB, or macOS Continuity Camera) — or a video file for test mode
+ - No CUDA GPU needed — runs on the CPU, using Apple MPS acceleration when available
+
+ ## Controls
+
+ | Key | Action |
+ |-----|--------|
+ | `q` | Quit (in any OpenCV window) |
+ | `Ctrl+C` | Graceful shutdown with final save |
+ | Mouse | Orbit / zoom / pan in 3D window |
+
+ ## Output files
+
+ | Path | What |
+ |------|------|
+ | `splat/cloud_latest.ply` | Latest point cloud (saved every 10s) |
+ | `splat/cloud_YYYYMMDD_HHMMSS.ply` | Timestamped backups (every 60s) |
+ | `splat/cloud_final_*.ply` | Final save on shutdown |
+ | `splat/depth_frames/*.png` | 16-bit metric depth maps (every 5th frame) |
+ | `output_video/eridian_demo.mp4` | 4-panel demo video |
+ | `logs/mapper.log` | Full application log |
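+
+ For reference, metric depth is commonly stored in 16-bit PNGs by scaling to millimeters; the scaling in this sketch is an assumption, not a documented Eridian convention:
+
+ ```python
+ import cv2
+ import numpy as np
+
+ def save_depth_png(path, depth_m):
+     """Write metric depth (meters) as a 16-bit PNG. Millimeter scaling
+     is an assumption here, giving a 65.535 m max representable range."""
+     depth_mm = np.clip(depth_m * 1000.0, 0, 65535).astype(np.uint16)
+     cv2.imwrite(path, depth_mm)  # PNG natively supports 16-bit grayscale
+ ```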
+
+ ## Architecture
+
+ Single `main.py`, no external config, no separate processes:
+
+ ```
+ CameraCapture — OpenCV with auto-reconnect + video fallback
+ DepthEstimator — Depth Anything V2 Metric + bilateral filter + temporal EMA
+ PoseEstimator — GFTT corners + LK optical flow + PnP (solvePnPRansac)
+ PointCloud — Edge/normal filtering + keyframe gating + voxel averaging
+ Visualizer3D — PyVista non-blocking renderer
+ save_ply() — Binary PLY writer
+ WorldMapper — Main loop with keyframe system
+ ```
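+
+ A sketch of what a binary PLY writer like `save_ply()` boils down to, matching the "binary little-endian PLY with vertex colors" format named below; the exact field layout is an assumption:
+
+ ```python
+ import numpy as np
+
+ def save_ply(path, points, colors):
+     """Write an Nx3 float32 point cloud plus Nx3 uint8 colors as
+     binary little-endian PLY (field layout is an assumption)."""
+     header = (
+         "ply\n"
+         "format binary_little_endian 1.0\n"
+         f"element vertex {len(points)}\n"
+         "property float x\nproperty float y\nproperty float z\n"
+         "property uchar red\nproperty uchar green\nproperty uchar blue\n"
+         "end_header\n"
+     )
+     vertex = np.zeros(len(points),
+                       dtype=[("xyz", "<f4", 3), ("rgb", "u1", 3)])
+     vertex["xyz"] = points
+     vertex["rgb"] = colors
+     with open(path, "wb") as f:
+         f.write(header.encode("ascii"))
+         f.write(vertex.tobytes())  # packed x,y,z,r,g,b per vertex
+ ```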
+
+ ## Depth model fallback chain
+
+ Eridian tries metric models first (real meters), then falls back to relative ones (see the sketch after this list):
+
+ 1. `depth-anything/Depth-Anything-V2-Metric-Indoor-Small-hf` (metric)
+ 2. `depth-anything/Depth-Anything-V2-Metric-Outdoor-Small-hf` (metric)
+ 3. `depth-anything/Depth-Anything-V2-Small-hf` (relative)
+ 4. `Intel/dpt-swinv2-tiny-256` (relative)
+ 5. `Intel/dpt-hybrid-midas` (relative)
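+
+ The chain amounts to a try-in-order loading loop; a sketch, assuming the `transformers` pipeline API:
+
+ ```python
+ from transformers import pipeline
+
+ FALLBACK_MODELS = [  # mirrors the ordered list above: (model id, is_metric)
+     ("depth-anything/Depth-Anything-V2-Metric-Indoor-Small-hf", True),
+     ("depth-anything/Depth-Anything-V2-Metric-Outdoor-Small-hf", True),
+     ("depth-anything/Depth-Anything-V2-Small-hf", False),
+     ("Intel/dpt-swinv2-tiny-256", False),
+     ("Intel/dpt-hybrid-midas", False),
+ ]
+
+ def load_depth_model():
+     """Return the first loadable depth pipeline plus an is_metric flag."""
+     for model_id, is_metric in FALLBACK_MODELS:
+         try:
+             return pipeline("depth-estimation", model=model_id), is_metric
+         except Exception:  # download or compatibility failure: try the next
+             continue
+     raise RuntimeError("no depth model could be loaded")
+ ```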
+
+ ## Performance
+
+ On Apple M-series (MPS):
+ - Depth inference: ~5 FPS
+ - Optical flow tracking: <3 ms per frame
+ - PnP pose solve: <1 ms
+ - Point filtering + accumulation: ~5 ms per frame
+ - Total pipeline: ~5 FPS end to end
+
+ ## Viewing PLY files
+
+ The `.ply` files Eridian produces can be opened in:
+ - [MeshLab](https://www.meshlab.net/)
+ - [CloudCompare](https://www.danielgm.net/cc/)
+ - Blender (File > Import > PLY)
+ - Any viewer supporting binary little-endian PLY with vertex colors
+
+ ## License
+
+ MIT
eridian-0.1.0/README.md ADDED
@@ -0,0 +1,216 @@
+ # Eridian
+
+ **Real-time 3D world reconstruction from a single camera.**
+
+ Eridian turns any webcam into a spatial scanner. It watches what you see, understands how far away everything is, tracks how you move, and builds a colored 3D map of your surroundings — all in real time, on a laptop, with no special hardware.
+
+ ![Python 3.10+](https://img.shields.io/badge/python-3.10%2B-blue)
+ ![Platform](https://img.shields.io/badge/platform-macOS%20%7C%20Linux-lightgrey)
+ ![License](https://img.shields.io/badge/license-MIT-green)
+
+ ---
+
+ ## Demo
+
+ ![Eridian 4-panel view](assets/demo_4panel.jpg)
+
+ > **Top-left:** Live camera feed | **Top-right:** Metric depth map | **Bottom-left:** Optical flow tracking | **Bottom-right:** Accumulated 3D point cloud
+
+ https://github.com/Eeman1113/Eridian./raw/main/output_video/eridian_demo.mp4
+
+ ---
+
+ ## What it does
+
+ Eridian takes a flat 2D video stream and reconstructs the 3D structure of the world from it. Every frame goes through four stages:
+
+ ### 1. Metric Depth Estimation
+
+ ![Depth map](assets/panel_depth.jpg)
+
+ A neural network ([Depth Anything V2 Metric](https://huggingface.co/depth-anything/Depth-Anything-V2-Metric-Indoor-Small-hf)) estimates the real-world distance in meters from the camera to every pixel in the frame. This isn't relative "closer vs. farther" — it outputs actual metric depth (e.g., "this wall is 2.3 meters away"). The depth is then smoothed with a bilateral filter to reduce noise while keeping sharp edges, and temporally stabilized so it doesn't flicker between frames.
+
+ ### 2. Camera Motion Tracking
+
+ ![Optical flow](assets/panel_features.jpg)
+
+ Eridian tracks hundreds of corner features across consecutive frames using Lucas-Kanade optical flow. Each tracked point is lifted into 3D using the depth map, creating a set of known 3D-to-2D correspondences. These are fed into a PnP (Perspective-n-Point) solver that computes exactly how the camera moved between frames — both direction and distance, in real meters. A forward-backward consistency check eliminates bad tracks before they can corrupt the pose.
+
+ ### 3. Intelligent Point Cloud Accumulation
+
+ ![3D Point Cloud](assets/panel_pointcloud.jpg)
+
+ Not every frame contributes to the 3D map. A keyframe system detects when the camera has moved enough (>8 cm or >5 degrees) to justify adding new geometry. When a keyframe fires, each pixel is back-projected from 2D into 3D world coordinates using the depth and the accumulated camera pose. Three quality filters run before any point is accepted:
+
+ - **Depth edge rejection** — Removes "flying pixels" at object boundaries where depth is unreliable (detected via Sobel gradients)
+ - **Grazing angle rejection** — Removes points on surfaces viewed at steep angles (>75 degrees), where depth accuracy degrades
+ - **Voxel averaging** — Instead of keeping random points, all points within each 3 cm voxel are averaged together, producing cleaner surfaces
+
+ ### 4. Live 3D Visualization
+
+ The accumulated point cloud is rendered in real time alongside the camera feed, depth map, and feature tracking view. The result is a colored 3D map that grows as you move the camera around.
+
+ ---
+
+ ## The bigger picture: spatial understanding for the visually impaired
+
+ Eridian is a building block toward something much more important.
+
+ **700 million people worldwide live with significant vision impairment.** For them, understanding the 3D layout of an unfamiliar room — where the furniture is, how far the doorway is, whether there's a step down ahead — requires either memorization, a cane, or another person.
+
+ A phone camera combined with real-time 3D reconstruction changes that equation fundamentally:
+
+ **Spatial awareness from a phone.** Eridian's depth estimation and 3D mapping pipeline runs on a single camera — the same one in every smartphone. This means a blind person's phone could continuously build a 3D model of their surroundings as they move through a space.
+
+ **What this enables (with further development):**
+
+ - **Obstacle detection and distance warnings** — "There's a table 1.5 meters ahead, slightly to your left." The metric depth map already provides this information at every pixel, every frame.
+
+ - **Room layout narration** — By accumulating the 3D map over time, the system can describe the overall structure of a space: "You're in a rectangular room, about 4 by 6 meters. The door is behind you to the right. There's a couch along the left wall."
+
+ - **Path planning** — The 3D point cloud can be analyzed to find clear walking paths and identify obstacles that a cane might miss — like a low table or an open cabinet door at head height.
+
+ - **Spatial memory** — Unlike a cane that only senses the immediate moment, a persistent 3D map remembers the entire space. If you've already scanned a room, the system knows what's there even when the camera isn't pointing at it.
+
+ - **Indoor navigation** — Combined with visual place recognition, the accumulated 3D maps could enable turn-by-turn navigation inside buildings where GPS doesn't work.
+
+ **Why single-camera monocular reconstruction matters for this mission:**
+
+ Existing spatial sensing tools for the visually impaired (like LiDAR-equipped devices) are expensive and limited to specific hardware. Eridian's approach works with any camera — including the $50 phone in someone's pocket. By solving the hard problems of monocular depth estimation and visual odometry in software, the hardware barrier drops to zero.
+
+ The current system is a proof of concept. The depth estimation is accurate enough to detect obstacles. The pose tracking is robust enough to build coherent maps. The filtering pipeline produces clean enough geometry to reason about room structure. What remains is building the accessibility layer on top: the natural language descriptions, the haptic feedback, the audio cues that translate a 3D point cloud into spatial understanding for someone who can't see it.
+
+ **Eridian is the perception layer. The next step is making it speak.**
+
+ ---
+
+ ## Pipeline at a glance
+
+ ![Early in the scan](assets/demo_early.jpg)
+ *Early in the scan — depth map is active, point cloud is starting to form*
+
+ ![Full reconstruction](assets/demo_late.jpg)
+ *After scanning — dense point cloud with room geometry visible*
+
+ ---
+
+ ## Install
+
+ ### Option 1: pip install (recommended)
+
+ ```bash
+ pip install eridian
+ eridian                 # launch with webcam
+ eridian --test          # run on test video
+ eridian --video v.mp4   # any video file
+ ```
+
+ ### Option 2: clone and run
+
+ ```bash
+ git clone https://github.com/Eeman1113/Eridian..git
+ cd Eridian.
+ ./run.sh
+ ```
+
+ `run.sh` handles everything: it creates a virtualenv, installs dependencies, runs component tests, and launches the mapper.
+
+ ### Manual setup
+
+ ```bash
+ python3 -m venv venv
+ source venv/bin/activate
+ pip install -r requirements.txt
+ pip install -e .
+ python test_components.py  # verify depth model + PLY export
+ python main.py             # launch
+ ```
+
+ ### Use as a library
+
+ ```python
+ from eridian import DepthEstimator, PointCloud, PoseEstimator
+
+ depth_est = DepthEstimator()
+ depth_map = depth_est.estimate(frame)
+ ```
+
+ ### Test mode (no camera needed)
+
+ ```bash
+ eridian --test                   # process data/video.mp4 headless
+ eridian --video path/to/vid.mp4  # any video file
+ python render_video.py           # render 4-panel demo video
+ ```
+
+ If no camera is detected, Eridian automatically falls back to `data/video.mp4`.
+
+ ## Requirements
+
+ - Python 3.10 – 3.13
+ - A webcam (built-in, USB, or macOS Continuity Camera) — or a video file for test mode
+ - No CUDA GPU needed — runs on the CPU, using Apple MPS acceleration when available
+
+ ## Controls
+
+ | Key | Action |
+ |-----|--------|
+ | `q` | Quit (in any OpenCV window) |
+ | `Ctrl+C` | Graceful shutdown with final save |
+ | Mouse | Orbit / zoom / pan in 3D window |
+
+ ## Output files
+
+ | Path | What |
+ |------|------|
+ | `splat/cloud_latest.ply` | Latest point cloud (saved every 10s) |
+ | `splat/cloud_YYYYMMDD_HHMMSS.ply` | Timestamped backups (every 60s) |
+ | `splat/cloud_final_*.ply` | Final save on shutdown |
+ | `splat/depth_frames/*.png` | 16-bit metric depth maps (every 5th frame) |
+ | `output_video/eridian_demo.mp4` | 4-panel demo video |
+ | `logs/mapper.log` | Full application log |
+
+ ## Architecture
+
+ Single `main.py`, no external config, no separate processes:
+
+ ```
+ CameraCapture — OpenCV with auto-reconnect + video fallback
+ DepthEstimator — Depth Anything V2 Metric + bilateral filter + temporal EMA
+ PoseEstimator — GFTT corners + LK optical flow + PnP (solvePnPRansac)
+ PointCloud — Edge/normal filtering + keyframe gating + voxel averaging
+ Visualizer3D — PyVista non-blocking renderer
+ save_ply() — Binary PLY writer
+ WorldMapper — Main loop with keyframe system
+ ```
+
+ ## Depth model fallback chain
+
+ Eridian tries metric models first (real meters), then falls back to relative:
+
+ 1. `depth-anything/Depth-Anything-V2-Metric-Indoor-Small-hf` (metric)
+ 2. `depth-anything/Depth-Anything-V2-Metric-Outdoor-Small-hf` (metric)
+ 3. `depth-anything/Depth-Anything-V2-Small-hf` (relative)
+ 4. `Intel/dpt-swinv2-tiny-256` (relative)
+ 5. `Intel/dpt-hybrid-midas` (relative)
+
+ ## Performance
+
+ On Apple M-series (MPS):
+ - Depth inference: ~5 FPS
+ - Optical flow tracking: <3 ms per frame
+ - PnP pose solve: <1 ms
+ - Point filtering + accumulation: ~5 ms per frame
+ - Total pipeline: ~5 FPS end to end
+
+ ## Viewing PLY files
+
+ The `.ply` files Eridian produces can be opened in:
+ - [MeshLab](https://www.meshlab.net/)
+ - [CloudCompare](https://www.danielgm.net/cc/)
+ - Blender (File > Import > PLY)
+ - Any viewer supporting binary little-endian PLY with vertex colors
+
+ ## License
+
+ MIT
Binary file
Binary file
Binary file
Binary file
Binary file
Binary file
eridian-0.1.0/eridian/__init__.py ADDED
@@ -0,0 +1,25 @@
+ """Eridian - Real-time monocular 3D point cloud reconstruction."""
+
+ from eridian.main import (
+     CameraCapture,
+     DepthEstimator,
+     PoseEstimator,
+     PointCloud,
+     Visualizer3D,
+     WorldMapper,
+     save_ply,
+     cli_main,
+ )
+
+ __version__ = "0.1.0"
+
+ __all__ = [
+     "CameraCapture",
+     "DepthEstimator",
+     "PoseEstimator",
+     "PointCloud",
+     "Visualizer3D",
+     "WorldMapper",
+     "save_ply",
+     "cli_main",
+ ]