A work-in-progress SLAM (Simultaneous Localization and Mapping) system implemented in Rust.
Dependencies:
- OpenCV
- nalgebra
- serde
- rerun (optional, for 3D visualization)
- clap (for command-line argument parsing)
Build with Cargo:
cargo build --release
Run visual odometry example:
# Use default KITTI intrinsics
cargo run --example visual_odometry /path/to/video.mp4
# Specify custom camera intrinsics
cargo run --example visual_odometry /path/to/video.mp4 -- --fx 500 --fy 500 --cx 320 --cy 240
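The four flags describe a pinhole camera model. As a minimal sketch (illustrative only, not this repo's code), here is how fx/fy/cx/cy map a camera-frame 3D point to pixel coordinates:

```rust
// Pinhole projection: how --fx/--fy/--cx/--cy turn a camera-frame
// 3D point (x, y, z) into pixel coordinates (u, v).
fn project(fx: f64, fy: f64, cx: f64, cy: f64, p: [f64; 3]) -> (f64, f64) {
    let (x, y, z) = (p[0], p[1], p[2]);
    (fx * x / z + cx, fy * y / z + cy)
}

fn main() {
    // KITTI-like intrinsics, matching the point_cloud example below.
    let (u, v) = project(718.856, 718.856, 607.1928, 185.2157, [1.0, 0.5, 10.0]);
    println!("pixel = ({:.2}, {:.2})", u, v);
}
```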
Run point cloud generation with real-time 3D visualization (Rerun):
# With Rerun 3D viewer (shows map, trajectory, matches, video in real-time!)
cargo run --example point_cloud --features rerun /path/to/video.mp4 -- --rerun
# Or save to PLY file (default, no Rerun)
cargo run --example point_cloud /path/to/video.mp4 -- --save-ply
# With custom camera intrinsics
cargo run --example point_cloud --features rerun /path/to/video.mp4 -- --rerun --fx 718.856 --fy 718.856 --cx 607.1928 --cy 185.2157
Run feature detection visualization:
cargo run --example visualize_features /path/to/video.mp4
Modules:
- feature: ORB feature detection and matching
- odometry: Camera intrinsics, pose estimation, trajectory tracking
- mapping: Keyframe selection, 3D point triangulation, map points, bundle adjustment
Examples:
- visualize_features: Real-time feature detection and matching visualization
- visual_odometry: Full VO pipeline with trajectory tracking and visualization
- point_cloud: 3D point cloud reconstruction with triangulation (use --features rerun and --rerun for Rerun visualization)
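The trajectory tracking in visual_odometry amounts to chaining per-frame relative poses into a running global pose. A self-contained sketch of that composition (plain arrays instead of nalgebra types; all names here are illustrative, not this crate's API):

```rust
// Trajectory tracking sketch: the camera's global pose is the running
// product of per-frame relative transforms, T_world = T_0 * T_1 * ...
// Poses are 4x4 homogeneous matrices, row-major.
type Mat4 = [[f64; 4]; 4];

fn identity() -> Mat4 {
    let mut m = [[0.0; 4]; 4];
    for i in 0..4 { m[i][i] = 1.0; }
    m
}

fn matmul(a: &Mat4, b: &Mat4) -> Mat4 {
    let mut c = [[0.0; 4]; 4];
    for i in 0..4 {
        for j in 0..4 {
            for k in 0..4 { c[i][j] += a[i][k] * b[k][j]; }
        }
    }
    c
}

// Translation-only relative pose, for illustration; real VO also
// estimates a rotation for each frame pair.
fn translation(x: f64, y: f64, z: f64) -> Mat4 {
    let mut m = identity();
    m[0][3] = x; m[1][3] = y; m[2][3] = z;
    m
}

fn main() {
    let mut pose = identity();
    // Two frames, each moving 0.5 units forward along +z.
    for _ in 0..2 {
        pose = matmul(&pose, &translation(0.0, 0.0, 0.5));
    }
    // Accumulated forward translation ends up in pose[2][3].
    println!("camera z = {}", pose[2][3]);
}
```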
See TODO for development status.
- Why is the map sparse?
- This is a feature-based system using ORB features (~1000-2000 per frame). We only triangulate at corner-like features, not every pixel. This is similar to ORB-SLAM.
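To illustrate what matching those ORB features involves: ORB descriptors are 256-bit binary strings compared by Hamming distance. A minimal brute-force matcher sketch (hypothetical helper names, not this repo's actual code):

```rust
// ORB descriptors are 32 bytes (256 bits); similarity is the Hamming
// distance, i.e. the number of differing bits.
fn hamming(a: &[u8; 32], b: &[u8; 32]) -> u32 {
    a.iter().zip(b).map(|(x, y)| (x ^ y).count_ones()).sum()
}

// Brute-force nearest-neighbour match of one descriptor against a set;
// returns (index, distance) of the best match.
fn best_match(query: &[u8; 32], train: &[[u8; 32]]) -> Option<(usize, u32)> {
    train.iter().enumerate()
        .map(|(i, d)| (i, hamming(query, d)))
        .min_by_key(|&(_, dist)| dist)
}

fn main() {
    let q = [0u8; 32];
    let train = [[0xFFu8; 32], [0u8; 32]];
    let (idx, dist) = best_match(&q, &train).unwrap();
    println!("best match: index {idx}, distance {dist}");
}
```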
- How can I make the map denser?
- For dense reconstruction, you need:
- Dense/semi-dense tracking (all high-gradient pixels)
- Depth estimation/fusion
- More computational resources
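To make the contrast concrete: dense reconstruction unprojects every pixel of a depth map, rather than only triangulated corner features. A minimal back-projection sketch (assumes a per-pixel depth map is already available, which this repo does not currently produce):

```rust
// Dense back-projection: given a depth map and pinhole intrinsics,
// unproject every pixel (u, v) into a camera-frame 3D point. This is
// the "every pixel" counterpart to sparse feature triangulation.
fn unproject(fx: f64, fy: f64, cx: f64, cy: f64, u: f64, v: f64, depth: f64) -> [f64; 3] {
    [(u - cx) * depth / fx, (v - cy) * depth / fy, depth]
}

fn main() {
    // Tiny 2x2 "depth map" for illustration; real depth maps come
    // from stereo or multi-view depth estimation (not shown here).
    let depth = [[2.0, 2.0], [2.1, 2.1]];
    let (fx, fy, cx, cy) = (500.0, 500.0, 320.0, 240.0);
    let mut cloud = Vec::new();
    for (v, row) in depth.iter().enumerate() {
        for (u, &d) in row.iter().enumerate() {
            cloud.push(unproject(fx, fy, cx, cy, u as f64, v as f64, d));
        }
    }
    // One 3D point per pixel, instead of one per ORB corner.
    println!("{} points", cloud.len());
}
```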