LiDAR-Visual-Inertial Odometry with a Unified Representation
Semester/Master's Project
The goal of this project is to develop a lidar-visual-inertial odometry approach that integrates visual and lidar measurements into a single unified representation.
Figure: The output of FAST-LIVO2 [1], a state-of-the-art LiDAR-Visual-Inertial mapping approach.
Background
LiDAR-Visual-Inertial odometry approaches [1-3] aim to overcome the limitations of the individual sensing modalities by estimating a pose from heterogeneous measurements: lidar-inertial odometry often diverges in environments with degenerate geometric structure, while visual-inertial odometry can diverge in environments with uniform texture. Many existing lidar-visual-inertial odometry approaches run independent lidar-inertial and visual-inertial pipelines [2-3] and combine their odometry estimates in a joint optimisation to obtain a single pose estimate. Such approaches achieve robust pose estimation in degenerate environments, but they often underperform lidar-inertial or visual-inertial methods in non-degenerate scenarios because of the complexity of maintaining and combining odometry estimates from multiple representations.
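For intuition, the loosely coupled scheme can be pictured as a weighted fusion of the pose estimates produced by the two pipelines. The C++ sketch below is a minimal illustration under simplifying assumptions, not code from any of the cited systems: the OdomEstimate type, the scalar confidence weights, and the slerp-based rotation fusion are placeholders, whereas the referenced methods fuse full covariances inside a filter or factor graph.

```cpp
// Minimal sketch of loosely coupled pose fusion (hypothetical types, not the
// API of [1-3]): each pipeline reports a pose plus a scalar confidence weight.
#include <Eigen/Dense>
#include <Eigen/Geometry>
#include <iostream>

struct OdomEstimate {
  Eigen::Quaterniond rotation;   // body-to-world rotation
  Eigen::Vector3d translation;   // body position in the world frame
  double weight;                 // confidence, e.g. inverse trace of the covariance
};

// Fuse two pose estimates: weighted mean of the translations and a
// weighted spherical interpolation of the rotations.
OdomEstimate fuse(const OdomEstimate& lio, const OdomEstimate& vio) {
  const double w_sum = lio.weight + vio.weight;
  const double alpha = vio.weight / w_sum;  // fraction of trust placed in the VIO estimate
  OdomEstimate fused;
  fused.translation =
      (lio.weight * lio.translation + vio.weight * vio.translation) / w_sum;
  fused.rotation = lio.rotation.slerp(alpha, vio.rotation);
  fused.weight = w_sum;
  return fused;
}

int main() {
  OdomEstimate lio{Eigen::Quaterniond::Identity(), Eigen::Vector3d(1.00, 0.00, 0.00), 4.0};
  OdomEstimate vio{Eigen::Quaterniond(Eigen::AngleAxisd(0.05, Eigen::Vector3d::UnitZ())),
                   Eigen::Vector3d(1.05, -0.02, 0.01), 1.0};
  const OdomEstimate fused = fuse(lio, vio);
  std::cout << "fused translation: " << fused.translation.transpose() << "\n";
  return 0;
}
```

The duplication visible even in this toy example (two pipelines, two maps, one fusion step) is the overhead that a single unified representation aims to remove.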
Description
The goal of this project is to develop a lidar-visual-inertial odometry approach that integrates visual and lidar measurements into a single unified representation. The starting point, inspired by FAST-LIVO2 [1], will be to investigate methods for efficiently combining visual patches from camera images with a set of geometric primitives extracted from FAST-LIO2 [4], a lidar-inertial odometry pipeline. The performance of the resulting approach will be evaluated in comparison with existing lidar-visual-inertial odometry approaches.
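As a concrete, hypothetical illustration of what a unified representation could look like, the C++ sketch below attaches a reference image patch to a geometric primitive maintained by the lidar-inertial pipeline, so that one map element can serve both a point-to-plane residual and a photometric residual. All names (UnifiedMapElement, kPatchSize, the residual functions) and the nearest-neighbour patch comparison are assumptions for illustration, not the FAST-LIVO2 or FAST-LIO2 APIs; a real implementation would warp the patch, compensate exposure, and use sub-pixel interpolation.

```cpp
// Compilable sketch (hypothetical types, not the FAST-LIVO2 API) of a unified
// map element: a plane primitive from the lidar-inertial pipeline that also
// stores a reference image patch, so a single structure feeds both the
// geometric and the photometric residuals.
#include <Eigen/Dense>
#include <array>
#include <vector>

constexpr int kPatchSize = 8;  // assumed 8x8 pixel patch, as in direct visual methods

struct UnifiedMapElement {
  Eigen::Vector3d point;                             // point on the primitive, world frame
  Eigen::Vector3d normal;                            // plane normal, world frame
  std::array<float, kPatchSize * kPatchSize> patch;  // reference intensities
};

// Geometric residual: signed distance of a new lidar point to the primitive's plane.
double PointToPlaneResidual(const UnifiedMapElement& e,
                            const Eigen::Vector3d& lidar_point_world) {
  return e.normal.dot(lidar_point_world - e.point);
}

// Photometric error: project the primitive into the current image and compare
// the stored patch with the observed intensities (nearest-neighbour lookup here).
double PhotometricError(const UnifiedMapElement& e,
                        const Eigen::Matrix3d& R_cw, const Eigen::Vector3d& t_cw,
                        const Eigen::Matrix3d& K,
                        const std::vector<float>& image, int width, int height) {
  const Eigen::Vector3d p_cam = R_cw * e.point + t_cw;  // world point in the camera frame
  if (p_cam.z() <= 0.0) return 0.0;                      // behind the camera: no constraint
  const Eigen::Vector3d uv = K * (p_cam / p_cam.z());    // pixel coordinates (u, v, 1)
  double error = 0.0;
  for (int dy = 0; dy < kPatchSize; ++dy) {
    for (int dx = 0; dx < kPatchSize; ++dx) {
      const int u = static_cast<int>(uv.x()) + dx - kPatchSize / 2;
      const int v = static_cast<int>(uv.y()) + dy - kPatchSize / 2;
      if (u < 0 || v < 0 || u >= width || v >= height) continue;
      const double diff = image[v * width + u] - e.patch[dy * kPatchSize + dx];
      error += diff * diff;  // sum of squared intensity differences over the patch
    }
  }
  return error;
}
```

The intended design benefit is that both residual types read from the same map element, so the system avoids maintaining and reconciling separate lidar and visual maps.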
Work Packages
- Literature review of work on lidar-visual-inertial odometry
- Develop a lidar-visual-inertial odometry approach with a single unified representation
- Evaluate the performance of the approach in comparison with existing work
Requirements
- Experience with C++ and ROS
References
- [1] C. Zheng et al., “FAST-LIVO2: Fast, Direct LiDAR-Inertial-Visual Odometry,” IEEE Transactions on Robotics, 2024.
- [2] J. Lin and F. Zhang, “R3LIVE++: A Robust, Real-Time, Radiance Reconstruction Package With a Tightly-Coupled LiDAR-Inertial-Visual State Estimator,” IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024.
- [3] T. Shan, B. Englot, C. Ratti, and D. Rus, “LVI-SAM: Tightly-coupled Lidar-Visual-Inertial Odometry via Smoothing and Mapping,” in IEEE International Conference on Robotics and Automation, 2021.
- [4] W. Xu, Y. Cai, D. He, J. Lin, and F. Zhang, “FAST-LIO2: Fast Direct LiDAR-inertial Odometry,” arXiv, 2021.