Abstract
Simultaneous localization and mapping, a critical technology for enabling the autonomous driving of vehicles and mobile robots, increasingly incorporates multi-sensor configurations. Inertial measurement units (IMUs), which measure acceleration and angular velocity, are widely utilized for motion estimation due to their cost efficiency. However, the inherent noise in IMU measurements necessitates the integration of additional sensors to facilitate spatial understanding for mapping. Visual–inertial odometry (VIO) is a prominent approach that combines cameras with IMUs, offering high spatial resolution while maintaining cost-effectiveness. In this paper, we introduce our uncertainty-aware depth network (UD-Net), which is designed to estimate both depth and uncertainty maps. We propose a novel loss function for training UD-Net and filter out unreliable depth values based on the uncertainty maps to improve VIO performance. Experiments were conducted on the KITTI dataset and our custom dataset acquired from various driving scenarios. Experimental results demonstrated that the proposed VIO algorithm based on UD-Net outperforms previous methods by a significant margin.
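The uncertainty-based filtering described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the array shapes, and the fixed uncertainty threshold are all assumptions made for the example.

```python
import numpy as np

def filter_depth_by_uncertainty(depth, uncertainty, max_uncertainty=0.5):
    """Mask out depth values whose predicted uncertainty exceeds a threshold.

    depth, uncertainty: H x W arrays, as a UD-Net-style network might output
    (one depth map and one per-pixel uncertainty map). The threshold value
    here is illustrative, not a parameter from the paper.
    """
    valid = uncertainty <= max_uncertainty
    # NaN marks pixels rejected as unreliable; a VIO front end would skip them
    filtered = np.where(valid, depth, np.nan)
    return filtered, valid

# Toy example: two of four pixels exceed the uncertainty threshold.
depth = np.array([[2.0, 5.0],
                  [8.0, 3.0]])
unc = np.array([[0.1, 0.9],
                [0.4, 0.6]])
filtered, valid = filter_depth_by_uncertainty(depth, unc)
```

Only the low-uncertainty depths survive, so downstream feature depths used by the VIO back end come from pixels the network is confident about.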
| Original language | English |
|---|---|
| Article number | 6665 |
| Journal | Sensors |
| Volume | 24 |
| Issue number | 20 |
| DOIs | |
| State | Published - 2024.10 |
Keywords
- depth estimation
- parking lot dataset
- simultaneous localization and mapping
- uncertainty estimation
- visual-inertial odometry
Quacquarelli Symonds (QS) Subject Topics
- Computer Science & Information Systems
- Engineering - Electrical & Electronic
- Engineering - Petroleum
- Chemistry
- Physics & Astronomy
- Biological Sciences