Sparse Pseudo-LiDAR Depth Assisted Monocular Depth Estimation

Title:
Sparse Pseudo-LiDAR Depth Assisted Monocular Depth Estimation
Journal Title:
IEEE Transactions on Intelligent Vehicles
Publication Date:
31 July 2023
Citation:
Shao, S., Pei, Z., Chen, W., Liu, Q., Yue, H., & Li, Z. (2024). Sparse Pseudo-LiDAR Depth Assisted Monocular Depth Estimation. IEEE Transactions on Intelligent Vehicles, 9(1), 917–929. https://doi.org/10.1109/tiv.2023.3299935
Abstract:
Monocular depth estimation has attracted extensive attention and made great progress in recent years. However, the performance still lags far behind LiDAR-based depth completion algorithms. This is because the completion algorithms not only utilize the RGB image, but also have the prior of sparse depth collected by LiDAR. To reduce this performance gap, we propose a novel initiative that incorporates the concept of pseudo-LiDAR into depth estimation. The pseudo-LiDAR depends only on the camera and thus achieves a lower cost than LiDAR. To emulate the scan pattern of LiDAR, geometric sampling and appearance sampling are proposed. The former measures the vertical and horizontal azimuths of 3D scene points to establish the geometric correlation. The latter helps determine which “pseudo-LiDAR rays” return an answer and which do not. Then, we build a sparse pseudo-LiDAR-based depth estimation framework. Extensive experiments show that the proposed method surpasses previous state-of-the-art competitors on the KITTI, NYU-Depth-v2 and SUN RGB-D datasets.
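The geometric sampling described above can be pictured as back-projecting image pixels into 3D, measuring each point's vertical and horizontal azimuth, and keeping only points that land on a discrete grid of "pseudo-LiDAR rays". Below is a minimal sketch of that idea, assuming a pinhole camera with intrinsics K; the function name and the n_beams / h_res_deg parameters are illustrative choices, not the authors' implementation, and the appearance sampling step (deciding which rays return a value) is omitted.

```python
import numpy as np

def geometric_sampling(depth, K, n_beams=64, h_res_deg=0.35):
    """Emulate a LiDAR scan pattern on a dense depth map (illustrative sketch).

    depth : (H, W) array of metric depths, e.g. from a monocular network
    K     : (3, 3) pinhole camera intrinsics
    Returns a depth map that is zero everywhere except at pixels falling
    on a grid of `n_beams` vertical beams spaced `h_res_deg` degrees
    apart horizontally, i.e. a sparse pseudo-LiDAR depth.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))

    # Back-project every pixel into 3D camera coordinates.
    x = (u - K[0, 2]) * depth / K[0, 0]
    y = (v - K[1, 2]) * depth / K[1, 1]
    z = depth

    # Vertical azimuth (elevation) and horizontal azimuth of each point.
    elev = np.degrees(np.arctan2(y, np.sqrt(x ** 2 + z ** 2)))
    azim = np.degrees(np.arctan2(x, z))

    # Quantize elevation into n_beams scan lines and azimuth into steps.
    span = max(np.ptp(elev), 1e-6)
    beam_id = np.round((elev - elev.min()) / (span / (n_beams - 1))).astype(np.int64)
    step_id = np.round(azim / h_res_deg).astype(np.int64)

    # Keep one pixel per (beam, azimuth-step) cell to mimic sparse returns.
    keys = (beam_id * 100_000 + step_id).ravel()
    _, keep = np.unique(keys, return_index=True)
    mask = np.zeros(H * W, dtype=bool)
    mask[keep] = True
    return np.where(mask.reshape(H, W), depth, 0.0)
```

Feeding such a sparse map together with the RGB image into a depth-completion-style network is, in spirit, what the proposed framework does; the exact sampling and fusion strategy is detailed in the paper.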
License type:
Publisher Copyright
Funding Info:
This research/project is supported by the A*STAR Robotics Horizontal Technology Coordinating Office (HTCO).
Grant Reference no.: C221518005

This work was supported by the National Natural Science Foundation of China under grant 61620106012.
Description:
© 2023 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
ISSN:
2379-8858
2379-8904
Files uploaded:
File: tiv.pdf
Size: 11.52 MB
Format: PDF