Revisiting Sparsity Invariant Convolution: A Network for Image Guided Depth Completion


Abstract

The limitations of LiDAR (Light Detection And Ranging) sensors cause the produced depth measurements to be generally sparse. Such a sparse representation of the world is insufficient for applications such as 3D reconstruction, which makes depth completion an important computer vision task; a synchronized RGB image is commonly available to guide it. In this paper, we propose a deep neural network to tackle this image guided depth completion problem. By revisiting the sparsity invariant convolution and revealing how it can be used in a novel way, we propose three mask aware operations to process, downscale, and fuse sparse features. These operations explicitly consider the observation mask of their corresponding feature maps. In addition, the network follows a novel scheme in which data from the image and depth domains are processed by the proposed operations independently. Our model achieves state-of-the-art performance on the KITTI depth completion benchmark and exhibits strong robustness to input sparsity under different densities and patterns.
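For context, the sparsity invariant convolution the paper revisits (originally introduced by Uhrig et al., 2017) normalizes each convolution window by the number of observed pixels and propagates the observation mask by max pooling over the window. The following single-channel NumPy sketch illustrates that idea; the function name, padding choice, and single-channel restriction are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def sparsity_invariant_conv(x, mask, w, eps=1e-8):
    """Single-channel sketch of sparsity invariant convolution.

    x    : H x W feature map (unobserved entries are ignored via the mask)
    mask : H x W binary observation mask (1 = valid measurement)
    w    : k x k convolution kernel

    Each window response is normalized by the count of observed pixels
    in that window, and the output mask marks every position whose
    window contains at least one observation (max pooling of the mask).
    """
    k = w.shape[0]
    p = k // 2
    xp = np.pad(x * mask, p)   # zero out unobserved values, then pad
    mp = np.pad(mask.astype(float), p)
    H, W = x.shape
    y = np.zeros((H, W))
    new_mask = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            xw = xp[i:i + k, j:j + k]       # masked values in the window
            mw = mp[i:i + k, j:j + k]       # observation mask in the window
            y[i, j] = np.sum(xw * w) / (np.sum(mw) + eps)
            new_mask[i, j] = mw.max()       # mask propagated by max pooling
    return y, new_mask

# Toy example: a 3x3 map with a single observed pixel of value 4.
x = np.zeros((3, 3)); x[1, 1] = 4.0
mask = np.zeros((3, 3)); mask[1, 1] = 1.0
w = np.ones((3, 3))
y, m = sparsity_invariant_conv(x, mask, w)
```

With a lone observation, every window that sees it averages over exactly one valid pixel, so the center output stays close to 4 rather than being diluted by the surrounding zeros, and the propagated mask becomes fully dense after one layer.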

Citation (APA)

Yan, L., Liu, K., & Belyaev, E. (2020). Revisiting Sparsity Invariant Convolution: A Network for Image Guided Depth Completion. IEEE Access, 8, 126323–126332. https://doi.org/10.1109/ACCESS.2020.3008404
