Fully convolutional multi-scale dense networks for monocular depth estimation


Abstract

Monocular depth estimation is of vital importance in understanding the 3D geometry of a scene. However, inferring the underlying depth is ill-posed and inherently ambiguous. In this study, two improvements to existing approaches are proposed. The first is a clean, improved network architecture: the authors extend the Densely Connected Convolutional Network (DenseNet) into an end-to-end fully convolutional multi-scale dense network. Dense upsampling blocks are integrated to improve the output resolution, and selected skip connections are incorporated to link the downsampling and upsampling paths efficiently. The second is a set of edge-preserving loss functions, encompassing the reverse Huber loss, a depth gradient loss and a feature edge loss, which is particularly suited to estimating fine details and clear object boundaries. Experiments on the NYU-Depth-v2 and KITTI datasets show that the proposed model is competitive with state-of-the-art methods, achieving root mean squared errors of 0.506 and 4.977, respectively.

Citation (APA)

Liu, J., Zhang, Y., Cui, J., Feng, Y., & Pang, L. (2019). Fully convolutional multi-scale dense networks for monocular depth estimation. IET Computer Vision, 13(5), 515–522. https://doi.org/10.1049/iet-cvi.2018.5645
