Depth Estimation from a Single Image Using Guided Deep Network

14 Citations · 15 Mendeley Readers

Abstract

This paper addresses monocular depth estimation, which plays a key role in scene understanding. Owing to the success of deep generative models, the performance of depth estimation from a single image has improved significantly. However, most previous approaches still fail to accurately estimate depth boundaries and thus produce blurry results. In this paper, a simple, novel method is proposed that exploits the latent space of a depth-to-depth network, which encodes features useful for guiding the depth-generation process. This network, called the guided network, consists only of convolution layers and their corresponding deconvolution layers, and is easily trained using single depth images alone. To efficiently learn the relationship between a color value and its corresponding depth value in a given image, we propose to train the color-to-depth network with a loss defined over features from the latent space of the guided network (i.e., the depth-to-depth network). An important advantage of the proposed method is that it greatly enhances local details even in complicated background regions. Moreover, the proposed method runs very fast (125 fps on a GPU). Experimental results on various benchmark datasets show the efficiency and robustness of the proposed approach compared with state-of-the-art methods.
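The training scheme the abstract describes — supervising the color-to-depth network with features drawn from the latent space of the depth-to-depth (guided) network — amounts to a combined pixel-wise and latent feature-matching loss. The sketch below is a minimal NumPy illustration under stated assumptions: the paper's actual architecture, layer counts, and loss weights are not given here, and `encode` is a hypothetical stand-in (average pooling) for the guided network's frozen convolutional encoder.

```python
import numpy as np

def encode(depth, levels=3):
    """Stand-in for the guided network's (frozen) encoder: repeated
    2x2 average pooling as a crude latent feature map. The actual
    encoder in the paper is a stack of convolution layers."""
    f = depth
    for _ in range(levels):
        h, w = f.shape
        f = f[: h - h % 2, : w - w % 2]  # crop to even size
        f = 0.25 * (f[0::2, 0::2] + f[1::2, 0::2]
                    + f[0::2, 1::2] + f[1::2, 1::2])
    return f

def guided_loss(pred_depth, gt_depth, lam=0.1):
    """Pixel-wise depth error plus a latent feature-matching term,
    mirroring the 'loss defined over features from the latent space'
    of the depth-to-depth network. The weight lam is illustrative."""
    pixel = np.mean((pred_depth - gt_depth) ** 2)
    latent = np.mean((encode(pred_depth) - encode(gt_depth)) ** 2)
    return pixel + lam * latent
```

The latent term penalizes predictions whose encoded structure diverges from that of the ground-truth depth, which is one plausible reading of why the method sharpens depth boundaries rather than only minimizing per-pixel error.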

Citation (APA)
Song, M., & Kim, W. (2019). Depth Estimation from a Single Image Using Guided Deep Network. IEEE Access, 7, 142595–142606. https://doi.org/10.1109/ACCESS.2019.2944937
