Unsupervised learning of geometry from videos with edge-aware depth-normal consistency

108 citations · 143 Mendeley readers

Abstract

Learning to reconstruct depth from a single image by watching unlabeled videos with a deep convolutional network (DCN) has attracted significant attention in recent years, e.g. (Zhou et al. 2017). In this paper, we propose to use a surface normal representation within an unsupervised depth estimation framework. Our estimated depths are constrained to be compatible with the predicted normals, yielding more robust geometry results. Specifically, we formulate an edge-aware depth-normal consistency term, and enforce it by constructing a depth-to-normal layer and a normal-to-depth layer inside the DCN. The depth-to-normal layer takes estimated depths as input and computes normal directions using cross products over neighboring pixels. Given the estimated normals, the normal-to-depth layer then outputs a regularized depth map through local planar smoothness. Both layers are computed with awareness of edges inside the image, which helps handle depth/normal discontinuities and preserve sharp edges. Finally, to train the network, we apply photometric error and gradient smoothness losses to supervise both depth and normal predictions. We conducted experiments on both outdoor (KITTI) and indoor (NYUv2) datasets, and show that our algorithm substantially outperforms the state of the art, demonstrating the benefits of our approach.

Citation (APA)

Yang, Z., Wang, P., Xu, W., Zhao, L., & Nevatia, R. (2018). Unsupervised learning of geometry from videos with edge-aware depth-normal consistency. In 32nd AAAI Conference on Artificial Intelligence, AAAI 2018 (pp. 7493–7500). AAAI press. https://doi.org/10.1609/aaai.v32i1.12257
