RA-Depth: Resolution Adaptive Self-supervised Monocular Depth Estimation

19 citations · 29 Mendeley readers

Abstract

Existing self-supervised monocular depth estimation methods dispense with expensive annotations and achieve promising results. However, they suffer severe performance degradation when a model trained at a fixed resolution is evaluated directly at other resolutions. In this paper, we propose a resolution adaptive self-supervised monocular depth estimation method (RA-Depth) that learns the scale invariance of scene depth. Specifically, we propose a simple yet efficient data augmentation method to generate images of the same scene at arbitrary scales. We then develop a dual high-resolution network whose multi-path encoder and decoder use dense interactions to aggregate multi-scale features for accurate depth inference. Finally, to explicitly learn the scale invariance of scene depth, we formulate a cross-scale depth consistency loss on depth predictions at different scales. Extensive experiments on the KITTI, Make3D and NYU-V2 datasets demonstrate that RA-Depth not only achieves state-of-the-art performance, but also adapts well across resolutions. Source code is available at https://github.com/hmhemu/RA-Depth.
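The cross-scale depth consistency idea can be illustrated with a minimal sketch: if scene depth is scale invariant, depth maps predicted from two differently scaled views of the same image should agree once brought to a common resolution. The function names, the integer-factor average pooling, and the L1 penalty below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def downsample(depth, factor):
    """Average-pool a (H, W) depth map by an integer factor.

    Assumes H and W are divisible by `factor`; a simple stand-in for
    whatever resampling the actual method uses.
    """
    h, w = depth.shape
    return depth.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def cross_scale_consistency_loss(depth_lo, depth_hi):
    """L1 disagreement between a low-resolution depth prediction and a
    high-resolution one resampled to the same size (hypothetical form
    of the cross-scale depth consistency loss)."""
    factor = depth_hi.shape[0] // depth_lo.shape[0]
    return np.abs(depth_lo - downsample(depth_hi, factor)).mean()

# A perfectly scale-invariant pair of predictions incurs zero loss.
depth_lo = np.array([[1.0, 2.0], [3.0, 4.0]])
depth_hi = np.kron(depth_lo, np.ones((2, 2)))  # 4x4 upsampled copy
loss = cross_scale_consistency_loss(depth_lo, depth_hi)
```

The loss is zero exactly when the two predictions describe the same depth up to resolution, which is the invariance the training objective encourages.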

Citation (APA)

He, M., Hui, L., Bian, Y., Ren, J., Xie, J., & Yang, J. (2022). RA-Depth: Resolution Adaptive Self-supervised Monocular Depth Estimation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13687 LNCS, pp. 565–581). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-19812-0_33
