OmniDepth: Dense depth estimation for indoors spherical panoramas

40 citations · 181 Mendeley readers

This article is free to access.

Abstract

Work on depth estimation has so far focused only on projective images, ignoring 360° content, which is now increasingly and more easily produced. We show that monocular depth estimation models trained on traditional images produce sub-optimal results on omnidirectional images, showcasing the need for training directly on 360° datasets, which, however, are hard to acquire. In this work, we circumvent the challenges associated with acquiring high-quality 360° datasets with ground-truth depth annotations by re-using recently released large-scale 3D datasets and re-purposing them to 360° via rendering. This dataset, which is considerably larger than comparable projective datasets, is publicly offered to the community to enable future research in this direction. We use it to learn the task of depth estimation from 360° images in an end-to-end fashion. We show promising results both on our synthesized data and on unseen realistic images.
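The core of re-purposing 3D datasets to 360° is rendering along per-pixel ray directions of an equirectangular panorama, where each pixel corresponds to a longitude/latitude pair on the unit sphere. The paper's own rendering pipeline is not shown here; the following is only an illustrative NumPy sketch of that pixel-to-ray mapping (function name and axis conventions are assumptions, not from the paper):

```python
import numpy as np

def equirect_rays(width, height):
    """Unit ray directions for each pixel of an equirectangular panorama.

    Longitude spans [-pi, pi) left to right and latitude [pi/2, -pi/2]
    top to bottom, so every pixel maps to a point on the unit sphere.
    Axis convention (illustrative): y is up, z points forward.
    """
    u = (np.arange(width) + 0.5) / width      # horizontal pixel centers in [0, 1)
    v = (np.arange(height) + 0.5) / height    # vertical pixel centers in [0, 1)
    lon = (u - 0.5) * 2.0 * np.pi             # longitude in [-pi, pi)
    lat = (0.5 - v) * np.pi                   # latitude in (pi/2, -pi/2)
    lon, lat = np.meshgrid(lon, lat)
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    return np.stack([x, y, z], axis=-1)       # shape (H, W, 3), unit-norm rays

rays = equirect_rays(512, 256)                # a 2:1 panorama grid of view rays
```

Casting these rays into a textured 3D scene yields an aligned 360° color and depth panorama pair, which is the kind of ground truth such a rendered dataset provides.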

Citation (APA)

Zioulis, N., Karakottas, A., Zarpalas, D., & Daras, P. (2018). OmniDepth: Dense depth estimation for indoors spherical panoramas. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11210 LNCS, pp. 453–471). Springer Verlag. https://doi.org/10.1007/978-3-030-01231-1_28
