Distortion-Aware Convolutional Filters for Dense Prediction in Panoramic Images

Abstract

There is a high demand for 3D data for 360° panoramic images and videos, driven by the growing market availability of specialized hardware both for capturing (e.g., omnidirectional cameras) and for visualizing in 3D (e.g., head-mounted displays) panoramic content. At the same time, 3D sensors able to capture panoramic 3D data are expensive and/or hardly available. To fill this gap, we propose a learning approach for panoramic depth map estimation from a single image. Thanks to a specifically developed distortion-aware deformable convolution filter, our method can be trained on conventional perspective images and then used to regress depth for panoramic images, thus bypassing the effort needed to create an annotated panoramic training dataset. We also demonstrate our approach on emerging tasks such as panoramic monocular SLAM, panoramic semantic segmentation, and panoramic style transfer.
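The key idea behind a distortion-aware filter is to deform the regular sampling grid of a convolution so that it follows the latitude-dependent distortion of the equirectangular projection: near the poles, neighboring samples must spread out horizontally to cover the same solid angle they would at the equator. Below is a minimal sketch of such a sampling-grid computation — not the authors' implementation, but a common construction under the same assumptions: a regular k×k grid is laid on the tangent plane of the unit sphere at each pixel and mapped back to equirectangular pixel coordinates via the inverse gnomonic projection. The function name and the default angular step are illustrative choices.

```python
import numpy as np

def distortion_aware_offsets(u, v, H, W, k=3, alpha=None):
    """Sampling locations for a k x k filter at pixel (u, v) of an
    H x W equirectangular image, adapted to spherical distortion.

    A regular k x k grid with angular spacing `alpha` is placed on the
    tangent plane of the unit sphere at (u, v) and projected back to
    equirectangular pixel coordinates (inverse gnomonic projection).
    `alpha` defaults to one pixel's longitude span at the equator.
    This is an illustrative sketch, not the paper's exact formulation.
    """
    if alpha is None:
        alpha = 2.0 * np.pi / W            # ~1-pixel angular resolution
    # Spherical coordinates of the filter center.
    theta0 = (u / W - 0.5) * 2.0 * np.pi   # longitude in [-pi, pi)
    phi0 = (0.5 - v / H) * np.pi           # latitude  in (-pi/2, pi/2)
    # Regular grid on the tangent plane.
    r = np.arange(k) - (k - 1) / 2.0
    x, y = np.meshgrid(r * alpha, r * alpha)
    # Inverse gnomonic projection: tangent plane -> sphere.
    rho = np.hypot(x, y)
    c = np.arctan(rho)
    with np.errstate(invalid="ignore"):
        # At rho == 0 (the grid center) the sample stays at (phi0, theta0).
        phi = np.arcsin(np.cos(c) * np.sin(phi0)
                        + np.where(rho > 0, y * np.sin(c) / rho, 0.0)
                        * np.cos(phi0))
    theta = theta0 + np.arctan2(
        x * np.sin(c),
        rho * np.cos(phi0) * np.cos(c) - y * np.sin(phi0) * np.sin(c))
    # Sphere -> equirectangular pixel coordinates (longitude wraps).
    us = np.mod((theta / (2.0 * np.pi) + 0.5) * W, W)
    vs = (0.5 - phi / np.pi) * H
    return us, vs
```

Since the resulting sampling locations are fractional, a real layer would read the feature map at these positions with bilinear interpolation, exactly as in deformable convolutions — which is why a network trained on undistorted perspective images (where the deformed grid degenerates to the regular one) can be applied to panoramas at test time.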

Citation (APA)

Tateno, K., Navab, N., & Tombari, F. (2018). Distortion-Aware Convolutional Filters for Dense Prediction in Panoramic Images. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11220 LNCS, pp. 732–750). Springer Verlag. https://doi.org/10.1007/978-3-030-01270-0_43
