Saliency detection in 360° Videos

18 citations · 162 Mendeley readers

This article is free to access.

Abstract

This paper presents a novel spherical convolutional neural network (CNN) based scheme for saliency detection in 360° videos. Specifically, in our spherical CNN definition, the kernel is defined on a spherical crown, and convolution corresponds to rotating the kernel along the sphere. Since 360° videos are usually stored as equirectangular panoramas, we propose to implement the spherical convolution on the panorama by stretching and rotating the kernel according to the location of the patch being convolved. Compared with existing spherical convolutions, our definition has the parameter-sharing property, which greatly reduces the number of parameters to be learned. We further take the temporal coherence of the viewing process into consideration and propose sequential saliency detection with a spherical U-Net. To validate our approach, we construct a large-scale 360° video saliency detection benchmark consisting of 104 360° videos viewed by 20+ human subjects. Comprehensive experiments validate the effectiveness of our spherical U-Net for 360° video saliency detection.
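The abstract describes implementing spherical convolution on an equirectangular panorama by stretching and rotating the kernel according to the latitude of the patch being convolved. The following is a minimal illustrative sketch of that idea, not the authors' implementation: the kernel is treated as a grid of angular offsets, its longitudinal footprint is stretched by 1/cos(latitude) near the poles, and taps are gathered with nearest-neighbour sampling. All function names and the `ang_step` parameter are hypothetical.

```python
import numpy as np

def sph_to_equirect(lat, lon, H, W):
    # Map spherical coordinates (lat in [-pi/2, pi/2], lon in [-pi, pi])
    # to pixel indices of an H x W equirectangular panorama.
    i = int(np.clip(((np.pi / 2 - lat) / np.pi) * (H - 1), 0, H - 1))
    j = int(np.clip(((lon + np.pi) / (2 * np.pi)) * (W - 1), 0, W - 1))
    return i, j

def spherical_conv(panorama, kernel, ang_step=0.05):
    """Naive spherical convolution on a single-channel equirectangular panorama.

    The kernel is conceptually defined on a small spherical crown; for each
    output location it is rotated there, which on the panorama amounts to
    stretching its longitudinal extent by 1/cos(lat). Nearest-neighbour
    sampling only -- an illustration of the scheme, not an efficient layer.
    """
    H, W = panorama.shape
    kh, kw = kernel.shape
    out = np.zeros((H, W), dtype=float)
    # Angular offsets of the kernel taps around the kernel centre.
    dlat = (np.arange(kh) - kh // 2) * ang_step
    dlon = (np.arange(kw) - kw // 2) * ang_step
    for r in range(H):
        lat = np.pi / 2 - (r + 0.5) * np.pi / H
        stretch = 1.0 / max(np.cos(lat), 1e-3)  # widen kernel near the poles
        for c in range(W):
            lon = -np.pi + (c + 0.5) * 2 * np.pi / W
            acc = 0.0
            for a in range(kh):
                for b in range(kw):
                    la = float(np.clip(lat + dlat[a], -np.pi / 2, np.pi / 2))
                    lo = (lon + dlon[b] * stretch + np.pi) % (2 * np.pi) - np.pi
                    i, j = sph_to_equirect(la, lo, H, W)
                    acc += kernel[a, b] * panorama[i, j]
            out[r, c] = acc
    return out
```

Because every output location reuses the same crown-defined kernel (only resampled), the parameter-sharing property the abstract mentions is preserved: one `kh x kw` weight grid serves the whole sphere. Longitude wrap-around is handled by the modulo in `lo`.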

Citation (APA)

Zhang, Z., Xu, Y., Yu, J., & Gao, S. (2018). Saliency detection in 360° Videos. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11211 LNCS, pp. 504–520). Springer Verlag. https://doi.org/10.1007/978-3-030-01234-2_30
