SalNet360: Saliency maps for omni-directional images with CNN


Abstract

The prediction of Visual Attention data from any kind of media is valuable to content creators and can be used to drive encoding algorithms efficiently. With the current trend in the Virtual Reality (VR) field, adapting known techniques to this new kind of media is gaining momentum. In this paper, we present an architectural extension to any Convolutional Neural Network (CNN) that fine-tunes traditional 2D saliency prediction to Omnidirectional Images (ODIs) in an end-to-end manner. We show that each step in the proposed pipeline makes the generated saliency map more accurate with respect to the ground-truth data.
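No code accompanies this abstract, but the central idea it describes, extending a 2D saliency CNN with per-pixel spherical coordinates so it can be fine-tuned on ODIs end-to-end, can be sketched. Below is a minimal PyTorch sketch under that assumption; make_sphere_coords, SphericalRefinement, and all layer sizes are hypothetical illustrations, not the authors' architecture.

```python
import torch
import torch.nn as nn


def make_sphere_coords(height, width):
    """Per-pixel spherical coordinates for an equirectangular image.

    Returns a (1, 2, H, W) tensor holding latitude (theta) and
    longitude (phi) in radians. Hypothetical helper, not from the paper.
    """
    theta = torch.linspace(-torch.pi / 2, torch.pi / 2, height)  # latitude
    phi = torch.linspace(-torch.pi, torch.pi, width)             # longitude
    grid_theta, grid_phi = torch.meshgrid(theta, phi, indexing="ij")
    return torch.stack([grid_theta, grid_phi], dim=0).unsqueeze(0)


class SphericalRefinement(nn.Module):
    """Small convolutional head that refines a coarse 2D saliency map,
    taking the spherical coordinates as two extra input channels.
    Layer sizes are illustrative only."""

    def __init__(self):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),  # 1 saliency + 2 coord channels
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, kernel_size=1),             # back to one saliency channel
        )

    def forward(self, coarse_saliency, sphere_coords):
        x = torch.cat([coarse_saliency, sphere_coords], dim=1)
        return self.refine(x)


# Usage: refine the output of any 2D saliency CNN (stubbed with random data here).
coarse = torch.rand(1, 1, 256, 512)   # stand-in for a base CNN's saliency output
coords = make_sphere_coords(256, 512)
refined = SphericalRefinement()(coarse, coords)
print(refined.shape)                  # torch.Size([1, 1, 256, 512])
```

Feeding latitude and longitude in as extra channels is one plausible way to let the network learn how equirectangular distortion varies with elevation, which is what makes end-to-end fine-tuning of a 2D model on ODIs conceivable.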

Cite (APA)

Monroy, R., Lutz, S., Chalasani, T., & Smolic, A. (2018). SalNet360: Saliency maps for omni-directional images with CNN. Signal Processing: Image Communication, 69, 26–34. https://doi.org/10.1016/j.image.2018.05.005
