SalyPath360: Saliency and Scanpath Prediction Framework for Omnidirectional Images

Abstract

This paper introduces a new framework for predicting visual attention in omnidirectional images. The key feature of our architecture is the simultaneous prediction of a saliency map and a corresponding scanpath for a given stimulus. The framework implements a fully convolutional encoder-decoder neural network augmented by an attention module to generate representative saliency maps. In addition, an auxiliary network generates probable viewport-center fixation points through the SoftArgMax function, which derives fixation coordinates from feature maps. To take advantage of the scanpath prediction, an adaptive joint probability distribution model is then applied to construct the final unbiased saliency map by combining the encoder-decoder-based saliency map with the scanpath-based saliency heatmap. The proposed framework was evaluated on both saliency and scanpath prediction, and the results were compared to state-of-the-art methods on the Salient360! dataset. The results demonstrate the relevance of our framework and the benefits of such an architecture for further omnidirectional visual attention prediction tasks.
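The SoftArgMax operation mentioned above is a standard differentiable substitute for argmax: a softmax turns a feature map into a probability distribution, and the fixation point is the expected coordinate under that distribution. A minimal NumPy sketch of the idea (the function name, temperature parameter `beta`, and toy feature map are illustrative assumptions, not the paper's exact implementation):

```python
import numpy as np

def soft_argmax(feature_map, beta=100.0):
    """Differentiable argmax: expected (x, y) coordinate under a
    softmax over the feature map. Larger beta -> sharper distribution,
    closer to a hard argmax."""
    h, w = feature_map.shape
    flat = feature_map.flatten() * beta
    probs = np.exp(flat - flat.max())          # numerically stable softmax
    probs /= probs.sum()
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    y = float((probs * ys.flatten()).sum())    # expected row index
    x = float((probs * xs.flatten()).sum())    # expected column index
    return x, y

# Toy example: a single strong activation at row 3, column 5.
fmap = np.zeros((8, 8))
fmap[3, 5] = 1.0
x, y = soft_argmax(fmap)
# With a large beta, the expectation concentrates near (x, y) = (5, 3).
```

Because the expectation is differentiable with respect to the feature map, gradients can flow through the predicted fixation coordinates during training, unlike a hard argmax.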

Citation (APA)
Kerkouri, M. A., Tliba, M., Chetouani, A., & Sayeh, M. (2022). SalyPath360: Saliency and Scanpath Prediction Framework for Omnidirectional Images. In IS and T International Symposium on Electronic Imaging Science and Technology (Vol. 34). Society for Imaging Science and Technology. https://doi.org/10.2352/EI.2022.34.11.HVEI-168
