Fusion of multiple visual cues for visual saliency extraction from wearable camera settings with strong motion

19 citations · 34 Mendeley readers

This article is free to access.

Abstract

In this paper we are interested in the saliency of visual content from wearable cameras. We first study subjective saliency in wearable video through a psycho-visual experiment on this content. We then propose a method for objective saliency map computation, with a specific contribution based on geometric saliency. Spatial, temporal, and geometric cues are fused into an objective saliency map with a multiplicative operator. The resulting objective saliency maps are evaluated against the subjective maps with promising results, highlighting the strong performance of the proposed geometric saliency model. © 2012 Springer-Verlag.
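The abstract states that the spatial, temporal, and geometric cues are fused with a multiplicative operator. A minimal sketch of such pixel-wise multiplicative fusion is shown below; the function name, per-frame min-max normalization, and input shapes are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def fuse_saliency(spatial, temporal, geometric, eps=1e-8):
    """Fuse three per-pixel saliency cues with a multiplicative
    operator. Normalization to [0, 1] is an assumed post-processing
    step so fused maps are comparable across frames."""
    fused = spatial * temporal * geometric
    fused = (fused - fused.min()) / (fused.max() - fused.min() + eps)
    return fused

# Toy example: random cue maps for a single 4x4 frame.
rng = np.random.default_rng(0)
s, t, g = (rng.random((4, 4)) for _ in range(3))
m = fuse_saliency(s, t, g)
```

Because the operator is multiplicative, a pixel is salient in the fused map only if all three cues assign it non-negligible saliency.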

Citation (APA)

Boujut, H., Benois-Pineau, J., & Megret, R. (2012). Fusion of multiple visual cues for visual saliency extraction from wearable camera settings with strong motion. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7585 LNCS, pp. 436–445). Springer Verlag. https://doi.org/10.1007/978-3-642-33885-4_44
