Spatiotemporal saliency detection using slow feature analysis and spatial information for dynamic scenes

Abstract

Slow feature analysis (SFA) extracts slowly varying signals from quickly varying input data. Inspired by the temporal slowness principle, we propose a novel spatiotemporal saliency algorithm for dynamic scene analysis. In the training phase, slow feature functions are learned from different video patches using SFA. At the saliency computation stage, we first apply two-layer slow feature functions to extract high-level motion features at the pixel level, which represent the temporal slowness of each local space-time cuboid. The temporal saliency of each location is measured by the average of the corresponding feature vector. Finally, a saliency map is generated by combining the proposed temporal saliency with existing spatial saliency. The algorithm is evaluated qualitatively and quantitatively on challenging video sequences and achieves competitive performance compared with state-of-the-art algorithms.
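For illustration, the sketch below shows standard linear SFA in Python/NumPy, the core operation underlying the temporal features described above: center and whiten the input, then project onto the directions whose temporal derivatives have the smallest variance. All names here are hypothetical, and this is only a minimal single-layer sketch; the paper's two-layer, patch-trained slow feature functions and the fusion with spatial saliency are not reproduced.

import numpy as np

def slow_feature_analysis(X, n_features=4):
    """Linear SFA on a signal X of shape (T, D).

    Returns a transform mapping new data to the n_features slowest
    components (a minimal sketch, not the authors' implementation).
    """
    # 1. Center the data.
    mean = X.mean(axis=0)
    Xc = X - mean

    # 2. Whiten via PCA so the projected signal has unit covariance.
    cov = np.cov(Xc, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    keep = eigval > 1e-8                       # drop near-singular directions
    W_whiten = eigvec[:, keep] / np.sqrt(eigval[keep])
    Z = Xc @ W_whiten

    # 3. Approximate the temporal derivative by finite differences.
    Zdot = np.diff(Z, axis=0)

    # 4. Slowest directions = eigenvectors of the derivative covariance
    #    with the smallest eigenvalues (eigh returns them in ascending order).
    dcov = np.cov(Zdot, rowvar=False)
    _, dvec = np.linalg.eigh(dcov)
    P = dvec[:, :n_features]

    def transform(X_new):
        return (X_new - mean) @ W_whiten @ P

    return transform

A hypothetical use in the spirit of the abstract: train the transform on vectorized video patches, apply it to the space-time cuboid at each pixel, and take the average (e.g., of the absolute feature values) as that location's temporal saliency before combining it with a spatial saliency map.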

Citation (APA)

Wu, Y., Wang, Z., Xu, X., Gong, S., Liu, Q., & Liu, C. (2015). Spatiotemporal saliency detection using slow feature analysis and spatial information for dynamic scenes. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9242, pp. 331–340). Springer Verlag. https://doi.org/10.1007/978-3-319-23989-7_34
