Task-driven saliency detection on music video


Abstract

We propose a saliency model to estimate task-driven eye movement. Human eye movement patterns are affected by the observer's task and mental state [1]. However, existing saliency models are computed from low-level image features such as bright regions, edges, and colors. In this paper, a task (e.g., evaluating a piano performance) is given to the observer while watching a music video. Unlike existing vision-only methods, we use both musical score features and image features to detect saliency. We show that our saliency model outperforms existing models on eye movement prediction.
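The abstract describes fusing musical score features with image features into a single saliency map. The paper's actual fusion method is not given here; the following is a generic sketch of one common approach (a normalized weighted sum of per-modality feature maps), with all names and weights being illustrative assumptions:

```python
import numpy as np

def combine_feature_maps(image_map, score_map, w_image=0.5, w_score=0.5):
    """Fuse a visual feature map and a musical-score feature map into one
    saliency map via a weighted sum of min-max-normalized maps.
    Generic fusion sketch only; the paper's model may differ."""
    def normalize(m):
        m = m.astype(float)
        rng = m.max() - m.min()
        # Flat maps carry no saliency signal; map them to all zeros.
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)
    return w_image * normalize(image_map) + w_score * normalize(score_map)

# Toy example on 4x4 feature maps
rng = np.random.default_rng(0)
image_map = rng.random((4, 4))   # e.g., low-level visual conspicuity
score_map = rng.random((4, 4))   # e.g., score-derived attention cues
saliency = combine_feature_maps(image_map, score_map)
```

With equal weights the fused map stays in [0, 1]; in practice the weights could be tuned against recorded eye-movement data for the given task.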

Citation (APA)

Numano, S., Enami, N., & Ariki, Y. (2015). Task-driven saliency detection on music video. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9009, pp. 658–671). Springer Verlag. https://doi.org/10.1007/978-3-319-16631-5_48
