Much research has been concerned with the contribution of the low-level features of a visual scene to the deployment of visual attention. Bottom-up saliency models have been developed to predict gaze locations from these features. Color, along with brightness, contrast, and motion, is considered one of the primary features in computing bottom-up saliency. However, its contribution to guiding eye movements when viewing natural scenes has been debated. We investigated the contribution of color information in a bottom-up visual saliency model. The model's efficiency was tested against experimental data obtained from 45 observers who were eye-tracked while freely exploring a large data set of color and grayscale videos. The two sets of recorded eye positions, for grayscale and color videos, were compared with the predictions of a luminance-based saliency model [1]. We then incorporated chrominance information into the model. Results show that color information improves the performance of the saliency model in predicting eye positions. © 2014 Springer International Publishing.
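To make the pipeline described above concrete, here is a minimal sketch, not the authors' implementation: it builds a simple saliency map from a luminance channel and, optionally, two chrominance channels via center-surround (difference-of-Gaussians) contrast, then scores the map against recorded eye positions with Normalized Scanpath Saliency (NSS), a standard eye-position metric. The CIE Lab color space, the Gaussian scales, and the equal-weight fusion are illustrative assumptions, not details taken from the paper.

```python
import cv2
import numpy as np

def channel_conspicuity(channel, sigma_center=2.0, sigma_surround=16.0):
    """Center-surround contrast of one channel via a difference of Gaussians."""
    center = cv2.GaussianBlur(channel, (0, 0), sigma_center)
    surround = cv2.GaussianBlur(channel, (0, 0), sigma_surround)
    return np.abs(center - surround)

def saliency_map(frame_bgr, use_color=True):
    """Fuse a luminance conspicuity map with (optional) chrominance maps."""
    lab = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2Lab).astype(np.float32)
    L, a, b = cv2.split(lab)
    maps = [channel_conspicuity(L)]
    if use_color:  # add the two chrominance channels (assumption: Lab a*, b*)
        maps += [channel_conspicuity(a), channel_conspicuity(b)]
    # Normalize each map to [0, 1] before averaging them into one saliency map.
    return sum(m / (m.max() + 1e-8) for m in maps) / len(maps)

def nss(saliency, fixations):
    """Normalized Scanpath Saliency: mean z-scored saliency at fixated pixels."""
    z = (saliency - saliency.mean()) / (saliency.std() + 1e-8)
    return float(np.mean([z[y, x] for (x, y) in fixations]))

# Hypothetical usage on one video frame with two recorded eye positions:
frame = (np.random.rand(240, 320, 3) * 255).astype(np.uint8)
fixations = [(160, 120), (80, 60)]  # (x, y) pixel coordinates
print("NSS, luminance only  :", nss(saliency_map(frame, use_color=False), fixations))
print("NSS, with chrominance:", nss(saliency_map(frame, use_color=True), fixations))
```

Comparing the two NSS scores mirrors the paper's experimental logic: if adding the chrominance maps raises the score on color videos, color information is helping the model predict where observers look.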
Citation: Hamel, S., Guyader, N., Pellerin, D., & Houzet, D. (2014). Contribution of color information in visual saliency model for videos. In Lecture Notes in Computer Science (Vol. 8509, pp. 213–221). Springer. https://doi.org/10.1007/978-3-319-07998-1_24