Learning visual saliency based on object's relative relationship

Abstract

As a challenging issue in both computer vision and psychological research, visual attention has aroused a wide range of discussion and study in recent years. However, conventional computational models focus mainly on low-level information, ignoring high-level information and its interrelationships. In this paper, we address the relative relationships between high-level objects and propose a saliency model based on both low-level and high-level analysis. First, more than 50 categories of objects are selected from nearly 800 images in the MIT data set [1], and concrete quantitative relationships between them are learned through detailed analysis and computation. Second, using constrained least-squares regression, we derive an optimal saliency model that produces saliency maps. Experimental results indicate that our model outperforms several state-of-the-art methods and matches human eye-tracking data more closely. © 2012 Springer-Verlag.
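The combination step the abstract describes, fitting channel weights by least-squares regression under constraints against eye-tracking data, can be sketched roughly as below. This is a minimal illustration, not the paper's implementation: the stacked feature matrix, the fixation-density target, and the specific non-negativity and normalization constraints are assumptions.

```python
import numpy as np
from scipy.optimize import nnls

def learn_weights(feature_maps, fixation_map):
    """Fit per-channel weights by constrained least squares.

    feature_maps: (n_pixels, n_channels) stacked low-/high-level maps (assumed layout).
    fixation_map: (n_pixels,) ground-truth density from eye-tracking data.
    Returns weights constrained to be non-negative and sum to 1
    (one plausible constraint choice; the paper's exact constraints may differ).
    """
    w, _ = nnls(feature_maps, fixation_map)  # least squares with w >= 0
    s = w.sum()
    return w / s if s > 0 else w

def saliency_map(feature_maps, w, shape):
    # Linear combination of channels, reshaped back to image size.
    return (feature_maps @ w).reshape(shape)

# Toy usage: 3 hypothetical feature channels over a 10x10 image.
h, w_, c = 10, 10, 3
F = np.random.rand(h * w_, c)
target = F @ np.array([0.5, 0.3, 0.2]) + 0.01 * np.random.rand(h * w_)
weights = learn_weights(F, target)
S = saliency_map(F, weights, (h, w_))
```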

Citation (APA)

Wang, S., Zhao, Q., Song, M., Bu, J., Chen, C., & Tao, D. (2012). Learning visual saliency based on object’s relative relationship. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7667 LNCS, pp. 318–327). https://doi.org/10.1007/978-3-642-34500-5_38
