Tracking the saliency features in images based on human observation statistics

Abstract

We address the statistical inference of saliency features in images based on human eye-tracking measurements. Training videos were recorded with a head-mounted wearable eye-tracker, and the position of each eye fixation relative to the recorded image was annotated. From the same video records, artificial saliency points (SIFT features) were detected by computer vision algorithms and clustered to describe the images with a manageable number of descriptors. The measured human eye-tracking data (fixation patterns) and the estimated saliency points are fused in a statistical model, in which the eye-tracking data provides transition probabilities among the possible image feature points. This statistical model of the human visual system (HVS) yields estimates of likely tracking paths and region-of-interest areas of human vision. The proposed method may help in image saliency analysis, in better compression of region-of-interest areas, and in the development of more efficient human-computer interaction devices. © 2012 Springer-Verlag.
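
Below is a minimal sketch of the kind of pipeline the abstract outlines, not the authors' actual implementation: SIFT keypoints are detected and clustered into a manageable set of saliency points, recorded fixations are assigned to the nearest cluster, and a fixation-transition matrix is estimated whose stationary distribution highlights likely region-of-interest clusters. All function names, parameters (e.g. n_clusters, n_iter), and library choices (OpenCV, scikit-learn) are illustrative assumptions.

```python
# Sketch only: an assumed pipeline loosely following the abstract,
# not the method published in the paper.
import numpy as np
import cv2
from sklearn.cluster import KMeans


def cluster_sift_keypoints(image, n_clusters=20):
    """Detect SIFT keypoints and cluster their positions with k-means.

    The cluster centres serve as a manageable set of candidate
    saliency points for the image.
    """
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    sift = cv2.SIFT_create()
    keypoints = sift.detect(gray, None)
    positions = np.array([kp.pt for kp in keypoints])
    return KMeans(n_clusters=n_clusters, n_init=10).fit(positions)


def fixation_transition_matrix(fixations, kmeans):
    """Estimate transition probabilities between saliency clusters
    from a recorded sequence of (x, y) eye fixations."""
    labels = kmeans.predict(np.asarray(fixations, dtype=float))
    k = kmeans.n_clusters
    counts = np.zeros((k, k))
    for a, b in zip(labels[:-1], labels[1:]):
        counts[a, b] += 1
    row_sums = counts.sum(axis=1, keepdims=True)
    # Rows with no observed transitions fall back to a uniform distribution.
    return np.divide(counts, row_sums,
                     out=np.full_like(counts, 1.0 / k),
                     where=row_sums > 0)


def roi_weights(transition, n_iter=100):
    """Approximate the stationary distribution of the transition matrix
    by power iteration; high-weight clusters are candidate ROIs."""
    p = np.full(transition.shape[0], 1.0 / transition.shape[0])
    for _ in range(n_iter):
        p = p @ transition
    return p
```

In this reading, the transition matrix plays the role of a simple Markov model over saliency clusters: a likely tracking path can be sampled by following high-probability transitions, and the stationary weights rank clusters as regions of interest.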

Citation (APA)

Szalai, S., Szirányi, T., & Vidnyanszky, Z. (2012). Tracking the saliency features in images based on human observation statistics. In Lecture Notes in Computer Science (Vol. 7252, pp. 219–233). https://doi.org/10.1007/978-3-642-32436-9_19
