Robust validation of Visual Focus of Attention using adaptive fusion of head and eye gaze patterns


Abstract

We propose a framework for inferring a person's focus of attention, using information from both head rotation and eye gaze estimation. To this end, we use fuzzy logic to estimate the confidence that a person's gaze is directed towards a specific point, and results are compared against human annotation. For head pose we propose Bayesian modality fusion of local and holistic information, while for eye gaze we propose a methodology that calculates eye gaze directionality, removing the influence of head rotation, using a simple camera. The local modality uses facial feature positions, while the holistic modality uses the whole face region, processed with Convolutional Neural Networks, which have been shown to be robust to small translations and distortions of test data. This is vital for applications in unconstrained environments, where background noise should be expected. The ability of the system to estimate the focus of attention towards specific areas, for unknown users, is demonstrated at the end of the paper. © 2011 IEEE.
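As a rough illustration of the fuzzy-logic confidence estimate described in the abstract, the sketch below scores how strongly a person's head and eye yaw angles agree with a target direction, and fuses the two cues with fixed weights. The membership shape (triangular), the angular widths, and the fusion weights are assumptions chosen for illustration; they are not taken from the paper, which fuses the modalities adaptively.

```python
def triangular_membership(x, center, width):
    """Triangular fuzzy membership: 1 at the center, falling
    linearly to 0 at a distance of `width` from it."""
    return max(0.0, 1.0 - abs(x - center) / width)

def gaze_confidence(head_yaw_deg, eye_yaw_deg, target_yaw_deg,
                    head_width=30.0, eye_width=15.0,
                    w_head=0.4, w_eye=0.6):
    """Fuse head-pose and eye-gaze cues into a [0, 1] confidence
    that the person attends to the target direction.
    Widths and weights are illustrative, not from the paper."""
    mu_head = triangular_membership(head_yaw_deg, target_yaw_deg, head_width)
    mu_eye = triangular_membership(eye_yaw_deg, target_yaw_deg, eye_width)
    return w_head * mu_head + w_eye * mu_eye
```

For example, head and eye yaw both aligned with the target give confidence 1.0, while deviations shrink it towards 0; the narrower eye width reflects that eye gaze is the more precise cue.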

Citation (APA)
Asteriadis, S., Karpouzis, K., & Kollias, S. (2011). Robust validation of Visual Focus of Attention using adaptive fusion of head and eye gaze patterns. In Proceedings of the IEEE International Conference on Computer Vision (pp. 414–421). https://doi.org/10.1109/ICCVW.2011.6130271
