Predicting Gaze in Egocentric Video by Learning Task-Dependent Attention Transition


Abstract

We present a new computational model for gaze prediction in egocentric videos by exploring patterns in the temporal shift of gaze fixations (attention transition) that depend on egocentric manipulation tasks. Our assumption is that the high-level context of how a task is carried out strongly influences attention transition and should be modeled for gaze prediction in natural dynamic scenes. Specifically, we propose a hybrid model based on deep neural networks that integrates task-dependent attention transition with bottom-up saliency prediction. The task-dependent attention transition is learned with a recurrent neural network to exploit the temporal context of gaze fixations, e.g. looking at a cup after moving gaze away from a grasped bottle. Experiments on public egocentric activity datasets show that our model significantly outperforms state-of-the-art gaze prediction methods and is able to learn meaningful transitions of human attention.
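
To make the two-branch design concrete, below is a minimal PyTorch-style sketch of a hybrid gaze predictor: a bottom-up saliency branch over the current frame and an LSTM-based attention-transition branch over recent fixation features, fused into one gaze probability map. All module names, layer sizes, and the late-fusion scheme are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class SaliencyBranch(nn.Module):
    """Bottom-up saliency: a small fully convolutional encoder-decoder."""
    def __init__(self, in_channels=3):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1),
        )

    def forward(self, frame):                      # frame: (B, 3, H, W)
        return self.decoder(self.encoder(frame))   # (B, 1, H, W) saliency logits

class AttentionTransitionBranch(nn.Module):
    """Task-dependent attention transition: an LSTM over per-frame fixation
    features predicts a coarse map of where attention is likely to move next."""
    def __init__(self, feat_dim=128, hidden_dim=256, map_size=(16, 16)):
        super().__init__()
        self.map_size = map_size
        self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
        self.to_map = nn.Linear(hidden_dim, map_size[0] * map_size[1])

    def forward(self, fixation_feats):             # (B, T, feat_dim)
        out, _ = self.lstm(fixation_feats)
        logits = self.to_map(out[:, -1])           # use the last time step
        return logits.view(-1, 1, *self.map_size)

class HybridGazePredictor(nn.Module):
    """Fuse bottom-up saliency with the transition prior into a gaze probability map."""
    def __init__(self):
        super().__init__()
        self.saliency = SaliencyBranch()
        self.transition = AttentionTransitionBranch()
        self.fuse = nn.Conv2d(2, 1, kernel_size=1)  # learned late fusion

    def forward(self, frame, fixation_feats):
        sal = self.saliency(frame)                  # (B, 1, H, W)
        trans = self.transition(fixation_feats)     # (B, 1, 16, 16)
        trans = nn.functional.interpolate(trans, size=sal.shape[-2:],
                                          mode="bilinear", align_corners=False)
        fused = self.fuse(torch.cat([sal, trans], dim=1))
        return torch.sigmoid(fused)                 # per-pixel gaze probability

# Example usage: one RGB frame plus features of the last 10 fixations.
model = HybridGazePredictor()
frame = torch.randn(1, 3, 64, 64)
fixation_feats = torch.randn(1, 10, 128)
gaze_map = model(frame, fixation_feats)             # (1, 1, 64, 64)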

Cite

APA

Huang, Y., Cai, M., Li, Z., & Sato, Y. (2018). Predicting Gaze in Egocentric Video by Learning Task-Dependent Attention Transition. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11208 LNCS, pp. 789–804). Springer Verlag. https://doi.org/10.1007/978-3-030-01225-0_46
