Learning attention model from human for visuomotor tasks

Abstract

A wealth of information about intelligent decision making is conveyed by human gaze and visual attention; modeling and exploiting this information is therefore a promising way to strengthen algorithms such as deep reinforcement learning. We collect high-quality human action and gaze data from subjects playing Atari games. Using these data, we train a deep neural network that predicts human gaze positions and visual attention with high accuracy.
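The abstract describes a network that outputs human gaze positions over game frames. As a minimal, hypothetical sketch (the paper's actual architecture is not given here), such a model can emit a logit map over the frame, which is normalized into a gaze probability heatmap whose peak is the predicted fixation; the 84×84 grid below is an assumption, borrowed from common Atari preprocessing:

```python
import numpy as np

def gaze_heatmap(logits):
    """Turn a raw logit map into a gaze probability heatmap via softmax."""
    z = logits - logits.max()          # subtract max for numerical stability
    p = np.exp(z)
    return p / p.sum()                 # probabilities over all pixels sum to 1

def predicted_gaze_position(heatmap):
    """Most likely (row, col) gaze position: the heatmap's peak."""
    return np.unravel_index(np.argmax(heatmap), heatmap.shape)

# Stand-in for a network's output on one frame (hypothetical values).
rng = np.random.default_rng(0)
logits = rng.normal(size=(84, 84))

heat = gaze_heatmap(logits)
row, col = predicted_gaze_position(heat)
```

A model like this would typically be trained with a cross-entropy loss between the heatmap and the recorded human fixation.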

Citation (APA)

Zhang, L., Zhang, R., Liu, Z., Hayhoe, M. M., & Ballard, D. H. (2018). Learning attention model from human for visuomotor tasks. In 32nd AAAI Conference on Artificial Intelligence, AAAI 2018 (pp. 8181–8182). AAAI press. https://doi.org/10.1609/aaai.v32i1.12147
