Abstract
Human gaze and visual attention convey a wealth of information about intelligent decision making; modeling and exploiting this information is therefore a promising way to strengthen algorithms such as deep reinforcement learning. We collect high-quality human action and gaze data from people playing Atari games. Using these data, we train a deep neural network that predicts human gaze positions and visual attention with high accuracy.
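The abstract describes predicting a gaze position from a game frame. As a minimal sketch of that idea (not the paper's actual architecture), the snippet below scores each pixel of a grayscale frame with a small convolutional kernel, normalizes the scores into an attention heatmap with a softmax, and returns the argmax as the predicted gaze position. The function name `predict_gaze`, the kernel, and the toy frame are all hypothetical; a real model would learn its filters from the collected human gaze data.

```python
import math

def predict_gaze(frame, kernel):
    """Sketch of saliency-style gaze prediction (hypothetical, not the paper's model).

    frame:  2D list of grayscale intensities.
    kernel: 3x3 filter standing in for learned convolution weights.
    Returns ((row, col) of predicted gaze, flattened attention heatmap).
    """
    h, w = len(frame), len(frame[0])
    scores = [[0.0] * w for _ in range(h)]
    # Valid 3x3 convolution over the interior of the frame.
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            s = 0.0
            for dy in range(-1, 2):
                for dx in range(-1, 2):
                    s += kernel[dy + 1][dx + 1] * frame[y + dy][x + dx]
            scores[y][x] = s
    # Softmax over all positions -> attention heatmap summing to 1.
    flat = [v for row in scores for v in row]
    m = max(flat)
    exps = [math.exp(v - m) for v in flat]
    z = sum(exps)
    heat = [e / z for e in exps]
    # Predicted gaze position = argmax of the heatmap.
    idx = max(range(len(heat)), key=heat.__getitem__)
    return divmod(idx, w), heat
```

With an identity-like kernel, the predicted gaze lands on the brightest pixel; a learned kernel would instead weight whatever features human attention tracks in the game.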
Zhang, L., Zhang, R., Liu, Z., Hayhoe, M. M., & Ballard, D. H. (2018). Learning attention model from human for visuomotor tasks. In 32nd AAAI Conference on Artificial Intelligence, AAAI 2018 (pp. 8181–8182). AAAI press. https://doi.org/10.1609/aaai.v32i1.12147