Multi-timescale memory dynamics extend task repertoire in a reinforcement learning network with attention-gated memory


Abstract

The interplay of reinforcement learning and memory is at the core of several recent neural network models, such as the Attention-Gated MEmory Tagging (AuGMEnT) model. While AuGMEnT succeeds at various animal learning tasks, we find that it is unable to cope with some hierarchical tasks, in which higher-level stimuli must be maintained over a long time while lower-level stimuli need to be remembered and forgotten over a shorter timescale. To overcome this limitation, we introduce a hybrid AuGMEnT, with leaky (short-timescale) and non-leaky (long-timescale) memory units, that allows the exchange of low-level information while maintaining high-level information. We test the performance of the hybrid AuGMEnT network on two cognitive reference tasks, sequence prediction and 12AX.
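The distinction between the two memory-unit types can be sketched as follows. This is a minimal illustrative example, not the paper's exact formulation: the update rule and the `leak` parameter value are assumptions chosen to show the qualitative difference between the timescales.

```python
def update_memory(m, x, leak=0.0):
    """One memory-trace update: m' = (1 - leak) * m + x.

    leak = 0.0      -> non-leaky unit (perfect maintenance, long timescale)
    0 < leak <= 1   -> leaky unit (trace decays, short timescale)

    Hypothetical sketch; the actual AuGMEnT units use attention-gated
    synaptic tags on top of such traces.
    """
    return (1.0 - leak) * m + x

# Drive both unit types with a brief input pulse followed by silence.
inputs = [1.0, 0.0, 0.0, 0.0, 0.0]
non_leaky, leaky = 0.0, 0.0
for x in inputs:
    non_leaky = update_memory(non_leaky, x, leak=0.0)
    leaky = update_memory(leaky, x, leak=0.5)

# The non-leaky trace still holds the pulse (1.0), while the leaky trace
# has largely forgotten it (1.0 * 0.5**4 = 0.0625) -- the short timescale
# frees the unit to store the next low-level stimulus.
```

In a hierarchical task like 12AX, the non-leaky units would retain the outer context (the digit) across the whole subsequence, while the leaky units forget each inner letter soon after it is used.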

Citation (APA)

Martinolli, M., Gerstner, W., & Gilra, A. (2018). Multi-timescale memory dynamics extend task repertoire in a reinforcement learning network with attention-gated memory. Frontiers in Computational Neuroscience, 12. https://doi.org/10.3389/fncom.2018.00050
