Attractor Neural States: A Brain-Inspired Complementary Approach to Reinforcement Learning

Abstract

It is widely accepted that reinforcement learning (RL) mechanisms are optimal only if there is a predefined set of distinct states that are predictive of reward. This poses a cognitive challenge: which events, or combinations of events, could potentially predict reward in a non-stationary environment? In addition, the computational discrepancy between the two families of RL algorithms, model-free and model-based RL, creates a stability-plasticity dilemma. When multiple brain systems interact and compete, this raises the question of how to guide optimal decision-making control when two systems implementing different types of RL methods compete for behavior. We argue that both the computational and the cognitive challenges can be met by infusing the RL framework, as an algorithmic theory of human behavior, with the strengths of the attractor framework at the level of neural implementation. Our position is supported by the hypothesis that ‘attractor states’, which are stable patterns of self-sustained and reverberating brain activity, are a manifestation of the collective dynamics of neuronal populations in the brain. Hence, when neuronal activity is described at an appropriate level of abstraction, simulations of spiking neuronal populations capture the collective dynamics of the network in response to recurrent interactions between these populations.
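
The abstract's central idea is that attractor states, as stable self-sustained activity patterns, could supply the discrete, reward-predictive states that RL requires. As a rough, hypothetical illustration of that idea (not the authors' implementation), the Python sketch below settles noisy inputs onto Hopfield-style attractor patterns and uses the resulting discrete state labels in a tabular Q-learning update; the network size, pattern count, learning parameters, and toy reward rule are all arbitrary assumptions chosen only for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Attractor network (Hopfield-style), a simple stand-in for
# --- self-sustained, reverberating activity settling into stable patterns.
N_UNITS = 64          # number of binary (+1/-1) neurons
N_STATES = 4          # number of stored attractor patterns ("distinct states")

patterns = rng.choice([-1, 1], size=(N_STATES, N_UNITS))

# Hebbian weights; zero diagonal to avoid self-excitation.
W = (patterns.T @ patterns) / N_UNITS
np.fill_diagonal(W, 0.0)

def settle(cue, n_steps=20):
    """Relax a noisy cue toward the nearest stored attractor pattern."""
    s = cue.copy()
    for _ in range(n_steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

def state_index(s):
    """Map the settled activity pattern to a discrete state label."""
    return int(np.argmax(patterns @ s))

def noisy_cue(k, flip_prob=0.2):
    """A corrupted observation of stored pattern k (hypothetical sensory input)."""
    cue = patterns[k].copy()
    cue[rng.random(N_UNITS) < flip_prob] *= -1
    return cue

# --- Tabular (model-free) Q-learning operating on the attractor-defined states.
N_ACTIONS = 2
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.9, 0.1

for episode in range(500):
    true_state = rng.integers(N_STATES)
    s = state_index(settle(noisy_cue(true_state)))   # perception via attractor dynamics
    a = rng.integers(N_ACTIONS) if rng.random() < eps else int(np.argmax(Q[s]))
    # Toy reward rule (assumed for illustration): action 0 pays off in even states, 1 in odd states.
    r = 1.0 if a == (s % 2) else 0.0
    s_next = state_index(settle(noisy_cue(rng.integers(N_STATES))))
    Q[s, a] += alpha * (r + gamma * np.max(Q[s_next]) - Q[s, a])

print(np.round(Q, 2))
```

A binary Hopfield network stands in here for the spiking neuronal populations discussed in the paper purely for compactness; the point of the sketch is only that recurrent dynamics can discretize noisy, continuous input into stable states that a model-free learner can index.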

CITATION STYLE

APA

Hamid, O. H., & Braun, J. (2017). Attractor Neural States: A Brain-Inspired Complementary Approach to Reinforcement Learning. In International Joint Conference on Computational Intelligence (Vol. 1, pp. 385–392). Science and Technology Publications, Lda. https://doi.org/10.5220/0006580203850392
