Which temporal difference learning algorithm best reproduces dopamine activity in a multi-choice task?

Abstract

The activity of dopaminergic (DA) neurons has been hypothesized to encode a reward prediction error (RPE) corresponding to the error signal in Temporal Difference (TD) learning algorithms. This hypothesis has been reinforced by numerous studies showing the relevance of TD learning algorithms for describing the role of the basal ganglia in classical conditioning. However, recent recordings of DA neurons during multi-choice tasks have raised contradictory interpretations as to whether the RPE signal carried by DA neurons is action dependent or not. Thus the precise TD algorithm (i.e. Actor-Critic, Q-learning or SARSA) that best describes DA signals remains unknown. Here we simulate and precisely analyze these TD algorithms on a multi-choice task performed by rats. We find that the DA activity previously reported in this task is best fitted by a TD error that has not yet fully converged, and that converges faster than the observed behavioral adaptation. © 2012 Springer-Verlag.
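The key distinction between the three algorithms named in the abstract lies in how each computes its TD error, i.e. the candidate RPE signal. Below is a minimal tabular sketch (not the authors' implementation; state/action sizes, learning rate and discount factor are illustrative assumptions) showing that the SARSA error depends on the action actually chosen next, the Q-learning error on the best available next action, and the Actor-Critic (critic) error only on state values.

```python
import numpy as np

# Illustrative hyperparameters (assumptions, not taken from the paper)
n_states, n_actions = 5, 2
alpha, gamma = 0.1, 0.9

Q = np.zeros((n_states, n_actions))  # action values (Q-learning / SARSA)
V = np.zeros(n_states)               # state values (Actor-Critic critic)

def td_error_q_learning(s, a, r, s_next):
    # Error uses the maximum over next actions, regardless of the action taken next
    return r + gamma * Q[s_next].max() - Q[s, a]

def td_error_sarsa(s, a, r, s_next, a_next):
    # Error uses the action actually selected next -> action-dependent RPE
    return r + gamma * Q[s_next, a_next] - Q[s, a]

def td_error_actor_critic(s, r, s_next):
    # Critic error depends only on state values -> action-independent RPE
    return r + gamma * V[s_next] - V[s]

# Example update with the Q-learning error (hypothetical transition)
s, a, r, s_next = 0, 1, 1.0, 2
delta = td_error_q_learning(s, a, r, s_next)
Q[s, a] += alpha * delta
```

Comparing simulated traces of these three error signals against recorded DA activity is, in essence, the model-fitting exercise the abstract describes.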

Citation (APA)
Bellot, J., Sigaud, O., & Khamassi, M. (2012). Which temporal difference learning algorithm best reproduces dopamine activity in a multi-choice task? In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7426 LNAI, pp. 289–298). https://doi.org/10.1007/978-3-642-33093-3_29
