Converging evidence implicates striatal dopamine (DA) in reinforcement learning, such that DA increases enhance "Go learning" to pursue actions with rewarding outcomes, whereas DA decreases enhance "NoGo learning" to avoid non-rewarding actions. Here we test whether these effects apply to the response time domain. We employ a novel paradigm that requires adjusting the response time of a single response. Reward probability varies as a function of response time, whereas reward magnitude changes in the opposite direction. In the control condition, these factors exactly cancel, such that the expected value across time is constant (CEV). In two other conditions, expected value increases (IEV) or decreases (DEV), such that reward maximization requires either speeding up (Go learning) or slowing down (NoGo learning) relative to the CEV condition. We tested patients with Parkinson's disease (depleted striatal DA levels) on and off dopaminergic medication, compared with age-matched controls. While medicated, patients were better at speeding up in the DEV relative to CEV conditions. Conversely, nonmedicated patients were better at slowing down to maximize reward in the IEV condition. These effects of DA manipulation on cumulative Go/NoGo response time adaptation were captured with our a priori computational model of the basal ganglia, previously applied only to forced-choice tasks. There were also robust trial-to-trial changes in response time, but these single-trial adaptations were not affected by disease or medication and are posited to rely on extrastriatal, possibly prefrontal, structures.
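To make the task's expected-value structure concrete, the following is a minimal numerical sketch (Python/NumPy) using illustrative probability and magnitude functions; the actual functions are specified in the paper's Methods and are not reproduced here. The sketch only shows the design logic: probability and magnitude trade off against response time such that their product (expected value) is constant (CEV), decreasing (DEV), or increasing (IEV).

```python
import numpy as np

# Hypothetical probability/magnitude curves chosen only to illustrate the design;
# the paper's actual parameterization differs.
t = np.linspace(1.0, 5.0, 100)  # candidate response times (s)

def expected_value(prob, mag):
    """Expected value at each response time: reward probability x reward magnitude."""
    return prob * mag

# CEV: probability falls with time, magnitude rises, and the two cancel exactly,
# so expected value is constant across response times.
ev_cev = expected_value(1.0 / t, 10.0 * t)

# DEV: expected value decreases with time, so reward is maximized by
# responding faster than in CEV ("Go" learning).
ev_dev = expected_value(1.0 / t, 10.0 * np.sqrt(t))

# IEV: expected value increases with time, so reward is maximized by
# responding slower than in CEV ("NoGo" learning).
ev_iev = expected_value(1.0 / np.sqrt(t), 10.0 * t)

print(np.allclose(ev_cev, ev_cev[0]))  # True: constant expected value
print(ev_dev[0] > ev_dev[-1])          # True: speeding up pays off
print(ev_iev[0] < ev_iev[-1])          # True: slowing down pays off
```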
Moustafa, A. A., Cohen, M. X., Sherman, S. J., & Frank, M. J. (2008). A role for dopamine in temporal decision making and reward maximization in Parkinsonism. Journal of Neuroscience, 28(47), 12294–12304. https://doi.org/10.1523/JNEUROSCI.3116-08.2008