Reward-Based Learning, Model-Based and Model-Free

  • Huys Q
  • Cruickshank A
  • Seriès P

Abstract

Reinforcement learning (RL) techniques are a set of solutions for optimal long-term action choice, such that actions take into account both immediate and delayed consequences. They fall into two broad classes. Model-based approaches assume an explicit model of the environment and the agent. The model describes the consequences of actions and the associated returns, from which optimal policies can be inferred. Psychologically, model-based descriptions apply to goal-directed decisions, in which choices reflect current preferences over outcomes. Model-free approaches forgo any explicit knowledge of the dynamics of the environment or of the consequences of actions, and instead evaluate how good actions are through trial-and-error learning. Model-free values underlie habitual and Pavlovian conditioned responses that are emitted reflexively when faced with certain stimuli. Whereas model-based techniques have substantial computational demands, model-free techniques require extensive experience.
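To make the contrast concrete, here is a minimal sketch (not part of the original entry) of the two classes on a hypothetical two-state, two-action toy MDP: value iteration, which is model-based and requires the transition and reward model `P` up front, versus tabular Q-learning, which is model-free and learns values only from sampled transitions.

```python
import random

# Hypothetical toy MDP for illustration: 2 states, 2 actions.
# P[s][a] = list of (probability, next_state, reward) outcomes.
P = {
    0: {0: [(1.0, 0, 0.0)], 1: [(1.0, 1, 1.0)]},
    1: {0: [(1.0, 0, 0.0)], 1: [(1.0, 1, 2.0)]},
}
gamma = 0.9  # discount factor weighting delayed consequences

# Model-based: value iteration, which reads the model P directly.
def value_iteration(P, gamma, n_iter=200):
    V = {s: 0.0 for s in P}
    for _ in range(n_iter):
        for s in P:
            # Bellman optimality backup using known dynamics and rewards.
            V[s] = max(sum(p * (r + gamma * V[s2])
                           for p, s2, r in P[s][a]) for a in P[s])
    return V

# Model-free: tabular Q-learning, which only samples transitions.
def q_learning(P, gamma, n_steps=20000, alpha=0.1, eps=0.1):
    Q = {s: {a: 0.0 for a in P[s]} for s in P}
    s = 0
    for _ in range(n_steps):
        # Epsilon-greedy choice: trial-and-error exploration.
        if random.random() < eps:
            a = random.choice(list(Q[s]))
        else:
            a = max(Q[s], key=Q[s].get)
        # Experience one transition; the agent never inspects P itself.
        p, s2, r = P[s][a][0]  # dynamics are deterministic in this toy
        # Temporal-difference update toward r + gamma * max_a' Q(s', a').
        Q[s][a] += alpha * (r + gamma * max(Q[s2].values()) - Q[s][a])
        s = s2
    return Q
```

Both procedures end up with the same optimal values here, but they trade off differently, as the entry notes: value iteration sweeps the full model on every iteration (computational demand), while Q-learning needs many thousands of sampled transitions (extensive experience).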

Citation (APA)

Huys, Q. J. M., Cruickshank, A., & Seriès, P. (2015). Reward-Based Learning, Model-Based and Model-Free. In Encyclopedia of Computational Neuroscience (pp. 2634–2641). Springer New York. https://doi.org/10.1007/978-1-4614-6675-8_674
