Free energy, value, and attractors

Abstract

It has been suggested recently that action and perception can be understood as minimising the free energy of sensory samples. This ensures that agents sample the environment to maximise the evidence for their model of the world, such that exchanges with the environment are predictable and adaptive. However, the free energy account does not invoke reward or cost-functions from reinforcement-learning and optimal control theory. We therefore ask whether reward is necessary to explain adaptive behaviour. The free energy formulation uses ideas from statistical physics to explain action in terms of minimising sensory surprise. Conversely, reinforcement-learning has its roots in behaviourism and engineering and assumes that agents optimise a policy to maximise future reward. This paper tries to connect the two formulations and concludes that optimal policies correspond to empirical priors on the trajectories of hidden environmental states, which compel agents to seek out the (valuable) states they expect to encounter. Copyright © 2012 Karl Friston and Ping Ao.
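As a rough sketch of the relation the abstract alludes to (the notation below follows the standard free-energy formulation and is our assumption, not quoted from the article): the variational free energy F is an upper bound on sensory surprise, and value can be read as negative surprise, so that minimising free energy over time amounts to maximising the value of the states an agent frequents.

    F(s, q) = E_q[\ln q(\vartheta) - \ln p(s, \vartheta \mid m)]
            = -\ln p(s \mid m) + D_{KL}[\, q(\vartheta) \,\|\, p(\vartheta \mid s, m) \,]
            \ge -\ln p(s \mid m)

    V(s) := \ln p(s \mid m)   % value identified with negative surprise (assumed reading)

Here s denotes sensory samples, \vartheta hidden environmental states, q(\vartheta) the agent's recognition density, and m its generative model. On this reading, optimal policies act as empirical priors on state trajectories that place probability mass on the (valuable) states the agent expects to encounter.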

Citation

Friston, K., & Ao, P. (2012). Free energy, value, and attractors. Computational and Mathematical Methods in Medicine, 2012. https://doi.org/10.1155/2012/937860
