A minimum relative entropy principle for learning and acting


Abstract

This paper proposes a method to construct an adaptive agent that is universal with respect to a given class of experts, where each expert is designed specifically for a particular environment. This adaptive control problem is formalized as the problem of minimizing the relative entropy of the adaptive agent from the expert that is most suitable for the unknown environment. If the agent is a passive observer, then the optimal solution is the well-known Bayesian predictor. However, if the agent is active, then its past actions need to be treated as causal interventions on the I/O stream rather than ordinary probability conditioning. Here it is shown that the solution to this new variational problem is given by a stochastic controller called the Bayesian control rule, which implements adaptive behavior as a mixture of experts. Furthermore, it is shown that under mild assumptions, the Bayesian control rule converges to the control law of the most suitable expert. © 2010 AI Access Foundation.
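To illustrate the idea, here is a minimal sketch of one step of a Bayesian-control-rule-style controller, under assumptions of our own: experts are hypothetical objects with an `act` method (their control law) and a `likelihood` method (their predictive model of observations given an action and history). The key point from the abstract is reflected in the update: actions are issued as interventions and carry no evidential weight, so only the observed response updates the posterior over experts. This is an illustrative sketch, not the paper's exact formulation.

```python
import random

class BernoulliExpert:
    """Toy expert: believes observations are Bernoulli(p); always acts 0.
    Stands in for an expert designed for one particular environment."""
    def __init__(self, p):
        self.p = p
    def act(self, history):
        return 0
    def likelihood(self, obs, action, history):
        return self.p if obs == 1 else 1.0 - self.p

def bayesian_control_rule_step(experts, weights, history, environment):
    # 1. Sample an expert from the current posterior over experts.
    expert = random.choices(experts, weights=weights, k=1)[0]
    # 2. Act according to the sampled expert's control law.
    action = expert.act(history)
    # 3. Observe the environment's response to the action.
    obs = environment(action)
    # 4. Bayesian update using observation likelihoods only: the action
    #    is a causal intervention, so it is not itself conditioned on.
    new_weights = [w * e.likelihood(obs, action, history)
                   for w, e in zip(weights, experts)]
    total = sum(new_weights)
    new_weights = [w / total for w in new_weights]
    history.append((action, obs))
    return new_weights
```

Run repeatedly against a fixed environment, the posterior concentrates on the expert whose model best explains the observations, so the controller's behavior converges to that expert's control law, mirroring the convergence result stated above.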

Citation (APA)
Ortega, P. A., & Braun, D. A. (2010). A minimum relative entropy principle for learning and acting. Journal of Artificial Intelligence Research, 38, 475–511. https://doi.org/10.1613/jair.3062
