Regret bounds for reinforcement learning with policy advice

Citations: 16 · Mendeley readers: 35

This article is free to access.

Abstract

In some reinforcement learning problems an agent may be provided with a set of input policies, perhaps learned from prior experience or supplied by advisors. We present the Reinforcement Learning with Policy Advice (RLPA) algorithm, which leverages this input set and learns to use the best policy in the set for the reinforcement learning task at hand. We prove that RLPA has sub-linear regret of Õ(√T) relative to the best input policy, and that both this regret and the algorithm's computational complexity are independent of the size of the state and action spaces. Our empirical simulations support the theoretical analysis, suggesting that RLPA may offer significant advantages in large domains where good prior policies are available. © 2013 Springer-Verlag.
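The paper's full RLPA procedure is not reproduced on this page, but the core idea, treating each input policy as a bandit arm and selecting optimistically so that regret against the best policy grows sub-linearly in T, can be sketched. The snippet below is a minimal illustration using a standard UCB1 index over the candidate policies; the function names, the phase structure, and the `run_policy` interface are hypothetical stand-ins for illustration, not the paper's actual algorithm.

```python
import math
import random

def select_best_policy(policies, run_policy, num_phases):
    """UCB-style selection over a finite set of input policies.

    Treats each candidate policy as a bandit arm: run it for a phase,
    observe its average reward, and maintain an optimistic index.
    Per-phase cost depends only on len(policies), not on the size of
    the MDP's state/action spaces.
    """
    counts = [0] * len(policies)      # phases run per policy
    means = [0.0] * len(policies)     # empirical average reward per policy

    for t in range(1, num_phases + 1):
        def index(i):
            if counts[i] == 0:
                return float("inf")   # force one run of every policy first
            bonus = math.sqrt(2.0 * math.log(t) / counts[i])
            return means[i] + bonus   # optimism in the face of uncertainty

        i = max(range(len(policies)), key=index)
        reward = run_policy(policies[i])   # average reward over one phase
        counts[i] += 1
        means[i] += (reward - means[i]) / counts[i]  # incremental mean update

    return max(range(len(policies)), key=lambda i: means[i])

# Hypothetical usage: three "policies" with different true mean rewards.
if __name__ == "__main__":
    true_means = [0.3, 0.5, 0.8]
    best = select_best_policy(
        policies=list(range(3)),
        run_policy=lambda p: random.gauss(true_means[p], 0.1),
        num_phases=2000,
    )
    print("selected policy:", best)  # almost surely prints 2
```

Because the learner only maintains statistics per candidate policy, both memory and per-phase computation scale with the number of input policies rather than with the number of states and actions, which is the intuition behind the abstract's independence claim.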

Citation (APA)

Azar, M. G., Lazaric, A., & Brunskill, E. (2013). Regret bounds for reinforcement learning with policy advice. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8188 LNAI, pp. 97–112). https://doi.org/10.1007/978-3-642-40988-2_7
