Regret bounds for restless Markov bandits

Abstract

We consider the restless Markov bandit problem, in which the state of each arm evolves according to a Markov process independently of the learner's actions. We suggest an algorithm that after T steps achieves Õ(√T) regret with respect to the best policy that knows the distributions of all arms. No assumptions on the Markov chains are made except that they are irreducible. In addition, we show that index-based policies are necessarily suboptimal for the considered problem. © 2012 Springer-Verlag.
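To make the setting concrete, here is a minimal simulation sketch of a restless Markov bandit environment as described in the abstract: every arm's state evolves at each step whether or not that arm is pulled, and the learner only observes the reward of the pulled arm. All names here (RestlessBandit, the transition matrices, the reward maps) are illustrative assumptions, not the authors' algorithm or implementation.

```python
# A minimal sketch of the restless Markov bandit setting (assumed names).
import numpy as np

class RestlessBandit:
    """Each arm is an irreducible Markov chain whose state evolves at every
    time step, independently of the learner's actions ("restless")."""

    def __init__(self, transitions, rewards, rng=None):
        # transitions[i]: row-stochastic transition matrix of arm i's chain
        # rewards[i][s]: reward of arm i when pulled in state s
        self.transitions = [np.asarray(P) for P in transitions]
        self.rewards = [np.asarray(r) for r in rewards]
        self.rng = rng or np.random.default_rng()
        self.states = [0 for _ in transitions]  # arbitrary initial states

    def step(self, arm):
        # The learner observes a reward only for the pulled arm.
        reward = self.rewards[arm][self.states[arm]]
        # Every arm's state evolves, pulled or not.
        for i, P in enumerate(self.transitions):
            self.states[i] = self.rng.choice(len(P), p=P[self.states[i]])
        return reward

# Two arms: a "sticky" two-state chain and a fast-mixing one.
bandit = RestlessBandit(
    transitions=[[[0.9, 0.1], [0.1, 0.9]],
                 [[0.5, 0.5], [0.5, 0.5]]],
    rewards=[[0.0, 1.0], [0.3, 0.6]],
)
total = sum(bandit.step(arm=0) for _ in range(1000))
print(f"average reward from always pulling arm 0: {total / 1000:.3f}")
```

Note how the optimal policy here may need to track the hidden states of unpulled arms, which is why the paper benchmarks regret against the best policy that knows all arms' distributions, rather than against the best fixed arm.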

Citation (APA)

Ortner, R., Ryabko, D., Auer, P., & Munos, R. (2012). Regret bounds for restless Markov bandits. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7568 LNAI, pp. 214–228). https://doi.org/10.1007/978-3-642-34106-9_19
