Tighter value function bounds for Bayesian reinforcement learning


Abstract

Bayesian reinforcement learning (BRL) provides a principled framework for the optimal exploration-exploitation tradeoff in reinforcement learning. We focus on model-based BRL, which admits a compact formulation of the optimal tradeoff from the Bayesian perspective. However, computing the Bayes-optimal policy remains a computational challenge. In this paper, we propose a novel approach to computing tighter bounds on the Bayes-optimal value function, which is crucial for improving the performance of many model-based BRL algorithms. We then show how our bounds can be integrated into real-time AO∗ heuristic search, and provide a theoretical analysis of the impact of improved bounds on search efficiency. We also provide empirical results on standard BRL domains that demonstrate the effectiveness of our approach.
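For context, the Bayes-optimal value function referenced in the abstract is typically defined over hyperstates pairing a physical state s with a posterior belief b over the unknown dynamics. The following is a minimal sketch of that standard Bayes-adaptive formulation; the notation is a common convention and is not taken from the paper itself:

\[
V^*(s, b) \;=\; \max_{a \in A} \Big[\, R(s, a) \;+\; \gamma \sum_{s'} \Pr(s' \mid s, a, b)\, V^*\big(s', b^{s a s'}\big) \Big],
\]

where \(b^{s a s'}\) denotes the belief obtained from \(b\) by a Bayes' rule update after observing the transition \((s, a, s')\). Heuristic search methods such as AO∗ maintain bounds \(L(s, b) \le V^*(s, b) \le U(s, b)\) at each node and can prune any action whose upper bound falls below another action's lower bound, which is why tighter bounds translate directly into a smaller search tree.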

Citation (APA)

Lee, K., & Kim, K. E. (2015). Tighter value function bounds for Bayesian reinforcement learning. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence (AAAI-15) (pp. 3556–3563). AAAI Press. https://doi.org/10.1609/aaai.v29i1.9700
