Parameterized indexed value function for efficient exploration in reinforcement learning

Citations: 3
Readers: 18 (Mendeley users who have this article in their library)

Abstract

It is well known that quantifying uncertainty in action-value estimates is crucial for efficient exploration in reinforcement learning. Ensemble sampling offers a relatively tractable way of doing this using randomized value functions, but it still demands substantial computational resources on complex problems. In this paper, we present an alternative, computationally efficient way to induce exploration using index sampling. We use an indexed value function to represent uncertainty in our action-value estimates. We first present an algorithm that learns a parameterized indexed value function through a distributional version of temporal difference learning in a tabular setting, and we prove its regret bound. Then, from a computational standpoint, we propose a dual-network architecture, Parameterized Indexed Networks (PINs), comprising one mean network and one uncertainty network that together learn the indexed value function. Finally, we show the efficacy of PINs through computational experiments.
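The abstract describes an indexed value function with a mean and an uncertainty component, where a sampled index drives exploration. A minimal tabular sketch of this idea, based only on the abstract, is given below: the indexed action value is taken to be Q_z(s, a) = mu(s, a) + z · sigma(s, a) with one index z drawn per episode, which is an assumption about the paper's construction; all names and the update rules are illustrative, not the authors' algorithm.

```python
import random

class IndexedQ:
    """Tabular indexed value function: Q_z(s, a) = mu(s, a) + z * sigma(s, a)."""

    def __init__(self, n_states, n_actions, sigma0=1.0):
        self.mu = [[0.0] * n_actions for _ in range(n_states)]        # mean estimates
        self.sigma = [[sigma0] * n_actions for _ in range(n_states)]  # uncertainty estimates

    def value(self, s, a, z):
        # Indexed action value: mean plus index-scaled uncertainty.
        return self.mu[s][a] + z * self.sigma[s][a]

    def act(self, s, z):
        # Act greedily with respect to the sampled index z, so a single
        # z yields temporally consistent (deep) exploration within an episode.
        return max(range(len(self.mu[s])), key=lambda a: self.value(s, a, z))

def run_episode(q, env_step, s0, gamma=0.99, alpha=0.1, horizon=20):
    z = random.gauss(0.0, 1.0)  # one index per episode, as in index sampling
    s = s0
    for _ in range(horizon):
        a = q.act(s, z)
        s2, r, done = env_step(s, a)
        target = r if done else r + gamma * max(
            q.value(s2, b, z) for b in range(len(q.mu[s2])))
        td_error = target - q.value(s, a, z)
        q.mu[s][a] += alpha * td_error   # move the mean toward the indexed target
        q.sigma[s][a] *= 1.0 - alpha     # shrink uncertainty as the pair is visited
        if done:
            break
        s = s2
```

As a usage sketch, on a one-state bandit where action 1 pays reward 1, repeated episodes drive `mu[0][1]` above `mu[0][0]` while `sigma` contracts for the actions actually taken.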

APA

Tan, T., Xiong, Z., & Dwaracherla, V. R. (2020). Parameterized indexed value function for efficient exploration in reinforcement learning. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 5948–5955). AAAI press. https://doi.org/10.1609/aaai.v34i04.6055
