A Gaussian process reinforcement learning algorithm with adaptability and minimal tuning requirements

Abstract

We present a novel Bayesian reinforcement learning algorithm that addresses model bias and exploration overhead. The algorithm combines aspects of several state-of-the-art reinforcement learning methods that use Gaussian process model-based approaches to make better use of online data samples. It employs a smooth reward function, requiring the reward value to be derived from the environment state. The algorithm handles continuous states and actions coherently, with a minimal need for expert knowledge in parameter tuning. We analyse and discuss the practical benefits of the selected approach in comparison to more traditional methodological choices, and illustrate the use of the algorithm in a motor control problem involving a simulated two-link arm. © 2014 Springer International Publishing Switzerland.
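To make the general idea concrete, below is a minimal, hypothetical sketch of GP model-based reinforcement learning of the kind the abstract describes: learn the system dynamics with Gaussian process regression from a small batch of transitions, define a smooth reward directly from the state, and choose actions by planning through the learned model. This is not the authors' algorithm; the toy 1-D environment, kernel hyperparameters, and one-step greedy planner are all illustrative assumptions.

```python
# Hypothetical sketch, NOT the paper's algorithm: a minimal GP model-based
# RL loop on a 1-D toy system. All names and hyperparameters are assumptions.
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between row-wise inputs A (n,d) and B (m,d)."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return variance * np.exp(-0.5 * sq_dists / lengthscale**2)

class GPDynamicsModel:
    """GP regression from (state, action) inputs to the next state."""
    def __init__(self, noise=1e-2):
        self.noise = noise

    def fit(self, X, y):
        self.X, self.y = X, y
        K = rbf_kernel(X, X) + self.noise * np.eye(len(X))
        self.K_inv = np.linalg.inv(K)

    def predict_mean(self, X_star):
        """Posterior mean prediction at query inputs X_star."""
        return rbf_kernel(X_star, self.X) @ self.K_inv @ self.y

def smooth_reward(state, target=0.0):
    """Smooth reward derived directly from the environment state."""
    return -(state - target) ** 2

# Toy environment whose dynamics are unknown to the agent.
rng = np.random.default_rng(0)
def env_step(s, a):
    return 0.9 * s + 0.5 * a + 0.01 * rng.normal()

# Phase 1: gather a small batch of random-exploration transitions.
X, y, s = [], [], 1.0
for _ in range(30):
    a = rng.uniform(-1.0, 1.0)
    s_next = env_step(s, a)
    X.append([s, a])
    y.append(s_next)
    s = s_next

model = GPDynamicsModel()
model.fit(np.array(X), np.array(y))

# Phase 2: act greedily through the learned model (one-step lookahead).
s = 1.0
for t in range(10):
    candidates = np.linspace(-1.0, 1.0, 41)           # candidate actions
    queries = np.column_stack([np.full_like(candidates, s), candidates])
    predicted_next = model.predict_mean(queries)      # GP mean of next state
    a = candidates[np.argmax(smooth_reward(predicted_next))]
    s = env_step(s, a)
    print(f"t={t:2d}  action={a:+.2f}  state={s:+.4f}")
```

With the quadratic reward above, the greedy planner drives the toy state toward the target. A full treatment along the lines of the paper would additionally propagate the GP's predictive uncertainty and plan over longer horizons, rather than using only the one-step posterior mean as this sketch does.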

Citation (APA)

Strahl, J., Honkela, T., & Wagner, P. (2014). A Gaussian process reinforcement learning algorithm with adaptability and minimal tuning requirements. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8681 LNCS, pp. 371–378). Springer Verlag. https://doi.org/10.1007/978-3-319-11179-7_47
