Transferring instances for model-based reinforcement learning

86 citations · 98 Mendeley readers


Abstract

Reinforcement learning agents typically require a significant amount of data before performing well on complex tasks. Transfer learning methods have made progress in reducing sample complexity, but they have primarily been applied to model-free learning methods rather than to more data-efficient model-based learning methods. This paper introduces TIMBREL, a novel method for transferring information effectively into a model-based reinforcement learning algorithm. We demonstrate that TIMBREL can significantly improve the sample efficiency and asymptotic performance of a model-based algorithm when learning in a continuous state space. Additionally, we conduct experiments to test the limits of TIMBREL's effectiveness. © 2008 Springer-Verlag Berlin Heidelberg.
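The core idea in the abstract, transferring recorded experience from a source task into a model-based learner for the target task, can be sketched in a few lines. The sketch below is an illustration, not the paper's implementation: the inter-task mappings (`map_state`, `map_action`), the toy 1-D-to-2-D tasks, and the one-nearest-neighbour model are all hypothetical stand-ins for whatever mappings and instance-based model a real system would use.

```python
import numpy as np

def transfer_instances(source_transitions, map_state, map_action):
    """Translate recorded (s, a, r, s') tuples from the source task into
    the target task's representation via hand-coded inter-task mappings."""
    return [(map_state(s), map_action(a), r, map_state(s2))
            for (s, a, r, s2) in source_transitions]

class NNModel:
    """Toy 1-nearest-neighbour transition/reward model over continuous states."""
    def __init__(self):
        self.data = []  # stored (state, action, reward, next_state) instances

    def add(self, transitions):
        self.data.extend(transitions)

    def predict(self, state, action):
        # Predict by returning the stored instance whose state is closest
        # to the query, among instances with the matching action.
        candidates = [t for t in self.data if t[1] == action]
        s, a, r, s2 = min(
            candidates,
            key=lambda t: np.linalg.norm(np.asarray(t[0]) - np.asarray(state)))
        return r, s2

# Hypothetical example: source states are 1-D, target states are 2-D.
source = [((0.0,), 0, 1.0, (0.5,)),
          ((1.0,), 1, 0.0, (1.5,))]
model = NNModel()
model.add(transfer_instances(
    source,
    map_state=lambda s: (s[0], 0.0),  # pad the unmapped target dimension
    map_action=lambda a: a))          # actions correspond one-to-one here
r, s2 = model.predict((0.1, 0.0), 0)  # model usable before any target data
```

The point of the sketch is that transferred instances let the target-task model make (approximate) predictions before the agent has collected any target-task experience, which is where the sample-efficiency gain comes from.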

Citation (APA)

Taylor, M. E., Jong, N. K., & Stone, P. (2008). Transferring instances for model-based reinforcement learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5212 LNAI, pp. 488–505). https://doi.org/10.1007/978-3-540-87481-2_32
