Transfer learning in reinforcement learning problems through partial policy recycling

Abstract

We investigate the relation between transfer learning in reinforcement learning with function approximation and supervised learning under concept drift. We present a new incremental relational regression tree algorithm that can deal with concept drift through tree restructuring, and we show that it enables a Q-learner to transfer knowledge from one task to another by recycling the parts of the generalized Q-function that still hold useful information for the new task. We illustrate the performance of the algorithm in several experiments.
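To make the abstract's core mechanism concrete, below is a minimal, runnable sketch of drift-driven tree restructuring for partial policy recycling. The paper's learner is relational (first-order); this propositional stand-in, including the class names (`RecyclingRegressionTree`), the exponentially decayed error statistic, and all thresholds, is an illustrative assumption rather than the authors' algorithm.

```python
import statistics

class Node:
    def __init__(self):
        self.feature = None       # split feature index; None => leaf
        self.threshold = None
        self.left = None
        self.right = None
        self.value = 0.0          # leaf prediction (running mean of targets)
        self.n = 0                # examples seen at/through this node
        self.err = 0.0            # exponentially decayed squared error
        self.examples = []        # buffered (x, y) pairs, used at leaves

    def is_leaf(self):
        return self.feature is None


class RecyclingRegressionTree:
    """Incremental regression tree that handles concept drift by collapsing
    only the subtrees whose predictions no longer fit the new data, so the
    still-valid parts of the old Q-function are recycled."""

    def __init__(self, drift_threshold=1.0, min_split=20, decay=0.99):
        self.root = Node()
        self.drift_threshold = drift_threshold
        self.min_split = min_split
        self.decay = decay

    def predict(self, x):
        node = self.root
        while not node.is_leaf():
            node = node.left if x[node.feature] <= node.threshold else node.right
        return node.value

    def update(self, x, y):
        # Descend to the leaf, remembering the path for drift bookkeeping.
        node, path = self.root, []
        while not node.is_leaf():
            path.append(node)
            node = node.left if x[node.feature] <= node.threshold else node.right
        err = (y - node.value) ** 2
        for n in path + [node]:
            n.n += 1
            n.err = self.decay * n.err + (1 - self.decay) * err
        # Partial recycling: collapse the shallowest drifted internal node
        # back to a leaf; all other subtrees of the old tree stay intact.
        for n in path:
            if n.n > self.min_split and n.err > self.drift_threshold:
                n.feature = n.threshold = n.left = n.right = None
                n.value, n.n, n.err, n.examples = y, 1, 0.0, [(x, y)]
                return
        # Standard incremental leaf update, then try to grow the tree.
        node.value += (y - node.value) / node.n
        node.examples.append((x, y))
        if len(node.examples) >= self.min_split:
            self._try_split(node)

    def _try_split(self, leaf):
        xs, ys = zip(*leaf.examples)
        base = statistics.pvariance(ys)
        best = None
        for f in range(len(xs[0])):
            thr = statistics.median(v[f] for v in xs)
            lo = [y for v, y in leaf.examples if v[f] <= thr]
            hi = [y for v, y in leaf.examples if v[f] > thr]
            if not lo or not hi:
                continue
            score = (len(lo) * statistics.pvariance(lo)
                     + len(hi) * statistics.pvariance(hi)) / len(ys)
            if best is None or score < best[0]:
                best = (score, f, thr, lo, hi)
        if best and best[0] < base:   # split only if it reduces variance
            _, leaf.feature, leaf.threshold, lo, hi = best
            leaf.left, leaf.right = Node(), Node()
            leaf.left.value, leaf.left.n = statistics.fmean(lo), len(lo)
            leaf.right.value, leaf.right.n = statistics.fmean(hi), len(hi)
            leaf.examples = []
```

Used as a Q-function approximator, the tree is first trained on (state-features, Q-target) pairs from the source task; when training continues on the new task, only the subtrees whose decayed error degrades are collapsed and relearned, while the rest of the old Q-function carries over.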

Citation (APA)

Ramon, J., Driessens, K., & Croonenborghs, T. (2007). Transfer learning in reinforcement learning problems through partial policy recycling. In Lecture Notes in Computer Science (Vol. 4701 LNAI, pp. 699–707). Springer. https://doi.org/10.1007/978-3-540-74958-5_70
