We investigate the relation between transfer learning in reinforcement learning with function approximation and supervised learning under concept drift. We present a new incremental relational regression tree algorithm that handles concept drift through tree restructuring, and show that it enables a Q-learner to transfer knowledge from one task to another by recycling those parts of the generalized Q-function that still hold useful information for the new task. We illustrate the algorithm's performance in several experiments. © Springer-Verlag Berlin Heidelberg 2007.
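The core transfer idea in the abstract, recycling the still-valid parts of a tree-based Q-function, can be sketched as follows. This is a hypothetical illustration, not the authors' actual algorithm: it assumes the Q-function is stored in a simple regression tree over state features, and it replaces only those subtrees whose predictions have drifted (high error on samples from the new task) with fresh leaves, keeping the rest intact.

```python
# Hypothetical sketch of partial Q-tree recycling (illustrative only; not the
# paper's incremental relational regression tree algorithm).
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class Node:
    # Internal node: splits on `feature` at `threshold`; leaf: `feature` is None
    # and `value` holds the predicted Q-value.
    feature: Optional[int] = None
    threshold: float = 0.0
    left: Optional["Node"] = None
    right: Optional["Node"] = None
    value: float = 0.0

    def predict(self, x: List[float]) -> float:
        if self.feature is None:
            return self.value
        child = self.left if x[self.feature] <= self.threshold else self.right
        return child.predict(x)


def recycle(node: Node, samples: List[Tuple[List[float], float]], tol: float) -> Node:
    """Keep subtrees whose mean absolute error on new-task (state, Q) samples
    is within `tol`; restructure drifted leaves from the new task's evidence."""
    if not samples:
        return node
    err = sum(abs(node.predict(x) - q) for x, q in samples) / len(samples)
    if err <= tol:
        return node  # this part of the old Q-function still transfers
    if node.feature is not None:
        # Route the new-task samples down the existing splits and recurse.
        lo = [(x, q) for x, q in samples if x[node.feature] <= node.threshold]
        hi = [(x, q) for x, q in samples if x[node.feature] > node.threshold]
        node.left = recycle(node.left, lo, tol)
        node.right = recycle(node.right, hi, tol)
        return node
    # Drifted leaf: discard the old estimate and refit to the new samples.
    node.value = sum(q for _, q in samples) / len(samples)
    return node
```

For example, if a tree learned on the old task predicts 1.0 on the left branch and 5.0 on the right, and new-task samples agree on the left but now indicate about 2.0 on the right, `recycle` keeps the left leaf and resets only the right one.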
CITATION STYLE
Ramon, J., Driessens, K., & Croonenborghs, T. (2007). Transfer learning in reinforcement learning problems through partial policy recycling. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4701 LNAI, pp. 699–707). Springer Verlag. https://doi.org/10.1007/978-3-540-74958-5_70