Improving reinforcement learning by using sequence trees

19 citations · 35 Mendeley readers

This article is free to access.
Abstract

This paper proposes a novel approach to discover options in the form of stochastic conditionally terminating sequences, and shows how such sequences can be integrated into the reinforcement learning framework to improve learning performance. The method utilizes stored histories of possible optimal policies and constructs a specialized tree structure during the learning process. The constructed tree facilitates the identification of frequently used action sequences together with the states that are visited during the execution of such sequences. The tree is constantly updated and used to implicitly run the corresponding options. The effectiveness of the method is demonstrated empirically through extensive experiments on domains with different properties. © 2010 The Author(s).
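The abstract describes a tree built from stored episode histories that counts frequently used action sequences, along with the states visited while executing them, so they can later serve as macro-actions (options). The sketch below illustrates that general idea under stated assumptions; the class names, method names, and thresholds are illustrative and are not the authors' actual data structure or API.

```python
# A minimal sketch, assuming the sequence tree stores action sequences
# observed in stored (near-optimal) episode histories, counts how often
# each sequence occurs, and records the states visited along the way.
# This is an illustration of the idea, not the paper's implementation.

from collections import defaultdict


class SequenceTreeNode:
    def __init__(self):
        self.count = 0                                  # times this action prefix was observed
        self.states = set()                             # states seen while executing this prefix
        self.children = defaultdict(SequenceTreeNode)   # action -> child node


class SequenceTree:
    """Accumulates action sequences from stored episode histories."""

    def __init__(self):
        self.root = SequenceTreeNode()

    def add_episode(self, episode):
        """episode: list of (state, action) pairs from a stored trajectory.

        Every suffix of the episode is inserted, so sequences that start
        anywhere in the trajectory are counted.
        """
        for start in range(len(episode)):
            node = self.root
            for state, action in episode[start:]:
                node = node.children[action]
                node.count += 1
                node.states.add(state)

    def frequent_sequences(self, min_count, max_len=10):
        """Return action sequences observed at least `min_count` times."""
        results = []

        def recurse(node, prefix):
            if len(prefix) > max_len:
                return
            if prefix and node.count >= min_count:
                results.append((tuple(prefix), node.count, frozenset(node.states)))
            for action, child in node.children.items():
                recurse(child, prefix + [action])

        recurse(self.root, [])
        return results


# Usage: feed in trajectories, then treat frequent sequences as candidate options.
tree = SequenceTree()
tree.add_episode([("s0", "up"), ("s1", "up"), ("s2", "right")])
tree.add_episode([("s3", "up"), ("s1", "up"), ("s2", "right")])
for actions, count, states in tree.frequent_sequences(min_count=2):
    print(actions, count, sorted(states))
```

In this reading, the recorded states act as initiation/continuation conditions for a candidate sequence, which is what makes the extracted sequences conditionally terminating rather than fixed macro-actions.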

Citation (APA)

Girgin, S., Polat, F., & Alhajj, R. (2010). Improving reinforcement learning by using sequence trees. Machine Learning, 81(3), 283–331. https://doi.org/10.1007/s10994-010-5182-y
