Speeding-up reinforcement learning with multi-step actions

Abstract

In recent years, hierarchical concepts of temporal abstraction have been integrated into the reinforcement learning framework to improve scalability. However, existing approaches are limited to domains where a decomposition into subtasks is known a priori. In this paper we propose explicitly selecting time-scale-related actions when no subgoal-related abstract actions are available. This is realised with multi-step actions on different time scales that are combined into a single action set. The special structure of this action set is exploited in the MSAQ-learning algorithm. By learning simultaneously on several explicitly specified time scales, learning speed can be improved considerably. This is demonstrated on two benchmark problems. © Springer-Verlag Berlin Heidelberg 2002.
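The abstract only sketches the method, but the core idea — a multi-step action (a, n) repeats a primitive action a for n steps, and a single execution of (a, n) also contains the experience of every shorter (a, m) with m ≤ n, so all time scales are updated at once — can be illustrated with a short Q-learning sketch. Everything here (`ChainEnv`, `msaq_episode`, the scale set `SCALES`, the hyperparameters) is an illustrative assumption, not the paper's notation or exact update rule:

```python
import random
from collections import defaultdict

GAMMA, ALPHA, EPS = 0.9, 0.1, 0.1
SCALES = (1, 2, 4)  # time scales combined into one action set

class ChainEnv:
    """Toy chain world (illustrative, not from the paper): states 0..n-1,
    primitive actions -1/+1, reward 1 on reaching the rightmost state."""
    def __init__(self, n=8):
        self.n = n
    def reset(self):
        self.s = 0
        return self.s
    def step(self, a):
        self.s = max(0, min(self.n - 1, self.s + a))
        done = self.s == self.n - 1
        return self.s, float(done), done

def msaq_episode(env, Q, primitive_actions, max_steps=500):
    """One episode of Q-learning over a multi-step action set.

    A multi-step action (a, n) repeats primitive action a for n steps.
    Because executing (a, n) contains the experience of every shorter
    (a, m) with m <= n, all time scales are updated from one execution."""
    actions = [(a, n) for a in primitive_actions for n in SCALES]
    s, done, t = env.reset(), False, 0
    while not done and t < max_steps:
        # epsilon-greedy over the combined action set (random tie-breaking)
        if random.random() < EPS:
            a, n = random.choice(actions)
        else:
            best = max(Q[(s, an)] for an in actions)
            a, n = random.choice([an for an in actions if Q[(s, an)] == best])

        # execute primitive action a for up to n steps, recording
        # intermediate states and discounted partial returns
        trajectory = [(s, 0.0)]
        ret, disc = 0.0, 1.0
        for _ in range(n):
            s2, r, done = env.step(a)
            ret += disc * r
            disc *= GAMMA
            trajectory.append((s2, ret))
            t += 1
            if done:
                break

        # update every time scale m that was actually completed
        steps = len(trajectory) - 1
        for m in SCALES:
            if m > steps:
                break
            s_m, ret_m = trajectory[m]
            target = ret_m
            if not (done and m == steps):  # bootstrap from non-terminal s_m
                target += (GAMMA ** m) * max(Q[(s_m, an)] for an in actions)
            Q[(s, (a, m))] += ALPHA * (target - Q[(s, (a, m))])
        s = trajectory[-1][0]
    return Q

# usage: train on the toy chain for a few hundred episodes
random.seed(0)
Q = defaultdict(float)
env = ChainEnv()
for _ in range(200):
    msaq_episode(env, Q, primitive_actions=(-1, +1))
```

The point of the shared structure is the inner update loop: one rollout of the longest scale yields valid one-step, two-step, and four-step transitions, so the agent learns on all time scales simultaneously without extra environment interaction.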

Citation (APA)

Schoknecht, R., & Riedmiller, M. (2002). Speeding-up reinforcement learning with multi-step actions. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2415 LNCS, pp. 813–818). Springer Verlag. https://doi.org/10.1007/3-540-46084-5_132
