Hierarchical reinforcement learning (HRL) has found a wide range of applications in recent years. A central research topic in this area is designing mechanisms for the autonomous acquisition of skills. Although many methods have been proposed toward this goal, few have proven successful in both performance and efficiency, in terms of the time complexity of the algorithm. In this paper, a linear-time algorithm is proposed to find subgoal states of the environment in the early episodes of learning. Having subgoals available in the early phases of a learning task makes it possible to build skills that dramatically increase the convergence rate of the learning process. © 2009 Springer Berlin Heidelberg.
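The abstract does not spell out the algorithm, but the title points to strongly connected components (SCCs) of the agent's state-transition graph, and Tarjan's algorithm computes SCCs in O(V + E), i.e. linear time. The sketch below is an illustration under that assumption: the toy transition graph and the `subgoal_candidates` criterion (states whose outgoing transitions cross component boundaries) are illustrative choices made here, not the paper's exact definitions.

```python
def tarjan_scc(graph):
    """Strongly connected components of a directed graph in O(V + E)
    time via an iterative version of Tarjan's algorithm.
    `graph` maps each state to a list of successor states."""
    index, lowlink, on_stack = {}, {}, set()
    stack, sccs, counter = [], [], [0]

    def strongconnect(v):
        work = [(v, iter(graph.get(v, ())))]
        index[v] = lowlink[v] = counter[0]; counter[0] += 1
        stack.append(v); on_stack.add(v)
        while work:
            node, it = work[-1]
            advanced = False
            for w in it:
                if w not in index:
                    # Tree edge: descend into unvisited successor.
                    index[w] = lowlink[w] = counter[0]; counter[0] += 1
                    stack.append(w); on_stack.add(w)
                    work.append((w, iter(graph.get(w, ()))))
                    advanced = True
                    break
                elif w in on_stack:
                    # Back edge into the current DFS stack.
                    lowlink[node] = min(lowlink[node], index[w])
            if not advanced:
                work.pop()
                if work:
                    parent = work[-1][0]
                    lowlink[parent] = min(lowlink[parent], lowlink[node])
                if lowlink[node] == index[node]:
                    # `node` is the root of an SCC: pop it off whole.
                    comp = []
                    while True:
                        w = stack.pop(); on_stack.discard(w)
                        comp.append(w)
                        if w == node:
                            break
                    sccs.append(comp)

    for v in list(graph):
        if v not in index:
            strongconnect(v)
    return sccs


def subgoal_candidates(graph, sccs):
    """Illustrative criterion (an assumption, not the paper's definition):
    states with a transition into a different SCC act as 'doorways'
    between regions of the state space."""
    comp = {s: i for i, c in enumerate(sccs) for s in c}
    return {s for s in graph for t in graph[s] if comp[s] != comp[t]}


# Toy state-transition graph: two cyclic regions joined by edge 2 -> 3.
transitions = {0: [1], 1: [2], 2: [0, 3], 3: [4], 4: [5], 5: [3]}
components = tarjan_scc(transitions)
print(subgoal_candidates(transitions, components))  # state 2 bridges the regions
```

Running Tarjan's algorithm once over the observed transitions keeps the whole subgoal-discovery step linear in the number of states and transitions, which matches the time-complexity claim in the abstract.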
CITATION STYLE
Kazemitabar, S. J., & Beigy, H. (2009). Using strongly connected components as a basis for autonomous skill acquisition in reinforcement learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5551 LNCS, pp. 794–803). https://doi.org/10.1007/978-3-642-01507-6_89