Model approximation for HEXQ hierarchical reinforcement learning


Abstract

HEXQ is a reinforcement learning algorithm that discovers hierarchical structure automatically. The generated task hierarchy represents the problem at different levels of abstraction. In this paper we extend HEXQ with heuristics that automatically approximate the structure of the task hierarchy. Construction, learning and execution time, as well as storage requirements of a task hierarchy may be significantly reduced and traded off against solution quality. © Springer-Verlag Berlin Heidelberg 2004.
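The storage trade-off the abstract mentions can be illustrated with a toy calculation. The sketch below is not the HEXQ algorithm itself; it is a hypothetical rooms-style grid world where a two-level hierarchy reuses one intra-room sub-task table across all rooms, so the hierarchical tables are much smaller than a single flat Q-table. All numbers and function names are illustrative assumptions.

```python
# Hypothetical illustration (not the HEXQ implementation): compare the
# number of Q-table entries needed by a flat learner with those needed by
# a two-level hierarchy in a grid world made of identical rooms.

def flat_table_size(n_rooms: int, cells_per_room: int, n_actions: int) -> int:
    """Flat Q-table: one entry per (room, cell, primitive action)."""
    return n_rooms * cells_per_room * n_actions


def hier_table_size(n_rooms: int, cells_per_room: int,
                    n_actions: int, n_exits: int) -> int:
    """Two-level hierarchy.

    Lower level: one shared intra-room table per exit option,
    reused by every room (cells x actions x exits).
    Upper level: abstract Q-table over rooms and exit options.
    """
    lower = cells_per_room * n_actions * n_exits
    upper = n_rooms * n_exits
    return lower + upper


if __name__ == "__main__":
    # 100 rooms of 25 cells, 4 primitive actions, 4 exits per room.
    flat = flat_table_size(100, 25, 4)        # 10000 entries
    hier = hier_table_size(100, 25, 4, 4)     # 400 + 400 = 800 entries
    print(flat, hier)
```

With these (assumed) numbers the hierarchy needs 800 entries against 10 000 for the flat table; approximating the hierarchy further, as the paper proposes, shrinks these tables again at some cost in solution quality.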

Citation (APA)
Hengst, B. (2004). Model approximation for HEXQ hierarchical reinforcement learning. In Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science) (Vol. 3201, pp. 144–155). Springer Verlag. https://doi.org/10.1007/978-3-540-30115-8_16
