Learning graph-based representations for continuous reinforcement learning domains



Abstract

Graph-based domain representations have been used in discrete reinforcement learning domains as a basis for, e.g., autonomous skill discovery and representation learning. These abilities are also highly relevant for learning in domains with structured, continuous state spaces, as they allow complex problems to be decomposed into simpler ones and reduce the burden of hand-engineering features. However, since graphs are inherently discrete structures, extending these approaches to continuous domains is not straightforward. We argue that graphs should be seen as discrete, generative models of continuous domains. Based on this intuition, we define the likelihood of a graph for a given set of observed state transitions and derive a heuristic method, entitled fige, that learns graph-based representations of continuous domains with large likelihood. Based on fige, we present a new skill discovery approach for continuous domains. Furthermore, we show that representation learning can be considerably improved by using fige. © 2013 Springer-Verlag.
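To make the "graph as a discrete, generative model" intuition concrete, the following is a toy sketch, not the paper's actual fige algorithm: graph nodes are treated as prototype states in the continuous space, edges as admissible transitions, and an observed transition (s, s') is scored under an assumed isotropic Gaussian around the best-matching edge. The function name, the noise scale `sigma`, and the max-over-edges scoring rule are all illustrative assumptions.

```python
import numpy as np

def transition_log_likelihood(nodes, edges, transitions, sigma=0.5):
    """Toy log-likelihood of observed continuous transitions under a graph
    viewed as a generative model (illustrative sketch, not the paper's fige).

    nodes:       (K, d) array of node positions in the continuous state space
    edges:       list of (i, j) node-index pairs (directed edges)
    transitions: (N, 2, d) array of observed (s, s') state pairs
    sigma:       assumed isotropic Gaussian noise scale (hypothetical)
    """
    total = 0.0
    for s, s_next in transitions:
        # Score (s, s') against every edge; keep the best-matching one,
        # i.e. the edge most likely to have "generated" this transition.
        best = -np.inf
        for i, j in edges:
            sq_dist = np.sum((s - nodes[i]) ** 2) + np.sum((s_next - nodes[j]) ** 2)
            best = max(best, -sq_dist / (2.0 * sigma ** 2))
        total += best
    return total
```

Under this view, a heuristic like fige can be read as searching for node positions and edges that make such a likelihood large for the observed transitions; a graph whose edges align with the observed transitions scores higher than an arbitrary one.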

APA

Metzen, J. H. (2013). Learning graph-based representations for continuous reinforcement learning domains. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8188 LNAI, pp. 81–96). https://doi.org/10.1007/978-3-642-40988-2_6
