Learning Models of Relational MDPs Using Graph Kernels


Abstract

Relational reinforcement learning is the application of reinforcement learning to structured state descriptions. Model-based methods learn a policy from a known model that comprises a description of the actions and their effects as well as the reward function. If the model is initially unknown, one can learn the model first and then apply a model-based method (indirect reinforcement learning). In this paper, we propose a method for model learning based on a combination of several SVMs using graph kernels. Nondeterministic processes can be handled by combining the kernel approach with a clustering technique. We demonstrate the validity of the approach through a range of experiments on various Blocksworld scenarios.
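The abstract's core idea is to compare structured states via a graph kernel, which an SVM can then use to predict action effects. As a minimal, hedged sketch (not the paper's actual kernel or learner), the snippet below encodes Blocksworld states as sets of labelled relations and compares them with a simple label-histogram intersection kernel, which is a valid positive-definite kernel that could in principle be plugged into an SVM:

```python
# Hypothetical sketch: Blocksworld states as labelled relation sets,
# compared with a histogram-intersection kernel. The paper's method uses
# richer graph kernels; this only illustrates the kernel-on-states idea.
from collections import Counter

def state_graph(on_pairs, clear_blocks):
    """Encode a Blocksworld state as labelled edges (relation, arg1, arg2)."""
    edges = [("on", a, b) for a, b in on_pairs]
    edges += [("clear", b, b) for b in clear_blocks]
    return edges

def label_histogram(graph):
    # Count relation labels; a crude stand-in for richer graph features.
    return Counter(label for label, _, _ in graph)

def kernel(g1, g2):
    """Histogram-intersection kernel over relation labels (positive definite)."""
    h1, h2 = label_histogram(g1), label_histogram(g2)
    return sum(min(h1[k], h2[k]) for k in set(h1) | set(h2))

# Two example states: a 3-block tower vs. a 2-block tower with a spare block.
s1 = state_graph([("a", "b"), ("b", "c")], ["a"])
s2 = state_graph([("a", "b")], ["a", "c"])
print(kernel(s1, s1))  # 3
print(kernel(s1, s2))  # 2
```

A real implementation would replace the histogram with a kernel sensitive to graph structure (e.g., walks or subgraphs), so that states with the same relation counts but different block arrangements are distinguished.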

Citation (APA)

Halbritter, F., & Geibel, P. (2007). Learning models of relational MDPs using graph kernels. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4827 LNAI, pp. 409–419). Springer. https://doi.org/10.1007/978-3-540-76631-5_39
