Solving relational and first-order logical Markov decision processes: A survey

Abstract

In this chapter we survey representations and techniques for Markov decision processes, reinforcement learning, and dynamic programming in worlds explicitly modeled in terms of objects and relations. Such relational worlds appear everywhere: in planning domains, games, real-world indoor scenes, and many more settings. Relational representations allow for expressive and natural data structures that capture objects and relations explicitly, enabling generalization over objects and relations, as well as over similar problems that differ only in the number of objects. The field was recently surveyed comprehensively in van Otterlo (2009b), and here we describe a large portion of the main approaches. We discuss model-free techniques (both value-based and policy-based) and model-based dynamic programming techniques. Several other aspects are covered, such as models and hierarchies, and we end with several recent efforts and future directions.
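To make the generalization property concrete, here is a minimal, hypothetical sketch (not taken from the survey) of a relational state representation for a blocks-world domain. A state is a set of ground atoms, and a relational query such as "which blocks are clear?" applies unchanged to problems with any number of objects; all names are illustrative.

```python
# Hypothetical sketch: relational states as sets of ground atoms.
# on(X, Y) means block X sits directly on block Y.

def make_state(atoms):
    """A relational state is an (immutable) set of ground atoms,
    each atom a tuple: (predicate, arg1, arg2, ...)."""
    return frozenset(atoms)

def clear_blocks(state):
    """Blocks with nothing on top of them. The same relational query
    generalizes over states with any number of objects."""
    blocks = {a[1] for a in state if a[0] == "block"}
    covered = {a[2] for a in state if a[0] == "on"}  # blocks under another
    return blocks - covered

# A 3-block tower and a 5-block tower: one query handles both.
s3 = make_state([("block", f"b{i}") for i in range(3)] +
                [("on", "b0", "b1"), ("on", "b1", "b2")])
s5 = make_state([("block", f"b{i}") for i in range(5)] +
                [("on", f"b{i}", f"b{i+1}") for i in range(4)])

print(clear_blocks(s3))  # only the top of the tower is clear
print(clear_blocks(s5))  # same policy-relevant abstraction, more objects
```

A propositional encoding would instead need a separate feature per block pair, so the representation (and any learned value function over it) would change whenever the number of blocks changes; the relational form does not.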

Citation (APA):

van Otterlo, M. (2012). Solving relational and first-order logical Markov decision processes: A survey. In Adaptation, Learning, and Optimization (Vol. 12, pp. 253–292). Springer Verlag. https://doi.org/10.1007/978-3-642-27645-3_8
