Feature Markov Decision Processes

Abstract

General purpose intelligent learning agents cycle through (complex, non-MDP) sequences of observations, actions, and rewards. On the other hand, reinforcement learning is well-developed for small finite-state Markov Decision Processes (MDPs). So far it is an art performed by human designers to extract the right state representation out of the bare observations, i.e. to reduce the agent setup to the MDP framework. Before we can think of mechanizing this search for suitable MDPs, we need a formal objective criterion. The main contribution of this article is to develop such a criterion. I also integrate the various parts into one learning algorithm. Extensions to more realistic dynamic Bayesian networks are developed in the companion article [Hut09].
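The core idea of the paper is to score candidate feature maps Φ (which compress an observation/action/reward history into a discrete state) by how cheaply the induced state and reward sequences can be coded. The sketch below illustrates this with a simple MDL-style cost: empirical code length of the induced transitions and rewards plus a half-log-n penalty per free parameter. This is only an illustration of the flavor of such a criterion, not Hutter's exact Cost(Φ|h) formula; the history format and the `phi` interface are assumptions made for this example.

```python
import math
from collections import defaultdict

def mdl_cost(history, phi):
    """Score a candidate feature map `phi` by an MDL-style criterion.
    Illustrative sketch only, not the paper's exact cost function.

    `history` is a list of (observation, action, reward) triples;
    `phi` maps a history prefix (a list of such triples) to a state."""
    # Induced state at each time step.
    states = [phi(history[:t + 1]) for t in range(len(history))]
    trans = defaultdict(lambda: defaultdict(int))   # (s, a) -> next-state counts
    rews = defaultdict(lambda: defaultdict(int))    # (s, a) -> reward counts
    for t in range(len(history) - 1):
        a = history[t][1]
        trans[(states[t], a)][states[t + 1]] += 1
        rews[(states[t], a)][history[t + 1][2]] += 1

    n = max(len(history) - 1, 1)

    def code_length(table):
        # Empirical (entropy) code length of the sequence given counts,
        # plus 0.5 * log2(n) bits per free multinomial parameter.
        cl, nparams = 0.0, 0
        for counts in table.values():
            total = sum(counts.values())
            nparams += len(counts) - 1
            for c in counts.values():
                cl += -c * math.log2(c / total)
        return cl + 0.5 * nparams * math.log2(n)

    return code_length(trans) + code_length(rews)
```

A feature map that makes transitions and rewards (nearly) deterministic gets a low cost, while one that discards reward-relevant information pays for the resulting unpredictability; searching over candidate maps for the minimizer is the mechanization the abstract calls for.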

Citation (APA)

Hutter, M. (2009). Feature Markov decision processes. In Proceedings of the 2nd Conference on Artificial General Intelligence, AGI 2009 (pp. 61–66). Atlantis Press. https://doi.org/10.2991/agi.2009.30
