Forward and backward feature selection in gradient-based MDP algorithms


Abstract

In problems modeled as Markov Decision Processes (MDP), knowledge transfer is related to the notion of generalization and state abstraction. Abstraction can be obtained through factored representation by describing states with a set of features. Thus, the definition of the best action to be taken in a state can be easily transferred to similar states, i.e., states with similar features. In this paper we compare forward and backward greedy feature selection to find an appropriate compact set of features for such abstraction, thus facilitating the transfer of knowledge to new problems. We also present heuristic versions of both approaches and compare all of the approaches within a discrete simulated navigation problem. © 2013 Springer-Verlag.
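The greedy forward and backward selection schemes compared in the abstract can be illustrated generically. The sketch below is not the authors' algorithm: it assumes only an abstract scoring function (in the paper this would be tied to the gradient-based MDP learner's performance) and shows the two greedy loops — forward selection adds the feature with the largest score gain until no feature helps, backward elimination drops the feature whose removal hurts least until any removal degrades the score.

```python
def forward_select(features, score, max_features=None):
    """Greedy forward selection: repeatedly add the feature that most
    improves `score(subset)`; stop when no addition improves it."""
    selected = []
    remaining = list(features)
    best_score = score(selected)
    while remaining and (max_features is None or len(selected) < max_features):
        # Evaluate the score of adding each remaining feature.
        new_score, best_f = max((score(selected + [f]), f) for f in remaining)
        if new_score <= best_score:
            break  # no candidate improves the current subset
        selected.append(best_f)
        remaining.remove(best_f)
        best_score = new_score
    return selected


def backward_select(features, score):
    """Greedy backward elimination: start from the full set and
    repeatedly drop the least useful feature while the score does
    not decrease."""
    selected = list(features)
    best_score = score(selected)
    while selected:
        # Evaluate the score of removing each currently selected feature.
        new_score, worst = max(
            (score([f for f in selected if f != g]), g) for g in selected
        )
        if new_score < best_score:
            break  # every removal would hurt the score
        selected.remove(worst)
        best_score = new_score
    return selected
```

With a toy score that rewards two "relevant" features and penalizes subset size, both procedures converge to the same compact subset; in general they can differ, which is the comparison the paper studies:

```python
score = lambda s: len(set(s) & {"a", "b"}) - 0.1 * len(s)
forward_select(["a", "b", "c"], score)   # selects {"a", "b"}
backward_select(["a", "b", "c"], score)  # also selects {"a", "b"}
```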

Citation (APA)

Bogdan, K. O. M., & Da Silva, V. F. (2013). Forward and backward feature selection in gradient-based MDP algorithms. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7629 LNAI, pp. 383–394). https://doi.org/10.1007/978-3-642-37807-2_33
