Cosine policy iteration for solving infinite-horizon Markov decision processes


Abstract

Policy Iteration (PI) is a widely used traditional method for solving Markov Decision Processes (MDPs). In this paper, the cosine policy iteration (CPI) method for solving complex problems formulated as infinite-horizon MDPs is proposed. CPI combines the advantages of two methods: i) the Cosine Simplex Method (CSM), which is based on the Karush–Kuhn–Tucker (KKT) optimality conditions and rapidly finds an initial policy close to the optimal solution, and ii) PI, which is able to achieve the global optimum. In order to apply CSM to this kind of problem, a well-known linear programming (LP) formulation is applied and particular features are derived in this paper. The results obtained show that CPI solves MDPs in fewer iterations than traditional PI. © 2009 Springer-Verlag Berlin Heidelberg.
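The PI component that CPI builds on can be illustrated with a minimal sketch. The following is not the paper's CPI method (the CSM-based seeding of the initial policy is omitted); it is plain policy iteration on a made-up two-state, two-action MDP, alternating exact policy evaluation with greedy improvement until the policy stops changing:

```python
import numpy as np

# Toy infinite-horizon MDP (illustrative numbers, not from the paper):
# P[a][s][s'] = transition probability, R[s][a] = immediate reward.
P = np.array([
    [[0.9, 0.1], [0.2, 0.8]],   # transitions under action 0
    [[0.5, 0.5], [0.1, 0.9]],   # transitions under action 1
])
R = np.array([[5.0, 10.0],
              [-1.0, 2.0]])
gamma = 0.9                      # discount factor
n_states = 2

policy = np.zeros(n_states, dtype=int)  # arbitrary initial policy
while True:
    # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
    P_pi = np.array([P[policy[s], s] for s in range(n_states)])
    r_pi = np.array([R[s, policy[s]] for s in range(n_states)])
    v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
    # Policy improvement: greedy one-step lookahead on Q(s, a).
    q = R + gamma * np.einsum('ast,t->sa', P, v)
    new_policy = q.argmax(axis=1)
    if np.array_equal(new_policy, policy):
        break                    # policy is stable, hence optimal
    policy = new_policy
```

CPI's contribution, per the abstract, is to replace the arbitrary initial policy above with one obtained from CSM via the LP formulation of the MDP, cutting the number of improvement iterations needed.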

Citation (APA)

Frausto-Solis, J., Santiago, E., & Mora-Vargas, J. (2009). Cosine policy iteration for solving infinite-horizon Markov decision processes. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5845 LNAI, pp. 75–86). https://doi.org/10.1007/978-3-642-05258-3_7
