The existing classification-based policy iteration (CBPI) algorithms can be divided into two categories: direct policy iteration (DPI) methods, which directly assign the output of the classifier (the approximate greedy policy w.r.t. the current policy) to the next policy, and conservative policy iteration (CPI) methods, in which the new policy is a mixture distribution of the current policy and the output of the classifier. The conservative policy update gives CPI a desirable property: the guarantee that the policies it generates improve at each iteration. We provide a detailed algorithmic and theoretical comparison of these two classes of CBPI algorithms. Our results reveal that, in order to achieve the same level of accuracy, CPI requires more iterations, and thus more samples, than DPI. Furthermore, CPI may converge to suboptimal policies whose performance is no better than that achieved by DPI.
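As a rough illustration of the two update rules contrasted in the abstract, the sketch below (not from the paper) shows a DPI-style update, which replaces the policy with the classifier's approximate greedy output, next to a CPI-style update, which mixes that output into the current stochastic policy with a step size alpha. The tabular policy representation, the helper greedy_policy_from_classifier, and the choice of alpha are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical tabular setting: a stochastic policy is an |S| x |A| matrix
# whose rows are probability distributions over actions.
N_STATES, N_ACTIONS = 5, 3
rng = np.random.default_rng(0)

def greedy_policy_from_classifier(scores):
    """Deterministic policy (one-hot rows) choosing the action the classifier
    scores highest in each state -- a stand-in for the approximate greedy
    policy w.r.t. the current policy."""
    policy = np.zeros_like(scores)
    policy[np.arange(scores.shape[0]), scores.argmax(axis=1)] = 1.0
    return policy

def dpi_update(classifier_scores):
    """DPI-style update: the next policy is the classifier's output itself."""
    return greedy_policy_from_classifier(classifier_scores)

def cpi_update(current_policy, classifier_scores, alpha=0.1):
    """CPI-style update: the next policy is a mixture of the current policy
    and the classifier's output, pi_{k+1} = (1 - alpha) * pi_k + alpha * pi'."""
    greedy = greedy_policy_from_classifier(classifier_scores)
    return (1.0 - alpha) * current_policy + alpha * greedy

# Toy usage: a uniform starting policy and made-up classifier scores.
pi_k = np.full((N_STATES, N_ACTIONS), 1.0 / N_ACTIONS)
scores = rng.normal(size=(N_STATES, N_ACTIONS))
print("DPI next policy:\n", dpi_update(scores))
print("CPI next policy:\n", cpi_update(pi_k, scores, alpha=0.2))
```

The small mixture coefficient is what makes the CPI update conservative: each iteration moves the policy only a fraction of the way toward the greedy policy, which underlies the per-iteration improvement guarantee but also the larger iteration count discussed above.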
Ghavamzadeh, M., & Lazaric, A. (2012). Conservative and Greedy Approaches to Classification-Based Policy Iteration. In Proceedings of the 26th AAAI Conference on Artificial Intelligence, AAAI 2012 (pp. 914–920). AAAI Press. https://doi.org/10.1609/aaai.v26i1.8304