We show that, for several variants of partially observable Markov decision processes, polynomial-time algorithms for finding control policies are unlikely to, or provably do not, guarantee finding policies within a constant factor or a constant summand of optimal. Here "unlikely" means "unless some complexity classes collapse," where the collapses considered are P = NP, P = PSPACE, or P = EXP; since P ≠ EXP is already known, the results conditioned on that collapse hold unconditionally. Until or unless the remaining collapses are shown to hold, any control-policy designer must choose between such performance guarantees and efficient computation.
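To make the object of these hardness results concrete, the sketch below shows what "finding a control policy within a constant factor of optimal" means for a toy instance. The two-state POMDP, the memoryless (observation-to-action) policy class, and the brute-force search are all illustrative assumptions, not the paper's construction; the point is only that exhaustive policy search is exponential, and the paper's results say no polynomial-time shortcut can guarantee even a constant-factor or constant-summand approximation unless the stated collapses hold.

    import itertools

    # Hypothetical two-state, two-action, two-observation POMDP. All
    # numbers are invented for illustration; the paper proves hardness
    # results and does not prescribe any particular instance.
    STATES = [0, 1]
    ACTIONS = [0, 1]
    OBSERVATIONS = [0, 1]

    # T[s][a] = list of (next_state, probability)
    T = {0: {0: [(0, 0.9), (1, 0.1)], 1: [(1, 1.0)]},
         1: {0: [(0, 0.5), (1, 0.5)], 1: [(1, 1.0)]}}
    # R[s][a] = immediate reward for taking action a in state s
    R = {0: {0: 1.0, 1: 0.0},
         1: {0: 0.0, 1: 2.0}}
    # obs[s] = observation emitted in state s (deterministic here;
    # partial observability comes from aliasing in harder instances)
    obs = {0: 0, 1: 1}

    def policy_value(policy, horizon, start=0):
        """Exact expected total reward of a memoryless
        (observation -> action) policy, computed by propagating the
        state distribution forward for `horizon` steps."""
        dist = {start: 1.0}
        total = 0.0
        for _ in range(horizon):
            nxt = {}
            for s, p in dist.items():
                a = policy[obs[s]]
                total += p * R[s][a]
                for s2, q in T[s][a]:
                    nxt[s2] = nxt.get(s2, 0.0) + p * q
            dist = nxt
        return total

    # Brute force over all |ACTIONS|^|OBSERVATIONS| memoryless
    # policies: exponential in the number of observations.
    policies = (dict(zip(OBSERVATIONS, choice))
                for choice in itertools.product(ACTIONS,
                                                repeat=len(OBSERVATIONS)))
    best = max(policies, key=lambda pi: policy_value(pi, horizon=5))
    print(best, policy_value(best, horizon=5))

Even this toy search enumerates |ACTIONS|^|OBSERVATIONS| policies, and richer policy classes (history-dependent policies, for instance) only grow faster, which is why the trade-off stated in the abstract — performance guarantees or efficient computation — cannot be escaped short of a complexity-class collapse.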
Lusena, C., Goldsmith, J., & Mundhenk, M. (2001). Nonapproximability results for partially observable Markov decision processes. Journal of Artificial Intelligence Research, 14, 83–103. https://doi.org/10.1613/jair.714