Safe stochastic planning: Planning to avoid fatal states

Abstract

Markov decision processes (MDPs) are a standard model in Artificial Intelligence planning, used to construct optimal or near-optimal policies or plans. One issue that is often missing from discussions of planning in stochastic environments is how MDPs handle safety constraints expressed as the probability of reaching threat states. We introduce a method for finding a value-optimal policy that satisfies the safety constraint, and report on the validity and effectiveness of our method through a set of experiments. © 2009 Springer Berlin Heidelberg.
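To make the problem setup concrete, the sketch below illustrates the kind of constraint the abstract describes; it is not the method introduced in the paper. It evaluates one fixed policy on an invented four-state MDP, computing both its expected discounted value and the probability of ever reaching a designated fatal state, and then checks that probability against an assumed safety threshold. The transition matrix, rewards, and threshold are all illustrative assumptions.

```python
# Minimal sketch (not the paper's algorithm): evaluate a fixed policy on a toy MDP
# and check a safety constraint on the probability of reaching a fatal state.
import numpy as np

# Toy MDP under one fixed policy: states 0 and 1 are ordinary,
# state 2 is an absorbing goal, state 3 is an absorbing fatal state.
P = np.array([
    [0.0, 0.8, 0.1, 0.1],   # from state 0
    [0.1, 0.0, 0.7, 0.2],   # from state 1
    [0.0, 0.0, 1.0, 0.0],   # goal is absorbing
    [0.0, 0.0, 0.0, 1.0],   # fatal state is absorbing
])
R = np.array([0.0, 0.0, 1.0, 0.0])   # reward collected in each state
gamma = 0.95
fatal = 3
safety_threshold = 0.25              # assumed bound on P(reach fatal state)

# Policy evaluation: iterate V <- R + gamma * P V until convergence.
V = np.zeros(4)
for _ in range(1000):
    V = R + gamma * (P @ V)

# Probability of ever reaching the fatal state: fixed point of
# p(s) = sum_{s'} P[s, s'] * p(s'), with p(fatal) pinned to 1.
p_fatal = np.zeros(4)
p_fatal[fatal] = 1.0
for _ in range(1000):
    p_fatal = P @ p_fatal
    p_fatal[fatal] = 1.0

print("value of start state:", V[0])
print("probability of reaching the fatal state:", p_fatal[0])
print("policy satisfies the safety constraint:", p_fatal[0] <= safety_threshold)
```

In this toy instance the reach probability from state 0 is about 0.28, so the policy would be rejected under the assumed 0.25 threshold even if its value were high; the paper's contribution is a method for finding the best-value policy among those that pass such a test.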

Citation (APA)

Ren, H., Bitaghsir, A. A., & Barley, M. (2009). Safe stochastic planning: Planning to avoid fatal states. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4324 LNAI, pp. 101–115). https://doi.org/10.1007/978-3-642-04879-1_8
