Policy learning for time-bounded reachability in continuous-time Markov decision processes via doubly-stochastic gradient ascent

Abstract

Continuous-time Markov decision processes are an important class of models in a wide range of applications, ranging from cyber-physical systems to synthetic biology. A central problem is how to devise a policy to control the system in order to maximise the probability of satisfying a set of temporal logic specifications. Here we present a novel approach based on statistical model checking and an unbiased estimation of a functional gradient in the space of possible policies. The statistical approach has several advantages over conventional approaches based on uniformisation, as it can also be applied when the model is replaced by a black box, and does not suffer from state-space explosion. The use of a stochastic gradient to guide our search considerably improves the efficiency of learning policies. We demonstrate the method on a proof-of-principle non-linear population model, showing strong performance in a non-trivial task.
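To illustrate the kind of procedure the abstract describes, the sketch below shows a minimal, simulation-based policy-gradient loop on a toy continuous-time Markov decision process. It is not the authors' algorithm: the two-action birth-death model, the rate constants, the logistic policy parameterisation, and all hyperparameters are hypothetical choices made here purely for illustration. The "doubly stochastic" flavour is reflected in the two noise sources: trajectories are sampled (statistical model checking of the time-bounded reachability property) and the policy gradient is estimated from those same samples via a likelihood-ratio (score-function) estimator.

```python
import numpy as np

# Illustrative sketch (not the paper's exact method): a two-action
# birth-death CTMDP. Action 1 boosts the birth rate at the cost of a
# higher death rate; the objective is the probability that the
# population reaches GOAL before the time horizon T.
rng = np.random.default_rng(0)
GOAL, T = 30, 10.0
BIRTH = {0: 1.0, 1: 2.0}   # per-individual birth rate under each action (hypothetical)
DEATH = {0: 0.3, 1: 0.6}   # per-individual death rate under each action (hypothetical)

def policy_prob(theta, x):
    """Probability of choosing action 1 in state x (logistic in x)."""
    z = theta[0] + theta[1] * x
    return 1.0 / (1.0 + np.exp(-z))

def simulate(theta):
    """One Gillespie-style trajectory; returns (reached goal?, score).

    `score` accumulates grad_theta log pi(a|x) over the chosen actions,
    giving a likelihood-ratio (REINFORCE-style) gradient estimator of
    the time-bounded reachability probability."""
    x, t = 10, 0.0
    score = np.zeros_like(theta)
    while t < T and 0 < x < GOAL:
        p1 = policy_prob(theta, x)
        a = int(rng.random() < p1)
        # grad log pi for a Bernoulli-logistic policy: (a - p1) * dz/dtheta
        score += (a - p1) * np.array([1.0, float(x)])
        rate_birth, rate_death = BIRTH[a] * x, DEATH[a] * x
        total = rate_birth + rate_death
        t += rng.exponential(1.0 / total)
        if t >= T:
            break
        x += 1 if rng.random() < rate_birth / total else -1
    return float(x >= GOAL), score

def gradient_ascent(theta, iters=200, batch=200, lr=0.05):
    """Doubly stochastic ascent: noise enters both through trajectory
    sampling and through the Monte Carlo estimate of the gradient."""
    for it in range(iters):
        results = [simulate(theta) for _ in range(batch)]
        rewards = np.array([r for r, _ in results])
        scores = np.array([s for _, s in results])
        grad = (rewards[:, None] * scores).mean(axis=0)
        theta = theta + lr * grad
        if it % 50 == 0:
            print(f"iter {it:3d}  est. reach prob = {rewards.mean():.3f}")
    return theta

theta_star = gradient_ascent(np.zeros(2))
print("learned policy parameters:", theta_star)
```

The estimated reachability probability printed each 50 iterations doubles as a statistical model-checking readout for the current policy; in the paper's setting this role is played by the unbiased functional-gradient machinery described in the abstract, for which the above is only a schematic stand-in.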

Citation (APA)

Bartocci, E., Bortolussi, L., Brázdil, T., Milios, D., & Sanguinetti, G. (2016). Policy learning for time-bounded reachability in continuous-time Markov decision processes via doubly-stochastic gradient ascent. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9826 LNCS, pp. 244–259). Springer Verlag. https://doi.org/10.1007/978-3-319-43425-4_17
