Markov automata allow us to model a wide range of complex real-life systems by combining continuous stochastic timing with probabilistic transitions and nondeterministic choices. Adding a reward function makes it possible to also model costs, such as a system's energy consumption. However, models of real-life systems tend to be large, and analysis methods for formalisms as expressive as Markov (reward) automata do not scale well, which limits their applicability. To address this problem, we present an abstraction technique for Markov reward automata, based on stochastic games, together with automatic refinement methods for the computation of time-bounded accumulated reward properties. Experiments show a significant speed-up and reduction in system size compared to direct analysis methods.
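The paper's technique builds a stochastic-game abstraction whose two players yield lower and upper bounds on the reward measure of interest. To illustrate the core idea only, the sketch below works on a heavily simplified discrete Markov reward chain (no continuous timing, no time bound): concrete states are grouped into abstract blocks, and a player resolving the abstraction-induced nondeterminism pessimistically (min) or optimistically (max) gives bounds that bracket the exact expected accumulated reward. The model, partition, and function names are illustrative assumptions, not the paper's actual algorithm or data.

```python
# Illustrative sketch of game-based abstraction for reward bounds.
# Concrete model and partition are made-up toy data, not from the paper.

# Concrete Markov reward chain: state -> list of (probability, successor);
# each state earns `reward[s]` on every visit until the absorbing goal.
trans = {
    0: [(0.5, 1), (0.5, 2)],
    1: [(1.0, 3)],
    2: [(0.4, 1), (0.6, 3)],
    3: [(1.0, 3)],          # absorbing goal state
}
reward = {0: 2.0, 1: 1.0, 2: 3.0, 3: 0.0}
goal = 3

def concrete_value(iters=200):
    """Expected accumulated reward until the goal, by value iteration."""
    v = {s: 0.0 for s in trans}
    for _ in range(iters):
        v = {s: 0.0 if s == goal else
                reward[s] + sum(p * v[t] for p, t in trans[s])
             for s in trans}
    return v

# Abstraction: a partition of the concrete state space into blocks.
blocks = [frozenset({0}), frozenset({1, 2}), frozenset({3})]

def abstract_bound(opt, iters=200):
    """Value iteration on the abstract game: inside each block, a player
    resolves the abstraction-induced nondeterminism with `opt`
    (min -> lower bound, max -> upper bound on the exact value)."""
    block_of = {s: b for b in blocks for s in b}
    v = {b: 0.0 for b in blocks}
    for _ in range(iters):
        v = {b: 0.0 if goal in b else
                opt(reward[s] + sum(p * v[block_of[t]] for p, t in trans[s])
                    for s in b)
             for b in blocks}
    return v

exact = concrete_value()
lower = abstract_bound(min)
upper = abstract_bound(max)
```

On this toy chain the exact value from state 0 is 4.2, bracketed by the abstract bounds; in the paper's method, a gap that is still too wide triggers automatic refinement of the partition until the bounds are tight enough.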
Braitling, B., Ferrer Fioriti, L. M., Hatefi, H., Wimmer, R., Becker, B., & Hermanns, H. (2015). Abstraction-based computation of reward measures for Markov automata. In Lecture Notes in Computer Science (Vol. 8931, pp. 172–189). Springer Verlag. https://doi.org/10.1007/978-3-662-46081-8_10