Simulation-based optimization of singularly perturbed Markov reward processes with state aggregation


Abstract

We present a simulation-based algorithm to compute the average reward of singularly perturbed Markov Reward Processes (SPMRPs) with large-scale state spaces, which depend on some sets of parameters. Compared with applying the original algorithm for general Markov Reward Processes (MRPs) to these problems, our algorithm aims to achieve faster convergence in singularly perturbed cases. The algorithm exploits the special structure of singularly perturbed Markov processes, evolves along a single sample path, and hence can be applied on-line. © Springer-Verlag Berlin Heidelberg 2005.
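The core idea referenced in the abstract, estimating the long-run average reward of a Markov reward process from a single simulated sample path, can be illustrated with a minimal sketch. This is a generic single-sample-path estimator, not the authors' SPMRP algorithm; the transition matrix `P`, reward vector `r`, and function name are illustrative assumptions.

```python
import random

def simulate_average_reward(P, r, steps=100_000, seed=0):
    """Estimate the long-run average reward of a Markov reward process
    by accumulating rewards along a single sample path of length `steps`.

    P: transition matrix as a list of rows (each row a probability vector).
    r: per-state reward list.
    """
    rng = random.Random(seed)
    state = 0
    total = 0.0
    for _ in range(steps):
        total += r[state]
        # Sample the next state from the current row of P.
        u = rng.random()
        cum = 0.0
        for nxt, p in enumerate(P[state]):
            cum += p
            if u < cum:
                state = nxt
                break
    return total / steps

# Hypothetical two-state chain with stationary distribution (2/3, 1/3),
# so the true average reward is (2/3)*1 + (1/3)*4 = 2.0.
P = [[0.9, 0.1],
     [0.2, 0.8]]
r = [1.0, 4.0]
est = simulate_average_reward(P, r)
```

Because the estimator needs only the current state and the sampled transition at each step, it runs on-line without storing the path, which is the property the abstract highlights; the paper's contribution lies in accelerating such estimation when the chain has the two-time-scale structure of singular perturbation.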


CITATION STYLE

APA

Zhang, D., Xi, H., & Yin, B. (2005). Simulation-based optimization of singularly perturbed Markov reward processes with state aggregation. In Lecture Notes in Computer Science (Vol. 3645, pp. 129–138). Springer Verlag. https://doi.org/10.1007/11538356_14
