Calculating transient distributions of cumulative reward

Abstract

Markov reward models have been employed to obtain performability measures of computer and communication systems. In these models, a continuous-time Markov chain represents changes in the system structure, usually caused by faults and repairs of its components, and reward rates are assigned to the states of the model to indicate some measure of accomplishment in each structure. A procedure to numerically calculate the distribution of the reward accumulated over a finite observation period is presented. The development is based solely on probabilistic arguments, and the final recursion is quite simple. The algorithm has a low computational cost in terms of model parameters: the number of operations is linear in a parameter that is smaller than the number of distinct rewards, while the storage required is independent of the number of rewards. We also consider the calculation of the distribution of cumulative reward for models in which impulse-based rewards are associated with transitions.
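To make the setting concrete, the sketch below simulates a small, hypothetical three-state Markov reward model and estimates P[Y(t) ≤ y], the probability that the reward Y(t) accumulated by time t does not exceed a level y, by plain Monte Carlo. The transition rates, reward rates, and impulse rewards are invented for illustration, and the simulation is only an approximation of the measure; it is not the paper's recursion, which computes this distribution numerically and exactly up to truncation error.

```python
import random

# Hypothetical 3-state availability model: state 0 (both units up),
# state 1 (one unit up), state 2 (system down). Rates are illustrative.
rates = {
    0: [(1, 2.0)],                # failure of one unit
    1: [(0, 1.0), (2, 1.0)],      # repair, or failure of the second unit
    2: [(1, 2.0)],                # repair from the down state
}
reward_rate = {0: 1.0, 1: 0.5, 2: 0.0}   # accomplishment level per state
impulse = {(1, 2): -0.2}                 # illustrative impulse reward on a transition

def sample_cumulative_reward(t_end, start=0):
    """Draw one sample of Y(t_end): integral of the state reward rate plus impulses."""
    state, t, y = start, 0.0, 0.0
    while True:
        total = sum(r for _, r in rates[state])
        dwell = random.expovariate(total)          # exponential sojourn time
        if t + dwell >= t_end:
            return y + reward_rate[state] * (t_end - t)
        y += reward_rate[state] * dwell
        t += dwell
        # choose the next state with probability proportional to its rate
        u, acc = random.random() * total, 0.0
        for nxt, r in rates[state]:
            acc += r
            if u <= acc:
                y += impulse.get((state, nxt), 0.0)
                state = nxt
                break

def estimate_cdf(y_level, t_end, n=20000):
    """Estimate P[Y(t_end) <= y_level] by simple Monte Carlo."""
    hits = sum(sample_cumulative_reward(t_end) <= y_level for _ in range(n))
    return hits / n

if __name__ == "__main__":
    print(estimate_cdf(y_level=3.0, t_end=5.0))
```

In this toy model Y(5.0) is at most 5.0 (reward rate 1.0 for the full period), so the estimate reports how likely the system is to fall short of a given accomplishment level because of failures during the observation window.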

Citation (APA)

De Souza e Silva, E., Gail, H. R., & Campos, R. V. (1995). Calculating transient distributions of cumulative reward. In Proceedings of the 1995 ACM SIGMETRICS Joint International Conference on Measurement and Modeling of Computer Systems, SIGMETRICS 1995/PERFORMANCE 1995 (pp. 231–240). Association for Computing Machinery. https://doi.org/10.1145/223587.223612
