Convergence of Markov Chains

  • Klenke, A.

Abstract

We consider a Markov chain X with invariant distribution π and investigate conditions under which the distribution of X_n converges to π as n→∞. Essentially, it is necessary and sufficient that the state space of the chain cannot be decomposed into subspaces that the chain does not leave, or that are visited by the chain only periodically; e.g., only for odd n or only for even n. In the first case, the chain would be called reducible, and in the second case, it would be periodic. We study periodicity of Markov chains in the first section. In the second section, we prove the convergence theorem. The third section is devoted to applications of the convergence theorem to computer simulations with the so-called Monte Carlo method. In the last section, we describe the speed of convergence to equilibrium by means of the spectrum of the transition matrix.

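As a rough numerical illustration of the theorem summarized above (not part of the chapter itself), the following Python sketch uses an arbitrary small irreducible, aperiodic transition matrix P, computes its stationary distribution π as the normalized left eigenvector for eigenvalue 1, and tracks how the distribution of X_n approaches π; the particular matrix, state labels, and print format are assumptions made for the example.

import numpy as np

# A small irreducible, aperiodic transition matrix (arbitrary example).
P = np.array([
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.3, 0.3, 0.4],
])

# Stationary distribution pi: left eigenvector of P for eigenvalue 1,
# i.e. pi P = pi, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
idx = np.argmin(np.abs(eigvals - 1.0))
pi = np.real(eigvecs[:, idx])
pi /= pi.sum()

# Distribution of X_n when X_0 is concentrated on state 0.
mu = np.array([1.0, 0.0, 0.0])
for n in range(1, 21):
    mu = mu @ P
    tv = 0.5 * np.abs(mu - pi).sum()  # total variation distance to pi
    if n % 5 == 0:
        print(f"n={n:2d}  TV distance to pi = {tv:.2e}")

# The geometric rate of convergence is governed by the second-largest
# eigenvalue modulus of P.
lambda2 = sorted(np.abs(eigvals), reverse=True)[1]
print("second-largest eigenvalue modulus:", round(lambda2, 4))

In this example the printed total variation distances shrink roughly like lambda2**n, which is the kind of quantitative statement the last section of the chapter derives from the spectrum of the transition matrix.
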
Citation (APA)

Klenke, A. (2008). Convergence of Markov Chains. In Probability Theory (pp. 379–402). Springer London. https://doi.org/10.1007/978-1-84800-048-3_18
