From EM to data augmentation: The emergence of MCMC Bayesian computation in the 1980s


Abstract

It was known from Metropolis et al. [J. Chem. Phys. 21 (1953) 1087-1092] that one can sample from a distribution by performing Monte Carlo simulation from a Markov chain whose equilibrium distribution is equal to the target distribution. However, it took several decades before the statistical community embraced Markov chain Monte Carlo (MCMC) as a general computational tool in Bayesian inference. The reasons usually advanced to explain why statisticians were slow to adopt the method include lack of computing power and unfamiliarity with the early dynamic Monte Carlo papers in the statistical physics literature. We argue that there was a deeper reason, namely, that the structure of problems in statistical mechanics differs from that of problems in the standard statistical literature. To make the methods usable in standard Bayesian problems, one had to exploit the power that comes from the introduction of judiciously chosen auxiliary variables and collective moves. This paper examines the developments in the critical period 1980-1990, when the ideas of Markov chain simulation from the statistical physics literature and the latent variable formulation in maximum likelihood computation (i.e., the EM algorithm) came together to spark the widespread application of MCMC methods in Bayesian computation. © Institute of Mathematical Statistics, 2010.
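The Metropolis principle the abstract opens with can be sketched in a few lines: run a random-walk Markov chain whose equilibrium distribution is the target. The sketch below is a minimal illustration of that idea, not code from the paper; the standard-normal target and the tuning choices (step size, burn-in) are our own assumptions.

```python
import math
import random

def metropolis_sample(log_target, x0, n_steps, step=1.0, seed=0):
    """Random-walk Metropolis: propose x' = x + N(0, step^2) noise,
    accept with probability min(1, pi(x') / pi(x)).  The chain's
    equilibrium distribution is the target density pi."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_steps):
        proposal = x + rng.gauss(0.0, step)
        # Accept or reject based on the log density ratio.
        if math.log(rng.random()) < log_target(proposal) - log_target(x):
            x = proposal
        samples.append(x)
    return samples

# Illustrative target: standard normal, log pi(x) = -x^2/2 up to a constant.
draws = metropolis_sample(lambda x: -0.5 * x * x, x0=0.0, n_steps=20000)
burned = draws[5000:]          # discard burn-in before summarizing
mean = sum(burned) / len(burned)
```

After burn-in, the empirical mean and variance of the chain approximate those of the target (0 and 1 here); the paper's point is that such single-variable moves alone were not enough for standard Bayesian problems until auxiliary variables and collective moves entered the picture.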

Citation

Tanner, M. A., & Wong, W. H. (2010). From EM to data augmentation: The emergence of MCMC Bayesian computation in the 1980s. Statistical Science, 25(4), 506–516. https://doi.org/10.1214/10-STS341
