Evaluating Markov Chain Monte Carlo Algorithms and Model Fit

  • Lynch S
Abstract

In the previous two chapters, we used Gibbs sampling and Metropolis–Hastings (MH) sampling to make inferences about parameters. Making inferences, however, should come only after (1) we have determined that the algorithm worked correctly, and (2) we have decided that the model we chose is acceptable for our purposes. These two issues are the focus of this chapter. The first part of the chapter addresses the first concern by discussing the convergence and mixing of MCMC algorithms. This part should not be considered an exhaustive exposition of the topic; as I stated, many of the recent advances in MCMC methods have been in this area. However, the approaches I present for evaluating algorithm performance are the most common ones in use (see Liu 2001 and Robert and Casella 1999). In the previous chapters, I showed the basics of MCMC implementation but left these technical issues unaddressed. Because software development is largely left to the researcher estimating Bayesian models, assessing how well an MCMC algorithm performs is crucial to conducting a responsible Bayesian analysis and to making appropriate inferences. The second part of the chapter discusses three approaches to evaluating the fit of models and to selecting a model as "best": posterior predictive distributions, Bayes factors, and Bayesian model averaging. I devote relatively little attention to the latter two methods. Bayes factors require computing the marginal likelihood of the data (the denominator of Bayes' full formula for probability distributions), a complex integral that is not a by-product of MCMC estimation and is generally quite difficult to compute. Additional methods are therefore needed to compute it, and those are beyond the scope of this book (see Chen, Shao, and Ibrahim 2000). Bayesian model averaging (BMA) avoids the need to select a model essentially by combining the results of multiple models into a single model. BMA therefore may not be used often in a social science setting, in which we are generally interested in testing a single, specific model to evaluate a hypothesis.
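To illustrate the kind of convergence assessment discussed in the first part of the chapter, here is a minimal sketch (not from the book) of one widely used diagnostic, the Gelman–Rubin potential scale reduction factor, computed from multiple chains for a single scalar parameter. The function name and the simulated chains are illustrative assumptions, not the author's code.

```python
import numpy as np

def gelman_rubin(chains):
    """Gelman-Rubin potential scale reduction factor (R-hat).

    chains: array of shape (m, n) -- m parallel chains of length n,
    each sampling the same scalar parameter.
    """
    chains = np.asarray(chains, dtype=float)
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    # Between-chain variance B and average within-chain variance W
    B = n * chain_means.var(ddof=1)
    W = chains.var(axis=1, ddof=1).mean()
    # Pooled estimate of the marginal posterior variance
    var_hat = (n - 1) / n * W + B / n
    return np.sqrt(var_hat / W)

# Two chains drawn from the same target: R-hat should be close to 1
rng = np.random.default_rng(0)
chains = rng.normal(size=(2, 5000))
print(gelman_rubin(chains))
```

Values of R-hat close to 1 are consistent with the chains having mixed over the same distribution; values well above 1 (e.g., chains stuck around different means) signal that the sampler has not converged.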

Citation (APA)
Lynch, S. M. (2007). Evaluating Markov Chain Monte Carlo Algorithms and Model Fit. In Introduction to Applied Bayesian Statistics and Estimation for Social Scientists (pp. 131–164). Springer New York. https://doi.org/10.1007/978-0-387-71265-9_6
